http://arxiv.org/abs/2405.09490v1
20240515163514
Distributed Nonlinear Conic Optimisation with Partially Separable Structure
[ "Richard Heusdens", "Guoqiang Zhang" ]
cs.DC
[ "cs.DC" ]
In this paper we consider the problem of distributed nonlinear optimisation of a separable convex cost function over a graph subject to cone constraints. We show how to generalise, using convex analysis, monotone operator theory and fixed-point theory, the primal-dual method of multipliers (PDMM), originally designed for equality constraint optimisation and recently extended to include linear inequality constraints, to accommodate cone constraints. The resulting algorithm can be used to implement a variety of optimisation problems, including the important class of semidefinite programs with partially separable structure, in a fully distributed fashion. We derive update equations by applying the Peaceman-Rachford splitting algorithm to the monotone inclusion related to the lifted dual problem. The cone constraints are implemented by a reflection method in the lifted dual domain, where auxiliary variables are reflected with respect to the intersection of the polar cone and a subspace relating the dual and lifted dual domain. Convergence results for both synchronous and stochastic update schemes are provided, and an application of the proposed algorithm is demonstrated by implementing an approximate algorithm for maximum cut problems based on semidefinite programming in a fully distributed fashion. Keywords: Distributed optimisation, nonlinear optimisation, cone constraints, primal-dual method of multipliers. § INTRODUCTION In the last decade, distributed optimisation <cit.> has drawn increasing attention due to the demand for either distributed signal processing or massive data processing over (large-scale) peer-to-peer networks of ubiquitous devices. Motivated by the increase in computational power of low-cost microprocessors, the range of applications for these networks has grown rapidly. Examples include training a machine learning model, target localisation and tracking, healthcare monitoring, power grid management, and environmental sensing. Consequently, there is a desire to exploit the on-node computational capabilities of such networks to parallelise or even fully distribute computation. In comparison to centralised counterparts, distributed networks offer several unique advantages, including robustness to node failure, scalability with network size and localised transmission requirements. In general, the typical challenges faced by distributed optimisation over a network, in particular ad-hoc networks, are the lack of infrastructure, limited connectivity, scalability, data heterogeneity across the network, data-privacy requirements, and heterogeneous computational resources <cit.>. Various approaches have been developed to address one or more of these challenges, depending on the applications involved. For example, <cit.> introduced a pairwise gossip technique to enable asynchronous message exchange in the network, while <cit.> discusses a hybrid approach combining gossip and geographic routing. In <cit.>, a broadcast-based distributed consensus method was proposed to save communication energy. Alternatively, <cit.> presents a belief propagation/message passing strategy, and <cit.> explores signal processing on graphs. Data-privacy requirements were addressed in <cit.>, while <cit.> considered data quantisation, resulting in communication-efficient algorithms.
A method that holds particular significance in this study is to approach distributed signal processing by linking it with convex optimisation, as it has been demonstrated that numerous traditional signal processing problems can be reformulated in an equivalent convex form <cit.>. These methods model the problem at hand as a convex optimisation problem and solve it using standard solvers like dual ascent, the method of multipliers, ADMM <cit.> or PDMM <cit.>. While the solvers ADMM and PDMM may initially appear distinct due to their differing derivations, they are closely interconnected <cit.>. The derivation of PDMM, however, directly leads to a distributed implementation where no direct collaboration is required between nodes during the computation of the updates. For this reason we will take the PDMM approach to derive update rules for distributed optimisation with general cone constraints. Let G = (𝒱, ℰ) be an undirected graph, where 𝒱 is the set of vertices representing the nodes/agents/participants in the network, and ℰ is the set of (undirected) edges, representing the communication links in the network. PDMM was originally designed to solve, in a synchronous setting, the following separable convex optimisation problem:

minimise ∑_{i∈𝒱} f_i(x_i)
subject to (∀{i,j}∈ℰ) A_ij x_i + A_ji x_j = b_ij.

Recently, convergence results were presented for stochastic PDMM update schemes <cit.>, a general framework that can model variations such as asynchronous PDMM and PDMM with transmission losses. In <cit.>, PDMM is modified for federated learning over a centralised network, where it is found that PDMM is closely related to the SCAFFOLD <cit.> and FedSplit <cit.> algorithms. Additionally, PDMM can be employed for privacy-preserving distributed optimisation, providing a level of privacy assurance, by utilising the fact that the (synchronous) PDMM updates take place within a particular subspace, so that the orthogonal complement can be used to obscure local (private) data, a technique known as subspace perturbation <cit.>. Furthermore, research in <cit.> demonstrates that PDMM exhibits robustness against data quantisation. Recently, the PDMM algorithm has been extended to incorporate affine inequality constraints as well <cit.>. This enhancement enables its application to solving linear programs in a distributed fashion. Even though a large class of problems can be cast as a linear program, there is a growing interest in semidefinite programming, a relatively new field of optimisation. Semidefinite programming unifies several standard problems (e.g., linear and quadratic programming) and finds many applications in engineering and combinatorial optimisation. In addition, semidefinite relaxation has been at the forefront of some highly promising advancements in the realms of signal processing, communications and smart grids. Its significance and relevance have been demonstrated across a variety of applications, such as sensor network localisation <cit.>, multiple-input multiple-output (MIMO) detection <cit.> and optimal power flow <cit.>. The computational complexity associated with solving such problems typically grows unfavourably with the number of optimisation variables (at least O(n^3)) and/or the dimension of the semidefinite constraints involved. This poses limitations on the capability to solve large semidefinite programming instances. However, it is sometimes possible to reduce the complexity by exploiting structure in the problem, such as sparsity or separability.
As a consequence, exploring distributed algorithms for semidefinite programming has received much research interest recently <cit.>. §.§ Main contribution In this paper we present a general framework for solving nonlinear convex optimisation problems with cone constraints. The framework has linear programming (LP), (convex) quadratic programming (QP), (convex) quadratically constrained quadratic programming (QCQP) and second-order cone programming (SOCP) as special cases. The resulting algorithms are fully distributed, in the sense that no direct collaboration is required between nodes during the computations, leading to an attractive (parallel) algorithm for optimisation in practical networks. To incorporate cone constraints, we impose polar cone constraints on the associated dual variables and then, inspired by <cit.>, derive closed-form update expressions for the dual variables via Peaceman-Rachford splitting of the monotone inclusion related to the lifted dual problem. We perform a convergence analysis for both synchronous and stochastic update schemes, the latter based on stochastic coordinate descent. §.§ Organisation of the paper The remainder of this paper is organised as follows. Section <ref> introduces appropriate nomenclature and reviews properties of monotone operators and operator splitting techniques. Section <ref> describes the problem formulation, while Section <ref> introduces a monotone operator derivation of PDMM with cone constraints. In Section <ref> we derive convergence results for the proposed algorithm, and in Section <ref> we consider a stochastic updating scheme. Finally, Section <ref> describes applications and numerical experiments, obtained by computer simulations, that verify and substantiate the underlying claims, and conclusions are drawn in Section <ref>. § PRELIMINARIES There exist many algorithms for iteratively minimising a convex function. It is possible to derive and analyse many of these algorithms in a unified manner, using the abstraction of monotone operators. In this section we review some properties of monotone operators and operator splitting techniques that will be used throughout this manuscript. For a primer on monotone operator methods, the reader is referred to the self-contained introduction and tutorial <cit.>. For an in-depth discussion of the topic the reader is referred to <cit.>. §.§ Notations and functional properties In this work we will denote by ℕ the set of nonnegative integers, by ℝ the set of real numbers, by ℝ^n the set of real column vectors of length n, by ℝ^{m×n} the set of m by n real matrices, and by S^n the set of n by n real symmetric matrices. We will denote by ℋ a real Hilbert space with scalar (or inner) product ⟨·,·⟩ and induced norm ‖·‖; (∀x∈ℋ) ‖x‖ = √(⟨x,x⟩). The dimension of ℋ is indicated by dim ℋ. The Hilbert direct sum of a family of real Hilbert spaces (ℋ_i, ‖·‖_i)_{i∈I}, where I is a directed set, is the real Hilbert space ⊕_{i∈I} ℋ_i = { x = (x_i)_{i∈I} ∈ ∏_{i∈I} ℋ_i | ∑_{i∈I} ‖x_i‖_i^2 < +∞}, equipped with the addition (x,y) ↦ (x_i+y_i)_{i∈I}, scalar multiplication (α,x) ↦ (αx_i)_{i∈I} and scalar product (x,y) ↦ ∑_{i∈I} ⟨x_i,y_i⟩_i, where ⟨·,·⟩_i denotes the scalar product of ℋ_i. If ℋ and 𝒢 are real Hilbert spaces, we set ℬ(ℋ,𝒢) = { T: ℋ → 𝒢 | T is linear and continuous}. Let ℝ_++ = {c∈ℝ | c>0}. A subset K of ℋ is a cone if (∀x∈K)(∀c∈ℝ_++) cx∈K. Important examples of cones are ℝ^n_+ = { x∈ℝ^n | x_i ≥ 0, i=1,…,n}, the nonnegative orthant, and S^n_+, the positive semidefinite cone.
If ℋ = ⊕_{i∈I} ℋ_i and K_i is a cone in ℋ_i, then ∏_{i∈I} K_i is a cone in ℋ. The dual cone of K is given by K^* = {x∈ℋ | (∀y∈K) ⟨x,y⟩ ≥ 0} and the polar cone of K is K^∘ = −K^*. Let K be a nonempty subset of ℋ, and let x∈ℋ. We denote by P_K x the projection of x onto K: (∀x∈ℋ)(∀y∈K) ⟨y − P_K x, x − P_K x⟩ ≤ 0. If K is a nonempty closed convex cone in ℋ, then x∈ℋ admits the conic decomposition x = P_K x + P_{K^∘} x, where P_K x ⊥ P_{K^∘} x <cit.>. We call K self-dual if K^* = K. Both ℝ^n_+ and S^n_+ are self-dual. Let K be a proper cone in ℋ (a cone K ⊆ ℋ is called proper if it is closed, convex, solid, and K ∩ (−K) = {0}, that is, K is pointed). We associate with a proper cone K a partial ordering on ℋ defined by (∀x∈ℋ)(∀y∈ℋ) x ≼_K y ⇔ y−x ∈ K. We also write x ≽_K y for y ≼_K x. As an example, when K = ℝ_+, the partial ordering ≼_K is the usual ordering ≤ on ℝ. If K = S^n_+, the partial ordering ≼_K is the usual matrix inequality: X ≼_K Y means Y−X is positive semidefinite. When x is updated iteratively, we write x^(k) to indicate the update of x at the kth iteration. When we consider x^(k) as a realisation of a random variable, the corresponding random variable will be denoted by X^(k) (corresponding capital). The expectation operator is denoted by 𝔼. Let X, Y be nonempty sets, and let 2^Y be the power set of Y, i.e., the family of all subsets of Y. A set-valued operator T: X → 2^Y is defined by its graph gra T = { (x,y)∈X×Y | y∈Tx}. The domain of T is dom T = {x∈X | Tx ≠ ∅}. The kernel and range of T are ker T = {x∈X | 0 ∈ Tx} and ran T = T(X), respectively. The identity operator on ℋ is denoted by I. If Tx is a singleton or empty for every x, then T is a function, or single-valued, usually denoted by f. The notion of the inverse of T, denoted by T^{-1}, is also defined through its graph, gra T^{-1} = { (y,x)∈Y×X | y∈Tx}. Let c∈ℝ_++. We denote by J_{cT} = (I + cT)^{-1} the resolvent of an operator T and by R_{cT} = 2J_{cT} − I the reflected resolvent, sometimes referred to as the Cayley operator. The composition of two operators T_1: X → 2^Y and T_2: Y → 2^Z is given by T_2∘T_1 : X → 2^Z. The set of fixed points of T is denoted by fix T = { x∈X | Tx = x}. Functional transforms make it possible to investigate problems from a different perspective and sometimes simplify the analysis. In convex analysis, a suitable transform is the Legendre transform, which maps a function to its Fenchel conjugate. We will denote by Γ_0(ℋ) the set of all closed, proper, and convex (CCP) functions f : ℋ → ℝ∪{+∞}. The Fenchel conjugate of a function f ∈ Γ_0(ℋ) is defined as f^*(y) = sup_x ( ⟨y,x⟩ − f(x) ). The function f and its conjugate f^* are related by the Fenchel-Young inequality f(x) + f^*(y) ≥ ⟨y,x⟩ <cit.>. We will denote by ∂f the subdifferential of f. If f∈Γ_0(ℋ), then f = f^{**}. Moreover, we have y∈∂f(x) ⇔ x∈∂f^*(y) ⇔ f(x) + f^*(y) = ⟨y,x⟩. If f∈Γ_0(ℋ), the proximity operator prox_{cf} is defined as prox_{cf} x = argmin_{u∈ℋ} ( f(u) + (1/2c)‖x−u‖^2 ) and is related to the resolvent of ∂f by prox_{cf} x = J_{c∂f} x <cit.>. If ι_K is the indicator function of a closed convex subset K of ℋ, then prox_{ι_K} x = P_K x. An undirected graph is denoted by G = (𝒱, ℰ), where 𝒱 is the set of vertices representing the nodes in the network and ℰ = {{i,j} | i,j∈𝒱} is the set of undirected edges (unordered pairs) in the graph, representing the communication links in the network. We use 𝒟 = {(i,j) | {i,j}∈ℰ} to denote the set of all directed edges (ordered pairs). Therefore, |𝒟| = 2|ℰ|. We use 𝒩_i to denote the set of all neighbouring nodes of node i, i.e., 𝒩_i = {j∈𝒱 | {i,j}∈ℰ}, and deg i = |𝒩_i| to denote the degree of vertex i∈𝒱.
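To make the conic decomposition concrete, here is a minimal numerical sketch (Python/NumPy; my own toy example, not from the paper) for the self-dual cone K = ℝ^n_+, for which K^∘ = −K^* = −K = ℝ^n_−:

```python
import numpy as np

# K = R^n_+ is self-dual, so K° = -K* = -K = R^n_-.
proj_K = lambda x: np.maximum(x, 0.0)        # P_K: clip negative entries to zero
proj_Kpolar = lambda x: np.minimum(x, 0.0)   # P_{K°}: clip positive entries to zero

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
pk, pko = proj_K(x), proj_Kpolar(x)
assert np.allclose(x, pk + pko)              # conic (Moreau) decomposition x = P_K x + P_{K°} x
assert abs(pk @ pko) < 1e-12                 # P_K x ⊥ P_{K°} x
```

The two projections act entrywise and their results have disjoint supports, which is exactly why the decomposition is orthogonal in this case.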
§.§ Monotone operators and operator splitting The theory of monotone set-valued operators plays a central role in deriving iterative convex optimisation algorithms. Global minimisers of proper functions can be characterised by the principle x ∈ argmin f ⇔ 0 ∈ ∂f(x). The subdifferential of a convex function is a (maximally) monotone operator, and the problem at hand can thus be expressed as finding a zero of a monotone operator (a monotone inclusion problem) which, in turn, is transformed into finding a fixed point of its associated resolvent. The fixed point is then found by the fixed point (Banach-Picard) iteration, yielding an algorithm for the original problem. In this section we give background information about monotone operators and operator splitting to support the remainder of the manuscript. Let T: ℋ → 2^ℋ. Then T is monotone if (∀(x,u)∈ gra T)(∀(y,v)∈ gra T) ⟨v−u, y−x⟩ ≥ 0. The operator is said to be strictly monotone if strict inequality holds. The operator is said to be uniformly monotone with modulus ϕ : ℝ_+ → [0,+∞) if ϕ is increasing, vanishes only at 0, and (∀(x,u)∈ gra T)(∀(y,v)∈ gra T) ⟨v−u, y−x⟩ ≥ ϕ(‖y−x‖). The operator is said to be strongly monotone with constant m>0, or m-strongly monotone, if T − mI is monotone, i.e., (∀(x,u)∈ gra T)(∀(y,v)∈ gra T) ⟨v−u, y−x⟩ ≥ m‖y−x‖^2. The operator is said to be maximally monotone if for every (x,u)∈ℋ×ℋ, (x,u)∈ gra T ⇔ (∀(y,v)∈ gra T) ⟨v−u, y−x⟩ ≥ 0. In other words, there exists no monotone operator S: ℋ → 2^ℋ such that gra S properly contains gra T. It is clear that strong monotonicity implies uniform monotonicity, which itself implies strict monotonicity. Let D be a nonempty subset of ℋ and let T: D → ℋ. Then T is nonexpansive if (∀x∈D)(∀y∈D) ‖Ty−Tx‖ ≤ ‖y−x‖. T is called strictly nonexpansive, or contractive, if strict inequality holds. The operator is firmly nonexpansive if (∀x∈D)(∀y∈D) ‖Ty−Tx‖^2 ≤ ⟨Ty−Tx, y−x⟩. Let D be a nonempty subset of ℋ, let T: D → ℋ be nonexpansive, and let α∈(0,1). Then T is averaged with constant α, or α-averaged, if there exists a nonexpansive operator S : D → ℋ such that T = (1−α)I + αS. It can be shown that if T is maximally monotone, then the resolvent J_{cT} is firmly nonexpansive <cit.> and the reflected resolvent R_{cT} = 2J_{cT} − I is nonexpansive <cit.>. We have 0∈Tx ⇔ x∈(I+cT)x ⇔ x∈(I+cT)^{-1}x ⇔ x = J_{cT}x, where the last relation holds since J_{cT} is single-valued. Therefore, we conclude that a monotone inclusion problem is equivalent to finding a fixed point of its associated resolvent. Moreover, since J_{cT} = ½(R_{cT}+I) is ½-averaged, we have, by the Krasnosel'skii-Mann algorithm, that the sequence generated by the Banach-Picard iteration x^(k+1) = J_{cT}x^(k) is Fejér monotone <cit.> and converges weakly (for finite-dimensional Hilbert spaces weak convergence implies strong convergence) to a fixed point x^* of J_{cT} for any x^(0) ∈ dom J_{cT} <cit.>, and thus to a zero of T. In the case T = ∂f, the Banach-Picard iteration x^(k+1) = J_{c∂f}x^(k) results in the well-known proximal point method <cit.>. For many maximally monotone operators T, the inversion operation needed to evaluate the resolvent may be prohibitively difficult. A more widely applicable alternative is to devise an operator splitting algorithm in which T is decomposed as T = T_1 + T_2, and the (maximally monotone) operators T_1 and T_2 are employed in separate steps.
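Before turning to splitting, a minimal sketch of the resolvent-based Banach-Picard iteration (the proximal point method) may be helpful. The quadratic f, the matrix names and the step size below are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
Q = B @ B.T + np.eye(4)                      # positive definite Hessian -> f strongly convex
b = rng.standard_normal(4)
c = 0.5                                      # resolvent parameter c > 0

# T = ∂f with f(x) = 0.5 x'Qx - b'x, so J_cT x = (I + cQ)^{-1}(x + c b).
J = np.linalg.inv(np.eye(4) + c * Q)
x = np.zeros(4)
for _ in range(300):                         # Banach-Picard iteration: x <- J_cT x
    x = J @ (x + c * b)

assert np.allclose(x, np.linalg.solve(Q, b), atol=1e-8)  # fixed point = zero of T = minimiser of f
```

Here the resolvent is a single linear solve because T is affine; for general T this inversion is the hard part, which motivates the splitting methods discussed next.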
Examples of popular splitting algorithms are the forward-backward method, Tseng's method, the Peaceman-Rachford splitting algorithm and the Douglas-Rachford splitting algorithm, where the first two methods require T_1 (or T_2) to be single-valued (for example the gradient of a differentiable convex function). The Peaceman-Rachford splitting algorithm is given by the iterates <cit.>

x^(k) = J_{cT_1}z^(k),
v^(k) = J_{cT_2}(2x^(k) − z^(k)),
z^(k+1) = z^(k) + 2(v^(k) − x^(k)).

When T_1 is uniformly monotone, x^(k) converges strongly to x^* (notation x^(k) → x^*) for any z^(0)∈ℋ, where x^* is the solution to the monotone inclusion problem 0 ∈ T_1x + T_2x. The iterates (<ref>) can be compactly expressed using reflected resolvents as x^(k) = J_{cT_1}z^(k), z^(k+1) = (R_{cT_2}∘R_{cT_1})z^(k). If either R_{cT_1} or R_{cT_2} is contractive, then R_{cT_2}∘R_{cT_1} is contractive and the Peaceman-Rachford iterates converge geometrically. Note that since R_{cT_2}∘R_{cT_1} is nonexpansive, without the additional requirement of T_1 being uniformly monotone, there is no guarantee that the iterates will converge. In order to ensure convergence without imposing conditions like uniform monotonicity, we can average the nonexpansive operator. In the case of ½-averaging, the z-update is given by z^(k+1) = ½(I + R_{cT_2}∘R_{cT_1})z^(k), which is called the Douglas-Rachford splitting algorithm. This method was first presented in <cit.> and converges under essentially the most general conditions possible. A well-known instance of the Douglas-Rachford splitting algorithm is the alternating direction method of multipliers (ADMM) <cit.> or the split Bregman method <cit.>. § PROBLEM SETTING Let G = (𝒱, ℰ) be an undirected graph. We will consider the minimisation of a separable function over the graph G subject to cone constraints. Let N = |𝒱|, let (n_1,…,n_N) be strictly positive integers, and set (∀i∈𝒱) L_i = {1,…,n_i}. We consider the problem

minimise ∑_{i∈𝒱} f_i(x_i)
subject to (∀{i,j}∈ℰ) A_ij x_i + A_ji x_j − b_ij ∈ K_ij,
(∀i∈𝒱)(∀ℓ_i∈L_i) h_{ℓ_i}(x_i) ≤ 0,

where f_i ∈ Γ_0(ℋ_i), A_ij ∈ ℬ(ℋ_i, ℋ_ij), b_ij ∈ ℋ_ij, K_ij ⊆ ℋ_ij is a closed convex cone, and h_{ℓ_i} ∈ Γ_0(ℋ_i). Note that (<ref>) also includes equality constraints of the form A_ij x_i + A_ji x_j = b_ij by setting K_ij = {0}, the trivial cone. The constraints h_{ℓ_i}(x_i) ≤ 0 can be used, for example, to set constraints on the individual entries of x_i in the case ℋ_i = ℝ^{m×n}, the Hilbert space of m×n real matrices. See <ref> for a prototypical example. Hence, problem (<ref>) describes the optimisation of a nonlinear convex function subject to cone constraints. We will refer to solving such programs as nonlinear conic programming (NLCP). If the objective function is linear and K_ij = S^{n_ij}_+, where n_ij = dim ℋ_ij, (<ref>) reduces to semidefinite programming (SDP). In fact, (<ref>) has linear programming (LP), (convex) quadratic programming (QP), (convex) quadratically constrained quadratic programming (QCQP) and second-order cone programming (SOCP) as special cases. In order to compactly express (<ref>), we introduce x = (x_i)_{i∈𝒱} ∈ ℋ, where ℋ = ⊕_{i∈𝒱} ℋ_i. Similarly, we define 𝒢 = ⊕_{{i,j}∈ℰ} ℋ_ij. Let A : ℋ → 𝒢 : (x_i)_{i∈𝒱} ↦ (A_ij x_i + A_ji x_j)_{{i,j}∈ℰ}, and b = (b_ij)_{{i,j}∈ℰ}. With this, (<ref>) can be expressed as

minimise f(x)
subject to Ax − b ∈ K,

where f(x) = ∑_{i∈𝒱} ( f_i(x_i) + ι_{N_i}(x_i) ), ι_{N_i} is the indicator function of the set N_i = {x_i∈ℋ_i | (∀ℓ_i∈L_i) h_{ℓ_i}(x_i) ≤ 0}, and K = ∏_{{i,j}∈ℰ} K_ij.
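As an illustration of the Peaceman-Rachford iterates above, the following sketch (Python/NumPy; a toy problem of our choosing, not from the paper) uses T_1 = ∂f for the strongly convex f(x) = ½‖x − a‖² and T_2 = ∂ι_C with C = ℝ^n_+, so that J_{cT_2} is simply a projection:

```python
import numpy as np

a = np.array([1.5, -2.0, 0.3, -0.7])
c = 1.0

prox_f = lambda z: (z + c * a) / (1 + c)     # J_{cT_1}, T_1 = ∂f, f(x) = 0.5||x - a||^2
proj_C = lambda z: np.maximum(z, 0.0)        # J_{cT_2}, T_2 = ∂ι_C, C = R^n_+

z = np.zeros_like(a)
for _ in range(200):
    x = prox_f(z)                            # x^(k)   = J_{cT_1} z^(k)
    v = proj_C(2 * x - z)                    # v^(k)   = J_{cT_2}(2x^(k) - z^(k))
    z = z + 2 * (v - x)                      # z^(k+1) = z^(k) + 2(v^(k) - x^(k))
    # Douglas-Rachford would instead use z = z + (v - x), i.e. 1/2-averaging.

assert np.allclose(x, np.maximum(a, 0.0), atol=1e-8)  # minimiser of f + ι_C
```

Since T_1 is strongly (hence uniformly) monotone here, the non-averaged Peaceman-Rachford iterates converge geometrically, consistent with the discussion above.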
Hence, the inequality constraints h_{ℓ_i}(x_i) ≤ 0 are incorporated in the objective function using indicator functions, for reasons that will become clear in Section <ref>. The Lagrange dual function g : 𝒢 → ℝ∪{+∞} is defined as g(λ) = inf_{x∈ℋ} ( f(x) + ⟨λ, Ax−b⟩ ) = −f^*(−A^*λ) − ⟨λ,b⟩, with f^* the Fenchel conjugate of f and A^* ∈ ℬ(𝒢, ℋ) the adjoint of A. With this, the dual problem is given by <cit.>

minimise f^*(−A^*λ) + ⟨λ,b⟩
subject to λ ∈ K^∘,

where K^∘ denotes the polar cone of K. We have λ = (λ_ij)_{{i,j}∈ℰ}, where λ_ij ∈ ℋ_ij denotes the Lagrange multipliers associated with the constraints on edge {i,j}∈ℰ. At this point we would like to highlight that in the case where we have only equality constraints in (<ref>), that is, constraints of the form Ax = b, we have K = {0} and K^∘ = K^* = 𝒢, and the dual problem is simply an unconstrained optimisation problem. That is, problem (<ref>) reduces to

minimise f^*(−A^*λ) + ⟨λ,b⟩.

§ OPERATOR SPLITTING OF THE LIFTED DUAL FUNCTION Let f̃_i(x_i) = f_i(x_i) + ι_{N_i}(x_i). Since f is separable, we have f(x) = ∑_{i∈𝒱} f̃_i(x_i) ⇔ f^*(y) = ∑_{i∈𝒱} f̃_i^*(y_i), that is, the conjugate of a separable CCP function is itself separable and CCP. Moreover, the adjoint operator A^* satisfies ⟨A^*λ, x⟩ = ⟨λ, Ax⟩ = ∑_{{i,j}∈ℰ} ⟨λ_ij, A_ij x_i + A_ji x_j⟩ = ∑_{i∈𝒱} ∑_{j∈𝒩_i} ⟨λ_ij, A_ij x_i⟩ = ∑_{i∈𝒱} ⟨∑_{j∈𝒩_i} A_ij^* λ_ij, x_i⟩, from which we conclude that A^* : 𝒢 → ℋ : (λ_ij)_{{i,j}∈ℰ} ↦ (∑_{j∈𝒩_i} A_ij^* λ_ij)_{i∈𝒱}, and thus f^*(−A^*λ) = ∑_{i∈𝒱} f̃_i^*(−∑_{j∈𝒩_i} A_ij^* λ_ij). By inspection of (<ref>) we conclude that every λ_ij, associated with edge {i,j}, is used by two conjugate functions: f̃_i^* and f̃_j^*. As a consequence, all conjugate functions depend on each other. We therefore introduce auxiliary variables to decouple the node dependencies. That is, we introduce for each edge {i,j}∈ℰ two auxiliary node variables, λ_{i|j} and λ_{j|i}, one for each of the nodes i and j, respectively, and require that λ_{i|j} = λ_{j|i}. That is, let λ̄ = (λ_{i|j})_{(i,j)∈𝒟} ∈ 𝒢̄, where 𝒢̄ = ⊕_{(i,j)∈𝒟} ℋ_ij, and define C : ℋ → 𝒢̄ : (x_i)_{i∈𝒱} ↦ (A_ij x_i)_{(i,j)∈𝒟}, a permutation operator P : 𝒢̄ → 𝒢̄ : (λ_{i|j})_{(i,j)∈𝒟} ↦ (λ_{j|i})_{(i,j)∈𝒟}, and d = (½b_ij)_{(i,j)∈𝒟}. With this, we can reformulate the dual problem (<ref>) as

minimise f^*(−C^*λ̄) + ⟨λ̄, d⟩
subject to λ̄ ∈ K̄^∘,
λ̄ = Pλ̄,

where C^* is the adjoint of C and K̄^∘ = ∏_{(i,j)∈𝒟} K_ij^∘. We will refer to (<ref>) as the lifted dual problem of the primal problem (<ref>). Note that (I+P)C : ℋ → 𝒢̄ : (x_i)_{i∈𝒱} ↦ (A_ij x_i + A_ji x_j)_{(i,j)∈𝒟}, so that (I+P)Cx − d ∈ K̄ ⇔ Ax − b ∈ K. Moreover, if y∈𝒢̄ is such that y∈K̄, then Py∈K̄ by construction. Let M = {λ̄∈𝒢̄ | λ̄∈K̄^∘, λ̄ = Pλ̄}. With this, problem (<ref>) can be equivalently expressed as minimise f^*(−C^*λ̄) + ⟨λ̄, d⟩ + ι_M(λ̄). Again, comparing general cone and equality constraint optimisation, the difference is in the definition of the set M; for equality constraint optimisation the set M reduces to M = {λ̄∈𝒢̄ | λ̄ = Pλ̄}. The optimality condition for problem (<ref>) is given by the inclusion problem 0 ∈ −C∂f^*(−C^*λ̄) + d + ∂ι_M(λ̄). In order to apply Peaceman-Rachford splitting to (<ref>), we define the operators T_1 and T_2 as T_1 = −C∂f^*(−C^*(·)) + d and T_2 = ∂ι_M. Since T_1 is the subdifferential of f^*(−C^*λ̄) + ⟨λ̄, d⟩, which is convex, both T_1 and T_2 are monotone. Maximality follows directly from the maximality of the subdifferential <cit.>. As a consequence, applying Peaceman-Rachford splitting to (<ref>) yields the iterates λ̄^(k) = J_{cT_1}z^(k), z^(k+1) = (R_{cT_2}∘R_{cT_1})z^(k).
We will first focus on the reflected resolvent R_{cT_2} in (<ref>), which carries the inequality constraints encapsulated by M. To do so, we introduce an auxiliary vector y^(k) such that y^(k) = R_{cT_1}z^(k), z^(k+1) = R_{cT_2}y^(k). Since M is the intersection of the subspace S = {λ̄∈𝒢̄ | λ̄ = Pλ̄} and the closed convex cone K̄^∘, M is closed and convex, and we have J_{cT_2}y = prox_{cι_M}(y) = P_M y, the projection of y onto M. As a consequence, R_{cT_2} is given by R_{cT_2} = 2P_M − I, the reflection with respect to M, which we will denote by R_M. We can explicitly compute P_M y, and thus R_M y. Let y∈𝒢̄. Then P_M y = ½P_{K̄^∘}(I+P)y. To see this, note that P_M y = argmin_{u∈M} ‖u − y‖^2. Then ũ is a solution to (<ref>) if and only if 1. ũ∈K̄^∘ and ũ = Pũ; 2. (∃ξ̃∈K̄)(∃η̃∈𝒢̄) such that 0 = 2(ũ−y) + ξ̃ + (I−P)^*η̃ and ⟨ξ̃, ũ⟩ = 0. Combining (<ref>) and (<ref>), and using the fact that P^2 = I, we obtain ũ = ½(I+P)y − ¼(I+P)ξ̃. Let v = ½(I+P)y and w = ¼(I+P)ξ̃. Since ξ̃∈K̄, we have Pξ̃∈K̄ (by construction of K̄) and thus w∈K̄, since K̄ is closed and convex. Moreover, we have v = P_{K̄}v + P_{K̄^∘}v and w = P_{K̄}w, so that ũ = P_{K̄}(v−w) + P_{K̄^∘}v. However, since ũ∈K̄^∘, we conclude that P_{K̄}(v−w) = 0 and thus ũ = P_{K̄^∘}v, which completes the proof. In order to find a dual expression for J_{cT_1}z^(k), we note that λ̄ = J_{cT_1}z ⇔ z − λ̄ ∈ cT_1λ̄. Hence, λ̄ = z + c(Cu − d), where u∈∂f^*(−C^*λ̄) and thus −C^*λ̄∈∂f(u). Hence, 0 ∈ ∂f(u) + C^*λ̄ = ∂f(u) + C^*z + cC^*(Cu − d), so that u = argmin_x ( f(x) + ⟨z, Cx⟩ + (c/2)‖Cx − d‖^2 ). With this, the iterates (<ref>) can be expressed as

x^(k) = argmin_x ( f(x) + ⟨z^(k), Cx⟩ + (c/2)‖Cx − d‖^2 ),
λ̄^(k) = z^(k) + c(Cx^(k) − d),
y^(k) = 2λ̄^(k) − z^(k),
z^(k+1) = R_M y^(k),

which can be simplified to

x^(k) = argmin_x ( f(x) + ⟨z^(k), Cx⟩ + (c/2)‖Cx − d‖^2 ),
y^(k) = z^(k) + 2c(Cx^(k) − d),
z^(k+1) = R_M y^(k).

The iterates (<ref>) are collectively referred to as the generalised primal-dual method of multipliers (GPDMM). Recall that R_{cT_2} = 2P_M − I = R_M. To get some insight into how to implement R_M, note that R_M y = P_{K̄^∘}(I+P)y − y. In the case of equality constraints we have K̄ = {0} and thus K̄^∘ = 𝒢̄, so that R_M y = Py, which is simply a permutation. This permutation operator represents the actual data exchange in the network. That is, we have for all {i,j}∈ℰ: z_{i|j} ← y_{j|i}, z_{j|i} ← y_{i|j}. In the case of general cone constraints, we have z_{i|j} ← P_{K_ij^∘}(y_{i|j} + y_{j|i}) − y_{i|j} and z_{j|i} ← P_{K_ij^∘}(y_{i|j} + y_{j|i}) − y_{j|i}. The distributed nature of PDMM can be made explicit by exploiting the structure of C and d and writing out the update equations (<ref>). Recall that f(x) = ∑_{i∈𝒱} ( f_i(x_i) + ι_{N_i}(x_i) ). Let (∀i∈𝒱) C_i : ℋ_i → 𝒢̄ be defined entrywise by (C_i x_i)_{(l,j)} = A_ij x_i if l = i, and 0 otherwise, so that Cx = ∑_{i∈𝒱} C_i x_i. In addition, since ⟨C_j x_j, C_i x_i⟩ = 0 for j ≠ i, we have ‖Cx − d‖^2 = ∑_{i∈𝒱} ‖C_i x_i − d‖^2 up to a constant independent of x. With this, (<ref>) for all i∈𝒱 can be expressed as

x_i^(k) = argmin_{x_i} ( f_i(x_i) + ι_{N_i}(x_i) + ⟨z^(k), C_i x_i⟩ + (c/2)‖C_i x_i − d‖^2 )
= argmin_{x_i∈N_i} ( f_i(x_i) + ⟨z^(k), C_i x_i⟩ + (c/2)‖C_i x_i − d‖^2 )
= argmin_{x_i∈N_i} ( f_i(x_i) + ∑_{j∈𝒩_i} ( ⟨z_{i|j}^(k), A_ij x_i⟩ + (c/2)‖A_ij x_i − ½b_ij‖^2 ) ).

In addition, we can express (<ref>) as (∀i∈𝒱)(∀j∈𝒩_i) y_{i|j}^(k) = z_{i|j}^(k) + 2c(A_ij x_i^(k) − ½b_ij), after which data is exchanged amongst neighbouring nodes and the auxiliary variables are updated: (∀i∈𝒱)(∀j∈𝒩_i) z_{i|j}^(k+1) = P_{K_ij^∘}(y_{i|j}^(k) + y_{j|i}^(k)) − y_{i|j}^(k). The resulting algorithm is visualised in the pseudo-code of Algorithm <ref>.
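The per-edge reflection R_M admits a very compact implementation. Here is a minimal sketch (Python/NumPy; we assume K_ij = ℝ^m_+, which is self-dual, so P_{K^∘} clips positive entries to zero), together with a numerical check that R_M is nonexpansive, as the theory requires:

```python
import numpy as np

def reflect_M(y):
    """R_M y = P_{K̄°}((I + P)y) - y for a single edge; y stacks (y_{i|j}, y_{j|i}).
    Assumes K_ij = R^m_+ (self-dual), so P_{K°} = componentwise min(., 0)."""
    m = len(y) // 2
    s = np.minimum(y[:m] + y[m:], 0.0)       # P_{K°}(y_{i|j} + y_{j|i})
    return np.concatenate([s - y[:m], s - y[m:]])

rng = np.random.default_rng(2)
for _ in range(1000):                        # numerical check: R_M is nonexpansive
    y, yp = rng.standard_normal(10), rng.standard_normal(10)
    assert (np.linalg.norm(reflect_M(y) - reflect_M(yp))
            <= np.linalg.norm(y - yp) + 1e-12)
```

Note that, unlike the equality-constrained case where R_M reduces to the permutation P, the reflection here is not an involution in general, since M is a convex set but not a subspace.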
It can be seen that no direct collaboration is required between nodes during the computation of these updates, leading to an attractive (parallel) algorithm for optimisation in practical networks. The update (<ref>) can be interpreted as one-way transmissions of the auxiliary y variables to neighbouring nodes, where the actual update of the z variables is done.

Algorithm 1: Synchronous GPDMM.
Initialisation: z^(0) ∈ 𝒢̄, c ∈ ℝ_++.
For k = 0, 1, …:
  Node updates: for i∈𝒱,
    x_i^(k) = argmin_{x_i∈N_i} ( f_i(x_i) + ∑_{j∈𝒩_i} ( ⟨z_{i|j}^(k), A_ij x_i⟩ + (c/2)‖A_ij x_i − ½b_ij‖^2 ) ),
    and for j∈𝒩_i: y_{i|j}^(k) = z_{i|j}^(k) + 2c(A_ij x_i^(k) − ½b_ij).
  Transmit variables: for i∈𝒱, j∈𝒩_i: node_j ← node_i (y_{i|j}^(k)).
  Auxiliary updates: for i∈𝒱, j∈𝒩_i: z_{i|j}^(k+1) = P_{K_ij^∘}(y_{i|j}^(k) + y_{j|i}^(k)) − y_{i|j}^(k).

§.§ Node constraints So far we considered constraints of the form A_ij x_i + A_ji x_j − b_ij ∈ K_ij. If we set A_ji to be the zero operator, we have constraints of the form A_ij x_i − b_ij ∈ K_ij, which are node constraints: they restrict the values x_i can take. Even though x_j is no longer involved in the constraint, communication is still needed between node i and node j, since in the formulation of the lifted dual problem (<ref>) we introduced two auxiliary variables, λ_{i|j} and λ_{j|i}, one at each node, to control the constraints between nodes i and j. This was done independently of the actual values of A_ij and A_ji. In order to guarantee convergence of the algorithm, these variables need to be updated and exchanged during the iterations. To avoid such communication between nodes, we can introduce dummy nodes, one for every node that has a node constraint. Let i' denote the dummy node introduced to define the node constraint on node i. That is, we have A_{ii'}x_i − b_{ii'} ∈ K_{ii'}. Since dummy node i' is only used to communicate with node i, it is a fictive node and can be incorporated in node i, thereby avoiding any network communication for node constraints. In such cases, we will simply write A_i x_i − b_i ∈ K_i. As an example, let (∀i∈𝒱) ℋ_i = S^n, and consider the node constraints (∀i∈𝒱) X_i ∈ S^n_+, the cone of positive semidefinite matrices. Hence (∀i∈𝒱) K_i = S^n_+. Since K_i^∘ = −K_i^* = −S^n_+ = S^n_−, where S^n_− is the cone of negative semidefinite matrices, (<ref>) becomes (∀i∈𝒱)(∀j∈𝒩_i) Z_{i|j} = P_{S^n_−}(Y_{i|j} + Y_{j|i}) − Y_{i|j}, where P_{S^n_−}Y = QΛ_−Q^T, with QΛQ^T the eigenvalue decomposition of Y and Λ_− the matrix obtained from Λ by setting the positive entries to 0. § LOCAL INEQUALITY CONSTRAINTS By inspection of (<ref>), we observe that (<ref>) is a constrained optimisation problem, due to the fact that we have included the primal constraints (∀ℓ_i∈L_i) h_{ℓ_i}(x_i) ≤ 0 in the objective function using the indicator function ι_{N_i}. For many practical problems, however, this constrained optimisation problem can be solved analytically and efficiently, as the following example shows. Let (∀i∈𝒱) ℋ_i = S^n, equipped with the scalar product (X,Y) ↦ tra(X^TY), and consider the following consensus problem:

minimise ∑_{i∈𝒱} ½‖X_i − Q_i‖^2
subject to (∀i∈𝒱) X_i ∈ K_i, (∀ℓ_i∈L_i) tra(H_{ℓ_i}^T X_i) = 1,
(∀{i,j}∈ℰ) X_i = X_j.

By inspection of (<ref>) we note that the constraints (∀i∈𝒱) X_i ∈ K_i and (∀{i,j}∈ℰ) X_i = X_j will be handled by the GPDMM iterates, while the constraints (∀ℓ_i∈L_i) tra(H_{ℓ_i}^T X_i) = 1 appear as constraints in the update (<ref>).
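The eigenvalue-based projection P_{S^n_−} used in the node-constraint example above admits a direct implementation. A minimal sketch (Python/NumPy; the test matrix is an arbitrary choice of ours), which also verifies the conic decomposition Y = P_{S^n_+}Y + P_{S^n_−}Y:

```python
import numpy as np

def proj_nsd(Y):
    """P_{S^n_-} Y = Q Λ_- Q^T: eigendecompose and keep only nonpositive eigenvalues."""
    w, Q = np.linalg.eigh(Y)                 # Y = Q diag(w) Q^T for symmetric Y
    return (Q * np.minimum(w, 0.0)) @ Q.T    # scales column j of Q by min(w_j, 0)

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
Y = (A + A.T) / 2                            # random symmetric test matrix
Yn = proj_nsd(Y)
Yp = Y - Yn                                  # = P_{S^n_+} Y (same eigenvectors)
assert np.all(np.linalg.eigvalsh(Yn) <= 1e-10)   # result is negative semidefinite
assert abs(np.trace(Yp @ Yn)) < 1e-8             # Yp ⊥ Yn in the trace inner product
```

The orthogonality of the two parts follows because Λ_+Λ_− = 0, so the decomposition mirrors the vector case exactly.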
Since the objective function is quadratic, (<ref>) is the minimisation of a quadratic function subject to affine constraints, which boils down to solving a set of linear equations. Indeed, note that the equality constraints over all edges imply that (∀{i,j}∈ℰ) A_ij = −A_ji = I and b_ij = 0, and thus that C_i^*C_j = (deg i)δ_ij, where δ_ij is the Kronecker delta. The solution X_i^(k) to (<ref>) for problem (<ref>) therefore satisfies the following optimality conditions: 1. (∀i∈𝒱)(∀ℓ_i∈L_i) tra(H_{ℓ_i}^T X_i^(k)) = 1; 2. (∀i∈𝒱)(∃γ_i^(k)∈ℝ^{n_i}) (1 + c deg i)X_i^(k) − Q_i + C_i^*Z^(k) + ∑_{ℓ_i∈L_i} (γ_i^(k))_{ℓ_i} H_{ℓ_i} = 0. Hence, (∀i∈𝒱)

X_i^(k) = (Q_i − C_i^*Z^(k) − ∑_{ℓ_i∈L_i} (γ_i^(k))_{ℓ_i} H_{ℓ_i})/(1 + c deg i) = X̃_i^(k) − ∑_{ℓ_i∈L_i} (γ̃_i^(k))_{ℓ_i} H_{ℓ_i},

where X̃_i^(k) is the solution of (<ref>) in the absence of the constraints, and γ̃_i^(k) = γ_i^(k)/(1 + c deg i). The Lagrange multipliers γ̃_i^(k) are found by solving (<ref>). Let (∀i∈𝒱) G_i ∈ ℝ^{n_i×n_i} have entries (∀m∈L_i)(∀n∈L_i) (G_i)_{m,n} = tra(H_{ℓ_m}^T H_{ℓ_n}). With this, (<ref>) for all i∈𝒱 can be expressed as

(1, …, 1)^T = (tra(H_1^T X_i^(k)), …, tra(H_{n_i}^T X_i^(k)))^T = (tra(H_1^T X̃_i^(k)), …, tra(H_{n_i}^T X̃_i^(k)))^T − G_i γ̃_i^(k).

In conclusion, the constraints (∀ℓ_i∈L_i) tra(H_{ℓ_i}^T X_i) = 1 can be implemented by applying a simple correction term ∑_{ℓ_i∈L_i} (γ̃_i^(k))_{ℓ_i} H_{ℓ_i} to X̃_i^(k), the solution of (<ref>) in the absence of the constraints. A similar conclusion holds for semidefinite programs where the objective function is linear, in which case (<ref>) is still a minimisation of a quadratic function subject to affine constraints. For more complex objective functions the inequality constraints can be solved locally using standard convex solvers, or we can replace the complex objective function at every iteration by a quadratic approximation, leading to simple analytic update equations as described above, a procedure that is also guaranteed to converge <cit.>. § CONVERGENCE OF THE GPDMM ALGORITHM Let T = R_{cT_2}∘R_{cT_1}. Since both R_{cT_2} and R_{cT_1} are nonexpansive, T is nonexpansive, and the sequence generated by the Banach-Picard iteration z^(k+1) = Tz^(k) may fail to produce a fixed point of T. A simple example of this situation is T = −I and z^(0) ≠ 0. Although operator averaging provides a means of ensuring algorithmic convergence, resulting in the Krasnosel'skii-Mann iterations z^(k+1) = (1−α)z^(k) + αTz^(k) with α∈(0,1], it is well known that Banach-Picard iterations converge provably faster than Krasnosel'skii-Mann iterations for the important class of quasi-contractive operators <cit.>. As discussed before, the Peaceman-Rachford splitting algorithm converges when T_1 is uniformly monotone. However, assuming finite-dimensional Hilbert spaces, we have dim 𝒢̄ > dim ℋ for any practical network, so that ker C^* ≠ {0}. Recall that T_1 = −C∂f^*(−C^*(·)) + d. Hence, (∃(λ̄,η)∈𝒢̄×𝒢̄) λ̄ ≠ η and C^*(λ̄−η) = 0, which prevents T_1 from being strictly monotone, and thus from being uniformly monotone. It is therefore of interest to consider whether there are milder conditions under which certain optimality can be guaranteed. Whilst such conditions may be restrictive in the case of convergence of the auxiliary variables, in the context of distributed optimisation we are often only interested in primal optimality. For this reason we define conditions that ensure x^(k) → x^* even if z^(k) ↛ z^*, z^* ∈ fix T.
Let T_1 = −C∂f^*(−C^*(·)) + d and T_2 = ∂ι_M be such that fix T ≠ ∅ and ∂f is uniformly monotone with modulus ϕ, let c∈ℝ_++, and let x^* be the solution to the primal problem (<ref>). Given the iterates (<ref>) and z^(0)∈𝒢̄, we have x^(k) → x^*. Proof: Let z^* ∈ fix T. We have for all k∈ℕ,

‖z^(k+1) − z^*‖^2 = ‖(R_{cT_2}∘R_{cT_1})z^(k) − (R_{cT_2}∘R_{cT_1})z^*‖^2
≤ ‖R_{cT_1}z^(k) − R_{cT_1}z^*‖^2
= ‖2λ̄^(k) − z^(k) − (2λ̄^* − z^*)‖^2
= ‖z^(k) − z^*‖^2 − 4⟨λ̄^(k) − λ̄^*, z^(k) − λ̄^(k) − (z^* − λ̄^*)⟩
= ‖z^(k) − z^*‖^2 + 4c⟨λ̄^(k) − λ̄^*, C(x^(k) − x^*)⟩,

where the last equality follows from (<ref>). Moreover, since x^(k) minimises f(x) + ⟨z^(k), Cx⟩ + (c/2)‖Cx−d‖^2, we have that 0 ∈ ∂f(x^(k)) + C^*z^(k) + cC^*(Cx^(k) − d) = ∂f(x^(k)) + C^*λ̄^(k), so that (<ref>) can be expressed as

‖z^(k+1) − z^*‖^2 ≤ ‖z^(k) − z^*‖^2 − 4c⟨∂f(x^(k)) − ∂f(x^*), x^(k) − x^*⟩ ≤ ‖z^(k) − z^*‖^2 − 4cϕ(‖x^(k) − x^*‖).

Hence, ϕ(‖x^(k) − x^*‖) → 0 and, in turn, ‖x^(k) − x^*‖ → 0. To show that x^* is primal feasible, we consider two successive z-updates: z^(k+1) = R_M(z^(k) + 2c(Cx^(k) − d)) = z^(k−1) + 2c((Cx^(k−1) − d) + R_M(Cx^(k) − d)). Subtracting z^* from both sides of (<ref>) and observing from (<ref>) that (‖z^(k) − z^*‖^2)_{k∈ℕ} converges, we conclude that 0 = Cx^* − d + R_M(Cx^* − d) = 2P_M(Cx^* − d) = P_{K̄^∘}(I+P)(Cx^* − d). Hence, (I+P)(Cx^* − d) ∈ K̄ and thus Ax^* − b ∈ K by (<ref>), which completes the proof. Since T is at best nonexpansive, the auxiliary variables will not converge in general. In fact, from (<ref>) we conclude that (z^(2k))_{k∈ℕ} and (z^(2k+1))_{k∈ℕ} converge, and similarly for y and λ̄. § STOCHASTIC COORDINATE DESCENT In practice, synchronous algorithm operation necessitates a global clocking system to coordinate actions among nodes. Clock synchronisation, however, in particular in large-scale heterogeneous sensor networks, can be challenging. In addition, in such heterogeneous environments, where sensors or agents vary greatly in processing capabilities, processors that are fast, either because of high computing power or because of a small workload per iteration, are often forced to idle while waiting for slower processors to catch up. Asynchronous algorithms offer a solution to these issues by providing greater flexibility in leveraging information received from other processors, thereby mitigating the constraints imposed by synchronous operation. In order to obtain an asynchronous (averaged) GPDMM algorithm, we apply randomised coordinate descent to the algorithm presented in Section <ref>. Stochastic updates can be defined by assuming that each auxiliary variable z_{i|j} is updated based on a Bernoulli random variable χ_{i|j} ∈ {0,1}. Let χ = (χ_{i|j})_{(i,j)∈𝒟}, and let (χ^(k))_{k∈ℕ} be an i.i.d. random process defined on a common probability space (Ω, 𝒜, P), such that χ^(k) : (Ω, 𝒜) → {0,1}^{|𝒟|}. Hence, χ^(k)(ω) ∈ {0,1}^{|𝒟|} indicates which entries of z^(k) will be updated at iteration k. We assume that the following condition holds: (∀(i,j)∈𝒟) P({χ_{i|j}^(0) = 1}) > 0. Since (χ^(k))_{k∈ℕ} is i.i.d., (<ref>) guarantees that at every iteration, entry z_{i|j}^(k) has a nonzero probability of being updated. We define the stochastic operator U : 𝒢̄ → 𝒢̄ : (z_{i|j})_{(i,j)∈𝒟} ↦ (χ_{i|j}z_{i|j})_{(i,j)∈𝒟}. With this, we define the stochastic Banach-Picard iteration <cit.> as Z^(k+1) = T_{χ^(k)}Z^(k) = (I − U^(k))Z^(k) + U^(k)(TZ^(k)), where Z^(k) denotes the random variable having realisation z^(k). If T is α-averaged, a convergence proof is given in <cit.>, where it is shown that Z^(k) − T_{χ^(k)}Z^(k) → 0 almost surely.
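A minimal sketch of the stochastic Banach-Picard iteration just defined (Python/NumPy; the ½-averaged toy operator T and the uniform update probability p are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(4)
a = np.array([2.0, -1.0, 0.5, 3.0])
T = lambda z: 0.5 * (z + a)                  # a 1/2-averaged toy map with fix T = {a}

z = np.zeros_like(a)
p = 0.3                                      # P({chi_{i|j} = 1}) > 0 for every entry
for _ in range(2000):
    chi = rng.random(len(z)) < p             # i.i.d. Bernoulli mask, realising chi^(k)
    z = np.where(chi, T(z), z)               # Z <- (I - U)Z + U(TZ): update masked entries

assert np.allclose(z, a, atol=1e-6)          # converges (a.s.) to the fixed point
```

Each coordinate is updated only when its mask entry fires, which mimics nodes waking up asynchronously; since every entry has positive update probability, all coordinates are refreshed infinitely often.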
If T is not averaged, we do not have convergence in general, since T is at best nonexpansive, and we need additional conditions for convergence. We have the following convergence result for stochastic GPDMM. Let T_1 = −C∂f^*(−C^*(·)) + d and T_2 = ∂ι_M be such that fix T ≠ ∅ and ∂f is uniformly monotone with modulus ϕ, let c∈ℝ_++, and let x^* be the solution to the primal problem (<ref>). Given the stochastic iteration (<ref>) and z^(0)∈𝒢̄, we have X^(k) → x^* almost surely. Proof: Let 𝒟_χ = {(i,j)∈𝒟 | χ_{i|j} = 1} and let p_{i|j} = P({χ_{i|j}^(0) = 1}). Moreover, let S = {0,1}^{|𝒟|} and (∀χ∈S) p_χ = P({χ^(0) = χ}). We define ⟨·,·⟩_P as (∀u∈𝒢̄)(∀v∈𝒢̄) ⟨u,v⟩_P = ∑_{(i,j)∈𝒟} p_{i|j}^{-1}⟨u_{i|j}, v_{i|j}⟩. In addition, let 𝒜_k := σ{χ^(t) : t ≤ k}, the σ-algebra generated by the random vectors χ^(0),…,χ^(k). Since 𝒜_k ⊆ 𝒜_l for k ≤ l, the sequence (𝒜_k)_{k∈ℕ} is a filtration on (Ω, 𝒜). For any z^* ∈ fix T we have (see also <cit.>)

𝔼(‖Z^(k+1) − z^*‖^2_P | 𝒜_k) = ∑_{χ∈S} p_χ ‖T_χ Z^(k) − z^*‖^2_P
= ∑_{χ∈S} p_χ ( ∑_{(i,j)∈𝒟_χ} p_{i|j}^{-1}‖(TZ^(k))_{i|j} − z^*_{i|j}‖^2 + ∑_{(i,j)∉𝒟_χ} p_{i|j}^{-1}‖Z^(k)_{i|j} − z^*_{i|j}‖^2 )
= ‖Z^(k) − z^*‖^2_P + ∑_{χ∈S} p_χ ∑_{(i,j)∈𝒟_χ} p_{i|j}^{-1}( ‖(TZ^(k))_{i|j} − z^*_{i|j}‖^2 − ‖Z^(k)_{i|j} − z^*_{i|j}‖^2 )
= ‖Z^(k) − z^*‖^2_P + ∑_{(i,j)∈𝒟} ( ‖(TZ^(k))_{i|j} − z^*_{i|j}‖^2 − ‖Z^(k)_{i|j} − z^*_{i|j}‖^2 )
= ‖Z^(k) − z^*‖^2_P + ‖TZ^(k) − z^*‖^2 − ‖Z^(k) − z^*‖^2.

Using (<ref>), (<ref>) becomes 𝔼(‖Z^(k+1) − z^*‖^2_P | 𝒜_k) ≤ ‖Z^(k) − z^*‖^2_P − 4cϕ(‖X^(k) − x^*‖), which shows that (‖Z^(k) − z^*‖^2_P)_{k≥1} is a nonnegative supermartingale. Moreover, since (·)^{1/2} is concave and nondecreasing on ℝ_+, we conclude that (‖Z^(k) − z^*‖_P)_{k≥1} is a nonnegative supermartingale as well and therefore converges almost surely by the martingale convergence theorem <cit.>. Taking expectations on both sides of (<ref>) and iterating over k, we obtain 𝔼(‖Z^(k+1) − z^*‖^2_P) ≤ ‖z^(0) − z^*‖^2_P − 4c∑_{t=1}^k 𝔼(ϕ(‖X^(t) − x^*‖)), so that ∑_{t=1}^k 𝔼(ϕ(‖X^(t) − x^*‖)) ≤ (1/4c)‖z^(0) − z^*‖^2_P < ∞, which shows that the sum of the expected values of the primal error is bounded. Hence, using Markov's inequality, we conclude that ∑_{t=1}^∞ Pr{‖X^(t) − x^*‖^2 ≥ ϵ} ≤ (1/ϵ)∑_{t=1}^∞ 𝔼[‖X^(t) − x^*‖^2] < ∞ for all ϵ > 0, and by the Borel-Cantelli lemma <cit.> that Pr{lim sup_{k→∞}(‖X^(k) − x^*‖^2 ≥ ϵ)} = 0, which shows that ‖X^(k) − x^*‖^2 → 0 almost surely. The proof that x^* is primal feasible is identical to the one given in the proof of Theorem <ref>. § APPLICATIONS AND NUMERICAL EXPERIMENTS In this section we discuss experimental results obtained by Monte Carlo simulations. We start in Section <ref> by showing convergence results for both synchronous and asynchronous updating schemes, where we optimise a quadratic objective function subject to different cone constraints. Next, in Section <ref>, we show simulation results for a practical communication application of decentralised multiple-input multiple-output (MIMO) maximum likelihood (ML) detection in an ad-hoc communication network. The problem turns out to be NP-hard but can be well approximated through semidefinite relaxation techniques. §.§ Cone constrained consensus problem [Figure: Demonstration of a random geometric graph with 25 nodes.] In our first computer simulation we consider a random geometric graph of N = 25 nodes, where we have set the communication radius r = √(2 log(N)/N), thereby guaranteeing a connected graph with probability at least 1 − 1/N^2 <cit.>. The resulting graph is depicted in Fig. <ref>. Let m and n be strictly positive integers, and let (∀i∈𝒱) ℋ_i = ℝ^{m×n}, equipped with the scalar product (X,Y) ↦ tra(X^TY). The associated norm is the Frobenius norm ‖·‖_F.
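As an aside, a random geometric graph of this kind can be generated as follows (a sketch in Python/NumPy; the unit-square sampling domain is our assumption, as the paper does not state it):

```python
import numpy as np

def random_geometric_graph(N, seed=0):
    """Place N nodes uniformly in the unit square and connect pairs
    within distance r = sqrt(2 log(N) / N)."""
    rng = np.random.default_rng(seed)
    pos = rng.random((N, 2))
    r = np.sqrt(2 * np.log(N) / N)
    edges = [(i, j) for i in range(N) for j in range(i + 1, N)
             if np.linalg.norm(pos[i] - pos[j]) <= r]
    return pos, edges

pos, edges = random_geometric_graph(25)
print(f"{len(edges)} edges among 25 nodes")
```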
We consider the following consensus problem:

minimise ∑_{i∈𝒱} ‖X_i − A_i‖^2_F
subject to (∀{i,j}∈ℰ) X_i = X_j,
(∀i∈𝒱) X_i ∈ K_i,

where the data A_i ∈ ℋ_i was randomly generated from a zero-mean, unit-variance Gaussian distribution. Problem (<ref>) is in standard form and can be solved directly with the GPDMM algorithm. We will consider two scenarios. Let A̅ = |𝒱|^{-1}∑_{i∈𝒱} A_i. The first scenario is the one in which we set (∀i∈𝒱) K_i = ℝ_+^{m×n}, the cone of real nonnegative matrices, and the solution to (<ref>) is given by (∀i∈𝒱) X_i^* = A̅_+, where A̅_+ is the matrix obtained from A̅ by setting the negative entries to 0. Fig. <ref>a) shows convergence results for m = 5 and n = 10. Results are shown for both synchronous (solid lines) and asynchronous (dashed lines) update schemes, and for α∈{1/2, 1}. The case α = 1 corresponds to the non-averaged Banach-Picard iterations (Peaceman-Rachford splitting algorithm), while α = 1/2 corresponds to the Krasnosel'skii-Mann iterations (Douglas-Rachford splitting algorithm). [Figure: (a), (b) Convergence results for GPDMM for the consensus problem (<ref>) over the graph depicted in Fig. <ref> for (a) (∀i∈𝒱) K_i = ℝ_+^{5×10} and (b) (∀i∈𝒱) K_i = S^{10}_+.] In the second scenario, we consider the case where (∀i∈𝒱) ℋ_i = S^n, again equipped with the scalar product (X,Y) ↦ tra(X^TY), and (∀i∈𝒱) K_i = S^n_+, and the solution to (<ref>) is given by (∀i∈𝒱) X_i^* = QΛ_+Q^T, where QΛQ^T is the eigenvalue decomposition of A̅. Fig. <ref>b) shows convergence results for n = 10. The figures show that synchronous and asynchronous implementations have similar convergence behaviour and that, as expected, averaging slows down the convergence rate. §.§ Decentralised approximation algorithms for max-cut problems Given an undirected graph G = (𝒱,ℰ), we define nonnegative weights (∀{i,j}∈ℰ) w_ij = w_ji on the edges in the graph, and we set (∀{i,j}∉ℰ) w_ij = w_ji = 0 when there is no edge between nodes. The maximum cut (max-cut) problem is that of finding the set of vertices S ⊆ 𝒱 that maximises the weight of the edges in the cut (S, S̅), where S̅ = 𝒱∖S. The max-cut problem is known to be NP-hard, and a typical approach to solving such a problem is to find a p-approximation algorithm, which is a polynomial-time algorithm that delivers a solution of value at least p times the optimal value. Goemans and Williamson <cit.> gave a 0.878-approximation algorithm based on semidefinite programming and randomised rounding. This is the best known approximation guarantee for the max-cut problem today. The max-cut problem is given by the following integer quadratic program:

maximise (1/4)∑_{i∈𝒱}∑_{j∈𝒩_i} w_ij(1 − x_i x_j)
subject to (∀i∈𝒱) x_i ∈ {−1,1}.

We can find an approximate solution of (<ref>) by reformulating the problem as a homogeneous (nonconvex) QCQP and solving a semidefinite relaxed version of it. To do so, let n = |𝒱| and let x = (x_1,…,x_n)^T. Ignoring the scaling, (<ref>) can be equivalently expressed as

maximise x^TWx
subject to x ∈ {−1,1}^n,

where W∈ℝ^{n×n} is given by (∀i∈𝒱)(∀j∈𝒱) W_ij = ∑_{l∈𝒩_i} w_il if i = j; W_ij = −w_ij if i ≠ j and {i,j}∈ℰ; and W_ij = 0 if i ≠ j and {i,j}∉ℰ. Note that W ≽ 0. Let L = {1,…,n}. Since x^TWx = tra(x^TWx) = tra(Wxx^T), we can rewrite (<ref>) as

maximise tra(WX)
subject to X ≽ 0,
rank(X) = 1,
(∀ℓ∈L) X_{ℓℓ} = 1.

The condition X ≽ 0 (X positive semidefinite) in combination with rank(X) = 1 implies X = xx^T.
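A minimal sketch of the construction of W (Python/NumPy; the three-node weight matrix is a toy example of ours), including a check that x^TWx equals four times the cut weight, consistent with the ¼ scaling that was dropped above:

```python
import numpy as np

def maxcut_matrix(w):
    """Build W from a symmetric nonnegative weight matrix w (zero diagonal):
    W_ii = sum_j w_ij, W_ij = -w_ij for i != j."""
    w = np.asarray(w, dtype=float)
    W = -w.copy()
    np.fill_diagonal(W, w.sum(axis=1) - np.diag(w))
    return W

w = np.array([[0, 3, 0],
              [3, 0, 1],
              [0, 1, 0]], dtype=float)
W = maxcut_matrix(w)
assert np.all(np.linalg.eigvalsh(W) >= -1e-10)   # W ≽ 0 (a weighted graph Laplacian)
x = np.array([1.0, -1.0, 1.0])                   # cut {1} versus {0, 2}: weight 3 + 1 = 4
assert np.isclose(x @ W @ x, 4 * 4.0)            # x'Wx = 4 x (cut weight)
```

The positive semidefiniteness of W follows because it is exactly the weighted graph Laplacian of G.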
The usefulness of expressing the original max-cut problem (<ref>) in the form (<ref>) lies in the fact that (<ref>) allows us to identify the fundamental difficulty in solving (<ref>), which is the nonconvex rank constraint; the objective function as well as the other constraints are convex. Hence, we can directly relax this problem into a convex problem by ignoring the nonconvex constraint rank(X) = 1. We then get an upper bound on the optimal value of (<ref>) by solving

maximise tra(WX)
subject to X ≽ 0,
(∀ℓ∈L) X_{ℓℓ} = 1,

which is called the SDP relaxation of the original nonconvex QCQP. There exist several strong results on the upper bound of the gap between the optimal solution and the solution obtained from semidefinite relaxations of NP-hard problems <cit.>. We can solve (<ref>) in a distributed fashion. Assume each node i has only knowledge of the weights (w_ij)_{j∈𝒩_i}. To decouple the node dependencies, we introduce local copies X_i of X, and require that (∀{i,j}∈ℰ) X_i = X_j. With this, the distributed version of (<ref>) can be expressed as

maximise ∑_{i∈𝒱} tra(W_i X_i)
subject to (∀i∈𝒱) X_i ≽ 0, (∀ℓ∈L) (X_i)_{ℓℓ} = 1,
(∀{i,j}∈ℰ) X_i = X_j,

where, for all i∈𝒱 and all m,j∈𝒱, we have (W_i)_{mj} = ∑_{l∈𝒩_i} w_il if m = j = i; (W_i)_{mj} = −½w_mj if m = i and j∈𝒩_i; (W_i)_{mj} = −½w_mj if j = i and m∈𝒩_i; and (W_i)_{mj} = 0 otherwise. Hence, (∀i∈𝒱) W_i ≽ 0, and W_i only contains the weights (w_ij)_{j∈𝒩_i}, which are locally known to node i. Moreover, ∑_{i∈𝒱} W_i = W, so that problem (<ref>) is equivalent to problem (<ref>). Hence, by Proposition <ref> (or Proposition <ref> in the case of stochastic updates), the distributed solution will converge to the centralised one. Problem (<ref>) is of the form of our prototypical problem, where we can express the constraint (X_i)_{ℓℓ} = 1 as h_ℓ(X_i) = tra(E_ℓ X_i) = 1, where E_ℓ is the matrix which is zero everywhere except for entry (ℓ,ℓ), which is 1. As explained in Section <ref>, these constraints can be easily implemented as correction terms to the solution of the unconstrained updates of the primal variables. Since tra(E_ℓ E_m) = δ_{ℓm}, (<ref>) becomes (γ̃_i^(k))_ℓ = (X̃_i^(k))_{ℓℓ} − 1, and the updates (<ref>) become

(∀i∈𝒱) X̃_i^(k) = (W_i − C_i^*Z^(k))/(c deg i),
X_i^(k) = X̃_i^(k) − ∑_{ℓ∈L}((X̃_i^(k))_{ℓℓ} − 1)E_ℓ.

Hence, the constraints (∀ℓ∈L) (X_i)_{ℓℓ} = 1 can be implemented by simply setting, at each iteration k, the diagonal elements of X̃_i^(k) equal to 1. Note that due to the relaxation, we have in general rank X^* ≠ 1, and we have to extract an approximate solution x̃ from X^* that is feasible. To do so, we can use randomised rounding <cit.>. Consider the optimisation problem

maximise 𝔼_{ξ∼N(0,Σ)}(ξ^TWξ)
subject to (∀ℓ∈L) 𝔼_{ξ∼N(0,Σ)}(ξ_ℓ^2) = 1,

where 𝔼_{x∼P} f(x) = ∫f(x) dP(x). This can be equivalently expressed as

maximise tra(WΣ)
subject to Σ ≽ 0,
(∀ℓ∈L) Σ_{ℓℓ} = 1,

which is of the form (<ref>) with X = Σ and leads to the following rounding procedure: starting from a solution X^* of (<ref>), we randomly draw several samples ξ^(j) ∼ N(0, X^*), set x̃^(j) = sign(ξ^(j)), and keep the x̃^(j) with the largest objective value. Even though this approach computes good approximate solutions with, in some cases, hard bounds on their suboptimality, it is not directly suitable for a distributed implementation, since after the generation of the random samples we need to compute the objective function to select the best candidate, which requires knowledge of W.
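The rounding procedure just described can be sketched as follows (Python/NumPy; the jitter added before the Cholesky factorisation and the tie-breaking rule are implementation choices of ours, not specified in the paper):

```python
import numpy as np

def randomized_rounding(X_star, W, n_samples=500, seed=5):
    """Draw xi ~ N(0, X*), round x = sign(xi), keep the candidate maximising x'Wx."""
    rng = np.random.default_rng(seed)
    n = X_star.shape[0]
    L = np.linalg.cholesky(X_star + 1e-9 * np.eye(n))   # X* may be singular
    best_x, best_val = None, -np.inf
    for _ in range(n_samples):
        x = np.sign(L @ rng.standard_normal(n))         # xi ~ N(0, X*), then round
        x[x == 0] = 1.0                                  # break (measure-zero) ties
        val = x @ W @ x
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

The selection step evaluates x^TWx for every sample, which is precisely the global information that makes this variant unsuitable for a fully distributed implementation.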
Alternatively, a simple heuristic for finding a good partition is to solve the SDPs above and approximate X^* by its best rank-1 approximation. That is, we set X̂ = λ_1 q_1 q_1^T, where λ_1 is the largest eigenvalue of X^* and q_1 the corresponding eigenvector, and we may define x̃ = sign(q_1) as our candidate solution to (<ref>). Fig. <ref> shows convergence results for the max-cut problem (<ref>) over the graph depicted in Fig. <ref>. The (symmetric) weights are drawn uniformly at random from the set {0,…,9} and the step size parameter c was set to c = 1. Fig. <ref>(a) shows the convergence of the primal variable, averaged over all entries of X and all 25 nodes. The objective value of the solution using the best rank-1 approximation was in this case 398, while the objective value using the randomised rounding method, generating 500 samples, was 401. Fig. <ref>(b) shows the distribution of 500 objective values for points sampled using randomised rounding. [Figure: (a) Convergence of the primal variable, averaged over all entries of X and all nodes; (b) distribution of 500 objective values for points sampled using randomised rounding.] In order to determine the distribution of the difference between the objective value obtained from the best rank-1 approximation and the one from randomised rounding, we ran 10^2 Monte Carlo simulations. Fig. <ref> shows the histogram of the differences. In 55% of the cases the difference is at most 1, and in 90% of the cases the difference is at most 10. The mean of the objective values is 422.7 and the mean of the differences is 3.8. [Figure: Histogram of the differences between objective values.] § CONCLUSIONS In this paper we considered the problem of distributed nonlinear optimisation of a separable convex cost function over a graph subject to cone constraints. We demonstrated the extension of the primal-dual method of multipliers (PDMM) to include cone constraints. We derived update equations, using convex analysis, monotone operator theory and fixed-point theory, by applying the Peaceman-Rachford splitting algorithm to the monotone inclusion related to the lifted dual problem. The cone constraints are implemented by a reflection method in the lifted dual domain, where auxiliary variables are reflected with respect to the intersection of the polar cone and a subspace relating the dual and lifted dual domains. We derived convergence results for both synchronous and stochastic update schemes and demonstrated an application of the proposed algorithm to the maximum cut problem. § REFERENCES boy:11 S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011. dim:10 A. G. Dimakis, S. Kar, J. M. F. Moura, M. G. Rabbat, and A. Scaglione. Gossip algorithms for distributed signal processing. Proceedings of the IEEE, 98(11):1847–1864, 2010. boy:06 S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah. Randomized gossip algorithms. IEEE Trans. Information Theory, 52(6):2508–2530, 2006. ust:10 D. Üstebay, B. N. Oreshkin, M. J. Coates, and M. G. Rabbat. Greedy gossip with eavesdropping. IEEE Trans. on Signal Processing, 58(7):3765–3776, July 2010. ben:10b F. Bénézit, A. G. Dimakis, P. Thiran, and M. Vetterli. Order-optimal consensus through randomized path averaging. IEEE Trans.
on Information Theory, 56(10):5150–5167, October 2010. lut:13b F. Iutzeler, P. Ciblat, and W. Hachem. Analysis of sum-weight-like algorithms for averaging in wireless sensor networks. IEEE Trans. Signal Processing, 61(11):2802–2814, 2013. wai:05 M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. MAP estimation via agreement on trees: Message-passing and linear programming. IEEE Trans. Information Theory, 51(11):3697–3717, November 2005. sch:11 L. Schenato and F. Fiorentin. Average TimeSynch: A consensus-based protocol for clock synchronization in wireless sensor networks. Automatica, 47(9):1878–1886, September 2011. lou:15 A. Loukas, A. Simonetto, and G. Leus. Distributed autoregressive moving average graph filters. IEEE Signal Process. Letters, 22(11):1931–1935, November 2015. li:20 Q. Li, R. Heusdens, and M. G. Christensen. Convex optimisation-based privacy-preserving distributed average consensus in wireless sensor networks. In Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2020. li:21 Qiongxiu Li, Jaron Skovsted Gundersen, Richard Heusdens, and M. G. Christensen. Privacy-preserving distributed processing: Metrics, bounds and algorithms. IEEE Transactions on Information Forensics and Security, 16:2090–2103, 2021. Li:20sp Q. Li, R. Heusdens, and M. G. Christensen. Privacy-preserving distributed optimization via subspace perturbation: A general framework. IEEE Trans. on Signal Processing, 68:5983–5996, October 2020. li:24 Qiongxiu Li, Jaron Skovsted Gundersen, Milan Lopuhaä-Zwakenberg, and Richard Heusdens. Adaptive differentially quantized subspace perturbation (ADQSP): A unified framework for privacy-preserving distributed average consensus. IEEE Transactions on Information Forensics and Security, 19:1780–1793, 2024. luo:06 Z. Luo and W. Yu. An introduction to convex optimization for communications and signal processing. IEEE Journal on Selected Areas in Communications, 24(8):1426–1438, August 2006. zha:12 G. Zhang and R. Heusdens. Linear coordinate-descent message-passing for quadratic optimization. Neural Computation, 24(12):3340–3370, December 2012. she:19 Thomas William Sherson, Richard Heusdens, and W. Bastiaan Kleijn. Derivation and analysis of the primal-dual method of multipliers based on monotone operator theory. IEEE Transactions on Signal and Information Processing over Networks, 5(2):334–347, 2019. jor:23 Sebastian O. Jordan, Thomas W. Sherson, and Richard Heusdens. Convergence of stochastic PDMM. In Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5, 2023. zha:22 G. Zhang, N. Kenta, and W. B. Kleijn. Revisiting the primal-dual method of multipliers for optimisation over centralised networks. IEEE Trans. Signal and Information Processing over Networks, 8:228–243, 2022. kar:20 Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. SCAFFOLD: Stochastic controlled averaging for federated learning. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5132–5143. PMLR, 13–18 Jul 2020. pat:20 Reese Pathak and Martin J. Wainwright. FedSplit: An algorithmic framework for fast federated optimization. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 7057–7066. Curran Associates, Inc., 2020. heu:24 R. Heusdens and G.
Zhang. Distributed optimisation with linear equality and inequality constraints using PDMM. IEEE Trans. on Signal and Information Processing over Networks, 2024. bis:06 P. Biswas, T.-C. Liang, K.-C. Toh, Y. Ye, and T.-C. Wang. Semidefinite programming approaches for sensor network localization with noisy distance measurements. IEEE Transactions on Automation Science and Engineering, 3(4):360–371, 2006. so:07 A. M. C. So and Y. Ye. Theory of semidefinite programming for sensor network localization. Mathematical Programming, 109:367–384, 2007. sim:13 Andrea Simonetto and Geert Leus. Distributed maximum likelihood sensor network localization. IEEE Transactions on Signal Processing, 62:1424–1437, 2013. tan:01 Peng Hui Tan and L. K. Rasmussen. The application of semidefinite programming for detection in CDMA. IEEE Journal on Selected Areas in Communications, 19(8):1442–1449, 2001. ma:02 Wing-Kin Ma, T. N. Davidson, Kon Max Wong, Zhi-Quan Luo, and Pak-Chung Ching. Quasi-maximum-likelihood multiuser detection using semi-definite relaxation with application to synchronous CDMA. IEEE Transactions on Signal Processing, 50(4):912–922, 2002. mob:07 Amin Mobasher, Mahmoud Taherzadeh, Renata Sotirov, and Amir K. Khandani. A near-maximum-likelihood decoding algorithm for MIMO systems based on semi-definite programming. IEEE Transactions on Information Theory, 53(11):3869–3886, 2007. dal:13 Emiliano Dall'Anese, Hao Zhu, and Georgios B. Giannakis. Distributed optimal power flow for smart microgrids. IEEE Transactions on Smart Grid, 4(3):1464–1475, 2013. liu:19 Tian Liu, Bo Sun, and Danny H. K. Tsang. Rank-one solutions for SDP relaxation of QCQPs in power systems. IEEE Transactions on Smart Grid, 10(1):5–15, 2019. fuk:01 Mituhiro Fukuda, Masakazu Kojima, Kazuo Murota, and Kazuhide Nakata. Exploiting sparsity in semidefinite programming via matrix completion I: General framework. SIAM Journal on Optimization, 11(3):647–674, 2001. pak:18 Sina Khoshfetrat Pakazad, Anders Hansson, Martin S. Andersen, and Anders Rantzer. Distributed semidefinite programming with application to large-scale system analysis. IEEE Transactions on Automatic Control, 63(4):1045–1058, 2018. ryu:16 E. K. Ryu and S. Boyd. A primer on monotone operator methods. Applied and Computational Mathematics, 15(1):3–43, 2016. bau:17 Heinz H. Bauschke and Patrick L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, second edition, 2017. CMS Books in Mathematics. mor:62 J. J. Moreau. Décomposition orthogonale d'un espace hilbertien selon deux cônes mutuellement polaires. Comptes rendus hebdomadaires des séances de l'Académie des sciences, (255):238–240, 1962. hal-01867187. dou:56 J. Douglas and H. H. Rachford. On the numerical solution of heat conduction problems in two and three space variables. Transactions of the American Mathematical Society, 82:421–439, 1956. lio:79 P. L. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. SIAM Journal on Numerical Analysis, 16(6):964–979, 1979. gab:76 Daniel Gabay and Bertrand Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics with Applications, 2(1):17–40, 1976. eck:93 J. Eckstein and M. Fukushima. Some reformulations and applications of the alternating direction method of multipliers. Large Scale Optimization: State of the Art, pages 119–138, 1993. bre:67 L. M. Bregman.
The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200–217, 1967. con:18b Matt O'Connor, Guoqiang Zhang, W. Bastiaan Kleijn, and Thushara Dheemantha Abhayapala. Function splitting and quadratic approximation of the primal-dual method of multipliers for distributed optimization over graphs. IEEE Trans. on Signal and Information Processing over Networks, 4(4):656–666, December 2018. ber:04 V. Berinde. Picard iteration converges faster than Mann iteration for a class of quasi-contractive operators. Fixed Point Theory and Applications, 2004(2):97–105, 2004. lut:13 F. Iutzeler, P. Bianchi, P. Ciblat, and W. Hachem. Asynchronous distributed optimizationusing a randomized alternating direction method of multipliers. 52nd IEEE Conference on Decision and Control, pages 3671–3676, December 2013. Firenze, Italy. bia:16 P. Bianchi, W. Hachem, and F. Iutzeler. A coordinate descent primal-dual algorithmand application to distributed asynchronous optimization. IEEE Trans. on Automatic Control, 61(10):2947–2957, October 2016. jac:04 J. Jacod and P. Protter. Probability Essentials. Springer, second edition, 2004. dal:02 Jesper Dall and Michael Christensen. Random geometric graphs. Phys. Rev. E, 66:016121, Jul 2002. goe:95 Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM, 42:1115–1145, 1995. kar:94 D. Karger, R. Motwani, and M. Sudan. Approximate graph coloring by semidefinite programming. Technical report, Department of Computer Science, Stanford University, 1994. par:18 Jaehyun Park and Stephen Boyd. A semidefinite programming method for integer convex quadratic minimization. Optimization Letters, 12(3):499–518, 2018. jon:18 J. A. Jonkman, T. Sherson, and R. Heusdens. Quantization effects in distributed optimisation. ICASSP, pages 3649–3653, 2018.]
Thermodynamic properties of a relativistic Bose gas under rigid rotation
E. Siri and N. Sadooghi  0000-0001-5031-9675
========================================================================

e.siri@physics.sharif.ir
Corresponding author: sadooghi@physics.sharif.ir
Department of Physics, Sharif University of Technology, P.O. Box 11155-9161, Tehran, Iran

We study the thermodynamic properties of a rigidly rotating relativistic Bose gas. First, we derive the solution of the equation of motion corresponding to a free rotating complex Klein-Gordon field and determine the free propagator of this model utilizing the Fock-Schwinger proper-time method. Using this propagator, we then obtain the thermodynamic potential of this model at the zeroth and first perturbative levels. In addition, we compute the nonperturbative ring contribution to this potential. Our focus is on the dependence of these expressions on the angular velocity, which effectively acts as a chemical potential. Using this thermodynamic potential, we calculate several quantities, including the pressure, angular momentum and entropy densities, heat capacity, speed of sound, and moment of inertia of this rigidly rotating Bose gas as functions of temperature (T), angular velocity (Ω), and the coupling constant (α). We show that certain thermodynamic instabilities appear at high temperatures and large couplings. They are manifested as zero and negative values of the above quantities, particularly the moment of inertia and heat capacity. A zero moment of inertia leads to the phenomenon of supervorticity at a certain T or α. Supervortical temperatures (couplings) decrease with increasing coupling (temperature). We also observe superluminal sound velocities at high T and for large α.

§ INTRODUCTION

Studying the effects of extreme conditions on the thermodynamic properties of quark matter is one of the important applications of modern thermal quantum field theory. These conditions include high temperatures up to 10^12 K, large densities up to 10^14 g/cm^3, large magnetic fields up to 10^20 Gauß, and large angular velocities up to 10^22 s^-1. These conditions are partly realized in nature, e.g., in the early universe or the core of compact stars. The quark-gluon plasma (QGP) produced in relativistic heavy-ion collision (HIC) experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) also exhibits these extreme conditions. The aim of these experiments is to recreate the conditions after the Big Bang in the laboratory. Various international projects and intensive studies are in progress to understand the nature of the matter produced after these collisions and to overcome the deficiencies of standard computational methods in simulating quark matter under extreme conditions <cit.>. Among these extreme conditions, quark matter under rotation has attracted much attention in the past few years <cit.>. Several important phenomena, e.g., the chiral vortical effect <cit.>, are related to the presence of a uniform rotation in relativistic systems <cit.>. When, apart from rotation, these systems are subjected to a uniform magnetic field, an inverse magnetorotational effect occurs <cit.>, which in particular leads to a reduction of the temperature of the chiral phase transition <cit.>. Recently, it has been shown that this effect is the main reason for excluding certain phases of quark matter in the interior of neutron stars under some specific circumstances <cit.>.
The field-theoretical investigation of rotation is immensely simplified once it is assumed that the system under consideration undergoes a rigid rotation. Although this kind of rotation cannot be attained in the expanding QGP produced in HICs, all theoretical investigations of this problem are based on this assumption. The latter has several unexpected consequences: The effect of rigid rotation on the equation of state of gluodynamics was recently studied in <cit.>. Assuming sufficiently small angular velocities Ω, the free energy is Taylor expanded in powers of Ω up to 𝒪(Ω^2). This expansion leads immediately to an angular momentum density that is proportional to Ω. According to classical mechanics, the proportionality factor is the moment of inertia I(T). Computing I(T) in numerical simulations of lattice quantum chromodynamics shows that it receives two different contributions. The competition between these two temperature dependent terms leads to a negative moment of inertia at temperatures below a certain supervortical temperature, T_s. According to these results, T_s is given by T_s≈ 1.5 T_c, where T_c is the confinement/deconfinement phase transition temperature. Assuming that the angular momentum is finite, a vanishing moment of inertia at T_s leads to the phenomenon of supervorticity, characterized by a very large angular velocity at this temperature <cit.>. Moreover, as gluons are spin-one bosons, another interesting effect, dubbed the "negative Barnett effect" <cit.>, is supposed to occur at temperatures below T_s. In contrast to the ordinary Barnett effect, in the negative Barnett effect the rotation polarizes spin negatively. This is only possible when the moment of inertia I_S related to the spin S is negative, and its absolute value is larger than the moment of inertia I_L, related to the angular momentum L. Since the latter is always positive <cit.>, the total moment of inertia I=I_L+I_S corresponding to the total angular momentum J=L+S becomes negative. In the present paper, we intend to answer whether a negative moment of inertia also arises in a spin-zero relativistic gas. We thus analyze the impact of a rigid rotation on a relativistic Bose gas. Massless and massive scalar fields under rigid rotation were previously studied in <cit.>. There, the focus is on imaginary rotation <cit.>, which has applications in numerical simulations of a rigidly rotating system on the lattice. It is shown that this procedure leads to the appearance of fractal features of thermodynamics under imaginary rotation and a ninionic deformation of statistics, leading to stable ghost-like excitations <cit.>. In <cit.>, the chiral symmetry breaking/restoration in a Yukawa model is studied. The authors first determine the propagators of free bosons and fermions in a rotating medium using the Fock-Schwinger proper-time method <cit.>. These propagators are then used to determine the thermodynamic potential of the rigidly rotating Yukawa gas. In the following sections, we first review the results presented in <cit.> and determine the free propagator of a rigidly rotating Bose gas using the Fock-Schwinger proper-time method. To do this, we start with the Lagrangian density of a complex Klein-Gordon (CKG) field ϕ. We use the imaginary time formalism to introduce the temperature T and determine the free propagator of the Bose gas at finite T.
Introducing the interaction term λ(ϕ^†ϕ)^2 in the Lagrangian density, we then utilize this propagator to determine the thermodynamic potential of this gas at the zeroth and first orders of a perturbative expansion in the coupling constant λ. We present the results in an integral form and compare them with the corresponding thermodynamic potential of a nonrotating Bose gas. As expected, the angular velocity Ω plays the role of a chemical potential <cit.>. We then perform an appropriate high temperature expansion (HTE) and present the corresponding perturbative part of the thermodynamic potential in this approximation. Apart from these parts, we determine the nonperturbative ring contribution to the thermodynamic potential. The final result for the thermodynamic potential, including the zeroth and first perturbative corrections as well as the nonperturbative ring potential, exhibits a summation over ℓ, which arises from the solution of the CKG equation of motion in the cylindrical coordinate system.[The quantum number ℓ is the conjugate momentum of the azimuthal angle φ in a cylindrical coordinate system.] In the second part of the paper, we perform this summation numerically. Here, we mainly focus on the thermodynamic properties of the relativistic Bose gas under rigid rotation. Using the thermodynamic potential, we first determine the pressure of this gas and study the impact of a rigid rotation on it. Using standard thermodynamic relations, we also determine the angular momentum and entropy densities, j and s, the heat capacity C_V, the speed of sound c_s^2, and the moment of inertia I. Setting first ℓ=1, we present analytical expressions for these quantities up to 𝒪(Ω^2). Plotting the moment of inertia in terms of the coupling α≡λ/π^2, it turns out that I becomes negative above a certain coupling α_s, dubbed the "supervortical coupling". In Fig. <ref>, we schematically describe how a negative moment of inertia affects the rotation of a system. In a system with a positive (negative) moment of inertia, an applied angular momentum J leads to a rotation with an angular velocity Ω parallel (antiparallel) to J. Finally, we perform the summation over ℓ numerically and explore the T and Ω dependence of the above thermodynamic quantities. We show that at high temperatures and for large coupling constants, certain thermodynamic instabilities appear. They are particularly manifested by negative I and C_V, as well as large c_s. For very large couplings, c_s becomes superluminal at high temperatures. The organization of this paper is as follows: In Sec. <ref>, we solve the equation of motion of a free CKG field under rotation. The free bosonic propagator at zero and finite temperature is presented in <ref>. In Sec. <ref>, we compute the thermodynamic potential of a rigidly rotating Bose gas at the zeroth [Sec. <ref>] and first perturbative levels [Sec. <ref>], as well as the nonperturbative ring potential [Sec. <ref>]. To this purpose, we compute the one-loop tadpole diagram of this model in <ref>. In Sec. <ref>, we determine the thermodynamic quantities of this Bose gas under rigid rotation and explore their T and Ω dependence, first keeping the first nonvanishing term in Ω for ℓ=1 [Sec. <ref>] and then numerically for ℓ>1 [Sec. <ref>]. Section <ref> is devoted to our concluding remarks. In App. <ref>, we present the analytical details leading to the free propagator of the free CKG model. In Apps.
<ref> and <ref>, we perform an appropriate HTE and present the free thermodynamic potential and the one-loop tadpole diagram in this approximation.

§ COMPLEX KLEIN-GORDON FIELDS UNDER ROTATION

§.§ The model

We start with the action of a free CKG field in a curved space-time, S_0=∫d^4x (-det(g_μν))^1/2 ℒ_0, with the Lagrangian density ℒ_0=g^μν∂_μϕ^†∂_νϕ-m^2ϕ^†ϕ. To study the effect of a rigid rotation on a relativistic Bose gas described by (<ref>), we introduce the metric
g_μν =
( 1-(x^2+y^2)Ω^2    yΩ    -xΩ    0
       yΩ           -1      0    0
      -xΩ            0     -1    0
        0            0      0   -1 ),
where Ω is a constant angular velocity. The assumed rotation around the z-direction leads to a cylindrical symmetry around this axis. The system is thus naturally described by a cylindrical coordinate system, x^μ=(t,x,y,z)=(t,r cosφ, r sinφ, z), with r the radial coordinate, φ the azimuthal angle, and z the coordinate along the rotation axis. Plugging ℒ_0 from (<ref>) into the Euler-Lagrange equation of motion ∂_α(∂ℒ_0/∂(∂_αϕ^†))-∂ℒ_0/∂ϕ^†=0, we arrive at ∂_α(g^αν∂_νϕ)+m^2ϕ=0, with g^μν the inverse of g_μν from (<ref>). Using L_z≡-i(x∂_y-y∂_x)=-i∂_φ, the equation of motion of a rotating CKG field in the cylindrical coordinate system reads [(i∂_t+ΩL_z)^2+∇^2+∂_z^2-m^2]ϕ(x)=0, with ∇^2=∂_r^2+1/r ∂_r+1/r^2 ∂_φ^2. To solve (<ref>), we use the ansatz ϕ_ℓ(x,k)=e^-iEt+ik_z z+iℓφ ℛ_ℓ(r), where the radial part ℛ_ℓ(r) of ϕ_ℓ(x,k) satisfies (∂_r^2+1/r ∂_r-ℓ^2/r^2+k_⊥^2)ℛ_ℓ(r)=0, with k_⊥^2≡Ẽ^2-k_z^2-m^2 and Ẽ≡E+ℓΩ. Introducing ρ≡rk_⊥, we finally arrive at [ρ^2∂_ρ^2+ρ∂_ρ+(ρ^2-ℓ^2)]ℛ_ℓ(ρ)=0, which is the Bessel differential equation, leading to ℛ_ℓ(r)=J_ℓ(k_⊥r), where J_ℓ(z) is the Bessel function of the first kind. Plugging this result into (<ref>), the solution of (<ref>) reads ϕ_ℓ(x,k)=e^-iEt+ik_z z+iℓφ J_ℓ(k_⊥r).

§.§ Free bosonic propagator at zero and finite temperature

According to the Fock-Schwinger proper-time method, the free two-point Green's function D_0(x,x') of a CKG field is given by <cit.> D_0(x,x')=-i∫_-∞^0 dτ ∑_λ exp(-iλτ) ϕ_λ(x)ϕ_λ^†(x'), where λ and ϕ_λ are the eigenvalues and eigenfunctions of the differential operator 𝒟(∂_x,x). They arise by solving the eigenvalue equation 𝒟(∂_x,x)ϕ_λ(x)=λϕ_λ(x). To show (<ref>), one starts with the Green's function differential equation 𝒟(∂_x,x)D_0(x,x')=δ^4(x-x'), where D_0(x,x') is represented as D_0(x,x')=-i∫_-∞^0 𝒰(x,x';τ) dτ. Here, τ is the proper time and 𝒰(x,x';τ) is the proper-time evolution operator, which satisfies i∂_τ𝒰(x,x';τ)=𝒟(∂_x,x)𝒰(x,x';τ). Using the boundary conditions lim_τ→0 𝒰(x,x';τ)=δ^4(x-x') and lim_τ→∞ 𝒰(x,x';τ)=0, the solution of (<ref>) reads 𝒰(x,x';τ)=e^-iτ𝒟(∂_x,x) δ^4(x-x'). This result leads to (<ref>) upon using (<ref>) and the completeness relation satisfied by ϕ_λ(x), ∑_λ ϕ_λ(x)ϕ_λ^†(x')=δ^4(x-x'). For 𝒟=(i∂_t+ΩL_z)^2+∇^2+∂_z^2-m^2 from (<ref>), the two-point Green's function of a CKG field under rotation is given by inserting (<ref>) into (<ref>), with λ=Ẽ^2-k_⊥^2-k_z^2-m^2 <cit.>, D_0(x,x')=-i∫_-∞^0 dτ ∑_ℓ=-∞^+∞ ∫dEdk_z k_⊥dk_⊥/(2π)^3 e^-iτ(Ẽ^2-k_⊥^2-k_z^2-m^2+iϵ) e^-iE(t-t')+ik_z(z-z')+iℓ(φ-φ') J_ℓ(k_⊥r)J_ℓ(k_⊥r'). Integrating (<ref>) over τ and performing the change of variables E→E-ℓΩ, we arrive at D_0(x,x') in coordinate space, D_0(x,x') = ∑_ℓ=-∞^+∞ ∫dEdk_z k_⊥dk_⊥/(2π)^3 J_ℓ(k_⊥r)J_ℓ(k_⊥r') × e^-iE(t-t')+iℓΩ(t-t')+ik_z(z-z')+iℓ(φ-φ')/(E^2-k_⊥^2-k_z^2-m^2+iϵ).
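As a quick numerical sanity check of the mode functions entering this propagator, one may verify that the radial profile ℛ_ℓ(r)=J_ℓ(k_⊥r) indeed solves the radial equation above. A minimal sketch in Python (the values of ℓ and k_⊥ below are arbitrary illustrative choices):

import numpy as np
from scipy.special import jv, jvp

l, k = 3, 2.5                    # azimuthal quantum number and transverse momentum
r = np.linspace(0.1, 10.0, 500)  # avoid r = 0, where the 1/r terms are singular

# residual of R'' + R'/r + (k^2 - l^2/r^2) R = 0 for R(r) = J_l(k r)
residual = (k**2 * jvp(l, k * r, 2)
            + (k / r) * jvp(l, k * r, 1)
            + (k**2 - l**2 / r**2) * jv(l, k * r))

print(np.max(np.abs(residual)))  # of the order of machine precision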
The corresponding free propagator in the Fourier space is determined by D_ℓℓ'^(0)(p,p')=∫ d^4x d^4x' D_0(x,x')ϕ_ℓ(x,p)ϕ_ℓ'(x',p'), with d^4x=dt dφ dz r dr in the cylindrical coordinate system, D_0(x,x') from (<ref>), and ϕ_ℓ(x,p) given in (<ref>). Performing the integration over x and x', we arrive after some computation at the free boson propagator at zero temperature (see App. <ref> for more details), D_ℓℓ'^(0)(p,p')=(2π)^3 δ^3_ℓ,ℓ'(p_0,p_z,p_⊥;p'_0,p'_z,p'_⊥) D_ℓ^(0)(p), with δ^3_ℓ,ℓ'(p_0,p_z,p_⊥;p'_0,p'_z,p'_⊥) ≡ 1/p_⊥ δ(p_0-p'_0)δ(p_z-p'_z)δ(p_⊥-p'_⊥)δ_ℓℓ', and D_ℓ^(0)(p_0,ω)≡1/((p_0+ℓΩ)^2-ω^2+iϵ). Here, ω^2≡p_⊥^2+p_z^2+m^2. At finite temperature T, p_0 is to be replaced with iω_n, where ω_n=2πnT is the bosonic Matsubara frequency. In the next section, we use D_ℓ^(0)(ω_n,ω)≡1/((ω_n-iℓΩ)^2+ω^2) to derive the thermodynamic potential of an interacting relativistic Bose gas under rotation up to first order in the perturbative expansion. We also determine the nonperturbative ring potential at the lowest order.

§ THERMODYNAMIC POTENTIAL OF AN INTERACTING CKG FIELD IN THE PRESENCE OF ROTATION

In this section, we determine the thermodynamic potential of an interacting CKG field in the presence of rotation. We start with the Lagrangian density ℒ=ℒ_0+ℒ_int, where the free part ℒ_0 is given in (<ref>), and the interaction part reads ℒ_int=-λ(ϕ^†ϕ)^2. Here, λ>0 is the coupling constant of the model. Assuming that λ<1, it is possible to perturbatively expand the thermodynamic potential V_eff in a power series in the orders of λ, V_eff=∑_k=0^+∞ λ^k V_eff^(k). In Sec. <ref>, we first determine the exact expression of the zeroth-order thermodynamic potential V_eff^(0) by making use of standard methods of thermal field theory <cit.>. We then perform an appropriate HTE and present V_eff^(0) in this approximation. To determine the one-loop contribution to the thermodynamic potential, V_eff^(1), the one-loop self-energy function of the model, Π_1, is to be computed. In Sec. <ref>, we first present an exact expression for Π_1. We then determine Π_1 in the limit of high temperature. We end this section by determining the exact expression of the nonperturbative ring potential V_ring for this model.

§.§ Zeroth-order correction to the thermodynamic potential

According to <cit.>, the free (zeroth-order) thermodynamic (effective) potential V_eff of a relativistic Bose gas is given by V_eff^(0) = T∑_n=-∞^+∞∑_ℓ=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 ln(β^2(D_ℓ^(0))^-1), where β≡T^-1 and D_ℓ^(0)(ω_n,ω) is the free propagator of this model. Plugging (<ref>) into (<ref>), we arrive after some standard computation at V_eff^(0) = T/2 ∑_n=-∞^+∞∑_ℓ=-∞^+∞∑_ζ=± ∫dp_z p_⊥dp_⊥/(2π)^2 ln[β^2(ω_n^2+(ω+ζℓΩ)^2)]. Using, at this stage, the identity ∑_n=-∞^+∞ ln((2nπ)^2+u^2)=u+2ln(1-e^-u), valid up to an irrelevant u-independent constant, we perform the summation over the Matsubara frequencies. The zeroth-order correction to V_eff is thus given by V_eff^(0)=∑_ℓ=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 {ω+T[ln(1-e^-β(ω+ℓΩ))+ln(1-e^-β(ω-ℓΩ))]}. The first term is the vacuum contribution to V_eff^(0); it is independent of the angular velocity Ω. The T-dependent part of V_eff^(0), however, can be compared with the thermodynamic potential of a free relativistic Bose gas at finite chemical potential μ <cit.>. The fact that ℓΩ plays the role of the chemical potential μ is indeed expected from the literature (see, e.g., <cit.>). In what follows, we present another way to evaluate (<ref>). To do this, let us again start with <ref>.
Using ln a^2 = -∂/∂κ (a^2)^-κ|_κ=0 = -∂/∂κ 1/Γ(κ)∫_0^∞ ds s^κ-1 e^-a^2 s|_κ=0, we arrive first at V_eff^(0)=-T∑_n=-∞^+∞∑_ℓ=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 [∂/∂κ 1/Γ(κ)∫_0^∞ ds s^κ-1 e^-β^2[(ω_n-iℓΩ)^2+ω^2]s]|_κ=0. Performing the summation over n by making use of ∑_n=-∞^+∞ e^-β^2(ω_n-iℓΩ)^2 s=1/(2√(πs)) ϑ_3(-iℓΩβ/2 | i/(4πs)), where ϑ_3(z|τ) is the elliptic theta function,[Here, we use the notation ϑ_3(z|τ)≡ϑ_3(z,q) with the nome q=e^iπτ, so that ϑ_3(z|τ)=1+2∑_n=1^+∞ q^n^2 cos(2nz).] integrating over p_z and p_⊥ according to (<ref>), and using d/dκ(s^κ/Γ(κ))|_κ=0=1, we obtain V_eff^(0)=-T^4/16π^2 ∑_ℓ=-∞^+∞∫_0^∞ ds/s^3 e^-(mβ)^2 s ϑ_3(-iℓΩβ/2 | i/(4πs)). For our numerical purposes, it is necessary to subtract the T=0 contribution from V_eff^(0). We thus arrive at V_eff^(0)T=-T^4/16π^2 ∑_ℓ=-∞^+∞ 𝒜_3,ℓ(x,y), where 𝒜_3,ℓ(x,y) arises from 𝒜_n,ℓ(x,y)≡∫_0^∞ ds/s^n e^-x^2 s [ϑ_3(-iℓy/2 | i/(4πs))-1] by choosing n=3. Here, x≡mβ and y≡Ωβ. In Sec. <ref>, we study the thermodynamic properties of a relativistic Bose gas under rotation by making use of (<ref>). We derive the pressure, the entropy density, the angular momentum, and the energy density up to first order perturbative corrections, including the first corrections to the nonperturbative ring potential. Inspired by the method presented in <cit.>, it is possible to expand V_eff^(0) in x≪1 and y≪1 and to determine an approximation of this potential at high temperature. According to the proof presented in App. <ref>, the HTE of V_eff^(0)T from (<ref>) reads V_eff^(0)T = -T^4{π^2/45-x^2/12+x^3/6π-x^4/16π^2(ln(4π/x)-γ_E+3/4)+∑_ℓ=1^+∞((3x^2-ℓ^2y^2)/12π^2-x/2π+1/3)ℓ^2y^2}+⋯. A similar expression appears in <cit.>, where ℓy=ℓΩβ is replaced with the chemical potential μ. According to this result, for m,Ω=0, we thus have V_eff^(0)T→-π^2T^4/45, which is the thermodynamic potential of a free and massless relativistic Bose gas <cit.>. Since we are interested in the Ω corrections to V_eff^(0)T, it is possible to keep only the first nonvanishing Ω dependent term in (<ref>). For ℓ=1, we thus have V_eff^(0)T≈-π^2T^4/45(1+15/π^2 (Ωβ)^2). This result indicates that rotation increases the pressure of a free relativistic Bose gas. In Sec. <ref>, we study the effect of Ωβ on the pressure of the free relativistic Bose gas arising from (<ref>) and show that this statement remains true once ℓ>1 contributions are taken into account. We also compare it with the pressure arising from (<ref>) in the high temperature limit. We show that at a certain temperature these two expressions coincide.

§.§ One-loop perturbative correction to the self-energy function

The one-loop correction to the self-energy function, Π_1, is given by the tadpole diagram in Fig. <ref>. Using the free propagator D_ℓ^(0)(ω_n,ω) from (<ref>), it reads Π_1=4λT∑_n=-∞^+∞∑_ℓ=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 D_ℓ^(0)(ω_n,ω). It is possible to utilize the method presented in the previous section and determine Π_1 in an exact form. To do this, we use D_ℓ^(0)(ω_n,ω)=1/2ω ∂/∂ω ln(β^2(D_ℓ^(0))^-1), and replace the propagator in (<ref>) with the expression on the right-hand side of (<ref>). Following the steps leading from (<ref>) to (<ref>), we arrive first at Π_1=-2λT∑_n=-∞^+∞∑_ℓ=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 1/ω ∂/∂ω[∂/∂κ 1/Γ(κ)∫_0^∞ ds s^κ-1 e^-β^2[(ω_n-iℓΩ)^2+ω^2]s]|_κ=0.
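Before carrying out this computation, we note how the integrals 𝒜_n,ℓ(x,y) defined above, which also govern the zeroth-order potential, can be evaluated in practice. Using ϑ_3(-iℓy/2|i/(4πs))-1 = ∑_k≥1 [e^-k^2/(4s)+k|ℓ|y + e^-k^2/(4s)-k|ℓ|y], a minimal Python sketch reads as follows (the ℓ cutoff and the sample values of x and y are illustrative; convergence of the s-integral requires ℓy<x, i.e., ℓΩ<m):

import numpy as np
from scipy.integrate import quad

def theta3_minus_1(s, l, y, tol=1e-14):
    # sum_{k>=1} [e^{-k^2/(4s)+k|l|y} + e^{-k^2/(4s)-k|l|y}]; terms peak near k ~ 2s|l y|
    a = abs(l * y)
    total, k, k_peak = 0.0, 1, 2.0 * s * a
    while True:
        term = (np.exp(-k * k / (4.0 * s) + k * a)
                + np.exp(-k * k / (4.0 * s) - k * a))
        total += term
        if k > k_peak and term < tol * (1.0 + total):
            break
        k += 1
    return total

def A(n, l, x, y):
    f = lambda s: np.exp(-x**2 * s) * theta3_minus_1(s, l, y) / s**n
    return quad(f, 0.0, np.inf, limit=200)[0]

x, y = 1.0, 0.018   # x = m/T and y = Omega/T (sample values)
V0 = -sum(A(3, l, x, y) for l in range(-20, 21)) / (16 * np.pi**2)
print(V0)           # T-dependent zeroth-order potential in units of T^4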
Performing now, in (<ref>), the summation over the Matsubara frequencies by making use of (<ref>), integrating over p_z and p_⊥ according to (<ref>), and using (<ref>) as well as 1/ω ∂/∂ω(e^-β^2ω^2 s)=-2sβ^2 e^-β^2ω^2 s, we arrive at the temperature dependent part of Π_1, Π_1^mat=αT^2/4 ∑_ℓ=-∞^+∞ 𝒜_2,ℓ(x,y), where α≡λ/π^2 and 𝒜_2,ℓ(x,y) can be read off from (<ref>). In what follows, we perform a HTE and present the matter part of Π_1 in this approximation. To do this, let us consider (<ref>) and evaluate the summation over the Matsubara frequencies by making use of ∑_n=-∞^+∞ 1/((ω_n-iℓΩ)^2+ω^2) = β/2ω (n_b(ω+ℓΩ)+n_b(ω-ℓΩ)+1), where n_b is the Bose-Einstein distribution function, defined by n_b(ω)≡1/(e^βω-1). Plugging (<ref>) into (<ref>), we arrive at Π_1=Π_1^vac+Π_1^mat, with the vacuum (T=0) part Π_1^vac≡2λ∑_ℓ=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 1/ω, and the matter (T≠0) part Π_1^mat≡2λ∑_ℓ=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 1/ω [n_b(ω+ℓΩ)+n_b(ω-ℓΩ)]. We focus only on Π_1^mat and separate the ℓ=0 and ℓ≠0 contributions of Π_1^mat to obtain Π_1^mat=4λ(𝒥_1+𝒥_2), with 𝒥_1 ≡ ∫dp_z p_⊥dp_⊥/(2π)^2 n_b(ω)/ω, and 𝒥_2 ≡ ∑_ℓ=1^+∞∫dp_z p_⊥dp_⊥/(2π)^2 1/ω [n_b(ω+ℓΩ)+n_b(ω-ℓΩ)]. In App. <ref>, we apply the method used in App. <ref> and perform a HTE of 𝒥_i, i=1,2 from (<ref>). The resulting expressions are presented in (<ref>) and (<ref>). Combining these expressions, the matter part of Π_1 for x≪1 and y≪1 is given by Π_1^mat=4λT^2{1/12-x/4π+x^2/8π^2(ln(4π/x)-γ_E+1/2)+∑_ℓ=1^+∞[1/6-x/2π(1-ℓ^2y^2/2x^2)-ℓ^2y^2/4π^2+x^2/4π^2(ln(4π/x)-γ_E+1/2)]}+⋯. In the limit of vanishing m and Ω, Π_1^mat is given by[For Ω=0, we neglect the series over ℓ in (<ref>).] Π_1^mat→Π_0≡λT^2/3. Apart from a factor, this result is, as expected, the same as the one presented in <cit.> for the one-loop self-energy diagram of a λφ^4 theory. Let us recall that λT^2 plays the role of a thermal mass for charged bosons. Taking the limit mβ→0 in (<ref>), and keeping the first nonvanishing term in Ω, we arrive for ℓ=1 at Π_1^mat≈λT^2(1-(Ωβ)^2/π^2). To arrive at (<ref>), we particularly used Ω<m and neglected (Ωβ/mβ)^2 in the second line of (<ref>). This result indicates that, at least in the limit mβ→0, the rotation decreases the thermal mass of a charged boson.

§.§ One-loop perturbative correction to the thermodynamic potential

Following the arguments in <cit.>, the one-loop contribution to the thermodynamic potential is given by V_eff^(1)=λ(T∑_n=-∞^+∞∑_ℓ=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 D_ℓ^(0)(ω_n,ω))^2. Comparing (<ref>) with (<ref>) and neglecting the T-independent part of the thermodynamic potential, it is possible to determine V_eff^(1)T using the one-loop self-energy function Π_1^mat, V_eff^(1)=(Π_1^mat)^2/(16λ). The exact expression for Π_1^mat is given in (<ref>) and its HTE is presented in (<ref>). Using (<ref>), we thus obtain V_eff^(1)T=αT^4/256π^2 (∑_ℓ=-∞^+∞ 𝒜_2,ℓ(x,y))^2. In the high temperature limit, V_eff^(1)T=λT^4{1/12-x/4π+x^2/8π^2(ln(4π/x)-γ_E+1/2)+∑_ℓ=1^+∞[1/6-x/2π(1-ℓ^2y^2/2x^2)-ℓ^2y^2/4π^2+x^2/4π^2(ln(4π/x)-γ_E+1/2)]}^2+⋯ arises from (<ref>). The thermodynamic potential up to the one-loop perturbative correction is thus given by V_eff^T=V_eff^(0)T+V_eff^(1)T, with V_eff^(0)T from (<ref>) or (<ref>) and V_eff^(1)T from (<ref>) or (<ref>). In the high temperature limit x≪1 and y≪1, it is possible to neglect the m and Ω dependent terms in (<ref>). The T dependent part of V_eff^(1)T is thus given by V_eff^(1)T→λT^4/144.
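The Matsubara summation formula (<ref>) used above can be checked numerically; a small sketch with mpmath, where β=1 and the sample values of ω and a=ℓΩ are arbitrary (with a<ω):

from mpmath import nsum, inf, pi, exp, mpf

omega, a = mpf('1.3'), mpf('0.4')   # omega and l*Omega in units of T (beta = 1)

lhs = nsum(lambda n: 1 / ((2 * pi * n - 1j * a)**2 + omega**2), [-inf, inf])

n_b = lambda u: 1 / (exp(u) - 1)    # Bose-Einstein distribution
rhs = (n_b(omega + a) + n_b(omega - a) + 1) / (2 * omega)

print(lhs.real, rhs)                # both ~0.734: the two sides agree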
Together with V_eff^(0)T≈-π^2T^4/45 from (<ref>), this leads in the massless, nonrotating limit to V_eff^T≈-π^2T^4/45(1-45α/144). Keeping the first nonvanishing term in Ω in (<ref>), we arrive for ℓ=1 at V_eff^(1)T≈λT^4/16(1-2(Ωβ)^2/π^2), which together with the zeroth order correction to V_eff from (<ref>) leads to V_eff^T≈-π^2T^4/45[(1-45α/16)+15(Ωβ)^2/π^2(1+3α/8)]. In what follows, we determine the nonperturbative ring contribution to the thermodynamic potential.

§.§ Nonperturbative ring contribution and the total thermodynamic potential

Following the arguments in <cit.>, the nonperturbative part of the thermodynamic potential is given by the ring potential V_ring=T∑_n=-∞^+∞∑_ℓ=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 [ln(1+Π_ℓ D_ℓ^(0))-Π_ℓ D_ℓ^(0)], which arises from the resummation of ring diagrams with an increasing number of Π_1-insertions (see Fig. <ref>). Here, Π_ℓ≡4λT∑_n=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 D_ℓ^(0)(ω_n,ω) arises from Π_1=∑_ℓ=-∞^+∞ Π_ℓ with the one-loop self-energy diagram Π_1 from (<ref>),[Later, we consider only the T-dependent part of Π_ℓ in V_ring.] and D_ℓ^(0) is the free boson propagator from (<ref>). In what follows, we determine the leading contribution to V_ring by considering only the n=0 term in the summation over the Matsubara frequencies. Plugging D_ℓ^(0) with n=0 into (<ref>), we first obtain V_ring = T∑_ℓ=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 {ln(1+Π_ℓ/[p_z^2+p_⊥^2+m^2-(ℓΩ)^2]) - Π_ℓ/[p_z^2+p_⊥^2+m^2-(ℓΩ)^2]}. Inspired by the method described in Appendices <ref> and <ref>, it is possible to perform the integration over p_z and p_⊥ and arrive at an exact expression for V_ring. To do this, we replace the logarithm in (<ref>) with its Taylor series, ln(1+x)=∑_k=1^+∞ (-1)^k+1 x^k/k, and arrive at V_ring = T∑_ℓ=-∞^+∞∫dp_z p_⊥dp_⊥/(2π)^2 ∑_k=2^+∞ (-1)^k+1/k (Π_ℓ)^k (u^2)^-k, where u^2≡p_z^2+p_⊥^2+m^2-(ℓΩ)^2. Using (<ref>), we have (u^2)^-k=1/Γ(k)∫_0^∞ dt t^k-1 e^-u^2 t. Plugging this expression into (<ref>) and performing the integration over p_z and p_⊥ by using (<ref>), we obtain V_ring = π^3/2 T/(2π)^3 ∑_ℓ=-∞^+∞∑_k=2^+∞ (-1)^k+1/(kΓ(k)) (Π_ℓ)^k ∫_0^∞ dt t^k-5/2 e^-ζ_ℓ t, with ζ_ℓ≡m^2-(ℓΩ)^2. Assuming that Re[ζ_ℓ]>0, it is possible to perform the integration over t according to[In the massless limit, it is useful to first replace Ω with iΩ_I and eventually analytically continue back to Ω. In this way, ζ_ℓ becomes positive and the integration over t in (<ref>) becomes possible.] ∫_0^∞ dt t^k-5/2 e^-ζ_ℓ t=Γ(k-3/2)ζ_ℓ^3/2-k. Substituting this expression into (<ref>) and performing the summation over k, we arrive at ∑_k=2^+∞ (-1)^k+1/k Γ(k-3/2)/Γ(k) (Π_ℓ)^k ζ_ℓ^3/2-k = 2π^1/2/3 [3ζ_ℓ^1/2 Π_ℓ - 2(Π_ℓ+ζ_ℓ)^3/2 + 2ζ_ℓ^3/2]. Plugging finally (<ref>) into (<ref>), the ring potential (<ref>) reads V_ring = T/12π ∑_ℓ=-∞^+∞ (3ζ_ℓ^1/2 Π_ℓ - 2(Π_ℓ+ζ_ℓ)^3/2 + 2ζ_ℓ^3/2). Adding V_ring from (<ref>) to V_eff^T from (<ref>), the full thermodynamic potential up to the one-loop perturbative correction, including the nonperturbative ring potential, is thus given by V_eff=V_eff^(0)T+V_eff^(1)T+V_ring. In Sec. <ref>, we use (<ref>) to study the thermodynamic behavior of a relativistic Bose gas under rigid rotation. In the rest of this section, we focus on V_ring and determine it in the following four special cases: i) Case 1: Let us first consider the massless limit. Setting m=0 in ζ_ℓ, plugging the resulting expression into (<ref>), and neglecting the terms with odd powers of ℓ in the summation over ℓ, we are left with V_ring = -T/6π ∑_ℓ=-∞^+∞ (Π_ℓ-(ℓΩ)^2)^3/2.
Plugging Π_0≡λT^2/3 from (<ref>) into (<ref>) and assuming that the Bose gas does not rotate, we arrive at V_ring→-λ^3/2T^4/(18√3), as expected. Plugging this expression together with (<ref>) into (<ref>), the full thermodynamic potential in the limit m,Ω→0 reads V_eff→-π^2T^4/45(1-45α/144+15α^3/2/(6√3)). This result is similar to the one presented in <cit.> for a nonrotating relativistic neutral Bose gas. ii) Case 2: To keep the lowest Ω-dependent contribution to V_ring in the massless limit, we replace Π_ℓ in (<ref>) with Π_1^mat from (<ref>). Here, only the ℓ=1 term in the HTE of Π_1^mat from (<ref>) is considered. Going through the same procedure leading to (<ref>), we arrive first at V_ring≈-λ^3/2T^4/6π [(1-(Ωβ)^2/π^2)^3/2+2∑_ℓ=1^+∞(1-(Ωβ)^2/π^2-ℓ^2(Ωβ)^2/λ)^3/2]. Considering only the contributions from the ℓ=0,1 terms, we obtain V_ring≈-λ^1/2T^4/2π {λ-(Ωβ)^2(1+3λ/2π)}. The full thermodynamic potential, including the perturbative part V_eff^T from (<ref>) and the nonperturbative part V_ring from (<ref>), in the massless limit is thus given by V_eff ≈ -T^4(𝒞_0+(Ωβ)^2𝒞_2), with the T and Ω independent coefficients 𝒞_0 ≡ π^2/45(1-45α/16+45α^3/2/2) and 𝒞_2 ≡ 1/3(1-3α^1/2/2+3α/8-9α^3/2/4). A comparison of (<ref>) with (<ref>) shows that the limit Ω→0 is singular. iii) Case 3: In this case, we keep the m-dependence in ζ_ℓ=m^2-(ℓΩ)^2 in (<ref>) and neglect the m,Ω-dependence in Π_ℓ. We thus arrive at V_ring = T/12π ∑_ℓ=-∞^+∞ {3ζ_ℓ^1/2 Π_0-2(Π_0+ζ_ℓ)^3/2+2ζ_ℓ^3/2}, with Π_0 from (<ref>). iv) Case 4: In this case, similar to the previous one, we keep the m-dependence in ζ_ℓ, and replace Π_ℓ in (<ref>) with Π_ℓ(T,Ω)≡λ(T^2/3-ℓ^2Ω^2/2π^2) from (<ref>). We thus obtain V_ring = T/12π ∑_ℓ=-∞^+∞ {3ζ_ℓ^1/2 Π_ℓ-2(Π_ℓ+ζ_ℓ)^3/2+2ζ_ℓ^3/2}, with Π_ℓ from (<ref>). In Secs. <ref> and <ref>, the above results are used to determine the thermodynamic quantities of a rigidly rotating relativistic Bose gas.

§ THERMODYNAMIC QUANTITIES OF A RIGIDLY ROTATING RELATIVISTIC BOSE GAS

In this section, we compute the pressure P as well as the angular momentum, entropy, and energy densities j, s, and ϵ by making use of the results from the previous section. We also determine the moment of inertia I and the heat capacity of this Bose gas, analytically as well as numerically, and show that in some regions of the parameter space of this model they become negative. Let us first consider the thermodynamic Euler equation of this system, ϵ+P=Ts. Here, the pressure P is given by P=-V_eff, with V_eff the full thermodynamic potential from (<ref>). It includes contributions from the tree level and the first perturbative correction, as well as the nonperturbative ring potential. The energy density ϵ is defined in the corotating frame. Its relation with the energy density in the nonrotating frame, ϵ^nr, is given by ϵ=ϵ^nr-jΩ. Here, j is the angular momentum density of the rotating system. It is defined by j≡(dP/dΩ)_T. This expression arises from the Gibbs-Duhem relation <cit.> dP=sdT+jdΩ. Using (<ref>), the entropy density s is defined by s≡(dP/dT)_Ω. Apart from these quantities, we define the heat capacity by <cit.> C_V≡d^2P/dT^2=ds/dT, and the speed of sound c_s by c_s^2≡dP/dϵ=s/(TC_V). We also define the moment of inertia I=I(T) by Taylor expanding the pressure P(T,Ω) in powers of Ω, P(T,Ω)=∑_n=0^+∞ 1/n! P^(n)(T,0)Ω^n, with P^(n)(T,0)≡lim_Ω→0 d^nP(T,Ω)/dΩ^n, and identifying P^(2)(T,0) with I(T) <cit.>, I(T)≡d^2P(T,Ω)/dΩ^2|_Ω=0. Using (<ref>), the moment of inertia (<ref>) can also be interpreted as the linear response coefficient relating j to Ω, I(T)=j(T,Ω)/Ω|_Ω→0.
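These definitions translate directly into code: given any pressure function P(T,Ω), all quantities follow from first and second derivatives. A minimal sketch using central finite differences, with the free high-temperature pressure P=π^2T^4/45 (1+15(Ωβ)^2/π^2) from (<ref>) as a toy input (the step size h is an illustrative choice):

import numpy as np

def P(T, W):                                    # toy input: free high-T pressure
    return np.pi**2 * T**4 / 45 * (1 + 15 * (W / T)**2 / np.pi**2)

def thermo(T, W, h=1e-4):
    s   = (P(T + h, W) - P(T - h, W)) / (2 * h)             # s = dP/dT
    j   = (P(T, W + h) - P(T, W - h)) / (2 * h)             # j = dP/dW
    C_V = (P(T + h, W) - 2 * P(T, W) + P(T - h, W)) / h**2  # C_V = d2P/dT2
    I   = (P(T, h) - 2 * P(T, 0) + P(T, -h)) / h**2         # I = d2P/dW2 at W = 0
    cs2 = s / (T * C_V)                                     # squared speed of sound
    eps = T * s - P(T, W)                                   # Euler relation
    return s, j, C_V, I, cs2, eps

print(thermo(T=1.0, W=0.018))
print(2 / 3)   # analytic I = 2T^2/3 for this toy pressure at T = 1

For this toy pressure, the finite-difference moment of inertia reproduces the analytic value I=2T^2/3, which is the leading term of the perturbative part I_p derived below.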
Plugging (<ref>), (<ref>), and (<ref>) with Π_ℓ from (<ref>) into V_eff from (<ref>), the exact expression for the pressure P in (<ref>) is determined. We use this exact result in Sec. <ref> to study the thermodynamic properties of a relativistic Bose gas under a rigid rotation. In the following Sec. <ref>, however, we present analytical results for P, s, j, I, and ϵ using the approximations (<ref>) (case i) and (<ref>) (case ii) for V_eff.

§.§ Analytical results including the first nontrivial contribution from ℓ=1

Plugging V_eff from (<ref>) (case i) into (<ref>), the pressure P in the limit of vanishing m and Ω reads P→π^2T^4/45(1-45α/144+15α^3/2/(6√3)). This result is in analogy to the one presented in <cit.> for the pressure of an interacting relativistic neutral Bose gas (λφ^4-theory). The nonanalytic contribution proportional to α^3/2 arises from the nonperturbative ring correction. Plugging, in contrast, (<ref>) (case ii) into (<ref>), the first nontrivial contribution of Ω arises in the pressure as P≈T^4(𝒞_0+(Ωβ)^2𝒞_2), with 𝒞_i, i=0,2 defined in (<ref>). As in the previous case, in the coefficients 𝒞_i, i=0,2, the terms proportional to α^n and α^n/2 with n∈ℕ_0 arise from the perturbative and ring corrections to the pressure P, respectively. Using (<ref>), we immediately arrive at j, s, C_V, c_s^2, and I in this approximation,
j ≈ 2T^2𝒞_2Ω,
s ≈ 2T^3(2𝒞_0+(Ωβ)^2𝒞_2),
C_V ≈ 2T^2(6𝒞_0+(Ωβ)^2𝒞_2),
c_s^2 ≈ (2𝒞_0+(Ωβ)^2𝒞_2)/(6𝒞_0+(Ωβ)^2𝒞_2) ≈ 1/3(1+𝒞_2/(3𝒞_0) (Ωβ)^2),
I ≈ 2T^2𝒞_2.
Moreover, plugging (<ref>) and (<ref>) into ϵ=-P+Ts, the energy density of the relativistic Bose gas reads ϵ≈T^4(3𝒞_0+(Ωβ)^2𝒞_2). Let us emphasize that the coefficients 𝒞_i, i=0,2 depend only on α. The thermodynamic quantities j, s, C_V, c_s^2 thus depend on α and Ωβ. The moment of inertia, however, depends only on α. It consists of a perturbative and a nonperturbative part, I=I_p+I_np, with I_p(α) ≡ 2T^2/3(1+3α/8) and I_np(α) ≡ -T^2(α^1/2+3α^3/2/2). In Fig. <ref>, the α dependence of I_p, I_np, and I is demonstrated. It is shown that for 0<α<0.5, I_p>0 (red dashed line) and I_np<0 (green dashed line) in this interval, so that their combination I is positive for α<0.272 and negative for α>0.272 (see the black solid line). At α≈0.272, the total moment of inertia vanishes. This scenario is very similar to the one described in <cit.>, in which the summation of two different contributions to I leads to a negative moment of inertia in some region of the parameter space. Inspired by <cit.>, we refer to the coupling α≈0.272 at which the moment of inertia vanishes as the "supervortical coupling", α_s. According to (<ref>), at this point 𝒞_2 vanishes; thus, the speed of sound becomes equal to that of a free relativistic Bose gas, c_s^2=1/3. Let us notice that for the perturbative expansion to be valid, α=λ/π^2 must be lower than 0.1. Assuming that α<0.1, the total moment of inertia thus turns out to be always positive. In what follows, we determine the thermodynamic quantities j, s, C_V, c_s^2, I, and ϵ numerically and show that, once the contributions from ℓ>1 are taken into account, some thermodynamic quantities, in particular the moment of inertia and the heat capacity, become negative even for α<0.1.

§.§ Numerical results including contributions from |ℓ|≥0

§.§.§ Preliminaries

As shown in the previous section, the pressure P includes three different contributions: the zeroth order correction P_0, the one-loop perturbative correction P_1, and the nonperturbative ring correction P_ring; their explicit forms are listed below, after a brief numerical cross-check of the supervortical coupling α_s found above.
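Since I∝𝒞_2 for ℓ=1, α_s is simply the root of 𝒞_2(α) from (<ref>), which can be cross-checked numerically; a minimal sketch (the bracket [0.1, 0.5] is chosen around the sign change of 𝒞_2):

import numpy as np
from scipy.optimize import brentq

# C_2(alpha) = (1 - (3/2) a^{1/2} + (3/8) a - (9/4) a^{3/2}) / 3
C2 = lambda a: (1 - 1.5 * np.sqrt(a) + 0.375 * a - 2.25 * a**1.5) / 3

alpha_s = brentq(C2, 0.1, 0.5)
print(alpha_s)   # ~0.272, the supervortical coupling quoted above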
These three contributions are given by P_0=-V_eff^(0)T, P_1=-V_eff^(1)T, and P_ring=-V_ring, with V_eff^(0)T, V_eff^(1)T, and V_ring given in (<ref>), (<ref>), and (<ref>). As concerns the ring contribution, we present in this section the results arising from (<ref>) (case iii), where we use the lowest order contribution to the one-loop self-energy in (<ref>).[We have also performed the computation with (<ref>) from case iv. The difference between the results of case iii and case iv is negligible.] Since, according to (<ref>)-(<ref>), the other thermodynamic quantities arise from P, they also consist of three contributions 𝒳_0, 𝒳_1, and 𝒳_ring with 𝒳={j,s,C_V,c_s^2,I,ϵ}. In what follows, we present the necessary analytic expressions for 𝒴_0, 𝒴_1, and 𝒴_ring with 𝒴={j̅,s̅,I̅,C̅_V}, where j̅≡j/T^3, s̅≡s/T^3, I̅≡I/T^2, and C̅_V≡C_V/T^2 are dimensionless quantities. The zeroth order, one-loop, and ring contributions to c_s^2 and ϵ arise simply from these expressions. Using the analytical expressions in this section, we explore the z≡T/m as well as the y=Ωβ behavior of these quantities. We particularly focus on the intervals z∈[0.1,1] and y∈[0.01,0.025], as well as α∈[0.01,0.1]. For nonvanishing Ω, we numerically perform the summation over ℓ up to ℓ_max=50. Let us start with j̅_0, arising from (<ref>) with P replaced with P_0. Using (<ref>), we arrive at j̅_0(x,y) = -i/32π^2 ∑_ℓ=-∞^+∞ ℓ𝒜^(1)_3,ℓ(x,y), where 𝒜^(m)_n,ℓ(x,y) is defined by 𝒜_n,ℓ^(m)(x,y)≡∫_0^∞ ds/s^n e^-x^2 s ϑ^(m)_3(-iℓy/2 | i/(4πs)). Here, ϑ^(m)_3(z|τ)≡d^m/dz^m ϑ_3(z|τ). Plugging then P_1 into (<ref>) and using dP_1/dΩ=βdP_1/dy, as well as d^m𝒜_n,ℓ(x,0)/dy^m=(-iℓ/2)^m 𝒜_n,ℓ^(m)(x,y)|_y=0, we arrive at j̅_1(x,y,α)=iα/256π^2 ∑_ℓ=-∞^+∞ 𝒜_2,ℓ ∑_ℓ=-∞^+∞ ℓ𝒜_2,ℓ^(1). Here, 𝒜_2,ℓ and 𝒜_2,ℓ^(1) are given by (<ref>) and (<ref>), respectively. As concerns j̅_ring, we use (<ref>) (case iii) and obtain j̅_ring(x,y,α) = y/4π ∑_ℓ=-∞^+∞ ℓ^2 [ζ̅_ℓ^-1/2 Π̅_0-2(Π̅_0+ζ̅_ℓ)^1/2+2ζ̅_ℓ^1/2], where ζ̅_ℓ≡x^2-ℓ^2y^2 and Π̅_0=απ^2/3 are dimensionless quantities. The zeroth order contribution to the entropy density arises from (<ref>) with P replaced with P_0. Using Td𝒜_n,ℓ/dT = 2x^2𝒜_n-1,ℓ+iℓy/2 𝒜_n,ℓ^(1), we obtain s̅_0(x,y) = 1/16π^2 ∑_ℓ=-∞^+∞ (4𝒜_3,ℓ+2x^2𝒜_2,ℓ+iℓy/2 𝒜_3,ℓ^(1)). To determine the one-loop contribution to the entropy density, we substitute P_1 into (<ref>), to arrive first at s̅_1=-α/64π^2 ∑_ℓ=-∞^+∞ 𝒜_2,ℓ ∑_ℓ=-∞^+∞ (𝒜_2,ℓ+T/2 d𝒜_2,ℓ/dT). Then, using (<ref>), we obtain s̅_1(x,y,α) = -α/64π^2 ∑_ℓ=-∞^+∞ 𝒜_2,ℓ ∑_ℓ=-∞^+∞ (𝒜_2,ℓ+x^2𝒜_1,ℓ+iℓy/4 𝒜_2,ℓ^(1)). Using (<ref>) and (<ref>), the ring contribution to the entropy density reads s̅_ring(x,y,α) = -1/6π ∑_ℓ=-∞^+∞ [9/2 ζ̅_ℓ^1/2 Π̅_0-(Π̅_0+ζ̅_ℓ)^3/2+ζ̅_ℓ^3/2-3Π̅_0(Π̅_0+ζ̅_ℓ)^1/2]. Similarly, plugging P_0 into (<ref>), the dimensionless moment of inertia I̅_0 is given by I̅_0(x)=-1/64π^2 ∑_ℓ=-∞^+∞ ℓ^2 𝒜_3,ℓ^(2)(x,0). Plugging P_1 into (<ref>), I̅_1 reads I̅_1(x,α)=-α/128π^2 {(∑_ℓ=-∞^+∞ d𝒜_2,ℓ(x,0)/dy)^2+∑_ℓ=-∞^+∞ d^2𝒜_2,ℓ(x,0)/dy^2 ∑_ℓ=-∞^+∞ 𝒜_2,ℓ(x,0)}, with d^m𝒜_n,ℓ/dy^m from (<ref>). The above expressions for I̅_0 and I̅_1 are independent of y. Thus, the summation over ℓ may be performed using ∑_ℓ=-d^+d ℓ^n=2H_d^(-n) for even n, where H_d^(n) is the generalized harmonic number <cit.>. We need, in particular, H_d^(-1)=d(d+1)/2 and H_d^(-2)=d(d+1)(2d+1)/6. Let us now consider I̅_ring, which is given by plugging P_ring into (<ref>). It reads I̅_ring(x,α)=1/2π ∑_ℓ=-∞^+∞ ℓ^2 [Π̅_0/2x-(Π̅_0+x^2)^1/2+x], where again the summation over ℓ can be performed in closed form.
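Since the bracket in I̅_ring is ℓ-independent, the truncated sum over ℓ=-ℓ_max,…,ℓ_max reduces to 2H_ℓ_max^(-2). A minimal sketch (the values of x and α are illustrative; ℓ_max=50 is the cutoff used for the plots):

import numpy as np

def I_ring(x, alpha, l_max=50):
    Pi0 = alpha * np.pi**2 / 3                      # dimensionless Pi_0
    bracket = Pi0 / (2 * x) - np.sqrt(Pi0 + x**2) + x
    H2 = l_max * (l_max + 1) * (2 * l_max + 1) / 6  # H_d^(-2) = sum_{l=1}^{d} l^2
    return 2 * H2 * bracket / (2 * np.pi)           # factor 2 from the l <-> -l symmetry

print(I_ring(x=1.0, alpha=0.02))                    # ring contribution to I/T^2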
To determine the zeroth and first order contributions to C̅_V, we replace P in (<ref>) with P_0 and P_1, and arrive first at C̅_V,0 = 1/16π^2 ∑_ℓ=-∞^+∞ {12𝒜_3,ℓ+2x^2𝒜_2,ℓ+iℓy𝒜_3,ℓ^(1)+4Td𝒜_3,ℓ/dT+2x^2Td𝒜_2,ℓ/dT+iℓy/2 Td𝒜_3,ℓ^(1)/dT}, and C̅_V,1 = -α/64π^2 {∑_ℓ=-∞^+∞ (𝒜_2,ℓ+x^2𝒜_1,ℓ+iℓy/4 𝒜_2,ℓ^(1)) × ∑_ℓ=-∞^+∞ (3𝒜_2,ℓ+Td𝒜_2,ℓ/dT) + ∑_ℓ=-∞^+∞ 𝒜_2,ℓ ∑_ℓ=-∞^+∞ (Td𝒜_2,ℓ/dT-2x^2𝒜_1,ℓ+x^2Td𝒜_1,ℓ/dT-iℓy/4 𝒜_2,ℓ^(1)+iℓy/4 Td𝒜_2,ℓ^(1)/dT)}. Using then (<ref>) and Td𝒜_n,ℓ^(m)/dT = 2x^2𝒜_n-1,ℓ^(m)+iℓy/2 𝒜_n,ℓ^(m+1), we obtain the final expressions for C̅_V,i, i=0,1. Finally, the ring contribution to C̅_V is given by C̅_V,ring = -απ/6 ∑_ℓ=-∞^+∞ [3ζ̅_ℓ^1/2-3(Π̅_0+ζ̅_ℓ)^1/2-Π̅_0(Π̅_0+ζ̅_ℓ)^-1/2]. Combining the expressions corresponding to s and C_V, it is possible to determine the speed of sound c_s^2 according to (<ref>). In what follows, we focus on the T, Ω, and α dependence of the thermodynamic quantities P, j, s, I, C_V, and c_s^2. We also compute the dimensionless energy density ϵ̅ using ϵ̅=-P̅+s̅ and explore the T, Ω, and α dependence of the interaction measure Δ̅≡ϵ̅-3P̅ <cit.>. We use the following notation: y=Ωβ, z≡T/m, P_p≡P_0+P_1, and 𝒳̅_tot≡𝒳̅_0+𝒳̅_1+𝒳̅_ring, with 𝒳={P,s,j,I,ϵ}.

§.§.§ Results

In Sec. <ref>, the exact and HTE expressions of V_eff^(0)T are given in (<ref>) and (<ref>). According to (<ref>), the exact and HTE expressions of the zeroth order pressure P_0 are determined from these expressions. In Fig. <ref>, we compare the T/m dependence of the exact (blue dots) and HTE (red squares) results for the dimensionless P_0/T^4 for a nonrotating (y=0) [Fig. <ref>(a)] and a rotating [Fig. <ref>(b)] relativistic Bose gas with y=0.018. As demonstrated, in both cases the two expressions coincide for z≥0.4. This fixes the reliability regime of the HTE. A comparison between the two plots shows that rotation increases the pressure P_0 by up to several orders of magnitude. In Fig. <ref>, the T/m and Ωβ dependence of the dimensionless P_ring/T^4 is presented for α=0.02, 0.06, 0.08 and fixed y=0.018 [Fig. <ref>(a)] as well as fixed z=0.75 [Fig. <ref>(b)]. It is shown that P_ring/T^4 increases with increasing T/m and Ωβ. The α dependence of P_ring/T^4 is, however, nontrivial. It is plotted in Fig. <ref> for fixed y=0.018 and various temperatures z=0.7, 0.8, 0.9. It turns out that at low temperature (z=0.7), P_ring/T^4 is negative and decreases with increasing α, while at high temperatures (z=0.8, 0.9), it is first negative, then increases with α and becomes positive for α∼1. In Fig. <ref>(a), the T/m dependence of P_p/T^4 (blue circles) and P_tot/T^4 (red squares) is plotted for fixed y=0.018 and α=0.02. At T/m>0.4, P_p and P_tot are both positive and their difference (P_tot-P_p=P_ring) becomes negligible. The same is also true for their Ωβ dependence once z and α are fixed [see Fig. <ref>(b)]. According to this plot, P_p and P_tot increase with increasing Ωβ, and their difference decreases with increasing Ωβ. These results are in complete agreement with the results from Fig. <ref>, where the absolute value of P_ring decreases with increasing temperature and angular velocity. Let us now consider the T/m and Ωβ dependence of the dimensionless angular momentum density j̅_tot, which includes the contributions from j̅_0, j̅_1, and j̅_ring. In Fig. <ref>(a), the T/m dependence of j̅_tot is plotted for fixed y=0.018 and three different couplings α=0.02, 0.06, 0.08. According to this plot, j̅_tot first increases with T/m, reaches a maximum at some temperature, and then decreases with increasing temperature.
The position of the maxima and the point at which j̅_tot vanishes and then changes its sign depend on the strength of the coupling α. The larger α is, the smaller the maximum of j̅_tot. This specific T dependence of j̅_tot may be interpreted as a sign of thermodynamic instability in the medium, which turns out to be more probable for strong couplings α. In Fig. <ref>(b), we explore the Ωβ dependence of j̅_tot for fixed temperature z=0.75 and α=0.02, 0.06, 0.08. It turns out that for a weakly interacting relativistic Bose gas, j̅_tot increases with Ωβ, whereas for a moderately interacting one, it is first positive and remains almost constant up to Ωβ∼0.018; afterwards it becomes zero and changes its sign for larger Ωβ. For α=0.08 and z=0.75, however, it turns out to be negative, in accordance with the plots from Fig. <ref>(a). The plot from Fig. <ref>(b) indicates that j̅_tot is not linear in Ωβ, especially for larger values of Ωβ. In Fig. <ref>, the T/m dependence of the dimensionless entropy density s̅_tot for a nonrotating (y=0) and a rotating (y=0.018) relativistic Bose gas is compared. As in the previous case, s̅_tot receives contributions from s̅_0, s̅_1, and s̅_ring. As shown in Fig. <ref>(a), in a nonrotating gas the entropy increases with increasing temperature, and changing the strength of the interaction α has practically no effect on this behavior. For a rotating gas, however, the coupling α substantially affects the T dependence of s̅_tot. Apart from the fact that its value increases by up to several orders of magnitude due to rotation, the T dependence of s̅_tot in a weakly interacting medium is similar to that of a nonrotating gas. In contrast, for larger values of α, s̅_tot first increases with increasing temperature, exhibits a maximum at a certain temperature, and then decreases with increasing T. Hence, an interplay between the coupling α and the angular velocity Ω in the final expression for s̅_tot leads to a more ordered system at high temperature. In what follows, we show that this counter-intuitive T dependence of the entropy density leads to two novel phenomena in a rigidly rotating relativistic Bose gas: (i) the emergence of a negative heat capacity and (ii) the appearance of superluminal sound velocities at high enough temperatures and large enough couplings. Both effects are signs of thermodynamic instability. In Fig. <ref>, the Ωβ dependence of the dimensionless entropy density is plotted for a fixed temperature z=0.75 and various couplings α=0.02, 0.06, 0.08. In the weakly interacting case (α=0.02), s_tot/T^3 increases slightly with increasing Ωβ. In contrast, for a moderately/strongly interacting medium, s_tot/T^3 decreases with Ωβ. According to these results, we conclude that the coupling constant α plays an important role in the T/m as well as the Ωβ dependence of the total entropy density. Moreover, in general, for fixed temperature and angular velocity, the total entropy density decreases with increasing coupling α. Let us now consider the temperature dependence of the dimensionless moment of inertia I_tot/T^2. In Fig. <ref>, the T/m dependence of I̅_0 (blue dots), I̅_p (red squares), and I̅_tot=I̅_p+I̅_ring (green triangles) is plotted. For a weakly interacting relativistic Bose gas with α=0.02, I̅_tot increases with increasing temperature. In addition, its T/m dependence turns out to be mainly dominated by that of I̅_p=I̅_0+I̅_1. The fact that I̅_tot is positive in the whole interval T/m∈[0,1] is only true for a weakly interacting gas. In Fig.
<ref>(a), we explore the T/m dependence of the dimensionless I̅_tot for a weakly, moderately, and strongly interacting rotating Bose gas with couplings α=0.02, 0.06, and 0.08, respectively. It turns out that in a weakly interacting medium, I̅_tot is positive in the whole interval of temperature, while in a moderately/strongly interacting gas, it first increases with T/m, then has a maximum at some moderate temperature, and eventually falls and changes sign at high temperatures. Following the terminology introduced recently in <cit.>, we refer to the temperatures at which I̅_tot vanishes as "supervortical temperatures", z_s≡(T/m)_s. According to the results in Fig. <ref>(a), at z<z_s (z>z_s) the total moment of inertia is positive (negative). In Fig. <ref>(b), the α dependence of the supervortical temperatures (T/m)_s is plotted. The blue dots indicate the supervortical temperatures for each given α. The region below (above) the blue dots corresponds to I̅_tot>0 (I̅_tot<0). We also examine the α dependence of I̅_tot for fixed temperatures z=0.55, 0.75, 0.95 [see Fig. <ref>(a)]. Whereas for z=0.55 the dimensionless moment of inertia is positive for 0<α<0.1, at higher temperatures there exists a certain "supervortical coupling" for which I̅_tot vanishes. Consequently, for α<α_s (α>α_s), I̅_tot turns out to be positive (negative). In Fig. <ref>(b), the T/m dependence of α_s is plotted. It decreases with increasing temperature, as expected from Fig. <ref>(a). Again, the blue dots indicate the supervortical couplings for each given temperature T/m, and the region below (above) the blue dots corresponds to I̅_tot>0 (I̅_tot<0). At this stage, a couple of remarks concerning I=0 and I<0 are in order. Using j=IΩ from (<ref>) and assuming that j=const., a vanishing moment of inertia leads to an extremely large angular velocity Ω. This is why the term "supervortical" is used in <cit.>. A negative moment of inertia, however, means that by applying an external angular momentum J, the system rotates with an angular velocity Ω directed antiparallel to J (see Fig. <ref> for a visualization of this situation). Thus, a negative moment of inertia may indicate a thermodynamic instability in a rigidly rotating medium <cit.>. In Fig. <ref>, we explore the T dependence of the dimensionless Δ̅=ϵ̅-3P̅ for a nonrotating (y=0) and a rotating (y=0.018) relativistic Bose gas. This quantity is a measure for the ideality of a relativistic medium, as ϵ=3P is the equation of state of an ideal (Bose) gas. As demonstrated in Fig. <ref>(a), Δ̅_tot decreases with increasing temperature. The slope of its fall depends slightly on the coupling α. This result indicates that at high enough temperatures the nonrotating Bose gas behaves as an ideal gas, as Δ̅_tot asymptotically approaches zero. For a rotating medium, apart from the fact that Δ̅_tot is up to several orders of magnitude larger than in a nonrotating medium, it decreases with increasing temperature [see Fig. <ref>(b)]. However, in contrast to the nonrotating case, it vanishes at a certain temperature and becomes negative afterwards. This is a sign of a thermodynamic instability caused by a rigid rotation in a strongly interacting relativistic Bose gas. The Ωβ dependence of Δ̅_tot is plotted in Fig. <ref> for fixed temperature z=0.75 and α=0.02, 0.06, 0.08. According to this plot, Δ̅_tot decreases with increasing Ωβ. At some specific Ωβ, it vanishes and becomes negative.
The stronger the coupling constant, the lower the angular velocity at which Δ̅_tot vanishes and the system becomes unstable. In Fig. <ref>, the temperature dependence of the dimensionless heat capacity C_V/T^2 is plotted for a nonrotating (y=0) and a rotating (y=0.018) relativistic Bose gas. We used (<ref>) to determine C_V. According to the plot in Fig. <ref>(a), in the nonrotating medium the heat capacity increases with increasing temperature. In a rotating medium, however, the T/m dependence of C̅_V depends significantly on α [see Fig. <ref>(b)]. Whereas for α=0.02 (weakly interacting medium) the heat capacity is always positive and its T dependence is more or less similar to the case of a nonrotating gas, for a moderately and a strongly interacting gas with α=0.06 and α=0.08, C̅_V decreases with increasing temperature. For α=0.08, it vanishes at T/m∼0.82 and then becomes negative at T/m>0.82. Let us notice that when a system possesses a negative heat capacity, its temperature decreases upon supplying heat. The same counter-intuitive behavior appears in a rigidly rotating Bose gas, and it is affected by the strength of the interaction in the medium. The Ωβ dependence of the dimensionless heat capacity is plotted in Fig. <ref> for fixed temperature z=0.75 and α=0.02, 0.06, 0.08. In the weakly interacting case α=0.02, C̅_V is positive and almost constant in Ωβ. For α=0.06 and α=0.08, however, it slightly decreases with increasing Ωβ. For the large coupling α=0.08, it vanishes at some large Ωβ. For fixed T and Ω, C̅_V decreases with increasing α. Using the data corresponding to the entropy density and heat capacity, it is possible to determine the sound velocity c_s according to (<ref>). In Fig. <ref>, the T/m dependence of c_s^2 is plotted for fixed α=0.02, 0.06, 0.08 as well as y=0 [Fig. <ref>(a)] and y=0.018 [Fig. <ref>(b)]. According to these results, the speed of sound of a nonrotating Bose gas increases with increasing T and asymptotically approaches the speed of sound of a free relativistic gas, c_s^2=1/3. In the absence of rotation, different choices of α do not affect this behavior too much. In a rotating medium, however, the situation is different. Whereas, according to the results in Fig. <ref>(b), for a weakly interacting gas the temperature dependence of c_s^2 is more or less similar to the nonrotating case, for a moderately interacting gas with α=0.06, c_s^2 increases with T, passes 1/3 at high temperature, and becomes almost equal to the speed of light at T/m∼1. For the strong coupling α=0.08, the sound velocity increases very fast, so that at T/m=0.8 it is given by c_s=1.07>1. This breaks causality and is an indication that a strongly interacting rotating Bose gas becomes unstable at high temperature. The Ωβ dependence of c_s^2 is explored in Fig. <ref> for fixed temperature z=0.75 and α=0.02, 0.06, 0.08. For a weakly interacting Bose gas, the squared speed of sound remains below c_s^2=1/3. It increases slightly with increasing Ωβ, but never becomes larger than the speed of light. The same is also true for a moderately interacting medium with α=0.06. For α=0.08, however, c_s^2 increases very fast with increasing Ωβ and reaches c_s∼1 at Ωβ∼0.022. Afterwards, the system becomes unstable because of broken causality for larger values of the angular velocity. Let us notice that α=0.08 corresponds to λ∼0.78<1, which is still reliable for a perturbative expansion.
The above results show that such a strongly interacting Bose gas becomes unstable either at large temperatures or at large angular velocities once the system is subjected to a rigid rotation.

§ CONCLUDING REMARKS

We studied the effect of a rigid rotation on the thermodynamic properties of a relativistic Bose gas. First, we determined the perturbative thermodynamic potential up to one-loop order, which, together with the nonperturbative ring potential, was used to compute the thermodynamic quantities in this approximation. To do this, we considered the Lagrangian density of a CKG model in the presence of a rigid rotation. We utilized the solution of the corresponding equation of motion to derive the free propagator of this model using the Fock-Schwinger method. The free propagator allowed us to determine the thermodynamic potential of this model, including the zeroth and one-loop perturbative contributions as well as the nonperturbative ring potential. We presented analytical expressions for these quantities and showed, in particular, that the angular velocity Ω effectively plays the role of a chemical potential, as anticipated from the literature. Additionally, we performed an appropriate high temperature expansion and presented the corresponding results for the total thermodynamic potential in this approximation. This potential was then used to determine several thermodynamic quantities, including the pressure, entropy density, angular momentum density, heat capacity, speed of sound, and moment of inertia of this rotating relativistic Bose gas. We numerically explored the T and Ω dependence of these quantities. By comparing the exact expression of P_0, arising from the zeroth order thermodynamic potential, with the corresponding high temperature expanded expression, we determined the high temperature regime of this model to be T/m≥0.4. We showed that P_0 of a rotating relativistic Bose gas is much higher than P_0 of a nonrotating gas. We then focused on the one-loop and ring contributions to the total pressure P_tot. As the ring potential is negative in the whole interval of T and Ω, and as it increases with increasing T and Ω, its effect diminishes in the high temperature and high frequency regimes. Hence, in this regime, the (T,Ω) dependence of the total pressure is mainly dominated by the (T,Ω) dependence of P_p, including the zeroth and one-loop contributions to P_tot. Apart from the (T,Ω) dependence of the pressure, we focused on its α=λ/π^2 dependence. Here, λ is the coupling constant of the model, which appears in the corresponding Lagrangian density. We showed that the ring pressure exhibits a nonlinear dependence on α. Regarding the (T,Ω) dependence of the angular momentum and entropy densities, j_tot and s_tot, for fixed (Ωβ, T/m) and α, we distinguish three different types of behavior in three different regimes of α. Whereas in the weakly interacting regime 0<α≤0.05, j_tot is positive and s_tot increases with increasing temperature and angular velocity, in the moderately and strongly interacting regimes α∈[0.05,0.07] and α∈[0.07,0.1], j_tot becomes negative, in particular in the high temperature regime, and s_tot decreases with increasing temperature. This is an effect mainly caused by the rigid rotation, as, for instance, the entropy density of a nonrotating relativistic Bose gas increases with increasing T, as expected. Being directly related to j_tot through its definition in (<ref>), the T dependence of the moment of inertia I_tot is also affected by α.
Whereas in the weakly interacting regime it is positive, it becomes negative in a moderately and strongly interacting medium after a certain temperature. The specific temperature at which I_tot vanishes was referred to as the supervortical temperature, T_s. We demonstrated in Fig. <ref>(b) that T_s decreases with increasing α. Apart from T_s, we defined a supervortical coupling α_s, and showed in Fig. <ref>(b) that α_s decreases with increasing temperature. Interpreting j=IΩ as the linear response to Ω, the moment of inertia I plays the role of the susceptibility of the medium with respect to rotation. As demonstrated in Fig. <ref>, I>0 (I<0) means that upon applying an angular momentum j, the system rotates with Ω parallel (antiparallel) to j, and a vanishing moment of inertia leads to Ω→∞ once j is assumed to be finite. A similar counter-intuitive effect is also observed in the temperature dependence of the heat capacity C_V. Whereas C_V is positive in a weakly interacting Bose gas under rotation, in a moderately interacting gas it decreases with increasing temperature, and in a strongly interacting Bose gas it vanishes at some finite temperature and becomes negative with increasing temperature. Negative C_V means that although a system receives heat, its temperature decreases. Its occurrence is a sign of thermodynamic instability in a medium. Here, this instability is caused by rigid rotation. Another noticeable effect that occurs once the relativistic Bose gas is strongly interacting and rigidly rotating is the appearance of superluminal sound velocities at high temperatures and for large angular velocities. According to (<ref>), the sound velocity c_s is defined in terms of the entropy density and heat capacity. Its (T,Ω,α) dependence is thus directly related to the (T,Ω,α) dependence of the entropy density. It thus seems that a relativistic Bose gas under rigid rotation becomes thermodynamically unstable in the strong coupling regime α∈ [0.07,0.1], in which, because of λ<1, perturbative computation is still possible. In summary, the analysis of the thermodynamic properties of the rigidly rotating relativistic Bose gas revealed interesting behavior at high temperatures and large coupling constants. The appearance of thermodynamic instabilities, such as zero and negative values of the moment of inertia and heat capacity, suggested the presence of unique phenomena such as supervorticity and provided insight into the complex behavior of a rigidly rotating system. It would be interesting to extend this work to a relativistic Fermi gas and eventually generalize it to the QGP produced in relativistic HICs. A first attempt in this direction is made in <cit.>. In general, the study of such systems not only enriches our knowledge of fundamental physics, but may offer potential applications in diverse fields such as condensed matter physics <cit.> and astrophysics <cit.>. § FREE BOSONIC PROPAGATOR IN MOMENTUM SPACE The free boson propagator in the coordinate space is given by (<ref>). The corresponding propagator in the Fourier space is determined by D_ℓℓ'^(0)(p,p')=∫ d^4xd^4x'D_0(x,x')ϕ_ℓ^*(x,p)ϕ_ℓ'(x',p'), with d^4x=dtdφ dzrdr in the cylindrical coordinate system.
Plugging D_0(x,x') from (<ref>) and ϕ_ℓ(x,p) from (<ref>) into (<ref>), we arrive first at D_ℓℓ'^(0)( p,p' ) = ∑_n=-∞^+∞∫dtdt'dφ dφ'dzdz'rdrr'dr' ×∫dEdk_zdk_⊥k_⊥/( 2π)^3e^-iE( t-t' )+in Ω( t-t' )+ik_z( z-z' )+in ( φ -φ' )/E^2-k_⊥^2-k_z^2-m^2+iϵJ_n ( k_⊥r )J_n ( k_⊥r' ) ×e^+ip_0t-iℓφ -ip_zzJ_ℓ( p_⊥r )× e^-ip'_0t'+iℓ 'φ'+ip'_zz'J_ℓ'(p'_⊥r' ). To perform the integrations over t and z, we use ∫ dt e^-i(E-(nΩ+p_0))t = 2πδ(E-(nΩ+p_0)), ∫ dze^i(k_z-p_z)z = 2πδ(k_z-p_z). The integrations over t' and z' are performed similarly. The integral over φ yields ∫_0^2πdφ e^i(n-ℓ)φ=2πδ_nℓ. Similarly, the integration over φ' leads to ∫_0^2πdφ' e^i(ℓ'-n)φ'=2πδ_nℓ'. Because of the summation over n in (<ref>), (<ref>) and (<ref>) result in ℓ=ℓ'=n. It is thus possible to perform the integration over r and r' by making use of ∫_0^∞drrJ_ℓ(k_⊥r )J_ℓ(p_⊥r ) = 1/k_⊥δ(k_⊥-p_⊥), ∫_0^∞dr'r'J_ℓ(k_⊥r' )J_ℓ(p'_⊥r' ) = 1/k_⊥δ(k_⊥-p'_⊥). Plugging (<ref>), (<ref>), (<ref>) and (<ref>) into (<ref>), and performing the integration over E, k_z, k_⊥ and the summation over n, we arrive at D_ℓℓ'^(0)(p,p')=(2π)^3δ_ℓ,ℓ'(p_0,p_z,p_⊥;p^'_0,p^'_z,p^'_⊥)/(p_0+ℓΩ)^2-p_⊥^2-p_z^2-m^2+iϵ, with δ_ℓ,ℓ'(p_0,p_z,p_⊥;p^'_0,p^'_z,p^'_⊥) = 1/p_⊥δ(p_0-p^'_0)δ(p_z-p^'_z) ×δ(p_⊥-p^'_⊥)δ_ℓℓ' [see (<ref>) and (<ref>)]. § HIGH TEMPERATURE EXPANSION OF V_EFF^(0)T In this appendix, we derive (<ref>), which arises from an appropriate HTE of the T dependent part of the zeroth order correction to the thermodynamic (effective) potential (<ref>). For this purpose, we use the method introduced in <cit.>, where the thermodynamic potential of a free relativistic Bose gas with a finite chemical potential μ is expanded in powers of mβ, with β=T^-1 and μ<m.[Here, μ is assumed to be positive.] Let us consider V_eff^(0)T from (<ref>), V_eff^(0)T = T∑_ℓ=-∞^+∞∫dp_zp_⊥dp_⊥/(2π)^2 ×[ln(1-e^-β( ω +ℓΩ))+ln( 1-e^-β( ω -ℓΩ))], and separate the summation over ℓ into the contributions from ℓ=0 and ℓ≠ 0 to V_eff^(0)T. The resulting expression is then given by V_eff^(0)T=2T(ℐ_1+ℐ_2), with ℐ_1 ≡ ∫dp_zp_⊥dp_⊥/( 2π)^2ln( 1- e^-βω), ℐ_2 ≡ ∑_ℓ =1^+∞∫dp_zp_⊥dp_⊥/( 2π)^2 ×[ ln( 1-e^-β( ω +ℓΩ))+ln( 1-e^-β( ω -ℓΩ)) ]. Here, ω^2=p_⊥^2+p_z^2+m^2. To evaluate the Ω-independent part of V_eff^(0)T, ℐ_1, we use ln (1-x)=-∑_k=1^+∞x^k/k, and arrive first at ℐ_1=-∑_k=1^+∞1/k∫dp_zp_⊥dp_⊥/( 2π)^2e^-βω k. Using, at this stage, the Mellin transformation of the exponential function in (<ref>), we obtain e^-βω k=1/2π i∫_c-i∞^c+i∞dz Γ( z ) (β k)^-z(ω^2)^-z/2. Plugging (ω^2)^-z/2=1/Γ(z/2)∫_0^∞dt t^z/2-1e^-ω^2t into (<ref>) and the resulting expression into (<ref>), we arrive at ℐ_1 = -1/2π i∫_c-i∞^c+i∞dzζ(z+1)Γ(z)/Γ(z/2)β^-z ×∫_0^∞dt t^z/2-1e^-m^2t∫dp_zp_⊥dp_⊥/(2π)^2e^-(p_z^2 +p_⊥^2)t, where ζ(z) is the Riemann ζ-function. It arises from ∑_k=1^+∞k^-( 1+z )=ζ( 1+z ), which is used to perform the summation over k in (<ref>). The integration over p_z and p_⊥ can easily be performed and yields ∫dp_zp_⊥dp_⊥/(2π)^2e^-(p_z^2 +p_⊥^2)t=1/(2π)^3(π/t)^3/2. We first substitute (<ref>) into (<ref>) and then perform the integration over t by making use of ∫_0^∞dt t^x-1e^-w^2t=Γ(x)(w^2)^-x, for Re[w^2]>0 and Re[x]>0.
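As a quick numerical sanity check of the Mellin-Barnes machinery just introduced, the following Python sketch (the truncation parameters and test values are my own illustrative choices, not taken from the text) evaluates the contour representation e^-x=1/2π i∫_c-i∞^c+i∞dz Γ(z)x^-z along the vertical line Re z=c and compares it with e^-x directly:

import mpmath as mp

def mellin_exp(x, c=1.5, cutoff=60):
    # e^{-x} = (1/2*pi*i) * Integral_{c-i.inf}^{c+i.inf} Gamma(z) x^{-z} dz, for x, c > 0;
    # parametrize z = c + i*t and truncate the t-integration at +/- cutoff,
    # which is harmless since Gamma(c+it) decays like exp(-pi*|t|/2)
    f = lambda t: mp.gamma(c + 1j*t) * mp.power(x, -(c + 1j*t))
    return (mp.quad(f, [-cutoff, cutoff]) / (2*mp.pi)).real

for x in (0.5, 1.0, 2.0):
    print(x, mellin_exp(x), mp.e**(-x))

Both columns agree to high precision, confirming the representation used above for e^-βωk.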
Using, at this stage, the Legendre formula for the Γ-function in (<ref>), Γ(z)=2^z/(2√(π)) Γ(z/2)Γ((z+1)/2), and plugging (<ref>) and (<ref>), with x=(z-3)/2 and w=m, into (<ref>), we arrive at ℐ_1 = -m^3/16π^2 1/2π i∫_c-i∞^c+i∞dz Γ((z+1)/2)Γ((z-3)/2) ×ζ( 1+z )( mβ/2)^-z, which leads to ℐ_1 = -(π^2/90)T^3+m^2T/24-m^3/(12π)+m^4/(32π^2T) ×( ln( 4π T /m )-γ_E+3/4)+⋯, upon using Cauchy's theorem and summing over the residues of the Γ- and ζ-functions. Let us now consider the Ω dependent part of V_eff^(0)T, ℐ_2 from (<ref>). Using (<ref>), it is first given by ℐ_2=-2∑_ℓ=1^+∞∑_k=1^+∞∫dp_zp_⊥dp_⊥/(2π)^2(e^-βω k/k)cosh(kℓΩβ). Expanding cosh(kℓΩβ) in (<ref>), ℐ_2 is given by ℐ_2=-2∑_ℓ=1^+∞∑_j=1^+∞1/(2j)! (βℓΩ)^2j𝒬_j, with 𝒬_j≡∑_k=1^+∞∫dp_zp_⊥dp_⊥/(2π)^2e^-βω kk^2j-1. Following the same steps leading to ℐ_1, we arrive first at 𝒬_j = m^3/16π^2 1/2π i∫_c-i∞^c+i∞dz Γ((z+1)/2) ×Γ((z-3)/2)ζ(1+z-2j)(mβ/2)^-z, and then, after performing the integration over z by using Cauchy's theorem, we obtain 𝒬_1=m^4/(16π^2T)ζ^'(-2)+m^2T/(8π^2)-mT^2/(4π)+T^3/6+⋯, and 𝒬_2=m^4/(16π^2T)ζ^'(-4)-T^3/(2π^2)+⋯. Substituting these results into ℐ_2, we arrive at ℐ_2=-∑_ℓ=1^+∞((3m^2-(ℓΩ)^2)/(24π^2T)-m/(4π)+T/6)(ℓΩ)^2+⋯. Adding this expression to ℐ_1 from (<ref>), according to (<ref>), the HTE of V_eff^(0)T is given by (<ref>). § HIGH TEMPERATURE EXPANSION OF Π_1 As described in Sec. <ref>, the one-loop self-energy function of the CKG field is given by (<ref>), with 𝒥_i, i=1,2 from (<ref>). In this appendix, we use the method introduced in App. <ref> and derive the HTE of 𝒥_i, i=1,2. Let us first consider 𝒥_1 and replace the Bose-Einstein distribution function n_b(ω) with n_b(ω)=Td/dαln(1-e^-β(ω+α))|_α=0. We thus arrive at 𝒥_1=Td/dα∫dp_zp_⊥dp_⊥/( 2π)^21/ωln( 1-e^-β( ω +α)) |_α=0. Using (<ref>) to expand the logarithm in (<ref>), substituting the Mellin transformation of the exponential function (<ref>) into the resulting expression, and using (ω^2)^-(z+1)/2=1/Γ((z+1)/2)∫_0^∞dt t^(z+1)/2-1e^-ω^2t, with ω^2=p_z^2+p_⊥^2+m^2, we arrive at 𝒥_1= -T/2π id/dα∫_c-i∞^c+i∞dz Li_z+1(e^-αβ)Γ(z)/Γ((z+1)/2)β^-z ×∫_0^∞ dt t^(z+1)/2-1e^-m^2t∫dp_zp_⊥dp_⊥/(2π)^2e^-(p_z^2+p_⊥^2)t|_α=0, where the polylogarithm Li_z+1(e^-αβ) arises from ∑_k=1^+∞e^-β kαk^-(1+z)=Li_z+1(e^-αβ). Performing the integration over p_z and p_⊥ by using (<ref>), plugging the resulting expression into (<ref>), and performing the integration over t by using (<ref>), we obtain 𝒥_1 = m^2/16π^2 1/2π i∫_c-i∞^c+i∞dzΓ(z/2)Γ((z-2)/2)Li_z(1) ×(mβ/2)^-z. To arrive at (<ref>), we also used (<ref>) and d/dα Li_z+1(e^-αβ)|_α=0=-β Li_z(1). Using Cauchy's theorem and summing over the residues of the Γ- and polylogarithm functions, 𝒥_1 is given by 𝒥_1 = T^2/12-mT/(4π)+m^2/(8π^2)(ln(4π T/m)-γ_E+1/2)+⋯. Let us now consider 𝒥_2 from (<ref>). To evaluate it, we use n_b(ω±ℓΩ)=± T∂/∂(ℓΩ)ln(1-e^-β(ω±ℓΩ)), and Taylor expand the logarithms according to (<ref>). We arrive at 𝒥_2 = 2T∑_ℓ=1^+∞∑_k=1^+∞∂/∂(ℓΩ)∫dp_zp_⊥dp_⊥/(2π)^21/ω ×(e^-βω k/ksinh(kℓΩβ)). Expanding sinh(kℓΩβ) in the orders of ℓΩ, we arrive first at 𝒥_2=2∑_ℓ=1^+∞∑_j=0^+∞(ℓΩβ)^2j/(2j)!ℱ_j, with ℱ_j≡∑_k=1^+∞∫dp_zp_⊥dp_⊥/(2π)^2e^-βω kk^2j/ω. Following, at this stage, the same steps leading to ℐ_1 from App. <ref>, we arrive first at ℱ_j = m^2/16π^2 1/2π i∫_c-i∞^c+i∞dz Γ((z-2)/2)Γ(z/2)ζ(z-2j) ×(mβ/2)^-z, and then, after performing the integration over z by using Cauchy's theorem, we obtain 𝒥_2 = ∑_ℓ=1^+∞[ T^2/6-(2m^2-(ℓΩ)^2)T/(4π m)-(ℓΩ)^2/(4π^2) +m^2/(8π^2)(ln(4π T/m)-γ_E+1/2) ]. Adding this expression to 𝒥_1 from (<ref>), we arrive, according to (<ref>), at Π_1^mat from (<ref>).
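As an illustrative cross-check of the expansions derived in these appendices, the following sketch (assuming numpy and scipy are available; the test point m=1, T=2.5 is an arbitrary choice inside the HTE regime T/m≥0.4) compares the HTE of ℐ_1 above with a direct numerical evaluation of its defining integral, using ∫dp_z p_⊥dp_⊥/(2π)^2 = (1/2π^2)∫_0^∞ dp p^2 for rotationally invariant integrands:

import numpy as np
from scipy.integrate import quad

m, T = 1.0, 2.5                # arbitrary test point with T/m >= 0.4
beta = 1.0 / T

def integrand(p):
    # integrand of I_1 after reducing to a radial momentum integral
    w = np.sqrt(p*p + m*m)
    return p*p * np.log1p(-np.exp(-beta*w)) / (2*np.pi**2)

I1_num, _ = quad(integrand, 0.0, np.inf)

# the high temperature expansion of I_1 quoted above
I1_hte = (-(np.pi**2/90)*T**3 + m**2*T/24 - m**3/(12*np.pi)
          + m**4/(32*np.pi**2*T)*(np.log(4*np.pi*T/m) - np.euler_gamma + 0.75))

print(f"numerical: {I1_num:.6f}   HTE: {I1_hte:.6f}")

The two values should agree closely at this temperature, with the discrepancy shrinking as T/m grows, as expected of an asymptotic expansion.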
§ REFERENCES
rajagopal2018 W. Busza, K. Rajagopal and W. van der Schee, Heavy-ion collisions: The big picture, and the big questions, Ann. Rev. Nucl. Part. Sci. 68, 339 (2018), arXiv:1802.04801 [hep-ph].
pisarski2022 A. Lovato, T. Dore, R. D. Pisarski, B. Schenke, K. Chatziioannou, J. S. Read, P. Landry, P. Danielewicz, D. Lee and S. Pratt, et al., Long Range Plan: Dense matter theory for heavy-ion collisions and neutron stars, arXiv:2211.02224 [nucl-th].
aarts2023 G. Aarts, J. Aichelin, C. Allton, A. Athenodorou, D. Bachtis, C. Bonanno, N. Brambilla, E. Bratkovskaya, M. Bruno and M. Caselle, et al., Phase transitions in particle physics - Results and perspectives from lattice Quantum Chromo-Dynamics, arXiv:2301.04382 [hep-lat].
hot-QCD M. Arslandok, S. A. Bass, A. A. Baty, I. Bautista, C. Beattie, F. Becattini, R. Bellwied, Y. Berdnikov, A. Berdnikov and J. Bielcik, et al., Hot QCD white paper, arXiv:2303.17254 [nucl-ex].
present2023 P. Achenbach, D. Adhikari, A. Afanasev, F. Afzal, C. A. Aidala, A. Al-bataineh, D. K. Almaalol, M. Amaryan, D. Androic and W. R. Armstrong, et al., The present and future of QCD, arXiv:2303.02579 [hep-ph].
becattini2021 F. Becattini, J. Liao and M. Lisa, Strongly interacting matter under rotation: An introduction, Lect. Notes Phys. 987, 1 (2021), arXiv:2102.00933 [nucl-th].
machine2023 K. Zhou, L. Wang, L. G. Pang and S. Shi, Exploring QCD matter in extreme conditions with machine learning, arXiv:2303.15136 [hep-ph].
davoudi2024 C. W. Bauer, Z. Davoudi, N. Klco and M. J. Savage, Quantum simulation of fundamental particles and forces, Nature Rev. Phys. 5, 420 (2023), arXiv:2404.06298 [hep-ph].
vilenkin1980 A. Vilenkin, Quantum field theory at finite temperature in a rotating system, Phys. Rev. D 21, 2260 (1980).
rotation A. Yamamoto and Y. Hirono, Lattice QCD in rotating frames, Phys. Rev. Lett. 111, 081601 (2013), arXiv:1303.6292 [hep-lat].
rot2 V. E. Ambruş and E. Winstanley, Rotating quantum states, Phys. Lett. B 734, 296 (2014), arXiv:1401.6388 [hep-th].
rot3 K. Mameda and A. Yamamoto, Magnetism and rotation in relativistic field theory, PTEP 2016, 093B05 (2016), arXiv:1504.05826 [hep-th].
rot4 V. E. Ambruş and E. Winstanley, Rotating fermions inside a cylindrical boundary, Phys. Rev. D 93, 104014 (2016), arXiv:1512.05239 [hep-th].
rot5 M. N. Chernodub and S. Gongyo, Interacting fermions in rotation: chiral symmetry restoration, moment of inertia and thermodynamics, JHEP 01, 136 (2017), arXiv:1611.02598 [hep-th].
rot6 M. N. Chernodub and S. Gongyo, Effects of rotation and boundaries on chiral symmetry breaking of relativistic fermions, Phys. Rev. D 95, 096006 (2017), arXiv:1702.08266 [hep-th].
rot7 M. N. Chernodub and S. Gongyo, Edge states and thermodynamics of rotating relativistic fermions under magnetic field, Phys. Rev. D 96, 096014 (2017), arXiv:1706.08448 [hep-th].
rot8 V. E. Ambruş and E. Winstanley, Exact solutions in quantum field theory under rotation, Lect. Notes Phys. 987, 95 (2021), arXiv:1908.10244 [hep-th].
rot9 V. V. Braguta, A. Y. Kotov, D. D. Kuznedelev and A. A. Roenko, Influence of relativistic rotation on the confinement-deconfinement transition in gluodynamics, Phys. Rev. D 103, 094515 (2021), arXiv:2102.05084 [hep-lat].
rot10 V. V. Braguta, M. N. Chernodub, I. E. Kudrov, A. A. Roenko and D. A. Sychev, Influence of Relativistic Rotation on QCD Properties, Phys. Atom. Nucl. 86, 1249 (2023).
rot11 F. Sun, J. Shao, R. Wen, K. Xu and M. Huang, Chiral phase transition and spin alignment of vector meson in the Polarized-Polyakov-loop Nambu-Jona-Lasinio model under rotation, arXiv:2402.16595 [hep-ph].
fukushima2015 H. L. Chen, K. Fukushima, X. G. Huang and K. Mameda, Analogy between rotation and density for Dirac fermions in a magnetic field, Phys. Rev. D 93, 104052 (2016), arXiv:1512.08974 [hep-ph].
fukushima2018 K. Fukushima, Extreme matter in electromagnetic fields and rotation, Prog. Part. Nucl. Phys. 107, 167 (2019), arXiv:1812.08886 [hep-ph].
sadooghi2021 N. Sadooghi, S. M. A. Tabatabaee Mehr and F. Taghinavaz, Inverse magnetorotational catalysis and the phase diagram of a rotating hot and magnetized quark matter, Phys. Rev. D 104, 116022 (2021), arXiv:2108.12760 [hep-ph].
zamora2021 A. Ayala, L. A. Hernández, K. Raya and R. Zamora, Fermion propagator in a rotating environment, Phys. Rev. D 103, 076021 (2021) [erratum: Phys. Rev. D 104, 039901 (2021)], arXiv:2102.03476 [hep-ph].
fukushima2022 S. Chen, K. Fukushima and Y. Shimada, Perturbative confinement in thermal Yang-Mills theories induced by imaginary angular velocity, Phys. Rev. Lett. 129, 242002 (2022), arXiv:2207.12665 [hep-ph].
chernodub2023-1 V. V. Braguta, M. N. Chernodub, A. A. Roenko and D. A. Sychev, Negative moment of inertia and rotational instability of gluon plasma, Phys. Lett. B 852, 138604 (2024), arXiv:2303.03147 [hep-lat].
chernodub2023-2 V. V. Braguta, M. N. Chernodub, I. E. Kudrov, A. A. Roenko and D. A. Sychev, Negative Barnett effect, negative moment of inertia of (quark-)gluon plasma and thermal evaporation of chromomagnetic condensate, arXiv:2310.16036 [hep-ph].
ambrus2023 V. E. Ambruş and M. N. Chernodub, Rigidly rotating scalar fields: Between real divergence and imaginary fractalization, Phys. Rev. D 108, 085016 (2023), arXiv:2304.05998 [hep-th].
zamora2023 I. I. Gaspar, L. A. Hernández and R. Zamora, Chiral symmetry restoration in a rotating medium, Phys. Rev. D 108, 094020 (2023), arXiv:2305.00101 [hep-ph].
sadooghi2023 H. M. Ghalati and N. Sadooghi, Magnetic dual chiral density wave phase in rotating cold quark matter, Phys. Rev. D 108, 5 (2023), arXiv:2306.04472 [nucl-th].
cao2023 G. Cao, Effects of imaginary and real rotations on QCD matters, Phys. Rev. D 109, 014001 (2024), arXiv:2310.03310 [nucl-th].
fukushima2024 S. Chen, K. Fukushima and Y. Shimada, Inhomogeneous confinement and chiral symmetry breaking induced by imaginary angular velocity, arXiv:2404.00965 [hep-ph].
cve D. E. Kharzeev, J. Liao, S. A. Voloshin and G. Wang, Chiral magnetic and vortical effects in high-energy nuclear collisions - A status report, Prog. Part. Nucl. Phys. 88, 1 (2016), arXiv:1511.04050 [hep-ph].
endrodi-book G. Endrődi, Thermal Quantum Field Theory (Lecture Notes, 2018).
kapusta-book J. I. Kapusta and C. Gale, Finite Temperature Field Theory, Principles and Applications, 2nd ed. (Cambridge University Press, Cambridge, England, 2006).
lebellac-book M. Le Bellac, Thermal Field Theory (Cambridge University Press, Cambridge, England, 1996).
laine-book M. Laine and A. Vuorinen, Basics of Thermal Field Theory, A Tutorial on Perturbative Computations (Springer International Publishing AG, Switzerland, 2016).
gradstein I. S. Gradshtein and I. M. Ryzhik, Tables of Integrals, Series, and Products (Academic Press, Orlando, 1980).
toms-book D. J. Toms, The effective action at finite temperature and density with application to Bose-Einstein condensation, arXiv:cond-mat/9612003 [cond-mat.stat-mech].
weldon-haber H. E. Haber and H. A. Weldon, On the relativistic Bose-Einstein integrals, J. Math. Phys. 23, 1852 (1982).
wolframmath J. Sema, The Roman harmonic numbers revisited, J. N. T. 180, 544 (2017), arXiv:1702.03718 [math.NT].
mameda2023 K. Mameda and K. Takizawa, Deconfinement transition in the revolving bag model, Phys. Lett. B 847, 138317 (2023), arXiv:2308.07310 [hep-ph].
negativeinertiaconverter J. Lončar, B. Igrec and D. Babić, Negative-inertia converters: Devices manifesting negative mass and negative moment of inertia, Symmetry 14, 529 (2022).
gravwaveneginertia J. L. Wright, Optical springs to create macroscopic optical traps and negative inertia for gravitational wave detectors (PhD thesis, University of Glasgow, 2022).
Σ_n-correct Forcing Axioms

by Ben Goodman

A dissertation submitted to the Graduate Faculty in Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy, The City University of New York, 2024.

© 2024 Ben Goodman. All Rights Reserved.

APPROVAL

Σ_n-correct Forcing Axioms, by Ben Goodman. This manuscript has been read and accepted by the Graduate Faculty in Mathematics in satisfaction of the dissertation requirement for the degree of Doctor of Philosophy.

Approved: May 2024
Gunter Fuchs, Chair of Examining Committee
Christian Wolf, Executive Officer
Supervisory Committee: Gunter Fuchs (Advisor), Arthur Apter, Russell Miller

The City University of New York

Abstract

Σ_n-correct Forcing Axioms, by Ben Goodman. Advisor: Gunter Fuchs.

I introduce a new family of axioms extending ZFC set theory, the Σ_n-correct forcing axioms. These assert roughly that whenever a forcing name ȧ can be forced by a poset in some forcing class Γ to have some Σ_n property ϕ which is provably preserved by all further forcing in Γ, then ȧ reflects to some small name such that there is already in V a filter which interprets that small name so that ϕ holds. Σ_1-correct forcing axioms turn out to be equivalent to classical forcing axioms, while Σ_2-correct forcing axioms for Σ_2-definable forcing classes are consistent relative to a supercompact cardinal (and in fact hold in the standard model of a classical forcing axiom constructed as an extension of a model with a supercompact), Σ_3-correct forcing axioms are consistent relative to an extendible cardinal, and more generally Σ_n-correct forcing axioms are consistent relative to a hierarchy of large cardinals generalizing supercompactness and extendibility whose supremum is the first-order version of Vopenka's Principle. By analogy to classical forcing axioms, there is also a hierarchy of Σ_n-correct bounded forcing axioms which are consistent relative to appropriate large cardinals. At the two lowest levels of this hierarchy, outright equiconsistency results are easy to obtain. Beyond these consistency results, I also study when Σ_n-correct forcing axioms are preserved by forcing, how they relate to previously studied axioms and to each other, and some of their mathematical implications.

CHAPTER: ACKNOWLEDGEMENTS

Thank you to my advisor, Gunter Fuchs, for his guidance, kindness, and support over the past several years, and for suggesting such a fascinating research topic. Thank you also to Arthur Apter and Russell Miller for agreeing to serve on both my dissertation and oral exam committees, and to everyone in the CUNY logic community who answered a question I had or provided enlightening conversation during my time here. I would also like to thank Joel David Hamkins, who first kindled my interest in set theory over a decade ago with a talk he gave at my undergraduate institution and the dinner we attended afterwards, has continued to entertain, challenge, and inspire me during his visits to CUNY, and proved a disproportionate number of the prior results this dissertation draws upon; Corey Bacal Switzer, who welcomed me into the CUNY set theory community when I first arrived, talked to me in depth about my research after he graduated, and asked some fruitful questions; and all the friends and family who supported me during this time and endured my attempts to explain what a forcing axiom is in plain language.
CHAPTER: INTRODUCTION This dissertation studies the Σ_n-correct forcing axioms, potential axioms of set theory which can variously be viewed as: * generalizations of the "plus versions" of forcing axioms to properties beyond stationarity * strengthenings of the maximality principles of Stavi and Väänänen <cit.> and Hamkins <cit.> by intertwining them with reflection principles to accommodate parameters of arbitrary size * unifications of maximality principles with classical forcing axioms * the unbounded versions of the generalized bounded forcing axioms considered by David Asperó <cit.> Chapter 1 covers the background material necessary for what follows; almost none of the results in it are original to me. Section <ref> establishes notational definitions and reviews the definitions of the forcing classes which will be most commonly used in examples. Sections <ref> and <ref> cover classical forcing axioms and maximality principles respectively. Section <ref> briefly surveys the theory of Boolean-valued models and their quotients by ultrafilters. Section <ref> addresses the technical issue that there are multiple possible notions of the interpretation of a name by a filter, which can diverge when the filter is not generic or the name is not forced to be a subset of the ground model. Finally, Section <ref> lays out the most notable and relevant results in set-theoretic geology, the study of the forcing grounds of the universe (i.e. those inner models of which the universe is a generic extension). Chapter 2 introduces the large cardinals which will form the hypotheses of later consistency results. Joan Bagaria has done a great deal of work on related topics, though with somewhat different motivations. This chapter is a mix of previously known results (many of them due to or first explicitly written down by Bagaria), straightforward generalizations of old results, and a few genuinely new theorems. Section <ref> explores the basic properties of the Σ_n-correct cardinals, i.e. those cardinals θ such that V_θ agrees with V on the truth of Σ_n formulas with parameters in V_θ. Section <ref> introduces the supercompact cardinals for C^(n), which are the critical points of supercompactness embeddings which additionally preserve arbitrarily large initial segments of the Σ_n-correct cardinals. These are importantly distinct from Bagaria's C^(n)-supercompact cardinals, and in fact more closely related to his notion of C^(n)-extendibility. I prove some useful characterizations of supercompactness for C^(n), and show that as n increases these cardinals form a hierarchy starting from ordinary supercompactness and then extendibility, reaching upward toward Vopenka's Principle. Section <ref> covers the Σ_n-correctly H_λ-reflecting cardinals, a generalization of Miyamoto's H_λ-reflecting cardinals, which for any fixed n form a hierarchy reaching from the regular Σ_n-correct cardinals to the supercompact cardinals for C^(n-1). Chapter 3 finally begins to discuss the main topic of this dissertation. Section <ref> defines Σ_n-correct forcing axioms, motivating their statement with a characterization of classical forcing axioms due to Ronald Jensen. It goes on to prove that, for suitably iterable Σ_n-definable forcing classes, the corresponding Σ_n-correct forcing axiom holds in a forcing extension by a Baumgartner-style iteration of any model with a cardinal supercompact for C^(n-1). It also contains several other basic facts about the axioms.
In Section <ref>, I explore several alternative statements of Σ_n-correct forcing axioms, motivated by various equivalents or enhancements of classical forcing axioms found in the literature; these alternative formulations are more convenient than the Jensen-style version of the axioms for certain proofs. Section <ref> addresses the natural question of whether increasing n necessarily results in a strictly stronger Σ_n-correct forcing axiom for sufficiently nice forcing classes; this turns out to be surprisingly hard to answer in general. Section <ref> briefly considers forcing axioms for classes which can collapse ω_1; the classical forcing axioms for such classes are of course trivially true or trivially false in most cases, but the Σ_n-correct versions are potentially interesting. Section <ref> resolves a technical ambiguity in the statement of Σ_n-correct forcing axioms, specifically whether they are single sentences quantifying over all Σ_n formulas encoded in the model or schemes with a separate instance for each Σ_n formula in the metatheory. As it turns out, both interpretations are viable, but the consistency proof in the former case requires more care and slightly stronger hypotheses. Chapter 4 explores the bounded version of the axioms. Asperó introduced equivalent principles, but I take a somewhat different approach, defining a notion of a filter being weakly generic over a model in order to facilitate a Jensen-style formulation of Σ_n-correct bounded forcing axioms. Section <ref> proves that these axioms are consistent relative to Σ_n-correct H_λ-reflecting cardinals for suitable λ≥κ. Conversely, Section <ref> obtains the existence of large cardinals in L from Σ_n-correct bounded forcing axioms, establishing equiconsistency results for the two lowest levels of the boundedness hierarchy. Chapter <ref> studies when Σ_n-correct forcing axioms are preserved by forcing, drawing on Sean Cox's unified framework for preservation results in terms of extensions of generic elementary embeddings <cit.>. Chapter 6 explores the relationships of Σ_n-correct forcing axioms to classical forcing axioms, maximality principles, and each other, with Section <ref> addressing when different principles turn out to be equivalent and Section <ref> containing non-implication and inconsistency proofs. Both sections conclude by listing some related problems which remain open. Chapter <ref> introduces residual reflection principles, weakenings of both Σ_n-correct (bounded) forcing axioms and Σ_n-correct H_λ-reflection which state that if a formula whose truth is preserved by some forcing class holds of some (not too large) parameter, then that parameter reflects to a small object of which the formula also holds. The main result of the chapter is that for forcing classes with a Σ_n definition which is itself preserved by all posets in the class, Σ_n-correct forcing axioms (bounded or unbounded) may be factored as the conjunction of the Σ_n-maximality principle for that class and an appropriate residual reflection principle. Unfortunately, that hypothesis only holds for a small number of forcing classes. Finally, Chapter 8 explores some of the consequences which follow from Σ_n-correct forcing axioms. Section <ref> focuses on the cardinality of the continuum. Whereas classical forcing axioms either do not settle the size of the continuum or imply that it is ℵ_2, Σ_2-correct forcing axioms for certain classes can also imply CH, or that the continuum is indescribably large. 
Section <ref> looks at combinatorial applications, specifically the tree property, ◊, Kurepa's hypothesis, and stationary reflection. Section <ref> touches on some implications with a large cardinal or generic large cardinal character. The appendices contain information on basic facts which will be familiar to most but not all readers, and so are included for completeness. Appendix A analyzes the formula complexity of a number of basic set-theoretic properties, streamlining certain proofs in the main body which rely on such analysis. Appendix B compiles a number of widely but not universally known lemmas which will be used repeatedly. CHAPTER: PRELIMINARIES § NOTATION AND COMMON FORCING CLASSES Forcing posets are ordered so that p≤ q means p extends q. For any set X such that ∈↾ X is extensional, π_X is the Mostowski collapsing isomorphism of (X, ∈↾ X). If κ≤λ are regular uncountable cardinals, S⊆ [H_λ]^<κ is weakly stationary if for every function f:[H_λ]^<ω→ H_λ there is some Z∈ S such that f"[Z]^<ω⊆ Z. S is stationary if for any such f we can find a Z closed under it with the additional property that Z∩ H_κ is transitive. It can be shown that this is equivalent to Jech's original definition of stationarity (that S meets all closed unbounded subsets of [H_λ]^<κ). □ and ◊ are used to represent both modal operators and Jensen's combinatorial principles; which is intended should be clear from context, and fortunately neither is used very often. ZFC^- denotes the theory consisting of the axioms of Extensionality, Foundation, Pairing, Union, Infinity, and Well-Ordering (rather than some other form of Choice), together with the schemes of Separation and Collection (rather than Replacement), but not the Power Set axiom. See Gitman, Hamkins, and Johnstone <cit.> for more details on why the specific axiomatization matters. For further definitions of standard set-theoretic concepts, consult Jech's textbook <cit.> or a similar reference. I use first-person plural pronouns in situations which could by some stretch of the imagination include the reader and first-person singular pronouns in situations which could not. A poset ℙ satisfies the κ-cc iff every antichain (i.e. set of pairwise incompatible conditions) of ℙ has cardinality less than κ. We write ccc in place of ω_1-cc. A poset ℙ is <κ-closed iff for every θ<κ and every order-reversing function f:θ→ℙ, there is a p∈ℙ such that p≤ f(α) for all α<θ. We sometimes write countably closed in place of <ω_1-closed. A poset ℙ is proper iff for all sufficiently large cardinals θ there is a club of countable elementary substructures of H_θ such that each M in the club contains ℙ and for all p∈ℙ∩ M, there is a q≤ p such that for all names for an ordinal α̇∈ M, q⊩∃β∈M̌ α̇=β. Equivalently, for all infinite cardinals λ and all stationary S⊆ [λ]^ω, ⊩_ℙŠ is stationary. A poset ℙ is semiproper iff for all sufficiently large cardinals θ there is a club of countable elementary substructures of H_θ such that each M in the club contains ℙ and for all p∈ℙ∩ M, there is a q≤ p such that for all names for a countable ordinal α̇∈ M, q⊩∃β∈M̌ α̇=β. A poset ℙ is stationary set preserving iff for every stationary S⊆ω_1, ⊩_ℙŠ is stationary. To define subcomplete forcing, we need the following preliminary notion: A transitive structure M is full iff ω∈ M and there is an ordinal γ such that L_γ(M)⊨ ZFC^- and for all x∈ M and f:x→ M in L_γ(M), f"x∈ M.
A poset ℙ is subcomplete iff for all sufficiently large θ, if ℙ∈ H_θ⊆ N=L_τ[A]⊨ ZFC^- for some τ>θ and A⊂τ, s∈ N, N̅ is a countable full model with an elementary embedding σ:N̅→ N, θ̅,ℙ̅, s̅∈N̅ are such that σ(θ̅)=θ, σ(ℙ̅)=ℙ, and σ(s̅)=s, and G̅⊆ℙ̅ is an N̅-generic filter, then there is a p∈ℙ such that whenever G⊆ℙ is V-generic and contains p, there is an elementary embedding σ':N̅→ N in V[G] such that: * σ'(θ̅)=θ * σ'(ℙ̅)=ℙ * σ'(s̅)=s * σ' "G̅⊂ G This is in fact the class that Fuchs and Switzer <cit.> introduced as ∞-subcomplete forcing. Jensen's original definition of subcompleteness includes an extra condition which is seemingly necessary to prove that subcomplete forcing is closed under revised countable support iterations, but makes it unclear whether restrictions of subcomplete posets or even posets forcing-equivalent to subcomplete posets are necessarily subcomplete. Fuchs and Switzer, building on the work of Miyamoto, showed that there is a notion of iteration under which the class defined above is closed, that it is further closed under forcing equivalence, and that it preserves most of the same things as Jensen's class. In light of these desirable qualities, I have adopted it as my official definition. § FORCING AXIOMS The study of forcing axioms began in 1970, when Martin and Solovay, seeking a statement that would settle many of the independent questions resolved by the continuum hypothesis but would be consistent with ¬ CH, introduced Martin's Axiom <cit.>. Martin's Axiom MA is the statement that if ℙ is a forcing poset which satisfies the countable chain condition and 𝒟 is a collection of dense subsets of ℙ of size less than the continuum, then there is a 𝒟-generic filter F⊆ℙ (i.e. for all D∈𝒟, D∩ F is nonempty). Martin and Solovay noted that allowing ℙ to be arbitrary leads to inconsistency when ℙ can collapse ω_1, and that the ccc "is not the weakest restriction on ℙ which will prevent cardinal collapse, but it has the virtue of being strong enough to permit the proof of" the consistency of MA with the continuum being any regular uncountable cardinal. In the following decades, set theorists studied forcing axioms for a much broader range of forcing classes: If κ>ω_1 is a cardinal and Γ is a forcing class, the forcing axiom FA_<κ(Γ) is the statement that for any ℙ∈Γ and any collection 𝒟 of dense subsets of ℙ with |𝒟|<κ, there is a 𝒟-generic filter F⊆ℙ. If Γ is the class of proper posets, subcomplete posets, or posets preserving the stationarity of subsets of ω_1, we write FA_<ω_2(Γ) as PFA, SCFA, or MM (Martin's Maximum) respectively. It is more common to take the subscript to be the maximum allowable size of 𝒟 rather than a strict upper bound. I have chosen to depart from this convention in order to allow general consistency proofs to be stated more cleanly and to handle the case where κ is a limit cardinal: while a classical forcing axiom up to but not including a limit cardinal is simply the conjunction of the axioms at each cardinal below the limit, this will not hold for the principles which we study later. As a reminder of this unusual notation, I write the < explicitly in the subscript. MA+¬ CH is equiconsistent with ZFC, essentially because all the dense sets involved can be taken to be small and the ccc is inherited by arbitrary subposets, so forcing with the ccc posets smaller than the desired size of the continuum is sufficient to add the desired filters.
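As a concrete toy illustration of the filters these axioms postulate, the following Python sketch (all names and parameters here are my own illustrative choices) builds, in the style of the Rasiowa-Sikorski argument, a filter on Cohen forcing 2^<ω meeting finitely many dense sets by extending a single condition through each dense set in turn; MA itself concerns families of dense sets of size up to the continuum, where no such naive construction is available:

from itertools import product

# Cohen forcing 2^{<omega}: conditions are finite 0/1 strings;
# p <= q in the forcing order (p extends q) iff q is an initial segment of p.

def extend_into(p, dense, search_depth=12):
    # return an extension of p lying in `dense` (given as a predicate),
    # found by brute-force search over bounded extensions
    for n in range(len(p), len(p) + search_depth):
        for tail in product("01", repeat=n - len(p)):
            q = p + "".join(tail)
            if dense(q):
                return q
    raise ValueError("no extension found within the search depth")

# dense sets D_n = {p : len(p) > n and p ends in '1'}
dense_sets = [(lambda n: (lambda p: len(p) > n and p.endswith("1")))(n)
              for n in range(10)]

p, chain = "", [""]
for D in dense_sets:           # extend step by step through each dense set
    p = extend_into(p, D)
    chain.append(p)

# the filter generated by the (linearly ordered) chain: all initial segments
filt = {q[:i] for q in chain for i in range(len(q) + 1)}
assert all(any(D(q) for q in filt) for D in dense_sets)
print("filter meets all dense sets; longest condition:", max(filt, key=len))

The same step-by-step construction works for any countable family of dense sets on any poset; the content of a forcing axiom is precisely that this can be pushed to uncountably many dense sets when the poset is suitably tame.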
This smallness argument does not go through for most other forcing classes, so consistency proofs for other forcing axioms typically follow Baumgartner's proof of Con(PFA) <cit.> in starting from a supercompact cardinal κ and harnessing its reflection properties to show that all the desired filters can be added by iterating posets below κ. This process in fact leads to a model of a stronger principle: If ν≤ω_1 is a cardinal and Γ is a forcing class, FA^+ν(Γ) is the statement that for all ℙ∈Γ, all collections 𝒟 of ω_1 dense subsets of ℙ, and every sequence σ=⟨σ_α | α<ν⟩ of ℙ-names for stationary subsets of ω_1, there is a 𝒟-generic filter F⊆ℙ such that σ_α^F is a stationary subset of ω_1 in V. FA^+(Γ) means FA^+1(Γ). FA^++(Γ) is most often used to mean FA^+ω_1(Γ), but also sometimes FA^+2(Γ), so I will avoid this notation. Exactly what is meant by the interpretation of a name by a nongeneric filter is somewhat ambiguous; see Section <ref> for more details. However, it will turn out that all reasonable definitions will coincide in this case if we add at most ω_1 additional dense sets to 𝒟, so the ambiguity is not too important. In proving that the "plus versions" of forcing axioms hold in the standard models of classical forcing axioms, we make essential use of the fact that any forcing class for which FA_<ω_2 is consistent must preserve the stationarity of subsets of ω_1. The central goal of this work is to generalize the plus versions of forcing axioms to properties other than stationarity which the forcing class in question also preserves. Also of interest is a natural weakening of forcing axioms introduced by Goldstern and Shelah <cit.>: If Γ is a class of complete Boolean algebras and λ≥κ>ω_1 are cardinals, the bounded forcing axiom BFA_<κ^<λ(Γ) is the statement that for all 𝔹∈Γ and all sets 𝒜 of maximal antichains of 𝔹 such that |𝒜|<κ and for all A∈𝒜, |A|<λ, there is a (proper) filter F⊂𝔹 which meets every antichain in 𝒜. This definition uses Boolean algebras rather than posets in order to guarantee the existence of sufficiently small maximal antichains. I will frequently write "symmetric bounded forcing axiom" to refer to the case where λ=κ (the axiom originally studied by Goldstern and Shelah) and "asymmetric bounded forcing axiom" for the case where λ>κ (first proposed by Miyamoto <cit.>). § MAXIMALITY PRINCIPLES Maximality principles were introduced by Stavi and Väänänen <cit.> and further developed by Hamkins <cit.>. We will use the notation of the latter but a formulation more similar to the former, for which the following concept is needed: For a definable forcing class Γ, a formula ϕ is provably Γ-persistent iff ZFC⊢∀ x(ϕ(x)→∀ℚ∈Γ⊩_ℚϕ(x̌)) If n is a positive integer, Γ is a Σ_n-definable forcing class, and S is a class of parameters, the Σ_n maximality principle for Γ with parameters in S Σ_n MP_Γ(S) is the statement that for every provably Γ-persistent Σ_n formula ϕ and every a∈ S, if there is some ℙ∈Γ such that ⊩_ℙϕ(ǎ), then ϕ(a) already holds in V. Maximality principles are appealing axioms because they capture the intuition that the universe should be as wide as possible, at least with regard to Γ-extensions and Σ_n truth. They can easily be seen to imply symmetric bounded forcing axioms: (Folklore?) If Γ is a forcing class, κ is a regular uncountable cardinal, and n≥ 1, Σ_n MP_Γ(H_κ) implies BFA_<κ^<κ(Γ). Let 𝔹∈Γ be a Boolean algebra and A be a collection of fewer than κ maximal antichains of 𝔹, each of size less than κ.
For λ>2^|𝔹|, let X be an elementary substructure of H_λ of size less than κ containing 𝔹, A, all antichains in A, and all elements of all antichains in A. Set 𝔹̅:=π_X(𝔹) and A̅:=π_X(A). If G⊆𝔹 is V-generic, then for each antichain C∈ A, by genericity there is some p∈ C∩ G. It follows that π_X(p)∈π_X(C)∩π_X"G. Since every element of A̅ is of the form π_X(C) for some C∈ A, it is thus Γ-forceable that there is a filter on 𝔹̅ meeting every antichain in A̅. This property can easily be seen to be Σ_1, since it requires only an existential quantifier asserting the existence of the filter F and bounded quantifiers over F, A̅, and its elements. It is preserved by further forcing because all Σ_1 formulas remain true when moving to a larger structure in which the original model is transitive, and A̅,𝔹̅∈ H_κ because they are contained in a transitive structure of size less than κ. Thus by the maximality principle there is some F⊂𝔹̅ in V meeting every antichain in A̅. π_X^-1"F then generates a filter on 𝔹 meeting every antichain in A. George Leibman (related by Hamkins <cit.>, Theorem 5.5) proved the above result for the case where Γ is ccc forcing and κ is the continuum (in which case bounded Martin's Axiom is simply equivalent to full MA). Kaethe Minden (<cit.>, Lemma 4.1.8 and Proposition 4.1.9) did the same for subcomplete forcing with κ=ω_2. Bagaria's result that bounded forcing axioms are principles of generic Σ_1-absoluteness (<cit.>, Theorem 5) can be restated as: (Bagaria) If Γ is a forcing class and κ is a regular uncountable cardinal, Σ_1 MP_Γ(H_κ) is equivalent to BFA_<κ^<κ(Γ). Hamkins phrases maximality principles in terms of forceably necessary formulas (i.e. formulas which can be forced to be true and to remain true in all further forcing extensions), which is subtly different from our formulation in terms of forceable and provably persistent formulas, since a formula can be preserved in all forcing extensions of a particular model without that preservation being provable in ZFC. However, the following observations, essentially due to David Asperó in the last part of the proof of Theorem 2.6 in <cit.>, show that the two formulations are equivalent if, like Hamkins, one is only interested in the version of the maximality principle encompassing formulas of arbitrary complexity. Since we will primarily be considering the Σ_n-restricted forms, the provably persistent formulation is more convenient. For Γ a forcing class and ϕ a formula in the language of set theory, □_Γϕ is the assertion that ϕ holds in all Γ-extensions of the universe. If Γ is a Σ_n-definable forcing class and ϕ is a Σ_n formula, □_Γϕ is Π_n+1 and □_Γ¬ϕ is Π_n. If Γ is provably closed under two-step iterations, both are provably Γ-persistent. We can express □_Γϕ as "for all ℙ, ℙ∉Γ or ⊩_ℙϕ". The first clause of the disjunction is Π_n and the second Σ_n, so with the added universal quantifier we get a Π_n+1 formula. If we used ¬ϕ in place of ϕ, it would be a universal quantifier added to a disjunction of two Π_n formulas, which is Π_n. For provable persistence, work in ZFC and let ℙ∈Γ. Then for any ℙ-name ℚ̇ such that ⊩_ℙℚ̇∈Γ, ℙ*ℚ̇∈Γ, so □_Γϕ implies that ⊩_ℙ*ℚ̇ϕ. It follows that ⊩_ℙ∀ℚ∈Γ⊩_ℚϕ. We therefore have: ZFC⊢□_Γϕ→∀ℙ∈Γ⊩_ℙ□_Γϕ If Γ provably contains the trivial forcing and is closed under two-step iterations, then for any class of parameters S, Σ_n+2 MP_Γ(S) implies the Hamkins-style maximality principle for Σ_n formulas (and in fact Π_n+1 formulas) with parameters in S.
Hamkins notes (<cit.>, Observation 1.3) that if Γ is the class of all forcing then the parameters must be contained in H_ω_1, because any parameter can be forced to be hereditarily countable by adding a surjection from ω to its transitive closure, and this persists to all forcing extensions. Similarly (as Hamkins discusses after Corollary 5.4), if Γ can collapse arbitrary cardinals to ω_1, we cannot consistently allow parameters outside of H_ω_2, and if Γ can add arbitrarily many reals but can't collapse the cardinality of the continuum, the maximal parameter set is H_2^ℵ_0, since for any x it is Γ-forceably Γ-necessary (by adding sufficiently many reals) that the cardinality of the transitive closure of x is less than the continuum. However, as Proposition <ref> shows, forcing axioms can be regarded as maximality principles for the specific Σ_1 assertion that there exists a filter meeting a desired collection of dense sets. While symmetric bounded forcing axioms assert this only for collections of antichains small enough to be collapsed into the allowable parameter set of the corresponding maximality principle, more general forcing axioms assert it for antichains too large for this to work. (Of course one can always mimic the procedure in the proof of <ref> to obtain a small 𝔹̅ and A̅ from a large Boolean algebra 𝔹 and collection of maximal antichains A, but if some of the antichains in A are too large, the corresponding antichains in A̅ will necessarily omit some of their conditions; then a generic G⊂𝔹 which meets those antichains at those omitted conditions will project to a filter on 𝔹̅ which does not meet those antichains in A̅, so 𝔹 does not force that there is a filter meeting all antichains in A̅, so we cannot apply a maximality principle to obtain a forcing axiom.) Thus classical forcing axioms can be viewed as a highly specific maximality principle intertwined with a reflection principle. Letting 𝒟 be the set of all dense sets of a poset ℙ in the ground model, it is forceably necessary that there is a filter meeting all elements of 𝒟; though we cannot hope to find such a filter in the ground model, we can consistently shrink 𝒟 down to a set 𝒟̅ smaller than the continuum (for ccc forcing) or ω_2 (for proper, semiproper, or subcomplete forcing) in a highly controlled way and find a ground-model filter meeting all dense sets in 𝒟̅. By gaining the ability to realize in the ground model, with a reflected parameter of reasonable size, the truth of a forceably necessary formula whose original parameters are excessively large, unbounded and asymmetrically bounded forcing axioms gain considerable consistency strength over maximality principles, despite the restriction on allowable formulas. Our central goal may alternatively be stated as loosening this restriction on formulas, and in doing so unifying maximality principles with classical forcing axioms. § BOOLEAN-VALUED MODELS For some proofs, it will be convenient to use Boolean-valued models and their quotients, so we briefly review the basic theory here. For proofs of the lemmas below and more details, see the relevant section of Jech's Chapter 14 <cit.> or the first few sections of Hamkins and Seabold <cit.>.
Given a complete Boolean algebra 𝔹, a 𝔹-valued model in signature τ consists of a class M (called the class of names) and a map ϕ(a_1,…, a_n)↦⟦ϕ(a_1,…, a_n)⟧∈𝔹 from the first-order formulas in the language generated by τ, with assignments of their free variables to elements of M, to 𝔹, such that the expected Tarski-style relations between the Boolean values of different formulas hold. If 𝔹∈ N⊨ ZFC^- + “𝔹 is a complete Boolean algebra”, then N^𝔹 is the Boolean-valued model constructed within N whose class of names is inductively defined as the class of functions (in N) whose domain is a set of 𝔹-names and whose codomain is 𝔹, and whose Boolean valuation is recursively given by: ⟦τ∈σ⟧ = ⋁_ρ∈ dom(σ)(⟦τ=ρ⟧∧σ(ρ)) ⟦τ=σ⟧ = ⟦τ⊆σ⟧∧⟦σ⊆τ⟧ ⟦τ⊆σ⟧ = ⋀_ρ∈ dom(τ) (¬τ(ρ)∨⟦ρ∈σ⟧) A Boolean-valued model is full iff for all formulas ϕ in its language with n+1 free variables and names b_1,…, b_n, there is a name a such that ⟦∃ xϕ(x, b_1,…, b_n)⟧=⟦ϕ(a, b_1,…, b_n)⟧. If 𝔹∈ N⊨ ZFC^- + “𝔹 is a complete Boolean algebra”, then N^𝔹 is full. If 𝔹 is a complete Boolean algebra, M is a 𝔹-valued model in the language of set theory, and U⊂𝔹 is an ultrafilter, then for σ and τ names in M, σ=_Uτ means that ⟦σ=τ⟧∈ U and σ∈_Uτ means that ⟦σ∈τ⟧∈ U. M/U is the ordinary first-order model of set theory whose elements are =_U-equivalence classes (using Scott's trick if M is a proper class) and whose element relation is given by lifting ∈_U to equivalence classes. If 𝔹∈ N⊨ ZFC^- + “𝔹 is a complete Boolean algebra” and U⊂𝔹 is an ultrafilter, then * N^𝔹/U⊨ ZFC^- * N embeds into N^𝔹/U via the map x↦[x̌]_U, where dom(x̌)={y̌ | y∈ x} and x̌(y̌)=1_𝔹 for all y∈ x. * For all formulas ϕ in the language of set theory and σ∈ N^𝔹, N^𝔹/U⊨ϕ([σ]_U) iff ⟦ϕ(σ)⟧∈ U. Two things are important to note here. First, there is no need for U to be N-generic, and it could even be in N. For non-generic U, [Ġ]_U (where Ġ(b̌)=b for all b∈𝔹) will still play the role of a generic filter in N^𝔹/U, since this holds with Boolean value one. However, rather than being a forcing extension of N itself, N^𝔹/U will be a forcing extension of some elementary extension N̅ of N, consisting of equivalence classes of names which are forced to be equal to some check name. The elements of N̅-N arise from those names such that U misses the maximal antichain that decides exactly which element of N they are equal to. Second, there is no need for N to be wellfounded. Whereas the ordinary construction of a forcing extension N[G] as the interpretation σ^G of each σ∈ N relies on recursion on the ∈-relation of N, which runs into difficulties when N is illfounded, the quotient construction has no such limitations. (Of course, we still need to perform recursion on ∈^N to define Boolean values of formulas, but this can be done inside of N, where it is justified because N satisfies the foundation axiom, whereas interpreting names by a generic filter requires the recursion to be performed externally.) § INTERPRETATIONS OF NAMES There are at least four natural ways to interpret or evaluate names[When ℙ is an arbitrary poset rather than a complete Boolean algebra, a ℙ-name is defined, following Kunen (<cit.>), as merely a set of ordered pairs with the first coordinate a ℙ-name and the second an element of ℙ, not necessarily a function.] with filters, some of which coincide when the filter is generic but can diverge significantly in general.
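Before turning to these interpretations, here is a minimal toy sketch making the recursive Boolean valuation of the previous section concrete (the encoding of names as frozensets of (name, value) pairs and the choice 𝔹=𝒫({0,1}) are my own illustrative conventions):

# Boolean valuations over the four-element algebra B = P({0,1}):
# join = union, meet = intersection, complement = difference from TOP.
# A name is a frozenset of (name, value) pairs, mirroring N^B above.

TOP, BOT = frozenset({0, 1}), frozenset()

def join(vals):
    out = BOT
    for v in vals:
        out = out | v
    return out

def meet(a, b): return a & b
def neg(a): return TOP - a

def dom(sigma): return [t for t, _ in sigma]
def val(sigma, t): return join(v for (u, v) in sigma if u == t)

def bv_in(tau, sigma):      # [[ tau in sigma ]]
    return join(meet(bv_eq(tau, rho), val(sigma, rho)) for rho in dom(sigma))

def bv_subset(tau, sigma):  # [[ tau subseteq sigma ]]
    out = TOP
    for rho in dom(tau):
        out = meet(out, join([neg(val(tau, rho)), bv_in(rho, sigma)]))
    return out

def bv_eq(tau, sigma):      # [[ tau = sigma ]]
    return meet(bv_subset(tau, sigma), bv_subset(sigma, tau))

empty = frozenset()                           # the name for the empty set
one   = frozenset({(empty, TOP)})             # check-name for {emptyset}
mixed = frozenset({(empty, frozenset({0}))})  # {emptyset} below atom 0, emptyset below atom 1
print(bv_eq(mixed, one))    # frozenset({0})
print(bv_eq(mixed, empty))  # frozenset({1})

The name mixed is decided differently by the two atoms of 𝔹, exactly as the printed Boolean values report; an ultrafilter on 𝔹 (here necessarily principal, generated by an atom) collapses such a name to one of its two possible interpretations.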
First, our official definition will be: If ℙ is a forcing poset, ȧ is a ℙ-name, and F⊆ℙ is a filter, the standard interpretation of ȧ by F is given recursively by ȧ^F={τ^F|∃ p∈ F ⟨τ, p⟩∈ȧ} We also have the following concept, frequently used in the statement of FA^+: If ℙ is a forcing poset, ȧ is a ℙ-name, and F⊆ℙ is a filter, the quasi-interpretation of ȧ by F is given by ȧ^(F)={x∈ V|∃ p∈ F p⊩x̌∈ȧ} The quasi-interpretation terminology comes from Schlicht and Turner <cit.>. Quasi-interpretations are a natural way to attempt to evaluate names in the ground model, but they are badly behaved except on names for subsets of V, so it is natural to modify the definition to the following recursive construction: If ℙ is a forcing poset, ȧ is a ℙ-name, and F⊆ℙ is a filter, the recursive quasi-interpretation of ȧ by F is given by ȧ^((F))={τ^((F)) | τ∈ dom(ȧ) and ∃ p∈ F p⊩τ∈ȧ} Finally, we can interpret names within the quotients of Boolean algebras discussed in the previous section: If 𝔹 is a complete Boolean algebra, U⊂𝔹 is an ultrafilter, and ȧ is a 𝔹-name whose equivalence class is in the well-founded part of V^𝔹/U, its Boolean interpretation by U is the value of its equivalence class [ȧ]_U under the transitive collapse isomorphism on wfp(V^𝔹/U). Sometimes a name of interest ȧ is not in the well-founded part of V^𝔹/U, but there is some small structure N containing 𝔹 and ȧ such that the equivalence class of ȧ is in the well-founded part of N^𝔹/U; in such cases it is useful to consider the relativized Boolean interpretation [ȧ]_U^N. As an example to illustrate the distinctions between these interpretations, consider ȧ={⟨Ġ, 1⟩}, where Ġ:={⟨p̌,p⟩ | p∈ℙ} is the canonical name for the generic filter. For any filter F⊆ℙ, ȧ^F=ȧ^((F))={F}, while if ℙ is atomless then 1⊩Ġ≠x̌ for all x∈ V, so ȧ^(F)=∅. Finally, if 𝔹 is the Boolean completion of ℙ and U is an ultrafilter on 𝔹, then [ȧ]_U={[Ġ]_U}, where the singleton is in the sense of V^𝔹/U and may not correspond to an actual singleton if [Ġ]_U is in the ill-founded part of V^𝔹/U (in which case the Boolean interpretation of ȧ will be undefined). [Ġ]_U in turn consists of all the elements of U (or more properly the equivalence classes of their check names) as well as potentially the equivalence classes of names which are generically equal to some element of U, but U misses the dense set which determines exactly which of its elements they are equal to. To see how the standard interpretation and recursive quasi-interpretation can differ for non-generic F, let A={p_α | α<λ} be a maximal antichain of ℙ such that F∩ A=∅. Then if ȧ={⟨∅̌, p_α⟩ | α<λ}, ȧ^F=∅ but ȧ^((F))={∅}. We now examine how partial genericity is sufficient to ensure that different interpretations coincide. If N is a transitive model of ZFC^- and "𝔹 is a complete Boolean algebra", ȧ∈ N^𝔹, and U⊆𝔹 is an ultrafilter which meets all antichains of 𝔹 in N whose N-cardinality is at most |trcl({ȧ})|^N, then [ȧ]_U^N is contained in the well-founded part of N^𝔹/U and its value under the transitive collapse isomorphism is ȧ^((U)). If [ȧ]_U is not in the well-founded part of N^𝔹/U, then there is a sequence of names ⟨σ_n | n∈ω⟩ such that σ_0=ȧ, [σ_n+1]_U^N∈_U [σ_n]_U^N, and σ_n∈ N for all n. As far as possible, we choose each name in the sequence to be an actual element of the domain of the previous one, but by the well-foundedness of ∈, this can't continue forever, so there is some n such that σ_n∈ trcl({ȧ}) but σ_n+1 is not in the equivalence class of any name in dom(σ_n).
Therefore, because dom(σ_n)⊂ trcl(ȧ), {¬⟦σ_n+1∈σ_n⟧}∪{⟦σ_n+1=τ⟧ | τ∈ dom(σ_n)} is a predense set definable in N of N-cardinality at most |trcl({ȧ})|^N. Thus U must meet it, contradicting the choice of σ_n+1. Now we let π be the Mostowski collapse isomorphism on the well-founded part of N^𝔹/U and prove by induction that π([ȧ]_U^N)=ȧ^((U)). Assume that σ∈ trcl({ȧ}) and for all τ∈ dom(σ), π([τ]_U)=τ^((U)). Since σ^((U))={τ^((U)) | τ∈ dom(σ) and ⟦τ∈σ⟧∈ U}={π([τ]_U) | τ∈ dom(σ) and ⟦τ∈σ⟧∈ U}, certainly σ^((U))⊆π([σ]_U), and to show the reverse inclusion it is sufficient to show that if τ'∈_U σ for any 𝔹-name τ'∈ N, then there is a τ∈ dom(σ)∩ [τ']_U. However, since by definition ⟦τ'∈σ⟧=⋁_τ∈ dom(σ)(⟦τ'=τ⟧∧σ(τ)), by essentially the same argument as in the previous paragraph U must include one of the ⟦τ'=τ⟧. To prove that the standard and recursive quasi-interpretations are equivalent given enough genericity, the following elementary lemma is helpful: If 𝔹 is a Boolean algebra, p∈𝔹, and A⊂𝔹 is an antichain maximal below p, then for any q∈𝔹, A∧ q:={a∧ q | a∈ A, a∧ q>0} is an antichain maximal below p∧ q (where we consider the empty set to be the unique antichain maximal below 0). A∧ q is an antichain because for all a, b∈ A such that a∧ q, b∧ q∈ A∧ q, any common lower bound of a∧ q and b∧ q is a common lower bound of a and b, so because A is an antichain it must be 0. A∧ q is maximal below p∧ q because for any nonzero r≤ p∧ q, since r≤ p there is some a∈ A such that r∧ a>0 by the maximality of A; then since r∧ a≤ r≤ p∧ q≤ q, r∧ a∧ q=r∧ a>0, so r is compatible with a∧ q∈ A∧ q. If 𝔹 is a complete Boolean algebra, κ is a regular cardinal, and ȧ∈ H_κ is a 𝔹-name, then there is a set of fewer than κ predense subsets of 𝔹, each of size less than κ, such that if F⊆𝔹 is a filter meeting all of them, ȧ^F=ȧ^((F)). Furthermore, this set of predense sets is an element of any transitive ZFC^- model containing ȧ and 𝔹 and is definable over any such model using ȧ and 𝔹 as parameters. We proceed by ∈-induction on ȧ. Assume that σ∈ trcl({ȧ}) is such that the lemma holds for all τ∈ dom(σ). Then for each (of the fewer than κ) τ∈ dom(σ), we have a cardinal λ_τ<κ and a collection of predense sets {A_α^τ | α<λ_τ}, each smaller than κ, such that if F is a filter meeting all of the A_α^τ, then τ^F=τ^((F)). Using the notation of Lemma <ref>, for each such τ and each α<λ_τ, we can form the predense set (A_α^τ∧σ(τ))∪{¬σ(τ)}. If F is any filter meeting all of these predense sets, then every element of σ^F must have a name τ∈ dom(σ) such that σ(τ)∈ F, so F must meet all the A_α^τ for this τ and thus τ^F=τ^((F)). Since ⟦τ∈σ⟧≥σ(τ), τ^((F))∈σ^((F)), so σ^F⊆σ^((F)). For the reverse inclusion, we require that for each τ∈ dom(σ) F also meet the predense set B_τ:={⟦τ=ρ⟧∧σ(ρ) | ρ∈ dom(σ)}∪{¬⟦τ∈σ⟧}. Then whenever ⟦τ∈σ⟧∈ F, putting τ^((F)) in σ^((F)), there is some ρ∈ dom(σ) such that ρ^F∈σ^F and ⟦τ=ρ⟧∈ F. Assuming F meets all the A_α^τ and A_α^ρ, τ^F=τ^((F)) and ρ^F=ρ^((F)). Each x∈τ^F has a name ẋ∈ dom(τ) such that τ(ẋ)∈ F, which by the definition of ⟦τ=ρ⟧ implies that ⟦ẋ∈ρ⟧∈ F, so τ^F⊆ρ^((F)). The same argument shows that ρ^F⊆τ^((F)), so both interpretations of each name are equal to both interpretations of the other. Therefore τ^((F))∈σ^F, so σ^((F))⊆σ^F and the two interpretations of σ by F are the same, provided F meets A^τ_α and B_τ for all τ∈ dom(σ) and α<λ_τ.
Since the definition of B_τ only involves Boolean values of atomic formulas, which are Δ_1-definable and therefore absolute to arbitrary transitive structures, each predense set we require F to meet, and the set of all of them, can be formed from subsets of trcl(ȧ) by applications of the collection axioms. This resulting set is then definable as the set of everything constructed by the end of the induction described above. Going forward, we will usually make assumptions which allow us to satisfy the hypotheses of the preceding lemmas at no additional cost, so we will make no further use of the quasi-interpretation and use the other three interpretations interchangeably as is convenient. § SET-THEORETIC GEOLOGY Set-theoretic geology, launched by Fuchs, Hamkins, and Reitz in <cit.>, is the study of the grounds of the universe, that is the inner models W such that V=W[G] for some W-generic filter G∈ V on a (set) forcing notion in W. Since it will prove useful in establishing certain separation results, we review the basics here. First, some preliminary facts and definitions: (Hamkins <cit.>) For M⊆ N transitive classes satisfying some sufficient fragment of ZFC and δ a cardinal of both, M has the δ-cover property in N (or the extension M⊆ N has the δ-cover property) iff whenever A∈ N, A⊂ M, and |A|^N<δ, there is a B∈ M such that A⊆ B and |B|^M<δ. (Hamkins <cit.>) For M, N, and δ as above, M has the δ-approximation property in N iff whenever A∈ N, A⊂ M, and for all b∈ M with |b|^M<δ we have A∩ b∈ M, then A∈ M. If ℙ is a forcing poset in W, |ℙ|^W<δ, and G⊆ℙ is W-generic, then W has the δ-cover and δ-approximation properties in W[G]. The same holds for some posets larger than δ; see <cit.> for more details. (Reitz <cit.>) ZFC_δ is the theory in the language of set theory with an added constant symbol for δ consisting of the axioms of Zermelo set theory with foundation and choice, the assertion that δ is a regular cardinal, the assertion that every set is coded by a set of ordinals, and the restricted replacement scheme asserting that the image of δ under any first-order-definable mapping is a set. If a regular cardinal δ has already been chosen, to say that M⊨ ZFC_δ for some transitive M with δ∈ M is to say that this holds with the constant symbol interpreted as that particular δ. (Reitz) If δ is a regular cardinal, ℶ_θ=θ and cf(θ)>δ, then V_θ⊨ ZFC_δ. (Hamkins) (Uniqueness Lemma) Suppose M, M', and N are transitive classes with M⊆ N and M'⊆ N, δ is a regular cardinal in N, all three satisfy ZFC_δ, M and M' have the δ-cover and δ-approximation properties in N, (^<δ2)^M=(^<δ2)^M', and (δ^+)^M=(δ^+)^M'=(δ^+)^N. Then M=M'. The preceding ideas enabled Hamkins to prove the foundational theorem of set-theoretic geology, that the grounds are uniformly first-order definable (though Laver and Woodin proved similar results independently). (<cit.>, Theorem 6) (Ground Definability Theorem) There is a Σ_2 formula ρ with two free variables such that for all set forcing grounds W, there is a parameter r∈ W such that for all x, x∈ W⟷ρ(x, r). Specifically, if ℙ is a poset, G⊆ℙ is a W-generic filter, and V=W[G], we can set δ:=|ℙ|^+ and r:=𝒫(ℙ)^W; then ρ(x, r) asserts that there exists a θ and an M⊆ V_θ such that: * ℶ_θ=θ * cf(θ)>δ * M⊨ ZFC_δ * M⊆ V_θ has the δ-cover and δ-approximation properties * r=𝒫(ℙ)^M * (δ^+)^M=δ^+ * x∈ M W is also definable by a Π_2 formula ρ'(x,r) asserting that for all θ>rank(x) and M⊆ V_θ, if the first six conditions hold, then x∈ M.
The idea of the proof is that for any suitable θ, W∩ V_θ will satisfy conditions 3-6 for M, and by the Uniqueness Lemma it is the only such M, so we will have x∈ W iff we can find a θ and an M⊆ V_θ satisfying conditions 1-6 with x∈ M iff all θ and M⊆ V_θ satisfying 1-6 have x∈ M. For the formula complexity, note that the definition of V_θ is Π_1, while conditions 1-6 can be expressed with quantifiers over M and V_θ with at most one unbounded universal quantifier (which becomes an existential quantifier in the hypothesis of the conditional in ρ'). Note also that δ and ℙ are definable within V_θ from r, so there is no need to include them as additional parameters. If W_r={xρ(x, r)} is a ground of V, a∈ W_r, and ϕ is a Σ_n or Π_n formula with n≥ 2, then W_rϕ(a) is expressible in V as a Σ_n or Π_n formula with a and r as parameters. This proof is due to Farmer Schlutzenberg in an answer to a question I asked on MathOverflow <cit.>. We proceed by induction on the complexity of ϕ. If ϕ is Σ_2 or Π_2, we use the facts that W_r satisfies a Σ_2 formula iff it holds in some V^W_r_θ where θ is a beth fixed point, and it satisfies a Π_2 formula iff it holds in all such V^W_r_θ containing the parameters. (For readers unfamiliar with this fact, it will follow as a special case of Proposition <ref> and Lemma <ref>.) Hence in the Σ_2 case we can express W_rϕ(a) as "there exist θ and M⊆ V_θ such that conditions 1-7 from Theorem <ref> hold and Mϕ(a)", while if ϕ is Π_2, we can similarly say "for all θ and M⊆ V_θ, if conditions 1-7 hold, then Mϕ(a)". Now let n>2. If n is odd, we assume for induction that for any Π_n-1 formula ψ, W_rψ(a, y) is expressible as n-3 layers of quantifiers, followed by the assertion that for all θ and M⊆ V_θ satisfying the appropriate conditions, if M contains a, y, and all the variables bound by the outer quantifiers, then M satisfies the Π_2 formula obtained by stripping away n-3 layers of quantifiers from ψ. We similarly assume that Σ_n-1 formulas are expressible via n-3 layers of outer quantifiers, followed by the assertion that there exists a θ and M⊆ V_θ satisfying the conditions and containing the appropriate variables and parameters such that M satisfies the inner Σ_2 formula. If n is even, our induction hypotheses are the same but with Σ_n-1 and Π_n-1 switched. Now if W_rϕ(a) for some Σ_n formula ϕ of the form ∃ y ψ(a, y) where ψ is Π_n-1 and n is odd, fix some b such that W_rψ(a, b). Then by the induction hypothesis, this fact is expressible by a string of n-3 quantifiers followed by an assertion about all θ and M⊆ V_θ with a, b∈ M, so W_rϕ(a) is expressible as a similar statement preceded by a string of n-2 quantifiers, with all their bound variables required to be in M. Conversely, if the formula that we wish to capture W_rϕ(a) holds, then there is such a θ, M, and b∈ M so that the formula expressing W_rψ(a, b) holds, so by the induction hypothesis ψ(a,b) in fact holds in W_r and thus ϕ(a) holds in W_r. The arguments in the cases where ϕ is Π_n and/or n is even are almost identical. Of course, not all parameters r will actually define a ground, so when we wish to quantify over grounds, it is useful to know that: (Bagaria-Hamkins-Tsaprounis-Usuba, <cit.>) The assertion that a parameter r successfully defines a ground W_r is Σ_3 expressible in r. 
We express this as "there exist a poset ℙ=(⋃ r, ≤_ℙ) and a filter G⊆ℙ which meets all dense sets in r such that for all beth fixed points θ of cofinality greater than δ:=|ℙ|^+ with ℙ∈ V_θ, there is an M_θ⊆ V_θ with the δ-cover and δ-approximation properties in V_θ such that M_θ ZFC_δ, ℙ∈ M_θ, r=𝒫(ℙ)^M_θ, (δ^+)^M_θ=δ^+, and V_θ=M_θ[G]". If W_r is in fact a ground, we can take M_θ=W_r∩ V_θ. Conversely, if the quoted statement holds, then the M_θ are uniquely defined because of the Uniqueness Lemma and W_r as defined in Theorem <ref> can be seen to be equal to their union. Jech's Theorem 13.9 <cit.> asserts that a transitive class is an inner model of ZF iff it is closed under Gödel operations and every subset of the class is covered by an element of the class, which can straightforwardly and tediously be verified for W_r. Choice and V=W_r[G] follow because each set in W_r has a well-ordering in some M_θ and each set in V has a ℙ-name in some M_θ. Bagaria, Hamkins, Tsaprounis, and Usuba observe (several paragraphs after Lemma 5 in <cit.>) that, given ℙ and G, any counterexample to the rest of the statement can be witnessed by any sufficiently large V_α. Consequently, everything after the initial existential quantifiers can be expressed as the Π_2 statement that every V_α containing ℙ, r, and G does not think there is a counterexample. Hence, the overall statement is Σ_3. Thus in statements like ∃ r W_rϕ(a) or ∀ r W_rϕ(a), specifying that r actually defines a ground only increases the complexity of the formula beyond what would be obtained from adding an extra quantifier to the formula produced in Lemma <ref> if ϕ is at most Π_2 or Σ_2. The following fact predates the explicit study of set-theoretic geology, but is frequently useful in geological arguments: (see e.g. Jech Lemma 15.43 <cit.>) (Intermediate Model Lemma) If V[G] is a generic extension by some poset ℙ and V⊆ W⊆ V[G] for some transitive W ZFC, then W is a generic extension of V by some complete subalgebra 𝔹 of the Boolean completion of ℙ, with W=V[G∩𝔹], and W is a ground of V[G]. With the groundwork laid, we now survey some basic concepts of set-theoretic geology. Hamkins and Reitz used the first-order definability of grounds to formulate the following axioms: The Ground Axiom is the statement that there are no nontrivial set forcing grounds. The Bedrock Axiom is the statement that there is a minimal ground, or in other words a ground which satisfies the Ground Axiom. Reitz <cit.> showed that both axioms can hold or fail independently of a wide range of other natural statements. Fuchs, Hamkins, and Reitz introduced the following basic object of study: The mantle 𝕄 of a model of set theory is the intersection of all its grounds. It follows from the above results that the mantle is a Π_3 definable class (without parameters). Many basic questions about the mantle were initially open, but were settled six years later by the work of Toshimichi Usuba: (Usuba <cit.>, Proposition 5.1) (Strong DDG Theorem) The grounds are strongly downward directed: that is, for every set X of parameters which succeed in defining grounds of the universe, there is a parameter s which defines a ground W_s such that for all r∈ X, W_s is a ground of W_r. 
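To illustrate the directedness phenomenon in the simplest possible case (this example is ours and is not needed later): suppose c_0 and c_1 are mutually generic Cohen reals over a ground W, and V=W[c_0][c_1]. Then W[c_0] and W[c_1] are both grounds of V, since by mutual genericity c_1 is Cohen-generic over W[c_0] and vice versa, and W itself is a ground of both, so any parameter s defining W witnesses this instance of directedness. Of course, the content of Usuba's theorem is that such a common ground exists even when the given grounds are not presented in advance as extensions of a single W.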
(Usuba <cit.>, Corollary 5.5, applying Fuchs, Hamkins, and Reitz, <cit.>, Theorem 22) * 𝕄 is a transitive model of ZFC * 𝕄 is a forcing-invariant class * The following are equivalent: * V has only set many grounds * V satisfies the Bedrock Axiom * V is a set forcing extension of the mantle (Usuba) For κ a cardinal, a κ-ground is a ground W of which V is a forcing extension by a poset of size less than κ. The κ-mantle is the intersection of all κ-grounds. (Usuba <cit.>, Theorem 1.3) If κ is extendible, then the mantle is equal to the κ-mantle. It thus follows from the Strong DDG Theorem that if there is an extendible cardinal, then V is a set forcing extension of 𝕄. Note however that the κ-mantle is not necessarily a κ-ground, since the proof of the Strong DDG Theorem only tells us in that case that the universe is a κ^++-cc extension of some ground of all the κ-grounds. The generic multiverse of a model of set theory is the collection of all models which can be obtained from it by repeatedly taking set forcing extensions and grounds. * (Fuchs, Hamkins, and Reitz, conditional on DDG) The generic multiverse can alternatively be defined as the collection of all set forcing extensions of grounds. * (Fuchs, Hamkins, and Reitz, conditional on DDG) All models in the generic multiverse share the same mantle. * (Usuba) If any model in the generic multiverse has an extendible cardinal, then the generic multiverse can be defined as the collection of all set forcing extensions of the mantle. (1): Let W be a ground of V and W[G] be a forcing extension of it. We show that all forcing extensions and all grounds of W[G] are in fact forcing extensions of grounds of V, so the collection of forcing extensions of grounds satisfies the closure property to be the complete generic multiverse. If W[G][H] is a forcing extension of W[G], then it is a forcing extension of W by the generic filter G*H on some two-step iteration of posets. If U is a ground of W[G], then because grounds are downward directed, there is a ground W' of both W and U, so U is a forcing extension of the ground W' of V, as desired. (2): This follows inductively or by (1) from Usuba's result that the mantle is invariant under forcing. (3): If any W in the generic multiverse has an extendible cardinal, then it is a set forcing extension of the mantle, which by (2) is the mantle of the entire generic multiverse, and by downward directedness the mantle is thus a ground of every model in the generic multiverse. CHAPTER: LARGE CARDINALS WITH Σ_N REFLECTION PROPERTIES In this chapter, we study cardinals which reflect the truth of Σ_n formulas true in V. In the same way that the Σ_2 reflection properties of supercompact cardinals allow us to establish the consistency of forcing axioms involving Σ_2-definable forcing classes and Σ_2 formulas like "there exists a filter which interprets a particular name as a stationary set", these cardinals will allow us to prove the consistency of corresponding axioms for more complex classes and formulas. § Σ_N-CORRECT CARDINALS We start with a simple and well-studied form of reflection: agreement with V about formulas with parameters in V_κ. Much of the material in this section comes from Section 1 of Joan Bagaria's paper on the subject and related large cardinal notions (<cit.>). The class of Σ_n-correct cardinals C^(n) consists of all κ such that V_κ≺_Σ_n V, i.e. for all Σ_n formulas ϕ and all a∈ V_κ, V_κϕ(a) iff Vϕ(a). 
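As a quick orienting example (hedged on the existence of the cardinals involved): if κ is the least inaccessible cardinal, then κ∈ C^(1) (inaccessible cardinals are beth fixed points; see Proposition <ref> below) but κ∉ C^(2): inaccessibility is a Π_1 property, so "there is an inaccessible cardinal" is Σ_2, and it is true in V but false in V_κ, since V_κ contains the true power sets of the cardinals below κ and hence computes inaccessibility below κ correctly. Thus membership in C^(n) genuinely strengthens as n grows.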
In defining and in proving results about Σ_n-correct cardinals, we make heavy use of the well-known fact that, although by Tarski the truth of formulas in V is not uniformly definable, there is a Σ_n definition of Σ_n truth: (Folklore) For each standard positive integer n, there is a Σ_n formula T_Σ_n in the language of set theory such that, for any Σ_n formula ϕ with Gödel number ⌈ϕ⌉∈ V and parameter a, T_Σ_n(⌈ϕ⌉, a)⟺ϕ(a). Furthermore, there is a Π_n formula T_Π_n which works the same way with Π_n formulas. We proceed by induction on n. For n=1, we use the facts that Δ_0 formulas are absolute to transitive structures and that the satisfaction of a formula in a set-sized structure is a Δ_1 relation between the structure, the Gödel number of the formula, and the parameters of the formula. Thus given a Δ_0 formula ψ(x,y) we can express T_Σ_1(⌈∃ x ψ⌉, a) as "there exists an x and a transitive structure S such that a, x∈ S and Sψ(x, a)" and T_Π_1(⌈∀ x ψ⌉, a) as "for all x and all transitive structures S such that a, x∈ S, Sψ(a, x)". Now if T_Σ_n and T_Π_n have already been defined and χ(x, y) is a Π_n formula, we can define T_Σ_n+1(⌈∃ x χ⌉, a) as ∃ x T_Π_n(⌈χ⌉, (x,a)). If χ is instead Σ_n, we can define T_Π_n+1 in a corresponding way. (Bagaria) C^(n) is a Π_n definable club class. That C^(n) is a proper class follows immediately from the Reflection Theorem, applied to the formula T_Σ_n; closure follows from standard model-theoretic arguments about unions of chains of elementary substructures. For Π_n definability, we assume that n≥ 2; the case where n=1 will follow from Proposition <ref> below. We show that α∈ C^(n) is equivalent to "for all x, all Π_n formulas ϕ, and all b∈ x, if x=V_α and xϕ(b), then T_Π_n(⌈ϕ⌉,b )". This is sufficient to establish the assertion is Π_n because x=V_α is Π_1 in x and α, so after converting the implication to a disjunction the formula has the form ∀ x∀ϕ∀ b(Σ_1∨Π_n), which is Π_n as long as n≥ 2. To see that this formula actually implies that α∈ C^(n), note that if a Σ_n formula fails in V_α, its Π_n negation holds in V_α and thus in V. Conversely, if we have a Σ_n formula of the form ∃ xψ(x, b) for ψ Π_n-1 which holds in V_α, let a∈ V_α be such that V_αψ(a,b); then Vψ(a,b) because ψ is at most Π_n, so V∃ xψ(x,b). C^(1) turns out to have a nice characterization in terms of cardinal arithmetic. C^(1) is exactly the class of κ such that ℶ_κ=κ. The axioms of infinity and pairing are Σ_1 expressible, using the sets to be paired as parameters in the latter case, so if κ∈ C^(1), then κ>ω is a limit ordinal. For any set x, both "there exists a bijection f:α→ x" (where α is an ordinal parameter) and "there exists an ordinal α with a bijection f:α→ x" are Σ_1, so V_κ correctly identifies an ordinal with the same cardinality as each of its elements. It follows that κ is larger than many uncountable cardinals and certainly larger than ω^2, so ω+α<κ whenever α<κ. An easy induction shows that |V_ω+α|=ℶ_α for any ordinal α. Then because V_ω+α∈ V_κ for all α<κ, it follows that ℶ_α∈ V_κ and thus ℶ_α<κ for all such α. Therefore we must have ℶ_κ=κ. Conversely, if ℶ_κ=κ, then for any x∈ V_κ, there is a β<κ such that trcl(x)⊂ V_ω+β. It follows that |trcl(x)|≤ℶ_β<κ, so x∈ H_κ. Since H_κ⊆ V_κ for all cardinals κ, this implies that V_κ=H_κ, so κ∈ C^(1) by Lemma <ref>. We can therefore express κ∈ C^(1) as the Π_1 formula: ∀ x∀ f:x→κ∀α<κ∃β<κ∀ y∈ x(rank(y)<α⇒ f(y)≠β) (in words, "there is no surjection x∩ V_α→κ for any set x and any α<κ"), using the fact that the rank function is Δ_1. 
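For a concrete example: by the above proposition, the least member of C^(1) is the least beth fixed point, κ_0=sup{ℶ_0, ℶ_ℶ_0, ℶ_ℶ_ℶ_0,…}, which has cofinality ω. So ZFC proves outright that C^(1) is a proper class, but its members are typically singular; as the next results show, it is the regular members of the C^(n) that carry large cardinal strength.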
Bagaria proved an alternative characterization of C^(1) included in the above proof, that it consists of exactly the uncountable cardinals κ such that V_κ=H_κ. Though by Corollary <ref> any model of ZFC will have many C^(n) cardinals, the existence of a regular member of C^(n) has large cardinal strength. (Folklore) A regular cardinal is in C^(1) if and only if it inaccessible. The axiom scheme that there is a regular Σ_n-correct cardinal for each n is equivalent to Ord is Mahlo (i.e., that every definable (with parameters) club class of ordinals contains a regular cardinal). The only part that is not immediate from previous results is that the existence of regular C^(n) cardinals for all n implies that Ord is Mahlo. For this, we first show that the hypothesis implies that for each n, there is a proper class of regular C^(n) cardinals. Let κ∈ C^(n+2) be regular. Then for any α<κ, V thinks there is a regular C^(n) cardinal above it (namely κ), so V_κ thinks there is a regular C^(n) cardinal above α. Thus V_κ satisfies the Π_n+2 sentence "for every ordinal α, there is a regular C^(n) cardinal above α", so V believes this as well, as desired. Now let ϕ be a Σ_n formula defining a club class and κ a regular C^(n) cardinal large enough for V_κ to contain all the parameters used in ϕ. Then for any β<κ, V thinks "there is some α>β such that ϕ(α)", which is Σ_n. Thus there are unboundedly many ordinals α in V_κ such that ϕ(α) holds (in V_κ and thus also in V), so because ϕ defines a closed class, ϕ(κ) holds. Thus κ is a regular cardinal in the class defined by ϕ, as desired. The regular C^(2) cardinals are reasonably well-known: they are exactly the Σ_1-reflecting cardinals[Some authors use Σ_n-reflecting to mean regular Σ_n-correct. However, since this conflicts with Goldstern and Shelah's usage and the word "reflecting" is somewhat overloaded in set theory, I will avoid this in favor of more descriptive terminology.] introduced by Goldstern and Shelah <cit.> to prove the consistency of BPFA, and later used by Aspero and Bagaria to establish the consistency of symmetric bounded forcing axioms in general (<cit.>, Lemma 2.2). (Folklore) A regular cardinal κ is Σ_2-correct if and only if for all formulas ϕ, all regular θ≥κ, and all a∈ H_κ such that H_θϕ(a), there is a cardinal δ<κ with a∈ H_δ and H_δϕ(a). For the forward direction, since the assertions that an ordinal is regular and that a set is equal to H_θ are Π_1, the statement that there is a regular θ such that a∈ H_θ and H_θϕ(a) is Σ_2. It follows that it is true in V_κ; let δ be the cardinal witnessing it. For the reverse direction, if V∃ xψ(a, x) where ψ is Π_1, let b be such that V ψ(a,b). Then by Lemma <ref>, H_θψ(a,b) for any regular θ large enough for θ to contain a and b, and hence H_θ∃ xψ(a, x). Then applying the reflection property, there is some δ<κ such that H_δ∃ xψ(a,x). Since by Corollary <ref> κ is inaccessible, the proof of Lemma <ref> goes through in V_κ, so V_κ agrees with H_δ about the truth of ψ, and thus whichever witness for the statement H_δ has also works in V_κ. Hence V_κ∃ x ψ(a, x), so κ∈ C^(2). We can straightforwardly generalize this result to n>2 if we require that θ and δ are in C^(n-1). The following generalizes a fact about H_θ that we established in the preceding proof: (Bagaria) A Σ_n+1 formula holds in V if and only if there is an α∈ C^(n) such that it holds in V_α. Inversely, a Π_n+1 formula holds in V if and only if it holds in V_α for all α∈ C^(n) such that V_α contains the parameters. 
Let our formula be ∃ yψ(a, y) where ψ is Π_n and a is any parameter. If V_α∃ yψ(a, y) for some α∈ C^(n) such that a∈ V_α, then if b∈ V_α witnesses this, Vψ(a, b). It follows that V∃ y ψ(a, y). Conversely, if b is such that Vψ(a, b), let α∈ C^(n) be big enough that a, b∈ V_α. Then V_αψ(a, b), so V_α∃ y ψ(a, y). It is obvious from the transitivity of agreement on Σ_n truth that Σ_n-correct cardinals recognize smaller Σ_n-correct cardinals. Using the previous lemma, we can in fact conclude something slightly stronger: If n≤ m, α<β, α∈ C^(n+1), and β∈ C^(m), then V_βα∈ C^(n+1) (or in other words, V_α≺_Σ_n+1 V_β). By Corollary <ref>, the assertion that α is Σ_n+1-correct is at most Π_m+1, so by Lemma <ref>, it holds in V_β. Thus if a Σ_n-correct cardinal β has a Σ_n+1-correct cardinal α below it, V_β will correctly compute the C^(n+1) up to and including α (because V_α correctly computes it and V_β agrees with V_α on Π_n+1 truth). However, there may be cardinals above α which are Σ_n+1-correct in V_β but not in V. Finally, we examine how Σ_n-correct cardinals interact with small forcing: (Folklore) If ℙ∈ V_θ is a forcing poset and θ∈ C^(n), then ⊩_ℙθ∈ C^(n). Let G⊆ℙ be V-generic, ϕ be a Σ_n (or Π_n) formula, and a∈ V_θ^V[G] such that V[G]ϕ(a). By Lemma <ref> there is some ȧ∈ V_θ such that a=ȧ^G. Then if p∈ G forces ϕ(ȧ), V_θ p⊩_ℙϕ(ȧ) because the forcing relation for Σ_n (Π_n) formulas is Σ_n (Π_n). Thus V_θ[G]=V_θ^V[G]ϕ(a), where the equality follows from Lemma <ref> as well. We also have the geological converse: If ℙ∈ V_θ is a forcing poset, G⊆ℙ is a V-generic filter, and θ∈ (C^(n))^V[G], then θ∈ (C^(n))^V. If n=1, this follows because C^(1) is Π_1-definable and Π_1 formulas are downward absolute between transitive structures. For n≥ 2, let r=𝒫(ℙ)^V and let ϕ(a) be a Σ_n or Π_n formula which holds in V for some a∈ V_θ. By Lemma <ref>, Vϕ(a) is Σ_n or Π_n expressible in V[G] with a and r as parameters. Since θ is a limit ordinal, r∈ V_θ, so V_θ^V[G] agrees with V[G] that the ground defined by r satisfies ϕ(a). Since θ is at least Σ_2-correct in V[G], by the Ground Definability Theorem <ref> V_θ^V[G] computes the ground defined by r as V_θ. It thus follows that V_θϕ(a). § SUPERCOMPACTNESS FOR C^(N) We study a strengthening of supercompactness where the elementary embeddings are also required to preserve initial segments of a class, analogous to the notion of strongness for a class A used in some definitions of Woodin cardinals. * A cardinal κ is ν-supercompact for a class A iff there is an inner model M with an elementary embedding j:V→ M such that: * crit(j)=κ * ^ν M⊂ M * j(κ)>ν * j(A∩ V_κ)∩ V_ν=A∩ V_ν * κ is supercompact for A if it is ν-supercompact for A for all cardinals ν * A cardinal δ is Woodin for supercompactness if for all A⊆ V_δ, there is some κ<δ such that (V_δ, ∈, A)“κ is supercompact for A" Kanamori's <cit.> Exercise 24.19 shows that the Woodin for supercompactness cardinals are exactly the Vopenka cardinals; Kentaro Sato (<cit.>, Corollary 10.6) and Norman Perlmutter (<cit.>, Theorem 5.10) independently found alternative proofs. Going forward I will not make much use of Vopenka cardinals, but working in V_δ for δ Vopenka rather than assuming some version of Vopenka's principle in V can simplify the metamathematical details, so I include them in the above definition for completeness. The following standard result (see e.g. Kanamori <cit.> Lemma 22.12 and surrounding discussion, Jech <cit.> pg. 
377 between Corollary 20.18 and Lemma 20.19) is very useful for analyzing supercompactness for A. If κ≤ν is a cardinal and j:V→ M witnesses that κ is ν-supercompact, let U:={X⊆𝒫_κ(ν) j"ν∈ j(X)} be the normal ultrafilter derived from j and k: Ult(V, U)→ M be defined by k([f]_U)=j(f)(j"ν). Then k is an elementary embedding with critical point above ν and j=k∘ j_U. Thus, supercompactness for A has a straightforward combinatorial characterization: If κ is ν-supercompact for a class A, there is a normal ultrafilter U⊂𝒫(𝒫_κ(ν)) witnessing that fact. Let U and k be as in the preceding lemma. Then j_U is a ν-supercompactness embedding, and we have: A∩ V_ν =j(A∩ V_κ)∩ V_ν =k(j_U(A∩ V_κ))∩ V_ν =j_U(A∩ V_κ)∩ V_ν where the final equality holds because k does not move elements of V_ν. Laver functions are very useful for many applications of supercompact cardinals, including Baumgartner-style consistency proofs of forcing axioms. They generalize easily to supercompactness for A: If κ is supercompact for some class A, there exists a Laver function f on κ for A. That is, f:κ→ V_κ is such that for all sets x and all ordinals λ≥ |trcl(x)|, there is an elementary embedding j:V→ M witnessing that κ is λ-supercompact for A such that j(f)(κ)=x. We carry out the standard proof of the existence of a Laver function (e.g. Jech <cit.> Theorem 20.21) and verify that the embeddings we consider can be taken to preserve A. We assume toward a contradiction that for all f:κ→ V_κ there is some set x and some λ≥ |trcl(x)| such that for all elementary embeddings j witnessing that κ is λ-supercompact for A, j(f)(κ)≠ x; let λ_f be the least λ for which such an x exists. Choose some ν greater than any possible value of 2^λ_f^<κ for f:κ→ V_κ and let j: V→ M witness that κ is ν-supercompact for A. Let ϕ(g, β) be the statement that g is a function from some cardinal α to V_α and β≥α is minimal such that for some x with |trcl(x)|≤β, there is no normal measure U⊂𝒫(𝒫_α(β)) such that j_U(A∩ V_α)∩ V_β=A∩ V_β and j_U(f)(α)=x. Then for all f:κ→ V_κ, Mϕ(f, λ_f) since M was chosen so as to contain everything relevant to the truth of ϕ. For any suitable function g, λ_g denotes the unique cardinal such that ϕ(g,λ_g) holds if one exists. Let B={α<κ | ∀ g:α→ V_αϕ(g,λ_g)}. Then j(B) is the set of all α<j(κ) satisfying the same property, except that in the statement of ϕ we now say there is no normal measure U such that j_U(g)(α)=x and j_U(j(A∩ V_α))∩ V_λ_g=j(A∩ V_α)∩ V_λ_g. If crit(j_U)=κ, then since j(A∩ V_κ)∩ V_ν=A∩ V_ν, j_U(j(A∩ V_κ))∩ V_λ_g=j_U(j(A∩ V_κ)∩ V_κ)∩ V_λ_g=j_U(A∩ V_κ)∩ V_λ_g and, since ν>λ_g for all g, j(A∩ V_κ)∩ V_λ_g=A∩ V_λ_g. Hence for any function g:κ→ V_κ and normal measure U⊂𝒫(𝒫_κ(λ_g)), j_U(j(A∩ V_κ))∩ V_λ_g=j(A∩ V_κ)∩ V_λ_g iff j_U(A∩ V_κ)∩ V_λ_g=A∩ V_λ_g, so κ∈ j(B) by our hypothesis about the nonexistence of Laver functions. Now we inductively define f:κ→ V_κ so that if α∈ B then f(α) witnesses ϕ(f↾α, λ_f↾α), and if α∉B, f(α)=∅. Let x=j(f)(κ). j(f) is constructed in the same way as f, except with j(A) used in place of A in ϕ, but as shown in the previous paragraph this doesn't matter for functions with domain κ like j(f)↾κ=f, so x witnesses the truth of ϕ(f,λ_f) in M. By the choice of ν, M and V have exactly the same ultrafilters on 𝒫_κ(λ_f) and compute their ultrapowers identically, so x also witnesses the truth of ϕ(f, λ_f) in V. Now we let U⊂𝒫(𝒫_κ(λ_f)) be the normal measure derived from j and obtain a contradiction by showing that j_U(A∩ V_κ)∩ V_λ_f=A∩ V_λ_f and j_U(f)(κ)=x. 
Let k: Ult(V, U)→ V be the elementary embedding such that j=k∘ j_U. Then k↾ V_λ_f=id, so for any set y, y∈ j_U(A∩ V_κ)∩ V_λ_f iff k(y)∈ k(j_U(A∩ V_κ))∩ V_λ_f iff y∈ j(A∩ V_κ)∩ V_λ_f iff y∈ A∩ V_λ_f. Furthermore, x∈ H_λ_f^+ and k(λ_f)=λ_f, so it can be shown inductively that k(x)=x. However, we also have that k(j_U(f)(κ))=k(j_U(f))(k(κ))=j(f)(κ)=x so by the injectivity of k, j_U(f)(κ)=x. This contradicts the fact that x is a counterexample to f being a Laver function for A, completing the proof. Most of interest to us is the case where A=C^(n) for some n. (These are not to be confused with the C^(n)-supercompact cardinals Bagaria considered in <cit.>.) In this case we get the following alternative characterization: The following are equivalent for all cardinals κ and all positive integers n: * κ is supercompact for C^(n) * For every cardinal ν, Σ_n+1 formula ϕ, and set a such that Vϕ(a), there is an elementary embedding j:V→ M such that: * crit(j)=κ * ^ν M⊂ M * j(κ)>ν * Mϕ(a) (1⇒ 2): Given ν, ϕ, and a, let θ≥ν be a C^(n) cardinal such that V_θϕ(a) (using Lemma <ref>). Then if λ>θ=|V_θ| and j:V→ M witnesses that κ is λ-supercompact for C^(n), the first three bullet points in (2) are immediate, so we show the fourth. First, since there is a λ-sequence in V with range V_θ, the closure conditions on M imply that V_θ^M=V_θ. By elementarity, j(C^(n)∩ V_κ) is exactly the C^(n) cardinals of M below j(κ), so θ∈ C^(n)∩ V_λ=j(C^(n)∩ V_κ)∩ V_λ= (C^(n))^M∩λ. Since V_θ^M=V_θϕ(a), applying Lemma <ref> in M yields Mϕ(a). (2⇒ 1): Given ν, let a=C^(n)∩ν. Then V∀θ<νθ∈ a⟷θ∈ C^(n), where θ∈ C^(n) abbreviates the formula from Corollary <ref>. This is a conjunction of a Π_n formula and a Σ_n formula with an added bounded quantifier, so it can be written in Σ_n+1 form. Let j:V→ M be a ν-supercompactness embedding with critical point κ and M∀θ<νθ∈ a⟷θ∈ C^(n). Then j(C^(n)∩ V_κ)∩ V_ν=(C^(n))^M∩ν=a=C^(n)∩ν, so j witnesses that κ is ν-supercompact for C^(n). Thus, for example if ϕ is a Π_2 large cardinal property like strongness or supercompactness, any given cardinal with that property can retain it in the codomain of a supercompactness for C^(2) embedding. Similar arguments show that any cardinal supercompact for C^(n) has many cardinals supercompact for C^(n-1) below it. By examining the proofs of Lemma <ref> and Lemma <ref>, it is easy to see that if f is a Laver function for κ, we can simultaneously ensure that Mϕ(a) and j(f)(κ)=b for any desired a and b such that Vϕ(a). Clarifying the bottom of our hierarchy, analogously to the first sentence of Corollary <ref>: Every supercompact cardinal is supercompact for C^(1). Since j(C^(1)∩ V_κ)∩ V_ν is simply the set of C^(1) cardinals of M below ν, by Proposition <ref>, it is sufficient to show that for every beth fixed point ν, if M is the codomain of any ν-supercompactness embedding, then M correctly computes the beth function below ν. If α<ν, then |V_ω+α|=ℶ_α<ν, so ^ν M⊂ M implies that V_ω+α^M=V_ω+α and that M correctly computes the cardinality of this set. Thus ℶ_α^M=|V_ω+α|^M=ℶ_α as desired, so α is C^(1) in M if and only if it is C^(1) in V. The following lemmas generalize well-known facts about supercompact cardinals: If κ is supercompact for C^(n) (or merely strong for C^(n)), then κ∈ C^(n+1) First note that κ is a limit point of C^(n), since bounded subsets of κ are fixed by all elementary embeddings with critical point κ, whereas j(C^(n)∩κ) can be an arbitrarily large initial segment of a proper class. Since C^(n) is closed, κ∈ C^(n). 
Thus by Lemma <ref> any Σ_n+1 formula true in V_κ must also be true in V. If ϕ is a Σ_n+1 formula and a∈ V_κ is such that Vϕ(a), again by Lemma <ref> there is some θ∈ C^(n) such that V_θϕ(a). Then there is an elementary embedding j:V→ M with j(κ)>θ, V_θ^M=V_θ, and Mθ∈ C^(n). Thus M thinks there is a C^(n) cardinal less than j(κ) witnessing ϕ(j(a)) (j(a)=a because a∈ V_κ), so by elementarity there is in V a θ̅<κ in C^(n) with V_θ̅ϕ(a). By Lemma <ref>, V_κθ̅∈ C^(n), and since V_κ ZFC, we can apply Lemma <ref> again inside of it to obtain V_κϕ(a), as desired. If κ<λ, κ is ν-supercompact for C^(n) for all ν<λ, and λ∈ C^(n) is regular, then V_λ"κ is supercompact for C^(n)". Since λ is inaccessible, V_λ contains all the ultrafilters on 𝒫_κ(ν) witnessing that κ is supercompact for C^(n) up to λ, as well as all the functions 𝒫_κ(ν)→ V_λ which represent elements of V_j_U(λ)^Ult(V, U) for any such normal ultrafilter U, so we can construct the ultrapower within V_λ and get exactly the same results below λ. Furthermore, all functions 𝒫_κ(ν)→λ are bounded, so the order type of the predecessors of any given one in any ultrapower is less than λ, from which it follows that j_U(λ)=λ. Hence for each ν<λ, we have a ν-supercompactness embedding V_λ→ V_λ^M which arises as a restriction of a ν-supercompactness embedding V→ M with (C^(n))^M∩ν=C^(n)∩ν. Since λ is C^(n), it agrees with V on which cardinals below it are C^(n) by Corollary <ref>, so by elementarity V_λ^M agrees with M on which cardinals are C^(n) below λ as well. Thus V_λ and V_λ^M agree on C^(n) cardinals below ν, as desired. We conclude this section with some results on the relationship between supercompactness for C^(n) and other large cardinal axioms: κ is supercompact for C^(2) iff κ is extendible. For the forward direction, given an ordinal η, we show that κ is η-extendible. Let θ∈ C^(2) be greater than κ+η; then there is an elementary embedding j:V→ M with crit(j)=κ, θ∈ (C^(2))^M, V_θ^M=V_θ, and j↾ V_κ+η∈ M. By the last condition and Lemma <ref>, M satisfies the Σ_2 assertion that κ is η-extendible, so by Σ_2-correctness V_θ contains an elementary embedding σ:V_κ+η→ V_β for some β<θ with crit(σ)=κ. σ then witnesses that κ is η-extendible in V. For the converse, given an ordinal λ>κ, we show that κ is λ-supercompact for C^(2). Let θ∈ C^(2) be greater than λ; then by extendibility, for some ordinal β>θ there is an elementary embedding σ: V_θ→ V_β with critical point κ. By Lemma <ref>, the ordinals below θ are closed under the beth function, so by elementarity those below β are as well, and thus β∈ C^(1). It follows from Lemma <ref> that V_β recognizes the Σ_2-correctness of θ, and from the remarks after that lemma that it correctly computes C^(2)∩λ. By elementarity σ(C^(2)∩κ) is the set of cardinals V_β thinks are Σ_2-correct, so σ(C^(2)∩κ)∩λ= C^(2)∩λ. By standard arguments, if U⊂𝒫(𝒫_κ(λ)) is the ultrafilter derived from σ, then U is a fine, normal, and κ-complete. Furthermore, if j_U is the associated ultrapower embedding, there is an elementary embedding k: Ult(V_θ, U)→ V_β such that σ=k∘ j_U and crit(k)≥λ, so in particular j_U(C^(2)∩κ)∩λ=σ(C^(2)∩κ)∩λ=C^(2)∩λ. Thus U witnesses that κ is λ-supercompact for C^(2), as desired. The existence of cardinals supercompact for C^(n) for each standard n is equivalent to the first-order Vopenka scheme (i.e., that for every definable proper class of structures of the form (A, ∈, R) for A a transitive set and R⊆ A, there is an elementary embedding between two distinct elements of the class). 
For the forward direction, let ϕ be a Σ_n formula defining the class and κ a supercompact cardinal for C^(n-1). Then if (A,∈, R) is such that ϕ((A,∈, R)) and |A|≥κ, we apply Lemma <ref> to obtain an elementary embedding j:V→ M witnessing the |A|-supercompactness of κ such that Mϕ((A,∈, R)). Then Mϕ((j(A), ∈, j(R))) by elementarity, and by Lemma <ref> j↾ A∈ M is an elementary embedding (A, ∈, R) → (j(A), ∈, j(R)). Since κ≤ |A|^M<j(κ)≤ |j(A)|^M, A≠ j(A), so M thinks there are distinct structures in the class defined by ϕ with an elementary embedding between them. By elementarity, the same holds in V. For the converse, we generalize Bagaria's proof of Theorem 4.3 in <cit.>. Given n, let 𝒜 consist of all structures of the form (V_γ, ∈, {{α, λ}}∪ (C^(n)∩γ)) where λ is the least limit ordinal greater than α such that there are no κ≤α that are <λ-supercompact for C^(n) and γ is the least element of C^(n+1) above λ with uncountable cofinality. Note that there is at most one structure in 𝒜 for any given value of α, which we denote A_α. We assume toward a contradiction that there are no cardinals supercompact for C^(n); then A_α exists for all α, so in particular 𝒜 is a proper class. Since 𝒜 is definable, there are α,β such that there is an elementary embedding j:A_α→ A_β for α≠β. Set γ, δ, λ, and μ such that A_α= (V_γ, ∈, {{α, λ}}∪ (C^(n)∩γ)) and A_β= (V_δ, ∈, {{β, μ}}∪ (C^(n)∩δ)). Since {α, λ} and {β, μ} are the only two-elements sets included in each unary predicate, α<λ, and β<μ, we must have j(α)=β. As α≠β and elementary embeddings of transitive structures never map ordinals to smaller ordinals, α<β and so κ:=crit(j)≤α. To complete the proof, we show that κ is <λ-supercompact for C^(n), contradicting A_α∈𝒜. The argument is a straightforward generalization of the standard proof that sufficient partial extendibility, even without any hypotheses on the size of j(κ), implies partial supercompactness (see e.g. Kanamori <cit.> Propositions 23.6 and 23.15(b)). We first observe that if j^i(κ) is defined for all natural numbers i, then the supremum τ of this sequence must be a fixed point of j strictly between κ and γ (as γ was chosen to have uncountable cofinality); however this would mean that j restricts to a nontrivial elementary embedding V_τ+2→ V_τ+2, contrary to Kunen's inconsistency theorem. Thus finitely iterating j sufficiently must carry us above γ, so in particular we can find some m such that j^m(κ)≤λ<j^m+1(κ). Following Kanamori, let P(i) denote the assertion that there is an elementary embedding σ: (V_λ, ∈, λ∩ C^(n))→ (V_θ, ∈, θ∩ C^(n)) for some θ with crit(σ)=κ and σ(κ)=j^i+1(κ). P(i) can easily be verified to be a Σ_n+1 statement with κ, j^i+1(κ), and (V_λ, ∈, λ∩ C^(n)) as parameters. We verify P(m) by induction. P(0) holds with θ=μ and σ=j↾ V_λ. If P(i) holds and i<m, then because γ∈ C^(n+1) and all the parameters are in V_γ, it holds in V_γ. Thus we can find a θ<γ and σ∈ V_γ witnessing it. Then in V_δ j(σ) is an elementary embedding (V_j(λ), ∈, j(λ)∩ C^(n))→ (V_j(θ), ∈, j(θ)∩ C^(n)) with crit(j(σ))=j(κ) and j(σ)(j(κ))=j(j^i+1(κ))=j^i+2(κ). Thus j(σ)∘σ: (V_λ, ∈, λ∩ C^(n))→ (V_j(θ), ∈, j(θ)∩ C^(n)) witnesses that V_δ P(i+1), so because δ∈ C^(n+1), P(i+1) holds in V. It follows that P(m) holds and thus there is an elementary embedding j̅: (V_λ, ∈, λ∩ C^(n))→ (V_θ, ∈, θ∩ C^(n)) with critical point κ and j̅(κ)>λ. Thus for any ν<λ, we can derive a normal ultrafilter U_ν:={X⊆𝒫_κ(ν)j̅"ν∈j̅(X)}. 
By standard arguments U_ν is a fine normal κ-complete ultrafilter, and by a simple adaptation of Lemma <ref> j̅ factors through Ult(V_λ, U_ν) with the second factor embedding having critical point above ν. Since j̅ preserves C^(n), the embedding generated by U_ν preserves C^(n) up to ν. Thus κ is <λ-supercompact for C^(n), as desired. Since Proposition <ref> can be generalized to show that Bagaria's notion of C^(n)-extendibility is equivalent to supercompactness for C^(n+1) for all n, Proposition <ref> can be viewed as a corollary and unification of Bagaria's results on the equivalence of fragments of the Vopenka scheme with the existence of supercompact or C^(n)-extendible cardinals. § Σ_N-CORRECTLY H_Λ-REFLECTING CARDINALS Tadatoshi Miyamoto introduced the H_λ-reflecting cardinals (<cit.>, Definition 1.1) to extend the work of Goldstern and Shelah to asymmetric versions of BPFA. The natural generalization of Miyamoto's definition to formulas of a given complexity true in V will form the basis of consistency proofs for Σ_n-correct bounded forcing axioms. For cardinals κ≤λ and n≥ 2 an integer, we say that κ is Σ_n-correctly H_λ-reflecting iff κ is regular and for every Σ_n formula ϕ and a∈ H_λ, if ϕ(a) holds, then the set of Z≺ H_λ of size less than κ and containing a such that V_κϕ(π_Z(a)) is stationary in [H_λ]^<κ (recall that π_Z is the Mostowski collapse map for Z). If λ=κ^+α, we say that κ is Σ_n-correctly +α reflecting.[The +α-reflecting terminology is due to Fuchs (<cit.>, Definition 3.9).] The n≥ 2 assumption is necessary to make this a genuine large cardinal notion because every regular κ is Σ_1-correctly H_λ reflecting for all λ≥κ: if ϕ is Σ_1 then Vϕ(a) implies H_λϕ(a) implies Zϕ(a) for all Z≺ H_λ; it then follows that π_Z"Zϕ(π_Z(a)), and since Σ_1 statements are upward absolute between transitive structures, V_κϕ(π_Z(a)) whenever |Z|<κ. The n=2 case is exactly Miyamoto's H_λ-reflecting cardinals, by an argument very similar to Proposition <ref>. We begin with the simple observation that Σ_n-correct H_λ-reflection is a strengthening of Σ_n-correctness, so in particular it is also the case that Vϕ(π_Z(a)) for suitable Z, ϕ, and a: If κ≤λ is Σ_n-correctly H_λ-reflecting, then κ∈ C^(n). First we show that κ is a strong limit and thus inaccessible. For any α<κ, there are club many Z≺ H_λ of size less than κ with α+1⊂ Z and thus π_Z(α)=α, so we can reflect the Σ_2 statement "there exists a surjection from some ordinal onto the power set of α" and get that (2^|α|)^V_κ exists (if we drop the prevailing assumption that n≥ 2, the proposition is of course false). Since V_κ contains the full power set of α, 2^|α|=(2^|α|)^V_κ<κ, as desired. Now fix a∈ V_κ. If ϕ is a Σ_n formula such that Vϕ(a), then we can find a Z≺ H_λ large enough so that π_Z(a)=a (since V_κ=H_κ when κ is inaccessible, so |trcl(a)|<κ) and such that V_κϕ(π_Z(a)), so V_κϕ(a). By the contrapositive of the argument given in the proof of Corollary <ref>, this is sufficient to establish κ∈ C^(n). We have the following analogue to Corollary <ref>: "κ is Σ_n-correctly +α-reflecting" is a Π_n-definable relation between κ and α. All objects relevant to the definition of Σ_n-correct H_λ reflection can be found in V_λ+3, with the exception of those needed to verify that Vϕ(a). 
Thus we can say "κ is regular and for all λ=κ^+α, all x=V_λ+3, all a∈ H_λ, and all Σ_n formulas ϕ, ¬ϕ(a) or there is a set S∈ x satisfying the definition of stationarity as evaluated by functions in x such that every Z∈ S is an elementary substructure of H_λ, has cardinality less than κ, contains a, and has a function π_Z∈ x satisfying the definition of the Mostowski collapse such that V_κϕ(π_Z(a))." ¬ϕ(a) is Π_n, regularity is Π_1, and as shown in the appendix, the definitions of κ^+α, H_λ, and V_λ+3 are at worst Δ_2, so (given our prevailing assumption that n≥ 2) the overall definition is Π_n. Using this, we analyze the hierarchy of Σ_n-correctly +α-reflecting cardinals as n and α vary. Let κ be Σ_n-correctly +α reflecting. Then: * κ is Σ_m-correctly +β-reflecting for all m≤ n and β≤α * If 0<α<κ, then for all m<n there are stationarily many κ̅<κ which are Σ_m-correctly +α-reflecting (cf. Lemma 4.10 in <cit.>) (1): If ϕ is a Σ_m formula and a∈ H_κ^+β is such that ϕ(a) holds, let f:[H_κ^+β]^<ω→ H_κ^+β be a function; to simplify things, we will assume without loss of generality that f encodes the Skolem functions necessary to guarantee that any set closed under it is an elementary substructure of H_κ^+β. We extend this to a function f':[H_κ^+α]^<ω→ H_κ^+α by defining f'(x)=f(x∩ H_κ^+β). Then because a∈ H_κ^+β⊆ H_κ^+α, every Σ_m formula is Σ_n, and κ is Σ_n-correctly +α reflecting, we can find a Z'≺ H_κ^+α of size less than κ, containing a, and closed under f' with Z'∩ H_κ transitive and V_κϕ(π_Z'(a)). Let Z:=Z'∩ H_κ^+β. By the definition of f', Z is closed under f, and clearly a∈ Z, Z∩ H_κ=Z'∩ H_κ is transitive, |Z|<κ, and because we put Skolem functions into f, Z≺ H_κ^+β. Finally, π_Z=π_Z'↾ Z because if x∈ Z' and x∈ y∈ Z then y∈ H_κ^+β, so by transitivity x∈ H_κ^+β, and thus x∈ Z, so the recursive definitions of π_Z(y) and π_Z'(y) will agree. (2): By Lemma <ref>, the assertion that κ is Σ_m-correctly +α-reflecting is Π_m. Thus if 2≤ m<n [or even if m=1, though by earlier remarks the proof in this case would reduce to a needlessly circuitous proof that κ is Mahlo], the assertion that κ is Σ_m-correctly +α-reflecting is Σ_n, so we can find stationarily many Z≺ H_κ^+α of size less than κ with Z∩ H_κ transitive, α,κ∈ Z, and π_Z(κ) Σ_m-correctly +α-reflecting (since the transitivity hypothesis implies that π_Z(α)=α). It then follows from Lemma <ref> that there are stationarily many Σ_m-correctly +α-reflecting cardinals below κ, as desired. Now we compare them to other large cardinals. Σ_n-correctly +0-reflecting can easily be seen to be equivalent to regular C^(n), which as we have seen lies between inaccessible and Mahlo in the consistency strength hierarchy. +1-reflecting, even without added correctness, is already a fair bit stronger: If κ is Σ_2-correctly +1-reflecting, then κ is weakly compact. We have already shown that κ is inaccessible, so we verify the tree property. Let <_T⊂κ×κ be a κ-tree ordering and T=(κ, <_T)∈ H_κ^+. If T has no cofinal branch, the statement "T is a κ-tree with no cofinal branch" is Π_1. Thus we can find a Z≺ H_κ^+ of size less than κ such that κ, T∈ Z, Z∩ H_κ is transitive, Z closed under the mapping α↦ T_α, and π_Z(T) is a π_Z(κ)-tree with no cofinal branch. By the closure and transitivity conditions, every node of T which appears below level π_Z(κ)=Z∩κ is in Z and not moved by π_Z. By elementarity, they have the same ordering relations to each other in π_Z(T) as they did in T.
Since π_Z(T) has height π_Z(κ), it can have no nodes on higher levels, and since all of its nodes arise from nodes of T, none of whose heights in the tree are moved by π_Z, it must have exactly the same nodes as T on all levels below π_Z(κ). Thus π_Z(T)=T↾π_Z(κ). However, taking the predecessors of any node in T_π_Z(κ) gives a cofinal branch of T↾π_Z(κ), a contradiction. Thus T must have had a cofinal branch to begin with, so κ is weakly compact. It can further be shown that (Σ_2-correct) +1-reflection is exactly equivalent to the concept of strong unfoldability introduced by Villaveces <cit.>, which in turn implies total indescribability, so Σ_n-correct +1-reflection is in fact a fair bit stronger than weak compactness. However, as the next two lemmas show, Σ_n-correct +1-reflection is still consistent with V=L and thus below 0^♯ for arbitrarily large n: (adapted and generalized from Miyamoto <cit.>, proof of Theorem 4.2) A cardinal κ is Σ_n-correctly +1-reflecting in L if and only if for all A∈𝒫(κ)∩ L and Σ_n formulas ϕ such that Lϕ(A, κ), there are stationarily many α<κ such that L_κϕ(A∩α, α). Assume V=L throughout. For the forward implication, if κ is Σ_n-correctly +1-reflecting and ϕ(A, κ) holds, there are stationarily many Z≺ H_κ^+ of size less than κ and containing A and κ such that V_κϕ(π_Z(A), π_Z(κ)) and Z∩ H_κ is transitive. By Lemma <ref>, there are stationarily many α<κ such that α=π_Z(κ)=Z∩κ for some such Z, and by elementarity, π_Z(A)=A∩π_Z(κ). Since even Σ_1-correctness implies inaccessibility for regular cardinals, V_κ=H_κ=L_κ. Thus there are stationarily many α<κ such that L_κϕ(A∩α, α), as desired. For the reverse implication, since both 𝒫(κ) and H_κ^+=L_κ^+ are well-ordered by <_L with order type κ^+, there is a bijection f_κ between them, definable with κ as a parameter. Given any a∈ L_κ^+ such that Lϕ(a), setting A:=f_κ^-1(a), Lϕ(f_κ(A)). It follows that there is a stationary S⊂κ such that L_κϕ(f_α(A∩α)) for all α∈ S, where f_α has the obvious definition. Then again by Lemma <ref> we have a stationary S^*⊂ [L_κ^+]^<κ consisting of sets Z≺ L_κ^+ containing a, A, and κ such that Z∩κ∈ S. For all Z∈ S^*, by elementarity we have π_Z(A)=A∩π_Z(κ) and thus π_Z(a)=π_Z(f_κ(A))=f_π_Z(κ)(π_Z(A))=f_π_Z(κ)(A∩π_Z(κ)), so since π_Z(κ)∈ S, L_κϕ(π_Z(a)). Thus κ is Σ_n-correctly +1-reflecting. Every Σ_n-correctly +1-reflecting cardinal is Σ_n-correctly +1-reflecting in L. Let κ be Σ_n-correctly +1-reflecting. If A⊆κ is constructible and ϕ is Σ_n such that Lϕ(A, κ), then the Σ_n formula ϕ^L(A, κ) is true in V. Thus there are stationarily many Z≺ H_κ^+ of size less than κ containing A and κ with Z∩ H_κ transitive and V_κϕ^L(π_Z(A), π_Z(κ)), so by arguments similar to those given for the previous lemma there are stationarily many α<κ such that V_κϕ^L(A∩α, α). Since L^V_κ=L_κ and any club in L is still a club in V, {α<κ : L_κϕ(A∩α, α)} is constructible and stationary in L. It follows that L"κ is Σ_n-correctly +1-reflecting". However, if there is a κ which is even Σ_2-correctly +2-reflecting, Miyamoto <cit.> shows that BPFA^<ω_4+2^ℵ_0=ℵ_2 is consistent (Theorem 3.1; note that Miyamoto writes the axiom as Σ(ω_3) and that the hypothesis of the theorem can be forced over any model with a +2-reflecting cardinal).
Schimmerling (<cit.>, two paragraphs following Proposition 1.2), combining his own work with various results from Todorcevic, Velickovic, and Steel, observes that under the forcing axiom for proper posets of size at most (2^ℵ_0)^+ (which certainly holds in Miyamoto's model), □(ℵ_2) and □(ℵ_3) simultaneously fail, so the axiom of determinacy holds in L(ℝ). Thus the consistency strength of Σ_n-correctly +2-reflecting cardinals is at least as high as infinitely many Woodin cardinals. With an added cardinal arithmetic hypothesis, we can get even more consistency strength more straightforwardly (recall that every 1-extendible cardinal is superstrong and a limit of superstrongs, and thus of greater consistency strength than any number of Woodin cardinals; see e.g. Kanamori's Proposition 26.11(a) <cit.>). If κ is Σ_2-correctly +2-reflecting and 2^κ=κ^+, then there are stationarily many 1-extendible cardinals below κ. If κ is additionally Σ_3-correctly +2-reflecting, then it is itself 1-extendible. Since κ is inaccessible, |V_κ|=κ, so |V_κ+1|=2^κ=κ^+ and hence V_κ+1∈ H_κ^++. If ϕ(V_κ+1, κ) is the (Π_1) assertion that V_κ+1 satisfies its own definition, then there are stationarily many Z≺ H_κ^++ of size less than κ containing κ and V_κ+1 such that π_Z(V_κ+1)=V_π_Z(κ)+1 and Z∩κ is transitive. For any such Z, π_Z^-1↾ V_π_Z(κ)+1 witnesses that π_Z(κ) is 1-extendible. By Lemma <ref>, there are stationarily many possible values of π_Z(κ). If κ is Σ_3-correctly +2-reflecting, assume toward a contradiction that κ is not 1-extendible. Then we can carry out the above argument while simultaneously reflecting the Π_2 assertion that κ is not 1-extendible, obtaining a π_Z(κ) which both is and is not 1-extendible. We get a natural upper bound on the strength of correctly reflecting cardinals (analogous to Miyamoto's Proposition 1.2(3)); in doing so, it is convenient to also prove an analogue of Magidor's characterization of supercompactness <cit.>: The following are equivalent for any integer n>1 and any cardinal κ: * κ is supercompact for C^(n-1) * κ is Σ_n-correctly H_λ-reflecting for all cardinals λ≥κ * For every η>κ there is an α<κ with a nontrivial elementary embedding σ: (V_α, ∈, C^(n-1)∩α)→ (V_η,∈ C^(n-1)∩η) such that σ(crit(σ))=κ (1⇒ 2): Given a λ≥κ, Σ_n formula ϕ, an a∈ H_λ such that ϕ(a), and a club C⊆ [H_λ]^<κ, we show that there is a Z∈ C containing a such that V_κϕ(π_Z(a)). Setting ν:=|[H_λ]^<κ|, since κ is supercompact for C^(n-1) there is by Lemma <ref> an embedding j:V→ M witnessing the ν-supercompactness of κ such that Mϕ(a). Then j"H_λ, j"C∈ M, and by the unboundedness of C j"C is a directed subset of j(C) whose union is j"H_λ; since |j"C|≤ν<j(κ) and j(C) is a club of ([H_j(λ)]^<j(κ))^M, j"H_λ∈ j(C). Since π_j"H_λ is simply the inverse of the restriction of j, π_j"H_λ(j(a))=a. Furthermore, by elementarity j(κ) is supercompact for C^(n-1) in M, so by Lemma <ref> V_j(κ)^Mϕ(a). Hence in M there is a set j"H_λ∈ j(C) containing j(a) such that V_j(κ)^Mϕ(π_j"H_λ(j(a))). Therefore in V there is some Z∈ C containing a such that V_κϕ(π_Z(a)), as desired. (2⇒ 3): Let ϕ(β, x, y) denote the assertion that β is an ordinal, x=V_β, and for all θ<β, θ is Σ_n-1-correct if and only if θ∈ y. Then ϕ can be written in Σ_n form, and for any η>κ, ϕ(η, V_η, C^(n-1)∩η) holds. It follows that for any λ large enough so that V_η∈ H_λ, there is a Z≺ H_λ of size less than κ containing η, V_η, and C^(n-1)∩η such that ϕ(π_Z(η), π_Z(V_η), π_Z(C^(n-1)∩η)) holds and Z∩ H_κ is transitive. 
Setting α:=π_Z(η), it is immediate that π_Z(V_η)=V_α and π_Z(C^(n-1)∩η)=C^(n-1)∩α. Then if we define σ:=π_Z^-1↾ V_α, Lemma <ref> implies that σ is elementary (V_α, ∈, C^(n-1)∩α)→ (V_η, ∈, C^(n-1)∩η). By the transitivity condition on Z, crit(σ)=π_Z(κ), so σ maps its critical point to κ, as desired. (3⇒ 1): Given ν, we show that κ is ν-supercompact for C^(n-1). Let η=ν+ω and σ: (V_α, ∈, C^(n-1)∩α) → (V_η, ∈, C^(n-1)∩η) be elementary with critical point δ such that σ(δ)=κ>α. Then we must have α=β+ω for some β such that σ(β)=ν. By standard arguments, if U is the set of all X⊆𝒫_δ(β) such that σ"β∈σ(X), then U is a fine normal δ-complete ultrafilter. Furthermore, if j_U is the associated ultrapower embedding, σ factors as j_U followed by an embedding which fixes all ordinals up to β, so in particular j_U(C^(n-1)∩δ)∩β=σ(C^(n-1)∩δ)∩β=C^(n-1)∩β. Since U∈ V_α, by elementarity σ(U)⊂𝒫(𝒫_κ(ν)) is a fine normal κ-complete ultrafilter such that j_σ(U)(C^(n-1)∩κ)∩ν=C^(n-1)∩ν, so it witnesses that κ is ν-supercompact for C^(n-1). If κ is Σ_n-correctly H_λ-reflecting and λ<θ∈ C^(n-1), then combining Lemma <ref> and Lemma <ref> we get that V_θκ is Σ_n-correctly H_λ-reflecting. If θ is regular and this holds for all λ<θ, then V_θ ZFC+∃κ supercompact for C^(n-1), so correct reflection up to the next regular C^(n-1) cardinal is equiconsistent with supercompactness for C^(n-1). Finally, since Laver functions are highly useful for proving the consistency of forcing axioms, we would like a notion of Laver functions appropriate to Σ_n-correct H_λ-reflection: g:κ→ V_κ is a correctly reflecting Laver function for a Σ_n-correctly H_λ-reflecting cardinal κ <λ iff for all Σ_n formulas ϕ and a∈ H_λ such that Vϕ(a), there are stationarily many Z≺ H_λ of size less than κ such that V_κϕ(π_Z(a)) and g(π_Z(κ))=π_Z(a). If κ is in fact supercompact for C^(n-1), any standard Laver function g will be a correctly reflecting Laver function, since if j:V→ M is a supercompactness embedding such that j(g)(κ)=a and Mϕ(a), then setting Z:=j"H_λ we get M j(g)(π_Z(j(κ)))=π_Z(j(a))ϕ(π_Z(j(a))), which pulled back to V gives the desired properties. In general, however, it is not clear that correctly reflecting Laver functions always exist. Fortunately, they can always be added by the following forcing, developed by Woodin and best exposited by Hamkins <cit.>. If κ is a cardinal, the fast function forcing 𝔽_κ consists of all partial functions p:κ→κ such that: * |dom(p)|<κ * every element of dom(p) is an inaccessible cardinal[Hamkins later gave a slightly different version of fast function forcing in <cit.>, where arbitrary ordinals are allowed in the domain but |p↾γ|<γ is only required for inaccessible γ. This version has some nicer properties, but I have chosen to use the older one because the differences are not relevant here and the presentation in <cit.> is more thorough.] * for all γ∈ dom(p), p"γ⊂γ and |p↾γ|<γ The ordering is given by p≤_𝔽_κ q iff q⊆ p. If G⊂ℙ_κ is a generic filter, we call the partial function f=⋃ G a fast function on κ. If κ is Σ_n-correctly H_λ-reflecting for some λ>κ and f is a V-generic fast function on κ, then in V[f] there is a correctly reflecting Laver function g:κ→ V_κ. Since the definition of a correctly reflecting Laver function implies that κ is Σ_n-correctly H_λ-reflecting, in particular 𝔽_κ preserves that κ is Σ_n-correctly H_λ-reflecting. The following argument is essentially an adaptation of Hamkins's Generalized Laver Function Theorem 2.2 in <cit.> to the correctly reflecting setting. 
Let e:κ→ V_κ be any surjection in V and define g(γ)=e(f(γ))^f↾γ whenever γ∈ dom(f) and e(f(γ)) is an 𝔽_γ-name (where e(f(γ))^f↾γ in fact means the interpretation of e(f(γ)) by the filter corresponding to f↾γ), and g(γ)=∅ otherwise. Assume toward a contradiction that g is not a correctly reflecting Laver function; then for some Σ_n formula ϕ, suitable names ḟ and ġ, and other names ȧ and ḣ, there is a p∈𝔽_κ which forces: "ϕ(ȧ) holds and ḣ:[H_λ^V[ḟ]]^<ω→ H_λ^V[ḟ] is a function such that for all Z≺ H_λ^V[ḟ] of size less than κ such that Z∩ H_κ^V[ḟ] is transitive, V_κ^V[ḟ]ϕ(π_Z(ȧ)), and ġ(π_Z(κ))=π_Z(ȧ), there is a finite u⊂ Z such that ḣ(u)∉Z." Let ψ(p,ȧ, κ, 𝔽_κ) denote the formula asserting that κ is inaccessible and p⊩_𝔽_κϕ(ȧ). Define h̃:[H_λ]^<ω→ H_λ by, for 𝔽_κ-names ẋ_1,…, ẋ_k∈ H_λ, h̃({ẋ_1,…, ẋ_k}) is an 𝔽_κ-name ẏ∈ H_λ such that p⊩ḣ({ẋ_1,…, ẋ_k})=ẏ; for all other finite sets u in its domain, h̃(u)=∅. (Such a ẏ will exist for any finite set of names in the domain by a mixing lemma argument.) Since ψ is a Σ_n formula (inaccessibility is Π_1 and a condition forcing a Σ_n formula is Σ_n), there is (in V) a Z≺ H_λ such that: * |Z|<κ * Z∩ H_κ=V_β for some β<κ (this is possible because the set of all Z with this property can easily be seen to be a club in [H_λ]^<κ for any inaccessible κ≤λ) * p, ȧ, κ, 𝔽_κ∈ Z * Z is closed under h̃ * κ̅:=π_Z(κ) is inaccessible * π_Z(p)⊩_π_Z(𝔽_κ)ϕ(π_Z(ȧ)) Since 𝔽_κ⊂ H_κ, condition (2) implies that π_Z(p)=p (and in fact this holds for all elements of Z∩𝔽_κ). Since we must have β=κ̅ in condition (2) and 𝔽_κ̅⊂ V_κ̅, π_Z(𝔽_κ)=V_κ̅∩𝔽_κ=𝔽_κ̅. Let α<κ be such that e(α)=π_Z(ȧ). By elementarity, p∈𝔽_κ̅, so p"κ̅⊂κ̅ and thus p can be extended in 𝔽_κ to a partial function with κ̅ in its domain. Let f^* be a V-generic fast function including p∪{⟨κ̅,α⟩}. Then π_Z"f^*=f^*↾κ̅, and by Hamkins's Fast Function Factor Lemma, below {⟨κ̅,α⟩} 𝔽_κ factors as 𝔽_κ̅×𝔽_ν,κ, where ν is the least inaccessible above κ̅ and α and 𝔽_ν,κ is the subposet of 𝔽_κ consisting of partial functions with domains contained in [ν,κ), so f^*↾κ̅ is a V-generic fast function on κ̅. We can then apply Lemma <ref> with M=rng(π_Z), N=H_λ, j=π_Z^-1, G=f^*↾κ̅, and H=f^* to get that H_λ[f^*] has an elementary substructure Z[f^*]:={ẋ^f^* : ẋ∈ Z∩ H_λ^𝔽_κ} of size less than κ (since by Hamkins Lemma 1.3 fast function forcing does not change cardinalities) containing ȧ^f^* and κ such that π_Z[f^*](ẋ^f^*)=π_Z(ẋ)^f^*↾κ̅ for all 𝔽_κ-names ẋ∈ Z. We now verify that Z[f^*] is a counterexample to the statement forced by p. To see that Z[f^*]∩ H_κ^V[f^*] is transitive, note that π_Z[f^*]^-1 agrees with π_Z^-1 on the ordinals, so it sends its critical point κ̅ to κ. V_κ^V[f^*]ϕ(π_Z(ȧ)^f^*↾κ̅) because p forces that, so it follows from the equation at the end of the previous paragraph that V_κ^V[f^*]ϕ(π_Z[f^*](ȧ^f^*)). Because ⟨κ̅,α⟩∈ f^*, we have ġ^f^*(κ̅)=e(f^*(κ̅))^f^*↾κ̅=e(α)^f^*↾κ̅=π_Z(ȧ)^f^*↾κ̅=π_Z[f^*](ȧ^f^*). Finally, if u={u_0,…, u_k-1}⊂ Z[f^*] and for each i<k u̇_i∈ Z is a name for u_i, then p forces that h̃({u̇_0, …, u̇_k-1})∈ Z is a name for ḣ^f^*(u), so Z[f^*] is closed under ḣ^f^*, a contradiction. Thus g is a correctly reflecting Laver function.
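To make the definition of 𝔽_κ concrete, here is a small illustrative example (ours, with hypothetical inaccessibles γ_0<γ_1<γ_2<κ): the partial function p={⟨γ_1, γ_0⟩, ⟨γ_2, γ_2+1⟩} is a condition, since its domain consists of inaccessibles, p"γ_1=∅⊂γ_1 with |p↾γ_1|=0<γ_1, and p"γ_2={γ_0}⊂γ_2 with |p↾γ_2|=1<γ_2. By contrast, q={⟨γ_1, γ_2⟩, ⟨γ_2, 0⟩} is not a condition, since q"γ_2={γ_2}⊄γ_2. Note that values at or above the top of the domain are unconstrained, while below any point of the domain a condition must be both small and closed; it is exactly this closure that makes possible the factorization 𝔽_κ̅×𝔽_ν,κ used in the proof above.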
CHAPTER: Σ_N-CORRECT FORCING AXIOMS § STATEMENT AND CONSISTENCY To motivate the statement of Σ_n-correct forcing axioms, we first consider the following characterization of classical forcing axioms, obtained from generalizing a characterization in Jensen's handwritten notes (<cit.> Lemma 1) beyond the κ=ω_2 case: For any forcing class Γ consisting of separative posets and any regular cardinal κ>ω_1, the following are equivalent: * FA_<κ(Γ) * For every ℙ∈Γ, every regular γ>κ such that ℙ∈ H_γ, and X⊂ H_γ such that |X|<κ, there is a transitive structure N̅ and an elementary embedding σ:N̅→ H_γ such that: * |N̅|<κ * X∪{ℙ}⊆ rng(σ) * there is an N̅-generic filter G̅⊆ℙ̅:=σ^-1(ℙ) We closely follow Jensen's arguments. (2⇒ 1): Let X be any collection of fewer than κ dense subsets of ℙ∈Γ and γ be an arbitrary regular cardinal large enough for the hypotheses of (2). Given N̅, σ, and G̅ from (2), let G be the filter on ℙ generated by σ"G̅. Then for any D∈ X, by (2)(b) there is a D̅∈N̅ such that σ(D̅)=D, and by elementarity D̅ is a dense subset of ℙ̅. Thus there is a p∈G̅∩D̅ by N̅-genericity, so σ(p)∈ G∩ D and hence G meets every set in X. (1⇒ 2): Given ℙ∈Γ, assume without loss of generality that ℙ∈ X≺ H_γ and that X∩κ is transitive. Let G⊆ℙ be a filter meeting all dense subsets D of ℙ such that D∈ X (possible since |X|<κ). Now we cannot simply collapse X to obtain N̅, since G might meet some such D only at conditions which are not themselves elements of X, and then the image of G under the collapsing map would not be N̅-generic. To avoid this issue, we thicken X before collapsing it. Let Υ:={u̇∈ X∩ H_γ^ℙ : {p∈ℙ : ∃ x∈ H_γ, p⊩_ℙu̇=x̌} is dense in ℙ} For any such u̇, the dense set witnessing u̇∈Υ is in X by elementarity. Then by the choice of G, there is a p∈ G and a unique x∈ H_γ such that p⊩_ℙu̇=x̌. Define (for the remainder of this proof only) u̇^G to be this x and Y:={u̇^G : u̇∈Υ}. To see that Y is an elementary substructure of H_γ, suppose H_γ∃ x ϕ(x, a_1,…, a_n) where a_i=u̇_i^G∈ Y for each i. Let p∈ G be a common lower bound of the conditions in G forcing u̇_i=ǎ_i. Then p⊩ H_γ^V∃ xϕ(x, u̇_1,…,u̇_n), so by the mixing lemma there is a name u̇_0 such that the set of conditions forcing u̇_0 to equal some a_0∈ H_γ with H_γ^Vϕ(ǎ_0, u̇_1,…,u̇_n) is dense below p. By X≺ H_γ, there is such a u̇_0 in X and hence in Υ, so u̇_0^G∈ Y is such that H_γϕ(u̇_0^G, a_1, …, a_n). By the Tarski-Vaught criterion, Y≺ H_γ. Thus in particular it is extensional, so let N̅ be the transitive collapse of Y, σ=π_Y^-1, and G̅:=π_Y"G. We now verify the conditions in (2). For (a), |X|<κ by hypothesis and each element of Y is generated by a name in X, so |Y|≤ |X|. For (b), X⊆ Y=rng(σ) because for any x∈ X, x̌∈Υ and x̌^G=x; ℙ∈ rng(σ) because we assumed ℙ∈ X. Finally, for (c), any dense subset of ℙ̅:=π_Y(ℙ) in N̅ is of the form π_Y(D) for some dense subset of ℙ D=Ḋ^G∈ Y; thus if we can produce a p∈ G∩ D∩ Y, π_Y(p)∈G̅∩π_Y(D), establishing the N̅-genericity of G̅. Towards this, assume without loss of generality that the name Ḋ is such that ⊩_ℙ“Ḋ is a dense subset of ℙ̌ in the ground model”. Then it is forced that Ḋ meets the canonical name for the generic filter Ġ, so by the mixing lemma let ṗ∈Υ be such that ⊩_ℙṗ∈Ḋ∩Ġ and set p:=ṗ^G∈ Y. Then by the definition of ṗ^G, there is a q∈ G such that q⊩_ℙ ṗ=p̌ ∧ Ḋ=Ď. It follows immediately from the choice of ṗ that q⊩_ℙp̌∈Ď, which can only happen if p∈ D. Furthermore, q⊩_ℙp̌∈Ġ, so by the definition of Ġ the set {r∈ℙ : r≤ p} is dense below q. Since ℙ was assumed to be separative, this implies that q≤ p, so p∈ G. Thus p∈ G∩ D∩ Y, as desired.
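A toy case may clarify the role of Υ (this illustration is ours and is not needed in the sequel): let ℙ be the forcing to add a single Cohen real c and let u̇ be the canonical name for the least n with c(n)=1. Every condition can be extended to one deciding u̇ completely, so the set {p∈ℙ : ∃ x, p⊩_ℙu̇=x̌} is dense and u̇∈Υ whenever u̇∈ X. The condition in G deciding u̇ need not belong to X, but the value u̇^G is nevertheless placed into Y; it is precisely this closure of Y under the G-values of names from X that makes the collapsed filter G̅ fully N̅-generic.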
We are now ready to state the central axiom. We would like to generalize FA^+ by allowing arbitrary names and arbitrary (or at least arbitrary Σ_n) provably Γ-persistent formulas rather than simply the stationarity of subsets of ω_1, but as noted toward the end of Section <ref>, this is inconsistent, since most nontrivial forcing classes can make arbitrary sets smaller than some definable cardinal (such as ℵ_2 or 2^ℵ_0) of the forcing extension. To avoid this issue, we need to shrink our names down to a reasonable size before interpreting them. The previous lemma guides us on how to do that. For Γ a forcing class, n a positive integer, and κ a regular uncountable cardinal, the Σ_n-correct forcing axiom Σ_n CFA_<κ(Γ) is the statement that for all posets ℙ∈Γ, ℙ-names ȧ, sets b, regular cardinals γ>κ such that ℙ, ȧ, b∈ H_γ, X⊂ H_γ with |X|< κ, and provably Γ-persistent Σ_n formulas ϕ such that ⊩_ℙϕ(ȧ,b̌), there is a transitive structure N with an elementary embedding σ:N→ H_γ such that
* |N|<κ
* ȧ, b, ℙ, and all elements of X are in the range of σ
* rng(σ)∩κ is transitive
* there is an N-generic filter F⊂σ^-1(ℙ) such that ϕ(σ^-1(ȧ)^F, σ^-1(b)) holds.
If κ=δ^+ for some cardinal δ, Σ_n CFA_δ(Γ) is synonymous with Σ_n CFA_<κ(Γ). As with ZFC_δ (Definition <ref>), formally Σ_n CFA_<κ(Γ) is expressed in the language of set theory with a constant symbol for κ, though for many forcing classes Γ the resulting theory will be able to prove a particular value for κ. The parameter b can of course be incorporated into the name ȧ, and as such we will frequently omit it, but it is included here to emphasize that such parameters are permissible. We refer to Σ_n-correct forcing axioms for certain common forcing classes by the obvious modifications of the names for the corresponding classical forcing axioms: Σ_n CMA is Σ_n CFA_<2^ℵ_0(ccc). Σ_n CPFA, Σ_n CMM, and Σ_n CSCFA are Σ_n CFA_<ω_2(Γ) where Γ is the class of proper, stationary set preserving, or subcomplete forcing, respectively. If Δ⊆Γ and λ≤κ, then FA_<κ(Γ) implies FA_<λ(Δ). However, these downward implications can fail even for Σ_2-correct forcing axioms. The class of proper forcing contains the classes of countably closed and ccc posets, but as we will see in Section <ref>, the Σ_2-correct proper forcing axiom, Σ_2-correct countably closed forcing axiom, and Σ_2-correct Martin's Axiom all have mutually inconsistent implications for the value of the continuum. Similarly, the statement that ω_2 is the third infinite cardinal can be expressed as a provably ccc-persistent Σ_2 formula, so Σ_2 CFA_<ω_2(ccc) implies that it is true of some ordinal in a transitive structure of cardinality at most ω_1 and thus Σ_2 CFA_<ω_2(ccc) is inconsistent, even though (as we will see shortly) Σ_2 CMA is consistent (and implies that the continuum is very large). We do, however, have the following easy implications: Σ_n CFA_<κ(Γ) implies:
* FA_<κ(Γ)
* Σ_n MP_Γ(H_κ)
* Σ_m CFA_<κ(Γ) for all m≤ n
(1): We apply Lemma <ref>, since the statement of Σ_n CFA_<κ(Γ) is an obvious strengthening of (2) in that lemma. (2): For any provably Γ-persistent Σ_n formula ϕ, b∈ H_κ, and ℙ∈Γ such that ⊩_ℙϕ(b̌), let X=trcl({b}). Then if σ is as in the statement of the axiom applied to X, ϕ, b, and ȧ:=∅, X⊆ rng(σ) implies that σ(b)=b, so ϕ(b) holds in V. (3): Every Σ_m formula is Σ_n. We now turn to proving the consistency of Σ_n-correct forcing axioms.
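Before doing so, it may help to spell out the complexity claim used above for the inconsistency of Σ_2 CFA_<ω_2(ccc). One way to render "α is the third infinite cardinal" (the particular formula below is my own phrasing; the source does not display one) is

α is a cardinal ∧ ω_1<α ∧ ∀γ<α (ω_1<γ → γ is not a cardinal).

The first two conjuncts are Π_1, and the third is a bounded universal quantifier over a Σ_1 matrix, hence Σ_1 (by Collection, bounded quantifiers do not raise complexity); the conjunction is therefore Δ_2, in particular Σ_2, and it is provably ccc-persistent because ccc forcing preserves all cardinals.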
First, we characterize the forcing classes for which the consistency proof works correctly (modeled on Asperó and Bagaria <cit.>, Lemma 2.2): A forcing class Γ is n-nice iff:
* Γ contains the trivial forcing
* Each ℙ∈Γ preserves ω_1
* Γ is closed under restrictions, i.e., for any ℙ∈Γ and p∈ℙ, ℙ↾ p:={q∈ℙ | q≤ p}∈Γ
* If ℙ∈Γ and ⊩_ℙℚ̇∈Γ, then ℙ*ℚ̇∈Γ
* For every inaccessible cardinal κ and every forcing iteration ⟨⟨ℙ_α, ℚ̇_α⟩ | α<κ⟩ of posets in V_κ∩Γ with some suitable support, if ℙ_κ is the corresponding limit, then:
* ℙ_κ∈Γ
* ⊩_ℙ_αℙ_κ/ℙ_α∈Γ for all α<κ
* If ℙ_α∈ V_κ for all α<κ, then ℙ_κ is the direct limit of ⟨ℙ_α | α<κ⟩ and satisfies the κ-cc
* Γ is Σ_n definable
Most forcing classes whose corresponding forcing axioms are commonly studied are 2-nice. Strictly speaking, only the last three conditions, regarding iterability and definability, are necessary for the consistency proof, but the first three are convenient and hold for most reasonable forcing classes. If κ is supercompact for C^(n-1) and Γ is an n-nice forcing class, then there is a κ-cc forcing ℙ_κ∈Γ of size κ such that if G⊂ℙ_κ is V-generic, then V[G]⊨Σ_n CFA_<κ(Γ). Let f be a Laver function on κ for C^(n-1) and let ℙ_κ be the standard Baumgartner iteration of Γ of length κ derived from f. That is, we recursively construct a sequence of names for posets in Γ ⟨ℚ̇_α | α<κ⟩ and take ⟨ℙ_α | α≤κ⟩ to be the iteration of it with support suitable to Γ, where ℚ̇_α=f(α) for α such that f(α) is a ℙ_α-name for a forcing in Γ, and otherwise ℚ̇_α is a ℙ_α-name for the trivial forcing. By the definition of n-niceness, ℙ_κ satisfies the κ-cc (so in particular κ remains regular in the forcing extension) and |ℙ_κ|=κ. Let G⊆ℙ_κ be V-generic. In V[G], given a poset ℙ∈Γ, a ℙ-name ȧ, a parameter b, a regular cardinal γ>κ such that ℙ, ȧ, b∈ H_γ^V[G], a set X⊂ H_γ^V[G] of size less than κ, and a provably Γ-persistent Σ_n formula ϕ such that ⊩_ℙϕ(ȧ,b̌), let ℙ̇, ä, ḃ∈ H_γ^V (applying Lemma <ref>) be ℙ_κ-names for the corresponding objects. Then there is some p∈ G such that p⊩_ℙ_κℙ̇∈Γ and (p, 1_ℙ)⊩_ℙ_κ*ℙ̇ϕ(ä,ḃ) (interpreting ä and ḃ as ℙ_κ*ℙ̇-names in the obvious way). Applying Lemma <ref>, there is an elementary embedding j:V→ M witnessing that κ is 2^γ-supercompact such that j(f)(κ)=ℙ̇ and M⊨“p⊩_ℙ_κℙ̇∈Γ ∧ (p, 1_ℙ)⊩_ℙ_κ*ℙ̇ϕ(ä,ḃ)”. Then because j(ℙ_κ) is constructed from j(f) in the same way that ℙ_κ is from f, the κth poset used in the iteration is ℙ̇ (and since it is forced to be in Γ, it is in fact used). Furthermore, for all α<κ, ℙ_α∈ V_κ and thus j(ℙ_α)=ℙ_α, so j(ℙ_κ)=ℙ_κ*ℙ̇*ℝ̇ for some ℙ_κ*ℙ̇-name for a poset ℝ̇. Let H*K⊆ℙ*ℝ̇ be V[G]-generic. From the choice of M, M[G][H]⊨ϕ(ȧ^H, b), and by one of the iteration conditions in the definition of n-nice forcing classes, ℝ̇^G*H=j(ℙ_κ)/(ℙ_κ*ℙ̇)∈Γ^M[G][H]. Since ϕ is provably Γ-persistent and M[G][H]⊨ ZFC, M[G][H][K]⊨ϕ(ȧ^H, b). (It is here that we need ϕ to be provably persistent rather than merely forceably necessary over V[G], although see Lemma <ref>.) By Lemma <ref>, j extends (in V[G][H][K]) to an elementary embedding j^*:V[G]→ M[G][H][K] defined by j^*(ẋ^G)=j(ẋ)^G*H*K for all ℙ_κ-names ẋ. By the closure condition on M, j↾ H_γ^V∈ M. By Lemma <ref>, every element of H_γ^V[G] is of the form ẋ^G for some ẋ∈ H_γ^V. It follows that σ':=j^*↾ H_γ^V[G]∈ M[G][H][K], since it is definable from G*H*K and the values of j on ℙ_κ-names in H_γ^V. By Lemma <ref>, σ':H_γ^V[G]→ H_j(γ)^M[G][H][K] is an elementary embedding such that σ'^-1(j^*(ȧ))=ȧ, σ'^-1(j^*(b))=b, and σ'^-1(j^*(ℙ))=ℙ.
Since |X|<κ=crit(j^*), j^*(X)=j^*"X, so in particular j^*(X)⊆ rng(σ'). Finally, since σ' maps its critical point κ to j(κ), rng(σ')∩ j(κ) is transitive. We have thus shown that M[G][H][K] satisfies the statement: "There is a transitive structure H_γ^V[G] of size less than j(κ) with an elementary embedding σ':H_γ^V[G]→ H_j(γ)^M[G][H][K] such that {j^*(ȧ), j^*(b), j^*(ℙ)}∪ j^*(X)⊆ rng(σ'), rng(σ')∩ j(κ) is transitive, and there is an H_γ^V[G]-generic filter H⊆σ'^-1(j^*(ℙ)) such that ϕ(σ'^-1(j^*(ȧ))^H, σ'^-1(j^*(b))) holds." By the elementarity of j^*, in V[G] there must be a transitive structure N, an elementary embedding σ:N→ H_γ^V[G], and an N-generic filter F⊆σ^-1(ℙ) witnessing the truth of the desired instance of the axiom. It is straightforward to check that if Γ can (Γ-necessarily) collapse arbitrary cardinals to ω_1, then κ=ℵ_2^V[G], and if Γ can add arbitrarily many reals, then κ=(2^ℵ_0)^V[G]. In the case where n=2, ℙ_κ is exactly the poset used to force FA_<κ(Γ), so by Lemma <ref> we have: For any 2-nice forcing class Γ, the standard model of FA_<κ(Γ) (constructed as a forcing extension of a model where κ is supercompact) in fact satisfies Σ_2 CFA_<κ(Γ). We can also obtain models of what one might call the fully correct forcing axiom: If the Vopěnka scheme is consistent and Γ is an m-nice forcing class for some m, then it is consistent that there is a regular κ>ω_1 such that Σ_n CFA_<κ(Γ) holds for all (metatheoretic) natural numbers n. By Lemma <ref>, for each n, any model of the Vopěnka scheme has a cardinal κ_n supercompact for C^(n-1) and thus by Theorem <ref> a forcing extension where Σ_n CFA_<κ_n(Γ) holds and κ_n is regular. By the compactness theorem, there is a model with a single regular κ such that Σ_n CFA_<κ(Γ) holds for each n. For the plus versions of forcing axioms, we can obtain filters which interpret not just a single name for a stationary subset of ω_1 as an actual stationary set, but all names in a small transitive structure (see e.g. Remark 42 of <cit.>). This easily generalizes to any other single formula in the Σ_n-correct case: If Σ_n CFA_<κ(Γ) holds for some Σ_n-definable Γ, then for all ℙ∈Γ, cardinals γ such that ℙ∈ H_γ, provably Γ-persistent Σ_n formulas ϕ, and X⊆ H_γ such that |X|<κ, there is a transitive structure N with an elementary embedding σ: N→ H_γ such that X∪{ℙ}⊆ rng(σ) and an N-generic filter F such that for all σ^-1(ℙ)-names ȧ∈ N such that ⊩_ℙϕ(σ(ȧ)), ϕ(ȧ^F) holds. We call such an F a ϕ-correct N-generic filter. Given ℙ, γ, ϕ, and X, let γ' be a regular cardinal with γ'>2^<γ and let S={ẋ∈ H_γ^ℙ | ⊩_ℙϕ(ẋ)}. If we take ς:={⟨ẋ, 1_ℙ⟩ | ẋ∈ S} to be the canonical name for the set of interpretations of elements of S (not to be confused with Š), then ⊩_ℙ∀ x∈ςϕ(x), and ∀ x∈ yϕ(x) is a provably Γ-persistent Σ_n formula in y because ϕ is a provably Γ-persistent Σ_n formula. ς, H_γ∈ H_γ' by the choice of γ', so we can find an embedding σ':N'→ H_γ' with X∪{ℙ, ς, H_γ}⊆ rng(σ') and an N'-generic filter F⊆ℙ̅:=σ'^-1(ℙ) such that ϕ(a) holds for all a∈σ'^-1(ς)^F. Observe that σ'^-1(ς)^F={ȧ^F | ȧ∈ N' ∧ σ'(ȧ)∈ S}. Therefore if we set N:=σ'^-1(H_γ) and σ=σ'↾ N, then σ is elementary by Lemma <ref>, σ(ℙ̅)=ℙ, F is N-generic, and for every ȧ∈ N such that ⊩_ℙϕ(σ(ȧ)), σ(ȧ)∈ S, so ȧ^F∈σ'^-1(ς)^F and thus ϕ(ȧ^F) holds. We might wish to obtain filters which are fully Σ_n-correct; that is, they correctly interpret all names in N with respect to all provably Γ-persistent Σ_n formulas. However, difficulties arise with finding a formula we can apply the Σ_n-correct forcing axiom to in order to obtain such a filter.
We might attempt to take the fragment T of the Σ_n forcing relation for ℙ involving only names in H_γ and formulas which are provably Γ-persistent. The problems appear when we attempt to choose a particular forceable property of T to reflect, since we need to include some information about T in order for ZFC to prove that the formulas in it are preserved by further forcing. If we try to use "for all ⟨ϕ, ȧ⟩∈ T, ϕ is provably Γ-persistent and ϕ(ȧ) holds", then we cannot prove in ZFC that the ϕ(ȧ) actually continue to hold in further forcing extensions, since ZFC does not prove its own soundness. We can avoid this issue by instead reflecting "for all ⟨ϕ, ȧ⟩∈ T, ϕ(ȧ) holds and is preserved by all forcing in Γ"; however, as noted in Lemma <ref>, this adds additional formula complexity. If Σ_n+2 CFA_<κ(Γ) holds for some Σ_n-definable Γ, then for all ℙ∈Γ, regular cardinals γ such that ℙ∈ H_γ, and X⊆ H_γ such that |X|<κ, there is a transitive structure N with an elementary embedding σ: N→ H_γ such that X∪{ℙ}⊆ rng(σ) and a Σ_n-correct N-generic filter F, i.e. for all σ^-1(ℙ)-names ȧ∈ N and all Σ_n formulas ϕ such that ⊩_ℙ“ϕ(σ(ȧ)) holds and is preserved by all further forcing in Γ", ϕ(ȧ^F) holds. Consequently, if Σ_n CFA_<κ(Γ) holds for all n∈ω, for all n we can find a structure with a Σ_n-correct generic filter. Given ℙ, γ, and X, let T={⟨ϕ, ẋ⟩ | ϕ is a Σ_n formula, ẋ∈ H_γ^ℙ, and ⊩_ℙ□_Γϕ(ẋ)}, and let τ be the ℙ-name such that for any filter F, τ^F={⟨ϕ, ẋ^F⟩ | ⟨ϕ, ẋ⟩∈ T}. Since □_Γϕ is Π_n+1, the statement "□_Γϕ(x) holds for all ⟨ϕ, x⟩∈τ" can be expressed as a Π_n+1 formula ψ(τ), which is forced by ℙ. Because □_Γϕ is provably Γ-persistent, ψ is as well. If γ' is sufficiently large that τ, H_γ∈ H_γ', then there is an elementary embedding σ': N'→ H_γ' such that X∪{ℙ, τ, H_γ}⊆ rng(σ') and an N'-generic filter F⊆ℙ̅:=σ'^-1(ℙ) such that ψ(σ'^-1(τ)^F) holds. Let N:=σ'^-1(H_γ). Then as in the proof of Proposition <ref>, σ'^-1(τ)^F consists of all pairs ⟨ϕ, ȧ^F⟩ of Σ_n formulas ϕ and interpretations ȧ^F, where ȧ∈ N is a ℙ̅-name such that ℙ forces that ϕ(σ'(ȧ)) holds and continues to hold after all further Γ-forcing. By ψ(σ'^-1(τ)^F), ϕ(ȧ^F) holds for each such pair. Thus if we set σ=σ'↾ N, σ:N→ H_γ is an elementary embedding with all the desired properties and F is a Σ_n-correct N-generic filter.

§ EQUIVALENT FORMULATIONS

The official (Jensen-style) formulation of Σ_n-correct forcing axioms given in the previous section underscores the fact that they are generalizations of FA^+(Γ), but can be somewhat unwieldy. In this section, we explore more streamlined presentations. First, Philipp Schlicht and Christopher Turner <cit.> have shown that classical forcing axioms are equivalent to "name principles" asserting the existence of filters interpreting a name to have a certain property. As such, the N-genericity conditions may be omitted: The Schlicht-Turner[Schlicht and Turner would perhaps be less likely to recognize this axiom than the namesakes of the other formulations would be to recognize theirs, since their work mainly focused on identifying the names which can consistently be interpreted to have certain properties, whereas my approach is to reflect arbitrary names to names small enough to have no issues. However, their work was very helpful to me in clarifying the relationship between genericity and interpretations of names, and I had to call this formulation something.]
(S-T) formulation of Σ_n CFA_<κ(Γ) is the assertion that for all provably Γ-persistent Σ_n formulas ϕ, posets ℙ∈Γ, ℙ-names ȧ such that ⊩_ℙϕ(ȧ), regular cardinals γ>κ such that H_γ contains ℙ and ȧ, and sets X⊂ H_γ of size less than κ, there is a Z≺ H_γ of size less than κ containing ℙ, ȧ, and all elements of X such that Z∩ H_κ is transitive and a filter F⊆π_Z(ℙ) such that ϕ(π_Z(ȧ)^F) holds. In fact, we can go even further and dispense with filters altogether, yielding the following elegant principle: The Miyamoto-Asperó (M-A) formulation of Σ_n CFA_<κ(Γ) is the assertion that for all provably Γ-persistent Σ_n formulas ϕ, all cardinals λ≥κ, and all b∈ H_λ such that ⊩_ℙϕ(b̌) for some ℙ∈Γ, there are stationarily many Z≺ H_λ of size less than κ containing b such that ϕ(π_Z(b)) holds. Miyamoto's Theorem 2.5(2) in <cit.> is essentially a bounded version of this principle in the special case where n=1, κ=ω_2, and Γ is the class of proper forcing. David Asperó, possibly inspired by Miyamoto, stated the bounded version in generality (<cit.>, Definitions 1.3 and 1.5). Finally, we have a generic elementary embedding characterization, reminiscent of Lemma <ref>. The following definition is helpful in stating it and related results: If ℚ is a forcing poset, ϕ is a formula in the language of set theory, ȧ is a ℚ-name, and κ<γ are regular cardinals, we say that a generic elementary embedding j:V→ M (where M is a possibly ill-founded class model with transitive well-founded part) witnesses the <κ-forcing axiom for (ℚ, ϕ, ȧ, γ) iff:
* H_γ^V is in the wellfounded part of M
* |H_γ^V|^M<j(κ)
* j↾ H_γ^V∈ M
* crit(j)=κ
* M contains a V-generic filter H⊆ℚ such that M⊨ϕ(ȧ^H)
The Woodin-Cox (W-C) formulation of Σ_n CFA_<κ(Γ) is the assertion that for every provably Γ-persistent Σ_n formula ϕ, ℚ∈Γ, ℚ-name ȧ such that ⊩_ℚϕ(ȧ), and regular γ>|𝒫(ℚ)∪ trcl(ȧ)∪κ|, there is a generic elementary embedding j:V→ M which witnesses the <κ-forcing axiom for (ℚ, ϕ, ȧ, γ). Woodin showed that classical forcing axioms are equivalent to the existence of suitable generic embeddings under the additional assumption of a proper class of Woodin cardinals, in which case the codomain M can be taken to be wellfounded, with the generic elementary embedding produced by stationary tower forcing (<cit.>, Theorem 2.53). Sean Cox generalized Woodin's result by proving that even without the Woodin cardinals we can still get an equivalence with embeddings into ill-founded models, and that this also holds for FA^+ν for any ν≤ω_1 (<cit.>, Theorem 43). (Note that since there is a Σ_2 characterization of Woodin cardinals and there are unboundedly many below any supercompact κ, "there is a proper class of Woodin cardinals" is a Π_3 sentence which holds in V_κ. Thus if κ is in fact supercompact for C^(2) (or merely supercompact and Σ_3-correct), then there must be a proper class of Woodins in V and any set forcing extension of V, so in particular for n≥ 3 our consistency proof of Σ_n CFA_<κ(Γ) in fact produces a model of the hypotheses of Woodin's Theorem 2.53.) The following are equivalent for all positive integers n, regular cardinals κ>ω_1, and forcing classes Γ:
* The official (Jensen-style) formulation of Σ_n CFA_<κ(Γ) given in Section <ref>
* The Miyamoto-Asperó formulation of Σ_n CFA_<κ(Γ)
* The Woodin-Cox formulation of Σ_n CFA_<κ(Γ)
* The Schlicht-Turner formulation of Σ_n CFA_<κ(Γ)
(1⇒ 2): Fix ϕ, λ, b, and ℙ as in the M-A formulation.
We plan to apply the Jensen formulation to the formula ϕ and the name b̌; the only difficulty is ensuring that the set of possible ranges of the embedding σ is stationary in [H_λ]^<κ. For this, let h:[H_λ]^<ω→ H_λ; we will produce a Z≺ H_λ of size less than κ, containing b, and closed under h such that Z∩ H_κ is transitive and ϕ(π_Z(b)) holds. Let γ be a regular cardinal large enough that H_λ∈ H_γ. Let X:={H_λ, h, ℙ, b}; then if N is transitive and smaller than κ and σ:N→ H_γ is elementary with X⊂ rng(σ), rng(σ)∩ H_κ transitive, and ϕ(σ^-1(b)), define Z:=rng(σ)∩ H_λ. That Z is smaller than κ and transitive below κ is immediate. That ϕ(π_Z(b)) holds follows from the fact that π_Z is a restriction of σ^-1, while Z≺ H_λ is a consequence of Lemma <ref>. Finally, if Z̅ and g are such that σ(Z̅)=H_λ and σ(g)=h, then by elementarity Z̅ is closed under g. Since every finite subset of Z is of the form {σ(x_1),…, σ(x_k)} for some x_1,…, x_k∈Z̅ and g({x_1,…, x_k})∈Z̅, by elementarity h({σ(x_1),…, σ(x_k)})=σ(g({x_1,…, x_k}))∈ Z. Thus Z has all the desired properties, so the M-A formulation holds. (2⇒ 3): We closely follow Cox's proof of Theorem 43 (<cit.>). Let ϕ, ℚ, ȧ, and γ be as in the statement of the W-C formulation. First, observe that for any filter F, ȧ^F is Δ_1-definable from ȧ and F. It follows that, if 𝒟∈ H_γ is the set of all dense subsets of ℚ in V, the statement ψ(ȧ,ℚ, 𝒟):="there exists a 𝒟-generic filter F⊆ℚ such that ϕ(ȧ^F) holds" is a Σ_n formula which is forced by ℚ; furthermore, since ϕ is provably Γ-persistent and forcing does not destroy F or change the interpretation of names, ψ is provably Γ-persistent as well. Then if R is the set of all Z≺ H_γ of size less than κ such that ℚ, ȧ∈ Z, Z∩ H_κ transitive, and ψ(π_Z(ȧ), π_Z(ℚ), π_Z(𝒟)) holds, by the M-A formulation R is stationary in [H_γ]^<κ. Let 𝔹 be the power set of R modulo the restriction of the nonstationary ideal and let U⊂𝔹 be a V-generic ultrafilter. Taking j: V→ M to be the generic ultrapower embedding derived from U, that j witnesses the <κ-forcing axiom for (ℚ, ϕ, ȧ, γ) follows easily from the basic theory of generic ultrapowers, but the arguments involved will be briefly indicated for the benefit of readers unfamiliar with that theory. j"H_γ^V⊆ [id_R]_U because any x∈ H_γ^V is in all but nonstationarily many Z∈ R, so by Łoś's Theorem j(x)=[Z↦ x]_U∈ [id_R]_U for all such x; furthermore the reverse inclusion follows from the normality of U (which in turn follows from the normality of the club filter and the genericity of U), so [id_R]_U=j"H_γ^V. Similarly, the function Z↦π_Z"Z on R is the coordinatewise transitive collapse of id_R, so [Z↦π_Z"Z]_U is, in M, the transitive collapse of j"H_γ^V, which is isomorphic to H_γ^V; since H_γ^V is a transitive set from a well-founded model, this transitive isomorphic copy must lie in the well-founded part of M, and hence if we take that well-founded part to be transitive the isomorphic copy will be equal to H_γ^V, and so H_γ^V∈ wfp(M). Furthermore, since Z↦π_Z"Z is coordinatewise smaller than κ, in M H_γ^V is smaller than j(κ)=[Z↦κ]_U. For condition (c) in the definition of witnessing a forcing axiom, observe that coordinatewise Z↦π_Z^-1 is the inverse of the Mostowski isomorphism of id_R, so [Z↦π_Z^-1]_U is the inverse Mostowski isomorphism of j"H_γ^V, i.e. j↾ H_γ^V.
For (d), Z↦ Z∩κ is a function on R everywhere less than κ but less than any particular α<κ only nonstationarily often, so j must be discontinuous at κ; it fixes each ordinal less than κ because the κ-additivity of the nonstationary ideal and the genericity of U imply that U must be κ-complete over V. Setting π:=(j↾ H_γ^V)^-1=[Z↦π_Z]_U, further invocations of Łoś show that [Z↦π_Z(ℚ)]_U=π(j(ℚ))=ℚ, [Z↦π_Z(ȧ)]_U=π(j(ȧ))=ȧ, [Z↦π_Z(𝒟)]_U=𝒟, and [Z↦ G_Z]_U (where G_Z is the filter whose existence is asserted by ψ(π_Z(ȧ), π_Z(ℚ), π_Z(𝒟))) is a 𝒟-generic (hence V-generic) filter on ℚ which interprets ȧ correctly. Thus j witnesses the <κ-forcing axiom for (ℚ, ϕ, ȧ, γ). (3⇒ 4): Given ϕ, ℙ, ȧ, γ, and X as in the S-T formulation, let j: V→ M be a generic elementary embedding which witnesses the <κ-forcing axiom for (ℙ, ϕ, ȧ, γ). (If γ is not large enough to meet the requirements of the W-C formulation, we can replace it with a larger γ and then easily draw all the desired conclusions about our original γ.) Since |X|<κ=crit(j), j(X)=j"X, so Z':= j"H_γ^V is an elementary substructure of H_j(γ)^M containing j(ℙ), j(ȧ), and all elements of j(X). As elements of H_κ are not moved by j and sets outside of H_κ are mapped to sets outside of H_j(κ)^M, Z'∩ H_j(κ)^M is transitive. Finally, since π_Z'(j(ℙ))=ℙ and π_Z'(j(ȧ))=ȧ, there is a filter H⊆π_Z'(j(ℙ)) in M such that ϕ(π_Z'(j(ȧ))^H) holds. Pulling all of this back to V, there is a Z≺ H_γ^V and a filter F⊆π_Z(ℙ) witnessing the truth of the S-T formulation of the axiom. (4⇒ 1): Given ℙ, ȧ, γ, X, and ϕ as in the Jensen formulation, let δ be large enough that H_γ∈ H_δ and let χ(x, y, H) denote the assertion that ϕ(x) holds and H is a y-generic filter. Then if Ġ is the canonical ℙ-name for the generic filter, ⊩_ℙχ(ȧ, Ȟ_γ, Ġ) and χ is a provably Γ-persistent Σ_n formula (as asserting that a filter is generic over a transitive structure only requires quantifying over the filter and the structure). It follows from the S-T formulation that there is a Z≺ H_δ of size less than κ containing ℙ, ȧ, H_γ, Ġ, and all elements of X such that Z∩ H_κ is transitive and a filter F⊂π_Z(ℙ) such that χ(π_Z(ȧ)^F, π_Z(H_γ), π_Z(Ġ)^F) holds. Since by elementarity π_Z(Ġ) is the canonical π_Z(ℙ)-name for the generic filter, π_Z(Ġ)^F=F, so setting N:=π_Z(H_γ) and σ:=π_Z^-1↾ N, F is N-generic and ϕ(σ^-1(ȧ)^F) holds. By Lemma <ref> σ is an elementary embedding, and all other desired properties of N and σ are immediate from the choice of Z, so the Jensen formulation holds.

§ DO Σ_N-CORRECT FORCING AXIOMS FORM A STRICT HIERARCHY IN N?

It is natural to ask whether increasing the value of n in the Σ_n-correct forcing axioms produces a strictly stronger axiom, or if Σ_n CFA_<κ can ever imply Σ_n+1 CFA_<κ. In the Σ_1 vs Σ_2 case, this is difficult to answer in general, but for most specific forcing classes of interest, results in later chapters will imply that we do get a separation. At n=2, we can prove that moving one more level up produces strictly stronger axioms for a wide range of forcing classes, including for example all classes which can add arbitrarily many reals or collapse arbitrarily large cardinals. Let Γ be a 2-nice forcing class and Δ a forcing class which can destroy arbitrarily many inaccessibles, i.e. for any set X⊂ Ord, there is a poset in Δ which forces "X̌ does not contain any inaccessible cardinals". Then if it is consistent that there is a supercompact cardinal with an inaccessible above it, Σ_2 CFA_<κ(Γ) does not imply Σ_3 MP_Δ(∅).
In particular, if Γ is a 2-nice forcing class which can destroy arbitrarily many inaccessibles, then Σ_2 CFA_<κ(Γ) does not imply Σ_3 CFA_<κ(Γ). By truncating the universe if necessary, we can assume that there is a model V with a supercompact κ with exactly one inaccessible λ above it. Applying Corollary <ref>, there is a forcing extension V[G]⊨Σ_2 CFA_<κ(Γ). Since the forcing in question has cardinality κ<λ, λ will remain inaccessible in V[G], and since forcing does not add inaccessibles it will be the largest inaccessible in V[G][In the main case of interest where Γ can destroy inaccessibles, this fact will reflect to V_κ, the inaccessible-destroying posets will be included in the Baumgartner iteration, and so λ will in fact be the only inaccessible in V[G], but that is not relevant to the proof.]. We now show that Σ_3 MP_Δ(∅) fails in V[G] by the argument of Hamkins in Theorem 3.9 of <cit.>. Since inaccessibility is a Π_1 property, the assertion "no ordinal is an inaccessible cardinal" is a Π_2 sentence. If there are only set many inaccessibles, there is some forcing in Δ which destroys them all (without adding new ones, of course) and so makes this statement true. Since the statement is preserved by all further forcing, it holds in any model of the Σ_3 maximality principle for Δ without a proper class of inaccessibles. As V[G] has a nonempty set of inaccessibles, it cannot satisfy Σ_3 MP_Δ(∅). It is tempting to try to generalize the above argument by replacing "inaccessible" with "regular cardinal in C^(n-1)". However, the proof made essential use of the fact that forcing cannot add inaccessible cardinals, which need not hold for Σ_n-1-correctness when n>2. We could define a class C̃^(n) consisting of those cardinals which are Σ_n-correct in some forcing extension, and then Σ_n+2 MP_Δ(∅) will imply that the intersection of C̃^(n) with the class of regular cardinals is empty or a proper class, but then it is no longer clear how to arrange that there is a model of Σ_n+1 CFA_<κ(Γ) with C̃^(n)∩ Reg a nonempty set, since truncations at forceably Σ_n-correct cardinals are not necessarily well-behaved. We can, however, at least show that Σ_n-correct forcing axioms become stronger when increasing n by at least two with a more elaborate geological argument. For any forcing class Γ which contains the trivial forcing, Σ_n+2 CFA_<κ(Γ) implies that there are unboundedly many ordinals less than κ which are Σ_n+1-correct in some ground. Fix any θ∈ C^(n+1). Combining Corollary <ref> and Lemma <ref>, the formula θ∈ (C^(n+1))^W_r is Π_n+1 in θ and r, so ∃ r θ∈ (C^(n+1))^W_r is Σ_n+2. Since it is forceable by the trivial forcing and can easily be seen to be provably preserved by all forcing, applying the Miyamoto-Asperó form of the axiom, there are stationarily many Z≺ H_θ^+ of size less than κ such that π_Z(θ) is Σ_n+1-correct in some ground. By considering Z which contain sufficiently many ordinals below κ, we can arrange for π_Z(θ) to be arbitrarily large below κ. We now need to construct a model of Σ_n CFA_<κ(Γ) with no θ<κ which is Σ_n+1-correct in any ground.
First, we need the following generalization of the downward direction of the Levy-Solovay Theorem: (special case of Hamkins <cit.>, Gap Forcing Theorem) If δ is a regular cardinal, ℙ is a forcing poset with ℙ∈ H_δ, G⊆ℙ is a V-generic filter, and j:V[G]→ M' is an elementary embedding definable from a parameter u with crit(j)>δ and M' closed under δ-sequences in V[G], then setting M:=⋃_α∈ Ord j(V_α^V):
* M=M'∩ V
* M'=M[G]
* j↾ V: V→ M is an elementary embedding
* j↾ V is definable in V from a name for u
Suppose that κ is a cardinal, ℙ∈ H_κ is a forcing poset, and G⊆ℙ is a V-generic filter. Then κ is supercompact for C^(n) in V if and only if it is supercompact for C^(n) in V[G]. First, assume that κ is supercompact for C^(n) in V. Given λ>κ, we show that κ is λ-supercompact for C^(n) in V[G]. Let ẋ be a ℙ-name for (C^(n))^V[G]∩λ; then there is a p∈ G which forces the Δ_n+1 property that ẋ consists of the Σ_n-correct cardinals below λ. By Lemma <ref>, there is an elementary embedding j:V→ M witnessing the λ-supercompactness of κ such that M thinks that p forces ẋ to be the set of Σ_n-correct cardinals below λ. Since j"G=G, by Lemma <ref> j extends to an elementary embedding j^*:V[G]→ M[G]. Then crit(j^*)=κ and j^*(κ)=j(κ)>λ because j^* agrees with j on the ordinals, j^*(C^(n)∩κ)∩λ=(C^(n))^M[G]∩λ=ẋ^G=(C^(n))^V[G]∩λ, and by Lemma <ref> M[G] is closed under λ-sequences in V[G]. Thus κ remains supercompact for C^(n) in V[G]. Conversely, given any θ>κ in (C^(n))^V[G], let j: V[G]→ M' be an elementary embedding with crit(j)=κ, j(κ)>θ, (^θ M')^V[G]⊂ M', and (C^(n))^V[G]∩(θ+1)=(C^(n))^M'∩(θ+1). Then by Theorem <ref>, there is an inner model M of V such that M'=M[G] and j̅:=j↾ V: V→ M is elementary. crit(j̅)=κ and j̅(κ)>θ are again immediate. For the closure condition, if f:θ→ M is in V, then it is in V[G] and thus in M[G] by the closure condition there, so f∈ M[G]∩ V=M. Finally, since θ is Σ_n-correct in both V[G] and M[G], by Lemma <ref> it is Σ_n-correct in both V and M as well. Since θ is a beth fixed point, |V_θ|=θ, so V_θ=V_θ^M and thus both V and M agree with V_θ's computation of the Σ_n-correct cardinals below θ. Hence κ is supercompact for C^(n) in V. If n≥ 3, Γ is an n-nice forcing class, and there are two cardinals supercompact for C^(n-1), then there is a model of Σ_n CFA_<κ(Γ) such that no ordinal less than κ is Σ_n+1-correct in any ground. Let δ<λ be supercompact for C^(n-1). First, by Proposition <ref>, δ is extendible, so Usuba's Theorem <ref> implies that the mantle 𝕄 is a ground of V. Usuba's arguments in fact show that V is a forcing extension of 𝕄 by a poset ℙ∈𝕄 such that |ℙ|^𝕄≤ (2^2^δ^++)^V (see the discussion following Definition 2.6 in <cit.>); since λ is inaccessible in both 𝕄 and V, ℙ∈ V_λ^𝕄. Let κ be the least ordinal which is supercompact for C^(n-1) in some forcing extension of 𝕄 by a poset of size less than λ. Since V is such a forcing extension and δ is supercompact for C^(n-1) there, such an ordinal exists and κ≤δ<λ. Let 𝕄[G] be a forcing extension by a poset ℚ smaller than λ in which κ is supercompact for C^(n-1) and 𝕄[G][H] be a further forcing extension by the poset ℙ_κ from Theorem <ref> in which Σ_n CFA_<κ(Γ) holds. Let θ<κ and let W be a ground of 𝕄[G][H]; we show that θ is not Σ_n+1-correct in W. As W is a ground of a forcing extension of a ground of V, it is in the generic multiverse, so it is a forcing extension of 𝕄 by Proposition <ref>.
Hence by the Intermediate Model Lemma <ref>, W is a generic extension of the mantle by some complete subalgebra of the Boolean completion of ℚ*ℙ̇_κ, which has size less than λ. Thus by the minimality of κ, there are no cardinals supercompact for C^(n-1) below θ in W. However, by the downward direction of Corollary <ref>, λ is supercompact for C^(n-1) in 𝕄, so by the upward direction the same is true in W. Thus V_θ^W either incorrectly identifies a cardinal as supercompact for C^(n-1) or it disagrees with W on the sentence "there exists a cardinal supercompact for C^(n-1)". Combining Lemma <ref> and Proposition <ref>, being supercompact for C^(n-1) is a Π_n property and the existence of such a cardinal is Σ_n+1. It follows that in either case, θ∉(C^(n+1))^W. Combining Proposition <ref>, Lemma <ref>, and Lemma <ref>, we have: For all positive integers n and forcing classes Γ (where if n<3 we need the assumption that Γ can destroy arbitrarily many inaccessibles), if it is consistent that there are two cardinals supercompact for C^(n-1), then Σ_n CFA_<κ(Γ) does not imply Σ_n+2 CFA_<κ(Γ). We are left with the following open questions, where a positive answer to the first would easily yield a positive answer to the second: Is it possible to produce a model of Σ_n CFA_<κ(Γ) where C^(n-1)∩ Reg is neither empty nor a proper class when n>2? Is Σ_n+1 CFA_<κ(Γ) a strictly stronger axiom than Σ_n CFA_<κ(Γ) when n>2?

§ FORCING AXIOMS FOR CLASSES WHICH COLLAPSE Ω_1

Since the original motivation of this work was generalizing FA^+, most of the focus so far has been on classes which preserve ω_1, as those are the classes for which classical and "plus" forcing axioms make sense and have been previously studied. However, this restriction is not necessary. One could call a forcing class weakly n-nice if it satisfies all the conditions of n-niceness except possibly preservation of ω_1, and then the proof of Theorem <ref> will yield models of Σ_n CFA_<κ(Γ) for any weakly n-nice Γ, where if Γ can necessarily collapse ω_1 we will get κ=ω_1^V[G]. The classical forcing axiom content of such axioms will be trivial, of course, but this isn't really an issue; the classical forcing axiom content of Σ_n-correct forcing axioms for countably closed forcing is, after all, similarly provable in ZFC. One could even consider forcing axioms for the class of all forcing. By Theorem <ref>, this will turn out to be the conjunction of the Σ_n-maximality principle for the class of all forcing together with a reflection principle for provably forcing-persistent properties. However, it is somewhat difficult to identify interesting consequences of this axiom beyond the consequences of the maximality principle. Lemma <ref> at least yields the implication that ω_1 is Σ_n-correctly +1-reflecting in L, so it has noticeably greater consistency strength than the maximality principle. However, even the answer to the following question is unclear. Does Σ_n CFA_<ω_1(all) imply that 0^♯ exists? Further exploration of this topic will be left for future work.

§ INTERNAL VS EXTERNAL PROVABLE PERSISTENCE

The statements of all formulations of the Σ_n-correct forcing axioms given so far have been somewhat ambiguous. Both possible interpretations have slight but easily manageable drawbacks. First, Σ_n CFA_<κ(Γ) could be read as an axiom scheme, with one axiom for each (external) provably Γ-persistent Σ_n formula.
This has the disadvantage that the most natural form of the axiom scheme is undecidable, since there is no way to determine whether ZFC proves a formula Γ-persistent except to wait for a proof to be found. However, the resulting axiom set is at least computably enumerable, and Craig's theorem states that every computably enumerable set of axioms is equivalent to a computable set of axioms[Thanks to Russell Miller for relating this fact to me.]. Alternatively, Σ_n CFA_<κ(Γ) could be read as a single sentence quantifying over all (internal) Σ_n formulas of the model which the model's ZFC proves to be Γ-persistent, making use of Lemma <ref> to handle assertions about the truth of the formulas. This eliminates concerns about decidability because a single sentence is of course decidable, and in fact even the scheme consisting of Σ_n CFA_<κ(Γ) for each n is decidable. New difficulties arise from the fact that ZFC does not prove its own soundness (or equivalently, the ZFC of a nonstandard model need not be sound), so the mere fact that a model believes a formula to be provably Γ-persistent does not mean that it actually continues to hold in Γ-extensions of the model. We can address this issue by slightly strengthening the hypotheses of our consistency proof. If Γ is an n-nice forcing class, κ is supercompact for C^(n-1), and there is a regular ζ∈ C^(n) above κ, then there is a model in which the internal version of Σ_n CFA_<κ(Γ) holds. We construct V[G] as in the proof of Theorem <ref> and follow that proof up until the point where we need to show that M[G][H][K]⊨ϕ(ȧ^H, b). By elementarity, j(ζ)∈ (C^(n))^M, so by Lemma <ref> it is in (C^(n))^M[G][H] as well. Similarly, it is regular in M, so since |ℙ_κ*ℙ̇|<j(κ)<j(ζ), it remains regular (and hence inaccessible) in M[G][H]. For the purposes of this proof, let ZFC denote the external theory and ZFC^V denote the theory defined within the models under consideration (since inner models and forcing extensions do not alter arithmetic truth, all of them will have the same ZFC^V). Now because M[G][H] is a model of ZFC, it believes that V_j(ζ)^M[G][H]⊨ ZFC^V (since ZFC proves that V_α satisfies internal ZFC whenever α is inaccessible). By Σ_n-correctness, V_j(ζ)^M[G][H]⊨ ϕ(ȧ^H, b) ∧ ℝ̇^G*H∈Γ (observing that all parameters have size at most j(κ) and hence are certainly contained in V_j(ζ)^M[G][H]). Then because ZFC^V proves that ϕ is preserved by Γ-forcing, (V_j(ζ)^M[G][H])[K]=V_j(ζ)^M[G][H][K]⊨ϕ(ȧ^H, b). Since ℙ_κ⊂ V_κ and ℝ̇ is a factor of j(ℙ_κ), |ℝ̇^G*H|^M[G][H]=j(κ)<j(ζ). Applying Lemma <ref> again, j(ζ)∈ (C^(n))^M[G][H][K], so M[G][H][K]⊨ϕ(ȧ^H, b). The rest of the proof proceeds as for Theorem <ref>. Since the distinction between the internal and external versions of the axiom is fairly technical and not particularly relevant, we will largely ignore it outside this section.

CHAPTER: BOUNDED Σ_N-CORRECT FORCING AXIOMS

We now turn our attention to the bounded versions of Σ_n-correct forcing axioms. In order to transition from the unbounded to bounded forms, we need to add two restrictions. First, we can no longer ask that our filter F be fully N-generic, since some of the maximal antichains in N may get mapped to excessively large antichains by σ.
To accommodate this, we will use the following natural restricted form of N-genericity: If β is an ordinal, 𝔹 is a Boolean algebra, and N is a transitive ZFC^- model containing β and 𝔹 such that N⊨“β is a cardinal and 𝔹 is a complete Boolean algebra”, then a filter F⊂𝔹 is <β-weakly N-generic iff F meets all maximal antichains A∈ N of 𝔹 such that |A|^N<β. Second, we need to limit the size of ȧ to ensure that it does not encode information about excessively large antichains. (This occurs most blatantly in situations involving formulas like "Ġ is an Ȟ_γ-generic filter" as in the proof of Theorem <ref>, but can also happen in more subtle ways.) The most precise smallness condition would be that if ẋ is a 𝔹-name in the transitive closure of {ȧ}, then |ẋ|<λ, where λ is our (strict) bound. However, we will instead require that ȧ∈ H_λ, since this is easier to state and work with than the more precise condition, clearly implies it, and can be made to hold whenever the precise smallness condition does by replacing 𝔹 with an isomorphic algebra such that all forcing conditions which occur in the transitive closure of ȧ are contained in H_λ. With these restrictions added, our statement of the axiom becomes: If κ>ω_1 and λ≥κ are cardinals and Γ is a forcing class, Σ_n CBFA_<κ^<λ(Γ) is the statement that for all complete Boolean algebras 𝔹∈Γ, 𝔹-names ȧ∈ H_λ, sets b∈ H_λ, regular cardinals γ≥λ such that 𝔹∈ H_γ, X⊂ H_γ with |X|< κ, and provably Γ-persistent Σ_n formulas ϕ such that ⊩_𝔹ϕ(ȧ,b̌), there is a transitive structure N with an elementary embedding σ:N→ H_γ such that ȧ, b, 𝔹, λ, and all elements of X are in the range of σ, rng(σ)∩κ is transitive, and there is a <σ^-1(λ)-weakly N-generic filter F⊂𝔹̅:=σ^-1(𝔹) such that ϕ(σ^-1(ȧ)^F, σ^-1(b)) holds. As before, if κ and/or λ are successor cardinals, we may write their predecessors in place of <κ or <λ. David Asperó stated an equivalent axiom, using what we previously called the Miyamoto-Asperó formulation (Definitions 1.3 and 1.5 of <cit.>). However, he appears to have only published a consistency proof for the case where κ=λ. As we did in Section <ref>, we note some obvious implications:
* Σ_n CBFA_<κ^<λ(Γ) implies BFA_<κ^<λ(Γ).
* Σ_n CBFA_<κ^<λ(Γ) implies Σ_n MP_Γ(H_κ).
* Σ_n CBFA_<κ^<λ(Γ) implies Σ_m CBFA_<κ^<λ(Γ) for any m≤ n.
* Σ_n CBFA_<κ^<λ(Γ) implies Σ_n CBFA_<κ^<ν(Γ) for any ν≤λ.
* Σ_n CFA_<κ(Γ) is equivalent to the assertion that Σ_n CBFA_<κ^<λ(Γ) holds for all λ.
(1): For any complete Boolean algebra 𝔹∈Γ, let X consist of any desired collection of fewer than κ maximal antichains of 𝔹, each of size less than λ. Let N be a transitive structure with an elementary embedding σ: N→ H_γ for some sufficiently large γ such that X∪{λ, 𝔹}⊂ rng(σ). Setting λ̅:=σ^-1(λ) and 𝔹̅:=σ^-1(𝔹), let F⊂𝔹̅ be a <λ̅-weakly N-generic filter and G⊂𝔹 be the filter generated by σ"F. Then for any maximal antichain A∈ X, if A̅:=σ^-1(A), by elementarity A̅ is a maximal antichain of 𝔹̅ of N-cardinality less than λ̅, so there is some p∈A̅∩ F. It follows that σ(p)∈ A∩ G, so G witnesses the truth of BFA_<κ^<λ(Γ). (2): For any provably Γ-persistent Σ_n formula ϕ, b∈ H_κ, and ℙ∈Γ such that ⊩_ℙϕ(b̌), let X=trcl({b}). Then if N, F, and σ are as in the statement of the axiom applied to X, ϕ, b, and ȧ:=∅, X⊆ rng(σ) implies that σ(b)=b, so ϕ(b) holds in V. (3): Every Σ_m formula is Σ_n. (4): Any ȧ and b in H_ν must be in H_λ, and since σ^-1(ν)≤σ^-1(λ) for any elementary embedding σ, any <σ^-1(λ)-weakly N-generic filter is <σ^-1(ν)-weakly N-generic.
(5): The forward direction is immediate, observing that any N-generic filter is <β-weakly N-generic for all N-cardinals β. For the converse, given ℙ, ȧ, and b, let 𝔹 be the Boolean completion of ℙ and choose λ large enough that ȧ, b, 𝔹∈ H_λ. Then we can apply Σ_n CBFA_<κ^<λ(Γ), and since every maximal antichain of σ^-1(𝔹) must have size less than σ^-1(λ), <σ^-1(λ)-weak N-genericity implies full N-genericity.

§ CONSISTENCY PROOFS

Now we show that Σ_n-correct bounded forcing axioms are consistent relative to the appropriate Σ_n-correctly H_λ-reflecting cardinals. We start with the symmetric case, since this is simpler but does not allow us to use correctly reflecting Laver functions (as Lemma <ref> requires that λ>κ). Asperó (<cit.>, Theorem 2.6) and, as we will later see, Hamkins (<cit.>, Lemma 3.3) proved the consistency of principles equivalent to Σ_n-correct symmetrically bounded forcing axioms; the proof below adapts their arguments to work with our official formulation of the axiom. If Γ is an n-nice forcing class and κ∈ C^(n) is regular, there is a κ-cc poset ℙ_κ∈Γ which forces Σ_n CBFA_<κ^<κ(Γ). Since κ is inaccessible, let f:κ→κ^2× V_κ^4×ω be a surjection such that for all α<κ, the first coordinate of f(α) is at most α. Fix an enumeration ⟨ϕ_k | k<ω⟩ of the Σ_n formulas in the language of set theory. Using these, we recursively construct a sequence ⟨ℚ̇_α | α<κ⟩ of names for posets in Γ∩ V_κ and take ⟨ℙ_α | α≤κ⟩ to be the iteration of this sequence with support suitable to Γ. If ℙ_α has already been defined and f(α)=(β, μ, Ṁ, ℝ̇, ä, ḃ, k), where:
* Ṁ is a ℙ_β-name for a transitive set
* ℝ̇ is a ℙ_β-name for a Boolean algebra (not necessarily in Γ)
* ä is a ℙ_β-name for an ℝ̇-name
* ḃ is a ℙ_β-name
then we choose ℚ̇_α such that, for any p∈ℙ_α which forces
* Ṁ⊨ ZFC^- and μ̌ is a cardinal of Ṁ
* ℝ̇, ä, ḃ∈Ṁ
* there is a poset ℚ∈Γ∩ V_κ which forces that there is a <μ-weakly Ṁ-generic filter F⊆ℝ̇ such that ϕ_k(ä^F,ḃ) holds
p forces that ℚ̇_α is such a ℚ, while if p forces any of the conditions on the second list to fail, it forces ℚ̇_α to be trivial. If any of the conditions on the first list fail, we choose ℚ̇_α to be a canonical name for the trivial forcing. Let G⊆ℙ_κ be V-generic; we will show that V[G]⊨Σ_n CBFA_<κ^<κ(Γ). Fix γ>κ, 𝔹∈Γ∩ H_γ^V[G], ȧ, b∈ H_κ^V[G] with ȧ a 𝔹-name, X⊂ H_γ^V[G] of size less than κ, and ϕ a provably Γ-persistent Σ_n formula such that ⊩_𝔹ϕ(ȧ, b̌). Let Y∈ [H_γ^V[G]]^<κ be such that Y≺ H_γ^V[G], X∪ trcl({ȧ, b})∪{𝔹, κ}⊆ Y, and Y∩κ is transitive; such a Y exists since the sets with each of those three properties form clubs in [H_γ^V[G]]^<κ, so the intersection of the three clubs is nonempty. Then by Lemma <ref>, whenever A∈ Y is a maximal antichain of 𝔹 with |A|<κ, A⊂ Y. Define N to be the transitive collapse of Y, σ:N→ H_γ^V[G] the inverse collapse embedding, 𝔹̅:=σ^-1(𝔹) and κ̅:=σ^-1(κ). Then let ä, ḃ, 𝔹̇̅̇, and Ṅ be suitable ℙ_κ-names in V; by Lemma <ref>, we can in fact arrange that for some β<κ, ä, ḃ, 𝔹̇̅̇, and Ṅ are ℙ_β-names in V_κ with ä^G_β=ȧ, ḃ^G_β=b, 𝔹̇̅̇^G_β=𝔹̅, and Ṅ^G_β=N. Then because f was taken to be surjective, we can find α<κ such that f(α)=(β, κ̅, Ṅ, 𝔹̇̅̇, ä, ḃ, k), where ϕ=ϕ_k. It is then immediate from the choices of the parameters involved that all the conditions for ℚ̇_α^G_α to be nontrivial are met, except possibly the last condition. To see that the last condition holds as well, observe that ℙ_κ/ℙ_α *𝔹̇^+ (where 𝔹̇^+ is a name for 𝔹-{0_𝔹}) is a poset in Γ^V[G_α] which adds a filter H⊆𝔹 which meets all maximal antichains of 𝔹 in Y.
Thus if we take F:=σ^-1"H, whenever A is a maximal antichain of 𝔹̅ of size less than κ̅, σ(A)∈ Y is a maximal antichain of 𝔹 of size less than κ. By the construction of Y, all elements of σ(A) are in Y, so since Y=rng(σ), in particular there is a σ(p)∈σ(A)∩ H for some p∈𝔹̅. It follows that p∈ A∩ F, so F is <κ̅-weakly N-generic. Furthermore, ϕ(ȧ^H, b) holds because 𝔹 forces it to, so because trcl({ȧ, b})⊂ Y, none of the conditions of 𝔹 relevant to the interpretation of ȧ are moved by σ^-1, so ȧ^F=ȧ^H. Hence the last condition holds if we drop the requirement that the poset ℚ witnessing it is in V_κ^V[G_α]. However, the existence of such a poset is Σ_n expressible, and by Lemma <ref>, κ remains Σ_n-correct in V[G_α]. Thus we can find such a ℚ in Γ∩ V_κ^V[G_α], so ℚ_α will be such a poset. It follows that in V[G_α+1], there is a <κ̅-weakly N-generic filter F⊆ℙ̅ such that ϕ(ȧ^F, b) holds. Since ℙ_κ/ℙ_α+1∈Γ by the definition of n-niceness, ϕ(ȧ^F, b) continues to hold in V[G], while <κ̅-weak N-genericity is a Δ_0 relation between F, N, and κ̅, so it is preserved by arbitrary extensions. Thus V[G]Σ_n CBFA_<κ^<κ(Γ). We now turn to the asymmetric case: If κ is Σ_n-correctly H_λ-reflecting for n≥ 2 and some regular λ>κ, f is a V-generic fast function on κ, and Γ is an n-nice forcing class, there is a κ-cc poset ℙ_κ∈Γ^V[f] such that if G⊆ℙ_κ is V[f]-generic, then V[f][G]Σ_n CBFA_<κ^<λ(Γ). By Lemma <ref>, in V[f] there is a correctly reflecting Laver function g:κ→ V_κ^V[f]. As usual, we wish to construct an iteration ⟨ℙ_αα≤κ⟩ of (names for) posets ⟨ℚ̇_αα<κ⟩ in Γ∩ V_κ^V[f] with support suitable for Γ. We cannot quite let g select ℚ̇_α as in the proof of Theorem <ref>, since we wish to apply the axiom to arbitrarily large posets in Γ but g only works nicely with parameters in H_λ^V[f]. Instead, we follow the approach of Theorem <ref>, with a more complex reflection argument. Assume that ℙ_α has already been defined. We construct ℚ̇_α so that it names the trivial forcing unless g(α)=(α, μ, Ṁ, ℝ̇, ä, ḃ, p, ϕ)[Some of the parameters in this tuple (most obviously α) are redundant in the choice of ℚ̇_α, but will be needed as parameters in the statement which we will eventually want to reflect, so by the definition of a correctly reflecting Laver function we must include them here in order for the reflection to work correctly], where: * Ṁ is a ℙ_α-name for a transitive structure containing μ, ℝ̇, ä, and ḃ * ℝ̇ is a ℙ_α-name for a Boolean algebra (not necessarily in Γ) * ä is a ℙ_α-name for a ℝ̇-name * ḃ is a ℙ_α name * p∈ℙ_α forces α̌ and μ̌ to be cardinals of Ṁ * ϕ is a Σ_n formula * p forces that there is a poset ℚ∈Γ of size less than κ which adds a <μ̌-weakly Ṁ-generic filter F⊂ℝ̇ such that ϕ(ä^F, b) holds If all of these hypotheses hold, we arrange that p also forces ℚ̇_α to be a poset as in the last item in V_κ of the forcing extension by ℙ_α. Let G⊆ℙ_κ be V[f]-generic. In V[f][G], fix a complete Boolean algebra 𝔹∈Γ, a 𝔹-name ȧ∈ H_λ^V[f][G], a parameter b∈ H_λ^V[f][G], a cardinal γ≥λ such that 𝔹∈ H_γ^V[f][G], a set X⊂ H_γ of size less than κ, and a provably Γ-persistent Σ_n formula ϕ such that ⊩_𝔹ϕ(ȧ, b̌). As in the previous proof, we can find a Y≺ H_γ^V[f][G] of size less than λ such that {𝔹,λ}∪ trcl({ȧ, b})∪ (κ+1)∪ X⊂ Y and Y∩λ∈λ. Then by Lemma <ref>, whenever A⊂𝔹 is a maximal antichain of size less than λ and A∈ Y, we have A⊂ Y. Let N' be the transitive collapse of Y, λ':=π_Y(λ) and σ':N'→ H_γ^V[f][G] denote π_Y^-1. In V[f], let Ṅ', 𝔹̇', ä, and ḃ be ℙ_κ-names for N', π_Y(𝔹), ȧ, and b respectively. 
Since |ℙ_κ|=κ<λ and all the parameters are in H_λ^V[f][G], by Lemma <ref> we can arrange that the names for them are in H_λ^V[f]. Let ψ(κ, λ', Ṅ', 𝔹̇', ä, ḃ, p, ϕ) abbreviate the assertion that κ is inaccessible and p forces "there exists a poset in Γ (namely 𝔹-{0_𝔹}) which adds a <λ'-weakly Ṅ'-generic filter F⊂𝔹̇' such that ϕ(ä^F, ḃ) holds". Inaccessibility is immediate, while the second part holds for some p∈ G because if H⊂𝔹 is a V[f][G]-generic filter and A∈ N' is a maximal antichain of 𝔹' with |A|^N'<λ', then by elementarity σ'(A) is a maximal antichain of 𝔹 of size less than λ, so σ'(A)⊂ Y by the construction of Y. Thus there is some q∈σ'(A)∩ H∩ Y, so π_Y(q)∈ A and thus π_Y"(H∩ Y) is the desired filter. Since inaccessibility is a Π_1 property and the rest of ψ merely adds some existential and bounded quantifiers to the Σ_n assertions that a poset is in Γ and ϕ holds, ψ is overall a Σ_n statement (as we assume that n≥ 2) about parameters in H_λ^V[f]. Hence by the definition of g, we can find a Z≺ H_λ^V[f] of size less than κ containing all parameters of ψ, ℙ_κ, and a name for every element of σ'^-1"X, which is transitive below κ and closed under the map α↦ℙ_α such that g(π_Z(κ))=(κ̅,λ̅, Ṅ, 𝔹̇̅̇, ä̅̈,ḃ̅̇,π_Z(p),ϕ) :=(π_Z(κ),π_Z(λ'), π_Z(Ṅ'),π_Z(𝔹̇'), π_Z(ä),π_Z(ḃ),π_Z(p),ϕ) and ψ(κ̅,λ̅,Ṅ, 𝔹̇̅̇, ä̅̈,ḃ̅̇,π_Z(p),ϕ) holds in V_κ^V[f] (and V[f]). By the definition of n-niceness and the fact that κ is inaccessible, ℙ_κ is the direct limit of ⟨ℙ_α | α<κ⟩ (and this fact is absolute to H_λ^V[f]), so by elementarity and the fact that Z contains ℙ_α for all α<κ̅=κ∩ Z, π_Z(ℙ_κ) is the direct limit of ⟨ℙ_α | α<κ̅⟩. Since κ̅ is inaccessible, this is exactly ℙ_κ̅. Since p∈ℙ_κ⊂ V_κ^V[f], p is in the transitive part of Z and thus π_Z(p)=p. Then again by elementarity, p∈ℙ_κ̅ and Ṅ, 𝔹̇̅̇, ä̅̈, and ḃ̅̇ are ℙ_κ̅-names. Thus p∈ G_κ̅:=G∩ℙ_κ̅. Set N:=Ṅ^G_κ̅, 𝔹̅:=𝔹̇̅̇^G_κ̅, ȧ̅̇:=ä̅̈^G_κ̅, and b̅:=ḃ̅̇^G_κ̅. By the reflected version of ψ, there is a poset in V_κ^V[f][G_κ̅]∩Γ which adds a <λ̅-weakly N-generic filter F⊂𝔹̅ such that ϕ(ȧ̅̇^F,b̅) holds. By the construction of ⟨ℚ̇_α | α<κ⟩ from g, ℚ̇_κ̅ is a name for such a poset, so such an F exists in V[f][G_κ̅+1]. Since ℙ_κ/ℙ_κ̅+1∈Γ, it preserves ϕ, and all forcing preserves the existence of weakly N-generic filters, so we have such an F in V[f][G]. It remains only to show that V[f][G] contains an elementary embedding σ:N→ H_γ which maps everything where we want. Define σ̅:N→ N' by, whenever ẋ is a ℙ_κ̅-name for an element of N, σ̅(ẋ^G_κ̅):=π_Z^-1(ẋ)^G. To see that this is a well-defined elementary embedding, consider any formula χ and names ẋ_1,…, ẋ_m for elements of N such that N⊨χ(ẋ_1^G_κ̅,…, ẋ_m^G_κ̅). Then there is some q∈ G_κ̅ such that, in the transitive collapse of Z, q⊩_ℙ_κ̅ π_Z(Ṅ')⊨χ(ẋ_1,…, ẋ_m); so by elementarity q⊩_ℙ_κ Ṅ'⊨χ(π_Z^-1(ẋ_1),…, π_Z^-1(ẋ_m)) and hence N'⊨χ(π_Z^-1(ẋ_1)^G,…, π_Z^-1(ẋ_m)^G). We can now define σ:=σ'∘σ̅. Then σ is elementary N→ H_γ^V[f][G] and we verify:
σ(λ̅)=σ'(σ̅(λ̅))=σ'(σ̅(π_Z(λ̌')^G_κ̅))=σ'(λ̌'^G)=σ'(λ')=λ
σ(𝔹̅)=σ'(σ̅(π_Z(𝔹̇')^G_κ̅))=σ'(𝔹̇'^G)=𝔹
σ(ȧ̅̇)=σ'(σ̅(π_Z(ä)^G_κ̅))=σ'(ä^G)=σ'(ȧ)=ȧ
σ(b̅)=σ'(σ̅(π_Z(ḃ)^G_κ̅))=σ'(ḃ^G)=σ'(b)=b
To see that rng(σ)∩κ is transitive, first note that σ̅(κ̅)=π_Z^-1(κ̅)=κ, while for α<κ̅, since Z∩ V_κ was taken to be transitive, σ̅(α)=α. Since crit(σ')>κ, σ fixes all α<κ̅ while sending κ̅ to κ, as desired. Finally, for any x∈ X, Z was chosen to contain some ℙ_κ-name ẏ such that x=σ'(ẏ^G).
Observe that there must be some q∈ G_κ̅ which decides whether π_Z(ẏ)∈Ṅ; since q∈ V_κ^V[f] is not moved by π_Z, by elementarity it must decide ẏ∈Ṅ' the same way. Because G_κ̅⊂ G and ẏ^G∈ N', we must have q⊩π_Z(ẏ)∈Ṅ, so π_Z(ẏ)^G_κ̅∈ N. As
σ(π_Z(ẏ)^G_κ̅)=σ'(σ̅(π_Z(ẏ)^G_κ̅))=σ'(ẏ^G)=x,
it follows that X⊆ rng(σ). Thus σ has all the desired properties and V[f][G]⊨Σ_n CBFA_<κ^<λ(Γ). The construction of Y in the above proof makes essential use of the regularity of λ; it is unclear whether the consistency of Σ_n CBFA_<κ^<λ(Γ) follows from the existence of a Σ_n-correctly H_λ-reflecting cardinal when λ is singular.

§ EQUICONSISTENCY RESULTS

In this section, we derive large cardinal properties of κ in L from Σ_n CBFA_<κ^<λ(Γ). Asperó (<cit.>, Lemma 2.3) and Hamkins (<cit.>, Lemma 3.2) proved results along these lines, making essential use of collapse forcings. These arguments do not work for cardinal-preserving classes like ccc forcing, but for these we can achieve similar results by enlarging power sets rather than collapsing cardinals. The following definition unifies these two styles of argument: Given a forcing class Γ, a formula ψ is a Γ-inflatable definition of a cardinal κ iff ψ(κ) holds and ZFC proves:
* there is a unique ordinal α such that ψ(α) holds
* for every ordinal α, there exists a β>α and a poset ℙ∈Γ such that ⊩_ℙψ(β)
* if ψ(α) holds and ψ(β) is Γ-forceable, then α≤β
Thus for example if Γ can collapse arbitrarily large cardinals to ω_1, "a cardinal with exactly two smaller infinite cardinals" is a Γ-inflatable definition of ω_2. Alternatively, if Γ preserves cardinals but can add arbitrarily many reals, then "the cardinality of the continuum" is a Γ-inflatable definition of (2^ℵ_0)^V. Note that a Γ-inflatable definition can never be Σ_1 or Π_1, since if any formula of those complexities provably defines a unique object, then by upward or downward absoluteness it is impossible to change which object it defines by forcing. (Generalization of Asperó's Lemma 2.3 <cit.>) If Γ is a Σ_n-definable forcing class and κ is a regular cardinal such that Σ_n CBFA_<κ^<κ(Γ) holds, and there is a Γ-inflatable Σ_n definition of κ (so n≥ 2), then κ is a regular Σ_n-correct cardinal in L. It is sufficient to show that L_κ≺_Σ_n L. For this, we use a version of the Tarski-Vaught criterion. Suppose that a∈ L_κ and L⊨∃ yϕ(a, y), where ϕ is a Π_n-1 formula. If ψ is a Γ-inflatable Σ_n definition of κ, then the formula:
∃θ∃β<θ∃ b∈ L_β (ψ(θ) ∧ L⊨ϕ(a, b))
is Σ_n and Γ-forceable, since for any β such that L_β contains a witness b to L⊨∃ yϕ(a, y), we can enlarge the ordinal defined by ψ to be larger than β. Furthermore, ZFC proves that forcing preserves truth in L and preserves or enlarges the ordinal defined by ψ, so this formula is provably Γ-persistent. Applying the Σ_n-correct bounded forcing axiom to the above formula with X=trcl({a}), we get an elementary embedding σ: N→ H_γ for some transitive N and regular γ such that the formula already holds in V (as in this situation we will have σ^-1(a)=a). Since κ is the unique ordinal such that V⊨ψ(κ), there is some β<κ with a b∈ L_β such that L⊨ϕ(a,b). Thus b∈ L_κ, so L_κ≺_Σ_n L by the Tarski-Vaught criterion. Miyamoto proved that BPFA^<ω_3 is equiconsistent with a +1-reflecting cardinal (<cit.>, Theorem 4.2); Fuchs showed that the same holds for BSCFA^<ω_3 (<cit.>, Lemma 3.10).
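Before turning to the analogous result, it may be worth making the second example after the definition of Γ-inflatability concrete. One way to write "α is the cardinality of the continuum" (my rendering; the source does not display a particular formula) is

ψ(α) :≡ α is a cardinal ∧ ∃ f (f is a bijection from 𝒫(ω) onto α).

Here the second conjunct is Σ_2, since "dom(f)=𝒫(ω)" is Π_1 in f, so ψ is Σ_2. For Γ the class of ccc posets, the three conditions are satisfied: ZFC proves that exactly one α satisfies ψ; for any ordinal α, the ccc poset Add(ω,α^+) forces the continuum to be at least α^+, so ψ(β) is forced for some β>α; and since forcing only adds reals and ccc forcing preserves cardinals, the ordinal defined by ψ can only increase.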
We can prove an analogue of these results in the Σ_n-correct setting: If Γ is a Σ_n-definable forcing class and κ is a regular cardinal such that Σ_n CBFA_<κ^<κ^+(Γ) holds, and there is a Γ-inflatable Σ_n definition of κ, then κ is Σ_n-correctly +1-reflecting in L. By Lemma <ref> it is sufficient to consider some A∈𝒫(κ)∩ L and Σ_n formula ϕ such that L⊨ϕ(A, κ) and show that for stationarily many α<κ, L_κ⊨ϕ(A∩α, α). Fix a club C⊂κ; we will find a κ̅∈ C such that L_κ⊨ϕ(A∩κ̅, κ̅). We apply the M-A formulation of the axiom to the formula L⊨ϕ(A, κ), which is clearly Σ_n, Γ-forceable, and provably Γ-persistent. This yields stationarily many Z≺ H_κ^+ such that L⊨ϕ(π_Z(A), π_Z(κ)), so by Lemma <ref> we can find one with κ̅:=κ∩ Z=π_Z(κ)∈ C. Then π_Z does not move any ordinals less than κ̅, so π_Z(A)=A∩κ̅. Thus we have obtained a κ̅∈ C such that L⊨ϕ(A∩κ̅, κ̅). Applying Lemma <ref>, we get that L_κ⊨ϕ(A∩κ̅, κ̅), so κ is Σ_n-correctly +1-reflecting in L. By Proposition <ref> and the preceding discussion, +2-reflecting cardinals are too strong to be accommodated by any currently understood canonical inner model, so obtaining good consistency strength lower bounds for Σ_n-correct bounded forcing axioms when λ≥κ^++ will most likely depend on further developments in inner model theory. To summarize: Suppose Γ is an n-nice forcing class with a Γ-inflatable Σ_n definition ψ of a cardinal such that for ℙ_κ as in Theorems <ref> or <ref>, ⊩_ℙ_κψ(κ). Then (1) and (2) are equiconsistent over ZFC, as are (3) and (4):
* There is a regular Σ_n-correct cardinal
* Σ_n CBFA_<κ^<κ(Γ)+ψ(κ)
* There is a Σ_n-correctly +1-reflecting cardinal
* Σ_n CBFA_<κ^<κ^+(Γ)+ψ(κ)
As a corollary, we obtain much cleaner hierarchy results in n than seemed to be possible for unbounded Σ_n-correct forcing axioms in Section <ref>: For any positive integer n and Γ as in the previous theorem:
* Σ_n CBFA_<κ^<κ(Γ) (if it is consistent) does not imply Σ_n+1 CBFA_<κ^<κ(Γ)
* Σ_n CBFA_<κ^<κ^+(Γ) (if it is consistent) does not imply Σ_n+1 CBFA_<κ^<κ^+(Γ)
In either case, if the Σ_n+1-correct bounded forcing axiom holds, Theorem <ref> implies that κ is Σ_n+1-correctly +i-reflecting in L, where i is 0 or 1 as appropriate. By the inaccessibility of correctly reflecting cardinals and Proposition <ref>, L_κ is thus a model of ZFC with many Σ_n-correctly +i-reflecting cardinals, so it has a forcing extension which is a transitive model of the appropriate Σ_n-correct bounded forcing axiom. It follows that the Σ_n+1-correct bounded forcing axiom is of significantly greater consistency strength, so in particular it is not implied by the Σ_n-correct bounded forcing axiom.

CHAPTER: PRESERVATION THEOREMS

In this chapter, we study the preservation of Σ_n-correct forcing axioms under forcing. Sean Cox proved a theorem on the preservation of classical forcing axioms and their "plus versions" in terms of lifting generic elementary embeddings (Theorem 20 of <cit.>), unifying various preservation results that had been published previously and providing a basis for the approach taken here. However, in some cases only a fragment of the original axiom will survive in the forcing extension, particularly since Cox's result makes some use of the fact that the property of preserving the stationarity of subsets of ω_1 is inherited by regular suborders of a forcing poset, which need not hold if we replace stationarity with a property of complexity greater than Π_1.
To account for this possibility we define those fragments of Σ_n-correct forcing axioms below (using the Woodin-Cox formulation of Definition <ref> for convenience).

For Γ a forcing class, κ a regular uncountable cardinal, and Φ a collection of provably Γ-persistent formulas, the Φ-correct forcing axiom Φ CFA_<κ(Γ) is the assertion that for every ϕ∈Φ, ℚ∈Γ, ℚ-name ȧ such that ⊩_ℚϕ(ȧ), and regular γ>|𝒫(ℚ)∪trcl(ȧ)∪κ|, there is a generic elementary embedding (with possibly ill-founded codomain) which witnesses the <κ-forcing axiom for (ℚ, ϕ, ȧ, γ) (recall Definition <ref>). To justify the original Σ_n CFA notation, we can say that when Φ contains some formulas which are not provably Γ-persistent, Φ CFA_<κ(Γ) uses only those formulas in Φ which are provably Γ-persistent.

Let Γ be a forcing class closed under restrictions, Φ a set of provably Γ-persistent formulas in the language of set theory, κ a regular cardinal such that Φ CFA_<κ(Γ) holds, and ℙ a poset. Define Φ' to be the set of formulas ϕ such that:
* ϕ∈Φ
* ℙ preserves ϕ (note that this preservation need not be provable, only true)
* for every ℙ-name ℚ̇ for a poset in Γ, and every ℙ*ℚ̇-name ȧ such that ⊩_ℙ*ℚ̇ϕ(ȧ), there is a ℙ*ℚ̇-name ℝ̇ for a poset (possibly depending on ϕ, ℚ̇, and ȧ) such that:
  1. ℙ*ℚ̇*ℝ̇∈Γ
  2. ℙ*ℚ̇ forces that ℝ̇ preserves ϕ(ȧ)
  3. If θ>|𝒫(ℙ*ℚ̇*ℝ̇)|+|trcl(ȧ)| is a regular cardinal and j: V→M is a generic elementary embedding which witnesses the <κ-forcing axiom for (ℙ*ℚ̇*ℝ̇, ϕ, ȧ, θ) (where we interpret ȧ as a ℙ*ℚ̇*ℝ̇-name in the obvious way) with V-generic filter G*H*K⊆ℙ*ℚ̇*ℝ̇ in M, then M⊨"j"G has a lower bound in j(ℙ)".
Then ⊩_ℙΦ' CFA_<κ(Γ). (Note the ∅-correct forcing axiom is vacuously true, so cases where no such ℝ̇ can be found for any ϕ are of no concern.)

Following Cox, we apply the axiom in V to obtain a generic elementary embedding j:V→M, then show that j lifts to a generic embedding V[G]→M[G'] for some M-generic G'⊆j(ℙ) which witnesses the desired instance of the axiom in V[G]. More specifically, given any ℙ, ℚ̇, provably Γ-persistent ϕ∈Φ', and ȧ satisfying the standard hypotheses, let ℝ̇ satisfy conditions 1-3 used to define Φ' in the statement of the theorem. Fix regular θ>κ so that all objects mentioned so far and their power sets are in H_θ; then there is an elementary embedding j:V→M in some generic extension W of V such that:
(i) the wellfounded part of M is transitive and contains H_θ^V
(ii) |H_θ^V|^M<j(κ)
(iii) j↾H_θ^V∈M
(iv) crit(j)=κ
(v) M contains a V-generic filter G*H*K⊆ℙ*ℚ̇*ℝ̇ such that M⊨ϕ(ȧ^G*H)

By (iii), (v), and the fact that ℙ⊂H_θ^V, j"G∈M. By condition (3) in the statement of the theorem, M thinks that there is some p'∈j(ℙ) which is below all conditions in j"G. Then if 𝔹∈W is the set of ∈^M-predecessors of the Boolean completion of j(ℙ) in M with the partial ordering inherited from M and G'⊂𝔹 is a W-generic ultrafilter containing p', it will also be M-generic, and in W[G'] we can form the (possibly ill-founded) class model M[G'] as the quotient M^𝔹/G'. See Cox for more details on this process. Since j"G⊂G' by the upward closure of filters, we can (in W[G'], hence in a generic extension of V[G]) apply Lemma <ref> to define an elementary embedding j^*:V[G]→M[G'] by j^*(ẋ^G)=j(ẋ)^G'. We now verify that j^* satisfies the conditions in the definition of witnessing the <κ-forcing axiom for (ℚ̇^G, ϕ, ȧ, θ) in V[G]:

(a): Since M contains H_θ^V and G, it (and thus M[G']) contains H_θ^V[G], which by Lemma <ref> is H_θ^{V[G]}.
(b): |H_θ^{V[G]}|^M=|H_θ^V|^M<j(κ)=j^*(κ); thus it is sufficient to show that M believes that j(ℙ) does not collapse j(κ). By elementarity, this is the same as showing that V⊨"ℙ does not collapse κ". Assume toward a contradiction that there is a δ<κ and a ℙ-name ḟ such that ḟ^G:δ→κ is surjective. For each α<δ, fix p_α∈G which decides ḟ(α). Since crit(j)=κ>δ, j does not move any of these αs, so j(p_α) decides j(ḟ)(α). Since p'≤j(p_α) for all α<δ, p' alone decides all values of j(ḟ), so j(ḟ)^G'∈M. Therefore j(κ) is not a cardinal in M, so κ is not a cardinal in V, contradicting the fact that V⊨Φ CFA_<κ(Γ).

(c): Since j↾H_θ^V, G'∈M[G'], j^* is defined entirely in terms of G' and the action of j on names, and by Lemma <ref> every element of H_θ^{V[G]} has a name in H_θ^V, we have j^*↾H_θ^{V[G]}∈M[G'].

(d): j^* is an extension of j, so in particular it agrees with j on the ordinals.

(e): H∈M⊂M[G'] and H is V[G]-generic by the choice of M; M[G']⊨ϕ(ȧ^H) because this holds in M and M thinks that j(ℙ) preserves ϕ by elementarity.

Suppose Γ is an n-nice forcing class, V⊨Σ_n CFA_<κ(Γ), ℙ∈Γ, and for all provably Γ-persistent Σ_n formulas ϕ, ℙ-names ℚ̇ for a poset in Γ, and ℙ*ℚ̇-names ȧ such that ⊩_ℙ*ℚ̇ϕ(ȧ), there is a ℙ*ℚ̇-name ℝ̇ for a poset in Γ such that condition 3 from the statement of Theorem <ref> holds. Then ⊩_ℙΣ_n CFA_<κ(Γ).

If ℙ∈Γ and ⊩_ℙ*ℚ̇ℝ̇∈Γ, then ℙ and ℝ̇ will preserve all provably Γ-persistent formulas, so we can take Φ' to consist of all provably Γ-persistent Σ_n formulas.

The following result generalizes Cox's observation about inheritance of stationary set preservation: In Theorem <ref>, if Φ consists exclusively of Π_1 formulas, then the hypothesis that ℙ preserves ϕ can be omitted from the definition of Φ', since it will follow from the other conditions.

If ϕ∈Φ is a provably Γ-persistent Π_1 formula such that V⊨∃x(ϕ(x) ∧ ∃p∈ℙ p⊩_ℙ¬ϕ(x)), then the failure of ϕ(x) in any generic extension by a filter containing p is a Σ_1 statement. It follows that it is preserved by passing to larger models, so (p, 1̇_ℚ̇, 1̇_ℝ̇)⊩¬ϕ(x). Since ϕ is provably Γ-persistent, this contradicts the fact that ℙ*ℚ̇*ℝ̇∈Γ.

To illustrate the application of Theorem <ref> and procure a useful tool for later separation results, we consider the cases of proper and countably closed forcing:

(cf König and Yoshinobu <cit.>, Theorem 6.1) If Γ is an n-nice forcing class which necessarily contains Coll(ω_1, λ) for all uncountable cardinals λ and has the countable covering property (i.e. every countable set in a Γ-extension whose elements are all in the ground model is a subset of a countable set of the ground model), then Σ_n CFA_<ω_2(Γ) is preserved by all <ω_2-closed forcing in Γ. In particular, Σ_n CPFA and Σ_n CFA_<ω_2(<ω_1 closed) are preserved by all <ω_2-closed forcing.

Let ℙ∈Γ be <ω_2-closed. Given a ℙ-name ℚ̇ such that ⊩_ℙℚ̇∈Γ, a provably Γ-persistent Σ_n formula ϕ, and a ℙ*ℚ̇-name ȧ such that ⊩_ℙ*ℚ̇ϕ(ȧ), let ℝ̇ be a ℙ*ℚ̇-name such that ⊩_ℙ*ℚ̇ℝ̇=Coll(ω_1, |Ġ|), where Ġ is the canonical name for the generic filter on ℙ. Then ⊩_ℙ*ℚ̇ℝ̇∈Γ, so it is sufficient to verify the condition that j"G has a lower bound. To that end, fix any sufficiently large θ and let j:V→M be a generic elementary embedding such that:
* the wellfounded part of M is transitive and contains H_θ^V
* |H_θ^V|^M<j(ω_2^V)=ω_2^M
* j↾H_θ^V∈M
* crit(j)=ω_2^V
* M contains a V-generic filter G*H*K⊆ℙ*ℚ̇*ℝ̇ such that M⊨ϕ(ȧ^G*H)

By our choice of ℝ̇, in V[G][H][K] there is an enumeration ⟨g_α ∣ α<ω_1⟩ of G.
We will use this to recursively construct a descending sequence ⟨x_α ∣ α<ω_1⟩ of elements of G such that x_α≤_ℙ g_β whenever β<α. Let x_0=1_ℙ. If x_α has already been defined, then x_α and g_α have a common lower bound in G because both are elements of G; let x_α+1 be some such lower bound. At limit stages γ<ω_1, where x_α has already been defined for all α<γ, since ℙ*ℚ̇*ℝ̇∈Γ and Γ has the countable covering property, there is some countable Y∈V such that {x_α ∣ α<γ}⊆Y. Replacing Y with Y∩ℙ if necessary, we may assume that Y⊆ℙ. Working now in V, fix an enumeration ⟨y_n ∣ n<ω⟩ of Y and let D:={d∈ℙ ∣ ∀y∈Y (y⊥d ∨ d≤y)}. Then for any p_0∈ℙ, we can recursively choose for each n<ω a p_n+1≤p_n such that p_n+1⊥y_n or p_n+1≤y_n. Since ℙ is <ω_2-closed and hence countably closed, ⟨p_n ∣ n<ω⟩ has a lower bound in ℙ, which must be an element of D. It follows that D is dense. Returning to V[G][H][K], we can choose x_γ∈G∩D. It cannot be the case that x_α⊥x_γ for any α<γ because they are all elements of the same filter G, so by the definitions of D and Y we must have x_γ≤x_α for all such α. This completes the construction of ⟨x_α ∣ α<ω_1⟩.

Let ẋ be a name for ⟨x_α ∣ α<ω_1⟩ in H_θ^V; this is possible by Lemma <ref> if θ was chosen to be sufficiently larger than everything of interest. Then since ẋ, G, j↾H_θ^V∈M, we have ⟨j(x_α) ∣ α<ω_1⟩∈M. By elementarity, j↾ℙ is order-preserving and M thinks that j(ℙ) is <ω_2-closed. Thus j(x_α)≤_j(ℙ) j(g_β) whenever β<α and there is a p'∈j(ℙ) less than all j(x_α), so in particular p' is a lower bound of j"G. The result follows from Corollary <ref>.

By strengthening the closure property on ℙ, we can generalize this to other forcing classes:

(cf Larson <cit.>, Theorem 4.3) If Γ is any n-nice forcing class, and ℙ∈Γ is <κ-directed closed (i.e., for all X⊂ℙ of size less than κ such that any two elements of X have a common lower bound in X, there is a common lower bound for all elements of X in ℙ), then ℙ preserves Σ_n CFA_<κ(Γ).

Given a <κ-directed closed ℙ∈Γ, a ℙ-name ℚ̇ such that ⊩_ℙℚ̇∈Γ, a provably Γ-persistent Σ_n formula ϕ, and a ℙ*ℚ̇-name ȧ such that ⊩_ℙ*ℚ̇ϕ(ȧ), let ℝ̇ be a ℙ*ℚ̇-name for the trivial forcing, so that ℙ*ℚ̇*ℝ̇≡ℙ*ℚ̇. Then if θ>|𝒫(ℙ*ℚ̇)| is regular and j:V→M witnesses the <κ-forcing axiom for (ℙ*ℚ̇, ϕ, ȧ, θ) with generic filter G*H, M thinks that j"G is directed and |j"G|^M<j(κ), since G⊂H_θ^V and H_θ^V has M-cardinality less than j(κ). Since by elementarity M thinks that j(ℙ) is <j(κ)-directed closed, there is a common lower bound for j"G in j(ℙ). Applying Corollary <ref> yields the desired result.

CHAPTER: RELATIONSHIPS WITH OTHER AXIOMS

This chapter explores the relations Σ_n-correct forcing axioms have to various previously-proposed axioms, as well as the relationship between the Σ_n-correct forcing axioms for different forcing classes.

§ EQUIVALENCES

If κ is a regular cardinal and n≥1, Σ_n CBFA_<κ^<κ(Γ) is equivalent to Σ_n MP_Γ(H_κ).

The forward implication is Proposition <ref>(2). For the converse, let ϕ be a provably Γ-persistent Σ_n formula, 𝔹∈Γ a complete Boolean algebra, ȧ∈H_κ a 𝔹-name, b∈H_κ, and X⊂H_γ with |X|<κ, and suppose ⊩_𝔹ϕ(ȧ,b̌). Then following the proof of Theorem <ref> we can construct a transitive N∈H_κ with an elementary embedding σ:N→H_γ with X∪trcl({ȧ, b})∪{𝔹, κ}⊂rng(σ) and rng(σ)∩κ transitive such that whenever A⊂𝔹 is a maximal antichain of size less than κ in the range of σ, all its elements are also in the range of σ. Then σ^-1(ȧ)=ȧ and σ^-1(b)=b, and we can define κ̅=σ^-1(κ) and 𝔹̅=σ^-1(𝔹).
As in the relative consistency proofs, if G⊂𝔹 is V-generic, then for every maximal antichain A∈N of 𝔹̅ with |A|^N<κ̅, we have |σ(A)|<κ, so σ(A)⊂rng(σ) and thus G∩σ(A)∩rng(σ) is non-empty. Hence V[G]⊨"σ^-1"G is a <κ̅-weakly N-generic filter on 𝔹̅ and ϕ(ȧ^σ^-1"G, b) holds". It follows that the statement that there exists a <κ̅-weakly N-generic F⊂𝔹̅ such that ϕ(ȧ^F, b) holds is a Σ_n statement forced by 𝔹 and provably preserved by further forcing in Γ, with parameters ȧ, b, N, κ̅, 𝔹̅∈H_κ. Applying the maximality principle, it is therefore true.

We now wish to show that classical forcing axioms are equivalent to Σ_1-correct forcing axioms. However, there is a slight technical wrinkle. Σ_n-correct forcing axioms were stated with the requirement that rng(σ)∩κ is transitive, but the proof of Lemma <ref> can be extended to guarantee this condition only in some cases. If κ=θ^+ is a successor cardinal, we can simply assume that θ+1⊂X, so that the same will hold of rng(σ). Then by elementarity rng(σ) correctly identifies a surjection θ→α for each α∈rng(σ)∩κ and correctly computes all values of that surjection, so we must have α⊂rng(σ) and thus rng(σ)∩κ is transitive. When κ is a limit cardinal, however, this argument fails. To remedy this issue, we could remove the transitivity condition from the statement of Σ_n CFA, but it is occasionally useful to have that condition. Alternatively, we could redefine FA_<κ(Γ) to mean Lemma <ref>(2) with the added condition that rng(σ)∩κ is transitive (in essence requiring that the collection of elementary substructures of H_γ which have generics be stationary in [H_γ]^<κ in Jech's sense rather than merely weakly stationary), since this condition can easily be derived in both the Martin-Solovay consistency proof for Martin's Axiom and Baumgartner-style consistency proofs for other forcing axioms. Whichever option is chosen, the issue can be resolved fairly easily, and we will not worry about it much further.

To show a level-by-level equivalence of bounded forcing axioms, it is helpful to have a bounded version of Lemma <ref>, proved in essentially the same way:

The following are equivalent for any forcing class Γ and regular cardinals λ≥κ>ω_1:
* BFA_<κ^<λ(Γ)
* For any cardinal γ>λ, complete Boolean algebra 𝔹∈Γ such that 𝔹∈H_γ, and X∈[H_γ]^<κ, there is a transitive structure N and an elementary embedding σ:N→H_γ such that |N|<κ, X∪{𝔹,κ,λ}⊆rng(σ), and there is a <σ^-1(λ)-weakly N-generic filter F⊆𝔹̅:=σ^-1(𝔹).

The following result is a generalization of Bagaria's Theorem <ref> to asymmetric bounded forcing axioms and of Miyamoto's Theorem 2.5 in <cit.> to forcing classes other than proper forcing:

If Γ is a forcing class and λ≥κ>ω_1 are regular cardinals, then BFA^<λ_<κ(Γ) is equivalent to Σ_1 CBFA^<λ_<κ(Γ), modulo the requirement that rng(σ)∩κ is transitive.

The reverse implication is Proposition <ref>(1). For the forward direction, if 𝔹∈Γ is a complete Boolean algebra, ȧ∈H_λ is a 𝔹-name, b∈H_λ, γ≥λ is a regular cardinal, X⊂H_γ is such that |X|<κ, and ϕ is a Δ_0 formula such that ⟦∃x ϕ(ȧ, b̌, x)⟧=1, then the fact that 𝔹 forces ∃x ϕ(ȧ, b̌, x) is Σ_1 and thus by Lemma <ref> absolute to H_γ. Now applying Lemma <ref> to X∪{ȧ, b}, there is a transitive N of size less than κ with an elementary embedding σ:N→H_γ such that X∪{ȧ, b, κ, λ, 𝔹}⊆rng(σ) and, setting λ̅:=σ^-1(λ) and 𝔹̅:=σ^-1(𝔹), there is a <λ̅-weakly N-generic filter F⊂𝔹̅. Since 2<λ̅, F is in particular an ultrafilter. By elementarity, N⊨⟦∃x ϕ(σ^-1(ȧ), σ^-1(b̌), x)⟧=1_𝔹̅. Thus by Lemma <ref>, N^𝔹̅/F⊨∃x ϕ([σ^-1(ȧ)]_F, [σ^-1(b̌)]_F, x).
Since trcl({σ^-1(ȧ)}) and trcl({σ^-1(b̌)}) have N-cardinality less than λ̅, Lemma <ref> implies that [σ^-1(ȧ)]_F and [σ^-1(b̌)]_F are in the wellfounded part of N^𝔹̅/F, and if the wellfounded part is taken to be transitive they are equal to σ^-1(ȧ)^((F)) and σ^-1(b) respectively. Applying Lemma <ref>, σ^-1(ȧ)^((F))=σ^-1(ȧ)^F. By Lemma <ref>, there is some c∈(H_λ̅)^{N^𝔹̅/F} such that ϕ(σ^-1(ȧ)^F, σ^-1(b), c) holds. (Note that since N^𝔹̅/F does not in general satisfy the power set axiom, (H_λ̅)^{N^𝔹̅/F} may be a proper class in it, but the proof of Lemma <ref> does not rely on it being a set.) Applying Lemma <ref> again, c must be in the wellfounded part of N^𝔹̅/F, so wfp(N^𝔹̅/F) is a transitive structure satisfying ϕ(σ^-1(ȧ)^F, σ^-1(b), c). Since Δ_0 formulas are absolute between transitive classes, V⊨∃x ϕ(σ^-1(ȧ)^F, σ^-1(b), x).

To conclude this section, consider the following open question: Is MM^+ω_1 equivalent to Σ_2 CMM? In other words, can every Σ_2 property provably preserved by all stationary set-preserving forcing be encoded in a sequence of stationary subsets of ω_1? The answer if we replace Σ_2 with Σ_3 or higher is no by Proposition <ref>, since MM^+ω_1 is implied by Σ_2 CMM and thus cannot imply Σ_3 CMM.

§ SEPARATIONS

Corey Switzer posed the question of whether Σ_n CPFA can ever imply MM. Using the work of König and Yoshinobu on regressive Kurepa trees <cit.>, this can be answered in the negative.

(König and Yoshinobu) For uncountable cardinals γ and λ, a γ-regressive λ-Kurepa tree is a tree (T, <_T) such that:
* the height of T is λ
* for each α<λ, the αth level T_α has cardinality less than λ
* there are at least λ^+ distinct cofinal branches of T
* for all limit ordinals β<λ of cofinality less than γ, there is a function f:T_β→T↾β such that f(x)<_T x for all x∈T_β and whenever x≠y in T_β, f(y)≮_T x or f(x)≮_T y

(König and Yoshinobu, Theorem 5 <cit.>) For each regular uncountable cardinal λ, there is a <λ-closed forcing which adds a λ-regressive λ-Kurepa tree.

(König and Yoshinobu, Theorem 13 <cit.>) MM implies that no ω_2-Kurepa tree is even ω_1-regressive.

For each n, if Σ_n CPFA is consistent, it does not imply MM. Any model of Σ_n CPFA has a forcing extension via a <ω_2-closed poset to a model with an ω_2-regressive ω_2-Kurepa tree. By Theorem <ref>, Σ_n CPFA still holds in the forcing extension. By the previous lemma, however, MM fails.

With stronger assumptions and a geological argument somewhat reminiscent of that for Lemma <ref>, we can get an outright contradiction between strengthenings of PFA and MM. Let

Ω_2:={α<ω_2 ∣ cf(α)=ω ∧ ∃r (r defines a ground W_r ∧ α=ω_2^{W_r})}

where we think of Ω_2 as a bounded definable class which varies between models rather than a fixed set. Since cf(α)=ω and α=ω_2 are Σ_1 and Δ_2 properties respectively, by Lemmas <ref> and <ref> Ω_2 is Σ_3-definable.

Σ_3-correct Martin's Maximum implies that Ω_2 is stationary in ω_2. The same holds for Σ_3 CSCFA or Σ_3 CFA_<ω_2(Γ) for any class Γ containing Namba forcing.

Since Namba forcing forces that ω̌_2 has countable cofinality and is the ω_2 of some ground, and these properties can easily be proven to be preserved under all further forcing, by the Miyamoto-Asperó form of the axiom there are stationarily many Z≺H_ω_3 of size ω_1 such that π_Z(ω_2)=ω_2∩Z∈Ω_2. By Lemma <ref>, it follows that Ω_2 is stationary in ω_2.

Σ_4 MP_proper(∅) together with the Bedrock Axiom implies that Ω_2 is bounded below ω_2.
The same holds if we replace proper forcing with countably closed forcing or any other Σ_3-definable forcing class which can collapse arbitrary cardinals to ω_1, is closed under two-step iterations, and has the countable covering property.

By Usuba's Corollary <ref>, the Bedrock Axiom implies that the universe (and in fact every model in the generic multiverse) is a set forcing extension of the mantle. We then observe that the ω_2 of any model in the generic multiverse must be regular in the mantle, and that there is some regular λ such that 𝕄 and V agree on which ordinals are regular cardinals above λ. In particular, every ordinal which is regular in 𝕄 but has countable cofinality in V is below λ. We now consider the sentence "there exists a β<ω_2 such that for all α>β and all proper posets ℙ, ⊩_ℙα̌∉Ω_2". Since α∈Ω_2 is Σ_3, this sentence can be written in Σ_4 form. To see that it is forceable by Coll(ω_1, λ), let G⊂Coll(ω_1, λ) be V-generic and V[G][H] be some further proper extension. For any ground W of V[G][H], if ω_2^W>λ, then ω_2^W is regular in both the mantle and V. It follows from the countable covering property that it must have uncountable cofinality in V[G][H], so it cannot be in Ω_2^V[G][H]. Hence Ω_2^V[G][H]⊂λ<ω_2^V[G], so the sentence holds in V[G]. Since by construction the sentence is preserved by all further proper forcing, the maximality principle implies that sup(Ω_2)<ω_2 in V.

Some examples of models in which the hypotheses of the preceding lemma hold:
* Any model of Σ_4 CPFA with an extendible cardinal
* The standard model of Σ_4 CPFA, obtained by forcing as in Theorem <ref> from a model with a cardinal supercompact for C^(3) (and thus many extendibles)
* The model of Σ_4 MP_proper(H_ω_2) obtained by forcing as in Theorem <ref> from a regular Σ_4-correct cardinal over L
* The model of Σ_4 MP_proper(∅) obtained from forcing over L up to a singular Σ_4-correct cardinal, as in Lemma 2.6 of Hamkins <cit.>

Thus we get, for example: If there is an extendible cardinal, then Σ_3 CMM contradicts Σ_4 CPFA, and Σ_3 CSCFA contradicts Σ_4 CFA_<ω_2(<ω_1 closed).

It follows that although large cardinal axioms, strengthenings of MM, and strengthenings of PFA all express in different ways the idea that the universe of sets is very large, beyond a certain point it cannot be large in all three ways simultaneously.

The hypotheses of Theorem <ref> are likely stronger than necessary. The assumption of an extendible cardinal is helpful for controlling Ω_2, but can perhaps be dispensed with. If the mantle is not a ground, is Σ_3 CMM consistent with Σ_4 CPFA? Alternatively, it could perhaps be shown that Σ_3-correct or Σ_4-correct forcing axioms outright imply the Bedrock Axiom without additional large cardinal assumptions. This is false for classical forcing axioms, since Reitz showed that the existence of a supercompact cardinal is consistent with the failure of the Bedrock Axiom (<cit.>, Corollary 25(2)) and this failure is preserved by set forcing, but it may become true at the Σ_3 level or above. It also seems likely that the Σ_3 and Σ_4 can be reduced to lower complexities, although since MM implies PFA, the latter can't be reduced all the way to Σ_1. Is Σ_2 CPFA consistent with MM?

CHAPTER: RESIDUAL REFLECTION PRINCIPLES

In light of Theorem <ref>, it is natural to wonder whether asymmetrically bounded or unbounded Σ_n-correct forcing axioms can be factored as the conjunction of the appropriate Σ_n maximality principle and some sort of reflection principle.
The attempted factorization chronicled in this chapter only works for a few forcing classes, but the resulting reflection principles have some independent interest.

For n a positive integer, κ≤λ uncountable cardinals, and Γ a forcing class, Σ_n RRP(κ, λ, Γ) is the assertion that for all provably Γ-persistent Σ_n formulas ϕ and all a∈H_λ such that ϕ(a) holds, the set of Z≺H_λ of size less than κ and containing a such that ϕ(π_Z(a)) holds is stationary in [H_λ]^<κ.

This is reminiscent of Σ_n-correct H_λ-reflection, but here we assert only that ϕ(π_Z(a)) holds in V, not necessarily in V_κ. Thus Σ_n RRP(κ, λ, Γ) does not imply that κ is Σ_n-correct, or even correct with respect to provably Γ-persistent Σ_n formulas, so it is consistent with κ being small, at least for some forcing classes Γ. These statements can be called residual reflection principles, because they are all that remains of Σ_n-correct H_λ-reflection after a length κ iteration of posets in Γ∩V_κ:

If κ is Σ_n-correctly H_λ-reflecting for λ>κ, Γ is an n-nice forcing class, ℙ_κ∈Γ is any length κ iteration of posets in V_κ∩Γ (using the notion of iteration suitable to Γ) which does not collapse κ, and G⊂ℙ_κ is a V-generic filter, then V[G]⊨Σ_n RRP(κ, λ, Γ).

In V[G], given suitable ϕ, a, and a function h:[H_λ^{V[G]}]^<ω→H_λ^{V[G]}, we wish to find a Z≺H_λ^{V[G]} of size less than κ, containing a and κ, closed under h, and transitive below κ such that ϕ(π_Z(a)) holds. Now in V, let ȧ and ḣ be suitable ℙ_κ-names and p∈G force ϕ(ȧ); as in the proof of Lemma <ref>, we can construct a function h̃∈V which maps finite sets of names for elements of H_λ^{V[G]} to names in H_λ for the corresponding values of h. Therefore we can find a Z≺H_λ of size less than κ, containing ȧ, ℙ_κ, and κ, with Z∩V_κ transitive and π_Z(p)=p, and closed under both h̃ and the map α↦ℙ_α for α<κ, such that V_κ⊨ p⊩_π_Z(ℙ_κ)ϕ(π_Z(ȧ)). By our conditions on Z, π_Z(ℙ_κ)=ℙ_κ̅, where as usual κ̅:=π_Z(κ). Therefore if G_κ̅=G∩ℙ_κ̅, ϕ(π_Z(ȧ)^G_κ̅) holds in V_κ[G_κ̅] because p∈G_κ̅. By Lemma <ref>, κ is still Σ_n-correct in V[G_κ̅], and thus ϕ(π_Z(ȧ)^G_κ̅) holds in V[G_κ̅]. Since the tail forcing ℙ_κ/ℙ_κ̅∈Γ, ϕ(π_Z(ȧ)^G_κ̅) continues to be true in V[G] (although not necessarily in V_κ^{V[G]}, since ℙ_κ is too large to preserve the Σ_n-correctness of κ). Then as in the proof of Lemma <ref>, by Lemma <ref>, Z[G]≺H_λ^{V[G]} (where Z[G]:={ẋ^G ∣ ẋ∈Z∩H_λ^ℙ_κ}) and for all ℙ_κ-names ẋ∈Z, π_Z(ẋ)^G_κ̅=π_Z[G](ẋ^G). It follows that V[G]⊨ϕ(π_Z[G](a)), and the closure of Z under h̃ implies that Z[G] is closed under h, as desired. Furthermore, as π_Z[G]^-1 extends π_Z^-1, it also maps its critical point κ̅ to κ, so Z[G]∩V_κ^{V[G]} is transitive. Thus Z[G]∈[H_λ^{V[G]}]^<κ has all the properties we wanted, so Σ_n RRP(κ, λ, Γ) holds in V[G], as desired.

Thus residual reflection principles at least hold in the standard models of Σ_n-correct bounded forcing axioms as constructed in Theorem <ref>, but in fact they can easily be seen to hold in all models of those axioms:

For any forcing class Γ containing the trivial forcing, positive integer n, and uncountable cardinals κ<λ, Σ_n CBFA_<κ^<λ(Γ) implies Σ_n RRP(κ, λ, Γ).

Since residual reflection principles are effectively special cases of the M-A formulation of Σ_n-correct forcing axioms, we essentially need only prove the bounded version of one direction of Theorem <ref>. Let ϕ be a provably Γ-persistent Σ_n formula, a∈H_λ such that ϕ(a) holds, and h:[H_λ]^<ω→H_λ a function.
Applying the axiom, there is a transitive structure N of size less than κ with an elementary embedding σ: N→H_(2^<λ)^+ such that a, h, H_λ, κ∈rng(σ), ϕ(σ^-1(a)) holds, and σ(crit(σ))=κ. Since H_(2^<λ)^+⊨"H_λ is closed under h", rng(σ)∩H_λ is as well. Moreover rng(σ)∩H_λ≺H_λ, because H_(2^<λ)^+ and rng(σ) agree on which formulas H_λ satisfies. Therefore rng(σ)∩H_λ is an elementary substructure of H_λ of size less than κ which is closed under h and transitive below κ such that ϕ(σ^-1(a)) holds, and (a restriction of) σ^-1 is exactly the transitive collapse isomorphism of rng(σ)∩H_λ. Thus Σ_n RRP(κ, λ, Γ) holds.

We now study the forcing classes for which Theorem <ref> can be extended to an equivalence for asymmetrically bounded forcing axioms using residual reflection principles.

A forcing class Γ is provably self-preserving iff the formula ℙ∈Γ is provably Γ-persistent.

Strictly speaking, provable self-preservation is a property of definitions of forcing classes, not an extensional property of the classes themselves. For any forcing class, we can slim it down to a "provably self-preserving interior", though in doing so we may lose an unacceptably large number of posets:

For any Σ_n- or Π_n-definable forcing class Γ which is provably closed under two-step iterations, □_ΓΓ:={ℙ∈Γ ∣ ∀ℚ∈Γ ⊩_ℚℙ̌∈Γ} is a provably self-preserving Π_n+1-definable forcing class.

ZFC proves that if ℙ∈□_ΓΓ, ℚ∈Γ, and ℝ̇ is a ℚ-name for a forcing in Γ, then ℚ*ℝ̇∈Γ, so by the definition of □_ΓΓ, ⊩_ℚ*ℝ̇ℙ∈Γ. It follows (provably) that ⊩_ℚℙ∈□_ΓΓ, so the formula ℙ∈□_ΓΓ is provably Γ-persistent. Since ZFC proves □_ΓΓ⊆Γ, it is thus provably □_ΓΓ-persistent, so □_ΓΓ is provably self-preserving. To see that □_ΓΓ is Π_n+1-definable, observe that its definition can be written as ∀ℚ(ℚ∉Γ ∨ ⊩_ℚℙ∈Γ); whether Γ is Σ_n or Π_n, the inner disjunction is of a Σ_n formula and a Π_n formula, so the overall formula is Π_n+1.

We survey provable self-preservation for common forcing classes:

Assume that ZFC is consistent. Then:
* The class of countably closed forcing (or more generally <κ-closed forcing for regular κ) is provably self-preserving.
* The class of ccc forcing is not provably self-preserving using the most natural definition, but MA_ω_1 implies that ccc=□_ccc ccc.
* The class of subcomplete forcing is not provably self-preserving, and CH implies that □_sc sc⊊sc.
* The classes of proper and stationary set-preserving forcing are not provably self-preserving, and in fact ZFC proves that □_ΓΓ⊊Γ when Γ is either one.

(a): Let ℙ and ℚ be <κ-closed, G⊂ℚ be V-generic, and f∈V[G] be a descending sequence γ→ℙ for some γ<κ. Then f∈V, since <κ-closed forcing adds no new γ-sequences of ground model elements, so there is a p∈ℙ such that p≤f(α) for all α<γ. Therefore ℙ remains <κ-closed in V[G].

(b): If T is a Suslin tree, then T∈ccc, but ⊩_T Ť∉ccc. Since the existence of a Suslin tree is consistent relative to ZFC, it follows that ZFC cannot prove that all posets satisfying the ccc retain this property under ccc forcing. However, if we further assume MA_ω_1, let ℙ be a poset, ℚ be ccc, q∈ℚ, and ȧ be such that q⊩_ℚ"ȧ is an enumeration of an antichain of ℙ of size ω_1". For each α<ω_1, we can find a set D_α dense below q which decides the value of ȧ_α. Applying MA_ω_1 below q, we obtain a filter F such that there is some q_α∈F∩D_α for each α<ω_1. Let p_α be the element of ℙ such that q_α⊩_ℚȧ_α=p̌_α. If for some α<β<ω_1 there is a p∈ℙ extending both p_α and p_β, then for any V-generic filter G containing both q_α and q_β, ȧ_α^G∥ȧ_β^G, contradicting the fact that q∈G forces ȧ^G to be an antichain.
Therefore {p_α ∣ α<ω_1} is an antichain of ℙ in V, so ℙ does not satisfy the ccc. Thus under our hypotheses every poset satisfying the ccc must in fact be in □_ccc ccc.

(c): Under CH, the Namba forcing ℕ of V is subcomplete, but ⊩_ℕℕ̌∉sc, since ℕ×ℕ adds a real. See Theorem 4.1.27 of Kaethe Minden's dissertation <cit.> for more details. Since CH is consistent relative to ZFC, it follows that ZFC cannot prove that subcomplete forcing is self-preserving.

(d): For proper forcing, by an argument due to Shelah and related by Goldstern (<cit.>), the ground model version of Coll(ω_1, ω_2) ceases to be proper after any forcing which adds a real, which includes many proper posets. For stationary set-preserving forcing, both Namba forcing and Coll(ω_1, ω_2) preserve stationary subsets of ω_1, but forcing with either makes the ground model version of the other collapse ω_1; see Minden (<cit.>, proof of Theorem 4.1.29).

The main result of this chapter is:

The following are equivalent for all uncountable cardinals κ≤λ, positive integers n, and provably self-preserving n-nice forcing classes Γ:
* Σ_n CBFA_<κ^<λ(Γ)
* Σ_n MP_Γ(H_κ) and Σ_n RRP(κ, λ, Γ)

(1⇒2): We get the maximality principle from Theorem <ref> and the residual reflection principle from Lemma <ref>.

(2⇒1): Given a provably Γ-persistent Σ_n formula ϕ and an a∈H_λ such that ϕ(a) holds in some Γ-extension, the formula ∃ℙ∈Γ ⊩_ℙϕ(ǎ) is Σ_n because Γ is Σ_n-definable. To see that it is provably Γ-persistent, let ℙ witness it and ℚ∈Γ be arbitrary. Then ⊩_ℙℚ̌∈Γ because Γ is provably self-preserving, so ⊩_ℙ×ℚϕ(ǎ) because ϕ is provably Γ-persistent. Since products commute and ⊩_ℚℙ̌∈Γ, it follows that ⊩_ℚ(∃ℙ∈Γ ⊩_ℙϕ(ǎ)), as desired. Thus by the residual reflection principle, there are stationarily many Z≺H_λ of size less than κ containing a such that ϕ(π_Z(a)) is Γ-forceable. Since π_Z(a)∈H_κ, the maximality principle yields ϕ(π_Z(a)) in V. Thus the M-A formulation of Σ_n CBFA_<κ^<λ(Γ) holds, and bounded versions of the arguments for Theorem <ref> yield the other formulations.

Thus for example Σ_n-correct bounded forcing axioms for countably closed forcing can be factored into the Σ_n-maximality principle for countably closed forcing and a Σ_n-residual reflection principle when n≥2. The same holds for □_ccc ccc when n≥3 (the ccc is a Δ_2 property because it holds iff it holds in H_θ for some uncountable θ iff it holds in H_θ for all uncountable θ such that the poset is itself in H_θ). Furthermore, since by the results of the next chapter Σ_2 MP_ccc(H_2^ℵ_0) implies that CH fails badly and by Proposition <ref> it also implies MA, Proposition <ref>(b) yields that □_ccc ccc=ccc under the hypotheses of the above theorem when Γ=ccc. Thus Σ_n CBMA^<λ is equivalent to the conjunction of the Σ_n-maximality principle for ccc forcing and Σ_n RRP(2^ℵ_0, λ, ccc) when n≥3, but not necessarily when n=2. Even though ccc forcing has a Σ_2 definition, we must use the more complex definition of □_ccc ccc in order for the preservation to be provable in ZFC alone.
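For ease of later reference, the ccc case just described can be displayed as follows (this merely restates the equivalence above, writing 𝔠 for 2^ℵ_0 and assuming n≥3 and λ≥𝔠):

Σ_n CBMA^<λ ⟺ Σ_n MP_ccc(H_𝔠) + Σ_n RRP(𝔠, λ, ccc).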
Examining the consistency proofs for Σ_n-correct forcing axioms sheds some light on the obstacle: when there is a ℙ∈Γ^V[G] forcing ϕ(ȧ), we can find an N, σ, and F such that ϕ(σ^-1(ȧ)^F) already holds not because there is some ℚ∈Γ^V[G] forcing this, as would be required for a straightforward maximality principle argument to work, but because there is such a ℚ which was in Γ as defined inside of some ground model V[G_α]. Thus perhaps, in order to get a factorization for a wider range of forcing classes, a more geological approach is needed. For now, the following (somewhat vague) question remains open:

Is Σ_n CBFA_<κ^<λ(Γ) equivalent to the conjunction of some sort of generalized maximality principle and some sort of reflection principle for a wide number of natural forcing classes?

Further open questions arise from the fact that, although the proof of Theorem <ref> does not appear to generalize beyond the provably self-preserving classes, it is unclear how to establish the separations that would definitively show that the theorem does not generalize:

If Γ is a forcing class which is never self-preserving, is it consistent for suitable κ that Σ_n MP_Γ(H_κ) and Σ_n RRP(κ, λ, Γ) hold but Σ_n CBFA_<κ^<λ(Γ) fails? Is it consistent that Σ_2 MP_ccc(H_𝔠) and Σ_2 RRP(𝔠, λ, ccc) hold for all λ≥𝔠, but Σ_2 CMA fails?

CHAPTER: CONSEQUENCES

We conclude with a brief survey of some of the implications of Σ_n-correct forcing axioms. It is highly likely that they have many more consequences, perhaps including some much more dramatic than those explored here, but the constraints of time, space, and my own limited knowledge will restrict us to only these few. Particular attention will be paid to Σ_n-correct Martin's Axiom, since for n≥2 it has a somewhat different character than, and in particular is much stronger than, ordinary Martin's Axiom.

§ THE CARDINALITY OF THE CONTINUUM

We begin with some simple observations on the implications of these axioms for the size of the continuum.
* Σ_n CPFA and Σ_n CMM imply that 2^ℵ_0=ℵ_2 for any positive n
* Σ_2 CSCFA and Σ_2 CFA_<ω_2(<ω_1 closed) imply CH
* Σ_2 CMA implies that the continuum is weakly Mahlo

(1) follows from the well-known fact that PFA implies that the continuum is ℵ_2.

(2): Since ω_1 is Δ_2 definable without parameters and 𝒫(ω) is Π_1-definable, the assertion that there is a bijection between them is Σ_2. Since Coll(ω_1, 𝔠) is a countably closed (and hence subcomplete) poset which forces CH, and neither subcomplete nor countably closed posets add reals, either Σ_2-correct forcing axiom implies CH. In fact, CH follows merely from the corresponding Σ_2 maximality principles.

(3): First, we show (following Stavi and Väänänen <cit.>) that Σ_2 MP_ccc(H_𝔠) implies that the continuum is weakly inaccessible. By Proposition <ref>, it implies MA and thus that the continuum is regular. To see that the continuum is a weak limit cardinal, let λ<𝔠. Since 𝒫(ω) is Π_1 definable without parameters and λ^++ is Δ_2 in λ, "there is an injective function λ^++→𝒫(ω)" is a Σ_2 statement which can be forced by adding λ^++ reals and, since ccc forcing preserves cardinals, provably remains true after any further ccc forcing. Thus it holds in V, so the continuum cannot be λ^+. Now strengthen our hypotheses to Σ_2 CBMA^<𝔠^+.
(Ordinary Martin's Axiom is equivalent to all of its bounded forms, since by definition a ccc Boolean algebra only has small antichains, but this fails for the Σ_n-correct versions, since the boundedness will restrict the size of the names that can be used as parameters.) Let C⊆𝔠 be a club. Then "sup(C) is weakly inaccessible" is a true Π_1 formula provably preserved by ccc forcing because ccc forcing preserves weak inaccessibility, so applying the Miyamoto-Asperó form of the axiom, there is a Z≺H_𝔠^+ of size less than 𝔠 containing 𝔠 and C such that the supremum of π_Z(C) is weakly inaccessible and Z∩𝔠 is transitive. Since by elementarity π_Z(C)=Z∩C is unbounded in π_Z(𝔠)=Z∩𝔠, sup(Z∩C)=Z∩𝔠∈C by closure, so C contains a weakly inaccessible cardinal. It follows that 𝔠 is weakly Mahlo.

Thus combined with Theorem <ref>, we get that Σ_n CMA, Σ_n CPFA, Σ_n CMM, Σ_n CSCFA, and Σ_n CFA_<κ(<ω_1 closed) are mutually inconsistent for n≥4 (and perhaps for smaller n), with some inconsistencies possibly requiring an extendible cardinal (or at least the Bedrock Axiom).

We can strengthen (3) above to show that the continuum is in fact so large as to be undefinable in terms of ccc-persistent formulas, so it is not the least weakly Mahlo cardinal, and in fact should have all properties in the weakly hyper-Mahlo hierarchy:

For n≥2, Σ_n MP_ccc(H_𝔠) (i.e. Σ_n-correct symmetrically bounded Martin's Axiom) implies that the continuum is not the least element of any class of ordinals with a provably ccc-persistent Σ_n definition. Σ_n CBMA^<𝔠^+ (or merely Σ_n RRP(𝔠, 𝔠^+, ccc)) further implies that any such class containing 𝔠 must be stationary below it.

Let ϕ be a provably ccc-persistent Σ_n formula such that ϕ(𝔠) holds. Then any ccc forcing which adds at least (𝔠^V)^+ reals forces "there is an α smaller than the continuum such that ϕ(α) holds". Since any further ccc forcing preserves ϕ(𝔠^V) and cannot decrease the continuum, the quoted property corresponds to a provably ccc-persistent formula; because the cardinality of the continuum is Δ_2-definable, this formula is also Σ_n as long as n≥2. Thus by the maximality principle, there is some α<𝔠^V such that V⊨ϕ(α), as desired. If Σ_n RRP(𝔠, 𝔠^+, ccc) holds, then there are stationarily many Z∈[H_𝔠^+]^<𝔠 such that Z∩𝔠 is transitive (and thus equal to π_Z(𝔠)) and ϕ(π_Z(𝔠)) holds. Thus by Lemma <ref>, {α<𝔠 ∣ ϕ(α)} is stationary.

Of course, similar results hold for other forcing classes, but since ccc forcing preserves more properties related to the size of cardinals (most notably the cofinality function), this particular case is more relevant to characterizing how large the continuum is under Σ_n CMA (whereas the Σ_n-correct forcing axioms for other natural classes often imply that κ is merely ω_2). The first result of the following section can be interpreted as saying that under Σ_2-correct Martin's Axiom, the continuum is "very weakly compact" (i.e. weakly inaccessible and has the tree property). Proposition <ref> can similarly be read as a statement about the largeness of the continuum when Γ is ccc forcing.

§ COMBINATORICS

This section surveys some of the combinatorial consequences of Σ_n-correct forcing axioms, starting with the tree property:
* Σ_2 CBMA^<𝔠^+ (or merely Σ_2 RRP(𝔠, 𝔠^+, ccc)) implies that the continuum has the tree property.
* If Γ is an n-nice forcing class, then Σ_n+1 RRP(κ, κ^+, Γ) implies that for every κ-tree, there is a Γ-extension in which it has a cofinal branch.
(1): Assume toward a contradiction that T=(𝔠, <_T) is a 𝔠-tree with no cofinal branch. Having no cofinal branch is a Π_1 property of 𝔠 and T, since it amounts to saying that all functions from 𝔠 with the ordinal ordering to T fail to be strictly order-preserving. To see that it is provably ccc-persistent, let ℙ satisfy the ccc and ḃ be a ℙ-name for a cofinal branch of T; we can assume without loss of generality that all conditions of ℙ force ḃ to be a cofinal branch. Define T_ḃ:={x∈T ∣ ∃p∈ℙ p⊩x̌∈ḃ}. Since ḃ is forced to be a cofinal branch, T_ḃ contains at least one element from every level of T. However, conditions of ℙ which force different elements of the same level to be in ḃ are incompatible, so by the ccc there are at most countably many such elements at each level. It follows that each level of T_ḃ is at most countable. As CH fails under our hypotheses, T_ḃ is a tree of height 𝔠 with width uniformly bounded below 𝔠, so by a standard result of Kurepa (see e.g. Kanamori Proposition 7.9 <cit.>), it has a cofinal branch, which is then also a cofinal branch of T in V. Thus T's lack of a cofinal branch is a provably ccc-persistent Π_1 property, so there are stationarily many Z≺H_𝔠^+ of size less than 𝔠 such that π_Z(T) has no cofinal branch. Fix a particular such Z which is closed under the mapping from ordinals α<𝔠 to the αth level of T and transitive below 𝔠. Combining these two properties, every node of T below level π_Z(𝔠) is in Z, but by elementarity Z contains no nodes from higher levels, so π_Z(T)=T↾π_Z(𝔠). Thus T↾π_Z(𝔠) has no cofinal branch, contradicting the fact that the level T_π_Z(𝔠) is nonempty. Hence 𝔠 must have the tree property under our hypotheses.

(2): Assume toward a contradiction that T is a κ-tree with no cofinal branch in any Γ-extension. By Lemma <ref>, this is a provably Γ-persistent Π_n property. Then we can argue as above to produce a Z≺H_κ^+ such that π_Z(T)=T↾π_Z(κ) has no cofinal branch in any Γ-extension, contradicting the fact that the predecessors of any node of height π_Z(κ) in T form a cofinal branch of T↾π_Z(κ) in V itself.

Next, some implications of the Σ_n-correct forcing axioms for subcomplete or countably closed forcing:

Let Γ be the class of subcomplete forcing or the class of countably closed forcing. Then Σ_2 CFA_<ω_2(Γ) implies:
* ◊
* There are no (ω_1-)Kurepa trees
In fact, both consequences follow from Σ_2 MP_Γ(H_ω_2).

(1): It is a standard result that adding a Cohen subset of ω_1 forces ◊ (see e.g. Kunen's Lemma IV.7.30 <cit.>). This forcing is countably closed and thus subcomplete. Jensen proved (<cit.>, Part 3, Lemma 4) that subcomplete forcing and thus countably closed forcing[The result for countably closed forcing was in fact known well before Jensen.] preserves ◊. ◊ is a Σ_2 sentence, since it asserts that there is a sequence ⟨A_α ∣ α<ω_1⟩ such that for all A, C⊆ω_1 with C a club, there is an α∈C such that A∩α=A_α. Thus ◊ follows from the maximality principles for both forcing classes.

(2): Let T∈H_ω_2 be an ω_1-tree. Then there is a countably closed (hence subcomplete) forcing to add a surjection from ω_1 to the set of cofinal branches of T, and further subcomplete forcing provably does not add branches to T by Minden's Theorem 3.1.2 <cit.>. Asserting that T is not a Kurepa tree is Σ_2, since we can say that there is a function f with domain ω_1 such that every strictly order-preserving map ω_1→T with downward-closed range is itself in the range of f.
Thus T not being a Kurepa tree is a Γ-forceable and provably Γ-persistent Σ_2 formula, so T is not a Kurepa tree in V. Since every ω_1-tree is isomorphic to one in H_ω_2, it follows that there are no Kurepa trees at all.

Finally, we consider the following version of simultaneous stationary reflection:

ν RP_<κ(θ), where ν<κ≤θ are cardinals with κ and θ regular and uncountable, is the statement that for every sequence 𝒮=⟨S_α ∣ α<ν⟩ of ν stationary subsets of [θ]^ω and every X⊂θ with |X|<κ, there is a Y such that X⊆Y⊂θ, |Y|<κ, and S_α∩[Y]^ω is stationary in [Y]^ω for all α<ν. When ν and/or κ is left unspecified, it is assumed that ν=1 and κ=ω_2. When θ is left unspecified, it is assumed that the principle holds for all regular θ≥κ.

These principles follow from Σ_2-correct forcing axioms for a wide range of forcing classes, but different proofs seem to be necessary for two overlapping cases:

If κ and λ are regular uncountable cardinals with ω_1<κ<λ and Γ is a subclass of proper forcing, Σ_2 RRP(κ, λ, Γ) implies ν RP_<κ(θ) for all cardinals ν<κ and regular θ≥κ such that θ^ω·ν<λ. Thus Σ_2 CFA_<κ(Γ) implies ν RP_<κ for all ν<κ.

Whenever the specified bounds on ν and θ hold, given 𝒮 and X as in the above definition, 𝒮∈H_λ. Since there is a Π_1 formula asserting that 𝒮 is a ν-sequence of stationary subsets of [θ]^ω, which is provably Γ-persistent because every poset in Γ is proper, by the residual reflection principle there is a Z≺H_λ of size less than κ with X∪(ν+1)∪ω_1∪{θ, 𝒮}⊂Z such that, setting θ̅:=π_Z(θ) and 𝒮̅:=π_Z(𝒮), 𝒮̅ is a ν-sequence of stationary subsets of [θ̅]^ω. Set Y:=π_Z^-1"θ̅=Z∩θ. Then Y⊂θ by definition, |Y|<κ because |Z|<κ, and X⊆Y because X⊆Z and X⊂θ, so it is sufficient to verify that the stationarity of each S_α reflects to Y. By elementarity, 𝒮̅_α=π_Z(S_α) for each α<ν. For every a∈Z∩[θ]^ω, a⊂Y, since by elementarity there is a bijection f:ω→a in Z, which must then be an actual bijection in H_λ and thus in V, and since ω⊂Z we have that Z computes the values of f correctly, so those values must all be in Z. Then because by the definition of the transitive collapse π_Z(a)=π_Z"(a∩Z), it follows from the injectivity of π_Z that π_Z^-1"π_Z(a)=a. For similar reasons, π_Z^-1"π_Z(S_α)=S_α∩Z for all α<ν, so π_Z^-1"π_Z(S_α)⊆S_α∩[Y]^ω.

To complete the proof, we fix an α<ν and an h:[Y]^<ω→Y and produce an a∈[Y]^ω such that π_Z(a)∈π_Z(S_α) and h"[a]^<ω⊆a. Define h':=π_Z∘h∘π_Z^-1; then h':[θ̅]^<ω→θ̅, so there is some a̅∈π_Z(S_α) closed under it and transitive below ω_1 by stationarity. Then a̅=π_Z(a) for some a∈S_α because all elements of π_Z(S_α) are of this form. For any finite {β_1, …, β_k}⊂a, π_Z({β_1,…,β_k})={π_Z(β_1),…, π_Z(β_k)}⊂a̅, since as argued in the previous paragraph a⊂Z in this situation. Hence h'({π_Z(β_1), …, π_Z(β_k)})=π_Z(h({β_1, …, β_k}))∈a̅, so h({β_1, …, β_k})∈a by the injectivity of π_Z. It follows that a is closed under h; to see that it is furthermore transitive below ω_1, note that π_Z does not move countable ordinals or map uncountable ordinals to countable ones because ω_1⊂Z, so this follows from the fact that a̅∩ω_1 is transitive. Therefore π_Z^-1"π_Z(S_α) is stationary in [Y]^ω, so S_α∩[Y]^ω is as well, as desired.

If λ>ω_2 is a regular cardinal and Γ is a forcing class which provably preserves stationary subsets of ω_1, Σ_2 CBFA_<ω_2^<λ(Γ) implies ω_1 RP(θ) for all regular θ≥ω_2 such that θ^ω_1<λ and there is a proper poset in Γ which collapses θ to ω_1.
Thus if Γ can collapse arbitrarily large cardinals to ω_1 with proper posets, Σ_2 CFA_<ω_2(Γ) implies ω_1 RP.

This is essentially an adaptation of Foreman, Magidor, and Shelah's argument that FA^+(<ω_1 closed) implies RP (<cit.>, paragraph before Theorem 13). Given θ, a sequence 𝒮=⟨S_α ∣ α<ω_1⟩∈H_λ of stationary subsets of [θ]^ω, X⊂θ of size ω_1, and a proper ℙ∈Γ which collapses θ to ω_1, let ḟ and ȧ be ℙ-names such that ⊩_ℙ"ḟ:ω̌_1→θ̌ is a bijection" and ⊩_ℙȧ=⟨{β<ω_1 ∣ ḟ"β∈Š_α} ∣ α<ω_1⟩. To show that ℙ forces ȧ to be an ω_1-sequence of stationary subsets of ω_1, let G⊂ℙ be V-generic and argue in V[G]. For any club C⊆ω_1 in V[G], {ḟ^G"β ∣ β∈C} is a club in ([θ]^ω)^V[G], since by the surjectivity of ḟ^G every countable subset of θ is contained in ḟ^G"β for some β, and ⋃_n<ωḟ^G"β_n=ḟ^G"sup_n<ωβ_n for any increasing sequence ⟨β_n ∣ n<ω⟩ in C. By the properness of ℙ, each S_α remains stationary in V[G], so for all α<ω_1 there is a β_α∈C such that ḟ^G"β_α∈S_α. Thus β_α∈C∩ȧ^G_α. Since this holds for arbitrary clubs C and α<ω_1, each term of the sequence ȧ^G is stationary in ω_1.

Returning to V, we have shown that ℙ∈Γ forces ȧ to be a sequence of stationary subsets of ω_1 and by hypothesis ZFC proves that every poset in Γ preserves this Π_1 property, but in the case where we wish to apply the bounded form of the axiom, we need additionally that ȧ∈H_λ. Observe that for each β<ω_1, there are θ possible values of ḟ(β), so the values of ḟ are determined by ω_1 many antichains of ℙ each of size at most θ. Since ȧ is entirely determined by the values of ḟ, we can assume without loss of generality that only these conditions of ℙ appear in its transitive closure, and by replacing ℙ with an isomorphic poset if necessary, we can arrange that they are all in H_λ, so ȧ is as well. Hence for arbitrary regular γ>λ, we can find an elementary embedding σ:N→H_γ for transitive N of size ω_1 with (ω_1+1)∪X∪{θ, λ, ℙ, ȧ, ḟ, 𝒮}⊂rng(σ) and a <σ^-1(λ)-weakly N-generic filter F⊂σ^-1(ℙ) such that σ^-1(ȧ)^F is an ω_1-sequence of stationary subsets of ω_1, σ^-1(ḟ)^F is a bijection ω_1→σ^-1(θ), and σ^-1(ȧ)^F=⟨{β<ω_1 ∣ σ^-1(ḟ)^F"β∈σ^-1(S_α)} ∣ α<ω_1⟩.

Set g:=σ∘σ^-1(ḟ)^F, so that g:ω_1→θ is injective, and let Y:=rng(g). Then |Y|=ω_1 because dom(g)=ω_1, Y⊂θ because σ preserves the order of ordinals, and X⊆Y because X⊆rng(σ)∩θ=σ"σ^-1(θ)=rng(g). To verify that Y reflects the stationarity of the S_α, let C⊆[Y]^ω be a club. Then since {g"β ∣ β<ω_1} can easily be verified to be a club in [Y]^ω using the fact that g:ω_1→Y is a bijection, the intersection C_g:={g"β ∣ β<ω_1}∩C is a club as well. We now verify that {β<ω_1 ∣ g"β∈C} is a club in ω_1. It is unbounded because for any δ<ω_1, by the unboundedness of C_g there is a β<ω_1 with g"δ⊆g"β∈C, and since g is injective we must have δ≤β. It is closed because for any increasing sequence ⟨β_n ∣ n<ω⟩ in it with supremum β_ω, ⋃_n<ω g"β_n∈C_g because C_g is closed, so since direct images commute with unions we have g"β_ω∈C.

Fix α<ω_1. From the above paragraph, it follows that there is a β∈σ^-1(ȧ)^F_α such that g"β∈C. By one of the conditions on σ^-1(ȧ)^F, σ^-1(ḟ)^F"β∈σ^-1(S_α). Hence σ(σ^-1(ḟ)^F"β)∈S_α, and since |σ^-1(ḟ)^F"β|=ω<crit(σ), σ(σ^-1(ḟ)^F"β)=σ"(σ^-1(ḟ)^F"β)=g"β. It follows that g"β∈C∩S_α∩[Y]^ω, so the stationarity of each S_α reflects to Y, as desired.

§ LARGE CARDINALS

We conclude with some implications of Σ_n-correct forcing axioms with a large cardinal character.

Σ_2 CBMA^<𝔠^++ implies that 0^♯ exists.
Since ccc forcing provably preserves cardinalities, by the M-A formulation of the axiom applied to the statement that |L_𝔠^+|>𝔠, there is a Z≺H_𝔠^++ of cardinality less than the continuum, transitive below 𝔠, and containing 𝔠 and L_𝔠^+ such that |π_Z(L_𝔠^+)|>π_Z(𝔠). By elementarity, π_Z(L_𝔠^+) satisfies the sentence forcing it to be of the form L_β for some β, and we must have π_Z(𝔠)^+≤β<𝔠. Thus by Lemma <ref>, π_Z^-1 restricts to an elementary embedding L_β→L_𝔠^+ with critical point π_Z(𝔠). The existence of 0^♯ then follows from a well-known result of Kunen (see e.g. <cit.> Theorem 21.1).

Of course, the existence of 0^♯ follows from the Σ_2-correct forcing axioms for a wide range of forcing classes by the stationary reflection results of the previous section, but the proof in the case of ccc forcing (and other cardinal-preserving classes) is considerably more straightforward, and fairly striking given the low consistency strength of (Σ_1-correct) Martin's Axiom.

If Σ_n CFA_<κ(Γ) holds (or merely Σ_n RRP(κ, λ, Γ) for all λ), then Vopěnka's principle holds for all proper classes with provably Γ-persistent Σ_n definitions.

Let ϕ define the class and let A be a structure of size at least κ in the class. Since ϕ is a provably Γ-persistent Σ_n formula, applying the axiom there is some B∈H_κ such that ϕ(B) holds and there is an elementary embedding of transitive sets σ such that σ(B)=A. Then σ restricts to the desired elementary embedding B→A.

Admittedly, since many applications of Vopěnka's principle involve structures with domain V_α, which are not preserved by any nontrivial forcing, the above result is less exciting than it may at first appear. However, it can be improved to an equivalence between Σ_n-correct forcing axioms and a sort of hybridization of maximality principles and structural reflection principles of the sort Bagaria studies in <cit.>.[Note however that this principle is distinct from and considerably stronger than Bagaria's generic structural reflection principles, since the latter require A∈𝒞 to hold in V and only give an elementary embedding in some forcing extension, while the structural reflection principle given here allows A to enter 𝒞 in a forcing extension but yields the embedding in V.]

The following are equivalent for regular uncountable cardinals κ, forcing classes Γ, and positive integers n:
* Σ_n CFA_<κ(Γ)
* Whenever 𝒞 is a class of first order structures with a provably Γ-persistent Σ_n definition with parameters in H_κ, for every structure A such that ⊩_ℙǍ∈𝒞 for some poset ℙ∈Γ, there is a B∈𝒞∩H_κ with an elementary embedding B→A.

(1⇒2): Let 𝒞 be defined by a provably Γ-persistent Σ_n formula ϕ(x, t) for some parameter t∈H_κ (so that ⊩_ℙϕ(Ǎ, ť)). Then for any regular γ>κ large enough that A∈H_γ, by the Σ_n-correct forcing axiom there is a transitive N∈H_κ and an elementary embedding σ: N→H_γ such that t∈N, σ(t)=t, A∈rng(σ), and ϕ(B, σ^-1(t)) holds, where B:=σ^-1(A). As σ^-1(t)=t, we have B∈𝒞, and by Lemma <ref>, σ↾B is the desired elementary embedding B→A.

(2⇒1): Let ϕ, γ, X, ℙ, ȧ, and b be as in the statement of Σ_n CFA_<κ(Γ). Let δ:=|X|<κ and fix an enumeration ⟨x_α ∣ α<δ⟩ of X. Define 𝒞 to be the class of all (M, ℚ, ẏ, z, ⟨c_α ∣ α<δ⟩) such that M is a transitive structure, ℚ is a forcing poset, ẏ is a ℚ-name, and there is an M-generic filter F⊆ℚ such that ϕ(ẏ^F, z) holds. Then 𝒞 has a provably Γ-persistent Σ_n definition with parameter δ∈H_κ because ϕ is assumed to be Σ_n and provably Γ-persistent, and ⊩_ℙ(H_γ^V, ℙ, ȧ, b, ⟨x_α ∣ α<δ⟩)∈𝒞.
Applying the structural reflection principle (2), there is an elementary embedding

σ: (N, σ^-1(ℙ), σ^-1(ȧ), σ^-1(b), ⟨σ^-1(x_α) ∣ α<δ⟩)→(H_γ, ℙ, ȧ, b, ⟨x_α ∣ α<δ⟩)

where N∈H_κ is transitive and there is an N-generic filter F⊆σ^-1(ℙ) such that ϕ(σ^-1(ȧ)^F, σ^-1(b)) holds in V. Since by construction each x_α∈X is in the range of σ as well, Σ_n CFA_<κ(Γ) holds.

This result suggests that there should be generalized forcing axioms corresponding to all of Bagaria's (and Bagaria and Lücke's in <cit.> and elsewhere) structural reflection principles. That is, for any suitable forcing class there should be a Σ_n-product structural reflection forcing axiom equiconsistent with a strong cardinal for C^(n-1), a Σ_n-weak structural reflection forcing axiom equiconsistent with a cardinal strongly unfoldable for C^(n-1) (and presumably equivalent to Σ_n CBFA_<κ^<κ^+, given that strong unfoldability is equivalent to +1-reflection), a Σ_n-exact structural reflection forcing axiom of consistency strength somewhere below a 2-huge cardinal, and so on. However, I will not further explore that possibility here.

Finally, in line with the results of Bagaria (mentioned after Proposition <ref>) on the connection between the Vopěnka scheme and strengthenings of extendibility, we can get a sort of generic C^(n)-extendibility from Σ_n+2-correct forcing axioms:

If n is a positive integer such that Σ_n+2 CFA_<κ(Γ) holds for some regular uncountable cardinal κ and n+1-nice forcing class Γ, then for every α∈C^(n) above κ, there is a Γ-extension V[G], a β∈(C^(n))^V[G], and an elementary embedding j: V_α^V→V_β^V[G] in V[G] with critical point κ and j(κ)≥α.

Assume that no such j exists and let ϕ(V_α, κ) denote the assertion that for all ℙ∈Γ and ordinals β, if ℙ forces that there is an elementary embedding j:V̌_α→V_β with crit(j)=κ̌ and j(κ̌)∉V̌_α, then ℙ forces that β∉C^(n). Then ϕ is Π_n+1 because by Corollary <ref> the assertion β∉C^(n) is Σ_n, quantifying over all ℙ∈Γ adds a Π_n+1 disjunct, and the rest of the statement adds only universal quantifiers and conjuncts or disjuncts of the same or lower complexity. Furthermore, it is provably Γ-persistent, since if any ℚ∈Γ forces that ℙ̇ and β form a counterexample to ϕ(V_α, κ), then ℚ*ℙ̇ and β are already a counterexample in V. Thus for sufficiently large regular γ, there is a transitive N∈H_κ and an elementary embedding σ: N→H_γ whose range is transitive below κ and contains V_α and κ such that ϕ(σ^-1(V_α), κ̅) holds, where as usual κ̅:=σ^-1(κ). However, the trivial forcing forces that α is an ordinal such that σ restricts to an elementary embedding j:σ^-1(V_α)→V_α with crit(j)=crit(σ)=κ̅, j(κ̅)=σ(κ̅)=κ∉σ^-1(V_α), and α∈C^(n), contradicting ϕ(σ^-1(V_α), κ̅). Hence some Γ-extension must contain the desired elementary embedding.

This is somewhat reminiscent of the Woodin-Cox characterization of forcing axioms in terms of a sort of generic supercompactness for C^(n-1) in Section <ref>. However, the above result gives us the generic embedding specifically in a Γ-extension, whereas the poset from Cox's argument used in Theorem <ref> need not be in Γ.

CHAPTER: ANALYSES OF THE FORMULA COMPLEXITY OF COMMON CONCEPTS

To simplify proofs which involve analyzing formula complexity, this appendix separates out the analyses for various assertions that come up frequently.

"α is a cardinal": Π_1. We can express this as "for all β<α and all functions f:β→α, there is some γ<α such that for all δ<β, f(δ)≠γ."
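For concreteness, the quoted clause transcribes to the following formula, in which "f:β→α" abbreviates a Δ_0 condition, so that the only unbounded quantifier is the one over f:

∀β<α ∀f (f:β→α → ∃γ<α ∀δ<β f(δ)≠γ),

which is therefore Π_1.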
"λ=κ^+α": Δ_2 Π_2 expression: "λ is a cardinal and for all strictly order-preserving functions f:α→ [κ, λ), there is some β<α such that f(β) is not a cardinal or, for all γ<λ, γ<κ or γ is not a cardinal or there is some β<α such that f(β)=γ." Every concept used involves only bounded quantifiers except "is a cardinal", which is Π_1, "is not a cardinal", which is Σ_1, and the quantification over order-preserving functions. Σ_2 expression: "λ is a cardinal and there exists a strictly order-preserving function f:α→ [κ, λ) such that for all β<α, f(β) is a cardinal, and for all γ<λ, γ<κ or γ is not a cardinal or there is some β<α such that f(β)=γ." "x=V_α": Π_1 The rank function can be defined in a Δ_1 way, since it only involves bounded quantifiers and uniquely defined entities which can be referred to with an existential or universal quantifier interchangeably. Thus "for all y, y∈ x iff the rank of y is less than α" is Π_1. "x=H_κ": Π_1 We can express this as "for all y, y∈ x iff there is an α<κ and a function f∈ x with dom(f)=α and rng(f) a transitive superset of y." "S⊆ [H_λ]^<κ is stationary": Π_1 (with S, λ, and κ as parameters) We can express this as "for all functions h:[H_λ]^<ω→ H_λ, there is some Z∈ S such that for all x, y∈ H_κ, if x∈ y∈ Z, then x∈ Z, and for all t∈ [H_λ]^<ω, if t⊂ Z then h(t)∈ Z." H_κ has a Π_1 definition by the above, [H_λ]^<ω has a Π_1 definition for the same reason as the full power set does, and everything else involves only bounded quantifiers. "Mϕ(a)" (M a set model containing a, ϕ an arbitrary formula in the language of M): Δ_1 This is expressible either as "there exists a function assigning truth values to the subformulas of ϕ which obeys the Tarski recursion relations and assigns `True' to ϕ(a)" or "all functions assigning truth values to the subformulas of ϕ which obey the Tarski recursion relations assign `True' to ϕ(a)". Since the Tarski relations only involve quantifying over M, these are Σ_1 and Π_1 respectively. "p⊩_ℙϕ(ȧ)": same complexity as ϕ (minimum Δ_1) The definition of the forcing relation (see, e.g. Kunen's Definition IV.2.42) involves a recursion on subformulas of ϕ as above, where each step only involves quantifiers over ℙ, except for steps dealing with the quantifiers of ϕ, in which case we need a matching quantifier over ℙ-names. Thus the overall number of quantifiers is the same, except we need an extra quantifier for the function handling the recursion, which as above can be either existential or universal, and thus only increases overall complexity if ϕ has no unbounded quantifiers of its own. CHAPTER: VARIOUS USEFUL LEMMAS This appendix collects and proves various basic facts I use repeatedly, for the benefit of any readers unfamiliar with any of them. For any uncountable cardinal κ, H_κ≺_Σ_1 V. First, observe that transitive classes agree on the truth of Δ_0 formulas with parameters in both of them, since the element relations are the same and all quantifiers can be taken to range over the same domains. Now if V∃ y ψ(a, y), where ψ is Δ_0 and a∈ H_κ, then for any b such that ψ(a, b) holds and any transitive set T containing a and b, Tψ(a, b). By the Lowenheim-Skolem theorem, there is an elementary substructure of T of size less than κ containing b and all elements of trcl({a}), which collapses to a transitive set in H_κ containing a and some b̅ such that ψ(a, b̅) holds. Since ψ is Δ_0, it follows that H_κψ(a, b̅) and thus H_κ∃ y ψ(a, y). 
Lemma. If κ≤λ are uncountable cardinals with κ regular, then for any club C⊆κ, {Z∈ [H_λ]^<κ ∣ Z∩κ∈ C} is a club in [H_λ]^<κ, and for any club C'⊆ [H_λ]^<κ, {Z∩κ ∣ Z∈ C', Z∩κ∈κ} contains a club in κ. Similar statements hold for stationary sets.

Proof. Given any X∈ [H_λ]^<κ, let α∈ C be greater than any ordinal in X∩κ. Then X∪α is a superset of X in {Z∈ [H_λ]^<κ ∣ Z∩κ∈ C}, so the latter set is unbounded. To see that it is closed, note that if γ<κ and ⟨X_α ∣ α<γ⟩ is a chain with X_α∩κ∈ C for all α<γ, then (⋃_α<γX_α)∩κ = sup{X_α∩κ ∣ α<γ}∈ C because C is closed, so the union is in {Z∈ [H_λ]^<κ ∣ Z∩κ∈ C} as well.

For C', first note that {Z∈ [H_λ]^<κ ∣ Z∩κ∈κ} is a club, so by intersecting it with C' we can assume that Z∩κ is transitive for all Z∈ C'. Now we recursively construct a sequence ⟨Z_α ∣ α<κ⟩ in C'. Let Z_0 be an arbitrary element of C', at successor stages invoke the unboundedness of C' to arrange Z_α∪{Z_α∩κ}⊆ Z_α+1∈ C', and at limit stages γ<κ use the closure of C' to set Z_γ:=⋃_α<γ Z_α. Then the map α↦ Z_α∩κ is a strictly increasing function κ→κ, so its range is unbounded in κ, and by the definition at limit stages it is continuous, so its range is also closed. Hence {Z_α∩κ ∣ α<κ} is a club in κ contained in {Z∩κ ∣ Z∈ C', Z∩κ∈κ}, as desired.

For stationary sets, if S⊆κ is stationary, then for any club C⊆ [H_λ]^<κ, S∩{Z∩κ ∣ Z∈ C, Z∩κ∈κ}≠∅ by the above, so every Z such that Z∩κ is in that intersection lies in {Z∈ [H_λ]^<κ ∣ Z∩κ∈ S}∩ C. Thus the latter intersection is also nonempty, so {Z∈ [H_λ]^<κ ∣ Z∩κ∈ S} is stationary. The argument in the other direction works similarly.

Lemma. If j:M→ N is an elementary embedding between transitive classes and X∈ M is transitive, then j↾ X is an elementary embedding X→ j(X). Furthermore, for any Y∈ M, j↾ X is an elementary embedding of the structure (X, ∈, Y∩ X) into (j(X), ∈, j(Y)∩ j(X)).

Proof. For any formula ϕ in the language of set theory with an added predicate for Y and any a∈ X such that (X, ∈, Y∩ X) ⊨ ϕ(a), we have M ⊨ "(X, ∈, Y∩ X) ⊨ ϕ(a)", so by elementarity N ⊨ "(j(X), ∈, j(Y)∩ j(X)) ⊨ ϕ(j(a))".

Lemma. If j:M→ N is an elementary embedding between transitive classes satisfying ZFC^-, ℙ∈ M is a forcing poset, G⊆ℙ is an M-generic filter, and H⊆ j(ℙ) is an N-generic filter with j"G⊆ H, then j extends to an elementary embedding j^*:M[G]→ N[H]. Furthermore, rng(j^*)={ẋ^H ∣ ẋ∈ N^j(ℙ)∩ rng(j)}.

Proof. Let j^*(ẋ^G)=j(ẋ)^H. To see this is a well-defined elementary embedding, let ẋ∈ M be any ℙ-name such that M[G] ⊨ ϕ(ẋ^G). Then there is some p∈ G such that p⊩ϕ(ẋ), so by elementarity j(p)⊩ϕ(j(ẋ)). Since j(p)∈ H by hypothesis, N[H] ⊨ ϕ(j(ẋ)^H), as desired. By considering check names, we see that j^* extends j. The "furthermore" is immediate from the definitions of j^* and M[G].

Lemma. For any infinite cardinal κ such that ℙ∈ H_κ is a forcing poset or Boolean algebra, if G⊂ℙ is V-generic, then H_κ[G]=H_κ^V[G].

Proof. H_κ[G]⊆ H_κ^V[G] follows immediately from the observation that if a name has fewer than κ elements, the set it names in V[G] must as well. For the reverse inclusion, we proceed by induction on κ. In the limit case, if x∈ H_κ^V[G], then x∈ H_λ^V[G] for some λ<κ (since κ is still a limit cardinal in V[G], as ℙ is in H_κ), so inductively, x=ẋ^G for some ẋ∈ H_λ^V. In the successor case, assume toward a contradiction that the desired inclusion fails and let x be ∈-minimal in H_κ^V[G] − H_κ[G]. Then x⊆ H_κ[G]. In V[G], let γ be the cardinality of x (so γ<κ), and let f:γ→ H_κ^V∩ V^ℙ be such that x={f(ξ)^G ∣ ξ<γ}. Since ℙ is κ-cc, there is in V a function g:γ→ H_κ^V such that for all ξ<γ, f(ξ)∈ g(ξ) and the cardinality of g(ξ) is less than κ. Since κ is regular, r:=⋃ rng(g)∈ H_κ.
Now let x=ẋ^G, where ẋ is a ℙ-name but perhaps not in H_κ. We can then define in V: ẏ:={(ż,p) ∣ ż∈ r, p∈ℙ, p⊩_ℙ ż∈ẋ}. Then ẏ∈ H_κ and ẏ^G=x, a contradiction.

Lemma. If λ<γ are regular cardinals, Y≺ H_γ is such that Y∩λ∈λ, and A∈ Y has cardinality less than λ, then A⊂ Y.

Proof. Let κ:=|A|<λ; then by elementarity κ∈ Y and there must be some bijection f:κ→ A in Y. By hypothesis, each α<κ must then be in Y, so A={f(α) ∣ α<κ}⊂ Y.

Lemma. Suppose κ is a regular cardinal, ⟨ℙ_α ∣ α≤κ⟩ is a forcing iteration such that ℙ_α∈ H_κ for all α<κ, ℙ_κ is the direct limit of the preceding ℙ_α, ℙ_κ satisfies the κ-cc, and G⊂ℙ_κ is a V-generic filter. Then if a∈ H_κ^V[G], there is a β<κ and a ℙ_β-name ȧ∈ H_κ^V such that ȧ^G_β=a.

Proof. Taking δ=|trcl({a})|^V[G]<κ, we can code a by some A⊂δ. Specifically, choosing some bijection f:δ→δ×δ in V, let A be such that (trcl({a}), ∈↾ trcl({a}))≅ (δ, f"A), with the isomorphism given by some bijection trcl({a})→δ in V[G]. Let Ȧ be a nice ℙ_κ-name for A, i.e., Ȧ=⋃_α<δ{α̌}× X_α for some sequence ⟨X_α ∣ α<δ⟩ of antichains of ℙ_κ. By the κ-cc and the regularity of κ, fewer than κ conditions appear in Ȧ. Since each condition of ℙ_κ is supported in a ℙ_α for α<κ, again applying the regularity of κ there is some β<κ such that the nontrivial coordinates of every forcing condition used in Ȧ lie in ℙ_β. Taking Ȧ↾β to be the ℙ_β-name obtained by truncating the forcing conditions of Ȧ at the β stage, (Ȧ↾β)^G_β=A. Since V[G_β] contains A and f, it can compute a as the ∈-maximal element of the transitive collapse of (δ, f"A), so there must be some ℙ_β-name for a. By Lemma <ref>, we can take this ȧ∈ H_κ^V.

Lemma. If M is an inner model closed under λ-sequences, ℙ∈ M is a λ^+-cc forcing poset, and G⊂ℙ is a V-generic filter, then M[G] is closed under λ-sequences in V[G].

Proof. Let f:λ→ M[G] be a sequence in V[G] and let ḟ be a name for it in V. We construct a new name Ḟ for f as follows: for each α<λ, given p incompatible with all conditions we have used with α so far, we find a q≤ p and a name ẋ_α,q∈ M such that q⊩_ℙ ḟ(α̌)=ẋ_α,q (assuming without loss of generality that 1_ℙ forces ḟ to be a function with domain λ). Letting σ_α,q be the canonical name for the ordered pair ⟨α̌, ẋ_α,q⟩, we put ⟨σ_α,q, q⟩ into Ḟ. We continue until, for each α, our q form a maximal antichain A_α of ℙ. By the λ^+-cc, Ḟ consists of λ·λ=λ ordered pairs. Since we built it from names in M, conditions of ℙ, and ordinals, it is a subset of M of size λ, so it is in M. For each α, there is a unique q∈ A_α∩ G which forces that Ḟ^G(α)=ẋ_α,q^G=ḟ^G(α), so f=Ḟ^G∈ M[G].
http://arxiv.org/abs/2405.09728v1
20240515232859
Hidden zero modes and topology of multiband non-Hermitian systems
[ "K. Monkman", "J. Sirker" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cond-mat.str-el", "math-ph", "math.MP", "quant-ph" ]
^1Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
^2Department of Physics and Astronomy and Manitoba Quantum Institute, University of Manitoba, Winnipeg, Canada R3T 2N2

In a finite non-Hermitian system, the number of zero modes does not necessarily reflect the topology of the system. This is known as the breakdown of the bulk-boundary correspondence and has led to misconceptions about the topological protection of edge modes in such systems. Here we show why this breakdown does occur and that it typically results in hidden zero modes, extremely long-lived zero-energy excitations, which are only revealed when considering the singular value instead of the eigenvalue spectrum. We point out, furthermore, that in a finite multiband non-Hermitian system with Hamiltonian H, one needs to consider also the reflected Hamiltonian H̃, which is in general distinct from the adjoint H^†, to properly relate the number of protected zeroes to the winding number of H.

Hidden zero modes and topology of multiband non-Hermitian systems
K. Monkman^1 and Jesko Sirker^2
May 20, 2024

§ INTRODUCTION

In quantum physics, we usually consider observables which are Hermitian operators. These operators have a real eigenspectrum, guaranteeing that expectation values are real and time evolution is unitary. However, it was pointed out about 20 years ago that one can replace the condition of Hermiticity by the less stringent condition of space-time reflection (PT) symmetry and, if this symmetry is unbroken, still obtain a real spectrum <cit.>. Another motivation to study non-Hermitian Hamiltonians comes from Master equations for open quantum systems <cit.>. If one ignores quantum jumps, which is a reasonable approximation in certain limits, one can rewrite such a Master equation as a non-Hermitian Hamiltonian <cit.>. This approach has been used in recent times to analyze and interpret experimental results in optical and magnetic systems <cit.>.

Non-Hermitian systems do show a number of phenomena which are not present in the Hermitian case. Most intriguingly, their spectrum is extremely sensitive to small perturbations <cit.>, see also App. <ref>. The best known example is the non-Hermitian skin effect: changing the boundary conditions from periodic to open can lead to a localization of a macroscopic number of states at the boundaries <cit.>. Another well-known phenomenon is that of exceptional points. At these points, two eigenvalues and also their corresponding eigenvectors coalesce <cit.>. This is, of course, not possible in a Hermitian system, where the eigenvectors always form an orthogonal system.

From a theoretical perspective, an important step to understand these phenomena is to classify Gaussian non-Hermitian systems <cit.>. Here, topology plays a crucial role because it is robust against small perturbations. In one-dimensional Hermitian systems, topological order is only possible if the system possesses additional symmetries. In the case of non-spatial symmetries (time reversal, particle-hole, and chiral symmetry), this leads to the tenfold classification of symmetry-protected topological (SPT) order <cit.>.
In contrast, one-dimensional non-Hermitian systems can have a non-trivial topology even without additional symmetries because the eigenspectrum is complex and the determinant of the Bloch Hamiltonian, det h(k): [0,2π)→ℂ, can have a non-zero winding number in the complex plane as a function of momentum k. It has been shown that if such a winding around a reference point E in the complex plane exists for a single-band model, then there will be a skin effect for open boundaries <cit.>. It has therefore been suggested that the standard bulk topological invariants are not useful to define a bulk-boundary correspondence for non-Hermitian systems and that they have to be replaced by other invariants <cit.> or by an invariant based on a modified Hermitian Bloch Hamiltonian <cit.>. However, zero-energy edge modes in finite non-Hermitian systems are, in general, fragile to perturbations <cit.> (see also App. <ref>), implying that they often lack topological protection, thus making a bulk-boundary correspondence based on such modes questionable.

As an alternative, the singular value spectrum has been put forward <cit.>. The singular values s_i of a system with Hamiltonian H are the square roots of the eigenvalues of H^† H. For a Hermitian system, we therefore have that s_i=|λ_i|, where λ_i are the eigenvalues of H. In this case, the standard bulk-boundary correspondence also applies to the singular value spectrum. However, for a non-Hermitian H the eigenvalue and the singular value spectrum are different. While the former is unstable to small perturbations, the latter is stable and important properties can be directly inferred from the topological winding number.

In this article we will show why this is the case. We show, in particular, that non-Hermitian systems quite generally have hidden zero modes if they have a non-trivial topology. These are topologically protected edge modes with exact zero eigenvalues for semi-infinite boundaries which are, however, not present and do not converge to zero with system size for a finite system. They do, however, get mapped exponentially close to zero by a finite-system Hamiltonian and thus represent extremely long-lived states which are physically highly relevant and might be experimentally indistinguishable from true eigenstates. We will show, furthermore, that in the multiband case one has to consider not only the Hamiltonian H but also the reflected Hamiltonian H̃. We will put the relation between topology, zero modes for semi-infinite boundaries, and protected singular values for finite systems, which converge to zero with increasing system size, on a firm footing by using theorems known from the study of Toeplitz operators.

Using several examples, we will highlight that the eigenspectrum is, in general, insensitive to the topology of the system while the singular value spectrum can be directly related to the winding number. In App. <ref> we will show, furthermore, that the bulk-boundary correspondence put forward here fully explains the topological protection of stable edge modes in models considered previously in the literature. On the other hand, the approaches considered in Refs. <cit.> do predict zero modes in open systems; however, these modes are generically, as we will show, unstable to small perturbations and thus not topologically protected.
§ TOPOLOGY AND WINDING NUMBERS

In a tight-binding approximation for a non-interacting system, electrons moving on a periodic lattice can be described by H=∑_k Ψ^†_k h(k) Ψ_k, where Ψ_k=(c_k^1,c_k^2,⋯,c_k^N) for a unit cell with N elements, and h(k) is the N×N Bloch Hamiltonian. If det h(k) has no zeroes in k∈ [0,2π) then we can define a winding number by

ℐ = ∫_0^2π dk/2π ∂_k ln det h(k) = ∫_0^2π dk/2π ∑_j ∂_k λ_j(k)/λ_j(k).

For the second equality, we have assumed that h(k) is diagonalizable with eigenvalues λ_j(k), j=1,⋯,N. Note that λ_j(k) is periodic, λ_j(0)=λ_j(2π). This implies, in particular, that if all eigenvalues are real then ℐ=0. A generic one-dimensional Hermitian system is thus always topologically trivial. The only way for a one-dimensional Hermitian system to have non-trivial topology is symmetry-protected topological (SPT) order. These symmetries lead to a block structure of h(k), and topology can be defined based on the properties of these blocks. This leads to the tenfold classification scheme for non-spatial symmetries and to topological crystalline orders for spatial symmetries <cit.>. In contrast, ℐ≠0 is a generic property of non-Hermitian systems which does not require the presence of additional symmetries.

The bulk-boundary correspondence for SPT phases of Hermitian systems connects the bulk topological invariant with the number of gapless protected edge modes in a system with boundaries. In a non-Hermitian system, on the other hand, which has a non-zero winding number around a reference energy E, we have the skin effect <cit.>. I.e., a macroscopic number of modes become localized in a system with boundaries but they are fragile with respect to small perturbations.

§ TOPOLOGY IN ONE-DIMENSIONAL SEMI-INFINITE SYSTEMS

Here we want to elucidate in detail the proper bulk-boundary correspondence in the non-Hermitian case. First, we consider a semi-infinite system with a single boundary. We define Fourier coefficients h_j = (1/2π) ∫_0^2π h(k) e^{ikj} dk, which leads to the real-space matrix

H = [ h_0   h_1   h_2   …
      h_-1  h_0   h_1   …
      h_-2  h_-1  h_0   …
      ⋮     ⋮     ⋮     ⋱ ]

where each of the h_j matrices has size N equal to that of the unit cell. H thus has the form of a block Toeplitz operator. For such an operator, the index is defined as

ind(H) = D(ker(H)) − D(ker(H^†))

with ker(H^†)=ker(H^T) and D denoting the dimension. For a Hermitian system, we always have ind(H)=0. The fact that the index can be non-zero tells us that the right and left eigenspectra of systems with semi-infinite boundary conditions are in general not the same. Furthermore, Gohberg's index theorem directly relates the winding number (<ref>) with the index (<ref>) by <cit.>

ℐ = −ind(H).

Thus, the winding number ℐ tells us directly about the difference in the number of right and left zero eigenvalues of H. It is also very important to distinguish between the case where the h_j are N×N blocks with N>1 and the scalar case, N=1, where the h_j are just complex numbers. In the latter case, Coburn's lemma states that either D(ker(H))=0 or D(ker(H^†))=0. I.e., in this case the sign of a non-zero winding number ℐ tells us immediately whether H has right or left zeroes and |ℐ| gives the number of such zeroes.

§.§ Example 1

To illustrate these points, we consider an example where h(k)=1+2e^{−ik}, which means that ℐ=−ind(H)=−1 and the Fourier coefficients are h_0=1 and h_1=2, with all the other ones being equal to zero. Coburn's lemma then implies that D(ker(H))=1 and D(ker(H^†))=0, i.e., H has exactly one right zero mode.
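As a quick numerical sanity check of Example 1, the winding number ℐ can be estimated from the accumulated phase of det h(k) along the Brillouin zone. The following Python snippet is a minimal sketch of that check; the discretization and variable names are our own and not part of the paper:

```python
import numpy as np

# winding of det h(k) = 1 + 2 exp(-ik) around the origin, k in [0, 2*pi]
k = np.linspace(0.0, 2.0 * np.pi, 2001)
det_h = 1.0 + 2.0 * np.exp(-1j * k)
phase = np.unwrap(np.angle(det_h))            # continuous phase along the loop
winding = (phase[-1] - phase[0]) / (2.0 * np.pi)
print(round(winding))                          # expected output: -1
```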
In this simple case, we can explicitly construct the right zero mode by considering Hv=0 with v=[v_1 v_2 …]^T and v_j∈ℂ. This leads to the simple recurrence relation v_j + 2v_{j+1} = 0 for the vector coefficients. A normalized solution is given by v_j=(−1)^{j−1}√3/2^j, which means that the zero mode is exponentially localized.

§ FINITE SYSTEMS AND HIDDEN ZERO MODES

In a finite Hermitian system with non-trivial SPT order and system size L, zero modes can typically be easily identified as eigenvalues which scale as |λ|∼exp(−L). This, however, is in general not the case in a non-Hermitian system. Here, zero modes can be hidden. This can be understood by considering the example discussed above. If we consider H_L v=0 for a finite matrix H_L and vector v=[v_1 v_2 … v_L]^T, then the recurrence relation above is supplemented by the boundary condition v_L=0. In this case, the only solution is v≡0, which is not a proper solution. The protected zero mode, which does exist in the thermodynamic limit, cannot be found by considering the eigenspectrum of a finite system.

More generally, we call a vector v a hidden zero mode in a finite system with Hamiltonian H_L if

lim_{L→∞} ‖H_L v‖ = 0.

In the example, the truncated vector v=[v_1 v_2 … v_L]^T with v_j=(−1)^{j−1}√3/2^j is a hidden zero mode with lim_{L→∞} ‖H_L v‖ = lim_{L→∞} |v_L| = 0.

Another issue which needs to be addressed is that a finite system has two boundaries. To study the properties of the second boundary we also need the real-space matrix corresponding to h(−k), which is given by H̃=H(h_j→ h_{−j}) and describes the reflected Hamiltonian. In the scalar case H̃=H^T; however, this is no longer true in the block case. If the winding number of H is ℐ then the winding number of H̃ is ℐ̃=−ℐ.

§.§ Example 2

This is best illustrated by another example. Consider the Hamiltonian in k-space given by

h(k) = [ e^{ik} 1 0; 0 e^{−ik} 0; 0 1 e^{ik} ]

with Fourier coefficients

h_0 = [ 0 1 0; 0 0 0; 0 1 0 ],   h_{−1} = [ 1 0 0; 0 0 0; 0 0 1 ],   h_1 = [ 0 0 0; 0 1 0; 0 0 0 ]

and h_j=0 otherwise. In this case it is easy to see from Fig. <ref> that D(ker(H))=0, D(ker(H̃))=2, D(ker(H^†))=1, and D(ker(H̃^†))=1. For the winding numbers it follows from the index theorem (<ref>,<ref>) that ℐ=D(ker(H^†)) − D(ker(H))=1 and ℐ̃=D(ker(H̃^†)) − D(ker(H̃))=−1. This simple example with unidirectional hopping shows that the zero modes of H and H̃ are very different and that, in the non-scalar case, H̃≠ H^T. If we consider this Hamiltonian on a finite line then we find that it does have two zero modes which are fully localized at the right boundary. I.e., the zero modes are not hidden, in contrast to our Example 1. However, only one is topologically protected while the other one can be removed by a small perturbation.

§ SINGULAR VALUES AND HIDDEN ZERO MODES

The instability of the eigenspectrum in the non-Hermitian case, and the issues of the two distinct boundaries and the hidden zero modes, lead to the question of how a bulk-boundary correspondence can be properly formulated. Here, the general theory of truncated Toeplitz operators provides an answer <cit.>. While relatively little is known about the eigenspectrum of such operators, in particular in the non-scalar case, the K-splitting theorem directly relates the index of the operator, and thus also its winding number, to the spectrum of singular values. Assume we have a Hamiltonian h(k) where det h(k) has no zeroes in k∈ [0,2π).
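The hidden zero mode of Example 1 is easy to exhibit numerically: the truncated Toeplitz matrix H_L has h_0 = 1 on the diagonal and h_1 = 2 on the first superdiagonal (entry pattern (H)_{mn} = h_{n−m}), its eigenvalues all equal 1 for every finite L, and yet its smallest singular value decays like 2^{−L}. A minimal Python sketch, our own and not the authors' code:

```python
import numpy as np

for L in [5, 10, 20, 40]:
    H = np.eye(L) + 2.0 * np.eye(L, k=1)      # h_0 = 1, h_1 = 2, all else 0
    s_min = np.linalg.svd(H, compute_uv=False).min()
    eig_min = np.abs(np.linalg.eigvals(H)).min()
    print(L, s_min, eig_min)
# s_min decays roughly like 2**(-L) (the hidden zero mode), while the
# eigenvalues of the triangular matrix H_L all stay pinned at 1
```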
Then we define

K = D(ker(H)) + D(ker(H̃)),

and the singular value spectrum s_n(H_L) of a finite chain of length L has the K-splitting property that

lim_{L→∞} s_n(H_L) = 0 for 1 ≤ n ≤ K,   and   lim_{L→∞} s_n(H_L) > 0 for n > K.

I.e., there are exactly K singular values which go to zero in the thermodynamic limit. Now we want to relate this property to the winding number (<ref>). Let us start with the scalar case N=1. In this case, H̃ = H^T and D(ker(H̃))=D(ker(H^†)). In addition, we also know from Coburn's lemma that either D(ker(H))=0 or D(ker(H^†))=0. We can therefore also write K=|D(ker(H))−D(ker(H^†))|=|ℐ|. I.e., in the scalar case there are exactly |ℐ|-many protected singular values.

The block case, N>1, is slightly more complicated. We know that ℐ̃=−ℐ, which implies that we can also write K=D(ker(H^†))+D(ker(H̃^†)) and therefore

2K = D(ker(H)) + D(ker(H̃)) + D(ker(H^†)) + D(ker(H̃^†))
   ≥ |D(ker(H))−D(ker(H^†))| + |D(ker(H̃))−D(ker(H̃^†))| = 2|ℐ|.

I.e., in the block case we only know that there are at least |ℐ|≤ K singular values which will go to zero. Nevertheless, we obtain in both cases a clear bulk-boundary correspondence between the bulk winding number and the singular value spectrum of a finite system.

Next, we want to connect the number of singular values K going to zero with the number of zero modes in the eigenspectrum. Consider the singular value decomposition H_L=USV^† with U, V unitary and S diagonal and positive. It follows that H_L^† H_L=VS^2V^†. Let us denote the column vectors of V by v_n, those of U by u_n, and the singular values on the diagonal of S by s_n. We then obtain

H_L^† H_L v_n = s_n^2 v_n  ⇒  H_L v_n = s_n u_n.

Now consider, in particular, one of the K singular values with lim_{L→∞} s_n=0. In this case we have

lim_{L→∞} ‖H_L v_n‖ = lim_{L→∞} ‖s_n u_n‖ = lim_{L→∞} s_n = 0,

where we have used that U is unitary. We thus find one of the main results of this letter: Every singular value s_n which vanishes in the thermodynamic limit is directly connected to a (hidden) zero mode v_n of the Hamiltonian H_L. The K-splitting theorem then connects this to the winding number. For a system with winding number ℐ there are at least |ℐ| exact or hidden zero modes. In the scalar case N=1, there are exactly |ℐ| many.

§ HIDDEN ZERO MODES IN A SUB-LATTICE SYMMETRIC MULTIBAND MODEL

The following example shows several of these phenomena. Consider the Bloch Hamiltonian

h(k) = [ 0 0 0.9 e^{ik}; 0 0 1 1; −0.9 e^{iαk} 0 0; x y 0 0 ]

with α=±1 and real parameters x>0, y>0. The Hamiltonian has a sub-lattice symmetry with an upper right 2×2 block h_1(k) and a lower left 2×2 block h_2(k). We can define winding numbers for the two blocks which are, according to Eq. (<ref>), determined by their determinants, det h_1(k)=0.9−e^{ik} and det h_2(k)=−0.9y−x e^{iαk}. The winding of the upper block is therefore ℐ_1=1. The lower block has winding ℐ_2=α if x>0.9y and no winding, ℐ_2=0, otherwise.

Let us first consider the case α=1. In this case we can have a total winding ℐ=ℐ_1+ℐ_2=2 if x>0.9y and ℐ=1 otherwise. In the former case, there are two singular values for the open system which are non-degenerate but both approach zero for system size L→∞. For the ℐ=1 case, on the other hand, there is only one singular value approaching zero for L→∞. The singular values thus clearly distinguish between these topologically distinct cases, see Fig. <ref>. In both cases there is a skin effect and all eigenstates are localized at the left boundary.
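The singular-value counts for the multiband model can be checked along the same lines. The sketch below is our own; it assumes the Fourier convention h(k) = ∑_j h_j e^{−ikj} (so the e^{ik} entries sit in the block h_{−1}) and the block pattern (H)_{mn} = h_{n−m}. It builds the truncated block Toeplitz matrix for α = 1 and prints the three smallest singular values for the two regimes:

```python
import numpy as np

def block_toeplitz(h, L):
    # truncated block Toeplitz matrix with block pattern H[m, n] = h_{n-m}
    N = next(iter(h.values())).shape[0]
    H = np.zeros((L * N, L * N), dtype=complex)
    for m in range(L):
        for n in range(L):
            if (n - m) in h:
                H[m * N:(m + 1) * N, n * N:(n + 1) * N] = h[n - m]
    return H

def four_band_blocks(x, y):
    # alpha = +1: the two e^{ik} matrix elements go into h_{-1}
    h0 = np.array([[0, 0, 0.9, 0], [0, 0, 1, 1],
                   [-0.9, 0, 0, 0], [x, y, 0, 0]], dtype=complex)
    hm1 = np.zeros((4, 4), dtype=complex)
    hm1[0, 3] = hm1[2, 1] = 1.0
    return {0: h0, -1: hm1}

for x, y in [(2.0, 1.0), (0.5, 1.0)]:          # winding 2 vs winding 1
    s = np.sort(np.linalg.svd(block_toeplitz(four_band_blocks(x, y), 40),
                              compute_uv=False))
    print(x, y, s[:3])                          # expect two vs one tiny value(s)
```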
What makes this model a very illustrative example is that we can calculate the characteristic polynomial for the open system with L/4∈ℕ unit cells exactly, leading to

[(λ−√y)(λ+√y)(λ−0.9)(λ+0.9)]^{L/4} = 0.

This means that the eigenvalues are ±√y and ±0.9, each with a multiplicity of L/4 and completely independent of x. We can, in particular, switch between the cases ℐ=2 and ℐ=1 without changing the eigenvalue spectrum at all. It is completely insensitive to the change of topology. The zero modes in the eigenspectrum are only present in the semi-infinite system. For a finite system, these hidden modes can only be seen by considering the singular value spectrum, as illustrated in Fig. <ref>. Note also that in both cases there is no line gap.

The case α=−1 further exemplifies some of the findings in this work. Now ℐ_2=−1 if x>0.9y and ℐ_2=0 otherwise, meaning that the total winding ℐ is either zero or 1. Naively, we might expect that for ℐ=0 the topology is trivial and there is no skin effect. This, however, is incorrect. With sub-lattice symmetry being present, there are two topological invariants, which can either be taken to be the windings of the blocks ℐ_{1,2} or the sum and the difference of these two invariants <cit.>. The number of singular values K is bounded from below by K≥|ℐ_1|+|ℐ_2|. We thus expect to have either two or just one protected singular value, consistent with the numerical results shown in Fig. <ref>. However, the single zero mode for the ℐ_1=1, ℐ_2=0 case is hidden, while two eigenvalues which go to zero are present for the ℐ_1=1, ℐ_2=−1 case. The latter result can be explained by noting that the system for ℐ_1=−ℐ_2 can be adiabatically deformed to a Hermitian system. The two zero eigenmodes then follow from the standard bulk-boundary correspondence, see also App. <ref>.

§ CONCLUSIONS

To conclude, we have shown why hidden zero modes appear in finite non-Hermitian systems and how a rigorous bulk-boundary correspondence can be established, based on well-known results for truncated Toeplitz operators, relating the winding number with the singular value spectrum. We also pointed out that zero eigenmodes in finite non-Hermitian systems are often not topologically protected and that there are important differences between the scalar case and multiband models. In experiments, we expect hidden zero modes to be almost indistinguishable from true zero-energy eigenmodes because they get mapped by the Hamiltonian exponentially close to zero and thus correspond to extremely long-lived states.

§ OTHER BULK-BOUNDARY CORRESPONDENCES

Here we want to consider other examples which have been studied in the literature and which have led to some misconceptions about what is and what is not topologically protected in a non-Hermitian system.

§.§ Non-Hermitian SSH model

In Refs. <cit.> a non-Hermitian Su-Schrieffer-Heeger (SSH) model was considered which has the following k-space Hamiltonian:

h(k) = [ 0  t_1+γ/2+e^{−ik}; t_1−γ/2+e^{ik}  0 ]

where t_1 and γ are real parameters. This model has sub-lattice symmetry σ_z h(k) σ_z = −h(k). We note that this is different from chiral symmetry, σ_z h(k) σ_z = −h^†(k), which this model does not possess. The Fourier coefficients of this model are given by

h_0 = [ 0  t_1+γ/2; t_1−γ/2  0 ],   h_1 = [ 0 1; 0 0 ],   h_{−1} = [ 0 0; 1 0 ],

with all other coefficients being equal to zero. We have two topological invariants: the winding numbers of the upper right element, ℐ_1, and of the lower left element, ℐ_2.
We see immediately that ℐ_1=−1 if −1<t_1+γ/2<1 and zero otherwise. Similarly, we have ℐ_2=+1 if −1<t_1−γ/2<1 and zero otherwise. The number of protected singular values is given by K≥|ℐ_1|+|ℐ_2| and we have three distinct regions with |ℐ_1|+|ℐ_2|=2, 1, 0.

Let us now consider the specific example which has led to some confusion in Ref. <cit.>. The example studied in Ref. <cit.> is similar but uses a different γ parameter which leads to a less complex phase diagram. For this reason we concentrate on Fig. 2 in Ref. <cit.>, where γ=4/3 and t_1 is used as a parameter. In this specific case, ℐ_1=−1 if −5/3<t_1<1/3 and ℐ_2=+1 if −1/3<t_1<5/3. We thus expect that there is one protected singular value which goes to zero in the thermodynamic limit for a finite system if 1/3<|t_1|<5/3 and two such protected singular values if −1/3<t_1<1/3. Outside of these regions, the model is topologically trivial and there are no protected modes. As shown in Fig. <ref> for a system with 40 unit cells, these results are in complete agreement with numerical calculations of the singular values.

What has led to some confusion is that there are two eigenvalues whose magnitude is going to zero with system size in a range which does not agree with the phase boundaries coming from the winding numbers. This has led to the claim that one needs to abandon the notion of topology based on invariants for the Bloch Hamiltonian h(k) <cit.>. In Ref. <cit.>, in particular, a 'non-Bloch topological invariant' is constructed which is meant to explain the topology of the system and the related topologically protected zero eigenmodes. This notion, however, is misguided because in part of the regime where the eigenvalues are zero there is no stability against small perturbations, implying that there is no topological protection. For the specific case considered, adding small random matrices to the Hamiltonian for a finite system clearly shows that the almost zero eigenvalues are only stable in −1/3<t_1<1/3, see Fig. <ref> right column. Outside of this regime, there is no topological protection, in contrast to what is implied by the non-Bloch invariant constructed in Ref. <cit.>. On the other hand, the singular values are completely stable against these perturbations, demonstrating that they are indeed topologically protected.

That there are topologically protected zero eigenmodes for −1/3<t_1<1/3 can be understood in the following manner: In this case we have ℐ_1=−1 and ℐ_2=+1 as for a chiral Hermitian system and, indeed, we can adiabatically deform the Hamiltonian (<ref>) to a Hermitian one in this regime. This can be done, for example, by changing γ/2 → −γ/2 in the upper right block. Along this path, the gap never closes and the winding numbers thus remain the same as well. We conclude that the zero eigenmodes are only protected in the regime where the model is adiabatically connected to a topologically non-trivial Hermitian model. Outside of this regime, there is no protection of eigenmodes with zero energy.

This leaves us with the task to explain what happens in the regime 1/3<|t_1|<5/3, where the winding numbers and singular values indicate that there is one protected mode for the semi-infinite chain. Based on the discussion in the main text, we expect again that for a finite system this will turn out to be a hidden zero mode. Because the Fourier coefficients (<ref>) are so simple, we can explicitly demonstrate that this is indeed the case. Note that we have to consider again both H and H̃ because the finite system has two boundaries.
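The singular-value phase boundaries quoted above are also easy to reproduce numerically. The following Python sketch (our own construction, not the authors' code) builds the finite SSH matrix from the Fourier blocks h_0, h_1, h_{−1} given above and prints the three smallest singular values for γ = 4/3 at three representative values of t_1:

```python
import numpy as np

def ssh_H(t1, g, cells):
    # real-space matrix with block pattern H[m, n] = h_{n-m}
    h0 = np.array([[0.0, t1 + g / 2], [t1 - g / 2, 0.0]])
    h1 = np.array([[0.0, 1.0], [0.0, 0.0]])
    hm1 = np.array([[0.0, 0.0], [1.0, 0.0]])
    return (np.kron(np.eye(cells), h0) + np.kron(np.eye(cells, k=1), h1)
            + np.kron(np.eye(cells, k=-1), hm1))

g = 4.0 / 3.0
for t1 in [0.0, 1.0, 2.0]:    # expected: 2, 1, and 0 near-zero singular values
    s = np.sort(np.linalg.svd(ssh_H(t1, g, 40), compute_uv=False))
    print(t1, s[:3])
```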
Let us start by considering H for semi-infinite boundaries. We want to solve Hv=0. We then find for the odd vector coefficients

v_1 = 0,   v_{2j−1} + (t_1−γ/2) v_{2j+1} = 0,   j = 1, 2, ⋯

This implies that all odd coefficients are zero, v_{2j−1}=0 for j=1, 2, ⋯. For the even vector coefficients we find the relation

(t_1+γ/2) v_{2j} + v_{2j+2} = 0,   j = 1, 2, ⋯

We thus have v_2 as a free parameter and v_{2j}=(−t_1−γ/2)^{j−1} v_2 for j=2, 3, ⋯. However, for this to be a proper solution, we also have to demand that the solution is normalizable! We find

‖v‖^2 = v_2^2 ∑_{n=0}^∞ (t_1+γ/2)^{2n},

which implies that we need −1 < t_1+γ/2 < 1. This is exactly the regime where ℐ_1=−1. Thus we have proven that there is a zero mode in this regime which has the vector coefficients v_{2j−1}=0, v_{2j}=√(1−(t_1+γ/2)^2) (−t_1−γ/2)^{j−1} for j=1, 2, ⋯. In a finite system, we simply truncate the vector and we see that it becomes a hidden zero mode with ‖Hv‖∼exp(−L).

For the second zero mode we have to consider the other boundary and thus H̃. Now we solve H̃w=0 and, after a completely analogous calculation, find that a normalizable solution exists if −1 < t_1−γ/2 < 1, which is exactly the regime where ℐ_2=+1. The normalized solution in this regime reads w_{2j}=0, w_{2j−1}=√(1−(t_1−γ/2)^2) (−t_1+γ/2)^{j−1} for j=1, 2, ⋯. The truncated vector for a finite system is again a hidden zero mode with ‖H̃w‖∼exp(−L).

We note that the same applies to the case γ=3 studied in Ref. <cit.>. The zero eigenmodes found in this case are unstable to perturbations as well. Instead, there is a stable hidden zero mode which exists in the regions 1/2<|t_1|<5/2. While the polarization operator devised in Ref. <cit.> is quantized and detects the unstable zero modes by construction, it is in our view not correct to call this a bulk-boundary correspondence because the polarization operator requires the states of the open system as an input. The topological properties of the bulk Hamiltonian h(k) do not enter at all.

To conclude, we have shown that the winding numbers for h(k) do predict the number of protected singular values, which correspond to the number of topologically protected stable hidden and visible zero modes. The only regime where the zero modes for a finite system are protected and not hidden is the regime where the model can be adiabatically deformed to a Hermitian chiral model with ℐ_1=−ℐ_2≠0. In this case, the standard bulk-boundary correspondence applies. Outside of this regime, the eigenmodes with zero energy are accidental and unstable and not topologically protected, in contradiction to Refs. <cit.>. Our theory of protected singular values and hidden zero modes, in contrast, provides the correct bulk-boundary correspondence also for this model.

§.§ Bloch Hamiltonian for open boundaries

Another attempt at formulating a bulk-boundary correspondence for non-Hermitian systems has been to define a Bloch Hamiltonian for the bulk of an open system <cit.>. From a general perspective, this appears already questionable because for a non-Hermitian system there is no separation between bulk and boundary properties in the way we are used to for Hermitian systems. For example, the energies in a Hermitian system just acquire 1/L corrections when cutting a periodic chain and, except for possible edge states, changes to the eigenstates are limited to regions close to the boundary. In contrast, changing the boundary conditions from periodic to open typically completely changes all the eigenenergies and eigenstates of a non-Hermitian system. The corrections are not small in order 1/L.
More specifically, it has been suggested that the spectrum and, in particular, the topological edge states in the model (<ref>) for OBC can be understood based on the Bloch Hamiltonian

h(k) = [ 0  √(t_1^2−(γ/2)^2)+e^{−ik}; √(t_1^2−(γ/2)^2)+e^{ik}  0 ].

This Hamiltonian is chiral and Hermitian and thus will, according to the standard bulk-boundary correspondence <cit.>, have two protected edge modes if ℐ_1=−ℐ_2=−1. This is the case if |√(t_1^2−(γ/2)^2)|<1. Clearly, this is very different from the conditions found for a single or two protected edge modes for the original model (<ref>). More specifically, the model (<ref>) predicts that there are two protected edge modes if either (i) |γ|/2<1 and −√(1+(γ/2)^2)<t_1<√(1+(γ/2)^2), or (ii) |γ|/2>1 and √(−1+(γ/2)^2)<t_1<√(1+(γ/2)^2) or −√(1+(γ/2)^2)<t_1<−√(−1+(γ/2)^2).

For the example γ=4/3 considered in Fig. <ref>, the chiral Hermitian Hamiltonian (<ref>) thus has two protected edge modes if −√(13/9)<t_1<√(13/9). This is indeed also the regime in which the eigenspectrum of the original model for OBC (<ref>) has two zero eigenmodes. However, as we have already shown in Fig. <ref>, these two zero modes are, in general, not topologically protected. It is thus not correct to relate the original non-Hermitian Bloch Hamiltonian to a chiral Hermitian one. This is only allowed if ℐ_1=−ℐ_2. Topological protection in regimes where ℐ_1≠−ℐ_2 is fundamentally different and cannot be captured by a Hermitian Bloch Hamiltonian. Instead, the topologically protected modes are hidden and only emerge as exact eigenstates in the thermodynamic limit.

References

[Bender(2005)] C. M. Bender, Introduction to PT-symmetric quantum theory, Contemp. Phys. 46, 277 (2005).
[Lindblad(1976)] G. Lindblad, On the generators of quantum dynamical semigroups, Comm. Math. Phys. 48, 119 (1976).
[Breuer and Petruccione(2002)] H. P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, New York, 2002).
[Roccati et al.(2022)] F. Roccati, G. M. Palma, F. Bagarello, and F. Ciccarello, Non-Hermitian physics and master equations, Open Systems & Information Dynamics 29, 2250004 (2022).
[Minganti et al.(2019)] F. Minganti, A. Miranowicz, R. W. Chhajlany, and F. Nori, Quantum exceptional points of non-Hermitian Hamiltonians and Liouvillians: The effects of quantum jumps, Phys. Rev. A 100, 062131 (2019).
[Miri and Alù(2019)] M.-A. Miri and A. Alù, Exceptional points in optics and photonics, Science 363, eaar7709 (2019).
[Su et al.(2021)] R. Su, E. Estrecho, D. Biegańska, Y. Huang, M. Wurdack, M. Pieczarka, A. G. Truscott, T. C. H. Liew, E. A. Ostrovskaya, and Q. Xiong, Direct measurement of a non-Hermitian topological invariant in a hybrid light-matter system, Sci. Adv. 7, eabj8905 (2021).
[Yang et al.(2020)] Y. Yang, Y.-P. Wang, J. W. Rao, Y. S. Gui, B. M. Yao, W. Lu, and C.-M. Hu, Unconventional singularity in anti-parity-time symmetric cavity magnonics, Phys. Rev. Lett. 125, 147202 (2020).
[Böttcher and Silbermann(1999)] A. Böttcher and B. Silbermann, Introduction to Large Truncated Toeplitz Matrices (Springer, New York, 1999).
[Okuma et al.(2020)] N. Okuma, K. Kawabata, K. Shiozaki, and M. Sato, Topological origin of non-Hermitian skin effects, Phys. Rev. Lett. 124, 086801 (2020).
[Ashida et al.(2021)] Y. Ashida, Z. Gong, and M. Ueda, Non-Hermitian physics, Adv. Phys. 69, 249 (2021).
[Bergholtz et al.(2021)] E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Exceptional topology of non-Hermitian systems, Rev. Mod. Phys. 93, 015005 (2021), arXiv:1912.10048 [cond-mat.mes-hall].
[Bernard and LeClair(2002)] D. Bernard and A. LeClair, A classification of non-Hermitian random matrices, in Statistical Field Theories (Springer Netherlands, 2002), p. 207.
[Kawabata et al.(2019)] K. Kawabata, K. Shiozaki, M. Ueda, and M. Sato, Symmetry and topology in non-Hermitian physics, Phys. Rev. X 9, 041015 (2019).
[Chiu et al.(2016)] C.-K. Chiu, J. C. Y. Teo, A. P. Schnyder, and S. Ryu, Classification of topological quantum matter with symmetries, Rev. Mod. Phys. 88, 035005 (2016).
[Yao and Wang(2018)] S. Yao and Z. Wang, Edge states and topological invariants of non-Hermitian systems, Phys. Rev. Lett. 121, 086803 (2018).
[Kunst et al.(2018)] F. K. Kunst, E. Edvardsson, J. C. Budich, and E. J. Bergholtz, Biorthogonal bulk-boundary correspondence in non-Hermitian systems, Phys. Rev. Lett. 121, 026808 (2018).
[Okuma and Sato(2020)] N. Okuma and M. Sato, Hermitian zero modes protected by nonnormality: Application of pseudospectra, Phys. Rev. B 102, 014203 (2020).
[Herviou et al.(2019)] L. Herviou, J. H. Bardarson, and N. Regnault, Defining a bulk-edge correspondence for non-Hermitian Hamiltonians via singular-value decomposition, Phys. Rev. A 99, 052118 (2019).
[Hughes et al.(2011)] T. L. Hughes, E. Prodan, and B. A. Bernevig, Inversion-symmetric topological insulators, Phys. Rev. B 83, 245132 (2011).
[Fang et al.(2013)] C. Fang, M. J. Gilbert, and B. A. Bernevig, Entanglement spectrum classification of C_n-invariant noninteracting topological insulators in two dimensions, Phys. Rev. B 87, 035119 (2013).
[Monkman and Sirker(2023a)] K. Monkman and J. Sirker, Symmetry-resolved entanglement of C_2-symmetric topological insulators, Phys. Rev. B 107, 125108 (2023).
[Monkman and Sirker(2023b)] K. Monkman and J. Sirker, Symmetry-resolved entanglement: general considerations, calculation from correlation functions, and bounds for symmetry-protected topological phases, J. Phys. A: Math. Theor. 56, 495001 (2023).
[Yin et al.(2018)] C. Yin, H. Jiang, L. Li, R. Lü, and S. Chen, Geometrical meaning of winding number and its characterization of topological phases in one-dimensional chiral non-Hermitian systems, Phys. Rev. A 97, 052115 (2018).
[Jiang et al.(2018)] H. Jiang, C. Yang, and S. Chen, Topological invariants and phase diagrams for one-dimensional two-band non-Hermitian systems without chiral symmetry, Phys. Rev. A 98, 052116 (2018).
[Monkman and Sirker(2023c)] K. Monkman and J. Sirker, Entanglement and particle fluctuations of one-dimensional chiral topological insulators, Phys. Rev. B 108, 125116 (2023).
http://arxiv.org/abs/2405.09436v1
20240515152648
Outlier-resilient model fitting via percentile losses: Methods for general and convex residuals
[ "João Domingos", "João Xavier" ]
eess.SP
[ "eess.SP" ]
Outlier-resilient model fitting via percentile losses: Methods for general and convex residuals

João Domingos and João Xavier

João Domingos (oliveira.domingos@tecnico.ulisboa.pt) and João Xavier (jxavier@isr.tecnico.ulisboa.pt) are with Instituto de Sistemas e Robótica of Instituto Superior Técnico, Portugal. This work has been supported by FCT through LARSyS funding (DOI: 10.54499/LA/P/0083/2020, 10.54499/UIDP/50009/2020, and 10.54499/UIDB/50009/2020) and a research grant with the reference PD/BD/150631/2020.

May 20, 2024

We consider the problem of robustly fitting a model to data that includes outliers by formulating a percentile optimization problem. This problem is non-smooth and non-convex, hence hard to solve. We derive properties that the minimizers of such problems must satisfy. These properties lead to methods that solve the percentile formulation both for general residuals and for convex residuals. The methods fit the model to subsets of the data, and then extract the solution of the percentile formulation from these partial fits. As illustrative simulations show, such methods endure higher outlier percentages when compared with standard robust estimates. Additionally, the derived properties provide a broader and alternative theoretical validation for existing robust methods, whose validity was previously limited to specific forms of the residuals.

Outliers, robust methods, percentile optimization, least median squares, least quantile regression, subset sampling.

§ INTRODUCTION

We consider the problem of fitting a model to M data points, where O of them are outliers (1 ≤ O < M). A data point is denoted by (x_m, y_m). The vector x_m∈𝐑^p is the feature and the scalar y_m∈𝐑 is the label, for m∈ℳ = {1, 2, …, M}. The model h is parameterized by θ∈𝐑^d and predicts the label y_m from the feature x_m. Let f_m(θ) be the residual of the data point (x_m, y_m) when the model parameter is set to θ (a common choice is f_m(θ) = |y_m − h(x_m, θ)|). We thus face the problem of computing a model parameter θ that satisfies f_m(θ) ≃ 0 for M−O data points, but not necessarily for all data points because of the outliers. In other words, letting ℐ⊂ℳ index the data points that are inliers and 𝒪 = ℳ\ℐ the outliers (thus, O = |𝒪|), we seek a θ that satisfies (<ref>) for m∈ℐ, but not necessarily for m∈𝒪. The challenge lies in that neither ℐ nor 𝒪 is given beforehand.

Non-robust least-squares formulation. If the presence of outliers is ignored, model fitting can be cast as the problem of minimizing a global quadratic loss of the residuals, min_θ ϕ_quad(f_m(θ); m∈ℳ), where ϕ_quad(z) gives the sum-of-squares of the components of its input vector z = (z_m; 1 ≤ m ≤ M), ϕ_quad(z) = ∑_{m=1}^M z_m^2, see <cit.>. Such methods, however, are not robust. As soon as even just a few outliers are introduced, these methods lead to a θ that satisfies (<ref>) only for a very small number of data points <cit.>.
§ ROBUST PERCENTILE FORMULATION

Finding a parameter θ that minimizes the residuals of inliers, while ignoring a number O of outliers, can be captured by the optimization problem

min_θ ϕ_per(f_m(θ); m∈ℳ).

Here, ϕ_per is the percentile function of order O: the function ϕ_per discards the top O components of the input vector z and returns the largest component that remains. In other words, let z = (z_m; 1 ≤ m ≤ M) be the vector inputted into ϕ_per and let {m_1, m_2, …, m_M} be a permutation of {1, 2, …, M} that sorts z in descending order,

z_{m_1} ≥ … ≥ z_{m_O} ≥ z_{m_{O+1}} ≥ … ≥ z_{m_M}.

We have ϕ_per(z) = z_{m_{O+1}}. Note that the permutation {m_1, m_2, …, m_M} depends on z: the function ϕ_per is non-linear. In (<ref>), the effectiveness of a candidate parameter θ is thus gauged by its ability to best fit M−O data points, since its O poorest fits are excluded.

The percentile formulation (<ref>) is known in different communities by different names. We mention two examples:

Least Quantile of Squares (LQS) regression. When each residual is taken as f_m(θ) = |y_m − h(x_m, θ)|, with the model being linear (h(x_m, θ) = x_m^T θ), and the dataset contains about 50% of outliers, O ≃ M/2, the percentile loss ϕ_per becomes the median and (<ref>) collapses into the least median of squares (LMS) formulation in <cit.>. The LMS formulation has been generalized to arbitrary percentiles beyond 50%, being known as Least Quantile of Squares (LQS) regression <cit.>. Although solving the LMS formulation is NP-HARD <cit.>, efficient methods have been proposed for the case of intersect-slope regression <cit.>. Exact methods for small parameter size d are given in <cit.>. Section <ref> connects our results with Stromberg's method <cit.> for the LQS problem.

Value-at-Risk (VaR) optimization. Formulation (<ref>) also has a strong connection to portfolio optimization <cit.>, since (<ref>) can be interpreted as optimizing the Value-at-Risk (VaR) measure of an underlying stochastic risk problem. Further details on this interpretation for the setting of target localization can be found in <cit.>.

§ CONTRIBUTIONS

Solving (<ref>) is generally NP-HARD <cit.>, if only because the percentile loss ϕ_per is by itself already non-smooth and non-convex. This section derives properties for the minimizers of (<ref>) for two cases: for general residuals, in the sense that each f_m(θ) is not necessarily convex in θ; and for convex residuals, where each f_m(θ) is convex in θ. These properties are relevant because they reveal principled ways of solving some instances of (<ref>) via a finite (yet exponential) number of convex programs. Although the exponential dependency can often be relaxed by randomization <cit.>, this letter is mainly devoted to solving (<ref>) exactly.

General residuals. We show that (<ref>) can be solved by solving

min_θ ϕ_max(f_m(θ); m∈𝒮),

for all subsets 𝒮 of ℳ whose cardinality S = |𝒮| is M−O (the number of inliers), where ϕ_max is the worst-case loss: for a generic vector z = (z_s; 1 ≤ s ≤ S), ϕ_max(z) = max{z_1, z_2, …, z_S}. We refer to (<ref>) as an 𝒮-fit. In such a fit, the model is adjusted only to the S data points indexed by 𝒮, not to the full data set ℳ. We show that a θ that yields the best 𝒮-fit is also a solution of (<ref>), see Theorem 1. The strategy of seeking the best 𝒮-fit of size M−O is not new in robust estimation, see <cit.>. However, this idea is often explored in dedicated setups with no connection across problems.
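As an aside on implementation, evaluating the percentile loss ϕ_per at a given residual vector only requires a sort. The following Python sketch (our own illustration, not code from the paper) makes this concrete:

```python
import numpy as np

def phi_per(z, O):
    # discard the top O components of z and return the largest remaining one,
    # i.e. the (O+1)-th largest component
    z_desc = np.sort(np.asarray(z, dtype=float))[::-1]
    return z_desc[O]

print(phi_per([0.1, 3.0, 0.2, 0.05], O=1))   # 0.2: the outlier 3.0 is ignored
```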
From our perspective, the contribution of Theorem 1 is essentially two-fold:

* To the best of our knowledge, this letter presents the first proof that using 𝒮-fits is always optimal, regardless of the residuals f_m. Bertsimas et al. <cit.> (Theorem 4.1) proved this result for LQS regression. Their proof explores a mixed integer formulation of (<ref>) for the residual form f_m(θ) = |y_m − h(x_m, θ)|. Theorem 1 is also mentioned (without proof) in <cit.> for LQS regression and in <cit.> for minimum volume ellipsoids. So Theorem 1 unifies several results in robust estimation by giving a formal proof that 𝒮-fits are always optimal.

* As a secondary contribution, Theorem 1 allows us to interpret several algorithms <cit.> that approach problem (<ref>) by alternating between minimax fits and updates of the set 𝒮. Equality (<ref>) of Theorem 1 reveals the main motivation for these updates.

Convex residuals. Assuming convex residuals and d+1 < M−O, we show that the solution of (<ref>) must also be a solution of some 𝒮-fit (<ref>), but where 𝒮 now has a smaller size, namely, d+1. This means that, once the solutions of all such 𝒮-fits are available, it suffices to evaluate them in the objective (<ref>) and retain the best to arrive at a solution of (<ref>), see Theorem 2. This strategy of looking for the solution of (<ref>) among the solutions of 𝒮-fits of size d+1 is less understood in robust statistics <cit.>, in the sense that Theorem 2 was only proved for LQS regression with f_m(θ) = |y_m − x_m^T θ|, see <cit.> for the original proof. From our perspective, Theorem 2 is the main contribution of the paper since sampling 𝒮 sets of size d+1 can be tractable in low-dimensional setups, see Section <ref>.

§ GENERAL RESIDUALS

For a generic function ψ: 𝐑^n→𝐑, its infimum is denoted by ψ^⋆ and its set of global minimizers by Argmin(ψ). That is, ψ^⋆ = inf{ψ(x) : x∈𝐑^n} and Argmin(ψ) = {x∈𝐑^n : ψ(x) = ψ^⋆}. Hereafter, we let ϕ denote the objective function in (<ref>) and ϕ_𝒮 the objective function in (<ref>).

Theorem 1. We have

ϕ^⋆ = min_{|𝒮|=M−O} ϕ_𝒮^⋆   and   Argmin(ϕ) = Argmin(ϕ_{𝒮^⋆}),

where 𝒮^⋆ is any 𝒮 that achieves the minimum on the right-hand side of (<ref>).

Property (<ref>) says that the solutions of the percentile formulation (<ref>) are exactly the solutions of the best 𝒮-fit. This gives a way to solve (<ref>): first, solve all 𝒮-fits (<ref>), thus accessing Argmin(ϕ_𝒮) and ϕ_𝒮^⋆ for all 𝒮 of size M−O; then, locate the best 𝒮, say, 𝒮^⋆, in the sense that ϕ_{𝒮^⋆}^⋆ ≤ ϕ_𝒮^⋆ for all 𝒮; finally, pick θ^⋆∈Argmin(ϕ_{𝒮^⋆}). Such a θ^⋆ solves the percentile formulation.

Proof of Theorem 1. We make use of the following fact, whose straightforward proof is omitted: if ψ: 𝐑^n→𝐑 is a function of the form ψ(x) = min{ψ_1(x), ψ_2(x), …, ψ_K(x)} for all x, where ψ_k: 𝐑^n→𝐑, then ψ^⋆ = min{ψ_1^⋆, ψ_2^⋆, …, ψ_K^⋆}, and Argmin(ψ) = Argmin(ψ_{k^⋆}), where k^⋆ is any k that achieves the minimum on the right-hand side of (<ref>), that is, ψ_{k^⋆}^⋆ ≤ ψ_k^⋆ for 1 ≤ k ≤ K.

We now turn to the proof of Theorem 1. We start by noting that just evaluating the percentile function ϕ_per at a given vector z = (z_m; 1 ≤ m ≤ M) already reduces to seeking subsets of {1, 2, …, M} of size M−O for the minimal largest component:

ϕ_per(z) = min_{|𝒮|=S} ϕ_max(z_m; m∈𝒮),

where S = M−O. To prove (<ref>), consider the permutation {m_1, m_2, …, m_M} of {1, 2, …, M} that sorts the components of the given z in descending order, z_{m_1} ≥ ⋯ ≥ z_{m_O} ≥ z_{m_{O+1}} ≥ ⋯ ≥ z_{m_M}. Recall that ϕ_per(z) = z_{m_{O+1}}.
Now consider the particular choice of subset 𝒮^⋆ = {m_{O+1}, …, m_M}, which implies ϕ_max(z_m; m∈𝒮^⋆) = z_{m_{O+1}}. This shows that the inequality ≥ holds in (<ref>). It remains to show that the reverse inequality ≤ also holds. For this, take an arbitrary subset 𝒮 ≠ 𝒮^⋆ of size S, say, 𝒮 = {j_1, j_2, …, j_S}. Because 𝒮 ≠ 𝒮^⋆ there exists some 1 ≤ s ≤ S such that j_s∈𝒮\𝒮^⋆, which then necessarily satisfies z_{j_s} ≥ z_{m_{O+1}} (if z_{m_{O+1}} > z_{j_s} were to hold, then j_s would have to be in {m_{O+2}, …, m_M}, a contradiction to j_s∉𝒮^⋆). We thus have ϕ_max(z_m; m∈𝒮) ≥ z_{j_s} ≥ z_{m_{O+1}}. Since 𝒮 was chosen arbitrarily, this shows that the inequality ≤ also holds in (<ref>).

Using characterization (<ref>), we thus have ϕ_per(f_m(θ); m∈ℳ) = min_{|𝒮|=S} ϕ_max(f_m(θ); m∈𝒮) for all θ, or, more compactly, ϕ = min_{|𝒮|=S} ϕ_𝒮. Identity (<ref>), together with (<ref>) and (<ref>), implies the claimed properties (<ref>) and (<ref>).

§ CONVEX RESIDUALS

We now turn to convex residuals.

Theorem 2. If f_m(θ) is convex in θ for all m, and d+1 < M−O, then

Argmin(ϕ) ⊂ ⋃_{|𝒮|=d+1} Argmin(ϕ_𝒮).

Property (<ref>) asserts that the solutions of the percentile formulation (<ref>) are to be found among the solutions of the 𝒮-fits (<ref>), where now 𝒮 has size d+1. This enables (<ref>) to be solved as follows: first, solve all 𝒮-fits, which are now convex optimization problems, thus accessing Argmin(ϕ_𝒮) for all 𝒮 of size d+1; then, plug all θ from the sets Argmin(ϕ_𝒮) in the objective function of interest ϕ, and call θ^⋆ one that achieves the lowest value. Such a θ^⋆ solves the percentile formulation. This strategy is most direct to carry out when each Argmin(ϕ_𝒮) is a singleton: see Section VI for further discussion.

To finish this section, we note that Theorem 2 is essentially weaker than Theorem 1: property (<ref>) can hold even if the analogues of (<ref>) and (<ref>) fail for 𝒮-fits of size d+1; that is, (<ref>) does not imply that ϕ^⋆ = min_{|𝒮|=d+1} ϕ_𝒮^⋆ and Argmin(ϕ) = Argmin(ϕ_{𝒮^⋆}), where 𝒮^⋆ is any 𝒮 that achieves the minimum in min_{|𝒮|=d+1} ϕ_𝒮^⋆. To justify this, consider the simple example with d = 1, O = 1, and M = 4, with f_1(θ) = −θ, f_2(θ) = θ, f_3(θ) = 1, and f_4(θ) = −1; in this case, it can be checked that ϕ^⋆ = 0 and Argmin(ϕ) = {0}, yet min_{|𝒮|=d+1} ϕ_𝒮^⋆ = −1. Furthermore, the set 𝒮^⋆ = {1, 4} yields ϕ_{𝒮^⋆}^⋆ = −1 but Argmin(ϕ_{𝒮^⋆}) = [1, +∞[. So Argmin(ϕ) = {0} is not even contained in Argmin(ϕ_{𝒮^⋆}) = [1, +∞[. The key point is that Theorem 2 only introduces a relation among minimizers, while Theorem 1 is able to relate optimal values. Although weaker, Theorem 2 is still relevant for applications in robust statistics, see Section <ref>.

Proof of Theorem 2. We start by noting that property (<ref>) from Theorem 1 implies

Argmin(ϕ) ⊂ ⋃_{|𝒯|=M−O} Argmin(ϕ_𝒯).

Thus, we need only show that, for any given 𝒯⊂ℳ of size M−O, we can find an 𝒮⊂ℳ of size d+1 such that Argmin(ϕ_𝒯) ⊂ Argmin(ϕ_𝒮). Let 𝒯 of size M−O be given and choose θ^⋆∈Argmin(ϕ_𝒯). Our goal is thus to come up with a set 𝒮 of size d+1 such that θ^⋆∈Argmin(ϕ_𝒮). Because θ^⋆ solves the convex optimization problem min_θ ϕ_𝒯(θ), Theorem 2.2.1 in <cit.> states that 0∈∂ϕ_𝒯(θ^⋆), where the set ∂ϕ_𝒯(θ^⋆) denotes the sub-differential of the convex function ϕ_𝒯 at the point θ^⋆.
Because ϕ_𝒯 is a pointwise maximum of convex functions, namely, ϕ_𝒯(θ) = max{ ϕ_m(θ) ; m ∈𝒯 } for all θ, we can use Corollary 4.3.2 in <cit.>, together with (<ref>), to conclude that 0 ∈ co( ⋃_m ∈𝒜^⋆ ∂ϕ_m( θ^⋆ ) ), where 𝒜^⋆ = { m ∈𝒯 : ϕ_m( θ^⋆ ) = ϕ_𝒯( θ^⋆ ) } is called the active index set of the function ϕ_𝒯 at θ^⋆ and the symbol co C denotes the convex hull of the set C. Identity (<ref>) thus says that the zero vector 0 can be written as a convex combination of | 𝒜^⋆ | vectors, each vector pulled from a sub-differential ∂ϕ_m( θ^⋆ ). Because identity (<ref>) lives in 𝐑^d, Carathéodory's Theorem (Theorem 1.3.6 in <cit.>) asserts that there exists 𝒜 ⊂ 𝒜^⋆ such that 0 ∈ co( ⋃_m ∈𝒜 ∂ϕ_m( θ^⋆ ) ) and | 𝒜 | ≤ d+1. Next, we consider two cases:

* Case 1: | 𝒜 | = d+1. We define 𝒮 = 𝒜;

* Case 2: | 𝒜 | < d+1. Note that 𝒜 ⊂ 𝒯 and | 𝒯 | = M-O; thus, owing to the assumption d + 1 < M - O, we can enlarge 𝒜 with enough indices from 𝒯 \ 𝒜 so as to attain a set 𝒮 of size d+1.

Regardless of which case holds, we can thus generate an 𝒮 of size d+1 that satisfies 𝒜 ⊂ 𝒮 ⊂ 𝒯. Now, let the active index set of the function ϕ_𝒮(θ) = max{ ϕ_m(θ) ; m ∈𝒮 } at θ^⋆ be denoted by ℬ^⋆, that is, ℬ^⋆ = { m ∈𝒮 : ϕ_m(θ^⋆) = ϕ_𝒮( θ^⋆ ) }. It follows from (<ref>) that 𝒜 ⊂ ℬ^⋆, which, in view of (<ref>), implies 0 ∈ co( ⋃_m ∈ℬ^⋆ ∂ϕ_m( θ^⋆ ) ). By Corollary 4.3.2 in <cit.>, identity (<ref>) implies 0 ∈ ∂ϕ_𝒮( θ^⋆ ) and, finally, Theorem 2.2.1 in <cit.> shows that (<ref>) holds.

§ APPLICATIONS

We discuss two applications that show the usefulness of Theorem 2.

Stromberg's Method. In <cit.>, Stromberg addresses a least-squares problem with outliers. The problem is formulated as (<ref>), with convex residuals given by f_m(θ) = | y_m - x_m^T θ |. Furthermore, the condition d + 1 ≤ M - O is assumed. The method proposed in <cit.> can be summarized as follows:

* Enumerate all subsets 𝒮 of ℳ of size d+1;

* For each such subset 𝒮, solve (<ref>). Under the so-called Haar condition invoked in <cit.>, problem (<ref>) has a unique solution, say, Argmin( ϕ_𝒮 ) = { θ_𝒮^⋆ };

* Plug each θ_𝒮^⋆ into the objective function of interest (<ref>) and let the returned θ^⋆ be the one with the lowest value.

Stromberg <cit.> showed that such a θ^⋆ indeed solves (<ref>). This was achieved by using the specific form of the residuals, f_m(θ) = | y_m - x_m^T θ |, to characterize the solution set of Chebyshev problems – see <cit.>. Theorem 2, on the other hand, is valid for arbitrary convex residuals f_m. Theorem 2 is able to capture the core insight of Stromberg's method by exploiting the sub-differential characterization of the maximum of convex functions – see equation (<ref>) in the proof of Theorem 2.

Robust centroid. To illustrate the usefulness of Theorem 2 for more general convex residuals, we consider the problem of computing a centroid of M points, of which O are outliers. Letting x_m ∈𝐑^d, for 1 ≤ m ≤ M, denote the given points, we can phrase this problem as in (<ref>) by considering convex residuals of the form f_m(θ) = ‖ x_m - θ ‖_2^2; here, ‖·‖_2 denotes the ℓ_2 (Euclidean) norm. In this setting, solving (<ref>) can be interpreted as computing the mean of a population, robustly against O outliers <cit.>. Theorem 2 shows that (<ref>) can be solved by first solving 𝒮-fits as in (<ref>), thus accessing Argmin( ϕ_𝒮 ) for all 𝒮 of size d+1, and then searching among those sets for the point that yields the smallest value of the function of interest (<ref>).
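The strategy can be sketched in a few lines of Python. In the sketch below (our own illustration), each 𝒮-fit is solved with a generic derivative-free solver, which stands in for the specialized techniques referenced above and is adequate only in low dimensions:

    import itertools
    import numpy as np
    from scipy.optimize import minimize

    def s_fit(points):
        # Minimax fit on a subset: argmin_theta max_m ||x_m - theta||_2^2.
        obj = lambda theta: np.max(np.sum((points - theta) ** 2, axis=1))
        return minimize(obj, points.mean(axis=0), method="Nelder-Mead").x

    def percentile_value(X, theta, O):
        res = np.sum((X - theta) ** 2, axis=1)
        return np.sort(res)[::-1][O]          # (O+1)-th largest residual

    def robust_centroid(X, O):
        d = X.shape[1]
        fits = (s_fit(X[list(S)])
                for S in itertools.combinations(range(len(X)), d + 1))
        return min(fits, key=lambda th: percentile_value(X, th, O))

For d = 2 this means enumerating subsets of size 3, which is tractable for moderate M; this is precisely the low-dimensional regime discussed in Section <ref>.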
For the problem of the robust centroid at hand, each 𝒮-fit can be shown to have a unique solution, say, Argmin( ϕ_𝒮 ) = { θ_𝒮^⋆ }, thanks to the strict convexity of the residuals. Furthermore, because we take d = 2 for ease of visualization, each θ_𝒮^⋆ can be easily computed by resorting to the techniques in <cit.>. For our numerical experiments, we start by sampling M - O = 40 (inlier) points from a normal distribution 𝒩(0, I). Then, we add outliers by sampling O points from a shifted and scaled normal distribution 1.2 𝒩(b, I), where the bias vector is b = (4, 3). Figures <ref> (b) and (c) plot realizations of this setup for an increasing number of outliers O. On this dataset, we compare the method enabled by Theorem 2 (delineated above) with the least-squares solution (<ref>) and two classical methods <cit.> from robust estimation: min_θ ∑_m=1^M ‖ x_m - θ ‖_1 (L1) and min_θ ∑_m=1^M h_R( ‖ x_m - θ ‖_2 ) (Huber), where ‖·‖_1 denotes the ℓ_1 norm and h_R the Huber function with threshold R (we set R = 1.34 as suggested in <cit.>, since the standard deviation of the inliers is unitary).

In view of the way the data set was generated, we wish the methods to return the zero vector (0,0), as this is the (theoretical) mean of the population of inliers. Thus, we compare the four methods on the basis of the Euclidean norm of their returned parameter θ^⋆. Figure <ref> gives the results of the comparison as the number of outliers increases. Figure <ref> (a) shows that the percentile method (in blue) is the only method that withstands a moderate-to-high number of outliers, say when O/M > 23%. These findings confirm that typical robust alternatives cannot withstand considerable amounts of outliers, which highlights the importance of Theorems 1 and 2 for these challenging setups.
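For completeness, the two baselines admit short implementations as well. Since the (L1) objective separates across coordinates, its minimizer is the coordinate-wise median; the (Huber) objective can be handled by a generic solver. The sketch below (ours) uses the standard Huber parametrization, which may differ in details from the one in <cit.>:

    import numpy as np
    from scipy.optimize import minimize

    def l1_estimate(X):
        # The (L1) objective separates across coordinates, so its
        # minimizer is the coordinate-wise median.
        return np.median(X, axis=0)

    def huber(r, R=1.34):
        # Standard Huber function of a nonnegative scalar (or array) r.
        return np.where(r <= R, 0.5 * r ** 2, R * (r - 0.5 * R))

    def huber_estimate(X, R=1.34):
        obj = lambda theta: np.sum(huber(np.linalg.norm(X - theta, axis=1), R))
        return minimize(obj, np.median(X, axis=0), method="Nelder-Mead").x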
Visual Attention Based Cognitive Human-Robot Collaboration for Pedicle Screw Placement in Robot-Assisted Orthopedic Surgery
Chen Chen, Qikai Zou, Yuhang Song, Shiji Song, Xiang Li
============================================================================================================================
Current orthopedic robotic systems largely focus on navigation, aiding surgeons in positioning a guiding tube but still requiring manual drilling and screw placement. The automation of this task not only demands high precision and safety, due to the intricate physical interactions between the surgical tool and bone, but also poses significant risks when executed without adequate human oversight. As it involves continuous physical interaction, the robot should collaborate with the surgeon, understand the surgeon's intent, and always include the surgeon in the loop. To achieve this, this paper proposes a new cognitive human–robot collaboration framework, including an intuitive AR-haptic human–robot interface, a visual-attention-based surgeon model, and a shared interaction control scheme for the robot. User studies on a robotic platform for orthopedic surgery are presented to illustrate the performance of the proposed method. The results demonstrate that the proposed human–robot collaboration framework outperforms full robot and full human control in terms of safety and ergonomics.

§ INTRODUCTION

The field of robot-assisted orthopedic surgery is witnessing rapid expansion <cit.>. Among its applications, the placement of pedicle screws during spinal fusion stands out as a critical step for ensuring spinal stability <cit.>. During the procedure, a pilot hole is drilled into the pedicle of the vertebra, and a screw is tapped into the hole to stabilize the spine. Existing robotic systems primarily offer navigation capabilities, where the robot positions a guiding tube at the location and orientation of the planned screw entry point. The surgeon is then required to manually drill the hole and place the screw under the robot's guidance. However, automating bone drilling, which involves complex interactions between the surgical tool and bone, presents a significant challenge due to the high accuracy and safety requirements.

Safety is a main concern of orthopedic robots. This requires a collaborative approach in which the surgeon is included in the robot's control loop, especially during critical tasks like bone drilling. While such a surgeon–robot collaboration can guarantee safety and also combine the surgeon's expertise with the robot's precision, it is not trivial to design the collaboration scheme, since the robot needs to understand the surgeon's intent and then respond to it appropriately. An overly passive robot shifts a heavy workload onto the surgeon, while an overly active one may come into conflict with the surgeon. To address these challenges, this paper proposes a new cognitive human–robot collaboration framework for robot-assisted orthopedic surgery, as shown in Fig. <ref>. The key contributions of this work are summarized as follows:

* AR-Haptic Interface: A new AR-haptic interface is developed for enhanced surgeon–robot collaboration. The AR device visualizes the drilling progress and the depth of the surgical tool within the bone, while the haptic device enables the surgeon to input the drilling command intuitively by manipulating its end effector, with a perception of the drilling force that is aligned with the bone model in the AR scene. This interface allows for more precise recognition of the surgeon's intent and aids the surgeon in monitoring the drilling task from multiple perspectives to make better decisions.
* Surgeon Attention Model: A surgeon attention model is established to evaluate the surgeon's concentration level. The robot provides assistance while the surgeon is fully concentrated, to reduce the workload. The robot diminishes its assistance when the surgeon is not paying enough attention to the task, while continuously monitoring and enforcing safety constraints. We propose an eye-tracking-based algorithm to recognize the surgeon's attention level.

* Surgeon–Robot Shared Control: A shared control scheme is proposed to modulate the task allocation between the surgeon and the robot based on the intent model. This approach leverages contributions from both sides to enhance robot-assisted orthopedic surgery.

The aforementioned framework offers a cognitive solution for robot-assisted orthopedic surgery, presenting insights into the design of collaborative robotic systems for safety-critical tasks. It keeps the operator in the loop by monitoring attention levels and amplifies reliable human input when concentration is high, reducing the workload. Experiments and comparative user studies on a collaborative robotic orthopedic platform validate the proposed framework, which outperforms the baselines in terms of safety and ergonomics.

§ RELATED WORKS

§.§ Human–Robot Interaction in Robotic Orthopedic Surgery

§.§.§ Overview

The majority of orthopedic robotic systems exhibit considerable deficiencies in human–robot interaction (HRI). Most of these systems have been limited to providing navigational assistance, positioning a guiding tool at a predetermined site to facilitate precise maneuvering of surgical instruments. This approach requires manual execution of surgical tasks by the surgeon and relies on external monitors for information display, potentially diminishing surgical performance due to its lack of intuitiveness. Recently, several works have aimed to enhance HRI in orthopedic surgical robots. The Futurtec ORTHBOT system, for example, is equipped with an intelligent bone drill that autonomously positions K-wires under the surgeon's supervision of the drilling force <cit.>. The Stryker MAKO system can be equipped with a drill or saw, which is held and controlled by the surgeon, allowing for intuitive engagement during surgery <cit.>. Lauretti et al. <cit.> proposed a shared control framework for semi-autonomous pedicle screw fixation, which allows the surgeon to move the robot end effector along the tapping axis using hands-on control and to adjust the torque by exerting force on the robot. Smith et al. <cit.> introduced a robotic system capable of autonomously placing pedicle screws, where the surgeon only needs to oversee the process and intervene when necessary.

§.§.§ Haptics

The integration of haptic feedback into orthopedic robotic systems is crucial for providing surgeons with tactile feedback during surgery, enabling precise operation and preventing excessive force application <cit.>. Several orthopedic surgical robots have been developed with haptic capabilities. The Stryker MAKO utilizes haptic feedback to constrain the surgeon's movements according to the interaction forces generated in virtual haptic environments <cit.>. Boschetti et al. <cit.> developed a system for teleoperated spine surgery, where haptic guidance is provided to compensate for movements of the vertebra. Lee et al. <cit.> introduced a torque rendering algorithm that provides realistic torque feedback during the tele-drilling of pedicle screws.
Moreover, haptic feedback has started to be incorporated into bilateral telemanipulation systems for minimally invasive surgery <cit.>.

§.§.§ Augmented Reality

In parallel, AR-mediated approaches, as an emerging technology, have been applied to surgical robotics to provide surgeons with intuitive and informative interfaces <cit.>. Iqbal et al. <cit.> developed a system that displays the user interface of an existing orthopedic surgical robot in AR, resulting in improved usability and ergonomics. Tu et al. <cit.> developed a robotic system for cervical pedicle screw placement, where an AR surgical scene is constructed and rendered for visualization and navigation. Schreiter et al. <cit.> designed an AR interface aimed at conditionally autonomous robots for pedicle screw placement, enabling surgeons to exert comprehensive oversight and control via the AR interface.

§.§ Eye-Tracking Based Cognitive Shared Control

Shared control involves a robot adjusting its level of autonomy based on its understanding of the human's intent and the task requirements <cit.>. This approach is particularly promising in robot-assisted surgery, where it provides essential assistance while allowing the surgeon to maintain control over the system <cit.>. For effective shared control, the robot must understand the human's intent and then modulate the task allocation between the human and the robot accordingly <cit.>. Many works have proposed intention inference models to estimate the human's intent <cit.>. However, existing works often overlook the human's cognitive performance during intent assessment, which reflects the human's concentration level. Wang et al. <cit.> utilized the Yerkes–Dodson law to compute cognitive performance according to the human's utilization ratio, i.e., the amount of time that the human has been controlling the robot. A quantitative representation of cognitive performance can also be achieved through gaze analysis <cit.>. Most existing works on gaze for shared control focus on intentional gaze as a control input <cit.>. A series of studies <cit.> have explored the use of natural gaze, where hidden Markov models were applied to gaze signals to predict human intentions in assistive teleoperation tasks. However, the potential of gaze for cognitive performance assessment in shared control remains underexplored.

In summary, while some progress has been achieved, no existing work has developed a systematic framework that exploits an AR-haptic interface and cognitive performance to achieve both high safety and high autonomy in the drilling task while maintaining an intuitive and efficient collaboration between the surgeon and the orthopedic robot.

§ METHODOLOGY

§.§ AR-Haptic Human–Robot Interface

First, a new AR-haptic interface is developed for the communication between the surgeon and the robot, as illustrated in Fig. <ref>. The haptic device allows the surgeon to intuitively issue drilling commands by manipulating its end effector; these commands are then converted to the position command of the drill, as detailed in Sec. <ref>. The surgeon can also feel the drill's force feedback, enhancing the tactile experience. The AR head-mounted display (HMD) is tasked with delivering real-time surgical process information to the surgeon. To fulfill this purpose, the device renders an AR scene that includes a translucent 3D model of the vertebrae (constructed from CT scans), along with visualizations of the planned pedicle screw trajectory (shown as a transparent blue cylinder) and the real-time drill position (marked by a white drill model).
The eye-tracking capability of the AR device is utilized to recognize the surgeon's attention level, as discussed in Sec. <ref>. The position of the rendered AR scene is coupled with the haptic device's position by an initial registration with a QR code attached to the haptic device. In this way, the drill tip in the AR interface is always aligned with the haptic device's end effector. Moreover, the vertebrae model is rendered at an enlarged scale, so that the surgeon can have a better view of the details and can perform more precise movements, as the exerted commands are scaled down to match the actual dimensions. This integrated approach surpasses the capabilities of traditional navigation systems that rely on external displays by providing intuitive and direct visual feedback to the surgeon, enhancing hand–eye coordination.

§.§ Visual Attention Based Human–Robot Collaboration

This paper proposes an intent model to specify the cognitive performance of the surgeon during collaborative surgery based on eye tracking.

§.§.§ Eye-Tracking Based Human Attention Recognition

Eye-tracking data are obtained from the AR-HMD, consisting of a quaternion representing the gaze orientation within the head coordinate system. It is subsequently transformed into a direction vector originating from the eye's position within the world coordinate system. A ray-casting procedure then projects the gaze onto the AR scene or the spatial mesh—a mesh representation of the real world, constructed from point clouds captured by the AR-HMD's RGB-D camera. This projection yields the gaze point p_i ∈ℝ^3, located either on the AR scene or on a real-world object, as shown in Fig. <ref>. Eye movement data can be categorized into two primary types: saccades and fixations. In our analysis, we exclusively utilize fixations for attention recognition, filtering out saccadic movements. Fixation segmentation is performed using an approach similar to <cit.>: a two-component Gaussian mixture model (GMM) is trained online at regular intervals to model the velocity of the gaze point. Samples corresponding to the larger component of the GMM are identified as saccades, while those associated with the smaller component are classified as fixations. We define the set of fixations as F = { p | p is a fixation }. Moreover, the projection technique enables the semantic annotation of eye gaze through the recognition of the targeted object, denoted by obj(p). We define a set of surgery-relevant objects S, including elements like the vertebrae, the drill, and the drilling path. This allows for the formulation of A, the set of gaze points fixated on surgery-relevant objects, defined as A = { p | obj(p) ∈ S }. Each gaze point is associated with a specific time stamp, denoted by ts(p). The collection of gaze points within a time window is represented as W(T) = { p | ts(p) ∈ [t-T, t] }, where t is the current time and T represents the window's length. Now, the attention level can be estimated as the proportion of the cumulative duration of fixations on surgery-relevant objects within the time window, which is mathematically expressed as α(t) = (1/T) ∑_p_i ∈ F ∩ A ∩ W(T) ( ts(p_i) - ts(p_i-1) ), where ts(p_i) is the time stamp of the gaze point p_i. We further apply an exponential moving average filter to α(t) to filter out noise.
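A minimal sketch of this attention estimate in Python is given below; the record layout (timestamp, target label, fixation flag), the object labels and the window length are our own illustrative placeholders, not the system's actual data structures:

    def attention_level(gaze, t_now, T=2.0,
                        relevant=("vertebrae", "drill", "path")):
        # gaze: time-ordered records (ts, obj, is_fixation).
        window = [g for g in gaze if t_now - T <= g[0] <= t_now]
        duration = 0.0
        for prev, cur in zip(window, window[1:]):
            if cur[2] and cur[1] in relevant:   # fixation on a relevant object
                duration += cur[0] - prev[0]
        return duration / T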
§.§.§ Attention-Driven Human–Robot Collaboration

A computational human cognitive model is established to dynamically adjust the collaboration intensity based on the human's attention level. This model computes an allocation weight w, which quantifies the degree of robotic assistance at each moment, based on the observed attention of the human. We propose an HRI paradigm particularly suited for tasks where safety is critical, such as robot-assisted surgery. The paradigm can be stated as follows: when the surgeon is not paying enough attention to the task, the robot should diminish its assistance while continuously monitoring and enforcing safety constraints. Thus, the allocation shifts from human-in-control to robot-in-control as the human's attention level increases. Such an objective forces the surgeon to focus on the ongoing task and be active in the loop, which is important to ensure safety. Next, a piecewise linear function is used to implement the aforementioned paradigm, mapping the attention level to the allocation weight by: w = 0 if α̅ < α_0; w = (α̅ - α_0)/(α_1 - α_0) if α_0 ≤ α̅ ≤ α_1; w = 1 if α̅ > α_1, where α̅ is the filtered attention level and α_0 and α_1 are thresholds on the attention level, which are determined by the task requirements and the surgeon's cognitive performance. Then, w=0 and w=1 correspond to the manual control mode and the fully autonomous mode, respectively.
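The filter and the piecewise-linear map fit in a few lines; in the sketch below (ours), the smoothing factor and the thresholds are illustrative values, not the ones used in the experiments:

    def ema(alpha_bar, alpha_new, beta=0.9):
        # Exponential moving average filter for alpha(t).
        return beta * alpha_bar + (1.0 - beta) * alpha_new

    def allocation_weight(alpha_bar, alpha_0=0.3, alpha_1=0.7):
        # Piecewise-linear map from the filtered attention level to w.
        if alpha_bar < alpha_0:
            return 0.0
        if alpha_bar > alpha_1:
            return 1.0
        return (alpha_bar - alpha_0) / (alpha_1 - alpha_0)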
§.§ Shared Interaction Control

The constructed weight adjusts the contribution ratio of both sides. Specifically, the robot contributes to the end effector in terms of the drilling depth and speed, and the surgeon contributes through the haptic device.

§.§.§ Position Servo of the Patient-Side Drilling Robot

First, the desired position x_d,ur of the patient-side drilling robot is synchronized with the position x of the end effector of the haptic device by an affine transformation, x'_d,ur = T^vertebrae_robot base · diag(1, 1, 1, k_scale) · x', where x'_d,ur and x' are the homogeneous vectors corresponding to x_d,ur and x, T^vertebrae_robot base and T^task space_vertebrae denote the homogeneous transformation matrices from the vertebrae coordinates to the robot base frame and from the task space to the vertebrae coordinates, respectively, and k_scale is the scaling factor of the vertebrae in the AR scene. The desired position is achieved with a position control scheme, u_ur = K_p J^†_ur(q) (x_d,ur - x_ur), where u_ur is the control input (joint velocities) of the robot, K_p is the proportional gain matrix, q are the joint angles, and J^†_ur(q) is the Moore–Penrose pseudoinverse of the Jacobian matrix from joint space to task space. The role of the above controller (<ref>) is to synchronize the robot and the haptic device and, hence, align the contributions of both sides (see Fig. <ref>).

§.§.§ Shared Interaction Control on the Haptic Device

After the alignment, shared control can be implemented on the haptic device to enable users to perceive the robot's intentions and assistance tactilely. The task space is defined in Cartesian space, where the origin is the start point of the drilling path and the z-axis points downwards along the drilling axis. Consider a haptic device with 3 degrees of freedom; its dynamics can be described as M(θ)θ̈ + C(θ, θ̇)θ̇ + g(θ) = τ_ext + τ, where M(θ) is the mass matrix, C(θ, θ̇) is the Coriolis matrix, g(θ) is the vector of gravitational torques, τ_ext denotes the external torque, and τ denotes the control input, which is designed to respond to the surgeon's contributions. To ensure that it is safe for the surgeon to inject contributions, a first-order impedance model is formulated as the control objective, D ẋ + K (x - x_d) = f_ext, where D and K represent the virtual damping and stiffness matrices to be simulated by the device, x_d is the equilibrium position (i.e., the desired drilling position), and f_ext is the external force applied to the device by the surgeon. To achieve this objective, the control law on the haptic device is designed as τ = - J^⊤(θ) ( D ẋ + K (x - x_d) ) + g(θ), where the mass and Coriolis compensation terms are omitted due to the low velocity of the device <cit.>. As seen from (<ref>) and (<ref>), the proposed formulation requires the surgeon to initiate the task (i.e., x_d), and the robot follows the surgeon and hence amplifies the surgeon's contributions. Such a formulation always includes the surgeon in the loop, and the robot provides assistance only when the surgeon is at a high concentration level. To force the surgeon to re-focus on the task when a distraction arises, the control allocation is dynamically adjusted based on the allocation weight w, derived from the human cognitive model, to modulate the desired system behavior:

* For w=0, indicating full human control, the system allows free dragging.

* For w=1, indicating full automatic control, the device moves downward along the drilling axis at a predetermined constant speed v_drill.

The stiffness matrix K is designed as K(w) = diag(k_x, k_y, w k_z,max). This configuration ensures that the device maintains high stiffness for movements perpendicular to the drilling axis—represented by k_x and k_y, with k_x and k_y significantly exceeding k_z,max—thereby constraining the end effector's position to the desired axis. At w=0, the device permits unrestricted movement along the drilling axis, as the corresponding stiffness equals zero. To trace the velocity command, the desired position of the haptic device is updated as x_d = x + v_drill K^-1(1) D (0,0,1)^⊤, so that, in equilibrium and with no external force exerted upon it, the haptic device maintains the constant velocity v_drill when the robot is in complete control (w=1). The feedback force f_sensor is measured by the force sensor and transformed into task-space coordinates. To achieve a subtle and smooth transition between human and robot control, the force feedback is scaled down slightly when the robot is in automatic control, as a function of w: f_fdbk = (1 - 0.5w) f_sensor. The shared interaction controller of the haptic device integrates these principles: u_haptic = J^⊤(θ) ( K(w)(x_d - x) - D ẋ + f_fdbk ) + g(θ), where u_haptic denotes the driving torque of the haptic device.
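A minimal sketch of one task-space step of this shared controller is given below; all gains and the drilling speed are placeholder values of our own choosing, and the mapping to joint torques through J(θ)^⊤ and g(θ) is left as a comment:

    import numpy as np

    def shared_control_force(x, x_dot, w, f_sensor,
                             D=np.diag([5.0, 5.0, 5.0]),
                             k_x=2000.0, k_y=2000.0, k_z_max=200.0,
                             v_drill=1e-3):
        K = np.diag([k_x, k_y, w * k_z_max])        # stiffness K(w)
        K1 = np.diag([k_x, k_y, k_z_max])           # stiffness at w = 1
        e_z = np.array([0.0, 0.0, 1.0])             # drilling axis
        # Shift the equilibrium along the drilling axis so that, at w = 1 and
        # zero external force, the impedance model settles at velocity v_drill.
        x_d = x + v_drill * (np.linalg.inv(K1) @ (D @ e_z))
        f_fdbk = (1.0 - 0.5 * w) * f_sensor         # scaled force feedback
        # Task-space force; joint torques follow as J(theta).T @ f + g(theta).
        return K @ (x_d - x) - D @ x_dot + f_fdbk

Note that with this choice the equilibrium velocity along the drilling axis scales as w·v_drill, which yields a smooth hand-over between the surgeon and the robot as the allocation weight varies.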
§ EXPERIMENTS

In order to evaluate the effectiveness of the proposed framework, experiments were conducted using an orthopedic surgical robot system specifically designed for pedicle screw insertion tasks (see Fig. <ref>). The robot featured a UR5 cobot from Universal Robots, equipped with a custom bone-drill end effector. The end effector is outfitted with an ATI Mini40 force/torque sensor to monitor the drilling forces and an Intel RealSense D405 RGB-D camera to oversee the surgical site. Users interacted with the robot using a Force Dimension Omega.3 haptic device. The AR scene was developed on the Unity engine with MRTK3 and was deployed on a Microsoft HoloLens 2 AR-HMD. The software system was developed and integrated using ROS 2 <cit.>, MoveIt 2 <cit.> and ros2_control <cit.>.

§.§ Experimental Protocol

A user study was conducted to evaluate the proposed framework. Participants were asked to drill a 0.03 m deep pilot hole into a synthetic bone model using the proposed robot system. During the task, a distraction phase was designed to simulate the situation where the surgeon's attention is drawn away from the surgery. To apply the distraction, a series of mental arithmetic problems were displayed on an external display in front of the participant, and the participants were asked to concentrate on solving the problems during that phase. The duration of the phase was fixed to 20 s. The parameters used in the experiments are shown in Table <ref>. Instructions were given to the participants before the experiments, during which they were informed of the task and the experimental procedure. The participants were instructed to maintain a high level of attention throughout the task. After the instruction, the participants were asked to perform a trial run to get familiar with the procedure. Formal experiments were then conducted, in which the participants were asked to drill three times, under different human–robot collaboration settings:

* Full robot control (w=1).

* Full human control (w=0).

* The proposed visual-attention-based shared control.

The sequence of the three settings was randomized to avoid ordering effects, and the participants were not informed of the current experimental setting.

§.§ Experimental Results

§.§.§ Examination of a Representative Experiment

An in-depth analysis was conducted on a selected experimental run, which is not included in the user study. This experiment was particularly adjusted to better illustrate the capabilities of the proposed framework, including the explicit display of the gaze point within the AR scene. Snapshots of the whole procedure are shown in Fig. <ref>, and the results of the experiment are shown in Fig. <ref>. The raw force and velocity signals are smoothed by a Savitzky–Golay filter <cit.> with a window size of 1 s. The time of each snapshot in Fig. <ref> is marked in the plot. It can be seen in the plot that the weight w gradually increased to 1 when the surgeon was concentrating on the task and decreased to 0 when the surgeon was distracted. When w ≈ 1, the robot was in majority control. As seen in the force subplot, the automatic controller provided almost all the required drilling force; thus, the force exerted by the surgeon was nearly zero. The drill moved downward at approximately the desired speed, as designed, as shown in the drill velocity subplot. When w ≈ 0, autonomy was transferred to the surgeon, while the robot constantly regulated the safety constraints, such as the movement perpendicular to the drilling axis. The drilling force and the force exerted by the surgeon were almost identical, as the robot was not providing any assistance. The drill velocity dropped to zero, as the surgeon was not actively controlling the drill.

§.§.§ User Study Results

The study involved the recruitment of three participants, with the objective of quantitatively evaluating the system's performance in terms of safety and ergonomics. The metrics for the assessment included the drill movement and the standard deviation of its position during the distraction phase, along with the impulse exerted by the surgeon throughout the task. The safety evaluation was based on the drill movement during the distraction phase, as any movement of the drill while not under the surgeon's supervision is a potential safety risk. Ergonomics was assessed through the impulse metric, derived from the numerical integration of the force data over time.
A higher force impulse suggests an increased physical burden and potential for fatigue. The evaluated metrics of the experiment are shown in Fig. <ref>, with values closer to zero indicating better performance. Compared to full robot control and full human control, the proposed shared control method achieved the best performance in terms of safety and ergonomics.

§ CONCLUSIONS

In this paper, we proposed a cognitive human–robot collaboration framework for robot-assisted orthopedic surgery, based on an AR-haptic interface, a surgeon visual attention model and shared control. This work improves the HRI experience of orthopedic surgical robots and has the potential to increase surgical efficiency and safety. Moreover, the proposed framework provides insights into the design of collaborative robotic systems for safety-critical tasks. Future work will be devoted to the validation of the developed system in clinical trials.
A Brief Introduction to Causal Inference in Machine Learning
Kyunghyun Cho
============================================================
PREFACE

This is a lecture note produced for DS-GA 3001.003 "Special Topics in DS - Causal Inference in Machine Learning" at the Center for Data Science, New York University in Spring, 2024. This course was created to target master's- and PhD-level students with a basic background in machine learning who had not previously been exposed to causal inference or causal reasoning. In particular, this course focuses on helping such students expand their view and knowledge of machine learning to incorporate causal reasoning, as this aspect is at the core of so-called out-of-distribution generalization (or the lack thereof).

This lecture note does not follow a traditional curriculum for teaching causal inference. It does not subscribe solely to either the potential outcome framework or the do-calculus framework, but is rather flexible in taking concepts and ideas from these two camps (which, after all, look more or less the same) in order to build up the foundation of causal inference from first principles. In doing so, the first half of this note covers a variety of basic topics, including probabilistic graphical models, structural causal models, causal quantities of interest, conditional vs. interventional probabilities, regression, randomized controlled trials, bandit algorithms, inverse probability weighting, matching and instrumental variables. I do not go too deep into each of these topics, although emphasis is given to how these topics are all connected with each other (and sometimes are equivalent). For this first half of the course, I read and consulted the following books (lightly only, though) and recommend that students go deeper into these books if they are interested in learning more about causal inference:

* Pearl. Causality. 2nd eds. 2009. <cit.>

* Imbens & Rubin. Causal inference in statistics, social, and biomedical sciences. 2015. <cit.>

* Cunningham. Causal Inference: the Mixtape. 2021. <cit.>

Based on the foundation built in the first half (or more like two thirds) of the course, the course takes a turn toward generalization in machine learning. In particular, I try to argue that the probabilistic-graphical-model-based framework from causal inference can be an invaluable tool for specifying and understanding so-called out-of-distribution generalization. I draw (coarse) connections from causal inference to the following ideas in machine learning, to demonstrate this point:

* Distributional shifts

* The principle of invariance

* Preference-based learning for language models

To be very honest, this is a very thin lecture note for a course with very thin content. This note should be considered the very first signpost at the entrance to a huge forest called causality, and nothing more. If you want to go slightly further, see the short introductory material I have written together with my PhD student, Jiwoong Daniel Im <cit.>.

Finally, I am infinitely grateful to Daniel Im, Divyam Madaan and Taro Makino for helping me as amazing teaching assistants, preparing the lecture note as well as giving the lab sessions in Spring 2024. The lab materials they have prepared are all available at <https://github.com/kyunghyuncho/2024-causal-inference-machine-learning>.
CHAPTER: PROBABILISTIC GRAPHICAL MODELS

The goal of causal inference is to figure out a causal relationship, or the lack thereof, between two sets of variables: causes and outcomes. It is thus only natural to think of how we determine whether any particular variable is a cause or an outcome. It is often relatively more straightforward to determine what an outcome variable is, as this determination is done based on our subjective interest. For instance, a typical outcome variable in medicine is a disease-free survival rate within some years since diagnosis or treatment. This choice is natural, since this variable, or its quantity, is what we want to maximize. It is however much less clear how to determine which variable should be considered a cause. For instance, in the classical example of 'smoking causes lung cancer', what makes us choose 'whether someone smokes cigarettes' as a cause variable rather than 'a mutation in a particular gene'? It becomes even more mind-boggling once we realize that this choice of 'smoking' as a cause means that we have decided to ignore many variables, such as 'whether a farmer decided to grow tobacco'.

It is thus an important, if not the most important, job for practitioners of causal inference to convincingly argue why some variables are included and others are omitted. They must also argue why some of the included variables are considered potential causes and why they chose a particular variable as an outcome. This process can be thought of as defining a small universe in which causal inference must be performed. There are many different ways to define and describe such a universe, and in this lecture note we largely stick to using a probabilistic graphical model, or a corresponding structural causal model, as a way to describe each universe, which is the main topic of this chapter.

§ PROBABILISTIC GRAPHICAL MODELS

In this course, we rely on probabilistic graphical models quite extensively in order to describe statistical and causal relationships among random variables. A probabilistic graphical model, which is also referred to as a Bayesian graphical model, is a directed graph G=(V,E), where V is a set of vertices/nodes and E is a set of directed edges. Each node v ∈ V corresponds to a random variable, and each edge e = (v_s, v_e) represents the dependence of v_e on v_s. We assume throughout the rest of this course that this graph is acyclic, that is, there is no cycle within the graph; in other words, G is a directed acyclic graph.

For each node v ∈ V, we define a probability distribution p_v(v | pa(v)) over this variable conditioned on all the parent nodes pa(v) = { v' ∈ V | (v', v) ∈ E }. We can then write a joint distribution over all the variables as p_V(V) = ∏_v ∈ V p_v(v | pa(v)), following the chain rule of probability. When pa(v) = ∅, i.e. v does not have any incoming edges, we call the associated distribution p_v(v) a prior distribution. With P the set of all conditional distributions p_v, we can denote any probabilistic graphical model by a triplet (V, E, P).

From this joint distribution, we can derive all kinds of conditional distributions by marginalizing variables and applying the definition of a conditional probability. If we are not interested in a particular node v̅ ∈ V, we can marginalize out this variable by p(V \ {v̅}) = ∑_v̅ p_V(V). If v̅ is a continuous random variable, we replace ∑ with ∫. We can always turn a joint probability into a conditional probability by p(V \ {ṽ} | ṽ) = p_V(V) / p_ṽ(ṽ).
Using the definition of the conditional probability, we can write marginalization in the following way: p(V \ {v̅}) = ∑_v̅ ( p_V(V) / p_v̅(v̅) ) p_v̅(v̅) = ∑_v̅ p(V \ {v̅} | v̅) p_v̅(v̅). Marginalization thus corresponds to computing the weighted sum of the conditional probability of the remaining variables according to the marginal probability of the variable being marginalized.

Let us assume we can sample readily from any conditional distribution p_v with v ∈ V. We can then draw a sample from the joint distribution readily by a breadth-first sweep over all the variables. That is, ṽ ∼ p_v(v | p̃a(v)), where p̃a(v) = { ṽ' ∼ p_v'(v' | p̃a(v')) | v' ∈ pa(v) }. This procedure is called ancestral sampling and is an exact, unbiased way to sample from the joint distribution. If we set aside efficiency, ancestral sampling is an extremely powerful tool, as it allows us to sample from any marginal distribution as well as any conditional distribution. In the former case, we simply discard the draws that correspond to the uninteresting variables (those being marginalized out). In the latter case, we only keep samples whose values, corresponding to the conditioning variables (those on the right-hand side of |), are precisely those that we want the distribution to be conditioned on. Both of these approaches are inefficient, and it is often much better to use a more sophisticated sampling algorithm, often based on the idea of Markov chain Monte Carlo (MCMC).

Any probability distribution can be expressed as a table (though this table may have infinitely many rows) that consists of two columns, where the first column takes the value of interest and the second column its probability (either density or mass). The probability function p_v above works by hashing v into a row index in this table and retrieving the associated probability, i.e. p_v: 𝕍 → ℝ_+. This function satisfies the normalization property 1 = ∑_v ∈ 𝕍 p_v(v) if v is discrete, and 1 = ∫_v ∈ 𝕍 p_v(v) dv if v is continuous. This view allows us to effectively turn a set of samples into the original distribution from which these samples were drawn (of course, with some variance). Let S = (ṽ_1, ṽ_2, …, ṽ_N) be a multiset of samples drawn from an unknown distribution over a discrete variable v, without loss of generality. We can then recover the original distribution by constructing a table where each row is (v, ∑_n=1^N 1(ṽ_n = v) / N), with v ∈ 𝕍. This corresponds to maximum likelihood learning without any regularization. In this table, we can think of the rows' contents as the parameters of the model we have built to approximate the underlying distribution from which S was drawn. This view will come in handy later, when we discuss using a deep neural network instead of an explicit table to represent a probability distribution.

A major downside of this explicit table-based approach is that the estimate q_v satisfies q_v(v) = 0 for all v ∉ S. Such an extreme value (a probability of 0) should not be used when we estimate these probabilities from a small number of data points, since we cannot rule out the possibility that we simply did not see a particular instance because we did not draw enough samples. Furthermore, this prevents us from properly defining a conditional probability p_v'(v' | v), since p_v'(v' | v) = p_v',v(v', v) / p_v(v): if v is set such that p_v(v) = 0, this conditional probability is not well defined. We thus have to regularize maximum likelihood learning.
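Both ingredients just described—ancestral sampling and the (now regularized) table-based estimator—fit in a few lines of Python. In the sketch below, the chain w → u → v and its conditional probability tables are arbitrary choices of ours, and a pseudo-count of 1 plays the role of the regularizer:

    import numpy as np

    rng = np.random.default_rng(0)

    # A three-node chain w -> u -> v over binary variables, defined by
    # (arbitrarily chosen) conditional probability tables.
    p_w = np.array([0.6, 0.4])
    p_u_w = np.array([[0.9, 0.1], [0.3, 0.7]])
    p_v_u = np.array([[0.8, 0.2], [0.5, 0.5]])

    def ancestral_sample():
        w = rng.choice(2, p=p_w)
        u = rng.choice(2, p=p_u_w[w])
        v = rng.choice(2, p=p_v_u[u])
        return w, u, v

    # Table-based estimation with additive smoothing, so that no
    # configuration is assigned probability exactly zero.
    counts = np.ones((2, 2, 2))
    for _ in range(10_000):
        w, u, v = ancestral_sample()
        counts[w, u, v] += 1
    p_hat = counts / counts.sum()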
This probabilistic graphical model formalism is a good way to abstract out some of the details of how individual probability distributions are implemented when studying causal inference in machine learning, as it frees us from worrying about the aspect of learning until it is absolutely necessary. Learning in this context refers to inferring the underlying distribution from which data points were drawn, and the table-based approach above is the most naive instance of it, one that is not really useful in practice. We can however for now assume that this table-based approach works well, and continue studying causal inference with already-inferred conditional distributions.

§ STRUCTURAL CAUSAL MODELS

Although a directed edge in a probabilistic graphical model looks like it encodes the order in which the variables are generated, this is not necessarily so from the perspective of probability. According to Bayes' rule, p_v(v | v') = p_v'(v' | v) p_v(v) / p_v'(v'). This implies that we can always flip the arrow of the edge between v and v' without altering the joint or conditional probabilities.[We will assume from here on that p(x) > 0 for any x and p.] This lack of direct relevance of the direction of each edge in G to the joint probability raises a lot of confusion when we try to use the probabilistic graphical model in the context of causal inference and reasoning.

Instead, we may want to use a slightly different way to express the same generative process underlying a given probabilistic graphical model G=(V,E). We do so by writing the process of sampling a value associated with each variable v ∈ V, rather than its distribution, as the combination of a deterministic function f_v and external (exogenous) noise ϵ_v: v ← f_v(pa(v), ϵ_v). This says that the value of v is computed based on the values of its parent nodes pa(v) and the external noise ϵ_v by the deterministic function f_v. This way is much more explicit about the generating process than before, since the use of the function f clearly suggests that perturbing the output of the function would not change the input to the function, although perturbing the input to the function would change the output. For instance, you can imagine that f_v corresponds to pushing a book on your desk with your hand with force v', and that v encodes the new position of the book. ϵ_v can be an unknown level of friction caused by your earlier (but forgotten) choice of desk. Changing the force v' of course affects the new position of the book v, together with the unknown friction level ϵ_v, but changing the position of the book would not change the force that has not yet been applied to it.

We call this representation of the generative process a structural causal model. Similarly to the probabilistic graphical model above, we can represent any structural causal model as a triplet (V, F, U), where V is a set of variables, F is the set of corresponding functions and U is the set of corresponding noise variables. Any structural causal model can be turned into a probabilistic graphical model by a change of variables through f_v(pa(v), ·), assuming a known prior distribution over the external noise variable ϵ_v; or, more simply, we can do so because we can find a distribution over v ∼ f_v(pa(v), ϵ_v). We can draw samples from any given structural causal model, just like with the probabilistic graphical models above, by ancestral sampling.
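Computationally, a structural causal model is nothing but a short program: each assignment is executed in topological order with fresh exogenous noise. The following sketch (with arbitrarily chosen functions f) makes this concrete and shows how such a model induces a joint distribution over its variables:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_scm(n):
        # Each line is an assignment v <- f_v(pa(v), eps_v); the particular
        # functions are arbitrary choices for illustration.
        eps_u, eps_w, eps_v = rng.normal(size=(3, n))
        u = eps_u                        # u <- f_u(eps_u)
        w = 2.0 * u + eps_w              # w <- f_w(u, eps_w)
        v = -w + 0.5 * u + eps_v         # v <- f_v(w, u, eps_v)
        return u, w, v

    u, w, v = sample_scm(100_000)
    print(np.cov(u, v)[0, 1])            # approx. -1.5, since v = -1.5 u + noise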
Once we have samples from the joint distribution, we can infer various conditional distributions. Among these, the posterior distribution over the external noise variables is of particular interest for the purpose of counterfactual reasoning. Let q(U) be the distribution over the noise variables ϵ_v that corresponds to all samples that led to a particular configuration V̂. This posterior q(U) can be thought of as the distribution over the external (uncontrollable) factors that led to the particular outcome. We can then answer the question of what would have happened to a target variable v had some of the variables been set differently, by fixing the external factors to follow q(U), forcing those variables to the alternative values, and recomputing the remaining variables of the structural causal model. This corresponds to counterfactual reasoning (had I done something differently, what would have happened?).

§ LEARNING AND A GENERATIVE PROCESS

Learning, in machine learning, refers to the process by which we configure a black-box predictive model to capture the predictive relationship between a set of variables. In the simplest form, let us consider having two variables: an input v' and an output v. g_θ is a predictive function that models the relationship between v' and v by mapping an instance of v' to the corresponding v, i.e., g_θ(v') is the prediction of v given v'. θ is a collection of parameters that a learning algorithm configures to make g as predictive of v given v' as possible.

Learning starts from data, which is a set of examples drawn from an unknown data generating process 𝒢. This data generating process can be described using either a probabilistic graphical model or, equivalently, a structural causal model. Let D = { (v_1, v_1'), …, (v_N, v_N') } be our training dataset. The goal is then to use this dataset to infer the θ that makes g highly predictive of v given v'. There are many ways to measure how predictive g is given a pair (v, v'), but we use here the log-probability: r(θ; v, v') = log p(v | v'; g_θ(v')). This means that g_θ(v') parametrizes a conditional distribution over v given v', or equivalently, g_θ(v') outputs a conditional distribution over v. If this quantity is large, it means that v is highly probable under the predictive distribution of g given v'. Learning then amounts to solving the following optimization problem: max_θ (1/N) ∑_n=1^N r(θ; v_n, v_n'). Once learning is over, we can readily compute p(v | v') ≈ p(v | v'; g_θ(v')) = p(v | v'; θ). If we assume that p(v | v'; θ) is a great approximation of p(v | v'), we can use the former in place of the latter without too much worry.

In other words, learning corresponds to figuring out a conditional distribution p(v | v') from data. This data was produced from the underlying data generating process 𝒢, which may have more variables in addition to v and v'. From this description of learning, it is immediately clear how this can be a replacement for the table-based approach from earlier, and that the table-based approach was a special case of this learning-based approach, where θ corresponded to all the entries of the table. Once we take this learning-based approach, we free ourselves from having to explicitly construct a large table, and can also use various regularization techniques to avoid the issue of zero probabilities, as well as benefit from generalization. Let 𝒢 = (V, E, P) with v, v' ∈ V and V̅ = V \ { v, v' }. Then, p(v | v') = ∑_V̅ p_V({ v, v' } ∪ V̅) / ∑_{v} ∪ V̅ p_V({ v, v' } ∪ V̅). That is, we marginalize out all variables in V̅ and then divide by the marginal probability of v' to get the conditional probability of v given v'.
On one hand, this tells us that learning allows us to recover any arbitrary conditional distribution induced by an underlying data generating process, as long as we have data produced following the same data generating process. On the other hand, this also tells us that there can be many different data generating processes that result in exactly the same conditional distribution given a pair of variables. In fact, as long as we do not use all variables from the generating process, there will always be ambiguities that cannot be resolved based on the learning procedure laid out above. As an example, consider the following two structural causal models:

Causal Model 1. v' ← ϵ_v' ; v^l ← v' + a + ϵ_v^l ; v^r ← v' + b + ϵ_v^r ; v ← v^l + v^r + ϵ_v, where ϵ_v^l and ϵ_v^r both follow the standard Normal distribution 𝒩(0, 1^2), as does ϵ_v.

Causal Model 2. v' ← ϵ_v' ; v^c ← 2v' + a + b + ϵ_v^c ; v ← v^c + ϵ_v, where ϵ_v^c ∼ 𝒩(0, 2).

Then, p(v | v') = 𝒩(v; 2v' + a + b, 3) for both causal models, although these two models are very different. This ambiguity plays an important role in both causal inference and so-called out-of-distribution generalization. We will study both more carefully later in the course.

§ LATENT VARIABLE MODELS ARE NOT NECESSARILY CAUSAL MODELS

When we build a predictive model of a pair (v, v'), there are variables that are left out from the original data generating process. Those unused variables may be ones for which we simply threw away observations, because they were not of interest, or ones we genuinely do not observe. These unobserved variables contribute to the complexity of p(v | v') via the process of marginalization. It is not trivial to build a predictive model, or equivalently in our context a deep neural network, that outputs a highly complex predictive distribution, as these complex distributions often do not have easily parametrizable analytical forms. In these cases, there are two commonly used approaches: (1) autoregressive models and (2) latent variable models.

The former relies on the chain rule of probability and decomposes the conditional probability into a product of coordinate-wise conditional probabilities: p(v | v'; θ) = ∏_i=1^d p(v_i | v_<i, v'; θ), where v = [v_1, …, v_d]. By assuming that each coordinate-wise conditional probability is of a simpler form, we can still use a simple deep neural network to approximate a highly complex joint conditional probability. Because each conditional probability on the right-hand side is parametrized with the same set of parameters θ, it is possible to build such an autoregressive model for a variable-sized observation, that is, when dim(v) is not fixed a priori. This approach is behind the recently successful and popular large-scale language models <cit.>.

Unlike autoregressive modelling, latent variable models explicitly introduce an unobserved variable u that represents the missing portion of the underlying data generating process. Just like the missing (unobserved) portion gave rise to the highly complex predictive distribution via the process of marginalization, the introduced (anonymous) latent variable u does the same: p(v | v'; θ) = ∑_u p_u(u) p(v | v', u; θ). If u is continuous, we replace ∑ with ∫. Because this marginalization is almost always difficult, it is natural to resort to sampling-based approximation.
Because we are often interested in gradient-based learning, we consider sampling-based gradient approximation: ∇ log p(v | v'; θ) = ∑_u p_u(u) p(v | v', u; θ) ∇ log p(v | v', u; θ) / ∑_u' p_u(u') p(v | v', u'; θ) = ∑_u p(u | v, v'; θ) ∇ log p(v | v', u; θ) ≈ (1/M) ∑_m=1^M ∇ log p(v | v', u^m; θ), where u^m is the m-th posterior sample. It is however just as challenging to compute the posterior distribution p(u | v, v'; θ) or to sample from it. It is thus more common these days to maximize a lower bound on p(v | v'; θ) while amortizing approximate posterior inference into a separate neural network. This is called a variational autoencoder <cit.>.

Despite the seeming similarity, these latent variables are not necessarily closely related to the actual variables in the data generating process. They may or may not be. If they indeed correspond to actual variables in the data generating process that we simply did not observe or decided not to use data from, we may be able to derive conditions under which we can identify these unobserved variables and their associated distributions from partial data alone. It is however more common and realistic to think of these latent variables as a means of improving the expressive power of a deep neural network. In other words, latent variables are useful even if there are truly only two variables, v and v', in the data generating process, since the true p_v(v | v') may be complicated on its own. Nevertheless, this is a useful tool for modelling any complex distribution, and thus we have spent a little bit of time discussing it.

§ SUMMARY

In this chapter, we have established the foundations on which we can discuss causal inference and its relationship to machine learning in the rest of the course:

* A brief discussion of the necessity of defining a universe over which causal inference is performed;

* Two (equivalent) ways to define a universe: probabilistic graphical models and structural causal models;

* What learning is, once the universe is defined.

Based on these, we begin our journey into causal inference in machine learning.

CHAPTER: A BASIC SETUP

§ CORRELATION, INDEPENDENCE AND CAUSATION

Two random variables u and v are independent if and only if p(u, v) = p(u) p(v). That is, the probability of u taking a certain value is not affected by that of v taking another value. We say u and v are dependent upon each other when this condition does not hold. In everyday life, we often confuse dependence with correlation, where two variables are correlated if and only if cov(u, v) = 𝔼[ (u - μ_u)(v - μ_v) ] ≠ 0, where μ_u = 𝔼[u] and μ_v = 𝔼[v]. When this covariance is 0, we say these two variables are uncorrelated. Despite our everyday confusion, these two notions are only related and not equivalent. When two variables are independent, they are also uncorrelated, but when two variables are uncorrelated, they may not be independent.

Furthermore, it turns out that these two quantities are also only remotely related to the existence (or lack) of causation between two variables. You must have heard the statement "correlation does not imply causation." There are two sides to this statement. First, the existence of correlation between two variables does not imply that there exists a causal relationship between these two variables. An extreme example of this is tautology: if the relationship between u and v is identity, there is no causal relationship but the correlation between these two is maximal. Second, the lack of correlation does not imply the lack of causation.
This is the more important aspect of this statement. Even if there is no correlation between two variables, there could be a causal mechanism behind these two variables. Although it is a degenerate case, consider the following structural causal model: a ← -u + ϵ_a ; b ← u + ϵ_b ; v ← a + b + ϵ_v. The value of v is caused by u via two paths, u → a → v and u → b → v, but these paths cancel each other. If we only observe (u, v) pairs, it is easy to see that they are uncorrelated, since v = ϵ_a + ϵ_b + ϵ_v does not depend on u at all. We will have more discussion later in the semester, but it is a good time for you to pause and think about whether these two paths matter, since they cancel each other.

Consider as another example the following structural causal model: z ← ϵ_z ; u ← 0.2 z + √(0.96) ϵ_u ; v ← 0.1 u - 0.5 z + 0.1 ϵ_v, where ϵ_z, ϵ_u and ϵ_v are all standard Normal variables (the noise scale √(0.96) makes var(u) = 1, so that cov(u, v) = 0.1 var(u) - 0.5 cov(u, z) = 0.1 - 0.1 = 0). Again, the structural causal model clearly indicates that u causally affects v via the term 0.1 u, but the correlation between u and v is 0 when we consider those two variables alone (that is, after marginalizing out z).

This second observation applies equally to independence. That is, the independence of two variables does not imply the lack of a causal mechanism between them. The examples above apply here equally, as two uncorrelated jointly Normal variables are also independent. This observation connects to the earlier observation that there are potentially many data generating processes that give rise to the same conditional distribution between two sets of variables. Here as well, the independence or correlatedness of two variables may map to many different generating processes that encode different causal mechanisms behind these variables. In other words, we cannot determine the causal relationship between two variables (without loss of generality) without predefining the underlying generating process (in terms of either the probabilistic graphical model or, equivalently, the structural causal model).[There are algorithms to discover an underlying structural causal model from data, but these algorithms also require some assumptions, such as the definition of the goodness of a structural causal model. This is necessary, since these algorithms all work by effectively enumerating all structural causal models that can produce the data faithfully and choosing the best one among these.] Put differently, we must consider both the variables of interest and the associated data generating model in order to determine whether there exists a causal relationship between these variables and what that relationship is.

§ CONFOUNDERS, COLLIDERS AND MEDIATORS

Let us consider a simple scenario where there are only three variables: u, v and w. We are primarily interested in the relationship between the first two, u and v. We will consider various ways in which these three variables are connected with each other and how such wiring affects the relationship between u and v.

Directly connected. Consider the following probabilistic graphical model: (Graph: u → v, with w disconnected from both u and v.) Here w affects neither u nor v, while u directly causes v. In this case, the causal relationship between u and v is clear. If we perturb u, it will affect v, according to the conditional distribution p_v(v | u). This tells us that we can ignore any node in a probabilistic graphical model that is not connected to any variable of interest.
Consider the following probabilistic graphical model, where w is shaded, which indicates that w is observed.

[latent] (u) u; [latent, right=1cm of u] (v) v; [obs, above=0.5cm of u, xshift=1cm] (w) w; wu; wv;

In this graph, the value/distribution of u and that of v are each already determined individually, because we have observed w. This corresponds to the definition of conditional independence; p(u,v |w) = p(u|w) p(v|w). Because the edge is directed from w to u, perturbing u does not change the observed value of w. The same holds for v: perturbing v does not affect u, because the path between u and v via w is blocked by observing w.

An unobserved confounder. Consider the case where w was not observed.

[latent] (u) u; [latent, right=1cm of u] (v) v; [latent, above=0.5cm of u, xshift=1cm] (w) w; wu; wv;

We first notice that in general p(u,v) = ∫_w p(u|w)p(v|w)p(w) dw ≠ q(u)q(v), implying that u and v are not independent, unlike when w was observed. In other words, u and v are dependent once w is left unobserved. Perturbing u however still does not affect v, since the value of w is not determined by the value of u according to the corresponding structural causal model. That is, u does not affect v causally (and vice versa.) This is where dependence and causation start to deviate from each other; u and v are not independent, but neither is the cause of the other. Analogous to the former case of the observed w, where we said the path u ← w → v was closed, we say that the same path is open in this latter case. Because of this effect w has, we call it a confounder. The existence of a confounder w makes it difficult to tell whether the dependence between two variables we see is due to a causal relationship between these variables. w confounds this analysis.

An observed collider. Consider the following graph where the arrows are flipped.

[latent] (u) u; [latent, right=1cm of u] (v) v; [obs, above=0.5cm of u, xshift=1cm] (w) w; uw; vw;

In general,

p(u,v | w) = p(w|u, v) p(u) p(v) / ∫∫ p(w|u, v) p(u) p(v) du dv ≠ q(u) q(v),

which means that u and v are not independent conditioned on w. This is sometimes called the explaining-away effect, because observing w explains away one of two potential causes behind w. Although u and v are not independent in this case, there is no causal relationship between u and v. w is where the causal effects of u and v collide with each other (hence, w is a collider) and does not pass along the causal effect between u and v. Similarly to the case of an unobserved confounder above, this is one of those cases where dependence does not imply causation. We say that the path u → w ← v is open.

An unobserved collider. Consider the case where the collider w is not observed.

[latent] (u) u; [latent, right=1cm of u] (v) v; [latent, above=0.5cm of u, xshift=1cm] (w) w; uw; vw;

By construction, u and v are independent, as p(u,v) = ∫ p(u) p(v) p(w|u,v) dw = p(u) p(v) ∫ p(w | u, v) dw = p(u) p(v). Just like before, neither u nor v is the cause of the other. This path is closed.

An observed mediator. Consider the case where there is an intermediate variable between u and v:

[latent] (u) u; [obs, right=0.5cm of u] (w) w; [latent, right=0.5cm of w] (v) v; uw; wv;

Because

p(u,v | w) = p(u) p(w|u) p(v|w) / p(w) = q(u|w) q(v|w),

u and v are independent conditioned on w. However, perturbing u does not affect v, since the value of w is observed (that is, fixed.) We say that u → w → v is closed in this case: once the mediator is held fixed, conditional independence goes hand in hand with the lack of causal influence.
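These cases can be checked numerically. Below is a small simulation under arbitrary linear-Gaussian choices; it is a sketch for building intuition, not part of the formal argument, and all constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Confounder: w -> u, w -> v (no edge between u and v).
w = rng.normal(size=N)
u = w + rng.normal(size=N)
v = w + rng.normal(size=N)
print(np.corrcoef(u, v)[0, 1])        # clearly nonzero: the path u <- w -> v is open
m = np.abs(w) < 0.1                   # crude stand-in for "observing" w near a fixed value
print(np.corrcoef(u[m], v[m])[0, 1])  # near zero: observing w closes the path

# Collider: u -> w <- v (u and v independent a priori).
u = rng.normal(size=N)
v = rng.normal(size=N)
w = u + v + rng.normal(size=N)
print(np.corrcoef(u, v)[0, 1])        # near zero: the path u -> w <- v is closed
m = w > 1.0                           # conditioning on (a range of) w opens the path
print(np.corrcoef(u[m], v[m])[0, 1])  # clearly negative: explaining away
```

The negative correlation in the last line is exactly the explaining-away effect: among samples with a large w, a large u makes a large v less necessary, and vice versa.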
An unobserved mediator. What if w is not observed, as below?

[latent] (u) u; [latent, right=0.5cm of u] (w) w; [latent, right=0.5cm of w] (v) v; uw; wv;

It is then clear that u and v are not independent, since p(u,v) = ∫ p(u) p(w|u) p(v|w) d w = p(u) ∫ p(w|u) p(v|w) d w = p(u) q(v|u). Perturbing u will change the distribution/value of w, which will consequently affect that of v, meaning that u causally affects v. This effect is mediated by w, and hence we call w a mediator.

§ DEPENDENCE AND CAUSATION

We can chain the rules that were defined among three variables; u, v and w, in order to determine the dependence between two nodes, u and v, within any arbitrary probabilistic graphical model given a set Z of observed nodes. This procedure is called D-separation, and it tells us two things. First, we can check whether u ⊥_Z v, that is, whether u and v are independent given Z. More importantly, however, we get all open paths between u and v. These open paths are conduits that carry statistical dependence between u and v regardless of whether there is a causal path between u and v, where we define a causal path as an open directed path between u and v.[ Unlike a usual path, in a directed path, the directions of all edges must agree with each other, i.e., they all point in the same direction. ] Dependencies arising from open, non-causal paths are often casually referred to as `spurious correlation' or `spurious dependency'. When we are performing causal inference for the purpose of designing a causal intervention in the future, it is imperative to dissect out these spurious correlations and identify the true causal relationship between u and v. It is however unclear whether we want to remove all spurious dependencies or whether we should only remove spurious dependencies that are unstable, when it comes to prediction in machine learning. We will discuss more about this contention later in the course. In this course, we do not go deeper into D-separation. We instead stick to a simple setting where there are only three or four variables, so that we can readily identify all open paths and determine which are causal and which others are spurious.

§ CAUSAL EFFECTS

We have so far avoided defining more carefully what it means for a node u to affect another node v causally. Instead, we simply said u affects v causally if there is a directed path from u to v under the underlying data generating process. This is however unsatisfactory, as there are loopholes in this approach. The most obvious one is that some of those directed edges may correspond to a constant function. For instance, an extreme case is where the structural causal model is

a ← f_a(u, ϵ_a)
v ← f_v(a, ϵ_v),

where f_a(·) = 0. In this case, the edge from u to a is effectively non-existent, although we wrote it as if it existed. Rather, we want to define a causal effect of u on v by thinking of how a perturbation on u propagates over the data generating process and arrives at v. More specifically, we consider forcefully setting the variable u (the cause variable) to an arbitrary value û. This corresponds to replacing the following line in the structural causal model G=(V,F,U)

u ← f_u(pa(u), ϵ_u)

with

u ← û.

û can be a constant or can also be a random variable, as long as it is not dependent on any other variables in the structural causal model. Once this replacement is done, we run this modified structural causal model G(û)=(V,F,U) in order to compute the following expected outcome:

𝔼_U[v_𝒢(û)],

where U is a set of exogenous factors (e.g. noise.)
If u does not affect v causally, this expected outcome would not change (much) regardless of the choice of û. As an example, assume u can take either 0 or 1, as in treated or placebo. We then check the expected treatment effect on the outcome v by

𝔼_U[v_𝒢(û=1)] - 𝔼_U[v_𝒢(û=0)].

We would want this quantity to be positive and large to know that the treatment has a positive causal effect on the outcome. This procedure of forcefully setting a variable to a particular value is called a do operator. The impact of do is more starkly demonstrated if we consider a probabilistic graphical model rather than a structural causal model. Let p_u(u | pa(u)) be the conditional distribution over u in a probabilistic graphical model G=(V,E,P). We construct a so-called interventional distribution as

p(v | do(u = û)),

which states that we are now forcefully setting u to û instead of letting it be a sample drawn from the conditional distribution p_u(u | pa(u)). That is, instead of u ∼ u | pa(u), we do u ← û. In other words, we replace the conditional probability p_u(u | pa(u)) with p_u(u | pa(u)) = δ(u - û), where δ is a Dirac measure, or replace all occurrences of u in the conditional probabilities of child(u) with a constant û, where child(u) = { v' ∈ V | (u, v') ∈ E }. As a consequence, in the new modified graph G̃, there is no edge coming into u (or û) anymore, i.e., pa(u) = ∅. u is now independent of all the other nodes a priori. Because do modifies the underlying data generating process, p(v|u=û; G) and p(v|do(u=û); G) = p(v|u=û; G̃) differ from each other. This difference signifies the separation between statistical and causal quantities. We consider this separation in some minimal cases.

§ CASE STUDIES

An unobserved confounder and a direct connection. Consider the case where w was not observed.

[latent] (u) u; [latent, right=1cm of u] (v) v; [latent, above=0.5cm of u, xshift=1cm] (w) w; wu; wv; uv;

Under this graph G,

p_G(v|u=û) = ∑_w p(w) p(û|w) p(v|û,w) / p(û) = 𝔼_w [ p(û|w)/p(û) p(v| û, w) ],

from which we see that there are two open paths between v and u:

* u → v: a direct path;
* u ← w → v: an indirect path via the unobserved confounder.

The statistical dependence between[ We do not need to specify the direction of statistical dependence, since the Bayes' rule allows us to flip the direction. ] u and v flows through both of these paths, while the causal effect of u on v only flows through the direct path u → v. The application of do(u = û) in this case would sever the edge from w to u, as in

[obs] (u) û; [latent, right=1cm of u] (v) v; [latent, above=0.5cm of u, xshift=1cm] (w) w; wv; uv;

Under this modified graph G̃,

p_G(v | do(u = û)) = p_G̃(v | u=û) = 1/q(û) ∑_w p(w) q(û) p(v | û, w) = ∑_w p(w) p(v | û, w) = 𝔼_w [ p(v | û, w) ],

where we use q(û) to signify that this is not the same as p(û) above. The first one, p_G(v | u = û), is a statistical quantity, and we call it a conditional probability. The latter, p_G(v | do(u = û)), is instead a causal quantity, and we call it an interventional probability. Comparing these two quantities, p_G(v | u = û) and p_G(v | do(u = û)), the main difference is the multiplicative factor p(û|w)/p(û) inside the expectation. The numerator p(û|w) tells us how likely this treatment û was given under w, while the denominator p(û) tells us how likely the treatment û was given overall. In other words, we upweight the impact of û and w if û was more probable under w than overall. This observation will allow us to convert between these two quantities later.
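The gap between these two quantities can be computed exactly in a small discrete example. The following sketch uses arbitrary numbers for p(w), p(u|w) and p(v|u,w); only the two formulas above are taken from the text.

```python
import numpy as np

# A hypothetical discrete instance of the confounded graph above.
p_w = np.array([0.5, 0.5])            # p(w)
p_u1_given_w = np.array([0.9, 0.2])   # p(u=1 | w): u depends strongly on w
p_v1 = np.array([[0.1, 0.5],          # p(v=1 | u, w), rows indexed by u, columns by w
                 [0.4, 0.8]])

u_hat = 1
p_u1 = (p_w * p_u1_given_w).sum()     # p(u_hat) by marginalizing out w

# Conditional probability: E_w[ p(u_hat|w)/p(u_hat) * p(v=1 | u_hat, w) ].
cond = (p_w * p_u1_given_w / p_u1 * p_v1[u_hat]).sum()

# Interventional probability: E_w[ p(v=1 | u_hat, w) ]; the reweighting factor is gone.
interv = (p_w * p_v1[u_hat]).sum()

print(cond, interv)  # ~0.47 vs 0.60: w confounds u and v, so the two disagree
```

The conditional probability overweights the w under which û was more probable, exactly the factor p(û|w)/p(û) discussed above.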
An observed collider and a direct connection. Let us flip the edges from w so that those edges are directed toward w. We further assume that we always observe w to be a constant (1).

[latent] (u) u; [latent, right=1cm of u] (v) v; [obs, above=0.5cm of u, xshift=1cm] (w) w=1; uw; vw; uv;

The do operator on u does not alter the graph above. This means that the conditional probability and the interventional probability coincide with each other in this case, conditioned on observing w=1. This however does not mean that the conditional probability p(v|u=û,w=1), or equivalently the interventional probability p(v|do(u=û),w=1), measures the causal effect of u on v alone. As we saw before and also can see below, there are two open paths, u → v and u → w ← v, between u and v through which the dependence between u and v flows:

p(v | u, w=1) = p(u) p(v | u) p(w=1|u, v) / ∑_v' p(u) p(v'| u) p(w=1|u, v').

We must then wonder whether we can separate out these two paths from each other. It turns out, unfortunately, that this is not possible in this scenario, because we need the cases of w ≠ 1 (e.g. w=0) for this separation. If you recall how we can draw samples from a probabilistic graphical model while conditioning some variables to take particular values, it was all about selecting a subset of samples drawn from the same graph without any observed variables. This selection effectively prevents us from figuring out the effect of u on v via w. We will have more discussion on this topic later in the semester in the context of invariant prediction. Because of this inherent challenge, which may not even be addressable in many cases, we will largely stick to the case of having a confounder in this semester.

§ SUMMARY

In this chapter, we have learned about the following topics:

* How to represent a data generating process: a probabilistic graphical model vs. a structural causal model;
* How to read out various distributions from a data generating process: ancestral sampling and Bayes' rule;
* The effect of confounders, colliders and mediators on independence;
* Causal dependency vs. spurious dependency;
* The do operator.

CHAPTER: ACTIVE CAUSAL INFERENCE

In this chapter, we assume the following graph G. We use a, y and x, instead of u, v and w, to denote the action/treatment, the outcome and the covariate, respectively. The covariate x is a confounder in this case, and it may or may not be observed, depending on the situation.

[latent] (a) a; [latent, right=1cm of a] (y) y; [latent, above=0.5cm of a, xshift=1cm] (x) x; xa; xy; ay;

An example case corresponding to this graph is vaccination.

* a: is the individual vaccinated?
* y: has the individual been infected by the target infectious disease with symptoms, within 6 months of vaccine administration?
* x: the underlying health condition of the individual.

The edge x → a is understandable, since we often cannot vaccinate an individual with an active underlying health condition. The edge x → y is also understandable, since healthy individuals may contract the disease without any symptoms, while immunocompromised individuals for instance may show a greater degree of symptoms with a higher chance. The edge a → y is also reasonable, as the vaccine must have been developed with the target infectious disease as its goal. In other words, this graph encodes our structural prior about vaccination. With this graph that encodes the reasonable data generating process, causal inference then refers to figuring out the degree of the causal effect of a on y.
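Before defining the causal quantities of interest, it may help to fix a concrete simulator for this graph. The functional forms and constants below are entirely hypothetical; we will refer back to this sketch when sanity-checking the estimators in this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, policy=None):
    """Draw n triplets (x, a, y) from a hypothetical SCM consistent with the graph above.

    x: underlying health (1 = healthy), a: vaccinated, y: stayed symptom-free.
    If `policy` is given, it overrides p(a|x); that is, it severs the edge x -> a,
    which is exactly what do() and randomization will do later in this chapter.
    """
    x = rng.binomial(1, 0.5, size=n)
    p_a = policy(x) if policy is not None else 0.2 + 0.6 * x  # healthier people get vaccinated more
    a = rng.binomial(1, p_a)
    p_y = 0.3 + 0.3 * x + 0.2 * a                             # both x and a improve the outcome
    y = rng.binomial(1, p_y)
    return x, a, y

x, a, y = sample(100_000)
# Naive conditional contrast is biased upward: vaccinated people were healthier anyway.
print(y[a == 1].mean() - y[a == 0].mean())   # noticeably larger than 0.2

x, a, y = sample(100_000, policy=lambda x: np.full(x.shape, 0.5))
print(y[a == 1].mean() - y[a == 0].mean())   # ~0.2, the true causal effect by construction
```

By construction, the causal effect of a on y in this toy process is 0.2 regardless of x, which is what the severed-edge variant recovers.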
§ CAUSAL QUANTITIES OF INTEREST

In this particular case, we are interested in a number of causal quantities. The most basic, and perhaps most important, one is whether the treatment is effective (i.e., results in a positive outcome) generally. This corresponds to checking whether 𝔼[y | do(a=1)] > 𝔼[y | do(a=0)], or equivalently computing

ATE = 𝔼[y | do(a=1)] - 𝔼[y | do(a=0)],

where

𝔼[y | do(a=â)] = ∑_y y p(y | do(a=â)) = ∑_y y ∑_x p(x) p(y|â, x) = ∑_y y 𝔼_x ∼ p(x)[ p(y|â, x) ].

In words, we average the effect of â on y over the covariate distribution, but the choice of â should not depend on x. Then, we use this interventional distribution p(y | do(a=â)) to compute the average outcome. We then look at the difference in the average outcome between the treatment and not (placebo), to which we refer as the average treatment effect (ATE). It is natural to extend ATE such that we do not marginalize the entire covariate x, but fix some part to a particular value. For instance, we might want to compute ATE but only among people in their twenties. Let us rewrite the covariate x as a concatenation of x and x', where x' is what we want to condition ATE on. That is, instead of p(y | do(a=â)), we are interested in p(y | do(a=â), x'=x̂'). This corresponds to first modifying G into

[latent] (a) a; [latent, right=1cm of a] (y) y; [latent, above=0.5cm of a, xshift=0.5cm] (x) x; [obs, right=0.5cm of x] (xc) x̂'; xa; xy; xcx; xca; xcy; ay;

and then into

[latent] (a) a; [latent, right=1cm of a] (y) y; [latent, above=0.5cm of a, xshift=0.5cm] (x) x; [obs, right=0.5cm of x] (xc) x̂'; xy; xcx; xcy; ay;

We then get the following conditional average treatment effect (CATE):

CATE = 𝔼[y | do(a=1), x'=x̂'] - 𝔼[y | do(a=0), x'=x̂'],

where

𝔼[y | do(a=â), x'=x̂'] = ∑_y y p(y | do(a=â), x'=x̂') = ∑_y y ∑_x p(x|x̂') p(y|â, x'=x̂', x) = ∑_y y 𝔼_x ∼ p(x|x̂')[ p(y|â, x'=x̂', x) ].

You can see that this is really nothing but ATE conditioned on x'=x̂'. From these two quantities of interest above, we see that the core question is whether and how to compute the interventional probability of the outcome y given the intervention on the action a, conditioned on the context x'. Once we can compute this quantity, we can compute various target quantities under this distribution. We thus do not go deeper into other widely used causal quantities in this course but largely stick to ATE/CATE.

§ REGRESSION: CAUSAL INFERENCE CAN BE TRIVIAL

Assume for now that we are given a set of data points drawn from this graph G: D = { (a_1, y_1, x_1), …, (a_N, y_N, x_N) }. For every instance, we observe all of the action a, outcome y and covariate x. Furthermore, we assume all these data points were drawn from the same fixed distribution p^*(a, y, x) = p^*(x) p^*(a|x) p^*(y|a,x) and that N is large. In this case, we can use a non-parametric estimator, such as tables, deep neural networks and gradient boosted trees, to reverse-engineer each individual conditional distribution from this large dataset D. This is just like what we have discussed earlier in <ref>. Among the three conditional distributions above, we are only interested in learning p^*(x) and p^*(y|a,x) from data, resulting in p(x;θ) and p(y|a,x;θ), where θ refers to the parameters of each deep neural network.[ Although there is no reason to prefer deep neural networks over random forests or other non-parametric learners, we will largely stick to deep neural networks, as I like them more.
] Once learning is over, we can use it to approximate ATE as

ATE ≈ ∑_y y 𝔼_x ∼ p(x;θ)[ p(y|a=1, x; θ) ] - ∑_y y 𝔼_x ∼ p(x;θ)[ p(y|a=0, x; θ) ] = ∑_y y 𝔼_x ∼ p(x;θ)[ p(y|a=1, x; θ) - p(y|a=0, x; θ) ].

There are two conditions that make this regression-based approach to causal inference work:

* No unobserved confounder: we observe the covariate x in G;
* Large N: we have enough examples to infer p^*(y|a,x) with a low variance.

If there is any dimension of x that is not observed in the dataset, it is impossible for any learner to infer either p^*(y|a,x) or p^*(x) correctly. “Correctly” here in particular refers to identifying the true p^*(y|a,x). This is not really important if the goal is to approximate the conditional probability p^*(y|a), since we can simply drop x and use (a,y) pairs. It is however critical if the goal is to approximate the interventional probability p^*(y|do(a)), because this necessitates (approximate) access to p^*(y|a,x). Large N is necessary for two reasons. First, the problem may be ill-posed when N is small. Consider rewriting p(y|a,x) as p(y|a,x) = p(y,a,x)/(p(a|x)p(x)). This quantity has in the denominator both p(a|x) and p(x). If N is small to the point that we do not observe all possible combinations of (a,x) for which p(x) > 0, this conditional probability is not well-defined. This connects to one of the major assumptions in causal inference, called positivity, which we will discuss further later in the semester. The second, perhaps less important, reason is that the variance of the estimator is often inversely proportional to N. That is, with larger N, we can approximate p^*(y|a,x) with less variance. The variance of this estimate is critical, as it directly leads to that of ATE. If the variance of ATE is high, we cannot draw a confident conclusion about whether the treatment is effective. This section tells us that causal inference can be done trivially by statistical regression when the following conditions are satisfied:

* There are no unobserved confounders: We observe every variable.
* Positivity: All possible combinations of (a,y,x) are observed in data.
* We have enough data.

Unfortunately, it is rare that all these conditions are satisfied in real life.
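As a sketch of this regression-based recipe, consider the simplest non-parametric estimator mentioned above, a table, for a discrete covariate. Everything specific below, including the reuse of the earlier hypothetical simulator, is illustrative rather than part of the original recipe.

```python
import numpy as np

def regression_ate(a, y, x):
    """Regression-based ATE with a tabular estimator of E[y|a,x] and p(x).

    Assumes a discrete covariate x and a binary action a. Note that an empty
    (a, x) cell makes the table lookup undefined: this is positivity failing.
    """
    ate = 0.0
    xs, counts = np.unique(x, return_counts=True)
    for xv, c in zip(xs, counts):
        p_x = c / len(x)                       # p(x; theta) as an empirical frequency
        m1 = y[(a == 1) & (x == xv)].mean()    # table estimate of E[y | a=1, x]
        m0 = y[(a == 0) & (x == xv)].mean()    # table estimate of E[y | a=0, x]
        ate += p_x * (m1 - m0)
    return ate

# With observational data from the hypothetical simulator sketched earlier:
# x, a, y = sample(100_000)
# print(regression_ate(a, y, x))   # ~0.2, despite the confounded assignment
```

§ RANDOMIZED CONTROLLED TRIALS

§.§ The Basic Foundation

In this section, we consider the case where there are unobserved confounders. In such a case, we cannot rely on regression, as these unobserved confounders prevent a learner from identifying p^*(y|a, x) correctly. One may be tempted to simply fit a predictive model on (a, y) pairs to get p(y|a; θ) to approximate p^*(y|a) and call it a day. We have however learned earlier that this does not approximate the causal effect, that is, p^*(y|do(a)), due to the spurious path, a ← x → y, which is open because we did not observe x. When there are unobserved confounders, we can actively collect data that allows us to perform causal inference. This is a departure from a usual practice in machine learning, where we often assume that data is provided to us and the goal is for us to use a learning algorithm to build a predictive model. This is often not enough in causal inference, and we are now presented with the first such case, where the assumption of `no unobserved confounder' has been violated.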
In order to estimate the causal effect of the action a on the outcome y, we severed the edge from the confounder x to the action a, as in

[latent] (a) a; [latent, right=1cm of a] (y) y; [latent, above=0.5cm of a, xshift=1cm] (x) x; xa; xy; ay; G → [latent] (a) a; [latent, right=1cm of a] (y) y; [latent, above=0.5cm of a, xshift=1cm] (x) x; xy; ay; G̃

This suggests that if we collect data according to the latter graph G̃, we may be able to estimate the causal effect of a on y despite the unobserved confounder x. To do so, we need to decide on the prior distribution over the action, p(a), such that a is independent of the covariate x. It is a common practice to choose a uniform distribution over the action as p(a). For instance, if a is binary, p(a=0) = p(a=1) = 0.5. The prior distribution over the covariate x, p(x), is not what we choose, but is what the environment is like. That is, instead of specifying p(x) or sampling explicitly from it (which was by assumption impossible), we go out there and recruit samples from this prior distribution. In the vaccination example above, this would correspond to recruiting subjects from a general population without any particular filtering or selection.[ Of course, we can filter these subjects to satisfy a certain set of criteria (inclusion criteria) in order to estimate a conditional average treatment effect. ] For each recruited subject x, we assign the treatment a, drawn from p(a), independently of x. This process is called `randomization', because this process assigns an individual drawn from the population, x ∼ p(x), randomly to either a treatment or placebo group according to p(a), where `randomly' refers to `without any information'. This process is also `controlled', because we control the assignment of each individual to a group. For the randomly assigned pair (a,x), we observe the outcome y by letting the environment (nature) simulate and sample from p^*(y|a,x). This process is a `trial', where we try the action a on the subject x by administering a to x. Putting all these together, we call this process of collecting data from G̃ a randomized controlled trial (RCT). It is important to emphasize that x is not recorded, fully known nor needed, and we end up with D = { (a_1, y_1), …, (a_N, y_N) }. Using this dataset, we can now approximate the interventional probability as

𝔼_G[ y | do(a=â)] = 𝔼_G̃[ y | a = â] = ∑_y y ∑_x p(x) p(â) p(y | â, x)/p(â) = ∑_x ∑_y y p(x) p(y | â, x) ≈ ∑_n=1^N 1(a_n = â) y_n / ∑_n'=1^N 1(a_n' = â),

because x_n ∼ p(x), a_n ∼ p(a) and y_n ∼ p^*(y|a_n, x_n). As evident from the final line, we do not need to know the confounder x, which means that RCT avoids the issue of unobserved, or even unknown, confounders. Furthermore, it does not involve p(a), implying that we can use an arbitrary mechanism to randomly assign each subject to a treatment option, as long as we do not condition it on the confounder x. If we have strong confidence in the effectiveness of a newly developed vaccine, for instance, we would choose p(a) to be skewed toward the treatment (a=1).
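The final line above is nothing more than a within-arm average, as the following sketch makes explicit; the simulator is again the hypothetical one from earlier, standing in for nature.

```python
import numpy as np

def rct_estimate(a, y, a_hat):
    """Approximate E[y | do(a = a_hat)] from RCT data, where a was randomized."""
    m = (a == a_hat)
    return y[m].sum() / m.sum()

# The assignment policy ignores x entirely, so the within-arm averages are unbiased:
# x, a, y = sample(10_000, policy=lambda x: np.full(x.shape, 0.5))
# print(rct_estimate(a, y, 1) - rct_estimate(a, y, 0))   # ~0.2
```

§.§ Important Considerations

Perhaps the most important consideration that must be given when implementing a randomized controlled trial is to ensure that the action a is independent of the covariate x. As soon as dependence forms between a and x, the estimated causal effect 𝔼_G[ y | do(a=â)] becomes biased.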
It is however very easy for such dependency to arise without a careful design of an RCT, often due to subconscious biases formed by the people who implement the randomized assignment procedure. For instance, in the case of the vaccination trial above, a doctor may subconsciously assign fewer older people to the vaccination arm[ An `arm' here refers to a `group'. ] simply because she is subconsciously worried that vaccination may have more severe side effects on an older population. Such a subconscious decision will create a dependency between the action a and the covariate x (the age of a subject in this case), which will lead to a bias in the eventual estimate of the causal effect. In order to avoid such a subconscious bias in assignment, it is a common practice to automate the assignment process so that the trial administrator is not aware of to which action group each subject was assigned. Second, we must ensure that the causal effect of the action a on the outcome y stays constant throughout the trial. More precisely, p^*(y|a,x) must not change throughout the trial. This may sound obvious, as it is difficult to imagine how, for instance, the effect of vaccination would change rapidly over a single trial. There are however many ways in which this does not hold true. A major way by which p^*(y|a,x) drifts during a trial is when a participant changes their behavior. Continuing the example of vaccination above, let us assume that each participant knows whether they were vaccinated and also that the pandemic is ongoing. Once a participant knows that they were given placebo instead of the actual vaccine, they may become more careful about their hygiene, as they are worried about potentially contracting the rampant infectious disease. Similarly, if they knew they were given the actual vaccine, they may become less careful and expose themselves to more situations in which they could contract the disease. That is, the causal effect of vaccination changes due to the alteration of participants' behaviours. It is thus a common and important practice to blind participants from knowing to which treatment groups they were assigned. For instance, in the case of vaccination above, we would administer a saline solution via injection to control (untreated) participants, so that they cannot tell whether they are being injected with the actual vaccine or a placebo. Putting these two considerations together, we end up with a double blind trial design. In a double blind trial design, neither the participant nor the trial administrator is made aware of their action/treatment assignment. This helps ensure that the underlying causal effect is stationary throughout the study and that there is no bias creeping in due to an undesirable dependency of the action/treatment assignment on the covariate (information about the participant.) The final consideration is not about designing an RCT but about interpreting the conclusion from an RCT. As we saw above, the causal effect from the RCT based on G̃ is mathematically expressed as

𝔼_G[ y | do(a=â)] = ∑_x ∑_y y p(x) p(y | â, x).

The right hand side of this equation includes p(x), meaning that the causal effect is conditioned on the prior distribution over the covariate. We do not have direct access to p(x), but we have samples from this distribution, in the form of participants arriving and being included into the trial. That is, we have implicit access to p(x). This implies that the estimated causal effect is only valid when it is used for a population that closely follows p(x).
If there is any shift in the population distribution itself, or there was any filtering applied to participants in the trial stage that does not apply after the trial, the estimated causal effect would not be valid anymore. For instance, clinical trials, such as the vaccination trial above, are often run by research-oriented and financially-stable clinics, which are often located in affluent neighbourhoods. The effect of the treatment from such a trial is thus more precise for the population in such affluent neighbourhoods. This is the reason why inclusion is important in randomized controlled trials. Overall, a successful RCT requires the following conditions to be met:

* Randomization: the action distribution must be independent of the covariate.
* Stationarity of the causal effect: the causal effect must be stable throughout the trial.
* Stationarity of the population: the covariate distribution must not change during and after the trial.

As long as these three conditions are met, RCT provides us with an opportunity to cope with unobserved confounders.

§ CAUSAL INFERENCE VS. OUTCOME MAXIMIZATION

Besides curiosity, the goal of causal inference is to use the inferred causal relationship for better outcomes in the future. Once we estimate 𝔼_G[ y | do(a)] using RCT, we would simply choose the following action for all future subjects:

â = arg max_a ∈ 𝒜 𝔼_G[ y | do(a)],

where 𝒜 is a set of all possible actions. This approach however has one downside: we had to give an ineffective treatment (e.g. placebo) to many trial participants, who lost their opportunities to have a better outcome (e.g. protection against the infectious disease.) Consider an RCT where subjects arrive and are tested serially, that is, one at a time. If t subjects have participated in the RCT so far, we have D = { (a_1, y_1), …, (a_t, y_t) }. Based on D, we can estimate the outcome of each action by

ŷ_t(a) = ∑_t'=1^t 1(a_t' = a) y_t' / ∑_t''=1^t 1(a_t'' = a).

This estimate would be unbiased (correct on average) if every a_t' was drawn, independently of the covariate x, from one and the same action distribution q(a).[ This is another constraint on RCT, that every subject must be assigned according to the same assignment policy q(a). ] More generally, the bias (the degree of incorrectness) shrinks as the following fraction of randomly assigned actions grows:

ϵ_≤ t = 1/t ∑_t'=1^t 1(a_t' was drawn independently of x_t').

If ϵ_≤ t = 1, the estimate is unbiased, corresponding to causal inference. If ϵ_≤ t = 0, what we have is not interventional but conditional. Assuming ϵ_≤ t ≫ 0 and t ≫ 1, we have a reasonable causal estimate of the outcome y given each action a. Then, in order to maximize the outcome of the next subject (x_t+1), we want to assign them to an action sampled from the following Boltzmann distribution:

q_t(a) = exp( 1/β_t ŷ_t(a) ) / ∑_a' ∈ 𝒜 exp( 1/β_t ŷ_t(a') ),

where β_t ∈ [0, ∞) is a temperature parameter. When β_t → ∞ (a high temperature), this is equivalent to sampling the action a_t+1 from a uniform distribution, which implies that we do not trust the causal estimates of the outcomes, perhaps due to small t. On the other hand, when β_t → 0 (a low temperature), the best-outcome action would be selected, as

q_t(a) → { 1, if ŷ_t(a) = max_a' ŷ_t(a'); 0, otherwise } as β_t → 0,

assuming there is a unique action that leads to the best outcome. In this case, we are fully trusting our causal estimates of the outcomes and simply choose the best action accordingly, which corresponds to outcome maximization.
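A minimal sketch of this Boltzmann assignment rule, with hypothetical running estimates, shows the two temperature regimes:

```python
import numpy as np

def boltzmann(y_hat, beta):
    """q_t(a) from the running outcome estimates y_hat (one entry per action)."""
    z = y_hat / beta
    q = np.exp(z - z.max())     # subtracting the max keeps the exponentials stable
    return q / q.sum()

y_hat = np.array([0.36, 0.74])          # hypothetical estimates for a=0 and a=1
print(boltzmann(y_hat, beta=10.0))      # high temperature: nearly uniform
print(boltzmann(y_hat, beta=0.01))      # low temperature: almost always the best action
```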
We now combine these two in order to make a trade off between causal inference and outcome maximization. At time t, we sample the action a_t for a new participant from

q_t(a) = ϵ_t 1/|𝒜| + (1-ϵ_t) exp( 1/β_t ŷ_t(a) ) / ∑_a' ∈ 𝒜 exp( 1/β_t ŷ_t(a') ),

where ϵ_t ∈ [0, 1] and |𝒜| is the number of all possible actions. We can sample from this mixture distribution by

* Sample e_t ∈ {0,1} from a Bernoulli distribution of mean ϵ_t.
* Check e_t:
* If e_t=1, we uniformly choose a_t at random.
* If e_t=0, we sample a_t proportionally to exp( ŷ_t(a) / β_t ).

As we continue the RCT according to this assignment policy, we assign participants increasingly more to actions with better outcomes, because our causal estimate gets better over time. We however ensure that participants are randomly assigned to actions at a reasonable rate of ϵ_t, in order to estimate the causal quantity rather than the statistical quantity. It is common to start with a large ϵ_t ≈ 1 and gradually anneal it toward 0, as we want to ensure we quickly estimate the correct causal effect early on. It is also usual to start with a large β_t ≥ 1 and anneal it toward 0, as the early estimate of the causal effect is often not trustworthy. When the first component (the uniform distribution) is selected, we say that we are exploring, and otherwise, we say we are exploiting. ϵ_t is a hyperparameter that allows us to compromise between exploration and exploitation, while β_t is how we express our belief in the current estimate of the causal effect. This whole procedure is a variant of EXP-3, which stands for the exponential-weight algorithm for exploration and exploitation <cit.>, that is used to solve the multi-armed bandit problem. However, with an appropriate choice of ϵ_t and β_t, we obtain as a special case RCT that can estimate the causal effect of the action on the outcome. For instance, we can use the following schedules of these two hyperparameters:

ϵ_t = β_t = { 1, if t < T; 0, if t ≥ T }

with a large T ≫ 1. We can however choose smoother schedules for ϵ_t and β_t in order to make a better compromise between causal inference and outcome maximization, so as to avoid assigning too many subjects to a placebo (that is, ineffective) group. The choice of ϵ_t and β_t also affects the bias-variance trade-off. Although this is out of the scope of this course, it is easy to guess that higher ϵ_t and higher β_t lead to a higher variance but a lower bias, and vice versa.

A never-ending trial. A major assumption that must be satisfied for RCT is stationarity. Both the causal distribution p^*(y|a,x) and the covariate distribution p^*(x) must be stationary in that they do not change throughout the trial as well as after the trial. Especially when these distributions drift after the trial, that is, after running the trial with T participants, our causal estimate as well as the decision based on it will become less accurate. We see such instances often in the real world. For instance, as viruses mutate, the effectiveness of vaccination wanes over time, even though the very same vaccine was found to be effective by an earlier RCT. When the underlying conditional distributions are all stationary, we do not need to keep the entire set of collected data points in order to compute the approximate causal effect, because

ŷ_t(a) = ∑_t'=1^t 1(a_t' = a) y_t' / ∑_t''=1^t 1(a_t'' = a) = [ ∑_t'=1^t-1 1(a_t' = a) / ∑_t'=1^t 1(a_t' = a) ] ŷ_t-1(a) + [ 1(a_t = a) / ∑_t'=1^t 1(a_t' = a) ] y_t.

In other words, we can just keep a single scalar ŷ_t(a), together with the count of each action so far, to maintain the causal effect over time.
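Putting the mixture policy and the recursive update together, a serial trial could be sketched as below. The schedules and the outcome simulator are hypothetical, and this is an illustration of the procedure above rather than a tuned EXP-3 implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(T, n_actions, outcome, eps, beta):
    """Serial trial mixing exploration (rate eps(t)) with Boltzmann exploitation.

    `outcome(a)` draws y for action a; y_hat follows the recursion above, so only
    one scalar and one count per action are kept in memory.
    """
    y_hat = np.zeros(n_actions)
    n = np.zeros(n_actions)
    for t in range(1, T + 1):
        if rng.random() < eps(t):                   # explore: uniform assignment
            a = rng.integers(n_actions)
        else:                                       # exploit: Boltzmann over y_hat
            z = y_hat / beta(t)
            q = np.exp(z - z.max()); q /= q.sum()
            a = rng.choice(n_actions, p=q)
        y = outcome(a)
        n[a] += 1
        y_hat[a] += (y - y_hat[a]) / n[a]           # the single-scalar running average
    return y_hat

# e.g. run_trial(5000, 2, outcome=lambda a: rng.binomial(1, [0.36, 0.74][a]),
#                eps=lambda t: max(0.1, 1 - t / 1000), beta=lambda t: max(0.05, 1 / t))
```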
We can tweak this recursive formula to cope with slow-drifting underlying distributions by emphasizing recent data points much more than older data points. This can be implemented with an exponential moving average,[ `exponential-weight' in EXP-3 comes from this choice. ] as follows:

ŷ_t(a) = { η ŷ_t-1(a) + (1-η) y_t, if a_t = a; ŷ_t-1(a), if a_t ≠ a, }

where η ∈ [0, 1). As η → 0, we consider an increasingly smaller window into the past, trusting mostly what we have just observed for a particular action. On the other hand, when η → 1, we do not trust what happens now but rather what we already know about what the causal effect of an action a should be. By keeping track of the causal effect with an exponential moving average, we can continuously run the trial. When doing so, we have to be careful in choosing the schedules of ϵ_t and β_t. Unlike before, ϵ_t should not be monotonically annealed toward 0, as earlier exploration may not be useful later when the underlying distributions drift. β_t also should not be annealed toward 0, as the estimate of the causal effect we have at any moment cannot be fully trusted, due to the unanticipated drift of the underlying distributions. It is thus reasonable to simply set both β_t and ϵ_t to reasonably large constants.

Checking the bias. At time t, we have accumulated D_t = { (e_1, a_1, y_1), …, (e_t, a_t, y_t) }, where e_t' indicates whether we explored (1) or exploited (0) at time t'. We can then get an unbiased estimate of the causal effect of a on y by only using the triplets (e,a,y) for which e=1. That is,

ỹ_t(a) = ∑_t'=1^t 1(e_t' = 1) 1(a_t' = a) y_t' / ∑_t''=1^t 1(e_t'' = 1) 1(a_t'' = a).

This estimate is unbiased, unlike ŷ(a) from EXP-3 above, since we only used the action-outcome pairs for which the action was selected randomly from the same uniform distribution. Assuming a large t (so as to minimize the impact of a high variance,) we can then compute the (noisy) bias of the causal effect estimated and used by EXP-3 above as

b_t = (ŷ_t(a) - ỹ_t(a))^2.

Of course, this estimate of the bias is noisy, and especially so when t is small, since the effective number of data points used to estimate ỹ_t is on average ∑_t'=1^t ϵ_t' ≤ t.

§ WHEN SOME CONFOUNDERS ARE OBSERVED

The assignment of an individual x to a particular treatment option a is called a policy. In the case of RCT, this policy was a uniform distribution over all possible actions a, and in the case of EXP-3, it was a mixture of a uniform policy and an effect-proportional policy from Eq. (<ref>). In both cases, the policy was not conditioned on the covariate x, meaning that no information about the individual was used for assignment. This is how we addressed the issue of unobserved confounders. Such an approach is however overly restrictive in many cases, as some treatments may only be effective for a subset of the population who share a certain trait. For instance, consider the problem of inferring the effect of a monoclonal antibody therapeutic called Trastuzumab (or Herceptin) for breast cancer on the disease-free survival of a patient <cit.>. If we ran an RCT without taking into account any covariate information as above, we would very likely not see any positive effect on the patients' disease-free survival, because Trastuzumab was specifically designed to work for HER2-positive breast cancer. That is, only breast cancer patients with an over-expressed ERBB2 gene (which encodes the HER2 receptor) would benefit from Trastuzumab.
In these cases, we are interested in the conditional average treatment effect (CATE) from earlier, that is, in answering the question of what causal effect Trastuzumab has on patients given their gene expression profile. The CATE of Trastuzumab given an over-expressed ERBB2 gene would be positive, while the CATE without over-expression of the ERBB2 gene would be essentially zero. When we observe some confounders, such as the gene expression profile of a subject in the example above, and do not observe all the other confounders, we can mix RCT (<ref>) and regression (<ref>) to estimate the causal effect conditioned on the observed confounders. The graph G with the partially-observed confounder is depicted as below, with the unobserved x and the observed x':

[obs] (a) a; [obs, right=1cm of a] (y) y; [latent, above=0.5cm of a, xshift=0.5cm] (x) x; [obs, right=0.5cm of x] (xc) x'; xa; xy; xca; xcy; ay;

Each subject in the RCT then corresponds to the action-outcome-observed-covariate triplet (a_t, y_t, x'_t). Assume we have enrolled and experimented with t participants so far, resulting in D_t = { (a_1, y_1, x'_1), …, (a_t, y_t, x'_t) }. Let x̂' be the condition of interest, such as the overexpression of ERBB2. We can then create a subset D_t(x̂') as

D_t(x̂') = { (a, y, x') ∈ D_t | x' = x̂' } ⊆ D_t.

We can then use this subset D_t(x̂') as if it were the set from an ordinary RCT, in order to estimate the conditional causal effect, as follows:

ŷ_t(a|x') ≈ ∑_(a_i,y_i,x_i') ∈ D_t(x') 1(a_i=a) y_i / ∑_(a_j,y_j,x_j') ∈ D_t(x') 1(a_j=a).

Just like what we did earlier, we can easily turn this into a recursive version in order to save memory:

ŷ_t(a|x') = { ŷ_t-1(a|x'), if x'_t ≠ x'; [ ŷ_t-1(a|x') ∑_(a_j,y_j,x_j') ∈ D_t-1(x') 1(a_j=a) + y_t 1(a_t = a) ] / ∑_(a_k,y_k,x_k') ∈ D_t(x') 1(a_k=a), if x'_t = x'. }

Similarly to earlier, we can instead use an exponential moving average in order to cope with distribution drift over a sequential RCT, as

ŷ_t(a|x') = { η_t ŷ_t-1(a|x') + (1-η_t) y_t, if x'_t = x' and a_t = a; ŷ_t-1(a|x'), otherwise. }

We can then use these running estimates of the causal effects in order to build an assignment policy π_t(a|x') that now depends on the observed covariate x'. If we go back to the earlier example of Trastuzumab, this policy would increasingly assign participants with over-expressed ERBB2 to the treatment arm, while it would continue to be largely uniform for the remaining population.

A Parametrized Causal Effect. Up until this point, partially-observed confounders do not look like anything special. We are effectively running multiple RCT's in parallel, one for each covariate configuration. There is no benefit of running these RCT's in parallel relative to running them in sequence. We could however imagine a scenario where the former is more beneficial than the latter, and we consider one such case here. Assume that x' is a multi-dimensional vector, i.e., x' ∈ ℝ^d, and that the true causal effect of each action a is a linear function of the observed covariate x':

ŷ^*(a|x') = θ^*(a)^⊤ x' + b^*(a) = ∑_i=1^d θ_i^*(a) x'_i + b^*(a).

This means that each dimension x'_i of the covariate has an additive effect on the expected outcome ŷ^*(a|x'), weighted by the associated coefficient θ_i^*(a), and that the effect θ_i^*(a) x'_i of each dimension on the expected outcome is independent of the other dimensions' effects. As an example, consider estimating the effect of weight lifting on the overall health.
The action is whether to perform weight lifting each day, and the outcome is the degree of the subject's healthiness. Each dimension x'_i refers to a habit of the person. For instance, one could be the habit of smoking, with the corresponding dimension x'_i encoding the number of cigarettes the subject smokes a day. Another habit could be jogging, with the corresponding dimension encoding the number of minutes the subject runs a day. Smoking is associated with a negative coefficient regardless of a. On the other hand, jogging is associated with a negative coefficient when a=1, because an excessive level of workout leads to frequent injuries, while it is associated with a positive coefficient when a=0. Of course, some of these habits may have nonlinear effects. Running just the right duration each day in addition to weight lifting could lead to a better health outcome. It is however reasonable to assume linearity as a first-order approximation. We can estimate the coefficients θ(a) by regression from <ref>, by solving

min_θ(a), b(a) 1/2 ∑_t'=1^t 1(a_t' = a) ( y_t' - θ(a)^⊤ x'_t' - b(a) )^2.

Instead of keeping a count for each and every possible x', we now keep only θ(a) and b(a) for each action a. This has an obvious advantage of requiring only O(d) memory rather than O(2^d). More important, however, is that the estimated causal effect generalizes to unseen covariate configurations. Let us continue from the example of having smoking and jogging as two dimensions of x'. During the RCT, we may have seen participants who either smoke or jog but never both. Because of the linearity, the estimated causal effect predictor, θ(a)_smoke x'_smoke + θ(a)_jog x'_jog, generalizes to participants who both smoke and jog, as well as to those who neither smoke nor jog. This case of a linear causal effect suggests that we can rely on the power of generalization in machine learning in order to bypass the strong assumption of positivity. Even if we do not observe a covariate or an associated action, a parametrized causal effect predictor can generalize to those unseen cases. We will discuss this potential further later in the semester.

When there are many possible actions. Assume we do not observe any confounder, that is, there is no x'. Then, at each time, RCT is nothing but estimating a single scalar for each action. Let 𝒜 be a set of all actions and |𝒜| the cardinality of this action set. Then, at any time t of running an RCT, the number of data points we can use to estimate the causal effect of a particular action is

N(a) = ∑_t'=1^t 1(a_t' = a) ≈ t p_a(a),

where p_a(a) is the probability of selecting the action a during randomization. Just like the case above with partially observed confounders, the number of data points per action shrinks dramatically as the number of possible actions increases, and the variance of the estimate of the causal effect of an individual action grows accordingly. We must have some extra information (context) about these actions in order to break out of this issue. Let c(a) ∈ ℝ^d be the context of the action a. For instance, each dimension of c(a) corresponds to the amount of one ingredient for making the perfect steak seasoning, such as salt, pepper, garlic and others. Then, each action a corresponds to a unique combination of these ingredients. In this case, the causal effect of any particular action can be thought of as a mapping from c(a) to the outcome ŷ(a) associated with a. If we assume this mapping is linear <cit.>, we can write it as

ŷ(a) = c(a)^⊤ θ^* + b^*,

where θ^* ∈ ℝ^d and b^* ∈ ℝ.
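Both linear settings in this section, θ(a) with covariates x' and θ^* with action contexts c(a), reduce to an ordinary least-squares fit. Here is a sketch for the per-action case; all variable names are hypothetical, and the same function applies to action contexts by passing c(a_t) in place of x'_t.

```python
import numpy as np

def fit_linear_effect(a_log, y_log, x_log, action):
    """Least-squares fit of theta(a) and b(a) from the trial log, for one action.

    Solves the weighted objective above, with the indicator 1(a_t = a)
    implemented as row selection.
    """
    m = (a_log == action)
    X = np.column_stack([x_log[m], np.ones(m.sum())])   # extra column for the bias b(a)
    coef, *_ = np.linalg.lstsq(X, y_log[m], rcond=None)
    return coef[:-1], coef[-1]                          # theta(a), b(a)

# theta1, b1 = fit_linear_effect(a_log, y_log, x_log, action=1)
# y_hat = x_new @ theta1 + b1   # generalizes to unseen covariate combinations
```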
Similarly to the case where there was an observed confounder above, with linearity we do not need to maintain the causal estimate for each and every possible action, which amounts to |𝒜| numbers, but only the effect of each dimension of the action context on the outcome, which amounts to d numbers. When d ≪ |𝒜|, we gain a significant improvement in the variance of our estimates. Furthermore, just like what we saw above, we benefit from compositionality, or compositional generalization. For instance, if the effects of salt and pepper on the final quality of the seasoning are independent and additive, we can accurately estimate the effect of having both salt and pepper even when all tested seasonings had either salt or pepper but never both. Let c(a)=[s_salt, s_pepper], and assume s_salt, s_pepper ∈ {0, 1} and that all past trials were such that s_salt=0 or s_pepper = 0. We can approximate θ^*_salt and θ^*_pepper from these past trials, and due to the linearity assumption, we can now compute the causal effect of c(a)=[1, 1] as θ̂_salt + θ̂_pepper. This would not have been possible without the linearity, or more generally compositionality, because this particular action of adding both salt and pepper has never been seen before, i.e., it violates the positivity assumption. This is yet another example of overcoming the violation of positivity by generalization. At this point, one sees a clear connection between having some confounders observed and having many actions associated with their contexts. This is because they are simply two sides of the same coin. I leave it to you to think about why this is the case.

§ SUMMARY

In this chapter, we have learned about the following topics:

* Average treatment effect;
* Regression for causal inference;
* Randomized controlled trials;
* Outcome maximization with a bandit algorithm;
* A contextual bandit.

CHAPTER: PASSIVE CAUSAL INFERENCE

§ CHALLENGES IN RANDOMIZED CONTROLLED TRIALS

A major issue with randomized controlled trials (RCT) is that we must experiment with subjects. This raises many issues that are not necessarily related to causal inference itself but are more broadly about ethics and legality. For instance, the “Tuskegee Study of Untreated Syphilis in the Negro Male” was a widely-known and widely-condemned study investigating the effect of untreated syphilis <cit.>. As an RCT requires careful double blinding, the trial administrators did not reveal to the study participants that they were diagnosed with (latent) syphilis. The study was originally designed (and the participants were told) to run for six months but lasted for 40 years, until the details of the study were leaked to the press. During these decades, a treatment for syphilis became available, but none of the participants were treated properly, resulting in the death of more than 100 participants due to syphilis, out of approximately 400 participants, the syphilis infection of the wives of forty participants, and the congenital syphilis infection of 19 children. It took more than half a century for the US government to formally issue an apology. A similar issue persists throughout medicine when it comes to RCTs, which are the de facto standard for establishing any causal effect of a treatment on the outcome of a patient. Due to the necessity of randomization, some patient participants will inevitably receive a placebo rather than the actual treatment.
Even if the tested treatment ultimately turns out to be causally effective, by then it may already be too late for those patients who were put on the control arm to receive and benefit from this new treatment. How ready are you to put patients into suffering because we want to (and often need to) establish the causal effect of a new treatment? Sometimes, it is impossible to design a placebo that ensures the double blindness of a trial. Consider for instance an RCT on the effectiveness of masking in preventing respiratory diseases. Participants will understandably alter their behaviours based on their assignments, treatment (masking) or control (no masking), as their perception of risk is altered, which violates the stationarity of the causal effect p^*(y|a,x). In order to avoid this, we must ensure that participants cannot tell whether they are in the treatment or control arm, but it is pretty much impossible to design a placebo mask that looks and feels the same as an actual mask yet does not filter any particle in the air. In other words, RCT is only possible when placebos can be effectively designed and deployed. In this example, we run into yet another problem; how do we enforce the treatment on subjects? In the case of vaccination, subjects come into clinics and are, for instance, injected on the spot under the supervision of a clinician, after which the subjects cannot get rid of the injected vaccine. In the case of masking, in contrast, we cannot ensure that participants wear masks as instructed, as this would require non-stop monitoring throughout the trial period. Finally, some actions take a long time to have a measurable impact on the outcome. For instance, consider a policy proposal of introducing a new course on programming at elementary schools (grades 1-6) with the goal of improving students' job prospects and growing the information technology (IT) sector. It will take anywhere between 12 to 20 years for these students to finish their education and participate in society, and we will have to wait another 4 to 15 years to see any measurable economic impact on the IT sector. Such a long duration between the action and the outcome further complicates RCT, as it is often impossible to ensure the stationarity of the underlying distributions over that duration. RCT is thus not suitable for actions that require a significant amount of time to have any measurable impact. In this chapter, we instead consider an alternative approach to RCT, where we rely on existing data to infer the causal relationship between the action and the outcome. As we use already collected data, we can often avoid the issues arising from active experimentation, although we are now faced with another set of challenges, such as the existence of spurious correlations arising from various unobserved confounders that affected the choice of actions earlier. We will discuss how we can avoid these issues in this chapter. It is however important to emphasize that there is no silver bullet in causal inference.

§ WHEN CONFOUNDERS WERE ALSO COLLECTED

§.§ Inverse Probability Weighting

[latent] (a) a; [latent, right=1cm of a] (y) y; [latent, above=0.5cm of a, xshift=1cm] (x) x; xa; xy; ay;

Let us come back to the original graph G that consists of three variables; a, y and x, with the following joint probability: p^*(a, y, x) = p^*(x) p^*(a|x) p^*(y|a, x). We also assume that we have a set D of triplets (a_n, y_n, x_n) drawn from this underlying graph G. In other words, we assume that we observe the confounder in this case.
If we have a large such set, i.e. N=|D| ≫ 1, we can use regression, as in <ref>, to approximate p^*(y | a, x) in order to compute the causal effect as

𝔼_G[ y | do(a=â)] ≈ ∑_x p^*(x) ∑_y y p̂(y | â, x) ≈ 1/N ∑_n=1^N 𝔼_p̂(y | â, x_n)[y].

Although this regression-based approach is straightforward, it has the disadvantage of having to fit a regressor on the concatenation of the action a and confounder x. The issue is exactly that of RCT with partially observed confounders; that is, we must have enough data points for each action-covariate combination in order for regression to have a low variance. We can use regularization to reduce the variance, which may unfortunately introduce a bias. We can also reduce the variance by using regression to estimate a simpler quantity. In particular, we consider approximating p^*(a|x). Because this is a map from 𝒳 alone, we just need enough data points for each covariate configuration rather than for each action-covariate combination. Approximating p^*(a|x) allows us to estimate the causal effect using data points drawn from the original graph G rather than the modified graph G̃ from <ref>, because

𝔼_G[ y | do(a=â)] = ∑_x p^*(x) ∑_a 1(a = â) ∑_y p^*(y | â, x) y = ∑_x ∑_a ∑_y p^*(x) p^*(a|x) 1(a = â) p^*(y | â, x) 1/p^*(a|x) y ≈ 1/∑_n'=1^N 1(a_n'= â) ∑_n=1^N 1(a_n = â) y_n / p^*(â | x_n).

Instead of the true p^*(â|x_n), we plug in the regression-based approximation p̂(â | x_n) and arrive at

𝔼_G[ y | do(a=â)] ≈ ∑_n=1^N 1(a_n = â) y_n / p̂(â | x_n) / ∑_n'=1^N 1(a_n'= â).

In words, we look for all data points within the previously-collected set D that are associated with the action â of interest. The simple average of the associated outcomes would be a biased estimate of the causal effect, since it combines the effects of a on y via two paths; (causal) a → y and (spurious) a ← x → y. We correct this bias by weighting each data point, or the associated outcome, by the inverse probability of the action given the confounder, 1/p̂(â | x_n). This approach is thus called inverse probability weighting (IPW), and p^*(a|x) is often referred to as a propensity score.
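A sketch of this estimator, assuming a discrete covariate so the propensity score can be estimated by counting, is below; the reuse of the earlier hypothetical simulator is again illustrative.

```python
import numpy as np

def ipw_estimate(a, y, x, a_hat):
    """Self-normalized IPW estimate of E[y | do(a = a_hat)] from observational data.

    The propensity is a table of empirical frequencies; positivity is required,
    i.e., every covariate value must have prop > 0 for the chosen action.
    """
    prop = {xv: (a[x == xv] == a_hat).mean() for xv in np.unique(x)}
    w = np.array([1.0 / prop[xv] for xv in x])   # inverse probability weights
    m = (a == a_hat)
    return (w[m] * y[m]).sum() / m.sum()

# With observational (confounded) data from the hypothetical simulator earlier:
# x, a, y = sample(100_000)
# print(ipw_estimate(a, y, x, 1) - ipw_estimate(a, y, x, 0))   # ~0.2, bias removed
```

It is fine to have missing outcomes. One important advantage of the IPW-based approach is that we can approximate p^*(a|x) using all (a,x) pairs, even if they are not associated with the outcome y, unlike the earlier regression-based approach which required having all three variables (a,y,x) observed. Imagine using clinical notes and measurements from the electronic health record (EHR) of a large hospital in order to estimate the causal effect of a particular drug on a target disease. Of course, the prescription of the drug a is not made blindly but based on the patient information, which includes their underlying health condition. Since existing health conditions affect the outcome y of almost any kind of disease, such patient information is a confounder x. Some patients often do not return to the same hospital for follow-up checks, meaning that the EHR does not record the outcome of these patients, leaving us only with (a,x). We can then use all the prescriptions to approximate the propensity score p(a|x), and then use the subset of these prescriptions for which the patients' outcomes are recorded (i.e. they came back for follow-up visits) to compute the causal effect.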
Mathematically, it means that we solve two separate optimization problems using two different data sets (though, one is a superset of the other,) as follows:

p̂(a|x) = arg max_p ∑_(a',x') ∈ D_π log p(a'|x'),

ŷ(â) = ∑_(a',x',y') ∈ D_y̅ 1(a' = â) y' / p̂(â | x') / ∑_(a'',x'',y'') ∈ D_y̅ 1(a'' = â),

where D_y̅ ⊆ D_π.

A doubly robust estimator. A major issue with the IPW-based approach is that the variance of the estimator can be very large, even if the variance of estimating the propensity score is low, because the propensity score shows up in the denominator. On the other hand, the regression-based approach has a low variance, since it does not divide by the propensity score. It is however likely a biased estimator, as we cannot easily guarantee that the chosen non-parametric regressor can identify the correct conditional probability, due to a variety of issues, such as the lack of realizability. If the regression-based approach is correct for an instance (a,x), ỹ(a, x) would coincide with the true expected outcome y^*(a|x), and we would just use ỹ(a, x) as it is, preferring to avoid the IPW-based estimate due to the potentially large variance arising from the denominator. Otherwise, we want to rely on the IPW-based estimate to correct for the incorrectness arising from the regression-based approach. This can be expressed as

ŷ(â) = ∑_x p(x) ( ỹ(â,x) + p^*(â|x)/p̂(â|x) ( y^*(â|x) - ỹ(â,x) ) ),

where we refer to the correction term y^*(â|x) - ỹ(â,x) as (a) and to the ratio p^*(â|x)/p̂(â|x) as (b). If ỹ(a,x) is perfect, (a) disappears, as expected. If our estimate of the propensity score is perfect, (b) is 1, resulting in using the true y^*(a,x) while ignoring the regression-based approach.[ y^*(a,x) is the true expected outcome given the action a and the covariate x. Since expectation is linear, we can push the expectation all the way inside to obtain this expression. ] Since we are provided with data rather than the actual probability distributions, we end up with

ŷ(â) = 1/Z(â) ∑_(a',x',y') ∈ D 1(a' = â) ( ỹ(â, x') + 1/p̂(â | x') (y' - ỹ(â, x')) ),

where Z(â) = ∑_(a'',x'',y'') ∈ D 1(a'' = â). This estimator is called a doubly robust estimator.

§.§ Matching.

Instead of estimating p^*(a|x) and multiplying the observed outcome with its inverse, we can achieve a similar outcome by manipulating the data itself. When p^*(a|x) = p^*(a), that is, when the action is independent of the covariate, the IPW-based estimate coincides with simple averaging, just like in RCT from <ref>. This happens when the ratio of actions associated with each unique x in the data is the same across the data set. To make it simpler, let us express this ratio of actions by assigning the minimal number of each action in relation to the other actions. For instance, if our desired ratio between the treatment and placebo is 0.8:0.2, we would express it as 4:1. Starting from the original data set D={(a_1, x_1, y_1), …, (a_N, x_N, y_N) }, we go through each x_n, collecting as many (a,x_n,y) ∈ D as n_a, where n_a is the target number of examples of action a, for instance 4 above. This collection can be done randomly or by following some fixed strategy, such as round-robin scheduling. Sometimes it is necessary to choose the same triplet multiple times, which is not ideal but may be unavoidable. By aggregating these collections, we arrive at a new dataset D̃. Under this new dataset D̃, the propensity score is guaranteed to be p̂(a|x) ∝ n_a, regardless of x. Furthermore, p̂(a|x) = p̂(a) as well. Meanwhile, p̂(x) stays the same as that under the original data set D.
Then, the expected causal outcome of any given action â under this dataset is ŷ(â) = 1/N_â∑_n=1^Ñ1(a'_n=â) y'_n, where (a'_n, x'_n, y'_n) ∈D̃, Ñ = |D̃| and N_â = ∑_n'=1^Ñ1(a'_n' = â). This avoids the issue of the high variance arising from the IPW-based approach. This however does not mean that this approach always works. The most obvious issue with this approach is that the original data set may not have enough triplets associated with each x to ensure that p̂(a|x) is identical for all x. Furthermore, even if we have enough associated triplets for each x, we may end up discarding many triplets from the original data set to form the new data set. We never want to discard data points we have. This approach is thus appropriate when the data is already well balanced and the goal is to further ensure that the propensity score is constant. A relatively simple variant of this approach, called `matching' because we match triplets based on x, is to relax the constraint that we only consider the exact match of the covariate x. The original formulation samples the triplets according to the target counts with replacement from the following multiset:[ It is a multiset, since there can be duplicates. ] D(x) = { (a',y',x') ∈ D | x' = x }. This condition of x' = x may be too strict, leaving us with only a very small D(x). We can instead relax this condition using a predefined notion of closeness such that D̃(x) = { (a',y',x') ∈ D | s(x',x) < ϵ}, where s(x',x) is a predefined distance function and ϵ a distance threshold.

§ INSTRUMENTAL VARIABLES: WHEN CONFOUNDERS WERE NOT COLLECTED

So far in this section we have considered a case where the confounder x was available in the observational data. This allowed us to either fit the regressor directly on p(y|a,x), use inverse probability weighting or re-balance the dataset using the matching scheme. It is however unlikely that we are given full access to the confounders (or any kind of covariate) in the real world. It is thus important to come up with an approach that works on passively collected data without covariates.

An instrumental variable estimator. Let us rewrite the following graph G_0 as a corresponding structural causal model: [latent] (a) a; [latent, right=1cm of a] (y) y; [latent, above=0.5cm of a, xshift=1cm] (x) x; xa; xy; ay; The structural causal model is then x ←ϵ_x a ← f_a(x, ϵ_a) y ← f_y(a, x, ϵ_y). From this structural causal model, we can read out two important points. First, as we have learned earlier in <ref>, x is a confounder, and when it is not observed, the path a ← x → y is open, creating a spurious effect of a on y. Second, the choice of a is not fully determined by x. It is determined by the combination of x and ϵ_a, where the latter is independent of x. We are particularly interested in the second aspect here, since it gives us an opportunity to modify this graph by introducing a new variable that may help us remove the effect of the confounder x. We now consider an alternative to the structural causal model above by assuming that we have found another variable z that largely explains the exogenous factor ϵ_a. That is, instead of saying that the action a is determined by the combination of the covariate x and an exogenous factor ϵ_a, we now say that it is determined by the combination of the covariate x, this new variable z and an exogenous factor ϵ'_a. Because z explains a part of the exogenous factor rather than x, z is independent of x a priori.
This introduction of z alters the structural causal model to become x ←ϵ_x z ←ϵ_z a ← f'_a(x, z, ϵ'_a) y ← f_y(a, x, ϵ_y), which corresponds to the following graph G_1: [latent] (z) z; [latent, right=1cm of z] (a) a; [latent, right=1cm of a] (y) y; [latent, above=0.5cm of a, xshift=1cm] (x) x; za; xa; xy; ay; This altered graph G_1 does not help us infer the causal effect of a on y any more than the original graph G_0 did. It however provides us with an opportunity to replace the original action a with a proxy based purely on the newly introduced variable z, which is independent of x. We first notice that x cannot be predicted from z, because z and x are by construction independent. The best we can do is thus to predict the associated action.[ We can predict the marginal distribution over the action after marginalizing out x. ] Let p_ã(a|z) be the conditional distribution induced by g_ã(z, ϵ”_a). Then, we want to find g_ã that minimizes ℰ_a = - 𝔼_ϵ_z𝔼_ϵ_x𝔼_ϵ'_a[ log p_ã(f'_a(ϵ_x, z, ϵ'_a) | z) ]. We also notice that z cannot be predicted from a perfectly without x, because a is an observed collider, creating a dependency between z and x. The best we can do is thus to predict the expected value of z. That is, we look for g_z̃ that minimizes ℰ_z = - 𝔼_ϵ_z𝔼_ϵ_x𝔼_ϵ'_a[ log p_z̃(ϵ_z | f'_a(ϵ_x, ϵ_z, ϵ'_a)) ], where we have used p_z̃(z|a) to denote the conditional distribution induced by g_z̃(a, ϵ'_z), similarly to p_ã(a|z) above. Once we have found reasonable solutions, ĝ_ã and ĝ_z̃, to the minimization problems above, respectively, we can further modify the structural causal model into a ←â x ←ϵ_x z̃←ĝ_z̃(a, ϵ'_z) ã←ĝ_ã(z̃, ϵ”_a) y ← f_y(ã, x, ϵ_y). Since x is really nothing but an exogenous factor of y, without impacting ã nor z̃ in this case, we can simplify this by merging x and ϵ_y into a ←â z̃←ĝ_z̃(a, ϵ'_z) ã←ĝ_ã(z̃, ϵ”_a) y ← f_y(ã, ϵ'_y). Because we assume the action a is always given, we simply set it to a particular action â. This structural causal model can be depicted as the following graph G_2: [obs] (a) â; [latent, right=1cm of a] (z) z̃; [latent, right=1cm of z] (a_recon) ã; [latent, right=1cm of a_recon] (y) y; az; za_recon; a_recony; In other words, we start from the action, approximately infer the extra variable z̃, approximately infer back the action and then predict the outcome. During the two stages of inference (z̃ | a and ã | z̃), we drop the dependence of a on x. This happens because z was chosen to be independent of x a priori. In this graph, we only have two mediators in sequence, z̃ and ã, from the action a to the outcome y. We can then simply marginalize out both of these mediators in order to compute the interventional distribution over y given a, as we learned in <ref>. That is, ŷ_G_0(a = â) = ŷ_G_1(a = â) ≈ŷ_G_2(a = â) = 𝔼_z̃|â𝔼_ã|z̃𝔼_y|ã[ y ]. We call this estimator an instrumental variable estimator and call the extra variable z an instrument. Unlike the earlier approaches such as regression and IPW, this approach is almost guaranteed to give you a biased estimate of the causal effect. Assume we are provided with D={ (a_1, y_1, z_1), …, (a_N, y_N, z_N) } after we are done estimating those functions above. We can then get the approximate causal effect of the action of interest â by ŷ_IV(â) ≈∑_n=1^N 1(a_n = â) 𝔼_ϵ”_a, ϵ'_y f̂_y(ĝ_ã(z_n, ϵ”_a), ϵ'_y) /∑_n'=1^N 1(a_n' = â) . Additionally, if we are provided further with D_z̅={ (a'_1, y'_1), …, (a'_N', y'_N') }, we can use ĝ_z̃(a, ϵ'_z) to approximate this quantity, together with D.
Let (a_n,y_n,r_n,z_n) ∈D̅ be (a_n,y_n,r_n,z_n) = (a_n, y_n, 1, z_n), if n ≤ N (a'_n-N, y'_n-N, 0, -1), if N < n ≤ N+N' Then, ŷ_IV(â) ≈∑_n=1^N+N'1(a_n = â) 𝔼_ϵ”_a, ϵ'_y, ϵ'_z[ r_n f̂_y(ĝ_ã( z_n, ϵ”_a), ϵ'_y) + (1-r_n) f̂_y(ĝ_ã( ĝ_z̃(a_n, ϵ'_z), ϵ”_a), ϵ'_y) ] /∑_n'=1^N+N'1(a_n' = â) . In the former case, we must solve two regression problems, finding f̂_y and ĝ_ã. When we do so by solving a least squares problem for each, we end up with two least squares problems that must be solved sequentially. Such a case is often referred to as two-stage least squares. In the latter case, we benefit from extra data by solving three regression problems. In most cases, though, we choose an easy-to-obtain instrument, so that it is often enough to solve two regression problems. There are two criteria that need to be considered when choosing an instrument z. First, the instrument must be independent of the confounder x a priori. If this condition does not hold, we end up with the following graph: [latent] (z) z; [latent, right=1cm of z] (a) a; [latent, right=1cm of a] (y) y; [latent, above=0.5cm of a, xshift=1cm] (x) x; za; xa; xy; ay; (z) to[bend left] (x); The undirected edge between z and x indicates that they are not independent. In this case, even if we manage to remove the edge between a and x, there is still a spurious path a ← z ↔ x → y that prevents us from avoiding this bias. This first criterion is therefore the most important consideration behind choosing an instrument.

Instrumental variables must be predictive of the action. The second criterion is that z be a cause of a together with x. That is, a part of whatever cannot be explained by x in determining a must be captured by z. We can see why this is important by comparing the IPW-based estimate against the instrumental-variable-based estimate. The IPW-based estimate from Eq. (<ref>) is reproduced here as ŷ_IPW(â) ≈∑_n=1^N 1(a_n = â) y_n/p̂(â | x_n)/∑_n'=1^N 1(a_n' = â) . If we contrast it with the instrumental-variable-based estimate in Eq. (<ref>), we get ŷ_IPW(â) - ŷ_IV(â) = ∑_n=1^N 1(a_n = â) ( y_n/p̂(â | x_n) - 𝔼_ϵ_y, ϵ”_a f̂_y(ĝ_ã(z_n, ϵ”_a), ϵ_y) ) ^(a)/∑_n'=1^N 1(a_n' = â) , where we assume D={ (a_1, y_1, x_1, z_1), …, (a_N, y_N, x_N, z_N)}. There are two estimates within (a) above that result in a bias. Among these two, f̂_y and ĝ_ã, the former does not stand a chance of being an unbiased estimate, because it is given neither an unbiased estimate of the action nor the covariate x. Contrast this with the regression-based estimate above, where f̂_y could afford to rely on the true (sampled) action and the true (sampled) covariate. The latter, ĝ_ã, is however where we have clear control. Assume that f^*_y is linear. That is, f^*_y(a, x, ϵ_y) = a^⊤α^* + x^⊤β^* + ϵ_y, where α^* and β^* are the true coefficients. If 𝔼[x]=0 and a is selected independently of x, in expectation over x we have f^*_y(a, ϵ_y) = a^⊤α^* + ϵ_y. The term (a), with the assumption that y_n/p̂(a_n|x) = f^*_y(a_n), can then be expressed as (a_n - 𝔼_ϵ”_aĝ_ã(z_n, ϵ”_a))^⊤α^* - 𝔼_ϵ”_a[ĝ_ã(z_n, ϵ”_a)]^⊤ r_α, where r_α is the error in estimating α^*, i.e., α̂=α^* + r_α. For brevity, let ĝ(z_n) = 𝔼_ϵ”_aĝ_ã(z_n, ϵ”_a), which allows us to rewrite it as (a_n - ĝ(z_n))^⊤α^* - ĝ(z_n)^⊤ r_α = a_n^⊤α^* - ĝ(z_n)^⊤(α^* + r_α). Let us now look at the overall squared error: 1/M∑_m=1^M ( a_m^⊤α^* - ĝ(z_m)^⊤ (α^* + r_α) )^2, where we use m to refer to each example with a_m=â.
If we further expand the squared term, 1/M∑_m=1^M α^*^⊤ a_m a_m^⊤α^* + (α^* + r_α)^⊤ĝ(z_m) ĝ^⊤(z_m) (α^* + r_α) -2 α^*^⊤ a_m ĝ^⊤(z_m) (α^* + r_α) ≈α^*^⊤𝔼[a a^⊤] α^* + (α^* + r_α)^⊤𝔼[ ĝ(z) ĝ^⊤(z) ](α^* + r_α) - 2 α^*^⊤𝔼[a ĝ^⊤(z)] (α^* + r_α). The first term is constant. It simply tells us that if the variance of the action along the dimensions relevant to the outcome, where the relevance is determined by α^*, is large, the chance of mis-approximating the causal effect is large as well. The second term tells us that the error is proportional to the variance of the predicted action along the dimensions relevant to the outcome, where the relevance is determined by the estimated coefficient α̂ = α^* + r_α. That is, if the variance of the predicted outcome is large, the chance of a large error is also large. The third term is where we consider the correlation between the true action and the predicted action, again along the dimensions of relevance. This derivation tells us that the instrument must be selected to be highly predictive of the action (the third term) but also exhibit a low variance in its prediction (the second term). Here comes the classical dilemma of bias-variance trade-off in machine learning.

The linear case. The instrumental variable approach can be confusing, so let us consider a 1-dimensional, fully linear case here in order to build up our intuition. Assume x ←ϵ_x a ←γ x + ϵ_a y ←α a + β x + ϵ_y, where ϵ_x and ϵ_y are both zero-mean Normal variables. If we intervene on a, we would find that the expected outcome equals 𝔼[y|do(a)] = α a, and thereby the ATE is ATE = 𝔼[y|do(a=1)] - 𝔼[y|do(a=0)] = α. With a properly selected instrument z, that is, z ⊥ x, we get z ←ϵ_z a ←γ x + ψ z + ϵ'_a. Because z ⊥ x, the best we can do is to estimate ψ to minimize min_ψ∑_n=1^N (a_n - ψ z_n)^2, given N (a,z) pairs. The minimum attainable loss is, on average, γ^2 𝕍[x] + 𝕍[ϵ'_a], assuming zero-mean ϵ'_a, because neither the contribution from x nor the exogenous noise can be explained by the instrument z. With the estimated ψ̂, we get â←ψ̂ z + ϵ”_a, and know that â = a - γ x in expectation. By plugging â into the original structural causal model, we end up with x ←ϵ_x â←ψ̂ z + ϵ”_a y ←α̃â + β x + ϵ_y'. We can now estimate α̃ by minimizing ∑_n=1^N ( y_n - α̃â_n )^2, assuming both ϵ_x and ϵ_y' are centered. If we assume we have x as well, we can regress y on a_n - γ x_n = ψ z_n + ϵ'_a,n directly: 1/N∑_n=1^N ( y_n - α̃ (a_n - γ x_n) )^2 = 1/N∑_n=1^N ( (αγ + β) x_n + ϵ_y,n + (α - α̃) (ψ z_n + ϵ'_a,n) )^2 →_N→∞ (αγ+β)^2 𝕍[x] + 𝕍[ϵ_y] + (α - α̃)^2 ( ψ^2 𝕍[z] + 𝕍[ϵ'_a] ), where we used y_n = α (a_n - γ x_n) + (αγ + β) x_n + ϵ_y,n and the fact that all cross terms vanish in the limit, since x, z, ϵ'_a and ϵ_y are mutually independent and zero-mean. The first two terms are irreducible, and therefore we focus on the last term. The last term is the product of two factors. The first one, (α - α̃)^2, measures the difference between the correct α and the estimated effect, while the second factor does not depend on α̃ at all. It tells us that minimizing this loss w.r.t. α̃ is the right way to approximate the true causal effect α. We can plug in â from Eq. (<ref>) instead: 1/N∑_n=1^N ( y_n - α̃ (ψ̂ z_n + ϵ”_a,n) )^2 = 1/N∑_n=1^N ( (y_n - α̃ψ̂ z_n) - α̃ϵ”_a,n)^2 = 1/N∑_n=1^N ( (y_n - α̃ψ̂ z_n)^2 + α̃^2 (ϵ”_a,n)^2 - 2 (y_n - α̃ψ̂ z_n) α̃ϵ”_a,n) ≈1/N∑_n=1^N (y_n - α̃ψ̂ z_n)^2 + α̃^2 𝕍[ϵ”_a] for a large enough N, since ϵ”_a is zero-mean and independent of everything else. The first term is about how predictive the instrument z is of y, which is a key consideration in choosing the instrument. If the instrument is not predictive of y, the instrumental variable approach fails dramatically. The second term corresponds to the variance of the action not explained by the instrument, implying that the instrument must also be highly correlated with the action.
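To see the two stages end to end, here is a minimal numerical sketch of this fully linear case; all coefficient values and data-generating choices are hypothetical.

import numpy as np

rng = np.random.default_rng(2)
N = 100_000
alpha, beta, gamma, psi = 2.0, 1.5, 1.0, 0.8   # hypothetical true coefficients

x = rng.normal(size=N)                 # unobserved confounder
z = rng.normal(size=N)                 # instrument, independent of x
a = gamma * x + psi * z + rng.normal(size=N)
y = alpha * a + beta * x + rng.normal(size=N)

# Naive regression of y on a is biased by the open path a <- x -> y.
alpha_naive = (a @ y) / (a @ a)

# Stage 1: regress a on z to obtain â, the part of a explained by z.
psi_hat = (z @ a) / (z @ z)
a_hat = psi_hat * z

# Stage 2: regress y on â; this recovers the causal coefficient alpha.
alpha_2sls = (a_hat @ y) / (a_hat @ a_hat)

print(alpha_naive, alpha_2sls)   # the former is biased, the latter ≈ 2.0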
In this procedure, we have solved least squares twice, (<ref>) and (<ref>), which is a widely used practice with instrumental variables. We also saw the importance of the choice of the instrumental variable.

An example: taxation. One of the most typical examples of an instrument is taxation. It is particularly useful in the United States of America (USA), due to the existence of different tax laws and rates across the fifty states. For instance, imagine an example where the action is cigarette smoking, the outcome is the contraction of lung cancer and the confounder is an unknown genetic mutation that affects both the affinity to nicotine addiction and the incidence of lung cancer. Because we do not know such a genetic mutation, we cannot easily draw a conclusion about the causal effect of cigarette smoking on lung cancer. There may be a spurious correlation arising from this unknown, and thereby unobserved, genetic mutation. Furthermore, it is definitely unethical to randomly force people to smoke cigarettes, which prevents us from running an RCT. We can instead use state-level taxation on tobacco as an instrument, assuming that a lower tax on tobacco products would lead to a higher chance and also rate of smoking, and vice versa. First, we build a predictor of smoking from the state (or even county, if applicable) tax rate. The predicted amount of cigarettes smoked by a participant can now work as a proxy to the original action, that is, the actual amount of cigarettes smoked. We then build a predictor of the incidence of lung cancer as well as the reverse predictor (action-to-instrument prediction). We can then use one of the two instrumental variable estimators above to approximate the potential outcome of smoking on lung cancer.

§ SUMMARY

In this chapter, we have learned the following concepts: * Challenges in active causal inference: practical, ethical and legal challenges * When confounders were observed: regression, inverse probability weighting and matching * When confounders were not observed: instrumental variables There are a few other widely used passive causal inference algorithms, such as difference-in-difference, regression discontinuity and double machine learning, but they are left for the final chapter on Remaining Topics (<ref>).

CHAPTER: CAUSALITY AND MACHINE LEARNING

In this chapter, we finally delve into the `machine learning' side of this course, which is titled `Introduction to Causal Inference in Machine Learning'. In order to do so, we need to start by establishing when we do not need to think of causal inference, or more broadly causality, in machine learning. After establishing it, we will move on to the other extreme, where conventional machine learning cannot do anything on its own. We then try to incorporate some of the concepts we have learned so far, in order to land between these two extreme cases and solve some of the most challenging and important problems in modern machine learning.

§ OUT-OF-DISTRIBUTION GENERALIZATION

§.§ Setup: I.I.D. We must start by defining what we mean by `prediction'. In this particular course, we first assume that each and every input-output pair (x,y), input x or output y is sampled independently of the others. This is a pretty strong assumption, since the world often changes based on what we have seen, because those who saw a sample pair may, and often do, change their behaviors. For instance, consider building a stock price forecasting model.
Once you use a predictor to predict whether the price of a particular stock goes up or down and trade based on the outcome, the next input x, that is, the stock of your next interest, is no longer independently selected but depends on your own success or failure from the previous trade. This assumption is however also reasonable, because there are many phenomena in which our behaviours do not matter much in a reasonably short horizon. For instance, consider installing and using a bird classifier in a particular forest. With a fixed camera, the input to this classifier will be largely independent of which birds (or not) were seen earlier, although the spotting of a particular bird may attract poachers to this forest, who would dramatically affect the bird population in a longer time frame. Next, we assume that all these pairs are drawn from the `identical' distribution. This is similar if not identical to the stationarity assumption from RCT. In RCT, we often rely on a double-blind experiment design, in order to ensure that the causal effect p^*(y|a,x) does not change over the trial. In this section, as in conventional statistical learning theory, we assume all input-output pairs were drawn from the same distribution. Combining these two assumptions, we arrive at a so-called training set D which satisfies p(D) = ∏_(x,y) ∈ D p^*(x, y), according to the definition of independence. We neither have access to nor knowledge of p^*. We use this training set D for both model fitting (training) and selection (validation). Once the predictive model p̂ is ready, we deploy it to make a prediction on a novel input x' drawn from a distribution q^*. That is, ŷ∼p̂(y | x'), where (x', y') ∼ q^*. We are often not given y'. After all, y' is what we want to use our predictive model to infer. We say that the predictive model is accurate if the following quantity is low: R(p̂) = 𝔼_(x',y') ∼ q^*[ l(y', p̂(y|x')) ], where l(·, ·) ≥ 0 is the loss (e.g. the misclassification rate). In traditional statistical learning theory, q^* is assumed to be p^*, and under this assumption, the goal of designing a learning algorithm is to minimize a so-called excess risk: R_excess(p̂) = R(p̂) - R(p^*) with respect to p̂. Since we do not have access to p^*, we often use Monte Carlo approximation to compute R(p̂), as follows R(p̂) ≈R̂_N(p̂) = 1/N∑_n=1^N l(y_n, p̂(y|x_n)), where (x_n, y_n) ∼ p^*. With a (strong) assumption of uniform convergence, which is defined as sup_p̂| R(p̂) - R̂_N(p̂) | →_p 0, we can minimize R using R̂_N with a large enough data set, i.e., N →∞, and find a good predictive model p̂. Of course, since N is always finite in reality, there is almost always a non-zero generalization error. Since we never have access to R(p̂) even after learning, it is a usual practice to use a separate (held-out) set of examples, again drawn from the same distribution p^* = q^*, as the test set to approximate the generalization error of a trained model p̂. Let D' = { (x'_1,y'_1), …, (x'_K, y'_K) }. Then, R(p̂) ≈1/K∑_k=1^K l(y'_k, p̂(y|x'_k)). Such a test-set accuracy, or more simply a test accuracy, has been a workhorse behind rapid advances in machine learning over the past several decades. With this whole paradigm in mind, it is important to notice that the key assumption here is q^*(x,y)=p^*(x,y). In other words, we assume that an instance on which the predictive model is tested at deployment follows the same distribution as the one from which the training examples were drawn, i.e., q^*(x) = p^*(x).
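As a small illustration of this protocol, here is a minimal sketch of estimating R̂_N on the training set and approximating R(p̂) on a held-out test set with the log loss; the data-generating process and the predictive model are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(3)

def sample(n):
    # hypothetical p* over (x, y) pairs
    x = rng.normal(size=n)
    y = rng.binomial(1, 1 / (1 + np.exp(-2 * x)))
    return x, y

def log_loss(y, p1):
    # l(y, p̂(y|x)) = -log p̂(y|x) for binary y
    p = np.where(y == 1, p1, 1.0 - p1)
    return -np.log(np.clip(p, 1e-12, None))

x_tr, y_tr = sample(10_000)   # training set D, drawn from p*
x_te, y_te = sample(2_000)    # held-out test set D', drawn from q* = p*

p_hat = lambda x: 1 / (1 + np.exp(-2 * x))   # a placeholder model p̂(y=1|x)

print(log_loss(y_tr, p_hat(x_tr)).mean())    # R̂_N(p̂)
print(log_loss(y_te, p_hat(x_te)).mean())    # ≈ R(p̂), since q* = p*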
Furthermore, under this assumption, the conditional distribution over the outcome does not change either, i.e., q^*(y|x) = p^*(y|x). In this case, there is no reason for us to consider the underlying generating processes behind p^* and q^* separately.

§.§ Out-of-Distribution Generalization

Impossibility of Out-of-Distribution (OOD) generalization. In reality, it is rare that q^* = p^*, because the world changes. When q^* ≠ p^*, we must be careful about discussing generalization. We must be careful, because we can always choose q^* to be such that minimizing R(p̂) in Eq. (<ref>) would lead to maximizing R^q^*(p̂) = 𝔼_(x,y) ∼ q^*[ l(y, p̂(y|x)) ]. Assume y ∈{0, 1}. Consider the following q^*, given p^*(x,y) = p^*(x) p^*(y|x): q^*(x, y) = p^*(x) q^*(y|x), where q^*(y|x) = 1 - p^*(y|x). That is, the mapping from x to y is reversed. When x was more probable to be observed together with y=1 under p^*, it is now more probable to be observed together with y=0 under q^*, and vice versa. If we take the log loss, which is defined as l(y, p̂(y|x)) = -logp̂(y|x), learning corresponds to minimizing the KL divergence from the true distribution to the learned, predictive distribution. Mathematically, min_p̂1/N∑_n=1^N l(y_n, p̂(y_n|x_n)) ≈min_p̂𝔼_x KL( p^*(· | x) ‖p̂(· |x) ). In other words, learning corresponds to recovering p^* as much as we can for as many probable x's under p^*(x). It is clear that minimizing this loss function would make our predictive model worse on the new distribution (<ref>), because the following holds for any particular example (x,y): log p^*(y|x) = log (1 - q^*(y|x)). Since log is a monotonic function, maximizing p^* is equivalent to minimizing q^*. As soon as we start minimizing the log loss for learning, out-of-distribution generalization to q^* gets worse, and there is no way to avoid it, other than not learning at all. This is a simple but clear example showing how out-of-distribution generalization is not possible in general. There will always be a target distribution that disagrees with the original distribution, such that learning on the latter is guaranteed to hurt the generalization accuracy on the former. In general, such a target distribution can be written down as log q^*(y|x) ∝log (1 - p^*(y|x)). We can also come up with a similar formula for q^*(x), such that there is almost no support overlap between p^*(x) and q^*(x).

Out-of-distribution generalization. We must then narrow down the scope in order to discuss out-of-distribution generalization. There are many different ways to narrow the scope, and one way is to ensure that the target distribution q^* is not too far from the original distribution p^*. Let D: 𝒫×𝒫→R_+ be an (asymmetric) divergence between two distributions, such that a larger D(p, q) implies a greater difference between the two distributions p and q. Then, we can write a so-called distributionally-robust loss as min_p̂sup_q: D(p^*,q)≤δ𝔼_(x,y) ∼ q[ l(y, p̂(y|x)) ], where sup is the supremum, which is the smallest item that is greater than or equal to all the other items in a partially ordered set <cit.>. The distributionally-robust loss above minimizes (min_p̂) the expected loss (𝔼_(x,y) ∼ q[ l(y, p̂(y|x)) ]) over the worst-case distribution (sup_q) within the divergence constraint (q: D(p^*,q)≤δ). Despite its generality, due to the freedom in the choice of the divergence D and the universality (the worst case), such distributionally-robust optimization is challenging to use in practice.
The challenge mainly comes from the fact that we must solve a nested optimization problem, where for each update of p̂ we must solve another optimization problem that maximizes the loss w.r.t. the distribution q. This problem can be cast as a two-player minimax game, which is more challenging, both in terms of convergence and its speed, than a more conventional optimization problem. Furthermore, it is often unclear how to choose an appropriate divergence D and threshold δ, as these choices are not grounded in the problem of interest. We are thus more interested in an alternative to the distributionally-robust optimization approach. Instead of specifying a divergence, we can describe how the distribution changes in terms of the probabilistic graphical model, or equivalently the structural causal model, underlying p^* and q^*. Depending on such a distributional change, we may be able to characterize the degree of generalization or even to come up with a better learning algorithm.

§.§ Case Studies

The label proportion shift. Let us consider a very basic example of a generative classifier which assumes the following generating process: [latent] (y) y; [latent, below=0.5cm of y] (x) x; yx; Under this generating process, the joint probability is written as p^*(x,y) = p^*(y) p^*(x|y), and the posterior distribution over the output y is p(y|x) = p(y) p(x|y)/p(x) = p(y) p(x|y)/∑_y' ∈𝒴 p(y') p(x|y'). Given a training set D={ (x_1, y_1), …, (x_N, y_N) }, where each (x_n,y_n) was drawn from the generating process above, that is, y_n ∼ p^*(y) x_n ∼ p^*(x|y_n), we can train a neural network classifier that takes as input x and outputs a probability for each possible value of y. This neural network can be written as p̂(y|x; θ, b) = exp(f_y(x; θ) + b_y)/∑_y' ∈𝒴exp(f_y'(x; θ)+ b_y'), where f_y(x; θ) is the y-th element of the |𝒴|-dimensional output from the neural network f, parametrized by θ and the bias vector b ∈ℝ^|𝒴|. Inspecting this neural net's formulation, based on the so-called softmax output, we notice the following correspondences: * p^*(y) ≈1/Z_yexp(b_y) * p^*(x|y) ≈1/Z_x|yexp(f_y(x; θ)), where the Z_y's and Z_x|y's are normalization constants, which are cancelled out in Eq. (<ref>).[ exp(a + b) = exp(a) exp(b). ] In other words, the bias b_y captures the marginal distribution over the output, and the rest the conditional distribution over the input given the output. This view suggests a two-stage learning process. In the first stage, we simply set b_y to be log p^*(y) (and thereby implicitly set Z_y=1). Then, we use optimization, such as stochastic gradient descent, to estimate the rest of the parameters, θ. After learning is over, we get p̂(y|x) = p̂(y) exp(f_y(x; θ̂))/∑_y'exp(f_y'(x; θ̂)). It is important to notice that the second factor on the right hand side is not the estimate of p^*(x|y), since the denominator must include the extra normalization, i.e. p(x). In other words, exp(f_y(x; θ̂))/∑_y'exp(f_y'(x; θ̂)) = p̂(x|y)/p̂(x). This predictive model p̂(y|x) would work well even on a new instance under the iid assumption, that is, p^*(y|x)=q^*(y|x). Under the label proportion shift, however, this is not the case, because q^*(y) ≠ p^*(y). For instance, imagine we trained a COVID-19 diagnosis model based on various symptoms, including cough sound, temperature and others, during the winter of 2021. During this period, COVID-19 was rampant, that is, p^*(y=1) was very high. If we use this model however in the winter of 2024, the overall incidence rate of COVID-19 is much lower. In other words, q^*(y=1) ≪ p^*(y=1).
This would lead to the overestimation of p(y=1|x), because the prediction is proportional to p̂(y=1), which is an estimate of the outdated prior p^*(y=1) over the output, not of the latest prior q^*(y=1). The prediction becomes worse as q^* deviates further away from p^*. One simple way to address this is to assume that a priori it is more probable for the label marginal, i.e., the marginal distribution over the output, to be closer to the uniform distribution. This is a reasonable assumption in many contexts when we are not given any information about the situation. For instance, it is perfectly sensible to assume that any given coin is likely to be fair (that is, it has an equal chance of landing heads or tails). In that case, we would simply set the bias b to be an all-zero vector so that p̂(y|x) = exp(f_y(x; θ̂))/∑_y'exp(f_y'(x; θ̂)). Sometimes we are given some glimpse into q^*. In the case of COVID-19, it is difficult to collect (x,y) pairs but it is often easy to collect y's by various means, including surveys and rapid testing at various event venues. Let q̂(y) be the estimate of q^*(y) from such a source. We can then replace p̂(y) with this new estimate in Eq. (<ref>), resulting in p̂(y|x) = q̂(y) exp(f_y(x; θ̂))/∑_y'exp(f_y'(x; θ̂)). This is equivalent to replacing the bias b_y with logq̂(y). In practice, it is often the case that the number of y samples we can collect is limited, leading to a high-variance estimate of q^*. We do not want to rely solely on such an estimate. Instead, we can interpolate between p̂(y) and q̂(y), leading to replacing the bias of each output with b_y ←log(αp̂(y) + (1-α) q̂(y) ), with α∈ [0, 1]. Here α describes the degree of our trust in the original estimate of the label marginal: if α = 1, we end up with the original iid setup, and with α=0, we fully trust our new estimate of the label marginal.

Data augmentation. Consider an object classification task, where the goal is to build a classifier that categorizes the object in the center of an image into one of K predefined classes. Just like before, we assume generative classification in which the object label produces the image. We however further assume that there exists an extra variable z=(i,j) that determines the precise position of the object. [latent] (y) y; [latent, right=1cm of y] (z) z; [latent, below=0.5cm of y, xshift=0.75cm] (x) x; yx; zx; During training time, z follows a Normal distribution centered at the center of the image, i.e., z ∼𝒩(μ_z=[0, 0]^⊤, I_2). Assuming that the background is randomly produced and does not correlate with the identity of the object in the center, a classifier we train on data produced from this data generating process should become blind to periphery pixels, since cov(x_mn, y) ≈ 0, where |m| ≫ 0 and |n| ≫ 0. This can be written down as p(x_mn | y) ≈ p(x_mn), meaning that x_mn is independent of y. If we make the naïve Bayes assumption, that is, all pixels are independent conditioned on the label, we get the following expression of the posterior over the label: p(y|x) ∝ p(y) ∏_m, n p(x_mn | y) ∝ p(y) ∏_(m, n) ∈ C p(x_mn | y), where C is a set of pixels near the center. In other words, if the object is outside the center of the image, the posterior distribution over the label would not capture the actual identity of the object. This dependence on the position arises from the existence of the hidden variable z and its prior distribution p^*(z).
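Stepping back to the label proportion shift for a moment, the bias replacement above amounts to a one-line change at prediction time. A minimal sketch, in which the bias-free logits f and the two label-marginal estimates are hypothetical placeholders:

import numpy as np

def predict(f_logits, p_train_y, q_test_y, alpha=0.5):
    # softmax with the bias b_y = log(alpha * p̂(y) + (1 - alpha) * q̂(y));
    # f_logits are the bias-free network outputs f_y(x; θ̂)
    b = np.log(alpha * p_train_y + (1.0 - alpha) * q_test_y)
    z = f_logits + b
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

f = np.array([[2.0, 0.5, -1.0]])        # f(x; θ̂) for a single input x
p_train = np.array([0.7, 0.2, 0.1])     # p̂(y), estimated at training time
q_test = np.array([0.2, 0.3, 0.5])      # q̂(y), estimated from a few test-time y's
print(predict(f, p_train, q_test, alpha=0.3))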
Returning to the object classification task: if the prior distribution over the position z shifts at test time, such that q^*(z) = 𝒩(μ_z=[100, 100]^⊤, I_2), all objects in the images would be positioned in the top-right corner. The classifier based on the training set with p^*(z) will then completely fail to detect and classify these objects. Because we assume we know the precise type of shift that is possible, we can mitigate this issue by data augmentation <cit.>. During training, we randomly shift a training image such that the position of the object in the image varies much more than it does in the original training set. This can be thought of as introducing another random variable u such that p(l | z, u) = p(l), where l indicates the position of the object in an image. In other words, u makes the position of an object independent of z, such that a classifier trained on the training data with such data augmentation is able to detect objects in any position, making it invariant to the distributional shift of z.

§ INVARIANCE: STABLE CORRELATIONS ARE CAUSAL CORRELATIONS

Once we have a probabilistic graphical model, or a structural causal model, that describes the generating process, and have a crisp idea of which distribution shifts how, we can come up with a learning algorithm that may alleviate the detrimental effect of such a distribution shift. It is however rare that we can write down the description of a generating process in detail. It is even rarer to have a crisp sense of how distributions shift between training and test times. For instance, how would you describe the relationships among millions of pixels of a photo and the unobserved identities of objects within it? We can instead focus on devising an alternative way to determine which correlations are considered causal and which other correlations are spurious. The original way to distinguish causal and spurious correlations was entirely reliant on the availability of a full generating process in the form of a probabilistic graphical model. One alternative is to designate correlations that hold both during training and test time as causal, and the rest as spurious <cit.>. In other words, any correlation that is invariant to the distributional shift is considered causal, while any correlation that varies according to the distributional shift is considered spurious. The goal is then to find a learning algorithm that can ignore spurious (unstable) correlations while capturing only (stable) causal correlations, for the purpose of prediction.

§.§ An Environment as a Collider

A case study: a bird or a branch? Imagine a picture of a bird taken in a forest. The bird is probably somewhere near the center of the photo, since the bird is the object of interest. It is extremely difficult to take a good picture of a flying bird, and hence, it is highly likely that the bird is not flying but is sitting. Since we are in a forest, it is highly likely that the bird is sitting on a tree branch, with the branch placed near the bottom of the photo. Compare this to a picture without a bird taken in the same forest. The chance of a tree branch appearing solely near the bottom of the photo is pretty slim. After all, it is a forest, and there are many branches all over. I can then create a bird detector using either of two features: one is a feature describing a bird near the center and the other is a feature describing the location of a tree branch.
Clearly, we want our bird detector to use the first feature, that is, to check whether there is a bird in the picture rather than whether there is a tree branch near the bottom of the picture, in order to tell whether there is a bird in the picture. Either way, however, the bird detector would work pretty well in this situation. A bird detector that relies on the position of a tree branch would not work well if suddenly all the pictures were from indoors rather than from a forest. Most of the birds indoors would be confined in their cages and would not be sitting on tree branches. Rather, they would be sitting on an artificial beam or on the ground. On the other hand, a bird detector that relies on the actual appearance features of a bird would continue to work well. That is, the correlation between the label (`bird' or not) and the position of a tree branch (`bottom' or not) is not stable, while the correlation between the label and the bird-like appearance of a bird is stable. In other words, the former is spurious, while the latter is causal. A desirable bird detector would rely on the causal correlation and discard any spurious correlation during learning.

An environment indicator is a collider. The precise mechanism by which these unstable correlations arise can be extremely complex and is often unknown. In other words, we cannot rely on having a precise structural causal model from which we can read out all paths between the input and output, designate each as causal or spurious and adjust for those spurious paths. Instead, we can think of an extremely simplified causal model that includes only three variables: input x, output y and collider z, as in [latent] (u) x; [latent, right=1cm of u] (v) y; [obs, above=0.5cm of u, xshift=1cm] (w) z=e; uv; uw; vw; In this causal model, the collider z tells us whether we are in a particular environment (e.g. the forest above). When we collect data from this causal model while being conditioned on a particular environment, this conditioning on the collider opens the path x → z ← y, as we have learned earlier in <ref>. This way of thinking necessitates a bit of mental contortion. Rather than saying that a particular environment affects the input and output, we are saying that a particular combination of the input and output probabilistically defines an environment. That is, p(z | x, y) is the distribution defined over all possible environments z given the combination of x and y. Indeed, if x is a picture with a tree branch near the bottom and y states that there is a bird, the probability of z being a forest is quite high. The environment dependence can then be thought of as drawing training instances from the graph above where the environment z takes a particular target value (e.g. `forest'). The most naive solution to this issue is to collect as much extra data as possible while avoiding such `selection bias' arising from conditioning the collider z on any particular value. If we do so, it is as if the collider z did not exist at all, since marginalizing out z leads to the following simplified graph: [latent] (u) x; [latent, right=1cm of u] (v) y; uv; A predictive model p̂(y|x) fitted on this graph would capture the causal relationship between the input and output, since the conditional and interventional distributions coincide in this case, that is, p^*(y|x) = p^*(y|do(x)). This approach is however often unrealistic.

§.§ The Principle of Invariance

Invariant features.
So far, we have considered each variable as an unbreakable unit. This is however a very strong assumption, and we should be able to easily split any variable into two or more pieces. This is in fact precisely what we often do by representing an object as a d-dimensional vector, embedding it into the d-dimensional Euclidean space. We are splitting a variable x into a set of d scalars which collectively represent the value the variable takes. We can then look at a subset of these dimensions instead of the full variable, in which case the statistical as well as causal relationships with other variables may change. This applies even to a 1-dimensional random variable, where we can apply a nonlinear function to alter its relationship with other variables. Consider the following structural causal model: x ←ϵ_x, z ←1(x > 0) max(0, x + ϵ_z), y ←1(x ≤ 0) min(0, x + ϵ_y) + z , where ϵ_x ∼𝒩(0, 1^2) ϵ_z ∼𝒩(0, 1^2) ϵ_y ∼𝒩(0, 1^2). This model simplifies, roughly, to y ∼𝒩(0, 1^2 + 1^2), where the two unit variances come from ϵ_x and from either ϵ_z or ϵ_y depending on the sign of x. With the following nonlinear function applied to x, however, y takes a different form: g(x) = 1(x ≤ 0) x. By replacing x with g(x) above, p(y) ∝ 0, if y > 0, 𝒩(y; 0, 1^2 + 1^2), otherwise This has the effect of removing the correlation flowing through the path x → z → y, leaving only x → y, because z is now a constant regardless of the value x takes. By inspecting the relationship between g(x) and y, we can measure the direct causal effect of x on y. This example illustrates that there may be a nonlinear function of x that results in a variable which preserves enough information to capture the direct causal relationship between x and the output y but removes any relationship x has with the other variables in the structural causal model. In the context of the environment variable z, which is a collider, the goal is then to find a feature extractor g such that the original graph is modified into [latent] (x) x; [draw, rectangle, below=0.5cm of x, xshift=0.5cm] (g) g; [latent, right=0.5cm of g] (gx) x'; [latent, right=2cm of x] (y) y; [latent, above=0.5cm of x, xshift=1cm] (z) z; xg; ggx; gxy; xz; yz; xy; Ideally, we want g such that g(x) explains the whole of x's direct effect on y. That is, [latent] (x) x; [draw, rectangle, below=0.5cm of x, xshift=0.5cm] (g) g; [latent, right=0.5cm of g] (gx) x'; [latent, right=2cm of x] (y) y; [latent, above=0.5cm of x, xshift=1cm] (z) z; xg; ggx; gxy; xz; yz; Effectively, x' works as a mediator between x and y. Because g is a deterministic function, the effect of x on y is then perfectly captured by x'. In order to understand when this would happen, it helps to consider the structural causal model:[ g could take as input noise in addition to x, but to strongly emphasize that x' is a nonlinear feature of x, we omit it here. ] x ←ϵ_x x' ← g(x) y ← f_y(x, x', ϵ_y) z ← f_z(x, y, ϵ_z). What changes between the last two graphs is the third line in the structural causal model above. The original one is y ← f_y(x, x', ϵ_y), while the new one is y ← f'_y(x', ϵ_y). For this to happen, x' must absorb all of the relationship between x and y. That is, x' must be fully predictive of y, leaving only the external noise ϵ_y and nothing more to be captured by x. Consider a slightly more realistic example of detecting a fox in a picture. There are two major features of any object within any picture: shape and texture.
The shape is what we often want our predictor to rely on, while the texture, which is usually dominated by colour information, should be ignored. For instance, if we have a bunch of pictures taken anywhere in the sub-arctic Northern Hemisphere, most of the foxes in these pictures will be yellowish with a white-coloured breast and dark-coloured feet and tail. On the other hand, foxes in pictures taken in the Arctic will largely be white only, implying that the texture/colour feature of a fox is an environment-dependent feature and is not stable across the environments. Meanwhile, the shape information, a fox-like shape, is the invariant feature of a fox across multiple environments. In this case, x' would be the shape feature of x. We now see two criteria a function g must satisfy: * Given x and y, x'=g(x) and z are independent. * x'=g(x) is highly predictive of (correlated with) y. Once we find such a g, the (potentially biased) outcome can be obtained, given a new instance x, by fitting a predictive model p̂(y|x') <cit.>. That is, ŷ(x) = 𝔼_p̂(y|x'=g(x))[ y ]. This would be free of the spurious correlation arising from the environment condition.

Learning. We now demonstrate one way to learn g to satisfy the two conditions above as much as possible. First, in order to satisfy the first condition, we must build a predictor of z given x'. This predictor should be non-parametric in order to capture as many (higher-order) correlations as could exist between z and x'. Let p̂(z|x') = h(x') be such a predictor obtained by solving the following optimization problem: min_p -1/N∑_n=1^N log p(z^n | g(x^n)), where (x^n, y^n, z^n) is the n-th training example drawn from the original graph while ensuring that z^n ∈ℰ. ℰ is the set of environments in the training set. In other words, we have a few observed environments on which we condition the sampling of (x,y), and we use these examples to build an environment predictor from g(x), given g. The goal is then to minimize the following cost function w.r.t. g, where we assume z is discrete: C_1(g) = ∑_z' ∈𝒵p̂(z=z'|x'=g(x)) logp̂(z=z'|x'=g(x)). In other words, we maximize the entropy of p̂(z|x'), which is maximized when it is uniform. When p̂(z|x') is uniform, it is equivalent to z ⊥ x'. One may ask where the condition on observing y went. This is hidden in p̂(z|x'), since p̂ was estimated using (g(x),z) pairs derived from a set of triplets (x,y,z) drawn from the original graph, as is clear from Eq. (<ref>). Of course, this cost function alone is not useful, since it will simply drive g to be a constant function. The second criterion prevents this, and it can be expressed as C_2(g, q) = -1/N∑_n=1^N log q(y^n | g(x^n)). This second criterion must be minimized with respect to both the feature extractor g and the y predictor q(y|x'). This criterion ensures that the feature x' is predictive of (that is, highly correlated with) the output y. Given p̂(z|x'), the feature extractor is then trained to minimize min_g, q C_1(g) + α C_2(g,q), where α is a hyperparameter that balances C_1 and C_2. We then alternate between solving Eq. (<ref>) to find p̂ and solving Eq. (<ref>) to find g and q̂ <cit.>, as sketched below. This is a challenging, bi-level optimization problem and may not even converge, in either theory or practice, although this approach has been used successfully in a few application areas. The most important assumption here is that we have access to training examples from more than one environment.
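A minimal sketch of this alternating procedure, using linear modules as stand-ins for g, q and the environment predictor h; the architecture, the optimizers and the single-batch updates are all hypothetical simplifications of the bi-level problem above.

import torch
import torch.nn as nn

d_x, d_feat, n_env, n_cls = 10, 4, 3, 2
g = nn.Linear(d_x, d_feat)      # feature extractor g
q = nn.Linear(d_feat, n_cls)    # label predictor q(y | g(x))
h = nn.Linear(d_feat, n_env)    # environment predictor p̂(z | x')

opt_h = torch.optim.Adam(h.parameters(), lr=1e-2)
opt_gq = torch.optim.Adam(list(g.parameters()) + list(q.parameters()), lr=1e-2)
ce = nn.CrossEntropyLoss()
alpha = 1.0

def step(x, y, z):
    # (i) re-fit the environment predictor p̂(z | x') for the current g
    opt_h.zero_grad()
    ce(h(g(x).detach()), z).backward()
    opt_h.step()

    # (ii) update g and q: C_1 maximizes the entropy of p̂(z | g(x)),
    #      while C_2 keeps x' = g(x) predictive of y
    opt_gq.zero_grad()
    log_pz = torch.log_softmax(h(g(x)), dim=-1)
    c1 = (log_pz.exp() * log_pz).sum(-1).mean()   # E[sum_z p̂ log p̂]
    c2 = ce(q(g(x)), y)
    (c1 + alpha * c2).backward()
    opt_gq.step()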
Preferably, we would have examples from all possible environments (that is, from all possible values z can take), even if they do not necessarily follow p^*(z|x,y) closely. If so, we would simply ignore z by considering z as marginalized. If we have only a small number of environments during training, it will be impossible for us to ensure that g(x) does not encode any information about z. There is a connection to generalization, as better generalization in p̂(z|x') would imply that fewer environments are necessary for creating a good p̂(z|x') and, in turn, for producing a more stable feature extractor g.

§ PREDICTION VS. CAUSAL INFERENCE

The major difference between prediction and causal inference is the goal. The goal of prediction is to predict which value a particular variable, in our case often the outcome variable, would take given that we have observed the values of the other variables. On the other hand, the goal of causal inference is to know which value the outcome variable would take had we intervened on the action variable. This difference implies that causal inference may not be the best way to predict what would happen based on what we have observed. In the example of birds vs. branches above, if our goal is good prediction, we would certainly be open to using the location of the branch as one of the features as well. Even if a large portion of the bird in a picture is occluded by e.g. leaves, we may be able to accurately predict that there is a bird in the picture by noticing the horizontal branch near the bottom of the picture. This branch feature is clearly not a causal feature, but it nevertheless helps us make a better prediction. In short, if I knew that the picture was taken in a forest, I would rely on both the beak and the branch's location to determine whether there is a bird in the picture. This is however a brittle strategy, as it would certainly degrade my prediction ability had the picture been taken somewhere else. The invariant predictor q(y | g(x)) from above is thus likely sub-optimal in the context of prediction under any environment, although this may be the right distribution to compute the causal effect of x on y. This is because the invariant predictor only explains a part of y (marked red below), while ignoring the open path (marked blue below) via the collider: [latent] (x) x; [draw, rectangle, below=0.5cm of x, xshift=0.5cm] (g) g; [latent, right=0.5cm of g] (gx) x'; [latent, right=2cm of x] (y) y; [obs, above=0.5cm of x, xshift=1cm] (z) z; xg; ggx; [color=red]gxy; [color=blue]xz; [color=blue]yz; Given an environment z = ẑ, we must capture both correlations arising from g(x)→ y and x →ẑ← y, in order to properly predict what value y is likely to take given x. This can be addressed by introducing an environment-dependent feature extractor h_ẑ(x) that is orthogonal to the invariant feature extractor g(x). We can impose such orthogonality (or independence) when learning h_ẑ(x) by min_h, q -1/N∑_n=1^N log q(y^n | g(x^n), h_ẑ(x^n)), with a given g. h_ẑ would then capture only the aspects of y that were not already captured by g, leading to the orthogonality. This however assumes that q is constrained to the point that it cannot simply ignore g(x) entirely. This view allows us to use a small number of labelled examples from a new environment at test time to quickly learn the environment-specific feature extractor h_ẑ, while having learned the environment-invariant feature extractor g at training time from a diverse set of environments, as sketched below.
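A minimal sketch of this test-time adaptation step; the frozen g is assumed to have been trained as above, and feeding the concatenation of g(x) and h_ẑ(x) into q is one hypothetical way of constraining q so that it cannot ignore g(x) entirely.

import torch
import torch.nn as nn

d_x, d_feat, n_cls = 10, 4, 2
g = nn.Linear(d_x, d_feat)        # invariant g, assumed already trained
for p in g.parameters():
    p.requires_grad_(False)       # freeze g at test time

h_env = nn.Linear(d_x, d_feat)         # environment-specific h_ẑ
q_env = nn.Linear(2 * d_feat, n_cls)   # q(y | g(x), h_ẑ(x))
opt = torch.optim.Adam(
    list(h_env.parameters()) + list(q_env.parameters()), lr=1e-2)
ce = nn.CrossEntropyLoss()

def adapt(x_new, y_new, n_steps=100):
    # a few labelled examples (x_new, y_new) from the new environment ẑ
    for _ in range(n_steps):
        opt.zero_grad()
        feats = torch.cat([g(x_new), h_env(x_new)], dim=-1)
        ce(q_env(feats), y_new).backward()
        opt.step()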
One can view such a scheme as meta-learning or transfer learning, although neither of these concepts is well defined. It is possible to flip the process described here to obtain an environment-invariant feature extractor g, if we know of an environment-dependent feature extractor h_z, by min_g, q -1/N∑_n=1^N log q(y^n | g(x^n), h_ẑ(x^n)), assuming again that q is constrained to the point that it cannot simply ignore h_ẑ(x) entirely. This flipped approach has been used to build a predictive model that is free of a known societal bias, for which a detector can easily be constructed <cit.>.

§ A CASE STUDY: LANGUAGE MODELING WITH PAIRWISE PREFERENCE

An autoregressive language model is described as a repeated application of the next-token conditional probability, as in p(w_1, w_2, …, w_T) = ∏_t=1^T p(w_t | w_<t). A conditional autoregressive language model is exactly the same except that it is conditioned on another variable x: p(w_1, w_2, …, w_T | x) = ∏_t=1^T p(w_t | w_<t, x). There are many different ways to build a neural network to implement the next-token conditional distribution. We do not discuss any of those approaches, as they are out of the course's scope. An interesting property of a language model is that it can be used for two purposes: * Scoring a sequence: we can use p(w_1, w_2, …, w_T | x) to score an answer sequence w given a query x. * Approximately finding the best sequence: we can use approximate decoding to find arg max_w p(w | x). This allows us to perform causal inference and outcome maximization simultaneously. Consider the problem of query-based text generation, where the goal is to produce an open-ended answer w to a query x. Because it is often impossible to give an absolute score to the answer w given a query x, it is customary to ask a human annotator for a relative ranking between two (or more) answers w_+ and w_- given a query x. Without loss of generality, let w_+ be the answer preferred over w_-. We assume that there exists a strict total order among all possible answers. That is, * Irreflexive: r(w|x) < r(w|x) cannot hold. * Asymmetric: If r(w|x) < r(w'|x), then r(w|x) > r(w'|x) cannot hold. * Transitive: If r(w|x) < r(w'|x) and r(w'|x) < r(w”|x), then r(w|x) < r(w”|x). * Connected: If w ≠ w', then either r(w|x) < r(w'|x) or r(w|x) > r(w'|x) holds. In other words, we can enumerate all possible answers according to their (unobserved) ratings on a 1-dimensional line.

A non-causal approach. It is then relatively trivial to train this language model, assuming that we have a large number of triplets D={(x^1, w^1_+, w^1_-), …, (x^N, w^N_+, w^N_-)}. For each triplet, we ensure that the language model puts a higher probability on w_+ than on w_- given x by minimizing the following loss function: L_pairwise(p) = 1/N∑_n=1^N max(0, m-log p(w^n_+|x^n) + log p(w^n_- | x^n)), where m ∈ [0, ∞) is a margin hyperparameter. For each triplet, the loss inside the summation is zero if the language model assigns w_+ a log-probability that exceeds that of w_- by at least the margin m. This loss alone is however not enough to obtain a well-trained language model from which we can produce a high-quality answer, for we have pairwise preference triplets only for reasonable answers. The language model trained in this way is not encouraged to put low probabilities on gibberish.
We avoid this issue by ensuring that the language model puts reasonably high probabilities on all reasonable answers by minimizing the following extra loss function: L_likelihood(p) = - 1/2N∑_n=1^N ( log p(w^n_+ | x^n) + log p(w^n_- | x^n) ), which corresponds to the so-called negative log-likelihood loss.

A causal consideration. This approach works well under the assumption that it is only the content that is embedded in the answer w. This is unfortunately not the case. Any answer is a combination of the content and the style, and the latter should not be the basis on which the answer is rated. For instance, one aspect of style is verbosity. Often, a longer answer is rated more highly because of a subconscious bias of the human rater, who believes that whoever could produce a better answer would also be able to write a longer one, although there is no reason why there should not be a better and more concise answer. This process can be described as the graph below, where r is the rating and s is the style: [latent] (w) w; [latent, right=1.5cm of w] (r) r; [latent, above=0.5cm of w, xshift=1cm] (s) s; [obs, left=1cm of w] (x) x; wr; sw; sr; xw; The direct effect of w on the rating r is based on the content, but then there is a spurious correlation between w and r via the style s. For instance, s could encode the verbosity, which affects both how w is written and how a human rater perceives its quality and assigns the rating r. In the naive approach above, the language model, as a scorer, will fail to distinguish between these two and capture both, which is clearly undesirable; a longer answer is not necessarily a better answer. In other words, a language model p_0 trained in the purely supervised way above will score w highly for both causal and spurious (via s) reasons. An answer w sampled from p_0 can then be considered dependent not only on the question x itself but also on an unobserved style variable s.

Direct preference optimization <cit.> or unlikelihood learning <cit.>. We can resolve this issue by combining two ideas we have studied earlier: randomized controlled trials (RCT; <ref>) and inverse probability weighting (IPW; <ref>). First, we sample two answers, w and w', from the model p_0 already trained with supervised learning above: w, w' ∼ p_0(w|x). These two answers (approximately) maximize the estimated outcome (rating) by capturing both the content and style. One interesting side-effect of imperfect learning and inference (generation) is that both of these answers would largely share the style. If we use s' to denote that style, we can think of each answer as sampled from w | x, s'. With a new language model p_1 (potentially initialized from p_0), we can compute the rating after removing the dependence on the style s by IPW: r̂(w|x) = p_1(w|x)/p_0(w|x). This reminds us of the do operation, resulting in the following modified graph: [latent] (w) w; [latent, right=1.5cm of w] (r) r; [latent, above=0.5cm of w, xshift=1cm] (s) s; [obs, left=1cm of w] (x) x; wr; sr; xw; Of course, this score r̂ does not mean anything, since p_1 does not mean anything yet. We have to train p_1 by asking an expert to provide their preference between w and w'. Without loss of generality, let w be the preferred answer over w'. That is, w_+=w and w_-=w'. We train p_1 by minimizing L'_pairwise(p_1) = 1/N∑_n=1^N max(0, m-logp_1(w^n_+|x^n)/p_0(w^n_+|x^n) + logp_1(w^n_- | x^n)/p_0(w^n_- | x^n)), where we assume we have N pairs and m is a margin as before.
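A minimal sketch of this pairwise loss on the IPW-corrected scores; the per-sequence log-probabilities are placeholder tensors, which in practice would come from summing the next-token log-probabilities of the two language models.

import torch

def pairwise_ipw_loss(logp1_pos, logp1_neg, logp0_pos, logp0_neg, m=0.1):
    # r̂(w|x) = log p_1(w|x) - log p_0(w|x) for the preferred and
    # dispreferred answers; hinge loss with margin m
    r_pos = logp1_pos - logp0_pos
    r_neg = logp1_neg - logp0_neg
    return torch.clamp(m - r_pos + r_neg, min=0.0).mean()

# Hypothetical sequence log-probabilities for a batch of three pairs.
logp1_pos = torch.tensor([-12.0, -30.5, -8.2], requires_grad=True)
logp1_neg = torch.tensor([-11.0, -29.0, -9.5], requires_grad=True)
logp0_pos = torch.tensor([-13.0, -31.0, -8.0])
logp0_neg = torch.tensor([-12.5, -28.5, -9.0])
loss = pairwise_ipw_loss(logp1_pos, logp1_neg, logp0_pos, logp0_neg)
loss.backward()   # gradients flow into p_1's log-probabilities only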
It is possible to replace the margin loss with another loss function, such as a log loss or a linear loss. This procedure encourages p_1 to capture only the direct (causal) effect of the answer on the rating, dissecting out the indirect (spurious) effect via the style s. Once training is done, we use p_1 to produce a better answer, which depends less on the spurious correlation between the answer and the rating via the style. Because this procedure is extremely implicit about the existence of and the dependence on the style, it can be beneficial to repeat this procedure for multiple rounds in order to further remove the effect of the spurious correlation and improve the quality of a generated answer <cit.>.

§ SUMMARY

In this chapter, we have learned the following concepts: * Out-of-distribution generalization and its impossibility * Invariance as a core principle behind out-of-distribution generalization * Preference modeling for training a language model, as causal learning The goal of this chapter has been to introduce students to the concept of learning beyond independently-and-identically-distributed settings, by relying on concepts and frameworks from causal inference and, more broadly, causality. The topics covered in this chapter are sometimes referred to as causal machine learning <cit.>.

CHAPTER: REMAINING TOPICS

As the purpose of this course is to be a thin and quick introductory course at the intersection of causal inference and machine learning, it is neither the intention nor desirable to cover all topics in causal inference exhaustively. In this final chapter, I discuss a few topics that I did not feel needed to be included in the main course but that could be useful for students if they were taught.

§ OTHER TECHNIQUES IN CAUSAL INFERENCE

In practice the following observational causal inference techniques are widely used: * Regression in <ref> * Inverse probability weighting in <ref> and matching in <ref> * Instrumental variables in <ref> * Difference-in-difference * Regression discontinuity design * Double machine learning Difference-in-difference and regression discontinuity design are heavily used in practice, but they work for relatively more specialized cases, which is why this course has omitted them so far. In this section, we briefly cover these two approaches for the sake of completeness. Furthermore, this section wraps up by providing a high-level intuition behind the more recently proposed and popularized technique of double machine learning.

§.§ Difference-in-Difference

The average treatment effect (ATE) from <ref> measures the difference between the outcomes of two groups, treated and not treated; more precisely, it measures the difference between the outcome of the treated group and the expected outcome over all possible actions. One way to interpret this is to view ATE as checking what happens to a treated individual had the individual not been treated, on average. First, we can compute what happens to an individual once they were treated, on average, as y^1_diff = 𝔼_x 𝔼_a 𝔼_y_pre,y_post[ 1(a = 1) (y_post - y_pre) ], where y_pre and y_post are the outcomes before and after the treatment (a=1). We can similarly compute what happens to the individual had they not been treated, on average, as well by y^0_diff = 𝔼_x 𝔼_a 𝔼_y_pre,y_post[ 1(a = 0) (y_post - y_pre) ]. We now check the difference between these two quantities: y^1_diff - y^0_diff = 𝔼_x 𝔼_a [ 𝔼_y_post[ 1(a = 1) y_post - 1(a = 0) y_post] - 𝔼_y_pre[ 1(a = 1) y_pre - 1(a = 0) y_pre] ].
If we used RCT from <ref> to assign the action independently of the covariate x, and also uniformly, the second term, that is, the difference in the pre-treatment outcome, should disappear, since the treatment had not yet been given to the treatment group. This leaves only the first term, which is precisely how we would compute the outcome from RCT. In an observational study, that is, passive causal inference, we often do not have control over how the participants were split into treatment and placebo groups. This often leads to a discrepancy in the base outcome between the treated and placebo groups. In that case, the second term above does not vanish but works to remove this baseline effect. Consider measuring the effect of a vitamin supplement on the height of school-attending girls of age 10. Let us assume that this particular vitamin supplement is provided to school children by default in the Netherlands from age 10 but is not in North Korea. We may be tempted to simply measure the average heights of school-attending girls of age 10 in these two countries and draw a conclusion on whether this supplement helps school children grow taller. This would however not be a reasonable way to draw the conclusion, since the average heights of girls of age 9, right before the vitamin supplement begins to be provided in the Netherlands, differ quite significantly between the two countries (146.55cm vs. 140.58cm). We should rather look at how much taller these children grew between the ages of 9 and 10. Because we consider the difference of the difference in Eq. (<ref>), we call this estimator difference-in-difference. This approach is widely used and was one of the most successful cases of passive causal inference, dating back to the 19th century <cit.>. In the context of what we have learned in this course, let us write a structural causal model that admits this difference-in-difference estimator: x ←ϵ_x, a ←1(x + ϵ_a > 0), y ←1(x > 0) y_0 + α a + ϵ_y. With zero-mean and symmetric ϵ_x and ϵ_a, those with positive x are more likely to be assigned to a=1. Due to the first term in y, the outcome has a constant bias y_0 when x is positive. In other words, those who are likely to be given the treatment have y_0 added to the outcome regardless of the treatment (a=1) itself, since +y_0 does not depend on a. The difference-in-difference estimator removes the effect of y_0 when estimating α, which is the direct causal effect of a on y. This tells us when the difference-in-difference estimator works, and how we can extend it further. For instance, it is not necessary to assume linearity between a and y. I leave it to you as an exercise. §.§ Regression Discontinuity Another popular technique for passive causal inference is called regression discontinuity <cit.>. Regression discontinuity assumes that there exists a simple rule that determines to which group, either treated or placebo, an individual is assigned based on the covariate x. This rule can be written down as a = 1 if x_d ≥ c_0, and a = 0 otherwise. If the d-th covariate crosses the threshold c_0, the individual is assigned to a=1. We further assume that the outcome given a particular action is a smooth function of the covariate. That is, the outcome of a particular action, f(â, x), changes smoothly, especially around the threshold c_0. In other words, had it not been for the assignment rule above, lim_x_d → c_0^- f(â, x) = lim_x_d → c_0^+ f(â, x).
There is no discontinuity of f(â, x) at x_d=c_0, and we can fit a smooth predictor that extrapolates well to approximate f(â, x) (or 𝔼_x_d' ≠ d f(â, x_d'∪ x_d=c_0)). If we assume that the threshold c_0 was chosen arbitrarily, that is, independently of the values of x_≠ d, it follows that the distributions over x_≠ d before and after c_0 remain the same, at least locally.[ This provides a good ground for testing the validity of regression discontinuity. If the distributions of x before and after c_0 differ significantly from each other, regression discontinuity cannot be used. ] This means that the assignment of an action a and the covariates other than x_d are independent locally, i.e., for |x_d - c_0| ≤ϵ, where ϵ defines the radius of the local neighbourhood centered at c_0. Thanks to this independence, which is the key difference between the conditional and interventional distributions, as we have seen repeatedly, we can now compute the average treatment effect locally (it is hence often called a local average treatment effect) as LATE = 𝔼_x[ 1(|x_d - c_0| ≤ϵ) (f(1, x) - f(0, x)) ] = 𝔼_x: |x_d - c_0| ≤ϵ[ f(1,x) ] - 𝔼_x: |x_d - c_0| ≤ϵ[ f(0,x) ]. Of course, our assumption here is that we do not observe x_≠ d. Even worse, we never observe f(1,x) when x_d < c_0 nor f(0,x) when x_d > c_0. Instead, we can fit a non-parametric regression model f̂(â, x_d) to approximate 𝔼_x_≠ d | x_d f(â,x) and expect (or hope?) that it extrapolates either before or after the threshold c_0. Then, LATE becomes LATE = 1/2ϵ∫_c_0-ϵ^c_0+ϵ (f̂(1,x_d) - f̂(0,x_d)) dx_d →_ϵ→ 0 f̂(1, c_0) - f̂(0, c_0), thanks to the smoothness assumption on f. The final line above tells us pretty plainly why this approach is called regression discontinuity design. We literally fit two regression models, on the treated and placebo groups, and look at their discrepancy at the decision threshold. The amount of discrepancy implies the change in the outcome due to the change in the action, of course under the strong set of assumptions we have discussed so far. §.§ Double Machine Learning Recent advances in machine learning have opened the door to training large-scale non-parametric methods on high-dimensional data. This allows us to expand some of the more conventional approaches. One such example is double machine learning <cit.>. We briefly describe one particular instantiation of double machine learning here. Recall the instrument variable approach from <ref>. The basic idea was to notice that the action a is determined by two independent sources of information, the confounder x and the external noise ϵ_a: a ← f_a(x, ϵ_a), with x ⊥ ϵ_a. We then introduced an instrument z that is a subset of ϵ_a, such that z is predictive of a but continues to be independent of x. From z, using regression, we capture the part of the variation in a that is independent of x, in order to sever the edge from the confounder x to the outcome y. Then, we use this instrument-predicted action a' to predict the outcome y. We can instead think of fitting a regression model g_a from x to a and using the residual a_⊥ = a - g_a(x) as the component of a that is independent of x, because the residual was not predictable from x. This procedure can now be applied to the outcome, which is written down as y ← f_y(a_⊥, x, ϵ_y). Because x and a_⊥ are independent, we can estimate the portion of y that is predictable from x by building a predictor g_y of y given x. The residual y_⊥ = y - g_y(x) is then what cannot be predicted by x, directly nor via a.
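The full recipe, including the final residual-on-residual regression discussed next, can be sketched as below. This is a simplification: practical double machine learning also uses sample splitting (cross-fitting), which is omitted here, and the choice of regressors is purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

def double_ml_effect(x, a, y):
    """Residual-on-residual sketch of the direct effect of a on y.

    x: (n, d) confounders, a: (n,) actions, y: (n,) outcomes.
    """
    x, a, y = np.asarray(x), np.asarray(a), np.asarray(y)
    g_a = RandomForestRegressor().fit(x, a)   # predict a from x
    a_res = a - g_a.predict(x)                # part of a independent of x
    g_y = RandomForestRegressor().fit(x, y)   # predict y from x
    y_res = y - g_y.predict(x)                # part of y not explained by x
    # The slope of the y-residual on the a-residual estimates the direct
    # effect of a on y (the final regression discussed in the text).
    final = LinearRegression().fit(a_res.reshape(-1, 1), y_res)
    return final.coef_[0]
```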
We are in fact relying on the fact that such a non-parametric predictor captures both causal and spurious correlations indiscriminately. a_⊥ is the part of a that is independent of the confounder x, and y_⊥ is the part of y that is independent of the confounder x. The relationship between a_⊥ and y_⊥ must then be the direct causal effect of the action on the outcome. In other words, we have removed the effect of x on a to close the backdoor path, resulting in a_⊥, and we have removed the effect of x on y to reduce non-causal noise, resulting in y_⊥. What remains is the direct effect of a on the outcome y. We therefore fit another regression from a_⊥ to y_⊥, in order to capture this remaining correlation, which is equivalent to the direct cause of a on y. § BEHAVIOUR CLONING FROM MULTIPLE EXPERT POLICIES REQUIRES A WORLD MODEL A Markov decision process (MDP) is often described as a tuple of the following items: * 𝒮: a set of all possible states * 𝒜: a set of all possible actions * τ: 𝒮×𝒜×ℰ→𝒮: a transition dynamics. s' = τ(s, a, ϵ). * ρ: 𝒮×𝒜×𝒮→ℝ: a reward function. r = ρ(s, a, s'). The transition dynamics τ is a deterministic function but takes noise ϵ∈ℰ as input, which overall makes it stochastic. We use p_τ(s' | s, a) to denote the conditional distribution over the next state given the current state and action, obtained by marginalizing out the noise ϵ. The reward function depends on the current state, the action taken and the next state. It is however often the case that the reward depends only on the next (resulting) state. A major goal is then to find a policy p_π: 𝒮×𝒜→ℝ_>0 that maximizes J(π) = ∑_s_0 p_0(s_0) ∑_a_0 p_π(a_0 | s_0) ∑_s_1 p_τ(s_1 | s_0, a_0) (γ^0 ρ(s_0, a_0, s_1) + ∑_a_1 p_π(a_1 | s_1) ∑_s_2 p_τ(s_2 | s_1, a_1) ( γ^1 ρ(s_1, a_1, s_2) + ⋯) ) = 𝔼_s_0 ∼ p_0(s_0)𝔼_a_0, s_1 ∼ p_π(a_0|s_0) p_τ(s_1|s_0,a_0)𝔼_a_1, s_2 ∼ p_π(a_1|s_1) p_τ(s_2|s_1,a_1)⋯[ ∑_t=0^∞γ^t ρ(s_t, a_t, s_t+1) ] = 𝔼_p_0, p_π, p_τ[ ∑_t=0^∞γ^t ρ(s_t, a_t, s_t+1) ], where p_0(s_0) is the distribution over the initial state and γ∈ (0, 1] is a discounting factor. The discounting factor can be viewed from two angles. First, we can view it conceptually as a way to express how much we care about future rewards. With a large γ, our policy can sacrifice rewards at earlier time steps in return for higher rewards in the future. The other way to think of the discounting factor is purely computational. With γ < 1, we can prevent the total return J(π) from diverging to infinity, even when the length of each episode is not bounded. As we learned earlier when we saw the equivalence between the probabilistic graphical model and the structural causal model in <ref>–<ref>, we can write the policy π as a deterministic function: a ←π(s, ϵ_π). Together with the transition dynamics τ and the reward function ρ, we notice that the Markov decision process can be thought of as defining a structural causal model for each time step t as follows: s is given. a ←π(s, ϵ_π) s' ←τ(s, a, ϵ_s') r ←ρ(s', ϵ_r), where we make the simplifying assumption that the reward only depends on the landing state. Graphically, [obs] (s) s; [latent, below=1.5cm of s] (a) a; [latent, below=0.5cm of s, xshift=1cm] (sn) s'; [latent, right=1cm of sn] (r) r; sa; ssn; asn; snr; Behaviour cloning. With this in mind, let us consider the problem of so-called `behaviour cloning'. In behaviour cloning, we assume the existence of an expert policy π^* that results in a high return J(π^*) from Eq.
(<ref>) and that we have access to a large amount of data collected from the expert policy. This dataset consists of tuples of the current state s, the action by the expert policy a, and the next state s'. We often do not observe the associated reward directly: D = { (s_n, a_n, s'_n) }_n=1^N, where a_n ∼ p_π^*(a | s_n) and s'_n ∼ p_τ(s' | s_n, a_n). Behaviour cloning refers to training a policy π that imitates the expert policy π^* using this dataset. We often train a new policy π by maximizing J_bc(π) = ∑_n=1^N logπ(a_n | s_n). In other words, we ensure that the learned policy π puts a high probability on the action that was taken by the expert policy π^*. Behaviour cloning with multiple experts. It is however often the case that it was not just one expert policy that was used to collect the data but a set of expert-like policies. Furthermore, we often do not know which expert-like policy was used to produce each tuple (s_n, a_n, s'_n). This necessitates treating the policy used to collect these tuples as a random variable that we do not observe, resulting in the following graphical model:[ I am only drawing two time steps for simplicity, however, without loss of generality. ] [latent] (sm1) s_t-1; [latent, right=1cm of sm1] (s) s_t; [latent, right=1cm of s] (sp1) s_t+1; [latent, below=0.5cm of sm1] (rm1) r_t-1; [latent, below=0.5cm of s] (r) r_t; [latent, below=0.5cm of sp1] (rp1) r_t+1; [latent, above=0.5cm of sm1, xshift=1cm] (am1) a_t-1; [latent, above=0.5cm of s, xshift=1cm] (a) a_t; [latent, above=0.5cm of am1, xshift=0.5cm] (pi) π̃; sm1s; ssp1; am1s; asp1; sm1am1; sa; piam1; pia; sm1rm1; sr; sp1rp1; The inclusion of an unobserved π̃ makes the original behaviour cloning objective in Eq. (<ref>) less than ideal. In the original graph, because we sampled both s and a without conditioning on s', there was only one open path between s and a, that is, s→ a. We could thereby simply train a policy to capture the correlation between s and a, which should capture p(a | do(s)). With the unobserved variable π̃, this does not hold anymore. Consider (s_t,a_t). There are now two open paths between these two variables. The first one is the original direct path, s_t → a_t. The second is s_t ← a_t-1←π̃→ a_t. If we naïvely train a policy π on this dataset, the policy will learn to capture the correlation between the current state and the associated action arising from both of these paths. This is not desirable, as the second path is not causal, as we discussed earlier in <ref>. In other words, π(a | s) would not correspond to p(a | do(s)). In order to block this backdoor path, we can use the idea of inverse probability weighting (IPW; <ref>). If we assume we have access to the transition model τ, we can use it to sever the two direct connections into s_t, namely s_t-1→ s_t and a_t-1→ s_t, by 𝔼_a_t ∼ p_π(a_t | do(s_t))[a_t] = 𝔼_s_t[ p_π(a_t | s_t)/ p_τ(s_t | s_t-1, a_t-1) a_t ]. Learned transition: a world model. Of course, we often do not have access to τ directly, but must infer the transition dynamics from data. Fortunately, unlike the policy s → a, the transition (s, a) → s' is not confounded by π̃. We can therefore learn an approximate transition model, sometimes referred to as a world model <cit.>, from data. This can be done by τ̂ = arg max_τ∑_n=1^N log p_τ (s'_n | s_n, a_n).
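A minimal sketch of such a world model for continuous states, modeling p_τ(s'|s,a) as a diagonal Gaussian and fitting it by maximum likelihood; the architecture and names are illustrative assumptions, not prescribed by the text:

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """p_tau(s' | s, a) parameterized as a diagonal Gaussian over the next state."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * state_dim),  # predicts mean and log-variance
        )

    def log_prob(self, s, a, s_next):
        mean, log_var = self.net(torch.cat([s, a], dim=-1)).chunk(2, dim=-1)
        dist = torch.distributions.Normal(mean, torch.exp(0.5 * log_var))
        return dist.log_prob(s_next).sum(-1)

def fit_world_model(model, loader, epochs=10, lr=1e-3):
    """Maximize sum_n log p_tau(s'_n | s_n, a_n) over dataset tuples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for s, a, s_next in loader:  # batches of (s_n, a_n, s'_n)
            loss = -model.log_prob(s, a, s_next).mean()  # negative log-likelihood
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```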
Deconfounded behaviour cloning. Once training is done, we can use τ̂ in place of the true transition dynamics τ to train a de-confounded policy by π̂ = arg max_π∑_n=1^N logp_π(a'_n | s'_n)/p_τ̂(s'_n | s_n, a_n), where a'_n is the next action in the dataset. That is, the dataset now consists of tuples (s_n, a_n, s'_n, a'_n) rather than (s_n, a_n, s'_n). This effectively makes us lose a few examples from the original dataset, those corresponding to the final steps of episodes, although this is a small price to pay to avoid the confounding by multiple expert policies. Causal reinforcement learning. This is an example of how causality can assist us in identifying a potential issue a priori and designing a better learning algorithm without relying on trial and error. In the context of reinforcement learning, a sub-field of machine learning focused on learning a policy, such as behaviour cloning above, this is often referred to as and studied under causal reinforcement learning <cit.>. § SUMMARY In this final chapter, I have touched upon a few topics that were left out of the main chapters, perhaps for no particularly strong reason. These topics included * Difference-in-Difference * Regression discontinuity * Double machine learning * A taste of causal reinforcement learning There are many interesting topics that were not discussed in this lecture note, due both to the lack of time and to the lack of my own knowledge and expertise. I find the following areas particularly interesting and recommend that you follow up on them: * Counterfactual analysis: Can we build an algorithm that can imagine taking an alternative action and guess the resulting outcome instead of the actual outcome? * (Scalable) causal discovery: How can we infer useful causal relationships among many variables? * Beyond invariance (<ref>): Invariance is a strong assumption. Can we relax this assumption to identify a more flexible notion of causal prediction?
http://arxiv.org/abs/2405.08755v1
20240514164037
Distributed Threat Intelligence at the Edge Devices: A Large Language Model-Driven Approach
[ "Syed Mhamudul Hasan", "Alaa M. Alotaibi", "Sajedul Talukder", "Abdur R. Shahid" ]
cs.CR
[ "cs.CR", "cs.AI", "cs.LG" ]
Distributed Threat Intelligence at the Edge Devices: A Large Language Model-Driven Approach Syed Mhamudul Hasan^1,2,3, Alaa M. Alotaibi^1, Sajedul Talukder^1,3, Abdur R. Shahid^1,2,3 ^1School of Computing, Southern Illinois University, Carbondale, IL, USA ^2Secure and Trustworthy Intelligent Systems (SHIELD) Lab ^3Center for Research and Education in AI and Cybersecurity (CARE-AI-C) syedmhamudul.hasan@siu.edu, alaa@siu.edu, sajedul.talukder@siu.edu, shahid@cs.siu.edu May 20, 2024 ====================================================================================================================================================================================================================================================================================================================================================================================================== With the proliferation of edge devices, there is a significant increase in the attack surface of these devices. The decentralized deployment of threat intelligence on edge devices, coupled with adaptive machine learning techniques such as the in-context learning feature of large language models (LLMs), represents a promising paradigm for enhancing cybersecurity on low-powered edge devices. This approach involves deploying lightweight machine learning models directly onto edge devices to analyze local data streams, such as network traffic and system logs, in real time. Additionally, distributing computational tasks to an edge server reduces latency and improves responsiveness while also enhancing privacy by processing sensitive data locally. LLM servers can enable these edge servers to autonomously adapt to evolving threats and attack patterns, continuously updating their models to improve detection accuracy and reduce false positives. Furthermore, collaborative learning mechanisms facilitate peer-to-peer, secure and trustworthy knowledge sharing among edge devices, enhancing the collective intelligence of the network and enabling dynamic threat mitigation measures, such as device quarantine, in response to detected anomalies. The scalability and flexibility of this approach make it well suited to diverse and evolving network environments, as edge devices only send suspicious information, such as network traffic and system log changes, offering a resilient and efficient solution to combat emerging cyber threats at the network edge. Our proposed framework can thus improve edge-computing security by enhancing cyber-threat detection and mitigation and by isolating compromised edge devices from the network. Edge Computing, Threat Intelligence, Machine Learning (ML), Large Language Model (LLM). § INTRODUCTION Edge computing is ubiquitous, and it raises significant privacy and security concerns <cit.>. We propose this approach with the aim of enhancing security by identifying and mitigating potential threats with the help of a Large Language Model (LLM) server. A Large Language Model (LLM) is a class of AI models optimized for specific tasks, such as natural language understanding and generation, which can further be adapted to domain-specific contexts like personalized assistants <cit.> and threat detection <cit.>. The in-context learning capability of an LLM leverages its natural language understanding to acquire new skills from relevant and authentic examples provided in context.
The OpenAI Generative Pre-trained Transformer (GPT) is a specific implementation of an LLM based on the transformer architecture, which has scaled this approach to billions of parameters; GPT-3.5, for instance, has 175 billion parameters. Threat intelligence at the edge refers to the practice of gathering, analyzing, and applying threat-intelligence data at the periphery of a network or system, where interactions with external entities occur. We propose a novel approach to threat intelligence at the edge that involves deploying lightweight AI models directly onto edge devices, such as routers, firewalls, and IoT devices. These models continuously observe network traffic, system logs, and configurations to identify patterns indicative of potential security threats. § SYSTEM ARCHITECTURE The key components of this approach to threat intelligence at the edge are: a lightweight ML model trained to detect anomalies by analyzing network traffic and system logs, to identify malicious activities, and to classify security threats in real time; the Message Queuing Telemetry Transport (MQTT) protocol, a lightweight publish-subscribe protocol used to transfer data to the local edge server and among the edge devices; and a central LLM server trained with up-to-date threat repositories. The central server can also update its knowledge through the in-context learning feature of GPT. The ML model runs on edge devices with limited processing power and memory without compromising performance or resource efficiency. In Figure <ref>, we describe the entire process in terms of four main components: * Edge devices with a lightweight ML model: These ML models are designed to operate efficiently on edge devices with limited computational resources, enabling real-time threat analysis and response without relying heavily on cloud-based resources. By processing threat-intelligence data locally, an edge device can detect security threats in real time by analyzing network data, device logs, and other parameters without introducing significant delays or latency. This allows for immediate response actions, such as blocking malicious traffic or alerting the edge server, to mitigate potential risks. The model can vary from device to device, as every device has its own threat landscape in its deployment environment. * MQTT, which integrates edge devices with edge servers: Edge devices are connected to the edge server via the MQTT channel. The edge devices can also communicate with each other to share data over this channel, thus enabling one-to-many communication. * Edge server deployed locally: The edge server provides the MQTT queue through which edge devices exchange data with each other and with the local edge server, preserving user privacy and security by not transmitting sensitive information to external cloud servers. The edge server monitors device activity and, upon receiving a warning about a compromised edge device, alerts the system administrator and blocks that device from communicating with the others. This decentralized approach minimizes the risk of unauthorized modifications or data breaches through two-way verification. Additionally, communication latency is minimal since, in most scenarios, the edge server and edge devices are geographically close together, which is critical for such communication.
* LLM server: An LLM-centric central threat-intelligence solution can adapt to and learn from new threats and attack vectors over time. Through continuous training and updates, the LLM can improve its accuracy and effectiveness in identifying emerging security threats at the edge. In the case of unknown threats, the edge server, connected through a high-speed network, can query the central LLM server about the type of attack, possible vulnerabilities, and effective solutions for mitigating the risk in the shortest possible time. This approach to threat intelligence at the edge offers a proactive and efficient means of enhancing security in distributed computing environments, where traditional centralized threat-detection mechanisms may be less effective. By leveraging lightweight AI models directly on edge devices, we can detect compromised edge devices under evolving cyber threats. § METHODOLOGY Protecting the other edge devices from a compromised device in near real time involves multiple steps. Firstly, we assume the edge devices have network connectivity and that they are connected to the edge server via the MQTT protocol. All communication is encrypted with Secure Sockets Layer (SSL) to prevent man-in-the-middle attacks. After receiving a message from an edge device via the MQTT queue, the edge server analyzes the issue using a trained ML model; in unknown cases, the edge server consults the central LLM server at regular intervals. The lightweight AI model, implemented with TensorFlow Lite <cit.>, detects any suspicious activity on the device. Upon detection, it alerts the edge server together with meta-information such as location and severity. The edge server analyzes the request, responds to the other edge devices, and notifies the monitoring team. In such an environment, the other edge devices stop communicating with the infected device, protecting valuable data from the attacker. For experimentation, we chose two Raspberry Pis and one Android mobile phone. For edge intelligence, we plan to deploy a trained TensorFlow model to the Raspberry Pi and to convert the TensorFlow ML model to a TensorFlow Lite model deployed on a cell phone or on a wide range of IoT chips and microcontrollers. As every device at the edge gets an ML model, it becomes an intelligent device. We also use an edge server with an Intel(R) Core(TM) i7 and 64 GB of RAM, on which an MQTT broker runs. All the edge devices are connected to the MQTT edge server via a client application. Furthermore, the edge server is connected to a central LLM server, for which we use the OpenAI GPT-3.5 Turbo API, which we will adapt using popular threat repositories. Moreover, we plan to enhance the central server's intelligence through the in-context learning of GPT, where edge servers can update the server model via the central LLM through a REST API call. § CONCLUSION AND FUTURE WORK The innovation of this approach lies in the integration of an LLM with other popular technologies, such as ML applications, edge-server intelligence, and MQTT, to tackle cybersecurity challenges specific to lightweight edge devices. The proposed framework will have an impact on the field of edge-device security by addressing wider classes of attacks, such as white-box poisoning attacks and zero-day attacks, while respecting resource limitations and maintaining data privacy and scalability. In the future, we will provide a practical demonstration of the system to show the framework's applicability in real-world deployments.
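To make the alert flow of the methodology concrete, the following is a minimal sketch of how an edge device could publish a detection to the edge server, assuming the paho-mqtt client library (v1.x API); the broker hostname, port, and topic are hypothetical placeholders, not the values used in the deployment described above:

```python
import json
import paho.mqtt.client as mqtt

BROKER = "edge-server.local"  # hypothetical edge-server hostname
TOPIC = "threat/alerts"       # hypothetical alert topic

def publish_alert(device_id, anomaly_score, meta):
    """Send a suspicious-activity alert from an edge device to the edge server."""
    client = mqtt.Client(client_id=device_id)
    client.tls_set()                  # encrypt the channel (SSL/TLS)
    client.connect(BROKER, 8883)
    payload = json.dumps({
        "device": device_id,
        "score": anomaly_score,
        "meta": meta,                 # e.g. location, severity
    })
    client.publish(TOPIC, payload, qos=1)
    client.disconnect()
```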
http://arxiv.org/abs/2405.09069v1
20240515034013
Rheology of three dimensional granular chute flows at large inertial numbers
[ "Satyabrata Patro", "Anurag Tripathi" ]
cond-mat.soft
[ "cond-mat.soft" ]
Department of Chemical Engineering, Indian Institute of Technology Kanpur, Uttar Pradesh, 208016, India anuragt@iitk.ac.in Department of Chemical Engineering, Indian Institute of Technology Kanpur, Uttar Pradesh, 208016, India The inertial number-based rheology, popularly known as the JFP model, is well known for describing the rheology of granular materials in the dense flow regime. While most of the recent studies focus on the steady-state rheology of granular materials, the time-dependent rheology of such materials has received less attention. Owing to this fact, we perform three-dimensional DEM simulations of frictional, inelastic spheres flowing down an inclined bumpy surface over a wide range of inclination angles and restitution coefficients. We show that steady, fully developed flows are possible at inclinations much higher than those predicted from the JFP model rheology. We show that, in addition to a modified effective friction law, the rheological description also needs to account for the stress anisotropy by means of first and second normal stress difference laws. Rheology of three dimensional granular chute flows at large inertial numbers Anurag Tripathi May 20, 2024 ============================================================================ § INTRODUCTION The rheology of granular materials has been an active research topic for the last few decades due to its wide occurrence in geophysical as well as industrial situations. A number of experimental <cit.> as well as simulation studies using the discrete element method (DEM) <cit.> have been utilized to explore the rheology of granular materials. A detailed review of granular flow rheology in different configurations can be found in <cit.>. These studies have shown that the granular flow between the two limiting cases of quasistatic, slow flows and rapid, dilute flows is controlled by a non-dimensional inertial number I=dγ̇/√(P/ρ_p), which depends on the local shear rate γ̇ and pressure P in addition to the particle size d and density ρ_p. In our previous study, we investigated the rheology of two-dimensional granular chute flows at high inertial numbers and showed that the popularly used JFP model with a saturating behavior of the effective friction coefficient fails to capture the rheology at high inertial numbers <cit.>. Instead, a non-monotonic variation of μ(I) is observed at large inertial numbers with a maximum at I≈ 0.8. The non-monotonic variation of the effective friction coefficient μ, along with a weak power-law relation of the solids fraction ϕ and a normal stress difference law N_1/P with the inertial number I, has also been proposed in Patro et al. <cit.>, which describes the rheology at large inertial numbers. Now, we explore the rheology of three-dimensional granular chute flows for a wide range of inclination angles and check whether the flow behavior observed in the 2D DEM simulations is observable in 3D DEM simulations as well. Hence, we investigate the rheology of slightly polydisperse, inelastic spheres flowing down a rough inclined plane using discrete element method (DEM) simulations over a wide range of inclination angles. Two different layer heights in the settled configuration, L_y=20d and L_y=40d, are considered. We show that steady, fully developed flows are possible at high inertial numbers I>0.8.
We also show that, in addition to the first normal stress difference, which was found to be significant in 2D, the second normal stress difference also becomes important in the case of 3D flows. The restitution coefficient considered in this study is e_n=0.5. Due to the classical results of Silbert et al. <cit.>, which showed that the restitution coefficient has no significant role in the case of sufficiently frictional grains, most studies typically use a normal coefficient of restitution close to e_n=0.9. Our results in <cit.> show that, with this default choice of restitution coefficient, observing the transition from the dense to the dilute regime becomes very difficult, since a small change in the inclination angle leads to accelerating flows. In order to obtain a large range of inclination angles at which non-accelerating flows are observable, we use this lower value of the normal coefficient of restitution. We wish to emphasize that the choice of restitution coefficient e_n=0.5 is not unrealistic: quite a few studies <cit.> report restitution coefficients of industrial grains near 0.5. § SIMULATION METHODOLOGY
The discrete element method (DEM) is used to perform three-dimensional simulations of slightly polydisperse particles (± 5 % polydispersity) of mean diameter d=1mm flowing down a rough and bumpy inclined surface. The bumpy base is made of static spheres of diameter 2d. The schematic view of the simulation setup is shown in Fig. <ref>, where one set of spheres represents the flowing particles and the other the static particles of the bumpy base. The length and width of the simulation box are L_x=L_z=20d along the x and z directions. The height of the simulation box is kept sufficiently large so that the particles do not feel the presence of the upper surface. In order to simulate an infinitely long inclined flow without end effects and to neglect variations in the vorticity direction, periodic boundary conditions are used in the flow (x) and vorticity (z) directions. Fig. <ref>(a) shows the simulation setup where the settled layer height is L_y=20d and Fig. <ref>(b) shows the setup where the settled layer height is L_y=40d. The number of particles in the first case is N_p=8000, whereas the number of particles in the second case is N_p=16000. Particles are modeled as soft, deformable, inelastic, frictional spheres. The contact force between the spheres is modeled using the Hertz-Mindlin model in the LIGGGHTS-PUBLIC 3.0 open-source DEM package. Details about the contact force models, particle generation, particle settling, and property calculation are given in <cit.>. Consider the steady and fully developed granular flow over a surface inclined at an angle θ under the influence of gravity. The schematic of the granular fluid flowing down an inclined plane is shown in Figure <ref>. Assuming a unidirectional flow in the x-direction, the momentum balance equations in the x and y directions simplify to 0=-∂τ_yx/∂y+ρ g sinθ, 0=-∂σ_yy/∂y-ρ g cosθ. Using the boundary conditions of zero shear stress and pressure at the free surface, and assuming a constant density ρ (i.e., constant solids fraction ϕ) across the layer, Eqs. (<ref>) and (<ref>) can be integrated to obtain τ_yx=-ρ g(h-y)sinθ, σ_yy=ρ g(h-y)cosθ. The effective friction coefficient μ(I), defined as the ratio of the magnitude of the shear stress |τ_yx| to the confining pressure P, i.e., μ(I)=|τ_yx|/P, depends only on the dimensionless inertial number I in the dense flow regime. Mandal et al. <cit.> have proposed a non-monotonic variation of the effective friction coefficient μ(I) as μ(I)=μ_s+(c_1-c_2 I)/(1+I_0/I), where μ_s, I_0, c_1 and c_2 are the model parameters. The solids fraction ϕ shows a power-law variation with the inertial number I, given as ϕ(I)=ϕ_max-aI^α, where ϕ_max, a and α are the model parameters. Simulation results in this study show that the ratio of the first normal stress difference N_1=σ_xx-σ_yy to the pressure P, as well as the ratio of the second normal stress difference N_2=σ_yy-σ_zz to the pressure P, are functions of the inertial number I, i.e., N_1/P=f_1(I) and N_2/P=f_2(I), where P=(σ_xx+σ_yy+σ_zz)/3 is one third of the trace of the stress tensor. These functional forms can be used to relate the pressure P to the normal stress σ_yy as P=σ_yy/[1+(f_2(I)-f_1(I))/3]. Using Eq. (<ref>) we obtain P=ρ g(h-y)cosθ/[1+(f_2(I)-f_1(I))/3].
Using the expression for the inertial number, I=dγ̇/√(P/ρ_p), with γ̇=dv_x/dy, and rearranging, we get dv_x/dy=(I/d)√(P/ρ_p). Using Eq. (<ref>), Eq. (<ref>) can be integrated to obtain the velocity in the x-direction as v_x=v_slip+(2/3)(I/d)√(ϕ g cosθ/[1+(f_2(I)-f_1(I))/3]) [h^3/2-(h-y)^3/2], where v_slip is the slip velocity at the base. Using Eqs. (<ref>) and (<ref>) in Eq. (<ref>), the effective friction coefficient μ at any inclination θ is obtained as μ(I)=tanθ [1+(f_2(I)-f_1(I))/3]. Using our simulation data over a large range of I, we find that f_1(I) shows a linear variation for the range of inertial numbers considered in this study. We also find that f_2(I) shows a quadratic variation with the inertial number. Thus we can express f_1(I)=AI+B and f_2(I)=CI^2+DI+E for the range of inertial numbers of our interest. Using Eq. (<ref>), along with the linear form of f_1(I) and the quadratic form of f_2(I), in Eq. (<ref>), we obtain an algebraic equation G(I)=A_0I^3+B_0I^2+C_0I+D_0=0. This algebraic equation needs to be solved to obtain the value of the inertial number at a given inclination θ. By using f_1(I)=AI+B and f_2(I)=CI^2+DI+E, we get the coefficients of the cubic equation as A_0=-Ctanθ, B_0=(A-D-CI_0)tanθ-c_2, C_0=(-3+B-E+AI_0-DI_0)tanθ+3μ_s+3μ_m and D_0=((-3+B-E)tanθ+3μ_s)I_0. Solving the cubic equation G(I)=0, we obtain three possible values of the inertial number I for a particular inclination θ. Only one of these values is found to be realistic. Using this value of the inertial number I for a given inclination angle θ, the solids fraction ϕ is obtained using Eq. (<ref>). With the increase in the inclination angle θ, the inertial number I increases and the solids fraction ϕ decreases, leading to an increase in the overall height of the flowing layer. This increased flowing-layer height H at any inclination θ can be calculated from a mass balance over the entire layer using H = H_minϕ_max/ϕ, where H_min is the minimum height of the static layer having the maximum solids fraction ϕ_max. This value of H is used to obtain the velocity v_x(y) using Eq. (<ref>). § RESULTS AND DISCUSSION We present DEM simulation results for the flow of spheres over a bumpy inclined surface starting from two different settled layer heights of L_y=20d and L_y=40d for restitution coefficient e_n=0.5, where d is the mean particle diameter. All the quantities reported are non-dimensionalized using d, m, √(d/g) and mg as the length, mass, time, and force units, respectively, where m is the mass of a particle having diameter d and g is the gravitational acceleration. §.§ Existence of steady flows We perform DEM simulations for inelastic spheres having restitution coefficient e_n=0.5, spanning a wide range of inclination angles from θ=22^∘ to 38^∘, for the initial flowing-layer thickness L_y=40d. The average kinetic energy of the particles at these inclinations is shown in Fig. <ref>(a) and Fig. <ref>(b). The average kinetic energy keeps increasing with time and eventually becomes constant for all the inclination angles considered, indicating that the system is able to achieve a steady state at these inclinations. This confirms that steady, fully developed flows are possible over a larger range of inclinations for e_n=0.5. The average kinetic energy of the particles at steady state is plotted against the inclination angle in Fig. <ref>(c). As expected, the kinetic energy increases with inclination.
A linear increase in the average kinetic energy is observed with the inclination angle θ. Fig. <ref>(d) shows the time taken to achieve 95% of the steady-state kinetic energy, denoted as T_95, for different inclinations. The time taken to reach a steady state increases with the inclination angle, and a sharp change is observable after θ=32^∘. Note that the kinetic energy on the y-axis is non-dimensionalized using mgd, where m is the mass, d is the particle diameter and g is the acceleration due to gravity. §.§ Rheological model From the DEM simulation data obtained for the two flowing-layer thicknesses L_y=40d and L_y=20d, we report the variation of the effective friction coefficient μ, the solids fraction ϕ, the ratio of the first normal stress difference to the pressure N_1/P, and the ratio of the second normal stress difference to the pressure N_2/P with the inertial number I for e_n=0.5 in Fig. <ref>. The black circles correspond to the DEM simulations starting from a settled layer height of L_y=40d and the red circles correspond to those starting from a settled layer height of L_y=20d. The data for the two cases do not differ significantly from each other and can be described using a single curve. The black solid line represents the best fit to the data accounting for both layer heights. The effective friction coefficient obtained from the simulations (shown using black and red symbols in Fig. <ref>(a)) increases up to I≈1.8 and shows a saturating behavior at large inertial numbers. There is a slight decrease in the effective friction coefficient μ at large inertial numbers I; however, the decrease is not prominent, and the inclusion of μ-I data for inclination angles θ>38^∘ is needed to confirm the decrease in the effective friction coefficient at large inertial numbers. Given the sharp increase in computational time at higher inclinations, we proceed with the analysis of the existing data. Fitting the 4-parameter MK model of Eq. (<ref>) to the simulation data shown in Fig. <ref>(a) indicates that the MK model (shown using solid lines) is able to describe the μ-I variation very well. Note that the simulation data for the inclination angles θ=22^∘-38^∘ are considered to obtain the model parameters. The MK model parameters are reported in Table <ref>. The variation of the solids fraction is plotted against the inertial number in Fig. <ref>(b). The solid line shown in Fig. <ref>(b) is obtained by fitting the power-law variation of ϕ with I (Eq. (<ref>)) to the simulation data. The dilatancy-law model parameters are reported in Table <ref>. In Fig. <ref>(c), we report the variation of the ratio of the first normal stress difference to the pressure, N_1/P, with the inertial number I. This variation is described well by a linear relation of the form N_1/P=f_1(I)=AI+B. The first normal stress difference law model parameters are reported in Table <ref>. We also report the variation of the ratio of the second normal stress difference to the pressure, N_2/P, with the inertial number I. This variation is described well by a quadratic relation of the form N_2/P=f_2(I)=CI^2+DI+E. The second normal stress difference law model parameters are reported in Table <ref>. A comparison of Fig. <ref>(c) and Fig.
<ref>(d) shows that, while N_1/P is zero at low inertial numbers, N_2/P is found to be positive even for quasistatic systems with I∼0. In addition, N_2/P>N_1/P for I≤1.25. For I>1.25, however, N_1/P becomes larger than N_2/P. This transition of N_2/P-N_1/P from positive to negative at I∼1.25 needs to be explored further. §.§ Theoretical predictions Using the rheological model parameters given in Tables <ref>-<ref>, we predict various flow properties of interest. These theoretical predictions are reported as solid lines in Figs. <ref> and <ref>. The velocity profiles predicted by the theory are found to be in excellent agreement with the simulation results for inclination angles ranging from θ=22^∘ to 32^∘ (see Fig. <ref>(a)). The slip velocity, which appears to be small for these inclinations, is taken from the simulation data. The predicted values of the solids fraction ϕ are also found to be in very good agreement for inclination angles θ=22^∘ to θ=32^∘, as shown in Fig. <ref>(b). The theoretical inertial numbers are likewise in good agreement with the inertial numbers obtained from the DEM simulations (see Fig. <ref>(c)). Figs. <ref>(a-d) show the comparison between the DEM simulations and the theoretical predictions for the pressure P, shear stress τ_yx, first normal stress difference N_1, and second normal stress difference N_2 for the range of inclinations θ=22^∘ to θ=32^∘. The theoretical predictions using the rheological parameters are in very good agreement with the DEM simulation results. A comparison of N_1 and N_2 shows that N_2 is higher than N_1 for this range of inclinations. We next use the rheological model parameters given in Tables <ref>-<ref> to predict various flow properties at large inclination angles ranging from θ=34^∘ to 38^∘. These theoretical predictions are reported as solid lines in Figs. <ref> and <ref>. The velocity profiles predicted by the theory are found to be in good agreement with the simulation results even at these large inclination angles (see Fig. <ref>(a)). We use the slip velocity obtained from the DEM simulations in the theoretical predictions. The predicted values of the solids fraction ϕ are also found to be in very good agreement for inclination angles θ=34^∘ to θ=38^∘, as shown in Fig. <ref>(b). In addition, the theoretical inertial numbers are in good agreement with those obtained from the DEM simulations (see Fig. <ref>(c)), except at the largest inclination angle θ=38^∘, where the theory predicts an inertial number I=1.76 compared with I=1.9 obtained from the DEM simulations. Figs. <ref>(a)-(d) show the comparison between the DEM simulations and the theoretical predictions for the pressure P, shear stress τ_yx, first normal stress difference N_1, and second normal stress difference N_2 for the large inclination angles ranging from θ=34^∘ to θ=38^∘. The theoretical predictions using the rheological parameters are in very good agreement with the DEM simulation results. Many industrial granular flows in conveyors and chutes have flowing-layer thicknesses in the range 10d-20d. It is well known that the inertial number I and the solids fraction ϕ are nearly constant in the bulk region of the flowing layer of a chute flow at any given inclination angle θ. The bulk region is defined as the region away from the base of the chute and the free surface. The bulk region is very important for estimating the rheological properties.
A lack of a bulk region in the case of low flowing-layer thickness leads to fluctuations of the flow properties in the bulk and hence to significant errors in estimating the rheological parameters. Due to the lack of a bulk region at the low flowing-layer thickness L_y=20d, we have performed DEM simulations starting from a settled flowing-layer height of L_y=40d. We now use the rheological model parameters given in Tables <ref>-<ref>, obtained from the flowing-layer thickness L_y=40d, to predict various flow properties over the range of inclination angles θ=22^∘-38^∘ for the low flowing-layer thickness L_y=20d, which is the initial settled flowing-layer thickness. These theoretical predictions are reported as solid lines in Figs. <ref> and <ref>. The velocity profiles predicted by the theory are found to be in good agreement with the simulation results for inclination angles ranging from θ=22^∘ to 32^∘ (see Fig. <ref>(a)). Figs. <ref>(b) and <ref>(c) show that the predicted values of the solids fraction ϕ and inertial number I are also in very good agreement for inclination angles θ=22^∘ to θ=32^∘. Figs. <ref>(a-d) show the comparison between the DEM simulations and the theoretical predictions for the pressure P, shear stress τ_yx, first normal stress difference N_1, and second normal stress difference N_2 for inclination angles ranging from θ=22^∘ to θ=32^∘. Again, the theoretical predictions using the rheological parameters are in very good agreement with the DEM simulation results. Next, the theoretical predictions for the various flow properties of interest have been compared with the DEM simulation results for higher inclination angles in the range θ=34^∘ to θ=38^∘. The predictions from the rheological parameters obtained using the simulation data of the L_y=40d layer are in very good agreement with the DEM simulation results for the low flowing-layer thickness L_y=20d, suggesting that rheological parameters obtained from large flowing-layer-thickness data can be used to predict the flow properties for low flowing-layer thicknesses as well (shown in Figs. <ref>-<ref>). §.§ Bulk average properties Figs. <ref>(a)-<ref>(d) summarise the results over the entire range of inclinations investigated in this study and show the variation of the inertial number I_bulk, solids fraction ϕ_bulk, height of the bulk region of the flowing layer h_bulk and the slip velocity at the base v_slip as functions of the inclination angle θ for restitution coefficient e_n=0.5. The slope of each curve at the lower angles differs significantly from that at the higher angles. This slope contrast is evident in the case of the slip velocity shown in Fig. <ref> for the different flow regimes: dense flows with negligible slip velocity at lower angles and rapid, dilute flows with large slip velocities at higher angles. The steady-state solids fraction ϕ is found to decrease with increasing inclination angle θ, as seen in Fig. <ref>(b). The decrease in the solids fraction is accompanied by a dilation of the bulk flowing-layer thickness h_bulk=h_initial×ϕ_max/ϕ_steady, which increases with the inclination angle, as shown in Fig. <ref>(c). The slip velocity at the base of the chute, shown in Fig. <ref>(d), increases with the inclination angle, and its slope changes gradually beyond θ=30^∘.
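The theoretical predictions reported above follow a simple numerical recipe: at a given inclination, solve the cubic G(I)=0 for the inertial number, then evaluate the dilatancy law and the mass balance for the dilated layer height. A minimal sketch is given below; the coefficient routine and the parameter dictionary are placeholders for the expressions and fitted values given in the text and the tables, and picking the smallest positive real root as the physical one is an assumption:

```python
import numpy as np

def predict_flow(theta, params):
    """Solve G(I) = A0*I^3 + B0*I^2 + C0*I + D0 = 0 at inclination theta (rad)."""
    # cubic_coefficients is a hypothetical helper implementing the expressions
    # for A0, B0, C0, D0 given in the text (functions of tan(theta) and the
    # rheological model parameters).
    A0, B0, C0, D0 = cubic_coefficients(np.tan(theta), params)
    roots = np.roots([A0, B0, C0, D0])
    real_pos = roots[np.isreal(roots)].real
    I = real_pos[real_pos > 0].min()   # assumed to be the realistic root
    phi = params["phi_max"] - params["a"] * I ** params["alpha"]  # dilatancy law
    H = params["H_min"] * params["phi_max"] / phi                 # mass balance
    return I, phi, H
```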
§.§ Role of normal stress differences In this section, we discuss the effect of the first and second normal stress differences on the flow properties of granular materials for the range of inclination angles θ=22^∘ to θ=32^∘. Figure <ref>(a) shows the theoretical predictions of the inertial number using the JFP model with and without considering the effect of the first and second normal stress differences, i.e., by assuming f_1(I)=N_1/P=0 and f_2(I)=N_2/P=0 in the latter case. Symbols represent the DEM simulation results. The dashed lines correspond to the theoretical predictions of the inertial number I using the JFP model where the normal stress differences have been ignored, whereas the solid lines correspond to the predictions where the normal stress differences have been taken into account. It is evident from Figure <ref>(a) that the theoretical inertial number starts to deviate from the DEM simulation results as the inclination angle θ increases. Figure <ref>(b) shows the corresponding theoretical predictions using the modified rheological model, again with and without the effect of the first and second normal stress differences (the latter case assumes f_1(I)=N_1/P=0 and f_2(I)=N_2/P=0). As before, symbols represent the DEM simulation results, dashed lines the predictions ignoring the normal stress differences, and solid lines the predictions including them. A deviation of the theoretical predictions of the inertial number I from the DEM simulation results is observed in the case of the modified rheological model as well, and this deviation is more significant at higher inclinations. These results suggest that the role of the normal stress differences becomes crucial not only at large inclinations but also at small ones, and one must consider these effects in order to predict the flow properties accurately. Neglecting the effect of the normal stress differences leads to significant errors in the theoretical predictions. §.§ Oscillations in the steady flow The steady-state oscillations in the bulk layer height h_bulk are shown in Fig. <ref>(a) over a time period of 200 time units starting from a time instant t_0. Oscillations in the center of mass y_com are shown in Fig. <ref>(b). These height measurements are done after the flow has achieved steady kinetic energy (hence t_0 varies with the inclination angle θ). The oscillations in the bulk layer height h_bulk as well as in the center of mass y_com keep increasing with the inclination angle. While the difference between the maximum and minimum bulk height is less than the particle diameter for inclinations θ≤ 30^∘, this variation becomes Δ h_bulk≈ 4d for the θ= 38^∘ case (Fig. <ref>(a)). These oscillations indicate that the role of density and height variations becomes important and cannot be ignored for granular flows at inertial numbers comparable to unity. In order to investigate the time-periodic behavior of the layer at high inertial numbers, we performed a Fast Fourier Transform (FFT) analysis of the time-series data of the center of mass position at steady state. Figure <ref> shows the amplitude spectrum of the center of mass for different inclinations.
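Such an amplitude spectrum can be obtained directly from the sampled center-of-mass time series; a minimal sketch, assuming uniformly spaced samples with spacing dt in the same time units as the text:

```python
import numpy as np

def amplitude_spectrum(y_com, dt):
    """One-sided amplitude spectrum of a steady-state time series."""
    y = np.asarray(y_com) - np.mean(y_com)   # remove the steady-state mean
    amp = np.abs(np.fft.rfft(y)) / len(y)    # amplitude per frequency bin
    freq = np.fft.rfftfreq(len(y), d=dt)     # frequencies in 1/time units
    return freq, amp

# Dominant oscillation frequency (skipping the zero-frequency bin):
# freq, amp = amplitude_spectrum(y_com, dt)
# f_peak = freq[np.argmax(amp[1:]) + 1]
```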
The amplitude spectrum for inclinations θ=22^∘ to θ=28^∘ shows a nearly uniform distribution over all frequencies, as shown in Fig. <ref>(a). At higher inclinations, a dominant frequency with a clear peak starts to appear, as shown in Fig. <ref>(b). The occurrence of a dominant frequency in the amplitude spectrum confirms that the variation of the layer height occurs with a characteristic time period at these large inclinations. The peak frequency keeps moving to lower values as the inclination angle increases, indicating that the time period of the oscillations increases with the inclination angle. Figure <ref>(b) also shows that the amplitude of the oscillations at θ=38^∘ is nearly an order of magnitude higher than at θ=30^∘. FFT analyses of the bulk layer height, center of mass, and kinetic energy data are shown for three different inclinations, θ=30^∘, θ=34^∘, and θ=38^∘, in Fig. <ref>(a-c). The three curves correspond to the center of mass position, the bulk flowing-layer thickness, and the kinetic energy, respectively. We find that the maximum amplitude of the spectrum is observed at the same frequency f=0.05 for the different properties at the inclination θ=38^∘. For θ=34^∘ and θ=30^∘, the corresponding frequencies are f=0.09 and f=0.13, respectively. Hence we conclude that the oscillations at high inclinations affect these flow properties in a nearly identical fashion. We also investigate the role of the simulation box size on the dominant frequency of oscillation and its corresponding amplitude. For this, we have considered three different simulation box sizes with base areas 15d×15d, 20d×20d and 30d×30d, where d is the mean particle diameter. The initial height of the simulation setup at t=0 is the same, 40d, in all three cases. Fig. <ref> shows the FFT analysis of the center of mass data at θ=38^∘ for the three different box sizes. We find that the maximum amplitude of the spectrum occurs at the same frequency f=0.05 at the inclination θ=38^∘ for all three box sizes, confirming that the amplitude and frequency of the oscillation are independent of the simulation box size. § RHEOLOGY AT LARGE INCLINATIONS From the DEM simulation data obtained for the flowing-layer thickness L_y=40d, we report the variation of the effective friction coefficient μ, solids fraction ϕ, the ratio of the first normal stress difference to the pressure N_1/P and the ratio of the second normal stress difference to the pressure N_2/P with the inertial number I for e_n=0.5 in Fig. <ref>. The different coloured symbols represent the data for different inclinations. The effective friction coefficient obtained from the simulations (shown using the different symbols in Fig. <ref>(a)) increases up to I≈1.8 and shows a sudden decrease at large inertial numbers. A slight decrease in the effective friction coefficient μ at large inertial numbers I was reported in the earlier sections up to the inclination angle θ=38^∘; however, that decrease was not prominent. The inclusion of μ-I data for inclination angles θ>38^∘ confirms the decrease in the effective friction coefficient at large inertial numbers. § CONCLUSIONS We perform three-dimensional DEM simulations of slightly polydisperse, frictional, inelastic spheres flowing down a bumpy inclined plane over a wide range of inclination angles.
We have considered two different flowing layer thicknesses: a thin layer with a settled height of L_y=20d and a thick layer with a settled height of L_y=40d. We are able to observe steady flows for inclinations up to θ=38^∘ for e_n=0.5. The steady flow observed at θ=38^∘ corresponds to an inertial number as high as I≈1.9 and a solids fraction as low as ϕ≈0.38. The flows at such high inertial numbers exhibit significant slip velocity at the bumpy base. However, they also exhibit a bulk core region with a nearly constant solids fraction, as typically observed in the case of dense flows. The solids fraction in the bulk becomes as low as ϕ≈0.38, and the height of the layer increases up to 70d starting from a settled height of 40d for the highest inclination angle θ=38^∘ considered in this study. Using the simulation data over a large range of inertial numbers, we find that the key conclusions of the modified rheology previously reported for 2D systems in <cit.> are observable in 3D as well. Given the range of inclination angles θ=22^∘-38^∘ considered in this study, we were unable to observe the decreasing part of the μ-I curve at large inertial numbers; the inclusion of higher-angle data θ>38^∘ may confirm the non-monotonic variation. However, even for this limited range of data, we find that the μ-I relation of the modified rheological model fits the data better than the JFP model. A power-law relation describes the ϕ-I data well over the entire range of I. In addition, laws for both the first and the second normal stress differences, relating the normal stress difference to pressure ratios f_1(I)=N_1/P and f_2(I)=N_2/P to the inertial number I, are also proposed. We show that these normal stress difference laws need to be considered to describe the rheology at large values of the inertial number. It is worth highlighting that periodic oscillations in the flow properties at steady state become prominent at high inclinations. The bulk layer height, the center-of-mass position, and the kinetic energy oscillate around their steady-state values with a characteristic frequency beyond a certain inclination angle. The dominant frequency for all these properties at inclination θ=38^∘ shows a peak around f=0.05. We conclude that the modified form of the μ-I rheology using the MK model, along with the dilatancy law and both normal stress difference laws, is able to predict the flow behavior for most of the bulk layer and is in good agreement with the simulation results up to the inclination angle θ=38^∘. § ACKNOWLEDGEMENTS AT gratefully acknowledges the financial support provided by the Indian Institute of Technology Kanpur via the initiation grant IITK/CHE/20130338. § DATA AVAILABILITY The data that support the findings of this study are available from the corresponding author upon request.
http://arxiv.org/abs/2405.09526v1
20240515173529
Forward & Far-Forward Heavy Hadrons with JETHAD: A High-energy Viewpoint
[ "Francesco Giovanni Celiberto" ]
hep-ph
[ "hep-ph", "hep-ex", "nucl-ex", "nucl-th" ]
Forward & Far-Forward Heavy Hadrons with JETHAD: A High-energy Viewpoint

Francesco Giovanni Celiberto [orcid=0000-0003-3299-2203], francesco.celiberto@uah.es
Universidad de Alcalá (UAH), Departamento de Física y Matemáticas, Campus Universitario, E-28805 Alcalá de Henares, Madrid, Spain

Inspired by recent findings that semi-inclusive detections of heavy hadrons exhibit fair stabilization patterns in high-energy resummed distributions against (missing) higher-order corrections, we review and extend our studies on the hadroproduction of light and heavy hadrons tagged in forward and far-forward rapidity ranges. We analyze the behavior of rapidity rates and angular multiplicities via the JETHAD method, where the resummation of next-to-leading energy logarithms and beyond is consistently embodied in the collinear picture. We explore kinematic regions that are within typical LHC acceptances, as well as novel sectors accessible thanks to the combined tagging of a far-forward light or heavy hadron at future Forward Physics Facilities and of a central particle at LHC experiments via a precise timing-coincidence setup.

Keywords: Heavy Flavor, QCD Resummation, LHC Phenomenology, Forward Physics Facilities, Hadron detections

§ INTRODUCTION New paths in the exploration of fundamental interactions by next-generation colliders <cit.> mark the turn of a new era in particle physics. Venturing into uncharted kinematic territories permits stringent analyses of the SM and direct or indirect quests for deviations from its predictions. The strong-interaction sector presents significant challenges within the SM. Here, the interplay between perturbative and nonperturbative aspects of QCD gives rise to unresolved puzzles regarding fundamental questions, including the origin of hadron mass and spin, as well as the behavior of QCD observables in critical kinematic regimes. Precise examinations of the dynamics underlying strong interactions rely on two essential components. Firstly, the ability to perform increasingly accurate calculations of high-energy parton scatterings through higher-order perturbative QCD techniques. Secondly, the understanding of proton structure, dictated by the motion and spin interactions among constituent partons. A series of successes in describing data for hadron, lepton, and lepton-hadron reactions has been achieved through collinear factorization <cit.>, where partonic cross sections, computed within pure perturbative QCD, are convoluted with collinear PDFs. These PDFs encode information about the likelihood of finding a parton inside the struck hadron with a specific longitudinal momentum fraction, x. They evolve according to the DGLAP equations <cit.>. Collinear PDFs are well suited to describing inclusive or semi-inclusive observables weakly sensitive to low-transverse-momentum regimes. Similarly, collinear FFs portray the production mechanism of identified hadrons, detailing the probability of generating a specific final-state hadron with momentum fraction z from an outgoing collinear parton with longitudinal fraction ζ ≡ x/z. However, by relying on collinear PDFs only, one overlooks information about the transverse-space distribution and motion of partons. Thus, the description provided by collinear factorization can be seen as a one-dimensional mapping of hadron properties.
Achieving an accurate portrayal of low-transverse-momentum observables necessitates embracing a three-dimensional perspective, which permits the capture of intrinsic effects stemming from the transverse motion and spin of partons, and from their interaction with the polarization state of the parent hadron. Such a tomographic representation of hadrons is naturally provided by TMD factorization (see Refs. <cit.> and references therein). Furthermore, given the nonperturbative nature of parton densities and fragmentation functions, they must be extracted from data through global fits encompassing various hadronic processes. Despite the significant successes of the collinear approach, certain kinematic regions demand a departure from the fixed-order, DGLAP-driven description. Enhanced logarithmic contributions, which enter the perturbative expansion of the strong running coupling α_s with increasingly higher powers, must be included to all orders to restore the convergence of the perturbative series. These logarithms, depending on the kinematic regimes, necessitate specific all-order resummation techniques. For instance, accurately describing the differential distributions for inclusive hadron, boson, or DY lepton production at low |q⃗_T| demands employing TM resummation techniques <cit.>. TM-resummed predictions have recently been extended to various reactions, including the hadroproduction of photons <cit.>, Higgs bosons <cit.>, W-boson pairs <cit.>, boson-jet <cit.> and Z-photon systems <cit.>, as well as DY and Higgs spectra <cit.>. Moreover, almost back-to-back final states lead to Sudakov-type logarithms, which should be resummed as well <cit.>. Conversely, when a physical observable is defined and/or measured close to its phase-space edges, Sudakov effects originating from soft and collinear gluon emissions near threshold become significant and require appropriate resummation techniques. Various approaches exist in the literature for achieving threshold resummation for inclusive rates <cit.>. Recent studies have also addressed the resummation of rapidity distributions <cit.>. The standard fixed-order description of cross sections has been enhanced by incorporating threshold resummation for numerous processes, including DY <cit.>, scalar and pseudoscalar Higgs production <cit.>, bottom annihilation <cit.>, DIS <cit.>, electron-positron SIA <cit.>, and spin-2 boson production <cit.>. Additionally, a combined TM-plus-threshold resummation for TM distributions of colorless final states has been developed <cit.>. From a partonic perspective, the threshold regime corresponds to the large-x limit. Prime determinations of large-x improved collinear PDFs were achieved in Ref. <cit.>. Additionally, as x approaches one, the impact of target-mass power corrections becomes significant. These corrections have been studied extensively in the literature <cit.>, particularly in the context of large-x DIS events <cit.>. Another crucial kinematic regime, sensitive to logarithmic enhancements, is the semi-hard (or Regge–Gribov) sector <cit.>, characterized by the scale hierarchy √(s) ≫ {Q} ≫ Λ_QCD, with √(s) the center-of-mass energy, {Q} one or a set of process-characteristic energy scales, and Λ_QCD the QCD hadronization scale. Here, large energy logarithms of the form ln(s/Q^2) compensate for the smallness of α_s, potentially spoiling the convergence of the perturbative series. As in the previous cases, these logarithms must be resummed to all orders. The BFKL formalism is the most powerful tool for such resummations <cit.>.
BFKL resummation accounts for contributions proportional to α_s^n ln(s/Q^2)^n at the LL level, and for terms proportional to α_s^{n+1} ln(s/Q^2)^n at the NLL level. In the BFKL framework, the imaginary part of amplitudes (and also the cross sections of inclusive processes, via the optical theorem <cit.>) takes the form of a high-energy factorization, where TM-dependent functions play a key role <cit.>. Specifically, the BFKL amplitude is a convolution between two singly off-shell emission functions (also known in the BFKL jargon as forward-production impact factors), describing the transition from each parent particle to the outgoing objects in its fragmentation region, and a Green's function, which evolves according to the BFKL integral equation. The kernel of this equation has been computed with NLO accuracy for forward scatterings <cit.> and partially at next-to-NLO <cit.>. Conversely, the emission functions are sensitive to the given final state. They are also {Q}-dependent, but s-independent. Thus, they represent the most intricate piece of a BFKL computation, and only a few of them are currently known at NLO. We mention: a) collinear-parton functions <cit.>, which are needed to compute b) forward-jet <cit.> and c) light-hadron <cit.>, d) virtual photon to light vector meson <cit.>, e) light-by-light <cit.>, and f) forward Higgs boson in the infinite top-mass limit <cit.> (see also Refs. <cit.>) emission functions. At LO only, one additionally has the DY-pair <cit.>, heavy-quark pair <cit.>, and low-|q⃗_T| J/ψ <cit.> impact factors. A primary category of processes that act as probe channels for the high-energy resummation involves single forward emissions. In these cases, the cross section for an inclusive process follows the typical BFKL-factorized structure. Specifically, when at least one hadron participates in the initial state, the impact factor governing the production of the forward identified particle is convoluted with the BFKL Green's function and a nonperturbative quantity known as the hadron impact factor. The sub-convolution of the latter two components provides an operational definition of the BFKL UGD: the hadron impact factor serves as the initial-scale UGD, while the Green's function governs its low-x evolution. Given the forward kinematics, the struck parton is predominantly a gluon carrying a small longitudinal-momentum fraction. Hence, in this context, high-energy resummation effectively amounts to low-x resummation. A similar high-energy factorization formula applies to the imaginary part of the amplitude for exclusive single forward processes. This is feasible because, in the forward limit, skewness effects are suppressed, allowing for the use of the same UGD. In more general off-forward configurations, one would instead consider low-x enhanced GPDs <cit.>. An intriguing subset of inclusive forward reactions involves proton-initiated processes. Here, a hybrid high-energy and collinear factorization is employed, where the forward object originates from a fast parton with a moderate x, described by a collinear PDF, while the other proton is characterized by the UGD. Studies of the BFKL UGD trace back to the growing interest in forward physics at HERA. Investigations of DIS structure functions at low x were undertaken in Refs. <cit.>. Subsequently, different UGD models were scrutinized against HERA data for exclusive light vector-meson electroproduction in Refs. <cit.>. Evidence of low-x dynamics is expected to emerge from ρ-meson studies at the EIC <cit.>.
Similarly, insights are sought from the photoemission of quarkonium states <cit.>. Forward DY and single inclusive b-quark tags at the LHC serve as hadronic probes of the UGD <cit.>. Besides single forward emissions, the low-x QCD sector can also be probed via gluon-induced single central productions. In these processes, the cross sections are formulated in a purely high-energy factorized form, involving a convolution between two BFKL UGDs and a central-production impact factor, which embodies a doubly off-shell coefficient function. Due to its doubly off-shell nature (involving virtual gluons g^*g^*), with the gluon virtualities determined by their transverse momenta, computing the coefficient function is considerably more intricate than in the forward-case scenarios. To our knowledge, the coefficient function depicting the inclusive emission of a central, light-flavored jet is the only one known at NLO <cit.>. A very powerful formalism aimed at enhancing standard fixed-order computations of central processes via the low-x resummation is the ABF scheme <cit.>, where κ_T-factorization theorems <cit.> permit the combination of DGLAP and BFKL inputs. The high-energy series is stabilized by imposing consistency conditions based on duality aspects, symmetrizing the BFKL kernel in the (anti-)collinear regions of the phase space, and encoding such contributions into the running coupling. Significant progress has been achieved in small-x studies within the ABF formalism, particularly in the context of inclusive central emissions. Notable advancements include investigations of the inclusive central emission of Higgs bosons in gluon-gluon fusion <cit.> and higher-order corrections to top-quark pair emissions <cit.>. Applications of ABF to resummed inclusive or differential distributions for Higgs-boson and heavy-flavor hadroproduction were also performed <cit.>. Moreover, this framework has been instrumental in extracting low-x enhanced collinear PDFs for the first time <cit.>. Subsequently, the information from these collinear PDFs was utilized to constrain the parameters of initial-scale unpolarized and helicity gluon TMD PDFs <cit.>. Another class of processes serving as promising channels whereby the onset of high-energy QCD dynamics can be unveiled is represented by the inclusive hadroproduction of two particles emitted with transverse momenta well above Λ_QCD and strongly separated in rapidity.[High-energy effects were also seen in photon-initiated processes, such as the γ^*γ^* reaction <cit.>, the exclusive di-meson leptoproduction <cit.>, and the semi-inclusive heavy-quark pair photodetection <cit.>.] Contrary to single forward and central processes, forward-plus-backward two-particle hadroproduction rates are sensitive to enhanced energy logarithms even at moderate values of x, due to the peculiar kinematic ranges in transverse momentum and rapidity currently covered by the acceptances of LHC detectors. Thus, on the one hand, a collinear treatment still holds here. On the other hand, large rapidity intervals (Δ Y) correspond to large transverse-momentum exchanges in the t-channel, leading to the emergence of energy logarithms. In such cases, a high-energy factorization treatment is needed, and it is inherently provided by the BFKL formalism. Consequently, another form of hybrid high-energy and collinear factorization is established <cit.> (see Refs.
<cit.> for a close-in-spirit formalism), wherein high-energy resummed partonic cross sections are derived directly from BFKL and subsequently convoluted with collinear PDFs. This approach allows for a comprehensive description of processes occurring in kinematic regimes characterized by both large rapidity intervals and large transverse momenta. The “mother” reaction of forward-plus-backward inclusive hadroproductions is the Mueller–Navelet <cit.> tagging of two light jets at large transverse momenta and large Δ Y, for which a series of phenomenological studies have appeared so far <cit.>, and they have been compared with CMS data at √(s) = 7 TeV <cit.>. Exploring further observables that are sensitive to more exclusive final states offers additional avenues for uncovering clues about the onset of BFKL dynamics. These channels complement the insights provided by Mueller–Navelet channels and offer a deeper understanding of the underlying processes. By studying such observables, we can gain a more comprehensive view of how BFKL dynamics manifest across various exclusive final states, thereby enriching our understanding of high-energy QCD phenomena. We can mention: light-flavored di-hadron <cit.>, hadron-plus-jet <cit.>, multi-jet <cit.>, and Drell–Yan <cit.> angular distributions, Higgs-jet rapidity and transverse-momentum distributions <cit.>, and heavy-flavored jet <cit.> and hadron <cit.> distributions. Analyses of angular correlations for light-jet and/or light-hadron detections have played a crucial role in distinguishing between high-energy resummed and fixed-order calculations. By employing asymmetric TM ranges, it has been possible to decisively discriminate between these approaches, shedding light on the underlying dynamics of high-energy QCD processes <cit.>. However, these studies have also revealed significant challenges associated with higher-order BFKL corrections. Specifically, NLL contributions have been found to be comparable in magnitude to LL terms, but with opposite sign, leading to instabilities of the high-energy series. The resulting sensitivity to variations of the renormalization (μ_R) and factorization (μ_F) scales, which are meant to gauge the size of MHOUs, poses a significant obstacle to achieving reliable theoretical predictions. Efforts to address these challenges have included the adoption of scale-optimization methods, such as the BLM optimization <cit.> in its semi-hard oriented version <cit.>. While this approach has shown some success in partially mitigating the instabilities of azimuthal correlations, it has proven ineffective for cross sections. In particular, the optimal scales obtained with this method are often much larger than the natural scales dictated by kinematics <cit.>, resulting in a substantial and unphysical reduction of statistical precision. As a result, precision in the study of inclusive forward-backward emissions of light-flavored objects has remained elusive despite these efforts. Recent studies have provided corroborating indications of a stabilization of the high-energy resummation under higher-order corrections and MHOU studies, particularly in the context of semi-hard Higgs-boson inclusive emissions <cit.>. This stabilization trend has also been observed in analyses focusing on semi-inclusive emissions of Λ_c baryons <cit.> or singly bottomed hadrons at the LHC <cit.>.
A key observation is a clear signature of stabilization, directly linked to the distinctive behavior exhibited by the VFNS <cit.> collinear FFs governing the production of these singly heavy-flavored particles at high transverse momentum. These findings mark a significant advancement in our understanding of high-energy QCD processes and offer promising prospects for future precision studies in this area. Subsequent analyses of vector quarkonia <cit.>, B_c mesons <cit.>, and tetraquarks <cit.> confirmed that this remarkable feature, known as natural stability of the BFKL resummation <cit.>, emerges as a basic property connected to the inclusive emission of any given heavy-flavored particle. In this review we provide predictions for rapidity-interval rates and angular distributions for a novel selection of forward-plus-backward two-particle semi-hard reactions. These processes involve final states characterized by identified hadrons only (see Fig. <ref>). Specifically, the first hadron can be a charged pion or a charged D^* meson. The second hadron is a singly b-flavored particle, H_b, i.e., an inclusive state consisting of the sum of the fragmentation channels to noncharmed B mesons and Λ_b^0 baryons. Comparing predictions from the BFKL-driven approach with those from a high-energy fixed-order treatment will help gauge the impact of the high-energy resummation on top of the DGLAP approach. To this end, a numerical tool for calculating NLO cross sections for the inclusive production of two identified hadrons widely separated in rapidity in proton collisions would be crucial for a systematic high-energy versus DGLAP analysis. While the LO limit for such reactions can be extracted from higher-order works <cit.>, it currently cannot be compared with calculations based on our hybrid factorization due to kinematic constraints: two-particle LO computations without resummation typically result in a back-to-back final state, which is incompatible with the asymmetric windows for the observed transverse momenta in our calculations. Therefore, to assess the weight of the NLL resummation on top of the DGLAP approach, we compare BFKL predictions with the corresponding ones obtained from a high-energy fixed-order treatment, first developed to address light-flavored di-jet <cit.> and hadron-plus-jet <cit.> studies. It builds upon a truncation of the high-energy series at the NLO level, so that the high-energy signal of a pure NLO computation can be reconstructed. We consider two different rapidity configurations. The first one is a standard LHC tagging, where both particles (the π^± or D^*± and the H_b hadron) are detected by a current LHC detector, say CMS or ATLAS. This selection provides symmetric rapidity windows for both particles, making it an ideal channel for further tests of high-energy QCD dynamics, akin to previous investigations of two-particle semi-hard processes. In the second scenario, we consider the tag of the pion or the D meson in a far-forward rapidity range accessible at future FPFs, while the bottomed hadron is simultaneously detected in the barrel of a current LHC detector. Our exploration takes inspiration from a prospective FPF + ATLAS study <cit.> proposed in the context of the FPF program <cit.> (see Refs. <cit.> for related work). The simultaneous tag of a far-forward object and a central one results in an asymmetric configuration of the longitudinal fractions x of the two struck partons: one parton carries a large x, while the other one assumes more moderate x values.
Thus, a coincidence between the FPF and an LHC detector gives us a clean chance to unravel not only the high-energy dynamics arising from the very large rapidity intervals accessed, but also the connection between the BFKL and threshold resummations. This interplay can shed light on the dynamics of parton interactions in this kinematic regime, providing valuable insights into the underlying physics. The impact of double BFKL-plus-threshold logarithms on central Higgs-boson inclusive rates is small at current LHC energies, but it becomes quite relevant at the nominal energies of the FCC <cit.>. On the other hand, the high-energy resummed TM rates for Higgs-plus-jet hadroproduction already deviate from the fixed-order background at current LHC energies <cit.>. Thus, our two-particle observables are expected to be very sensitive to the interplay of the two resummations. Future analyses at FPFs will be crucial for deepening our understanding of perturbative QCD and of the structure of protons and nuclei in previously unexplored regimes. The sensitivity of FPF detectors to the far-forward production of light hadrons and charmed mesons will allow us to investigate BFKL effects and gluon-recombination dynamics. Additionally, TeV-scale neutrino-induced DIS experiments at FPFs will serve as valuable probes of the proton structure and of the production mechanisms of heavy or light decaying hadrons. Our work on light mesons at the FPF via the hybrid factorization can provide a common framework for describing the production and decays of these particles. QCD studies are fundamental components of the multi-frontier research program at FPFs. Searches for long-lived particles, indirect detection of dark matter, sterile neutrinos, as well as investigations into the muon puzzle, lepton universality, and the connection between high-energy particle physics and modern astroparticle physics, all depend on a deep understanding of the Standard Model. Progress toward precision QCD in the kinematic sectors reachable at FPFs will be essential for driving scientific interest toward new and compelling directions. This review is organized as follows: highlights of the hybrid factorization for our reference processes are given in Section <ref>; the phenomenological analysis is discussed in Section <ref>; conclusions and outlook are drawn in Section <ref>. § HYBRID FACTORIZATION AT WORK In this Section we present the basic features of the hybrid factorization, well adapted to the description of our reactions. After a brief introduction of the process kinematics (Section <ref>), we give details of the NLL-resummed cross section (Section <ref>). Choices for collinear PDFs and FFs are explained afterwards (Section <ref>). §.§ Kinematics of the process We consider the following two processes (Fig. <ref>)

p(P_1) + p(P_2) → π^±(q_π, y_π, ϕ_π) + X + H_b(q_b, y_b, ϕ_b) ,

p(P_1) + p(P_2) → D^*±(q_D, y_D, ϕ_D) + X + H_b(q_b, y_b, ϕ_b) .

The upper equation of (<ref>) says that a π^± meson with mass M_π = 139.57 MeV, four-momentum q_π, rapidity y_π, and azimuthal angle ϕ_π is emitted in association with a singly bottomed hadron with four-momentum q_b, rapidity y_b, and azimuthal angle ϕ_b. In the lower equation the pion is replaced by a D^*± meson with mass M_D = 1.968 GeV, four-momentum q_D, rapidity y_D, and azimuthal angle ϕ_D. The H_b particle is given by an inclusive sum over all noncharmed B mesons and Λ_b baryons.
Initial-state partons possess four-momenta x_1 P_1 and x_2 P_2, where P_1,2 are the momenta of the incoming protons. The X system in Eq. (<ref>) depicts all the final-state inclusively irradiated gluons. One can take P_1,2 as Sudakov vectors satisfying P_1^2 = P_2^2 = 0 and 2 P_1 · P_2 = s, and decompose the final-state four-momenta as

q_M,b = x_M,b P_1,2 - (q⃗_⊥M,b^2/x_M,b)(P_2,1/s) + q_⊥M,b ,  q_⊥M,b^2 = -q⃗_T_M,b^2 ,

where the M subscript inclusively refers to π or D. The outgoing-object longitudinal momentum fractions, x_M,b, can be calculated by inverting the relation

y_M,b = ± ln( x_M,b √(s) / |q⃗_T_M,b| ) ,

so that d y_M,b = ± d x_M,b / x_M,b. The semi-hard nature of our reactions follows (i) from the size of the final-state transverse momenta, taken to respect the hierarchy Λ_QCD ≪ |q⃗_T_M,b| ≪ √(s), and (ii) from imposing a large rapidity distance Δ Y = y_M - y_b between the M meson and the H_b hadron. Furthermore, to ensure the validity of a VFNS description of heavy-hadron production <cit.>, the |q⃗_T_M,b| ranges must remain sufficiently above the thresholds for the DGLAP evolution determined by the charm and bottom masses. §.§ NLO cross section resummed at NLL and beyond Following a pure QCD collinear vision, the LO cross section of our processes (Eq. (<ref>)) would be written as a one-dimensional convolution between the partonic hard subprocess, the incoming-proton PDFs, and the tagged-hadron FFs

dσ^[p + p → M + H_b + X]_LO, collinear / (d x_M d x_b d^2 q⃗_T_M d^2 q⃗_T_b) = Σ_{i,j=q,q̅,g} ∫_0^1 d x_1 ∫_0^1 d x_2 f_i(x_1,μ_F) f_j(x_2,μ_F) ∫_{x_M}^1 (dζ_1/ζ_1) ∫_{x_b}^1 (dζ_2/ζ_2) D^M_i(x_M/ζ_1,μ_F) D^b_j(x_b/ζ_2,μ_F) dσ̂_{i,j}(ŝ,μ_F,μ_R) / (d x_M d x_b d^2 q⃗_T_M d^2 q⃗_T_b) .

Here the i,j indices run over all parton species except the top quark, which does not hadronize; f_{i,j}(x_{1,2}, μ_F) are the proton PDFs and D^{M,b}_{i,j}(x_{M,b}/ζ_{1,2}, μ_F) stand for the M-meson and H_b FFs. Then, x_{1,2} are the longitudinal-momentum fractions of the partons initiating the hard scattering, and ζ_{1,2} the ones of the partons fragmenting into the detected hadrons. The hard factor dσ̂_{i,j}(ŝ,μ_F,μ_R) depends on the squared center-of-mass energy of the partonic collision, ŝ ≡ x_1 x_2 s, and on the factorization (μ_F) and renormalization (μ_R) scales. Vice versa, to construct differential observables in our hybrid formalism, one first performs the high-energy resummation of the logarithms connected to transverse-momentum exchanges in the t-channel, and subsequently adds the collinear ingredients, namely PDFs and FFs. The differential cross section can be rewritten as a Fourier sum of angular coefficients

dσ^[p + p → M + X + H_b] / (d y_M d y_b d|q⃗_T_M| d|q⃗_T_b| dϕ_M dϕ_b) = (1/(2π)^2) [ C_0 + 2 Σ_{n=1}^∞ cos(n φ) C_n ] ,

where φ = ϕ_M - ϕ_b - π is the distance between the azimuthal angles of the light and the heavy hadron. The angular coefficients C_n are calculated within the BFKL framework and embody the resummation of energy logarithms. A NLL-consistent formula, obtained in the MS-bar renormalization scheme <cit.>, is cast as follows (for details on the derivation see, e.g., Ref. <cit.>)

C_n^NLL/NLO^+ = ∫_0^{2π} dϕ_M ∫_0^{2π} dϕ_b cos(n φ) dσ^[p + p → M + X + H_b] / (d y_M d y_b d|q⃗_T_M| d|q⃗_T_b| dϕ_M dϕ_b) = (e^{Δ Y}/s) ∫_{-∞}^{+∞} dν e^{Δ Y α̅_s(μ_R) χ^NLL(n,ν)} α_s^2(μ_R) [ c_M^NLO(n,ν,|q⃗_T_M|, x_M) [c_b^NLO(n,ν,|q⃗_T_b|,x_b)]^* + α̅_s^2(μ_R) (β_0/(4 N_c)) χ(n,ν) f(ν) ] ,

with α̅_s(μ_R) ≡ α_s(μ_R) N_c/π, N_c the number of colors, and β_0 the leading coefficient of the QCD β-function.
The χ^NLL(n,ν) term stands for the NLL high-energy kernel:

χ^NLL(n,ν) = χ(n,ν) + α̅_s(μ_R) [ χ̅(n,ν) + (β_0/(8 N_c)) χ(n,ν) ( -χ(n,ν) + 10/3 + 2 ln(μ_R^2/(|q⃗_T_M| |q⃗_T_b|)) ) ] ,

where χ(n,ν) = 2{ψ(1) - Re[ψ(iν + (n+1)/2)]} is its LL eigenvalue and ψ(z) = Γ'(z)/Γ(z). The NLO correction χ̅(n,ν) was calculated in Ref. <cit.> and can be found in the Appendix. The c_h^NLO(n,ν,|q⃗_T|, x) emission functions describe the forward inclusive emission of a given hadron, labeled as h. They were obtained in Ref. <cit.> for the light-quark case and embody the collinear inputs. They can also be employed for heavy hadrons within a VFNS scheme, provided that the observed transverse momenta are sufficiently larger than the charm (for a D meson) or bottom (for a b-hadron) mass. One has

c_h^NLO(n,ν,|q⃗_T|, x) = c_h(n,ν,|q⃗_T|, x) + α_s(μ_R) ĉ_h(n,ν,|q⃗_T|, x) ,

with

c_h(n,ν,|q⃗_T|, x) = 2 √(C_F/C_A) |q⃗_T|^{2iν-1} ∫_x^1 (dζ/ζ) (ζ/x)^{2iν-1} [ (C_A/C_F) f_g(ζ) D_g^h(x/ζ) + Σ_{i=q,q̅} f_i(ζ) D_i^h(x/ζ) ]

its LO limit and ĉ_h(n,ν,|q⃗_T|, x) its NLO correction (see the Appendix). The f(ν) function in Eq. (<ref>) embodies the logarithmic derivative of the two LO emission functions,

f(ν) = (i/2) (d/dν) ln( c_M(n,ν,|q⃗_T_M|, x_M) / [c_b(n,ν,|q⃗_T_b|, x_b)]^* ) + ln( |q⃗_T_M| |q⃗_T_b| ) .

Eqs. (<ref>)-(<ref>) realize our hybrid factorization: the cross section takes a factorized form reminiscent of the BFKL formalism, with the Green's function connected, via a high-energy convolution, to the emission functions of the two tagged hadrons; these, in turn, are expressed as collinear convolutions between PDFs, FFs, and the hard-scattering term. The `+' label indicates that our representation of the angular coefficients in Eq. (<ref>) encodes terms beyond the NLL accuracy, generated by the cross product of the NLO emission-function corrections, ĉ_M(n,ν,|q⃗_T_M|, x_M) [ĉ_b(n,ν,|q⃗_T_b|,x_b)]^*. Expanding and truncating to O(α_s^3) the angular coefficients in Eq. (<ref>), one gets the high-energy fixed-order (HE-NLO^+) counterpart of the BFKL-resummed cross section <cit.>. It is sensitive to the leading-power asymptotic dynamics of a pure NLO DGLAP computation and discards factors suppressed by inverse powers of ŝ. Our expression for the angular coefficients in the MS-bar scheme <cit.> reads

C_n^HE-NLO^+ = (e^{Δ Y}/s) ∫_{-∞}^{+∞} dν α_s^2(μ_R) c_M^NLO(n,ν,|q⃗_T_M|,x_M) [c_b^NLO(n,ν,|q⃗_T_b|,x_b)]^* [ 1 + α̅_s(μ_R) Δ Y χ(n,ν) ] .

Here, an expansion up to terms proportional to α_s(μ_R) takes over the BFKL exponentiated kernel. In analogy with Eq. (<ref>), the `+' superscript means that contributions beyond the NLL accuracy arise from the cross product of the NLO emission functions. Then, a genuine LL/LO formula is obtained by simply discarding all the NLO contributions,

C_n^LL/LO = (e^{Δ Y}/s) ∫_{-∞}^{+∞} dν e^{Δ Y α̅_s(μ_R) χ(n,ν)} α_s^2(μ_R) c_M(n,ν,|q⃗_T_M|, x_M) [c_b(n,ν,|q⃗_T_b|,x_b)]^* .

In our phenomenological analysis (see Section <ref>) we compare observables built in terms of NLL/NLO^+, HE-NLO^+, and LL/LO angular coefficients. We fix the μ_{R,F} scales at the natural energies given by the kinematics, thus μ_R = μ_F = μ_N ≡ m_⊥M + m_⊥b, with m_⊥h = √(m_h^2 + |q⃗_T_h|^2) the transverse mass of the hadron h. To assess the impact of MHOUs, scales will be varied from 1/2 to 2 times their natural values, as specified by the C_μ parameter (see Section <ref>). We employ a two-loop QCD running-coupling setup with α_s(M_Z)=0.11707 and five active quark flavors.
In the MS-bar renormalization scheme <cit.>, we write

α_s(μ_R) = (π/(β_0 λ_R)) ( 4 - (β_1/β_0^2)(ln λ_R/λ_R) ) , with λ_R(μ_R) = ln(μ_R^2/Λ_QCD^2) , β_0 = 11 - (2/3) n_f , β_1 = 102 - (38/3) n_f .

We emphasize that in our approach the energy scales are inherently tied to the transverse masses of the observed particles. Consequently, they consistently fall within the perturbative regime, obviating the need for any infrared enhancement of the running coupling (see, for instance, Ref. <cit.>). Moreover, the use of large scale values shields us from regimes where the diffusion pattern (see, for instance, Refs. <cit.>) becomes pronounced. §.§ Choice of collinear PDFs and FFs As already mentioned, due to the moderate parton-x values, we build our framework upon standard collinear inputs at NLO, while the high-energy resummation is performed via BFKL at NLL. As for the proton PDFs, we make use of the novel NNPDF4.0 NLO determination <cit.>. When considering charged pions, a broad array of NLO FFs is available. The NNFF1.0 FFs <cit.>, derived from SIA data using a neural-network approach, feature NLO gluon FFs. The DEHSS14 sets <cit.> were obtained from a combination of SIA, SIDIS, and proton-proton collision data; they assume a partial SU(2) isospin symmetry, resulting in specific relations among the different flavor components. Meanwhile, the JAM20 determinations <cit.> incorporate datasets from both SIA and SIDIS and are extracted concurrently with collinear PDFs from DIS and fixed-target Drell-Yan measurements; these FFs adhere to a full SU(2) isospin symmetry. Recently, the MAPFF1.0 functions <cit.> have been derived from SIA and SIDIS data using neural-network techniques. Notably, these FFs allow for a deviation from isospin symmetry, with separate parametrizations for D_u^π^+ and D_d̅^π^+, which varies with the momentum fraction z. Gluon FFs are generated at NLO, although the data are collected at lower energies, where the gluon distribution has a more pronounced impact. Moreover, the methodology employed for extracting the MAPFF1.0 FFs has been extended to study the fragmentation of the Ξ^-/Ξ̅^+ octet baryon <cit.> and to establish a new FF set describing unidentified charged light hadrons, again from SIA and SIDIS data <cit.>. In our phenomenological analysis, we make use of two pion NLO FF determinations: NNFF1.0 and MAPFF1.0. As regards heavy hadrons, we employ the KKKS08 NLO set <cit.> to describe parton fragmentation into D^*± mesons. These FFs were extracted from OPAL and Belle SIA data and mainly rely on a Bowler-like description <cit.> of the charm and bottom flavors. We depict emissions of b-flavored hadrons in terms of the KKSS07 parametrization <cit.>, based on data for inclusive B-meson production in SIA events at CERN LEP1 and SLAC SLC and portrayed by a simple, three-parameter power-like function <cit.> for the heavy-quark species. Both the KKKS08 and KKSS07 determinations rely on the VFNS. We remark that the employment of given VFNS PDFs or FFs is admitted in our approach, provided that the typical energy scales are much larger than the thresholds for the DGLAP evolution of charm and bottom quarks. As highlighted in Section <ref>, this requirement is always fulfilled.[For further studies on D-meson, Λ_c-baryon, and b-hadron VFNS fragmentation, see Refs. <cit.>, <cit.>, and <cit.>, respectively.] We note that the KKKS08 and KKSS07 FF sets do not carry any quantitative information on the extraction uncertainty.
Future investigations incorporating potential new parametrizations of these heavy-hadron fragmentation functions, including uncertainties, are essential to supplement our analysis of systematic errors in high-energy distributions. § HEAVY HADRONS WITH JETHAD All predictions presented in this review were generated using JETHAD, a hybrid code that consistently integrates both Python- and Fortran-based modules. JETHAD is specifically designed for computing, managing, and processing physical distributions defined within various formalisms <cit.>. Numeric calculations of the differential distributions were primarily performed by Fortran 2008 modular routines within JETHAD, while the built-in Python 3.0 analyzer was utilized for the final data analysis and interpretation. All computations of observables are conducted within the MS-bar scheme <cit.>. Section <ref> briefly introduces core elements of the v0.5.2 version of JETHAD. Our strategy to gauge systematic uncertainties is discussed in Section <ref>. Final-state kinematic ranges can be found in Section <ref>. Predictions for rapidity-interval rates and angular multiplicities are discussed in Sections <ref> and <ref>, respectively. §.§ Highlights of JETHAD v0.5.2 The inception of the JETHAD project dates back to late 2017, driven by the need for precise predictions of semi-hard hadron- <cit.> and jet-sensitive <cit.> final states at the LHC. Phenomenological analyses of such reactions, proposed as probe channels for the high-energy resummation in QCD, required the development of a reference numerical framework dedicated to computing and analyzing high-energy related distributions. The first named version, v0.2.7, provided us with a first quantitative BFKL-versus-DGLAP examination within the context of semi-inclusive hadron-plus-jet emissions at the LHC <cit.>. Subsequent iterations introduced new functionalities, such as the selection of forward heavy-quark pair observables (v0.3.0 <cit.>), studies of Higgs emissions and transverse-momentum distributions (v0.4.2 <cit.>), and the integration of the Python analyzer with the Fortran core supermodule (v0.4.3 <cit.>). Advancements continued with the ability to analyze heavy-flavored hadrons via VFNS FFs at NLO (v0.4.4 <cit.>). The DYnamis work package, dedicated to the forward DY dilepton reaction <cit.>, became part of JETHAD in v0.4.5. Integration with the LExA modular code enabled the exploration of the proton content at low x through small-x TMD densities in v0.4.6 <cit.>. Version v0.4.7 <cit.> introduced quarkonium-sensitive reactions from NRQCD leading-twist fragmentation. Novel features in v0.5.0 <cit.> and v0.5.1 <cit.> encompass an enhanced system for MHOU-related studies, an expanded list of observables with a focus on singly- and doubly-differential transverse-momentum production rates <cit.>, and support for matching procedures with collinear factorization <cit.>. The most significant update in v0.5.2 is a new Mathematica plugin devoted to symbolic calculations for high-energy QCD and the proton structure. From the fundamental core to service modules and routines, JETHAD has been designed to dynamically achieve high levels of computational performance. The multidimensional integrators within JETHAD support extensive parallel computing and actively choose the most suitable integration algorithm based on the shape of the integrand. Any process implemented in JETHAD can be dynamically selected through an intuitive, structure-based smart-management interface.
Physical final-state particles are represented by object prototypes within this interface; particle objects encapsulate all pertinent information about their physical counterparts, ranging from mass and charge to kinematic ranges and rapidity tags. These particle objects are initially loaded from a master database via a dedicated particle-generation routine, and custom particle generation is also supported. The objects are then cloned into a final-state vector and injected from the integrand routine into the corresponding process-specific module by a dedicated controller. The flexibility in generating physical final states is accompanied by a range of options for selecting the initial state. A unique particle-ascendancy structure attribute allows the code to rapidly learn whether an object is hadroproduced, electroproduced, photoproduced, and so on. This dynamic feature ensures that only the relevant modules are initialized, optimizing computing-time efficiency. JETHAD is structured as an object-based interface that is entirely independent of the specific reaction under investigation. While originally inspired by high-energy QCD and TMD-factorization phenomenology, the design of the code allows different approaches to be easily encoded by simply implementing new, dedicated (super)modules. These can be straightforwardly linked to the core structure of the code by means of a natively equipped point-to-routine system, making JETHAD a versatile, particle-physics oriented environment. With the aim of providing the Scientific Community with a standard computational technology tailored to the management of diverse processes (described by distinct formalisms), we envision releasing the first public version of JETHAD in the medium-term future. §.§ Uncertainty estimation A commonly adopted approach to assessing the impact of MHOUs involves examining the sensitivity of our observables to variations of the renormalization and factorization scales around their natural values. It is widely acknowledged that MHOUs contribute significantly to the overall uncertainty <cit.>. To gauge their influence, we simultaneously vary μ_R and μ_F between μ_N/2 and 2μ_N, with the C_μ parameter appearing in the figures of Section <ref> defined as C_μ ≡ μ_F/μ_N = μ_R/μ_N. Another potential source of uncertainty stems from the proton PDFs. Recent analyses of high-energy production rates suggest that selecting different PDF parametrizations, or different members within the same set, has a minimal effect <cit.>. Hence, our observables will be computed using only the central member of the NNPDF4.0 parametrization. Additional uncertainties may come from a collinear improvement of the NLO kernel, which entails incorporating renormalization-group (RG) terms to align the BFKL equation with the DGLAP one in the collinear limit, or from a change of the renormalization scheme <cit.>. The impact of collinear-improvement techniques on semi-hard rapidity-differential rates was found to be encapsulated within the error bands generated by MHOUs <cit.>. Furthermore, the MS-bar <cit.> to MOM <cit.> renormalization-scheme transition was assessed in Ref. <cit.>, resulting in systematically higher MOM results for rapidity distributions; these outcomes, however, remain within the MHOU bands. Notably, a proper MOM analysis should be grounded on MOM-evolved PDFs and FFs, which are presently unavailable. To establish uncertainty bands for our distributions, we combine MHOUs with the numerical errors arising from the multidimensional integration (see Section <ref>).
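To illustrate the scale-variation procedure in practice, the minimal sketch below builds an MHOU band as the envelope of predictions obtained for C_μ ∈ {1/2, 1, 2}. The function observable is a hypothetical stand-in for a full computation of a Δ Y-differential cross section; it is not part of the actual JETHAD code.

```python
import numpy as np

def observable(delta_y: float, c_mu: float) -> float:
    """Hypothetical stand-in for a resummed Delta Y-differential cross
    section evaluated at scales mu_R = mu_F = c_mu * mu_N (toy model)."""
    return np.exp(-0.5 * delta_y) * (1.0 + 0.1 * np.log(c_mu))

delta_y_grid = np.linspace(1.0, 7.0, 25)
c_mu_values = [0.5, 1.0, 2.0]  # scales varied from 1/2 to 2 times mu_N

# One prediction per scale choice; the MHOU band is their envelope.
preds = np.array([[observable(dy, c) for dy in delta_y_grid]
                  for c in c_mu_values])
central = preds[c_mu_values.index(1.0)]
band_lo, band_hi = preds.min(axis=0), preds.max(axis=0)

for dy, c0, lo, hi in zip(delta_y_grid, central, band_lo, band_hi):
    print(f"DY = {dy:4.2f}: {c0:.4e}  [{lo:.4e}, {hi:.4e}]")
```

In a realistic run, the envelope obtained in this way would then be combined with the numerical-integration error discussed next.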
The numerical-integration error is consistently kept below 1%, owing to the integrators employed in JETHAD. §.§ Final-state kinematic ranges By making use of the formulæ in Eqs. (<ref>), (<ref>), and (<ref>), we construct physical observables as functions of the angular coefficients integrated over the final-state phase space, while the rapidity separation Δ Y = y_M - y_b between the two observed particles is kept fixed. Thus, we have

C_n(Δ Y, s) = ∫_{q_T_M^min}^{q_T_M^max} d|q⃗_T_M| ∫_{q_T_b^min}^{q_T_b^max} d|q⃗_T_b| ∫_{y_M^min}^{y_M^max} d y_M ∫_{y_b^min}^{y_b^max} d y_b δ(Δ Y - (y_M - y_b)) C_n .

The C_n terms inclusively represent all the NLL/NLO^+, HE-NLO^+, and LL/LO integrated angular coefficients. This approach allows us to impose and investigate various windows in transverse momenta and rapidities, based on realistic kinematic configurations employed in current and future experimental studies at the LHC. In particular, we focus on the following two kinematic ranges. §.§.§ Standard LHC tagging We delve into the emission dynamics of both hadrons within the standard detection acceptances of the CMS or ATLAS barrel calorimeters. Unlike the Mueller–Navelet channel, which extends into the end-cap region (as illustrated in panel (a) of Fig. <ref>), allowing for jet tagging with |y_jet| < 4.7, hadron detection is confined to the barrel regions. A realistic proxy for the rapidity range of hadron tags at the LHC can be derived from recent CMS analyses of Λ_b particles <cit.>, yielding |y_{Λ_b}| < 2. For our study, we adopt a slightly broader range, matching the coverage of the CMS barrel detector, |y_{M,b}| < 2.4. Following previous studies <cit.>, we set a 10 < |q⃗_T_M|/GeV < 20 window for the transverse momentum of the M hadron. In contrast, we select a wider and disjoint transverse-momentum range for the H_b hadron, namely 20 < |q⃗_T_b|/GeV < 60, as recently suggested in studies of high-energy b-flavored emissions <cit.>. This selection ensures the robustness of our treatment within the VFNS framework, as the energy scales consistently exceed the thresholds for the DGLAP evolution of the heavy-quark FFs (see Section <ref> for further details). On the one hand, the symmetric rapidity tagging of both the light and the heavy particle enables us to rigorously apply our formalism in a well-defined regime, allowing for stringent tests of the hybrid factorization. On the other hand, as highlighted in Ref. <cit.>, the use of disjoint transverse-momentum intervals amplifies the effects of additional, undetected gluon radiation by suppressing the Born contribution. This accentuates the signature of the high-energy resummation over the fixed-order background. Furthermore, asymmetric |q⃗_T| windows mitigate potential instabilities arising in NLO calculations <cit.>, as well as violations of energy-momentum conservation at NLL <cit.>. However, the Δ Y values achievable with standard LHC tagging may not be sufficiently large to unambiguously discriminate between BFKL and fixed-order signals. This challenge can be addressed by exploring far-forward emissions of one of the two particles, as proposed in Section <ref>. §.§.§ FPF + LHC coincidence Expanding upon the standard detection regime outlined in Section <ref>, we advocate the simultaneous identification of a far-forward particle (our choice falls on the M hadron) alongside a more centrally located one (the H_b hadron, as depicted in panel (b) of Fig. <ref>). Once the planned FPF <cit.> becomes operational, this novel configuration might be achievable by coordinating FPF detectors with ATLAS.
Integrating information from both ATLAS and the FPF hinges on the capability to trigger ATLAS events with far-forward signatures, necessitating precise timing protocols and influencing the FPF detector design. Technical details of a possible FPF + ATLAS tight timing synchronization are given in Section VI E of Ref. <cit.>. As a preliminary investigation, we examine the emission of an M meson within the far-forward range 5 < y_M < 7. While larger rapidities could be explored, we adopt a conservative approach, opting for a rapidity interval disjoint from, and more forward than, the one accessible by the end-caps of a standard LHC detector. Investigations of larger rapidity ranges are deferred to future work. The meson is paired with an H_b hadron detected by the LHC barrel calorimeter within the standard rapidity range, |y_b| < 2.4. The transverse momenta of both final-state hadrons remain the same as specified in Section <ref>. The combined FPF + LHC tagging strategy yields a hybrid and markedly asymmetric range selection, presenting an excellent avenue for disentangling clear high-energy signals from collinear backgrounds. However, as highlighted in Refs. <cit.> and mentioned previously, the joint detection of a far-forward particle alongside a central one results in an asymmetric configuration of the longitudinal momentum fractions of the corresponding incoming partons. This configuration significantly suppresses the contribution of undetected gluon radiation at LO and has a notable impact at NLO. This kinematic constraint leads to an incomplete cancellation between virtual and real contributions from gluon emissions, resulting in the emergence of large threshold logarithms in the perturbative series. Given that the BFKL framework encompasses the resummation of energy-type single logarithms while systematically neglecting threshold ones, we anticipate a partial degradation of the convergence of our resummed calculation in the FPF + LHC coincidence setup compared to the standard LHC scenario. Despite these challenges, such features motivate future developments aimed at incorporating the threshold resummation into our framework <cit.>. In conclusion, explorations via the FPF + LHC coincidence method offer an unparalleled opportunity for rigorous and in-depth examinations of strong-interaction dynamics in the high-energy regime. In this regard, the advent of the planned FPF <cit.> may complement the capabilities of the ATLAS detector, enabling us to (i) assess the feasibility of precision analyses based on the hybrid high-energy and collinear factorization, and (ii) explore potential commonalities among distinct resummation techniques. §.§ Rapidity-interval rates We delve into the analysis of Δ Y-rates for our processes, as illustrated in Fig. <ref>. These distributions represent φ-summed, Δ Y-differential cross sections. Clues of a natural stabilization of these observables are strongly expected when a standard LHC tagging is considered, while they are awaited in the FPF + LHC coincidence setup. Observing such stability would not only provide a further reliability test of the hybrid factorization, but would also mark a milestone toward future precision analyses. Stabilization effects can manifest themselves at various levels. Firstly, they involve the ability to study Δ Y-distributions for all considered final states around the natural energy scales prescribed by kinematics.
This prerequisite is essential for claiming evidence of stability, a feat unattainable when light hadrons and/or jets are considered <cit.>. In such cases, large NLL contributions, of similar magnitude but opposite sign with respect to their LL counterparts, can lead to unphysical Δ Y-distributions, sometimes even resulting in negative values at large Δ Y. Another sign of instability arises in the analysis of the mean values of cosines of multiples of the azimuthal-angle distance, ⟨cos(n φ)⟩, which can exceed one. Secondly, achieving a substantial reduction of the discrepancy between pure LL calculations and NLL-resummed ones would push the natural stability to a higher level. To assess the impact of our resummed calculations on fixed-order predictions, we contrast LL/LO and NLL/NLO^+ results with the corresponding ones derived from our HE-NLO^+ formula (Eq. (<ref>)), which effectively mimics the high-energy signal of a pure NLO computation. Given that a dedicated numerical tool for higher-order perturbative calculations of cross sections for the semi-hard hadroproduction of two identified hadrons is not yet available, our approach remains the most valid and efficient method for a BFKL-versus-fixed-order comparison. Plots in Fig. <ref> depict the Δ Y-rates for the inclusive π^± + H_b production at the LHC (left panels) and at the FPF + LHC (right panels). Analogously, plots in Fig. <ref> refer to the inclusive D^*± + H_b detection. Uncertainty bands are constructed considering the combined effect of MHOUs from energy-scale variation and of the numerical multidimensional integration over the final-state phase space, the former being largely dominant. Upon inspection, our Δ Y-distributions generally exhibit favorable statistics, lying in the range from 10^-1 to 5 × 10^2 nb. The observed trend with Δ Y reflects the typical dynamics of our hybrid factorization. While the BFKL resummation predicts an increase of the partonic hard-scattering cross section with energy, its convolution with collinear PDFs and FFs leads to a decrease with Δ Y of the NLL/NLO^+, HE-NLO^+, and LL/LO results. This decrease is more pronounced in the LHC kinematic configurations and appears smoother in the FPF + LHC case. The noncontiguous rapidity ranges covered by the FPF and by current LHC detectors may contribute to this smoother shape, the increase with Δ Y of the available phase space being compensated by the absence of detected events in the interval between y_FPF^min and y_LHC^max. A similar pattern was also noted in a complementary configuration, namely the CMS + CASTOR setup (see Fig. 10 of Ref. <cit.>). Results for rapidity-interval rates obtained with different pion FF parametrizations (upper versus lower panels of Fig. <ref>) exhibit qualitatively similar trends, although their differences remain significant, beyond a factor of three. We observe the emergence of clear and natural stabilization effects of the high-energy series when standard LHC cuts are applied. The Δ Y-distributions for all the channels in Fig. <ref> feature NLL/NLO^+ bands that are partially or entirely nested within the LL/LO ones at small and moderate values of Δ Y. However, as the rapidity interval increases, the NLL corrections become increasingly negative, causing NLL/NLO^+ predictions to fall below the pure LL/LO ones. NLL/NLO^+ uncertainty bands are generally narrower than LL/LO and HE-NLO^+ ones. Furthermore, their width generally diminishes in the large-Δ Y range, where high-energy effects dominate over pure DGLAP ones.
These observations are consistent with previous analyses of heavy-flavored emissions at CMS <cit.>, highlighting the convergence of the energy-resummed series thanks to the natural stabilizing effect of the VFNS FFs, which accurately describe the hadronization mechanisms of the detected heavy-flavor species. The high-energy stabilizing pattern is also evident in the FPF + LHC coincidence setup, although its effects are less pronounced (see the right panels of Figs. <ref> to <ref>). While our required condition for asserting evidence of stability is met, allowing for precision studies of cross sections around the natural energy scales provided by kinematics, it is notable that the FPF + LHC NLL/NLO^+ predictions consistently remain below the LL/LO results. Additionally, the NLL/NLO^+ uncertainty bands are narrower than the LL/LO ones, but slightly wider than the NLL/NLO^+ bands obtained for the same channels in the standard LHC configurations. Furthermore, LL/LO results consistently exceed NLL/NLO^+ ones, while HE-NLO^+ results are smaller. Although further dedicated studies are required to determine whether the observed natural-stability signals degrade when the FPF rapidity acceptances are expanded beyond those imposed in our analysis, an explanation for the increased sensitivity of the Δ Y-rates to the resummation accuracy can be provided on the basis of our current understanding of the dynamics behind other resummation mechanisms. We stress that the semi-hard nature of the considered final states leads to high energies, but not necessarily to small-x dynamics. This is particularly evident in the FPF + LHC coincidence setup, where the strongly asymmetric final-state rapidity ranges result in one of the two parton longitudinal fractions being consistently large, while the other one takes more moderate values. As highlighted in Section <ref>, our approach does not capture large-x logarithms, whose inclusion would proceed via an appropriate resummation mechanism, the aforementioned threshold resummation. A significant finding of Ref. <cit.> on inclusive di-hadron detections in hadronic collisions is that incorporating the NLL threshold resummation on top of pure NLO calculations leads to a notable increase of the cross sections. Remarkably, this increase is comparable to the gap between our NLL/NLO^+ and HE-NLO^+ predictions for the Δ Y-distributions in the FPF + LHC configurations. Recent works <cit.> have demonstrated how NLL instabilities arising in forward hadron hadroproduction, described within the saturation framework <cit.>, can be substantially mitigated when threshold logarithms are incorporated into these calculations. Considering these findings, we contend that the natural stability observed in our high-energy studies is not compromised by the adoption of the FPF + LHC coincidence setup. It remains robust and provides a satisfactory description of the Δ Y-rates at natural scales. The discrepancy between NLL/NLO^+ and HE-NLO^+ predictions could come from those large-x, threshold logarithms, which are currently not accounted for by our hybrid factorization. Incorporating the large-x resummation represents a crucial step toward enhancing the description of the considered observables. Therefore, it should be pursued as the next logical step to evaluate the feasibility of precision studies of Δ Y-distributions at the FPF + LHC. The analysis presented in this section underscores that light-meson plus heavy-flavor production processes offer a promising avenue for stabilizing the high-energy resummation, as anticipated.
ΔY-distributions emerge as particularly promising observables for detecting signals of high-energy dynamics and potentially discriminating between BFKL-driven and fixed-order computations. Further exploration of these distributions holds significant potential for delving into the interplay between high-energy QCD dynamics and other resummations, particularly the large-x threshold effects. By probing these aspects in greater detail, we can gain deeper insights into the underlying mechanisms governing semi-hard phenomenology and refine our understanding of QCD dynamics in the high-energy regime. §.§ Angular multiplicities Semi-hard phenomenology delves into observables that become increasingly sensitive to final-state rapidity intervals. When these observables are also differential in azimuthal angles, they expose a core aspect of high-energy QCD. In the context of two-particle hadroproduction reactions, significant insights into the onset of BFKL dynamics emerge when large rapidity distances (ΔY) enhance the weight of undetected gluons strongly ordered in rapidity. These gluon emissions, resummed as energy logarithms, induce a growing-with-ΔY decorrelation on the azimuthal plane of the outgoing particles. The angular decorrelation was initially observed through the ΔY dependence of cross-section azimuthal moments <cit.>, defined as the ratios R_n0 ≡ C_n/C_0 between a specific azimuthal coefficient (C_n ≥ 1) and the φ-summed C_0 coefficient, which genuinely corresponds to the ΔY-rate of Section <ref>. The R_10 ratio effectively measures the azimuthal decorrelation between the two outgoing particles, akin to the mean value ⟨cosφ⟩. The R_n0 ratios represent the higher moments ⟨cos (n φ) ⟩, while further probes of BFKL were proposed through ratios between azimuthal moments, R_nm ≡ C_n/C_m = ⟨cos (n φ) ⟩ / ⟨cos (m φ) ⟩, in earlier studies <cit.>. NLL-resummed predictions for angular correlations of Mueller–Navelet jets exhibited satisfactory agreement with LHC data at √(s) = 7 TeV, particularly for symmetric |q⃗_T|-windows <cit.>. However, due to instabilities affecting the NLL series for processes involving the emission of two light-flavored particles, comparisons between theory and experiment had to be conducted at energy scales optimized via different procedures <cit.>. Recent investigations into inclusive hadroproductions of heavy-flavored hadrons have revealed that the stabilizing effects associated with the use of heavy-flavor VFNS FFs are less pronounced when a heavy hadron is emitted alongside a light jet, as opposed to another heavy object <cit.>. Consequently, only a partial reduction in instabilities for azimuthal moments is observed. As recently shown <cit.>, starting from the angular coefficients, we can construct a more stable observable that contains signals of high-energy dynamics coming from all azimuthal modes. We refer to the angular multiplicity 1/σ dσ/dφ (φ, ΔY, s) = 1/(2π)[ 1 + 2 ∑_n = 1^∞ cos(n φ) ⟨ cos(n φ) ⟩ ] = 1/(2π)[ 1 + 2 ∑_n = 1^∞ cos(n φ) R_n0(ΔY, s) ] , with φ ≡ ϕ_M - ϕ_b - π. This observable, initially proposed in Ref. <cit.> within the framework of Mueller–Navelet analyses, was further examined with NLL precision in Ref. <cit.> for the same process. Exploring its characteristics holds significant advantages from both theoretical and experimental standpoints. From the theory side, it collects signals of high-energy dynamics from all azimuthal modes, rendering it one of the most robust observables for spotting BFKL effects.
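As a simple numerical illustration of this truncated Fourier reconstruction, the sketch below evaluates the φ-distribution from a given set of azimuthal ratios R_n0. This is a minimal sketch, assuming a set of geometrically decreasing moments as hypothetical placeholders; they are not the R_n0 values computed in this work.

```python
import numpy as np

def angular_multiplicity(phi, R):
    """(1/sigma) dsigma/dphi = (1/2pi) [1 + 2 sum_n cos(n*phi) R_n0],
    truncated at n_bound = len(R)."""
    n = np.arange(1, len(R) + 1)
    modes = np.cos(np.outer(n, phi))        # cos(n*phi), shape (n_bound, len(phi))
    return (1.0 + 2.0 * (np.asarray(R) @ modes)) / (2.0 * np.pi)

phi = np.linspace(-np.pi, np.pi, 401)
R = 0.6 ** np.arange(1, 21)                 # hypothetical decreasing R_n0; n_bound = 20
dist = angular_multiplicity(phi, R)
print(np.trapz(dist, phi))                  # ~1: the multiplicity is normalized
```

Since each mode enters with a weight R_n0 < 1 that decreases with n, the truncation error is controlled, consistent with the convergence at n_bound ≃ 20 quoted below.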
Moreover, as a normalized distribution, i.e., a multiplicity, its sensitivity to uncertainties propagating from PDFs and/or FFs, as well as to those from different replicas within the same set, is notably diminished. This allows for a focused examination of uncertainties intrinsic to high-energy resummation, facilitating stringent BFKL tests. From the experimental side, detector acceptances typically do not cover the entire (2π) azimuthal-angle range. Thus, comparing φ-dependent observables, like our azimuthal distribution, with experimental data is considerably more straightforward than for R_nm correlation moments. From a numerical standpoint, ensuring reliable predictions for our φ-distribution necessitates computing a large number of C_n coefficients. We assessed the numerical stability of our calculations by progressively increasing the effective upper bound of the n sum in Eq. (<ref>), achieving satisfactory convergence at n_bound ≃ 20. Results for angular multiplicities as functions of φ and for two different sets of values of the rapidity interval, ΔY = 3, 4 (LHC) or ΔY = 6, 7 (FPF + LHC), are shown in Figs. <ref>, <ref>, and <ref>. Panels in these figures are structured as follows. Upper (lower) plots refer to () rates. Standard LHC tagging and the FPF + LHC coincidence setups are used in left and right plots, respectively. Figures <ref> and <ref> refer to pion channels described via NNFF1.0 and MAPFF1.0 pion collinear FFs, whereas Fig. <ref> shows results for D-meson channels depicted by means of KKKS08 FF determinations. The prominent feature shared among all the depicted multiplicities is the emergence of a peak centered at φ = 0, corresponding to the physical configuration where the M meson and the b-hadron are emitted (almost) back-to-back. With increasing ΔY, a characteristic trend emerges: the peak height decreases while its width expands. This trend stems from the heightened number of secondary gluons emitted with substantial rapidity separation, as predicted by the BFKL equation. Consequently, the correlation in the azimuthal plane between the two tagged particles diminishes, leading to a decrease in the number of nearly back-to-back events. Conversely, distributions exhibit an opposing pattern: the peak grows while its width decreases with increasing ΔY. This behavior originates from the connection between the strongly asymmetric transverse-momentum windows at which the two hadrons are tagged (see Section <ref>) and the corresponding longitudinal-momentum fractions. This leads to a reduction in the accessible combinations of these fractions for a given ΔY. However, this results in an unphysical recorrelation pattern in the azimuthal plane for these distributions, contradicting the expected loss of correlation due to the weight of rapidity-ordered gluons forming the inclusive system X. The correct behavior is reinstated when full NLL corrections are considered. Additionally, the size of uncertainty bands due to scale variations noticeably diminishes as ΔY increases. This becomes strongly manifest in FPF + LHC selections. Multiplicities taken at lower reference values of the rapidity interval exhibit two symmetric minima at |φ| ≳ π/2, which extend to unphysical values below zero, whereas no negative values are observed for larger ΔY values. This indicates a natural stabilization of the high-energy series in the large rapidity-interval regime, as expected. Upon qualitative comparison of our predictions with corresponding results studied in other semi-hard channels, novel features emerge.
As an example, angular distributions for our reactions appear less peaked compared to vector quarkonium + jet distributions (Fig. 6 of Ref. <cit.>). They exhibit similarities with light-hadron or jet rates (Figs. 17 and 18 of Ref. <cit.>). In summary, multiplicities for M-meson plus b-hadron productions within the hybrid factorization at NLL allow for stringent tests of high-energy QCD dynamics. The natural stabilization of the high-energy resummation becomes significant in its expected applicability domain, particularly in the large ΔY sector. This makes azimuthal distributions easily measurable in current LHC experimental configurations as well as in future analyses facilitated by the FPF + LHC coincidence method, enhancing our ability to conduct rigorous analyses of high-energy QCD dynamics. § TOWARD PRECISION STUDIES OF HIGH-ENERGY QCD By employing the hybrid high-energy and collinear factorization approach at NLL, we have investigated the inclusive detection of a pion or a D meson, in association with a singly-bottomed hadron, at current LHC energies and kinematic configurations. Additionally, we explored configurations accessible via a FPF + LHC tight timing-coincidence setup. Our analysis of distributions differential in the observed rapidity interval (ΔY) or in the azimuthal-angle distance (φ) between the two tagged objects has corroborated the remarkable property of natural stability of the high-energy resummation. This feature, recently uncovered in the context of heavy-flavor studies in forward directions <cit.>, enables a reliable description of the considered observables around the natural values of energy scales dictated by process kinematics. This stability is a prerequisite and an initial stride toward precision investigations of high-energy QCD through inclusive di-hadron system emissions in proton collisions. We have demonstrated that ΔY-rates effectively discriminate the high-energy signal from the fixed-order background. On the other hand, φ-distributions exhibit robust stability in the large ΔY regime, offering a means to identify new and distinctive high-energy features. The promising statistical outcomes of our observables in the FPF + LHC configuration underscore the interest of the FPF Community <cit.> in exploring the intriguing prospect of enabling FPF and LHC detectors to operate in coincidence. Achieving this will necessitate extremely precise timing procedures, the technical feasibility of which should be actively pursued and complemented by positive feedback from the theoretical domain. A striking conclusion from our recent investigation into the interplay between BFKL and DGLAP in inclusive semi-hard emissions of light jets and hadrons at the LHC <cit.> is the imperative for a multi-lateral formalism. Such a framework would entail the simultaneous and consistent incorporation of several distinct resummation mechanisms, serving as a fundamental element for conducting precision studies of high-energy QCD. The sensitivity of FPF + LHC results presented in this study to both high-energy and threshold resummation underscores the urgency of developing such a unified description as a top priority in the medium-term future. As a first prospect, we will complement our investigation on M mesons emitted in FPF-like kinematic configurations, where heavy particles are detected by LHC detectors, by examining the opposite configuration. In this setup, a far-forward heavy-flavored hadron is tagged at the FPF while another central object remains within LHC cuts.
Subsequently, we will explore the high-energy behavior of observables sensitive to single inclusive emissions of heavy hadrons reconstructed by FPF detectors. Our aim is to gain access to the proton content in the far-forward (very low-x) regimes provided by FPF cuts. Here, our hybrid factorization framework could serve as a theoretical common basis for exploring production mechanisms and decays of heavy-flavored particles. Mapping the proton structure in the very low-x regime will hinge on a comprehensive exploration of the connections among different approaches. Specifically, we aim to investigate the interplay between our hybrid factorization, which permits the description of cross sections for single forward emissions in terms of a κ_T-factorization between off-shell matrix elements and the UGD, and the ABF formalism which, as mentioned before, allowed us to obtain small-x enhanced collinear PDFs (see also Section 6.1.2 of Ref. <cit.>). Then, as highlighted by model studies of leading-twist gluon TMD PDFs <cit.> (see also Refs. <cit.>), the distribution of linearly-polarized gluons can induce spin effects even in collisions of unpolarized hadrons. The latter, collectively known as the Boer–Mulders effect, were first observed in the case of quark polarization <cit.>. The gluon Boer–Mulders density can be readily accessed via inclusive emissions of heavy-flavored objects in hadron collisions, such as those that can be studied at the FPF (see Section 6.1.7 of Ref. <cit.>). Considering these insights, we aim to utilize FPF kinematic ranges as a tool to elucidate the connection between the BFKL UGD and the (un)polarized gluon TMDs. Further progress will also hinge on connecting our program with NLO investigations of far-forward semi-inclusive emissions within the framework of gluon saturation <cit.>. In this context, the influence of soft-gluon radiation on angular asymmetries in emissions of far-forward di-jet or di-hadron systems might be relevant <cit.>. Higher-order saturation permits the exploration of the (un)polarized gluon content of protons and nucleons at low-x <cit.>. Studies in Refs. <cit.> consider heavy-hadron emissions in proton-proton and proton-nucleus collisions while accounting for low-x effects. Prospective inquiries into exclusive emissions of heavy flavors in far-forward rapidity directions will unravel the connection between our hybrid factorization and NLO saturation <cit.>. We view the analyses presented in this work as a significant step forward toward conducting precision studies of high-energy QCD. Our hybrid factorization framework, potentially enhanced via the integration of additional resummation techniques, offers a systematic approach to reducing uncertainties stemming both from perturbative calculations of high-energy scatterings and from collinear inputs. This serves a dual purpose: it provides a benchmark for SM measurements and establishes a shared foundation for the exploration of BSM physics. § ACKNOWLEDGMENTS The author is supported by the Atracción de Talento Grant n. 2022-T1/TIC-24176 of the Comunidad Autónoma de Madrid, Spain. Feynman diagrams in Fig. <ref> were realized via the JaxoDraw 2.0 code <cit.>. § APPENDIX A: HIGH-ENERGY KERNEL AT NLL The characteristic function encoded in the NLL correction to the high-energy kernel of Eq. (<ref>) is χ̄(n,ν) = -1/4 { (π^2-4)/3 χ(n,ν) - 6ζ(3) - d^2χ/dν^2 + 2 ϕ(n,ν) + 2 ϕ(n,-ν)
+ π^2 sinh(πν)/(2ν cosh^2(πν)) [ (3 + (1+n_f/N_c^3)(11+12ν^2)/(16(1+ν^2))) δ_n0 - (1+n_f/N_c^3)(1+4ν^2)/(32(1+ν^2)) δ_n2 ] } , where ϕ(n,ν) = -∫_0^1 dx x^(-1/2+iν+n/2)/(1+x) { 1/2 (ψ^'((n+1)/2) - ζ(2)) + Li_2(x) + Li_2(-x) + ln x [ ψ(n+1) - ψ(1) + ln(1+x) + ∑_k=1^∞ (-x)^k/(k+n) ] + ∑_k=1^∞ x^k/(k+n)^2 [1-(-1)^k] } = ∑_k=0^∞ (-1)^(k+1)/(k+(n+1)/2+iν) { ψ^'(k+n+1) - ψ^'(k+1) + (-1)^(k+1) [Ξ_ψ(k+n+1) + Ξ_ψ(k+1)] - (ψ(k+n+1) - ψ(k+1))/(k+(n+1)/2+iν) } , Ξ_ψ(z) = 1/4 [ψ^'((z+1)/2) - ψ^'(z/2)] , and Li_2(x) = -∫_0^x dζ ln(1-ζ)/ζ . § APPENDIX B: FORWARD-HADRON EMISSION FUNCTION AT NLO The NLO correction to the forward-hadron singly off-shell emission function can be written as <cit.> _h(n,ν,|q⃗_T|,x) = 1/π √(C_F/C_A) (|q⃗_T|^2)^(iν-1/2) ∫_x^1 dζ/ζ ∫_(x/ζ)^1 dϑ/ϑ (ζϑ/x)^(2iν-1) × [ (C_A/C_F) f_g(ζ) D_g^h(x/(ζϑ)) C_gg(ζ,ϑ) + ∑_(i=q,q̄) f_i(ζ) D_i^h(x/(ζϑ)) C_qq(ζ,ϑ) + D_g^h(x/(ζϑ)) ∑_(i=q,q̄) f_i(ζ) C_qg(ζ,ϑ) + (C_A/C_F) f_g(ζ) ∑_(i=q,q̄) D_i^h(x/(ζϑ)) C_gq(ζ,ϑ) ] . Here, the C_ij parton coefficients read C_gg(ζ,ϑ) = 2 P_gg(ϑ)(1+ϑ^(-2γ)) ln(|q⃗_T| ζϑ/(μ_F x)) - β_0 ln(|q⃗_T| ζϑ/(μ_R x)) + δ(1-ϑ) [ C_A ln(s_0 ζ^2/(|q⃗_T|^2 x^2)) χ(n,γ) - C_A (67/18 - π^2/2) + (5/9) n_f + (C_A/2)(ψ^'(1+γ+n/2) - ψ^'(n/2-γ) - χ^2(n,γ)) ] + C_A (1/ϑ + 1/(1-ϑ)_+ - 2 + ϑϑ̄)(χ(n,γ)(1+ϑ^(-2γ)) - 2(1+2ϑ^(-2γ)) ln ϑ + (ϑ̄^2/ϑ^2) I_2) + 2 C_A (1+ϑ^(-2γ))((1/ϑ - 2 + ϑϑ̄) ln ϑ̄ + (ln(1-ϑ)/(1-ϑ))_+) , C_gq(ζ,ϑ) = 2 P_qg(ϑ)(C_F/C_A + ϑ^(-2γ)) ln(|q⃗_T| ζϑ/(μ_F x)) + 2 ϑϑ̄ T_R (C_F/C_A + ϑ^(-2γ)) + P_qg(ϑ)((C_F/C_A) χ(n,γ) + 2 ϑ^(-2γ) ln(ϑ̄/ϑ) + (ϑ̄/ϑ) I_3) , C_qg(ζ,ϑ) = 2 P_gq(ϑ)(C_A/C_F + ϑ^(-2γ)) ln(|q⃗_T| ζϑ/(μ_F x)) + ϑ (C_F ϑ^(-2γ) + C_A) + ((1+ϑ̄^2)/ϑ) [ C_F ϑ^(-2γ)(χ(n,γ) - 2 ln ϑ) + 2 C_A ln(ϑ̄/ϑ) + (ϑ̄/ϑ) I_1 ] , and C_qq(ζ,ϑ) = 2 P_qq(ϑ)(1+ϑ^(-2γ)) ln(|q⃗_T| ζϑ/(μ_F x)) - β_0 ln(|q⃗_T| ζϑ/(μ_R x)) + δ(1-ϑ) [ - C_A ln(s_0 ζ^2/(|q⃗_T|^2 x^2)) χ(n,γ) + C_A (85/18 + π^2/2) - (5/9) n_f - 8 C_F + (C_A/2)(ψ^'(1+γ+n/2) - ψ^'(n/2-γ) - χ^2(n,γ)) ] + C_F ϑ̄ (1+ϑ^(-2γ)) + (1+ϑ^2) [ C_A (1+ϑ^(-2γ)) χ(n,γ)/(2(1-ϑ)_+) + (C_A - 2 C_F(1+ϑ^(-2γ))) ln ϑ/(1-ϑ) ] + (C_F - C_A/2)(1+ϑ^2) [ 2(1+ϑ^(-2γ)) (ln(1-ϑ)/(1-ϑ))_+ + (ϑ̄/ϑ^2) I_2 ] , with T_R = 1/2. The s_0 scale is an additional energy scale that we set to s_0 = μ_C. Furthermore, one has ϑ̄ ≡ 1 - ϑ and γ ≡ -1/2 + iν. The LO DGLAP kernels P_ij(ϑ) are given by P_gq(z) = C_F (1+(1-z)^2)/z , P_qg(z) = T_R [z^2+(1-z)^2] , P_qq(z) = C_F ((1+z^2)/(1-z))_+ = C_F [ (1+z^2)/(1-z)_+ + (3/2) δ(1-z) ] , P_gg(z) = 2 C_A [ 1/(1-z)_+ + 1/z - 2 + z(1-z) ] + ((11/6) C_A - n_f/3) δ(1-z) , whereas the I_{2,1,3} functions read I_2 = (ϑ^2/ϑ̄^2) [ ϑ ( _2F_1(1, 1+γ-n/2, 2+γ-n/2, ϑ)/(n/2-γ-1) - _2F_1(1, 1+γ+n/2, 2+γ+n/2, ϑ)/(n/2+γ+1) ) + ϑ^(-2γ) ( _2F_1(1, -γ-n/2, 1-γ-n/2, ϑ)/(n/2+γ) - _2F_1(1, -γ+n/2, 1-γ+n/2, ϑ)/(n/2-γ) ) + (1+ϑ^(-2γ))(χ(n,γ) - 2 ln ϑ̄) + 2 ln ϑ ] , I_1 = (ϑ̄/(2ϑ)) I_2 + (ϑ/ϑ̄) [ ln ϑ + ((1-ϑ^(-2γ))/2)(χ(n,γ) - 2 ln ϑ̄) ] , and I_3 = (ϑ̄/(2ϑ)) I_2 - (ϑ/ϑ̄) [ ln ϑ + ((1-ϑ^(-2γ))/2)(χ(n,γ) - 2 ln ϑ̄) ] , with _2F_1 the Gauss hypergeometric function. The plus-prescription in Eqs. (<ref>) and (<ref>) acts as ∫_ζ^1 dζ' f(ζ')/(1-ζ')_+ = ∫_ζ^1 dζ' (f(ζ')-f(1))/(1-ζ') - ∫_0^ζ dζ' f(1)/(1-ζ') on any function f regular at 1.
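For completeness, a minimal numerical sketch of how such a plus-prescription can be implemented is given below; it is not part of the calculations of this work, and the test function f(z) = z is an arbitrary choice used only to validate the subtraction against a closed form.

```python
import numpy as np
from scipy.integrate import quad

def plus_convolution(f, zeta_min):
    """Evaluate int_{zeta_min}^1 dz f(z)/(1-z)_+ via the subtraction above:
    int_{zeta_min}^1 dz [f(z)-f(1)]/(1-z)  -  f(1) * int_0^{zeta_min} dz/(1-z)."""
    sub, _ = quad(lambda z: (f(z) - f(1.0)) / (1.0 - z), zeta_min, 1.0)
    end = f(1.0) * np.log(1.0 - zeta_min)   # = -f(1) * int_0^{zeta_min} dz/(1-z)
    return sub + end

# Closed-form check: int_0^1 dz z/(1-z)_+ = int_0^1 dz (z-1)/(1-z) = -1
print(plus_convolution(lambda z: z, 0.0))   # ~ -1.0
```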
http://arxiv.org/abs/2405.09631v1
20240515180115
Quantum switch instabilities with an open control
[ "Otavio A. D. Molitor", "André H. A. Malavazi", "Roberto Dobal Baldijão", "Alexandre C. Orthey Jr.", "Ismael L. Paiva", "Pedro R. Dieguez" ]
quant-ph
[ "quant-ph" ]
dmolitor.oa@protonmail.com International Centre for Theory of Quantum Technologies, University of Gdańsk, Jana Bażyńskiego 1A, 80-309 Gdańsk, Poland andrehamalavazi@gmail.com International Centre for Theory of Quantum Technologies, University of Gdańsk, Jana Bażyńskiego 1A, 80-309 Gdańsk, Poland roberto.dobal-baldijao@ug.edu.pl International Centre for Theory of Quantum Technologies, University of Gdańsk, Jana Bażyńskiego 1A, 80-309 Gdańsk, Poland aorthey@cft.edu.pl Center for Theoretical Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland. ismaellpaiva@gmail.com H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL, United Kingdom dieguez.pr@gmail.com International Centre for Theory of Quantum Technologies, University of Gdańsk, Jana Bażyńskiego 1A, 80-309 Gdańsk, Poland The superposition of causal order shows promise in various quantum technologies. However, the fragility of quantum systems arising from environmental interactions, leading to dissipative behavior and irreversibility, demands a deeper understanding of the possible instabilities in the coherent control of causal orders. In this work, we employ a collisional model to investigate the impact of an open control system on the generation of interference between two causal orders. We present the environmental instabilities for the switch of two arbitrary quantum operations and examine the influence of environmental temperature on each potential outcome of control post-selection. Additionally, we explore how environmental instabilities affect protocol performance, including switching between mutually unbiased measurement observables and refrigeration powered by causal order superposition, providing insights into broader implications. Quantum switch instabilities with an open control Pedro R. Dieguez0000-0002-8286-2645 ================================================= § INTRODUCTION Quantum coherence is one of the notable features of the quantum description of nature that distinguishes it from classical theories <cit.>. This intrinsically quantum phenomenon can be employed to lead to the indefiniteness of the causal structures underlying the application of quantum operations <cit.>, which may be a resource for new quantum advantages <cit.>. Paradigmatic examples associated with this are given by processes that utilize an auxiliary quantum control to devise a superposition of causal order (SCO) of operations in a system of interest, such as the quantum switch (QS) <cit.>. SCO has been interpreted as a superposition of time evolution <cit.> and applied in a plethora of fields <cit.>, ranging from computation <cit.> and communication <cit.> to metrology <cit.> and thermodynamics <cit.>. However, there are still open questions and a debate about whether (and how) the SCO resulting from the QS (or other processes with a quantum control) can be considered genuine indefinite causal order and benefit from the advantages associated with it <cit.>. The quantum correlations formed when applying a QS are essential to observe SCO effects after the control post-selection procedure <cit.>. Therefore, it is imperative to understand and characterize their resilience under more realistic scenarios as it is widely recognized that quantum states are fragile due to their unavoidable interaction with environmental degrees of freedom <cit.>. 
In general, these interactions lead to non-unitary processes accompanied by dissipation and irreversibility, which directly affect the existence of quantum resources in ways that idealized closed dynamical frameworks fail to capture. The microscopic derivation of open quantum system dynamics is realized by explicitly including and modeling the external environment. However, this process can become intricate when dealing with more general dynamics that extend beyond the standard regime defined by the weak-coupling, Born–Markov, and secular approximations <cit.>. A compelling alternative is given by the collisional model framework <cit.>, which consists of modeling the environment as a set of identically prepared auxiliary systems interacting with the system of interest through some unitary evolution. Despite its straightforward conceptual and procedural nature, this framework allows one to approach broad physical scenarios. In this sense, one can choose the initial state of the auxiliary systems (e.g., Gibbs states for thermal reservoirs) and the interaction terms, consider non-Markovianity <cit.>, and derive local master equations under an appropriate scaling of the interaction strength <cit.>. In this work, we recognize the post-selection of the control as crucial for SCO effects in the QS. Yet it also potentially exposes the control to environmental interactions. To account for such a process, we thus employ the collisional model to examine the robustness of correlations between the target and control states before the post-selection procedure in the QS of two arbitrary maps. The proposed general procedure is elucidated by considering the reservoir auxiliary systems as a set of qubits in the Gibbs state, coupled to the system of interest through an excitation-conserving interaction. Such an approach provides analytical results for the QS with an open control. Within this analysis, we detect thermal instabilities caused by environmental interactions. Remarkably, the instabilities in the quantum switch of arbitrary quantum operations consistently diminish the contribution of SCO terms, independently of the environment temperature. However, the temperature influences the post-selection probabilities and conditional states. In the low-temperature regime, the environment asymmetrically shields one post-selection outcome, while in the high-temperature regime, both outcomes are similarly affected, suppressing SCO. Overall, these findings suggest that environmental interactions qualitatively alter the QS behavior. To illustrate our findings, we consider two paradigmatic applications. First, we utilize our open-control QS model to analyze the SCO of channels that describe non-selective measurements of incompatible observables, transitioning from weak to strong projective measurement regimes <cit.>. Additionally, we discuss how the dynamics of an open control can impact the coefficient of performance of an SCO-powered refrigerator <cit.>. In both cases, the instabilities and asymmetry caused by environmental interactions strongly influence the effectiveness of the QS for the intended application. Throughout the work, we use units such that ħ = k_B = 1. The remainder of this article is structured as follows. In Sec. <ref> we introduce the QS setup. In Sec. <ref> we present our model for an open QS and our main results. In Sec. <ref>, we discuss our results, connecting them with thermodynamical concepts and presenting an outlook for future research avenues.
Finally, we introduce two examples of concrete applications of our results in the Appendices. § QUANTUM SWITCH Consider a quantum system S initially in a state ρ_S with local Hamiltonian H_S. The introduction of an auxiliary control degree-of-freedom C enables the implementation of the QS, where the state ρ_C determines the order of application of two (or possibly more) quantum maps. That is, depending on the state of the control, the maps are applied in different orderings. Given the completely positive trace-preserving (CPTP) maps ℳ and 𝒩 with Kraus operators {M_i} and {N_j}, respectively, such that[The maps ℳ and 𝒩 act on the same system S, which is undergoing the switch. Therefore, they both sum to the identity in the same space, i.e., 1_S. Note, however, that i and j can run over any finite number of values, which need not coincide.] ∑_i M_i^† M_i = ∑_j N_j^† N_j = 1_S, the controlled-Kraus operators are written as W_ij ≡ M_i N_j ⊗ |0⟩⟨0|_C + N_j M_i ⊗ |1⟩⟨1|_C. Thus, the effect of the QS map on the system-control state is ρ_SC^ℳ↔𝒩 ≡ 𝒮_ℳ,𝒩(ρ_S ⊗ ρ_C) = ∑_ij W_ij(ρ_S ⊗ ρ_C) W_ij^†, where the composite system-control state is assumed to be initially separable. It follows that, if the control is in the state |0⟩_C or |1⟩_C, the maps are applied in the definite order characterized by the sequential application of 𝒩 followed by ℳ, or vice-versa, respectively. Hence, if the control is in a coherent state, e.g., |ψ⟩_C = √(p)|0⟩_C + √(1-p)|1⟩_C (p ≠ 0,1), a superposition between the causal orders can be achieved. Note that two states of C are sufficient to implement a QS between two processes, which allows one to effectively model the control as a two-level system (i.e., a qubit) regardless of the dimension of S. Then, the composite system-control state post-QS can be written as <cit.> ρ_SC^ℳ↔𝒩 = A_++ ⊗ ρ_C + A_+- ⊗ ρ_C σ_z + A_-+ ⊗ σ_z ρ_C + A_-- ⊗ σ_z ρ_C σ_z, where σ_z is the z-Pauli matrix and we have defined the operators A_xy ≡ (1/4) ∑_i,j [M_i,N_j]_x ρ_S [M_i,N_j]_y^† with x,y ∈ {+,-}, [X,Y]_- ≡ XY - YX (i.e., the commutator) and [X,Y]_+ ≡ XY + YX (i.e., the anti-commutator). Eq. (<ref>) is fully general regarding the channels applied in the QS and the initial state of the control. For the purposes of this work, however, we take the paradigmatic special case of the initial state of the control being ρ_C = |+⟩⟨+|_C, which features maximal coherence in the computational basis and, therefore, is among the best-suited initial states to explore SCO. Thus, Eq. (<ref>) becomes ρ_SC^ℳ↔𝒩 = A_++ ⊗ |+⟩⟨+| + A_+- ⊗ |+⟩⟨-| + A_-+ ⊗ |-⟩⟨+| + A_-- ⊗ |-⟩⟨-|. The joint state ρ_SC^ℳ↔𝒩 carries terms related to SCO. To see that, define the following operators A_def ≡ A_++ + A_-- = (1/2) ∑_i,j (M_i N_j ρ_S N_j^† M_i^† + N_j M_i ρ_S M_i^† N_j^†) and A_indef ≡ A_++ - A_-- = (1/2) ∑_i,j (M_i N_j ρ_S M_i^† N_j^† + N_j M_i ρ_S N_j^† M_i^†). Observe that A_def is a convex combination of two terms: one in which 𝒩 is applied to the system, followed by ℳ, and the other representing the opposite order. Therefore, A_def corresponds to a mixture of definite orders. A_indef, however, corresponds to interference terms between the causal orders, i.e., terms without definite causal order in the quantum description. Since A_±± = (1/2) A_def ± (1/2) A_indef, we indeed see that indefinite order leaves an imprint in linearly independent components of the joint system-control state of Eq. (<ref>). Even though the global state ρ_SC^ℳ↔𝒩 may carry terms associated with SCO, the local state of the central system S is, up until this point, oblivious to such a phenomenon. In fact, tracing out the control in Eq.
(<ref>) leads to A_++ + A_-- = A_def. In order for SCO to manifest upon the state of S locally, a post-selection of the control state must be performed. This latter measurement, if implemented in the computational basis (associated with the operator σ_z), defines an order for the operations. Indeed, since the switch map associates each element of the control basis of σ_z with a definite order of application of the maps ℳ and 𝒩, a notable property of the switch channel is that it maps σ_z-incoherent states of the control onto σ_z-incoherent states that can be associated with a classical mixture of orders. Because of this, it is common to consider a post-selection of the control in a state that has maximum coherence in the σ_z basis, e.g., the eigenstates |+⟩ or |-⟩ of the x-Pauli operator. From Eq. (<ref>), the probability of each outcome in the post-selection is simply p_post(±) = Tr_SC{(1_S ⊗ |±⟩⟨±|_C) ρ_SC^ℳ↔𝒩} = Tr_S{A_±±}. Given that a post-selection of the control was made (and therefore p_post(±) > 0), the conditional state of the system S is ρ_S,±^ℳ↔𝒩 = A_±±/Tr{A_±±}. Given that ρ_S,±^ℳ↔𝒩 is proportional to A_±±, we know that these conditional states, obtained after the post-selection of the control, carry terms associated with SCO. § RESULTS §.§ Quantum switch with open control Consider now an interaction of the control with an environment E right before its post-selection. For that, we will make use of the collisional model, in which the environment is represented as a stream of qubits in a well-defined Gibbs state, i.e., Θ_E = exp(-β_E H_E)/Z_E, where H_E is the bare Hamiltonian of each environment qubit, β_E = 1/T_E is the inverse temperature, and Z_E = Tr{exp(-β_E H_E)} is the partition function. Both the Hamiltonians of C and E are assumed to be resonant, and the eigenbasis of H_C is assumed to coincide with the post-selection basis, i.e., {|+⟩_C, |-⟩_C}. Then, we can write H_C,E = -ω σ_x^C,E/2 for a certain ω. Observe that the choice of eigenbasis {|+⟩_E, |-⟩_E} for H_E does not constitute a further restriction of our model since the reference frame for the environment can be chosen arbitrarily. Meanwhile, the specified control Hamiltonian guarantees that, up to a phase, the post-selection basis is invariant under its free dynamics. This model is represented in Fig. <ref>. Between the controlled operation and the post-selection, the environment qubits interact one by one with the control qubit, i.e., each environment qubit couples with the control through some interaction Hamiltonian for a finite time τ. After each interaction (also referred to as a collision), the composite system-control state is updated according to ρ_SC^n = Tr_E{U (ρ_SC^n-1 ⊗ Θ_E) U^†}, where n is the number of collisions, the trace is applied over the environment degrees of freedom, and U = exp(-iτ H_tot) is the joint time-evolution operator with H_tot = H_S + H_C + H_E + V_CE the total Hamiltonian, H_α the bare Hamiltonian of subsystem α, and V_CE the interaction between the control and each environment qubit. The latter will be assumed to have the following form: V_CE = (g/2)(σ_z^C σ_z^E + σ_y^C σ_y^E), where g is the coupling strength, i.e., the interaction has an isotropic structure. In the collisional model, we assume that 0 < gτ ≪ 1. Note that Eq. (<ref>) can also be expressed as g(|+⟩⟨-|_C ⊗ |-⟩⟨+|_E + h.c.), with “h.c.” denoting the Hermitian conjugate. This corresponds to the usual Jaynes-Cummings coupling for a reservoir of qubits <cit.> (in the |±⟩ basis representation).
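To make the construction concrete, the following minimal sketch implements the switch map built from the controlled-Kraus operators W_ij and the collisional update above for a qubit system. The Kraus sets (two dephasing channels in complementary bases) and the values of g, τ, and β_E are illustrative placeholders, not the parameter choices used in our figures.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def switch(rho_S, rho_C, M, N):
    """Switch channel from W_ij = M_i N_j (x) |0><0|_C + N_j M_i (x) |1><1|_C."""
    P0 = np.diag([1.0, 0.0]).astype(complex)
    P1 = np.diag([0.0, 1.0]).astype(complex)
    rho = np.kron(rho_S, rho_C)
    out = np.zeros_like(rho)
    for Mi in M:
        for Nj in N:
            W = np.kron(Mi @ Nj, P0) + np.kron(Nj @ Mi, P1)
            out += W @ rho @ W.conj().T
    return out

def collision(rho_SC, H_S, w, g, tau, beta_E):
    """One update rho_SC -> Tr_E[U (rho_SC (x) Theta_E) U^dag]."""
    dS = H_S.shape[0]
    IS = np.eye(dS, dtype=complex)
    H_E = -0.5 * w * sx
    Theta_E = expm(-beta_E * H_E)
    Theta_E /= np.trace(Theta_E)                        # Gibbs state of a bath qubit
    H_tot = (np.kron(np.kron(H_S, I2), I2)              # H_S
             + np.kron(np.kron(IS, -0.5 * w * sx), I2)  # H_C
             + np.kron(np.kron(IS, I2), H_E)            # H_E
             + 0.5 * g * (np.kron(np.kron(IS, sz), sz)
                          + np.kron(np.kron(IS, sy), sy)))  # V_CE
    U = expm(-1j * tau * H_tot)
    big = U @ np.kron(rho_SC, Theta_E) @ U.conj().T
    d = 2 * dS
    return np.einsum('iaja->ij', big.reshape(d, 2, d, 2))   # partial trace over E

# Illustrative run: switch of two dephasing channels, then a few collisions
plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)      # |+><+|
M = [I2 / np.sqrt(2), sz / np.sqrt(2)]                      # sum_i M_i^dag M_i = 1
N = [I2 / np.sqrt(2), sx / np.sqrt(2)]
rho_SC = switch(plus, plus, M, N)
H_S = -0.5 * sx                                             # arbitrary qubit Hamiltonian
for _ in range(50):
    rho_SC = collision(rho_SC, H_S, w=1.0, g=0.1, tau=0.1, beta_E=1.0)
print(np.trace(rho_SC).real)                                # trace preserved (= 1)
```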
The excitation-exchange coupling V_CE represents a standard system-reservoir interaction describing the exchange of excitations <cit.>. Moreover, this model is of thermodynamic interest since the interaction commutes with the sum of the local bare Hamiltonians of the control and environment auxiliary systems, i.e., [H_C + H_E, V_CE]_- = 0. While the former assures excitation conservation, the latter satisfies strict energy conservation during the energy flow <cit.>, which also implies that no work is performed during the collisions <cit.>. In simpler terms, the mean energy of the interaction is constant, and all energy that leaves the control enters the environment ancilla, and vice versa. Finally, starting with the post-QS state in Eq. (<ref>) and considering the initial state ρ_C = |+⟩⟨+|_C for the control, the difference equation in Eq. (<ref>) can be solved, leading to the following composite system-control state after n collisions: ρ_SC^n = ℬ_++(n) ⊗ |+⟩⟨+|_C + ℬ_+-(n) ⊗ |+⟩⟨-|_C + ℬ_-+(n) ⊗ |-⟩⟨+|_C + ℬ_--(n) ⊗ |-⟩⟨-|_C, where ℬ_+-(n) = ℬ_-+^†(n) ≡ e^inτω cos^n(gτ) U_S^n A_+- U_S^†n and ℬ_±±(n) ≡ (1/2){1 ± f_E[1 - cos^2n(gτ)]} A_def^n ± (1/2) cos^2n(gτ) A_indef^n, with U_S ≡ exp(-iτ H_S) being the time-evolution operator of the system, A_def^n ≡ U_S^n A_def U_S^†n, A_indef^n ≡ U_S^n A_indef U_S^†n, and f_E ≡ tanh(β_E ω/2). Note that A_def^n and A_indef^n are the versions of A_def and A_indef evolved over the elapsed time nτ, respectively. Eq. (<ref>) constitutes our main result, as it gives the joint system-control state after n collisions, for any implementation of the 2-quantum switch with its control affected by the environment. Hence, we can now examine how this open control affects the operation of the QS. Note that Eq. (<ref>) is analogous to Eq. (<ref>), differing only by the change A_xy → ℬ_xy(n). Given that this is the only formal change on ρ_SC^ℳ↔𝒩 due to the open control, we can understand the effect of the environment by analyzing how these coefficients ℬ_xy behave and how they compare to A_xy. Before we do so, first note that ℬ_xy(0) = A_xy, so the limit with the traditional closed control in Eq. (<ref>) is reproduced when there are no collisions. Second, the local state of the system after n collisions is given by ρ_S^n = Tr_C{ρ_SC^n} = ℬ_++(n) + ℬ_--(n) = A_def^n = (1/2) U_S^n [𝒩∘ℳ(ρ_S) + ℳ∘𝒩(ρ_S)] U_S^†n for all n, where the last equality comes from Eq. (<ref>) and the definition of A_def^n. This shows that ρ_S^n corresponds to the mixture of causally ordered quantum maps, unitarily evolved according to the local Hamiltonian, as expected. Third, in the limit of many collisions, n→∞, the joint system-control state is lim_n→∞ ρ_SC^n = [lim_n→∞ ℬ_++(n)] ⊗ |+⟩⟨+|_C + [lim_n→∞ ℬ_--(n)] ⊗ |-⟩⟨-|_C = (lim_n→∞ A_def^n) ⊗ ((1+f_E)/2 |+⟩⟨+|_C + (1-f_E)/2 |-⟩⟨-|_C) = ρ_S^∞ ⊗ Θ_β_E, where we used that cos(gτ) < 1 (recall that 0 < gτ ≪ 1) to calculate the limits of ℬ_xy(n), and Θ_β_E corresponds to the thermal state of the control at inverse temperature β_E. That is, the joint state becomes a product one, showing that the correlations are suppressed in the asymptotic limit, with the control system being in the thermal state, as expected. §.§ Post-selection Since the local state of the system S can only carry effects of SCO with a post-selection of the control, we now focus on the effect of the environment on such post-selection (again, in the |±⟩_C basis). The equations that give us the post-selection probabilities and conditional states can be obtained directly from Eq. (<ref>) (and by direct analogy with Eqs.
(<ref>) and (<ref>)), i.e., p_post^n(±) = Tr{(1_S ⊗ |±⟩⟨±|_C) ρ_SC^n} = Tr{ℬ_±±(n)}, and ρ_S,±^n = [p_post^n(±)]^-1 ℬ_±±(n), respectively. We note that Eqs. (<ref>) and (<ref>) make explicit what was already anticipated: in order to understand how SCO is affected by the environment and the different parameters of this interaction (such as the number of collisions, temperature, etc.), we only need to analyze how these parameters change the operators ℬ_±±. To do so, we first rewrite these as ℬ_±±(n) = (b_def^±(n,f_E,gτ)/2) A_def^n + (b_indef^±(n,gτ)/2) A_indef^n, where, by Eq. (<ref>), b_def^±(n,f_E,gτ) := 1 ± f_E[1 - cos^2n(gτ)], b_indef^±(n,gτ) := ± cos^2n(gτ). Eq. (<ref>) shows that b_indef^± modulates the impact of SCO in the post-selection: the higher |b_indef^±|, the higher the SCO effect of a given QS; whenever this term is null, there is no SCO effect at all. In fact, we can get valuable information from the dependence of b_indef^± on n: collisions monotonically decrease the effect of SCO. Indeed, since 0 < gτ ≪ 1, |b_indef^±(n+1,gτ)| < |b_indef^±(n,gτ)|. Moreover, the maximum value of |b_indef^±|, obtained at n = 0, is |b_indef^±(n=0,gτ)| = 1, while in the asymptotic limit we get b_indef^±(n→∞,gτ) = 0, in accordance with Eq. (<ref>). This qualitative behavior of |b_indef^±(n,gτ)| is depicted in Fig. <ref>. Surprisingly, neither the temperature T_E of the bath nor the frequency ω impacts the presence or absence of SCO effects. Indeed, b_indef^±(n,gτ) has no dependence on these parameters whatsoever. Such independence, however, does not imply that these quantities have no effect on the post-selection as a whole. Indeed, b_def^± is affected by these parameters through f_E, which, in turn, impacts ℬ_±±, reflecting on the post-selection probability p_post^n(±) and the conditional states ρ_S,±^n. Using the fact that Tr{A_indef^n} = Tr{A_indef} and Tr{A_def^n} = 1, Eq. (<ref>) can be rewritten as p_post^n(±) = (1/2)[b_def^±(n,f_E,gτ) + b_indef^±(n,gτ) Tr{A_indef}]. Also, with Eqs. (<ref>) and (<ref>), we have ρ_S,±^n = [p_post^n(±)]^-1 [(b_def^±(n,f_E,gτ)/2) A_def^n + (b_indef^±(n,gτ)/2) A_indef^n]. Let us look into the impact of the two extreme temperature regimes on these objects. The most interesting case is the condition T_E → 0 (i.e., β_E → ∞ and f_E → 1), particularly for the post-selection on |-⟩_C. In this case, it can be seen that b_def^- = -b_indef^-. Then, from Eq. (<ref>), we get lim_β_E→∞ ρ_S,-^n = b_def^- (A_def^n - A_indef^n)/(b_def^- (1 - Tr{A_indef})) = A_--^n/Tr{A_--^n}. Therefore, independently of the particular channels inside of the QS, the conditional state ρ_S,-^n is completely shielded from the environmental interactions. After a post-selection on |-⟩_C, the obtained state of S is oblivious to the impact of the environment on the control, being the result of local evolution independently of the number of collisions. This limit must be considered with care, though, since the probability of attaining such a post-selection goes to zero as n increases, since lim_β_E→∞ p_post^n(-) = cos^2n(gτ)(1 - Tr{A_indef})/2 = cos^2n(gτ) p_post^0(-), where p_post^0(-) is the probability of post-selection without any environmental action on the control[Intuitively, in the limit T_E → 0, the bath is initialized in |+⟩⟨+|_E, which induces thermalization of the control toward the state |+⟩⟨+|_C, reducing the probability of post-selection on |-⟩_C.]. The low-temperature regime does not provide such a shielding effect in the case of post-selection in the |+⟩_C outcome, which occurs with increasing probability as n grows.
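The decay laws above are straightforward to tabulate. The short sketch below, with purely illustrative values of gτ, f_E, and Tr{A_indef} (which in practice depend on the specific switch), reproduces the monotonic suppression of b_indef^± and the drift of the post-selection probability toward its thermal value.

```python
import numpy as np

def b_def(n, f_E, gt, sign):     # 1 ± f_E [1 - cos^{2n}(g*tau)]
    return 1.0 + sign * f_E * (1.0 - np.cos(gt) ** (2 * n))

def b_indef(n, gt, sign):        # ± cos^{2n}(g*tau)
    return sign * np.cos(gt) ** (2 * n)

n = np.arange(0, 2001)
gt = 0.05                        # illustrative g*tau
f_E = np.tanh(0.5)               # illustrative tanh(beta_E * w / 2)
trA = 0.3                        # hypothetical Tr{A_indef} of a given switch
p_minus = 0.5 * (b_def(n, f_E, gt, -1) + b_indef(n, gt, -1) * trA)
print(abs(b_indef(n, gt, +1))[[0, -1]])        # decays monotonically from 1 toward 0
print(p_minus[0], p_minus[-1], (1 - f_E) / 2)  # p^n(-) -> (1 - f_E)/2 as n grows
```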
Indeed, in the low-temperature limit one finds b_def^+ = 2 - b_indef^+, and substituting this into Eq. (<ref>) shows that there is always an impact of the environmental interactions on the state ρ_S,+^n. Specifically, as we already know, the SCO contribution is suppressed with increasing n, and only definite-order terms survive. Finally, in the T_E → ∞ case (i.e., β_E → 0 and f_E → 0), b_def^± = 1 for all values of n, implying that the definite-order terms are not affected by the interactions. The latter only suppress the SCO contributions to ρ_S,±^n, thus providing no shielding effect. In contrast to the T_E → 0 regime, in which the probability of the post-selection in the |-⟩_C state always decreases with n, in the high-temperature regime the collisions may affect the post-selection probabilities in different ways depending on the model. That is, whether p_post^n(+) or p_post^n(-) increases with n depends on the particular implementation of the QS, as it depends on the sign of Tr{A_indef}, as evidenced by lim_β_E→0 p_post^n(±) = (1 + b_indef^± Tr{A_indef})/2. In Fig. <ref>, we show some examples of the behavior of b_def^± with temperature, where an intermediate case and the limiting cases described above can be seen. In this section, we have analyzed the general, model-independent results coming from Eq. (<ref>). However, the expressions for the conditional states and the actual value of p_post^n depend on the particular implementation of the QS. These results are organized in Table <ref>. To illustrate our results, we present two particular examples in the Appendices, namely, a QS of MUB-monitoring channels and the QS-driven refrigerator presented in Ref. <cit.>. § DISCUSSION In this work, we have characterized the environment-induced instabilities mediated by the control system in the QS of two arbitrary quantum operations. Having an open control (with the Jaynes–Cummings-like interaction presented here) always negatively impacts the contribution of SCO in a QS. Perhaps surprisingly, the presence of SCO contributions is not affected by the temperature, independently of the particular QS being implemented. However, the bath properties and coupling specifics (T_E, ω, and gτ) do affect the post-selection probabilities and conditional states. In the low-temperature case, the bath induces an interesting asymmetry, where ρ_S,-^n is shielded from the impact of the environment—even though such a post-selection becomes more unlikely with each additional collision. However, the favored outcome in the low-temperature regime, |+⟩_C, is always affected by the collisions, which suppress SCO. Both outcomes are similarly affected in the high-temperature regime, with SCO being suppressed and the two post-selection outcomes becoming equally likely in the limit n→∞, as summarized in Table <ref>. Therefore, in any implementation of the QS where the control may be subject to environmental interactions, one should expect a qualitatively different behavior. From a thermodynamic perspective, the environment auxiliary systems play major roles, i.e., they both exchange an energy amount ΔU_E^n with the control and induce entropic changes. Such couplings are the root of the process of dissipation and irreversibility undergone by the control. As previously mentioned, given the Hamiltonian structure of Eq. (<ref>), energy conservation holds during each collision in a way that no work is performed and ΔU_C^n ≡ Tr_C{(ρ_C^n - ρ_C^n-1) H_C} ≡ -ΔU_E^n.
Hence, the total heat transferred to the control after n collisions can be cast as (2/ω) 𝒬_CE^n = (b_def^-(n,f_E,gτ) - 1) + (b_indef^-(n,gτ) + 1) Tr{A_indef}. Therefore, the entropy production of the composite SC state is given by the difference between the entropy change Δ𝒮_SC^n = 𝒮(ρ_SC^n) - 𝒮(ρ_SC^0) and the entropy flux β_E 𝒬_CE^n accompanying the heat <cit.>, i.e., Σ_SC^n = Δ𝒮_SC^n - β_E 𝒬_CE^n, where 𝒮(ρ) ≡ -Tr{ρ ln ρ} is the von Neumann entropy of ρ. This quantifies in thermodynamic terms the irreversibility of the open-system dynamics of the control with the environment. The main result of this work is to provide a methodology for considering an open-control quantum switch that can be employed to analyze the effect of the environmental instabilities on the figures of merit of several quantum-switch-based protocols. To illustrate this, in the Appendices, we employ our framework to analyze the consequences of having an open control in two distinct contexts, namely, the QS of monitorings of mutually unbiased bases (MUBs) and a quantum refrigerator induced by SCO <cit.>. In particular, we consider the available information after n collisions and post-selection of the control, ℐ(ρ_S,±^n) = ln d_S - 𝒮(ρ_S,±^n), where d_S is the dimension of the Hilbert space of the system ℋ_S. This quantity is shown to have asymmetric behaviors depending on the post-selection outcome, i.e., while a post-selection in |+⟩_C preserves the monotonically decreasing relation with the measurement strength of each map, the |-⟩_C post-selection breaks such monotonicity. The significant instabilities identified in this study, particularly in the asymmetric input-output configuration of the control, hold special relevance for protocols relying solely on this post-selection, such as the refrigerator induced by SCO <cit.>. In this model, detailed in the Appendices, we demonstrate how these instabilities consistently degrade its performance. An intriguing avenue for future research would involve developing protocols resilient to such instabilities or considering whether they can be identified as an additional resource. Moreover, our collisional model can also be adjusted to incorporate features such as non-thermal baths with quantum coherence <cit.> and non-Markovian interactions <cit.>, for instance. § MONITORING OF MUTUALLY UNBIASED BASES Monitoring maps are CPTP maps that interpolate between weak and strong non-selective measurements. They can be defined as <cit.> ℳ_𝒪^ϵ(ρ_S) ≡ (1-ϵ) ρ_S + ϵ Φ_𝒪(ρ_S), where 0 ⩽ ϵ ⩽ 1 is the measurement strength and the map Φ_𝒪 is a dephasing of system S in the eigenbasis of the operator 𝒪 = ∑_α α 𝒪_α, i.e., Φ_𝒪(ρ_S) ≡ ∑_α 𝒪_α ρ_S 𝒪_α = ∑_α p_α 𝒪_α with p_α = Tr{𝒪_α ρ_S}, and the 𝒪_α are projectors such that 𝒪_α 𝒪_α' = δ_αα' 𝒪_α. The map Φ_𝒪 can be interpreted as the projective measurement of the observable 𝒪 without having its outcome revealed. A possible choice of Kraus decomposition for this operation is K_0 = √(1-ϵ) 1_S and K_j = √(ϵ) 𝒪_j. These maps satisfy the property ℳ_𝒪^ϵ ∘ ℳ_𝒪^ϵ'(ρ) = ℳ_𝒪^ϵ”(ρ), where ϵ” = ϵ + ϵ' - ϵϵ' <cit.>. Before discussing the application of these channels in SCO, it is worth noting that the above operations always decrease the amount of information in the reference frame of the system. By the concavity of the von Neumann entropy, it is easy to see that 𝒮(ℳ_𝒪^ϵ(ρ_S)) ⩾ (1-ϵ)𝒮(ρ_S) + ϵ𝒮(Φ_𝒪(ρ_S)) and ℐ(ρ_S) - ℐ(ℳ_𝒪^ϵ(ρ_S)) ⩾ ϵ ℭ_𝒪(ρ_S) ⩾ 0, where ℭ_𝒪(ρ_S) ≡ 𝒮(Φ_𝒪(ρ_S)) - 𝒮(ρ_S) is the relative entropy of coherence associated with the observable 𝒪.
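A minimal numerical sketch of such a monitoring channel is given below, taking 𝒪 = σ_z for concreteness; it verifies the composition rule ϵ'' = ϵ + ϵ' - ϵϵ' stated above. The test state and strengths are arbitrary illustrative values.

```python
import numpy as np

def monitoring(rho, projs, eps):
    """M_O^eps(rho) = (1 - eps) rho + eps * sum_a P_a rho P_a."""
    dephased = sum(P @ rho @ P for P in projs)
    return (1.0 - eps) * rho + eps * dephased

# Composition check: M^eps2 o M^eps1 = M^{eps1 + eps2 - eps1*eps2}  (O = sigma_z)
P = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)    # |+><+|, a test state
e1, e2 = 0.3, 0.4
lhs = monitoring(monitoring(rho, P, e1), P, e2)
rhs = monitoring(rho, P, e1 + e2 - e1 * e2)
print(np.allclose(lhs, rhs))                             # True
```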
As a consequence of the above inequality, the available information exhibits a monotonic relation as a function of the monitoring strength <cit.>. Here, we are interested in the scenario in which the operators 𝒪 and 𝒪' are associated with MUBs. In this case, it can be verified that Φ_𝒪 ∘ Φ_𝒪'(ρ_S) = Φ_𝒪' ∘ Φ_𝒪(ρ_S) = 1_S/d_S. This implies that two consecutive monitorings of MUBs commute, in the sense that ℳ_𝒪'^ϵ' ∘ ℳ_𝒪^ϵ(ρ_S) = ℳ_𝒪^ϵ ∘ ℳ_𝒪'^ϵ'(ρ_S) for any measurement strengths ϵ and ϵ'. The output state reads ℳ_𝒪'^ϵ' ∘ ℳ_𝒪^ϵ(ρ_S) = (1-ϵ)(1-ϵ') ρ_S + ϵ(1-ϵ') Φ_𝒪(ρ_S) + ϵ'(1-ϵ) Φ_𝒪'(ρ_S) + ϵϵ' 1_S/d_S. Employing the concavity of the von Neumann entropy, it follows that ℐ(ρ_S) - ℐ(ℳ_𝒪'^ϵ' ∘ ℳ_𝒪^ϵ(ρ_S)) ⩾ ϵϵ' ℐ(ρ_S) + ϵ(1-ϵ') ℭ_𝒪(ρ_S) + ϵ'(1-ϵ) ℭ_𝒪'(ρ_S), which is a non-negative quantity, given the positivity of the available information, of the relative coherence for each basis, and 0 ⩽ ϵ, ϵ' ⩽ 1. Therefore, we conclude that ℐ(ρ_S) ⩾ ℐ(ℳ_𝒪'^ϵ' ∘ ℳ_𝒪^ϵ(ρ_S)), which implies the monotonicity of the available information under consecutive monitorings. For simplicity, from now on we set ϵ = ϵ' to discuss the quantum switch of these monitoring maps. Consider the general equation after the QS and n collisions of the control with an environment at inverse temperature β_E, Eq. (<ref>), with ℳ = ℳ_𝒪^ϵ and 𝒩 = ℳ_𝒪'^ϵ (same monitoring strength ϵ = ϵ'), where the operators are 𝒪 = ∑_i α_i 𝒪_i and 𝒪' = ∑_i α_i' 𝒪'_i (𝒪_i = |o_i⟩⟨o_i| and 𝒪'_i = |o_i'⟩⟨o_i'| are projectors onto the bases {|o_i⟩}_i and {|o'_i⟩}_i, respectively). In this case, M_0 = √(1-ϵ) 1_S, M_j = √(ϵ) 𝒪_j, N_0 = √(1-ϵ) 1_S, and N_j = √(ϵ) 𝒪'_j. A trivial case is when 𝒪' = 𝒪. Because of the property in Eq. (<ref>), it can be checked that the switch map reduces to ℳ_𝒪^ϵ' ⊗ 1_C, where ϵ' = 2ϵ - ϵ^2. This conclusion and the property in Eq. (<ref>) might wrongly suggest that a similar result holds when the eigenbases of 𝒪 and 𝒪' are MUBs, for which ⟨o_i|o_j'⟩ = e^iϕ_ij/√(d_S). However, this is not the case. Indeed, from Eqs. (<ref>), (<ref>), and (<ref>), we have ρ_S,±^n = [(b_def^±(n,f_E,gτ) + b_indef^±(n,gτ))/(2 p_post^n(±))] U_S^n ℳ_𝒪'^ϵ ∘ ℳ_𝒪^ϵ(ρ_S) U_S^†n + [ϵ^2 b_indef^±(n,gτ)/(2 d_S p_post^n(±))] [ (1/2) ∑_i,j U_S^n (e^2iϕ_ij |o_i⟩⟨o_j'| ρ_S |o_i⟩⟨o_j'| + h.c.) U_S^†n - 1_S ], where p_post^n(±) = Tr{ℬ_±±(n)} = b_def^±(n,f_E,gτ)/2 + (b_indef^±(n,gτ)/2) Re{χ} with Re{χ} = Tr{A_indef} and χ = (1-ϵ)^2 + 2ϵ(1-ϵ) + (ϵ^2/d_S^3/2) ∑_i,j e^iϕ_ij ⟨o_j'|ρ_S|o_i⟩. This result is valid for any finite-dimensional system. For simplicity, in the specific case that the system is a qubit in the initial state ρ_S = |+⟩⟨+|_S, the Hamiltonian is diagonal in the σ_x basis, and the observables are 𝒪 = σ_z and 𝒪' = σ_x, the state post-QS, collisions, and post-selection reads ρ_S,±^n = [1/(2 p_post^n(±))] [ (1-ϵ/2)(b_def^±(n,f_E,gτ) + b_indef^±(n,gτ)) |+⟩⟨+|_S + (ϵ/2)(b_def^±(n,f_E,gτ) + b_indef^±(n,gτ)(1-ϵ)) |-⟩⟨-|_S ] with p_post^n(±) = (1/2)[b_def^±(n,f_E,gτ) + b_indef^±(n,gτ)(1-ϵ^2/2)]. One then sees that the post-selected state is diagonal in the eigenbasis of σ_x. Finally, we calculate the available information to analyze how much information is stored in this state as the number of collisions increases for different measurement strengths. Using Eq. (<ref>) we get ℐ(ρ_S,±^n) = ln 2 - [1/(2 p_post^n(±))] [ (ϵ/2 - 1)(b_def^±(n,f_E,gτ) + b_indef^±(n,gτ)) ln( (1-ϵ/2)(b_def^±(n,f_E,gτ) + b_indef^±(n,gτ))/(2 p_post^n(±)) ) - (ϵ/2)(b_def^±(n,f_E,gτ) + b_indef^±(n,gτ)(1-ϵ)) ln( (ϵ/2)(b_def^±(n,f_E,gτ) + b_indef^±(n,gτ)(1-ϵ))/(2 p_post^n(±)) ) ], whose direct interpretation is not straightforward, so we shall analyze it with the help of plots. In Fig.
(<ref>) we plot the available information ℐ(ρ_S,+^n)—the control is found to be in the |+⟩_C state—for an increasing number of collisions at two different temperatures: (a) one order of magnitude above and (b) one order of magnitude below the energy scale of the system. At both temperatures, for zero collisions and ϵ = 1.0, one sees that the QS followed by post-selection secures some information in the state of the system. As the number of collisions increases, the available information decreases monotonically to the lower limit of definite causal order, where ϵ = 1.0 means that no information is left in the state of the system. On the other hand, when the control is found to be in the |-⟩_C state the situation changes dramatically. When plotting ℐ(ρ_S,-^n) in Fig. <ref>, also for (a) high and (b) low temperatures and different numbers of collisions, the anticipated consequences previously discussed are observed. Already at high temperatures, when the number of collisions is low (with ρ_S,-^0 = |-⟩⟨-|_S as an exceptional case), the available information is non-monotonic with ϵ, eventually reaching monotonicity for a high number of collisions, for which the available information coincides with the definite-order scenario. However, when the temperature is low, the available information has a valley for a small value of ϵ and grows back to the maximum value for increasing measurement strength. This increase becomes slower for more collisions, such that, for a high number of collisions (n ≳ 300), we have monotonicity in ϵ and the curves approach the definite-order behavior. It is worth noting that, following Eq. (<ref>), we have ρ_S,-^n ≡ |-⟩⟨-|_S for arbitrary n. This state has maximum available information, ℐ(|-⟩⟨-|_S) = ln 2. This means that, as the temperature approaches zero, the probability of projecting the control on |-⟩_C is suppressed, and at the same time the anticipated shielding effect presented in Table <ref> is observed. § QS-BASED REFRIGERATOR In the context of quantum thermodynamics, a QS has been employed to design a refrigerator cycle <cit.>. To consider the effect of an external environment, here we present a modified version of the proposed cycle, i.e., we add a step in which the control interacts with a thermal environment within the collisional-model paradigm. Such an interaction takes place right after the switch is performed and before the measurement of the control qubit (see the Supplemental Material for more details). Let us assume the system S is a qubit described by a Hamiltonian H_S = -ω_S σ_z^S/2. Initially, the system is prepared in a thermal state at the inverse temperature β_cold relative to a reference cold bath, i.e., Θ_β_cold = exp(-β_cold H_S)/Z_S^cold, where Z_S^cold = Tr{exp(-β_cold H_S)} is the partition function. The system is then brought together with a control auxiliary system C in the ground state of -σ_x (i.e., |+⟩⟨+|), which plays the role of the degree of freedom conducting the causal order of two identical thermalization maps with cold baths characterized by β_cold (the protocol requires at least two baths to operate). Hence, the composite system-control state pre-QS is given by the following product state: Θ_β_cold ⊗ |+⟩⟨+|_C. Then, the QS is applied to the target system according to Eq. (<ref>), with M_i = N_i = √(Θ_β_cold/2) U_i, where the {U_i}_i form a set of orthogonal unitary operators. Following the procedure in Ref.
<cit.>, the state post-QS is given by ρ_SC^0 = (1/2)(Θ_β_cold ⊗ 1_C + Θ_β_cold^3 ⊗ σ_x), where the upper index 0 denotes that this is the state before the open dynamics of the control. As a next step, we consider that, before measuring C and post-selecting the state of S, the control interacts with a thermal bath at inverse temperature β_E for a certain time. Here we follow the guidelines presented in the main text, with all the local Hamiltonians and interactions introduced there. The final composite system-control state after n collisions is then found to be equal to ρ_SC^n = (1/2) Θ_β_cold ⊗ [1_C + (1 - b_def^-(n,f_E,gτ)) σ_x] - (1/2) b_indef^-(n,gτ) Θ_β_cold^3 ⊗ σ_x, where f_E ≡ tanh(β_E ω/2). Subsequently, C is measured in the {|+⟩_C, |-⟩_C} basis, resulting in two possible procedure branches. On the one hand, if one measures |+⟩⟨+|_C, the post-selected state of the system ρ_S,+^n is classically thermalized with the cold bath at β_cold and the cycle is repeated. On the other hand, in case one measures |-⟩⟨-|_C, the post-selected state of the system ρ_S,-^n goes through two consecutive classical thermalizations with a hot and a cold bath, respectively characterized by β_hot and β_cold, s.t. β_cold > β_hot. This last step closes the cycle in this branch, which allows the repetition of the whole procedure. The post-measurement state of the system after n collisions is written as ρ_S,±^n = [Θ_β_cold/(2 p_post^n(±))] [b_def^±(n,f_E,gτ) + b_indef^±(n,gτ) Θ_β_cold^2], with measurement probability given by p_post^n(±) = b_def^±(n,f_E,gτ)/2 + (b_indef^±(n,gτ)/2)(1 - (3/4) sech^2(β_cold ω_S/2)). The refrigerator works by effectively removing energy from the cold reservoir in a cyclic manner. The energetic exchange from each step can be straightforwardly computed, as well as its average over many cycles (see the Supplemental Material for more details). In this sense, the average heat transferred from the cold bath is given by Q_n ≡ p_post^n(-) Q_n,-, where Q_n,- ≡ -[ω_S b_indef^-(n,gτ)/(8 p_post^n(-))] tanh(β_cold ω_S/2) sech^2(β_cold ω_S/2) + (ω_S/2)[tanh(β_hot ω_S/2) - tanh(β_cold ω_S/2)] is the heat exchanged in a single cycle of the branch if one measures |-⟩⟨-|_C. Additionally, since the average energetic cost of the measurement is null, the average work expended for running the refrigerator is entirely due to Landauer's erasure process <cit.> once one considers the stored measured information, i.e., 𝒲_n ≡ 𝒲_n^erasure ≡ -(1/β_hot) ∑_k=± p_post^n(k) ln(p_post^n(k)). Note that we consider the erasure to be performed with the accessible hot bath, so the cold one remains unperturbed during such a process and no other bath needs to be included. Along these lines, the efficiency of the refrigerator can be quantified by the coefficient of performance (COP), defined as the ratio of the heat transferred from the cold bath over the work cost, i.e., COP_n ≡ Q_n/𝒲_n = p_post^n(-) Q_n,-/𝒲_n^erasure. Fig. <ref> shows the refrigerator's performance when the control is an open system, for different numbers n of collisions and distinct values of β_E for the external thermal bath, s.t. 0 ⩽ β_E ⩽ β_cold. As expected, such an interaction, characterized by the collisions, decreases the refrigerator's cooling ability, i.e., COP_n < COP_0 for n > 0. From Eq. (<ref>) it is clear that the composite system-control state asymptotically approaches a causally ordered product state as n grows, such that both S and C are locally thermal.
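The COP degradation with n can be sketched numerically from the expressions above. This is a minimal sketch, assuming resonant control and bath (frequency wC) and bath parameters chosen inside the cooling window as illustrative placeholders, not the values used in our figures.

```python
import numpy as np

def b_def_m(n, f_E, gt):                 # b_def^-(n, f_E, g*tau)
    return 1.0 - f_E * (1.0 - np.cos(gt) ** (2 * n))

def b_indef_m(n, gt):                    # b_indef^-(n, g*tau)
    return -np.cos(gt) ** (2 * n)

def cop(n, b_cold, b_hot, b_E, wS=1.0, wC=1.0, gt=0.05):
    f_E = np.tanh(b_E * wC / 2)
    sech2 = 1.0 / np.cosh(b_cold * wS / 2) ** 2
    p_m = 0.5 * (b_def_m(n, f_E, gt) + b_indef_m(n, gt) * (1 - 0.75 * sech2))
    Q_m = (-wS * b_indef_m(n, gt) / (8 * p_m) * np.tanh(b_cold * wS / 2) * sech2
           + 0.5 * wS * (np.tanh(b_hot * wS / 2) - np.tanh(b_cold * wS / 2)))
    p = np.array([p_m, 1.0 - p_m])
    W_erasure = -(p * np.log(p)).sum() / b_hot   # Landauer cost at the hot bath
    return p_m * Q_m / W_erasure

for n in (0, 10, 100, 1000):   # the COP degrades (and cooling eventually fails)
    print(n, cop(n, b_cold=2.0, b_hot=1.0, b_E=1.5))
```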
The open-control dynamics destroy the desired correlations, which decreases the amount of heat extracted from the cold bath due to the application of the QS. Also, the COP decays more slowly for lower environment temperatures, closer to that of the cold bath, which means that the functioning of the refrigerator is more resilient to low-temperature perturbations. Nevertheless, if the control is, in fact, in contact with the cold bath, such that β_E ≡ β_cold, then one should also take into account its energy change. Such a situation is reasonable for settings where the control cannot be fully detached from the other physical systems, particularly from the cold bath under consideration. In this sense, the heat transferred to the control after n collisions with the cold bath is given by q_n = -(3/8) ω sech^2(β_cold ω_S/2) + (1/2) ω b_def^-(n,f_cold,gτ) + (1/2) ω b_indef^-(n,gτ)(1 - (3/4) sech^2(β_cold ω_S/2)). Hence, both the average heat and the COP should be modified, s.t. Q_n' = p_post^n(-) Q_n,- + q_n and COP_n' = p_post^n(-) Q_n,-/𝒲_n^erasure + q_n/𝒲_n^erasure. Note that in such a scenario, one is effectively including C into the working substance of the refrigerator. Fig. <ref> shows how COP_n' behaves in terms of ω and n when the control is explicitly considered. In particular, it is possible to see that the normalized COP remains positive for a specific gap bandwidth. These values correspond to the parameter region where C can extract energy from the cold bath after the switch application, i.e., under these conditions one guarantees a heat flux such that q_n > 0, which assists the cooling process (see the Supplemental Material for more details). The authors recognize the importance of the Quantum Speedup 2023, organized by the ICTQT, for being the event where the initial talks that led to this work started. O.A.D.M. acknowledges the support from the Foundation for Polish Science (IRAP project, ICTQT, contract no. 2018/MAB/5, co-financed by EU within Smart Growth Operational Programme) and the INAQT (International Network for Acausal Quantum Technologies) Network. A.H.A.M. acknowledges the support from the Polish National Science Centre grant OPUS-21 (No: 2021/41/B/ST2/03207). R.D.B. acknowledges support by the Digital Horizon Europe project FoQaCiA, Foundations of quantum computational advantage, GA No. 101070558, funded by the European Union. A.C.O.J. acknowledges the support from the QuantERA II Programme (VERIqTAS project) that has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No 101017733 and from the Polish National Science Center (project No 2021/03/Y/ST2/00175). I.L.P. acknowledges financial support from the ERC Advanced Grant FLQuant. P.R.D. acknowledges support from the NCN Poland, ChistEra-2023/05/Y/ST2/00005 under the project Modern Device Independent Cryptography (MoDIC). § SUPPLEMENTAL MATERIAL §.§ Refrigerator The flowchart describing the main steps of the refrigeration cycle introduced in the Appendices is shown in Fig. <ref>. §.§.§ Heat The refrigeration function is achieved whenever the average energy exchanged by the cold bath is positive, i.e., Q_n = p_post^n(-) Q_n,- + p_post^n(+) Q_n,+ > 0, where p_post^n(±) is the probability of measuring the state |±⟩⟨±|_C and Q_n,± is the net heat from the positive (negative) branch of Fig. <ref>, which consists of the system's energy change due to the measurement of C and the classical thermalization undergone by S. Such a condition constrains the parameter-space region for a functioning refrigerator.
On the one hand, for the positive branch one can show the net energy change is null, s.t. Q_n,+≡Tr_S[(ρ_S,+^n-Θ_β_cold)H_S]+Tr_S[(Θ_β_cold-ρ_S,+^n)H_S] =0. On the other hand, for the negative branch one obtains Q_n,-≡Tr_S[(ρ_S,-^n-Θ_β_cold)H_S]+Tr_S[(Θ_β_cold-Θ_β_hot)H_S] = -ω_S b_indef^-(n,gτ)/8p_post^n(-)tanh(β_coldω_S/2)sech^2(β_coldω_S/2) +1/2[tanh(β_hotω_S/2)-tanh(β_coldω_S/2)] with β_cold > β_hot. Thus, the average heat transferred from the cold bath becomes Q_n=p^n_post(-)Q_n,-. Along these lines, for closed control (n=0), the refrigeration condition simply becomes Q_0= -ω_S(tanh(β_coldω_S/2)-3tanh(β_hotω_S/2))/8(cosh(β_coldω_S)+1)>0. Fig. <ref> shows the parameter-space region satisfying the previous condition for ω_S=1. The black continuous and dashed lines highlight the area where Q_0>0 and β_cold > β_hot are simultaneously satisfied.

§.§.§ Work
Additionally, the average energetic cost of running such a refrigerator is due to the measurements performed at the control, i.e., to the work expense for measuring C, and to Landauer's erasure of the stored information in a register. The former is given by 𝒲_n,±≡Tr_C[(|±⟩⟨±|_C-ρ_C^n)H_C]=ω/2(2p_post^n(+)-1∓1), such that p^n_post(-)𝒲_n,-+p^n_post(+)𝒲_n,+=0. The latter is defined as 𝒲_n^erasure≡ -1/β_regΔ S_n,reg=-1/β_reg∑_k=±p^n_post(k)ln(p^n_post(k)), where S_n,reg is the Shannon entropy of the register and β_reg is the inverse temperature of the bath used to reset the register. Since the thermodynamic cycle consists of two thermal baths, we will consider β_reg≡β_hot such that the cold one remains unperturbed during this process. Therefore, the average energetic cost of the refrigerator can be written as Eq. (<ref>).

§.§.§ Control heat
Eq. (<ref>) quantifies the amount of heat transferred from the cold bath to the control after n collisions. Fig. <ref> shows q_n for n=100, and highlights the refrigeration working region in terms of ω and β_hot⩽β_cold⩽ 5. The cooling area is characterized by q_n>0, such that heat is extracted from the cold bath.
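As a quick numerical cross-check of the closed-control condition above, the following minimal sketch scans the (β_hot, β_cold) plane for Q_0 > 0 together with β_cold > β_hot, taking ω_S = 1 as in the figure described above; the grid and scan ranges are illustrative assumptions, not values from the text.

```python
import numpy as np

def Q0(b_cold, b_hot, w_S=1.0):
    # Average heat drawn from the cold bath per cycle for closed control (n = 0),
    # as given in the refrigeration condition quoted above.
    return -w_S * (np.tanh(b_cold * w_S / 2) - 3 * np.tanh(b_hot * w_S / 2)) \
           / (8 * (np.cosh(b_cold * w_S) + 1))

# Scan inverse temperatures (ranges are illustrative assumptions).
b_hot, b_cold = np.meshgrid(np.linspace(0.01, 5, 400), np.linspace(0.01, 5, 400))
cooling = (Q0(b_cold, b_hot) > 0) & (b_cold > b_hot)  # both conditions required
print(f"fraction of the scanned plane where the cycle cools: {cooling.mean():.3f}")
```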
Quantum Dynamics in Krylov Space: Methods and Applications
Pratik Nandy, Apollonas S. Matsoukas-Roubeas, Pablo Martínez-Azcona, Anatoly Dymarsky, and Adolfo del Campo
Center for Gravitational Physics and Quantum Information, Yukawa Institute for Theoretical Physics, Kyoto University, Kitashirakawa Oiwakecho, Sakyo-ku, Kyoto 606-8502, Japan; RIKEN Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), Wako, Saitama 351-0198, Japan; Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg, Luxembourg; Department of Physics and Astronomy, University of Kentucky, Lexington, KY 40506, USA; Donostia International Physics Center, E-20018 San Sebastián, Spain
qdks.review@gmail.com
RIKEN-iTHEMS-Report-24

The dynamics of quantum systems unfolds within a subspace of the state space or operator space, known as the Krylov space. This review presents the use of Krylov subspace methods to provide a compact and computationally efficient description of quantum evolution, with emphasis on nonequilibrium phenomena of many-body systems with a large Hilbert space. It provides a comprehensive update of recent developments, focused on the quantum evolution of operators in the Heisenberg picture as well as pure and mixed states. It further explores the notion of Krylov complexity and associated metrics as tools for quantifying operator growth, their bounds by generalized quantum speed limits, the universal operator growth hypothesis, and its relation to quantum chaos, scrambling, and generalized coherent states. A comparison of several generalizations of the Krylov construction for open quantum systems is presented. A closing discussion addresses the application of Krylov subspace methods in quantum field theory, holography, integrability, quantum control, and quantum computing, as well as current open problems.

Krylov subspace methods constitute an essential toolbox in scientific computing <cit.>. The core idea behind them is to project a high-dimensional problem onto a lower-dimensional Krylov subspace, thus making the problem more tractable <cit.>. The solution or approximation is then sought within this subspace, easing the computational resources required for solving the problem. As such, they are suited for tackling large-scale linear algebra problems, which are ubiquitous in science and engineering <cit.>. Their use is common for solving linear systems, eigenvalue problems, and estimating spectral widths, among other applications. These methods leverage the properties of Krylov subspaces to approximate solutions. They are especially efficient for sparse or structured matrices where direct methods are computationally impractical <cit.>. Krylov subspace methods have become increasingly relevant in the study of classical and quantum many-body systems, where they are also known as the recursion method <cit.>. For quantum systems, the time evolution is described by a trajectory of the quantum state in Hilbert space. Its dimension scales exponentially with the system size, motivating the quest for more efficient descriptions. This is apparent in the evolution of an isolated system generated by a Hamiltonian according to the Schrödinger equation.
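To make the projection idea concrete, here is a minimal, self-contained sketch (not taken from the review; the random test Hamiltonian, subspace dimension, and tolerances are illustrative assumptions) that approximates the Schrödinger propagation e^{-iHt}|ψ⟩ within an m-dimensional Krylov subspace built by the Lanczos recursion, instead of diagonalizing the full Hamiltonian.

```python
import numpy as np

def krylov_propagate(H, psi0, t, m=40):
    """Approximate exp(-1j*H*t) @ psi0 in an m-dimensional Krylov subspace."""
    d = len(psi0)
    m = min(m, d)
    V = np.zeros((d, m), dtype=complex)
    alpha, beta = np.zeros(m), np.zeros(max(m - 1, 1))
    V[:, 0] = psi0 / np.linalg.norm(psi0)
    k = m
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w = w - alpha[j] * V[:, j] - (beta[j - 1] * V[:, j - 1] if j > 0 else 0)
        w = w - V[:, :j + 1] @ (V[:, :j + 1].conj().T @ w)  # re-orthogonalize
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:        # invariant subspace found: stop early
                k = j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    evals, S = np.linalg.eigh(T)       # small tridiagonal problem
    coeff = S @ (np.exp(-1j * evals * t) * S[0].conj())  # e^{-iTt} applied to e_0
    return V[:, :k] @ coeff * np.linalg.norm(psi0)

# Illustrative test: a dense random symmetric matrix as a stand-in Hamiltonian.
rng = np.random.default_rng(0)
d = 400
A = rng.normal(size=(d, d))
H = (A + A.T) / np.sqrt(2 * d)
psi0 = np.zeros(d, dtype=complex); psi0[0] = 1.0
approx = krylov_propagate(H, psi0, t=2.0, m=40)
E, U = np.linalg.eigh(H)
exact = U @ (np.exp(-1j * E * 2.0) * (U.conj().T @ psi0))
print("Krylov-subspace error:", np.linalg.norm(approx - exact))
```

For m much smaller than the Hilbert-space dimension, the error of such projections typically decays rapidly with m, which is what makes them computationally attractive for large systems.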
Krylov subspace methods offer a powerful approach by identifying the minimal subspace in which the dynamics unfolds, without the need to fully diagonalize the Hamiltonian and explicitly store the quantum state. Their formulation for Heisenberg-picture evolution is also frequent in this context and has long proved useful in the study of correlation functions, linear response theory, spectral functions, and other equilibrium properties. For any initial state or observable, the action of the Liouvillian as the generator of time evolution can be encoded in a tridiagonal matrix. The powers of the Liouvillian acting on an initial vector remain linearly independent up to a given order and span the subspace in which the vector evolves with time. Any set of such linearly independent vectors can be transformed into an orthonormal basis, the Krylov basis <cit.> (or the Lanczos basis, as it is known in the literature on numerical analysis <cit.>). In this way, any unitary quantum evolution can be mapped to a one-dimensional hopping problem in the so-called Krylov lattice <cit.>. Further developments have been spurred by the study of ergodic behavior and thermalization in nonequilibrium isolated quantum systems <cit.> and its relation to quantum chaos <cit.>. Leveraging the notion of Lyapunov exponents in classical chaos, early studies focused on the sensitivity of the quantum state evolution to external perturbations, as captured, e.g., by the Loschmidt echo and related fidelity measures <cit.>. A bound on quantum chaos was introduced by shifting the emphasis to the time evolution of operators. A quantum analog of the Lyapunov exponent was introduced by analyzing out-of-time-order correlators (OTOCs), and a universal bound on its value was proposed in <cit.>. However, its existence relies on the exponential behavior of OTOCs as a function of time, which is restricted to systems with a small parameter, such as large N theories in the semiclassical regime. Alternative approaches are thus needed for generic many-body systems beyond the semiclassical approximation. Progress to this end harnesses the notion of operator growth <cit.>. In a many-body system, an operator is said to be local if its support is restricted to a small part of the system. Under unitary time evolution, a simple operator becomes increasingly nonlocal and scrambled, acting nontrivially all over the system. The information needed to fully characterize the operator grows exponentially in time. This exponential growth prevents an exact description of the operator in a generic system. Yet, it allows for a hydrodynamic description in terms of very few quantities, which emerges because the exponentially large operator space behaves as a heat bath. A landmark study by Parker et al. <cit.> used Krylov subspace methods to characterize operator growth. This led to the formulation of the operator growth hypothesis, relating the Krylov construction to the spectral features of the quantum system and its integrable, nonintegrable, and chaotic character. The work <cit.> also introduced the notion of Krylov complexity, which can be used to bound the quantum Lyapunov exponent and quantify operator growth. A surge of activity has ensued, advancing the understanding of operator growth and generalizing it to quantum field theory and open quantum systems with nonunitary dynamics.
As a result, Krylov subspace methods in quantum dynamics have transcended their early scope as a computational method to provide a fundamental approach to understanding universal dynamical properties and complexity. These developments fulfill in the quantum domain the vision expressed in the correspondence between Lanczos and Einstein regarding the suitability of these methods to do “justice to the inner nature of the problem” <cit.>. In parallel, recent years have witnessed the incipient use of Krylov subspace methods in quantum technologies. Prominently, they are being harnessed in quantum computing for efficient quantum simulation of real and imaginary time evolution and for estimating ground-state, eigenstate, and thermal properties of a problem Hamiltonian <cit.>. Further applications of Krylov subspace methods have occurred in quantum control <cit.> and can be envisioned in other quantum technologies, such as parameter estimation and quantum metrology. The interplay of these ideas creates a fertile ground for further developments and motivates this contribution. This manuscript provides a comprehensive account of quantum dynamics in Krylov space and the role of Krylov complexity in describing quantum processes, with emphasis on methodology. In doing so, we bridge the mathematical foundation of Krylov subspace methods with applications, highlighting challenges and opportunities in advancing quantum science and technology.

§ ELEMENTS OF QUANTUM CHAOS
The study of quantum dynamics in Krylov space is largely motivated by progress in understanding quantum chaos. Much of the background has been reviewed in excellent references <cit.>, and we provide only a succinct account with emphasis on recent developments, discussing a selection of measures to diagnose quantum chaos that are relevant to the study of quantum dynamics in Krylov space.

§.§ Spectral statistics and random matrix theory
Random matrix theory (RMT) finds broad applications in science and engineering <cit.>. A series of landmark works by Wigner <cit.> and Dyson <cit.> introduced RMT in physics. This provided a way to describe the spectra of heavy nuclei and complex quantum systems with minimal information about the underlying Hamiltonian. In doing so, Dyson identified the role played by the symmetries of the system, introducing a classification known as the three-fold way <cit.>. The use of RMT in quantum physics was further spurred by the study of chaos across the quantum-to-classical transition. The Bohigas-Giannoni-Schmit (BGS) conjecture posits that the spectral statistics of quantum chaotic systems are described by RMT <cit.>, while generic integrable systems show no correlations in the spectrum, as stated by the Berry-Tabor conjecture <cit.>. As a result, random matrix Hamiltonians provide a reference framework in the study of quantum chaos <cit.>. Likewise, random matrix Lindbladians play a key role in generalizations of quantum chaos to dissipative quantum systems <cit.>. A central focus of RMT is the characterization of an ensemble of N × N matrices, with probability measure defined as p(H) = 1/Z_β_D exp(-β_D N/4 Tr( V(H) ) ), where Z_β_D serves as the normalization constant and V(H) represents the potential function of the Hamiltonian H.
The choice of a quadratic potential V(H) = H^2 leads to the Gaussian ensembles: the Gaussian Orthogonal Ensemble (GOE) for real symmetric matrices (β_D =1), the Gaussian Unitary Ensemble (GUE) for Hermitian matrices (β_D =2), and the Gaussian Symplectic Ensemble (GSE) for Hermitian quaternionic matrices (β_D =4). These ensembles are named for their invariance under orthogonal, unitary, and symplectic transformations, respectively. In the Gaussian case, the Dyson index β_D also specifies the nature of the matrix elements: real (β_D = 1), complex (β_D = 2), or quaternion (β_D = 4). Specifically, random matrices in the Gaussian ensembles have real-valued diagonal entries a_mm∈𝒩(0,σ^2), where σ represents the standard deviation. The off-diagonal entries with m≠ n are defined as a_mn=e^0_mn for GOE, b_mn=e^0_mn+ie^1_mn for GUE, and e_mn=e^0_mn+ie^1_mn+je^2_mn+ke^3_mn for GSE, respectively. Here, e^l_mn∈𝒩(0,σ^2/2) (l=0,1,2,3), and i, j, k are the basis elements for a quaternion. The joint probability distribution for the eigenvalues λ_i within RMT is given by p(λ_1 , ⋯, λ_N) = 1/Z_β_D e^-β_D N/4∑_k V(λ_k)∏_i<j |λ_i - λ_j|^β_D . In this expression, the exponential term suppresses configurations with widely separated eigenvalues, while the product term ensures that eigenvalues do not cluster too closely <cit.>. Together, these factors critically influence the statistics of the eigenvalues, which lie at the heart of RMT's predictive power.

§.§ Spectral Form Factor
The focus on probing the spectral statistics of many-body systems motivated the introduction of the spectral form factor (SFF) as a diagnostic tool for quantum chaos. The SFF is defined in terms of the Fourier transform of the energy spectrum <cit.>. For an isolated system described by a Hamiltonian with spectral decomposition H=∑_n=1^d E_n|E_n⟩⟨ E_n|, the SFF reads SFF(t)=∑_n,m=1^d G(E_n,E_m)e^-it(E_n-E_m). Here, G(E_n,E_m) represents a spectral filter <cit.>. Its use is ubiquitous in numerical simulations <cit.>. When the filter function factorizes, G(E_n,E_m)=g(E_n)g(E_m), the function g(E_n) acts as an eigenvalue filter, and SFF(t)=|∑_n g(E_n)e^-iE_nt|^2. A filter of the form G(E_n,E_m)=G(E_n-E_m) acts as a frequency filter. However, the original definition of the SFF involved no filtering. The use of a Boltzmann factor as an eigenvalue filter, g(E_n)=exp(-β E_n)/Z(β) with Z(β)=∑_n exp(-β E_n) being the partition function, allows one to write the spectral form factor in terms of its analytic continuation at complex temperature as SFF(t)=|Z(β+it)/Z(β)|^2. This form has been extensively discussed in the context of black hole physics, RMT, and conformal field theory <cit.>. The SFF is generally studied using a Hamiltonian ensemble, e.g., in the context of RMT or disordered systems <cit.>. It is sensitive to the average eigenvalue density ⟨ρ(E)⟩_ℰ=⟨∑_nδ(E-E_n)⟩_ℰ as well as the two-level correlation function of the energy spectrum ⟨ρ(E)ρ(E')⟩_ℰ,c=⟨ρ(E)ρ(E')⟩_ℰ-⟨ρ(E)⟩_ℰ⟨ρ(E')⟩_ℰ, where ⟨∙⟩_ℰ denotes the ensemble average. The SFF contains information about all k-th neighbor level spacings <cit.>. Probes sensitive to higher-order correlation functions of the energy eigenvalues can be built analogously and are related to frame potentials and unitary t-designs <cit.>. The characteristic behavior of the SFF in a quantum chaotic system is shown in Fig. <ref>.
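As a hands-on illustration of the definitions above, the following sketch samples GOE matrices with the Gaussian entries just described and averages the unfiltered, β = 0 spectral form factor |Z(it)/Z(0)|²; the matrix size, number of realizations, and time grid are illustrative assumptions. The resulting curve exhibits the decay, dip, ramp, and plateau described next.

```python
import numpy as np

rng = np.random.default_rng(1)
d, realizations = 200, 200
ts = np.logspace(-1, 3, 300)
sff = np.zeros_like(ts)
for _ in range(realizations):
    A = rng.normal(size=(d, d))
    H = (A + A.T) / 2                       # GOE: off-diagonal variance halved
    E = np.linalg.eigvalsh(H) / np.sqrt(d)  # rescale so the spectral width is O(1)
    sff += np.abs(np.exp(-1j * np.outer(ts, E)).sum(axis=1)) ** 2
sff /= realizations * d ** 2                # SFF(t) = <|Z(it)|^2> / Z(0)^2 at beta = 0
print(sff[0], sff[-1], 1 / d)               # early value ~1; plateau ~1/d (no degeneracies)
```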
The early decay from the initial unit value is governed by the energy fluctuations of the initial state and can be extended and approximated by a Gaussian function solely governed by the average density of states. This is interrupted at the “dip” time, when the contribution of two-level correlations becomes significant, giving rise to a ramp that constitutes a dynamical manifestation of quantum chaos. The ramp is followed by a plateau that approaches the ensemble average value of Z(2β)/Z(β)^2 in the absence of degeneracies. The SFF admits an information-theoretic interpretation as the fidelity between a quantum state and its time evolution. Consider the coherent Gibbs state |ψ(β)⟩=1/√(Z(β))∑_ne^-β E_n/2|n⟩. Then SFF(t)=|⟨ψ(β)|e^-it H|ψ(β)⟩|^2=|Z(β+it)/Z(β)|^2. This is also known as the survival probability (of the coherent Gibbs state) and has been the subject of extensive studies in the context of quantum dynamics <cit.>. It is closely related to the Loschmidt echo <cit.>, and sometimes termed as such. The same identity holds if the thermofield double state |TFD⟩=1/√(Z(β))∑_ne^-β E_n/2|n⟩⊗ |n⟩ is considered as the initial state and evolved under the Hamiltonian H⊗𝕀, i.e., SFF(t)=|⟨TFD|e^-it H⊗𝕀|TFD⟩|^2. The general eigenvalue-filtered SFF is obtained when the initial state is chosen to be |ψ(β)⟩=∑_n√(g(E_n))|n⟩. The fidelity-based interpretation of the SFF motivates its generalization to nonunitary dynamics and open quantum systems. When the time evolution is described by a quantum channel Φ_t(·) (i.e., a completely positive and trace-preserving map), the generalized SFF is then defined as SFF(t)=⟨ψ(β)|Φ_t[|ψ(β)⟩⟨ψ(β)|]|ψ(β)⟩. Such a generalization is particularly suited to probe how the dynamical manifestations of quantum chaos stemming from the Hamiltonian spectral statistics are modified by the nonunitary dynamics <cit.>. Alternative generalizations have been put forward focused on the spectral statistics of the generator of evolution in open quantum systems, e.g., using the Fourier transform of the corresponding complex spectrum <cit.> or its singular values <cit.>. There has been some recent interest in experimentally realizable protocols to measure the SFF directly <cit.>, as well as closely related correlation functions <cit.>. The first successful direct experimental measurements of the SFF have recently been reported <cit.>.

§.§ Out-of-time order correlators (OTOCs) and scrambling
Out-of-time-order correlators (OTOCs) were originally introduced in a quasiclassical theory of superconductivity <cit.> and have become a standard route to a quantum analog of the Lyapunov exponent <cit.>, as well as a natural probe of quantum information scrambling <cit.>; see <cit.> for a comprehensive review on OTOCs. Given two observables W(t) and V(0) in the Heisenberg picture and in a d-dimensional Hilbert space, the infinite-temperature OTOC is defined as <cit.> OTOC(t) = -1/d Tr([W(t), V(0)]^2). The exponential growth OTOC(t) ∼ϵ e^λ_OTOC t for a certain time window t_d ≪ t ≪ t_E is considered a direct probe of quantum information scrambling, where λ_OTOC is known as the quantum Lyapunov exponent <cit.>. In particular, these timescales correspond to the dissipation time t_d ∼ 1/λ_OTOC, at which two-point functions ⟨V(0) V(t)⟩ saturate, and the Ehrenfest time or scrambling time t_E ∼log(1/ϵ)/λ_OTOC, at which the OTOC saturates. Figure <ref> shows the growth of the OTOC between the two timescales for the Lipkin-Meshkov-Glick (LMG) model <cit.>.
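A direct way to evaluate the infinite-temperature OTOC defined above is exact diagonalization. The sketch below does this for a small mixed-field Ising chain with Pauli probes at opposite edges, a standard chaotic testbed chosen here as an illustrative assumption rather than the model of the figure.

```python
import numpy as np

n, d = 6, 2 ** 6
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def site(op, i):
    out = np.array([[1.0]], dtype=complex)
    for j in range(n):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

# Mixed-field Ising chain (illustrative chaotic model; couplings are assumptions).
H = sum(site(sz, i) @ site(sz, i + 1) for i in range(n - 1)) \
    + sum(-1.05 * site(sx, i) + 0.5 * site(sz, i) for i in range(n))
E, U = np.linalg.eigh(H)
W0, V = site(sz, 0), site(sz, n - 1)           # probes at opposite edges
for t in np.linspace(0.0, 8.0, 5):
    phase = U @ np.diag(np.exp(1j * E * t)) @ U.conj().T
    Wt = phase @ W0 @ phase.conj().T           # Heisenberg evolution W(t)
    C = Wt @ V - V @ Wt
    print(f"t = {t:4.1f}   OTOC = {(-np.trace(C @ C) / d).real:.4f}")
```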
The existence of such exponential growth of the OTOC requires a parametrically large hierarchy between the two timescales and, thus, a very small parameter ϵ. The latter is typically related to Planck's constant in systems with a well-defined semiclassical limit or to 1/N expansions in large N systems, such as the Sachdev-Ye-Kitaev (SYK) model <cit.>. The exponential growth of OTOCs can be justified in the semiclassical limit <cit.> by taking the operators to be the canonically conjugate position and momentum, W(t) = q(t), V = p. Therefore, the classical limit of the OTOC is given by the Poisson bracket, which quantifies the classical sensitivity to initial conditions: OTOC_cl(t) = ħ^2 {q(t), p}^2 = ħ^2 ( ∂ q(t)/∂ q(0))^2 ∼ e^2 λ_cl t, where λ_cl is the classical Lyapunov exponent <cit.>. The introduction of the OTOC surpasses some previous definitions of the quantum Lyapunov exponent based on the Loschmidt echo <cit.> in that one does not need to find a Lyapunov regime in which the decay of the Loschmidt echo is environment-independent. The Loschmidt echo and OTOCs are related at the average level <cit.> and through Fidelity OTOCs, in which one of the operators is chosen to be a projector over a state and the other has a “kick” form, V= |ψ⟩⟨ψ|, W = e^i δ G with δ≪ 1 <cit.>. The presence of a positive Lyapunov exponent is a necessary but not sufficient condition for classical chaos, as saddle or unstable points can also give rise to a positive Lyapunov exponent. To have a sufficient condition for chaos, there are different approaches that either require aperiodicity of the trajectories at long times <cit.> or the mixing condition <cit.>. In a similar spirit, the existence of a positive quantum Lyapunov exponent has been argued to be a signature of scrambling rather than quantum chaos <cit.>. In particular, there exist integrable systems, such as the LMG model, see Fig. <ref>, showing a positive quantum Lyapunov exponent. This phenomenon has been termed saddle-dominated scrambling and will be further discussed in the context of Krylov complexity and the operator growth hypothesis in Sec. <ref>. One key constraint on the quantum Lyapunov exponent is the Maldacena-Shenker-Stanford (MSS) bound <cit.>, which states that the Lyapunov exponent λ_OTOC is universally upper bounded by the temperature T of the system, λ_OTOC(T) ≤ 2π T, where natural units ħ = k_B=1 are used for simplicity. The universal upper bound 2π T is the Lyapunov exponent of a black hole at the same temperature, which is motivated by the conjecture that black holes are the fastest scramblers in nature <cit.>. Indeed, when a system saturates the upper bound, e.g., the SYK model at low temperatures <cit.>, it is holographically dual to a black hole; see <cit.> for a review. For finite temperatures, the thermal OTOC, especially in quantum field theory, can present divergences. It is thus customary to introduce a regularization, e.g., splitting the thermal factor as OTOC_β(t) = -Tr([W(t), V(0)]e^-β H/2[W(t), V(0)]e^-β H/2)/Z_β. The MSS bound can be rederived under different assumptions, by introducing a one-parameter family of regularizations <cit.>, or from the fluctuation-dissipation theorem <cit.>. It can also be understood from the motion of particles on curved surfaces at low temperatures <cit.>, and a similar bound constrains the early-time decay of the SFF <cit.>. However, at infinite temperature, (<ref>) is trivially satisfied. A stricter bound in such cases will be discussed in Sec. <ref>.
The experimental measurement of OTOCs and scrambling is confronted with the requirement of time-inversion operations <cit.>. Despite this difficulty, they have successfully been measured in several experimental platforms <cit.>, also at finite temperature <cit.>. Even under experimental errors, an ideal OTOC may be extracted <cit.>. Furthermore, they have been extended to several nonunitary situations like finite open quantum systems <cit.>, bipartite systems <cit.>, dissipative spin chains <cit.>, random unitary circuits <cit.>, the dissipative SYK model <cit.> and stochastic Hamiltonians <cit.>.

§.§ Quantum complexity
The notion of complexity in quantum systems arises in a wide variety of contexts, ranging from quantum computing to black hole physics <cit.>. Measures of quantum complexity quantify the resources needed to perform a physical process (e.g., a computation) or prepare a quantum state. The complexity of a quantum unitary operation is defined in terms of the number of gates of the smallest circuit that implements it. This notion of quantum circuit complexity carries over to a quantum state in terms of the size of the shortest circuit that leads to its preparation from a reference state. In an n-qubit Hilbert space, preparing a typical pure state starting from a reference product state requires an exponentially long time when using physical local Hamiltonians. Most unitaries are maximally complex. This renders the preparation of typical pure states experimentally unfeasible <cit.>. Providing lower bounds for quantum complexity is challenging due to the possibility of shortcuts. One strategy relies on the geometric approach introduced by Nielsen <cit.> and its higher-order Suzuki-Trotter integrators <cit.> in the complexity manifold <cit.>, in accordance with the AdS/CFT correspondence <cit.>. The study of quantum state complexity has been further advanced in the context of AdS/CFT by relating the quantum state complexity of the boundary theory with the volume of the bulk geometry <cit.>. Defining quantum state complexity in quantum field theory comes with additional difficulties stemming from the choice of the reference state, the set of generators for the corresponding elementary gates, the presence of UV divergences calling for a regularization procedure, and the formulation of a complexity measure <cit.>. The continuous version of the Entanglement Renormalization tensor networks (cMERA) provides a framework to tackle these features <cit.>. See Ref. <cit.> for a detailed exposition. In finite-dimensional systems, random quantum circuits provide a natural framework to describe quantum chaotic dynamics, e.g., as characterized by OTOCs <cit.>. The Brown-Susskind conjecture posits that the quantum circuit complexity grows linearly with time for exponentially long times in the number of qubits in a random quantum circuit <cit.>. The growth of quantum complexity has been rigorously established in connection to unitary t-designs <cit.>, which are collections of unitaries that approximate a completely random unitary <cit.>. As a result, the Brown-Susskind conjecture has been proved in local random quantum circuits <cit.>.

§ KRYLOV SPACE OF OBSERVABLE OPERATORS
§.§ Preliminaries: vectorization of operators
The quantum evolution of a pure state is governed by the Schrödinger equation, in which the rate of change of the state vector equals the Hamiltonian acting linearly on the state vector itself.
By contrast, the unitary time evolution of mixed states and operators involves the action of a linear superoperator, known as the Liouvillian, according to the Liouville-von Neumann equation and the Heisenberg equation, respectively. Similarly, the dynamics of open quantum systems involves the Lindbladian superoperator as the generator of evolution. In all these cases, the operator-vector correspondence known as vectorization proves convenient for rendering the action of the superoperator as matrix multiplication. Vectorization provides a way to reshape a d × d matrix into a d^2-dimensional column vector. For an operator A with d× d matrix representation, we use the vectorization <cit.> A = ∑_i,j=0^d-1 a_ij|i⟩⟨j| → vec A = ∑_i,j=0^d-1 a_ij|i⟩⊗|j⟩^*, where |j⟩^* is related to the state |j⟩ by complex conjugation <cit.>. This way of vectorizing is known as row-wise or horizontal vectorization, where the rows of the matrix are stacked one after another. Such vectorization, i.e., expressing the operator in the doubled Hilbert space, is commonly referred to as the operator-state correspondence. A prototypical example is the identity operator expressed as a maximally entangled EPR state. An analogous representation at finite temperature is provided by the thermofield double (TFD) states, which serve as the thermal counterpart to the EPR states. The Hilbert–Schmidt inner product in the vectorized notation reads Tr(A^† B) = (vec A)^† vec B. The following identity is useful in reducing the action of superoperators to matrix multiplication <cit.>: vec (A 𝒪 B ) = (A ⊗ B^⊺) (vec 𝒪), for arbitrary operators A, B and 𝒪, where “⊺” denotes the transpose operation. In quantum information theory, vectorization arises in the realm of the Choi-Jamiołkowski isomorphism or channel-state duality <cit.>. In this context, vectorization is extensively used for the study of the spectral properties of quantum channels and open systems. Operations within this framework are succinctly represented using tensor networks <cit.>. A different convention for vectorization, known as column-wise or vertical vectorization, is also common in the literature <cit.>. Column-wise and row-wise vectorizations are equivalent and give the same results when used consistently.

§.§ Krylov basis and operators: Lanczos algorithm
Consider an observable, described by a Hermitian operator 𝒪, evolving in time under the action of a time-independent Hamiltonian H. In the Heisenberg picture, the dynamics of one such observable is governed by the Heisenberg equation ∂_t 𝒪(t) = i [H, 𝒪(t)] =: i ℒ 𝒪(t), where ℒ∙ = [H, ∙] denotes the Liouvillian superoperator. The solution of the Heisenberg equation reads 𝒪(t) = e^i H t 𝒪 e^-i H t = 𝒪 +it[ H, 𝒪] +(it)^2/2[ H,[ H, 𝒪]] +… = 𝒪 +itℒ𝒪 +(it)^2/2 ℒ^2 𝒪 +… = e^i ℒ t𝒪. The terms in the expansion involve nested commutators with the Hamiltonian or, equivalently, powers of the Liouvillian superoperator acting on the initial observable, as ℒ^0𝒪=𝒪, ℒ𝒪=[ H, 𝒪], ℒ^2𝒪=[ H,[ H, 𝒪]], and so on. The evolution 𝒪(t) is generally contained in a subspace of the operator space, known as the Krylov space, that is spanned by the set of all nested commutators, i.e., span{ℒ^n 𝒪}_n=0^∞=span{𝒪, ℒ𝒪,ℒ^2 𝒪,…}. Describing the evolution in Krylov space thus paves the way to ease computational resources in the study of many-body systems <cit.>.
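The identities above are easy to check numerically. In the following sketch (an illustration with random operators, not code from the review), row-wise vectorization is numpy's default reshape, the superoperator identity vec(A𝒪B) = (A⊗B^⊺)vec𝒪 is verified, and the Liouvillian ℒ𝒪 = [H,𝒪] is realized as the d²×d² matrix H⊗𝕀 − 𝕀⊗H^⊺ acting on vec𝒪.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 5
rand = lambda: rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A, B, Op = rand(), rand(), rand()
Hr = rand(); H = Hr + Hr.conj().T                 # Hermitian Hamiltonian

vec = lambda M: M.reshape(-1)                     # row-wise (horizontal) vectorization
print(np.allclose(vec(A @ Op @ B), np.kron(A, B.T) @ vec(Op)))   # True

# Liouvillian as a d^2 x d^2 matrix acting on vec(Op): L Op = [H, Op]
L = np.kron(H, np.eye(d)) - np.kron(np.eye(d), H.T)
print(np.allclose(L @ vec(Op), vec(H @ Op - Op @ H)))            # True
```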
The expansion (<ref>) brings additional insights in the case of multipartite systems in which the Hilbert space ℋ has a tensor-product structure, as in many-body spin systems and multi-qubit systems. To illustrate this, assume that the initial operator is local, i.e., acting on p spins. If the Hamiltonian is k-local, one can expect the support or size of ℒ^n 𝒪 to be of order O(nk+p). This reflects the operator growth: as the time of evolution goes by, the support of the operator, known as the operator size, increases. In addition, the number of terms needed to describe the operator in (<ref>) grows with time and leads to operator scrambling. As we shall discuss below, these phenomena can be rigorously defined and quantified using different measures that have a natural representation in Krylov space. The initial operator is restricted to be local and should not be a conserved quantity of the system; otherwise, the commutator [H, 𝒪] vanishes. To progress further, it is essential to introduce an inner product. While several options exist, the “infinite temperature” inner product, which equals the Hilbert–Schmidt inner product divided by the dimension of state space <cit.>, is frequently employed. The finite-temperature inner product will be discussed in Sec. <ref>. Specifically, for any two operators A and B, the infinite-temperature inner product is defined as (A|B) ≡ Tr( A^† B)/Tr 𝕀 = 1/Tr 𝕀 (vec A)^† vec B, where 𝕀 denotes the identity matrix and Tr 𝕀 = dim(ℋ) := d is the dimension of the Hilbert space. The second equality in (<ref>) follows from the vectorized inner product of the operators (<ref>) and should be used when the operators are vectorized. Consequently, |A) is defined as |A) = 1/√(Tr 𝕀) vec A = 1/√(Tr 𝕀) ∑_i,j=0^d-1 a_ij|i⟩⊗|j⟩^*. This gives the norm of an operator as ‖A‖ ≡√(( A| A)), usually known as the Frobenius norm <cit.>. The inner product in (<ref>) coincides with the conventional inner product used for vectors once the operators A and B are vectorized, up to the normalization factor. Note that the Liouvillian is Hermitian with respect to the definition (<ref>), i.e., ( A|ℒ B) = (ℒ A| B), which in the vectorized notation can be expressed simply as ℒ = ℒ^†. We also take the initial operator 𝒪 to be normalized, i.e., ‖𝒪‖ = 1. With a choice of the inner product in hand, it is apparent that the basis elements formed by the nested commutators {ℒ^n |𝒪)} are in general neither normalized nor orthogonal with respect to (<ref>). However, an orthonormal basis can be constructed recursively. We assume the initial operator 𝒪 is Hermitian so that it describes an observable, 𝒪= 𝒪^†. An orthonormal basis is then constructed via the Gram-Schmidt-like <cit.> procedure, also known as the Lanczos algorithm <cit.>. The algorithm is as follows <cit.>: * Define |𝒪_-1) := 0 and b_0 := 0. * |𝒪_0) := |𝒪). * |𝒜_1)=ℒ |𝒪_0). If 𝒜_1=0, stop. Otherwise define b_1=‖𝒜_1‖ and |𝒪_1)= |𝒜_1)/b_1. * For n>1: |𝒜_n) = ℒ|𝒪_n-1) - b_n-1 |𝒪_n-2). If 𝒜_n=0, stop. Otherwise define b_n=‖𝒜_n‖ and |𝒪_n)= b_n^-1|𝒜_n). This process stops at n=D_K, where D_K is the Krylov dimension, which is also known as the grade of 𝒪_0 with respect to ℒ <cit.>. The output is a D_K-dimensional orthonormal ordered basis, {|𝒪_n)}_n=0^D_K-1 = {|𝒪_0), |𝒪_1), …, |𝒪_D_K-1)}, known as the Krylov basis for the corresponding Krylov space. It follows that (𝒪_n|𝒪_m)=δ_nm, while the identity in Krylov space reads ∑_n=0^D_K-1|𝒪_n)(𝒪_n|=𝕀. While the elements of the Krylov basis are not Hermitian, all operators in the set {i^n𝒪_n}_n=0^D_K-1 are Hermitian.
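The following is a minimal sketch of the algorithm just listed, run with full re-orthogonalization on a small mixed-field Ising chain and a single-site probe (both illustrative assumptions), using the infinite-temperature inner product (A|B) = Tr(A†B)/Tr𝕀.

```python
import numpy as np

n, d = 5, 2 ** 5
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
def site(op, i):
    out = np.array([[1.0]], dtype=complex)
    for j in range(n):
        out = np.kron(out, op if j == i else np.eye(2))
    return out
# Mixed-field Ising chain and a single-site probe (illustrative assumptions).
H = sum(site(sz, i) @ site(sz, i + 1) for i in range(n - 1)) \
    + sum(-1.05 * site(sx, i) + 0.5 * site(sz, i) for i in range(n))

ip = lambda A, B: np.trace(A.conj().T @ B) / d    # (A|B) = Tr(A^dag B)/Tr(1)
norm = lambda A: np.sqrt(ip(A, A).real)

Op = site(sz, 0)
basis, bs = [Op / norm(Op)], []                   # |O_0), normalized
for _ in range(20):
    A = H @ basis[-1] - basis[-1] @ H             # A_n = L O_{n-1} - b_{n-1} O_{n-2}
    if len(basis) > 1:
        A = A - bs[-1] * basis[-2]
    for Q in basis:                               # full re-orthogonalization
        A = A - ip(Q, A) * Q
    b = norm(A)
    if b < 1e-10:                                 # end of the Krylov space
        break
    bs.append(b)
    basis.append(A / b)
print(np.round(bs, 3))                            # Lanczos coefficients b_1, b_2, ...
```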
In addition, the Lanczos algorithm provides the set of non-negative coefficients, known as Lanczos coefficients, {b_n}_n=1^D_K-1 = {b_1,b_2,…,b_D_K-1}, which completely determine the dynamics of the operator in the Krylov basis, as discussed later. It is important to note that the Lanczos algorithm is numerically unstable due to the accumulation of rounding errors during the orthogonalization process. This limitation means that after a small number of numerically stable iterations, orthogonality is rapidly lost <cit.>. To ensure orthogonality, methods such as full orthogonalization (FO) or partial re-orthogonalization (PRO) are typically necessary <cit.>, ensuring adherence to the precision limits of the machine. The Krylov space is a subspace of the operator space, with the latter having dimension d^2. The dimension of the Krylov space, D_K, is equal to the number of distinct energy gaps E_i-E_j for which at least one corresponding matrix element (in the case of degeneracies) is nonzero. This readily yields an upper bound on the Krylov space dimension <cit.>, 1 ≤ D_K≤ d^2-d+1. Note that D_K can be infinite if the Hilbert space dimension is infinite, and it depends on the initial probe operator. One might expect that integrable and non-integrable systems might follow the bound (<ref>) differently. While the upper bound is tight for systems like SYK_2, which can be mapped to free fermions <cit.>, this is not always the case. Interacting integrable systems also tend to saturate the bound <cit.>. Hence, the Krylov dimension (<ref>) is not a proper diagnostic between integrability and chaos. The crux of the Lanczos algorithm can be compactly denoted by the following identity <cit.>: ℒ |𝒪_n) = b_n |𝒪_n-1) + b_n+1 |𝒪_n+1), for n≥0, with b_0 = 0. In other words, the Liouvillian operator in the Krylov basis {|𝒪_n)}_n=0^D_K-1 reads ℒ := ∑_n=0^D_K-2 b_n+1(|𝒪_n)(𝒪_n+1|+ |𝒪_n+1)(𝒪_n| ), where we made a shift n → n+1 in the first term of (<ref>), since b_0 = 0. The matrix representation of the Liouvillian, with elements (𝒪_m|ℒ|𝒪_n), takes the following tridiagonal form <cit.>: ℒ = [ 0 b_1 ; b_1 0 b_2 ; b_2 0 ⋱ ; ⋱ ⋱ b_D_K-1 ; b_D_K-1 0 ], with the primary off-diagonal elements being the Lanczos coefficients {b_n}_n=1^D_K-1. The value b_D_K = 0 indicates the end of the Krylov space. This directly results from (<ref>), with the condition that the Krylov basis elements are orthonormal, (𝒪_n|𝒪_m) = δ_nm. The vanishing of the diagonal elements results from the initial operator being Hermitian, under the definition of the inner product (<ref>). This renders the Liouvillian also Hermitian (ℒ = ℒ^†) in the Krylov basis - a fact that can be traced back to the unitary evolution of the system. This property breaks down for open systems, where the evolution is non-unitary, as discussed in Sec. <ref>. The normalization of the initial operator |𝒪) is preserved under unitary evolution, as (𝒪(t)|𝒪(t)) = (𝒪|𝒪). Thus, 𝒪(t) can be expanded in the (Hermitian) Krylov basis {i^n 𝒪_n} as <cit.> |𝒪(t)) = ∑_n=0^D_K-1 i^nφ_n(t) |𝒪_n). The real-valued functions φ_n(t) are known as the Krylov-basis wavefunctions or operator wavefunctions and denote the probability amplitude (or weight) of each Krylov basis element. From the Heisenberg equation, it is straightforward to see that they satisfy the difference-differential equation <cit.> φ̇_n(t) = b_n φ_n-1(t) - b_n+1φ_n+1(t), with initial conditions φ_-1(t)=0 and φ_n(0)=δ_0n. The “dot” indicates the time derivative.
Equation (<ref>) describes a single-particle hopping model, where a particle localized at site n hops to site (n-1) with rate b_n and to site (n+1) with rate b_n+1, as illustrated in Fig. <ref>. The operator time evolution is thus equivalent to a single-particle hopping problem on the one-dimensional Krylov lattice or Krylov chain, with asymmetric nearest-neighbor transition rates. These equations show that the dynamics on the Krylov chain are fully determined by the Lanczos coefficients. The wavefunction φ_0(t) associated with the initial state |𝒪_0) = |𝒪) is given by <cit.> 𝒞(t) := φ_0(t) =(𝒪|𝒪 (t)) = Tr(𝒪 𝒪(t))/Tr 𝕀, and is known as the autocorrelation function, being the correlation function between the initial operator and its evolution <cit.>. This function encodes the full information of all the Lanczos coefficients and, thus, the entire information of operator growth in Krylov space. Indeed, finding 𝒞(t) requires the exact time-evolved operator 𝒪(t). Since the operator norm is preserved under unitary time evolution, 𝒵 (t) := ∑_n=0^D_K-1 |φ_n (t)|^2 = 1. This can be viewed as the conservation of the probability density on the Krylov chain, normalized to unity. This feature no longer holds in the case of dissipative systems, where the effect of the environment tends to decrease the probability, 𝒵 (t) < 1 at t>0, leading to a generalization of (<ref>) in the form of a non-Hermitian tight-binding model <cit.>. We will return to this in Sec. <ref> in greater detail.

§.§ Krylov complexity and cumulants
An important measure of operator growth is known as the Krylov complexity and is defined as the average position on the Krylov chain <cit.> K(t) = ∑_n=0^D_K-1 n |φ_n (t)|^2. By definition, K(t)≥ 0, and it vanishes for the initial operator, K(0)=0. The Krylov complexity grows as the operator shifts away from the origin of the Krylov lattice. This measure reflects the fact that the basis elements |𝒪_n) become increasingly nonlocal as the lattice index increases. As a complexity measure, its growth with time indicates that the initially simple operator becomes complex over time. There are several other conventional measures of quantum complexity, such as the computational complexity <cit.>, circuit complexity <cit.>, and the holographic complexity <cit.>. Some connections have been proposed between them and the Krylov complexity <cit.>, with possible subtleties involved <cit.>. One may ask whether the Krylov basis furnishes any advantages over the computational basis, an example being the N-qubit Pauli basis for the N-site system. Although both bases are time-independent, the computational basis is fixed, while the Krylov basis depends on the Hamiltonian and the initial operator. This fact is particularly leveraged by Eq. (<ref>), where the differential equation involving the coefficients is much simpler in the Krylov basis than in the computational one. Interestingly, the operator size can be defined over the computational basis in the same way the Krylov complexity is defined in (<ref>) <cit.>. Note that Krylov complexity, as the mean of a probability distribution, is insensitive to the actual spread of the operator on the Krylov chain. As such, it is more suited to capture operator growth than operator scrambling. Complete information on operator growth in the Krylov lattice is provided by the normalized distribution (<ref>), which can be further characterized by its moments and cumulants.
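Given a set of Lanczos coefficients, the hopping problem above is solved by exponentiating the tridiagonal Liouvillian: φ_n(t) = i^{-n}(e^{iℒt})_{n0}. The sketch below does this for the linear ramp b_n = αn, an illustrative choice for which the closed forms 𝒞(t) = sech(αt) and K(t) = sinh²(αt) are known, checking probability conservation 𝒵(t) = 1 along the way; the truncation D and slope α are assumptions.

```python
import numpy as np
from scipy.linalg import expm

D, alpha = 100, 0.5                        # truncation and slope (assumptions)
b = alpha * np.arange(1, D)                # linear ramp b_n = alpha * n
L = np.diag(b, 1) + np.diag(b, -1)         # tridiagonal Liouvillian
ns = np.arange(D)
for t in [0.5, 1.0, 2.0]:
    col = expm(1j * L * t)[:, 0]           # amplitudes (e^{iLt})_{n0}
    phi = np.real((-1j) ** ns * col)       # real Krylov wavefunctions phi_n(t)
    print(t,
          np.sum(phi ** 2),                # Z(t) = 1 (probability conservation)
          phi[0], 1 / np.cosh(alpha * t),  # phi_0(t) against sech(alpha t)
          np.sum(ns * phi ** 2))           # Krylov complexity K(t)
```

The last column grows as sinh²(αt), consistent with the exponential operator growth associated with linearly growing Lanczos coefficients in the operator growth hypothesis discussed below.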
An example of such a cumulant-based characterization is the Krylov variance (second cumulant), which provides a complementary measure of the dynamics focused on the spread of the operator in the Krylov lattice. The Krylov variance is defined as <cit.> Δ K (t)^2 := ∑_n=0^D_K-1 n^2 |φ_n (t)|^2 - (∑_n=0^D_K-1 n |φ_n (t)|^2)^2 = ∑_n=0^D_K-1 |φ_n (t)|^2 (n - K(t))^2. An alternative definition was considered in <cit.>, which in our notation stands for Δ K (t)^2 /K(t)^2. The higher moments of the distribution can be similarly defined <cit.>. For this, it is interesting to consider the Krylov operator 𝒦 such that <cit.> 𝒦 |𝒪_n) = n |𝒪_n),   ⇔   𝒦 := ∑_n=0^D_K-1 n |𝒪_n)(𝒪_n|. In other words, the Krylov operator acts as a number operator on the Krylov basis, 𝒦 = diag(0, 1, 2, ⋯, D_K-1). In terms of the definition (<ref>), the Krylov complexity is associated with the expectation value of the Krylov operator 𝒦 in the time-evolved operator |𝒪(t)), i.e., <cit.> K(t) = (𝒪(t)|𝒦|𝒪(t)). Beyond the expectation value, the probability distribution in the Krylov lattice determines the eigenvalue statistics of the Krylov complexity operator P(n,t)=(𝒪(t)|δ(𝒦-n)|𝒪(t)). The distribution in the Krylov lattice is associated with D_K independent discrete random variables, in which the n-th variable takes the measurement outcome n with probability φ_n(t)^2 and the value 0 with probability 1-φ_n(t)^2. This allows us to define the cumulant-generating function <cit.> G(θ,t) := log(𝒪(t)|e^θ𝒦|𝒪(t)) = log(∑_n=0^D_K-1 e^θ n |φ_n (t)|^2 ), which is continuous and differentiable with respect to θ∈ℝ. The n-th cumulant of the distribution, κ_n, is given by the n-th derivative of the generating functional, which reads <cit.> κ_n := ∂_θ^n G(θ,t)|_θ = 0. This implies that the Krylov complexity and the Krylov variance are simply the first and the second cumulants of the distribution, i.e., κ_1 ≡ K(t) and κ_2 ≡Δ K(t)^2. The higher cumulants encode additional information about the distribution (e.g., regarding its skewness, kurtosis, etc.) and are computed straightforwardly, given knowledge of the cumulant generating function. In a similar spirit, a generalized notion of Krylov complexity has also been introduced <cit.>.

§.§ Krylov entropy
The probabilistic interpretation of the Krylov wavefunction allows us to define various quantum information-theoretic quantities <cit.>. This includes the Shannon entropy, negativity <cit.>, and the capacity of entanglement <cit.>. We focus on the Krylov entropy, which is the Shannon entropy of the probability distribution P(n,t), S_K (t) = -∑_n=0^D_K-1φ_n (t)^2 logφ_n (t)^2. The authors of <cit.> investigated the behavior of Krylov complexity in various quantum systems, analyzing time scales much longer than the scrambling period. Their study provided evidence that Krylov complexity for operators conforming to the Eigenstate Thermalization Hypothesis <cit.> exhibits a characteristic pattern, growing exponentially during the scrambling period and transitioning to linear growth between the scrambling and the Heisenberg time, at which it saturates. The Krylov entropy shows logarithmic growth in the post-scrambling period. It was shown that the Krylov complexity and the Krylov entropy fulfill the logarithmic relation <cit.> S_K(t) ∼log K(t),     t > t_*, at late times. This late time corresponds to the timescale beyond the scrambling time t_*∼ O(log N) for systems with N degrees of freedom.
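The cumulant route above is convenient numerically as well. A minimal sketch (reusing the illustrative linear-ramp Lanczos coefficients of the previous snippet) builds G(θ,t) from the wavefunctions, recovers κ₁ = K(t) and κ₂ = ΔK(t)² by finite differences, and evaluates the Krylov entropy S_K(t); the finite-difference step is an assumption.

```python
import numpy as np
from scipy.linalg import expm

D, alpha, t = 120, 0.5, 2.0                       # illustrative parameters
b = alpha * np.arange(1, D)
L = np.diag(b, 1) + np.diag(b, -1)
p = np.abs(expm(1j * L * t)[:, 0]) ** 2           # |phi_n(t)|^2
ns = np.arange(D)

G = lambda th: np.log(np.sum(np.exp(th * ns) * p))  # cumulant generating function
eps = 1e-4                                          # finite-difference step (assumption)
k1 = (G(eps) - G(-eps)) / (2 * eps)                 # kappa_1 = K(t)
k2 = (G(eps) - 2 * G(0.0) + G(-eps)) / eps ** 2     # kappa_2 = Krylov variance
print(k1, np.sum(ns * p))                           # both evaluate K(t)
print(k2, np.sum(ns ** 2 * p) - np.sum(ns * p) ** 2)
mask = p > 1e-300
print(-np.sum(p[mask] * np.log(p[mask])))           # Krylov entropy S_K(t)
```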
The relation (<ref>) is reminiscent of the equation for Boltzmann's entropy and hints at the appearance of an irreversible thermodynamic-like behavior in the post-scrambling regime. However, at early times, Eq. (<ref>) does not hold, as S_K(t) and K(t) exhibit a product-logarithmic relation instead <cit.>. The generic behavior of the full profile of Lanczos growth in relation to the Krylov complexity and Krylov entropy can be divided into three distinct regimes. The initial growth of the Lanczos coefficients, known as the Lanczos ascent <cit.>, persists until n ∼ O(S), reaching b_n ∼ O(Λ S), where S is the thermodynamic entropy of the system and Λ an energy scale. Correspondingly, the Krylov complexity exhibits exponential growth until the scrambling time t_*∼ O(log S), with a value K ∼ O(S). Beyond the post-scrambling regime, the Lanczos coefficients plateau at b_n ∼ O(Λ S); this plateau is associated with the linear growth of Krylov complexity until the Heisenberg time t_H ∼ O(e^S), at which it reaches a value K ∼ O(e^2S). This regime is known as the Lanczos plateau <cit.>. In this regime, the Krylov wavefunction is uniformly distributed over the Krylov space <cit.>, |φ_n(t > t_H)|^2 ∼ 1/D_K. In the plateau, the Krylov complexity and Krylov entropy read <cit.> K(t > t_H) ∼ (1/D_K) D_K (D_K -1)/2 ∼ D_K/2, S_K (t > t_H) ∼ -D_K (1/D_K) log(1/D_K) ∼ log (D_K). In chaotic systems, the Krylov dimension saturates the bound (<ref>) and hence D_K ∼ d^2 ∼ e^2S, where S is the entropy of the system and d ∼ e^S. Thus, the logarithmic relation between them is evident according to (<ref>). After the saturation regime, the b_n decrease at exponentially large n ∼ O(e^S), where the Krylov complexity stays at the plateau value K ∼ O(e^2S). This regime is known as the Lanczos descent <cit.>.

§ LANCZOS ALGORITHM: MONIC VERSION AND ORTHOGONAL POLYNOMIALS
For completeness, we discuss an alternate version of the Lanczos algorithm, known as the monic version <cit.>. This version is equivalent to the orthonormal version discussed in the previous section. Given an operator 𝒪, normalized as ‖𝒪‖ = 1, the algorithm goes as follows: * |𝖮_0) = |𝒪). * |𝖮_1)=ℒ |𝖮_0). Compute Δ_1 = (𝖮_1|𝖮_1) = ‖𝖮_1‖^2. If Δ_1 =0, stop. Otherwise, proceed to step 3. * For n>1: |𝖮_n) = ℒ|𝖮_n-1) - Δ_n-1 |𝖮_n-2). Compute Δ_n = (𝖮_n|𝖮_n)/(𝖮_n-1|𝖮_n-1) = ‖𝖮_n‖^2/‖𝖮_n-1‖^2. If Δ_n=0, stop. Otherwise, repeat the procedure. The crucial difference of the monic version is that the operators are not normalized at each step. Hence, the operators 𝖮_n (to be distinguished from 𝒪_n) are orthogonal but not orthonormal to each other. Note that 𝒜_n in the orthonormal version plays the same role as 𝖮_n in the monic version, while the coefficients in the two versions are related by <cit.> Δ_n = b_n^2. Table <ref> shows the comparison between the two methods. This table is an extended version of the table provided in <cit.>. Here, an additional coefficient a_n is introduced so that the initial operator 𝒪 need not be Hermitian. For a Hermitian initial operator, the a_n coefficients identically vanish. For n ≥ 0, the action of the Liouvillian translates to ℒ | 𝒪 _n) = a_n | 𝒪 _n) + b_n+1 | 𝒪 _n+1) + b_n | 𝒪 _n-1) and ℒ |𝖮_n) = a_n |𝖮_n) + |𝖮_n+1) + b_n^2 |𝖮_n-1), which imply a_n = ( 𝒪 _n|ℒ| 𝒪 _n), b_n = ( 𝒪 _n-1|ℒ| 𝒪 _n), and a_n = (𝖮_n|ℒ|𝖮_n)/‖𝖮_n‖^2, b_n = ‖𝖮_n‖/‖𝖮_n-1‖, in the orthonormal and monic versions, respectively. This leads to exactly the same values for the Lanczos coefficients in the two versions.
For example, in the orthonormal version, for any generic operator 𝒪_n, one finds a_n = ( 𝒪 _n|ℒ| 𝒪 _n) ∝ Tr( 𝒪 _n^†[H, 𝒪 _n]) = Tr([ 𝒪 _n, 𝒪 _n^†] H). The Hermitian conjugates of the Krylov basis operators obey 𝒪 _n^† = (-1)^n 𝒪 _n, i.e., the even Krylov basis elements are Hermitian and the odd ones are anti-Hermitian. Thus [ 𝒪 _n, 𝒪_n^†] = (-1)^n [ 𝒪_n, 𝒪_n] =0, implying that the a_n coefficients vanish. This property also holds for the monic version and, in general, whenever the Krylov basis elements 𝖮_n are normal matrices, i.e., they commute with their Hermitian conjugate. Both versions fall under a general class of orthogonal polynomials <cit.>. The recursion method is equivalent to the repeated application of the Liouvillian to the initial operator. In particular, the n-th Krylov basis element (either in the orthonormal or the monic version) can be represented by |𝒪_n) = 𝒫_n (ℒ) | 𝒪 ), where 𝒪 is the initial operator and 𝒫_n (ℒ) denotes a polynomial in the Liouvillian ℒ of degree n. Both the orthonormal and the monic Krylov bases are compactly denoted by |𝒪_n) ∈{ | 𝒪 _n), |𝖮_n)}, and likewise for the polynomial 𝒫_n (ℒ) ∈{𝒫_n (ℒ), 𝖯_n (ℒ)}. It is instructive to note that the leading coefficient of 𝒫_n(ℒ) is a function of the Lanczos coefficients, while the leading coefficient of 𝖯_n(ℒ) is unity: 𝒫_n(ℒ) = (∏_i=1^n 1/b_i) ℒ^n + ⋯, 𝖯_n(ℒ) = ℒ^n + ⋯, where the dots denote lower-order monomials in ℒ. This follows from the recursion relation of the Lanczos algorithm; the unit leading coefficient gives 𝖯_n(ℒ) the name monic polynomial <cit.> and makes this version of the Lanczos algorithm the monic version. Furthermore, in terms of these polynomials, Eqs. (<ref>)-(<ref>) can be rewritten as ℒ𝒫_n (ℒ) = a_n 𝒫_n (ℒ) + b_n+1𝒫_n+1 (ℒ) + b_n 𝒫_n-1 (ℒ), ℒ𝖯_n (ℒ) = a_n 𝖯_n (ℒ) + 𝖯_n+1 (ℒ) + b_n^2 𝖯_n-1 (ℒ). These are the three-term recurrence relations satisfied by the orthonormal and the monic polynomials, respectively. They are orthogonal with respect to some measure μ(ℒ) such that ∫ dμ(ℒ) 𝒫_m (ℒ) 𝒫_n (ℒ) ∝δ_m,n, where the proportionality constant depends on the chosen polynomial. This is a consequence of Favard's theorem <cit.>; see <cit.> for more details. While both versions of the Lanczos algorithm are equivalent, we preferentially follow the orthonormal version to make use of the associated orthonormal Krylov basis set. However, the monic version is useful in connection with the Toda chain method <cit.>, discussed in Sec. <ref>.

§ THE UNIVERSAL OPERATOR GROWTH HYPOTHESIS
§.§ Statement of the hypothesis
The universal operator growth hypothesis identifies different classes of physical systems according to the specific laws governing the growth of the Lanczos coefficients, which in turn dictate the dynamics of operators and complexity measures in Krylov space <cit.>. It builds on the accumulated numerical evidence regarding the growth of Lanczos coefficients in many-body systems <cit.> and the relation between chaos and spectral properties of the observables <cit.>. Consider a many-body system in the thermodynamic limit in d dimensions, governed by a time-independent Hamiltonian H that is non-integrable or chaotic. The hypothesis regards an initial local operator 𝒪_0 that does not commute with any conserved quantity of the system. According to it, the Lanczos coefficients, which capture the dynamics of operator evolution and information spreading within the system, should exhibit maximal growth in the asymptotic limit. This maximal growth is linear for a generic chaotic system in d dimensions, with an additional logarithmic correction in one-dimensional systems.
The hypothesis asserts that for generic H and 𝒪_0 the asymptotic behavior of b_n is <cit.> b_n = A n/log n + o(n/log n) for d = 1, and b_n = α n + γ + o(1) for d > 1, with constants A, α, and γ. The constants A and α dictate the slope of the growth, while γ accounts for a linear shift that is irrelevant in the asymptotic limit of n. The values of A and α are not arbitrary; they possess energy dimensions and are constrained by the bandwidth for local Hamiltonians, see <cit.>. They are specific to the Hamiltonian and the local initial operator. The growth coefficients change for different initial operators, but the asymptotic linear growth should remain unaltered. This is a generic conclusion of the Lanczos algorithm, as the initial vector has a diminishing effect on the eigenvalues of the tridiagonal matrix. Note that (<ref>) assumes that b_n varies smoothly, disregarding even and odd parity effects, so that the index n can be treated as a continuous variable. The hypothesis concerns thermodynamic systems in the asymptotic limit. In finite-dimensional systems, however, the Lanczos coefficients eventually terminate at the end of the Krylov space and may display saturation due to the finite dimensionality of the Hilbert space <cit.>. Hence, one must judiciously select an appropriate growth regime before finite-size effects become significant. Notably, linear growth occurs in dimensions greater than one, with a logarithmic correction in one dimension. Distinguishing between linear growth and its logarithmic correction is a formidable numerical task with conventional algorithms <cit.>, a feat only recently achieved with specialized Monte Carlo methods <cit.>. Curiously, the Lanczos coefficients have been found to show faster-than-linear growth deep in Hilbert space <cit.>, where the operator growth hypothesis does not hold. Efforts have been made to extend the recursion method to two dimensions <cit.>. Therefore, we adopt linear growth as the hallmark of the universal operator growth hypothesis across all dimensions, including all-to-all systems like the SYK model.

§.§ Does linear growth of Lanczos coefficients always imply chaos?
The hypothesis also has a bearing on integrable systems <cit.>. In them, the Lanczos coefficients typically exhibit sublinear growth, b_n ∼α n^δ with 0< δ < 1, corresponding to a power-law growth of the Krylov complexity, K(t) ∼ (α t)^(1/(1-δ)). By contrast, non-interacting systems like free-fermionic models exhibit a bounded sequence, b_n ∼ O(1) <cit.>, corresponding to the δ = 0 case of this relation. In such cases, an operator does not grow and the operator size remains constant. Table <ref> shows the generic behavior of Lanczos coefficients, Krylov complexity, and Krylov entropy for a variety of systems, including two unknown growth models yet to be found in a physical system. Nonetheless, the hypothesis (<ref>) primarily addresses non-integrable systems, sidestepping the nuances of integrable-system behaviors. Some special integrable systems may display singular growth patterns. This includes integrable systems with saddle-dominated scrambling <cit.> or systems showing many-body localization <cit.>. We consider an example of the former case, the Lipkin-Meshkov-Glick (LMG) model <cit.>. The Hamiltonian in this model is constructed from the following SU(2) operators {x̂, ŷ, ẑ} = {ŝ_x/s, ŝ_y/s, ŝ_z/s}, obeying [x̂, ŷ] = i ħ_effẑ, with the other commutators following cyclically. The Hamiltonian is given by <cit.> H = x̂ + 2 ẑ^2.
Here, ŝ_i with i = x,y,z are the SU(2) spin operators of spin s (reducing to ŝ_i = σ̂_i/2 for s=1/2, with σ̂_i the Pauli matrices). The effective Planck constant depends on the spin as ħ_eff = 1/s. The dimension of the Hilbert space is d = 2s+1. In other words, the large spin limit s →∞ effectively implies the classical limit ħ_eff→ 0. The LMG model is classically integrable, and the naive expectation is that its integrability is preserved as it transitions to the quantum regime through the semiclassical approximation. Upon rescaling the Hamiltonian, H̃ = H/ħ_eff, the Lanczos coefficients acquire a spin factor <cit.>, b̃_n = b_n/ħ_eff = b_n s, which follows directly from the Lanczos algorithm. The growth of b̃_n is shown in Fig. <ref> (left) for the initial operator ẑ and the spin values s = 25, 50, and 75. It exhibits linear growth b̃_n ∼α n, even though the LMG model is classically integrable. The growth coefficient is exactly computed to be α≃√(3)/2, shown by the black dashed line. The entire Lanczos spectrum is shown in Fig. <ref> (right) for s=25. The Krylov dimension shows the integrable behavior and is much lower than the saturation bound given in Eq. (<ref>). Nevertheless, the linear growth of the Lanczos coefficients and the associated exponential growth of Krylov complexity are unexpected for an integrable model. This peculiar behavior arises due to the presence of an unstable saddle point (x,y,z) = (1,0,0) in its classical phase space, a phenomenon referred to as saddle-dominated scrambling <cit.>. Indeed, the OTOC of this system also shows exponential growth with the Lyapunov exponent λ_OTOC=√(3) given by the underlying saddle point <cit.>, see Fig. <ref>. It is also interesting to note that the infinite-temperature Lyapunov exponent obeys the bound λ≤ 2 α <cit.>, see Secs. <ref> and <ref>, and using the computed α for the LMG model, we find a saturation of the bound, λ_OTOC = 2 α. This might appear to violate the universal operator growth hypothesis. However, such a special case of saddle-dominated scrambling does not invalidate the universal operator growth hypothesis (<ref>), which asserts that generic chaotic systems should demonstrate maximal growth of Lanczos coefficients. The latter is thus a necessary, yet not sufficient, condition for chaos, only if the choice of the Hamiltonian and the initial operator is sufficiently generic. Aligning with the previous study with the OTOC <cit.>, Eq. (<ref>) suggests that the linear growth of Lanczos coefficients at best provides necessary and sufficient conditions for scrambling, but not for chaos <cit.>. In fact, the necessary conditions for scrambling can also be proved rigorously using operator entanglement <cit.>. Saddle-dominated scrambling also occurs in classically chaotic systems, an example being the Feingold-Peres model <cit.>. Being classically chaotic, in these models the Lanczos coefficients exhibit linear growth as expected <cit.>. Finally, let us briefly mention some consequences of the hypothesis in quantum field theory (QFT) and conformal field theory (CFT). The hypothesis (<ref>), initially posited for discrete quantum many-body systems <cit.>, encounters significant challenges when extended to continuous models, e.g., in QFT or CFT. In this case, constructing a Krylov basis by means of the Lanczos algorithm would require introducing a finite temperature; see below. One can circumvent this step and determine the Lanczos coefficients directly from the two-point autocorrelation function (see Secs. <ref> and <ref>); this approach has been applied in <cit.>.
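Returning briefly to the LMG example above, the computation is compact enough to reproduce directly. The sketch below builds standard spin-s matrices, runs the orthonormal Lanczos recursion with full re-orthogonalization on the rescaled Hamiltonian for s = 25, and fits the slope of b̃_n, which should come out close to the quoted α ≃ √3/2; the number of iterations and fit window are illustrative choices.

```python
import numpy as np

s = 25
dim = 2 * s + 1
m = np.arange(s, -s - 1, -1.0)                    # Sz eigenvalues s, s-1, ..., -s
Sz = np.diag(m)
Sp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)   # raising operator
Sx = (Sp + Sp.T) / 2
x, z = Sx / s, Sz / s
Ht = (x + 2 * z @ z) * s           # rescaled Hamiltonian H/hbar_eff, hbar_eff = 1/s

ip = lambda A, B: np.trace(A.T @ B) / dim         # all operators are real here
norm = lambda A: np.sqrt(ip(A, A))
basis, bs = [z / norm(z)], []                     # initial operator z, normalized
for _ in range(20):
    A = Ht @ basis[-1] - basis[-1] @ Ht           # Liouvillian action [H~, O]
    if len(basis) > 1:
        A = A - bs[-1] * basis[-2]
    for Q in basis:                               # full re-orthogonalization
        A = A - ip(Q, A) * Q
    b = norm(A)
    if b < 1e-9:
        break
    bs.append(b)
    basis.append(A / b)
slope = np.polyfit(np.arange(1, len(bs) + 1), bs, 1)[0]
print(f"fitted slope = {slope:.3f}, sqrt(3)/2 = {np.sqrt(3)/2:.3f}")
```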
In this framework, Lanczos coefficients demonstrate linear growth in free theories, barring the introduction of a UV cutoff <cit.>. Intriguingly, the presence of an IR scale can lead to splitting of the Lanczos coefficients into even and odd sequences b_n ∈{b_even, b_odd}, challenging the smoothness assumption presupposed in (<ref>). Hence, the results in continuum field theories go beyond the hypothesis (<ref>) and may call for an extension of the hypothesis in such theories. With the above caveats, the operator growth hypothesis has consolidated the Krylov complexity as a diagnostic tool for scrambling and quantum chaos. In this context, the Krylov complexity is under exhaustive investigation and has been analyzed in random matrix theory <cit.>, elementary models in quantum mechanics <cit.>, regular graphs <cit.>, the Bethe lattice <cit.>, quantum reservoirs <cit.>, toy models of gauge theories dual to AdS black holes <cit.>, and inflationary cosmology <cit.>. Further, it has been emphasized that its effectiveness in probing chaos strongly depends on the chosen operator <cit.>. In addition, several works have promoted its use with a complementary scope. For instance, it has been shown to act as an order parameter in certain settings in dynamical phase transitions <cit.>. Likewise, its behavior reflects the confinement/deconfinement phase transitions at large N <cit.>. The Krylov complexity can also be used to probe transitions occurring as a function of the duration of the Trotter time-step in Floquet circuits associated with the Trotter decomposition of unitary dynamics <cit.>. Further, the Krylov complexity has been used to characterize the charging power of quantum batteries utilizing SYK models formulated on a graph <cit.>.

§ COMPUTATION OF LANCZOS COEFFICIENTS

This section presents two analytic methods to compute the Lanczos coefficients. The first method is known as the "moment method" and relies on computing the moments of the spectral function, which relate to the Taylor series expansion of the autocorrelation function. A special case of the method was presented in <cit.>, while the more general method was developed in <cit.>. Both rely on the generic recursion method <cit.>. The second method is based on the Toda chain flow in Krylov space and is often known as the "Toda chain technique" <cit.>. Both methods are equivalent and provide the same Lanczos coefficients, which can be further matched with the results of numerical algorithms (see Sec. <ref>).

§.§ Lanczos coefficients via the moment method: Pole structure of autocorrelation function

The method is based on calculating the moments of the spectral function Φ (ω), defined below. Given the spectral function Φ (ω), the moments are given by m_n = 1/2π∫_-∞^∞ d ω ω^n Φ (ω) ,     n = 0, 1, 2, ⋯ . This computes a set of numbers {m_n}. They can be separated into even m_2n and odd m_2n+1 moments (for n = 0, 1, 2, …). Conventionally, Φ (ω) is normalized such that m_0 = 1. In some special cases the odd moments vanish, although we keep considering both odd and even moments for the general discussion. Further, the spectral function is the Fourier transform of the autocorrelation function 𝒞(t), given by Φ (ω) = ∫_-∞^∞ d t e^- i ω t 𝒞(t) , 𝒞(t) = 1/2π∫_-∞^∞ d ω e^ i ω t Φ (ω) . Hence, 𝒞(t) is the inverse Fourier transform of Φ (ω). Taking the n-th derivative of 𝒞(t) as well as the limit t → 0, and using (<ref>), we obtain <cit.> m_n = 1/i^nlim_t → 0d^n 𝒞 (t) /d t^n .
The zeroth moment m_0 = 𝒞(0) is the autocorrelation function evaluated at the initial time. Conveniently, the autocorrelation function is normalized to unity at t = 0, so by definition, m_0 = 𝒞(0) = 1. This is consistent with the normalization of Φ(ω). Here, the autocorrelation function is assumed to have no pole on the real axis. However, it might have poles on the imaginary axis, including complex infinity. Examples are 𝒞_1 (t) = sech(α t) and 𝒞_2 (t) = exp(-α^2 t^2/2). On the other hand, for functions that have a pole on the real axis, e.g., 𝒞 (t) = sec(α t), one needs to rotate t → it to move the poles onto the imaginary axis and then apply (<ref>). Equation (<ref>) suggests an expansion of the autocorrelation function in a Taylor series form <cit.> 𝒞(t) = ∑_n=0^∞ m_n (it)^n/n! ,     t > 0 , with the expansion coefficients being the moments (<ref>). The mutual conversion between the moments and the autocorrelation function (and spectral function Φ (ω) from Eq. (<ref>)) is subtle and falls within the realm of the Hamburger moment problem <cit.>. Given the moments, the Lanczos coefficients can be obtained by the following recursive algorithm  <cit.>: 𝖬_k^(0) = (-1)^k m_k , 𝖫_k^(0) = (-1)^k+1 m_k+1 , 𝖬_k^(n) = 𝖫_k^(n-1) - 𝖫_n-1^(n-1)𝖬_k^(n-1)/𝖬_n-1^(n-1) , 𝖫_k^(n) = 𝖬_k+1^(n)/𝖬_n^(n) - 𝖬_k^(n-1)/𝖬_n-1^(n-1) ,   k ≥ n , b_n = √(𝖬_n^(n)) ,     a_n = - 𝖫_n^(n) . Given the Lanczos coefficients, the moments can also be evaluated. The problem boils down to a fully combinatorial problem of evaluating Dyck paths <cit.>, also see below. Figure <ref> shows a diagrammatic picture for evaluating such moments. The first four moments read <cit.>: m_0 = 1 , m_1 = a_0 , m_2 = a_0^2 + b_1^2 , m_3 = a_0^3 +2 a_0 b_1^2 + a_1 b_1^2 , m_4 = b_1^2 (a_0^2 + a_1^2 + a_0 a_1 + b_1^2 +b_2^2) + a_0 (a_0^3 +2 a_0 b_1^2 + a_1 b_1^2) . In general, this diagrammatic approach can be used to evaluate any general matrix element of powers of the Liouvillian ( O_j| L^n | O_k), starting and finishing at any general position. The result is given by the sum over all the possible paths that connect sites k and j in n steps. These paths are known as Motzkin paths <cit.>. The weight of each path is given by the product of Lanczos coefficients a_n, b_n associated with the path. At each application of the Liouvillian, the paths can move at most one site; this means that all matrix elements in which j>k+n or j<k-n identically vanish ( O_j| L^n| O_k) = 0. The first non-zero matrix element reads ( O_k+n| L^n| O_k)= b_k+1 b_k+2… b_k+n, which corresponds to the direct path connecting sites k and k+n. If the autocorrelation function is an even function of t, only the even moments survive and the coefficients a_n vanish, i.e., m_0 = 1, m_2 = b_1^2, m_4 = b_1^4 +b_1^2 b_2^2, ⋯. The scaling of such coefficients was shown to indicate some universal feature of the autocorrelation function at early times <cit.>. This is generically true for unitary evolution with a Hermitian operator O^† = O, which reduces the above recursion to a simpler form <cit.>. The corresponding Dyck paths are the Motzkin paths without the side movement. Hence, Dyck paths only allow moves up and down, never going below the level of the initial point <cit.>. At the end of this section, we provide a path integral method to evaluate such Dyck paths in the asymptotic limit of large n. In generic non-unitary evolution, or when the initial operator is non-Hermitian O^†≠ O, however, both even and odd moments exist, giving rise to two sets of Lanczos coefficients {a_n} and {b_n}.
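As a concrete illustration, the recursion (<ref>) can be implemented in a few lines of exact rational arithmetic. The sketch below (our own, with illustrative names) feeds in the even moments 1, 1, 5, 61, 1385, 50521 of 𝒞_1(t) = sech(t), the secant numbers, and recovers a_n = 0 and b_n = n:

```python
from fractions import Fraction
from math import sqrt

def moments_to_lanczos(m):
    """Recursion (<ref>): moments m_0,...,m_K -> Lanczos coefficients a_n, b_n."""
    K = len(m) - 1
    M = [Fraction((-1)**k * m[k]) for k in range(K + 1)]        # M_k^(0)
    L = [Fraction((-1)**(k + 1) * m[k + 1]) for k in range(K)]  # L_k^(0)
    a, b = [-L[0]], []                                          # a_0 = m_1
    for n in range(1, K//2 + 1):
        Mn = [None]*(K + 1)
        for k in range(n, K - n + 1):
            Mn[k] = L[k] - L[n - 1]*M[k]/M[n - 1]
        Ln = [None]*(K + 1)
        for k in range(n, K - n):
            Ln[k] = Mn[k + 1]/Mn[n] - M[k]/M[n - 1]
        b.append(sqrt(Mn[n]))                     # b_n = sqrt(M_n^(n))
        if Ln[n] is not None:
            a.append(-Ln[n])                      # a_n = -L_n^(n)
        M, L = Mn, Ln
    return a, b

# even moments of sech(t) are the secant numbers; odd moments vanish
sec_nums = [1, 1, 5, 61, 1385, 50521]
m = [sec_nums[k//2] if k % 2 == 0 else 0 for k in range(11)]
print(moments_to_lanczos(m))    # a_n = 0 and b_n = 1, 2, 3, 4, 5
```

The same routine applies verbatim to autocorrelation functions with non-vanishing odd moments, in which case it also returns the a_n.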
In general, they can be complex numbers. For example, the previously introduced autocorrelation functions 𝒞_1 (t) = sech(α t) and 𝒞_2 (t) = exp(-α^2 t^2/2) provide b_n^(1) = α n and b_n^(2) = α√(n), respectively. A natural question is whether they come from any orthonormalization algorithm like the Lanczos algorithm or its variants. We will see in later sections that they do not necessarily do so. In fact, the direct generalization to the Arnoldi iteration <cit.> (see Sec. <ref>) only gives these coefficients approximately <cit.>. However, they are the exact output of a bi-orthonormalization procedure, obtained by a different generalization of the Lanczos algorithm known as the bi-Lanczos algorithm (see Sec. <ref>), and expressed in the form of the tridiagonal Lindbladian matrix (<ref>) <cit.>. The linear growth is special: in all known physical systems it is related to the singularity of the autocorrelation function 𝒞 in the complex t plane, and thereby to the decay of the spectral function at high frequencies <cit.>. Yet, mathematically, linear growth of b_n may not require 𝒞(t) to be singular, see an example in Eq. (<ref>) below. However, the opposite statement is mathematically rigorous <cit.>: the singularity of 𝒞(t) would necessarily require unbounded linear growth of b_n, as will be evident from the Dyck path formalism discussed below. In most cases, when b_n grows linearly, the distance to the closest pole of 𝒞(t) along the imaginary axis from the origin determines the asymptotics of the Lanczos coefficients. For example, the autocorrelation 𝒞_1 (t) = sech(α t) has poles on the imaginary axis at t_± = ± i π/(2 α); see Fig. <ref>. This determines b_n^(1) = α n, and thereby the exponential growth of Krylov complexity K(t) ∼ e^2 α t. Alternatively, the linear growth of the Lanczos coefficients results in the exponential decay of the spectral function. For the above case, we find Φ_1 (ω) = π/αsech(πω/2 α) ∼ e^-π |ω| /2 α. The exponential decay is the slowest possible decay, which in turn provides the fastest (linear) growth of the Lanczos coefficients (see Fig. <ref>). In other words, the higher moments control the asymptotic growth and the high-frequency regime, usually associated with late-time physics <cit.>. The sublinear growth of the Lanczos coefficients is usually associated with a faster decay. For example, b_n^(2)∼α√(n) implies Φ_2 (ω) ∼ e^-ω^2/(2 α^2). In 1D local systems, however, the slowest possible decay is Φ(ω) ∼ |ω|^-|ω|≡ e^-|ω| log |ω| <cit.>, which imposes a logarithmic correction on the maximal possible growth of Lanczos coefficients b_n ∼ n/log n. Let us evaluate the asymptotic limit of the moments and do so when only the even moments are present; including the odd moments is a straightforward exercise. The tridiagonal form of the Liouvillian directly yields the moments <cit.> m_2n = (𝒪|ℒ^2n_H|𝒪) , where the suffix H in ℒ_H indicates the case of a Hermitian Liouvillian, corresponding to a_n = 0. The moments are given by the sum over weighted Dyck paths <cit.> m_2n = ∑_h_0 ⋯ h_2n∏_i = 1^2n b_(h_i + h_i-1)/2 , where the set {h_1, ⋯, h_2n} denotes the heights of the Dyck path of length 2n, with h_i ≥ 1/2 and h_0 = h_2n = 1/2, i.e., the path ultimately returns to the height of its starting point at the end. We wish to evaluate the sum in the asymptotic limit of large n. To this end, two approaches have been outlined in <cit.>. However, we take an alternative approach that evaluates the number of weighted Dyck paths using a saddle point approximation.
This amounts to expressing (<ref>) as a path integral over a smooth function f(𝗍) with 𝗍∈ [0,1], such that h_i = 1/2 + 2n f(i/(2n)). This puts the boundary condition on f(𝗍) such that f(0) = f(1) = 0. In other words, the derivative of f(𝗍) denotes the slope of moving up and down of a microscopic Dyck path at the index i = 2 n 𝗍, such that f'(𝗍) = +1 and f'(𝗍) = -1 for up and down jumps respectively. Further, if the probabilities associated with such up and down jumps are p and (1-p), we can write 2p(𝗍) - 1 = f'(𝗍). Since we work in the asymptotic limit, we further assume that b_n is a smooth function of n, i.e., b_n ≡ b(n). The asymptotic form of the moment is given by considering the total number of weighted Dyck paths <cit.> m_2n ∼∫ D f(𝗍) e^𝖲(n) , 𝖲(n) = 2n ∫_0^1 d𝗍 [𝖧(p(𝗍)) + log(b(2 n f(𝗍)) ] , where 𝗍∈ [0,1] is just a parameter with no relation to real time t. Here, p(𝗍) = (1 + f'(𝗍))/2, with the prime indicating the derivative with respect to 𝗍, and 𝖧(x) = -x log x - (1-x) log(1-x) is the microscopic Shannon entropy of the associated variable x. Therefore 𝖲(n) evaluates the total contribution of the microscopic entropy weighted by the Lanczos coefficients. Alternatively, it can be considered as the saddle point action that gives the total entropy, i.e., the number of total weighted Dyck paths for the moments (<ref>). For the generic growth of Lanczos coefficients b(n) = α n^δ with 0 ≤δ≤ 1, the extremization of the entropy function 𝖲(n) can be evaluated in the following way. First, notice that 𝖲(n) is a functional of f(𝗍), i.e., 𝖲[f(𝗍)] = ∫_0^1 d 𝗍 𝒮 (f(𝗍), f'(𝗍), 𝗍) , with a function 𝒮 that depends on f(𝗍), f'(𝗍) and 𝗍. The f'(𝗍) dependence appears from p(𝗍). Here, the n dependence of 𝖲 is suppressed. This posits a variational problem, and we seek the f(𝗍) that extremizes the above functional. Hence, extremization implies that 𝒮 satisfies the Euler-Lagrange equation <cit.> d/d 𝗍(∂𝒮/∂ f'(𝗍)) = ∂𝒮/∂ f(𝗍) . Substituting b(n) = α n^δ and 𝖧(x) in (<ref>), and using (<ref>), the function f(𝗍) satisfies the following equation of motion <cit.> - f”(𝗍)/1- f'(𝗍)^2 = δ/f(𝗍) ,    f(0) = f(1) = 0 , with the associated boundary condition. This equation can be solved for generic δ in terms of the hypergeometric function. However, we focus on three specific cases δ = 1, δ = 1/2, and δ = 0, which correspond to linear growth, sublinear (square root) growth, and constant b (n) = α∼ O(1), respectively. In these cases, the solution becomes <cit.> f(𝗍) = sin (π𝗍)/π   δ = 1 , 𝗍(1-𝗍)   δ = 1/2 , 0   δ = 0 , which respects the prescribed boundary condition. The saddle point action can be readily evaluated, and that correspondingly gives the moments <cit.> m_2n∼(4 n α/π e)^2n   δ = 1 , (2 n α^2/e)^n   δ = 1/2 , 4^n   δ = 0 . In the last case (δ = 0), the Dyck paths are not weighted <cit.>, which gives rise to the asymptotic behavior of the Catalan numbers, Cat(n) ∼ 4^n <cit.>. An extension of this result, establishing the relation between the constant γ in (<ref>) and the order of the singularity on the imaginary axis of 𝒞(t), is discussed in <cit.>. The extension to the case in which b_n split into two approximately continuous branches is developed in <cit.>. The equivalent case in one spatial dimension is a bit subtle. Since the Lanczos coefficients show a logarithmic correction b_n ∼α n/log n, the function f(𝗍) acquires a subleading term which is logarithmic <cit.> f(𝗍) = sin (π𝗍)/π + O(1/log(2n)) , where the first term equals that in (<ref>) for the linear growth.
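Since for a_n = 0 the moment m_2n in (<ref>) is just the return amplitude (𝒪_0|ℒ_H^2n|𝒪_0) of the tridiagonal matrix, the saddle-point estimate is easy to check by repeated matrix application. A minimal numerical sketch for δ = 1 (our own consistency check; names are illustrative):

```python
import numpy as np

def even_moments(b, nmax):
    """m_{2n} = (O_0| L^{2n} |O_0) for tridiagonal L with off-diagonals b, a_n = 0."""
    dim = len(b) + 1                       # must exceed 2*nmax so paths are not truncated
    L = np.zeros((dim, dim))
    for k, bk in enumerate(b, start=1):
        L[k, k - 1] = L[k - 1, k] = bk
    v = np.zeros(dim); v[0] = 1.0
    m = []
    for _ in range(nmax):
        v = L @ (L @ v)                    # apply L twice per moment
        m.append(v[0])
    return m

alpha = 1.0
b = alpha*np.arange(1, 81)                 # b_n = alpha n  (delta = 1)
m = even_moments(b, 30)
for n in (5, 10, 20, 30):
    # m_{2n}^{1/(2n)} divided by the saddle-point value 4 n alpha/(pi e):
    print(n, m[n - 1]**(1/(2*n))/(4*n*alpha/(np.pi*np.e)))
```

The printed ratio approaches unity slowly, with corrections vanishing like log n/n that stem from the subexponential prefactor of m_2n.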
The corresponding correction term Δ𝖲 to the action 𝖲 is given by <cit.> Δ𝖲 = 2n ∫_0^1 d𝗍(δ -1) log(f(𝗍)) , where f(𝗍) is given by (<ref>). The saddle point solution is readily evaluated, and the moments are given by <cit.> m_2n∼(4 n α/π e log(2n))^2n (2π)^2n/log(2n) . Hence, for the maximal growth of the Lanczos coefficients, the following statements are equivalent (we take systems with a_n = 0) Φ(ω) ∼ e^-π |ω| /2 α    ⇔   b_n ∼α n       ⇔   m_2n∼(4 n α/π e)^2n    ⇔    K(t) ∼ e^2 α t            (d > 1) , Φ(ω) ∼ |ω|^-|ω|   ⇔   b_n ∼α n/log n   ⇔   m_2n∼(α n/log n)^2n   ⇔    K(t) ∼ e^√(4 α t)          (d = 1) . We reiterate that these relations hold in an asymptotic sense. This discussion assumes a smooth behavior of the Lanczos coefficients. By this, we mean b_n ∼α n with n varying continuously, although it is a discrete index. However, the smoothness of the moments m_n does not guarantee the smoothness of the Lanczos coefficients. A particular example is given by the mock autocorrelation function of the form <cit.> 𝒞(t) = 1/2[e^ (e^i t -1)+ e^(e^-i t -1)] , such that 𝒞(0) = 1. The moments can be straightforwardly computed using Eq. (<ref>). Since the above autocorrelation is even, 𝒞(-t) = 𝒞(t), all odd moments vanish and the even moments are given by m_2n = B_2n, where B_2n are the even Bell numbers <cit.>. It is easy to see that the moments are smooth functions of n. Applying (<ref>), we find a_n = 0, while the b_n coefficients split into two branches b_n ∈{b_even, b_odd} after a certain n_*≈ 30, see Fig. <ref>. While the overall growth of b_n is not smooth, the odd and even b_n separately show smooth behavior in the asymptotic sense <cit.> b_n^odd∼√(n) ,          b_n^even∼ n . Hence, the odd coefficients are sublinear, while the even coefficients are linear. Moreover, the autocorrelation function is periodic in t on the real axis. In other words, the autocorrelation function does not decay to zero. This property is presumably responsible for such oscillations. The even and odd splitting can be mathematically expressed as <cit.> b(n) = f(n) + (-1)^n g(n) , with slowly varying functions f(n) and g(n) such that g(n) decays with n, and f(n) ≫ g(n) in the asymptotic limit of n. This gives two distinct branches, oscillating between f(n) ± g(n). If f(n) is assumed to be linear, i.e., f(n) ∼α n for some α >0, then an asymptotic analysis suggests that g(n) has the form <cit.> g(n) ∼ (log n)^-𝖺 ,    𝖺≥ 0 , on top of the linear growth. This gives rise to an autocorrelation function of the form <cit.> 𝒞(t) ∼ t^-𝖺 ,    𝖺≥ 0 . Thus, a power law decay of the autocorrelation results from the logarithmic decay of g(n) on top of the linear growth of f(n). However, 𝖺=0 implies g(n) ∼const, which results in an autocorrelation that does not decay over time and can be periodic. Such non-decaying behavior of the autocorrelation usually gives rise to the unusual splitting of the Lanczos coefficients <cit.>. An alternative point of view, based on the continuity of the spectral function, is also discussed in <cit.>. A similar odd and even separation (with different growth) was observed in the next-to-leading order expansion in the large q SYK model <cit.>, and around the saddle point solution of the Lipkin-Meshkov-Glick (LMG) model <cit.>, as well as in systems mimicking the inhomogeneous Su-Schrieffer-Heeger (SSH) model <cit.>. More specifically, such cases are prevalent in quantum field theory, especially when an explicit IR cutoff is present (see Sec. <ref>).
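This splitting can be reproduced with the moment recursion sketched earlier: generate the Bell numbers exactly via the Bell triangle, set m_2n = B_2n, and read off b_n. A minimal sketch, reusing the moments_to_lanczos routine defined above:

```python
def bell_numbers(N):
    """Bell numbers B_0, ..., B_{N-1} via the Bell triangle (exact integers)."""
    B, row = [1], [1]
    for _ in range(N - 1):
        new = [row[-1]]                 # each row starts with the previous row's last entry
        for x in row:
            new.append(new[-1] + x)
        row = new
        B.append(row[0])
    return B

B = bell_numbers(81)
m = [B[k] if k % 2 == 0 else 0 for k in range(81)]   # m_{2n} = B_{2n}, odd moments vanish
a, b = moments_to_lanczos(m)                          # from the sketch above
print([round(x, 2) for x in b])   # the even and odd branches separate at large n
```

The exact rational arithmetic matters here: the Hankel-type cancellations in the recursion quickly destroy floating-point accuracy at large n.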
In such a case, however, the pole structure of the autocorrelation function and the asymptotic decay of the spectral function cannot be used to deduce the generic growth of the Lanczos coefficients <cit.>. The full understanding of such splitting is still under investigation. We can express the recursive relation (<ref>) in a suggestive way, using a continued fraction <cit.> G(z) = ∑_n=0^∞m_n/z^n+1 = 1/(z - a_0 - b_1^2/(z - a_1 - b_2^2/(z - a_2 - ⋯))) , where G(z) is the Green's function, which is related to the autocorrelation function 𝒞 (t) by a Laplace-like transform G(z) = i ∫_0^∞ d t e^-i z t 𝒞 (t) . This can be verified using (<ref>). For a Hermitian Liouvillian ℒ_H, we have an orthonormal basis and the Green's function can be written as G_H(z) = (𝒪|(z-ℒ_H)^-1|𝒪) with the even moments given by (<ref>), which is connected to the inverse scattering operator <cit.>. Furthermore, G(z) is associated with paths starting at the first site and returning to it after propagating over the chain. Applying this idea recursively allows for an intuitive understanding of the continued fraction expansion (<ref>). The continued fraction representation of the Green's function is closely related to the generating function of the Motzkin polynomials <cit.>.

§.§.§ An example: large q SYK model

In principle, the above formalism works for any system. For illustration, we choose the paradigmatic Sachdev-Ye-Kitaev (SYK) model as our system, as done in <cit.>. A similar construction has also been considered in the double-scaled SYK model <cit.>. The motivation for choosing the SYK is twofold: it is numerically amenable <cit.> and analytically tractable <cit.> for the desired computation. Further, the model is also important from holographic considerations due to its similarity with Jackiw-Teitelboim (JT) gravity in the low-energy limit. We consider the q-body SYK model with N Majorana fermions, given by the Hamiltonian <cit.> H = i^q/2∑_1 ≤ i_1 < ⋯ < i_q ≤ N J_i_1 ⋯ i_qψ_i_1⋯ψ_i_q , where the fermionic operators ψ_i obey the Clifford algebra {ψ_a, ψ_b} = δ_ab, and J_i_1 ⋯ i_q are random couplings drawn from a Gaussian ensemble with zero mean and variance given by ⟨J_i_1 ⋯ i_q⟩ = 0 ,   ⟨J^2_i_1 ⋯ i_q⟩ = (q-1)! J^2/N^q-1 = 2^1-q(q-1)! 𝒥^2/q N^q-1 , where 𝒥^2 = 2^1-q q J^2 is the convenient energy scale in the large q limit. The limit N→∞ is already implied and facilitates analytical tractability, especially for computing correlation functions in the 1/q expansion. However, for numerical purposes, we choose finite N and thus focus on finite q results. See <cit.> for a review of the SYK model and its connection to holography. Traditionally, the growth of operator size in this model has been studied using the melon diagram technique in the Pauli spin basis <cit.> and epidemic models <cit.>. Let us study the growth of a single Majorana operator, say ψ_1, and the Krylov complexity, evolved by the Hamiltonian (<ref>). The two-point autocorrelation function 𝒞 (t) = (ψ_1(t) |ψ_1(0))_β at finite temperature 1/β is known <cit.>. However, in this section, we only focus on the infinite-temperature case. The autocorrelation function can be expanded as <cit.> 𝒞 (t) = 1 + 1/q g(t) + O(1/q^2) . Here, we have considered the leading order term in the 1/q expansion. The subleading order has also been considered <cit.>, but we ignore it for our discussion. The function g(t) satisfies the Liouville differential equation <cit.> ∂_t^2 g(t) = -2 𝒥^2 e^g(t) .
With the boundary conditions g(0) = 0 and g'(0) = 0, we obtain the solution g(t) = 2 ln(sech (𝒥 t )). Hence, the autocorrelation function in the leading order is given by 𝒞 (t) = 1 + 2/q ln(sech (𝒥 t )) + O(1/q^2) . This form of the autocorrelation directly allows us to compute the moments using (<ref>). Since the autocorrelation is an even function in time, the odd moments vanish, m_2n+1 = 0, while the even moments are given by m_2n = 2/q𝒥^2n T_n-1 + O(1/q^2) ,        n ≥ 1 , expressed in terms of the tangent numbers {T_n-1}_n=1^∞ = {1, 2, 16, 272, 7936, ⋯} <cit.>. Applying the recursive algorithm (<ref>), we obtain the Lanczos coefficients <cit.> b_n = 𝒥√(2/q) + O(1/q)   n = 1 , 𝒥√(n(n-1)) + O(1/q)   n > 1 , implying an asymptotic growth b_n ∼α n, with α = 𝒥. The growth is set by the energy scale of the problem. Plugging (<ref>) into the differential equation (<ref>), we obtain the Krylov basis wavefunctions as <cit.> φ_n(t) = 1 + (2/q) ln(sech (𝒥 t)) + O(1/q^2)   n = 0 , √(2/nq) tanh^n (𝒥 t) + O(1/q^2)   n ≥ 1 . It can be easily checked that the total probability is unity, ∑_n=0^∞|φ_n(t)|^2 = 1, to order 1/q, as expected. Further, we compute the Krylov complexity as <cit.> K (t) = 2/qsinh^2 (𝒥 t) + O(1/q^2) , which grows exponentially with Krylov growth coefficient λ_K = 2 α = 2 𝒥. It is also straightforward to compute the Krylov variance (<ref>), which is given by Δ K(t)^2 = 1/2qsinh^2(2 𝒥 t) + O(1/q^2) . This grows exponentially with coefficient 4 𝒥. The infinite-temperature Lyapunov exponent obtained from the OTOC is also available <cit.>. In particular, it is upper bounded <cit.> λ_OTOC≤ 2 α . The bound is tight, and no tighter bound can be obtained. It is saturated only in the large q limit and remains valid near saturation for finite q. However, this inequality is proved only in the infinite-temperature limit for the large q limit. In the finite temperature case, the inequality is only a conjecture. In particular, it was speculated that the bound can only be saturated for all-to-all systems. A recent study also formulates the OTOC in the Krylov basis in the SYK model <cit.>. Further, the bound (<ref>) is shown to be valid for classical systems and becomes tight for classically chaotic systems, including systems that exhibit saddle-dominated scrambling <cit.>. To complement our analytic computation, we can also determine the Lanczos coefficients numerically from the Lanczos algorithm. In principle, we can choose any fermionic operator which is local in the sense of the full Hamiltonian. This includes single or two-body fermionic operators, for example. However, for our numerical analysis, we take 𝒪 = √(2)ψ_1 as the normalized initial operator, for which we have analytically computed the autocorrelation function in (<ref>). Figure <ref> shows the behavior of the Lanczos coefficients b_n for q =4 with different system sizes (number of fermions) N. Since the system is closed, all a_n vanish. The behavior of b_n is typical of chaotic systems: it grows linearly, followed by saturation at n ≳ N/q to a system-size dependent value. At n ∼ e^N, the Lanczos coefficients begin to decrease. One needs to exhaust the full Krylov space to see the full regime of the Lanczos descent of b_n. See <cit.> for the full profile of the Lanczos coefficients in SYK. However, in our case, increasing N increases the saturation value linearly, as shown in the right panel of Fig. <ref>. In the thermodynamic limit, the saturation is pushed to infinity, making it physically irrelevant.
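These steps can be verified symbolically. A minimal sympy sketch (our own illustration; variable names are ours) computes the moments of (<ref>), runs the recursion (<ref>) symbolically in q, and expands b_n² at large q, reproducing b_1² = 2𝒥²/q and b_n² = 𝒥²n(n-1) + O(1/q) for n > 1:

```python
import sympy as sp

q, t, J = sp.symbols('q t J', positive=True)
C = 1 + 2*sp.log(sp.sech(J*t))/q          # large-q autocorrelation, Eq. (<ref>)

K = 10                                    # use moments m_0, ..., m_10
m = [sp.simplify(sp.diff(C, t, k).subs(t, 0)/sp.I**k) for k in range(K + 1)]

# recursion (<ref>), here kept symbolic in q
M = [(-1)**k*m[k] for k in range(K + 1)]
L = [(-1)**(k + 1)*m[k + 1] for k in range(K)]
for n in range(1, K//2 + 1):
    Mn, Ln = [None]*(K + 1), [None]*(K + 1)
    for k in range(n, K - n + 1):
        Mn[k] = sp.cancel(L[k] - L[n - 1]*M[k]/M[n - 1])
    for k in range(n, K - n):
        Ln[k] = sp.cancel(Mn[k + 1]/Mn[n] - M[k]/M[n - 1])
    # Mn[n] is b_n^2; expect 2 J^2/q for n = 1 and J^2 n(n-1) + O(1/q) for n > 1
    print(n, sp.series(Mn[n], q, sp.oo, 2))
    M, L = Mn, Ln
```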
Hence, in the thermodynamic limit, only the slope of b_n is important, not its saturation value. The slope is linear and given by (<ref>). However, our numerical model consists of q=4-body interactions with system sizes up to N = 22. Hence, neither q nor N is large enough to compare our numerical result with the analytical large q (and large N) result. Finally, from the numerical results for the Lanczos coefficients, we directly compute the Krylov complexity by solving Eq. (<ref>). An easy way to achieve this will be discussed in (<ref>). Figure <ref> shows the behavior of the Krylov complexity. Here, only transient exponential and linear growth are observed. The crossover happens at the scrambling time t_*∝log N/q, corresponding to the saturation of b_n. At exponentially large times, the Krylov complexity saturates, which is missing in our analysis. This is because we have not taken the full profile of Lanczos coefficients; the large n Lanczos coefficients are responsible for the late-time value of the Krylov complexity. See <cit.> for the full profile of Krylov complexity in the SYK model.

§.§ Revisiting Lanczos coefficients: a Toda chain method

In this section, we derive an alternate method to compute Lanczos coefficients. This can be obtained from the monic version of the Lanczos algorithm <cit.>, with Euclidean time τ = it. The method goes as follows: given the autocorrelation 𝒞(τ), we construct the (n+1)-dimensional square Hankel matrix ℳ such that ℳ_jk^(n) (τ) = 𝒞^(j+k) (τ) ,      j, k = 0, 1, ⋯, n , where the (j,k)-th element is given by the (j+k)-th derivative of the autocorrelation, i.e., 𝒞^(j+k)(τ) = d^j+k𝒞(τ)/d τ^j+k. In other words, at τ = 0 the Hankel matrix is formed by the moments. The determinant of this matrix defines the Toda function τ_n (τ) = det ℳ^(n) (τ) , where τ_0 (τ) = 𝒞(τ) is the autocorrelation function. The Toda function satisfies Hirota's bilinear form <cit.>, τ_n τ̈_n - τ̇_n^2 = τ_n+1τ_n-1 ,      τ_-1 := 1 , where the dot denotes the derivative with respect to τ and we have suppressed the τ dependence in τ_n ≡τ_n (τ). This equation is equivalent to the Toda chain equation <cit.> 𝗊̈_n = e^𝗊_n+1 - 𝗊_n - e^𝗊_n - 𝗊_n-1 ,      n= 0, 1, ⋯ , with 𝗊_-1 = -∞, upon the substitution of the Toda variables 𝗊_n = log(τ_n/τ_n-1), with τ_0 (τ) = e^𝗊_0 (τ). Given the Toda function τ_n (τ), the Lanczos coefficients are obtained as[Our notation uses a shift in n → n+1 in the expression of b_n^2(τ) in (<ref>) compared to <cit.>. This is consistent with the Toda equation and produces the exact result obtained from the moment method.] <cit.> b_n+1^2 (τ) = τ_n+1 (τ) τ_n-1 (τ)/τ_n^2 (τ) ,      n ≥ 0 , a_n (τ) = d/d τlog(τ_n (τ)/τ_n-1 (τ)) ,      n ≥ 0 , which depend on the parameter τ. The parameter-dependent Lanczos coefficients are known as Flaschka variables. They are related to the Toda variables as a_n (τ) = 𝗊̇_n (τ) , b_n+1 (τ) = e^1/2(𝗊_n+1 (τ) - 𝗊_n(τ)) ,      n= 0, 1, ⋯ . In other words, the Toda equation (<ref>) can alternatively be written in the following Flaschka form <cit.>, where a_n(τ) and b_n(τ) satisfy ȧ_n (τ) = b_n+1^2 (τ) - b_n^2 (τ) ,                   n ≥ 0 , ḃ_n+1 (τ) = 1/2 b_n+1 (τ)(a_n+1(τ) - a_n (τ)) ,   n ≥ 0 , with b_0 = 0. Now we choose an appropriate cutoff τ = τ_0, similar to the moment method. The actual Lanczos coefficients are then given by b_n+1 = b_n+1 (τ)|_τ_0 ,       n≥ 0 , a_n = a_n (τ)|_τ_0 ,          n≥ 0 . For most cases, we consider τ_0 = 0, but for certain systems (e.g., in CFT), we may need to consider τ_0 ≠ 0.
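At τ_0 = 0, the Hankel matrix reduces to the matrix of moments, and (<ref>) becomes a ratio of Hankel determinants. A minimal sympy sketch (our own, with illustrative names), applied to the Euclidean continuation of 𝒞_1(t) = sech(t), namely 𝒞(τ) = sec(τ):

```python
import sympy as sp

tau = sp.Symbol('tau')

def toda_b(C, nmax, tau0=0):
    """b_{n+1}^2 = tau_{n+1} tau_{n-1}/tau_n^2 from Hankel determinants of C^{(j+k)}(tau0)."""
    d = [sp.diff(C, tau, k).subs(tau, tau0) for k in range(2*nmax + 1)]
    def tau_det(n):
        if n < 0:
            return sp.Integer(1)                       # tau_{-1} := 1
        return sp.Matrix(n + 1, n + 1, lambda j, k: d[j + k]).det()
    T = {n: tau_det(n) for n in range(-1, nmax + 1)}
    return [sp.sqrt(T[n]*T[n - 2]/T[n - 1]**2) for n in range(1, nmax + 1)]  # b_1..b_nmax

# Euclidean (tau = it) version of sech(t) is sec(tau); expect b_n = n:
print(toda_b(sp.sec(tau), 5))      # [1, 2, 3, 4, 5]
```

For an even autocorrelation function, the a_n(τ_0 = 0) vanish identically, so only the b_n are computed here.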
This yields another powerful method to obtain the Lanczos coefficients from the autocorrelation function. There is a very simple ansatz b_n^2(τ)=b(τ)^2 p(n), which satisfies (<ref>) provided p(n) is a polynomial of degree no higher than two. The corresponding solutions were explicitly found in <cit.>. In the next section, we will encounter exactly the same sequences of b_n again when discussing the algebraic approach. Both the "moment method" and the "Toda chain method" are useful in many situations, especially when we know the analytic form of the autocorrelation function. A particular example is the SYK model <cit.> (see Sec. <ref>) and its large q expansions <cit.>. The moment method has been successfully applied to compute the Lanczos coefficients, which have been numerically verified. Another specific interesting case is that of quantum field theories (QFT) and their conformal limit, described by conformal field theories (CFT) <cit.>. Because of the infinitely many degrees of freedom, we cannot generate an orthonormal (or bi-orthonormal) basis. However, many theories allow us to compute the autocorrelation function exactly, mainly due to conformal symmetry. In such cases, the Lanczos coefficients can be exactly computed using either the moment method or the Toda chain method. See Sec. <ref> for details.

§ COHERENT STATES, COMPLEXITY ALGEBRA, AND DISPERSION BOUND IN KRYLOV SPACE

§.§ Coherent states

From the vast array of complex quantum systems, we narrow down our study to those that possess symmetry. We focus on systems where the Liouvillian operator belongs to the Lie algebra of a specific symmetry group. For these systems, the representation of the Liouvillian in the Krylov basis decomposes into two components according to <cit.> ℒ = α (ℒ_+ + ℒ_-) , where ℒ_+ and ℒ_- represent the raising and lowering parts of the Liouvillian, akin to ladder operators. While their representations vary across different symmetry groups, their operational essence mirrors that of the creation and annihilation operators, which we will examine shortly. The coefficient α is not constrained by symmetry considerations and must be calibrated based on the system's specifics and according to the chosen norm. Considering the scenario where a_n = 0, the action of ℒ_± on the Krylov basis is given by (<ref>): αℒ_+ |𝒪_n) = b_n+1 |𝒪_n+1) , αℒ_- |𝒪_n) = b_n |𝒪_n-1) . Systems with non-vanishing a_n can also be considered <cit.>. Notably, the temporal evolution dictated by the Liouvillian (<ref>) can be equivalently described through the evolution of coherent states, which are constructed via the exponential action of the ladder operators <cit.>, D(ξ) := e^ξℒ_+ - ξ̄ℒ_- , where ξ denotes a complex number, with ξ̄ being its complex conjugate. The operator D(ξ), known as the displacement operator, corresponds to the Lie group generated by the ladder operators ℒ_±. Similar displacement operators frequently arise when studying coherent states in quantum optics. A prototypical coherent state emerges from the application of D(ξ) to a reference state |ψ⟩ <cit.>. We next explore how a judicious selection of ξ and |ψ⟩ facilitates the study of operator growth in Krylov space governed by the Liouvillian (<ref>) with specific symmetry groups.

§.§.§ SL(2,ℝ) algebra

Our first example is the algebra of SL(2,ℝ), which is locally isomorphic to SU(1,1) <cit.>. The generators of this group are the set {L_0, L_± 1}, which satisfy the following Lie algebra <cit.> [L_0, L_± 1] = ∓ L_± 1 ,       [L_1,L_-1] = 2 L_0 .
The above generators construct the complete representation of the Lie algebra <cit.> L_0|h, n⟩ = (h+n) |h, n⟩ , L_-1|h, n⟩ = √((n+1)(2h+n))|h, n+1⟩ , L_1|h, n⟩ = √(n(2h+n-1))|h, n-1⟩ . Here, h and n ≥ 0 are two indices for the corresponding states |h, n⟩, usually known as the conformal weight and the excitation index, respectively. The states obey the condition ⟨ h,m|h,n ⟩ = δ_m,n, ensuring orthonormality. Furthermore, we can introduce the Casimir operator 𝖢_2 = L_0^2 - 1/2(L_-1 L_1 + L_1 L_-1) of the algebra, which acts invariantly on the state |h,n⟩ <cit.> 𝖢_2 |h, n⟩ = h(h-1)|h, n⟩ . Since the Casimir operator commutes with L_0, the states |h, n⟩ are the simultaneous eigenstates of 𝖢_2 and L_0. This fact is evident from (<ref>) and (<ref>). The states |h, n⟩ are obtained by the repeated action (n-fold) of the generator L_-1 on the highest-weight state |h⟩, expressed as <cit.> |h, n⟩ = √(Γ (2h)/n! Γ(2h+n)) L_-1^n |h⟩ . Interestingly, the generalized coherent states of the SL(2, ℝ) algebra |z,h⟩ are obtained by the action of the displacement operator D(ξ) <cit.>: |z,h⟩ = D(ξ) |h⟩ ,      D(ξ) = e^ξ L_-1 - ξ̄ L_1 . The parametrization between z and ξ is given by z = ξ/|ξ|tanh(|ξ|) , where |ξ|^2 = ξξ̄. Thus, a generic coherent state aligned with the SL(2,ℝ) symmetry can be written explicitly in the eigenbasis {|h, n⟩} as <cit.> |z,h⟩ = (1-|z|^2)^h∑_n=0^∞ z^n √((2h)_n/n!)|h, n⟩ . Here, (a)_n = Γ(a + n)/Γ(a) is the Pochhammer symbol. With the above formalism in hand, we define the Liouvillian as a combination of L_± 1 according to <cit.> ℒ = α (L_1 + L_-1) , where α is a non-zero real coefficient. Although we do not impose any constraint on it, we take it independent of both h and n. In Eq. (<ref>), we can also include L_0 and an identity operator. However, for the discussion, we stick to the simplest choice in (<ref>). To appreciate the generic structure of the operator evolution <cit.> |𝒪(t)) ≡ e^i ℒ t |𝒪) ≡ e^i α (L_1 + L_-1) t |𝒪) , note that the time-evolution operator generated by the Liouvillian (<ref>) is nothing but the displacement operator of the algebra with ξ = i α t. Once we identify the initial operator as the highest-weight state of the representation, we can immediately identify the time-evolved state and the Krylov basis <cit.>, |𝒪(t)) = |z,h⟩ ,    |𝒪) = |h⟩ ,     |𝒪_n) = |h,n⟩ , with the corresponding Lanczos coefficients. In particular, in this case, it is easy to see that they are given by the matrix elements of the SL(2,ℝ) generators <cit.> b_n = α√(n(n-1 + 2 h)) . Thus, the corresponding operator evolution is governed by the SL(2,ℝ) symmetry. This completely furnishes the structure of the operator evolution corresponding to the underlying symmetry algebra. We note that the behavior (<ref>) is one of the examples of the exactly solvable Toda equations (<ref>) with the factorizable b_n^2(τ) (<ref>). The time dependence, which we discuss below, can be extended to the Toda formalism using the Wick rotation τ=i t. So far, the discussion is completely general and constrained by the SL(2,ℝ) symmetry only. However, to make contact with our previous discussion, we identify ξ = i α t. This readily gives z = i tanh(α t) and the time-evolved operator is given by the coherent state <cit.> |𝒪(t)) = |z,h⟩|_z= i tanh(α t), h= η/2 = ∑_n=0^∞√((η)_n/n!) sech^η(α t)tanh^n(α t) |𝒪_n) , where η is related to the parameters of the operator growth hypothesis (<ref>) by η = 2 γ/α + 1 <cit.>.
Hence, the Krylov basis wavefunctions are easily obtained from the expansion coefficients of the above time-evolved operator, together with the associated Lanczos coefficients. They read <cit.> b_n = α√(n(n-1 + η)) , φ_n (t) = √((η)_n/n!) sech^η(α t)tanh^n(α t) , with the same choice of parameters z and h, as in (<ref>). One can check that (<ref>) satisfies the recurrence relation (<ref>) with the corresponding Lanczos coefficients (<ref>). Each φ_n(t) shows an exponentially decreasing behavior ∼ e^-ηα t at late times, independent of n. For finite times, Fig. <ref> (left) shows snapshots of the wavefunction at successive times. As time progresses, the contribution of the higher φ_n becomes more dominant. To find the asymptotic tail of such behavior, consider the asymptotic limit of (<ref>). We find <cit.> φ_n (t) ≃ n^(η -1)/2tanh^n (α t) ∼ e^-n/ξ(t) n^(η -1)/2 , where ξ(t)^-1∼ 2e^-2 α t. This expression follows from the asymptotic expansion Γ(n+η)/Γ(n) ∼ n^η for n →∞, ignoring any n-independent terms, which are irrelevant in the asymptotic limit. The delocalization length ξ(t) grows exponentially, i.e., the operator delocalizes <cit.>. Later, we will see a dissipative version <cit.> of this delocalization in Eq. (<ref>). The probability is conserved, ∑_n=0^∞ |φ_n (t)|^2 = 1, and the Krylov complexity reads <cit.> K(t) = ∑_n=0^∞ n |φ_n(t)|^2 = ηsinh^2 (α t) ∼η e^2 α t , which is exponential with a constant prefactor η. Let us consider two special values of η. For the simplest case, with η = 1, the Lanczos coefficients become linear, and the Krylov basis functions are given by <cit.> b_n = α n ,      φ_n (t) = sech(α t)tanh^n(α t) . The autocorrelation function is φ_0(t) = sech(α t) ≡𝒞_1(t), whose properties, pole structure, and corresponding spectral function were already discussed in detail, see Sec. <ref>. The Krylov complexity grows exponentially, K(t) = sinh^2 (α t) ∼ e^2 α t. The second choice concerns the SYK model. The above result correctly reproduces the analytic expressions in this model. Identifying α = 𝒥, η = 2/q in (<ref>), and performing the 1/q expansion readily gives b_n = α√(n(n-1 + 2/q)) = 𝒥√(n(n-1)) + O(1/q), which is exactly (<ref>) for n>1. Both the Krylov wavefunction and the Krylov complexity also reduce to (<ref>) and (<ref>) upon the same identification. This is intimately tied with the underlying SL(2,ℝ) symmetry of the Liouvillian generator, which we have exploited in the corresponding melon diagrams in (<ref>). Later, we will see that this generic algebraic structure is also retained when we include an additional coefficient a_n for open systems (see Eq. (<ref>)).

§.§.§ Heisenberg-Weyl (HW) algebra

The Heisenberg-Weyl (HW) algebra is generated by the four generators {a, a^†, 𝕀, n̂}. Here 𝕀 is the identity operator, and n̂ = a^† a is the number operator. They satisfy the following algebra [a,a^†] = 𝕀 ,    [n̂,a] = -a ,     [n̂,a^†] = a^† . The Hilbert space is infinite-dimensional and spanned by the number basis set |n⟩ = 1/√(n!) (a^†)^n |0⟩ , where |0⟩ is the lowest-weight state in the representation corresponding to n=0, i.e., ⟨ 0|n̂|0⟩ = 0. In other words, it is annihilated by the operator a, i.e., a |0⟩ = 0. The basis states are orthonormal ⟨ m|n ⟩ = δ_m,n. The creation (a^†) and annihilation (a) operators act on the number state according to a^†|n⟩ = √(n+1)|n+1⟩ ,     a |n⟩ = √(n)|n-1⟩ . They are also known as the raising and the lowering operator, respectively, as is obvious from their action.
The generic coherent state is given by <cit.>: |z⟩ = D(z) |0⟩ ,      D(z) = e^z a^† - z̄ a , with z = |z| e^i ϕ being a complex number. The operator D(z) is the standard displacement operator. Using (<ref>) and the algebra (<ref>), we obtain the generic form of the coherent state |z⟩ = e^-|z|^2/2∑_n=0^∞z^n/√(n!)|n⟩ . To find the operator evolution, we write the Liouvillian in terms of the creation and annihilation operators <cit.> ℒ = α (a^† + a) . Following the identification z = i α t, the Krylov basis and the time-evolved operator can readily be obtained as a coherent state <cit.> |𝒪(t)) = |z = i α t⟩ = e^i α (a^† + a)t|0⟩ ,    |𝒪_n) = |n⟩ . Similarly, we identify the Lanczos coefficients and the Krylov basis wavefunctions <cit.> b_n = α√(n) ,     φ_n (t) = e^-α^2 t^2/2 (α t)^n/√(n!) , such that the wavefunction is appropriately normalized, i.e., ∑_n=0^∞|φ_n(t)|^2 = 1. The snapshots of the wavefunctions are shown in Fig. <ref> (right). The Krylov complexity is given by <cit.> K(t) = ∑_n=0^∞ n |φ_n (t)|^2 = α^2 t^2 , which grows quadratically over time. This algebra is thus an example of a system where the Lanczos coefficients grow sub-linearly, and therefore the Krylov complexity grows sub-exponentially, in particular quadratically. Various generalizations and extensions of the above algebra are possible, like the q-deformed version. An interesting extension was done for the Schrödinger group in <cit.>.

§.§.§ SU(2) algebra

The SU(2) algebra is defined by the set of three generators {J_i}_i=1^3, obeying the following commutation rule <cit.> [J_i, J_j] = i ϵ_ijk J_k ,     i,j,k = 1,2,3 , where ϵ_ijk is the Levi-Civita symbol. Introducing the raising and lowering operators J_± = J_1± i J_2, and relabeling J_3 ≡ J_0, the SU(2) algebra is written as [J_0, J_±] = ± J_± ,       [J_+, J_-] = 2 J_0 . The Casimir operator of this algebra is J^2 := J_1^2 + J_2^2 + J_3^2 = J_+J_- + J_0(J_0-1). Since [J^2,J_0] = 0, they have a common eigenbasis J^2 |j, n⟩ = j(j+1) |j, n⟩ , J_0 |j, n⟩ = n |j, n⟩ . The states are orthonormal ⟨ j, n| j', n' ⟩ = δ_j j'δ_n n'. The quantum numbers j = 0, 1/2, 1, ⋯ and n with -j ≤ n ≤ j are spin quantum numbers. For convenience, we shift n → -j+n, such that 0 ≤ n ≤ 2j. The lowest-weight state is |j,-j⟩, which is annihilated by J_-, i.e., J_-|j,-j⟩ = 0. A similar action on the highest-weight state is J_+|j,j⟩ = 0. However, repeated action of J_+ on the lowest-weight state |j,-j⟩ builds the corresponding orthonormal basis states <cit.> |j, -j+n⟩ = √(Γ (2j-n+1)/n! Γ(2j+1)) J_+^n |j,-j⟩ , which we denote with another index n. Alternatively, we could reach the same state from the highest-weight state |j, j⟩ by the repeated action of J_-. The action of the generators {J_±, J_0} on this state (<ref>) is the following <cit.> J_0|j, -j+n⟩ = (-j+n) |j, -j+n⟩ , J_+|j, -j+n⟩ = √((n+1)(2j-n))|j, -j+n+1⟩ , J_-|j, -j+n⟩ = √(n(2j-n+1))|j,-j + n-1⟩ . Similar to the SL(2,ℝ) and HW cases, the coherent states |z,j⟩ are obtained by the action of the displacement operator D(ξ) <cit.>: |z,j⟩ = D(ξ) |j,-j⟩ ,      D(ξ) = e^ξ J_+ - ξ̄ J_- , where z = tan(θ/2) e^i ϕ. As a result, the SU(2) coherent state is written in the eigenbasis as <cit.> |z,j⟩ = (1+|z|^2)^-j ∑_n=0^2j z^n √(Γ(2j+1)/n! Γ(2j-n+1))|j, -j + n⟩ . This completes the general discussion of the familiar SU(2) algebra. To connect with our discussion, we split the Liouvillian according to <cit.> ℒ = α (J_+ + J_-) . The operator evolution is given by <cit.> |𝒪(t)) ≡ e^i ℒ t |𝒪) ≡ e^i α (J_+ + J_-) t |𝒪) .
With the initial operator |𝒪) = |j,-j⟩, given that the effective evolution is described by the displacement operator (<ref>), the time-evolved operator is given by the coherent state <cit.>, i.e., |𝒪(t)) := |z = i tan(α t),j⟩ , |𝒪_n) := |j,-j+n⟩ ,    n = 0,⋯, 2j . In other words, we have identified the Krylov basis states with the orthonormal spin states. Hence the Lanczos coefficients and the Krylov basis wavefunctions are readily identified <cit.> b_n = α√(n(2j - n +1)) , φ_n (t) = √(Γ(2j+1)/n! Γ(2j-n+1)) sec^-2j(α t)tan^n(α t) . The Krylov dimension D_K is identified when b_D_K = 0, which implies D_K = 2j+1, the same as the Hilbert space dimension. Due to the finite dimensionality of the Hilbert space, b_n furnishes the symmetry b_1 = b_2j = α√(2j) ,      b_max = α(j+1/2) . The variation of the Lanczos coefficients is shown in Fig. <ref> (left). The coefficients peak at n = j+1/2 with b_j+1/2 = b_max and vanish at the end of the Krylov space. The corresponding evolution of a set of wavefunctions is shown in Fig. <ref> (right). The symmetric structure of b_n is reflected in the symmetric profile of the wavefunctions. They are normalized, i.e., ∑_n=0^2j |φ_n (t)|^2 = 1. Hence, the Krylov complexity is given by <cit.> K(t) = ∑_n=0^2j n |φ_n (t)|^2 = 2j sin^2(α t) . Since α∈ℝ_+, the complexity is periodic. The time-averaged complexity is K̄ = j, which is directly proportional to the spin. This fact has been exploited in the computation of the Krylov state (spread) complexity (see Sec. <ref>) for the spin j=N/2 representation of the SU(2) algebra for the paramagnetic Hamiltonian, where N represents the number of lattice sites <cit.>. A particular extension of the above algebra is known as the 𝚚-deformed SU(2) algebra <cit.> (denoted as SU_𝚚(2)), which was first studied in Ref. <cit.> in the context of quantum many-body scars. This amounts to defining [x]_𝚚, the 𝚚-deformed number of x, such that <cit.> [x]_𝚚 = 𝚚^x - 𝚚^-x/𝚚 - 𝚚^-1 ,    lim_𝚚→ 1 [x]_𝚚 = x . The SU(2) algebra (<ref>) is modified according to [J_0, J_±] = ± J_± ,       [J_+, J_-] = [2 J_0]_𝚚 , where [2 J_0]_𝚚 is the 𝚚-deformed version of the operator 2 J_0, with the associated Lanczos coefficients <cit.> b_n^(𝚚) = α√([n]_𝚚 [2j - n +1]_𝚚) ,    lim_𝚚→ 1 b_n^(𝚚) = b_n , where b_n is the SU(2) Lanczos coefficients (<ref>). For a more detailed application, see <cit.>.

§.§ Complexity algebra: the simplicity hypothesis

The particular tridiagonal structure of the Liouvillian (<ref>) and the Krylov operator (<ref>) in the Krylov basis enables us to define a particular notion of algebra in Krylov space. To see this, we define the anti-Liouvillian <cit.> ℳ := [ 0 -b_1 ; b_1 0 -b_2 ; b_2 0 ⋱ ; ⋱ ⋱ -b_D_K-1 ; b_D_K-1 0 ] , with the property ℳ^† = ℳ^⊺ = -ℳ, and a real vector |φ(t) ) := ( φ_0 (t), φ_1 (t), φ_2 (t), … , φ_D_K-1 (t) )^⊺, where ⊺ denotes the transpose. Note that, analogously to the Liouvillian in (<ref>), the anti-Liouvillian may be decomposed into two components simply as ℳ∝ℒ_+ - ℒ_-. In the Krylov basis, the anti-Liouvillian is expressed as ℳ = ∑_n=0^D_K-1 b_n+1[ |𝒪_n+1)(𝒪_n| - |𝒪_n)(𝒪_n+1| ] . The normalization (<ref>) corresponds to the unit norm (φ(t)| φ(t) ) = 1. This allows one to write Eq. (<ref>) as a linear dynamical system, ∂_t |φ(t) ) = ℳ |φ(t)) , with initial condition in the Krylov basis |φ(0) ) = (1, 0, 0, …, 0 )^⊺. This is simply the (imaginary-time) Schrödinger equation for |φ(t)) with an effective Hamiltonian ℳ.
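Numerically, one can simply exponentiate ℳ. A minimal sketch (our own, assuming numpy and scipy are available) that recovers K(t) = sinh²(t) for b_n = n, cf. Sec. <ref>:

```python
import numpy as np
from scipy.linalg import expm

def krylov_complexity(b, ts):
    """Integrate d|phi)/dt = M|phi) for the antisymmetric tridiagonal M built from b,
    then return K(t) = sum_n n |phi_n(t)|^2 at the requested times."""
    D = len(b) + 1
    M = np.zeros((D, D))
    for n, bn in enumerate(b, start=1):
        M[n, n - 1], M[n - 1, n] = bn, -bn     # matches the matrix form of M above
    phi0 = np.zeros(D); phi0[0] = 1.0
    n_op = np.arange(D)                        # diagonal of the complexity operator K
    return np.array([(n_op*(expm(M*t) @ phi0)**2).sum() for t in ts])

ts = np.linspace(0.0, 2.0, 9)
K = krylov_complexity(np.arange(1.0, 201.0), ts)   # b_n = n, truncated at D = 201
print(np.max(np.abs(K - np.sinh(ts)**2)))          # small truncation error
```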
In particular, the definition of the vector |K(t) ) := √(𝒦) |φ(t)) = ( 0, φ_1 (t), √(2)φ_2 (t), … , √(D_K-1)φ_D_K-1 (t) )^⊺ allows one to solve (<ref>) directly with the appropriate initial condition. The Krylov complexity is thus equivalent to the norm K(t) = (K(t)|K(t)). This form is particularly suitable for the numerical evaluation of the Krylov complexity from the numerical form of the Lanczos coefficients. See <cit.> for the detailed numerical implementation. The Liouvillian, the anti-Liouvillian, and the Krylov complexity operator always obey the commutation relations <cit.> [𝒦, ℳ] = ℒ,      [𝒦, ℒ] = ℳ . However, the commutator between the Liouvillian and the anti-Liouvillian [ℒ, ℳ] is not universal. Yet, it is diagonal in the Krylov basis <cit.> [ℒ, ℳ] = 2 ∑_n=0^D_K-1( b_n+1^2 - b_n^2 ) |𝒪_n)(𝒪_n| , with the diagonal coefficients given by the difference between the squared Lanczos coefficients. Since 𝒦 is diagonal in this basis, the commutator [𝒦,[ℒ, ℳ]] = 0 always holds. This also follows from the Jacobi identity [𝒦 , [ ℒ , ℳ ]] + [ℒ,[ ℳ, 𝒦]] + [ ℳ,[ 𝒦, ℒ]] = 0 , using (<ref>). In principle, [ℒ, ℳ] equal to any polynomial of 𝒦 will satisfy [𝒦,[ℒ, ℳ]] = 0. However, the particular choice <cit.> [ℒ, ℳ] = 𝖺 𝒦 + 𝖻 𝕀 ,     𝖺, 𝖻∈ℝ is often referred to as the simplicity hypothesis <cit.>. Here, the identity matrix is denoted by 𝕀. The commutator [ℒ, ℳ] is then an affine function of the Krylov complexity operator 𝒦, which is otherwise non-trivial. This amounts to the redefinition 𝒦̃ = 𝒦 + (𝖻/𝖺) 𝕀, such that the modified operator 𝒦̃ closes the complexity algebra <cit.>, i.e., [𝒦̃, ℳ] = ℒ,      [𝒦̃, ℒ] = ℳ ,      [ℒ, ℳ] = 𝖺 𝒦̃ . It is straightforward to see that Eq. (<ref>) satisfies (<ref>), provided the Lanczos coefficients take the following form <cit.> b_n = √(𝖺/4 n(n-1) + 𝖻/2 n) , with n ≥ 0; this follows by matching the diagonal entries 2(b_n+1^2 - b_n^2) of (<ref>) with 𝖺 n + 𝖻 and summing over n. In other words, the simplicity hypothesis restricts the growth of Lanczos coefficients, with linear growth being the maximal one. This particular form completely specifies the behavior of the Krylov complexity for the associated algebra. To understand this, consider the first time derivative of the Krylov complexity ∂_t K(t) = ∂_t (φ (t) | 𝒦 | φ (t)) = (φ (t) | [𝒦, ℳ] | φ (t) ) , where to deduce the second equality, we used (<ref>) with ℳ^† = ℳ^⊺ = -ℳ. Each additional time derivative of the Krylov complexity brings a commutator with ℳ. The ℓ-th time derivative gives ∂_t^ℓ K(t) = (φ (t) | […[[ 𝒦, ℳ ], ℳ], … , ℳ]_ℓ-times | φ (t) ) , the nested commutator being evaluated in the state |φ(t)). The behavior of these nested commutators provides the differential equation for its time evolution and thus fully characterizes the dynamics of Krylov complexity. For example, the second derivative reads <cit.> ∂_t^2 K(t) = (φ (t) | [ℒ, ℳ] | φ (t) ) , where we have used [𝒦, ℳ] = ℒ. Provided that the simplicity algebra is fulfilled <cit.> ∂_t^2 K(t) = 𝖺 K(t) + 𝖻 . This equation has also been termed the Ehrenfest theorem for the Krylov complexity <cit.>. Three special cases can be distinguished: Case 1: Linear growth & SL(2,ℝ) algebra. This particular case corresponds to 𝖺 = 4 α^2 and 𝖻 = 𝖺/2 = 2 α^2. Then, b_n grows linearly, b_n = α n. The complexity algebra is [ℒ, ℳ] = 2 α^2 (2 𝒦 + 𝕀). Since 𝖺, 𝖻 > 0, the solution of Eq. (<ref>), with the initial condition K(0) = 0 and K(-t) = K(t), is given by K(t) = 2𝖻/𝖺sinh^2(√(𝖺) t/2) = sinh^2(√(𝖻) t/√(2)) ∼ e^2 α t , in the asymptotic limit. However, 𝖻 = 𝖺/2 is not necessarily required; arbitrary independent 𝖺 and 𝖻 also give the linear growth in the asymptotic limit.
Such growth of Lanczos coefficients gives rise to the exponential growth of the Krylov complexity. This growth corresponds to the SL(2,ℝ) algebra <cit.>; see Sec. <ref>. Case 2: Sublinear growth & HW algebra. This particular case corresponds to 𝖺 = 0 and 𝖻 = 2 α^2, when the simplicity algebra closes with [ℒ, ℳ] = 2 α^2 𝕀. The growth of the Lanczos coefficients b_n is sublinear in the asymptotic limit of n, b_n = α√(n) for n ≥ 0. The solution of (<ref>) is K(t) = 𝖻/2 t^2 = α^2 t^2 , i.e., the Krylov complexity grows quadratically in time. The growth corresponds to the Heisenberg-Weyl (HW) algebra; see Sec. <ref> <cit.>. Case 3: Finite dimensions & SU(2) algebra. For finite dimensions, the Lanczos sequence must terminate. Hence, the Lanczos coefficient must vanish at the Krylov dimension D_K > 1, i.e., b_D_K = 0. This implies from (<ref>) that 𝖺 = -2𝖻/(D_K-1). The solution of the Krylov complexity (<ref>) becomes <cit.> K(t) = (D_K-1) sin^2 ω t , where ω = (𝖻/(2 (D_K-1)))^1/2, with the corresponding Lanczos coefficients b_n = ω√(n(D_K-n)). The Krylov complexity is thus periodic in time and associated with the SU(2) algebra, see Sec. <ref> <cit.>. Figure <ref> presents a representative example of the Krylov complexity and the Lanczos coefficients for each of the three aforementioned cases. The classification of the complexity algebras is highly useful since, as shown in the previous section, it allows us to solve the time evolution of operators analytically in terms of coherent states. More general cases of complexity algebra have also been proposed recently <cit.>.

§.§ Dispersion bound and quantum speed limits to the complexity growth rate

The authors of <cit.> introduced a universal bound on the growth of Krylov complexity through a Robertson uncertainty relation involving the Krylov complexity operator and the Liouvillian as the generator of time evolution. Specifically, within the Krylov space, which constitutes an inner product space, when ℒ and 𝒦 are self-adjoint superoperators, the following uncertainty relation holds: Δ KΔℒ≥1/2|⟨[𝒦,ℒ]⟩|. Here, Δ K^2 = ⟨𝒦^2⟩-⟨𝒦⟩^2 denotes the Krylov variance (<ref>), or equivalently the squared dispersion, relative to some operator |𝒪(t)). The dispersion of the Liouvillian reduces to Δℒ = b_1. From Eq. (<ref>), one obtains the dispersion bound <cit.> ∂_t K(t)≤ 2b_1 Δ K . The form of this bound is reminiscent of the celebrated Mandelstam-Tamm time-energy uncertainty relation for the minimum time required for the mean value of an observable to vary by an amount comparable to its variance <cit.>. However, it is formulated in Krylov space and is derived for the Krylov complexity superoperator, with the first Lanczos coefficient providing an upper bound to the speed of evolution in Krylov space. Figure <ref> illustrates the dispersion bound for a simple numerical example of a random Liouvillian and a random initial operator. The dispersion bound (<ref>) is saturated when the following three equivalent conditions are satisfied <cit.>: (i) the superoperators 𝒦̃, ℒ, and ℳ close the algebra (<ref>), (ii) the Lanczos coefficients are given by (<ref>), (iii) the Krylov complexity satisfies the differential equation (<ref>). We note that in isolated systems, every operator evolves under the Heisenberg equation, and the superoperators that act on them obey the generalized uncertainty principle. Therefore, the dispersion bound applies to every superoperator, not only the Krylov complexity operator. Recently, it has been used to bound the rate of change of the Krylov entropy <cit.>.
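The bound is easy to probe with the linear dynamical system of the previous section: evolve |φ(t)), evaluate ΔK from ⟨𝒦²⟩ - ⟨𝒦⟩², and compare a finite-difference estimate of ∂_tK with 2b_1ΔK. In the following minimal sketch (our own illustrative check, with hypothetical names), b_n = n closes the complexity algebra and saturates the bound, while the bounded sequence b_n = 1 stays strictly below it:

```python
import numpy as np
from scipy.linalg import expm

def bound_ratio(b, ts, eps=1e-6):
    """(dK/dt)/(2 b_1 DeltaK); equal to 1 when the dispersion bound is saturated."""
    D = len(b) + 1
    M = np.zeros((D, D))
    for n, bn in enumerate(b, start=1):
        M[n, n - 1], M[n - 1, n] = bn, -bn
    nop = np.arange(D)
    phi0 = np.zeros(D); phi0[0] = 1.0
    out = []
    for t in ts:
        p1 = expm(M*t) @ phi0
        p2 = expm(M*(t + eps)) @ phi0            # forward difference for dK/dt
        K1, K2 = (nop*p1**2).sum(), (nop*p2**2).sum()
        var = (nop**2*p1**2).sum() - K1**2       # Krylov variance (DeltaK)^2
        out.append((K2 - K1)/eps/(2*b[0]*np.sqrt(var)))
    return out

ts = [0.5, 1.0, 1.5]
print(bound_ratio(np.arange(1.0, 101.0), ts))    # b_n = n: ratio ~ 1 (saturation)
print(bound_ratio(np.ones(100), ts))             # b_n = 1: ratio < 1
```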
As Eq. (<ref>) can be satisfied without the linear growth of Lanczos coefficients, the saturation of the dispersion bound is not tied to chaos in general. However, the linear growth of Lanczos coefficients is a sufficient but not necessary condition for the saturation. The saturation is intimately tied to the closure of the algebra, which provides necessary and sufficient conditions for the saturation of the dispersion bound <cit.>. Without assuming the complexity algebra, a short-time asymptotic analysis of the Krylov complexity yields <cit.> K(t) = b_1^2 t^2 + 1/6b_1^2 (b_2^2-2b_1^2) t^4 + 1/180b_1^2(8 b_1^4 - 7 b_2^4 + b_1^2 b_2^2 + 3 b_2^2 b_3^2) t^6 +𝒪(t^8). From it, it can be shown that a generic system saturates the dispersion bound at short times, up to 𝒪(t^4), and generally deviates from it when contributions of 𝒪(t^6) become relevant. This occurs at the characteristic time <cit.> τ_d=|20(b_2^2-2b_1^2)/8b_1^4-7b_2^4+b_1^2b_2^2+3b_2^2b_3^2|^1/2, which corrects the estimate in <cit.>. An RMT Hamiltonian provides an interesting example of a typical system <cit.>. Eigenvalue repulsion is a key property of quantum chaos, and yet RMT Hamiltonians neither saturate the dispersion bound nor lead to an exponential growth of the Krylov complexity <cit.>. This further suggests that the exponential growth of the Krylov complexity is a signature of scrambling rather than of quantum chaos. However, the choice of an initial local operator for testing the operator growth hypothesis is hard to realize within the standard ensembles in RMT. As an alternative, one may build in the required structure of many-body composite systems by considering Hamiltonians and operators with a tensor product structure <cit.>, or described by random banded matrices <cit.>. Beyond the dispersion bound and its generalizations, the ultimate limits to the complexity growth rate can also be analyzed from a complementary point of view in the framework of quantum information geometry <cit.>. Bounds known as quantum speed limits identify the minimum time in which a process can unfold. They provide a refinement of the conventional time-energy uncertainty relations <cit.> by introducing a distance in state space and an upper bound to the speed of evolution. Their use ranges from foundations of physics to quantum technologies, including quantum metrology and quantum computation. As such, their study has motivated a large body of literature <cit.>. Recently, quantum speed limits have been generalized to characterize operator flows <cit.>, by introducing a distance in operator space, and identifying the corresponding maximum operator flow rate; see <cit.> for other extensions. In particular, it has been shown that the saturation of the Krylov complexity growth rate is equivalent to the saturation of the geometric operator quantum speed limit <cit.>. Further developments involve the generalization of the Krylov complexity to open quantum systems, discussed in Sec. <ref>.

§ OPERATOR SIZE CONCENTRATION

So far, our primary focus has been on the Lanczos coefficients, a key output of the Lanczos algorithm. However, the Krylov basis elements |𝒪_n) have not been as extensively examined. Remarkably, the linear growth of the Lanczos coefficients, a characteristic feature in all-to-all systems such as the SYK model, as illustrated in Eq. (<ref>), is rooted in a finer attribute of the Krylov basis itself. This trait, referred to as "operator size concentration", was first identified in <cit.>.
It has since proven to be of significant utility in both closed and open quantum systems, as detailed in <cit.>. Notably, the growth of Lanczos coefficients in the SYK model is a consequence of the operator size concentration. In this section, we provide a combinatorial derivation of the linear growth of the Lanczos coefficients in the large q SYK model, Eq. (<ref>). To this end, we make use of the diagrammatic approach of "open" melon diagrams <cit.>, which is a generalized version of the melon diagrams introduced in <cit.>. The approach is a consequence of a very special property of the Krylov basis, operator size concentration: the n-th Krylov basis element 𝒪_n is formed by a linear combination of Majorana strings of the same size <cit.>. Mathematically, we write 𝒪_n = ∑_i_1< … < i_s c_i_1, …, i_sψ_i_1…ψ_i_s + O(1/q) , where s = n(q-2) + 1. Correspondingly, n counts the steps, such that at the first step, n = 1, giving s = q-1. In other words, the n-th Krylov basis element is concentrated on Majorana strings of the same size. The integer n is the index of the Krylov basis dictating the number of nested commutators, often referred to as the generation index <cit.>. We outline the proof provided in <cit.> and consider the large q SYK model, setting 𝒥 = 1/√(2) for convenience. We start with a normalized operator 𝒪_0 ≡𝒪 = √(2)ψ_1 of size one, denoted by a single line <cit.> 𝒪 = √(2)ψ_1 = [diagram] . Next, we split the closed system Liouvillian into the following two parts <cit.> ℒ_H = ℒ_+ + ℒ_- , where ℒ_+ is the increasing part of ℒ_H, and it increases the size of the operator. The decreasing operator ℒ_- has the reverse effect. Given a size one operator, the action of ℒ_+ can be written diagrammatically as ℒ_+ ψ_1 ∝ [diagram] = [diagram] . This is the output at the first step. Here, the first diagram consists of q lines and denotes a (q-1)-body operator formed by the action of ℒ_+. In the large q limit, we neglect the intermediate grey lines and compactly denote it as a single arc (known as a melon), represented by the second diagram on the right-hand side. Further actions of ℒ_+ lead to the diagrams <cit.> ℒ_+^2 ψ_1 ∝ [diagram] ,     ℒ_+^3 ψ_1 = c_3 [diagram] + c_4 [diagram] , ℒ_+^4 ψ_1 = c_5 [diagram] + c_6 [diagram] + c_7 [diagram] + c_8 [diagram] , and the process continues. Every action of ℒ_+ creates a "child" arc of size (q-1) within its "parent" arc. Hence, the full action ℒ_+^n ψ_1 can be presented as n unmarked arcs (vertices). These diagrams are not disorder-averaged and consist of the leading-order diagrams. However, the observables can be constructed after closing the melon diagrams and taking the disorder average. This, however, neglects any subleading contributions, which have negligible effect on any disorder-averaged observables <cit.>. Next, we focus on the precise evaluation of the prefactors. As an example, we consider the number of ways the diagram with prefactor c_7 can be rearranged c_7 [diagram] = [diagram] + [diagram] + [diagram] . These are the only possible ways to construct these diagrams, since a child diagram can only appear after its parent diagram. The diagrams in (<ref>) are known as unmarked ordered diagrams.
More precisely, ℒ_+^n ψ_1 enumerates the number of possible ways a diagram with n arcs can be constructed, with the respective amplitude (multiplicity), i.e.,

ℒ_+^n ψ_1 = ∑ [ordered diagrams of n arcs] .

Similarly, the action of ℒ_- removes an arc from the parent, known as a childless arc <cit.>. As an example, consider the following diagram:

ℒ_- ℒ_+^3+1 ψ_1 = ℒ_- [diagram] + … = [diagram] + [diagram] + … .

The diagram on the top is unmarked ordered, while the two diagrams on the bottom are called marked ordered diagrams. Thus, the removal is marked and is denoted by the dashed red line. Specifically, they are ordered diagrams with one marked child. For example, consider the second diagram in the above example. The removal of the marked child produces the following unmarked ordered diagram

[marked diagram]   ↦   [unmarked diagram] .

Since an unmarked ordered diagram can be constructed in several ways from marked ordered diagrams, this removal is not unique. In other words, the map is many-to-one: the removal of ∑_k=1^n k = n(n+1)/2 marked ordered diagrams with (n+1) arcs gives rise to a single unmarked ordered diagram with n arcs. However, given the datum of the parent and the childless arc (p,c), the construction of the unmarked ordered diagram is unique. For example, the left diagram of (<ref>) has (p,c) = (1,2). Another example is the following:

[ [diagram] , p=2, c=3 ] ↦ [diagram] .

Of course, it is easy to see that the left-hand side diagram is obtained after removing the red arc from the right-hand side diagram. Thus, we propose the following statement: For any n ≥ 1, the action of the Liouvillian gives <cit.>

ℒ_- ℒ_+^n+1 ψ_1 = 1/2 n (n + 1) ℒ_+^n ψ_1 .

While the left-hand side represents the sum of ordered diagrams with one marked child, the right-hand side represents n(n+1)/2 unmarked ordered diagrams with n arcs. Hence, the removal map is n(n+1)/2-to-one. The identity (<ref>) is central to our discussion. Following the action of the Liouvillian ℒ_H, this directly leads to the following. For n ≥ 2,

ℒ_H ℒ_+^n ψ_1 = (ℒ_+ + ℒ_-) ℒ_+^n ψ_1 = ℒ_+^n+1 ψ_1 + 1/2 n (n-1) ℒ_+^n-1 ψ_1 ,

where the second term in the second line uses (<ref>). However, the specific terms for n = 0, 1 have to be evaluated independently, and are given by

ℒ_H ψ_1 = ℒ_+ ψ_1 ,   ℒ_H ℒ_+ ψ_1 = ℒ_+^2 ψ_1 + 1/q ψ_1 .

See <cit.> for the explicit evaluation in these cases. In other words, the consequence of (<ref>) provides the Krylov basis <cit.>

𝒪_n ∝ ℒ_+^n ψ_1 ,

i.e., the Krylov basis formed by the Liouvillian ℒ_H is effectively generated by the action of ℒ_+ alone. The multiplicity factors are simply obtained by (<ref>), which are the Lanczos coefficients b_1 = √(1/q) and b_n = √(n(n-1)/2), and exactly match those in (<ref>) (we set 𝒥 = 1/√(2)) to the leading order in q. Equation (<ref>) is equivalent to the statement of the operator size concentration (<ref>). Since the Krylov basis (<ref>) produces the operator size s = n(q-2) + 1, Eq. (<ref>) immediately follows. For further details, see <cit.>.

§ KRYLOV COMPLEXITY AT FINITE TEMPERATURE

In the preceding discussions, our attention was centered on the infinite-temperature inner product. Nonetheless, many relevant studies of thermalization, quantum field theory, and black hole physics in the AdS/CFT correspondence incorporate finite temperatures. For example, a black hole is known to be a maximal scrambler, satisfying the Maldacena-Shenker-Stanford (MSS) bound on chaos <cit.>. Hence, it is imperative to include finite temperatures in our analysis.
We shall commence the discussion by delineating the inner product at finite temperatures, followed by the corresponding Lanczos algorithm. Our exploration will reveal the changes to the Krylov complexity bound induced by finite temperatures and ascertain whether these modifications can enhance the universal MSS bound in a stricter sense. As a concrete example, we consider the SYK model through both numerical and analytical methods.

§.§ Finite temperature inner product and Lanczos algorithm

Incorporating the thermal density matrix ρ_β = e^-β H/Z_β at inverse temperature β = 1/T, we define the inner product as <cit.>

(A|B)_β^g := 1/Z_β ∫_0^β dλ g(λ) Tr( e^-(β - λ) H A^† e^-λ H B) ,

for two operators A and B, where g(λ) is an even function on the interval [0, β], and Z_β = Tr(e^-β H) is the thermal partition function corresponding to the Hamiltonian H. Note that the inner product is defined through an integral over a continuous parameter λ ∈ [0, β]. In particular, the function g(λ) must satisfy the following conditions <cit.>

g(λ) ≥ 0 ,     g(β - λ) = g(λ) ,     1/β ∫_0^β dλ g(λ) = 1 .

The chosen inner product (<ref>) dictates the autocorrelation function, which is expressed as

𝒞^g_β(t) = (𝒪|𝒪(t))^g_β = ∫_0^β dλ g(λ) Tr(ρ_β 𝒪^† 𝒪(t+iλ)) .

Given the inner product (<ref>), the Lanczos algorithm is applied to construct the Krylov basis. The process is outlined as follows <cit.>:

|𝒪_-1)_β^g := 0 ,    b^(g)_0,T := 0 ,    |𝒪_0)_β^g := |𝒪) ,
|𝒜_n)_β^g = ℒ|𝒪_n-1)^g_β - b^(g)_n-1,T |𝒪_n-2)^g_β ,
b_n,T^(g) = √((𝒜_n|𝒜_n)^g_β) ,
|𝒪_n)^g_β = (b^(g)_n,T)^-1 |𝒜_n)_β^g .

It is important to note that the only modification to the Lanczos algorithm is the inner product; the definition of the Krylov subspace remains unchanged as the span of {|𝒪), ℒ|𝒪), …, ℒ^n|𝒪)}. The concept of orthogonality is simply redefined to align with the temperature of the system. Consequently, the Lanczos coefficients and the Krylov basis acquire a temperature dependence. The Krylov construction comes with freedom in the possible choices of g(λ), constrained only by the conditions in Eq. (<ref>). The Lanczos coefficients may differ depending on the function g(λ). However, the two most common choices of g(λ) are associated with the “standard” and “Wightman” inner products, denoted by the superscripts “S” and “W”. For the standard inner product, g(λ) = 1/2(δ(λ) + δ(λ-β)) <cit.>

(A|B)^(S)_β := 1/2Z_β Tr(e^-β H A^† B + A^† e^-β H B) ,

which gives the standard thermal correlation function. In quantum field theory, the Wightman inner product is often preferred and corresponds to g(λ) = δ(λ - β/2) <cit.>

(A|B)^(W)_β := 1/Z_β Tr(e^-β H/2 A^† e^-β H/2 B) .

The relation between the Lanczos coefficients defined with the help of (<ref>) and (<ref>) is given by the Toda equations (<ref>) discussed in subsection <ref> above. In the high-temperature limit, both the standard (<ref>) and the Wightman (<ref>) inner products reduce to the infinite-temperature inner product (<ref>), which is uniquely defined. We focus on the Wightman inner product, which we denote by a superscript (W). Note that the regularized finite-temperature OTOC introduced in Sec. <ref> can be naturally expressed in terms of the Wightman inner product as OTOC_β(t) = ([W(t), V(0)]|[W(t), V(0)])^(W)_β. All definitions associated with the Lanczos algorithm and the autocorrelation function generalize naturally to finite temperatures.
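For small systems, the finite-temperature Lanczos recursion above can be run directly in the energy eigenbasis, where the Liouvillian acts elementwise as (ℒA)_mn = (E_m - E_n)A_mn and the Wightman inner product becomes a weighted elementwise sum. The following minimal numpy sketch is our own illustration (function name and tolerance are assumptions; full reorthogonalization is omitted):

```python
import numpy as np

def wightman_lanczos(H, O, beta, nmax, tol=1e-12):
    """Lanczos coefficients b_n^(W) of operator O under Hamiltonian H,
    using (A|B)^(W) = Tr(e^{-beta H/2} A^+ e^{-beta H/2} B) / Z."""
    E, U = np.linalg.eigh(H)
    A0 = U.conj().T @ O @ U                       # operator in the energy eigenbasis
    w = np.exp(-beta * (E[:, None] + E[None, :]) / 2)
    w /= np.exp(-beta * E).sum()                  # divide by the partition function Z
    Lw = E[:, None] - E[None, :]                  # (L A)_mn = (E_m - E_n) A_mn
    ip = lambda A, B: np.sum(w * A.conj() * B)    # Wightman inner product
    basis = [A0 / np.sqrt(ip(A0, A0).real)]
    b = [0.0]
    for n in range(1, nmax + 1):
        A = Lw * basis[-1] - b[-1] * (basis[-2] if n > 1 else 0.0)
        bn = np.sqrt(ip(A, A).real)
        if bn < tol:
            break
        b.append(bn)
        basis.append(A / bn)
    return np.array(b[1:])
```

Setting beta = 0 recovers the infinite-temperature coefficients, so the sketch interpolates between the two regimes discussed in the text.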
Consequently, the universal operator growth hypothesis at finite temperature suggests that for chaotic systems in d > 1, the Lanczos coefficients asymptotically exhibit linear growth <cit.>,

b^(W)_n,T = α^(W)_T n + γ + o(1) ,

where α^(W)_T is specific to the Wightman inner product and γ is an n-independent constant. It is important to note that for inner products different from (<ref>), the hypothesis has to be generalized to accommodate necessarily non-zero Lanczos coefficients a_n. When the temperature is small, such that J/T ≫ 1, where J is a characteristic local coupling of a lattice system, the thermal correlation length is much larger than the lattice spacing, and the system can be regarded as approximately continuous. In this limit, the Lanczos coefficients will exhibit linear growth with slope π T, as in continuum field theory,

b^(W)_n,T ≈ π T n + O(1) ,   n ≳ 1 ,

which will persist until n ≲ J/T <cit.>.

§.§ SYK at finite temperature

In Sec. <ref>, we computed the Lanczos coefficients and associated Krylov cumulants at infinite temperature, where the norm used was unique since it does not differentiate between the Wightman and the standard norms. Now, we turn our attention to the finite temperature scenario. For this purpose, we adopt the Wightman inner product to compute the Lanczos coefficients. The temperature is typically parameterized as follows <cit.>,

T/𝒥 = cos(π v/2)/(π v) ,

where v ∈ (0,1) is a parameter and 𝒥 is the coupling constant defined in (<ref>). The high- and low-temperature limits correspond to

T → ∞ ⇔ v → 0 ,     T → 0 ⇔ v → 1 ,

respectively. The autocorrelation function under the Wightman inner product is expressed as <cit.>

𝒞^(W)(t) = 1 + 2/q log(sech(π v T t)) + O(1/q^2) ,

where the superscript “W” stands for “Wightman”. As in the infinite temperature case, the moment method can be used to determine the Lanczos coefficients, which are given by <cit.>

b^(W)_n,T = π v T √(2/q) + O(1/q)  for n = 1 ,   π v T √(n(n-1)) + O(1/q)  for n > 1 .

These coefficients acquire a temperature dependence, which we encapsulate by introducing the parameter <cit.>

α^(W)_T = π v T ,

which dictates the growth rate of the Lanczos coefficients (<ref>). Thus, one obtains the linear growth (<ref>) in the asymptotic limit of n. In terms of α^(W)_T, the parametrization of temperature Eq. (<ref>) translates to <cit.>

α^(W)_T = 𝒥 cos(α^(W)_T β/2) → 𝒥  for β𝒥 ≪ 1 ,   π/β  for β𝒥 ≫ 1 ,

where β = 1/T is the inverse temperature. In the standard inner product, however, the temperature dependence of α_T is different and shows the opposite behavior to (<ref>) <cit.>. The wavefunctions acquire a form similar to (<ref>), also with a temperature dependence. The growth of Krylov complexity is given by <cit.>

K(t) = 2/q sinh^2(α^(W)_T t) + O(1/q^2) .

Higher moments of the Krylov operator can also be straightforwardly computed <cit.>. The comparison with the finite-temperature Lyapunov exponent reveals that

λ_OTOC(T) = 2π v T = 2α^(W)_T → 2𝒥  for β𝒥 ≪ 1 ,   2π/β  for β𝒥 ≫ 1 .

This reduces to the growth (<ref>) in the infinite-temperature limit and satisfies the Maldacena-Shenker-Stanford bound <cit.> at low temperature. Moreover, (<ref>) results in the saturation of the Krylov bound at all temperatures, including the infinite-temperature bound (<ref>). In addition to the above analytic results in the large q and large N limit, Fig. <ref> illustrates the numerical computation of Lanczos coefficients and the growth rate of Krylov complexity at varying temperatures within the SYK_4 model.
The Lanczos coefficients exhibit linear growth before reaching a plateau at a value determined by the system size N, which remains constant across different temperatures. In contrast, the growth rate, as defined in Eq. (<ref>), is temperature-dependent, decreasing as the temperature lowers. The dashed lines in Fig. <ref> (left) represent the growth rate, i.e., the solution of Eq. (<ref>), constrained by π/β ≤ α ≤ 𝒥. This temperature-dependent behavior also influences the early-time growth rate of Krylov complexity, causing it to decelerate. Nonetheless, the exponential growth preceding the linear-growth regime is still prominent. The eventual saturation of the Lanczos coefficients is mirrored in the late-time behavior of the Krylov complexity growth rate, dK(t)/dt, which converges to a temperature-independent constant, as depicted in Fig. <ref>. Similar results for the TT̄-deformed SYK model have also been reported <cit.>. The characteristic timescale of this growth rate <cit.> is similar to timescales appearing in holographic complexity <cit.>. The numerical computation of Lanczos coefficients, as depicted in Fig. <ref>, does not reveal the saturation point of Krylov complexity due to the finite set of coefficients evaluated. A comprehensive exploration of the entire Krylov space is necessary to observe such saturation. Reference <cit.> provides an illustrative example of a full Krylov space analysis.

§.§ Krylov exponent at finite temperature

In Ref. <cit.>, the finite-temperature Krylov exponent λ_K(T) = 2α^(W)_T was conjectured to be an upper bound on the Lyapunov exponent, which captures the growth of the out-of-time-ordered correlator (OTOC):

λ_OTOC(T) ≤ 2α^(W)_T ≤ 2π T .

This bound on λ_OTOC can be tighter than the universal Maldacena-Shenker-Stanford (MSS) bound (<ref>), as we will see below. The large q SYK model saturates the left bound in (<ref>) at all temperatures, as shown by the explicit calculation in the previous subsection. The right bound is tight only at low temperatures; see Fig. <ref>. At high temperatures, the bound simply reduces to (<ref>). In fact, the right inequality in (<ref>) can be improved. Ref. <cit.> put forward a tighter inequality,

λ_OTOC(T) ≤ λ_K(T) = 2α^(W)_T ≤ 2π T/(1+4β^* T) ,

where β^* bounds the location of the finite-temperature autocorrelation function (<ref>) in an infinite strip; see <cit.> for a proof. In the continuum field theory limit (β^*)^-1 ∼ O(Λ), where Λ → ∞ is the UV cutoff. Hence, Eq. (<ref>) reduces to (<ref>). However, β^* retains a significant, non-trivial value for discrete lattice non-integrable systems. The refined bound (<ref>) reverts to the MSS bound (<ref>) at low temperatures and remains applicable across the entire temperature spectrum, including at infinite temperature; see Fig. <ref>. Consequently, this improved bound offers a more comprehensive constraint than its predecessor, applicable under a broader range of conditions. It is also important to mention that λ_K(T) is not always equal to 2α^(W)_T. This leads to a generalization of (<ref>), proposed in <cit.>,

λ_OTOC(T) ≤ λ_K(T) ≤ 2π T .

There are several non-trivial cases exemplifying (<ref>), including the large q SYK model discussed above, as well as different models of quantum field theory <cit.>. Free massive field theory in 4D exhibits no exponential growth of the OTOC, λ_OTOC = 0, and less than maximal growth of Krylov complexity, 0 < λ_K < 2π T. A free massless field theory placed on a sphere exhibits no exponential growth of Krylov complexity, rendering λ_OTOC = λ_K = 0.
There are also holographic examples <cit.> in which λ_OTOC and λ_K either both vanish or are both equal to 2π T, depending on whether T is above the point of the Hawking-Page transition. These examples provide arguments to support (<ref>) and suggest that both inequalities there are non-trivial.

§ KRYLOV SPACE OF PURE STATES

§.§ Krylov space and spread complexity

Let us consider a Hermitian Hamiltonian H and the corresponding Schrödinger equation, i ∂_t |Ψ(t)⟩ = H|Ψ(t)⟩ (setting ħ = 1), governing the evolution of a pure initial quantum state |Ψ_0⟩ ≡ |Ψ(0)⟩ in a d-dimensional Hilbert space ℋ. The time evolution admits the expansion

|Ψ(t)⟩ = e^-itH |Ψ_0⟩ = ∑_n=0^∞ (-it)^n/n! H^n |Ψ_0⟩ ,

and is thus contained in the Krylov space spanned by the powers of the Hamiltonian acting on the initial state, span{|Ψ_0⟩, H|Ψ_0⟩, H^2|Ψ_0⟩, …}. One can construct an orthonormal basis for a Krylov space of pure states in the same way as for operators, using the Hamiltonian as the generator of time evolution instead of the Liouvillian. The Gram–Schmidt procedure <cit.> applied to the set of vectors {H^n|Ψ_0⟩}_n=0^∞ yields an orthonormal basis set {|K_n⟩}_n=0^D_K-1. Here, D_K is the corresponding dimension of the Krylov space, whose maximum value is set by the dimension of the Hilbert space itself,

D_K ≤ d .

The basis elements |K_n⟩ are known as the Krylov basis for the corresponding Hamiltonian H with the initial state |Ψ(0)⟩. They are orthonormal, ⟨K_m|K_n⟩ = δ_mn, and the first element is the initial state |K_0⟩ = |Ψ(0)⟩. One can find the Krylov basis in this space {|K_n⟩} using the Lanczos algorithm <cit.>. Setting |K_-1⟩ = 𝖺_-1 = 𝖻_0 = 0, one performs the following steps for n ≥ 0:

* Compute the diagonal coefficient 𝖺_n = ⟨K_n|H|K_n⟩, and |A_n+1⟩ = H|K_n⟩ - 𝖺_n|K_n⟩ - 𝖻_n|K_n-1⟩.

* Compute 𝖻_n+1 = √(⟨A_n+1|A_n+1⟩). If 𝖻_n+1 = 0, stop the algorithm. Otherwise define |K_n+1⟩ = 𝖻_n+1^-1 |A_n+1⟩, and repeat step 1.

Along with the Krylov basis set, this generates two sets of Lanczos coefficients {𝖺_n, 𝖻_n}, to be distinguished from {a_n, b_n} in the operator picture. In particular, the algorithm yields the following recurrence relation

H|K_n⟩ = 𝖻_n|K_n-1⟩ + 𝖺_n|K_n⟩ + 𝖻_n+1|K_n+1⟩ .

Thus, the Hamiltonian in the Krylov basis takes the following tridiagonal form

H = [ 𝖺_0 𝖻_1 ; 𝖻_1 𝖺_1 𝖻_2 ; 𝖻_2 𝖺_2 ⋱ ; ⋱ ⋱ 𝖻_D_K-1 ; 𝖻_D_K-1 𝖺_D_K-1 ] .

Unlike the operator case, where the Liouvillian is tridiagonal with zero diagonal elements (for a Hermitian Hamiltonian) in the Krylov basis, the Hamiltonian's diagonal elements 𝖺_n are in general finite. The use of Householder reflections <cit.> to tridiagonalize the Hamiltonian and obtain Lanczos coefficients is a technique often employed in numerical linear algebra, making the Hamiltonian more amenable to analysis and numerical methods. Nevertheless, to employ Householder reflections one has to start from a basis in which the Hamiltonian is not already tridiagonal. This method is often pursued under the name Hessenberg decomposition <cit.>. The procedure casts a matrix into an almost upper-triangular form (known as upper-Hessenberg form), and for the special case of a Hermitian matrix, it reduces to complete tridiagonal form. The method is faster than the Lanczos algorithm and can be implemented using the command 𝙷𝚎𝚜𝚜𝚎𝚗𝚋𝚎𝚛𝚐𝙳𝚎𝚌𝚘𝚖𝚙𝚘𝚜𝚒𝚝𝚒𝚘𝚗[m] in Mathematica, where m is the matrix under consideration <cit.>. While in the Lanczos algorithm we choose the initial state at our disposal, the Hessenberg decomposition picks a special initial state (1, 0, 0, ⋯)^⊺.
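The two Lanczos steps above translate directly into a few lines of numerical code. The following numpy sketch is our own minimal implementation (the function name is illustrative); full reorthogonalization is included, as the bare recursion loses orthogonality in floating-point arithmetic:

```python
import numpy as np

def state_lanczos(H, psi0, nmax, tol=1e-12):
    """Tridiagonalize H with respect to the initial state psi0; returns (a_n, b_n, K)."""
    K = [psi0 / np.linalg.norm(psi0)]
    a, b = [], []
    for n in range(nmax):
        v = H @ K[n]
        a.append(np.vdot(K[n], v).real)        # a_n = <K_n|H|K_n>
        v = v - a[n] * K[n]
        if n > 0:
            v = v - b[n-1] * K[n-1]
        for u in K:                            # full reorthogonalization
            v = v - np.vdot(u, v) * u
        bn = np.linalg.norm(v)                 # b_{n+1}
        if bn < tol:
            break
        b.append(bn)
        K.append(v / bn)
    return np.array(a), np.array(b), np.array(K)
```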
Thus, the Lanczos coefficients computed by these two methods are different. Hence, the Lanczos algorithm is preferred for any physical Hamiltonian, since one chooses the initial state according to the given problem (e.g., as in a quantum quench protocol). On the other hand, in random matrix theory (RMT), choosing any initial state is sufficient to capture its statistical properties. Hence, to study the statistics of RMT <cit.>, the Hessenberg decomposition is usually employed. The Krylov basis provides a framework within the Hilbert space for the evolution of the wavefunction, which can be expressed as

|Ψ(t)⟩ = ∑_n=0^D_K-1 ψ_n(t) |K_n⟩ ,

where ψ_n(t) represents the wavefunction amplitudes within the Krylov chain, and D_K denotes the Krylov dimension. It is important to note that D_K may be finite, even in an infinite-dimensional Hilbert space. The Schrödinger equation, in conjunction with the recursion relation (<ref>), dictates that these amplitudes follow the equation

i ∂_t ψ_n(t) = 𝖻_n ψ_n-1(t) + 𝖺_n ψ_n(t) + 𝖻_n+1 ψ_n+1(t) .

Here, the imaginary unit i is an integral part of the equation, indicating that the amplitudes ψ_n(t) belong to the complex plane ℂ. Furthermore, we define the Krylov operator as

𝒦_S = ∑_n=0^D_K-1 n |K_n⟩⟨K_n| .

The expectation value of this operator, with respect to the state (<ref>), yields

K_S(t) = ⟨Ψ(t)|𝒦_S|Ψ(t)⟩ = ∑_n=0^D_K-1 n |ψ_n(t)|^2 ,

reflecting the mean position on the Krylov chain. This definition is often referred to as the Krylov state complexity or spread complexity, and has found a variety of applications, such as probing quantum scars in the PXP model <cit.>, topological states in quantum matter <cit.> in the Su-Schrieffer-Heeger (SSH) model <cit.>, quench protocols <cit.>, random matrix theory <cit.>, PT-symmetric quantum mechanics <cit.>, localization and thermalization phenomena <cit.>, the evolution of the modular Hamiltonian and modular chaos <cit.>, the LMG model <cit.>, quantum measurements <cit.>, open quantum systems <cit.>, high-energy quantum chromodynamics <cit.>, and the characterization of networks for quantum walks <cit.>. The spread complexity offers a measure that captures the dynamical spreading of states through the Hilbert space. The significance of the Krylov basis in this context arises from its ability to capture the spread of this state effectively. While it is true that any basis could theoretically be employed for this purpose, the Krylov basis is special. Consider a basis ℬ := {|B_n⟩ : n = 0, 1, ⋯}, alongside a cost functional defined as <cit.>

C_ℬ(t) = ∑_n 𝖼_n |⟨Ψ(t)|B_n⟩|^2 ,

where the coefficients 𝖼_n are both positive and monotonically increasing. Given the completeness of the basis ℬ and the unitarity constraint, it follows that ∑_n |⟨Ψ(t)|B_n⟩|^2 = 1. By minimizing this cost functional across all possible bases ℬ, and specifically choosing 𝖼_n = n, we arrive at the minimum value representing the spread complexity <cit.>

K_S(t) := min_ℬ C_ℬ(t) .

This minimization process is a functional minimization, which identifies the Krylov basis as the optimal basis for a finite duration of time. In scenarios of discrete-time evolution, commonly analyzed in unitary circuits <cit.>, the Krylov basis consistently minimizes the cost functional at all times. In conclusion, the Krylov basis provides a natural and computationally efficient basis in which the spreading of the initial wavefunction is minimal. Ref. <cit.> provides a detailed proof of the above statement.
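Given the Lanczos coefficients, the amplitudes ψ_n(t) and hence K_S(t) follow by exponentiating the tridiagonal Hamiltonian above. A small sketch of our own (function name illustrative), which can reuse the output of the previous snippet:

```python
import numpy as np

def spread_complexity(a, b, times):
    """K_S(t) from Lanczos coefficients a (length D) and b (length D-1)."""
    D = len(a)
    T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)   # H in the Krylov basis
    E, V = np.linalg.eigh(T)
    c = V.conj().T[:, 0]                              # overlaps of |K_0> with eigenvectors
    n = np.arange(D)
    return np.array([n @ np.abs(V @ (np.exp(-1j * E * t) * c)) ** 2
                     for t in times])
```

The eigen-decomposition is done once, so sampling K_S(t) on a fine time grid is cheap even for D of a few thousand.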
§.§ Survival amplitudes and thermofield double state

The spread of the wavefunction, akin to operator complexity, is encapsulated by the function[Our notation is aligned with the definition of the autocorrelation function (<ref>), and differs from <cit.> by a complex conjugate. Hence, according to <cit.>, the survival amplitude is S(t)^*.]

S(t) := ⟨Ψ(0)|Ψ(t)⟩ = ψ_0(t) ,

which is the overlap between the initial state and its temporal evolution. This quantity is known as the survival amplitude of the initial state |Ψ(0)⟩, and plays a crucial role in quantum dynamics, e.g., in the context of quantum speed limits <cit.>, Loschmidt echoes <cit.>, and quantum decay <cit.>. Given the amplitude S(t), the moments are computed as

μ_n := lim_t→0 d^n S(t)/dt^n .

Note the absence of the factor i^n compared to (<ref>), a fact which can be traced to the hopping equation (<ref>) for the state. The corresponding Lanczos coefficients can be computed either through the recursive algorithm (<ref>) or using the pictorial diagram in Fig. <ref> (with an additional factor of i in both 𝖺_n and 𝖻_n) <cit.>. For the specific case we are considering, we choose our initial state as the thermofield double (TFD) state <cit.>

|Ψ(0)⟩ := |TFD(β)⟩ = ∑_n e^-β E_n/2/√(Z(β)) |n⟩_1 ⊗ |n⟩_2 ,

where β = 1/T is the inverse temperature and Z(β) = Tr(e^-β H) = ∑_n e^-β E_n is the partition function. Here, “1” and “2” indicate the first and the second copies of the system, which is formed by doubling the Hilbert space. E_n and |n⟩_1,2 are the eigenvalues and eigenstates of the Hamiltonians H_1,2 under consideration, obeying H_1,2|n⟩_1,2 = E_n|n⟩_1,2. Tracing out one copy produces a thermal mixed state, which is indistinguishable from the pure TFD state in the doubled Hilbert space. The TFD state can be understood as the purification of the thermal Gibbs state <cit.>. Note that the state (<ref>) is written in a doubled Hilbert space with two identical copies of the same system. Such a state in conformal field theory (CFT) is thus considered holographically dual to the two-sided eternal black hole in anti-de Sitter (AdS) space, with the two boundaries of the asymptotic spacetime dual to the two copies of the CFT, denoted as “1” and “2” respectively <cit.>. In this context, the preparation of the TFD state <cit.>, its entanglement and complexity structure <cit.>, and applications to wormhole geometry <cit.> and quantum teleportation protocols in quantum circuits <cit.> constitute the foundation of the slogan “entanglement builds spacetime” <cit.>.

§.§ Spread complexity in the thermofield double state

The TFD state (<ref>) is time-invariant under the time evolution of H_tot = H_1 - H_2, which corresponds to the boost symmetry in the bulk spacetime <cit.>. However, it evolves with a Hamiltonian acting on one side. The time-evolved state, under such a condition, is given by

|Ψ(t)⟩ = e^-iHt |TFD(β)⟩ = |TFD(β + 2it)⟩ .

Here the time evolution is generated by the Hamiltonian of one of the copies, compactly denoted as H ≡ H ⊗ 𝕀. Alternatively, the time evolution can also be generated with H̅_tot = (H ⊗ 𝕀 + 𝕀 ⊗ H)/2, which has the same effect as the single-sided Hamiltonian. As a result, the time evolution shifts the inverse temperature as β → β + 2it. The survival amplitude is calculated as <cit.>

S(t) = ⟨TFD(β)|e^-iHt|TFD(β)⟩ = ⟨TFD(β)|TFD(β + 2it)⟩ = Z(β + it)/Z(β) ,

which is the ratio of the analytical continuation of the partition function Z(β + it) and the standard partition function Z(β).
Therefore, the survival probability in the TFD state equals the SFF in Eq. (<ref>) <cit.>. In the TFD state, the moments (<ref>) of the survival amplitude are conveniently expressed as <cit.>

μ_n = 1/Z(β) Tr(e^-β H (iH)^n) .

An important difference between μ_n and the moments m_n computed in Sec. <ref> is that here the moments are given by the Hamiltonian moments, while in the operator complexity picture, the moments m_n are the Liouvillian moments (<ref>). The Lanczos coefficients can be computed straightforwardly from (<ref>). Figure <ref> shows the behavior of 𝖺_n, 𝖻_n in the TFD state when the Hamiltonian is sampled from the Gaussian orthogonal ensemble (GOE) with dimension N = 1024 (averaged over 100 instances) at different temperatures. Here, the Lanczos algorithm in Krylov space is used to compute the coefficients rather than (<ref>). Depending on the temperature, 𝖺_n increases with a different slope and saturates at 𝖺_n ≈ 0. The saturation occurs for n ≪ N, which is much lower than the dimension of the matrix <cit.>. On the other hand, the coefficients 𝖻_n show a similar trend as 𝖺_n for n ≪ N, but terminate at n = N due to the finite dimension of the matrices. The slope of the growth increases with decreasing temperature, a trend also observed for the Krylov complexity in the operator picture in the SYK model. See Fig. <ref> for a comparison. The corresponding behavior of the Krylov complexity is also shown (bottom right). A key finding is the identification of four distinct regimes in the time evolution of the complexity measure for TFD states in chaotic systems: an initial linear increase, a peak, a decrease, and finally a plateau <cit.>. For a fixed dimension N, the peak value and the saturation plateau decrease with temperature. This behavior is reminiscent of the slope-dip-ramp-plateau structure observed in the SFF, as introduced in Sec. <ref>, indicating a deep connection between spectral properties and quantum state (spread) complexity. For uncorrelated energy levels, the peak disappears <cit.>. A scaling relation between the SFF and the Krylov complexity has also recently been investigated <cit.>.

§.§ Spread complexity in RMT

Initializing with an arbitrary state, random matrices (in the GUE, GOE, and GSE classes) can be tridiagonalized using the Lanczos algorithm. For an ensemble of random matrices, there will be an ensemble of Lanczos coefficients. If the ensemble is Gaussian, then the tridiagonal representation is known analytically. Even beyond Gaussianity, the statistics of the Lanczos coefficients can be found numerically <cit.>. The density of states (DOS) is a critical aspect of RMT, describing the distribution of eigenvalues. For an N × N random matrix, Ref. <cit.> provides an approximate relation between the density of states ρ(E) and the statistics of the Lanczos coefficients 𝖺(x) ≡ 𝖺_xN and 𝖻(x) ≡ 𝖻_xN, with x = n/N, in the large N limit. The relation reads <cit.>

ρ(E) ≈ ∫_0^1 dx Θ(4𝖻(x)^2 - (E - 𝖺(x))^2)/(π √(4𝖻(x)^2 - (E - 𝖺(x))^2)) ,

where Θ(z) is the Heaviside theta function, taking values Θ(z) = 1 for z ≥ 0 and Θ(z) = 0 for z < 0. This is an integral equation involving the energy on either side. The density of states ρ(E) has compact support over an interval [-E_min, E_max] in the large N limit. Thus, the above integral equation can be solved using the bisection method. See <cit.> for the explicit algorithm to solve (<ref>).
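The forward direction of this relation is straightforward to check by quadrature before attempting the inverse problem. A short Python sketch with trial profiles of our own choosing (a constant 𝖻 should reproduce the arcsine law 1/(π√(4-E^2)) on (-2, 2)):

```python
import numpy as np
from scipy.integrate import quad

def dos_from_lanczos(E, a, b):
    """Right-hand side of the DOS/Lanczos relation for profiles a(x), b(x)."""
    def integrand(x):
        disc = 4 * b(x)**2 - (E - a(x))**2
        return 1.0 / (np.pi * np.sqrt(disc)) if disc > 0 else 0.0
    val, _ = quad(integrand, 0.0, 1.0, limit=200)
    return val

# Trial profiles (illustrative assumption): a(x) = 0, b(x) = 1.
for E in (0.0, 1.0, 1.9):
    print(dos_from_lanczos(E, lambda x: 0.0, lambda x: 1.0),
          1.0 / (np.pi * np.sqrt(4 - E**2)))
```

Both columns agree, confirming the quadrature before it is embedded in a bisection loop for the inverse problem.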
As an example, consider the GUE with the potential V(H) = H^2. In this case, the density of states follows Wigner's semi-circle law

ρ(E) = 1/2π √(4-E^2) .

The integral equation is solved exactly with the Lanczos coefficients given by <cit.>

𝖺(x) = 0 ,     𝖻(x) = √(1-x) .

Figure <ref> shows the numerical results for the Lanczos coefficients of random matrices of size N = 1024 (20 realizations), which match the analytic expressions exactly. Here we take the variance σ^2 = 1/N. For the numerical implementation, we used the 𝙷𝚎𝚜𝚜𝚎𝚗𝚋𝚎𝚛𝚐𝙳𝚎𝚌𝚘𝚖𝚙𝚘𝚜𝚒𝚝𝚒𝚘𝚗 command in Mathematica, which offers computational advantages over the traditional Lanczos algorithm, making it particularly well-suited for handling large matrices and ensembles. However, unlike the Lanczos algorithm, the Hessenberg decomposition chooses a fixed initial state (1, 0, ⋯, 0)^⊺. Nevertheless, for the statistics of the Lanczos spectrum, the choice of initial state is irrelevant. In addition, in the Hessenberg decomposition, the Lanczos coefficients can be negative. As discussed in <cit.>, their origin lies in phase factors from the initial state, and they can be avoided by taking the modulus of the coefficients. A similar approach can be applied to the Liouvillian <cit.>.

§ KRYLOV SPACE OF DENSITY OPERATORS

The Krylov complexity was originally studied for observables evolving in the Heisenberg picture <cit.>, as discussed in Sec. <ref>. It was soon after extended to the case of pure quantum states evolving in the Schrödinger picture <cit.>, reviewed in Sec. <ref>. However, the most general quantum state need not be pure and can be represented by a classical statistical mixture of pure states, which is modeled by a density matrix. General mixed states occur naturally in the description of open quantum systems, especially after decoherence acts on pure states, making them effectively classical <cit.>. The Krylov complexity for density matrices has been studied recently <cit.>. Here, we present an alternative formulation of the Krylov space for density matrices, focusing on the constraints on the evolution in the Krylov chain imposed by the defining properties of density matrices, which we recall now. A generic quantum state ρ = ∑_j p_j |ψ_j⟩⟨ψ_j| is an operator that: (i) has unit trace, Tr(ρ) = 1, and (ii) is positive semidefinite, ρ ≥ 0, which implies hermiticity ρ^† = ρ. These properties imply that (i) ∑_j p_j = 1, (ii) p_j ≥ 0, p_j ∈ ℝ, so that the eigenvalues of the density matrix can be associated with a probability distribution. The unitary evolution of a general quantum state ρ(t) evolving under the Hamiltonian H is described by the Liouville-von Neumann equation

∂_t ρ(t) = -i[H, ρ(t)] = -iℒρ(t) ,

with the Liouvillian superoperator ℒ∙ = [H, ∙]. The solution of this equation gives the evolution of the density matrix as

ρ(t) = e^-iHt ρ(0) e^iHt = e^-iℒt[ρ(0)] = ∑_n=0^∞ (-it)^n/n! ℒ^n[ρ(0)] .

Using vectorization, the Liouville-von Neumann equation can be expressed as a linear differential equation,

∂_t |ρ(t)) = -iℒ|ρ(t)) ,

in terms of the vectorized Liouvillian ℒ = H ⊗ 𝕀 - 𝕀 ⊗ H^⊺ and the vectorized density matrix |ρ) = 1/√(d) vec(ρ) = 1/√(d) ∑_m,n ρ_mn |n⟩ ⊗ |m⟩^*. Expressing the vectorized Liouvillian in the Hamiltonian eigenbasis H = ∑_n=1^d E_n |n⟩⟨n| gives

ℒ = ∑_n,m=1^d ω_nm |ω_nm)(ω_nm| ,

where ω_nm ≡ E_n - E_m are all the energy differences, which are the eigenvalues of the Liouvillian, and |ω_nm) = |n⟩ ⊗ |m⟩^* are their associated eigenvectors. From an initial density operator |ρ_0), in analogy to the formalism developed for operators in Sec.
<ref>, we can define the Krylov space generated by the repeated application of the Liouvillian as span{|ρ_0), ℒ|ρ_0), ℒ^2|ρ_0), …}. The Krylov space spans the subspace of the total Hilbert space in which the evolution of the initial state |ρ_0) occurs, which is most clearly seen in (<ref>). This process yields a linearly independent set of a certain dimension D_K, the Krylov dimension. The set obtained by repeated application of the Liouvillian is not orthogonal, and we can construct the orthogonal Krylov basis {|ρ_0), |ρ_1), …, |ρ_D_K-1)} by a procedure closely resembling that in the case of operators. To proceed, we choose the Hilbert-Schmidt inner product, introduced previously. Let us detail the Lanczos algorithm we follow:

* Define the starting density operator to be |ρ_0).

* Compute |𝒜_1) = ℒ|ρ_0). If (𝒜_1|𝒜_1) ≠ 0, define b_1 = √((𝒜_1|𝒜_1)) and |ρ_1) = |𝒜_1)/b_1.

* For n > 1 compute |𝒜_n) = ℒ|ρ_n-1) - b_n-1|ρ_n-2). If (𝒜_n|𝒜_n) ≠ 0, define b_n = √((𝒜_n|𝒜_n)) and |ρ_n) = |𝒜_n)/b_n.

The Liouvillian has the following recurrence relation in the Krylov basis

ℒ|ρ_n-1) = b_n|ρ_n) + b_n-1|ρ_n-2) .

The Lanczos algorithm introduced here has a key difference with respect to the formalism in <cit.>. In particular, it does not normalize the initial density matrix by √((ρ_0|ρ_0)) = (Tr(ρ_0^2)/d)^1/2 = √(P(0)/d). In doing so, the algorithm keeps the first element of the Krylov chain a physical density matrix with unit trace. Note that if the initial density matrix were normalized, the trace of an initially mixed state |ρ_0) would become Tr(ρ_0) = d(𝕀|ρ_0) = 1/√(P(0)), which is different from unity if the state is mixed, P(0) < 1.[The Lanczos coefficients obtained by the two methods (here denoting by b̃_n the definition in <cit.>) are not the same but are easily related through the purity by b̃_n = b_n/√(P(0)). The Krylov basis elements |ρ_n) are the same for n ≥ 1, since |𝒜̃_n) = |𝒜_n)/√(P(0)), which cancels out with the factor in b̃_n, and only differ in n = 0 by the factor √(P(0)).] With this definition, the Krylov basis is orthogonal but not necessarily normalized for the first element,

(ρ_0|ρ_0) = P(0)/d ,

although for n, m ≥ 1, the standard orthonormality condition (ρ_n|ρ_m) = δ_nm holds. In the Krylov basis, the Liouvillian is tridiagonal, with all diagonal elements being zero,

ℒ = ∑_n=1^D_K-1 b_n (|ρ_n)(ρ_n-1| + |ρ_n-1)(ρ_n|) .

The density matrix in this basis is

|ρ(t)) = ∑_n=0^D_K-1 (-i)^n ϕ_n(t) |ρ_n) ,

where the density matrix amplitudes ϕ_n(t) are given by ϕ_n(t) = i^n (ρ_n|ρ(t)). Note that the density matrix amplitudes contain information equivalent to the full density matrix, conditioned on a particular initial state. It is thus possible to recast the Liouville-von Neumann equation (<ref>) into a dynamical equation for the amplitudes

∂_t ϕ_n(t) = b_n ϕ_n-1(t) - b_n+1 ϕ_n+1(t) .

This difference equation, sometimes called a discrete Schrödinger equation <cit.>, may be written in the compact form

∂_t |ϕ(t)) = ℳ|ϕ(t)) ,

where it is convenient to introduce the vector of density matrix amplitudes |ϕ(t)) = ∑_n=0^D_K-1 ϕ_n(t)|ρ_n), and the anti-Liouvillian superoperator

ℳ = ∑_n=1^D_K-1 b_n (|ρ_n)(ρ_n-1| - |ρ_n-1)(ρ_n|) .

The density matrix amplitudes are illustrated in Fig. <ref> along with the roles of the Lanczos coefficients as hopping amplitudes and the corresponding signs entering the anti-Liouvillian, positive for right jumps and negative for left jumps. This allows us to think of the anti-Liouvillian ℳ as a current operator on the Krylov chain, carrying information on the sign of the current.
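As a concrete illustration of this algorithm, the sketch below follows the steps listed above, assuming dense matrices, the Hilbert-Schmidt inner product (A|B) = Tr(A†B)/d, and an unnormalized physical ρ_0, as discussed in the text (function name and tolerance are our own choices):

```python
import numpy as np

def density_lanczos(H, rho0, nmax, tol=1e-12):
    """Krylov chain of a density matrix under the Liouvillian L = [H, .]."""
    d = H.shape[0]
    ip = lambda A, B: np.trace(A.conj().T @ B) / d   # Hilbert-Schmidt / d
    L = lambda A: H @ A - A @ H
    basis, b = [rho0.astype(complex)], [0.0]
    for n in range(1, nmax + 1):
        A = L(basis[n-1]) - b[n-1] * (basis[n-2] if n > 1 else 0.0)
        bn = np.sqrt(ip(A, A).real)
        if bn < tol:
            break
        b.append(bn)
        basis.append(A / bn)
    return np.array(b[1:]), basis
```

Note that ρ_0 itself is not normalized here, so (ρ_0|ρ_0) = P(0)/d, consistent with the convention above.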
§.§ Properties of the elements of the density matrix Krylov chain

From Eq. (<ref>), an interesting expression for the trace of the Krylov elements, {ρ_n}_n=0^D_K-1, can be found. Noting that Tr(ℒρ_n) = Tr([H, ρ_n]) = 0 for all n, we find that

Tr(ρ_n) = -b_n-1/b_n Tr(ρ_n-2) .

Hence, if Tr(ρ_0) = 1, as appropriate for a density matrix, even and odd Krylov elements differ in their traces,

Tr(ρ_2n) = (-1)^n b_2n-1/b_2n b_2n-3/b_2n-2 ⋯ b_1/b_2 ,     Tr(ρ_2n+1) = 0 .

This condition means that the Krylov elements for density matrices, |ρ_n) with n ≥ 1, are never density matrices on their own (since they do not satisfy Tr(ρ_n) = 1). These traces can also be understood as the inner product with the identity, Tr(ρ) = d(𝕀|ρ). Therefore, Eqs. (<ref>) specify the components of the identity operator in the Krylov basis, which reads

|𝕀) = 1/√(d) ∑_n=1^D_K-1 (-1)^n ∏_j=1^n b_2j-1/b_2j |ρ_2n) .

The identity operator is proportional to the maximally mixed state ρ_MM = 𝕀/d. Note that this state has support over all the even Krylov basis elements. Interestingly, this expression seems to be closely related to the null state of the Liouvillian <cit.>, and its norm is related to the area under the autocorrelation function C(t) <cit.>. The relation (<ref>) suggests that the Lanczos algorithm, at least in the absence of degeneracies, collapses the zero-eigenvalue subspace of dimension d to the maximally mixed state. The commutator of two Hermitian operators A^† = A, B^† = B is anti-Hermitian, i.e., [A, B] = C where C^† = -C. Therefore, when building the Krylov space by repeated application of the Liouvillian, the even powers will be Hermitian, (ℒ^2nρ_0)^† = ℒ^2nρ_0, and the odd powers will be anti-Hermitian, (ℒ^2n+1ρ_0)^† = -ℒ^2n+1ρ_0. Since the Lanczos algorithm builds each Krylov basis element from a real linear combination of only even or only odd elements, the resulting Krylov basis is composed of Hermitian operators for the even basis elements and anti-Hermitian operators for the odd ones.

§.§ Constraints on the evolution

Any physical density matrix ρ must have unit trace to represent a properly normalized quantum state, i.e., Tr(ρ(t)) = 1. This condition can be expressed for the vectorized density matrix as

Tr(ρ(t)) = d(𝕀|ρ(t)) = 1 ,

where |𝕀) = 1/√(d) ∑_m=1^d |m⟩ ⊗ |m⟩^* is the vectorized identity matrix. Any physical quantum dynamics, mapping physical quantum states to physical quantum states, must preserve the trace of the density matrix, Tr(ρ(t)) = d(𝕀|ρ(t)) = 1, at all times. Substituting (<ref>) yields the constraint for the even amplitudes

Tr(ρ(t)) = ∑_n=0^D_K/2 ϕ_2n(t) b_2n-1/b_2n b_2n-3/b_2n-2 ⋯ b_1/b_2 = 1 .

This constraint involves only even density matrix amplitudes ϕ_2n(t) and ratios of odd and even Lanczos coefficients, which can show drastically different scalings, as discussed in Sec. <ref>. It thus shows that the dynamics of the density matrix amplitudes will differ on even and odd sites, providing further insight into the dynamics of the amplitudes in the Krylov chain. Furthermore, since the dynamics is unitary, the purity of the initial density matrix will be preserved, P(t) = Tr(ρ^2(t)) = d(ρ(t)|ρ(t)) = Tr(ρ^2_0) = d(ρ_0|ρ_0) = P(0), which is unity if the initial state is pure, ρ_0 = |ψ_0⟩⟨ψ_0|. When written in terms of the amplitudes, the purity preservation condition implies

∑_n=0^D_K-1 ϕ_n^2(t) = P(0) .

The density matrix of a physical state needs to be Hermitian and positive semi-definite, i.e., ρ^† = ρ and ρ ≥ 0. The former condition can be easily applied to the density matrix written in the Krylov basis (<ref>).
It is easy to see that all the elements of the sum are Hermitian. This implies that for the full density matrix to be Hermitian, the density matrix amplitudes must be real, ϕ_n^*(t) = ϕ_n(t). In addition, positive semi-definiteness implies that the eigenvalues of the density matrix must be non-negative; they are real since ρ is Hermitian. Determining the constraints imposed by positive semi-definiteness on the density matrix amplitudes constitutes an interesting open problem.

§ OPEN QUANTUM SYSTEMS

§.§ An introduction to the Lindblad master equation

The dynamics in Krylov space discussed so far have only focused on closed quantum systems. Any realistic treatment of a quantum system needs to include the effects of decoherence and noise caused by the surrounding environment; it is thus of key importance to extend the Krylov formalism to open quantum dynamics. The theory of Open Quantum Systems (OQS) offers a powerful description of dissipative quantum dynamics; here we review some of its key results, in particular the Lindblad master equation, needed for the extension of the Krylov formalism to the open case. For a more thorough study of OQS, we refer the reader to <cit.>. The description of OQS considers a bipartite Hilbert space composed of two key constituents: the system S, with Hilbert space ℋ_S and Hamiltonian H_S, which includes the relevant degrees of freedom, and the environment E, with Hilbert space ℋ_E and Hamiltonian H_E, which models the effect of the surroundings. The full Hilbert space is thus composed of system and environment, ℋ = ℋ_S ⊗ ℋ_E, and, importantly, describes a closed system with Hamiltonian

H_S+E = H_S ⊗ 𝕀_E + 𝕀_S ⊗ H_E + H_int ,

where H_int describes the interaction between system and environment. The full system plus environment thus evolves unitarily. The description based on the full S+E Hamiltonian is too complicated, since in relevant situations the environment is composed of extremely many, or even infinitely many, degrees of freedom. Therefore, one of the key tools in the theory of OQS is the master equation, which describes the dissipative evolution of the system alone. There exists a plethora of master equations, each one valid and useful under particular conditions. Here we are interested in the most general Markovian dissipative evolution, which is generated by the Lindblad equation, also known as the Gorini-Kossakowski-Sudarshan-Lindblad equation <cit.>. The original derivation of this equation starts from postulating the most general evolution of the system that sends physical states to physical states. Consider an evolution generated by a dynamical map, ρ(t) = E_t(ρ_0); then the most general completely positive and trace-preserving (CPTP) evolution admits the Kraus decomposition <cit.>

ρ(t) = E_t(ρ_0) = ∑_k E_k ρ_0 E_k^† ,

where the E_k are the Kraus operators, subject to the normalization condition ∑_k E_k^† E_k = 𝕀. A map is positive iff it sends positive operators to positive operators, ρ ≥ 0 → E(ρ) ≥ 0. A map is completely positive iff the map E ⊗ 𝕀_n is positive for all n, where 𝕀_n is the n-dimensional identity map <cit.>. Physically, this requirement may be understood as positivity of the map extended to act on the system together with any ancilla. The condition of complete positivity guarantees that the states generated by the evolution are always Hermitian and positive semidefinite, ρ_t^† = ρ_t, ρ_t ≥ 0, ∀t, and trace preservation implies that the density matrix remains normalized, Tr(ρ_t) = 1, ∀t.
Thus the dynamics generated by a CPTP map sends physical quantum states to physical quantum states. If the dynamical map obeys the semigroup property E_t_1 E_t_2 = E_t_1+t_2, the dynamics of the system is Markovian and its generator admits the Lindblad form

ρ̇(t) = -i[H, ρ(t)] + ∑_k γ_k [L_k ρ(t) L_k^† - 1/2{L_k^† L_k, ρ(t)}] = -iℒ_o ρ(t) = -iℒ_H ρ(t) - iℒ_D ρ(t) ,

where γ_k ≥ 0 are the dissipation rates, L_k are the jump operators describing the dissipative evolution, and H is the generator of the unitary part of the evolution, which in general is not equal to the system Hamiltonian H_S. Here ℒ_o represents the Lindbladian superoperator, where ℒ_H and ℒ_D characterize, respectively, the unitary and dissipative parts of the dynamics. If the rates are negative, γ_k < 0, the evolution is, in general, non-Markovian <cit.>. The dynamical map can be written in terms of the Lindbladian superoperator as E_t = e^-iℒ_o t; note that the inverse of this map, E_t^-1, is not a CPTP map unless ℒ_o describes unitary evolution <cit.>. The Lindblad equation can also be derived from a “microscopic” point of view <cit.>: one starts from the Liouville-von Neumann equation for the full system and environment and traces over the degrees of freedom of the environment; for this, several conditions and approximations need to be imposed. Firstly, a Lindblad equation can usually be derived only in the weak system-bath coupling or in the singular coupling limits. Secondly, the Born, Markov, and rotating-wave approximations need to be imposed. These require, respectively: the system and environment state to be in product form ρ_S(t) ⊗ ρ_E at all times; the characteristic time scale of the system τ_S to be much larger than the characteristic time scale of the environment τ_E; and, given a spectral decomposition of the jump operators L(ω), the terms involving different frequencies ω ≠ ω' to be negligible. The microscopic derivation provides the specific relation between the full Hamiltonian H_S+E and the jump operators {L_k}, dissipation rates {γ_k}, and Hamiltonian H appearing in the master equation (<ref>). Figure <ref> illustrates a schematic diagram of the connection. The Lindblad equation (<ref>) can be interpreted from a quantum measurement point of view <cit.>. For illustration, let us consider a closed system in a state ρ(t) at time t. Assume that the system evolves unitarily for a time interval δt and then undergoes a quantum measurement. The measurement is characterized by a probability P(s) at time t = s and a set of measurement operators {M_k}. The density matrix of the system at time t + δt can be expressed as

ρ(t+δt) = ρ(t) + ρ̇(t) δt + O(δt^2) = ρ(t) - i[H, ρ(t)] δt + O(δt^2) ,

where we have used the unitary evolution equation ρ̇(t) = -i[H, ρ(t)] in the second line. A measurement at this point changes the state to (we use a superscript M to indicate that the system is measured)

ρ^M(t+δt) = [1 - P(t+δt)] ρ(t+δt) + P(t+δt) ∑_k M_k ρ(t+δt) M_k^† ,

where P(t+δt) is the probability of the measurement at time t + δt. The first term on the RHS of (<ref>) represents the probability that the system stays in the same state ρ(t+δt) after the measurement (i.e., the measurement does not affect the system), and the second term represents the effect of the measurement, with the Kraus operators M_k satisfying ∑_k M_k^† M_k = 𝕀.
Expanding the probability as P(t+δt) = P(t) + η(t) δt + O(δt^2), where η(t) = dP(t)/dt is the measurement rate, we can rewrite (<ref>) as

ρ̇^M(t) = -i[H, ρ(t)] + η(t) ∑_k [M_k ρ(t) M_k^† - 1/2{M_k^† M_k, ρ(t)}] .

This equation has the same form as the Lindblad equation for an open quantum system (<ref>) if we identify the jump operators with the measurement operators, L_k ≡ M_k, and take all dissipation rates equal to the measurement rate, γ_k = η(t) ≥ 0. This is no coincidence, as the open-system dynamics can be interpreted as that of a system continuously monitored by the environment <cit.>. The measurement is a non-unitary process that disrupts the unitary dynamics of the system. The higher the measurement rate, the more the system deviates from the unitary evolution. Therefore, stronger dissipation drives the system further away from its unitary evolution. The evolution of the density matrices (<ref>) is given in the Schrödinger picture. An analogous equation can be derived for operators in the Heisenberg picture. Recall that, in the Schrödinger picture, the states (density matrices) evolve and the operators stay constant, while in the Heisenberg picture, the operators evolve while the states are fixed. The expectation value of an operator is the same in both pictures, and thus

Tr(ρ(t) 𝒪) = Tr(ρ(0) 𝒪(t)) ,

where 𝒪 ≡ 𝒪(t=0) is a normalized operator. Note that the LHS is in the Schrödinger picture while the RHS is in the Heisenberg picture. To derive the operator evolution, we differentiate both sides with respect to time and get

Tr(ρ̇(t) 𝒪) = Tr(ρ(0) 𝒪̇(t)) .

Using (<ref>) we can write the expression in the Schrödinger picture as

Tr(ρ̇(t) 𝒪) = Tr[(-i[H, ρ(t)] + ∑_k γ_k [L_k ρ(t) L_k^† - 1/2{L_k^† L_k, ρ(t)}]) 𝒪] .

This expression can be recast into the Heisenberg picture by using the cyclic property of the trace

Tr(ρ(0) 𝒪̇(t)) = Tr[ρ(0) (i[H, 𝒪(t)] + ∑_k γ_k [L_k^† 𝒪(t) L_k - 1/2{L_k^† L_k, 𝒪(t)}])] .

Therefore, the adjoint master equation, which characterizes the dissipative evolution of operators in the Heisenberg picture, can be written as

𝒪̇(t) = i[H, 𝒪(t)] + ∑_k γ_k [± L_k^† 𝒪(t) L_k - 1/2{L_k^† L_k, 𝒪(t)}] = iℒ_o^† 𝒪(t) = iℒ^†_H 𝒪(t) + iℒ^†_D 𝒪(t) ,

where ℒ^†_o is the adjoint Lindbladian superoperator, with the corresponding unitary ℒ_H^† and dissipative ℒ_D^† parts, and the sign ± accounts for fermionic operators as well. The minus sign is used when the jump operators and the initial operator 𝒪 are both fermionic, i.e., when they have odd parity <cit.>. The evolution of any operator 𝒪(t) can formally be written as

𝒪(t) = e^iℒ_o^† t 𝒪 .

The unitary and dissipative contributions to the adjoint Lindbladian can be written explicitly as <cit.>

ℒ_H^† 𝒪 = [H, 𝒪] ,     ℒ_D^† 𝒪 = -i ∑_k γ_k [± L_k^† 𝒪 L_k - 1/2{L_k^† L_k, 𝒪}] .

We also focus on the infinite-temperature Gibbs state ρ_∞ = 𝕀/Tr(𝕀), which is always a steady state of (<ref>) since ℒ_o 𝕀 = 0; note that the identity 𝕀 is not a fermionic operator, and thus the correct sign in the adjoint Lindbladian is +. This property, preservation of the identity, is called unitality and provides the analog of trace preservation in the Heisenberg picture. It allows for a simplification of the problem by endowing a specific and unique inner product (A|B) := Tr(ρ_∞ A^† B), matching that in (<ref>). For the Krylov construction, resorting to vectorization is convenient; recall Sec. <ref>. Making use of it, the normalization of the quantum state reads Tr ρ = (vec 𝕀)^† vec ρ = 1.
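At this level, the two generators can be coded directly as superoperator functions, which also makes trace preservation and unitality easy to verify numerically. A minimal sketch of our own (function names and the two-level example data are arbitrary illustrative choices, not taken from the references):

```python
import numpy as np

def lindblad_rhs(rho, H, jumps, gammas):
    """Schrodinger-picture Lindblad generator: RHS of the master equation."""
    out = -1j * (H @ rho - rho @ H)
    for g, L in zip(gammas, jumps):
        LdL = L.conj().T @ L
        out += g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out

def adjoint_rhs(O, H, jumps, gammas, sign=+1):
    """Heisenberg-picture (adjoint) generator; sign=-1 for odd-parity fermionic O."""
    out = 1j * (H @ O - O @ H)
    for g, L in zip(gammas, jumps):
        LdL = L.conj().T @ L
        out += g * (sign * L.conj().T @ O @ L - 0.5 * (LdL @ O + O @ LdL))
    return out

# Sanity checks on a two-level example with a single decay operator:
H = np.array([[1.0, 0.3], [0.3, -1.0]])
jumps, gammas = [np.array([[0.0, 1.0], [0.0, 0.0]])], [0.5]
rho = np.array([[0.7, 0.1], [0.1, 0.3]], dtype=complex)
print(np.trace(lindblad_rhs(rho, H, jumps, gammas)))           # ~0: trace preservation
print(np.abs(adjoint_rhs(np.eye(2), H, jumps, gammas)).max())  # ~0: unitality
```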
To express the Lindbladian superoperator in vectorized form, making use of the identity (<ref>) in (<ref>) yields

ℒ_o^† ≡ (H ⊗ 𝕀 - 𝕀 ⊗ H^⊺) - i ∑_k [± L_k^† ⊗ L_k^⊺ - 1/2(L_k^† L_k ⊗ 𝕀 + 𝕀 ⊗ L_k^⊺ L_k^*)] ,

where for convenience the jump operators have been rescaled as √(γ_k) L_k → L_k. The advantage of vectorization is that it transforms the superoperator Lindbladian of dimension d (i.e., a map between matrices of dimension d × d) into an operator of dimension d^2 (i.e., a d^2 × d^2 matrix) that acts on vectors vec(𝒪) of length d^2. This is essential to compute the spectrum of the Lindbladian.

§ KRYLOV COMPLEXITY IN OPEN SYSTEMS: DIFFERENT APPROACHES

§.§ Numerical approaches

Several numerical methods extend the Krylov construction to open quantum systems. We present them in the following sequence. An analytic approach has been presented in Sec. <ref>.

§.§.§ Arnoldi iteration

The first study of the Krylov construction in open systems was initiated in Ref. <cit.>, where a generalization of the Lanczos algorithm was proposed. The algorithm is known as the Arnoldi iteration <cit.>, where an orthonormal basis set {𝒱_0, …, 𝒱_n, …} is constructed using the full open-system Lindbladian:

span(𝒱_0, …, 𝒱_n) = span(𝒪, ℒ_o^† 𝒪, …, (ℒ_o^†)^n 𝒪) .

The algorithm proceeds as follows. Initializing with a normalized vector 𝒱_0 ∝ 𝒪, an iterative construction yields

|𝒰_k) = ℒ_o^† |𝒱_k-1) ,   k = 1, 2, … .

Then, for j = 0 to k-1, the algorithm works as follows <cit.>:

1. h_j,k-1 = (𝒱_j|𝒰_k) .

2. |𝒰̃_k) = |𝒰_k) - ∑_j=0^k-1 h_j,k-1 |𝒱_j) .

3. h_k,k-1 = √((𝒰̃_k|𝒰̃_k)) . If h_k,k-1 = 0, stop; otherwise, define 𝒱_k as

|𝒱_k) = |𝒰̃_k)/h_k,k-1 .

If the operators are vectorized, the appropriate inner product (<ref>) should be used. The Arnoldi iteration transforms the Lindbladian into an upper Hessenberg form in the Arnoldi basis (or Krylov basis, keeping in mind that the basis is generated by the full Lindbladian ℒ_o^†) <cit.>,

ℒ_o^† ≡ [ h_0,0 h_0,1 h_0,2 ⋯ ⋯ h_0,n; h_1,0 h_1,1 h_1,2 ⋯ ⋯ h_1,n; 0 h_2,1 h_2,2 h_2,3 ⋯ ⋯; ⋯ 0 h_3,2 ⋯ ⋯ ⋯; 0 ⋯ 0 ⋯ ⋯ h_n-1,n; 0 0 ⋯ 0 h_n,n-1 h_n,n ] ,

with the Arnoldi coefficients h_m,n = (𝒱_m|ℒ_o^†|𝒱_n). Without dissipation, ℒ_o^† reduces to the Hermitian counterpart ℒ_H, and the Arnoldi iteration reduces to the usual Lanczos algorithm. The matrix becomes tridiagonal, with the non-zero primary off-diagonal elements given by the Lanczos coefficients b_n. The construction is closely related to the Hessenberg decomposition, where any matrix 𝒜 can be written as 𝒰𝒜_Hess 𝒰^†, where 𝒰 is a unitary matrix and 𝒜_Hess is of Hessenberg form, with all elements below the first subdiagonal vanishing. The determinant, trace, and eigenvalues of 𝒜 are the same as those of 𝒜_Hess. Finding the Hessenberg form of any matrix is useful because it makes the matrix sparser. However, the Hessenberg decomposition is not unique and does not ensure that the subdiagonal elements are positive. The Arnoldi iteration, on the other hand, finds a Hessenberg matrix with positive subdiagonal elements. Therefore, the Arnoldi iteration gives a very specific Hessenberg decomposition among the many possible ones.

§.§.§ Closed Krylov basis

In <cit.>, a second method was proposed, where the Krylov basis is generated by the closed-system Liouvillian ℒ_H^†, instead of the full Liouvillian ℒ_o^† ≡ ℒ_H^† + ℒ_D^†. This leads to a Krylov basis that confines the operator dynamics to the subspace:

span(𝒪_0, …, 𝒪_n) = span(𝒪, ℒ_H^† 𝒪, …, (ℒ_H^†)^n 𝒪) .
Alternatively, this converts the operator dynamics into a non-Hermitian tight-binding model of particles hopping between sites <cit.>

∂_t φ_n(t) = b_n φ_n-1(t) - b_n+1 φ_n+1(t) + i ∑_m a_m,n φ_m(t) ,

where n ≥ 1, and the a_m,n are additional coefficients. These coefficients resemble Arnoldi coefficients, and the diagonal ones, a_n ≡ a_n,n, are dominant. Ref. <cit.> provides a method to calculate them, but the reason why the dissipative part of the Lindbladian contributes predominantly to the diagonal coefficients is unclear. Ref. <cit.> also conducted numerical simulations in a one-dimensional interacting spinless fermionic model and the finite-fermion Sachdev-Ye-Kitaev (SYK) model. The diagonal elements are consistent with h_n,n from the Arnoldi iteration, increasing linearly before a finite saturation. However, the behavior has no analytical support, although the general growth is observed to be in agreement with <cit.>.

§.§.§ Bi-Lanczos algorithm

The third and final approach is a particularly convenient one. It creates a bi-orthonormal basis set instead of an orthonormal one, starting from initial vectors |p_0⟩⟩ and |q_0⟩⟩, evolved by the adjoint Lindbladian ℒ_o^† and the Lindbladian ℒ_o, respectively. Thus, it yields two separate bases <cit.>

Kry^j(ℒ_o^†, |p_0⟩⟩) = {|p_0⟩⟩, ℒ_o^†|p_0⟩⟩, (ℒ_o^†)^2|p_0⟩⟩, …} ,
Kry^j(ℒ_o, |q_0⟩⟩) = {|q_0⟩⟩, ℒ_o|q_0⟩⟩, ℒ_o^2|q_0⟩⟩, …} ,

and imposes the bi-orthonormality condition

⟨⟨q_m|p_n⟩⟩ = δ_m,n .

The “double braces” notation indicates the bi-Lanczos vectors <cit.>, derived by the vectorization principle, satisfying the inner product (<ref>). Such a bi-orthonormality condition is typically encountered in the context of non-Hermitian Hamiltonians. In such scenarios, the eigenvectors corresponding to the non-Hermitian Hamiltonian do not exhibit orthogonality with respect to one another <cit.>. Unlike in the Arnoldi iteration, the bi-orthonormality condition is imposed: the vector spaces are no longer individually orthonormal. This renders the Lindbladian in a tridiagonal form <cit.>

ℒ_o^† ≡ [ a_0 b_1 0; c_1 a_1 b_2; c_2 ⋱ ⋱; ⋱ a_m-1 b_m; c_m ⋱ ⋱; 0 ⋱ ⋱ ] ,

in contrast with the upper Hessenberg form obtained from the Arnoldi iteration. A similarity transformation also implies that d_n := √(b_n c_n) can be regarded as generalized Lanczos coefficients for open systems <cit.>. This is seemingly equivalent to different versions of the bi-Lanczos algorithm, which provide a non-unique basis <cit.>. These three sets of coefficients {a_j}, {b_j}, and {c_j} are recursively related by the following two sets of three-term recurrence relations <cit.>

ℒ_o^†|p_j⟩⟩ = b_j|p_j-1⟩⟩ + a_j|p_j⟩⟩ + c_j+1|p_j+1⟩⟩ ,
ℒ_o|q_j⟩⟩ = c^*_j|q_j-1⟩⟩ + a^*_j|q_j⟩⟩ + b^*_j+1|q_j+1⟩⟩ ,

where * denotes complex conjugation. The bi-Lanczos algorithm produces both these coefficients and the two sets of bi-orthogonal vectors |p_j⟩⟩ and |q_j⟩⟩, which we describe below <cit.>:

* Initialization. Let |p_-1⟩⟩ = |q_-1⟩⟩ = 0 and a_-1 = b_0 = c_0 = 0. Also, let |p_0⟩⟩ = |q_0⟩⟩ ≡ |𝒪), where 𝒪 is the initial normalized operator.

* Lindbladian action and bi-Lanczos coefficients. For j = 0, 1, …, perform the following iterations:

* Compute: |r_j⟩⟩ = ℒ_o^†|p_j⟩⟩, and |s_j⟩⟩ = ℒ_o|q_j⟩⟩.

* Redefine the vectors: |r_j⟩⟩ := |r_j⟩⟩ - b_j|p_j-1⟩⟩, and |s_j⟩⟩ := |s_j⟩⟩ - c_j^*|q_j-1⟩⟩.

* Evaluate the inner product: a_j = ⟨⟨q_j|r_j⟩⟩.

* Again, redefine the vectors: |r_j⟩⟩ := |r_j⟩⟩ - a_j|p_j⟩⟩, and |s_j⟩⟩ := |s_j⟩⟩ - a_j^*|q_j⟩⟩.

* Evaluate the inner product: ω_j = ⟨⟨r_j|s_j⟩⟩.

* Evaluate the norm: c_j+1 = √(|ω_j|), and b_j+1 = ω^*_j/c_j+1.
* If b_j+1 ≠ 0, then define the vectors:

|p_j+1⟩⟩ = |r_j⟩⟩/c_j+1 ,   and   |q_j+1⟩⟩ = |s_j⟩⟩/b^*_j+1 .

* If required, perform the full orthogonalization (FO) procedure.

* Stop if b_k = 0 for some k.

To summarize, the bi-Lanczos algorithm differs from the Arnoldi iteration in that the Krylov spaces are bi-orthonormal to each other, not orthonormal. This leads to a tridiagonal Lindbladian, unlike the Arnoldi iteration. However, both methods are equivalent in capturing the non-Hermiticity of the Lindbladian, and both reduce to the usual Lanczos algorithm when dissipation is absent and the Lindbladian becomes Hermitian <cit.>.

§.§ Structure of the Lindbladian

Equipped with the generic sets of bi-Lanczos coefficients, we discuss the generic structure of the Lindbladian. The elements of the Lindbladian can be completely general complex numbers. However, the bi-Lanczos algorithm and the generic properties of the Lindbladian (i.e., the eigenvalues of iℒ_o^† are either non-positive real numbers or come in complex-conjugate pairs with non-positive real parts <cit.>) force the Lindbladian to take the following form <cit.>

ℒ_o^† ≡ [ i|a_0| b_1 0; b_1 i|a_1| b_2; b_2 ⋱ ⋱; ⋱ i|a_m-1| b_m; b_m ⋱ ⋱; 0 ⋱ ⋱ ] ,

where b_n = c_n ∈ ℝ_+. In a more general setting, the off-diagonal coefficients can differ by a phase factor <cit.>. The Lindbladian is written in the bi-orthonormal basis with elements of the form ⟨⟨q_i|ℒ_o^†|p_j⟩⟩. These off-diagonal coefficients are the same as the Lanczos coefficients for a closed system. The diagonal coefficients of (<ref>) break the Hermiticity of the Lindbladian, ℒ_o^† ≠ (ℒ_o^†)^† (it is neither Hermitian nor anti-Hermitian), which would otherwise hold in the absence of a_n. Further, the imaginary part of any eigenvalue λ_ℒ of the (adjoint) Lindbladian ℒ_o^† satisfies <cit.>

min_n Im(a_n) ≤ Im(λ_ℒ) ≤ max_n Im(a_n) ,

where the equality trivially holds for a closed system. We briefly explain why the elements have this specific form <cit.>. Consider the eigenvalues of the matrix iℒ_o^†, where ℒ_o^† is given by (<ref>), with real b_n. The eigenvalue equation is

Q_n = det(iℒ_o^† - λ𝕀) = 0 ,

where Q_n is known as a specific version of the continuant of dimension n <cit.>, which is the determinant of a tridiagonal matrix of dimension n. The n-dimensional continuant can be obtained from its lower-dimensional continuants via the recursion <cit.>

Q_n = (-|a_n-1| - λ) Q_n-1 + b_n-1^2 Q_n-2 ,

with the initial conditions Q_0 = 1 and Q_1 = -|a_0| - λ. Setting Q_n = 0 in (<ref>), we get a polynomial in λ,

λ^n + f_1({|a_n|, b_n}) λ^n-1 + ⋯ + f_n({|a_n|, b_n}) = 0 ,

where the f_k({|a_n|, b_n}) are real functions of {|a_n|, b_n}. The complex conjugate roots theorem says that, for a polynomial equation with all real coefficients, if x+iy is a root, then x-iy is also a root. This includes real eigenvalues (y = 0), so the theorem means that the eigenvalues λ are either real or come in complex conjugate pairs. This generically holds for any physical Lindbladian. Any real part in a_n or any imaginary part in b_n would violate this theorem. In other words, if the Lindbladian takes the form of (<ref>), then b_n has to be real, and a_n = i|a_n| has to be purely imaginary. This provides a generic argument for the real and imaginary nature of the bi-Lanczos coefficients. A time-evolved operator (evolved by ℒ_o^†) can be written in the bi-Lanczos basis as

|𝒪(t)) = ∑_n i^n φ_n(t) |p_n⟩⟩ ,

with φ_n(t) denoting the Krylov basis wavefunctions.
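The listed bi-Lanczos steps translate into the following numpy sketch, our own implementation acting on vectorized operators with the Euclidean inner product (full orthogonalization is omitted; the argument Lo_dag is assumed to be the vectorized adjoint Lindbladian, e.g., built from the kron formula above):

```python
import numpy as np

def bi_lanczos(Lo_dag, O0, nmax, tol=1e-12):
    """Bi-Lanczos coefficients (a_j, b_j, c_j) for the adjoint Lindbladian."""
    Lo = Lo_dag.conj().T
    p = [O0 / np.linalg.norm(O0)]
    q = [p[0].copy()]
    a, b, c = [], [0.0], [0.0]
    for j in range(nmax):
        r = Lo_dag @ p[j] - (b[j] * p[j-1] if j > 0 else 0.0)
        s = Lo @ q[j] - (np.conj(c[j]) * q[j-1] if j > 0 else 0.0)
        a.append(np.vdot(q[j], r))            # a_j = <<q_j|r_j>>
        r = r - a[j] * p[j]
        s = s - np.conj(a[j]) * q[j]
        w = np.vdot(r, s)                     # omega_j = <<r_j|s_j>>
        cj = np.sqrt(abs(w))
        if cj < tol:
            break
        bj = np.conj(w) / cj                  # b_{j+1} = omega_j^*/c_{j+1}
        c.append(cj); b.append(bj)
        p.append(r / cj)                      # |p_{j+1}>> = |r_j>>/c_{j+1}
        q.append(s / np.conj(bj))             # |q_{j+1}>> = |s_j>>/b_{j+1}^*
    return np.array(a), np.array(b[1:]), np.array(c[1:])
```

For a Hermitian generator this reduces to the standard Lanczos recursion, with c_j = b_j and a_j real.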
The Heisenberg equation of motion for the operator d |𝒪(t))/d t = i ℒ_o^† |𝒪(t)) translates to <cit.> ∂_t φ_n (t) = b_nφ_n-1 (t) + i a_n φ_n (t) - b_n+1φ_n+1 (t) , for n ≥ 1, with the boundary condition φ_-1(t) =0 and φ_n(0) = δ_n,0. Here, φ_0(t) is the standard autocorrelation function, defined as φ_0(t) ≡𝒞({μ}, t) = 1/2^NTr(𝒪(t) 𝒪) with the time-evolved operator (<ref>) for a system consisting of N two level systems, in a similar way to (<ref>). Let us collectively denote {μ} as the set of the dissipative parameters. The similarity with Eq. (<ref>) is apparent; in particular, Eq. (<ref>) is exactly equal to Eq. (<ref>) if only the diagonal terms are present in Eq. (<ref>). In other words, the bi-Lanczos algorithm transforms the evolution dynamics to a non-Hermitian tight-binding model given by Eq. (<ref>); see Fig. <ref>, where the dissipation is now purely local on site n, rather than approximately local as in (<ref>). Starting from a particular site n, a particle hops to the (n-1)-th and (n+1)-th site with hopping rates given by b_n and b_n+1, respectively. Moreover, the particle can remain at site n with amplitude a_n. As the coefficients a_n are purely imaginary, the tight-binding equation (<ref>) reduces to <cit.> ∂_t φ_n(t) = b_nφ_n-1(t) - |a_n| φ_n (t) - b_n+1φ_n+1 (t) , with n ≥ 1. Since the evolution is non-unitary and the basis |p_n is not orthonormal, the total probability ∑_n |φ_n (t) |^2 is not conserved and drops below unity. However, we can define a modified probability φ̃_n (t) = φ_n (t)/ √(𝒵(t)), where 𝒵(t) = ∑_n |φ_n (t) |^2 is the total probability, acting as a normalization constant. This modified probability is conserved by definition (∑_n |φ̃_n (t) |^2 = 1) and the Krylov complexity in such a setting equals the average position of the particle in the non-Hermitian Krylov chain, i.e., K (t) = ∑_n n |φ̃_n (t)|^2 = 1/𝒵(t)∑_n n |φ_n (t)|^2 . Similarly, higher-order Krylov cumulants can be defined; for example, the normalized variance <cit.>, which is the second cumulant. We define it with respect to the normalized wavefunction Δ K (t)^2 := ∑_n n^2 |φ̃_n (t)|^2 - (∑_n n |φ̃_n (t)|^2)^2 = 1/𝒵(t)∑_n |φ_n (t)|^2 (n - K(t))^2 . For any generic system, we are mostly interested in the first two cumulants - Krylov complexity and the Krylov variance. §.§ Constraints imposed by Trace Preservation As discussed in Sec. <ref>, the Lindbladian is the generator of a CPTP dynamical semigroup. This structure constrains the evolution so that the dynamics remain physical. How these constraints manifest in the Krylov representation of the Lindbladian is a formidable question. Here we detail how trace preservation constrains the dynamics in Krylov space. Trace preservation of Lindbladians implies that ∂_t (ρ) = (ρ̇) = ( L_o(ρ)) = 0. In the Heisenberg picture it manifests as unitality, i.e. (𝕀 L_o(ρ)) = ( L_o^†( 𝕀)ρ)=0 ⇔ L_o^† (𝕀) = 0. This condition can be written in the bi-orthonormal basis as L_o^† | 𝕀 = 0, where the adjoint Lindbladian is now in the tridiagonal form (<ref>). Using the bi-orthonormal resolution of the identity superoperator I = ∑_j |p_j q_j|, the vectorized identity matrix can be expanded in the bi-orthonormal basis as |𝕀 = 1/d∑_j (q_j) |p_j , therefore the coefficients of the identity matrix in the bi-orthonormal basis are simply the traces of the left Krylov basis {|q_j }.
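As an aside, the tight-binding dynamics (<ref>) and the two cumulants just defined are easy to obtain numerically. A minimal sketch of our own follows; the linear coefficient profiles and all parameter values are illustrative assumptions anticipating the growth hypothesis discussed later in this section.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Truncated chain with assumed linear coefficients: |a_n| = chi*mu*n, b_n = alpha*n.
N, alpha, chi, mu = 400, 1.0, 2.0, 0.05
n = np.arange(N)
a_abs = chi * mu * n                 # on-site decay rates |a_n|
b = alpha * n                        # hopping amplitudes b_n (b_0 = 0 automatically)

# Rows implement d phi_n/dt = b_n phi_{n-1} - |a_n| phi_n - b_{n+1} phi_{n+1}.
M = np.diag(-a_abs) + np.diag(b[1:], k=-1) - np.diag(b[1:], k=1)

phi0 = np.zeros(N); phi0[0] = 1.0    # phi_n(0) = delta_{n,0}
ts = np.linspace(0.0, 8.0, 200)
sol = solve_ivp(lambda t, y: M @ y, (ts[0], ts[-1]), phi0,
                t_eval=ts, rtol=1e-8, atol=1e-10)

phi2 = np.abs(sol.y) ** 2            # |phi_n(t)|^2
Z = phi2.sum(axis=0)                 # non-conserved total probability
K = (n[:, None] * phi2).sum(axis=0) / Z          # Krylov complexity
dK2 = ((n[:, None] - K) ** 2 * phi2).sum(axis=0) / Z  # Krylov variance
# For these parameters K(t) grows and then saturates near alpha/(chi*mu) = 10,
# consistent with the saturation discussed later in this section.
```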
Leveraging the recurrence relation (<ref>) and the trace preservation of the Lindbladian, ( L_o (q_j))=0, we find the recurrence relation for the traces c_j^* (q_j-1) + a_j^* (q_j) + b_j+1^* (q_j+1) = 0. From the recurrence relation and the fact that |q_0 = | O), the traces of the first few elements follow as (q_0) = ( O), (q_1) = - a_0^*/b_1^*( O), (q_2) = (a_1^* a_0^*/b_2^* b_1^* - c_1^*/b_2^*)( O), (q_3) = ( c_2^* a_0^*/b_3^* b_1^* + a_2^* c_1^*/b_2^* b_3^* - a_2^* a_1^* a_0^*/b_3^* b_2^* b_1^*) ( O). These expressions quickly become cumbersome. However, if a traceless operator, ( O)=0, is chosen as the starting operator for the bi-Lanczos algorithm, all the traces (q_j) vanish, the expansion (<ref>) of the identity reduces to the zero vector, and the dynamics is unital and thus trace-preserving. In the following section, a traceless operator is always chosen. The restrictions imposed by complete positivity on the structure of the Lanczos coefficients and the associated dynamics in the bi-orthonormal Krylov basis remain an interesting open problem. § EXAMPLES OF OPEN QUANTUM DYNAMICS IN KRYLOV SPACE §.§ Dissipative SYK Model For illustration, we choose the SYK model (<ref>) and its dissipative variants <cit.>. Such systems exhibit a profound parallel with non-Hermitian physics, wherein the SYK model is generalized to a non-Hermitian version <cit.>. A potential gravity dual has also been discussed <cit.>. The Arnoldi iteration and bi-Lanczos construction in spin chains, in their respective integrable and chaotic limits <cit.>, were also studied in detail <cit.>. We consider the following two classes of dissipators. Class 1: Linear dissipator. When each fermion dissipates at an equal rate, the evolution is characterized by a linear Lindblad operator of the form <cit.>: L_i = √(λ) ψ_i ,        i = 1, 2, ⋯, N , where λ≥ 0 is the dissipation strength associated with the interaction between the system and the environment. This model with the Hamiltonian (<ref>) is analytically solvable in the large q limit. Class 2: Non-linear dissipator. The most generic non-linear dissipators involve p-body Lindblad operators with a structure similar to the SYK Hamiltonian, and can be written in the form <cit.> L_a = ∑_1 ≤ i_1 < ⋯ < i_p ≤ N V_i_1 i_2 … i_p^a ψ_i_1ψ_i_2⋯ψ_i_p , with a =1, 2, ⋯, M, where the random interactions V_i_1 i_2 … i_p satisfy the following distribution: ⟨ V_i_1 i_2 … i_p^a ⟩ = 0 ,   ⟨ |V_i_1 i_2 … i_p^a|^2 ⟩ = p!/N^p V^2 ,  ∀ i_1, ⋯, i_p, a , with V ≥ 0. The random average is taken over the ensemble of interaction strengths. For p = 1, and without the random average, this reduces to the linear dissipator. For numerical purposes, we specifically consider the p=2 case. Let M denote the number of jump operators in (<ref>). We take a special double-scaling limit N, M →∞ with R = M/N held constant for analytical purposes. Due to the special structure of the large q SYK model, the operator size concentration (<ref>) plays a key role. It is interesting to see that the strings of Majorana fermions act as eigenstates of the dissipative part of the Lindbladian (<ref>), i.e., ℒ_D^† (ψ_i_1⋯ψ_i_s) = i λ s (ψ_i_1⋯ψ_i_s)   p = 1 , i RV^2 ps/2^p-1 (ψ_i_1⋯ψ_i_s)   p > 1 , where the p > 1 result is strictly valid in the large q and large N limits. Hence, the rate of annihilation by ℒ_D^† is proportional to the size s of the operator, defined in Sec. <ref>, with the rate of dissipation ∝λ for the single-body dissipator and ∝ RV^2 for the p>1-body dissipator. This distinctive characteristic, attributed to the operator size concentration, is a unique feature of the large q SYK model.
It will be manifest in the diagonal coefficients of the Lindbladian matrix, as we will see in the forthcoming analysis. §.§ Analytical approach: moment method Section <ref> outlined the use of the moment method in the SYK model. Next, we present its generalization to the dissipative SYK model. The autocorrelation function is given by <cit.> 𝒞(λ̃,t) = 1 + 1/q g(λ̃,t) + O(1/q^2) , g(λ̃,t) = log[α^2/𝒥^2cosh^2(α t + ℵ)] ,   t > 0 , where α and ℵ read α = √((λ̃/ 2)^2 + 𝒥^2) ,     ℵ = arcsinh (λ̃/ (2 𝒥)) . Here, λ̃ = λ q is the dissipative parameter in the large q limit. With no dissipation, λ = 0, the autocorrelation function reduces to (<ref>). The function g(λ̃,t) satisfies the following Liouville equation <cit.> ∂_t^2 g(λ̃,t) = - 2 𝒥^2 e^g(λ̃,t) , with the boundary condition <cit.> g(λ̃,0) = 0 ,     g'(λ̃,0) = - λ̃ . These include and generalize the known result <cit.> at zero dissipation. However, the autocorrelation function (<ref>) is not an even function as in <cit.>, and hence both even and odd moments exist. Specifically, one finds in a 1/q expansion the moments m_n = 2/qm̃_n + O(1/q^2) ,    n ≥ 1 , where m̃_n is a polynomial in w := i λ̃. For example, the leading moments are given by <cit.> m̃_1 = w/2 , m̃_2 = 1 , m̃_3 = w , m̃_4 = w^2 + 2 , m̃_5 = w^3 + 8 w , m̃_6 = w^4 + 22 w^2 + 16 , m̃_7 = w^5 + 52 w^3 + 136 w , m̃_8 = w^6+114 w^4+720 w^2+272 . For n>1, the moments are associated with the triangle “T(n,k)”, generated according to the following recurrence <cit.> T(n,k) = (k+1)T(n-1,k) + (2n-4k)T(n-1,k-1) , with ⌊n-1/2⌋≥ k ≥ 0 for each n ≥ 1. Alternatively, the moment m̃_n is given by the number of Motzkin paths of length n where k of them are upsteps, m̃_n = ∑ _k=0^⌊n/2-1⌋ T(n-1,k) w^n -2 k-2 , for n ≥ 2. Further, they can be written in terms of a continued fraction of the form (<ref>) <cit.>. Applying the recursive algorithm (<ref>), the moments provide two sets of Lanczos coefficients <cit.> a_n = i λ̃ n + O(1/q) ,   λ̃ := λ q , b_n = 𝒥√(2/q) n = 1 , 𝒥√(n(n-1)) + O(1/q) n > 1 . Note that the coefficients b_n are exactly equal to their closed-system counterparts and do not depend on the dissipation, while the coefficients a_n are purely imaginary and depend linearly on the dissipation. Further, both coefficients grow linearly in n. We will come back to this point in detail. §.§.§ Arnoldi iteration To appreciate the analytical findings in the previous section, let us implement the Arnoldi iteration in the dissipative SYK model with the Hamiltonian (<ref>) and the linear dissipator (<ref>). We vectorize the Lindbladian according to (<ref>) and choose the vectorized initial operator 𝒪 = √(2)ψ_1. For the numerical study, we choose q=4 and N=18 fermions. Figure <ref> shows the behavior of the diagonal and primary off-diagonal elements of the Lindbladian in the Arnoldi (Krylov) basis. The (imaginary) diagonal elements |h_n,n| depend on the dissipation strength and grow linearly before saturating at n ≲ N/q, which depends on the system size. The linear fit gives |h_n,n| = λ (2n + 1 )= 2 n λ + O(1) , which is shown by the dashed line in Fig. <ref> (top). By contrast, the primary off-diagonal elements h_n,n-1 and h_n-1,n are almost independent of the dissipation and closely overlap with their closed-system counterparts. However, they are not equal, i.e., h_n-1,n≠ h_n,n-1, primarily due to the presence of other off-diagonal elements h_m,n. Their small relative differences are shown in the inset of Fig. <ref> (bottom).
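For reference, the Arnoldi iteration underlying such numerics is the textbook one; a minimal NumPy sketch of our own follows, where L stands for the vectorized Lindbladian and the breakdown tolerance is an arbitrary choice.

```python
import numpy as np

def arnoldi(L, v0, m):
    """Arnoldi iteration: builds an orthonormal Krylov basis V (columns)
    and the upper-Hessenberg matrix H representing L in that basis."""
    D = L.shape[0]
    V = np.zeros((D, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = L @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = np.vdot(V[:, i], w)
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                 # Krylov space exhausted
            return V[:, : j + 1], H[: j + 1, : j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

Applied to the dissipative SYK setup described above, the diagonal of H should reproduce the linear fit |h_n,n| ≈ λ(2n+1) up to the finite-size saturation.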
Although the other off-diagonal elements h_m,n are much smaller in magnitude than the diagonal and primary off-diagonal elements, which dominate the Lindbladian matrix (<ref>), their presence makes it difficult to compute the Krylov complexity in general. §.§ Numerical approaches §.§.§ Bi-Lanczos algorithm In this section, we apply the bi-Lanczos algorithm to the Hamiltonian (<ref>), with the linear dissipator (<ref>). We keep all the parameters the same as in the Arnoldi iteration. The bi-Lanczos algorithm generates two sets of coefficients, which are shown in Fig. <ref>. The diagonal coefficients increase linearly and are proportional to the dissipative parameter. The linear fit gives |a_n| = λ (2n + 1) = 2 n λ + O(1) . This matches the behavior of the diagonal Arnoldi coefficients |h_n,n|, up to possible O(1) terms, which are insignificant in the asymptotic limit of n. All the upper and lower off-diagonal elements are equal to each other and to their closed-system counterparts; see Fig. <ref> (bottom). This contrasts with the Arnoldi iteration, where the upper and lower off-diagonal elements differ. The upshot is that the Lindbladian in the bi-Lanczos basis is expressed in the purely tridiagonal form (<ref>), as discussed in Sec. <ref>. Finally, we comment on the numerical stability of both approaches. Although the Arnoldi iteration appears to be more stable than the bi-Lanczos algorithm, both show numerical instabilities in the Lanczos coefficients for small system sizes (i.e., small N). In practice, N ≥ 18 yields reasonably stable Lanczos coefficients. Further, since the couplings of the SYK model are chosen randomly, a disorder average over multiple realizations is required. We performed 100 Hamiltonian realizations for both the Arnoldi iteration and the bi-Lanczos method. Comparing these two methods, we find perfect agreement in the slope and the saturation. Multiple realizations are particularly important for smaller system sizes due to the numerical instability of the Lanczos coefficients. Partial re-orthogonalization methods <cit.> can also be employed. §.§ Krylov complexity in chaotic open quantum systems Motivated by the extensive analytical and numerical studies in the SYK model, we propose that both sets of Lanczos coefficients show asymptotic linear growth  <cit.> a_n ∼ i χμ n ,      b_n = c_n ∼α n . Here, μ denotes the generic dissipative parameter, with μ∝λ for the linear dissipator (class 1) and μ∝ R V^2 for the generic p-body dissipator (class 2). The proportionality directly follows from (<ref>), as a consequence of the operator size concentration in the large q SYK model. The parameter χ is independent of n and the dissipative parameter. This provides a more generic operator growth hypothesis, at least for chaotic systems, which includes <cit.> as the special case of unitary systems. §.§.§ Continuum limit: large n result To understand the behavior of the Krylov complexity, we first take a heuristic approach. We take the continuum limit by promoting the index n to a continuous variable and writing φ_n (t) ≡φ(n,t) as an Ansatz. The equation (<ref>) with (<ref>) can be written as <cit.> ∂_t φ(n,t) + n (χμφ(n,t) + 2 α∂_n φ(n,t)) = 0 , where we have promoted φ_n-1(t) = φ (n-1,t) and φ_n+1(t) = φ (n+1,t), and further used φ (n+1,t) - φ (n-1,t) = φ (n+1,t) - φ (n,t) + φ (n,t) - φ (n-1,t) = 2 ∂_n φ(n,t).
We further assume b_n+1 = b_n ≡ b(n), which is true in the asymptotic limit.[Otherwise, we have an extra ∂_n b(n) term which can be added to a(n) since both terms are proportional to φ(n,t). For the linear growth b(n) ∝ n, this extra term will add a constant to a(n), which can be ignored in the asymptotic limit.] We look for a stationary solution (where ∂_t φ = 0) at t →∞, which is given by φ_* (n, t →∞) ∝ e^-n/ξ ,     ξ := 2 α/χμ . Using the above wavefunction, the late-time (stationary) Krylov complexity (after normalization using (<ref>)) saturates to K(t) = ξ/2 + O(1/ξ) = α/χμ + O(μ) ,     t →∞ , which is valid in the weak dissipation regime ξ≫ 1 and is inversely proportional to the dissipation strength. However, the early-time growth is exponential, K(t) ∼ e^2 α t, with the time-scale for saturation t_* estimated as e^2 α t_* = α/χμ    ⇒    t_* = 1/2 αln(α/χμ) . This dissipative time scale varies logarithmically with the inverse of the dissipation strength. §.§.§ Exact results Equipped with the Lanczos coefficients (<ref>), the non-Hermitian tight-binding model (<ref>) is solved to obtain the basis wavefunctions. To do this, we assume a specific form of the coefficients <cit.> b_n^2 = γ^2(1-u^2) n (n-1+η) ,   a_n = i u γ (2n + η) , where u ∈ (0,1) and η is an O(1) number. Equation (<ref>) is recovered as a particular choice α^2 = γ^2(1-u^2) ,   χμ = 2 γ u , in the asymptotic limit of n. Using (<ref>), the non-Hermitian tight-binding model (<ref>) can be exactly solved, and the solution is given by <cit.> φ_n(t) = sech^η(γ t)/(1 + u tanh(γ t))^η × (1 - u^2)^n/2√((η)_n/n!)( tanh (γ t)/1 +u tanh(γ t))^n . Although the way u enters both b_n and a_n is subtle, the form of the wavefunction is controlled by the underlying SL(2,ℝ) symmetry <cit.>. It can easily be checked that the probability is not conserved in general: 𝒵(t) = ∑_n |φ_n (t)|^2 = (u (u cosh (2 γ t)+sinh (2 γ t))-u^2+1)^-η . It is interesting to perform a short- and long-time asymptotic analysis of the wavefunction (<ref>). At late times t →∞, φ_n(t →∞) ≃(√(1-u^2)/1+u)^n n^(η-1)/2 , where we have only kept the n-dependent factors and neglected n-independent prefactors. This is justified since we focus on the asymptotic limit of n. We also used the asymptotic expansion Γ(n+η)/Γ(n) ∼ n^η for n →∞. Since log((1+u)/√(1-u^2)) = u + O(u^3), we can readily see <cit.> φ_n(t →∞) ∼ e^-n/ξ(u) n^(η-1)/2 ,   ξ(u)^-1 = u + O(u^2) . This correctly reproduces the stationary state ansatz solution in (<ref>), with η = 1 and ξ(u) = 1/u ∝γ/μ. The finite time limit with γ t ≫ 1 is more involved. One can repeat the analysis and find <cit.> φ_n(t) ∼ e^-n/ξ(u,t) n^(η-1)/2 , ξ(u,t)^-1 = u + 2 e^-2 γ t + O(e^-4 γ t, e^-2 γ t u, u^2) , which agrees with the previous result (<ref>) as well as the zero dissipation result in (<ref>). The delocalization length ξ(u,t) captures both the spreading of the operator and the dissipation. Equating its first and second terms provides the saturation timescale t_d∼log(1/u), which grows logarithmically. The exact wavefunction (<ref>) allows us to compute the Krylov complexity (<ref>) exactly. It is given by <cit.> K(t) = η(1-u^2) tanh ^2(γ t)/1+2 u tanh (γ t)-(1-2 u^2) tanh ^2(γ t) , and is shown in Fig. <ref> for different dissipation strengths. In the weak coupling limit, it reduces to K(t) = η[ sinh^2(γ t)-2 u sinh^3(γ t) cosh (γ t) + O(u^2)] . Without dissipation (i.e., μ = 0 or u = 0), when γ = α, the exponential growth K(t) ∼η e^2 α t is recovered.
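The exact solution is straightforward to evaluate numerically. The following sketch (our own; parameter values are illustrative) cross-checks the Krylov complexity obtained by summing the exact wavefunction on a truncated chain against the closed form above.

```python
import numpy as np
from scipy.special import gammaln

def K_exact(t, gamma, u, eta, n_max=4000):
    """Krylov complexity from the exact wavefunction phi_n(t), compared
    with the closed-form expression; valid for t > 0."""
    th = np.tanh(gamma * t)
    n = np.arange(n_max)
    # log |phi_n|^2, with (eta)_n / n! = exp(gammaln(eta+n) - gammaln(eta) - gammaln(n+1))
    log_w = (gammaln(eta + n) - gammaln(eta) - gammaln(n + 1.0)
             + n * np.log((1 - u**2) * th**2 / (1 + u * th)**2)
             + eta * np.log((1 - th**2) / (1 + u * th)**2))
    w = np.exp(log_w)                     # |phi_n(t)|^2; w.sum() reproduces Z(t)
    K_num = (n * w).sum() / w.sum()
    K_cf = eta * (1 - u**2) * th**2 / (1 + 2*u*th - (1 - 2*u**2) * th**2)
    return K_num, K_cf

print(K_exact(t=3.0, gamma=1.0, u=0.05, eta=1.0))  # the two values agree
```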
The higher-order terms in u in (<ref>) encode the weak dissipation, and the expansion becomes invalid when the first and second terms are comparable. In other words, it gives a time-scale t_d at which the dissipative regime begins. This happens at <cit.> t_d = 1/2 γsinh^-1(1/u) ∼1/2 αlog(4 α/χμ) + O(μ) , where in the second equality we have used sinh^-1(1/u) = log(2/u) + O(u) and kept terms up to O(u). This allows us to set γ = α and 2/u = 4 α/(χμ) at O(u) using (<ref>). Thus, we recover the dissipative scale t_d≃ t_* from the general argument and also from the generic delocalization length ξ(u,t). On the other hand, the late-time value of the Krylov complexity at fixed dissipation u > 0 is given by <cit.> K(t→∞) = η/2 u - η/2 , which is a constant independent of the initial growth parameter γ (or α) and depends only on the dissipation u. Hence, the generic arguments are consistent with the exact calculation. We also obtain two quantities, namely the dissipative timescale t_d and the saturation value of the Krylov complexity K_sat, which show universal aspects of this behavior <cit.> t_d∼1/γlog(1/u) ,       K_sat∼ 1/u . The dissipative timescale resembles the logarithmic timescale of scrambling, with the inverse dissipative strength acting as an effective number of degrees of freedom. The saturation value is independent of the system size and thus generically holds in the thermodynamic limit. The saturation plateau appears to be generic in other notions of operator growth, namely operator size and OTOC <cit.>. We propose that these quantities are robust for generic all-to-all quantum chaotic systems. It is interesting to compute the normalized variance (<ref>), which is given by <cit.> Δ K (t)^2 = η(1-u^2) tanh^2(γ t) (u tanh (γ t)+1)^2/(1+ 2 u tanh (γ t)- (1-2 u^2) tanh^2(γ t))^2 . While it behaves as (η/4) sinh^2(2 γ t) ∼η e^4γ t in the growth regime, it saturates at η / (4u^2) at late times. In either case, we find Δ K (t) ∼ K(t) , i.e., the standard deviation of the Krylov complexity is comparable to its average, indicating a broad distribution. This is due to the “operator size concentration”, discussed earlier. Incidentally, this property was also found to be true in all-to-all random unitary circuits <cit.>. It is further interesting to note the identity ∂_t log𝒵 (t) = - 2 u γ (2 K(t) + η) , resembling the equality of the Loschmidt fidelity and operator size <cit.>. For a closed system, 𝒵(t) = 1 and u =0, and the above equation trivially holds. To recover the large-q SYK result, using (<ref>)-(<ref>), (<ref>) and (<ref>), we identify the following <cit.> η = 2/q ,   𝒥^2 = γ^2 (1-u^2) ,   2 γ u = λ̃ , in the O(1/q) expansion. This implies that, at leading order in 1/q, the Krylov complexity and its variance in the SYK model vary as ∝ 1/q with the expected growth; see (<ref>) and (<ref>) with γ = α = 𝒥, as already obtained before. It is interesting to seek a physical interpretation of the plateau structure and the dissipative timescale in Fig. <ref>, which appears to hold for generic all-to-all systems. This is intuitive from the operator growth perspective. Since the dissipation strength is linear in the operator size, the dissipation acts more strongly as the operator grows. The scrambling rate balances the dissipative strength at a timescale logarithmic in the dissipative strength, leading to the observed plateau. In an infinite system, scrambling persists indefinitely, preventing any halt in operator growth.
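As a quick numerical illustration of these universal scalings (our own; γ = η = 1 are illustrative values):

```python
import numpy as np

gamma, eta = 1.0, 1.0
for u in [0.1, 0.01, 0.001]:
    K_sat = eta / (2 * u) - eta / 2            # late-time plateau of K(t)
    t_d = np.arcsinh(1.0 / u) / (2 * gamma)    # onset of the dissipative regime
    print(f"u={u:g}: K_sat={K_sat:.1f}, t_d={t_d:.2f}, "
          f"log(2/u)/(2 gamma)={np.log(2.0/u)/(2*gamma):.2f}")
```

The printout confirms K_sat ∝ 1/u and that t_d tracks log(1/u)/(2γ) ever more closely as u decreases.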
The Markovian approximation (weak dissipation limit) suggests that dissipation alone is insufficient to reduce operator size in such cases. However, in systems of finite size, the size of operators can diminish when dissipation is weak, particularly at later times. We also provide an intuitive understanding of the observed plateau from the perspective of quantum measurement. A notable parallel is drawn between Eq. (<ref>) and the Lindblad equation, Eq. (<ref>), where the jump operators undertake a measurement-like role. Essentially, the environment conducts a continuous measurement with an indeterminate outcome. Since a quantum measurement is a non-unitary operation, it steers the system away from its unitary trajectory. Given that the measurement rate is analogous to the dissipation strength, an increase in dissipation thwarts the exponential growth typically seen in unitary evolution. The observed plateau thus results solely from the measurement process of the unknown environment. Similar findings for the Lyapunov exponent in the dissipative SYK model have been reported <cit.>. §.§ Pole structure of autocorrelation and spectral density In Section <ref>, we observed that the pole structure of the autocorrelation function is directly correlated with the operator growth dynamics. In particular, assuming smooth behavior of the Lanczos coefficients, the pole nearest to the origin plays a crucial role in dictating the growth of the Lanczos coefficients and, consequently, the Krylov complexity. Given this relationship, it becomes pertinent to investigate the alterations in the pole structure under the influence of dissipation. This is especially relevant when considering the diagonal coefficients of the Lindbladian, denoted by a_n. We consider the following hypothetical autocorrelation function <cit.> 𝒞(μ,t) = √(α^2+μ^2)/α sech( t √(α^2 + μ^2) + sinh^-1(μ/α) ) , where α is the parameter governing the growth of b_n, while μ represents the dissipation factor. It is evident that in the absence of dissipation, Eq. (<ref>) simplifies to 𝒞(0,t)= sech(α t), aligning with the closed system scenario in (<ref>). The pole nearest to the origin of the autocorrelation function lies close to the imaginary t axis (Fig. <ref>). Given that the autocorrelation function is normalized to unity at t=0, we can employ the recursive algorithm (<ref>) to obtain a_n = iμ (2n+1) ∼ i μχ n ,   n≥ 0 ,   b_n = α n ,   n≥ 1 . The above coefficients can also be obtained by applying the Toda chain method in Sec. <ref> upon the replacement t → - i τ, and setting τ_0 = 0. These expressions represent the asymptotic limits of the Lanczos coefficients, as hypothesized in (<ref>) in the weak dissipation limit. However, it is important to note that the autocorrelation function (<ref>) is not unique; any correction f(μ, t) satisfying f(0,t) = f(μ,0) = 0 can be added to it. The poles of the proposed autocorrelation function (<ref>) are described by <cit.> t_± = ±i π/2√(α^2 +μ^2) - 1/√(α^2 +μ^2)sinh^-1(μ/α) . Remarkably, these findings are valid for any value of μ, not just small ones, as the derivation of (<ref>) did not rely on a small μ approximation. Nonetheless, for the sake of continuity with the discussions on Markovian dissipation in Sec. <ref>, we consider μ to be small, leading to the poles <cit.> t_± = ±i π/2 α - μ/α^2 + O(μ^2) . The pole structure, as depicted in Fig.
<ref>, reveals a discernible shift of the closest poles along the negative real axis when weak dissipation is present, while their distance from the real axis (assuming μ≥ 0) mirrors that of the closed system. Thus, the distance to the real axis dictates the growth of b_n, and the lateral shift along the real axis indicates the growth of a_n. In cases of more generic dissipation, not necessarily small, the poles seem to affect the growth of b_n through a diagonal shift (a combined shift along both the real and imaginary axes). However, the autocorrelation function considered here is a simplified model meant to illustrate the general hypothesis in (<ref>). Calculating an exact autocorrelation function under generic dissipation would require delving into non-Markovian dynamics <cit.>, which falls beyond the scope of the present discussion. The pole structure of the autocorrelation function influences the decay profile of the spectral function. In the presence of dissipation, the generalized expression for the spectral function becomes Φ(μ,ω) = ∫_-∞^∞ dt e^- i ω t 𝒞(μ,t) , where 𝒞(μ,t) is the autocorrelation function. Utilizing the specific autocorrelation function (<ref>) under consideration, the spectral function is found to be <cit.> Φ(μ, ω) = π/αsech(πω/2 √(α^2+μ^2)) e^i ω/√(α^2+μ^2)sinh^-1(μ/α) , which holds for generic μ, beyond the small dissipation approximation. In the absence of dissipation (μ = 0), this equation reduces to the closed system result. However, when considering weak dissipation and expanding to the first order in μ, one obtains Φ(μ, ω)|_μ→ 0 = Φ_0(ω) + i μ Φ_1(ω) + O(μ^2) , where Φ_0(ω) represents the closed-system spectral function for μ = 0, and Φ_1(ω) corresponds to the first-order correction in μ. These terms are explicitly expressed as <cit.> Φ_0(ω) = π/α sech(πω/2 α) ∼π/αe^-π|ω|/(2α) , Φ_1(ω) = πω/α^3 sech(πω/2 α) ∼πω/α^3 e^-π|ω|/(2α) , with the latter expressions indicating the high-frequency behavior. Interestingly, while the leading term exhibits an exponential decay, the subleading term decays as an exponential multiplied by a linear prefactor in ω. Despite this, the overall decay remains exponential even within the weakly dissipative regime, as illustrated in Fig. <ref>. In all cases, the smooth behavior of the Lanczos coefficients is assumed. A parallel approach to studying the spectral function in the large q SYK model has been presented in <cit.>. § KRYLOV COMPLEXITY IN QUANTUM FIELD THEORIES In this section, we briefly discuss the Krylov space method in quantum field theory, focusing on the particular case of conformal field theories (CFT) <cit.> and simple free and holographic models <cit.>. Conformal field theories are special due to their scale-invariant properties and are conjecturally dual to gravitational theories via the AdS/CFT correspondence. The integrable and chaotic properties of such CFTs are under active investigation <cit.>. These questions can be tackled using Krylov subspace methods. Because of the infinite number of degrees of freedom, generating an orthonormal basis directly may not be illuminating. However, we can resort to alternative approaches, namely the moment method and the Toda chain technique, as discussed in Sec. <ref>. This is possible because the autocorrelation function can be computed exactly in many theories thanks to conformal symmetry. A UV or an IR cutoff may be required in certain cases, or the theory may need to be compactified on a thermal circle <cit.>.
The starting point is the finite-temperature Wightmann two-point autocorrelation function, defined with the help of the “Wightmann” symmetric inner product ⟨𝒪 (t) 𝒪⟩_β^(W) = Tr(e^-β H/2 𝒪 (t) e^-β H/2 𝒪)/Tr e^-β H = Tr(e^-(β/2 - it) H 𝒪 e^-(β/2 + it) H 𝒪)/Tr e^-β H , where we have used the time-evolved operator 𝒪(t) = e^i H t 𝒪 e^-i H t and β is the inverse temperature. Denoting the density matrix by ρ = e^-β H/Tr(e^-β H), the above (<ref>) can be recast into ⟨𝒪(t) 𝒪⟩_β^(W) = Tr(ρ e^i H(t - iβ/2)𝒪 e^-i H(t - iβ/2) 𝒪) = Tr(ρ 𝒪 (t - iβ/2) 𝒪) = ⟨𝒪 (t - iβ/2) 𝒪⟩_β^th , where the thermal two-point function at inverse temperature β is defined as ⟨𝒪 (t) 𝒪⟩_β^th = Tr(ρ 𝒪 (t) 𝒪) . In the Euclidean time τ = i t, the Wightmann two-point function is given by <cit.> 𝒞(τ) := ⟨𝒪 (τ) 𝒪⟩_β^(W) := ⟨𝒪 (-i(τ + β/2)) 𝒪⟩_β^th . This is a universal relation between the Wightmann and thermal two-point functions. Given the thermal function, it can always be converted into the Wightmann function using the above relation. Unless explicitly mentioned, we will always consider the Wightmann inner product as our definition of the autocorrelation function. Let us consider the example of 2d CFT in ℝ^2. The autocorrelation function is given by the Wightmann inner product of the form <cit.> 𝒞(τ) = sec^2Δ(πτ/β) , where Δ is the operator scaling dimension and β is the inverse temperature. This function has poles on the real axis of the Euclidean time at τ=±β/2. This is a universal behavior in any field theory, which comes from the singularity when two local operators collide, C(τ)∝ |τ∓β/2|^-2Δ, when τ→±β/2. The order of the pole singularity is set by the conformal dimension Δ of O. The two-point function C(τ) is related to the more standard C(t) discussed, e.g., in the context of the large q SYK model above, by the Wick rotation. The Toda equations for C given by (<ref>) can be solved explicitly; this is one of the simple cases solved through the ansatz (<ref>) <cit.>, 𝒯_n (τ) = G(2+n) G(1+n + 2Δ)/G(2 Δ) Γ(2Δ)^n+1 × (π/β)^n(n+1) sec^(n+2Δ)(n+1)(πτ/β) , where G(n) is the Barnes Gamma function defined by G(n) := ∏_k=2^n-2 k! and satisfying the property G(n+1) = G(n) Γ(n). Using the Toda chain technique with τ_0 = 0, for which Eq. (<ref>) equals unity, the Lanczos coefficients read <cit.> a_n = 0 ,    n≥ 0 , b_n = π/β√(n(n-1+2 Δ)) ,     n ≥ 1 . It is natural to check the above results using the moment method in the special case of Δ=1. For this, we compute the power spectrum. Putting τ = i t, we find 𝒞(t) = sech^2(α t), with α = π/β, and thus <cit.> Φ (ω) = ∫_-∞^∞ d t e^- iω t sech^2(α t) = β^2 ω/(πsinh (βω/2)) . Since the autocorrelation is even in t, the odd moments vanish. The even moments are given by m_2n = 1/2π∫_-∞^∞ d ω ω^2n Φ (ω) = 2/(π^2 β^2n)(4^n+1-1) ζ (2 n+2) Γ (2 n+2) , where ζ (z) is the Riemann-zeta function. Applying the moment method, we recover the same Lanczos coefficients in (<ref>) with Δ = 1. The linear growth persists indefinitely, with the slope dictated by α = π/β. In this case, the Krylov complexity growth rate is λ_K := 2α = 2π/β = λ_MSS, saturating the Maldacena-Shenker-Stanford (MSS) bound <cit.>. The unexpected linear growth of the Lanczos coefficients in free CFT puts the validity of the universal operator growth hypothesis into question. However, the linear growth here arises from the infinite UV cutoff of quantum field theory. One way to overcome this is to put the field theory on a lattice, so that the lattice spacing acts as a regulator of the theory.
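As an independent cross-check of the Δ = 1 moments and Lanczos coefficients above, the moment method can be run symbolically. The sketch below is our own; it uses the standard Hankel-determinant representation b_n^2 = D_n D_n-2 / D_n-1^2 for a_n = 0, with D_n = det(m_i+j)_0 ≤ i,j ≤ n, and sets β = 1.

```python
import sympy as sp

def m(k):
    """Moments of the Delta = 1 spectral function, beta = 1; odd moments vanish."""
    if k % 2:
        return sp.Integer(0)
    n = k // 2
    return 2 * (4**(n + 1) - 1) * sp.zeta(2*n + 2) * sp.factorial(2*n + 1) / sp.pi**2

def b_squared(n_max):
    """Lanczos coefficients b_n^2 from Hankel determinants of the moments."""
    H = lambda n: sp.Matrix(n + 1, n + 1, lambda i, j: m(i + j))
    dets = [H(n).det() for n in range(n_max + 1)]
    bs = []
    for n in range(1, n_max):
        prev = dets[n - 2] if n >= 2 else sp.Integer(1)
        bs.append(sp.simplify(dets[n] * prev / dets[n - 1]**2))
    return bs

print(b_squared(5))   # [2*pi**2, 6*pi**2, 12*pi**2, 20*pi**2]
```

The output matches b_n^2 = π^2 n(n+1), i.e., b_n = (π/β)√(n(n+1)), as expected for Δ = 1.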
Alternatively to a lattice, we can introduce a UV regulator directly in frequency space, which can be either hard or smooth. Let us consider the hard UV regulator Λ, such that the moments are given by <cit.> m_2n = 1/2π∫_-Λ^Λ d ω ω^2n Φ (ω) . This regulates the integration range from (-∞, ∞) to [-Λ, Λ]. Evaluating the expression (<ref>) in closed form is tedious. However, in the asymptotic limit of large n, the integral is dominated by frequencies near |ω| = Λ, and the ratio of consecutive moments is controlled by <cit.> lim_n →∞m_2n+2/m_2n∼ω^2n+2/ω^2n|_|ω| = Λ = Λ^2 . This behavior fundamentally affects the growth of the Lanczos coefficients. In other words, the Lanczos coefficients cease to grow indefinitely and approach a constant. The saturation value b_sat is controlled by the ratio of the moments <cit.> lim_n →∞m_2n+2/m_2n = 4 b_sat^2 . Combining (<ref>) and (<ref>), we obtain the saturation value <cit.> b_sat∼Λ/2 , which is linearly proportional to the UV regulator. For a finite Λ, the indefinite growth of the Lanczos coefficients ceases and reaches a plateau. Hence, we need to look at the Lanczos coefficients beyond the UV cutoff to determine the chaotic nature of quantum field theories. For a soft regulator, see <cit.>. On the other hand, the IR scale has a different effect. This includes the massive theory, where the bare mass of the field theory behaves as an IR cutoff, or the case where the theory is placed on a compact manifold. In such cases, the Lanczos coefficients split into two smooth branches, even and odd. In the case of a massive theory, the two branches grow linearly with the same slope α =π T but different intercepts; this behavior is called “persistent staggering” <cit.>. The case of compact manifolds is more complex. In this case, the even and odd branches of b_n grow linearly but with different slopes <cit.>, b_n= {[ α_e n+γ_e+o(1) n is even,; α_o n+γ_o+o(1) n is odd, ]. both different from π T, demonstrating behavior that goes beyond the universality of the original operator growth hypothesis of <cit.>. Using the integral-over-Dyck-paths formalism, a particular combination of the coefficients α_e,α_o can be related to the singularity of C(τ), located at τ=±β/2, while a combination of γ_e,γ_o can be related to Δ <cit.>. § KRYLOV COMPLEXITY IN HOLOGRAPHY One of the interesting settings to study Krylov complexity is provided by holography. Holographic theories in the semiclassical gravity regime, with the bulk geometry being a black hole in AdS, exhibit exponential growth of OTOCs. This makes holography a natural playground to study the relation between the exponential growth of Krylov complexity and OTOC. Furthermore, by providing a non-perturbative definition of quantum gravity in the bulk via boundary QFT, holographic theories serve as fertile ground to study the relation between gravity and quantum complexity. This has led to a number of influential conjectures geometrizing quantum complexity in the bulk <cit.>. This makes holography a natural starting point to investigate the relation between Krylov complexity and its more established QFT analogs. Formulating Krylov complexity in holography is analogous to the field theory case. The starting point is the thermal Wightmann-ordered two-point function (<ref>), i.e., with the operators placed at opposite points on the thermal circle, and for convenience analytically continued to Euclidean time, thus removing the question of time ordering.
Computing the Lanczos coefficients and Krylov complexity is then completely analogous to the previous section, as expected, since holographic duality is, in the end, a description of field theory. There are only a handful of holographic examples in which the Krylov complexity has been evaluated so far, some of them numerical and relying on the semiclassical bulk approximation, as in <cit.>. There are also calculations in thermal AdS_3 and in the BTZ black hole, dual to 2d CFTs at low and high temperature <cit.>. These calculations are in line with other QFT results, and we already referred to them while discussing field theory or Krylov complexity at finite temperature in the preceding sections. The main takeaway is that holography helps solidify support for the extension of the Maldacena-Shenker-Stanford bound on OTOC growth (<ref>). Another point worth mentioning is that holographic theories, at least in the case of thermal AdS_3, exhibit the two-slope behavior (<ref>), which goes beyond the universality originally outlined in <cit.>. This case is notable also because the Krylov complexity is trapped at IR values: an initial growth of K(t) stops at an early time independent of the UV cutoff, after which K(t) oscillates. This means the asymptotic or time-averaged value of Krylov complexity is independent of the value of the UV cutoff, in sharp distinction to the behavior of computational or holographic complexities, which are explicitly UV-cutoff-dependent <cit.>. Discussion of Krylov complexity in holography would not be complete without mentioning possible bulk manifestations of K(t). It is established lore in holography that, in the limit of classical gravity, all physically meaningful quantities of the boundary field theory should have a clear geometric interpretation in the bulk. One remarkable example is the Ryu-Takayanagi (RT) prescription, which calculates entanglement in field theory <cit.>. Asking the same question for Krylov complexity is natural, especially because the thermal two-point function, from which K(t) can be mathematically derived, admits a full geometric description in the so-called geodesic approximation. Yet this question does not seem to have a simple answer in standard d≥ 2 holography, although we note that an interesting geometric proposal was recently formulated for Jackiw–Teitelboim (JT) gravity <cit.>. § KRYLOV COMPLEXITY AND INTEGRABILITY Recent research on Krylov complexity has revealed its potential as a discerning tool for differentiating between integrable and non-integrable systems, an idea that goes back to Parker et al. <cit.>. This distinction has been demonstrated through concrete examples within XXZ spin chains <cit.>, Ising spin chains <cit.>, the Bose-Hubbard model <cit.>, and Floquet systems <cit.>, encompassing both their integrable and non-integrable variants. The term “non-integrable” is used here to denote systems where integrability is disrupted by adding a specific term that breaks it either strongly <cit.> or weakly <cit.>. Figure <ref> (top row) illustrates the initial and entire Lanczos spectrum for the transverse-field Ising model (TFIM) in both the integrable and non-integrable limits, with σ_1^z as the initial operator. Due to the instability of the coefficients, the full orthogonalization (FO) method <cit.> is employed. To suppress noise, a moving average of order 6 has been applied in both limits.
The Lanczos coefficients exhibit sublinear growth in the integrable case and linear growth in the non-integrable scenario. The Lanczos spectrum terminates at n = D_K, constrained by the finite size of the system. Intriguingly, the Krylov dimension D_K remains the same across both scenarios. In particular, the Krylov dimension bound (<ref>) is saturated in both cases, depending on the chosen initial operator. Further, the Lanczos spectrum in the integrable limit exhibits greater disorder than its chaotic analog. To quantify the disorder within the Lanczos coefficients, one can define the logarithmic variance of the Lanczos sequence as follows <cit.>, Δ b_n = Var(log(b_n/b_n+1)) . This measure effectively captures the degree of disorder among the Lanczos coefficients, and it was analyzed in quantum billiards <cit.>, the Bose-Hubbard system <cit.>, the SYK model <cit.>, and random matrices <cit.>. In scenarios where integrability is strongly broken, the integrable systems exhibit a higher degree of disorder in the Lanczos spectrum than their non-integrable counterparts. This disorder maps onto an auxiliary off-diagonal Anderson hopping model, leading to a phenomenon described as “Krylov localization” <cit.>. The level of disorder present in the Lanczos sequence influences the late-time saturation value of Krylov complexity. This saturation value is reduced in the integrable phase, yet it increases as the system transitions towards the chaotic phase, as depicted in the bottom row of Fig. <ref>. This observation was initially made in the XXZ chain with an integrability-breaking term <cit.> and was subsequently identified in other models such as the Ising spin chain <cit.> and the Bose-Hubbard model <cit.>. Nonetheless, the saturation value remains below the threshold of D_K/2, a benchmark typically met in genuinely chaotic systems such as the SYK_4 model <cit.>. However, this pattern is not universally applicable, as the XXZ chain and the Ising chain do not always conform to this behavior <cit.>. Correspondingly, the saturation value of Krylov complexity is much larger in the non-integrable limit, yet lower than D_K/2 (see Fig. <ref>). § APPLICATIONS TO QUANTUM CONTROL Adiabatic driving offers a powerful scheme for quantum state preparation in quantum science and technology. Adiabatic strategies provide the rationale for adiabatic quantum computation and quantum annealing <cit.>. Their implementation is hindered by the presence of noise, uncontrolled sources of errors, and the coupling to the surrounding environment. It is thus desirable to find alternative nonadiabatic driving schemes without the requirement for slow driving. This is the scope of shortcuts to adiabaticity <cit.>. Among the techniques used for their engineering, counterdiabatic driving (CD) <cit.>, also known as transitionless quantum driving <cit.>, stands out as a universal strategy. Its original formulation focuses on driven quantum systems evolving unitarily. Consider an uncontrolled reference Hamiltonian with a point-like spectrum and spectral decomposition H_0(λ)=∑_n=1^dE_n(λ)|n(λ)⟩⟨ n(λ)|, which for simplicity is modulated by a single time-dependent parameter λ(t). In the limit of slow driving, the time-evolution of an initial eigenstate |n(λ_0)⟩ follows the adiabatic trajectory |ψ_n(t)⟩ =exp[iθ_n(t)]|n(λ(t))⟩ , where the phase factor is the sum of the dynamical phase and the geometric phase θ_n(t)=-∫_0^tds E_n(λ(s)) +i∫_λ_0^λ_tdλ⟨ n |∂_λ n⟩ .
The adiabatic trajectory |ψ_n(t)⟩ with respect to H_0 is the exact solution of the time-dependent Schrödinger equation i∂_t|ψ(t)⟩=H|ψ(t)⟩ when the dynamics is generated by a different Hamiltonian H. The latter can be written as the sum H=H_0+H_ CD of the uncontrolled Hamiltonian and the CD term H_ CD=λ̇A(λ), where A(λ)= i∑_n=1^d[|∂_λ n⟩⟨ n|-⟨ n|∂_λ n⟩ |n⟩⟨ n|] , is also known as the adiabatic gauge potential. CD in many-body systems generally involves nonlocal multiple-body interactions <cit.>. This has motivated the development of schemes to approximate the CD auxiliary Hamiltonians by variational methods <cit.> or otherwise <cit.>. While CD schemes for many-body systems are hard to implement in analog quantum devices, they are amenable to digital quantum schemes <cit.>. Harnessing the advantage of CD to steer the dynamics with the flexible implementation of digital schemes is the basis of digitized counterdiabatic quantum algorithms <cit.>. Even in this context, truncated CD controls are desirable and are derived as the leading orders of various series expansions. The integral representation of the CD term <cit.> A(λ) = -1/2lim_η→ 0∫_-∞^∞ ds sgn (s)e^-η|s| × e^iH_0(λ)s∂_λ H_0(λ)e^-iH_0(λ)s , motivates the nested commutator expansion <cit.> A(λ)=i∑_k α_k(λ) ℒ_λ^2k-1∂_λ H(λ) , where the coefficients α_k are often determined by a variational approach. As an alternative, the Krylov expansion of the CD term has been presented in <cit.>. Choosing 𝒪 =∂_λ H(λ) and using the Krylov expansion of 𝒪 (s)= ∑_n=0^D_K-1 i^nφ_n(s) 𝒪_n in Eq. (<ref>) yields A(λ)=i b_0 ∑_k=1^d_Aα_k(λ) 𝒪_2k-1 . Here, b_0^2=(∂_λ H,∂_λ H) while d_A=D_K/2,(D_K-1)/2 for even and odd Krylov dimension D_K, respectively. The expansion coefficients are then fixed in terms of the Lanczos coefficients, circumventing the need for their approximate determination through a variational approach. Specifically, for even D_K, they are set by the iterative relations α_1 = -1/b_1, α_k+1= -b_2k/b_2k+1α_k , while they can be found as the solution of a linear matrix equation for odd D_K <cit.>. Knowledge of the Krylov expansion makes it possible to relate the features of the CD term and the gauge potential A(λ) to the properties of the system through the operator growth hypothesis and the analysis of the Lanczos coefficients. The norm of the CD term is used to quantify the cost of CD protocols <cit.>. It is further related to the fidelity susceptibility and the quantum geometric tensor <cit.>, and using the Krylov expansion, one finds <cit.> (A,A)=b_0^2∑_k=1^d_Aα_k^2 . The expansion of the CD term in Krylov space is likely to prove useful in other applications. In the conventional approach, the CD term enforces parallel transport in the instantaneous eigenbasis of the uncontrolled Hamiltonian. By contrast, generalizations of the CD to open quantum systems involve parallel transport of the generalized eigenstates of the Liouvillian <cit.> or the instantaneous eigenstates of the reduced density matrix of the system, also known as natural orbitals <cit.>. The rationale behind CD can also be applied to parameter estimation in quantum metrology <cit.>. In this context, optimal strategies involve parallel transport along the operator given by the parametric derivative of the generator of evolution <cit.>. The Krylov expansion of the CD term can also be utilized in numerical methods involving parallel transport along a family of quantum states, e.g., of matrix product states in tensor network algorithms <cit.>.
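As an illustration, the even-D_K recursion for the expansion coefficients is trivial to implement. The following sketch is our own; the linearly growing Lanczos sequence and the value of b_0 are hypothetical placeholder inputs.

```python
import numpy as np

def cd_alphas(b, d_A):
    """Coefficients of the adiabatic gauge potential in the Krylov basis
    for even Krylov dimension: alpha_1 = -1/b_1, alpha_{k+1} = -(b_{2k}/b_{2k+1}) alpha_k.
    b[n] holds the Lanczos coefficient b_n (b[0] is unused)."""
    alpha = np.zeros(d_A + 1)
    alpha[1] = -1.0 / b[1]
    for k in range(1, d_A):
        alpha[k + 1] = -(b[2 * k] / b[2 * k + 1]) * alpha[k]
    return alpha[1:]

# Hypothetical linearly growing Lanczos coefficients b_n = n:
b = np.arange(0, 21, dtype=float)
alphas = cd_alphas(b, d_A=10)
b0 = 1.0                                  # placeholder for (d_lambda H, d_lambda H)^{1/2}
A_norm_sq = b0**2 * np.sum(alphas**2)     # (A, A) = b_0^2 sum_k alpha_k^2
```

In this way, the cost measure (A, A) follows directly from the Lanczos data, with no variational minimization involved.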
§ APPLICATIONS TO QUANTUM COMPUTING While Krylov subspace methods have a long tradition in conventional computing, their implementation for many-body systems in quantum computers is not straightforward. They are computationally costly as the dimension of the Krylov basis scales exponentially with the system size. In addition, quantum computers based on the circuit model are naturally suited to implement unitaries rather than powers of the generator of evolution <cit.>. Recent progress has advanced the application of Krylov subspace methods to quantum computers by circumventing these challenges. Variants of Krylov subspace methods have been put forward, replacing powers of the Hamiltonian with unitaries. Such an approach is suited for approximate real-time evolution as well as imaginary-time evolution. The latter provides a natural scheme for the preparation of ground states and thermal states <cit.>. This has given rise to a family of hybrid quantum-classical algorithms <cit.>. An alternative approach relies on replacing the need for Hamiltonian powers with combined unitary evolutions <cit.>. Additional advances have focused on the exact construction of the Krylov basis in a quantum computer without relying on the simulation of real or imaginary time evolution <cit.>. This approach has the advantage of reducing the exponential classical cost, being achievable in polynomial time and memory. These recent efforts focus on the Hamiltonian as the generator of time evolution. The use in quantum computers of Krylov subspace methods for open quantum systems governed by Lindbladians and other generators is an enticing prospect that may be facilitated by progress in quantum simulation of open quantum systems <cit.>. § OPEN PROBLEMS In what follows, we mention some open problems regarding the formalism of the Krylov subspace method for quantum dynamics, leaving aside applications that are expected to be many and broadly spread out. At the time of writing, Krylov subspace methods in quantum dynamics remain restricted to time-independent generators. Recent efforts have focused on extensions to time-dependent systems that can be described by a Floquet operator, using a Krylov expansion involving its powers <cit.> and a unitary quantum circuit with Trotterized evolution <cit.>. Beyond such cases, the dynamics under a time-dependent generator can be approximated by a step-wise sequence with constant generators. In addition, an arbitrary unitary evolution can be described using the Magnus operator, making the case for a Krylov expansion using its powers <cit.>. Beyond unitary dynamics, the progress reviewed in Sec. <ref> has focused on Markovian quantum systems with no memory, described by the Lindblad master equation <cit.>. The extension to general non-Markovian evolutions constitutes an interesting prospect <cit.>. It is known that any time-continuous evolution described by a density matrix can be associated with a master equation of a generalized Lindblad form, with time-dependent rates and Lindblad operators <cit.>, making the extension of the Krylov basis construction to such a setting desirable. Beyond the time-continuous case, stochastic evolutions are essential in the treatment of fluctuating Hamiltonians <cit.>, the quantum-jump approach associated with the stochastic unraveling of master equations <cit.>, and the theory of continuous quantum measurements <cit.>.
In any such generalization, it would be interesting to investigate whether relations resembling those in the time-independent case hold between the set of Lanczos coefficients, the correlation function for operators, and the survival probability for quantum states. Regarding the notion of quantum state, we have presented the use of Krylov subspace methods for time-dependent operators in Sec. <ref>, pure states in Sec. <ref>, and mixed density matrices in Sec. <ref>. Quantum evolution can be discussed in many other representations. Among the phase-space quasiprobability distributions, the most celebrated is that introduced by Wigner <cit.>, to which Krylov subspace methods have recently been applied <cit.>. Other phase space distributions such as the P and Q distributions are frequent in quantum foundations <cit.>, many-body physics <cit.>, and quantum optics <cit.>. Another open problem focuses on understanding the role of symmetry in relation to the dynamics in Krylov space, including the behavior of the Lanczos coefficients and the growth of Krylov complexity. For time-independent Hamiltonians, the complexity algebra leads to the identification of different classes of evolutions, involving not only the Liouvillian but also the anti-Liouvillian and the Krylov complexity operator, as discussed in Sec. <ref>. By contrast, the traditional symmetry classifications in quantum physics focus primarily on the generator of evolution. Dyson's three-fold way classification led to the introduction of the Gaussian and circular ensembles distinguished by the Dyson index β_D <cit.>. Altland and Zirnbauer enriched this classification, including time-reversal, particle-hole, and chiral symmetries <cit.>. This classification has been generalized to non-Hermitian matrices describing Hamiltonians as well as Lindbladians <cit.>. It remains to be seen whether such symmetry classes imprint a clear signature on the dynamics in Krylov space. Further, the integrable structure of the Krylov dynamics in isolated systems has been established in terms of the Toda flow <cit.>. One may wonder whether instances of integrable dynamics in Krylov space can be identified under more general evolutions, e.g., in non-Hermitian and open systems. And if such instances exist, do they have any connections with other notions of integrability? In particular, notions of integrability in open systems have been introduced by mapping the vectorized generator of evolution to non-Hermitian integrable models that are Bethe-ansatz solvable and satisfy the Yang-Baxter equation. The development of such extensions and the analysis of their usefulness in applications remain to be explored and offer a tantalizing prospect for further studies. § CONCLUSION In this review, we have provided a comprehensive account of the use of Krylov subspace methods to characterize the evolution of quantum systems. While the underlying tools are well established in linear algebra, where they are primarily used for the efficient tridiagonalization of matrices in eigenvalue problems, their significance has burgeoned within the realm of physics. Thanks to recent progress via the operator growth hypothesis, these methods provide a framework for the fundamental characterization of many-body quantum systems, their time evolution, the mechanisms underpinning thermalization and quantum chaos, and their complexity. Our discourse methodically elucidates the construction of the Krylov space, employing the Lanczos algorithm for unitary evolution.
The discussion extends to encompass both pure and mixed quantum states, furnishing a comprehensive formulation for each. Progressing further, we delve into a suite of generalizations that incorporate the Arnoldi iteration and the bi-Lanczos algorithm, addressing the challenges posed by non-unitary evolution in open quantum systems. As an adjunct to the Lanczos technique, we introduce the moment method, a viable alternative construction based on the two-point autocorrelation function. These analytical frameworks are pivotal in the ongoing quest to unravel quantum chaotic attributes within the contexts of quantum field theory and holography via the AdS/CFT correspondence. Significant strides have been made analytically, particularly in relation to coherent states. These advancements have shed light on the geometric essence of quantum systems and established constraints on their fundamental characteristics, such as the quantum speed limits. Despite their conceptual simplicity, these tools wield the capacity to grapple with an array of complex systems. Examples include Random Matrix Theory (RMT), spin chains, and the Sachdev-Ye-Kitaev (SYK) models, the latter sharing a close connection to gravitational theories. Through these paradigms, we glean insights that inform our understanding of the fundamental bound of quantum chaos and its generalization. Such understanding is instrumental in unraveling the intricacies of quantum integrability and quantum control problems. Throughout this text, we intersperse analytical and numerical examples to reinforce the underlying theoretical framework. To date, the study of Krylov subspace methods for quantum dynamics has been mostly confined to theoretical physics, applied mathematics, and computer science. As progress is made in combining Krylov methods with quantum algorithms, their implementation in quantum devices seems feasible. The development of experimental methods for probing Krylov complexity measures is in a nascent stage. However, the groundwork laid by existing methods, such as those employed to simulate the spectral form factor and the out-of-time-ordered correlator (OTOC) in digital quantum simulators, provides a promising foundation <cit.>. Nonetheless, this endeavor presents a formidable challenge, and meeting it is essential to foster progress in harnessing quantum processing units for quantum computation and the quantum simulation of nonequilibrium phenomena <cit.>. We are thankful to Budhaditya Bhattacharjee, Hugo A. Camargo, Xiangyu Cao, Paweł Caputa, Nicoletta Carabba, Aurelia Chenu, Pieter W. Claeys, Íñigo L. Egusquiza, Niklas Hörnedal, Norihiro Iizuka, Victor Janke, Norman Margolus, Javier Molina-Vilaplana, Masahiro Nozaki, Tanay Pathak, Tomaž Prosen, Shinsei Ryu, Lucas Sá, Lea F. Santos, Aninda Sinha, Julian Sonner, Ruth Shir, Kazutaka Takahashi, Jing Yang, Zhuo-Yu Xian, Zhenyu Xu. We acknowledge financial support from The Luxembourg National Research Fund (project No. grant 17132054 and 16434093, and Attract Grant No. 15382998). One of these projects has received funding from the QUANTERA II Joint Programme with cofunding from the European Union's Horizon Europe research and innovation programme. The work of P.N. is supported by the JSPS Grant-in-Aid for Transformative Research Areas (A) “Extreme Universe” No. 21H05190.
For the purpose of open access, the authors have applied a Creative Commons Attribution 4.0 International (CC BY 4.0) license to any Author Accepted Manuscript version arising from this submission. § LIST OF SYMBOLS AND ACRONYMS
http://arxiv.org/abs/2405.08702v1
20240514154141
The physical mechanism behind magnetic field alignment in interstellar clouds
[ "Guido Granda-Muñoz", "Enrique Vázquez-Semadeni", "Gilberto C. Gómez" ]
astro-ph.GA
[ "astro-ph.GA" ]
Magnetic field alignment in interstellar clouds Granda-Muñoz et al. Instituto de Radioastronomía y Astrofísica, Universidad Nacional Autónoma de México, Apdo. Postal 3-72, Morelia, Michoacán 58089, México e.vazquez@irya.unam.mx,g.gomez@irya.unam.mx Departamento de Ciencias, Facultad de Artes Liberales, Universidad Adolfo Ibáñez, Av. Padre Hurtado 750, Viña del Mar, Chile guido.granda@edu.uai.cl A tight correlation between interstellar cloud contours and their local magnetic field orientation has been widely observed. However, the physical mechanisms responsible for this correlation remain unclear. We investigate the alignment mechanism between the magnetic field and interstellar clouds. We perform three- and two-dimensional MHD simulations of warm gas streams in the thermally-bistable atomic interstellar medium (ISM) colliding with velocities of the order of the velocity dispersion in the ISM. In these simulations, we follow the evolution of the magnetic field lines, and identify and elucidate the physical processes causing their evolution. The collision produces a fast MHD shock, and a condensation front roughly one cooling length behind it, on each side of the collision front. A cold dense layer forms behind the condensation front, onto which the gas settles, decelerating smoothly. We find that the magnetic field lines, initially oriented parallel to the flow direction, are perturbed by the fast MHD shock, across which the magnetic field fluctuations parallel to the shock front are amplified. The downstream perturbations of the magnetic field lines are further amplified by the compressive downstream velocity gradient between the shock and the condensation front, caused by the settling of the gas onto the dense layer. This mechanism causes the magnetic field to become increasingly parallel to the dense layer and drives the development of a shear flow around the latter. Furthermore, the bending-mode perturbations on the dense layer are amplified by the non-linear thin-shell instability (NTSI), stretching the density structures formed by the thermal instability, and rendering them parallel to the bent field lines. By extension, we suggest that a tidal stretching velocity gradient, such as that produced in gas infalling onto a self-gravitating structure, must straighten the field lines along the accretion flow, orienting them perpendicular to the density structures. We also find that the upstream superalfvénic regime transitions to a transalfvénic regime between the shock and the condensation front, and then to a subalfvénic regime inside the condensations. Finally, in two-dimensional simulations with a curved collision front, the presence of the magnetic field inhibits the generation of turbulence by the shear around the dense layer. Our results provide a feasible physical mechanism for the observed transition from parallel to perpendicular relative orientation of the magnetic field and the density structures as the density structures become increasingly dominated by self-gravity. The physical mechanism behind magnetic field alignment in interstellar clouds Guido Granda-Muñoz1,2,Enrique Vázquez-Semadeni 1 Gilberto C. Gómez 1 Received XXX; accepted YYY ============================================================================= § INTRODUCTION Studying the role of magnetic fields in the formation and evolution of atomic and molecular clouds (MCs) has been an important research topic for both observational and theoretical astronomy.
Magnetic fields are thought to be an important ingredient in the dynamics of the ISM, providing a possible support mechanism against gravitational collapse and guiding the gas flow in the surroundings of filamentary structures, among many other effects. In addition, a tight correlation between the orientation of the magnetic field and cold atomic clouds (CACs) has been identified in the last decade. For example, <cit.> found that the plane-of-the-sky magnetic field, measured using polarized thermal dust emission, is aligned with atomic hydrogen structures detected in HI emission, and <cit.> observed a similar alignment in HI fibers, which are thin, long, dense structures identified in HI emission using the Rolling Hough transform. In addition, <cit.> found that the plane-of-the-sky magnetic field, detected using polarized thermal dust emission, is aligned with molecular structures traced by dust, and <cit.> found that the relative orientation of the projected magnetic field and dust filaments changes from parallel to perpendicular when sampling higher density regions in nearby MCs. <cit.> studied the role of the magnetic field in the HI–H_2 transition using multiple tracers to investigate the gas properties of Ursa Minor. They found that the turbulence is transalfvénic and that the gas probably accumulates along magnetic field lines, generating overdensities where molecular gas can form. However, the origin of these alignments remains unclear, as most of the observational evidence refers to spatial and orientation correlations without a clear understanding of the causality involved. Therefore, it is crucial to understand the interplay between the precursors of molecular clouds and magnetic fields. Since CACs are thought to constitute the primordial sites of the early stages of the evolution of molecular clouds and, eventually, of star-forming regions <cit.>, understanding the alignment mechanism of magnetic field lines in CACs is relevant to elucidate the role of magnetic fields in the formation of MCs and in star formation. To study this correlation statistically, <cit.> proposed the histogram of relative orientations (HRO) and found that the relative orientation of the magnetic field and isodensity contours changes from parallel to perpendicular in a highly magnetized (β=0.1) computer-simulated cloud. This change of orientation was studied in <cit.> with an analytic approach, where the authors found that the parallel or anti-parallel (ϕ=0^∘,180^∘) and perpendicular (ϕ=90^∘) configurations are equilibrium points. Therefore, they argue, the system tends to evolve towards these ϕ values. More recently, <cit.> investigated the relative orientation of the magnetic field and three- and two-dimensional projected structures using synthetic dust polarization maps. They found that the magnetic field changes its orientation from parallel to perpendicular at n ≈ 10^2–10^3 cm^-3 in regions where the mass-to-flux ratio has values close to or below 1. In addition, they found that projection effects due to the relative orientation between the cloud and the observer affect the measurements. The relative orientation of the magnetic field with CACs is, arguably, intrinsically related to the formation mechanism of CACs. The formation of CACs by purely hydrodynamic colliding flows was studied in <cit.>, where the authors performed simulations of colliding warm neutral gas streams that included the multi-phase nature of the ISM but considered neither gravity nor magnetic fields.
They discussed three important instabilities that might play a role in the formation of CACs: the thermal instability (TI), the Kelvin-Helmholtz instability (KHI), and the non-linear thin-shell instability (NTSI). They concluded that these instabilities break up the coherent flows, seeding the small-scale density perturbations necessary for gravitational collapse and thus star formation. <cit.> studied the formation of non-self-gravitating CACs in the ISM. This author focused on finding the mechanism responsible for the elongation of CACs and concluded that the clouds are generated by the stretching induced by turbulence, because they are aligned with the strain. Moreover, the author also found that the Lorentz force confines CACs. <cit.>, in agreement with the previous work, found that the strain is also the origin of the magnetic field alignment with fibers in HI clouds formed in a shock-compressed layer, using simulations resembling the Local Bubble. More recently, <cit.> studied the morphology of CACs in forced magnetized and hydrodynamical simulations, finding that the presence of the magnetic field increases the probability of filamentary CACs. A consequence of the supersonic gas streams forming CACs is the formation of shocks. Shocks form wherever there are supersonic velocity differences between two nearby regions. Since the velocity dispersions in the neutral ISM are supersonic, shocks are ubiquitous in the ISM and, in general, produce density fluctuations of various amplitudes. The formation of CACs out of the warm neutral medium (WNM) in the Galactic ISM requires strong cooling in addition to the presence of shocks. Such cooling often causes TI <cit.> and, in general, produces large density jumps without the need for very strongly supersonic flows <cit.>. In the presence of supersonic flows and cooling, simulations show that shocked unstable gas, in transition between the WNM and the cold neutral medium (CNM), lies between the shock front and the condensation layer, and that the separation between them is set by the cooling time necessary to produce the condensation resulting from the TI <cit.>. In this paper, we focus on studying the alignment of magnetic fields with filamentary CACs by following the simultaneous evolution of the magnetic field lines. This approach allows us to understand the physical processes responsible and the role they play in the alignment of magnetic fields with density structures. The paper is organized as follows. We describe the simulations used in this article in Section 2. In Section 3, we quantify the alignment of the magnetic field with the density structures. In Section 4, we show and explain the evolution of the 3D magnetic field lines that yields the alignment of CACs with their local magnetic field. We discuss the implications of our results in Section 5. Finally, in Section 6, we present the summary and conclusions. § COLD ATOMIC CLOUD SIMULATIONS We have performed 2D and 3D numerical simulations of cold atomic clouds formed by the collision of warm atomic gas flowing along the x-axis and colliding at the center of the computational domain. The simulations were performed using the adaptive mesh refinement (AMR) code Flash version 4.5 <cit.> and the ideal MHD multi-wave HLL-type solver <cit.>. Since our goal is to study the alignment of the magnetic field with CACs before self-gravity becomes dominant, neither self-gravity nor any external gravitational potential is included in these simulations. The simulations use inflow boundary conditions in the x direction and periodic boundary conditions in the other directions.
The initial conditions for both kinds of simulations consist of gas in thermal equilibrium at temperature T_0 = 5006.25 K, implying a sound speed c_s,0 = 7.36 km s^-1 for a mean particle mass of 1.27 m_H, and an atomic hydrogen number density n_H,0 = 1 cm^-3. For the 3D simulation, the box size is L = 64 pc and the highest resolution is 0.03125 pc. The gas inside a cylinder of radius R = 16 pc and length ℓ = 64 pc, centered in the middle of the computational domain, has a velocity u_0 = ±14.7 km s^-1 along the x direction, with the positive and negative values applying to the left and right of the x = 0 plane, respectively. For the 2D simulation, the box size is L = 20 pc with a uniform grid of 512 cells per dimension, resulting in a resolution of 0.039 pc; the main difference is that the collision front in this simulation has a sinusoidal shape, in order to trigger the nonlinear thin-shell instability (NTSI) described in <cit.>. This interface is obtained by requiring that simulation points with x < 3.0 sin(8π y/20) and x > 3.0 sin(8π y/20) have velocities u_0 = +14.7 km s^-1 and u_0 = -14.7 km s^-1, respectively. In addition, for the 3D simulation, we add to each velocity component a pseudo-random velocity fluctuation drawn from a Gaussian distribution with zero mean and a standard deviation of 2.85 km s^-1. These initial conditions imply an initial Mach number of M_s ≈ 2.0 for both the 2D and 3D simulations. Furthermore, both simulations incorporate the multi-phase nature of the interstellar medium by including the net cooling function provided in <cit.>[With the typographical corrections given in <cit.>.]. The initial magnetic field is B_0 = 3 μG along the x-axis, implying an initial Alfvén speed of 6.54 km s^-1 and an inflow Alfvénic Mach number of M_A ≈ 2.25. The magnetization of this simulation corresponds to a plasma beta of β ≡ P_th/P_mag = 2 c_s^2/u_A^2 = 2.54. Note that β is formally defined as the ratio of the thermal and magnetic pressures, which introduces the factor of 2 in the numerator; this factor is often omitted in the literature. Therefore, the initial condition of this simulation is supersonic, superalfvénic, and of intermediate magnetization. In Figure <ref>, we show face-on and edge-on column densities of the resulting evolution after 5 Myr. In the following discussion, the highlighted white region located at x ∈ [-3.0, 3.0] pc, y ∈ [-1.0, 3.0] pc, and z ∈ [2.3, 6.3] pc will be referred to as R1; its three-dimensional density structure and magnetic field lines are shown in Figure <ref>,[This and the other three-dimensional figures were made with the help of PyVista <cit.>.] in which the shock fronts are visible as shaded vertical sheets. We note that the magnetic field lines start to bend at the shock fronts. Additionally, the magnetic field has become almost perpendicular to its original orientation (the x-axis) in some regions. § ALIGNMENT OF MAGNETIC FIELD LINES WITH COLD ATOMIC CLOUDS To quantify the alignment of magnetic field lines, we use the histogram of relative orientations (HRO; <cit.>), which is a statistical tool to measure the angle ϕ between the magnetic field and the density gradient of structures in the ISM, over number density intervals[Note that in this study we focus on simulations and do not explore observational effects, such as the constraint of only detecting the plane-of-the-sky magnetic field or the relative orientation between the observer and the cloud.] relevant to the multi-phase nature of CACs.
These intervals comprise the densities of the post-shock warm neutral medium (n ∈ [3,10] cm^-3), the low-density cold neutral gas (n ∈ [10, 3×10^1] cm^-3), the medium-density cold neutral gas (n ∈ [3×10^1, 10^2] cm^-3), the high-density neutral gas (n ∈ [10^2, 3×10^2] cm^-3), and the density range of the central region (n ∈ [3×10^2, 10^3] cm^-3). We show the resulting HRO in the left panel of Figure <ref> in terms of cos ϕ. Thus, when cos ϕ = 0, the magnetic field is parallel to the density isocontours, while, when cos ϕ = ±1, the magnetic field is perpendicular to the density isocontours. We keep this convention to allow comparison with the three-dimensional HRO diagram presented in <cit.>. We can see that the HRO for all density intervals peaks at cos ϕ = 0; in other words, the magnetic field tends to be parallel to the density structures throughout the density range we investigate. In order to quantify the HRO, <cit.> introduced the shape parameter ζ, defined as ζ ≡ (A_c − A_e)/(A_c + A_e), where A_c is the central area under the HRO diagram, located in the range ϕ ∈ [75.52^∘, 104.48^∘] and corresponding to a magnetic field mostly parallel to the isodensity contours, while A_e is the area under the HRO in the range ϕ ∈ [0^∘, 41.41^∘] ∪ [138.59^∘, 180^∘], corresponding to a mostly perpendicular field. Thus, for a given density interval, if 0 < ζ < 1, the magnetic field lines are mostly parallel to the isodensity contours (i.e., perpendicular to the density gradients), while if −1 < ζ < 0, the magnetic field is mostly perpendicular to the contours. Finally, ζ is close to zero when there is no clear tendency in the alignment. In the bottom panel of Figure <ref>, we plot ζ for the simulation in the density ranges defined above, showing that the magnetic field becomes increasingly parallel to the density structures as the density increases up to 3 × 10^2 cm^-3, while the degree of parallel alignment decreases for the last interval. The trend of the shape parameter shown in Figure 6 of <cit.> indicates that the alignment of the magnetic field and isodensity contours becomes less parallel as the density increases. In that work, the authors sample density values n ∈ [1.6 × 10^2, 3.16 × 10^6] cm^-3 in isothermal simulations of cold molecular gas. In contrast, since we are interested in understanding how this correlation arises as the cloud is assembled, our simulation starts with warm atomic gas, leading to density structures with n ∈ [6 × 10^-1, 2.2 × 10^3] cm^-3. Therefore, the HRO shape parameter obtained in Figure <ref> complements the one obtained by <cit.>, as it corresponds to gas that can be considered the precursor of a molecular cloud. Specifically, the HRO and shape parameter obtained in Figure <ref> show an increase in the degree of parallel alignment for the first four density intervals, while for the last one we can appreciate a change of this tendency towards no preferential orientation, which corresponds to the lowest density interval in the results of <cit.>. In Figure <ref>, we show the density structures for the four highest density intervals used to obtain the HRO and shape parameter of Figure <ref>. It can be seen from this figure that the magnetic field (black arrows) tends to be parallel to the density structures for the four density intervals. However, for the highest density interval, the magnetic field does not follow this general trend in some regions, leading to the observed change in the shape parameter.
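To make the statistic concrete, the following is a minimal sketch (our own illustration, not the authors' analysis pipeline) of how cos ϕ and the shape parameter ζ can be evaluated on a uniform grid. The array names, the toy random fields, and the density mask are assumptions of the example; the angular ranges are the ones quoted above.

```python
import numpy as np

def hro(density, B, mask=None):
    """HRO sketch: cos(phi) between B and the density gradient on a uniform
    grid; density has shape (nx, ny, nz), B has shape (3, nx, ny, nz)."""
    g = np.stack(np.gradient(density))                       # density gradient
    cosphi = np.sum(B * g, axis=0) / (
        np.linalg.norm(B, axis=0) * np.linalg.norm(g, axis=0) + 1e-30)
    phi = np.degrees(np.arccos(np.clip(cosphi, -1.0, 1.0)))
    if mask is not None:
        phi = phi[mask]
    # Angular ranges used in the text for the shape parameter
    A_c = np.count_nonzero((phi > 75.52) & (phi < 104.48))   # field || contours
    A_e = np.count_nonzero((phi < 41.41) | (phi > 138.59))   # field perp contours
    return cosphi, (A_c - A_e) / max(A_c + A_e, 1)

# Toy check with random fields: no preferred orientation -> zeta ~ 0
rng = np.random.default_rng(1)
n = rng.lognormal(1.0, 0.5, size=(32, 32, 32))
B = rng.normal(size=(3, 32, 32, 32))
for lo, hi in [(3, 10), (10, 30)]:        # density intervals as in the text (cm^-3)
    _, zeta = hro(n, B, mask=(n >= lo) & (n < hi))
    print(f"n in [{lo},{hi}): zeta = {zeta:+.3f}")
```

For uncorrelated fields ζ fluctuates around zero, whereas the simulation data described above yield increasingly positive ζ (field parallel to the structures) with density.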
§ MAGNETIC FIELD LINE EVOLUTION To understand the alignment of the magnetic field with CACs shown in the previous section, we followed the evolution of the magnetic field lines. The resulting configuration of the three-dimensional density structures and magnetic field lines of region R1 after 5 Myr of evolution is shown in Figure <ref>, where we can see the shock fronts on each side of the central condensation region. The magnetic field lines start bending at these shock fronts on their way out from the center of the computational domain. As can be seen in the provided animation (Fig. <ref>), the magnetic field lines change their direction from being nearly parallel to the x-axis at early times to being mostly perpendicular to it after 5 Myr in the neighborhood of the dense layer. In this section, we investigate how this occurs. §.§ Magnetic field amplification by MHD shocks As seen in Figure <ref>, and considering the shock front located to the right of the condensation layer, we see that the angle θ between the upstream magnetic field and the normal to the shock front satisfies θ ≈ 0 for all the magnetic field lines shown. The small variations of θ around zero are due to the fact that the shock front is not a plane as it moves away from the central region, because of the fluctuations added to the inflow velocity in the simulation setup. Following <cit.>, the fast magnetosonic speed u_f is defined as u_f^2 = (1/2)(c_s^2 + u_A^2 + √((c_s^2 + u_A^2)^2 − 4 u_A,n^2 c_s^2)), where u_A is the Alfvén speed and u_A,n is its component normal to the shock front. Defining u_n as the flow speed normal to the shock, the flow is referred to as superfast when |u_n| > u_f. Since u_A,n ≈ u_A and u_n ≈ u_0 for the pre-shock flow in the three-dimensional simulation described in Section <ref>, it can be seen from equation (<ref>) that this flow is superfast. The downstream flow, just behind the shock front, becomes transalfvénic, as we can see from Figure <ref>. Therefore the relation u_f > |u_n| > u_A,n, which characterizes a subfast flow, is satisfied downstream. Such an MHD shock, going from a superfast to a subfast flow, is called a fast MHD shock <cit.>, whose main feature is that it refracts the magnetic field away from the shock normal due to the amplification of the magnetic field component parallel to the shock front. This amplification is given by B_∥,2 = r_ρ B_∥,1 (M_A,1^2 − cos^2 θ)/(M_A,1^2 − r_ρ cos^2 θ), where B_∥,1 = B_1 sin θ and B_∥,2 are the upstream and downstream magnetic field components parallel to the shock front, r_ρ = ρ_2/ρ_1 is the ratio of the downstream (ρ_2) and upstream (ρ_1) densities, M_A,1 is the Alfvénic Mach number of the upstream gas, and θ is the angle between the vector normal to the shock front and the upstream magnetic field B_1. Since this amplification depends on the angle θ, the fluctuating curvature of the shock front at different positions yields the inhomogeneous downstream magnetic field pattern as the shock front travels away from the central region at early evolutionary times (see Figure <ref>). §.§ Line bending analysis To understand how magnetic field lines change their original direction in the post-shock region, we consider the induction equation in ideal MHD, ∂B/∂t = −(∇·u)B − (u·∇)B + (B·∇)u. §.§.§ Line bending by a compressive flow This analysis applies after the magnetic field component parallel to the shock front has been amplified by the fast MHD shock, i.e., in a region containing cooling, thermally unstable gas.
After the amplification, the magnetic field lines adopt the shape represented in Figure <ref>, where the magnetic field component parallel to the shock front, B_y, has been amplified, while the component perpendicular to the shock front, B_x, stays constant, in agreement with the jump condition for the magnetic field. Furthermore, we assume that the flow speed decreases along x in the post-shock region; i.e., u_x = u_x(x) and ∂u_x/∂x < 0, which represents the compression caused by the cooling of the gas as it travels downstream. Finally, we disregard the downstream velocity components parallel to the shock front, u_y and u_z, to analyze the effect of the compression alone. To validate these assumptions, in Figure <ref> we plot the relevant physical quantities at time t = 0.7 Myr along a ray parallel to the x-axis passing through a region in which this amplification becomes large at later evolutionary times. In the top panel, the shock front and the condensed region are clearly visible in the gas density profile. The middle panel shows that, in addition to the discontinuity at the shocks, the inflow velocity u_x smoothly decreases downstream from the shock, in sync with the density increase. Also, u_y, u_z ≈ 0, in agreement with our assumptions. Finally, in the bottom panel, we see that the fluctuation of B_x remains within ≲ 20% of its mean value, so it is negligible to first order. Therefore, the assumptions B_y = B_y(x), B_x = C (with C a constant), u_x = u_x(x) with ∂u_x/∂x < 0, and u_y, u_z → 0, together with solving for the B_y component, reduce equation (<ref>) to ∂B_y/∂t = −B_y ∂u_x/∂x − u_x ∂B_y/∂x. This equation can also be written in Lagrangian form as dB_y/dt = −B_y ∂u_x/∂x. Thus, since ∂u_x/∂x < 0, eq. (<ref>) implies that dB_y/dt has the same sign as B_y, and therefore the magnetic field component B_y is always amplified by the downstream compressive velocity gradient. This amplification results in the magnetic field aligning with the condensation plane where CACs form. §.§.§ Line bending at curved interfaces Another possible mechanism for aligning the magnetic field with the density structures occurs when the collision interface is curved rather than flat, as, for example, in the case of the NTSI <cit.>. To investigate this, we also ran two-dimensional simulations with the same initial conditions and physics as the three-dimensional simulation described in Section <ref>, but with a curved collision interface, obtained by adding a sinusoidal displacement perturbation (a "bending-mode" perturbation). In the left panel of Figure <ref>, we show a very early stage of this simulation. In this case, the obliqueness of the interface implies the existence of a component of the incoming flow tangential to it, while the perpendicular component is reduced across the shock. This causes the flow to change direction at the interface, becoming oblique to the original magnetic field direction. Being transalfvénic, this oblique post-shock flow can begin to bend the field lines. The situation is symmetric on the two sides of the layer, thus generating a shearing velocity field with opposite directions on opposite sides, due to the alternating concavity of the collision interface. This generates an "S" shape of the magnetic field lines across the shocked layer. A later stage of this simulation is shown in the right panel of Figure <ref>, showing that the flow tends to be subalfvénic in the condensed regions.
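The sign argument of the compressive case can be verified with a few lines of code. The following is a minimal sketch (our own illustration; the deceleration profile u_x(x), the seed amplitude, and the time step are assumptions, not values taken from the simulation) that integrates the Lagrangian equation dB_y/dt = −B_y ∂u_x/∂x along a fluid element crossing a smooth compressive gradient. Note that the equation conserves B_y u_x along the trajectory, which provides an analytic check of the integration.

```python
import numpy as np

def u_x(x):
    # Assumed smooth post-shock deceleration profile (km/s); du_x/dx < 0 everywhere
    return 10.0 - 4.0 * np.tanh(x / 0.5)

def dudx(x, h=1e-5):
    return (u_x(x + h) - u_x(x - h)) / (2.0 * h)

x, By, dt = -2.0, 0.1, 1e-4          # seed fluctuation B_y (arbitrary units)
while x < 2.0:
    By += dt * (-By * dudx(x))       # dB_y/dt has the same sign as B_y: amplified
    x += dt * u_x(x)                 # advect the fluid element downstream

# Flux-freezing check: B_y * u_x is conserved along the trajectory
print(f"B_y grew from 0.100 to {By:.3f}; "
      f"analytic prediction 0.1 * u_x(-2)/u_x(2) = {0.1 * u_x(-2.0) / u_x(2.0):.3f}")
```

Any seed fluctuation, regardless of its sign, grows by the velocity ratio across the gradient, which is the mechanism invoked above for the progressive alignment of the field with the dense layer.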
For this collision interface, the line-bending analysis differs from that described in Section <ref>. In this case, we have a velocity field like the one represented in Figure <ref>, where the left panel shows an unperturbed downstream magnetic field line and the right panel shows a perturbed one. In the left panel, we consider a local coordinate system centered at the point where u_y is maximum. The magnetic field line is represented in green, the x-axis is parallel to the field line, the y-axis is perpendicular to it, and the velocity field is represented by black arrows. Thus, initially, B_y = 0 and B_x = C, where C is a constant. Then, considering u_x, u_z → 0 and u_y = u_y(x), we obtain, from equation (<ref>), ∂B_y/∂t = B_x ∂u_y/∂x. We can then consider this equation in three different regions of the left panel of Fig. <ref>. First, in the region x ∈ [x_1, 0], we have ∂u_y/∂x > 0, and so ∂B_y/∂t > 0. Second, in the region x ∈ [0, x_2], we see that ∂u_y/∂x < 0, and therefore ∂B_y/∂t < 0. Finally, in the region x ∈ [x_2, x_3], ∂u_y/∂x > 0, and thus ∂B_y/∂t > 0. Therefore, in these three regions, the sign of ∂B_y/∂t explains the deformation of the magnetic field line into an "S"-shaped morphology, in which each convex part aligns with the direction of the flow (see Figures <ref>, left panel, and <ref>). § DISCUSSION §.§ The role of the pre-condensation shock in the alignment of the magnetic field As we have seen from the three-dimensional simulation, the magnetic field lines change their orientation at the shock front due to the effect of the fast MHD shock. The passage of the shock front yields an irregular amplification of the field component parallel to it, which results in the early downstream shocked magnetic field line pattern. Afterward, the magnetic field lines are dragged and folded by the downstream decelerating gas. In this work, we do not vary the relative orientation between the upstream magnetic field and the shock front in the initial conditions. However, a small range of angles between them arises because the velocity fluctuations cause the shock front to depart from a perfectly flat plane. The influence of the initial angle between the magnetic field and the shock front has been studied by <cit.>, who found that the number of CACs or fibers oriented perpendicular to the magnetic field increases with the angle between the upstream magnetic field and the shock front in simulations without an initial velocity dispersion. However, when an initial velocity dispersion is included, the authors find that fibers tend to be oriented in the direction of the local magnetic field. For this reason, they conclude that the formation mechanism of fibers and their alignment with the local magnetic field is the turbulent shear strain, which was also identified as the reason for the elongation of filamentary CACs by <cit.>. It is important to mention that the role of MHD shocks in the evolution of the magnetic field lines that yields the final correlation has not been explored before. In this work, we identified that a fast MHD shock produces magnetic field fluctuations that are amplified between the shock front and the condensation layer. §.§ The role of the velocity gradient in aligning the field and density structures. The case of gravitationally-driven cloud formation <cit.> proposed that the relative orientations between the magnetic field and density structures ϕ = 90^∘ and ϕ = 0^∘ might be equilibrium points.
However, the physical reason for this remained unknown. In this work, we identified that it is the action of a fast MHD shock and the compressive velocity field resulting from the settling of the gas onto the dense layer that leads to ϕ = 90^∘. Since we have focused on non-gravitational CACs, we have not numerically explored how ϕ becomes 0^∘. However, a discussion similar to that in Section <ref> leads us to speculate that the ϕ = 0^∘ configuration may arise in the presence of a stretching velocity field, as would be the case for the tidal flow into the gravitational well of a strongly self-gravitating cloud. In this case, x would be the direction of the flow and B_y a magnetic field perturbation perpendicular to that direction. Therefore, equation (<ref>) with a positive velocity gradient implies that dB_y/dt has the opposite sign to B_y, straightening the field lines. Thus, we suggest that the induction equation in the presence of a compressive or stretching velocity field leads to the ϕ = 0^∘, 90^∘ equilibrium configurations found by <cit.>, justifying their speculation that these may be attractors. This also suggests a mechanism for the parallel alignment of the magnetic field with non-self-gravitating structures and its perpendicular alignment with self-gravitating ones, as observed in <cit.>. §.§ The effect of strong cooling on the development of the NTSI Regarding the development of the NTSI, <cit.> found that the requirement for this instability to grow is that the displacement of the cold slab be larger than its thickness. This condition can only be satisfied when there is a high compression ratio across the shock, yielding a very thin shocked layer. In the isothermal case, this requires that M_s^2 ≫ 1. The three-dimensional simulation described in Section <ref> has M_s^2 = 4.0, which is not very large. However, our simulations include strong cooling leading to thermal instability, which produces a much stronger compression of the condensed layer and a much thinner slab, even for moderate Mach numbers <cit.>. Thus, it is not difficult to fulfill the requirement for the development of the NTSI at the condensed, rather than the shocked, layer <cit.>, as demonstrated by the growth of the bending-mode perturbation (the increase in the curvature) of the dense layer in our 2D simulation. This means that the NTSI can be triggered by moderate shocks in the strongly cooling case (see the order-of-magnitude sketch below). §.§ The effect of the magnetic field on the NTSI and the shear strain The NTSI is one of the possible mechanisms yielding the shear strain proposed by <cit.> to be responsible for the elongation of filamentary CACs and the alignment of these structures with the local magnetic field, since it transports momentum from the original inflow direction to the direction parallel to the dense layer in the regions around the nodes, as can be seen in Figure <ref>. However, <cit.> found that when the magnetic field is aligned with the inflow, it tends to weaken or even suppress the NTSI, due to the magnetic tension counteracting the transverse momentum transport. Nevertheless, the NTSI can contribute to the change of direction of the magnetic field if the flow surrounding the dense layer remains at least transalfvénic, so that it has sufficient energy to bend the field. This condition is indeed satisfied by the flow between the shock and the condensation front, as can be seen for the 2D simulation in both panels of Figure <ref>. Note, incidentally, that the flow inside the dense layer is generally subalfvénic.
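Returning to the thin-slab requirement of Sect. 5.3, the following order-of-magnitude sketch (our own estimate, not a calculation from the paper) compares the slab thickness implied by an isothermal shock compression r_ρ = M_s^2 with that implied by strong cooling. The CNM-like slab density of ~10^2 cm^-3 and the use of the 3 pc bending amplitude of the 2D setup as the displacement are assumptions of the illustration.

```python
# NTSI criterion sketch: slab displacement must exceed slab thickness.
# Column accreted from both sides: N(t) = 2 n0 u0 t; thickness h = N / n_slab.
KMS_TO_PCMYR = 1.0227                          # 1 km/s ~ 1.02 pc/Myr

n0 = 1.0                                       # cm^-3, inflow density (Sect. 2)
u0 = 14.7 * KMS_TO_PCMYR                       # pc/Myr, inflow speed (Sect. 2)
t, amplitude = 5.0, 3.0                        # Myr, and pc (2D bending-mode seed)
N = 2.0 * n0 * u0 * t                          # pc cm^-3

for label, n_slab in [("isothermal, r_rho = M_s^2 = 4 ", 4.0),
                      ("strong cooling, CNM-like slab", 1.0e2)]:
    h = N / n_slab
    verdict = "NTSI can grow" if amplitude > h else "NTSI suppressed"
    print(f"{label}: h = {h:5.1f} pc vs displacement {amplitude} pc -> {verdict}")
```

With the isothermal compression the slab is tens of parsecs thick and the 3 pc displacement cannot trigger the instability, whereas the strongly cooled slab is thin enough for the NTSI to operate, consistent with the behavior of the 2D run described above.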
Another source of shear strain, one that does not require the NTSI, is observed in our 3D simulation, which does not include an initial bending-mode perturbation of the collision front. This shear arises later in the evolution, when the magnetic field lines have already been dragged and bent by the compressive velocity field. At this time, the transalfvénic condition of the post-shock flow allows the magnetic field to partially re-orient the gas flow along the field lines. Since the field lines have been oriented nearly parallel to the dense layer by the compressive post-shock flow, the velocity field is also oriented in a similar way, and in opposite directions on each side of the dense layer, therefore adding a strong shear component to the flow around the layer. We can see one example of this situation in Figure <ref>, where the velocity field is represented by dark arrows and the magnetic field lines are color-coded with the Alfvénic Mach number. In this figure, the troughs and peaks of the CACs do not show corresponding converging and diverging velocity fields, as would correspond to the NTSI, and so the structure at this point appears not to have been formed by this instability. §.§ Inhibition of turbulence generation by the magnetic field It has been noticed in previous works that MHD simulations of cloud formation are less turbulent and show more filamentary structure than purely hydrodynamical simulations <cit.>. The generation of turbulence in curved compressed layers is due to the KHI, which in turn is triggered by the shear flow produced by the NTSI <cit.>. Therefore, the magnetic tension, which opposes the vorticity generation by the shear flow across the dense layer, may suppress the development of the KHI and, as a consequence, the generation of turbulence. Indeed, a 2D numerical simulation without the magnetic field exhibits a much stronger turbulence level, as shown in Fig. <ref>. §.§ Comparison with previous work The superalfvénic nature of the initial inflow in our simulations, and its continuation downstream as a transalfvénic flow, allow the dragging and amplification of the magnetic field, in agreement with <cit.>, whose observations reported transalfvénic turbulence in the HI–H_2 transition region. According to these authors, atomic gas might accumulate along magnetic field lines, which is also in agreement with our results (see Figure <ref>). In this work, our simulations consider only the formation of clouds by the collision of converging warm atomic flows. However, the main physical processes responsible for the alignment of magnetic field lines and density structures, namely MHD shocks and the NTSI, can also be present at the interfaces between interacting wind-blown bubbles and/or supernova shells. Therefore, MHD shocks and the NTSI could also be the principal physical mechanisms behind the magnetic field alignment with fibers found in this type of object by <cit.>. § SUMMARY AND CONCLUSIONS In this work, we have studied the physical mechanisms responsible for the observed alignment of the magnetic field with cold, gravitationally unbound atomic density structures formed by the collision of converging warm atomic gas. We have tracked the evolution of the magnetic field lines in a three-dimensional simulation with typical conditions of the warm ISM and found that they become perpendicular to their original orientation and end up aligned with the density structures.
The process of alignment of the magnetic field lines with the density structures starts at the shock fronts and takes place in the cooling, thermally unstable gas, so that the magnetic field already shows a preferred orientation when the flow forms CACs at the condensed layer. At the position of the shocks, the magnetic field changes its direction due to the amplification of its component parallel to the front, which is seeded by the velocity fluctuations in the pre-shock region. This amplification is due to a fast MHD shock, which arises when the upstream and downstream flows are super- and sub-fast, respectively. Behind the shock front, the compressive downstream velocity field further amplifies the magnetic field component parallel to the shock front, increasing the curvature of the lines in this region and causing them to become increasingly parallel to the condensed layer produced by the thermal instability. The amplification of the fluctuation by a compressive velocity gradient can be understood through an analysis of the induction equation in planar geometry (eq. [<ref>]), which shows that the change in the fluctuating field component has the same sign as the fluctuation, amplifying it. From the same equation, we concluded that a stretching velocity gradient causes damping of the fluctuating component, leading to a straightening of the field lines and thus orienting them perpendicular to the density structures. We speculate that this is the mechanism operating during the growth of self-gravitating structures, where the flow accelerates inwards, producing a tidal stretching velocity pattern; this offers a possible explanation for the perpendicular orientation of the field lines around self-gravitating molecular cloud filaments. In conclusion, we have found that a settling (i.e., decelerating) flow, such as that occurring due to the condensation of the gas by thermal instability, orients the field lines parallel to the density structures, while a stretching (accelerating) one, such as infall into a potential well, orients the field lines perpendicular to the density structures. This may be the physical mechanism behind the stationarity of these configurations found by <cit.>. Finally, we also found that, under typical conditions of the ISM, the flow upstream from the shock front is superalfvénic and becomes transalfvénic downstream. This allows the velocity field to bend and drag the magnetic field lines. We are grateful to Susan Clark and Laura Fissel for useful comments and suggestions. This research was supported by a CONACYT scholarship. GCG and EVS acknowledge support from UNAM-PAPIIT grants IN103822 and IG100223, respectively. In addition, we acknowledge the Interstellar Institute's program "With Two Eyes" and the Paris-Saclay University's Institut Pascal for hosting discussions that nourished the development of the ideas behind this work.
http://arxiv.org/abs/2405.09316v1
20240515131626
Energy conservation for 3D Euler and Navier-Stokes equations in a bounded domain. Applications to Beltrami flows
[ "Luigi C. Berselli", "Elisabetta Chiodaroli", "Rossano Sannipoli" ]
math.AP
[ "math.AP", "Primary 35Q31, Secondary 76B03" ]
Energy conservation for 3D Euler and Navier-Stokes equations in a bounded domain. Applications to Beltrami flows Luigi C. Berselli, Elisabetta Chiodaroli, and Rossano Sannipoli Dipartimento di Matematica - Università di Pisa, Italy email: luigi.carlo.berselli@unipi.it, elisabetta.chiodaroli@unipi.it, rossano.sannipoli@dm.unipi.it May 20, 2024 ================================================================================================================================================================================================================================================ In this paper we consider the incompressible 3D Euler and Navier-Stokes equations in a smooth bounded domain. First, we study the 3D Euler equations endowed with slip boundary conditions, and we prove the same criteria for energy conservation involving the gradient that are already known for the Navier-Stokes equations. Subsequently, we utilize this finding, which is based on a proper approximation of the velocity (and does not require estimates or additional assumptions on the pressure), to explore energy conservation for Beltrami flows. Finally, we explore Beltrami solutions to the Navier-Stokes equations and demonstrate that the conditions leading to energy conservation are significantly distinct from those implying regularity. This remains true even when making use of the bootstrap regularity improvement stemming from the solution being a Beltrami vector field. Keywords: Euler equations, Navier-Stokes equations, boundary value problem, energy conservation, Beltrami solutions. MSC: Primary 35Q31; Secondary 76B03. § INTRODUCTION In this paper we address two problems concerning energy conservation for weak solutions to incompressible fluids. Throughout, Ω⊂ℝ^3 will be a bounded domain with strongly Lipschitz boundary, and u^E, u: (0,T)×Ω→ℝ^3 and p^E, p: (0,T)×Ω→ℝ will represent, respectively, the velocity vector fields of an ideal (or of a viscous) homogeneous fluid and their associated kinematic pressures. The first problem pertains to the analysis of energy conservation for weak solutions to the 3D Euler equations, equipped with a slip boundary condition at the boundary: ∂_t u^E + (u^E·∇) u^E + ∇ p^E = 0 in (0,T)×Ω, ∇· u^E = 0 in (0,T)×Ω, u^E· n = 0 on [0,T]×∂Ω, u^E(0,x) = u_0^E(x) in Ω. The second problem focuses on the Leray-Hopf weak solutions to the Navier-Stokes equations (NSE) with Dirichlet boundary condition, that is, ∂_t u − Δ u + (u·∇) u + ∇ p = 0 in (0,T)×Ω, ∇· u = 0 in (0,T)×Ω, u = 0 on [0,T]×∂Ω, u(0,x) = u_0(x) in Ω, where the kinematic viscosity is set equal to 1, without loss of generality. It is well known that for smooth solutions to (<ref>) (which are known to exist only locally in time) the kinetic energy E(t) := (1/2)‖u^E(t)‖_2^2 is constant for t∈[0,T], while for smooth (at least strong) solutions of the Navier-Stokes equations the following balance relation holds: (1/2)‖u(t)‖_2^2 + ∫_0^t ‖∇ u(s)‖^2_2 ds = (1/2)‖u_0‖^2_2. Recall also that, for the Euler equations, the equality of energy is a property valid for smooth solutions, which are only known to exist locally in time, while for weak solutions the balance of energy cannot even be computed.
On the other hand, for (Leray-Hopf) weak solutions to the Navier-Stokes equations only an inequality in the balance law is known, which does not exclude the possibility of anomalous dissipation. The investigation of this problem and the verification of the kinetic energy balance in both scenarios (including also the limit of vanishing viscosity) have a rich history, tracing back to the works of Kolmogorov <cit.> and Onsager <cit.>. For a comprehensive overview, one can refer to Frisch <cit.>. Subsequently, the interest in this problem extended to pure mathematicians, particularly after the contributions of J.L. Lions and G. Prodi around 1960. A review of these developments can be found in <cit.>. Key milestones in understanding the mathematical subtleties of the problem include the seminal results by Prodi <cit.> and Lions <cit.>, as well as that by Constantin, E, and Titi <cit.>, which identified additional conditions on the velocity field for both the Navier-Stokes equations (NSE) and the Euler equations. Over the past 15 years, interest in the subject has been renewed, stimulating several researchers to obtain notable improvements. These include extensions to more general spaces, as in Cheskidov and Luo <cit.>, and explorations of connections with the singular limit between the viscous and ideal cases, as investigated by Drivas and Eyink <cit.>. In this paper, our primary focus lies on the implications of assuming conditions on the gradient of the velocity rather than on the velocity itself. It is worth noting that much of the existing theory revolves around Hölder, Besov, or fractional spaces of various kinds, while precise results regarding energy conservation under assumptions on the full gradient (or on the curl) have only emerged recently. For the Navier-Stokes equations, precise criteria involving the gradient (as exemplified in Theorem <ref>) have been established in <cit.> and by Beirão and Yang <cit.>; related works can also be found in Cheskidov and Luo <cit.>. On the other hand, recent findings regarding the Euler equations indicate that, more or less, the same conditions applicable to the viscous case are also valid for the ideal one, as shown in <cit.> (for sub-optimal cases) and by Liu, Wang, and Ye <cit.> (for optimal cases, albeit only for the smallest exponent in the space variables). The possibility of effectively handling these limit cases became apparent following the work of De Rosa <cit.> and Nguyen, Nguyen, and Tang <cit.>, particularly within functional spaces in which smooth functions are dense. Thus, it is expected that in Theorem <ref> we reach the endpoint case (further insights can be found in <cit.> for fractional spaces). However, the primary technical advancement here, unlike in other referenced papers, is that we tackle the problem within a bounded domain. In a bounded domain, conventional techniques involving smoothing by convolution to approximate the problem are unavailable. Instead, we employ a modified technique based on a transversal mapping to deform the domain, as elaborated in Section <ref>. We remark that, prior to our study, very few results concerned the boundary value problem for the Euler equations; see Bardos and Titi <cit.> and Robinson, Rodrigo, and Skipper <cit.> for the Hölder case. Note that our results: a) do not need estimates on the pressure, as in <cit.>; b) are not based on reflections and special geometries of the domain, as in <cit.>; c) do not need additional assumptions on the flux, as in Drivas and Nguyen <cit.>.
We also recall that many of the results about energy conservation are intertwined with the Onsager conjecture, as discussed in De Lellis and Székelyhidi <cit.>. While the negative part of the Onsager conjecture has been addressed, culminating in the groundbreaking works by Isett <cit.> and Buckmaster et al. <cit.>, significant efforts still persist in determining the minimal space-time assumptions necessary for energy conservation, particularly in the viscous case. Several recent contributions to this ongoing endeavor include those outlined in <cit.> and in Wang, Wei, Wu, and Ye <cit.>. The first result we prove concerns the conservation of energy under regularity conditions on the gradient of the solution u^E to (<ref>). Let u_0^E ∈ H and let u^E be a weak solution to the Euler equations (<ref>). If ∇u^E ∈ L^5q/(5q-6)(0,T;L^q(Ω)) for q ≥ 9/5, then the velocity u^E satisfies the energy equality ‖u^E(t)‖^2_2 = ‖u_0^E‖^2_2 for a.e. t ∈ [0,T]. The result can also be restated in terms of a condition on the curl ω^E = ∇×u^E. If ω^E ∈ L^5q/(5q-6)(0,T;L^q(Ω)) for q ≥ 9/5, then energy is conserved, provided that the first Betti number of Ω vanishes. This corollary follows directly from Theorem <ref> by using the estimates from von Wahl <cit.>, proving that if ∇·u^E = 0 in Ω and u^E·n = 0 on ∂Ω, then ‖∇u^E‖_p ≤ C_p ‖ω^E‖_p for 1 < p < ∞, provided that the domain has vanishing first Betti number, i.e. the dimension of the homology group ℍ^1(Ω,ℝ) is zero. Note that the result in Theorem <ref> (at least in the restricted range of q for which it is valid) is exactly the same as the one already known for weak solutions to the Navier-Stokes equations, cf. Thm. <ref> (in the latter case, observe that weak solutions satisfy, in addition, that the gradient of the velocity is space-time square-integrable). Theorem <ref> extends the results in Liu, Wang, and Ye <cit.>, where only the limit case q = 9/5 was considered. Moreover, and more significantly, we can handle not only the Cauchy problem, but also the boundary value problem in a smooth domain Ω⊂ℝ^3, in contrast with similar results by Wang et al. <cit.>. In the second part of the paper we further investigate the implications of Theorem <ref> for the energy conservation of Beltrami (also known as Trkal) flows, which are families of solutions characterized by a specific geometric constraint. Beltrami solutions play a significant role in fluid dynamics, as they represent a set of stationary solutions to the Euler equations (<ref>). Specifically, these solutions are characterized by the property that ω^E is proportional to the velocity field itself: ω^E(x,t) = λ(x,t) u^E(x,t), where λ(·,·) is a suitable scalar function of the time and/or space variables. The simplest and smoothest case corresponds to λ ≡ 0, i.e. potential flows are a particular case of Beltrami flows. We remark that Beltrami flows are genuinely 3D flows, since in the 2-dimensional setting ω is orthogonal to the plane of motion. Additionally, by employing the Lamb vector ω^E × u^E, it is possible to express the convective term in the following rotational formulation: (u^E·∇)u^E = ω^E × u^E + (1/2)∇|u^E|^2. Considering this formulation for Beltrami flows, the convective term amounts to a gradient (the Bernoulli pressure), which can be incorporated into the pressure. Formally, Beltrami flows satisfy linear (non-local) transport or Stokes equations, implying that the flow is laminar. However, there are two caveats: i) the numerical treatment of the pressure, and especially that of the Bernoulli pressure, leads to a stiff problem.
If pressure-robust numerical methods are not used, the results at very high Reynolds numbers could be affected by instabilities (see Gauger, Linke, and Schroeder <cit.>); ii) from a formal point of view, (ω^E × u^E)·u^E = 0 almost everywhere in (0,T)×Ω; however, for weak solutions ω^E is only a distribution, and this equality is not enough to imply that the integral ∫_0^T∫_Ω (ω^E × u^E)·u^E dx dt is well-defined. See a similar discussion in <cit.>. Extending to the viscous case results already sketched in <cit.>, we present some elementary observations on the regularity implied by the geometric constraint (<ref>). If λ(x,t) ≡ λ ∈ ℝ (a circularly polarized plane wave), then u^E is smooth, and the conservation of energy follows. This can be established through a standard bootstrap argument, assuming regularity conditions on the boundary of Ω and ℍ^1(Ω,ℝ) = 0. Consequently, a continuation argument for smooth solutions holds, provided that the initial datum is smooth. The second observation stems from a straightforward computation when λ(x,t) = λ(t) ∈ L^p(0,T), for some p ≥ 1. Note that in this scenario ∇·ω^E = λ(t)(∇·u^E) = 0, so that the divergence-free constraint is satisfied without requiring any additional assumption on λ(t). Consequently, from (<ref>), we deduce that ω^E ∈ L^p(0,T;L^2(Ω)), thereby implying better regularity of u^E. Specifically, for the Euler equations this implies u^E ∈ L^p(0,T;H^1(Ω)). By iterating this procedure, we establish that if λ(t) ∈ L^p(0,T) for some p ≥ 1 and u^E ∈ L^∞(0,T;L^2(Ω)), then ω^E ∈ L^(p/3)(0,T;H^2(Ω)) ↪ L^(p/3)(0,T;L^∞(Ω)). Hence, if p ≥ 3, the Beale-Kato-Majda <cit.> criterion for the continuation of smooth solutions applies. In the case of the Navier-Stokes equations, from ω ∈ L^p(0,T;L^2(Ω)) we deduce u ∈ L^p(0,T;H^1(Ω)) ↪ L^p(0,T;L^6(Ω)); thus, if p ≥ 2, the next iteration gives ω ∈ L^(p/2)(0,T;L^6(Ω)) and consequently ∇u ∈ L^(p/2)(0,T;L^6(Ω)). By applying the scaling-invariant criterion (if ∇u ∈ L^r(0,T;L^s(Ω)) with 2/r + 3/s = 2 and s > 3/2, then u is a strong solution to the NSE; cf. Beirão da Veiga <cit.> and <cit.> for the problem in a bounded domain), we obtain that if p ≥ 8/3, then 2/(p/2) + 3/6 ≤ 2. Therefore, an initial elementary result is as follows. Let u^E be a weak solution to the Euler equations (<ref>) which is a Beltrami field with λ ∈ L^p(0,T), for p ≥ 3. Let u_0^E ∈ H^3(Ω) ∩ V_τ; then u^E is the unique classical solution of (<ref>) in [0,T] and satisfies the energy equality. Let u be a weak solution to the Navier-Stokes equations (<ref>) which is a Beltrami field with λ ∈ L^p(0,T), for p ≥ 8/3. Let u_0 ∈ V_0; then u is the unique strong solution of (<ref>) in [0,T] and satisfies the energy equality. These two elementary examples also illustrate how easily one falls into a regularity class even under mild hypotheses on λ. Our goal is to identify classes in which the energy is conserved without the solution being trivially categorized into a smoothness class. In cases where λ depends on the spatial variables, maintaining the divergence-free condition requires ∇λ·u^E = 0. This condition influences the effective velocity fields, especially in classical solutions, as discussed by Beltrami <cit.>. Recent research by Enciso and Peralta <cit.> and Abe <cit.> explores the potential existence of non-trivial Beltrami fields. Hereafter, we consider weak solutions u^E and u which are Beltrami fields.
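The exponent bookkeeping behind these observations is elementary but easy to get wrong. The following is a minimal sketch (our own check, using only the exponents quoted above) of the scaling shared by the gradient criteria and of the bootstrap threshold p ≥ 8/3 for the Navier-Stokes case.

```python
from fractions import Fraction as F

def energy_time_exponent(q):
    """Pair r = 5q/(5q-6) with grad u in L^r(0,T; L^q), q >= 9/5, as in the
    gradient criterion above; all such pairs satisfy 5/r + 6/q = 5."""
    q = F(q)
    r = 5 * q / (5 * q - 6)
    assert 5 / r + 6 / q == 5
    return r

def nse_bootstrap_threshold():
    """lambda in L^p(0,T): omega = lambda*u in L^p(L^2) -> u in L^p(L^6)
    -> grad u in L^(p/2)(L^6); the criterion needs 2/(p/2) + 3/6 <= 2."""
    p = F(8, 3)
    assert 2 / (p / 2) + F(3, 6) == 2        # equality exactly at the threshold
    return p

print("q = 9/5 -> r =", energy_time_exponent(F(9, 5)))   # endpoint L^3(L^{9/5})
print("q = 3   -> r =", energy_time_exponent(3))
print("bootstrap regularity threshold: p >=", nse_bootstrap_threshold())
```

The assertion in the second function is exactly the computation 2/(p/2) + 3/6 ≤ 2 quoted above, solved at equality; any larger p falls strictly inside the regularity class.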
Concerning Beltrami solutions to the Euler equations in the three-dimensional torus, we refer also to <cit.>, where sufficient conditions on λ in fractional Sobolev spaces are found in order to have conservation of energy. The first result of this paper regarding the energy equality for solutions to problem (<ref>) which are also Beltrami fields is the following. Let u^E be a weak solution to the Euler equations (<ref>) which is a Beltrami field, i.e. (<ref>) is satisfied. Let λ ∈ L^α(0,T;L^β(Ω)), where β = 6α/(2α−5) if α > 5/2, and β = ∞ if α = 5/2. Then the velocity u^E satisfies the energy equality for a.e. t ∈ [0,T]. In the last part of this paper we study energy conservation for Leray-Hopf weak solutions of the Navier-Stokes equations (<ref>) when they are Beltrami fields as well. We recall that, for the Navier-Stokes equations, criteria for energy conservation were established in <cit.> and subsequently refined and extended to non-Newtonian fluids by Beirão and Yang <cit.>. Let u be a Leray-Hopf weak solution of (<ref>). Let us assume that one of the following conditions is satisfied: (i) ∇u ∈ L^q/(2q-3)(0,T;L^q(Ω)), for 3/2 < q < 9/5; (ii) ∇u ∈ L^5q/(5q-6)(0,T;L^q(Ω)), for q ≥ 9/5. Then the velocity u satisfies the energy equality (<ref>). Also in this case one can state the result using ω instead of ∇u, as in Corollary <ref>. Since the full trace of the velocity is zero at the boundary, the geometric conditions can be relaxed, and a smooth bounded domain is enough. Starting from Theorem <ref>, we provide the counterpart of Theorem <ref> for the Navier-Stokes equations. Let u be a Leray-Hopf weak solution of (<ref>) which is a Beltrami field. Let λ ∈ L^α(0,T;L^β(Ω)), where β = 6α/(2α−5) if α > 5/2, and β = ∞ if α = 5/2. Then the velocity u satisfies the energy equality for a.e. t ∈ [0,T]. The conditions on λ remain the same as in the case of the Euler equations, since the bootstrap process starts from u ∈ L^∞(0,T;L^2(Ω)) in both scenarios, and utilizing the information u ∈ L^2(0,T;L^6(Ω)) does not appear to enhance the regularity. What is somewhat different in this case is that, for the Euler equations, achieving ω ∈ L^1(0,T;L^∞(Ω)) ensures regularity; however, this is unattainable unless λ itself is bounded in the spatial variables. In contrast, for the Navier-Stokes equations, much less restrictive assumptions on λ suffice to fall into classes of uniqueness and regularity, as described in (<ref>). The arguments that lead to Theorem <ref>, when combined with classical scaling-invariant conditions for regularity of weak solutions, allow us to prove also the following result on the regularity of Beltrami fields. Before giving its precise statement, we first introduce the following decomposition of the half-line: (3,+∞) = ⋃_n∈ℕ I_n, where I_n = L_n ∪ R_n, with L_1 := (12,24] and L_n := (6(n+1)/(2n−1), 6(n+1)^2/(2n^2+n−2)] for n ≥ 2, and R_1 := (24,+∞) and R_n := (6(n+1)^2/(2n^2+n−2), 6n/(2n−3)] for n ≥ 2. Let u be a Leray-Hopf weak solution of (<ref>) corresponding to an initial datum u_0 ∈ V_0, and let u also be a Beltrami field. For any fixed β > 3, there exists a unique n = n(β) ∈ ℕ such that, if λ ∈ L^α(0,T;L^β(Ω)) and α satisfies 2/α + 3/β = 1 − 1/(2(n+2)) for β ∈ L_n, or 2/α + 3/β = 1 − 1/(2(n+1)) for β ∈ R_n, then u is the unique strong solution in (0,T). As a corollary of the above result, we have the following scaling-invariant criterion for regularity of weak solutions in terms of λ. Let u be a Leray-Hopf weak solution of (<ref>) corresponding to an initial datum u_0 ∈ V_0, and let u also be a Beltrami field.
If λ ∈ L^α(0,T;L^β(Ω)), where α and β satisfy 2/α + 3/β < 1, then there exists β_0 such that, for every β ∈ (3,β_0), u is the unique strong solution in (0,T). Plan of the paper: In Section <ref> we give the definitions of weak solutions to equations (<ref>)-(<ref>), introduce the functional spaces we will work with, and discuss the mollification procedures in space and time, presenting the related results that will be helpful in the rest of the paper. In Section <ref> we give the proofs of Theorems <ref>-<ref>, and in Section <ref> we prove Theorems <ref>-<ref>. § FUNCTIONAL SETTING, WEAK SOLUTIONS AND MOLLIFIERS In this section we first introduce the notation and the precise definitions of the solutions we deal with. Then, we compare our results with those in the existing literature. We will use the customary Sobolev spaces (W^k,p(Ω), ‖·‖_W^k,p), and we denote the L^p-norm by ‖·‖_p. We will not distinguish between scalar- and vector-valued spaces, since this will be clear from the context. For a Banach space X we will also denote the usual Bochner spaces of functions defined on [0,T] with values in X by (L^p(0,T;X), ‖·‖_L^p(X)). In the case X = L^q(Ω), we denote the norm of L^p(0,T;L^q(Ω)) simply by ‖·‖_p,q. §.§ On the weak solutions of the Euler and Navier–Stokes equations For the weak formulation of the 3D Euler equations (<ref>) and Navier–Stokes equations (<ref>), we introduce the following spaces: H_τ = {v ∈ L^2(Ω): ∇·v = 0 in Ω, v·n = 0 on ∂Ω}, V_τ = {v ∈ H^1(Ω): ∇·v = 0 in Ω, v·n = 0 on ∂Ω}. The Hilbert space H_τ is endowed with the natural L^2-norm ‖·‖_2 and inner product (·,·), while V_τ is endowed with the norm ‖∇v‖_2 and inner product ((u,v)) := (∇u, ∇v). The space of test functions used to define a weak solution of the Euler equations is the following: 𝒟_T = {φ ∈ C_0^∞([0,T[; C^∞(Ω)): ∇·φ = 0 in Ω, φ·n = 0 on ∂Ω}. Let u_0^E ∈ H_τ. A measurable function u^E: (0,T)×Ω → ℝ^3 is called a weak solution to the Euler equations if u^E ∈ C(0,T; w-H_τ) (continuous with respect to the weak topology) is such that ∫_0^T [(u^E, ∂_tφ) + ((u^E ⊗ u^E), ∇φ)] dt = −(u^E_0, φ(0)) ∀ φ ∈ 𝒟_T. For the functional setting of the Navier-Stokes equations (<ref>) we also introduce the space H, defined as the closure in the L^2(Ω)-norm of the space 𝒱 of smooth, divergence-free vector fields with compact support in Ω, as well as the space V_0, defined as the closure of 𝒱 with respect to the norm of W^1,2_0(Ω). A vector field u ∈ L^∞(0,T;H) ∩ L^2(0,T;V_0) is a Leray-Hopf weak solution to the Navier-Stokes equations (<ref>) if (i) u is a solution of (<ref>) in the weak sense, i.e. ∫_0^T (u, ∂_tϕ) − (∇u, ∇ϕ) − ((u·∇)u, ϕ) dt = −(u_0, ϕ(0)) for all ϕ ∈ C^∞_0([0,T[×Ω) with ∇·ϕ = 0; (ii) u satisfies the global energy inequality (1/2)‖u(t)‖^2_2 + ∫_0^t ‖∇u(s)‖^2_2 ds ≤ (1/2)‖u_0‖^2_2 ∀ t ≥ 0; (iii) the initial datum is attained in the strong sense of L^2(Ω): ‖u(t) − u_0‖_2 → 0 as t → 0^+. §.§ Mollification in space and time As usual, the proof of Theorem <ref> is mainly based on estimates of the nonlinear term in the energy balance, by means of suitable mollifications in space and time, separately. This is done in order to make the formal multiplication by u^E and integration by parts rigorously justified. To this end, we need a proper way to smooth a divergence-free vector field preserving both the boundary values and the incompressibility. §.§.§ Mollification in space For V_0, a standard density argument is enough to reach the desired approximation.
On the other hand, for the Euler equations, the condition u·n = 0 on the boundary makes the approximation more delicate. Hence, we need another way to approximate a weak solution to the Euler equations by smooth functions preserving both the divergence-free condition and the (at least slip) boundary condition. This kind of approximation is given in <cit.>, where the authors construct suitable mollifiers that preserve boundary conditions. Here we briefly recall the main definitions and results, and we refer to <cit.> (and the references therein) for the details. Let Ω ⊂ ℝ^3 be an open, bounded, and strongly Lipschitz connected set. Since Ω is bounded, we can find x_0 ∈ Ω and r_0 > 0 such that Ω ⊂ B_r_0(x_0), where B_r_0(x_0) is the ball centered at x_0 with radius r_0 > 0. Then the open set 𝒪 = B_r_0(x_0) ∖ Ω is a strongly Lipschitz domain, and it is possible to prove the existence of a vector field V ∈ C^∞(ℝ^3) which is globally transversal for 𝒪, i.e. there exists α > 0 such that n(x)·V(x) ≥ α for a.e. x ∈ ∂𝒪, and whose Euclidean norm satisfies |V(x)| = 1 for all x ∈ ∂𝒪. We then define the map θ_δ: x ∈ ℝ^3 ↦ x + δV(x) ∈ ℝ^3, with the following properties: θ_δ ∈ C^∞(ℝ^3) for all δ ∈ [0,1], and there exists ξ > 0 such that θ_δ(𝒪) + B_3δξ(0) ⊂ 𝒪 for all δ ∈ [0,1], where B_3δξ(0) is the ball centered at the origin with radius 3δξ. Let v ∈ L^1(Ω;ℝ^3) and let v̄ denote its extension by zero outside Ω. We can now define the desired mollifier as follows: (𝒦^div_δ v)(x) := ∫_B ρ(y) det(J_δ(x)) J^-1_δ(x) v̄(θ_δ(x) + (δξ)y) dy, where B is the unit ball centered at the origin, ρ(y) is the standard Friedrichs mollifier, and J_δ is the Jacobian matrix of the map θ_δ. Observe that J_δ satisfies, for all m ∈ ℕ and some positive constant c > 0, sup_x∈Ω ‖D^m(det(J_δ(x)) J^-1_δ(x) − 𝕀)‖ ≤ cδ, where 𝕀 ∈ ℝ^3×3 is the identity matrix in ℝ^3. Analogously, we define the regularization (𝒦^grad_δ v)(x) = ∫_B ρ(y) v̄(θ_δ(x) + (δξ)y) dy. In particular we have: For all δ ∈ (0,1] it holds that 𝒦^div_δ: L^1(Ω;ℝ^3) → C^∞_0(Ω;ℝ^3), and 𝒦^div_δ v admits a continuous extension to the closure of Ω. Defining, for 1 ≤ q ≤ ∞, V_div,q(Ω) = {v ∈ L^q(Ω): ∇·v ∈ L^q(Ω)} and V̄_div,q(Ω) = {v ∈ L^q(Ω): ∇·v̄ ∈ L^q(ℝ^3)}, the fundamental property of this operator is that ∇·𝒦^div_δ v = 0 if ∇·v = 0 and v·n = 0 (hence v ∈ V̄_div,q), and the following general convergence results hold true. There exists δ_0 such that the family 𝒦^div_δ, with δ ∈ [0,δ_0], is uniformly bounded in ℒ(L^q(Ω), L^q(Ω)) for all 1 ≤ q < ∞. Moreover, we have lim_δ→0 ‖𝒦^div_δ v − v‖_q = 0 for all v ∈ L^q(Ω), and lim_δ→0 ‖∇·(𝒦^div_δ v − v)‖_q = 0 for all v ∈ V̄_div,q(Ω). Similarly, there exists δ̄_0 such that the family 𝒦^grad_δ, with δ ∈ [0,δ̄_0], is uniformly bounded in ℒ(L^q, L^q) for all 1 ≤ q < ∞, and lim_δ→0 ‖𝒦^grad_δ v − v‖_L^q(Ω,ℝ) = 0 for all v ∈ L^q(Ω,ℝ), and lim_δ→0 ‖∇(𝒦^grad_δ v − v)‖_L^q(Ω) = 0 for all v ∈ V_grad,q(Ω). For the purposes of this paper, we need a boundedness result concerning the (full) gradient of 𝒦^div_δ v, which is not contained in <cit.>. In particular, we prove the following result. For every v ∈ W^1,q(Ω) and δ ∈ (0,1], there exists a positive constant C > 0 such that ‖∇𝒦^div_δ v‖_q ≤ C. Let us first compute the gradient of 𝒦_δ^div v̄: ∇𝒦_δ^div v̄ = ∇(det(J_δ(x)) J^-1_δ(x)) ∫_B ρ(y) v̄(θ_δ(x) + (δξ)y) dy + det(J_δ(x)) ∫_B ρ(y) ∇v̄(θ_δ(x) + (δξ)y) dy. Considering the L^q-norm, using the Minkowski inequality, and since v̄ ≡ v in Ω, we have ‖∇𝒦_δ^div v̄‖_q ≤ (∫_Ω |∇(det(J_δ(x)) J^-1_δ(x)) ∫_B ρ(y) v̄(θ_δ(x) + (δξ)y) dy|^q dx)^1/q + (∫_Ω |det(J_δ(x)) ∫_B ρ(y) ∇v(θ_δ(x) + (δξ)y) dy|^q dx)^1/q. By virtue of (<ref>), for every δ ∈ (0,1] we get sup_x∈Ω ‖∇(det(J_δ(x)) J^-1_δ(x))‖ ≤ cδ ≤ c.
Moreover, by definition of θ_δ, for every δ∈ (0,1], the following estimate holds sup_x∈Ω det(J_δ(x))≤ 1+c δ≤ 1+c. Finally, by Jensen's inequality and the properties of the Friedrichs mollifier we can write ∇ K_δ^divv_q≤ cv_q+(1+c)∇ v_q≤ C , for every δ∈ (0,1], ending the proof. The triangle inequality immediately implies the following result Let v ∈ W^1,q(Ω), then there exists a positive constant C>0, such that lim_δ→ 0∇ K_δ^divv-∇ v _q≤C. If v∈ L^q(Ω), Lemma <ref> guarantees the L^q-convergence of the regularized function K^div_δ v to v. Let us stress that if v∈ W^1,q(Ω), the W^1,q-convergence result does not hold in general. Indeed, let us suppose that v∈ W^1,q(Ω)∩ H and let us suppose that K^div_δ v→ v in W^1,q(Ω). Since K^div_δ v∈ C^∞_0(Ω), then by the trace inequality we would have v_L^q(∂Ω)=K^div_δ v-v_L^q(∂Ω)≤ C K^div_δ v-v_W^1,q(Ω)δ→0→ 0, implying v=0 almost everywhere on ∂Ω. But this is not necessarily true, since v only satisfies v· n =0 on ∂Ω. The bounds proved above show that ∇ K^div_δ v ⇀∇ v weakly in L^q(Ω), but the latter argument, in particular, shows that in general strong convergence does not hold. One could prove strong convergence in W^s,p(Ω) for all 0≤ s<1/p, that is for fractional spaces on which the trace operator is not defined. The last result we prove in this subsection concerns the uniform continuity in time of K^div_δ v when v is a function defined on [0,T]×Ω, and the smoothing concerns only the space variables. Let v ∈ C([0,T],L^q(Ω)), then for every q∈ [1,+∞) it holds lim_δ→ 0K_δ^divv-v_∞,q=0. For all fixed t∈ [0,T], by Lemmata <ref> and <ref> it follows that K_δ^divv(x,t) ∈ C^∞_0(Ω), and K_δ^divv(x,t) δ→ 0→v(x,t) in L^q(Ω), and we want to prove that the convergence is also uniform in time. By (<ref>), it follows that there exists a positive constant C>1 such that sup_x∈Ω det(J_δ(x))J^-1_δ(x)-𝕀_l^2≤ C. From the properties of the Friedrichs mollifier and the definition of the operator K_δ^div, we have for every q≥ 1 and for all fixed t ∈ [0,T] K_δ^divv(t)_q≤ C v(t)_q. Since v is uniformly continuous on [0,T] with values in L^q(Ω), for every q ∈ [1,+∞) we have: ∀ λ >0, ∃ δ̃>0: |t-s|<δ̃ ⟹ v(t)-v(s)_q<λ/(3C). Hence, let us choose a partition of the interval [0,T], say 0=t_0<t_1<⋯<t_N= T, such that |t_i+1-t_i|< δ̃, for all i=0,...,N-1. Then, for any t ∈ [0,T], there exists an index i_0∈{0,...,N}, such that |t-t_i_0|< δ̃. Therefore, by the triangle inequality, (<ref>)-(<ref>), and the fact that C>1, we get K_δ^divv(t)- v(t)_q≤K_δ^div(v(t)-v(t_i_0))_q + K_δ^divv(t_i_0)-v(t_i_0)_q + v(t_i_0)-v(t)_q ≤ (C+1)v(t_i_0)-v(t)_q+K_δ^divv(t_i_0)-v(t_i_0)_q < (C+1)/(3C)λ + K_δ^divv(t_i_0)-v(t_i_0)_q <2/3λ + K_δ^divv(t_i_0)-v(t_i_0)_q. Now, if we fix δ_0>0 such that max_i=0,...,NK_δ^divv(t_i)-v(t_i)_q< λ/3, ∀ δ∈ (0,δ_0), then we finally get K_δ^divv(t)-v(t)_q<λ, and the conclusion follows. §.§.§ Mollification in time Mollification in time is a standard Friedrichs mollification. Indeed we consider the (time) mollification operator, denoted in the sequel by (·)_ε, defined for a space-time function Φ: (0,T)×Ω→ℝ^3 by (Φ)_ε(t,x) :=∫_0^T k_ε(t-τ)Φ(τ,x) dτ for 0<ε<T, where k is a C^∞_0(ℝ), real-valued, non-negative even function, supported in [-1, 1], with ∫_ℝ k(s) ds =1, and k_ε(t):=ε^-1k(t/ε) (standard Friedrichs mollification with respect to the time variable). We end this section with some results connecting the two mollification operators. Let v ∈ L^1(0,T; L^1(Ω)), then the mollification operators in space and time commute, i.e. (K_δ^divv)_ε= K_δ^div(v_ε).
It is just an application of Fubini's Theorem; indeed (K_δ^divv)_ε = ∫_0^T k_ε(t-τ)∫_Bρ(y)det(J_δ(x))J^-1_δ(x)v̅(θ_δ(x)+(δξ)y,τ) dy dτ =∫_B ρ(y) det(J_δ(x))J^-1_δ(x)∫_0^T k_ε(t-τ)v̅(θ_δ(x)+(δξ)y,τ) dτ dy = K_δ^div(v_ε). As a consequence of Lemma <ref>, Corollary <ref>, and Proposition <ref>, we have the following two results. Let v ∈ L^p(0,T; W^1,q(Ω)) for any p,q ∈ (1,+∞). Then there exists a positive constant C>0, such that for any fixed ε>0, we have ∇ (K_δ^divv)_ε-∇(v)_ε_p,q≤C. Let v ∈ L^p(0,T; L^q(Ω)) for any p,q ∈ (1,+∞). Then for any fixed ε>0 we have lim_δ→0^+(K_δ^divv)_ε-(v)_ε_∞,q=0. § ON THE ENERGY EQUALITY FOR EULER WEAK SOLUTIONS. Before starting the proof we recall the following property of weak solutions. Let u^E_0∈ H and let u^E be a weak solution of (<ref>). Then, since u^E is weakly continuous in time, we consider it as being already redefined on a set of zero Lebesgue measure in such a way that u^E(t)∈ L^2(Ω) for all t∈[0,T) and it satisfies the identity (u^E(s), φ(s))= (u^E_0, φ(0))+∫_0^s [(u^E,∂φ/∂ t)- ((u^E ·∇) u^E, φ)] d τ, for all φ∈ C^∞_0([0,T[;C^∞(Ω)), with ∇·φ=0 in Ω, φ· n=0 on ∂Ω and all 0≤ s<T. The mollification in time allows one to prove directly that (u(t),(u)_ε(t))=u(t)^2/2+𝒪(ε), under the assumption of weak-L^2-continuity, see Galdi <cit.>. The first delicate point of the proof is to show that ∫_0^T((u^E·∇) u^E, u^E_ε) dt ⟶ 0 as ε→0^+, under the assumptions of the theorem, since this limit cannot be deduced only from the properties of weak solutions. The standard argument to prove the energy equality for strong solutions to (<ref>) consists in choosing as test functions in (<ref>) the solution itself. When dealing with weak solutions, as in our case, this procedure is not allowed, hence we need to consider as φ in (<ref>) a suitable regularization of u^E. In particular we deal with a double mollification in space and time separately, which we will denote by (u^E_δ)_ε. Here (·)_ε denotes the time-mollification introduced in Subsection <ref> and u^E_δ = 𝒦^div_δ u^E is the divergence preserving space-mollification introduced in Subsection <ref>. Note that the hypotheses of Theorem <ref> allow us to say that u^E∈V̅_div,q(Ω)⊂ V_div,q(Ω), since ∇ u^E∈ L^q(Ω), and ∇· u^E = 0 in Ω. Moreover, being u^E divergence free and tangential to the boundary, by Lemmata <ref>-<ref> u^E can be approximated by the sequence (<ref>), which converges in L^q(Ω) to u^E. A further key point of the proof will be given by Lemma <ref>, which gives the boundedness of the gradient of u_δ^E in L^q(Ω) for every δ∈(0,1]. We can now give the proof of the first result of this paper. Let 0<T<+∞ be given and let {u_δ^E} be the sequence defined in (<ref>). For some 0<ε<T, we choose as legitimate test function φ=(u_δ^E)_ε∈ C^∞_0(]0,T[;C_0^∞(Ω)), converging to u^E in L^2(0,T; L^∞(Ω))∩ L^p(0,T; L^q(Ω)). In this way we get the identity (u^E(T), (u_δ^E)_ε(T)) = (u_0^E, (u_δ^E)_ε(0)) +∫_0^T[(u^E,∂(u_δ^E)_ε/∂ t)- ((u^E ·∇) u^E, (u_δ^E)_ε)] d t. Our goal is to study the previous equality and to show that in the limit procedure it gives the energy equality for u^E. More precisely, we consider first the limit for δ→ 0^+, with ε>0 fixed, and then the limit as ε→0^+. It is the passage to the limit in the nonlinear term which deserves the most attention. Thus, we first focus on the following term ∫_0^T ((u^E ·∇) u^E, (u^E_δ)_ε) d t= ∫_0^T∫_0^T k_ε (t-τ) ((u^E(t) ·∇) u^E(t), u_δ^E (τ)) d τ dt.
We rewrite it as the sum of three integrals as follows ∫_0^T ((u^E ·∇) u^E, (u_δ^E)_ε) d t = ∫_0^T ((u^E ·∇) u^E, (u_δ^E)_ε-(u^E)_ε) d t +∫_0^T ((u^E ·∇) u^E, (u^E)_ε-u^E) d t +∫_0^T ((u^E ·∇) u^E, u^E) d t =: 𝕀_1 + 𝕀_2+𝕀_3, and we will prove that: 𝕀_1→ 0 as δ→0^+, with ε>0 fixed; 𝕀_2→ 0 as ε→0^+; 𝕀_3=0 provided that u is a weak solution satisfying the condition of Theorem <ref>. The integral 𝕀_3: As a first step we prove that 𝕀_3=∫_0^T ((u^E ·∇) u^E, u^E) d t=0, with u^E satisfying the hypothesis of Theorem <ref>. Let {u_m^E}⊂ C^∞_0(]0,T[;C^∞(Ω)) be any (not necessarily divergence-free or tangential to the boundary) given sequence converging to u^E in the space L^∞(0,T;H)∩ L^p(0,T;W^1,q(Ω)), which exists by a density argument. Since the field u_m^E is smooth and u^E is redefined in such a way that u^E(t)∈ H for all t∈[0,T), integrating by parts, we get ∫_0^T ((u^E ·∇) u_m^E, u_m^E) d t=0. Hence (<ref>) holds true as soon as we show the following convergence ∫_0^T ((u^E ·∇) u_m^E, u_m^E) d t →∫_0^T ((u^E ·∇) u^E, u^E) d t. We prove this convergence by splitting into two terms by the triangle inequality: ∫_0^T ((u^E ·∇) u_m^E, u_m^E) d t - ∫_0^T ((u^E ·∇) u^E, u^E) d t ≤∫_0^T ((u^E ·∇) u_m^E, (u_m^E-u^E)) d t +∫_0^T ((u^E ·∇) (u_m^E- u^E), u^E) d t. Throughout the proof we will use the notation x≲ y if there exists a positive constant c>0 such that x≤ c y, where c possibly depends only on p,q,Ω,T, but not on the solution u itself. Let q>9/5 and ∇ u^E ∈ L^5q/(5q-6)(0,T;L^q(Ω)), and set q̅:=2q'. By the Gagliardo-Nirenberg inequality, we have u^E_q̅≤ Cu^E_2^θ∇ u^E_q^1-θ, where 1/q̅= θ/2+ (1-θ)(1/q-1/3), and so θ = (5q-9)/(5q-6). We estimate the second term from the right-hand side of (<ref>) as follows ∫_0^T ((u^E ·∇) (u_m^E- u^E), u^E) d t ≤∫_0^Tu^E_q̅∇ (u_m^E- u^E)_q u^E_q̅ d t ≲ ∫_0^Tu^E_2^2θ∇ u^E_q^2(1-θ)∇ (u_m^E- u^E)_q d t ≲ u^E_∞,2^2θ∇ u^E_p,q^2(1-θ)∇ (u_m^E- u^E)_p,q, where in the first line we have applied the Hölder inequality with the three conjugate exponents q̅,q,q̅, in the second line the Gagliardo-Nirenberg inequality, and in the third line the Hölder inequality in the time variable with conjugate exponents r,s such that 2(1-θ)r=p and s=p, which gives p= 5q/(5q-6). Since ∇ u^E_m→∇ u^E in L^p(L^q(Ω)), we have that ∫_0^T ((u^E ·∇) (u_m^E-u^E), u^E) d t→ 0, as m→ +∞. Let us stress that when q=9/5, i.e. ∇ u^E ∈ L^3(L^9/5), the computation is similar. In this case, equations (<ref>)-(<ref>) show that θ=0 and q̅=q^* is the Sobolev exponent, so that the Gagliardo-Nirenberg inequality (<ref>) becomes the classical Sobolev embedding with q=9/5. Therefore, using the Hölder inequality in space with the three exponents (9/5)^*, 9/5, (9/5)^*, the Sobolev embedding and the Hölder inequality in time with conjugate exponents 3 and 3/2, we get ∫_0^T ((u^E ·∇) (u_m^E- u^E), u^E) d t ≲∇ u^E_3,9/5^2∇ (u_m^E- u^E)_3,9/5, arriving at the same conclusion as in the previous case. Analogously, we estimate the first term of the right-hand side of (<ref>), when q>9/5: ∫_0^T ((u^E ·∇) u_m^E ,(u_m^E-u^E)) d t ≤∫_0^Tu^E_q̅∇ u_m^E_q u_m^E-u^E_q̅ d t ≲∫_0^Tu^E_2^θ∇ u^E_q^(1-θ)∇ u_m^E_q u_m^E-u^E_2^θ∇(u_m^E-u^E)_q^(1-θ) d t ≲u^E_∞,2^θu_m^E-u^E_∞,2^θ∫_0^T∇ u^E_q^(1-θ)∇ u_m^E_q ∇(u_m^E-u^E)_q^(1-θ) d t ≲u^E_∞,2^θu_m^E-u^E_∞,2^θ∇ u^E_p,q^(1-θ)∇ u_m^E_p,q∇(u_m^E-u^E)_p,q^(1-θ), where again we have used the Gagliardo-Nirenberg inequality in the third line and the Hölder inequality in the time variable with conjugate exponents x,y,z such that x(1-θ)=p, y=p and z(1-θ)=p. For m→ +∞ we have the desired convergence. When q=9/5, we argue as before.
By using the Hölder inequality in space with the three exponents (9/5)^*, 9/5, (9/5)^*, the Sobolev embedding and the Hölder inequality in time with the three conjugate exponents 3, 3, 3, we get ∫_0^T ((u^E ·∇) u_m^E ,(u_m^E-u^E)) d t ≲∇ u^E_3,9/5∇ u_m^E_3,9/5∇(u_m^E-u^E)_3,9/5. Even in this case we have the desired convergence when m → +∞. Collecting the above estimates we conclude the proof of (<ref>). The integral 𝕀_1: Next we show that the first integral from the right hand side of (<ref>) converges to zero. When q>9/5, applying again the Hölder, the Gagliardo-Nirenberg and the Sobolev inequalities we get 𝕀_1=∫_0^T ((u^E ·∇) u^E, (u_δ^E)_ε-(u^E)_ε) d t ≤∫_0^Tu^E_q̅∇ u^E_q (u_δ^E)_ε-(u^E)_ε_q̅ d t ≲∫_0^Tu^E_2^θ∇ u^E_q^1-θ∇ u^E_q (u_δ^E)_ε-(u^E)_ε_2^θ∇((u_δ^E)_ε-(u^E)_ε)_q^1-θ d t ≲u^E_∞,2^θ(u_δ^E)_ε-(u^E)_ε_∞,2^θ∫_0^T∇ u^E_q^2-θ∇((u_δ^E)_ε-(u^E)_ε)^1-θ_q d t ≲u^E_∞,2^θ(u_δ^E)_ε-(u^E)_ε_∞,2^θ∇ u^E_p,q∇((u_δ^E)_ε-(u^E)_ε)_p,q, where in the last line we have applied the Hölder inequality in the time variable with conjugate exponents x,y, such that x(2-θ)=p and y(1-θ)=p, which gives p=5q/(5q-6). Now, Corollary <ref> ensures that lim_δ→ 0^+(u_δ^E)_ε-(u^E)_ε_∞,2=0, and, moreover, by Corollary <ref> we also have ∇((u_δ^E)_ε-(u^E)_ε)_p,q≤ C which, together with the previous estimate, implies that lim_δ→ 0^+𝕀_1=lim_δ→0^+∫_0^T ((u^E ·∇) u^E, (u^E_δ)_ε-(u^E)_ε) d t =0 holds, for each fixed ε>0. The case q=9/5 can be treated in an easier way than the previous one, since by a simple application of the Hölder and Sobolev inequalities in space and the Hölder inequality with conjugate exponents 3/2 and 3 in time, we can write 𝕀_1 ≲∇ u^E_3,9/5^2 (u_δ^E)_ε-(u^E)_ε_3,(9/5)^*, obtaining the desired convergence. Finally, the proof of the convergence 𝕀_2→0 as ε→ 0^+ follows by similar estimates. We can now conclude the passage to the limit in (<ref>) as δ and ε go to zero. All the previous convergence results allow us to conclude that the convective term tends to zero, that is lim_ε→0^+lim_δ→ 0^+∫_0^T ((u^E ·∇) u^E, (u^E_δ)_ε) d t=0. The term involving the time derivative of k_ε vanishes identically, i.e. ∫_0^T(u^E,∂(u^E_δ)_ε/∂ t)=0, since k_ε is even. The mollification in space and time allows one to prove directly that lim_ε→ 0^+lim_δ→ 0^+(u^E_0,(u^E_δ)_ε(0))=u^E_0^2/2, lim_ε→ 0^+lim_δ→ 0^+(u^E(T),(u^E_δ)_ε(T))=u^E(T)^2/2, under the assumption of weak-L^2-continuity, see Galdi <cit.>. Passing to the limit as δ→0^+ in (<ref>) we then obtain (u^E(t_0), (u^E)_ε(t_0)) = (u_0^E, (u^E)_ε(0)) +∫_0^t_0[(u^E,∂(u^E)_ε/∂ t)- ((u^E ·∇) u^E, (u^E)_ε-u^E)] d t, and we now let ε→ 0^+, so that the convective term vanishes. In conclusion, from (<ref>), we get u^E(T)^2= u^E_0^2, which is (<ref>) for t=T. We notice that the energy equality is still valid for almost every t_0∈ (0,T), 0<ε<t_0, just by considering a redefinition of the mollification in time as an integral between 0 and t_0. The proof of Theorem <ref> is now complete. §.§ Energy equality for Euler Beltrami fields Having finished the proof of the criterion for energy conservation in terms of the gradient, we can now prove a criterion for energy conservation which uses the relation between the vorticity and the velocity for solutions with the special geometrical constraint (<ref>). This should also be compared with the results in <cit.>, where an “analytic” combination of the two quantities was considered, and with the results from <cit.>, for fractional spaces (both references deal with the space periodic setting).
Let u^E∈ L^∞(0,T;L^2(Ω)) be a weak solution to problem (<ref>) which is a Beltrami field (<ref>) and let λ∈ L^α(0,T;L^β(Ω)). Direct applications of the Hölder inequality in space and time show that ω^E ∈ L^α(0,T;L^2β/(2+β)(Ω)), which in turn implies that ∇ u^E is in the same Bochner space (again by using the results from <cit.>). It is now convenient to make a change of notation: let us determine q in terms of (α,β) as follows α = 5q/(5q-6) and 2β/(2+β)= q, in such a way that (<ref>) is equivalent to the condition (<ref>) on ∇ u^E (this is possible if (α,β) are as in Theorem <ref>). Equivalently, the statement of the theorem can be rewritten, in terms of q, as follows λ∈ L^5q/(5q-6)(0,T;L^2q/(2-q)(Ω)) with 6/5< q<2, and different choices of the leading parameter can be used for a better interpretation of the results. Hence, if q∈[9/5,2[ (or equivalently α∈]5/2,3]) by Theorem <ref> we have conservation of energy. Let now q∈ (6/5,9/5) and let us show that if we iterate the procedure further, we still have conservation of energy after a suitable number of iterations. Indeed, notice that since ∇ u^E ∈ L^5q/(5q-6)(0,T;L^q(Ω)), then u^E ∈ L^5q/(5q-6)(0,T;W^1,q(Ω)), and this follows by the Poincaré inequality, which is valid even for tangential vector fields in the case of a bounded domain, see Galdi <cit.>. Then, the Sobolev embedding theorem implies also that u^E ∈ L^5q/(5q-6)(0,T;L^q^*(Ω)), where q^*= 3q/(3-q). Since u^E is a Beltrami field, we then have again (by further applications of the Hölder inequality) that ∇ u^E ∈ L^5q/(2(5q-6))(0,T;L^6q/(12-5q)(Ω)). If q_1:=6q/(12-5q)≥ 9/5 (which corresponds to q∈[36/25,9/5[) we end the proof, due to the fact that p_1=5q/(2(5q-6))=5 q_1/(5q_1-6), and this fits again with the hypotheses of Theorem <ref>. On the other hand, if q_1<9/5 this is not enough to end the proof, but we can further iterate the same argument. This leads to defining the following two sequences of indices p_n := 5q/((n+1)(5q-6)) and q_n := 6q/(6(n+1)-5n q), for n≥ 1, and we observe that the iterative procedure (the step k↦ k+1 is well defined if p_k≥1+1/(k+1) and q_k<3, for k∈ℕ) implies that ∇ u^E ∈ L^p_n(0,T;L^q_n(Ω)) with p_n= 5q_n/(5q_n-6). Hence, in view of Theorem <ref>, the condition ∇ u^E ∈ L^p_n(0,T;L^q_n(Ω)) implies conservation of energy, as soon as we can find n_0∈ℕ such that one term q_n_0 of the strictly increasing sequence {q_n} satisfies q_n_0≥ 9/5. This condition is equivalent (in terms of q∈]6/5,36/25[) to the following requirement 6/5<18(n+1)/(5(3n +2))≤ q<6/5+6/(5n)≤9/5. The lower bound for q is strictly decreasing to 6/5 as n→ +∞. In this way, for any q∈ (6/5,36/25), it is sufficient to choose n_0∈ℕ such that n_0≥(18-10q)/(15q-18) to ensure that q_n_0≥ 9/5. Thus, after n_0 steps of the iteration, Theorem <ref> can be applied. Collecting all the results, we conclude the proof in the range q ∈ (6/5,2). The case q=2, which corresponds to α=5/2, can be studied separately. Indeed if λ∈ L^5/2(0,T;L^∞(Ω)), then we can stop at the first iteration, since we have ∇ u^E ∈ L^5/2(0,T;L^2(Ω)), which is (<ref>) for q=2. The iterative bootstrap argument we used does not “improve” the known regularity of the solution u^E, in terms of scaling. This means that at each step we get that ∇ u^E∈ L^5q/(5q-6)(0,T;L^q(Ω)), with different values of q; the iteration is needed just to ensure that q≥9/5, to enter the range of validity of Theorem <ref>. The situation will be rather different in the case of the Navier-Stokes equations, as we will see in the next section.
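For the reader's convenience, here is a short verification of the arithmetic behind the iteration (our own check, not spelled out in the original): the indices p_n, q_n remain on the scaling line of Theorem <ref>, since 5q_n-6 = (30q(n+1)-36(n+1))/(6(n+1)-5nq) = 6(n+1)(5q-6)/(6(n+1)-5nq), so that 5q_n/(5q_n-6) = 30q/(6(n+1)(5q-6)) = 5q/((n+1)(5q-6)) = p_n. As a concrete instance, take q=13/10 ∈ (6/5,36/25): then (18-10q)/(15q-18)=5/(3/2)=10/3, so n_0=4 iterations suffice, and indeed q_4 = 6q/(30-20q) = (39/5)/4 = 39/20 ≥ 9/5.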
§ ON THE ENERGY EQUALITY AND REGULARITY FOR NSE WEAK (BELTRAMI) SOLUTIONS In this section we study the energy conservation and regularity for a Leray-Hopf weak solution to the Navier-Stokes equations (<ref>) which is also a Beltrami field, proving Theorems <ref>-<ref>. The proof is very similar to that of Theorem <ref>, but one relevant difference (this also justifies the slight change of notation) is that we also have to check that the iteration process does not imply that the solution enters (by a bootstrap argument) into a class of regularity for which the energy equality holds trivially. This is particularly delicate for the NSE: contrary to the Euler equations, where practically only the Beale-Kato-Majda criterion holds, various less restrictive scaling-invariant conditions are known in the viscous case, as for instance (<ref>), which we will employ. Hence, in this case the task will be not only to show that the gradient belongs to a space with the exponents as in Theorem <ref>, but also to show that they do not reach the class in (<ref>). Note that the iteration we handled in the previous theorem was needed just to reach an exponent q in the space variables larger than or equal to 9/5. The exponents p_n,q_n defined in the proof of the previous theorem satisfy conditions as in Theorem <ref>, which are outside the regularity class, for all n≤ n_0. Hence, if we perform iterations up to the first n∈ℕ such that q_n>3, the gradient still belongs to (<ref>) and hence not to (<ref>). In this case we also have to see what happens when iterating further: one more iteration shows that the velocity belongs to L^∞(Ω), possibly drastically changing the known scaling of the solution. Let then u be a Leray-Hopf weak solution such that ω(x,t)= λ(x,t)u(x,t), with λ∈ L^α(0,T;L^β(Ω)), for some α,β≥ 1 to be determined in order to obtain conservation of energy. We know that u∈ L^∞(L^2)∩ L^2(L^6), being a Leray-Hopf weak solution. In particular, by interpolation we get that u∈ L^γ(L^δ), with γ and δ satisfying 2/γ+3/δ= 3/2. We would like to achieve ∇ u ∈ L^5q/(5q-6)(L^q) so as to exploit the previous results for energy conservation. To this aim, we impose the following system of equations for γ,δ, α, β: γα/(γ+α)= 5q/(5q-6), δβ/(δ+β)= q, 2/γ+3/δ= 3/2, which is equivalent to the following set of equalities α = 20δ q/(30q+δ(5q-24)), β = δ q/(δ-q), γ = 4δ/(3δ-6), where q and δ are chosen as the independent parameters. Notice that the former equalities are derived under the condition that the following bounds hold 24δ/(30+5δ)< q < δ. Then, let us observe that if δ∈ [18/5,6], then q ≥ 9/5, so that we fall in case (ii) of Theorem <ref>, which then gives the result. If δ∈ [2,18/5), then q could be less than 9/5: in this case we need to iterate the procedure following the same argument as in the proof of Theorem <ref>. More precisely, it is enough to consider the case δ = 2. For this particular choice of δ=2, the bound for q reads as 6/5<q<2. Now, if q ∈ [9/5,2), we fall in the previous case. If q∈ (6/5,9/5) we follow identically the iteration computed in the proof of Theorem <ref> to get the conclusion. Since u∈ L^∞(0,T;L^2(Ω)), by using (<ref>) we get ∇ u ∈ L^α(0,T;L^2β/(2+β)(Ω)). Before introducing the iteration procedure, we rename the time and space exponents as follows p_1 = α, and q_1 = 2β/(2+β). In order to apply Theorem <ref>, we need to require that q_1>9/5 and p_1 = 5q_1/(5q_1-6), that is, β>18 and β = 6α/(2α-5), which is equivalent to requiring that β = 6α/(2α-5) for 5/2<α <3, and so p_1 = α, q_1 = 6α/(5α-5) with 5/2<α <3.
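As a quick check of this substitution (ours, for clarity): with β = 6α/(2α-5) one has 2+β = (10α-10)/(2α-5), hence q_1 = 2β/(2+β) = 12α/(10α-10) = 6α/(5α-5), and 5q_1-6 = 30/(5α-5), so that 5q_1/(5q_1-6) = α = p_1; moreover, q_1>9/5 is equivalent to α<3, while β>0 forces α>5/2.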
These requirements guarantee the conservation of energy. Moreover, we stress again the fact that the solution does not belong to the regularity class (<ref>); indeed from (<ref>) we have (recall that at this step q_1<3) 2/p_1+3/q_1=5/2-1/(2α)= 2+(1/2-1/(2α))> 2. If instead q_1 = 2β/(2+β) < 9/5, which means β <18 (or equivalently α > 3), then, as in the proof of the previous Theorem <ref>, we need to iterate the procedure. From ∇ u ∈ L^p_1(0,T;L^q_1(Ω)), we get u ∈ L^p_1(0,T;L^q_1^*(Ω)) by the Sobolev embedding theorem, with q_1^*= (2β/(2+β))^* = 6β/(6+β). Since u is a Beltrami field, it then follows that ∇ u ∈ L^α/2(0,T;L^6β/(12+β)(Ω)). Here, we call p_2 = α/2 and q_2 = 6β/(12+β), and to have conservation of energy we need to require that q_2 > 9/5 and p_2= 5q_2/(5q_2-6), that is, β > 36/7 and β = 6α/(2α-5), which can be satisfied if 3<α < 6. In particular, we can rewrite everything in terms of α as ∇ u∈ L^α/2(0,T;L^6α/(5α-10)(Ω)), and even in this case we have conservation of energy, but we do not fall in the regularity class (<ref>) since 2/p_2+3/q_2=2 + (1/2-1/α)>2. If q_2 < 9/5, then we need to iterate again. Proceeding as before we can construct two sequences of exponents (written in terms of α as follows) p_n = α/n, q_n = 6α/(5α-5n), with α > n and such that at the n-th step we have ∇ u ∈ L^p_n(0,T;L^q_n(Ω)), n≥ 3, where we note that p_n = 5q_n/(5q_n-6), cf. Theorem <ref>. In particular, if at the (n-1)-th step we have q_n-1<9/5, then to have conservation of energy we need to require that q_n>9/5, which gives α < 3n. Even in this case, to have a non-trivial result we need to check that we do not fall in the regularity class. Indeed, since α > n we have 2/p_n+3/q_n= 2+(1/2-n/(2α)) > 2. The previous iteration argument shows that for any fixed α > 5/2 and β = 6α/(2α-5), in order to have conservation of energy we need to iterate the algorithm a number n_0∈ℕ of times, such that α/3< n_0 < [α], where [·] denotes the integer-part function. To conclude the proof we need to be careful about q_n being smaller or larger than 3. Indeed, as long as q_n<3 we keep on iterating and constructing the sequences of exponents as we showed previously, which are the same also as in Theorem <ref>, but written with a more convenient notation. On the other hand, if q_n>3, even if we are already in a class of energy conservation, we can still further iterate the process to see if there is a further improvement in the regularity (beyond energy conservation). In the case q_n>3, the Sobolev embedding theorem applied to (<ref>) implies that u ∈ L^p_n(0,T;L^∞(Ω)), which consequently proves that ∇ u ∈ L^α/(n+1)(0,T;L^β(Ω)). Note that the space integrability for the gradient never goes beyond β, being obtained through multiplication by λ and the Hölder inequality. In this last case we get (in terms of α) p_n+1= α/(n+1), and q_n+1 = β=6α/(2α-5). Even though we have conservation of energy (since the solution was already in a class as in Theorem <ref>), in this case we still need to check if (<ref>) holds. The condition q_n>3 is equivalent to n> 3α/5, and so, considering (<ref>), we get 2/p_n+1+3/q_n+1 =(4(n+1)-5)/(2α)+1, and the right-hand side is strictly greater than 2 if and only if (4(n+1)-5)/(2α)>1, i.e., n > α/2+1/4. But, since (<ref>) is satisfied, we ask whether 3α/5≥α/2+1/4, and this gives α≥5/2. This shows that even in the case of q_n>3 we do not fall in a regularity class. Note that a further iteration will not improve the space exponent, but will reduce the one in the time variable, hence not improving the known regularity of u. Hence, the proof is concluded in the case of α > 5/2.
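A compact recap of the last step's arithmetic (our own, for the reader): with p_n+1=α/(n+1) and q_n+1=6α/(2α-5) one has 2/p_n+1+3/q_n+1 = 2(n+1)/α + (2α-5)/(2α) = (4(n+1)-5)/(2α)+1, which exceeds 2 exactly when n > α/2+1/4; since q_n>3 already forces n>3α/5, and 3α/5 ≥ α/2+1/4 holds precisely for α≥ 5/2, the solution never enters the regularity class along this branch.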
The case α = 5/2 follows the same argument as in the proof of Theorem <ref>. Indeed, if λ∈ L^5/2(0,T;L^∞(Ω)), then we can stop at the first iteration, since we have ∇ u ∈ L^5/2(0,T;L^2(Ω)), which is not a regularity class but falls immediately in case (ii) of Theorem <ref>. Further iterations will not improve the regularity, cf. also Proposition <ref>. Since we identified classes of weak solutions for which the energy conservation holds true without the solutions being strong (hence regular), we now prove Theorem <ref>, which describes natural assumptions on λ implying regularity of solutions (hence, a fortiori, also uniqueness and energy conservation). Let u be a Leray-Hopf solution to problem (<ref>), which is also a Beltrami field. Let λ∈ L^α(0,T;L^β(Ω)). As before, we get as a first step ∇ u ∈ L^α (0,T;L^2β/(2+β)), and this implies that u is a strong solution if 2/α+3(2+β)/(2β) = 2, that is, 2/α+3/β= 1/2, with 3/2< 2β/(2+β), which gives β > 6 and, within this range, α = 4β/(β-6). Let us notice that 2β/(2+β)<3, and this tells us that we can iterate the procedure. As in the second step of the previous theorem we get ∇ u ∈ L^α/2(0,T;L^6β/(12+β)(Ω)), and the scaling-invariant regularity condition connected to the above integrability exponents reads as 2/α+3/β= 3/4. Now we have to distinguish two cases. i) If β > 12, then the space exponent is q_2=6β/(12+β)>3. In this case the Sobolev embedding yields u ∈ L^α/2(0,T;L^∞(Ω)), so that the next step of the iteration gives ∇ u ∈ L^α/3(0,T;L^β(Ω)), which corresponds to the following regularity condition 2/α+3/β= 2-4/α. In order to optimize the choice of α and β, we need to compare the two conditions (<ref>) and (<ref>). To this aim we express α as a function of β, getting respectively α̅=8β/(3β-12) and α̃ = 6β/(2β-3). It is easy to check that α̅<α̃ if and only if β > 24. Thus, if β > 24, we choose α=α̅, i.e. such that the condition (<ref>) holds, which corresponds to (<ref>)_2 when n=1. If β∈ (12,24], α̃ is the optimal exponent and the condition (<ref>)_1 (in the case of n=1) is obtained by (<ref>) with the choice of α=α̃(12) = 24/7. Notice that condition (<ref>)_1, for n=1, implies condition (<ref>) for any choice of α=α̃(β), with β∈ (12,24]. ii) Let now β∈ (3,12). In this case q_2<3 and we can keep iterating. As we have already done in the previous theorem, we find two sequences of exponents p_n = α/n, q_n = 6β/(6n-(2n-5)β), n≥ 2, such that ∇ u ∈ L^p_n(0,T;L^q_n(Ω)). The solution is regular if 2/p_n+3/q_n= 2, that is, 2/α+3/β=1-1/(2n), which gives α_n = 4nβ/(2n(β-3)-β). In order to go further in the iteration process we have to impose 3/2< q_n < 3, which reads as 6n/(2n-1)<β < 6n/(2n-3). Let us now suppose that at the (n+1)-th step we have q_n+1>3, that is, more precisely, q_n<3 and q_n+1>3, which is equivalent to β∈ I_n := ( 6(n+1)/(2n-1),6n/(2n-3)). In particular, restating the previous inequality in terms of n, we fix the number of iterations we can make. Indeed we have (β+6)/(2β-6)< n < 3β/(2β-6), and since the length of this interval is 3β/(2β-6)-(β+6)/(2β-6)=1, for any fixed β∈ (3,12) there exists a unique n≥ 2 such that (<ref>) occurs. Now, since q_n+1>3, the condition ∇ u ∈ L^p_n+1(0,T;L^q_n+1(Ω)) implies by the Morrey-Sobolev embedding that u ∈ L^p_n+1(0,T;L^∞(Ω)) and, iterating once more, ∇ u ∈ L^p_n+2(0,T;L^β(Ω)). In this case we have regularity of the solution if 2(n+2)/α+3/β=2, that is, 2/α+3/β= 2-2(n+1)/α. In particular, we can express the integrability exponent of λ with respect to time in terms of β as follows: α_n+2= 2(n+2)β/(2β-3).
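It may be worth noting (our own consistency check) that the formula α_n = 4nβ/(2n(β-3)-β) matches case i): for n=2 it gives α_2 = 8β/(4(β-3)-β) = 8β/(3β-12) = α̅, i.e. exactly the exponent obtained there from the condition 2/α+3/β = 3/4 = 1-1/(2n) with n=2, as it should.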
We stress the fact that at this point we can stop iterating, since ∇ u would end up in a more regular class. To conclude, we need to understand what is the best choice between α_n+1, given in (<ref>), and α_n+2; to start, we observe that α_n+1> α_n+2 if and only if β < 6(n+1)^2/(2n^2+n-2), a threshold which lies in I_n. We already defined L_n and R_n in (<ref>), and the above argument tells us that we need to choose α_n+2 when β∈ L_n, and α_n+1 when β∈ R_n. The choice of α_n+1 or α_n+2 corresponds respectively to the choice of α satisfying 2/α+3/β= 1-1/(2(n+1)) or 2/α+3/β= 2-2(n+1)/α, when β belongs to R_n or L_n, respectively (note that α>n+1). Let us notice that if we evaluate α_n+2 in β_0 = 6(n+1)/(2n-1) (which is the left endpoint of L_n), we get α_n+2(β_0)= 4(n+1)(n+2)/(2n+5), and substituting the latter in the left-hand side of the second equation in (<ref>) we have 2/α+3/β= 1-1/(2(n+2)), showing that α is continuous also between the intervals I_n and I_n+1. In particular, condition (<ref>) implies the one contained in (<ref>). As in Theorem <ref>, by interpolation between u∈ L^∞(L^2) and u∈ L^2(L^6) we get that u∈ L^γ(L^δ), with γ,δ satisfying 2/γ+3/δ= 3/2. Let us assume that λ∈ L^α(L^β), with α,β to be determined. By the Hölder inequality ∇ u ∈ L^αγ/(α+γ)(L^βδ/(β+δ)). We recall the classical results of Ladyzhenskaya, Prodi and Serrin, which show regularity of the solutions to (<ref>) if the following scaling-invariant condition holds 2(α+γ)/(αγ)+3(β+δ)/(βδ)= 2, which can be rewritten as 2/γ+2/α+ 3/δ+3/β=2. By (<ref>), we get 2/α+3/β= 1/2, that is, α = 4β/(β-6). We stress the fact that α depends only on β, and not on δ and γ. We stress the fact that Theorem <ref> can be stated in a more precise way. In fact, when β∈ L_n, the exact condition for the regularity is given by (<ref>)_2. Hence, another way to state it could be the following, where we express α as a function of β. Let β>3. For every n≥ 1, if λ∈ L^4(n+1)β/(2(n+1)(β-3)-β)(0,T;L^β(Ω)) when β∈ R_n, or λ∈ L^2(n+2)β/(2β-3)(0,T;L^β(Ω)) when β∈ L_n, where L_n and R_n are defined in (<ref>), then the solution u is strong. From the proof of Theorem <ref> we know that at every step n≥ 1 of the iteration, α and β satisfy 1-1/(2(n+1))≤2/α+3/β≤ 1-1/(2(n+2)). Since the left-hand side is a strictly increasing function of n converging to 1, we can say that 2/α+3/β∈[3/4,1). Of course, when the previous quantity is less than 3/4, regularity holds a fortiori. The result we proved is the most precise in terms of the exponents, but it is rather difficult to interpret. As a direct consequence of Theorem <ref>, we have stated Corollary <ref>, which has a clear and simple statement (even if it is not the sharpest result), and which we can now easily prove. By Remark <ref> we can suppose that 3/4≤2/α+3/β< 1, and, for each couple (α,β) satisfying the hypotheses of Corollary <ref>, there exists a unique n̅∈ℕ such that 1-1/(2(n̅+1))≤2/α+3/β≤ 1-1/(2(n̅+2)). Now, if we choose β_0 := 6(n̅+1)/(2n̅-1), it turns out to be the left endpoint of the interval L_n̅, and then for every β∈ (3,β_0) we have that β∈⋃_n≥n̅+1 I_n= (3,β_0). In order to conclude, we observe that condition (<ref>) implies the ones given by (<ref>) for every n≥n̅+1, and the proof is complete. Let us now fix an arbitrary β̃∈ I_n, with n>n̅. Let us find the range of α in (<ref>) such that the inequality still holds true when we substitute β̃ for β. In this case we have 4(n+1)β̃/((2n+1)β̃-6(n+1))≤α≤4(n+2)β̃/((2n+3)β̃-6(n+2)). Moreover, since β̃∈ I_n, we have 4(n+1)β̃/((2n+1)β̃-6(n+1))≤α̃≤4(n+2)β̃/((2n+3)β̃-6(n+2)).
Now, if α< α̃, we have finished, since we would have ∇ u ∈ L^α̃(L^β̃)⊂ L^α(L^β̃). But this is true, since the left-hand side in (<ref>) is strictly greater than the right-hand side in (<ref>) when n<n̅+1. The arbitrariness of β̃<β_0 guarantees that we have 2/α+3/β<1 ∀ β∈⋃_n> n̅ I_n= (3,β_0). § ACKNOWLEDGMENTS The authors are members of INdAM GNAMPA and they are funded by MIUR within project PRIN20204NT8W “Nonlinear evolution PDEs, fluid dynamics and transport equations: theoretical foundations and applications” and MIUR Excellence, Department of Mathematics, University of Pisa, CUP I57G22000700001. EC is also funded by PRIN2022T9K54B “Classical equations of compressible fluids mechanics: existence and properties of non-classical solutions”. § CONFLICTS OF INTEREST AND DATA AVAILABILITY STATEMENT The authors declare that there is no conflict of interest. Data sharing not applicable to this article as no datasets were generated or analyzed during the current study. Abe2022 K. Abe. Existence of vortex rings in Beltrami flows. Comm. Math. Phys., 391(2):873–899, 2022. BT2018 C. Bardos and E.S. Titi. Onsager's conjecture for the incompressible Euler equations in bounded domains. Arch. Ration. Mech. Anal., 228(1):197–207, 2018. BKM1984 J.T. Beale, T. Kato, and A. Majda. Remarks on the breakdown of smooth solutions for the 3-D Euler equations. Comm. Math. Phys., 94(1):61–66, 1984. Bei1995a H. Beirão da Veiga. A new regularity class for the Navier-Stokes equations in ℝ^n. Chinese Ann. Math. Ser. B, 16(4):407–412, 1995. A Chinese summary appears in Chinese Ann. Math. Ser. A, 16(6):797. BY2019 H. Beirão da Veiga and J. Yang. On the energy equality for solutions to Newtonian and non-Newtonian fluids. Nonlinear Anal., 185:388–402, 2019. Bel1873 E. Beltrami. Sui principii fondamentali dell'idrodinamica razionale. Mem. dell'Accad. Scienze Bologna, page 394, 1873. Ber2002a L. C. Berselli. On a regularity criterion for the solutions to the 3D Navier-Stokes equations. Differential Integral Equations, 15(9):1129–1137, 2002. Ber2021 L. C. Berselli. Three-dimensional Navier-Stokes equations for turbulence. Mathematics in Science and Engineering. Academic Press, London, 2021. BC2020 L. C. Berselli and E. Chiodaroli. On the energy equality for the 3D Navier-Stokes equations. Nonlinear Anal., 192:111704, 24, 2020. BG2024 L. C. Berselli and S. Georgiadis. Three results on the energy conservation for the 3D Euler equations. NoDEA Nonlinear Differential Equations Appl., 31:33, 2024. BS2024 L. C. Berselli and R. Sannipoli. Velocity-vorticity geometric constraints for the energy conservation of 3D ideal incompressible fluids. Technical report, arXiv:2405.08461, 2024. BDLSV2019 T. Buckmaster, C. de Lellis, L. Székelyhidi, Jr., and V. Vicol. Onsager's conjecture for admissible weak solutions. Comm. Pure Appl. Math., 72(2):229–274, 2019. CL2020 A. Cheskidov and X. Luo. Energy equality for the Navier-Stokes equations in weak-in-time Onsager spaces. Nonlinearity, 33(4):1388–1403, 2020. CET1994 P. Constantin, W. E, and E.S. Titi. Onsager's conjecture on the energy conservation for solutions of Euler's equation. Comm. Math. Phys., 165(1):207–209, 1994. DLS2009 C. De Lellis and L. Székelyhidi, Jr. The Euler equations as a differential inclusion. Ann. of Math. (2), 170(3):1417–1436, 2009. Der2020 L. De Rosa. On the helicity conservation for the incompressible Euler equations. Proc. Amer. Math. Soc., 148(7):2969–2979, 2020. DE2019 T. D. Drivas and G. L.
Eyink. An Onsager singularity theorem for Leray solutions of incompressible Navier-Stokes. Nonlinearity, 32(11):4465–4482, 2019. DN2018 T. D. Drivas and H. Q. Nguyen. Onsager's conjecture and anomalous dissipation on domains with boundary. SIAM J. Math. Anal., 50(5):4785–4811, 2018. EP2016 A. Enciso and D. Peralta-Salas. Beltrami fields with a nonconstant proportionality factor are rare. Arch. Ration. Mech. Anal., 220(1):243–260, 2016. EG2016 A. Ern and J.-L. Guermond. Mollification in strongly Lipschitz domains with application to continuous and discrete de Rham complexes. Comput. Methods Appl. Math., 16(1):51–75, 2016. Fri1995 U. Frisch. Turbulence, The Legacy of A.N. Kolmogorov. Cambridge University Press, Cambridge, 1995. Gal2000a G. P. Galdi. An introduction to the Navier-Stokes initial-boundary value problem. In Fundamental directions in mathematical fluid mechanics, Adv. Math. Fluid Mech., pages 1–70. Birkhäuser, Basel, 2000. Gal2011 G. P. Galdi. An introduction to the mathematical theory of the Navier-Stokes equations. Steady-state problems. Springer Monographs in Mathematics. Springer-Verlag, New York, 2011. GLS2019 N. R. Gauger, A. Linke, and P. W. Schroeder. On high-order pressure-robust space discretisations, their advantages for incompressible high Reynolds number generalised Beltrami flows and beyond. SMAI J. Comput. Math., 5:89–129, 2019. Ise2018 P. Isett. A proof of Onsager's conjecture. Ann. of Math. (2), 188(3):871–963, 2018. Kol1941 A.N. Kolmogorov. The local structure of turbulence in incompressible viscous fluids for very large Reynolds number. Dokl. Akad. Nauk SSSR, 30:9–13, 1941. Lio1960 J.-L. Lions. Sur la régularité et l'unicité des solutions turbulentes des équations de Navier Stokes. Rend. Sem. Mat. Univ. Padova, 30:16–23, 1960. LWY2023 J. Liu, Y. Wang, and Y. Ye. Energy conservation of weak solutions for the incompressible Euler equations via vorticity. J. Differential Equations, 372:254–279, 2023. NgNgT2019 Q.-H. Nguyen, P.-T. Nguyen, and B. Q. Tang. Energy equalities for compressible Navier-Stokes equations. Nonlinearity, 32(11):4206–4231, 2019. Ons1949 L. Onsager. Statistical hydrodynamics. Nuovo Cimento (9), 6(Supplemento, 2 (Convegno Internazionale di Meccanica Statistica)):279–287, 1949. Pro1959 G. Prodi. Un teorema di unicità per le equazioni di Navier-Stokes. Ann. Mat. Pura Appl. (4), 48:173–182, 1959. RRS2018 J. C. Robinson, J. L. Rodrigo, and J. W. D. Skipper. Energy conservation for the Euler equations on 𝕋^2×ℝ_+ for weak solutions defined without reference to the pressure. Asymptot. Anal., 110(3-4):185–202, 2018. vWah1992 W. von Wahl. Estimating ∇ u by div u and curl u. Math. Methods Appl. Sci., 15(2):123–143, 1992. WWWY2023 Y. Wang, W. Wei, G. Wu, and Y. Ye. On the energy and helicity conservation of the incompressible Euler equations. Technical Report arXiv:2307.08322, 2023.
http://arxiv.org/abs/2405.09627v1
20240515180008
Quantum Geometry and Stabilization of Fractional Chern Insulators Far from the Ideal Limit
[ "Gal Shavit", "Yuval Oreg" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mes-hall" ]
Department of Physics and Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, California 91125, USA Walter Burke Institute of Theoretical Physics, California Institute of Technology, Pasadena, California 91125, USA Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot, Israel 7610001 In the presence of strong electronic interactions, a partially filled Chern band may stabilize a fractional Chern insulator (FCI) state, the zero-field analog of the fractional quantum Hall phase. While FCIs have long been hypothesized, feasible solid-state realizations only recently emerged, largely due to the rise of moiré materials. In these systems, the quantum geometry of the electronic bands plays a critical role in stabilizing the FCI in the presence of competing correlated phases. In the limit of “ideal” quantum geometry, where the quantum geometry is identical to that of Landau levels, this role is well understood. However, in more realistic scenarios only empirical numerical evidence exists, accentuating the need for a clear understanding of the mechanism by which the FCI deteriorates moving further away from these ideal conditions. We introduce and analyze an anisotropic model of a |C |=1 Chern insulator, where, upon partial filling of its bands, an FCI phase is stabilized over a certain parameter regime. We incorporate strong electronic interactions analytically by employing a coupled-wires approach, studying the FCI stability and its relation to the quantum metric. We identify an unusual anti-FCI phase benefiting from non-ideal geometry, generically subdominant to the FCI. However, its presence hinders the formation of the FCI in favor of other competitive phases at fractional fillings, such as the charge density wave. Though quite peculiar, this anti-FCI phase may have already been observed in experiments at high magnetic fields. This establishes a direct link between quantum geometry and FCI stability in a tractable model far from any ideal band conditions, and illuminates a unique mechanism of FCI deterioration. Quantum Geometry and Stabilization of Fractional Chern Insulators Far from the Ideal Limit Gal Shavit and Yuval Oreg ========================================================================================== Introduction.— The fractional Chern insulator (FCI) <cit.> is the lattice analog of the fractional quantum Hall (FQH) phase <cit.>, where strong correlations between electrons give rise to an extraordinary quantum phase of matter hosting exotic anyonic excitations <cit.>. Unlike the FQH, the FCI may arise even in the absence of a magnetic field <cit.>. Recently, FCI phases were observed in moiré graphene devices <cit.>, moiré transition metal dichalcogenides <cit.>, and crystalline graphene multilayers <cit.>. FCIs emerge out of a topologically non-trivial band, whose dispersion is flat enough, such that correlations may stabilize the fractional phase. However, these conditions are apparently insufficient to guarantee FCI formation <cit.>. The quantum geometrical properties of the band have been argued to play a pivotal role in that regard. Namely, bands whose geometries exactly mimic that of the lowest Landau level (LLL) exhibit an exact FCI ground-state under certain conditions <cit.>. Away from this exact “ideal” limit, several quantum geometry indicators have been proposed as a ruler to quantify how non-ideal the band is with respect to the LLL <cit.>.
These are substantiated by numerical evidence supporting their relation to FCI stability <cit.>. However, to date, there is no clear understanding of the relation between geometry indicators and the deterioration of the FCI phase in a strongly correlated band, especially in more realistic scenarios and far from the ideal limit. In this Letter, we establish a direct link between FCI stability, electron-electron interaction parameters, and quantum geometrical properties of the strongly-correlated band hosting it. We introduce a special coupled wires construction (CWC) <cit.>, which we utilize to study fractional fillings of a Chern band. The CWC employed allows one to study the competition between the FCI and other correlated phases, e.g., the charge density wave (CDW), as a function of tunable quantum geometry. We exploit the inherent anisotropy of the model to gain an understanding of the effect of electron-electron interactions by employing bosonization techniques. Crucially, in the presence of such interactions, non-ideal geometry promotes an anomalous phase, the aFCI, that impedes the FCI and may be experimentally revealed at high magnetic fields. To characterize the suppression of the FCI by this competition, we introduce a length scale that is directly related to relevant quantum geometry indicators. Our tractable model thus illuminates the connection between quantum geometry, the strength of electron-electron interactions, and the emergence of FCIs away from the ideal conditions normally considered. Practical Chern insulator CWC.— We begin by considering an array of identical one-dimensional wires hosting spinless non-interacting fermions. The interwire distance is d, and the intrawire unit-cell size is 2a. There are two states per unit cell (which allows for non-trivial topology), and we define the filling factor ν=2adn, with n the density. We consider the Hamiltonian (see Fig. <ref>) H_0=∫ dx∑_jj'Ψ_j^†[ ϵ̂_F M_-δ_j,j'+1; M_+δ_j,j'+1 ϵ̂_F^* ]Ψ_j'+ h.c., where Ψ_j=(ψ_j,R,ψ_j,L)^T is a spinor of right/left moving (R/L) fermionic annihilation operators at position x in wire j, ϵ̂_F=v_F/2(i∂_x+k_F)δ_jj', and k_F=π/2a(1-ν). The M_± terms couple opposite-chirality fermions on neighboring wires. Time-reversal symmetry is broken whenever |M_+|≠|M_-|, opening a gap at half-filling ν=1, E_ gap=2| |M_+| - |M_-||. The resultant bands have a Chern number |C |=1, where, e.g., the valence band has C=1 if |M_+|>|M_-| (which we will assume henceforth without loss of generality), and vice-versa. Eq. (<ref>) may be obtained as the low-energy description of a lattice model with zero magnetic flux per unit-cell, which maps to an anisotropic version of the half-Bernevig-Hughes-Zhang (BHZ) model <cit.>, see the text and Fig. <ref> in the Supplementary Materials (SM) <cit.>. A length scale that will prove crucial to our discussion is the transverse-direction extent of the topological chiral edge-states <cit.>, ξ_ topo.^-1 = 1/2log(|M_+|/|M_-|). Notice that if one mass term vanishes, H_0 is equivalent to the well-known CWC of the lowest Landau level (LLL) <cit.>, ξ_ topo.→ 0, and the edge states are confined to a single wire. We will refer to this as the optimal CWC. We note that although ξ_ topo. represents the edge extent of the chiral edge mode, it is actually a bulk property that does not depend on the boundary condition. Its divergence indicates a transition between C=1 and C=-1.
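To make the origin of the gap formula explicit, consider the interwire couplings at transverse momentum k_y (a short calculation of ours, taking M_± real and positive; phase conventions are assumed): the two chiral channels combine into M(k_y) = M_+e^ik_yd+M_-e^-ik_yd, so that |M(k_y)|^2 = M_+^2+M_-^2+2M_+M_-cos(2k_yd), whose minimum over k_y yields E_ gap=2min_k_y|M(k_y)| = 2|M_+-M_-|, consistent with the expression quoted above.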
We now turn to discuss the quantum geometry properties of the bands of H_0, captured by the momentum space tensor η_αβ(𝐤)= ⟨∂_α u_𝐤| ∂_β u_𝐤⟩ - ⟨∂_α u_𝐤|u_𝐤⟩⟨ u_𝐤| ∂_β u_𝐤⟩, where |u_ k⟩ is the wavefunction of the valence band at momentum k=(k_x,k_y), and ∂_α = ∂/∂ k_α. The Berry curvature is Ω = 2 Imη_yx, and the Fubini-Study metric is given by g_αβ= Reη_αβ. These satisfy the inequality tr g ≥|Ω|. It has been shown <cit.> that for a band with flat Berry curvature which saturates this inequality, the density operators projected onto that band reproduce the Girvin-MacDonald-Platzman (GMP) algebra <cit.> of the LLL. As such, the deviation from the so-called “trace condition” may be quantified by T≡∫d^2𝐤/A( tr g - |Ω|), where A is the area of the Brillouin zone (BZ). The BZ integral over tr g has been shown to correspond to the minimal Wannier-function spread associated with a set of bands <cit.>. Similarly, let us examine the length scale ℓ_ geo.=4 ∫dk_y/2π tr g (k_x=0,k_y), which, in our model, constitutes a major contribution to T, and the most important one for our purposes [We have verified that the length scale ℓ_ geo. is closely correlated to the trace-condition violation T in a parent lattice model, justifying our focused attention on the former as an extension of the latter, see SM <cit.>.]. We calculate ℓ_ geo. in terms of d/ξ_ topo. and α≡ v_F/(dM_+) <cit.>. Close to the optimal CWC, ξ_ topo.≪ d, ℓ_ geo. is minimal and is approximately d(1+α^2). In the opposite limit, we find ℓ_ geo.(ξ_ topo.≫ d) ≈ξ_ topo.(1+α^2/4). Relating the quantum geometry of the band to the correlation length ξ_ topo. is one of our key results. This establishes the adverse effect of having both types of chiral coupling M_± in the CWC on tr g, in the sense of pushing it further away from its lower bound, and rendering the trace-condition violation T larger. We have also verified that FCI indicators for the lattice model related to H_0 are optimized close to where one of the mass terms dominates, i.e., when ξ_ topo. is small (see SM <cit.>). Next, we will demonstrate how a large ξ_ topo. potentially leads to the destabilization of the FCI. Fractional filling and interactions.— The wire construction presented in this Letter has a periodic modulation along the wires. This allows for the stabilization of an FCI by adjusting the electron density [The construction presented here differs significantly from that of Ref. <cit.>, where the amplitude of an alternating magnetic field determines the effective filling factor.]. For concreteness, we focus on Laughlin-like fractional filling of the valence band, ν_ frac.=(2p+1)^-1≡ 1/m, with p a positive integer. To account for interactions, we employ the framework of abelian bosonization <cit.>. We represent the chiral fermionic operators in terms of bosonic variables, ψ_j,r∼1/√(2π a) e^irk_Fx-i(rϕ_j-θ_j), with r=1 (-1) for right (left) movers, such that the algebra [ϕ_j(x),∂_x θ_j'(x')] =iπδ_jj'δ(x-x') is satisfied. The bosonic version of H_0, supplemented by forward-scattering interactions, may be written as H_ f.s. = ∫ dx ∑_jj'∂_xχ_j^T M^jj'∂_xχ_j', with χ_j = (ϕ_j,θ_j)^T, and M^jj'=v_F/2πIδ^jj'+ U^jj', where I is the unit 2×2 matrix, and U describes the interactions. Notice that the single-particle terms M_± do not conserve momentum away from ν=1, and are thus absent from this low-energy description. Their important role, however, will be clarified shortly. We now include large-momentum scattering interactions, comprising processes with several of the operators O_j, bs=ψ_j,R^†ψ_j,L.
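For orientation in what follows, recall the standard bosonized form of the backscattering bilinear (our own remark, using the vertex-operator conventions above): O_j, bs=ψ_j,R^†ψ_j,L∼ (2π a)^-1e^-2ik_Fxe^2iϕ_j, so each factor of O_j, bs carries momentum 2k_F, which the lattice (or a magnetic field) must compensate at fractional filling.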
A Laughlin-like FCI phase, with Chern number C=1/m, may be stabilized by the operator O^j_ FCI∼ g_ FCI( O_j, bs)^p ( O_j+1, bs)^p ψ_j,R^†ψ_j+1,L + h.c. We note that at the filling ν_ frac., this term conserves momentum modulo π/a. In contrast to conventional CWCs, momentum conservation is enabled by the underlying lattice, not by the external magnetic field (cf. Ref. <cit.>). Interestingly, this means that the time-reversal partner of O^j_ FCI, O^j_ aFCI∼ g_ aFCI( O_j, bs)^p ( O_j+1, bs)^p ψ_j+1,R^†ψ_j,L + h.c., may be stabilized for the same reason. The stabilized phase, however, has C=-1/m, so we refer to it as the anti-FCI (aFCI). Such a term is forbidden in LLL CWCs by momentum conservation, and its appearance is unique to our proposed framework. Crucially, however, it is clear that the FCI and aFCI terms are not on equal footing. Both include an interwire part, yet g_ FCI∝ M_+, and g_ aFCI∝ M_-. Thus, assuming M_+>M_-, the FCI should always prevail over the aFCI by construction. Nevertheless, we will demonstrate that the aFCI destabilizes the FCI phase, thus relating FCI stability to small ℓ_ geo. and favorable quantum geometry. We consider an additional multi-particle process at this filling, stabilizing a CDW, O^j_ CDW = g_ CDW( O_j, bs)^2p+1 + h.c. This term too is enabled by the presence of the lattice, and is absent from quantum-Hall CWCs. The CDW coincides with the FCI in the thin-torus limit <cit.>, and is distinct from the bubble/stripe phases potentially stabilized at high Landau levels <cit.>, not considered in our analysis [In our model, the corresponding phase with domains of alternating densities would be a stripe phase. Within such a phase, the densities at different wires would fluctuate significantly, a scenario which is not captured within our Luttinger liquid description. We leave the study of the role of the stripe phase in Chern bands with poor quantum geometry for future work.]. To see that it stabilizes a CDW, consider its bosonized form ∝ g_ CDWcos[2(2p+1)ϕ_j]. At strong coupling, ϕ_j(x) settles at a uniform value ϕ_0, the density operator is modulated periodically along the wires, and it can be approximated by ρ(x)∝cos^2(πν_ frac.x - ϕ_0) <cit.>. The relative phase between CDWs in neighboring wires is determined by including the interaction terms O^j_ϕ∼ g_ϕ O^†_j, bs O_j+1, bs+ h.c. (these conserve momentum regardless of density). Weak-coupling RG.— The competition between the FCI, aFCI, and CDW phases can be readily understood by considering the renormalization group (RG) flow of the low-energy theory. At weak coupling, the most salient conclusions can be derived from a simplified two-wire model. The non-commutativity between, e.g., O_ FCI^j and O_ aFCI^j+1, would manifest in higher orders in the RG flow equations, motivating the two-wire approach. In the SM, we discuss the strong-coupling limit and argue that within that limit, the FCI phase has a many-body gap proportional to the difference g_ FCI-g_ aFCI <cit.>. The simplified Hamiltonian is H =∑_i=ρ,σu_i/2π[K_i^-1(∂_xϕ_i)^2+K_i(∂_xθ_i)^2] +Ṽ/2π(∂_xϕ_ρ∂_xθ_σ+∂_xθ_ρ∂_xϕ_σ) +∑_j( O_ FCI^j+ O_ aFCI^j+ O_ CDW^j+ O_ϕ^j+ h.c.). The two sectors ρ,σ correspond to combinations of the fields on the two wires labeled 1,2, e.g., ϕ_ρ/σ=1/√(2)(ϕ_1±ϕ_2). The Ṽ term breaks time-reversal symmetry, and is generated by the RG flow of the FCI/aFCI terms. The FCI terms have the same scaling dimension d_ FCI=(m^2/2)K_ρ+(1/2)K_σ^-1, whereas the CDW term has d_ CDW = (m^2/2)K_ρ+(m^2/2)K_σ.
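These dimensions follow from standard vertex-operator counting; here is a brief sketch (ours, assuming the conventions of the quadratic Hamiltonian above, where e^iaϕ_i has dimension (a^2/4)K_i and e^ibθ_i has dimension (b^2/4)K_i^-1). In the ρ/σ basis, with m=2p+1, O_ FCI∼cos[√(2)mϕ_ρ-√(2)θ_σ], giving d_ FCI=(2m^2/4)K_ρ+(2/4)K_σ^-1=(m^2/2)K_ρ+(1/2)K_σ^-1, while O_ CDW∼cos[√(2)m(ϕ_ρ+ϕ_σ)], giving d_ CDW=(m^2/2)(K_ρ+K_σ).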
Clearly, K_ρ, which is expected to be rather small in the case of strong repulsive interactions, will not play any meaningful role in the competition between backscattering terms. However, K_ρ controls the transition between a gapped phase and a gapless sliding Luttinger liquid <cit.>. Conversely, K_σ directly relates to the competition between the CDW phase, requiring sufficiently small K_σ, and the FCI terms, which favor large values of K_σ. We derive the RG equations using the operator product expansion (OPE) <cit.>. The short-distance cutoff is parameterized as α=α_0e^ℓ, where in each RG step ℓ increases incrementally. We define dimensionless coupling constants y_i≡ g_i / (π u), allowing us to obtain the RG flow equations, presented in full in the SM <cit.>. In Fig. <ref>a,b we present examples of the phase diagram obtained from integration of the RG equations. The RG flow is stopped when either y_ FCI or y_ CDW exceeds unity, obtaining the strong-coupling scale ℓ_ FCI/CDW. The proxy for the relevant gap is evaluated as Δ_i≡exp(-ℓ_i). We parameterize the initial conditions as y_F,0^2=y_ FCI,0^2+y_ aFCI,0^2, zy_F,0^2=y_ FCI,0^2-y_ aFCI,0^2, noting that z=tanh(2d/ξ_ topo.). Moving away from the optimal z=1 towards z≪ 1, the region in the phase diagram where the FCI is stabilized dramatically shrinks, and its gap is weakened. Thus, quantum geometry directly impacts FCI stability through competition with the hidden aFCI phase. Crucial insight is gained by considering the scenario d_ FCI→ 2, where the aforementioned competition is most pronounced. In this limit, one only considers the RG flow of y_ FCI/ aFCI and Ṽ, which realize a generalized Berezinskii-Kosterlitz-Thouless (BKT) flow <cit.>. If initially Ṽ=0, we find a closed-form expression for the FCI-divergence RG time, ℓ^∞=u/(m√(K_ρ/(2K_σ))y_ FCI,0) Re[K(u^2)], where K(x) is the complete elliptic integral of the first kind, and u=√((1+z)/(1-z)). Fig. <ref>c plots the FCI gap proxy e^-ℓ^∞, showing the rapid FCI deterioration as the quantum geometry becomes far from optimal. Away from optimal quantum geometry, u≈ 1, we can analytically relate the energy scale to the quantum metric, Δ_ FCI∝ℓ_ geo.^-√(K_σ/(2K_ρ))/(my_ FCI,0), further stressing the connection between the FCI stability and quantum geometry via the FCI–aFCI competition. External magnetic field.— A perpendicular magnetic field applied to the system has been argued to promote FCI formation through its impact on quantum geometry indicators. Within our CWC model, however, we may directly probe its role. We introduce the field B through a boost ψ_j,r→ψ_j,re^ibjx, where b=edB/ħ. Whereas the CDW commensurability remains at ν_ frac., we find that the filling factors at which the fractional Chern processes conserve momentum are given by ν^*_ FCI/aFCI = ν_ frac.±Φ/m, where Φ is the number of h/e flux-quanta per unit-cell. Thus, the magnetic field naturally relieves the FCI–aFCI tension, as the two are stabilized at diverging densities. The field also separates the CDW from the FCI, hence potentially favoring FCI formation even further. We illustrate this effect by modification of the RG flow equations in the constrained d_ FCI=2 regime. When the multiparticle backscattering terms are incommensurate, due to density deviations δν=ν-ν_ frac. and/or finite B, they acquire a spatial oscillation period. Within the RG flow, it is reasonable <cit.> to treat this period as a soft cutoff on the effect of these terms.
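To visualize how the closed-form expression above suppresses the FCI away from ideal geometry, a minimal numerical sketch follows (our own illustration; the Luttinger parameters and the bare coupling are hypothetical placeholders, and we read the flattened formula as ℓ^∞ = u Re[K(u^2)]/(m√(K_ρ/(2K_σ))y_ FCI,0), with the elliptic integral taking the parameter u^2, as in mpmath's convention):

# FCI gap proxy exp(-ell_infty) versus the geometry parameter z = tanh(2d/xi_topo).
from mpmath import mp, ellipk, re, sqrt, exp

mp.dps = 15
m = 3            # Laughlin-like state at filling 1/3
K_rho = 0.25     # hypothetical charge-sector Luttinger parameter (strong repulsion)
K_sigma = 1.0    # hypothetical relative-sector Luttinger parameter
y0 = 0.1         # hypothetical bare coupling y_FCI,0

def gap_proxy(z):
    u = sqrt((1 + z) / (1 - z))
    ell = u * re(ellipk(u**2)) / (m * sqrt(K_rho / (2 * K_sigma)) * y0)
    return exp(-ell)

for z in (0.99, 0.9, 0.5, 0.2, 0.05):
    print(f"z = {z}:  gap proxy = {gap_proxy(z)}")

The monotonic collapse of the proxy as z decreases from its optimal value z=1 illustrates the trend described in the text: K(1) diverges, so the FCI scale vanishes when the FCI and aFCI couplings become degenerate at z=0.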
Along the FCI-commensurate line mδν=Φ, we impose this cutoff by multiplying y_ aFCI^2 in the RG flow of Ṽ by c(ℓ) = (1+e^γ(ℓ+logΦ))^-1 (γ controls the cutoff smoothness, set as γ=2 throughout our calculations). As demonstrated in Fig. <ref>a, the magnetic flux enhances the FCI stability, particularly so in the regions where the quantum geometry is far from optimal. We stress that this is entirely due to aFCI suppression, driven by its incommensurability with increased field. A curious consequence of the FCI–aFCI diverging paths is the possible emergence of the aFCI phase at high enough fields. Following a similar cutoff treatment along the aFCI line mδν=-Φ, we find that this anomalous phase may indeed be stabilized at a different density, see Fig. <ref>b. Surprisingly, Ref. <cit.> observed two high-magnetic-field features with fractional Chern numbers -8/5 and -7/3 moving towards lower densities as a function of magnetic field <cit.> (Fig. 1b in <cit.>, FCI features closest to ν=0). This phenomenology is suggestive of an aFCI-like phase. Conclusions.— We have studied analytically the connection between non-optimal (far from LLL-like) quantum geometry and the FCI stability in an anisotropic model of topological phases with correlated electrons. The model we introduce features competition between a lattice-enabled CDW phase, a unique aFCI phase, and the FCI phase. This competition is unique to our model due to the periodic modulation of interwire hopping introduced along the wire. The aFCI phase was demonstrated to be enhanced by non-optimal quantum geometry, subsequently critically suppressing the FCI phase. Within this model, quantum geometry was shown to be intimately connected to the topological correlation length. In turn, this length scale directly controls the relative strength of the interactions which stabilize the adverse aFCI, enabling the CDW to take over as the leading instability. We thus established an unambiguous connection between so-called FCI indicators, evaluated far from the unrealistic ideal limit, and the potential stability of the FCI. In a particular regime, an analytical expression relating the two has been presented, Eq. (<ref>). Our model further illuminates the role of an external magnetic field. Namely, the fractional fillings at which the competing correlated phases may be stabilized are “pulled apart” (Fig. <ref>b) with increasing field, naturally suppressing the competition through induced incommensurability. Possible signatures of the peculiar aFCI phase in high magnetic field were discussed, potentially already detected in experiments <cit.>. The insights provided by our work into the stability of the exotic FCI under non-ideal conditions, as well as the elucidation of the mechanism by which the FCI deteriorates in realistic bands, may be beneficial for material exploration and band engineering. As an example, in the discussed model the terms O^j_ϕ stabilize the CDW at the expense of the FCI. This role of longer-range interactions has been mentioned in numerical investigations <cit.>. Conversely, this interaction may be suppressed by imposing a periodic modulation of the density in the transverse direction <cit.>, effectively dephasing the CDWs on neighboring wires <cit.>. Moreover, the model we propose can account for the curious phenomenon of CDW stabilization by a magnetic field <cit.>, recently observed in experiments <cit.>.
This project was partially supported by grants from the ERC under the European Union’s Horizon 2020 research and innovation programme (grant agreement LEGOTOP No 788715), the DFG CRC SFB/TRR183, and the ISF Quantum Science and Technology (2074/19). GS acknowledges support from the Walter Burke Institute for Theoretical Physics at Caltech, and from the Yad Hanadiv Foundation through the Rothschild fellowship. § SUPPLEMENTAL MATERIAL FOR “QUANTUM GEOMETRY AND STABILIZATION OF FRACTIONAL CHERN INSULATORS FAR FROM THE IDEAL LIMIT” § LATTICE MODEL Here, we describe the two-dimensional tight-binding model which is the UV analog of the coupled-wire model described and studied in the main text. The tight-binding Hamiltonian is comprised of two terms, H_ 2d = H_ wire + H_ interwire, where H_ wire describes the physics along the effective wire direction, and H_ interwire accounts for the coupling of the wires to each other. We consider the simplest form possible for H_ wire, taking into account only nearest-neighbor hopping t, H_ wire = -t ∑_j ∑_m ( c_j,m+1^† c_j,m + h.c.), where c_j,m annihilates a fermion on the m-th site of the j-th wire. In order to synthesize a Chern insulator out of this plain array of quantum wires, we need some non-trivial form of the interwire hopping Hamiltonian H_ interwire. It consists of two contributions with hopping amplitudes t_1 and t_2 (both real numbers) (see Fig. <ref> for illustration), H_ interwire = it_1∑_j∑_m (-1)^m c_j+1,m^† c_j,m + t_2∑_j∑_m sin^2(π m/2)(c_j+1,m+1^† c_j,m + c_j+1,m^† c_j,m+1) + h.c. Notice that the inter-wire coupling reduces the translational symmetry to m→ m+2, i.e., doubles the unit-cell along the direction of the wires. The t_1 term represents hopping of fermions with a ±π/2 phase alternating between neighboring sites. This introduces a π-flux to each plaquette in the two-dimensional lattice. This term is insufficient to induce a Chern insulating phase, as it conserves the system's compound time-reversal symmetry, which combines complex conjugation i→ -i and translation by half a unit-cell m→ m+1. However, the t_2 coupling term explicitly breaks it. Intuitively, it introduces possible hopping paths which encircle a time-reversal asymmetric flux (in contrast to the symmetric π-flux). §.§ Ladder geometry and the continuum limit Before diagonalizing the Hamiltonian H_ 2d and obtaining its spectrum, it is instructive to consider a ladder comprised of only two neighboring wires. Making the unit-cell doubling incurred by the wire coupling explicit, we define the two species of fermionic annihilation operators in momentum space, A_j,q=L^-1/2∑_m evene^imqc_j,m and B_j,q=L^-1/2∑_m odde^imqc_j,m. Notice q∈[0,π] defines the reduced Brillouin zone, and we set the lattice spacing to unity. Introducing the spinor Ψ_q=(A_1,q,B_1,q,A_2,q,B_2,q)^T, we write the two-wire Hamiltonian as H_2- wire=∑_qΨ^†_q [ -t(1+cos 2q)σ_x -tsin 2q σ_y + t_1σ_zτ_y + t_2σ_xτ_x ] Ψ_q, where σ_i and τ_i are Pauli matrices acting on the sublattice and which-wire degrees of freedom, respectively. Expanding the Hamiltonian to lowest order in q-π/2, we find that when t_1=t_2=0, the low-energy spectrum of each wire is effectively described by chiral right/left moving annihilation operators ψ_j,R/L. These can be understood in the sublattice language as the two eigenvectors of σ_y. In the continuum limit, restoring all the wires in the system, the Hamiltonian effectively becomes H_ eff
=∑_j=1,2∫ dx ( H_D^j + H_M^j), H_D^j = v (ψ_j,R^† i∂_x ψ_j,R - ψ_j,L^† i∂_x ψ_j,L), H_M^j= M_+ψ_j,R^†ψ_j+1,L+M_-ψ_j,L^†ψ_j+1,R+ h.c. Here, the Fermi velocity of the chiral modes is related to the hopping strength, v=2t, and the mass terms are M_±=t_2± t_1. Thus, we recover precisely the form of the continuum coupled-wire approach we analyze thoroughly in the main text. By tuning the microscopic interwire hoppings t_1 and t_2, one may tune the strength of the chiral-mode coupling between adjacent wires. We note here that away from half filling (q_F≠π/2), expansion of the two-wire Hamiltonian reveals a single-particle hopping process which couples modes of the same chirality in neighboring wires, e.g., t_⊥ψ_j,R^†ψ_j+1,R. Its strength increases as one moves away from half-filling, t_⊥≈ -t_2|cos q_F|. In this work we consider strongly interacting systems, where the single-particle hopping terms between the wires are highly irrelevant. It will thus turn out that t_⊥ has little to no effect on our model, except for perturbatively enabling certain scattering channels in various scenarios, see Fig. <ref>. §.§ Spectrum and quantum geometry of the lattice model Let us now inspect more carefully the properties of the two-dimensional lattice model. Introducing the Fourier transform of the sublattice-resolved fermionic annihilation operators (A/B)_κ,q=N^-1/2∑_je^ijκ(A/B)_j,q, and the spinor Ψ_κ,q=(A_κ,q,B_κ,q)^T, we find H_ CI=∑_κ,qΨ_κ,q^†[ -t(1+cos 2q)σ_x -tsin 2q σ_y +2t_1sinκσ_z+2t_2cosκσ_x ] Ψ_κ,q, and the resultant two bands have the spectrum E_κ,q = ± 2√((1+ 2cosκ t_2/t)t^2cos^2q +t_1^2sin^2κ+ t_2^2cos^2κ). The bulk gap in the spectrum is E_ gap=2| |M_+| - |M_-||. When either t_1=0 or t_2=0, |M_+|=|M_-|≡ M, the spectrum is gapless and has two anisotropic Dirac cones, at (κ,q)=(π/2,π/2) and (3π/2,π/2), or at (κ,q)=(0,π/2) and (π,π/2), respectively. The Dirac cone velocity along the wire direction is v=2t as before, whereas in the transverse direction it is v_⊥=4M. We illustrate these points in Fig. <ref>. We note that H_ CI may be mapped onto an anisotropic version of the half-BHZ model <cit.>. This can be made clear by defining k_x≡κ, k_y≡ 2q, and rotating the Pauli matrices around σ_y by π/2. This produces the momentum-resolved Hamiltonian h_k_x,k_y/(-t)=(2t_1/t)sin k_xσ_x+sin k_y σ_y+ (1+(2t_2/t)cos k_x+cos k_y)σ_z. It is therefore not surprising that the topological properties of the two bands are similar to those of the half-BHZ model. Let us now turn to explicitly examine the topological properties of the energy bands and their quantum geometry. Without loss of generality we will focus on the valence band, spanned by the Bloch wavefunctions | u_𝐤⟩, where, as in our previous definition, 𝐤=(κ,2q). Adopting similar conventions to Ref. <cit.>, we define the quantum geometrical tensor η_αβ(𝐤)=⟨∂_α u_𝐤| ∂_β u_𝐤⟩/⟨ u _𝐤|u _𝐤⟩ - ⟨∂_α u_𝐤|u_𝐤⟩⟨ u_𝐤| ∂_β u_𝐤⟩/⟨ u _𝐤|u _𝐤⟩^2, with the shorthand ∂_α = ∂/∂ k_α. The Berry curvature Ω and Fubini-Study metric g are then Ω = 2 Imη_yx, g_αβ= Reη_αβ. We will be interested in two so-called indicators of the band's susceptibility to hosting an FCI state. The first is a measure of the Berry curvature fluctuations in the BZ, σ_Ω = √(∫ d^2𝐤/A(AΩ/2π-C)^2), where A is the area of the BZ and C=∫ d^2𝐤Ω/2π is the Chern number of the band. When the Berry curvature is constant in the BZ (σ_Ω=0), it has been shown <cit.> that the GMP algebra is exactly reproduced in the long-wavelength limit. The second indicator is the so-called trace condition.
We define T = ∫d^2𝐤/A( tr g - |Ω|) as a measure of the saturation of the inequality tr g≥|Ω|. When the metric is 𝐤-independent and this inequality is saturated, the full GMP algebra can be recovered <cit.>. When σ_Ω=0 and T=0 (and the band is entirely flat), after projecting the interactions to the band of interest one is thus essentially left with the physics of the lowest Landau level, which is of course ideally suited for a fractional Laughlin-like topological phase. There is also empirical evidence that even away from this ideal limit, it is beneficial to minimize these indicators to get a more FCI-friendly system. In Ref. <cit.> it was numerically demonstrated that the FCI many-body gap is correlated with small values of σ_Ω and T. A similar trend was shown in Ref. <cit.>, where a model of magic-angle twisted bilayer graphene was studied. These so-called indicators are calculated for our proposed two-dimensional model as a function of the different inter-wire coupling strengths, see Fig. <ref>. The clearest trend one observes is that these indicators are optimized when |t_1|→|t_2|, i.e., when |M_+| ≫|M_-| (or vice versa). As we will now demonstrate, this also coincides with the minimization of the correlation length. We note that as t_1 and t_2 become comparable with the interwire hopping t, the ideal quantum geometry as indicated by the trace condition (as seen in Fig. <ref>) deviates slightly from the |t_1|=|t_2| line. In this regime, the full two-dimensional band geometry plays an important role, and our coupled-wires approach is also much less valid. §.§ Transverse correlation length Generically, |t_1|≠|t_2| and both M_± are finite. The physics along the transverse inter-wire direction can be mapped onto two copies of the Su-Schrieffer-Heeger (SSH) model <cit.>: one copy is formed by right-movers on even wires coupled to left-movers on odd wires with alternating M_+ and M_- hopping, and the second chain is comprised of left-movers on even wires and right-movers on odd wires. This mapping provides us with two immediate results. First, if |M_+|=|M_-|, the wires are critically coupled and the system remains gapless. This was already understood from Eq. (<ref>). This underscores that both t_1 and t_2 are required to manifest the Chern insulator phase. On the other hand, if the mass terms have different magnitudes, one still finds counter-propagating edge modes in the system (corresponding to the SSH edge states), except in this more general case they are not entirely localized on the outermost wires. Instead, their support decays in the transverse direction with a correlation length (in units of the inter-wire separation) <cit.> ξ=2/log( max{|M_+|,|M_-|}/ min{|M_+|,|M_-|}) = 2/log[(|t_1|+|t_2|)/||t_1|-|t_2||]. This finite correlation length provides a clear and important distinction from previous works regarding wire constructions of quantum Hall states <cit.>. In these constructions, the chiral edge modes were always localized on one wire, i.e., the effective correlation length was ξ=0. We note that at a certain regime of parameters in one of the models presented in Ref. <cit.>, a finite extent of the chiral edge modes is made possible, yet this possibility and its potential importance remained unexplored. In the type of model we consider, tuning the coupling parameters t_1 and t_2 provides control over this localization. This becomes crucial to our analysis and understanding of the fractional phases.
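To make these diagnostics concrete, the following minimal sketch (Python/numpy; the hopping values are illustrative assumptions) evaluates Ω, tr g, σ_Ω, T, and the correlation length ξ for the half-BHZ form h_k_x,k_y quoted above, using the standard two-band identities Ω = -(1/2) d̂·(∂_k_x d̂ × ∂_k_y d̂) and g_αβ = (1/4) ∂_α d̂·∂_β d̂, which hold for any h = d·σ up to an overall sign convention for Ω:

import numpy as np

t, t1, t2 = 1.0, 0.3, 0.5          # illustrative hoppings (|t1| != |t2|: finite xi)

def d_hat(kx, ky):
    # d-vector of the half-BHZ form h/(-t); the global -t factor only
    # flips the overall sign of the unit vector and is dropped here
    d = np.stack([2 * t1 / t * np.sin(kx),
                  np.sin(ky),
                  1 + 2 * t2 / t * np.cos(kx) + np.cos(ky)], axis=-1)
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

N = 240
ks = np.linspace(0, 2 * np.pi, N, endpoint=False)
kx, ky = np.meshgrid(ks, ks, indexing="ij")
n = d_hat(kx, ky)
dk = 2 * np.pi / N
# central finite differences of the unit vector on the periodic grid
n_x = (np.roll(n, -1, 0) - np.roll(n, 1, 0)) / (2 * dk)
n_y = (np.roll(n, -1, 1) - np.roll(n, 1, 1)) / (2 * dk)

Omega = -0.5 * np.einsum("...i,...i->...", n, np.cross(n_x, n_y))
tr_g = 0.25 * (np.einsum("...i,...i->...", n_x, n_x)
               + np.einsum("...i,...i->...", n_y, n_y))

A = (2 * np.pi) ** 2
C = Omega.sum() * dk * dk / (2 * np.pi)            # Chern number, close to +-1
sigma_Om = np.sqrt(np.mean((A * Omega / (2 * np.pi) - C) ** 2))
T = np.mean(tr_g - np.abs(Omega))                  # BZ average of tr g - |Omega|
xi = 2 / np.log((abs(t1) + abs(t2)) / abs(abs(t1) - abs(t2)))
print(f"C = {C:.3f}, sigma_Omega = {sigma_Om:.3f}, T = {T:.3f}, xi = {xi:.2f}")

Scanning t_1 and t_2 on a grid with this routine reproduces the qualitative trend described above: the indicators improve as |t_1|→|t_2|, i.e., as ξ shrinks.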
§ RELATING QUANTUM GEOMETRY AND THE CORRELATION LENGTH Consider the continuum limit of the coupled-wires model, expanded in small momentum along the wire direction (small q), H_ cont.=∑_k,q[vq(ψ_R,k,q^†ψ_R,k,q-ψ_L,k,q^†ψ_L,k,q)+(M_+e^ikdψ_R,k,q^†ψ_L,k,q+M_-e^-ikdψ_R,k,q^†ψ_L,k,q+ h.c.)], where ψ_R/L,k,q is a right/left-moving fermionic annihilation operator at momentum (k,q) (k is the momentum in the transverse direction), M_± are the different interwire couplings, and d is the interwire distance (which is re-introduced for clarity). By defining the spinor Ψ_k,q=(ψ_R,k,q,ψ_L,k,q)^T, one may rewrite this Hamiltonian, H=∑_k,qΨ_k,q^†h_k,qΨ_k,q, with h_k,q =vqσ_z+(M_++M_-)cos kdσ_x+(M_+-M_-)sin kdσ_y ≡ vqσ_z+acos kdσ_x+bsin kdσ_y. The spectrum of the Hamiltonian is ϵ_k,q=±√((vq)^2+a^2cos^2kd+b^2sin^2kd)≡± E_k,q. We denote the eigen-wavefunctions of the bottom band as |u_k,q⟩, with the explicit form |u_k,q⟩=[ √((E_k,q+vq)/2E_k,q); (acos kd-ibsin kd)/√(2E_k,q(E_k,q+vq)) ]. To understand how tuning the model affects the quantum geometry it is useful to inspect the following quantity, ℓ_ geo.=4 ∫dk/2π tr g (k,q=0), which is a length scale associated with the spread of the maximally localized Wannier functions along the transverse direction. Clearly, it is also one of the components that make up the trace condition, and it is in fact the component most influenced by tuning M_±. We have illustrated in Fig. <ref>c the direct correlation between this length scale and the trace-condition violation T. Our definition of ℓ_ geo. is in analogy to Ref. <cit.>, which showed that the trace-condition violation quantifies the spatial extent of the maximally localized Wannier functions. From a straightforward calculation using Eqs. (<ref>) and (<ref>), we find η_kk(k,q=0)=(d^2/4)(1-δcos 2kd)/(1+δcos 2kd), η_qq(k,q=0)=(d^2/4)(v/dM_+)^2 [1/(1+e^-4d/ξ)][1/(1+δcos 2kd)], with δ=2M_+M_-/(M_+^2+M_-^2), and without loss of generality we assumed M_+>M_-. Notice that in the maximally-chiral limit, when one of the interwire terms overwhelms the other, δ→0, and the above quantities are “flat”, i.e., independent of k. We finally recover ℓ_ geo. and relate it to the transverse correlation length (see Fig. <ref>), ℓ_ geo.=d(3+[1+(v/dM_+)^2]e^4d/ξ)/(e^4d/ξ-1). Let us examine two interesting simple limits. When the correlation length is vanishingly small compared to the inter-wire separation, i.e., one of the mass terms dominates the other, ℓ_ geo.( ξ≪ d) ≈ d[1+(v/dM_+)^2]. This is the lower bound for ℓ_ geo., indicating that due to the topological non-triviality of the band, the Wannier functions of this band cannot be localized on a single wire (obstruction to the so-called atomic limit). In the opposite limit, where the correlation length greatly exceeds the interwire separation, ξ≫ d, ℓ_ geo.( ξ≫ d) ≈ξ[1+(v/2dM_+)^2]+d[2(v/2dM_+)^2-1] ≈ξ[1+(v/2dM_+)^2]. We thus find that our defined “spread function” ℓ_ geo. starts out being of order ∼ d when ξ is very small (near optimal chiralness), and as the correlation length grows, ℓ_ geo.∝ξ. Relating this spread function to the correlation length ξ analytically establishes the connection of the latter to quantum geometry and to the extent to which the trace condition is violated.
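This closed form is easy to verify numerically; the sketch below integrates tr g(k, q=0) over one transverse BZ and compares with the expression above (the values of M_±, v, and d are illustrative, with M_+ > M_- > 0 assumed):

import numpy as np
from scipy.integrate import quad

def l_geo_quad(Mp, Mm, v=1.0, d=1.0):
    # 4 * int dk/(2 pi) [eta_kk + eta_qq](k, q=0) over one transverse BZ
    delta = 2 * Mp * Mm / (Mp**2 + Mm**2)
    xi = 2 * d / np.log(Mp / Mm)                  # assumes M_+ > M_- > 0
    eta = lambda k: (d**2 / 4) * ((1 - delta * np.cos(2 * k * d))
                                  / (1 + delta * np.cos(2 * k * d))
                                  + (v / (d * Mp))**2 / (1 + np.exp(-4 * d / xi))
                                  / (1 + delta * np.cos(2 * k * d)))
    val, _ = quad(eta, -np.pi / d, np.pi / d)
    return 4 * val / (2 * np.pi)

def l_geo_closed(Mp, Mm, v=1.0, d=1.0):
    e = (Mp / Mm)**2                              # equals exp(4 d / xi)
    return d * (3 + (1 + (v / (d * Mp))**2) * e) / (e - 1)

print(l_geo_quad(0.8, 0.2), l_geo_closed(0.8, 0.2))   # the two agree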
We also briefly mention that the Berry curvature at optimal chiralness, ξ→0 (without loss of generality, M_-=0), Ω_xy(ξ→0)=(d/2)(v/M_+)/[(vq/M_+)^2+1]^3/2, is also completely flat in the k-direction, once more indicating quantum geometry which is more favorable towards FCI stabilization, i.e., a low variance of the Berry curvature in the BZ. § MULTI-PARTICLE INTERACTIONS In the main text, we were mainly concerned with studying certain many-body scattering operators, their scaling dimension, and the phase diagram of our coupled-wires construction in their presence. These are O^j_ FCI∼ g_ FCI( O_j, bs)^p ( O_j+1, bs)^p ψ_j,R^†ψ_j+1,L + h.c. O^j_ aFCI∼ g_ aFCI( O_j, bs)^p ( O_j+1, bs)^p ψ_j+1,R^†ψ_j,L + h.c. O^j_ CDW = g_ CDW( O_j, bs)^2p+1 + h.c. For the sake of completeness, let us discuss here how these high-order scattering processes may be generated through perturbation theory, as one considers interactions, interwire coupling, and overall momentum conservation. We focus here on the ν=1/3 case for concreteness. The process related to FCI stabilization, described by O^j_ FCI, is shown in Fig. <ref>a. By shifting the dispersion of one wire relative to its neighbor by an arbitrary momentum of 2π, one readily sees the processes involved. Namely, two 2k_F scattering processes of strength U_2k_F, combined with one interwire hopping ∝ M_+. Although the M_+ term appears to violate momentum conservation, the modulation of the interwire hoppings between adjacent intrawire sites means that this process carries a momentum of ±π, enabling the conservation of momentum. The analogous process for the aFCI, O^j_ aFCI, appears in panel b of Fig. <ref>. Its analogy to the FCI is made obvious by shifting the relative momentum between the wires by -2π instead of +2π, illustrating how M_- is now required to facilitate the scattering. The upshot here is that the initial coupling constants of these processes, g_ FCI^0 and g_ aFCI^0, are identical up to the transmutation M_+ ↔ M_-. This difference lies at the heart of relating the competition between the two to the quantum geometric properties of the hosting bands. Lastly, we comment on the CDW process, O^j_ CDW, illustrated in Fig. <ref>c. Following the same procedure, the initial coupling g^0_ CDW appears to potentially be of much higher order, as it contains an additional t_⊥^3 factor relative to the previous two processes. This traces back to the fact that one requires momentum-π carrying processes to establish momentum conservation. As the lattice model stands, only interwire hopping is able to achieve this, hence the necessity of the second wire participating in this process. However, by slightly modifying the model, so as to modulate the intrawire hopping t between adjacent sites, i.e., t→ t±δ t, one immediately enables the lower-order process shown at the bottom of Fig. <ref>. § WEAK COUPLING RG To study the competition between the three different multi-particle backscattering terms, which correspond to different correlated phases potentially stabilized in the system, we employ the perturbative renormalization group (RG) approach. At the level of weak-coupling analysis, the competition is sufficiently well captured by a model comprised of just two neighboring wires. Within this framework, the effect of electron-electron interactions on the competition between the phases can be well understood.
Let us consider the Hamiltonian density H= H_0 + H_ CDW + H_ FCI, with H_0 = 1/2π∑_i=ρ,σu_i[K_i^-1(∂_x ϕ_i)^2 +K_i (∂_x θ_i)^2 ], H_ CDW = g_ CDW/2π^2cos(m√(2)ϕ_ρ) cos(m√(2)ϕ_σ) +g_ϕ/2π^2cos(√(8)ϕ_σ), H_ FCI = g_ FCI/2π^2cos(√(2)θ_σ+m√(2)ϕ_ρ) +g_ aFCI/2π^2cos(√(2)θ_σ-m√(2)ϕ_ρ) +Ṽ/2π(∂_xϕ_ρ∂_xθ_σ+∂_xθ_ρ∂_xϕ_σ). Here the two bosonic sectors ρ,σ correspond to different combinations of the fields on the two wires labeled 1,2, e.g., ϕ_ρ/σ=1/√(2)(ϕ_1±ϕ_2). Within the unperturbed H_0, g_ CDW has the scaling dimension d_ CDW, and the two FCI terms have the same scaling dimension d_ FCI, where d_ CDW = (m^2/2)K_ρ+(m^2/2)K_σ, d_ FCI = (m^2/2)K_ρ+(1/2)K_σ^-1. Clearly, K_ρ, which corresponds to the total charge sector and thus is expected to be rather small in the case of strong repulsive interactions, will not play any meaningful role in the competition of these different backscattering terms, as all three depend on it in the exact same way. Instead, K_ρ controls the transition between a gapless metallic phase for K_ρ≲ 1 (weak repulsive interactions), and the strong-coupling phase of one or several of the different g_i, mandating K_ρ≪ 1, i.e., strong repulsive interactions. Conversely, one immediately notices that K_σ directly controls the competition between the CDW phase, requiring sufficiently small K_σ, and the FCI terms, which favor large values of K_σ. The magnitude of K_σ can be estimated by considering an interwire density-density interaction term V_12/π∂_xϕ_1 ∂_x ϕ_2, an intra-wire Luttinger parameter K, and the effective intra-wire Fermi velocity v. As a function of these parameters we may express K_ρ/σ=K/√(1±V_12K/v). Assuming the intrawire K is determined by a single density-density interaction V_0, and the bare Fermi velocity v_F, we may estimate K_σ = K√((v_F+V_0)/(v_F+V_0-V_12)). Thus, K_σ is expected to be large if the interwire repulsion is comparable to, or even stronger than, the intrawire repulsion V_0. The term in H_ CDW proportional to g_ϕ originates in large-momentum-transfer interactions between the two adjacent wires, i.e., ψ^†_1,Rψ_1,Lψ^†_2,Lψ_2,R. It stabilizes a system-wide CDW by favoring the alignment of the local intra-wire CDWs with each other, such that a minimum of the density in one wire tends to align with a maximum in its interacting neighbors. Finally, let us address the seemingly peculiar Ṽ interaction. Since it is odd in the θ_i fields, it explicitly breaks time-reversal symmetry. Its microscopic origin may come from the same time-reversal symmetry breaking which facilitated the formation of a Chern insulator (and thus also differentiated between g_ FCI and g_ aFCI). Alternatively, as we will show below, it is also directly generated at low energies when g_ FCI≠ g_ aFCI. We derive the RG equations using the standard operator product expansion (OPE) <cit.>. We parametrize the flowing short-distance cutoff as α=α_0 e^ℓ, where in each RG step ℓ increases incrementally. For the sake of simplicity, we neglect the differences between the velocities in the different sectors, which impact the RG flow only at higher orders than the ones considered. We henceforth set u_i≈ u. Defining dimensionless coupling constants y_i≡ g_i / (π u), we find the following set of RG equations, d/dℓy_ FCI =(2-d_ FCI+m/2K_ρK_σ^-1Ṽ)y_ FCI, d/dℓy_ aFCI =(2-d_ FCI-m/2K_ρK_σ^-1Ṽ)y_ aFCI, d/dℓy_ CDW =(2-d_ CDW)y_ CDW, d/dℓy_ϕ =(2-2K_σ)y_ϕ, d/dℓK_ρ^-1 =m^2/2(y_ FCI^2+y_ aFCI^2+y_ CDW^2), d/dℓK_σ =1/2(y_ FCI^2+y_ aFCI^2)-K_σ^2(m^2/2y_ CDW^2+2y_ϕ^2), d/dℓṼ =m(y_ FCI^2-y_ aFCI^2).
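As a concrete illustration of how these equations are used to produce the phase diagrams, the following sketch integrates the flow numerically and stops once a coupling reaches strong coupling; all numerical values (m, the initial Luttinger parameters, and the bare couplings) are illustrative assumptions rather than fitted parameters:

import numpy as np
from scipy.integrate import solve_ivp

m = 3                               # multiparticle index, nu = 1/m (assumed)

def rg_rhs(l, s):
    # s = (y_fci, y_afci, y_cdw, y_phi, 1/K_rho, K_sigma, V)
    yF, yA, yC, yphi, invKr, Ks, V = s
    Kr = 1.0 / invKr
    d_fci = 0.5 * m**2 * Kr + 0.5 / Ks
    d_cdw = 0.5 * m**2 * (Kr + Ks)
    return [(2 - d_fci + 0.5 * m * Kr / Ks * V) * yF,
            (2 - d_fci - 0.5 * m * Kr / Ks * V) * yA,
            (2 - d_cdw) * yC,
            (2 - 2 * Ks) * yphi,
            0.5 * m**2 * (yF**2 + yA**2 + yC**2),
            0.5 * (yF**2 + yA**2) - Ks**2 * (0.5 * m**2 * yC**2 + 2 * yphi**2),
            m * (yF**2 - yA**2)]

def strong(l, s):                   # stop when any coupling reaches 1
    return 1.0 - np.max(np.abs(s[:4]))
strong.terminal = True

z, yF0 = 0.6, 0.10                  # chirality parameter (z = 1 is optimal)
s0 = [yF0 * np.sqrt((1 + z) / 2), yF0 * np.sqrt((1 - z) / 2),
      0.08, 0.02, 1 / 0.2, 2.0, 0.0]
sol = solve_ivp(rg_rhs, (0.0, 80.0), s0, events=strong, rtol=1e-8, atol=1e-10)
lstar = sol.t[-1]
winner = ["FCI", "aFCI", "CDW", "phi"][int(np.argmax(np.abs(sol.y[:4, -1])))]
print(winner, "reaches strong coupling at l* =", lstar,
      "; gap proxy exp(-l*) =", np.exp(-lstar))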
The relationship between the “proper” FCI and the anti-FCI terms is now somewhat clarified by the RG equations. At the level of weak coupling, the competition is captured by the Ṽ interaction discussed above. This interaction (with a positive sign) directly aids the flow of y_ FCI to strong coupling. However, the growth of Ṽ itself is severely impeded by the mere presence of the counter-term y_ aFCI. Thus, the presence of the latter imposes a burden on the possibility of stabilizing the FCI phase. With similar reasoning, one observes that the CDW and the two FCI terms act in opposing ways on the flow of K_σ, which was shown above to be the most pertinent one for this specific competition. §.§ Derivation example Let us demonstrate our derivation of the RG equations by considering the most non-trivial part, i.e., the contribution of Ṽ to the beta function of, e.g., y_ FCI. Generally, the second-order beta functions in 1+1d are written as d/dℓ y_k = (2-d_k)y_k - c_ijk y_i y_j, where d_k is the scaling dimension of the operator corresponding to y_k, and summation over repeated indices is implied. The coefficient c_ijk can be identified from the OPE of the operators O_i/j with the corresponding coupling constants y_i/j, :O_i( z_1)::O_j( z_2): = c_ijk/| z_1- z_2|^(d_i + d_j - d_k) :O_k( z_1+ z_2/2):. Therefore, we examine the following OPE, I_y_ FCI,Ṽ = :∇ϕ_ρ∇θ_σ::cos(√(2)θ_σ+m√(2)ϕ_ρ): = 1/2:∇ϕ_ρ∇θ_σ:∑_n=0^∞i^n/n!∑_k=0^n[ n; k ]:(√(2)θ_σ)^k(m√(2)ϕ_ρ)^n-k:+ h.c. We now need to start contracting the θ_σ fields and the ϕ_ρ fields. One needs to “choose” out of k terms for the former, and out of n-k terms for the latter. Thus, I_y_ FCI,Ṽ =2m×1/2∑_n=0^∞i^n/n!∑_k=0^n[ n; k ]k(n-k)⟨∇θ_σθ_σ⟩⟨∇ϕ_ρϕ_ρ⟩ :(√(2)θ_σ)^k-1(m√(2)ϕ_ρ)^n-k-1:+ h.c. = 2m×1/2∑_n=0^∞i^n/n!∑_k=0^n[ n; k ]k(n-k)[-K_σ^-1/2 z_1-z_2/| z_1-z_2|^2][-K_ρ/2 z_1-z_2/| z_1-z_2|^2]:(√(2)θ_σ)^k-1(m√(2)ϕ_ρ)^n-k-1:+ h.c. = [m/2K_ρK_σ^-1/| z_1-z_2|^2]×1/2∑_n=0^∞i^n/n!∑_k=0^n[ n; k ]k(n-k):(√(2)θ_σ)^k-1(m√(2)ϕ_ρ)^n-k-1:+ h.c. = [m/2K_ρK_σ^-1/| z_1-z_2|^2]×1/2∑_n=0^∞i^n/n!∑_k=0^nn(n-1)(n-2)!/(k-1)!(n-2-(k-1))!:(√(2)θ_σ)^k-1(m√(2)ϕ_ρ)^n-2-(k-1):+ h.c. = [m/2K_ρK_σ^-1/| z_1-z_2|^2]×1/2∑_n=0^∞i^n/n!n(n-1)∑_k=0^n[ n-2; k-1 ]:(√(2)θ_σ)^k-1(m√(2)ϕ_ρ)^n-2-(k-1):+ h.c. = [m/2K_ρK_σ^-1/| z_1-z_2|^2]×1/2∑_n=0^∞i^n-2i^2/(n-2)!:(√(2)θ_σ+m√(2)ϕ_ρ)^n-2:+ h.c. = -[m/2K_ρK_σ^-1/| z_1-z_2|^2]× :cos(√(2)θ_σ+m√(2)ϕ_ρ):. We thus identify c_Ṽ,y_ FCI,y_ FCI = (m/2)K_ρK_σ^-1. §.§ Alternative definitions As in the main text, it is convenient to re-define y_ F^2 = y_ FCI^2 + y_ aFCI^2, y_ F^2 z = y_ FCI^2 - y_ aFCI^2. Notice z ∈[0, 1], where z=1 corresponds to the maximally chiral ξ=0 case. With these alternative representations, one finds d/dℓy_ F =(2-d_ FCI+m/2K_ρK_σ^-1Ṽz)y_ F, d/dℓz =mK_ρK_σ^-1Ṽ(1-z^2), d/dℓy_ CDW =(2-d_ CDW)y_ CDW, d/dℓy_ϕ =(2-2K_σ)y_ϕ, d/dℓK_ρ^-1 =m^2/2(y_ F^2+y_ CDW^2), d/dℓK_σ =1/2y_ F^2-K_σ^2(m^2/2y_ CDW^2+2y_ϕ^2), d/dℓṼ =m z y_ F^2. From this form of the RG equations, it becomes clear that z>0 aids the growth of y_ F to strong coupling, both directly and by generating (or enhancing) the time-reversal-odd interaction Ṽ. §.§ Additional phase diagrams for different z We illustrate the full evolution of the phase diagram, as obtained in Figure 2 of the main text, as a function of “deteriorating” quantum geometry. This is shown in Fig. <ref>. As anticipated, the region where the FCIs are stabilized shrinks as the so-called aFCI seed becomes larger, i.e., z becomes smaller, and ℓ_ geo. moves further away from its optimal value.
§.§ Magnetic field As mentioned before, the bosonized form of the electronic operators in our model is ψ_j,R/L∼ e^-irk_Fxe^-i(rϕ_j-θ_j). The band filling relative to the neutrality point, ν, is related to the Fermi momentum as k_F=(π/2a)(1-ν). The filling ν=1 corresponds to 1 electron per unit-cell, whose length along the wire is 2a, due to the doubled unit-cell introduced by the interwire coupling. The magnetic field is applied by the “boost” transformation ψ_j,R/L→ψ_j,R/Le^ibjx, with b=edB/ħ, and Φ_0=h/e the flux quantum. Introducing a finite magnetic flux between the wires, the FCI, aFCI, and CDW operators transform as O_ FCI^j ∼ g_ FCI(ψ_j,R^†ψ_j,L)^p(ψ_j+1,R^†ψ_j+1,L)^pψ_j,R^†ψ_j+1,L+ h.c. =g_ FCIcos[m(ϕ_j+ϕ_j+1)-θ_j+θ_j+1+bx+2mk_Fx], O_ aFCI^j ∼ g_ aFCI(ψ_j,R^†ψ_j,L)^p(ψ_j+1,R^†ψ_j+1,L)^pψ_j+1,R^†ψ_j,L+ h.c. =g_ aFCIcos[m(ϕ_j+ϕ_j+1)+θ_j-θ_j+1-bx+2mk_Fx], O_ CDW^j ∼ g_ CDW(ψ_j,R^†ψ_j,L)^m+ h.c. =g_ CDWcos(2mϕ_j+2mk_Fx). At filling ν_b=0 = (m-2l)/m, with l an integer, the 2mk_Fx factor in all three terms effectively vanishes, and the corresponding phases are commensurate. (Notice only l∈[-p, p] are relevant here, since our model is restricted to ν∈[0,2].) With finite magnetic flux per unit cell, Φ=2adB, the commensuration condition for the CDW remains unaltered. However, for the fractional Chern phases this condition migrates, ν^*_ FCI/aFCI = ν_b=0±(1/m)(Φ/Φ_0). From the well-known Streda formula, ∂ n/∂ B= C/Φ_0, one confirms that the Chern numbers of the FCI and aFCI phases are 1/m and -1/m, respectively. At a given filling factor ν and magnetic flux Φ, we may define the deviation from commensuration δν = ν-ν_b=0. At finite deviation and/or magnetic field, the cosines in Eqs. (<ref>)–(<ref>) may oscillate along the direction of the wires. The spatial period of the oscillations depends of course on δν and Φ. Within the RG treatment, it is a reasonable approximation <cit.> to treat this period length as the length scale at which the corresponding cosine is cut off, and the system realizes the incommensurability. Recalling the short-distance cutoff α=α_0e^ℓ, we approximate the thresholds at which the different multi-particle terms are cut off as ℓ_ FCI^*= -ln[m/2(δν-1/mΦ/Φ_0)], ℓ_ aFCI^*= -ln[m/2(δν+1/mΦ/Φ_0)], ℓ_ CDW^*= -ln(m/2δν), which were obtained by setting α_0≈a/2π. In order to introduce the incommensurability cutoff in a smooth way, we introduce the functions <cit.> c_i(ℓ) = [1+e^γ(ℓ-ℓ_i^*)]^-1, where γ sets the smoothness of the transition. At ℓ≫ℓ_i^*, this function vanishes exponentially fast. In the opposite limit, c_i tends to unity. We use c_i in the RG equations to cut off the effect of the incommensurate terms at a finite RG time. For completeness, the full set of RG equations is given by d/dℓy_ FCI =(2-d_ FCI+m/2K_ρK_σ^-1Ṽ)y_ FCI, d/dℓy_ aFCI =(2-d_ FCI-m/2K_ρK_σ^-1Ṽ)y_ aFCI, d/dℓy_ CDW =(2-d_ CDW)y_ CDW, d/dℓy_ϕ =(2-2K_σ)y_ϕ, d/dℓK_ρ^-1 =m^2/2(c_ FCI(ℓ)y_ FCI^2+c_ aFCI(ℓ)y_ aFCI^2+c_ CDW(ℓ)y_ CDW^2), d/dℓK_σ =1/2(c_ FCI(ℓ)y_ FCI^2+c_ aFCI(ℓ)y_ aFCI^2)-K_σ^2(m^2/2c_ CDW(ℓ)y_ CDW^2+2y_ϕ^2), d/dℓṼ =m(c_ FCI(ℓ)y_ FCI^2-c_ aFCI(ℓ)y_ aFCI^2). § GENERALIZED BKT EQUATIONS When d_ FCI≈2 and all other coupling coefficients remain approximately stationary during the RG flow, the FCI–aFCI competition is at its strongest. In that limit, the competition may be compactly described using just the following three flow equations, d/dℓy_ FCI=m/2K_ρ/K_σṼy_ FCI, d/dℓy_ aFCI=-m/2K_ρ/K_σṼy_ aFCI, d/dℓṼ=m(y_ FCI^2-y_ aFCI^2).
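Before treating these three equations analytically, one may integrate them directly. The minimal sketch below (all parameter values are illustrative assumptions) shows y_ FCI running away at a finite RG time while y_ aFCI is suppressed, and checks numerically that the product y_ FCI y_ aFCI is conserved along the flow, a fact exploited in the next section:

import numpy as np
from scipy.integrate import solve_ivp

m, K_rho, K_sig = 3, 0.2, 2.0          # illustrative parameter values

def bkt_rhs(l, s):
    y_fci, y_afci, V = s
    pref = 0.5 * m * K_rho / K_sig     # the combination (m/2) K_rho K_sigma^-1
    return [pref * V * y_fci, -pref * V * y_afci, m * (y_fci**2 - y_afci**2)]

def runaway(l, s):                     # y_fci diverges at finite l; stop early
    return 10.0 - s[0]
runaway.terminal = True

sol = solve_ivp(bkt_rhs, (0.0, 500.0), [0.10, 0.06, 0.0],
                events=runaway, rtol=1e-10, atol=1e-12)
print("y_FCI runs away near l =", sol.t[-1], "; y_aFCI =", sol.y[1, -1])
print("invariant y_FCI*y_aFCI, start vs end:",
      sol.y[0, 0] * sol.y[1, 0], sol.y[0, -1] * sol.y[1, -1])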
Let us rescale the coefficients to a more recognizable form, y_1=m√(K_ρ/2K_σ)y_ FCI, y_2=m√(K_ρ/2K_σ)y_ aFCI, x=m/2K_ρ/K_σṼ, so that we may write, d/dℓy_1=xy_1, d/dℓy_2=-xy_2, d/dℓx=y_1^2-y_2^2. Clearly, taking either y_1 or y_2 →0 recovers a simple Berezinskii-Kosterlitz-Thouless (BKT) sort of RG flow, which is well understood. However, the equations above are slightly more complicated. We begin tackling these equations by identifying two integrals of motion, A=y_1y_2, B=x^2-y_1^2-y_2^2, which remain invariant under the RG flow. Since we are most interested in the pure FCI–aFCI competition, we focus on the case where the initial value of Ṽ (or x) is zero. Using the integrals of motion, we find that throughout the RG evolution, y_1,0y_2,0=y_1y_2, -y_1,0^2-y_2,0^2=x^2-y_1^2-y_2^2, where y_i,0 are the initial values of the coupling constants. After some straightforward manipulation we obtain (xy_1)^2=(y_1^2-y_1,0^2)(y_1^2-y_2,0^2). Using this relation in the first equation of (<ref>), we have reduced the flow of y_1 to a single differential equation, d/dℓy_1=√((y_1^2-y_1,0^2)(y_1^2-y_2,0^2)). Recovering the scale ℓ^∞, where y_1→∞, we find ℓ^∞=∫_y_1,0^∞dy_1/√((y_1^2-y_1,0^2)(y_1^2-y_2,0^2))=r/y_1,0 Re[K(r^2)], where K(m) is the complete elliptic integral of the first kind with parameter m=k^2 (not to be confused with the multiparticle index m), and we have defined the ratio r≡y_1,0/y_2,0. Notice we are always concerned with the case r≥ 1, since the aFCI phase cannot triumph over the FCI. Having found the RG time at the divergence of y_1, we may evaluate the energy scale of the gap that opens when y_1 flows to strong coupling by Δ_ FCI=Λ_0exp(-ℓ^∞), with Λ_0 the initial cutoff energy scale. In terms of the parameter r=y_ FCI,0/y_ aFCI,0, there exist two particular limits of interest. First, if the inhibitory y_ aFCI does not exist (maximally chiral limit), or starts off significantly smaller compared to y_ FCI, r→∞ and Δ_ FCI∝exp(-π/m√(K_σ/2K_ρ)1/ y_ FCI,0). Notice the dependence on 1/y_ FCI,0 in the exponential, which is the familiar BKT form. In the other interesting limit, y_ FCI,0 and y_ aFCI,0 start off at almost the same value, r→1. Expanding in the deviation of the initial ratio from unity, one finds Δ_ FCI∝((r-1)/8)^√(K_σ/(2K_ρ))/(my_ FCI,0). This expression possesses a similar dependence on y_ FCI,0 in the exponent as above, yet is further suppressed by the small base of the exponent. It is instructive to employ the definition ξ=2d/log r in the last expression, and to obtain (in the appropriate r→1 or ξ→∞ limit) Δ_ FCI∝(d/4ξ)^√(K_σ/(2K_ρ))/(my_ FCI,0). We further emphasize that ξ is intimately connected to the violation of the so-called trace condition far from ideality, see Eq. (<ref>). Thus, in this strong-competition regime, we have directly shown how the FCI gap is suppressed as a result of “poor” quantum geometry. § STRONG COUPLING Let us write the full Hamiltonian as H = ∫ dx [ H_0 + H_ f.s. + H_ FCI+ H_ aFCI+ H_ CDW], where we express the different terms as H_0=1/2π∑_j[(u+V^0)(∂_xϕ_j)^2+(u-V^0)(∂_xθ_j)^2], H_ f.s.=1/2π∑_j≠ k[∂_xϕ_jV_ϕ^|j-k|∂_xϕ_k+∂_xθ_jV_θ^|j-k|∂_xθ_k], H_ FCI=g_ FCI/2π^2∑_jcos[m(ϕ_j+ϕ_j+1)+θ_j-θ_j+1], H_ aFCI=g_ aFCI/2π^2∑_jcos[m(ϕ_j+ϕ_j+1)-θ_j+θ_j+1], H_ CDW=g_ CDW/2π^2∑_jcos(2mϕ_j). As usual, the fields ϕ_j,θ_j correspond to the bosonized fields on the j-th wire. In the above we have assumed translation invariance, as well as conservation of time-reversal symmetry by the forward-scattering part of the interaction.
It is instructive to make an intermediate step and define the following chiral operators, φ_j^R/L=θ_j/m±ϕ_j, which obey the commutation relation [φ_i^r(x),φ_j^r'(x')]=iπ/mrδ_rr'δ_ij sgn(x-x'). In terms of these chiral operators, the Hamiltonian is H_0+ H_ f.s. = mv/4π∑_j[(∂_xφ_j^R)^2+(∂_xφ_j^L)^2]-Ṽ^0/4π∑_j∂_xφ_j^R∂_xφ_j^L +1/4π∑_j≠ k∂_xφ_j^rV_rr'^|j-k|∂_xφ_k^r' H_ FCI=g_ FCI/2π^2∑_jcos[m(φ_j^R-φ_j+1^L)] H_ aFCI=g_ aFCI/2π^2∑_jcos[m(φ_j^L-φ_j+1^R)], H_ CDW=g_ CDW/2π^2∑_jcos[m(φ_j^L-φ_j^R)], with the re-defined constants v=[(1+m^2)u+(1-m^2)V^0]/2m, Ṽ^0=(1-m^2)u+(1+m^2)V^0, V_rr'^|i-j|=V_ϕ^|i-j|/2(2δ_rr'-1)+m^2V_θ^|i-j|/2. Taken together with the chiral operators' commutation relations, Eq. (<ref>), we may interpret the Hamiltonian in a different way. Each wire has been effectively transformed into a narrow fractional quantum Hall strip analogous to filling ν=1/m, whose chiral edge states have the velocity v. The constants Ṽ^0 and V_rr'^|i-j| determine a forward-scattering interaction Hamiltonian operating between these chiral edge states throughout the system. The multiparticle backscattering terms now couple neighboring edge states with m-particle processes. Once more, unlike the fractional quantum Hall case, the coupling is not entirely chiral: H_ FCI competes with H_ aFCI and H_ CDW. Due to this competition, a gapped phase which is not compatible with the ν=1/m fractional quantum Hall effect may form. We note that one may define the following quasiparticle operators <cit.>, Ψ_ QP,j^R/L∼ e^iφ_j^R/L, which are not physical operators by themselves (they cannot be built out of the local electron operators). However, in the gapped FCI phase, it can be shown that these quasiparticles possess fractional abelian statistics, by constructing local operators that transfer quasiparticles through the system <cit.>. Finally, one may construct the fermionic operators Ψ_j^R/L∼ e^imφ_j^R/L, in terms of which the cosine terms in the Hamiltonian are tunneling processes of fermions between the edge states in different quantum Hall strips. The fact that these are indeed fermionic operators may be easily understood by considering the commutation relations, [mφ_i^r(x),mφ_j^r'(x')]=iπ mrδ_rr'δ_ij sgn(x-x')=(2n+1)iπ rδ_rr'δ_ij sgn(x-x'), which differ only by an integer multiple of 2π from the commutation relations of the original chiral bosonic operators in terms of which the bosonization of the bare electronic Hamiltonian was performed. As opposed to the bare electronic operators, which have a scaling dimension of 1/2, these fermionic operators have an enlarged scaling dimension of m/2, a characteristic of the chiral Luttinger liquid at fractional quantum Hall edges. Let us now consider the strong-coupling limit of the Hamiltonian (<ref>), where some (or all) of the multiparticle terms dominate all other energy scales in the problem. Denoting the strong-coupling value of g_i/(π) as G_i, we write the Hamiltonian density as H_ strong = ∑_j [ iṽ(Ψ_j,R^†∂_xΨ_j,R-Ψ_j,L^†∂_xΨ_j,L)+G_ FCIΨ_j,R^†Ψ_j+1,L+G_ aFCIΨ_j,L^†Ψ_j+1,R+G_ CDWΨ_j,R^†Ψ_j,L+ h.c.]+…, where we have included a linear dispersion along the wires for these chiral fermions with some renormalized velocity ṽ for concreteness. The … represent subdominant interaction terms which cannot open a spectral gap. H_ strong is readily diagonalized, with the spectrum E_ strong=±√((ṽk_x)^2+[G_ CDW+(G_ FCI+G_ aFCI)cos k_y]^2+(G_ FCI-G_ aFCI)^2sin^2k_y). Here, k_x (k_y) is the momentum in the longitudinal (transverse) direction.
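A quick numerical evaluation of this spectrum (with purely illustrative coupling values) locates the spectral gap by minimizing over the transverse momentum, and shows the gap closing at G_ FCI=G_ aFCI:

import numpy as np

def E_strong(kx, ky, v=1.0, G_fci=0.42, G_afci=0.38, G_cdw=0.30):
    # upper branch of the strong-coupling spectrum quoted above
    return np.sqrt((v * kx)**2
                   + (G_cdw + (G_fci + G_afci) * np.cos(ky))**2
                   + ((G_fci - G_afci) * np.sin(ky))**2)

ky = np.linspace(-np.pi, np.pi, 20001)
print("gap:", 2 * E_strong(0.0, ky).min())         # finite for G_fci != G_afci
print("gap at G_fci = G_afci:",
      2 * E_strong(0.0, ky, G_fci=0.40, G_afci=0.40).min())  # ~0: critical point

For nearly equal couplings, i.e., the z≪1 regime relevant below, the numerical gap agrees with the closed-form expression given next.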
In the regime where the CDW is subdominant to the FCI phases, G_ CDW≤ G_ FCI+G_ aFCI, the spectral gap is Δ E_ strong=2|G_ FCI-G_ aFCI|√(1-[G_ CDW/(G_ FCI+G_ aFCI)]^2). If we parameterize in a similar way to the discussion in Sec. <ref>, 2G_ F^2=G_ FCI^2 + G_ aFCI^2, 2G_ F^2z=G_ FCI^2 - G_ aFCI^2, and expand away from optimal quantum geometry (z ≪ 1), we may re-write the gap expression as Δ E_ strong = z√(G_ F^2-G_ CDW^2) +O(z^3). The strong-coupling expression reveals that the many-body gap relates directly to the competition between the FCI and the disruptive aFCI phase, with the gap vanishing linearly in their difference. We emphasize again that the relative strength of the anomalous G_ aFCI (or the magnitude of z) is related to the quantum geometry of the parent Chern band. Thus, we establish the connection between quantum geometry and the stabilization of the FCI phase in our model even in the strong-coupling limit. § FURTHER IMPLICATIONS OF THE COUPLED WIRES CONSTRUCTION §.§ FCI promotion by periodic modulation Consider a periodic modulation of the density, such that the density on even wires is ν_ frac.+δν_ mod., whereas on odd wires it is ν_ frac.-δν_ mod.. The CDW part of the Hamiltonian will now read H_ CDW = g_ CDW/2π^2cos(m√(2)ϕ_ρ) cos(m√(2)ϕ_σ+mπ/aδν_ mod. x) +g_ϕ/2π^2cos(√(8)ϕ_σ+2π/aδν_ mod. x). Similar to our modifications leading to the RG flow in Eq. (<ref>), the flow equations associated with terms quadratic in g_ CDW and g_ϕ now acquire the respective multiplicative constants c_ CDW(ℓ) = (1+e^γ(ℓ+log[(m/2)δν_ mod.]))^-1, c_ϕ(ℓ) = (1+e^γ(ℓ+logδν_ mod.))^-1. In Fig. <ref> we demonstrate the effect of the density modulation on the phase diagram. Namely, the modulation leads to dephasing and destabilization of the CDW phase at shorter and shorter scales. In turn, this leads to promotion of the FCI, and its stabilization over larger areas of parameter space. Thus, our coupled-wires model points at some interesting opportunities in lattice and band engineering, if one aims to achieve a robust FCI phase. §.§ CDW stabilized by a magnetic field Recent experiments in moiré graphene heterostructures <cit.> have observed a peculiar trend, where a CDW or Wigner crystal phase is stabilized at fractional band filling by applying a perpendicular magnetic field. Surprisingly, a similar effect may be observed within our model in a certain parameter regime. Fixing the density at ν_ frac., in the presence of finite magnetic flux the RG-time thresholds are ℓ^*_ FCI=ℓ^*_ aFCI=-ln|Φ/2Φ_0|, whereas there is no threshold for the CDW term. Application of a magnetic field at the appropriate fractional density thus renders the FCI phases incommensurate, effectively cutting them off at shorter length scales, which may lead to CDW stabilization. In a sense, it is the analogous effect to the one described in the previous section – now the magnetic field “dephases” the FCI and aFCI, potentially promoting the CDW. As illustrated in Fig. <ref>, application of a magnetic field stabilizes the CDW at the expense of the FCI phase. The CDW gap itself gradually increases with magnetic field, suggesting that our proposed model may help identify the cause for magnetic-field-induced stabilization of Wigner crystals at fractional filling.
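The incommensurability mechanism behind this effect is compactly summarized by the thresholds and smooth cutoffs themselves; a minimal sketch (the values of m, γ, and the fluxes are illustrative choices) shows that at δν=0 the two fractional terms are cut off at ℓ^* = -ln|Φ/2Φ_0| while the CDW never is:

import numpy as np

m, gamma = 3, 2.0                       # illustrative index and cutoff smoothness

def l_star(dnu, phi, kind):             # phi = Phi/Phi_0, flux quanta per unit cell
    arg = {"FCI": dnu - phi / m, "aFCI": dnu + phi / m, "CDW": dnu}[kind]
    return np.inf if arg == 0 else -np.log(abs(0.5 * m * arg))

def c(l, lstar):                        # smooth cutoff entering the RG equations
    return 1.0 if np.isinf(lstar) else 1.0 / (1.0 + np.exp(gamma * (l - lstar)))

for phi in (0.02, 0.05, 0.10):          # growing flux cuts off the FCI terms sooner
    print(f"Phi/Phi_0 = {phi}: l*_FCI = {l_star(0.0, phi, 'FCI'):.2f}, "
          f"l*_CDW = {l_star(0.0, phi, 'CDW')}")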
http://arxiv.org/abs/2405.09508v1
20240515170102
Modeling Bilingual Sentence Processing: Evaluating RNN and Transformer Architectures for Cross-Language Structural Priming
[ "Bushi Xiao", "Chao Gao", "Demi Zhang" ]
cs.CL
[ "cs.CL", "cs.LG" ]
Modeling Bilingual Sentence Processing: Evaluating RNN and Transformer Architectures for Cross-Language Structural Priming =================================================================== This study evaluates the performance of Recurrent Neural Network (RNN) and Transformer models in replicating cross-language structural priming—a key indicator of abstract grammatical representations in human language processing. Focusing on Chinese-English priming, which involves two typologically distinct languages, we examine how these models handle the robust phenomenon of structural priming, where exposure to a particular sentence structure increases the likelihood of selecting a similar structure subsequently. Additionally, we utilize large language models (LLMs) to measure the cross-lingual structural priming effect. Our findings indicate that Transformers outperform RNNs in generating primed sentence structures, challenging the conventional belief that human sentence processing primarily involves recurrent and immediate processing and suggesting a role for cue-based retrieval mechanisms. Overall, this work contributes to our understanding of how computational models may reflect human cognitive processes in multilingual contexts. § INTRODUCTION Existing studies show that RNNs, particularly Gated Recurrent Unit (GRU) models, have been pivotal in modeling human sentence processing (for a review, see ). They can be used to explain phenomena like garden-path effects and structural priming. These models process sequential information through recurrence, a characteristic thought to resemble human cognitive processing. However, the Transformer model, which uses self-attention mechanisms instead of recurrence, challenges this notion. The Transformer's ability to directly access past input information, regardless of temporal distance, offers a fundamentally different approach from RNNs. The effectiveness of Transformers and recent large language models (LLMs) in various natural language processing (NLP) tasks raises the question of whether they can match RNNs in modeling cross-language structural priming. Prior psycholinguistic studies show that exposure to a specific sentence structure in one language influences the use of similar structures in another language. Existing studies have demonstrated RNNs' proficiency in capturing properties of human syntactic processing in multilingual contexts, exhibiting structural priming effects akin to those observed in human bilinguals <cit.>. However, the ability of Transformers and LLMs in this domain remains unexplored. § LITERATURE REVIEW Structural priming refers to the phenomenon where encountering a specific syntactic structure boosts the probability of generating or understanding sentences with a comparable structure <cit.>. It is possible to identify the presence of structural priming effects in language models that are analogous to those observed in human language processing <cit.>. The manifestation of such priming effects in language models suggests that they develop implicit syntactic representations that may be akin to those employed by the human language system, according to <cit.>. This phenomenon is abstract and not widely known, so in this paper we aim to demonstrate it through quantitative measures in designed experiments. Some researchers have already conducted studies in this field. <cit.> indicates the technical feasibility of RNN models capturing cross-language syntax by calculating word surprisal values.
Meanwhile, a study conducted by <cit.> demonstrates that the grammatical representations encoded in large language models exhibit a remarkable degree of similarity across languages. However, no study has yet compared RNN, Transformer, and large language models in their ability to model cross-language structural priming. This is one of the reasons why we are undertaking this study. In the meantime, research on crosslingual structural priming has also made progress. <cit.> offer evidence for a structural source of priming between German and English. <cit.> suggest that structures that are identical in the Korean and English languages have a single, shared mental representation. However, there is little evidence for this phenomenon across two entirely different language families (English: Indo-European family; Chinese: Sino-Tibetan family), and we believe there are still clues to be found in these two languages. § DATA PREPARATION We have selected and processed a Chinese-English corpus, which contains millions of pairs of Chinese and English parallel texts. The source can be found https://drive.google.com/file/d/1EX8eE5YWBxCaohBO8Fh4e2j3b9C2bTVQ/view?pli=1here. We employ a DataLoader to facilitate batch processing, transforming text into token IDs suitable for model interpretation. We utilize the Helsinki-NLP tokenizer, specifically designed for Chinese-to-English mapping; the Helsinki-NLP family accommodates over a thousand models for diverse language pairs. The tokenizer by default processes text according to source-language settings. To encode target-language text, the context manager as_target_tokenizer() must be used. Without this, the source-language tokenizer would be applied incorrectly to the target text, leading to poor tokenization results, such as improperly splitting words unrecognized in the source language. For sequence-to-sequence models, it is essential to set padding tokens to -100 to ensure they are ignored during loss calculations. This setup is crucial for training the models effectively, allowing for precise adjustment of model parameters based on the tokenized input and target sequences. This preprocessing step ensures that the data fed into the model is properly formatted, facilitating optimal training outcomes. As for the test set, we designed and collected it ourselves. The data is structured into a five-column format, where the first column consists of Chinese sentences, followed by four English sentences. These sentences are categorized into two distinct groups based on their semantic and syntactic properties relative to the Chinese sentences. We collected four types of diverse sentence structures (Double Object, Prepositional Object, Active Voice, Passive Voice) to form the test set. The first category contains English sentences that semantically match the Chinese sentences. In this category, the first English sentence maintains the same syntactic structure as its Chinese counterpart, while the second sentence presents a different syntactic arrangement. For instance, the pair "This cake was burnt by mom." and "Mom burnt this cake." illustrates this category, where the first sentence is in passive voice (mirroring the structure of the Chinese sentence) and the second is in active voice. The second category comprises English sentences that do not semantically align with the Chinese sentences. Despite the semantic disparity, one sentence in each pair preserves the syntactic form of the original Chinese sentence (e.g., both are in passive voice), and the other varies syntactically.
This arrangement allows us to investigate the influence of semantic mismatches on syntactic priming, providing insights into whether structural similarities override semantic discrepancies in cognitive processing. This dataset will serve as the foundation for our experiments on structural priming, aiming to explore how linguistic structure and semantics influence sentence formulation in bilingual speakers. § LANGUAGE MODELS As depicted in part b of Fig. <ref>, we implemented both a Transformer model and an RNN model to handle sequence-to-sequence tasks using the encoder-decoder architecture. This architecture enables us to process input sequences of varying lengths and generate output sequences of different lengths, which is crucial for attending to sentences with different structures yet similar meanings. In this section, we delve into why these language models can assist us in identifying structural priming. §.§ Transformer In the Transformer model, we employ the self-attention mechanism to capture the sentence structure; it models dependencies between different positions and adjusts the representation of each word based on its relationship with others, thus facilitating the learning of sentence structure. We can describe this mechanism with: Attention(Q, K, V) = softmax(QK^T/√(d_k))V where Q, K, V are obtained through linear transformations of an input sequence of text, each with its own learnable weight matrix. In the encoder part of the model, Q, K, V come from the same source sequence, while in the decoder part, Q comes from the target sequence, and K and V come from the output sequence of the encoder. Since the computation of Q, K, and V involves the entire input sentence, the model can simultaneously focus on all positions and capture the structure of the sentence. Additionally, in the decoder part, employing multiple attention heads allows us to capture diverse levels of sentence features, so that we can obtain a more comprehensive representation of sentence structure. Each attention head specializes in capturing specific semantic relationships, such as word dependencies and distance relationships. This approach enhances the model's ability to comprehend the intricacies of sentence structure. The equation is as follows: MH(Q, K, V) = Concat(head_1, …, head_h) · W^O where W^O is the weight matrix we need to train. head_1, …, head_h are computed through the attention equation <ref> and represent the attention outputs of each head; in our project we choose to use 8 heads. Concat is the operation of concatenating tensors along their last dimension. Furthermore, we also pay attention to selecting the positional encoding. While the common method involves fixed sine and cosine functions, in our project we opt for the learnable positional embedding from the Hugging Face Transformers library. We believe this approach offers more advantages for learning structural priming, because it allows our model to better understand and encode the relative positions of words within a sentence. In contrast to fixed positional encoding, learnable positional embeddings assign different weights to different positions, emphasizing the relevant positional information that contributes to the priming effect. This enables the model to capture more intricate positional relationships and dependencies specific to the task of structural priming.
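A minimal PyTorch sketch of the scaled dot-product attention described above (the tensor shapes and the 8-head layout mirror our setup; this is an illustration rather than our full training code):

import torch
import torch.nn.functional as F

def attention(Q, K, V, mask=None):
    # softmax(Q K^T / sqrt(d_k)) V, applied per head
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k**0.5
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ V

# 8 heads as in our project: (batch, heads, seq_len, d_head)
Q = K = V = torch.randn(2, 8, 10, 64)
out = attention(Q, K, V)   # -> (2, 8, 10, 64); the heads are then
                           # concatenated and projected by W^O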
§.§ Recurrent Neural Network We believe that recurrent neural networks (RNNs) can also preserve the sentence structure and help us identify structural priming, because their sequential nature processes each input token with a contextual understanding of the sentence so far. As each token is processed, the hidden state of the RNN is updated, retaining information about the preceding tokens and their contextual relevance. This sequential processing enables the model to capture dependency relationships between words, thereby preserving the structural integrity of the sentence, and we can summarize the computation as follows: State(dh_i, c_i), p = f(State(dh_i-1, c_i-1), m) where the function f denotes the hidden layer of the RNN model, which is a neural network. It takes the previous state State(dh_i-1, c_i-1) and the output vector m from the previous time step as input, and outputs the next state State(dh_i, c_i) and prediction value p, until it encounters the termination symbol. In the state, dh signifies the hidden state of the RNN unit in the decoder, tasked with capturing pertinent information gleaned from the input sequence. In the initial decoder step, it embodies the ultimate output state of the encoder, while in subsequent decoder steps, it denotes the output of the previous RNN unit. In addition, to address the limitation of not being able to remember the entire sentence structure, we introduce the attention mechanism. The attention mechanism allows the RNN model to pay more attention to the parts of the input sequence that are most relevant to the current output, thereby improving the accuracy of prediction. It has the potential to be applied to structural priming prediction because of its ability to capture dependencies in sequence data and to exploit these dependencies for prediction. As shown in equation <ref>, we use c to represent the attention context. The calculation of c is as follows: α_i = g(eh_i, dh_0) As mentioned before, dh_0 denotes the ultimate output state of the encoder, and eh signifies the hidden state of each RNN unit in the encoder; the function g is used to calculate the weight α_i of eh_i with respect to the final state dh_0. As a result, we can obtain the attention context c by combining the encoder states: c_i = ∑(α_i * eh_i) calculated by summing the products of the weights α_i and the encoder hidden states eh_i. § EXPERIMENTAL SETUP To assess the quality of the performance of our models on Chinese-English, we adopted the standard bilingual evaluation understudy (BLEU) algorithm. The BLEU score ranges from 0 to 1, and indicates the similarity of predicted text against target text. BLEU = BP·exp(∑_n=1^N w_n log p_n) Where: N is the maximum n-gram order (typically 4). w_n is the weight assigned to each n-gram precision score (∑_n=1^N w_n = 1). p_n is the precision score for n-grams of order n. BP is the brevity penalty, which penalizes overly short outputs. After generating predictions and assembling the test set, we analyzed the relationship between the predicted sentences and four different types of reference sentences: (1) correct mappings with the same structure, (2) semantically similar but structurally different sentences, (3) semantically different but structurally identical sentences, and (4) sentences that are different both semantically and structurally. We categorized the comparisons into two distinct groups based on semantic similarity.
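For concreteness, the BLEU comparison against the different reference types can be sketched with NLTK as follows (the sentences and whitespace tokenisation are illustrative, and the smoothing function is one common way to handle missing higher-order n-grams):

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

prediction = "this cake was burnt by mom".split()
ref_same_structure = ["this cake was burnt by mom".split()]   # passive, matching
ref_diff_structure = ["mom burnt this cake".split()]          # active, mismatching

smooth = SmoothingFunction().method1
for n in range(1, 5):
    w = tuple(1.0 / n for _ in range(n))   # uniform weights up to order n
    print(n,
          sentence_bleu(ref_same_structure, prediction,
                        weights=w, smoothing_function=smooth),
          sentence_bleu(ref_diff_structure, prediction,
                        weights=w, smoothing_function=smooth))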
In the first category, encompassing sentences with identical meanings, we hypothesized that if structural priming is effective, the BLEU scores between the predicted sentences and the reference sentences with the same structure should be higher than those with different structures. This comparison aimed to establish whether the model exhibits a preference for reproducing structures that are syntactically aligned with the ground truths when the semantic content remains constant. The second category involved sentences that differed in meaning. This category is particularly crucial for demonstrating structural priming, as the influence of semantic similarity is negated. Here, a higher BLEU score for sentences with identical structures compared to those with different structures would strongly indicate that the model's predictions are influenced by the structural aspects of the input, irrespective of semantic changes. Through this methodology, we sought to rigorously test the presence of structural priming in outputs, offering insights into how structural properties of language are processed and replicated by the models. § RESULTS ANALYSIS We present the performance of the GRU-based RNN and the vanilla Transformer model we trained for the tasks, and then demonstrate their crosslingual structural priming effect in Chinese-English bilingual scenarios. We also present insights into the responses of open-source large language models on the same dataset. §.§ Structural Priming Performance Our comparative analysis highlights that while both models achieve competitive BLEU scores, the Transformer model demonstrates a slight advantage in handling complex sentence structures. As depicted in Fig. <ref>, we can observe that, when the training data is sufficiently large, the predicted BLEU scores of both models relative to the standard structured sentence segments reached relatively high levels. §.§ Crosslingual Structural Priming Effect Through our examination of crosslingual structural priming, we observed a noteworthy pattern: both models facilitated the use of target-language syntactic structures influenced by the source language. However, the Transformer model displayed a more pronounced priming effect, indicating a potential edge in mimicking human-like syntactic adaptation in bilingual contexts. As depicted in Fig. <ref> and Fig. <ref>, by comparing the BLEU scores of machine-generated predictions with both correct and opposite priming test sets, we gain insights into model performance. Specifically, we evaluate the similarity levels between the model predictions and the correct priming test sets (e.g., Active-Active, DO-DO) as well as the opposite priming test sets (e.g., Active-Passive, PO-DO). Higher BLEU scores against the correct priming test sets indicate that the model predictions align more closely with the appropriate structural priming, while higher scores against the opposite priming test sets suggest the model deviates from the expected priming behavior. The experimental results reveal that when evaluated against the correct priming test sets, the Transformer model exhibits similar levels to the GRU, with slight improvements being observed as the n-gram size increases. Conversely, in comparison to opposite priming, the GRU generally scores higher than the Transformer. Given that this comparison involves what is termed "incorrect" priming, where the GRU aligns more closely with the opposite priming test set, we infer that the Transformer adheres more closely to the appropriate structural priming.
In a previous study, <cit.> examined the presence of structural priming by comparing the proportion of target sentences produced after different types of priming statements. Similarly, for each experimental item, we prime the language model with the priming sentence and calculate the normalized probabilities of the two target sentences. We calculate these normalized probabilities in the following way: First, calculate the raw probability of each target sentence given the priming sentence: P(DO Target | DO Prime) P(PO Target | PO Prime) P(DO Target | PO Prime) P(PO Target | DO Prime) and similarly: P(Active Target | Active Prime) P(Passive Target | Passive Prime) P(Active Target | Passive Prime) P(Passive Target | Active Prime) Then, these probabilities are normalized to calculate the conditional probability of each target sentence given that the model output is one of the two target sentences; taking DO/PO as an example: P_N(Target | Prime) = P(Target | Prime)/(P(DO Target | Prime) + P(PO Target | Prime)) Since the sum of the normalized probabilities of the two target sentences is 1, we only need to consider the probability of one target type and compare between different priming types, because the probability of the other target type can be derived from it, i.e., P_N(DO Target | Prime) = 1 - P_N(PO Target | Prime). By considering only one target type, we can directly compare the priming effects of the two priming types on that target type, which is the main focus of analysis in structural priming research. The quantitative comparative findings depicted in Fig. <ref>, derived at the sentence-chunk level, reveal that the Transformer model generally outperforms the GRU. Comparing across priming structure types, it is evident that machine predictions exhibit superior performance with respect to active/passive structures compared to PO/DO. The Transformer model we trained was only exposed to the Chinese-English dataset. It has been demonstrated that LLMs can mimic the human structural priming effect in various scenarios – both within-language and crosslingual experiments. However, evidence for such multilingual language models showing a Chinese-English structural priming effect has been missing. We adopted the XGLM models proposed by <cit.> and calibrated their outcomes based on the normalized score defined above on the same set of tasks we designed for the RNN and Transformer. Among the four categories in our experiments, we found this language model family is more sensitive in showing a structural priming effect in passive and prepositional tasks (see Fig. <ref>), with the effect more noticeable in the former case. § DISCUSSION AND CONCLUSION The current study evaluates the cross-language structural priming effect in RNN and Transformer models in the Chinese-English setting. We find evidence for abstract crosslingual grammatical representations in the models that function similarly to those found in previous studies. §.§ Interpretation of Results Our results demonstrate a decrease in BLEU scores with increasing n-gram length, a trend that is consistent with existing findings in sentence-similarity evaluation <cit.>. As n-grams become longer (from unigrams to bigrams and trigrams), they encapsulate more specific linguistic contexts, making exact matches less likely unless the target sentence is highly precise. Moreover, any minor errors in word choice or sequence can disrupt the alignment of these longer n-grams.
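Returning briefly to the measurement itself, the normalized probabilities defined above can be computed with any open-source causal language model. The sketch below uses a small public XGLM checkpoint; the checkpoint choice and the whitespace joining of prime and target are simplifying assumptions, and the tokenisation boundary between prime and continuation is handled only approximately:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M").eval()

def log_p(prime, target):
    # log p(target | prime): sum of token log-probs of the continuation
    ctx = tok(prime, return_tensors="pt").input_ids
    full = tok(prime + " " + target, return_tensors="pt").input_ids
    with torch.no_grad():
        logp = model(full).logits.log_softmax(-1)
    return sum(logp[0, i - 1, full[0, i]].item()
               for i in range(ctx.size(1), full.size(1)))

def p_norm(prime, target_do, target_po):
    # normalized probability of the DO target, as defined above:
    # softmax over the two log-probs equals P(a) / (P(a) + P(b))
    scores = torch.tensor([log_p(prime, target_do), log_p(prime, target_po)])
    return torch.softmax(scores, dim=0)[0].item()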
Importantly, our results indicate that the Transformer outperforms the RNN in modeling Chinese-English structural priming, a finding that is intriguing given prior research. Traditionally, RNNs have been effective in modeling human sentence processing, capable of explaining phenomena such as garden-path effects and structural priming through their sequential processing capabilities, which are thought to mirror aspects of human cognitive processing <cit.>. This superiority of transformers raises questions about the efficacy of RNNs as models of human sentence processing, especially when they are surpassed by a model considered less cognitively plausible. However, it is possible to interpret the results as supportive of the cognitive plausibility of transformers, particularly due to their attention mechanism. While the concept of unlimited working memory in transformers is viewed as implausible, some researchers argue that actual human working memory capacity is much smaller than traditionally estimated (limited to only two or three items) and that language processing involves rapid, direct-access retrieval of items from memory <cit.>, a process compatible with the attention mechanism in transformers. This mechanism assigns weights to previous inputs based on their relevance to the current input and aligns with cue-based retrieval theories, which suggest that memory retrieval is influenced by the similarity of current cues to stored information <cit.>. §.§ Limitations Moreover, it is important to note that our study does not include a comparison with human data on Chinese-English priming. We equate the models' ability to replicate cross-language priming with the structural "correctness" of their outputs, yet empirical studies indicate that even humans do not achieve a full priming rate <cit.>. Therefore, it is conceivable that if the models' outputs were compared directly to human data, RNNs might more closely resemble human performance. This limitation highlights an area for future research, which could involve direct comparisons to human priming data to better assess the models' fidelity to human language processing. A further limitation of our study is that our models are not capable of generating sentences based on novel word concepts and thematic roles, which is a task typically performed by human participants in priming studies. In such studies, participants are exposed to a specific sentence structure and then asked to describe a new event, depicted in an image, using that structure <cit.>. Consequently, some critics may argue that what our models essentially do is translate from Chinese to English without generating new semantic content, as the semantic information remains consistent from the priming sentence to the output sentence. Despite these critiques, we maintain that the current study design still validly assesses the priming effect. This is because the models must choose which sentence structure to use from among various structures that share the same semantic content, a choice influenced by the priming effect. However, we acknowledge that our design is susceptible to the "lexical boost" effect, where the structural priming effect is intensified when the same lexical head is repeated in both the prime and target sentences <cit.>. For instance, if the target sentence is "Alice gave Bob a book," the priming effect is more pronounced if the prime sentence was "Carl gave Danis a letter" rather than "Alice showed Bob a book."
Given that the semantic content remains constant across the prime and output sentences in our study, the observed priming effect is artificially strengthened compared to what might be observed in a pure priming task. §.§ Future Directions To address this issue, we could develop a model that produces sentences based on new individual semantic concepts and thematic roles before and after priming. Although developing such models could be challenging, it would free us from the lexical boost effect. Alternatively, we could shift our focus from production to comprehension. By measuring the surprisal levels in models, we can gain insights into how structural priming influences model comprehension, as suggested in recent studies <cit.>. In information theory and psycholinguistics, surprisal is a measure of how unexpected a word is in a given linguistic context. The more unexpected the word, the higher its surprisal value. For instance, if a model consistently shows lower surprisal at structurally complex points in sentences that follow a priming example, it would suggest that the priming has effectively prepared the model for these structures (a minimal sketch of such a measurement is given at the end of this section). This method offers a way of understanding how structural priming impacts language processing in models, free from the confounding effects of repeated vocabulary. Additionally, there is evidence suggesting an inverse relationship between the frequency of linguistic constructions and the magnitude of priming effects observed with those constructions <cit.>. For example, the direct object (DO) construction is more common in American English than the prepositional object (PO) construction <cit.>. Studies have shown that the less frequent PO construction exhibits stronger priming effects compared to the more frequent DO construction <cit.>. This aligns with theories of implicit learning in structural priming, where more frequently encountered structures are less 'surprising' to the language system and thus generate weaker priming effects. To further explore this, we could train models on corpora of American versus British English, which differ in their usage frequencies of certain constructions, to see if a similar inverse frequency effect is observed in computational models. This approach would help illuminate the dependency of structural priming on construction frequency, potentially providing deeper insights into how implicit learning processes are modeled computationally. Another aspect worth discussing is the significance of using LLMs to simulate human language processing. As highlighted in the introduction, the ultimate goal is to deepen our understanding of how the human brain functions, under the assumption that models which appear more human-like externally might also mirror human cognitive processes internally. However, one might question the validity of using LLMs for this purpose. Given that these models often function as "black boxes," their internal operations remain largely opaque. Despite their impressive computational abilities, the lack of transparency means that even if they outperform more interpretable models, they do not necessarily enhance our understanding of brain function. Previous studies argue that crosslingual structural priming might be affected by the asymmetry of training sources in certain language pairs <cit.>.
By measuring the probability changes for source and target sentences, we found that such multilingual auto-regressive Transformer language models display evidence of an abstract structural priming effect, despite not performing equally well in all scenarios.
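As a concrete illustration of the surprisal-based measurement proposed in the future directions above, the following Python sketch scores a target sentence under a generic causal language model. GPT-2 is used purely as a stand-in model, and the prime and target sentences are illustrative placeholders:

```python
# Illustrative surprisal measurement: total surprisal (in bits) of a target
# sentence given a preceding context, under a placeholder causal LM (GPT-2).
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def target_surprisal(context, target):
    """Sum of per-token surprisals of `target` given `context`, in bits."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    tgt_ids = tok(target, return_tensors="pt").input_ids
    full_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    with torch.no_grad():
        logprobs = torch.log_softmax(model(full_ids).logits, dim=-1)
    bits = 0.0
    for t in range(ctx_ids.shape[1], full_ids.shape[1]):
        # surprisal of token t given everything before it
        bits -= logprobs[0, t - 1, full_ids[0, t]].item() / math.log(2)
    return bits

target = " The book was read by the student."          # passive target
passive_prime = "The letter was written by the teacher."
neutral_prime = "The weather was nice yesterday."

# A priming effect predicts lower surprisal after the structure-matched prime.
print(f"after passive prime: {target_surprisal(passive_prime, target):.1f} bits")
print(f"after neutral prime: {target_surprisal(neutral_prime, target):.1f} bits")
```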
http://arxiv.org/abs/2405.10270v1
20240516172526
Quadratic quasi-normal mode dependence on linear mode parity
[ "Patrick Bourg", "Rodrigo Panosso Macedo", "Andrew Spiers", "Benjamin Leather", "Béatrice Bonga", "Adam Pound" ]
gr-qc
[ "gr-qc" ]
Institute for Mathematics, Astrophysics and Particle Physics, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen, Denmark School of Mathematical Sciences & School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, NG7 2RD, UK Nottingham Centre of Gravity, University of Nottingham, University Park, Nottingham, NG7 2RD, UK Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, Potsdam 14476, Germany Institute for Mathematics, Astrophysics and Particle Physics, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands School of Mathematical Sciences and STAG Research Centre, University of Southampton, Southampton, SO17 1BJ, United Kingdom Quasi-normal modes (QNMs) uniquely describe the gravitational-wave ringdown of post-merger black holes. While the linear QNM regime has been extensively studied, recent work has highlighted the importance of second-perturbative-order, quadratic QNMs (QQNMs) arising from the nonlinear coupling of linear QNMs. Previous attempts to quantify the magnitude of these QQNMs have shown discrepant results. Using a new hyperboloidal framework, we resolve the discrepancy by showing that the QQNM/QNM ratio is a function not only of the black hole parameters but also of the ratio between even- and odd-parity linear QNMs: the ratio QQNM/QNM depends on what created the ringing black hole, but only through this ratio of even- to odd-parity linear perturbations. Introduction.—After the merger of two black holes (BHs), the distorted remnant BH rings down towards a stationary state through its emission of gravitational waves (GWs). The signal associated with this process is well modelled by a superposition of exponentially damped sinusoids, with complex frequencies given by the so-called quasi-normal modes (QNMs) <cit.>. For an isolated system within General Relativity (GR), these QNM frequencies are uniquely determined by the mass and spin of the final BH. Each frequency ω_ℓmn is characterised by three integers: polar and azimuthal indices (ℓ, m) associated with a projection onto spherical harmonics on the celestial sphere, and an overtone index n=0,1,… that enumerates the frequencies for a given angular mode. Inspired by the uniqueness of the QNM spectrum, the BH spectroscopy program <cit.> aims at extracting multiple QNMs from ringdown signals in order to perform stringent tests of GR, probe the BH geometry, and constrain features of the surrounding environment <cit.>. Measurements of the dominant QNM, (ℓ, m, n)=(2, ±2, 0), in GW signals are well established <cit.>. The detection of higher overtones and higher angular modes is still under debate <cit.>, but future GW detectors such as the Einstein Telescope and LISA are expected to observe these higher modes regularly. Current forecasts predict 20-50 events per year with at least two detectable QNMs for stellar-mass binaries <cit.> and even ∼ 5-8 QNMs for massive BH binaries with LISA <cit.>. Historically, the BH spectroscopy program has been entirely based on linear BH perturbation theory (BHPT) <cit.>, and forecasts for future QNM detections assume only linear QNM frequencies.
However, GR is a nonlinear theory, and recent milestone results have shown that BH spectroscopy must also account for second-order, quadratic perturbations, which can dominate over linear overtones <cit.>. In these quadratic perturbations, a new set of characteristic frequencies arises: the so-called quadratic QNMs (QQNMs), ω_ℓ_1m_1n_1×ℓ_2m_2n_2 = ω_ℓ_1m_1n_1 + ω_ℓ_2m_2n_2, which result from the coupling of two linear QNMs. A recent analysis indicates that the Einstein Telescope and Cosmic Explorer could detect QQNMs in up to a few tens of events per year <cit.>. While the predictions for LISA depend sensitively on the astrophysical massive BH formation models, the most optimistic scenario allows for up to O(1000) events with detectable QQNMs in LISA's nominal 4-year observation time <cit.>. Spurred by these developments, there has been a spate of recent work devoted to analysing QQNMs and their impact on BH spectroscopy. Most calculations have been based on extracting modes from fully nonlinear numerical relativity (NR) simulations of BH binary evolutions, but a number of recent calculations have also been performed using second-order BHPT <cit.>. These calculations have generally focused on a single measure of the significance of QQNMs: the ratio between a given QQNM mode amplitude and the amplitude(s) of the linear parent mode(s) that generate it. Perhaps surprisingly, different analyses have led to conflicting values of the ratio, even in the simplest case of non-rotating BHs. It has been suggested that the discrepancies in Schwarzschild could be due to the freedom to excite both even- and odd-parity perturbations <cit.>. We find that this can explain the discrepancy in previous QQNM analyses, but this property is not restricted to a Schwarzschild background; it also holds in Kerr. Consequently, we observe that reported QQNM/QNM ratios in the literature often lack sufficient information to fully describe the relationship between linear and nonlinear QNMs. To shift this paradigm, we establish that the ratio depends not only on the BH parameters, but also on the properties of the system that created the ringing BH. Specifically, it is a function of the ratio between the amplitudes of even- and odd-parity linear QNMs; this ratio will depend on the degree to which the progenitor system possessed equatorial (up-down) symmetry. As a proof of principle, we make this argument precise, and present numerical results for the QQNM/QNM ratio, in the simple case of Schwarzschild spacetime. We discuss the generalisation to the most generic case in the conclusion. Our calculation utilizes a novel code combining two critical components: a hyperboloidal frequency-domain framework that allows us to directly and accurately compute the physical waveform without requiring regularization <cit.>; and a covariant second-order BHPT formalism <cit.>. By controlling the geometrical aspects of the problem and using the mode-coupling tools of Ref. <cit.>, we are able to fine-tune the first-order dynamics to single out any number of linear modes and obtain the quadratic contribution from the linear even- and odd-parity sectors semi-analytically. Black hole perturbation theory.—In BHPT we expand the spacetime metric in the form g_ab+ε h^(1)_ab+ε^2 h^(2)_ab+…, where g_ab is a Kerr metric and ε=1 counts perturbative orders.
In vacuum, the perturbations satisfy the Einstein equations εδ G_ab[h^(1)_cd]+ε^2(δ G_ab[h^(2)_cd]+δ^2 G_ab[h^(1)_cd])+…=0, where δ G_ab is the linearized Einstein tensor and δ^2 G_ab[h^(1)_cd] is quadratic in h^(1)_cd <cit.>. At linear order all nontrivial information in h^(1)_ab is encoded in the linearized Weyl scalar Ψ^(1)_4, which satisfies the vacuum Teukolsky equation 𝒪[Ψ_4^(1)]=0, where 𝒪 is a linear second-order differential operator <cit.>. We adopt compactified hyperboloidal coordinates (τ,σ,θ,φ) <cit.>, in which constant-τ slices connect the future horizon ℋ^+ (at compactified radial coordinate σ = 1) to future null infinity ℐ^+ (at σ=0). Using the hyperboloidal time τ, we introduce the frequency-domain field ψ_4^(1) via a Laplace (or Fourier) transform, ψ_4^(1)(σ,θ,φ; ς) = ∫_0^∞ Ψ_4^(1)(τ,σ,θ,φ) e^-ςτ dτ. The complex Laplace parameter ς is related to the usual complex frequency ω by ς = -iω. We next separate the Teukolsky equation into an angular and radial part by decomposing ψ_4^(1) into spin-weighted spheroidal harmonics <cit.>, ψ_4^(1) = 𝒵(σ) ∑_ℓm ψ̃_ℓm^(1)(σ;ς) _-2S_ℓm(θ, φ;ς). Here 𝒵(σ) serves to factor out the dominant behavior near ℋ^+ and ℐ^+; a linear vacuum perturbation Ψ^(1)_4 scales quadratically with distance from the horizon near ℋ^+ and decays inversely with distance toward ℐ^+ <cit.>, motivating us to choose 𝒵(σ) ∝ σ(1-σ)^2. Because we use hyperboloidal slices <cit.>, the modes ψ̃_ℓm^(1) are smooth on the entire domain, σ∈ [0,1], and because we factor out 𝒵 we can directly compute the waveform from ψ̃_ℓm^(1) at σ=0. The Laplace transform and spheroidal-harmonic expansion leaves us with a radial Teukolsky equation, 𝒟_ℓm[ψ̃_ℓm^(1)(σ;ς)] = 0, where 𝒟_ℓm is a linear second-order radial operator. In the hyperboloidal setup, linear QNMs are the solutions to this equation that satisfy regularity conditions at both ends (σ=0 and 1). Such solutions only exist for the countable set of QNM frequencies ς_ℓmn, and we denote them ψ̃_ℓmn^(1)(σ) := ψ̃_ℓm^(1)(σ; ς_ℓmn). Given a set of QNM solutions, ψ̃^(1)_ℓmn, the inverse Laplace transform yields a time-domain solution of the form <cit.> Ψ_4^(1) = 𝒵(σ) ∑_ℓmn A_ℓmn ψ̃_ℓmn^(1)(σ) e^ς_ℓmn τ _-2S_ℓm(θ,φ; ς_ℓmn). Since QNM solutions are only defined up to an overall constant factor, the A_ℓmn are arbitrary (complex) excitation coefficients, and we set ψ̃_ℓmn^(1) = 1 at ℐ^+. In Eq. (<ref>), we neglect late-time tail contributions that generically arise <cit.>; this choice fixes a pure QNM at linear order. Mirror modes and parity.—The azimuthal symmetry of Kerr spacetime causes a symmetry between the +m and -m QNM frequencies <cit.>, ς_ℓ,-m,n = ς_ℓmn^*. Additionally, our choice of normalization implies that the corresponding eigenfunctions are related by ψ̃_ℓ,-m,n = ψ̃_ℓmn^⋆. The QNMs with m≥0 and m<0 are known as the regular and mirror QNMs, respectively. They decay at the same rate but oscillate in opposite directions. In Schwarzschild spacetime, the frequencies degenerate: prograde and retrograde modes become indistinguishable and m-independent. Nonetheless, QNM frequencies come in complex conjugate pairs, and the full solution can still be written in the form of (<ref>), where one can impose (without loss of generality) the mirror relation in (<ref>). Most QNM analyses have exclusively considered the regular QNMs. In <cit.>, the importance of including mirror QNMs was demonstrated in linear QNM analyses to reduce systematic uncertainties. In this work, we find that mirror modes also play a crucial role in QQNM analysis.
The ratio between regular and mirror modes, A_ℓ,-m,n/A_ℓmn, is directly related to the ratio between even- and odd-parity contributions to the GW. At ℐ^+, the GW (or more strictly, the shear <cit.>) can be naturally decomposed into even-parity (Y_AB) and odd-parity (X_AB) tensor harmonics <cit.>, h_AB^(1) = r^2 𝒵(σ)∑_ℓm( C^+_ℓm Y_AB^ℓm + C^-_ℓm X_AB^ℓm) e^ς_ℓm τ, where it is understood that this applies in the r→∞ (σ→0) limit, θ^A=(θ,φ), the factor of r^2 corresponds to the natural scaling of angular components, C^±_ℓm are constant (complex) amplitudes, and τ=u (the usual outgoing null coordinate). For comparison, Ψ_4^(1) at ℐ^+ can be decomposed into the closely-related spin-weight -2 spherical harmonics _-2Y_ℓm (requiring a projection from spheroidal harmonics, except in the Schwarzschild case where _-2S_ℓm reduces to _-2Y_ℓm). To relate the amplitudes A_ℓm to C^ℓm_±, we use the relations between harmonics in Ref. <cit.> together with the fact that lim_r→∞ Ψ_4^(1) = -1/2 lim_r→∞ ḧ_m̅m̅, where m̅^A = 1/(√(2)r)(1, -i/sinθ). A short calculation reveals A_ℓm = -ς_ℓm^2/(4λ_ℓ,2)( C^+_ℓm - i C^-_ℓm), A_ℓ,-m^⋆ = -(-1)^m ς_ℓm^2/(4λ_ℓ,2)( C^+_ℓm + i C^-_ℓm), where λ_ℓ,2 = √((ℓ+2)! / (ℓ-2)!). Equation (<ref>) follows from the mirror relation (<ref>) and the fact that the 4D metric perturbation is real, i.e., C^±_ℓ,-m = (-1)^m (C^±_ℓm)^⋆. Equations (<ref>) and (<ref>) imply that the complex ratio A_ℓ,-m/A_ℓm is a simple function of C_ℓm^-/C_ℓm^+. We emphasize that while it is impossible to omit m<0 modes from h^(1)_AB (as they are required for h^(1)_AB to be real-valued), it is possible to omit m<0 modes in Ψ^(1)_4; this corresponds to the particular ratio C^+_ℓm = -i C_ℓm^-. Quadratic QNMs.—Like at first order, the second-order contribution to the GW is fully encoded in a linear Weyl scalar Ψ_4L^(2) constructed from h^(2)_ab <cit.>. Ψ_4L^(2) satisfies a "reduced" second-order Teukolsky equation derived from the second-order terms in the Einstein equation (<ref>) <cit.>: 𝒪[Ψ_4L^(2)] = -𝒮[δ^2G_ab[h^(1)_cd]] := 𝒮̃, where 𝒮 is a linear second-order differential operator <cit.>. The source in <ref> depends quadratically on the first-order metric perturbation; schematically, 𝒮̃ ∼ ∇∇( ∇ h^(1)_ab)^2. In Schwarzschild <cit.>, the ℓm modes of 𝒮̃ are readily computed from an arbitrary set of first-order ℓm modes using the Mathematica package PerturbationEquations <cit.>. We obtain the ℓm modes of h^(1)_ab using a standard metric reconstruction procedure <cit.> in the outgoing radiation gauge (ORG), in which h^(1)_ab is computed from a Hertz potential Φ^ORG satisfying the inversion relation Ψ_4^(1) = (1/4) þ'^4 Φ̅^ORG, where þ' is a derivative along ingoing principal null rays <cit.>. In terms of this Hertz potential, h_ab^(1) = 2 Re( 𝒮^† Φ^ORG)_ab, where 𝒮^† denotes the adjoint of 𝒮. Note that h^(1)_ab depends on both Φ^ORG and Φ̅^ORG and therefore on both Ψ_4 and Ψ̅_4 via <ref>. Consequently, each QQNM amplitude depends on regular and mirror linear QNM amplitudes. To elucidate this, we consider Ψ_4^(1) being composed of a single regular and mirror mode in Schwarzschild spacetime, Ψ^(1)_4 = 𝒵(σ) { A_ℓm ψ̃_ℓm^(1)(σ) e^ς_ℓm τ _-2Y_ℓm(θ, ϕ) + A_ℓ,-m (ψ̃_ℓm^(1)(σ))^⋆ e^ς_ℓm^⋆ τ _-2Y_ℓ,-m(θ,ϕ) }. Equations (<ref>)–(<ref>) show that the second-order source 𝒮̃ depends quadratically on Ψ^(1)_4 and Ψ̅^(1)_4. Via our ansatz (<ref>) and <ref>, 𝒮̃ is hence composed of three distinct terms, 𝒮̃ = 𝒵(σ) { S̃^LM(σ) e^2ς_ℓm τ _-2Y_LM(θ, ϕ) + S̃^L0(σ) e^2Re(ς_ℓm)τ _-2Y_L0(θ, ϕ) + S̃^L,-M(σ) e^2ς_ℓm^⋆ τ _-2Y_L,-M(θ, ϕ) }, where L = 2ℓ and M = 2m. Given the source (<ref>), we solve Eq.
(<ref>) following the same procedure as at first order, performing a Laplace transform in hyperboloidal time, decomposing into ℓm modes, and solving the resulting radial equations. The time-domain solution is obtained by applying an inverse Laplace transform, which contains the same modes as the source (<ref>) (in addition to tails and other contributions that we neglect). In particular, it is composed of (L,M), (L,0), (L,-M) spherical modes, with associated frequencies 2ς_ℓm, 2Re(ς_ℓm) and 2ς_ℓm^⋆. Each of these QQNMs depends on the first-order excitation coefficients in the following way: (Ψ_4L^(2))^LM = a^LM(σ) A_ℓm^2 + b^LM(σ) A_ℓm A_ℓ,-m^⋆, (Ψ_4L^(2))^L0 = a^L0(σ) A_ℓm A_ℓ,-m + b^L0(σ) A_ℓm A_ℓm^⋆ + (b^L0(σ))^⋆ A_ℓ,-m A_ℓ,-m^⋆, (Ψ_4L^(2))^L,-M = (a^LM(σ))^⋆ A_ℓ,-m^2 + (b^LM(σ))^⋆ A_ℓm^⋆ A_ℓ,-m. We provide the coefficients (a^LM etc.), evaluated at ℐ^+, in the Supplemental Material. Note that one can uniquely determine the two excitation amplitudes of the first-order data, A_ℓm and A_ℓ,-m, up to an overall sign, from the excitations at second order, by solving the system (<ref>)–(<ref>) for A_ℓm and A_ℓ,-m at a radial point σ = σ_0, for example at ℐ^+, σ_0=0. Given the relations (<ref>)–(<ref>), this means that from the QQNMs one can uniquely determine, up to an overall sign (which corresponds to a phase difference of +π), the contribution of the even and odd sectors of the first-order QNMs. The residual sign ambiguity is due to the fact that the second-order source, and therefore the QQNMs, are invariant under the sign change Ψ_4^(1) → -Ψ_4^(1); see again Eq. (<ref>). Results.—We specialise to the scenario described above, with first-order pure QNM data consisting of a single regular and mirror mode, as in (<ref>). The ensuing nonzero pieces of the second-order Weyl scalar are of the form (<ref>)–(<ref>). For concreteness, we display results for the QQNM mode with frequency 2ς_ℓm, which corresponds to the expression in Eq. (<ref>). Typically, one is not directly interested in the value of (Ψ_4L^(2))^LM at ℐ^+, but in how it compares to the first-order perturbation. Since Ψ_4L^(2) ∼ (Ψ_4^(1))^2, we might consider the following ratio: (ℛ^Ψ_4)^LM := (Ψ_4L^(2))^LM/A_ℓm^2. However, for comparison with the literature, it is more convenient to define the analogous ratio from the strain h, related to the Weyl scalar by Ψ_4 ∼ ḧ. Employing the SpEC conventions (which notably use a re-scaled Kinnersley tetrad) <cit.>, we arrive at the relation (ℛ^h_SpEC)^LM = -ς_ℓm^2 (ℛ^Ψ_4)^LM. Note that, apart from numerical factors, these ratios only depend on the ratio between mirror and regular mode amplitudes, A_ℓ,-m^⋆/A_ℓm. For example, (ℛ^Ψ_4)^LM = a^LM(0) + b^LM(0) A_ℓ,-m^⋆/A_ℓm. Using Eqs. (<ref>) and (<ref>), we can alternatively express (ℛ^Ψ_4)^LM in terms of the ratio of odd to even amplitudes, C^-_ℓm/C^+_ℓm. In Kerr spacetime (ℛ^Ψ_4)^LM is also a function of the BH parameters and C^-_ℓm/C^+_ℓm. This can be derived by inputting Eq. (<ref>) into the expression for the source, 𝒮̃, available in Ref. <cit.>. This shows that the source consists of terms ∝Φ̅^2 and ∝ΦΦ̅. Hence, using Eqs. (<ref>), (<ref>), and (<ref>), similar relations to Eqs. (<ref>)–(<ref>) hold in Kerr (up to angular mode mixing) and (ℛ^Ψ_4)^LM is a function of C^-_ℓm/C^+_ℓm and the BH parameters. In Fig. <ref>, we show a contour plot of (ℛ^h_SpEC)^LM as a function of C^-_ℓm/C^+_ℓm for the case (ℓ, m, n) = (2,2,0). We also include previous results from the literature in this plot. We relegate to the Supplemental Material how the data points were added to the figure.
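As an illustration of how the ratio above is evaluated, the Python sketch below scans the mirror-to-regular amplitude ratio for fixed coefficients. The values of a^LM(0) and b^LM(0) are hypothetical placeholders (the actual values are tabulated in the Supplemental Material), and only the Schwarzschild (ℓ,m,n)=(2,2,0) frequency, quoted in M=1 units, is a known quantity:

```python
# Illustrative evaluation of (R^h_SpEC)^{LM} = -varsigma^2 * (R^{Psi_4})^{LM},
# with (R^{Psi_4})^{LM} = a^{LM}(0) + b^{LM}(0) * (A*_{l,-m} / A_{lm}).
# The coefficients a0, b0 are HYPOTHETICAL placeholders, not table values.
import numpy as np

varsigma = -0.08896 - 0.37367j   # ς_{220} for Schwarzschild, M=1 units
a0 = 0.10 + 0.05j                # placeholder for a^{LM}(sigma=0)
b0 = 0.04 - 0.02j                # placeholder for b^{LM}(sigma=0)

def ratio_h_spec(mirror_over_regular):
    """QQNM/QNM ratio as a function of the (complex) ratio A*_{l,-m}/A_{lm}."""
    r_psi4 = a0 + b0 * mirror_over_regular
    return -varsigma**2 * r_psi4

# Scan the amplitude ratio along the real axis as an example.
for x in np.linspace(-1.0, 1.0, 5):
    r = ratio_h_spec(x)
    print(f"A*_(l,-m)/A_lm = {x:+.2f} -> |R| = {abs(r):.4f}, arg(R) = {np.angle(r):+.3f}")
```

The same scan over the complex plane, with the tabulated coefficients in place of the placeholders, reproduces the dependence shown in the contour plot of Fig. <ref>.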
Notably, one should distinguish the results reported by Ma et al. <cit.> (dark red cross), which differ significantly from NR results (in blue and orange), and from the BHPT result reported by Bucciotti et al. <cit.> (light red plus). In Ma et al., the reported ratio (in Schwarzschild) has magnitude ≈ 0.137 and phase ≈ -0.083. Their computation of this result assumed that Ψ_4^(1) is composed of a single regular frequency, ς_ℓm, with A_ℓ,-m = 0. Equivalently, they assume C^-_ℓm = i C^+_ℓm. We find that we exactly recover their result for (ℛ^h_SpEC)^LM in our framework, given their specific ratio C^-_ℓm/C^+_ℓm. In contrast, the NR results we consider here (which typically consider mergers of spinning BHs that produce a BH remnant with small spin) give consistently larger magnitudes for the ratio, typically ranging from 0.15 to 0.20. Our figure shows that, even neglecting systematic errors in the NR simulations, this discrepancy can be fully explained by the fact that odd-parity modes are typically subdominant in binary mergers, so that |C^-_ℓm/C^+_ℓm| is significantly smaller than in the semi-analytical calculations <cit.>. This is a consequence of mild deviation from equatorial symmetry: for a perfectly up-down symmetric system, odd-parity modes identically vanish for even values of ℓ+m (meaning, in particular, for ℓ=m=2). In linear perturbation theory, this implies that the odd-parity ℓ=m=2 modes identically vanish for nonspinning binary mergers. Discussion.—Precision BH spectroscopy is expected to be a pillar of future GW astronomy, enabling stringent tests of GR and of whether the massive objects in galactic centers are described by the Kerr spacetime. This program is now widely expected to require calculations of QQNMs. In this Letter, we have shown how disagreements in recent QQNM calculations can be reconciled: both even- and odd-parity linear QNMs (or equivalently, regular and mirror modes) contribute to the same QQNM, and the discrepancies in the literature are due to differences in the relative excitations of the even and odd sectors. Most importantly, we have shown that in vacuum GR this is the unique way in which the QQNM amplitudes depend on the system that formed the BH (beyond the fact that the BH mass and spin also depend on the progenitor system). Our results therefore suggest that measurements of QQNMs can be used to extract how much the first-order even and odd sectors have been excited, providing a unique route to determining, for example, the breaking of isospectrality in beyond-GR theories or due to environmental effects. Given our results, an important task for future work will be to explore how the ratio C^-_ℓm/C^+_ℓm depends on the details of the binary that formed the final BH. This would, in principle, make it simple to assess how the QQNM ratio depends on the BH's formation. The results presented here were restricted to Schwarzschild, but our main conclusion and our computational framework generalise readily to Kerr. A companion paper will provide details of the framework and will be released with a complete code in Schwarzschild that can easily handle any number of first-order QNMs. Acknowledgments.—We gratefully acknowledge helpful discussions with Sizheng Ma, Huan Yang, and Neev Khera. AS would like to thank Laura Sberna and Stephen Green for their helpful discussions. BL would like to thank Sebastian Völkel and Hector Okada da Silva for their helpful discussions.
PB and BB acknowledge the support of the Dutch Research Council (NWO) (project name: Resonating with the new gravitational-wave era, project number: OCENW.M.21.119). RPM acknowledges support from the Villum Investigator program supported by the VILLUM Foundation (grant no. VIL37766) and the DNRF Chair program (grant no. DNRF162) by the Danish National Research Foundation and the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101131233. AS acknowledges support from the STFC Consolidated Grant no. ST/V005596/1. AP acknowledges the support of a Royal Society University Research Fellowship and a UKRI Frontier Research Grant (as selected by the ERC) under the Horizon Europe Guarantee scheme [grant number EP/Y008251/1]. Note added.—A recent paper <cit.> appeared while this paper was in preparation, which also emphasised the importance of the mirror modes in QQNM analysis. Their results are compatible with the results in this paper, but they make specific choices of even- and odd-parity ratios rather than characterizing the full dependence. § SUPPLEMENTAL MATERIAL In Table <ref>, we give the values for the frequencies ς_ℓmn and the coefficients appearing in Eqs. (<ref>)–(<ref>), evaluated at ℐ^+ (σ=0), for different ℓm modes. For completeness, we give in Table <ref> the QQNM ratios as reported in the literature that are used to generate the data points in Fig. <ref>. These data points were computed in the following manner: To include the data from Table <ref> in Fig. <ref>, we must extract the ratio C^-_ℓm/C^+_ℓm from the data. We do this starting from the values of the magnitude and phase of the QQNM ratio, (ℛ^h_SpEC)^LM (as given in the cited references). Using relations (<ref>), (<ref>), and (<ref>), together with Eqs. (<ref>) and (<ref>), we then calculate the (unique) complex ratio C^-_ℓm/C^+_ℓm. We can then plot the corresponding point in the complex plane, as shown in Fig. <ref>. § TEMPORARY SUPPLEMENTAL MATERIAL All the appendices below are temporary and should be eventually removed for the letter. § ADDITIONAL DEGENERACY IN SCHWARZSCHILD SPACETIME Describe and define the additional degeneracy in Schwarzschild spacetime. § Ψ_4 EXCITATION COEFFICIENTS TO H^+/X Starting from the following decomposition of ψ_4, r^4 Ψ_4 = A_lmq _-2ψ_lmq e^ς_lmq τ _-2Y_lm + A_l-mq _-2ψ_l-mq e^ς_l-mq τ _-2Y_l-m. We will take the relation between the amplitudes of ψ_4 and the strain to be r Ψ_4 = -1/2( ḧ^+ - i ḧ^× ) ⟹ ḧ^+ = -r (Ψ_4 + Ψ_4^* ), ḧ^× = -i r (Ψ_4 - Ψ_4^* ). In particular, we have, ḧ^+ = -r^5 [ A_lmq e^ς_lmqτ _-2ψ_lmq _-2Y_lm + A_l-mq e^ς^⋆_lmqτ _-2ψ_l-mq _-2Y_l-m] -r^5 [ A_lmq^⋆ e^ς_lmq^⋆τ _-2ψ_lmq^⋆ _-2Y_lm^⋆ + A_l-mq^⋆ e^ς_lmqτ _-2ψ_l-mq^⋆ _-2Y_l-m^⋆], = -r^5 [ A_lmq _-2Y_lm + (-1)^m A_l-mq^⋆ _2Y_lm] e^ς_lmqτ _-2ψ_lmq -r^5 [ A_l-mq _-2Y_l-m + (-1)^m A_lmq^⋆ _2Y_l-m] e^ς_lmq^⋆τ _-2ψ_lmq^⋆. We want to re-expand the spin +2 harmonics into spin -2: _2Y_lm = ∑_l'≥2^∞ c_l'm^lm _-2Y_l'm, where c_l'm^lm = ∫_0^2π∫_0^π _2Y_lm _-2Y_l'm^⋆ sinθ dθ dϕ. The above expression for ḧ becomes, -r^-5 ḧ^+ = ( A_lmq + (-1)^m A_l-mq^⋆ c_lm^lm) e^ς_lmqτ _-2ψ_lmq _-2Y_lm + (-1)^m A_l-mq^⋆( ∑_l'≥2, l'≠l c_l'm^lm _-2Y_l'm) e^ς_lmqτ _-2ψ_lmq + ( A_l-mq + (-1)^m A_lmq^⋆ c_l-m^l-m) e^ς_lmq^⋆τ _-2ψ_lmq^⋆ _-2Y_l-m + (-1)^m A_lmq^⋆( ∑_l'≥2, l'≠l c_l'-m^l-m _-2Y_l'-m) e^ς_lmq^⋆τ _-2ψ_lmq^⋆.
Taking, ḧ^+ = ∑_l” m” q”_-2ḧ^+_l” m” q” e^ς_l” m” q”τ_-2 Y_l” m” Then, for l” = l, _-2ḧ^+_l m q = -r^5 ( A_lmq + (-1)^m A_l-mq^⋆_2 c_lm^lm) _-2ψ_lmq For l”≠ l, _-2ḧ^+_l” m q = -r^5 (-1)^m A_l-mq^⋆ c_l” m^lm_-2ψ_lmq In a similar way for the cross polarization: For l” = l, _-2ḧ^×_l m q = -i r^5 ( A_lmq - (-1)^m A_l-mq^⋆_2 c_lm^lm) _-2ψ_lmq For l”≠ l, _-2ḧ^×_l” m q = i r^5 (-1)^m A_l-mq^⋆ c_l” m^l m_-2ψ_lmq Finally, bringing expressing the radial mode of the Weyl scalar in terms of _-2ψ̃_ł, we find, For ł”=ł, _-2ḧ^+_l m q = - λ^2/8 M^3( A_lmq + (-1)^m A_l-mq^⋆_2 c_lm^lm) _-2_-2ψ̃_lmq, _-2ḧ^×_l m q = -i λ^2/8 M^3( A_lmq - (-1)^m A_l-mq^⋆_2 c_lm^lm) _-2_-2ψ̃_lmq For the specific case ł==2, we have c_22^22 = 1/6. § QQNM AMPLITUDES IN SPEC CONVENTION The notation is not consistent with the main text and needs to be updated We define the quadratic coupling coefficient R^v, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2) as A^v, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2) = R^v, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2) A^v_(l_1,m_1,n_1) A^v_(l_2,m_2,n_2). In the above, v could refer to either to the gravitation strain h, or r ψ_4. (the factor of r is needed to get a finite result at null infinity. In our work, we use the Kinnersley tetrad. In other works, they use a rescaled version of the Kinnersley tetrad. In particular, we consider, In Boyer–Lindquist coordinates, l^μ_K r →∞⟶∂_t + ∂_r, n^μ_K r →∞⟶1/2∂_t - ∂_r, m^μ_K r →∞⟶1/√(2) r∂_θ + i/sinθ∂_φ. In the spec code (ENR simulations), they instead use a rescaled version, l^μ_spec r →∞⟶1/√(2)∂_t + ∂_r, n^μ_spec r →∞⟶1/√(2)∂_t - ∂_r, m^μ_spec r →∞⟶ m^μ_K. In Justin's code, they instead use l_TF^μ r →∞⟶1/√(2) l^μ_spec, n_TF^μ r →∞⟶√(2) n^μ_spec, m_TF^μ r →∞⟶ m^μ_spec, In particular, from the definition of the Weyl scalar, we can deduce, ψ_4^spec = 1/2ψ_4^TF = 2 ψ_4^K In particular, R^r ψ_4^spec, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2) = 1/2 R^r ψ_4^K, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2) §.§.§ relation between ψ_4 and ψ_4^reg Note that the modes of r ψ_4 are related to the modes _-2ψ^reg_lmn via the formula, (r ψ_4)_lmn^K = λ^2/8 M^31-2M/r^2 ℋ(σ) ψ^reg_lmn. I get (r ψ_4)_lmn^K = r^4λ^2/8 M^31-2M/r^2 ℋ(σ) ψ^reg_lmn the extra factor of r^4 suggests that you want to relate in terms of _-2ψ_lmn instead of (r ψ_4)_lmn. Near infinity, we can check that ℋ(σ) ∼ e^-ς r_⋆ lambda. Since we want to evaluate at null infinity, we take (r ψ_4)_lmnr →∞⟶λ^2/8 M^3ψ^reg_lmn. So, R^r ψ_4^K, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2) = A^r ψ_4^K, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2)/A^r ψ_4^K_(l_1,m_1,n_1) A^r ψ_4^K_(l_2,m_2,n_2) = 8 M^3/λ^2A^ψ^reg,(l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2)/A^ψ^reg_(l_1,m_1,n_1) A^ψ^reg_(l_2,m_2,n_2) = 8 M^3/λ^2 R^ψ^reg,(l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2) We can also transform this relation in terms of amplitudes of the metric. From the spec conventions, ψ_4 and h_ab are related at first-order by, ψ_4^spec = - ∂_t^2 h = ω^2 h = -ς^2/λ^2 h. We assume this relation still holds at second-order. In particular, the amplitudes at first and second order are modified as A^h^spec_(l_1,m_1,n_1) = 1/ω_(l_1,m_1,n_1)^2 A^r ψ_4^spec_(l_1,m_1,n_1), A^h^spec, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2) = 1/ω_(l_1,m_1,n_1) × (l_2,m_2,n_2)^(l,m,n)^2 A^r ψ_4^spec, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2). For the QQNMs, ω_(l_1,m_1,n_1) × (l_2,m_2,n_2)^(l,m,n) = ω_(l_1,m_1,n_1) + ω_(l_2,m_2,n_2), while for the second-order QNMs, we have, ω_(l_1,m_1,n_1) × (l_2,m_2,n_2)^(l,m,n) = ω_(l,m,n). 
Then, R^h^spec, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2) = 1/ω_(l_1,m_1,n_1) × (l_2,m_2,n_2)^(l,m,n)^2/1/ω_(l_1,m_1,n_1)^21/ω_(l_2,m_2,n_2)^2A^r ψ_4^spec, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2)/A^r ψ_4^spec_(l_1,m_1,n_1) A^r ψ_4^spec_(l_2,m_2,n_2) In particular, R^h^spec, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2) = 1/2ω_(l_1,m_1,n_1)^2 ω_(l_2,m_2,n_2)^2/ω_(l_1,m_1,n_1) × (l_2,m_2,n_2)^(l,m,n)^2 R^r ψ_4^K, (l,m,n)_(l_1,m_1,n_1) × (l_2,m_2,n_2) and so, [box=]equation R^h^spec, (l,m,n)_(l_1,m_1,n_1) ×(l_2,m_2,n_2) = -4 M^3/λ^4 ς_(l_1,m_1,n_1) ς_(l_2,m_2,n_2)/ς_(l_1,m_1,n_1) ×(l_2,m_2,n_2)^(l,m,n)^2 R^ψ^reg,(l,m,n)_(l_1,m_1,n_1) ×(l_2,m_2,n_2) §.§.§ Metric reconstruction § SOME SUPPLEMENTAL MATERIAL §.§ The hyperboloidal framework Despite the simplicity of the Boyer-Linquist coordinates, the hypersurfaces of constant t are not ideal to model gravitational wave dynamics. This is because on a constant t slice, the horizon coordinate value r = r_h corresponds to the bifurcation sphere connecting the white and black-hole region, while the asymptotic region, r →∞ leads to spatial infinity i^o. These two endpoints lead to a singular behaviour of the QNMs ref. While such asymptotic behaviour can be tamed at first-order, they become problematic at second-order since the the corresponding second-order source inherits these divergences, which adds another layer of difficulty. This caveat can be bypassed altogether by working in hyperboloidal coordinates, (τ,σ,θ,ϕ) defined by, t = λ( τ - H(σ) ), r = r_h/σ, where λ is a convenient length scale for the spacetime. For all the explicit computations and plots, we will always use λ = 4 M. H(σ) is the so-called height function, which ensures that on a constant τ slice, the surfaces r →∞ (σ=0) and r=r_h (σ=1) correspond, respectively, to future null infinity and the (future) black-hole horizon. We work in the minimal gauge, where the height function takes the following explicit form, H(σ) = r_h/λ( -1/σ + ln (σ) + ln (1-σ) ) The hyperboloidal coordinates (τ, σ, θ, φ) allows for a straightforward conformal decomposition of the physical spacetime g_ab into g_ab = Ω^-2g̃_ab, with Ω = σ/λ and g̃_ab a conformal metric with regular components in the entire exterior region σ∈[0,1]. To express the Teukolsky formalism in terms of conformal, regular quantities via the hyperboloidal framework , we first introduce a generic notation for the equation as _s Ø̂[ _sΨ^(n)] = _s S^(n), with _sΨ^(n) the Teukolsky master function with spin s at perturbation order (n), and S^(n)_s the corresponding source term. In terms of the usual Weyl scalars Ψ_0 and Ψ_4, we have Ψ_0^(n) = _2 Ψ^(n) and Ψ_4^(n) = r^4 _-2Ψ^(n). The source term on the right-hand side vanishes for n=1. For n=2, the source term depends quadratically on the first-order metric perturbations. In the hyperboloidal framework, and assuming a Kinnersley tetrad, the Teukolsky master function undergoes a transformation<cit.> _sΨ^(n) = _s Z _sΨ̃^(n), _s Z = λ^-sσ^1+2s (1- σ)^-s, where _s Z ensures that the conformal master function Ψ̃^(n)_s is dimensionless and regular in the entire domain σ∈[0,1]. The conformal master function satisfies the equation Ø̃_s[ _s Ψ̃^(n)] = _sS̃^(n), _sS̃^(n) = _s Z^-1 _s S^(n), where the conformal operator Ø̃_s is defined by _s Ø̃ := _s Z^-1_s Ø̂_s Z. 
Taking advantage that the angular part of the operator Ø̃_s is separable in the time-domain for the Schwarzschild background, we decompose the conformal master function, and the conformal source into their angular modes via _s Ψ̃^(n) = ∑_ł, _s Ψ̃^(n)_ł(τ, σ) _s Y_ł(θ, φ), _sS̃^(n) = ∑_ł, _s S̃^(n)_ł(τ, σ) _sY_ł(θ, φ), which yields _s Ø̃_ł[ _s Ψ̃^(n)_ł] = _sS̃^(n)_ł. In the hyperboloidal coordinates (τ, σ, θ, φ), the operator Ø̃_s ł assumes the explicit form Ø̃_s ł = -w(σ)∂^2_ττ + L̂_1 + L̂_2 ∂_τ, with L̂_1 = σ^2(1-σ)∂^2_σσ + σ(2-3σ + s(2-σ))∂_σ - (ł(ł+1) - s(s+1) + σ(1+s) ), L̂_2 = 2r_hλ(1- 2σ^2 )∂_σ - 2r_hλ( 2σ - s (1-σ) ), w(σ) = (2 r_hλ)^2(1+ σ). Eq. (<ref>) is solved after the prescription of a regular data on a initial time slice τ = 0 _s Ψ̃^(n)_ł(0,σ) = _s Ψ̃^(n)_o_ł(σ), ∂ _s Ψ̃^(n)_ł∂τ (0,σ) = _s Ψ̇̃̇^(n)_o_ł(σ), which suffices to uniquely determine the dynamics of the regular fields _s Ψ̃^(n)_ł for τ >0 and σ∈[0,1]. In this Letter, we will restrict to a frequency domain calculation, where a Laplace transform is applied to the Teukolsky equation ref. The Laplace framework allows us to naturally incorporate the initial data (<ref>) into the frequency domain formulation. Using the Laplace transformation, ℒ definition in eq. (<ref>) ref, we introduce the frequency domain field ψ̃^(n)_ł via _s ψ̃^(n)_ł(σ; ς) = L[_s Ψ̃^(n)_ł(τ,σ)](ς), which satisfies the ordinary differential equation _s D̃_ł[ _s ψ̃^(n)_ł] = _s R̃^(n)_ł, _s D̃_ł = -ς^2 w(σ) + L̂_1 + ςL̂_2. The conformal source term in the frequency domain is composed of two pieces _s R̃^(n)_ł = _s Ĩ^(n)_ł + _s R̃^(n)_ł. The first term _s Ĩ^(n)_ł(σ;ς) = -w(σ) ( _s Ψ̃^(n)_o_ł(σ) ς + _s Ψ̇̃̇^(n)_o_ł(σ) ) + L̂_2 [ _s Ψ̃^(n)_o_ł(σ) ]. results from incorporating the initial data (<ref>) into the frequency domain problem as a consequence of the Laplace transformation applied to the first and second τ-derivatives. The second term follows directly from the Laplace transformation of the time-dependent source term _s R̃^(n)_ł(σ; ς) := L[_s S̃^(n)_ł(τ,σ)](ς). §.§ First- and second-order QNM amplitdues At first-order, the source term of the Teukolskjy equation vanishes by definition, _s R̃_ł^(1) = 0. The QNMs are the defined to be eigensolutions to the above equation. IN the hyperboloidal framework, the approriate boundary conditions become regularity conditions at the (future) horizon and (future) null infinity. Such solutions only exist for certain eigenfrequencies ω_ł and correspondsing eigenfunctions _s ψ̃_ℓ(σ) := _s ψ̃_ℓ(σ;ς_ł), where is the overtone number; ω_ł are the so-called quasi-normal mode frequencies. The azimuthal symmetry of Kerr spacetime causes a symmetry between the + and - QNM frequencies<cit.>, ς_ł= ς_ł-^* In particular, the associated eigenfunctions share a similar relation, _s ψ_ł = _s ψ_ł-^⋆. The QNM with + and - are known as the regular and mirror QNM, respectively. Equation (<ref>) informs us that the regular and mirror modes decay at the same rate but oscillate in opposite directions. The majority of previous QNM analyses have exclusively considered the regular QNMs. In Ref. <cit.>, the importance of including the mirror QNMs was demonstrated in linear QNM analyses to reduce systematic uncertainties. In this work we find that mirror modes also play a crucial role in QQNM analysis and must be taken into account to achieve consistent results. From now on, we will focus to the Schwarzschild case. In this case, the frequencies ω_ł degenerate, as the Teukolsky equation becomes independent of the -mode. 
In particular, for a given ł-mode, and given mode there exist two different eigenfrequencies of the system, related by complex conjugation. In what follows, we will choose to reinforce the mirror relation (<ref>) by hand, in order to make the generalisation to Kerr more straightforward. maybe somewhere there should be some short discussion relating the laplace radial modes with the frequency ones At second-order, the Teukolsky equation is sourced by a non-trivial source term, _s R̃_ł^(2)≠ 0. As we saw above, this source term can be naturally split into two contributions: 1) a term purely driven by the initial data, _s Ĩ^(n)_ł(σ;ς), 2) a term driven by the second-order source, _s R̃^(n)_ł(σ;ς). Here, we set the former term to zero, _s Ĩ^(n)_ł(σ;ς) ≡ 0, as we want the second-order solution to be purely driven by the first-order excitations. The second order perturbation theory contains a source, _s S^(2) in the second-order Teukolsky equation takes the form _-2 S^(2)∼ 2ζ^4Σ𝒮_4[δ^2G_ab[h^(1)_cd]], here I only wrote it for s=-2. where 𝒮_4 is a linear second-order differential operator and δ^2G_ab is the quadratic Einstein operator, a quadratic second-order differential operator. In schematic form, the source of <ref> is 2ζ^4Σ𝒮_4[δ^2G_ab[h^(1)_cd]] ∼ (h^(1)_cd)^2 Following CCK metric reconstruction, the first-order metric perturbation, h^(1)_cd, can be written in terms of a Hertz potential, Φ, which can be calculated from Ψ_4^(1) using the inversion relation, Ψ_4^(1) = 1/4'^4 Φ̅^ORG. The metric perturbation can then be obtained from h_αβ^(1) = 2 Re(𝒮_4^†Φ^ORG)_αβ. in the outgoing radiation gauge. Note that h^(1)_ab depends on both Φ^ORG and Φ̅^ORG, this is why the QQNM depends on both regular and mirror linear QNM. We refer the reader to Appendix for more details on the reconstruction add the appendix! talk about our expression for the source and its relation to Campanelli-Lousto.
http://arxiv.org/abs/2405.09283v1
20240515120617
Bounds and Approximations for the Distribution of a Sum of Lognormal Random Variables
[ "Fredrik Berggren" ]
eess.SP
[ "eess.SP", "cs.IT", "math.IT" ]
Bounds and Approximations for the Distribution of a Sum of Lognormal Random Variables Fredrik Berggren The author is with Huawei Technologies Sweden AB, Stockholm, Sweden. (e-mail: fredrik.b@huawei.com). Received XXX; accepted YYY A sum of lognormal random variables (RVs) appears in many problems of science and engineering. For example, it is involved in computing the distribution of received signal and interference powers for radio channels subject to lognormal shadow fading. Its distribution has no closed-form expression and it is typically characterized by approximations, asymptotes or bounds. We give a novel upper bound on the cumulative distribution function (CDF) of a sum of N lognormal RVs. The bound is derived from the tangential mean-arithmetic mean inequality. By using the tangential mean, our method replaces the sum of N lognormal RVs with a product of N shifted lognormal RVs. It is shown that the bound can be made arbitrarily close to the desired CDF as the shift approaches infinity, and thus it becomes more accurate than any other bound or approximation. The bound is computed by numerical integration, for which we introduce the Mellin transform, which is applicable to products of RVs. At the left tail of the CDF, the bound can be expressed by a single Q-function. Moreover, we derive simple new approximations to the CDF, expressed as a product of N Q-functions, which are more accurate than the previous method of Farley. Bound, shadow fading, Mellin transform, sum of lognormal random variables, tangential mean. § INTRODUCTION The sum of lognormal random variables (RVs) appears in various problems, e.g., in cellular radio communications, where it is used to model the received co-channel interference power, or the received signal power from different diversity branches, subject to lognormal shadow fading. Based on the distribution, it is possible to determine the probability of link outage, i.e., that the signal power, or signal-to-interference ratio (SIR), falls below a given threshold. No closed-form expression has been presented for its cumulative distribution function (CDF) and plenty of research papers concerning the properties of the CDF have been published for this long-standing problem. For example, some methods approximate the sum with another lognormal RV whose moments or cumulants are matched to those of the sum of RVs <cit.>, <cit.>, <cit.>. Furthermore, moment matching by numerical integration of the moment generating function (MGF) <cit.> <cit.> or of the characteristic function (CF) <cit.> has been proposed. Direct numerical integration of the Laplace transform <cit.> has also been exploited. A good fit to the CDF by approximations based on other probability distributions has also been reported, e.g., <cit.>,<cit.>,<cit.>,<cit.>, <cit.>,<cit.>. Another direction is to bound the CDF. In <cit.>, order statistics was used to derive upper and lower bounds on the CDF for a sum of N independent lognormal RVs, where the bounds were given either by Q-functions or in integral form. This methodology was extended in <cit.> to bounds for the cases of N=2 or N=3 arbitrarily correlated lognormal RVs, or of N equally correlated lognormal RVs, where the bounds were given in integral form.
The geometric mean-arithmetic mean inequality was utilized in <cit.> to obtain an upper bound on the CDF, described by a Q-function, for any N and for RVs with arbitrary correlation. Lower bounds on the CDF have been given for the special case N=2, expressed by the Q-function <cit.>,<cit.> or by the Marcum Q-function <cit.>. In <cit.>, a bound was given on the error between the CDF of an approximating distribution obtained by moment matching and the CDF of the sum of lognormal RVs. In this work, we will derive a new bound on the CDF, and from this bound we will also derive new approximations to the CDF. The contributions of this work are summarized as: * We utilize the lesser-known tangential mean-arithmetic mean inequality to derive a novel upper bound on the CDF for a sum of N independent lognormal RVs. Our method replaces the sum of lognormal RVs with a product of shifted lognormal RVs. Remarkably, the bound is tight in the sense that it will converge to the desired CDF as the shift goes to infinity. * We use the Mellin transform together with numerical integration in order to compute the bound. At the left tail of the CDF, we find that the bound can be expressed by a Q-function. * We propose novel simple approximations to the CDF expressed by a product of Q-functions. The approximations are more accurate than existing ones, e.g., Farley's method. Sec. II contains the derivation of the bound and a discussion about its tightness. Approximations to the CDF are contained in Sec. III and numerical evaluations are given in Sec. IV, while Sec. V concludes the paper. § BOUND ON THE CDF AND ITS LEFT TAIL DISTRIBUTION §.§ Preliminaries Let X_i be a normal RV with probability density function (pdf) f_X_i(x)=1/√(2π)σ_i e^-(x-μ_i)^2/2σ_i^2, -∞<x<∞, σ_i>0, which implies that e^X_i is a lognormal RV. Our first objective is to give a bound for the CDF of the sum of N lognormal RVs, S_N=∑_i=1^N e^X_i. The CDF is as usual defined by F_S_N(γ)=∫_0^γ f_S_N(x)dx, where the pdf f_S_N(x) does not exist in closed form. In wireless communications, the outage probability is the probability that the SIR falls below a threshold, γ_th. For lognormal shadow fading, where the received power is modelled by e^X_i, this can be expressed as Pr [ e^X_0/∑_i=1^N e^X_i≤γ_th] =Pr [ ∑_i=1^N e^X_i≥ e^X_0/γ_th] =1-F_S_N(e^X_0/γ_th ). The l.h.s. is the complementary CDF (CCDF) of S_N. The following results will be utilized for deriving the bound. For any set of positive real numbers, 𝐲=(y_1,y_2,…,y_N), the arithmetic mean (AM), the geometric mean (GM) and the tangential mean[The general definition is TM(δ)=(δ+y_1)^α_1…(δ+y_N)^α_N-δ, where α_1+α_2+…+α_N=1.] (TM) are defined as: AM(𝐲) =1/N∑_i=1^N y_i, GM(𝐲) = (∏_i=1^N y_i )^1/N, TM(𝐲,δ) = (∏_i=1^N(δ+y_i))^1/N-δ, δ>0. It has been shown that the following inequality holds <cit.>, GM(𝐲)≤TM(𝐲,δ)≤AM(𝐲), where equality is achieved only when y_1=y_2=…=y_N. Moreover, it was proven in <cit.> that TM(𝐲,δ) is a monotonically increasing function of δ, and it can be shown that (see Appendix) lim_δ→∞TM(𝐲,δ)=AM(𝐲). Since AM=S_N/N, the RV N·TM(e^𝐗,δ), with 𝐗=(X_1,X_2,…,X_N), will approach S_N for large δ. §.§ A Bound on the CDF Now, let us define the shifted lognormal RV, Y_i=δ+e^X_i, which has a pdf given by f_Y_i(y,δ)=1/√(2π)σ_i(y-δ) e^-(ln(y-δ)-μ_i)^2/2σ_i^2, y>δ, δ≥ 0. The shift δ describes a linear translation of the standard lognormal pdf. By definition F_Y_i(γ,δ)=∫_δ^γ f_Y_i(y,δ)dy and the CDF becomes F_Y_i(γ,δ)= 0, γ≤δ; 1-Q ( (ln(γ-δ)-μ_i)/σ_i ), γ>δ, where Q(x)=1/√(2π)∫_x^∞ e^-y^2/2 dy.
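The mean inequality and the limit above are easy to verify numerically; the following Python sketch (with arbitrarily chosen parameters) checks GM ≤ TM(δ) ≤ AM and the monotone convergence of the TM to the AM as δ grows:

```python
# Numerical check (illustrative) of GM <= TM(delta) <= AM and of the
# limit TM -> AM as delta -> infinity, for lognormal samples e^{X_i}.
import numpy as np

rng = np.random.default_rng(0)
y = np.exp(rng.normal(loc=0.0, scale=1.0, size=6))  # N=6 draws of e^{X_i}

am = y.mean()                   # arithmetic mean
gm = np.exp(np.log(y).mean())   # geometric mean

def tm(y, delta):
    """Tangential mean: (prod_i (delta + y_i))^(1/N) - delta."""
    return np.exp(np.log(delta + y).mean()) - delta

for delta in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"delta={delta:8.1f}:  GM={gm:.4f} <= TM={tm(y, delta):.4f} <= AM={am:.4f}")
```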
The moments can be defined from dB units by σ=λσ_dB and μ=λμ_dB, with λ=ln(10)/10≈ 0.23026 <cit.>. Moreover, define the product of shifted lognormal RVs, Z_N=∏_i=1^N Y_i, and its CDF, F_Z_N(γ,δ)=∫_δ^N^γ f_Z_N(x,δ)dx. By using (<ref>) and AM=S_N/N, our proposed upper bound on the CDF follows: F_S_N(γ) ≤ F_Z_N((γ/N+δ)^N,δ), γ≥ 0. It should be remarked that we have not used any independence assumption for the RVs to arrive at (<ref>). Hence, the bound holds also for correlated lognormal RVs. However, numerical evaluation of (<ref>) with correlated RVs appears to be a nontrivial task and we do not consider this case further herein. §.§ Tightness of the Bound Due to (<ref>), (<ref>) is obviously tighter than the bound in <cit.>, which is based on the GM. Interestingly, (<ref>) is asymptotically tight, since (<ref>) implies that lim_δ→∞ F_Z_N((γ/N+δ)^N,δ) = F_S_N(γ). The CDF can be obtained through (<ref>), which by variable substitution reduces to (<ref>): lim_δ→∞ F_Z_N((γ/N+δ)^N,δ) =lim_δ→∞∫_δ^N^(γ/N+δ)^N f_Z_N(x,δ)dx =lim_δ→∞∫_0^γ (x/N+δ)^N-1 f_Z_N((x/N+δ)^N,δ) dx. By taking the derivative w.r.t. γ, it follows that the integrand in (<ref>) will converge to f_S_N(x) as δ→∞, i.e., f_S_N(x)=lim_δ→∞ (x/N+δ)^N-1 f_Z_N((x/N+δ)^N,δ). Furthermore, it is straightforward to show that for all the means min_i y_i ≤ GM(𝐲), TM(𝐲,δ), AM(𝐲) ≤ max_i y_i. Therefore, we have AM(𝐲)-GM(𝐲)≤ max_i y_i-min_i y_i, which implies that the gap between the AM and the GM (and therefore also the gap between the AM and the TM) closes when the maximum difference between the elements in 𝐲 approaches 0. Thus, firstly, the bound (<ref>) will become tighter with smaller variances σ_i^2, since the r.h.s. of (<ref>) will then on average become smaller. Secondly, for small γ, all y_i will be small, i.e., ϵ=max_i y_i-min_i y_i will be small. Thus, using (<ref>), it follows from the squeeze theorem in calculus that lim_ϵ→ 0 AM(𝐲)-GM(𝐲) = lim_ϵ→ 0 AM(𝐲)-TM(𝐲,δ) = 0. Hence, (<ref>) will approach the bound from <cit.> for small γ, regardless of the δ value. If we define V_i=e^X_i and W_N=∏_i=1^N V_i, we have the bound from <cit.>, F_W_N(x)=1-Q((ln x-μ̅)/σ̅), where μ̅=𝔼[∑_i=1^N X_i]=∑_i=1^N μ_i and σ̅^2=𝔼[(∑_i=1^N X_i-μ̅)^2], where 𝔼 is the expectation operator. Taking the limit, due to (<ref>), we obtain lim_γ→ 0 F_Z_N((γ/N+δ)^N,δ) =lim_γ→ 0 F_W_N((γ/N)^N ) =lim_γ→ 0 1-Q( (N ln(γ/N)-μ̅)/σ̅ ). We shall therefore expect that the bound (<ref>) is well approximated by (<ref>) for small γ, i.e., at the left tail of the CDF. We can then rewrite (<ref>) for small γ as F_Z_N(γ,δ)≈ 1-Q( (ln(γ)-ln(N)-μ̅/N)/(σ̅/N) ), and identify that the r.h.s. of (<ref>) describes the mean and variance of a lognormal RV with parameters μ̂ =ln(N)+∑_i=1^N μ_i/N, σ̂^2 =∑_i=1^N σ^2_i/N^2. §.§ Computation of the Bound The bound can be computed for the case of independent RVs as follows. The pdf of a product of independent RVs can be determined by multiplicative convolution (aka Mellin convolution) or by Mellin transforms <cit.>. Since the Y_i are independent, the pdf f_Z_N(x) follows from repeated Mellin convolution, and by defining f_Z_1(y)=f_Y(y,δ) and using the definition of Mellin convolution <cit.>, we obtain: f_Z_n(x) =∫_δ^n-1^x/δ f_Y ( x/y,δ ) f_Z_n-1(y)/y dy, x≥δ^n, n≥ 2. Thus, the CDF could be obtained by computing N nested finite-range integrals according to (<ref>). Alternatively, the computation could be done in the transform domain. The Mellin transform for (<ref>) is defined for complex values s=α+iβ, where i=√(-1), by ϕ_Y(s)=∫_δ^∞ y^s-1 f_Y(y,δ)dy.
The largest interval a<α<b where (<ref>) converges is denoted the fundamental strip. When y→δ^+ we have f_Y(y,δ)∼((y-δ)(y-δ)^ln(y-δ))^-1 and when y→∞ we have f_Y(y,δ)∼(y y^ln(y))^-1, so f_Y(y,δ)=𝒪((y-δ)^-ln(y-δ)) as y→δ^+ and f_Y(y,δ)=𝒪(y^-ln(y)) as y→∞, and since lim_y→δ^+ln(y-δ)=-∞ and lim_y→∞ln(y)=∞, we have a=-∞ and b=∞ (cf. Lemma 1 <cit.>). The Mellin transform of the repeated Mellin convolution (<ref>) is the product of the N Mellin transforms <cit.>. The inversion of the Mellin transform is unique and is given by the line integral, for any α where a<α<b, f_Z_N(x)=1/2πi∫_α-i∞^α+i∞ x^-s ϕ_Y(s)^N ds, which is thus admitted for any real value α. There is no closed-form expression for ϕ_Y(s) and numerical integration of (<ref>) and (<ref>) will be required to determine F_Z_N((γ/N+δ)^N). For convenience of notation, (<ref>) and (<ref>) are given for μ_i=μ and σ_i=σ, but it is straightforward to generalize to non-uniform parameters μ_i and σ_i. Fig. <ref> depicts the Mellin transform[In our evaluations, numerical integration by an adaptive quadrature method is used for (<ref>) and a trapezoidal method is used for (<ref>).] as a function of β for a fixed value of α. The decay of ϕ_Y(s) becomes slower as δ increases, which implies that a larger range of β values needs to be evaluated in order to have a sufficiently small integration error in (<ref>). § APPROXIMATIONS OF THE CDF We next seek to find simple approximations to the r.h.s. of (<ref>). These expressions cannot be formally proven to be upper bounds on the CDF, although they may practically become so for a large range of γ values. Hence, they are referred to as approximations of the CDF. §.§ Approximation for N=2 For N=2, we have Z_2=Y_1Y_2 and the CDF becomes F_Z_2(x)=∫_δ^x/δ F_Y ( x/y,δ ) f_Y(y,δ)dy, x≥δ^2, where we for convenience of notation have assumed σ_i=σ and μ_i=μ, ∀ i. Therefore, we obtain F_Z_2((γ/2+δ)^2) =∫_δ^(γ/2+δ)^2/δ F_Y( (γ/2+δ)^2/y,δ) f_Y(y,δ)dy =∫_0^(γ/2+δ)^2/δ-δ (1- Q( (ln((γ/2+δ)^2/(δ+y)-δ)-μ)/σ )) f_Y(y,0)dy, where (<ref>) was used in (<ref>). Since lim_δ→∞ (γ/2+δ)^2/(y+δ)-δ = γ-y, it is straightforward to verify that (<ref>) asymptotically reduces to F_S_2(γ)=∫_0^γ F_Y(γ-y,0) f_Y(y,0)dy, i.e., the CDF obtained by convolving f_Y(y,0) with itself. By a change of integration variable, we have the equivalent representation lim_δ→∞ F_Z_2((γ/2+δ)^2)=∫_-∞^ln(γ) (1- Q( (ln(γ-e^x)-μ)/σ ) ) f_X(x)dx, where we denote the factor in parentheses by g(x). Focusing on the case δ→∞, where the bound is tight, an approximation to the CDF can be obtained in closed form (i.e., not requiring integration) from (<ref>) by the conventional approximation 𝔼g(x)≈ g(𝔼x), where the expectation operator 𝔼 is with respect to the RV x with pdf h_2(x)=f_X(x)/C_2(γ). The normalization factor C_2(γ) =∫_-∞^ln(γ) f_X(x)dx =1-Q( (ln(γ)-μ)/σ ) is needed such that h_2(x) is a pdf on the interval -∞<x≤ln(γ). Then, we get the approximation F_S_2(γ)≈ C_2(γ) ( 1-Q( (ln(γ-e^μ_2(γ))-μ)/σ ) ), where μ_2=𝔼x, which equals μ_2(γ) =∫_-∞^ln(γ) x h_2(x)dx = 1/C_2(γ)( μ( 1-Q( (ln(γ)-μ)/σ ) ) -σ/√(2π) e^-(ln(γ)-μ)^2/2σ^2) =μ-σ/(√(2π)C_2(γ)) e^-(ln(γ)-μ)^2/2σ^2. An interpretation of (<ref>) is that the CDF is the product between the CDF of a lognormal RV and the CDF of a shifted lognormal RV, where the shift e^μ_2(γ) is not fixed but depends on γ. To assess whether the r.h.s.
of (<ref>) may become an upper bound, we inspect the derivatives of g(x), dg(x)/dx =-e^x-(ln(γ-e^x)-μ)^2/2σ^2/√(2π)σ(γ-e^x) d^2g(x)/dx^2 =-e^x-(ln(γ-e^x)-μ)^2/2σ^2(γσ^2+e^xln(γ-e^x)-μ e^x))/√(2π)σ^3(γ-e^x)^2 leading to that dg(x)/dx is negative on the interval -∞<x≤ln(γ) and that it is a monotonically non-increasing function for -∞<x≤ x_0, where x_0 is the solution to γσ^2+e^x_0ln(γ-e^x_0)-μ e^x_0=0. Thus, g(x) is a concave function on the interval -∞<x≤ x_0. Therefore, if ln(γ)-x_0 is relatively small, the r.h.s. of (<ref>) may practically become an upper bound for a large range of γ values since Jensen's inequality implies that 𝔼g(x)≤ g(𝔼x) for -∞<x≤ x_0, albeit g(x) is not concave on x_0<x≤ln(γ). §.§ Approximation for N>2 The approximation method of Sec. III.A can be applied repeatedly for N>2 and we will show this explicitly for N=3. By defining γ_2=γ-e^μ_2(γ) and using S_3=S_2+e^X_3 with (<ref>), it follows that F_S_3(γ) ≈ C_2(γ)∫_-∞^ln(γ_2)(1-Q(ln(γ_2-e^x)-μ/σ) )f_X(x)dx ≈ C_3(γ)C_2(γ)(1-Q(ln(γ_2-e^μ_3(γ))-μ/σ) ) where the approximation in (<ref>) is due to the approximation of F_S_2(γ) and (<ref>) is due to 𝔼g(x)≈ g(𝔼x), where the RV x has a pdf h_3(x)=f_X(x)/C_3(γ), and the normalization factor C_3(γ) =∫_-∞^ln (γ_2)f_X(x)dx =1-Q( ln(γ_2)-μ/σ) is needed such that h_3(x) is a pdf on the interval -∞<x≤ln(γ_2). Furthermore, we obtain μ_3(γ) =∫_-∞^ln(γ_2) xh_3(x)dx = 1/C_3(γ)( μ( 1-Q( ln(γ_2)-μ/σ ) ) -σ/√(2π)e^-(ln(γ_2)-μ)^2/2σ^2) =μ-σ/√(2π)C_3(γ)e^-(ln(γ_2)-μ)^2/2σ^2 The error in the approximation is due to reusing the pdf for N=2 in (<ref>) and thereafter using 𝔼g(x)≈ g(𝔼x) again. Thus, the approximation may be less accurate as N increases. The above steps can be applied repeatedly to obtain γ_N, μ_N and C_N for N>3, which could be used to derive an approximation as for (<ref>). §.§ Approximation for large N Let us define the RV Z̃_N=Z_N^1/N and assume that N is large. Since ln(Z̃_N)= 1/N∑_i=1^N ln (Y_i), a central limit theorem (CLT) implies that, as N→∞, ln (Z̃_N) will be a normal RV with mean μ̃ and variance σ̃^2. Therefore, Z̃_N will converge to a lognormal RV. The mean and variance can be determined according to the CLT through μ̃ =∫_-∞^∞ln(δ+e^x)f_X(x)dx ≈1/√(π)∑_m=1^Mw_m ln(δ+e^√(2)σ x_m+μ) σ̃^2 =1/N (∫_-∞^∞ln(δ+e^x)^2f_X(x)dx-μ̃^2 ) ≈1/N (1/√(π)∑_m=1^Mw_m ln(δ+e^√(2)σ x_m+μ)^2 -μ̃^2 ) where (<ref>) and (<ref>) are due to Gauss-Hermite integration of order M. The weights w_m and abscissas x_m can be found in <cit.>. By using (<ref>) and AM=S_N/N, we can therefore make the following approximation for large N. F_Z_N((γ/N+δ)^N) ≈ F_Z̃_N(γ/N+δ) =1-Q ( ln(γ/N+δ)-μ̃/σ̃ ) § NUMERICAL EXAMPLES §.§ Evaluation of the Left Tail Fig. <ref> shows the l.h.s. of (<ref>) and the CDF, which is obtained by numerical integration, for the case of N=2, σ=1 and μ=0. The distance between the curves is almost constant, but since the plot is in logarithmic scale, it implies that the actual difference between the two curves decreases as γ decreases. Hence, it confirms the bound (<ref>) can for small γ be approximated by the bound of <cit.>. However, asymptotically as γ→ 0, the distribution of S_N may not be that of a lognormal RV <cit.>. As a reference case the lower bound [(10a)-(10c) of <cit.>], which applies for N=2 and is based on the Marcum-Q function is included, which is even tighter. §.§ Evaluation of the Bound We plot the CCDF, hence the hitherto derived upper bounds on the CDF are displayed as lower bounds on the CCDF. Additionally, the CDF is plotted in Fig. 
<ref> in order to be able to compare bounds at low γ. For comparison, we use previously published lower bounds on the CCDF. For example, the r.h.s. of the following inequality is plotted: 1-F_S_N(γ) ≥ 1- ( 1 -Q ( ln(γ)-μ/σ ))^N, which is a special case of <cit.> and is also referred to as Farley's method <cit.>. Notably, the bound of <cit.> is based on the GM and will be worse than (<ref>), so it is not included. For N=6, we also include the improved bound [eq. (11) in <cit.>], which is a two-dimensional integral that is evaluated numerically. The distribution of S_N obtained by Monte-Carlo simulations is also included in the plots. We select δ=10 and δ=100 in order to demonstrate the bounds, and apply σ=1 (σ_dB=4.34) and σ=2 (σ_dB=8.69). Typical values of σ_dB in radio channels are in the range 4–12 dB. From Fig. <ref> and Fig. <ref>, as expected, it can be observed that the bound is tighter for smaller σ. Moreover, as N increases, a larger value of δ is required in order to close the gap to the desired CCDF, especially for large σ. For small γ, the bound is accurate in general. For the evaluated values of δ, the new bounds are shown to be better than (<ref>) for small γ, while (<ref>) is better for large γ. Fig. <ref> shows that the proposed bound is very accurate at the left tail of the CDF, whereas the previous bounds are not. §.§ Evaluation of the Approximations Fig. <ref> contains the approximation for N=2 and shows that (<ref>) appears as a lower bound and that it is better than Farley's method (<ref>). In particular, the difference is significant for small σ. We solve for x_0 numerically, and the subplot includes ϵ= ln(γ)-x_0, which shows that ϵ decreases with increasing γ and increasing σ. If ϵ is fairly small, g(x) is concave over most of the integration interval, so that (<ref>) practically behaves as an upper bound. Fig. <ref> shows that the approximation (<ref>) for N=3 is accurate for large γ, i.e., the right tail, and for large σ. The inset plot shows that for small γ and small σ, the approximation (<ref>) intersects the CDF and is thus not a lower bound. These results show that the suggested approximation works well, at least when N is moderately large. For the case of large N, since the product of N RVs is replaced with a single RV, the inequality (<ref>) cannot be used and the approximation (<ref>) is not necessarily a lower bound on the CCDF for an arbitrary δ. This can be observed from Fig. <ref>, wherein, for N=30, the curve for δ=100 intersects the desired CCDF. The inset plot shows that, for the fixed value γ=70, (<ref>) is an increasing function of δ but does not converge to the actual value of the CCDF. Notably, the previous bounds are not accurate at all for such large N. § CONCLUSIONS We have derived an upper bound on the CDF of a sum of lognormal RVs which becomes tight and converges to the CDF for large values of the shift δ. Thus, it becomes more accurate as δ increases and outperforms previously suggested upper bounds and approximation methods. Evaluation is done by numerical integration. The price of its accuracy is a potentially larger computational cost, in terms of numerical integration effort, than approximation methods, e.g., those based on moment matching or on pdfs other than the lognormal, or other bounds. The bound can be approximated by a single Q-function at the left tail.
Furthermore, we gave simple approximations to the bound on closed-form by means of products of Q-functions. These approximations were shown to be more accurate than, e.g., Farley's method, at least for moderately large N. An implication of this work is that the classical problem of the sum of lognormal RVs could alternatively be viewed as a problem of a product of shifted lognormal RVs with large shifts. 1 Beaulieu0 N. C. Beaulieu, A. A. Abu-Dayya, and P. J. McLane, “Estimating the distribution of a sum of independent lognormal random variables," IEEE Trans. Commun., vol. 43, no. 12, pp. 2869–2873, Dec. 1995. Fenton L. F. Fenton, “The sum of lognormal probability distributions in scatter transmission systems,” IRE Trans. Commun. Syst., vol. 8, no. , pp. 57–67, 1960. Schwarz S. Schwartz and Y. Yeh, “On the distribution function and moments of power sums with lognormal components,” Bell. Syst. Tech. J., vol. 61, pp. 1441–1462, 1982. Tellambura1 C. Tellambura and D. Senaratne, “Accurate computation of the MGF of the lognormal distribution and its application to sum of lognormals,” IEEE Trans. Commun., vol. 58, no. 5, pp. 1568–1577, May 2010. Mehta N. B. Mehta, J. Wu, A. F. Molisch, and J. Zhang, “Approximating a sum of lognormal random variables with a lognormal,” IEEE Trans. Wireless Commun., vol. 6, no. 7, pp. 2690–2699, July 2007. Beaulieu1 N. C. Beaulieu and Q. Xie, “An optimal lognormal approximation to lognormal sum distributions,” IEEE Trans. Veh. Technol., vol. 53, no. 2, pp. 479–489, Mar. 2004. Miles J. Miles, “On the Laplace transform of the lognormal distribution: analytic continuation and series approximations,” J. of Computational and Applied Math., vol. 404, Apr. 2022. Zhao L. Zhao and J. Ding, “Least squares approximations to lognormal sum distributions," IEEE Trans. Veh. Technol., vol. 56, no. 2, pp. 991–997, Mar. 2007. Nie H. Nie and S. Chen, “Lognormal sum approximation with type IV Pearson distribution," IEEE Commun. Lett., vol. 11, no. 10, pp. 790–792, Oct. 2007. Liu Z. Liu, J. Almhana, and R. McGorman, “Approximating lognormal sum distributions with power lognormal distributions," IEEE Trans. Veh. Technol., vol. 57, no. 4, pp. 2611–2617, July 2008. DiRenzo M. Di Renzo, F. Graziosi, and F. Santucci, “Further results on the approximation of log-normal power sum via Pearson type IV distribution: a general formula for log-moments computation,” IEEE Trans. Commun., vol. 57, no. 4, pp. 893–898, Apr. 2009. Li X. Li, Z. Wu, V. D. Chakravarthy, and Z. Wu, “A low-complexity approximation to lognormal sum distributions via transformed log skew normal distribution," IEEE Trans. Veh. Technol., vol. 60, no. 8, pp. 4040–4045, Oct. 2011. Lam C. L. J. Lam and T. Le-Ngoc, “Log-shifted gamma approximation to lognormal sum distributions," IEEE Trans. Veh. Technol., vol. 56, no. 4, pp. 2121–2129, July 2007. Slimane S. B. Slimane, “Bounds on the distribution of a sum of independent lognormal random variables,” IEEE Trans. Commun., vol. 49, no. 6, pp. 975–978, June 2001. Tellambura C. Tellambura, “Bounds on the distribution of a sum of correlated lognormal random variables and their application," IEEE Trans. Commun., vol. 56, no. 8, pp. 1241–1248, Aug. 2008. Berggren1 F. Berggren and S. B. Slimane, “A simple bound on the outage probability with lognormally distributed interferers,” IEEE Commun. Lett., vol. 8, no. 5, pp. 271–273, May 2004. Zhu B. Zhu, Z. Zhang, L. Wang, J. Dang, L. Wu, J. Cheng, and G. 
Li, “Right tail approximation for the distribution of lognormal sum and its applications,” in Proc. IEEE Globecom, Taipei, Taiwan, 2020, pp. 1–6. Xiao Z. Xiao, B. Zhu, J. Cheng, and Y. Wang, “Outage probability bounds of EGC over dual-branch non-identically distributed independent lognormal fading channels with optimized parameters,” IEEE Trans. Veh. Technol., vol. 68, no. 8, pp. 8232–8237, Aug. 2019. Beaulieu2 N. C. Beaulieu and G. Luan, “On the Marcum Q-function behavior of the left tail probability of the lognormal sum distribution," in Proc. IEEE ICC, Dublin, Ireland, 2020, pp. 1–6. Berggren2 F. Berggren “An error bound for moment matching methods of lognormal sum distributions,” European Trans. Telecommun., vol. 16, no. 6, pp. 573–577, 2005. Sandor J. Sándor, Theory of means and their inequalities [Online]. Available: https://www.math.ubbcluj.ro/∼jsandor/ Bertrand J. Bertrand, P. Bertrand, and J.-P. Ovarlez, “Chapter 12: The Mellin transform,” The transform and applications handbook, Ed. A.D. Poularikas, CRC Press inc, 1995. Flajolet P. Flajolet, X. Gourdon, and P. Dumas, “Mellin transforms and asymptotics: harmonic sums,” Theoretical Computer Science, vol. 144, no. 1–2, pp. 3–58, June 1995. Stegun M. Abramowitz and J. Stegun, Handbook of mathematical functions with formulas, graphs, and mathematical tables, Dover, 9th ed., 1972. Zhu2 B. Zhu, J. Cheng, J. Yuan, J.-Y Wang, L. Wu, and Y. Wang, “A new asymptotic analysis technique for diversity receptions over correlated lognormal fading channels,” IEEE Trans. Commun., vol. 66, no. 2, pp. 845–861, Feb. 2018. First, rewrite the Nth root: (∏_i=1^N(δ+y_i))^1/N =δ (1+y_1/δ)^1/N (1+y_2/δ)^1/N… (1+y_N/δ)^1/N Furthermore, the Taylor series at t=0 can be obtained as (1+t)^1/N=1+t/N+1/N(1/N-1)t^2/2+𝒪(t^3). Utilizing the first two terms of (<ref>) and expanding (<ref>), we obtain lim_δ→∞ (∏_i=1^N(δ+y_i))^1/N-δ = lim_δ→∞δ(1+y_1/Nδ+𝒪 (1/δ^2) ) (1+y_2/Nδ+𝒪 (1/δ^2) )…(1+y_N/Nδ+𝒪 (1/δ^2) )-δ = lim_δ→∞y_1+y_2+…+y_N/N+𝒪 (1/δ^2 ) = y_1+y_2+…+y_N/N.
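The limit established above is easy to verify numerically. A small sketch (the sample values are arbitrary; the quantity computed is TM(y,δ) in the notation of the paper):

import numpy as np

y = np.array([0.7, 2.3, 5.1, 9.4])
print(f"AM = {y.mean():.6f}")
for delta in (1.0, 10.0, 100.0, 1000.0):
    tm = np.prod(delta + y)**(1.0 / y.size) - delta  # (prod(delta+y_i))^(1/N) - delta
    print(f"delta={delta:7.1f}  TM={tm:.6f}")        # approaches the AM as delta grows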
http://arxiv.org/abs/2405.09046v1
20240515025026
Entanglement parity effects in the Kane-Fisher problem
[ "Chunyu Tan", "Yuxiao Hang", "Stephan Haas", "Hubert Saleur" ]
cond-mat.str-el
[ "cond-mat.str-el", "quant-ph" ]
Entanglement parity effects in the Kane-Fisher problem Chunyu Tan^1, Yuxiao Hang^1,†, Stephan Haas^1 and Hubert Saleur^1,2 1 Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-0484, USA 2 Institut de Physique Théorique, Université Paris Saclay, CEA, CNRS, F-91191 Gif-sur-Yvette, France † yhang@usc.edu § ABSTRACT We study the entanglement of a segment of length ℓ in an XXZ chain with one free extremity and the other connected to the rest of the system with a weak bond. We find that the von Neumann entropy exhibits terms of order O(1) with strong parity effects, which probe the physics associated with the weakened bond and its behavior under the RG (the Kane-Fisher problem). In contrast with the XX case studied previously <cit.>, the entropy difference δ S≡ S^e-S^o now gives rise to a “resonance” curve which depends on the product ℓ T_B, with 1/T_B a characteristic length scale akin to the Kondo length in Kondo problems <cit.>. The problem is studied both numerically using DMRG and analytically near the healed and split fixed points. Interestingly - and in contrast with what happens in other impurity problems <cit.> - δ S can, at least at lowest order, be tackled by conformal perturbation theory. § INTRODUCTION The study of terms of O(1) in entanglement in the presence of defects presents many challenges, and several questions - in particular, those concerning universality - remain unanswered to this day <cit.>. On the other hand, terms of O(1) in general have a potentially interesting physical meaning, for instance in the presence of topological phases or topological defects, and their study is of more than purely academic interest. Conformal field theory is widely used in calculating entanglement entropy <cit.>. In an earlier work <cit.>, we considered such terms of O(1) for free fermions in the presence of a boundary and a conformal defect, and found parity effects that did not decay away from the boundary. A point made in <cit.> was that these effects hinted at the possibility of a topological phase in the SSH model. Similar parity effects - involving, however, extra zero modes - were studied recently in <cit.>. The purpose of the present paper is to extend the analysis of <cit.> to the interacting case, where the physics will turn out to be different and to involve, in particular, a renormalization group (RG) flow. Specifically, the system we consider here is an XXZ spin chain of length L with free boundary conditions at either end, and a modified coupling at position ℓ (see below for a fully accurate description of the model). In the absence of the boundary, this provides one of the many possible variants of the so-called Kane-Fisher problem. The latter is usually defined in terms of a Luttinger liquid coupled to an impurity, but the formulation in terms of a spin chain is of course equivalent thanks to the Jordan-Wigner transformation and bosonization. In what follows, we will freely use one or the other of these languages. Without interactions - i.e., in the XX chain - the modified coupling induces, in the RG sense, an exactly marginal perturbation.
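At this free-fermion point the entanglement can be computed exactly from the one-particle correlation matrix, which also provides an independent check of the parity effects discussed below. A minimal Python sketch (system sizes and the function name are our own choices):

import numpy as np

def xx_block_entropy(L, ell, lam):
    # open XX chain of length L at half filling, bond (ell, ell+1) weakened to lam;
    # returns the von Neumann entropy of the block of the first ell sites
    T = np.diag(-np.ones(L - 1), 1)
    T = T + T.T
    T[ell - 1, ell] = T[ell, ell - 1] = -lam
    eps, V = np.linalg.eigh(T)
    occ = V[:, :L // 2]                 # fill the L//2 lowest single-particle modes
    C = occ @ occ.T                     # ground-state correlations <c_i^dag c_j>
    nu = np.linalg.eigvalsh(C[:ell, :ell])
    nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]
    return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))

lam = 0.4
for ell in (100, 101):                  # even versus odd subsystem length, with L = 2*ell
    print(ell, xx_block_entropy(2 * ell, ell, lam))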
The entanglement of the region A starting at the boundary and ending at the modified bond was found in  <cit.> to possess, on top of the expected lnℓ term with a factor proportional to the effective, coupling dependent central charge, a term of O(1) that differs between the even and the odd cases. The corresponding difference δ S was found to be a universal function with interesting properties, interpolating between ln 2 and 0. In the case with interactions, the modified coupling induces an RG flow, with two possible fixed points, the fully split chain and the healed chain. Depending on the sign of J_z, one of these is stable and the other one unstable. Under a relevant perturbation of the stable fixed point by a modified coupling at position ℓ, the entanglement now has a lnℓ term whose slope depends on an effective central charge, and can be expressed as a function of ℓ T_B where T_B is a characteristic energy scale akin to the Kondo temperature in Kondo-like impurity problems. In this case again, we find below that the terms of O(1) differ between the even and odd cases, and that the corresponding difference δ S is now a universal quantity depending, like the effective central charge, on the product ℓ T_B. Just as in the non-interacting case J_z=0, δ S interpolates between ln 2 and 0. As commented in the conclusions, similar parity effects are observed in the periodic case. It is interesting to reflect a bit on their physical meaning. The best way to start is to think of Kondo physics from the point of view of entanglement. It would be tempting to think that as the Kondo impurity gets screened by conduction electrons at low energy, it somehow becomes more entangled with them. But this clearly cannot be: the entanglement of the Kondo impurity as measured by the Von Neumann entropy of the corresponding spin with the rest of the system (the “single site impurity entanglement entropy” <cit.>) is (in the absence of a magnetic field) fixed at ln 2 irrespective of the Kondo coupling. To be able to define non-trivial quantities (that can be used later on to provide signatures of the screening cloud <cit.>) one needs to introduce another (length) scale. In the Kondo literature, it is common to consider for this an interval of the system extending a distance r from the impurity (in the s-wave language)<cit.>, or, in related purely one-dimensional problems, an interval of length L centered on the impurity <cit.>. The physics at play can then be understood in terms of valence bonds originating from the impurity and reaching out to the rest of the system. Roughly, the “impurity part" <cit.> of the entanglement reaches ln 2 when r≲ξ_K (with ξ_K the Kondo length) that is, when r is smaller than the typical length of the valence bond originating from the impurity spin. The physics we have in our case is somewhat similar and could be intuitively interpreted in a valence bond picture <cit.>. Think for instance of the Hamiltonian (<ref>) below. As illustrated in Fig. <ref>(a), in the limit of very small λ (corresponding to a very small impurity bond) and in the simplified valence bond picture, valence bonds “prefer” not to stretch over the weak link: only one is forced to do so in the odd length case, contributing ln 2 to the difference of entanglements between odd and even. On the contrary, if λ∼ 1, there are a lot of such valence bonds, and even though the parity of their numbers is odd in the odd case and even in the even case, the difference averages to a small term that decays with L. Fig. 
<ref>(b) shows the constant terms in the even and odd cases when J_z=0, which confirms our qualitative arguments (it is difficult to define the O(1) terms unambiguously when J_z ≠ 0, see below). In the intermediate region, the result depends on the average length of the valence bonds, which in this interpretation becomes 1/T_B, the equivalent of ξ_K. Interestingly, while the underlying physics is that of the Kane-Fisher <cit.> problem, from the point of view of entanglement we have results akin to the Kondo problem, with entanglement curves always interpolating between ln 2 and 0, and a “resonance” as can be seen e.g. in Fig. <ref>. The paper is organized as follows. In section <ref>, we discuss the vicinity of the split fixed point, and in section <ref> the vicinity of the healed fixed point. In both cases, we provide “ab-initio” numerical results together with comparisons with (conformal) perturbative calculations, especially of the entanglement. The most detailed calculations are presented in subsections <ref>,<ref>, where some subtle aspects - including the renormalization between bare and renormalized couplings - are investigated in detail. We find in particular that, even though the entanglement cut and the location of the perturbation coincide, no non-universal effects seem to be encountered. This is in contrast with the current expectation for entanglement across topological defects <cit.>. § PHYSICS AROUND THE SPLIT FIXED-POINT §.§ Generalities We consider first the Hamiltonian H^A = ∑_j=1^ℓ(σ_j^xσ_j+1^x+σ_j^yσ_j+1^y+J_z σ_j^zσ_j+1^z) +λ(σ_ℓ^xσ_ℓ+1^x+σ_ℓ^yσ_ℓ+1^y+J_z σ_ℓ^zσ_ℓ+1^z) + ∑_j=ℓ+1^∞(σ_j^xσ_j+1^x+σ_j^yσ_j+1^y+J_z σ_j^zσ_j+1^z), where the σ's are Pauli matrices. Set g=2-(2/π)arccos(J_z), so g>1 for J_z>0 and g<1 for J_z<0. Near the split fixed point λ=0, a small nearest-neighbor interaction as in (<ref>) is an operator of dimension (length)^-g. This means that, near the split fixed point, λ is relevant if J_z<0 and irrelevant if J_z>0. The RG flows are thus as in Fig. <ref>. Writing the perturbation as λ O, this product must have dimension (length)^-1 and thus, if O has dimension (length)^-g, we find dim [λ]=(length)^g-1. Hence we can construct a quantity of dimension (length)^-1 (a temperature) by considering[Note that definitions of T_B differing by numerical factors may appear in the literature.] T_B≡λ^1/(1-g). In the relevant case, the chain at large distances appears healed. This can be clearly seen by considering the entanglement of the region of size ℓ to the left of the cut with the rest of the system: the leading (“bulk”) behavior of the entanglement should interpolate from S=0 to S≈(1/6)lnℓ as ℓ is increased at fixed λ (for earlier studies of this problem, see <cit.>). This is illustrated in Fig. <ref>, where we have plotted the derivative of the entanglement entropy for the system in the particular case L=2ℓ. Recall that, from finite-size scaling results, the entanglement in the healed case always has the leading behavior (c/6)ln L, with c=1 here. This is seen in Fig. <ref> as the two curves go to 6 dS/dln L=1 in the IR. Note that the results look quite different for odd and even lengths ℓ (represented by S^e and S^o respectively). It is this difference that interests us in what follows.
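These two definitions are trivial to evaluate; a small helper of ours makes the scaling variable explicit:

import numpy as np

def kane_fisher_scales(Jz, lam):
    # g = 2 - (2/pi)*arccos(Jz); T_B = lam**(1/(1-g)) for g != 1
    # (numerical prefactors in T_B are convention dependent, cf. the footnote above)
    g = 2.0 - 2.0 * np.arccos(Jz) / np.pi
    TB = lam**(1.0 / (1.0 - g)) if not np.isclose(g, 1.0) else np.nan
    return g, TB

for Jz in (-0.5, 0.5):
    g, TB = kane_fisher_scales(Jz, lam=0.3)
    print(f"Jz={Jz:+.1f}  g={g:.3f}  T_B={TB:.4g}")  # curves should collapse vs. ell*T_B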
Like in <cit.> we expect to have, for an infinite system (more complicated formulas involving ℓ/L are required for a finite system, see below), dS^e_A,imp+bdr/dlnℓ=F(ℓ T_B)+f^e(ℓ T_B), dS^o_A,imp+bdr/dlnℓ=F(ℓ T_B)+f^o(ℓ T_B), where F,f^e,f^o are non-trivial, universal functions. Note that in <cit.>, the forms (<ref>) were encountered when considering a non-interacting system with a dot-like impurity - that is, two successive bonds modified. This case gave rise to a non-trivial RG flow, just like the case of a single modified bond in the presence of interactions that we study in this paper. In (<ref>), F encodes a sort of effective central charge, while f^e,o are “terms of O(1)”. This name might sound inadequate from (<ref>), but as discussed in detail in <cit.>, only derivatives of the entanglement obey proper scaling, so the (parity-independent) “bulk” term (c/6)lnℓ ceases to dominate the expression for the entanglement once derivatives are taken, due to the fact that d(lnℓ)/dlnℓ=1. Like in our earlier work <cit.>, we shall focus on the difference f^e-f^o, which originates from terms in S^e,o whose difference does not decay as ℓ→∞. In this context, λ relevant means that δ S≡ S^e-S^o will evolve from something smaller than ln 2 to 0 as L increases for a fixed λ (that is, healing occurs), while irrelevant means that δ S evolves from something smaller than ln 2 to exactly ln 2 as L increases for a fixed λ. §.§ The relevant case We expect that in the limit of small λ and large L, and for the relevant case (g<1), results should have a universal dependency on the product LT_B. Note that this combination is small when L or λ is small, large when L or λ are large, and that, since results depend only on LT_B, increasing L at fixed λ in the scaling limit is like increasing λ at fixed L: in other words, the long-distance physics corresponds to healing. Scaling per se only occurs formally in the limits λ→ 0, L→∞ with the product Lλ^1/(1-g) finite. While the phenomenology is well understood in general, we focus here on aspects of entanglement in the presence of a boundary that have not been studied before, except in the special free-fermion case J_z=0 <cit.>. Results confirming the qualitative RG picture are given below. Our numerical results are obtained by using the density matrix renormalization group (DMRG) algorithm and the Tenpy package <cit.>. We plot the difference of entanglements, with the subsystem starting at the boundary and ending in the middle of the modified bond (i.e. containing the spins j=1,…,j=ℓ), for the cases ℓ even and ℓ odd, and set δ S≡ S^e-S^o. The total system size is taken to be L=Zℓ, with Z a factor taken to be Z=2 unless otherwise specified. We note that there are in fact (at least) two possible variants of the problem, since another Hamiltonian without a J_z term at the impurity bond, H^B = ∑_j=1^ℓ(σ_j^xσ_j+1^x+σ_j^yσ_j+1^y+J_z σ_j^zσ_j+1^z) +λ(σ_ℓ^xσ_ℓ+1^x+σ_ℓ^yσ_ℓ+1^y ) + ∑_j=ℓ+1^∞σ_j^xσ_j+1^x+σ_j^yσ_j+1^y+J_z σ_j^zσ_j+1^z, could also be considered. Since the J_z term is irrelevant, it should not affect the universal limit of our results, as we will see below. We first give results for Hamiltonian (<ref>) in Fig. <ref>. Totally identical results are obtained in the scaling limit for the Hamiltonian (<ref>), as shown in Fig. <ref>. In particular, the value of T_B is the same for the two curves. This is easily understood since the σ_ℓ^zσ_ℓ+1^z term is irrelevant near the split fixed point <cit.>: while it affects corrections to scaling, it simply disappears in the limit λ→ 0, ℓ→∞.
We note that in Fig. <ref> the data corresponding to (<ref>) is a little “fuzzy” while the two curves are slightly off for LT_B≳ 1. This is due to the difficulty of reaching the scaling limit in the deep IR region, where values of L unreachable by DMRG would, strictly speaking, be necessary. This is a familiar problem in the study of interacting systems. It practice, we take the small difference between the two curves in Fig. <ref> as a measure of the uncertainty about the true location of the scaling curve. Varying Z does not change results much, even though of course the exact curve does, indeed, depend on Z (see below for a detailed study near the fixed points). For the sake of brevity, we refrain from showing numerical results confirming this. §.§ The irrelevant case In this case we start from a small tunneling term but are driven at low-energy to the situation where the system is split. This can be seen in the fact that LT_B increases at fixed λ when increasing L but increases at fixed L when decreasing λ. Hence, large L behaves like small λ, and the split-fixed point is reached at large distances. Going to small LT_B is formally equivalent to increasing λ and thus, one would hope, to getting closer to the healed fixed-point. However, in this limit, other irrelevant operators will start playing a role, and there is no chance to reach this fixed-point without fine tuning. In practice, this simply means that the left-hand side of the curves plotting δ S as a function of LT_B are not fully universal. See Figs. <ref> for some illustrations. In the following, we will mostly restrict to the study of relevant perturbations. §.§ Some perturbative calculations The first question to ask is how δ S varies as a function of λ at small λ. In order to answer this question we need to think first about the situation at λ=0, i.e. when the two systems are totally decoupled. The difference between even and odd is then spectacular. In the even case, both sides have an even number of spins and are in a (non-degenerate) ground-state of total spin S^z=0 (we set S^z=1 2∑_j σ_j^z). In contrast, in the odd case, both sides have a remaining spin 1/2 degree of freedom, and thus have a ground-state degenerate twice, with S_z=±1/2. This means in particular that the shift in energy due to the presence of the λ≠ 0 term exhibits different dependencies with λ in the even and odd cases. §.§.§ The shift in energy For the even case, this shift can be obtained from non-degenerate perturbation theory and thus, by conservation of S_z, is quadratic in λ at small coupling. In contrast, for the odd case, the shift comes from degenerate perturbation theory, and since states with spin S_z=± 1/2 have the same-energy, conservation of S_z does not preclude the presence of a term linear in λ. It is interesting to push these considerations a bit further by using field theoretic techniques. First, we fermionize our spin chain (we will follow standard conventions such as those in  <cit.> whenever possible), leading to the two possible Hamiltonians H^A= ∑_j=1^ℓ(c^†_jc_j+1+h.c.+J_z (c_j^† c_j-1/2)(c_j+1^† c_j+1-1/2))+ λ(c^†_ℓ c_ℓ+1+h.c.+J_z (c_ℓ^† c_ℓ-1/2)(c_ℓ+1^† c_ℓ+1-1/2)) +∑_j=ℓ+1^∞(c^†_jc_j+1+h.c.+J_z (c_j^† c_j-1/2)(c_j+1^† c_j+1-1/2)) and H^B = ∑_j=1^ℓ(c^†_jc_j+1+h.c.+J_z (c_j^† c_j-1/2)(c_j+1^† c_j+1-1/2))+λ(c^†_ℓ c_ℓ+1+h.c.) + ∑_j=ℓ+1^∞(c^†_jc_j+1+h.c.+J_z (c_j^† c_j-1/2)(c_j+1^† c_j+1-1/2)) We recall the formulas for the decomposition of lattice fermions into continuous fields (see e.g.  
<cit.> for discussion of related problems) c_j↦ e^iK_Fjψ_R+e^-iK_Fjψ_L At half-filling, K_F=π 2. For a chain starting at j=1, we have formally c_j=0 for j=0 to take into account the chain termination, so at that extremity the boundary conditions ψ_L=-ψ_R. Meanwhile, the conditions at the other extremity depend on the parity of the length. If the last site is l, we set c_l+1∝ψ_R+e^-2iK_F(l+1)ψ_L=0 so at half-filling this becomes ψ_R(l+1)+(-1)^l+1ψ_L(l+1)=0 We see that if l is odd we get the same boundary conditions at l+1 than at 0, while if l is even we get opposite boundary conditions. Using bosonization formulas ψ_R∝exp(i√(4π)ϕ_R), ψ_L∝exp(-i√(4π)ϕ_L) and handling the four-fermion term in the usual way  <cit.> we get the continuum theory with bulk Hamiltonian H=v 2∫ dx[ Π^2+(∂_xΦ)^2] The compactification radius of the boson is R≡√(g 4π) while the sound velocity is given by v=π 2√(1-J_z^2)arccos J_z Note that v=1 if J_z=0. The boundary conditions at the origin in the non-interacting case are Φ=ϕ_R+ϕ_L=π√(4π) which becomes Φ(0)=π R in general. From (<ref>) it follows that, on the right side we have Φ(l+1)=π R for l odd and Φ(l+1)=0 for l even (all these modulo 2π R). So to summarize we can simply write Φ(0)=π R, Φ(l+1)=2π RS^z with S^z integer (half an odd-integer) for l even (odd). We now consider perturbation around the almost split fixed point, with Hamiltonian H^A (<ref>) or H^B (<ref>). To all orders in perturbation theory, the correlators we need to evaluate are factorized into correlators for two decoupled sub-systems - two open chains of equal length ℓ in our set-up with Z=2. Let us now consider the shift in ground-state energy due to the tunneling. We start with the case ℓ even where the ground-state of either half is non degenerate. The Hamiltonian we must consider is H= v 2∫_0^ℓ dx [(Π^(1))^2+(∂_xΦ^(1))^2]+ v 2∫_ℓ^2ℓ dx [(Π^(2))^2+(∂_xΦ^(2))^2] + λ Z_λcosβ√(2)(Φ̃^(1)(ℓ)-Φ̃^(2)(ℓ)) where Z_λ is the renormalization factor between the renormalized and the bare couplings (a thorough discussion of such factors is provided in the next section), and we have set β=√(2π g) Note that the tunneling term is expressed in terms of the dual field Φ̃=ϕ_R-ϕ_L since the field Φ takes a fixed value on either side of the tunneling bond. To proceed, we need to introduce (imaginary) time, and thus to discuss propagators in the strip geometry. We need correlators of (exponentials of) the dual field Φ̃ with Dirichlet boundary conditions (which are the same as correlators of (exponentials of) the field Φ with Neumann boundary conditions). Denoting by y the imaginary time coordinate on a strip of width ℓ, we find the propagator on the edge (i.e. for points on the right (or left) boundary) to be ⟨Φ̃^(i)(y)Φ̃^(i)(y')⟩=-1 2πln|ℓπsinhπ vℓ (y-y')|^2 so we have ⟨ e^iβ√(2) (Φ̃^(1)(y)-Φ̃^(2)(y))e^-iβ√(2)(Φ̃^(1)(y')-Φ̃^(2)(y'))⟩=1| ℓπsinhπ vℓ(y-y')|^β^2π where one should note the apparition of v on the right-hand side - due to the fact that the continuum limit of the lattice Hamiltonian is not isotropic in space/(imaginary)time. Going back to the shift of the ground state-energy, the first order correction vanishes because, in the ground-state with Dirichlet boundary conditions, ⟨ e^iβΦ̃⟩=0. To get the second order, we determine the partition function in an annulus geometry with the imaginary time length Λ>>ℓ. 
It follows that the shift in energy is proportional to the finite integral λ^2Z_λ^2 2(ℓπ)^-2g∫ dy 1 |sinhπ vℓy|^2g=λ^2Z_λ^2 v(ℓ 2π)^1-2gΓ(g)Γ(1-2g)Γ(1-g), g<1 2 so δ E=λ^2c_1ℓ^1-2g Since λ∝ T_B^1-g, this goes as ℓ^-1(ℓ T_B)^2-2g. When 1 2<g<1, the integral is UV divergent. It can be regularized by the introduction of a small distance cut-off, adding to the right hand-side a term λ^2 vZ_λ^2 2(ℓπ)^-2g(ℓπ)^2g∫_a dy y^2g=λ^2 vZ_λ^2 2 a^1-2g and thus a non-universal contribution so we expect the change in energy δ E= λ^2 (c_1 ℓ^1-2g+c_2) where c_1,c_2 are constants. In the odd case the ground state is degenerate four times, since each half has a leftover spin 1/2: we have thus |Ω⟩_αβ=|α⟩⊗ |β⟩. In each of the subsystems, the raising/lowering spin at the extremity has non-zero matrix elements between |±α⟩. Carrying out degenerate perturbation theory we expect a shift in energy proportional to λ, whereas the non-zero matrix elements should scale as L^-g by dimensional analysis. Hence in this case δ E=c_3λℓ^-g These results have been checked numerically. §.§.§ The entanglement The perturbative computation of the entanglement near the split fixed point involves some technical aspects that are best discussed elsewhere. But we can still find out some important simple facts. We start by considering the simplest case Z=2 and ℓ even. When λ=0, the two subsystems are in a ground-state which mimics the case ℓ=2: the spin on either side of the cut is up or down, and since the total S^z for each subsystem vanishes, the remaining ℓ-1 spins have a total S^z that is down or up. In other words, the ground state of each subsystem can we written |0⟩=|(+)-⟩-|(-)+⟩√(2) where (+) and (-) stand for the state of the remaining ℓ-1 spins with this total magnetization. By Z_2 symmetry, (-) is obtained from (+) by flipping all spins. Imagine now calculating the ground state in perturbation theory, and restrict for simplicity to the case of the Hamiltonian H^B. The perturbation λ V≡λ 2(σ_ℓ^+σ_ℓ+1^-+h.c.) acting only on the extremity spins of the two subsystems couples to eigenstates of the decoupled system where one side has spin one and the other spin minus one. Call the corresponding eigenstates |(1)_n⟩ and |(-1)_n⟩, with energy E_n. Similarly, two insertions of V couple to eigenstates where both sides have vanishing spin. Call the corresponding eigenstates |(0)⟩_n (with |(0)_0⟩=|0⟩). To second order we have then |Ω⟩ =|0⟩⊗ |0⟩+ λ∑_n,m a_nm(|(1)_n⟩⊗ |(-1)_m⟩+|(-1)_n⟩⊗ |(1)_m⟩) +λ^2∑_n b_n (|(0)_n⟩⊗ |0⟩+|0⟩⊗ |(0)_n⟩) where, e.g. from first order perturbation theory, a_nm =1 2⟨ (1)_n|(+)+⟩⟨ (-1)_m|(-)-⟩ (E_n-E_0)(E_m-E_0) Taking the trace on the second subspace of the full density operator ρ=|Ω⟩⟨Ω| gives the contribution to the reduced density operator ρ_ATr_B ρ =(1+λ^2(b_0+b_0^*))|0⟩⟨ 0|+λ^2∑_nm A_nm( |(1)_n⟩⟨ (1)_m|+|(-1)_n⟩⟨ (-1)_m|) +λ^2∑_n>0b_n |(0)_n⟩⟨0|+b_n^* |0⟩⟨ (0)_n| where A_nm=∑_p a_npa_pm^*. The reduced density matrix is thus the sum of three operators acting on different subspaces, and whose products are all vanishing. Symbolically we write ρ_A=Tr_B ρ=(1+λ^2 (b_0+b_0)^*)|0⟩⟨ 0|+λ^2 A+λ^2B so that the ratio R_p≡Tr ρ_A^P (Tr ρ_A)^p has the structure R_p=(1+λ^2(b_0+b_0^*))^p+λ^2p(TrA^p+TrB^p) ( 1+λ^2 (b_0+b_0)^*+λ^2(TrA+TrB))^p and we find the entanglement entropy S=- .d dpR_P|_p=1=-2|X|^2λ^2(-1 2+ln |X|^2λ^2) where X is a coefficient simply following from the previous calculations. It follows from this discussion that the entanglement entropy in the even case has a leading term going as λ^2lnλ. 
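The Γ-function evaluation of the energy-shift integral above can be checked directly; the following sketch of ours verifies the equivalent identity 2∫_0^∞ sinh(t)^-2g dt=2^2gΓ(g)Γ(1-2g)/Γ(1-g) for g<1/2:

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

g = 0.3                                  # convergent case, g < 1/2
f = lambda t: np.sinh(t)**(-2 * g)
lhs = 2 * (quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0])
rhs = 2**(2 * g) * gamma(g) * gamma(1 - 2 * g) / gamma(1 - g)
print(lhs, rhs)                          # the two values agree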
It is useful to summarize the foregoing discussion in more general terms. In the even case, the ground state of each of the two decoupled systems is non-degenerate and has spin S_z=0. Since the full Hamiltonian commutes with the total spin, [H,S^z_A+S^z_B]=0, the reduced density matrix ρ_A commutes with the spin S^z_A <cit.>: [ρ_A,S^z_A]=0. Right at the decoupled point, ρ_A only has matrix elements between the factorized ground state and itself, both at S^z_A=0. However, under the tunneling perturbation, the new ground state acquires components onto states which, while having total spin S^z_A+S^z_B=0, have S^z_A=± 1. After tracing over the B degrees of freedom, and writing ρ_A in block-diagonal form with blocks labelled by S^z_A, this means that, to second order in perturbation theory, we have the structure ρ_A=Tr_Bρ=([ ρ^(0) 0 0 …; 0 ρ^(1) 0 …; 0 0 ρ^(-1) …; … … … …; ]) where the labels refer to values of S^z_A and the dots on the diagonal contain blocks of higher charge. The crucial point now is that ρ^(± n) for n≠ 0 is a contribution of order λ^2n, since it takes n actions of the perturbation to produce a state with spin S_z=± n starting from a state of vanishing spin. We thus see immediately that we can expect a structure as in (<ref>) and consequently, after calculating the Rényi entropy and taking the limit p→ 1, generate the leading term λ^2lnλ. Note meanwhile that, were we to carry out the perturbative expansion to higher orders, only terms even in λ would be encountered. The odd case is a bit different. Exactly at λ=0 there is a potential ambiguity, since the left- and right-hand sides are now both doubly degenerate. The entanglement is not even defined at this point without specification of the state of the full system. However, as soon as λ≠ 0, this degeneracy is broken, and the ground state becomes unique and has S_z=0. It is easy to identify this state in the case Z=2, where the system with or without perturbation is symmetric under exchange of the two sides and conserves the total spin: the ground state at finite λ remains in the sector antisymmetric under the exchange, and with S_z=0. When λ=0 we can write the (normalized) ground states of each side as the combinations |+⟩=λ^+_+ |(0)+⟩+λ^+_-|(++)-⟩, |-⟩=λ^-_- |(0)-⟩+λ^-_+|(–)+⟩, where once again (0),(++),(–) stand for states of the remaining ℓ-1 spins. We then choose the ground state of the whole system to be |Ω⟩=(|+-⟩-|-+⟩)/√(2). The density matrix of the lhs then reads schematically, in the basis (<ref>), Tr_Bρ=(1/2)([ (λ_+^+)^2 λ_+^+λ^+_- 0 0; λ^+_+λ^+_- (λ_-^+)^2 0 0; 0 0 (λ_+^-)^2 λ_+^-λ^-_-; 0 0 λ^-_+λ^-_- (λ_-^-)^2 ]). Using that the ground states (<ref>) are normalized, we can easily calculate the Rényi entropy from this and show that it gives rise, as expected, to S=ln 2. Now, going through the same charge-conservation arguments as before, we see that when perturbing the ground state we will have to carry out a calculation similar to the one of the even case, resulting in a charge-resolved structure for the density matrix, now of the type Tr_Bρ=([ ρ^(1/2) 0 0 0 …; 0 ρ^(-1/2) 0 0 …; 0 0 ρ^(3/2) 0 …; 0 0 0 ρ^(-3/2) …; … … … … …; ]) where the terms ρ^(1/2+n) will come with factors λ^2n. Once again we will get in the end terms that are even in λ, with a leading correction going as λ^2lnλ^2. Numerical results fully confirm this picture (see Fig. <ref>). We also give below a determination of the slopes of the leading terms, although we do not have a full analytical derivation at this stage.
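The even-case prediction can also be probed by brute-force exact diagonalization on small chains. A sketch of ours follows (the sizes are tiny, so only the qualitative trend - S even in λ, with S/λ^2 growing slowly as λ decreases - should be expected):

import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sy = np.array([[0., -1j], [1j, 0.]])
sz = np.diag([1., -1.])

def bond(o, j, L):
    # sigma^o_j sigma^o_{j+1} embedded in the 2^L-dimensional space
    ops = [np.eye(2)] * L
    ops[j] = ops[j + 1] = o
    return reduce(np.kron, ops)

L, ell, Jz = 8, 4, -0.5                 # even case: both halves have even length
for lam in (0.4, 0.2, 0.1, 0.05):
    H = sum((lam if j == ell - 1 else 1.0)
            * (bond(sx, j, L) + bond(sy, j, L) + Jz * bond(sz, j, L))
            for j in range(L - 1))
    gs = np.linalg.eigh(H)[1][:, 0]     # ground state
    p = np.linalg.svd(gs.reshape(2**ell, 2**(L - ell)), compute_uv=False)**2
    p = p[p > 1e-14]
    S = float(-np.sum(p * np.log(p)))
    print(f"lam={lam:5.2f}  S={S:.6f}  S/lam^2={S / lam**2:.4f}")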
To conclude, we see that, in contrast with the energy, the entanglement at small λ behaves similarly (but not identically) in the even and odd cases. § PHYSICS AROUND THE HOMOGENEOUS FIXED-POINT §.§ Generalities We can also consider the vicinity of the homogeneous (uniform) fixed point. In this case the relevant Hamiltonians are H^A = ∑_j=0^ℓ(σ_j^xσ_j+1^x+σ_j^yσ_j+1^y+J_z σ_j^zσ_j+1^z) +(1-μ)(σ_ℓ^xσ_ℓ+1^x+σ_ℓ^yσ_ℓ+1^y+J_z σ_ℓ^zσ_ℓ+1^z) + ∑_j=ℓ+1^∞σ_j^xσ_j+1^x+σ_j^yσ_j+1^y+J_z σ_j^zσ_j+1^z and H^B = ∑_j=0^ℓ(σ_j^xσ_j+1^x+σ_j^yσ_j+1^y+J_z σ_j^zσ_j+1^z) +(1-μ)(σ_ℓ^xσ_ℓ+1^x+σ_ℓ^yσ_ℓ+1^y)+J_z σ_ℓ^zσ_ℓ+1^z + ∑_j=ℓ+1^∞σ_j^xσ_j+1^x+σ_j^yσ_j+1^y+J_z σ_j^zσ_j+1^z Perturbing the coupling near the uniform fixed point corresponds in the continuum limit to an operator of dimension (length)^-g^-1 (together with an operator of dimension (length)^-2, see below). We see that the regions of relevance and irrelevance are switched with respect to the previous section, and near the homogeneous fixed point μ is irrelevant if J_z<0 near the homogeneous fixed point μ is relevant if J_z>0 The corresponding flows are sketched in Fig. <ref>. We see now that dim μ=(length)^g^-1-1 Using the same kind of scaling argument as for the case of an almost split chain, we now expect the properties to have a universal dependency on LΘ_B with Θ_B≡μ^1/(1-g^-1) §.§ Numerics for the entanglement In the relevant case, we now flow from the homogeneous to the split fixed point - this is illustrated in Fig. <ref> §.§ Some perturbative calculations We once again turn to bosonization. This time, since we start with a homogeneous chain, we have to consider a single bosonic theory on the half-line (or a segment of length Zℓ if the system is finite). We start with (<ref>) and need the crucial bosonization formula σ_j^xσ_j+1^x+σ_j^yσ_j+1^y = 2 (-1)^j c_1^±cosΦR(x=j) where c_1^± is a constant equal to 1π in the non-interacting case, but otherwise not known exactly (see below). Note we only represented the leading term, of dimension 1 4π R^2=g^-1. The next term would be proportional to ( ∂Φ∂ x)^2, of dimension 2. It thus becomes the most relevant one when g^-1=2, that is J_z<-√(2) 2. We will not study this region for now. In the non-interacting case g=1, R=√(4π) and we recover the results used in  <cit.>. The Hamiltonian corresponding to (<ref>) is then H=v 2∫_0^∞ dx[ Π^2+(∂_xΦ)^2]-μ_R π(-1)^ℓcosΦR(x=ℓ) Note that in (<ref>) we have introduced a renormalized coupling constant μ_R. Indeed, while in the non-interacting case μ_R=μ as defined with the lattice Hamiltonian (<ref>) (which is the same as (<ref>) in this case), in the presence of interactions, renormalization effects lead to μ_R=Z_μμ. The constant Z_μ - essentially the proportionality constant c_1^± in (<ref>) - is not universal. Its value for the XXZ spin chain is not known exactly, but has been determined numerically to a great accuracy in  <cit.> (for earlier work see  <cit.>). We will use the values deduced from Table I in  <cit.>, after the correspondence Z_μ^B=√(8π^2B_1^±)=√(4π^2) c_1^±, while J_z=Δ (so that Z_μ^B=1 for J_z=0). In fact, it will turn out that the relevant quantity is the ratio Z_μ^B v, where v is given in (<ref>) For the Hamiltonian (<ref>), we have to take into account the fact that σ_j^zσ_j+1^z=c_1^z (-1)^j cosΦ(x=j)R, to leading order as well, for J_z>-√(2) 2. 
In this region, the Hamiltonian in the continuum limit is still (<ref>), but now with a different value of the renormalization constant Z^A_μ, since σ_j^xσ_j+1^x+σ_j^yσ_j+1^y+J_zσ_j^zσ_j+1^z=(-1)^j(2c_1^±+c_1^z)cos(Φ/R)(x=j). It follows that the new renormalization constant is Z^A_μ=√(8π^2)(√(B_1^±)+(J_z/2)√(B_1^z)). After dividing by v, this gives rise to the values listed below. We now use these results to study the entanglement. The strategy is the same as the one used in <cit.>. What is called (λ-1) in eq. 43 (in this section, equation numbers from reference <cit.> will be indicated by eq.) is called μ here, while in eq. 44 we have β=1/R, so β^2/4π=g^-1 and h=(1/2)g^-1. Everything up until eq. 52 works as well in our case, although now we have ⟨cos(Φ/R)(w,w̅)⟩_R_p=(2ℓ/p)^2h [v^2τ^2(v^2τ^2+4ℓ^2)]^h(1/p-1)/[(v^2τ^2+4ℓ^2)^1/p-(vτ)^2/p]^2h. The asymptotic behaviors at large distance are now ⟨cos(Φ/R)(w,w̅)⟩_R_p≈ (2ℓ)^-2h, τ>>ℓ, and ⟨cos(Φ/R)(w,w̅)⟩_R_p≈[(1/p)(2ℓ)^-1/pτ^1/p-1]^2h, τ<<ℓ. Like when h=1/2, the resulting integral is still convergent - in fact, thanks to the subtraction coming from the denominator, it turns out to be always convergent! Setting vτ=2ℓtanθ we have, replacing eq. 54:[There is an unfortunate typo in eq. 54 of <cit.>: two of the cosines in the bracket should be sines, as can be seen by setting h=1/2 in (<ref>). Also, notice that μ in <cit.> is equal to μ/π in the present paper.] R_p(μ_R)/R_p(μ_R=0) =1+(2μ_R/π v)(2ℓ)^1-2h∫_0^π/2(dθ/cos^2θ)[p^1-2h(sinθ)^2h(1/p-1)cos^4hθ/[1-(sinθ)^2/p]^2h-p] ≡ 1+(2μ_R/π)(2ℓ)^1-2hI_p. Like in <cit.>, we get the correction to the entanglement from S=(1/6)ln(ℓ/a)+(2μ_R/π v)(2ℓ)^1-2h dI_p/dp|_p=1. Remarkably, the resulting integral differs from the one for h=1/2 by a simple factor, dI_p/dp|_p=1(h)=2h dI_p/dp|_p=1(h=1/2),[It converges at θ=π/2 in all cases.] and we find in the end S=(1/6)ln(ℓ/a)+(1/3)g^-1(μ_R/v)(2ℓ)^1-g^-1, where we used that 2h=g^-1 and dI_p/dp|_p=1(h=1/2)=π/6. As in <cit.>, the result only holds for L=∞. When the ratio ℓ/L is finite, finite-size effects have to be taken into account. Comparing odd and even cases amounts to changing the sign of μ, as discussed in <cit.>. This leads immediately to δ S=(2/3)g^-1(μ_R/v)(2ℓ)^1-g^-1. In the non-interacting case J_z=0 we have g=1 and we find δ S=(2/3)μ, like in eq. 58 of that reference. When the system has finite size L=2ℓ (so Z=2), we find, generalizing eq. 60,[The substitution for the ℓ-dependent factor is 2ℓ→(2L/π)sin(πℓ/L), so 2ℓ→4ℓ/π when L=2ℓ. Otherwise, finite size gives rise to the same modified integral as in <cit.>.] δ S_Z=2 = 0.636779  g^-1(μ_R/v)(4ℓ/π)^1-g^-1 = 0.636779  g^-1(Z_μ/v)μ(4ℓ/π)^1-g^-1. As mentioned above, we now observe that the integrals encountered in this calculation are always convergent, irrespective of the relevance of the perturbation. It follows that (<ref>) should hold as well when the perturbation is relevant, i.e. when J_z>0. The numerics indeed do not see anything happening when J_z=0 is crossed. On the other hand, result (<ref>) only makes sense when the hopping term is the leading (ir)relevant operator. As J_z crosses the value -√(2)/2, the term of (J_z-independent) dimension 2 dominates, and thus (<ref>) ceases to be valid.
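Evaluating the prediction only requires g, v and the renormalization constant. A helper of ours (Z_μ must be supplied externally, e.g. from the table discussed below; it is not computed here):

import numpy as np

def delta_S_prediction(mu, ell, Jz, Z_mu):
    # delta S_{Z=2} = 0.636779 * g^{-1} * (Z_mu/v) * mu * (4*ell/pi)^(1 - 1/g)
    a = np.arccos(Jz)
    g = 2.0 - 2.0 * a / np.pi                       # Luttinger parameter
    v = (np.pi / 2.0) * np.sqrt(1.0 - Jz**2) / a    # sound velocity
    return 0.636779 / g * (Z_mu / v) * mu * (4.0 * ell / np.pi)**(1.0 - 1.0 / g)

# at Jz = 0 (g = v = Z_mu = 1) this reduces to 0.636779*mu
print(delta_S_prediction(mu=0.01, ell=100, Jz=0.0, Z_mu=1.0))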
§.§ Comparison with numerics The numerical analysis is a little tricky because, even in the absence of a local perturbation, the entanglement is known to already exhibit an alternating dependency upon ℓ <cit.>, leading to δ S_Z=2(μ=0)= a(g) ℓ^-g^-1. This correction is well identified in the literature, and the exponent is usually written as K, the Luttinger constant, with K=π/(2(π-arccos J_z))=1/g. It is due to the leading irrelevant bulk oscillating term in the chain. We have first checked the result (<ref>), as illustrated in Fig. <ref>(a). To leading order, we expect the correction (<ref>) and the correction induced by the μ≠ 0 perturbation to simply add up, so we should have δ S_Z=2(μ)-δ S_Z=2(μ=0)=0.636779  g^-1(Z_μ/v)μ(4ℓ/π)^1-g^-1. We have therefore studied in what follows the quantity δ S_Z=2(μ)-δ S_Z=2(μ=0). Measures of the exponent, obtained by plotting ln[δ S_Z=2(μ)-δ S_Z=2(μ=0)] against lnℓ for small values of μ, give excellent, μ-independent results, as illustrated in Fig. <ref>(b). To obtain results for the slope itself, we fit δ S_Z=2(μ)-δ S_Z=2(μ=0) for a series of values of J_z - it turns out the relevant region involves values of μ as small as 5·10^-4. An example of such a fit is given in Fig. <ref>(a). The resulting slopes are then compared with the best known numerical values in Figs. <ref>(b) and <ref>(c) for the two possible Hamiltonians. Note the excellent agreement both in the relevant and irrelevant case, as long as J_z is not too close to ± 1. §.§ Determination of the renormalization factors Instead of relying on the literature, we can of course determine Z_μ directly by studying the energy of the model. Indeed, using Hamiltonian (<ref>) together with the result ⟨cos(Φ/R)⟩=1/(2ℓ)^2h (recall 2h=g^-1) leads immediately to the term of O(1) in the energy, E^(1)=-(μ_R/π)(-1)^ℓ/(2ℓ)^2h, and thus to the difference between even and odd δ E=-(2μ_R/π) 1/(2ℓ)^2h. For a finite system we obtain the corresponding shift using a conformal mapping. If the total system is of length L, we have then ⟨cos(Φ/R)(ℓ)⟩=(π/L)^2h 1/[2sin(πℓ/L)]^2h and thus, for our case Z=2, we find finally δ E=-(2μ_R/π)(π/4ℓ)^2h. From the definition μ_R=Z_μμ, a numerical determination of δ E gives access to the renormalization factor (recall Z_μ=1 for J_z=0). Note that this time the sound velocity v does not enter. The values of Z_μ for Hamiltonians (<ref>) (<ref>) determined this way are given below in Table <ref>, and compared with those from <cit.>, with excellent agreement. § SYMMETRIES §.§ Symmetries between μ and -μ, λ and -λ The entanglement entropy is expected to possess several interesting symmetries in the scaling limit. The first such symmetry can be seen from the point of view of the perturbed homogeneous chain, where we have seen in section <ref> that in the field theory Hamiltonian (<ref>), translation of the cut by one site amounts to μ_R→ -μ_R. Of course this is true only to first order in μ_R, but since the results in the scaling limit are valid in the limit μ_R→ 0, ℓ→∞ with μℓ^1-g^-1 finite, it is only this order that matters. Hence we conclude that, in the scaling limit: δ S(μ)=-δ S(-μ). The second symmetry is δ S(λ)=δ S(-λ). This follows from the discussion of the perturbation expansion around the split fixed point, and the fact that to all orders δ S was found to be an even function of λ. The relationships (<ref>) and (<ref>) are illustrated in Fig. <ref> and Fig. <ref> respectively. Note that, as emphasized above, the relationships are only expected to hold in the scaling limit, μ→ 0 (resp.
λ→ 0) and L→∞ with the appropriate combinations Θ_B (resp. T_B) finite. As commented earlier, the spread of the curves in the IR is due to the difficulty of reaching the scaling limit while being technically limited to relatively small values of L. §.§ Symmetries between λ and 1/λ To see the third symmetry, imagine we consider a chain with λ>>1 i.e. with a coupling between sites ℓ and ℓ+1 greatly enhanced. To facilitate the discussion we introduce a slightly more general Hamiltonian H_ℓ =σ_ℓ-1^xσ_ℓ^x+σ_ℓ-1^yσ_ℓ^y+J_z σ_ℓ-1^zσ_ℓ^z+ λ(σ_ℓ^xσ_ℓ+1^x+σ_ℓ^yσ_ℓ+1^y+Δσ_ℓ^zσ_ℓ+1^z) + σ_ℓ+1^xσ_ℓ+2^x+σ_ℓ+1^yσ_ℓ+2^y+J_z σ_ℓ+1^zσ_ℓ+2^z where we have allowed for the coupling with amplitude λ to have a different anisotropy Δ. In the limit λ>>1, the spins σ⃗_ℓ and σ⃗_ℓ+1 are almost paired into a singlet.The Hamiltonian can then be replaced in this limit , by its first-order perturbation theory approximation H_ℓ↦ -E_s+∑_t_i |⟨ s|H_ℓ|t_i⟩|^2 E_s-E_t_i where the energies of the term coupling spins ℓ and ℓ+1 are E_s,E_t_i respectively. For the singlet we have E_s=-λ(1 2+Δ 4) while the “triplet” now splits into states (for spins ℓ,ℓ+1) |++⟩ and |–⟩ with energies E_t_1=E_t_3=λΔ 4 and |+-⟩-|-+⟩√(2) with E_t_2=λ(1 2-Δ 4). A straightforward calculation then gives, up to an irrelevant additional constant H_ℓ↦1λ(σ_ℓ-1^+σ_ℓ+2^-+ σ_ℓ-1^-σ_ℓ+2^+ 1+Δ+Δ^2 σ_ℓ-1^zσ_ℓ+2^z) Observe that, while initially the modified bond was between sites ℓ,ℓ+1, after this renormalization it is now between sites ℓ-1 and ℓ+2 which, after a relabelling starting as usual from the left, becomes between sites ℓ-1 and ℓ. Hence we have exchanged the odd and even impurity problems. Notice also that the anisotropy of the Hamiltonian is not preserved in general. This only occurs in the XXX case when Δ=1, for which we recover an XXX Hamiltonian, and the coupling has gone from λ to 1 2λ and in the XX case when Δ=0 for which we recover an XX Hamiltonian but the coupling has gone from λ to 1λ. The duality is best seen for Hamiltonian H_B (<ref>) which corresponds to Δ=0. In this case we expect, in the scaling limit δ S(λ)=-δ S(1λ) In general, since we have argued and checked that dependency of the δ S curve on the exact form of the modified Hamiltonian can entirely be absorbed into a redefinition of T_B, we expect the results for the problem and its dual to be identical (up to the exchange of odd and even) in the scaling limit. Moreover, in the case of Hamiltonians H^A and H^B, the redefinition of T_B can be obtained simply by the substitution λ→1λ (1+Δ). This relationship is illustrated in <ref>, while the equation<ref> is illustrated in <ref>. § CONCLUSIONS While this problem originated in the context of physics near a boundary, the parity effects we unveiled occur as well in the bulk. Consider indeed a periodic system of length L and a sub-interval of length ℓ connected on both sides to the rest of the chain by modified bonds as in (<ref>,<ref>) - this is illustrated in Fig. <ref> below. The physics (flow towards a healed or split chain) is expected to be the same as near a boundary. We find that the entanglement for subsystems of even or odd length (the figure corresponds to the latter case) also differs by terms of O(1). The details of these terms are a bit intricate, and we plan to discuss them elsewhere. For now we contend ourselves with the following observation. 
In the non-interacting case J_z=0 and for two slightly different couplings λ and rλ with 0<r<1, the difference δ S at large L coincides, even in this periodic geometry, with the curve for the open geometry with a single modified bond λ: in other words, the weakest of the two modified bonds effectively behaves as if it were “opening" the system. While it is easy to understand this qualitatively (the system prefers to form valence bonds over the strongest bond), proving it analytically might be more difficult. § ACKNOWLEDGEMENTS We thank H. Schloemer for related collaborations. HS thanks P. Calabrese and L. Capizzi for discussions. HS work was supported by the French Agence Nationale de la Recherche (ANR) under grant ANR-21- CE40-0003 (project CONFICA).
http://arxiv.org/abs/2405.09779v1
20240516025221
Integrating Uncertainty-Aware Human Motion Prediction into Graph-Based Manipulator Motion Planning
[ "Wansong Liu", "Kareem Eltouny", "Sibo Tian", "Xiao Liang", "Minghui Zheng" ]
cs.RO
[ "cs.RO" ]
Integrating Uncertainty-Aware Human Motion Prediction into Graph-Based Manipulator Motion Planning Wansong Liu^1, Kareem Eltouny^2, Sibo Tian^3, Xiao Liang^4, Minghui Zheng^3 This work was supported by the USA National Science Foundation (Grants: 2026533/2422826 and 2132923/2422640). This work involved human subjects or animals in its research. The authors confirm that all human/animal subject research procedures and protocols are exempt from review board approval ^1 Wansong Liu is with the Mechanical and Aerospace Engineering Department, University at Buffalo, Buffalo, NY 14260, USA. Email: wansongl@buffalo.edu. ^2 Kareem Eltouny is with the Civil, Structural and Environmental Engineering Department, University at Buffalo, Buffalo, NY14260, USA. Email: keltouny@buffalo.edu. ^3 Sibo Tian and Minghui Zheng are with the J. Mike Walker '66 Department of Mechanical Engineering, Texas A&M University, College Station, TX 77840, USA. Email: {sibotian, mhzheng}@tamu.edu. ^4 Xiao Liang is with the Zachry Department of Civil & Environmental Engineering, Texas A&M University, College Station, TX 77840, USA. Email: xliang@tamu.edu. Correspondence to Minghui Zheng and Xiao Liang. May 20, 2024 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== There has been a growing utilization of industrial robots as complementary collaborators for human workers in re-manufacturing sites. Such a human-robot collaboration (HRC) aims to assist human workers in improving the flexibility and efficiency of labor-intensive tasks. In this paper, we propose a human-aware motion planning framework for HRC to effectively compute collision-free motions for manipulators when conducting collaborative tasks with humans. We employ a neural human motion prediction model to enable proactive planning for manipulators. Particularly, rather than blindly trusting and utilizing predicted human trajectories in the manipulator planning, we quantify uncertainties of the neural prediction model to further ensure human safety. Moreover, we integrate the uncertainty-aware prediction into a graph that captures key workspace elements and illustrates their interconnections. Then a graph neural network is leveraged to operate on the constructed graph. Consequently, robot motion planning considers both the dependencies among all the elements in the workspace and the potential influence of future movements of human workers. 
We experimentally validate the proposed planning framework using a 6-degree-of-freedom manipulator in a shared workspace where a human is performing disassembling tasks. The results demonstrate the benefits of our approach in terms of improving the smoothness and safety of HRC. A brief video introduction of this work is available at the following link: https://zh.engr.tamu.edu/wp-content/uploads/sites/310/2024/05/Integrate_prediction_into_planning.mp4. Motion planning, Human motion prediction, Graph neural network § INTRODUCTION To facilitate efficient and safe disassembly, robots are usually employed as complementary collaborators to work closely with human operators <cit.>. In such close collaboration, robots are required to generate collision-free motions and adjust their motions efficiently. The planning problem turns out to be complicated when human operators' behaviors are involved, since real-time responsiveness necessitates quick motion generation in the constantly changing configuration space. Manipulators must respond adaptively to human operators' actions. Predicting human motions allows collaborative robots to proactively plan motions, ensuring a safe and seamless human-robot collaboration (HRC) <cit.>. Integrating human motion prediction into robotic motion planning poses two technical challenges. One is that human motion is inherently complex and stochastic, which requires robust prediction models to handle the uncertainties arising from human behavior variations or unexpected actions <cit.>. Addressing such uncertainties holds particular importance in terms of ensuring safety in HRC, as unreliable predictions can potentially result in the planning of a dangerous trajectory. The second challenge is that the prediction as well as the uncertainty introduce additional computational complexity for generating robot motions <cit.>. To achieve real-time responsiveness in HRC, the planning algorithms must integrate the prediction and the uncertainty in an efficient way and find collision-free motions within tight time constraints. In this paper, we propose a motion planning framework to enhance the seamless and safe collaboration between humans and manipulators in disassembly processes. Fig. <ref> shows the overview of the framework. The framework comprises two modules. The first one is the uncertainty-aware human motion prediction. It seeks to provide future trajectories of human operators and the uncertainty of the network-based prediction model for the purpose of safe manipulator motion planning. The second module is a graph-based neural motion planner that incorporates uncertainty-aware prediction and generates collision-free manipulator motions. We transform the collaboration workspace into a graph representation that encapsulates the relationships and dependencies among the objects within the workspace. The uncertainty-aware predictions are represented as nodes and edges, which are intuitively integrated into the constructed graph and interconnected with other objects. The main contributions of this work are summarized as follows: * We present a framework for HRC that naturally integrates the motion planning of high-DOF robot manipulators and uncertainty-aware human motion prediction, using graph neural networks. * The inherent uncertainty of the human motion prediction model is incorporated into the robot motion planning intuitively and conveniently (i.e., using nodes and edges) to enhance safety in HRC.
* We conduct comprehensive experimental studies within a collaborative disassembly scenario to validate the performance of our model. The proposed planning framework showcases the benefits in terms of an earlier robot response and near-optimal trajectory planning when a sudden human intervention occurs. § RELATED WORKS §.§ Human motion prediction Traditional statistics-based models have been utilized to learn the probability distribution of human motion, enabling them to reason about possible future human trajectories based on historical data, such as the hidden Markov model <cit.> and the Gaussian regression model <cit.>. Although these probabilistic methods are suited for capturing the stochastic nature of human motion, their performance tends to be less satisfactory when dealing with intricate motion patterns. To predict complex human motion, recurrent neural networks (RNNs) have been widely used to obtain deterministic future human trajectories <cit.>. In addition, graph convolutional networks <cit.> and Transformers <cit.> have recently become popular in human motion prediction. These works show significant improvement in capturing the spatial and temporal dependencies of human motion data. Instead of blindly trusting the predicted human motions, existing studies quantified the uncertainty of the predicted human motions using statistics-based prediction models, e.g., <cit.>. These models can naturally predict trajectories in a probabilistic way, handling irregular human movements in HRC. While network-based models typically provide deterministic predictions, some studies have developed techniques to measure the uncertainty inherent in these models and to provide the confidence level associated with the model's outputs. For example, Cheng et al. <cit.> developed a parameter-adaptation-based neural network to provide uncertainty bounds of the prediction in real time. Zhang et al. <cit.> employed conditional variational autoencoders (CVAEs) to sample multiple saliency maps from the latent space, ultimately obtaining an accurate saliency map using the quantified uncertainty. Eltouny et al. <cit.> trained an ensemble of motion prediction network models, and estimated the uncertainty based on the aggregation of diverse motion predictions. §.§ Robot motion planning One of the most important problems in HRC is to plan collision-free robot motions in dynamic workspaces. The computational expense imposed by the curse of dimensionality limits the application of traditional grid-based methods, such as the A* algorithm <cit.>. Random-sampling-based methods such as the rapidly exploring random tree (RRT) <cit.> have demonstrated effectiveness in high-dimensional planning problems. Furthermore, to ensure the optimality of the robot trajectories, asymptotically optimal sampling-based methods such as batch-informed trees (BIT*) <cit.> and fast marching trees (FMT*) <cit.>, as well as optimization-based methods <cit.>, have been developed. Nowadays, network-based motion planners have been widely used to generate near-optimal robot trajectories with low computational cost. For example, the work in <cit.> leveraged a network-based model to imitate expert robot trajectories generated from oracle planners, providing near-optimal robot motions. Furthermore, rather than imitating expert trajectories, the work in <cit.> employed reinforcement learning to obtain the optimal policies to generate robot motions.
Despite the advantages demonstrated by such planners, they may struggle to capture the intrinsic connectivity among objects within the workspace. Rather than blindly preprocessing all data together, the graph representation proposed in <cit.> highlights both the local and global dependencies of objects in the workspace when generating robot motions, which, however, does not explicitly consider human motion prediction in the motion planner. Incorporating human motion prediction into robotic planning can improve the efficiency of HRC. Cheng et al. <cit.> included the task recognition and trajectory prediction of human workers into HRC systems to significantly improve efficiency. Unhelkar et al. <cit.> proposed a planning algorithm that leverages the prediction of nearby humans to efficiently execute collaborative assembly tasks. Moreover, incorporating the prediction is beneficial for generating collision-free robot trajectories proactively, thus expanding the safety margin of the collaboration. Park et al. <cit.> used the predicted human motion to compute collision probabilities for safe motion planning. Kratzer et al. <cit.> proposed a prediction framework that enables the mobile robot to avoid the possible area occupied by a human partner. Zheng et al. <cit.> developed an encoder-decoder network to predict the human hand trajectories, and integrated the avoidance of future collisions as constraints into a model predictive control framework, allowing the planning of safe trajectories. § UNCERTAINTY-AWARE HUMAN MOTION PREDICTION In this section, we introduce an RNN-based human motion prediction model and explain how the uncertainty of the model is quantified. §.§ Human motion predictor To predict human trajectories during task execution, we train a prediction model based on an RNN with long short-term memory (LSTM) architecture. Rather than using 3-dimensional position data of arm joints, we employ unit vectors of bones for the network training. In this case, we can ensure a consistent distance between two joints when reconstructing arm poses from bone vectors using the corresponding bone lengths. This choice preserves the anatomical constraints of the arm during the prediction process. The human arm bone vector is denoted as x=(ϕ_1,ϕ_2) ∈ℝ^6, where ϕ_1∈ℝ^3 and ϕ_2∈ℝ^3 are the two bone vectors of the human upper-arm and forearm, respectively. Notably, the positions of the arm joints and the area occupied by the human arm in the workspace can be reconstructed using x and anthropometric parameters p_h, which contain the average bone length and radius of the human arm for each segment. The prediction process is denoted as: X̂=F(X,𝐖) where X=[x_-N+1,...,x_0]∈ℝ^6N is the human motion of the observed N steps, F(∙) indicates the prediction function, and X̂=[x̂_1,...,x̂_m,...,x̂_M]∈ℝ^6M stands for the human motion of the predicted M steps. Additionally, we treat the well-trained network as the prediction model, and 𝐖 indicates the learning weights of the network after training. §.§ Uncertainty quantification using MCDS The previous subsection briefly introduced how a network-based motion predictor can predict human trajectories in upcoming time steps. However, human motions in HRC exhibit a certain level of variability that is influenced by factors such as individual characteristics and worker fatigue. Therefore, it is necessary to explicitly quantify uncertainties, enabling effective consideration of variations in human movements.
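Before turning to the uncertainty quantification, the predictor F introduced above can be made concrete with a minimal sketch, assuming a PyTorch implementation; the hidden width and class name are illustrative assumptions rather than the authors' exact architecture:

```python
import torch
import torch.nn as nn

class BoneVectorPredictor(nn.Module):
    """Sketch of X_hat = F(X, W): maps N observed 6-D bone-vector frames
    to M predicted frames (N = M = 50 and 6 features follow the text;
    the hidden width of 128 is an illustrative assumption)."""
    def __init__(self, n_pred=50, n_feat=6, hidden=128, p_drop=0.1):
        super().__init__()
        self.n_pred, self.n_feat = n_pred, n_feat
        self.lstm = nn.LSTM(n_feat, hidden, num_layers=3,
                            batch_first=True, dropout=p_drop)
        self.drop = nn.Dropout(p_drop)      # kept active at test time for MCDS
        self.head = nn.Linear(hidden, n_pred * n_feat)

    def forward(self, x):                   # x: (batch, n_obs, 6)
        h, _ = self.lstm(x)
        y = self.head(self.drop(h[:, -1]))  # last hidden state -> prediction
        return y.view(-1, self.n_pred, self.n_feat)
```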
We employ Monte Carlo dropout sampling (MCDS) to quantify the uncertainty of our prediction model, considering that it provides accurate uncertainty estimations and only requires training a single model. We aim to obtain the prediction distribution such that the possible future trajectories can be utilized for safe robot motion planning. To this end, we apply dropout to every layer of the prediction model, and treat it as a Bayesian approximation of a Gaussian process model over the prediction model parameters <cit.>. The prediction distribution is calculated using the following equation: p(X̂|X)=p(X̂|X,𝐖) p(𝐖)/p(𝐖|X,X̂) where p(𝐖) is a prior Gaussian distribution over the model parameters, p(X̂|X,𝐖) indicates the likelihood used to capture the prediction process, and p(𝐖|X,X̂) denotes the posterior distribution. Considering that the posterior distribution cannot be evaluated analytically, we use variational inference to approximate it. The approximating distribution q(𝐖) can be brought close to the true posterior distribution by minimizing the Kullback-Leibler (KL) divergence between them: KL(q(𝐖) || p(𝐖|X,X̂)) where q(𝐖) is defined using Bernoulli distributed random variables and some variational parameters that can be optimized. As pointed out in <cit.>, the training of the prediction model would also be beneficial for minimizing the KL divergence term. Therefore, q(𝐖) is optimized after the network training, and sampling from q(𝐖) is equivalent to applying dropout on each layer of the prediction model. Eventually, the predictive variance u at test time is calculated using the following equation: 𝐮≈1/(K-1)[∑_k=1^K F(X,𝐖_k)^T F(X,𝐖_k)-K E^T E] where u=[u_1,...,u_m,...,u_M] indicates the prediction variance, K is the Monte Carlo sampling size, 𝐖_k is drawn from q(𝐖) and denotes the model parameters of the kth sample, and E ≈1/K∑_k=1^K F(X,𝐖_k) represents the predictive mean. The process of obtaining uncertainty-aware human motion prediction is illustrated in Fig. <ref>. The observed human motion is propagated into a well-trained LSTM model. MCDS is employed to generate different possible configurations of the network parameters, and multiple prediction samples are obtained. Finally, the uncertainty-aware prediction includes multiple possible human arm poses at each time step, and is denoted as X̂^*=[x̂_1^*,...,x̂_m^*,...,x̂_M^*]∈ℝ^6M× K. Notably, we use * to indicate that there are multiple possible arm poses at a predicted time instance. These poses fit a normal distribution x̂^*∼𝒩(E,u), where E represents the mean and u indicates the predictive variance. § GRAPH-BASED MOTION PLANNER This section presents 1) explanations of converting the collaboration workspace and the uncertainty-aware prediction to a graph representation; 2) details of how to leverage a GNN to operate on the constructed graph and generate near-optimal robot motions. §.§ Graph representation: illustrating features and connections of objects in the workspace Rather than simply imitating reference motions like traditional neural motion planners, our approach aims to emphasize the dependencies of each key object within the workspace since the object dependencies significantly influence the planning of robot motions. To highlight such dependencies in the planning, we use nodes to represent the essential objects in the collaboration workspace and connect them using edges. As shown in Fig. <ref>, the robot's current state is denoted using six blue nodes, corresponding to the six joints of the robot.
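As a brief aside before the graph construction is detailed further, the MCDS procedure described in the previous section can be sketched in a few lines; this is a minimal illustration assuming a PyTorch model whose dropout layers are kept active at inference, with K and the tensor shapes being illustrative:

```python
import torch

def mc_dropout_predict(model, x, K=5):
    """Run K stochastic forward passes with dropout active and return the
    samples, their mean E, and the predictive variance u. Sampling the
    dropout masks plays the role of drawing W_k from q(W)."""
    model.train()                            # keep dropout stochastic at test time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(K)])  # (K, batch, M, 6)
    E = samples.mean(dim=0)                  # predictive mean
    u = samples.var(dim=0, unbiased=True)    # predictive variance, 1/(K-1) form
    return samples, E, u
```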
The same representation strategy is applied to the robot's goal state and the obstacle states. To simplify the illustration, we respectively use dots A, B, and C to represent the robot's current state, the robot's goal state, and the obstacle's state in the graph of Fig. <ref>. Furthermore, the uncertainty-aware prediction contains multiple future human arm joint positions, which are represented as nodes. In summary, we use v to represent a node of the graph, and V=[v_1,...,v_t,...,v_T] stands for the T essential nodes in the collaboration workspace. §.§ Graph operation: node embedding based on neighbors In the previous subsection, we employed a graphical representation to efficiently illustrate the objects in the collaborative workspace and showcase their connections. To generate collision-free robot motions, we first employ an oracle planner that generates expert robot trajectories in collaborative workspaces to obtain the training data, and then leverage a GNN to operate on the constructed graph and train the network to generate near-optimal motions. The graph is described by two matrices: a feature matrix and an adjacency matrix. The feature matrix H describes features of the objects in the workspace, such as the manipulator joint values, the current and future arm positions, and the static obstacles' positions. The adjacency matrix A indicates the relationships between all nodes. The layers of the GNN update the features of each node based on the adjacency matrix A. The node embedding process is denoted as: h^(l)_v_t=f_update( θ^(l),h^(l-1)_v_t,{h_v_j^(l-1)}_j∈𝒩_v_t) where h^(l)_v_t denotes the embedding of node v_t in the layer l, θ denotes the learning weights, and 𝒩_v_t indicates the neighbors of node v_t. Fig. <ref> illustrates the motion generation using the GNN. The red dot D indicates the uncertainty-aware predictions. The GNN input H^(0)=[h_v_1,...,h_v_t,...,h_v_T] is initialized by the node features of the overall graph. All nodes update their embeddings simultaneously in the layer-wise propagation of the GNN using the following equation: H^(l)=ReLU(D̃^-1/2ÃD̃^-1/2 H^(l-1)Θ^(l-1)) where ReLU is a nonlinear activation function, H^(l) indicates the updated feature matrix in the layer l, Ã=A+I is the adjacency matrix with an added identity matrix, D̃ stands for the degree matrix of Ã, and Θ^(l-1) is the learning weights in matrix form. Eventually, the output layer takes all updated node embeddings into account and provides the next robot configuration: ĉ=O_θ(H^(l)) where O_θ is the output function, and ĉ is the next robot configuration towards the goal region. §.§ GNN training The previous subsection described how our neural planner considers various factors for generating the next robot configuration ĉ towards the goal position. To ensure that the generated motion is near-optimal, the GNN needs to learn optimal paths generated from an oracle planner. The optimal robot path connecting given start and goal configurations is denoted as σ=[c_1,...,c_i,...c_I] ∈ℝ^I× d, where d is the dimensionality of the robot configuration space. Utilizing the one-step look-ahead planning strategy outlined in <cit.>, we define the training loss function for our neural planner as follows: l_planner=1/N_z∑_z=1^N_z∑_i=1^I_z-1‖c_z,i-ĉ_z,i‖^2 where N_z is the total number of robot paths in a batch, and I_z is the length of the zth path. §.§ Bi-directional planning After the network training, the learning weights Θ in Eq. (<ref>) are well-tuned. We use the well-trained GNN to perform real-time robot motion planning.
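As an illustration of the layer-wise propagation rule above, the following NumPy sketch applies one graph-convolution step; the node and feature counts are taken from the text, while the random inputs and the output feature width are placeholders:

```python
import numpy as np

def gcn_layer(H, A, Theta):
    """One graph-convolution step H' = ReLU(D^-1/2 (A + I) D^-1/2 H Theta),
    matching the propagation rule described above."""
    A_tilde = A + np.eye(A.shape[0])           # add self-loops
    d = A_tilde.sum(axis=1)                    # node degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    H_next = D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ Theta
    return np.maximum(H_next, 0.0)             # ReLU

# Illustrative shapes from the text: 55 nodes, 6 node features.
rng = np.random.default_rng(0)
H0 = rng.random((55, 6))                       # feature matrix H^(0)
A = (rng.random((55, 55)) > 0.8).astype(float)
A = np.maximum(A, A.T)                         # symmetrise the adjacency
H1 = gcn_layer(H0, A, rng.random((6, 16)))     # updated embeddings H^(1)
```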
Based on the one-step-ahead planning strategy, the planned robot configurations are iteratively used as new inputs of our neural planner until a complete path is found. Such planning heavily depends on the previously generated robot configuration, which may potentially accumulate errors throughout the planning process and result in the robot deviating from the goal region. Therefore, we adopt a bi-directional planning strategy to enhance the robustness of online planning. The bi-directional planning starts by initiating two sub-planning branches simultaneously, originating from the start and goal configurations, respectively. Then, we generate linear interpolations to directly connect the two branches in each planning iteration. Additionally, the two branches grow iteratively until the planned robot configurations and the generated interpolations are both collision-free. Eventually, the two sub-planning branches are stitched together to form a complete robot path. § EXPERIMENTAL VALIDATIONS §.§ Experiment setup §.§.§ Experimental Setting Fig. <ref> shows the experimental setting of two disassembly scenarios. We use the Vicon motion capture system to track human motions. When constructing the human arm model for collision-checking, we introduce an additional radius to the human arm. This precautionary measure establishes a safety margin between the detected collision and the actual collision. To ensure safety and prevent potential physical injuries, the robot needs to consider both the tracked and predicted human motions. Note that in the collaboration scenario of this work, the human worker grabs tools located on the workstation while the robot transports disassembled components above the workstation. The physical barrier of the workstation effectively separates the entire human body from the robotic arm. Consequently, the human arm and the robot present the highest likelihood of collisions. Therefore, tracking and predicting the positions of the forearm and upper-arm suffice to ensure the safety of the collaborative task as outlined in this work. However, this is inadequate for ensuring safety in broader collaboration scenarios. Tracking and predicting the movement of the whole human body instead of just the forearm and upper-arm are still necessary for diverse collaborative modes. §.§.§ Data acquisition We collect 120 human arm trajectories at a frequency of 25 Hz for each type of human motion shown in Fig. <ref>. One human worker is involved in the data collection and is required to perform actions naturally throughout task executions, without deliberate control over the speed. Therefore, while the action speed exhibits some variability, it remains within a reasonable range. These trajectories are then converted to bone vectors for network training. We use 70% of the data to train the prediction model. Another 15% is employed for validation, while the remaining portion is reserved for testing. The horizons of the observation and prediction are both 2 seconds. §.§.§ Networks The RNN-based human motion prediction model is based on an LSTM structure. It consists of three LSTM layers and one dense layer serving as the output layer. Each LSTM layer is followed by a dropout layer with a dropout probability of 10%. The input dimension of the prediction model is 50×6, where 50 indicates the number of observation steps and 6 the number of features of the arm pose.
The output dimension of the model is 50×5×6, where 50 indicates the number of prediction steps, 5 the sampling size, and 6 the number of features of the arm pose. The GNN consists of five graph convolutional layers employing Rectified Linear Unit activation functions. Subsequently, one global sum pooling layer is applied to compute the sum of node features, and one dense layer is employed as the output layer. The input of the GNN is a constructed graph described with a 55×6 feature matrix and a 55×55 adjacency matrix, where 55 indicates the total number of nodes and 6 the number of node features. §.§ Experimental test results §.§.§ Validation of the neural planner To evaluate the effectiveness of the graph-based planner, we create a total of 12 different workspaces. We employ RRT* <cit.> from the open motion planning library (OMPL) as the oracle planner to generate optimal robot motions in the motion planning platform MoveIt. Each static workspace includes 800 planning scenarios with random pairs of start and goal configurations. In static workspaces, we conducted comparative studies between our approach and three other planners from OMPL, which are RRT* <cit.>, RRT <cit.>, and the advanced planner bi-directional FMT* (BFMT* <cit.>). A comparison study is provided in Table <ref> <cit.>, from which the graph planner demonstrates superior performance compared to the other three planners in terms of path length and planning time, and it achieves a promising level of success in generating collision-free motions. §.§.§ Uncertainty-aware prediction We select different values for the Monte Carlo sampling size K to obtain the quantified uncertainty. The corresponding results are illustrated in Table <ref>. The “Elbow/m” and “Wrist/m” columns present the standard deviation in predictions relative to the mean predicted joint position. Small values of “Elbow/m” and “Wrist/m” indicate greater consistency among multiple arm poses at the predicted time instance, while larger values signify greater variability. Note that there are no target or minimum required values for the quantified uncertainties. A large value signifies increased variability in arm poses at the predicted time instance. This may broaden the scope of possible arm motions for enhanced robot motion planning. However, due to the close proximity between the potential human arm poses and the robot, it becomes more difficult for the robot to find motions that avoid potential collisions, and the prolonged inference time increases the risk of human-robot contact. Therefore, the selection of a suitable K is a trade-off problem. As observed in Table <ref>, increasing K leads to a substantial increase in the inference time but has a negligible impact on the quantified uncertainties of the elbow and wrist positions. Therefore, we select K=5 to quantify uncertainties and incorporate the quantified uncertainties into safe robot motion planning. This decision balances the need for comprehensive uncertainty assessment with inference time. §.§.§ Comparison between predictive error and uncertainty We also define the predictive error as the difference between the mean prediction and the ground truth, and compare the quantified uncertainties and predictive errors in terms of the human elbow and wrist joint positions in Fig. <ref>. It includes 200 arm poses selected from the test dataset. The predictive error and quantified uncertainty show a high correlation.
Importantly, when human workers are conducting collaborative tasks, the predictive error cannot be utilized in the robot planning since it is not feasible to obtain future ground truth at the current time step. Therefore, the quantified uncertainty can be used as an alternative source of information for ensuring human safety. §.§.§ Benefits of integrating predictions To handle the planning in dynamic workspaces, we simulate a collaborative disassembly scenario and utilize the RRT* planner to generate a set of 12000 collision-free motions for the manipulator in one workspace that involved the current and future human motions. Fig. <ref> and Fig. <ref> illustrate the experimental tests based on human motions A and B, respectively. Three cases are considered in the experimental tests: (1) planning without the human arm, (2) planning without human prediction, and (3) planning with human prediction. The first case is the planning without taking into account the current position of the human arm, resulting in direct contact between the manipulator and the human. Such a case highlights the necessity of real-time re-planning in HRC scenarios. In the second case, the planning considers the current position of the human arm. When the arm is reaching and grabbing components as shown in Fig. <ref> (III) and Fig. <ref> (III), the neural planner is capable of promptly re-planning safe motions to avoid collisions. The third case is the planning with uncertainty-aware prediction. Multiple future arm poses are used to check collisions continuously. The manipulator plans motions at an early stage of the task execution since it detects potential collisions according to the predictions. In general, experimental tests show that the robot exhibits abrupt changes in motion when only considering the current human arm positions. On the other hand, by incorporating future human arm poses into the planning process, the robot demonstrates smoother motions in terms of an earlier response and a smoother path for the end-effector. Quantitative results of smoothness evaluation in terms of velocity profile are provided in Table <ref>. We calculate the acceleration and jerk, and take averages for each step and each robot joint. Without considering the human operator, the robot planner can find a smooth trajectory in the static environment; however, it can lead to collisions since the planner is not human-aware. When considering the human in the environment during the planning process, the robot motion is affected by the movements of the human operator. However, the smoothness can be improved by incorporating human motion prediction in the planning process, compared to the model without human motion prediction. § CONCLUSIONS AND FUTURE WORK This paper presented a graph-based framework that seamlessly incorporates uncertainty-aware human motion prediction into robotic motion planning. The human motion is predicted using an RNN-based prediction model, and the uncertainty of the prediction model is explicitly quantified using MCDS. The uncertainty-aware prediction is effectively integrated into a graph that represents the collaboration workspace. The manipulator motions are planned based on the constructed graph, and the uncertainty-aware prediction is utilized to expand the safety margin during the planning. The results of the experiments demonstrate that the proposed planning framework can enhance the smoothness and safety of collaborative disassembly processes.
To further enhance the safety of collaborations, future studies will focus on establishing a target inference time during the collaboration. The inference time can be determined through iterative task execution trials, where the human worker gradually increases the moving speed until contact occurs. Additionally, given our heuristic approach of setting an extra radius as a safety distance, establishing the minimum safety distance can also be achieved by targeting this inference time. Moreover, our forthcoming studies will involve reconstructing feasible arm poses at each predicted time instance, empirically collecting uncertainties, and presenting the results in a statistically rigorous manner to better demonstrate the validity of our approach. Furthermore, future studies will evaluate the smoothness of the robot motions, such as the execution velocity and acceleration, considering that path length alone may not offer a comprehensive measure of optimality.
http://arxiv.org/abs/2405.09303v1
20240515124821
Probing the broad line region geometry and size of the gravitationally lensed quasar Q2237+0305 with microlensing time series
[ "Đ. V. Savić", "D. Hutsemékers", "D. Sluse" ]
astro-ph.GA
[ "astro-ph.GA" ]
Microlensing time series in Q2237+0305 Institut d’Astrophysique et de Géophysique, Université de Liège, Allée du 6 Août 19c, 4000 Liège, Belgium. dsavic@uliege.be Astronomical Observatory Belgrade, Volgina 7, 11060 Belgrade, Serbia Lensed quasars are powerful cosmic laboratories; they are used to simultaneously probe various astrophysical phenomena. Microlensing by stars within the distant galaxies that act as strong gravitational lenses of multiply imaged quasars provides a unique and direct measurement of the lensed quasar internal structure. Microlensing of the continuum emitting region as well as the broad-line region (BLR) is well characterized by four observable indices, μ^cont, μ^BLR, WCI (wing-core), and RBI (red-blue), measured directly from the spectra. During the 2004–2007 monitoring period, image A of the quadruply lensed system Q2237+0305 underwent a strong microlensing amplification, while image D remained unaffected. We used 35 epochs of archival spectrophotometric data of Q2237+0305 obtained with the Very Large Telescope of the European Southern Observatory to develop an independent microlensing method for estimating the geometry and size of the BLR. We measured the index time series for the Civ line and the continuum emission at 1450 Å. We built a library of simulated microlensing index time series that reproduce the observed time series based on three representative BLR models: Keplerian disk (KD), polar wind (PW), and equatorial wind (EW). After sampling the model parameter space, we find that KD is the predominant model, while PW and EW are less likely. We infer that the system is viewed at an intermediate viewing angle i∼35°, and we estimate the most likely Civ BLR half-light radius r_1/2=51±23 light days. Our results are in good agreement with previous findings in the literature and extend the validity of the index-based approach to the temporal domain. Probing the broad line region geometry and size of the gravitationally lensed quasar Q2237+0305 with microlensing time series Đ. V. Savić 1,2 D. Hutsemékers1 D. Sluse1 Received May 20, 2024; ============================================================= § INTRODUCTION Gravitational lenses are important phenomena used to study cosmology and galaxy formation and evolution. The magnification of distant faint objects allows us to constrain the mass of the foreground lenses based on the fluxes and the positions of the images. Owing to their intrinsic variability <cit.>, lensed quasars have been turned into powerful cosmographic probes by means of the measurement of time delays between the different images <cit.>. Extragalactic gravitational microlensing is a natural phenomenon that occurs when the light emitted by a quasar is bent and focused by the gravity of a single star in the foreground galaxy, causing a temporary increase in the quasar's brightness <cit.>. When detected, microlensing can be exploited to constrain the parameters of the quasar emitting regions <cit.>. Type 1 quasars are characterized by prominent broad emission lines (BELs) in their optical spectra <cit.>. The BELs are emitted from the broad-line region (BLR), which consists of ionized gas situated in the vicinity of the supermassive black hole <cit.>. In a lensed quasar, the BLR projected size is typically a few times larger than the microlensing Einstein radius of the deflecting stars.
For that reason, subregions of the BLR may be differently affected by microlensing, such that deformations of the emission line profiles are observed <cit.>. The majority of quasars remain spatially unresolved. Although interferometry is able to probe the subparsec-scale regions in the most luminous and closest objects <cit.>, such observations will remain impossible for the vast majority of quasars that are observed at high redshifts. The alternative method commonly used to probe the BLR structure, velocity-resolved reverberation mapping (RM) <cit.>, becomes considerably more telescope-time intensive for active galactic nuclei (AGNs) at redshift z>1. Microlensing nicely complements other methods as it is independent of the intrinsic variability of the source. One of the key difficulties that arise when microlensing is applied to BLR studies is that the microlensing events are rare and unpredictable, and typically last for several years <cit.>. As a result, it is challenging to gather multi-epoch spectroscopic data covering a complete event. Single-band photometric data, acquired for time delay cosmography, are now available for several tens of systems <cit.>, but multi-epoch spectrophotometry is rarer. Recently, <cit.> have proposed a method to constrain the BLR structure based on the study of microlensing-induced line deformations. For that purpose, they characterize the effect of microlensing through four measurable quantities: μ^cont, the magnification of the continuum underlying the emission line; μ^BLR, the total magnification of the broad emission line; and WCI and RBI, the indices sensitive to wing-core and red-blue line profile distortions. <cit.> developed a probabilistic Bayesian framework through which we are able to constrain the geometry, inclination, and effective size of the BLR by comparing those quantities to similar ones derived from simulated microlensed line profiles. The methodology introduced by <cit.> used indices as measured at a single epoch; however, it is as yet unclear whether this strategy is free of biases when studying a single object. Potential biases on microlensing size inference from single-epoch analyses have been suggested for accretion disk temperature profile studies <cit.>. While these results may not be directly applicable to BLR analyses that simultaneously constrain the accretion disk and the BLR size, it is desirable to investigate whether the modeling of the BLR with multiple-epoch measurements agrees with single-epoch modeling. In addition, the time variation of microlensing is equivalent to a scan of the BLR, potentially providing fine-grained constraints on the BLR structure, enabling us to break degeneracies between BLR models. For those reasons, we expanded the previous modeling scheme to the analysis of multiple epochs of the same system. As a first test case of this new framework, we selected the well-studied quadruply lensed system Q2237+0305. Past observations of images A–D revealed that during the monitoring period, image D remained basically unaffected by microlensing while image A was subject to high magnification <cit.>. In particular, we used archival data to study the geometry and size of the Civ emitting region. In Section <ref> we describe the observations and explain how the indices used for the microlensing analysis are derived. Section <ref> describes our microlensing models, and Section <ref> our main results. Finally, Section <ref> summarizes our key findings and lists our conclusions.
§ OBSERVATIONAL DATA The gravitational lens system Q2237+0305 (also known as the Einstein cross or Huchra's lens) consists of four quasar images of similar brightness that are separated by ∼1.6″. The quasar is at redshift z_s=1.695 and the lensing galaxy is at z_l = 0.0394 <cit.>. The A/D macro-magnification ratio is M=1.0(1) <cit.>. The time delay between the lensed images is negligible (<1 day), such that any difference between pairs of images can be attributed to microlensing. The system was spectrophotometrically monitored with the FORS1[<https://www.eso.org/sci/facilities/paranal/instruments/fors.html>] instrument (ESO Very Large Telescope) in the multi-object spectroscopy (MOS) observing mode from October 2004 to December 2007. Data reduction and calibration were reported by <cit.> and <cit.>, and will not be repeated here. The analysis of the spectrophotometric data of the four lensed images of Q2237+0305 presented in <cit.> supports the absence of microlensing in image D during the monitoring. While image B was also found to be minimally affected by microlensing over that period, its use as a reference for the analysis was considered sub-optimal. The spectra of the lensed images were observed in pairs, with A-D and B-C obtained in consecutive observations. While this observational strategy is optimal for slit spectroscopy, it resulted in a different number of usable spectra for images A and B, sometimes with substantial differences in data quality for spectra obtained at the same epoch. Following <cit.>, we reduce the microlensing signal to the measurement of four quantities: μ^cont, μ^BLR, WCI, and RBI. The microlensing-induced magnification factors of the continuum μ^cont were estimated at the wavelength of the Civ line from the adjacent A/D continuum ratios, corrected for the differential extinction and macro-magnification. We briefly recall the definition of the μ^BLR, WCI, and RBI indices <cit.>: μ^BLR = (1/M) ∫_v_-^v_+F^l_A(v)dv / ∫_v_-^v_+F^l_D(v)dv, WCI = ∫_v_-^v_+[μ(v)/μ(v=0)]dv / ∫_v_-^v_+dv, RBI = ∫_0^v_+logμ(v)dv / ∫_0^v_+dv - ∫_v_-^0logμ(v)dv / ∫_v_-^0dv, μ(v)=F^l_A(v) / [M× F^l_D(v)]. Here F^l_A and F^l_D are continuum-subtracted flux densities in the emission lines, corrected for the differential extinction and the A/D macro-magnification ratio M. The limits v_-, v_+ are integration boundaries over a restricted velocity range. The RBI index is sensitive to the asymmetry of the line deformations. It takes non-null values when the amplitude of microlensing is different in the blue and red parts of the line. The WCI index indicates whether the wings of the emission line are affected by microlensing with respect to the line core. Both RBI and WCI are independent of M. As a flux ratio, μ(v) can be extremely noisy in the wings of the emission lines where the flux density reaches zero, so it is necessary to cut the faintest parts of the line wings. We thus only considered the parts of the line profiles whose flux density is above l_cut× F_peak, where F_peak is the maximum flux in the line profile and l_cut is fixed to 0.1 at all epochs. This value is a compromise between a good signal-to-noise ratio in the line wings and the preservation of a large part of the line profile. The values of the observed indices for the various epochs are shown in Fig. <ref>. The continuum magnification strongly increased between MJD 53711 and MJD 53943, suggesting a caustic crossing event.
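For concreteness, the index definitions above can be evaluated with a short NumPy sketch; it assumes continuum-subtracted line fluxes for images A and D sampled on a common velocity grid, and uses a base-10 logarithm for RBI (the base is our assumption, as the text leaves it unspecified):

```python
import numpy as np

def microlensing_indices(v, FA, FD, M=1.0, l_cut=0.1):
    """Compute mu(v), mu^BLR, WCI and RBI from continuum-subtracted line
    flux densities FA (image A) and FD (image D) on a velocity grid v."""
    keep = FA >= l_cut * FA.max()              # cut the faint, noisy wings
    v, FA, FD = v[keep], FA[keep], FD[keep]
    mu = FA / (M * FD)                          # magnification profile mu(v)
    mu_blr = np.trapz(FA, v) / (M * np.trapz(FD, v))
    wci = np.trapz(mu / np.interp(0.0, v, mu), v) / (v[-1] - v[0])
    avg = lambda y, x: np.trapz(y, x) / (x[-1] - x[0])   # normalised integral
    red, blue = v > 0, v < 0
    rbi = avg(np.log10(mu[red]), v[red]) - avg(np.log10(mu[blue]), v[blue])
    return mu, mu_blr, wci, rbi
```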
On the other hand, the three indices μ^BLR, WCI, and RBI do not show significant variations, suggesting that the line profile deformations, while strong <cit.>, remain essentially constant during the monitoring period. The index measurements are given in Table <ref>. The uncertainties on the indices are obtained by propagating the uncertainty of the line flux densities. This uncertainty is computed as the quadratic sum of the uncertainty on the total (line + continuum) flux density and the uncertainty on the continuum estimate, the latter being taken as the standard deviation of the continuum flux on each side of the emission lines. The exact value of the indices depends on several parameters, such as the adopted continuum windows, the velocity range fixed by l_cut, and the systemic redshift that defines the line center. Considering different, reasonable, values of these parameters, we estimated the additional uncertainty at around 0.05 for WCI and 0.01 for RBI. The uncertainties are added quadratically. The uncertainties of μ^cont and μ^BLR, on the other hand, are dominated by the uncertainty that affects the macro-magnification factor M. § MICROLENSING MODEL In order to infer the properties of the source, a forward-modeling method is generally followed that simulates microlensing data and compares them to the observations <cit.>. Microlensing simulations include two parts: 1) the convolution of the projected source image (accretion disk or BLR models) with the magnification map that represents the caustic network, and 2) the linear motion of the source over the convolved magnification map for a given transverse velocity in an arbitrary direction. In this case, a magnification event is considered successful if the values along an arbitrary linear path of a fixed length over each of the convolved images reproduce the observed index time series within the error bars. Our approach is similar to approximate Bayesian computation <cit.>: the four indices are our summary statistics and are compared to similar summary statistics from the data to evaluate the model's likelihood. In the following subsections, we describe in detail the modeling steps. §.§ Cosmological parameters and microlensing map We assumed a flat universe with cosmological parameters: H_0 = 68 km s^-1 Mpc^-1, Ω_m = 0.31, Ω_Λ = 0.69 <cit.>. The associated angular diameter distances for the system are D_ol = 166 Mpc (observer-lens), D_os = 1793 Mpc (observer-source), and D_ls = 1729 Mpc (lens-source). Due to the absence of microlensing in image D at the time of the observations, we assumed that the whole microlensing signal was associated with image A. The microlensing map was computed using the code microlens[<https://github.com/psaha/microlens>] <cit.>. We considered a convergence κ_s = 0.394 for matter in compact objects; κ_c = 0 for continuously distributed matter, due to the fact that the lensed images lie behind the bulge of the lens such that the dark matter fraction towards the lensed images is zero; and shear γ=0.395 <cit.>. We set the mean microlens mass to ⟨ m ⟩ = 0.3 M_⊙. The microlensing map is normalized such that the mean magnification is 1. The microlensing Einstein radius projected in the source plane is r_E = D_os√((4G⟨ m ⟩/c^2)(D_ls/(D_ol D_os))) = 39 light-days, where G is the gravitational constant and c is the speed of light. The total size of the magnification map, when projected to the source plane, is 200×200, sampled with a resolution of 20000×20000 pixels.
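As a sanity check of the adopted numbers, the Einstein radius above can be reproduced in a few lines of Python; this is a minimal sketch assuming ⟨m⟩ in solar masses, distances in Mpc, and rounded SI constants:

```python
import numpy as np

G, c = 6.674e-11, 2.998e8                 # SI units
MPC, LIGHT_DAY, M_SUN = 3.086e22, 2.590e13, 1.989e30

def einstein_radius_ld(m_mean, D_ol, D_os, D_ls):
    """Source-plane Einstein radius
    r_E = D_os * sqrt((4 G <m> / c^2) * D_ls / (D_ol * D_os)),
    returned in light-days."""
    theta_E = np.sqrt(4.0 * G * m_mean * M_SUN / c**2
                      * D_ls / (D_ol * D_os * MPC))   # angle in radians
    return D_os * MPC * theta_E / LIGHT_DAY

# Distances adopted above (Mpc) and <m> = 0.3 solar masses.
print(einstein_radius_ld(0.3, 166.0, 1793.0, 1729.0))  # ~39 light-days
```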
To reduce the impact of the preferred direction, as well as the edge effects, the caustic map was rotated by θ = [0, 30, 45, 60, 90^∘] and only the central part of 10000×10000 pixels was used for analysis. §.§ Continuum and broad-line region parameters We followed the same model setup as used in <cit.>. A continuum-emitting uniform disk is situated in the center, surrounded by the BLR. A total of nine different outer radii of the continuum source are used: r_s = [0.1, 0.15, 0.2, 0.25, 0.3, 0.4, 0.5, 0.6, 0.7]. We used three different geometries of the BLR: Keplerian disk (KD), polar wind (PW), and equatorial wind (EW). Both PW and EW are radially accelerated. The BLR emissivity depends on the radius in the form of a power law with index q=[1.5, 3.0]. The BLR inner radius is in the range r_in = [0.075, 0.1, 0.125, 0.15, 0.175, 0.2, 0.25, 0.35, 0.5, 0.75]. The outer radius of the BLR source is ten times larger than the inner radius. The whole system is viewed at four different inclinations i = [22, 34, 44, 62]^∘ with respect to the polar axis. A system viewed at i = 0 corresponds to a face-on view. For a full description of the model setup, we refer to <cit.> and <cit.>. The emitted continuum and BLR images are computed using the radiative transfer code stokes[<http://www.stokes-program.info/>] <cit.>, which is publicly available. All the models have the same number of degrees of freedom. §.§ Extracting index time series In order to simulate caustic crossing events that reproduce the observed indices, we convolved in the source plane the magnification map with each monochromatic image of the BLR obtained by slicing the projected BLR image into 20 velocity bins, thus obtaining a data cube of 20 convolved maps. From this cube, we computed the indices μ^BLR, WCI, and RBI (Eqs. <ref>, <ref>, and <ref>), which left us with three maps. For the continuum emission, we only had one monochromatic image. Convolved with the magnification map, it constitutes the fourth map. To emulate the time series to be compared to the data, it was necessary to extract tracks from the four maps <cit.>. The conversion of the track length into a time length was done through the net transverse velocity of the system. In the simplest case, the projected transverse velocity in the source plane v_⊥(source plane) for a randomly oriented track reduces to <cit.> v_⊥(source plane) = (D_os/D_ol) v_⊥(lens plane). For this work, we considered four different values of the transverse velocity v_⊥=[300, 400, 500, 600] km s^-1 in the lens plane, as found and used by several studies of Q2237+0305 <cit.>. The velocities projected in the source plane were about ten times larger than in the lens plane. Track extraction was performed using a Monte Carlo approach based on uniform sampling. First, we smoothed the observed index time series by rebinning the signal into sparser time intervals in order to mitigate the variability on the smallest timescales that cannot be spatially resolved on the magnification map (Fig. <ref>, top panels). For a given track on the convolved map, the index time series were obtained by interpolating the values along the track (Fig. <ref>, bottom and middle panels). The formalism introduced by <cit.> for computing the likelihood of the observed indices for each set of parameters characterizing the simulations was simply extended to the time domain (i.e., the definition of a microlensing event was extended from a point to a track).
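A schematic sketch of this track-extraction step is given below, assuming a square convolved index map with a known pixel scale in light-days; the random start point, the single angle draw, and the modular indexing are simplifications of the uniform Monte Carlo sampling described above:

```python
import numpy as np

def extract_track(index_map, mjd, v_source_kms, pix_ld, rng):
    """Read one simulated index time series along a random straight track.
    v_source_kms is the source-plane transverse velocity and pix_ld the
    map pixel scale in light-days."""
    KMS_TO_LD_PER_DAY = 86400.0 / 2.59e10       # km/s -> light-days per day
    t = mjd - mjd[0]                             # days since the first epoch
    length_pix = v_source_kms * KMS_TO_LD_PER_DAY * t[-1] / pix_ld
    n = index_map.shape[0]
    ang = rng.uniform(0.0, 2.0 * np.pi)          # random track direction
    x0, y0 = rng.uniform(0, n, size=2)           # random start point
    xs = (x0 + np.cos(ang) * length_pix * t / t[-1]).astype(int) % n
    ys = (y0 + np.sin(ang) * length_pix * t / t[-1]).astype(int) % n
    return index_map[ys, xs]                     # index value at each epoch
```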
We performed the extraction procedure for all convolved maps and the whole parameter space of the models, and we computed the likelihoods, which were used to reweight the samples (importance sampling). The total number of sampled tracks is on the order of ∼5×10^8 per map. § RESULTS We obtained the relative probabilities of the BLR models for each value of the inclination by marginalizing the likelihood over the other parameters (Table <ref>). The EW models were almost totally rejected. The KD models are the most likely, while PW models are less probable overall, confirming the results of <cit.> based on single-epoch data. We estimate the most likely viewing inclination of the system i=(35±12)°, in agreement with the independent analysis by <cit.>. As stated by <cit.>, r_in does not properly represent the BLR size due to strong dependence on inclination and preferred geometry. We followed the same prescription for computing the values of the BLR half-light radii r_1/2 <cit.> and the relative probabilities of r_1/2 (Fig. <ref>). From the probability distribution, we obtain r_1/2=1.31±0.60 r_E=51±23 light-days, in agreement with values reported by <cit.>, or from the same time series but modeling the total microlensing amplitude of Civ reported by <cit.>. We estimate the effective continuum half size r_s = 0.16±0.11 r_E = 6.0±4.1 light-days, in accordance with earlier estimates <cit.>. Recently, <cit.> used the full μ(v) magnification profile instead of the indices (Sect. <ref>) in comparison to simulations, to constrain the size, geometry, and kinematics of the BLR in the lensed quasar J1004+4112, based on single-epoch data. They found that using either the indices or the full μ(v) profile gave similar results. In Appendix <ref> we report similar results for Q2237+0305, using the full μ(v) profile at a single epoch. This is also illustrated in Fig. <ref>, thus validating the use of integrated indices, which are more convenient to handle when analyzing time series. When compared to single-epoch estimates based on isolated points on the maps that reproduce observations, sampling time series acts as a filter against spatially uncorrelated signals along an arbitrary path. Recently, <cit.> investigated a sample of 13 quadruply lensed quasars in order to study the influence of diffuse BLR emission <cit.> on accretion disk size inferences using microlensing. They showed that the mere contribution of the BLR to the continuum signal is able to largely account for the implied overestimation of accretion disk sizes, and that microlensing may provide useful constraints on disk physics in sources whose diffuse BLR emission is weak and extends much farther than the typical Einstein radius, such as in the most luminous sources. Although Q2237+0305 is not included in their sample, our simulations imply that the BLR effective size for this object is much larger than the continuum region that radiates at 1450 Å; however, a scenario (∼15% of total cases) in which the outer part of the disk overlaps with the inner funnel of the BLR is possible, but less likely, and could indicate a weak diffuse BLR emission. The high values observed for μ^cont are statistically favored by compact continuum sources in our model (i.e., the continuum source and the BLR are geometrically distinct and rarely overlap). § CONCLUSIONS Current and future generations of wide-field surveys such as GAIA <cit.>, LSST <cit.>, and EUCLID <cit.> are expected to identify thousands of bright lensed quasars that are able to be robustly modeled <cit.>.
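The reweighting step mentioned above can likewise be sketched in a few lines; here the structure of the likelihood bookkeeping (a mapping from each model-parameter tuple to the array of track likelihoods accumulated for it) is our assumption about the pipeline, not its actual implementation:

```python
import numpy as np

def relative_model_probabilities(likelihoods, prior=None):
    """Marginalise track likelihoods per parameter set and normalise,
    optionally applying a prior (importance-sampling reweighting)."""
    w = {k: float(np.mean(v)) for k, v in likelihoods.items()}
    if prior is not None:
        w = {k: w[k] * prior.get(k, 1.0) for k in w}
    z = sum(w.values())
    return {k: w[k] / z for k in w}
```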
For each successful detection of a caustic crossing event, an immediate ground-based spectroscopic follow-up may play a key role in understanding the structure and evolution of quasars at high cosmological redshifts. In order to develop a new framework for the upcoming stream of high-cadence time series data, we studied the microlensing effect of the Civ emission line observed for the quadruply lensed system Q2237+0305 during a monitoring campaign performed between October 2004 and December 2007. We simulated realistic caustic crossing events as a linear motion of a point source over convolved source images in order to reproduce the time series of the four representative indices μ^cont, μ^BLR, WCI and RBI that are capable of characterizing a single microlensing effect. We explored a wide range of quasar internal parameters in order to determine the most likely geometry, size, and orientation of the continuum and line-emitting source. Based on our simulations, we conclude the following: * We confirm that the index analysis developed by <cit.> and <cit.> is valid when applied to the time domain and allows for maximum usage of spectroscopic monitoring data for probing the BLR structure. * The most likely geometry for Q2237+0305 is KD, while PW and EW are less likely. The effective Civ emitting BLR size we inferred is in good agreement with the previous measurements <cit.>. Several improvements of the presented method will be included in follow-up research. Microlensing simulations reproducing the observed signal in more than two lensed images would allow us to narrow the probability distribution on the continuum source and the BLR size. A further goal is to use the time series of the full μ(v) differential magnification profile, which is the most sensitive to geometry. This will be addressed in future work. We note, however, that using the full μ(v) time series requires considerably higher computational resources, due to the dimensionality increase. Future prospects will also include the application of the same method to other broad emission lines observed for quadruply lensed systems, for example SDSS J1004+4112 and RX J1131-1231. We thank the anonymous referee for valuable comments that improved the quality of the manuscript. This work was supported by the F.R.S. FNRS under the research grants IISN 4.4503.19 and PDR T.0116.21. § SINGLE-EPOCH RESULTS USING EITHER THE INDICES OR THE FULL μ(v) PROFILE <cit.> analyzed the microlensing-induced line profile deformations observed quasi-simultaneously in the Civ and Hα line profiles, at a single epoch in October 2005, to constrain the size, geometry, and kinematics of the BLR in Q2237+0305. This study was done using the four indices μ^cont, μ^BLR, WCI, and RBI to characterize the microlensing effect. Hereafter, we update this study considering the μ(v) magnification profile in the comparison to simulations, as recently done for the lensed quasar J1004+4112 <cit.>. For consistency, the three indices characterizing the emission line microlensing are recomputed from line profiles slightly truncated to discard the noisiest parts of the μ(v) profile (in October 2005, the data were good enough to use l_cut = 0.06). These indices are given in Table <ref>. For Civ, the values of μ(v), μ^cont, and μ^BLR are 25% smaller due to the differential extinction in image D, which was not taken into account in <cit.> and <cit.> (for Hα, the values of μ^cont and μ^BLR are only 3% smaller). Details of the method can be found in <cit.>. As shown in Figs.
<ref> and  <ref>, the observed μ(v) profiles can be reproduced by many simulated profiles. The probabilities of the different BLR models are given in Table <ref>, for Civ and Hα. We find the following. First, the probabilities derived from the four updated indices are in good agreement with those reported in <cit.>, for both Civ and Hα. Second, the probabilities derived using the 20 spectral elements of the μ(v) profile are in good agreement with those obtained from the indices, for both Civ and Hα. Third, compared to previous results, the probability of the EW model slightly increases, but the KD model remains the dominant one. Finally, the measurement of the half-light radius of the BLR, based on the probabilities computed using the full μ(v) profiles of Civ and Hα separately, gives r_1/2 = 39^+17_-25 light-days for Civ and r_1/2 = 37^+12_-23 light-days for Hα, in excellent agreement with the values reported in <cit.>. The results are thus robust with respect to small changes in the indices. Moreover, using the full μ(v) profile does not modify the results, including the determination of the BLR size. This validates the use of integrated indices in simulations of microlensing-induced line profile deformations, at least for Q2237+0305 and J1004+4112.
http://arxiv.org/abs/2405.09130v1
20240515064653
Contextual Integrity Games
[ "Ran Wolff" ]
cs.CY
[ "cs.CY" ]
Ran Wolff ranwolff@amazon.com The contextual integrity model is a widely accepted way of analyzing the plurality of norms that are colloquially called “privacy norms”. Contextual integrity systematically describes such norms by distinguishing the type of data concerned, the three social agents involved (subject, sender, and recipient), and the transmission principle governing the transfer of information. It allows analyzing privacy norms in terms of their impact on the interaction of those agents with one another. This paper places contextual integrity in a strict game theoretic framework. When such a description is possible, it has three key advantages: Firstly, it allows indisputable utilitarian justification of some privacy norms. Secondly, it better relates privacy to topics which are well understood by stakeholders whose education is predominantly quantitative, such as engineers and economists. Thirdly, it is an absolute necessity when describing ethical constraints to machines such as AI agents. In addition to describing games which capture paradigmatic informational norms, the paper also analyzes cases in which the game, per se, does not encourage normative behavior. The paper discusses two main forms of mechanisms which can be applied to the game in such cases, and shows that they reflect accepted privacy regulation and technologies. Contextual Integrity Games ========================== § INTRODUCTION Contextual Integrity (CI) provides a structured way in which the appropriateness of information transfers can be discussed. CI describes information as flowing from a data subject to a sender and from that sender to a recipient. The flow of information from subject to sender is usually easily justifiable from an ethical perspective because it occurs as a result of their participation in a social context, which is often voluntary and typically benefits both of them. In contrast, the flow from sender to recipient occurs in another social context, of which the subject is often not part, for which they typically do not volunteer, and from which they potentially do not benefit. CI focuses on the ethics (appropriateness) of this second transfer. Informational norms regulate the flow of information between sender and recipient. They can be seen as society's way of handling the potential implications of that flow. Every living society inherits a large volume of informational norms, but also comes up with new ones which correspond to changes in social context. Not surprisingly, the revolutionary speed and scope of information transfer in today's society has triggered abundant modifications to informational norms. One of the major changes has been the commercialization of information: Data aggregators have found uses for individual data in domains starting from retail and leading, just recently, to AI training. As evident from the flurry of legislation around the information economy, society is still working out the details of the norms it would like to adopt in the context of commercial information transfers. Informational norms can be based on various ethical frameworks: Privacy has been argued for on the basis of human dignity <cit.>, as an expression of human autonomy <cit.>, or as a conservative response to new media <cit.>.
In the context of commercialization it makes sense to ground informational norms in a utilitarian framework: If subject, sender, and recipient are all exchanging information in their self interest, and if they externalise no cost to the wider society, then the correct informational norm might well be the one which maximizes participants' utility. Utilitarianism has long been suggested, and often criticized, as a basis for the discussion of informational norms. Famously, Posner <cit.> used a utilitarian analysis to argue that privacy norms should not be extended. This paper builds on the theoretical criticism of Posner's analysis which has shown <cit.> that respect for privacy can emerge as a strategy which leads to a payoff dominant Nash equilibrium in certain games. In other words, that informational-normative behaviors can be based on utilitarianism. The first contribution of this work is the expansion of such privacy-games to three players, corresponding to the subject, sender, and recipient. We present games in which the strategies which lead to the payoff-dominant Nash equilibria are those which are normative according to six different transmission principles: Confidentiality, Mandatory Transfer, Control, Fiduciary Transfer, Notification, and Information Ownership. The second contribution of this work is the game-theoretic discussion of mechanisms in games where players do not necessarily have a strategy which we would identify as information-normative. We show that accepted social mechanisms can be mapped onto different types of modifications to the game. Specifically, we provide two examples of modifications to the payoff: taxation and transfers, and three examples of modifications to the information channel: perturbation, bandwidth limitation, and message vetting. Placing all those mechanisms in a common game theoretic framework allows comparing them against one another, which is not always simple without such abstraction. Considering privacy from a utilitarian ethics perspective has potential benefits and risks. In a voluntary, commercial setting, the benefits of utilitarian analysis are: Firstly, that the participants (engineers, economists, and their managers) are often better educated to understand computational analysis than they are to understand other normative arguments. Secondly, it permits the encoding of ethical considerations into AI agents, which increasingly replace human judgement in many applications. This is especially important in commercial ethical problems in which some of the key variables, such as the preferences of users, can only be understood through experimentation and measurement. The rest of this paper is organized as follows: Section <ref> provides the definitions of game theory as we use them here, with special focus on the use of secrets in games, as well as some preliminary games and shorthand which we use throughout the rest of the paper. Section <ref> provides examples of five games in which informational-normative strategies emerge as the Nash equilibrium. Section <ref> provides examples of mechanisms which drive players to informational-normative strategies. One of those mechanisms complements the Information Ownership transfer principle. Section <ref> places this work in the context of previous work. Finally, Section <ref> discusses some of the interesting questions opened by the mathematical definition of privacy. 
§ DEFINITIONS, PRELIMINARIES, & NOTATIONS Consider a game between three rational players: Alice, who is also denoted the subject, Bob, the sender, and Carol, the recipient. Each player has a set of choices A, B, and C and a gain (payoff) function which depends on the choices of all of the players: g_A:A× B× C→ℝ, g_B:A× B× C→ℝ, and g_C:A× B× C→ℝ, respectively. Throughout this paper A, B, & C are abstract, meaningless, choices: A={ Top, Middle, Bottom }, B={ Near, In-between, Far }, and C={ Left, Center, Right } which we shorthand to { T, M, B }, { N, I, F }, and { L, C, R }, respectively. Each player also has a secret, which is a piece of information known only to that player. Alice's secret is denoted a, Bob's b, and Carol's c. In this paper secrets are single bits which can be 0 or 1, but more informative secrets are equally possible. The choices of a player can include communicating a secret which that player has to another player. Deciding to pass a secret is denoted sharing and to withhold the secret is denoted keeping. Unless stated otherwise, games have two stages: Players first make all of the decisions regarding secret sharing and then pick a strategy based on full knowledge of which secrets every player has. In addition to secrets, players in some games can share signals, which are likewise bits whose value is part of the sharing player's choices. A player's strategy is an algorithm which selects her choices. Strategies are encoded as decision trees in c-like code. * Alice's strategy s of choosing M is denoted s=M * Alice's strategy q of choosing T if a secret a is 1 and B if a is 0 is denoted q=a?T:B * Alice's strategy r of following strategy s if b is 1 and q if it is 0 is denoted r=b?s:q or, equivalently, r=b?M:a?T:B Strategies which rely on a secret are indistinguishable from those which rely on random coin flips to any player who does not know that secret. The model described in this paper only considers deterministic algorithms (pure strategies). Through the use of a sufficient number of secrets, the model can subsume almost any mixed strategy[The only difference being that probabilities under this model are limited to rational fractions rather than real ones]. Strategies which rely on a secret are denoted equivalent if other players cannot distinguish between them. E.g., a?T:B and a?B:T are equivalent for a player who does not have the secret a. A strategy profile is a combination of strategies, one per player, and is marked by ⟨ s_A,s_B,s_C⟩ where s_A is Alice's strategy, etc. The expected gain from a strategy profile, g(⟨ s_A,s_B,s_C⟩), is denoted ⟨ g_A,g_B,g_C⟩ where g_A is Alice's gain, etc., and the expectation is computed over all unknown secrets. A strategy profile is a Nash equilibrium if all strategy profiles that are different in but one player's strategy do not increase the gain of that player. I.e., a strategy profile ⟨ s_A,s_B,s_C⟩ with gain ⟨ g_A,g_B,g_C⟩ is a Nash equilibrium if for any strategy profile ⟨ s'_A,s_B,s_C⟩ with gains ⟨ g'_A,g'_B,g'_C⟩ it is true that g_A≥ g'_A, for any strategy profile ⟨ s_A,s'_B,s_C⟩ with gains ⟨ g'_A,g'_B,g'_C⟩ it is true that g_B≥ g'_B, and for any strategy profile ⟨ s_A,s_B,s'_C⟩ with gains ⟨ g'_A,g'_B,g'_C⟩ it is true that g_C≥ g'_C. A Nash equilibrium is payoff dominant if it provides gain to all players that is better than or equal to that of all other Nash equilibria in that game. 
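To make the preceding definitions concrete, the following minimal Python sketch (ours, not part of the paper) encodes secret-dependent strategies in the same ternary style and computes expected gains by averaging over all unknown secrets; the payoff function toy_gain and all names are hypothetical stand-ins for a game table.

import itertools

def const(x):
    # Strategy that ignores all secrets, e.g. s = M.
    return lambda secrets: x

def ternary(secret, if1, if0):
    # Secret-dependent strategy, e.g. a?T:B is ternary("a", const("T"), const("B")).
    return lambda secrets: if1(secrets) if secrets[secret] else if0(secrets)

def expected_gain(profile, gain, secret_names="abc"):
    # Average the three payoffs over all assignments of the unknown secrets.
    totals = [0.0, 0.0, 0.0]
    assignments = list(itertools.product([0, 1], repeat=len(secret_names)))
    for bits in assignments:
        secrets = dict(zip(secret_names, bits))
        g = gain(tuple(s(secrets) for s in profile), secrets)
        totals = [t + gi for t, gi in zip(totals, g)]
    return [t / len(assignments) for t in totals]

def is_nash(profile, strategy_sets, gain):
    # A profile is a Nash equilibrium if no single player gains by deviating
    # to any strategy in her candidate set.
    base = expected_gain(profile, gain)
    for i, alternatives in enumerate(strategy_sets):
        for alt in alternatives:
            trial = list(profile)
            trial[i] = alt
            if expected_gain(trial, gain)[i] > base[i] + 1e-12:
                return False
    return True

# Hypothetical payoff (not a table from the paper): Carol gains 16 by matching
# Alice's choice under the map T->L, M->C, B->R; Alice gains 8 by escaping.
def toy_gain(choices, secrets):
    a_choice, _, c_choice = choices
    match = {"T": "L", "M": "C", "B": "R"}[a_choice] == c_choice
    return (0 if match else 8, 0, 16 if match else 0)

q = ternary("a", const("T"), const("B"))  # a?T:B
r = ternary("c", const("L"), const("R"))  # c?L:R
print(expected_gain((q, const("I"), r), toy_gain))  # [4.0, 0.0, 8.0]

As in the text, the profile (a?T:B, ·, c?L:R) yields only expected gains, because neither player can anticipate the other's secret-driven choice.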
Given a game, a distributional mechanism is another game in which players have the exact same choices and in which the sum of payoffs for each strategy profile is not larger. Conceptually, mechanisms are interventions by an additional player, the government, who can force transfer of payoff between players and to itself, but cannot provide additional payoff. A communication mechanism, likewise, is a degradation of the secret passing ability of players. In a communication mechanism the government can obstruct message passing but not improve it. §.§ Limitations The game theoretic model, described above, is one of the simplest which is considered in game theory. As such it is far from an accurate description of human behavior. Two of the more relevant extensions of game theory are for players with limited rationality (able to make mistakes) and players who do not so much know as believe (e.g., in the value of a secret). We leave developing the theory of more realistic contextual privacy games to further research. §.§ Simplified notation A binary decomposition of a 3-party game (see Sandholm <cit.>) is a set of 3 simultaneous 2-party games, one for each pair. In each pair, players have the same choices and strategies that they had in the original game. The payoff of every player in the original game is the sum of the payoffs in the two games of the decomposition in which that player participates. We refer to those 2-party games as contexts and they often indeed correspond to the concept of context in CI. The typical game described in this paper has three players, every one of which has three choices and at least one secret. The number of strategies by which a player can choose c choices given s binary secrets is c^2^s. Listing all those strategies hinders readability and is impractical in a paper. We choose instead to typically only present the gain for each combination of choices and elaborate on secret-dependent strategies in the text. When discussing strategy profiles, we typically only analyze one of each set of equivalents. Furthermore, the game is always presented in its decomposed form, in which the relations between each pair of players are easier to follow. §.§ Preliminaries Consider a game between two players: Alice and Carol. Table <ref> depicts the gain of players in all possible strategy profiles. As can be seen in the table, there are five strategy profiles which lead to a Nash equilibrium. The first is ⟨ M,C ⟩ whose gain is ⟨ 2,2 ⟩; the others are the equivalent strategy profiles ⟨ a?T:B,c?L:R ⟩, ⟨ a?T:B,c?R:L ⟩, ⟨ a?B:T,c?L:R ⟩, and ⟨ a?B:T,c?R:L ⟩. In each of these equivalent strategies Alice does not know if Carol chooses L or R and therefore computes an expected gain of 4. The same is true for Carol, who cannot anticipate if Alice chooses T or B. We first use this game to exemplify the strategies of secrecy and of respect for privacy. [Secrecy (adapted from <cit.>)] A player has a strategy of secrecy in a game if she chooses not to share her secret. If Alice has the choice to keep or share her secret with Carol then the strategy in which Alice keeps her secret is dominant. If Alice shares her secret with Carol then Carol can use it in her strategy. The strategy profile ⟨ a?T:B,c?L:R ⟩ is no longer a Nash equilibrium because Carol can replace her strategy with a?L:R and increase her gain from 4 to 8. The same is true for the equivalent strategy profiles. The only remaining Nash equilibrium is ⟨ M,C ⟩, which is less gainful for Alice. 
Keeping her secret is a payoff dominant strategy for Alice. [Respect for privacy (adapted from <cit.>)] A player has a strategy of respect for privacy if she chooses not to observe another player's secret. If Carol has the choice of observing Alice's secret or respecting her privacy then the strategy in which Carol respects Alice's privacy is dominant. The proof of Thm. <ref> holds regardless of whether Alice shares her secret willingly or Carol observes Alice's secret without her consent. § INFORMATIONAL NORMATIVE STRATEGIES With the definitions of games and secrets above, we can now describe a set of games which model informational normative behavior. Each of the following sections deals with a different transmission principle: Confidentiality, Mandatory Transfer, Fiduciary Transfer, Control, and Notification. Information Ownership is discussed in Section <ref>. For each of the transmission principles we describe a real-life example, a game theoretic definition of the normative behavior, and an example of a game in which the normative strategy leads to the payoff dominant Nash equilibrium. §.§ Confidentiality Confidentiality is called for in real life when Alice (the subject) has information which could help her interact with Bob (the sender) but which might harm her if it becomes known to Carol (the recipient). In such scenarios, Alice would be keen to share the secret with Bob if she can trust that revealing the secret to Carol is somehow harmful to Bob as well. In terms of game theory we define a confidential strategy as follows: [Confidentiality] Bob has a strategy of confidentiality towards Alice if whenever she shares her secret a with Bob his strategy is to keep a from Carol. To see that confidentiality is an emergent property in some games, consider the Game of Confidentiality in Table <ref>. The context of Alice and Carol, Table <ref>, is a privacy game, similar to that in Table <ref>. The same is true for the context of Bob and Carol. Thus, in those two games players maximize their payoff if they each make choices the other player cannot anticipate. So when each player has their own secret it is straightforward that the payoff-dominant Nash equilibrium of the Game of Confidentiality is ⟨ a?T:B, b?N:F, c?L:R⟩ and the gain in that Nash equilibrium is ⟨ 9,5,12⟩. In ⟨ a?T:B, b?N:F, c?L:R⟩, Alice and Bob each gain 1 from their context. If Alice decides to share her secret then Bob can replace his strategy with one that relies on a rather than on b. Because Carol can no more anticipate a than she can anticipate b, the change does not affect Bob and Carol's context. Hence, ⟨ a?T:B, a?N:F, c?L:R⟩, in which Alice and Bob are paid 2 each in their context and the total payoff is ⟨ 10,6,12⟩, is a payoff-dominant Nash equilibrium when Alice shares her secret with Bob. Last, if Bob chooses to share Alice's secret with Carol then Alice would no longer choose a strategy which relies on that secret. If she does, then Carol will always match her choice to Alice's and gain 16 in their context while Alice gains 0 in that context. A payoff of 16 dominates any other choice Carol can make in her other context, and the certainty of gaining 0 makes this strategy inferior to choosing M no matter what payoff Alice earns in her context with Bob. Hence Alice would choose M when Carol has her secret. When Alice chooses M, Carol can still use her own secret, c, to choose between L and R, or she can choose C. In the first strategy Bob would choose N or F using b and in the other he would choose I. 
In the Game of Confidentiality Bob has a strategy of Confidentiality towards Alice. As we have seen, if Alice shares her secret with Bob then the payoff-dominant Nash equilibrium is ⟨ a?T:B, a?N:F, c?L:R⟩, in which Bob's payoff is 6. If Bob shares Alice's secret with Carol then the two possible Nash equilibria, ⟨ M,b?N:F,c?L:R⟩ and ⟨ M,I,C⟩, both pay Bob 4. Therefore, Bob would not share Alice's secret with Carol. §.§ Mandatory transfer A common real-life example of mandatory transfer is the reporting of illicit conduct. For instance, judges in the US (senders) are ethically required to inform federal and state authorities (recipients) if they find that a witness or a party (subject) evades tax. Unlike confidentiality, we cannot assume that the subject willingly shares the secret of tax evasion with the sender, and the question of whether revealing that secret in court is ethically justified is an interesting one. Still, once the judge knows the secret, it is normative (although hardly guaranteed <cit.>) that she will transfer it to the tax authorities. [Mandatory transfer] Bob has a strategy of mandatory transfer towards Alice if whenever she shares her secret a with Bob his strategy is to share a with Carol. Consider the Game of Mandatory Transfer in Table <ref>. The context of Alice and Carol (Table <ref>) is a privacy game similar to that in Table <ref> in which the expected payoff for Alice is 5 and for Carol is 3, so long as they each choose using their secret. Bob's only strategy which pays him more than 0 in that case is I, and the payoff dominant Nash equilibrium is ⟨ a?T:B,I,c?L:R⟩ with a payoff of ⟨ 5,1,3⟩. If Carol somehow gains access to Alice's secret then the only Nash equilibrium remaining is ⟨ M,I,C ⟩ whose payoff is ⟨ 3,3,3⟩. In the Game of Mandatory Transfer Bob has a strategy of Mandatory Transfer towards Alice. If Bob somehow learns Alice's secret then his options are to keep it from Carol and continue to receive a payoff of 1, or share the secret with Carol and receive a payoff of 3. §.§ Fiduciary Transfer Real life examples of fiduciary transfer are those in which the subject relies on a better informed sender to manage her secret for her: Professional advisors, lawyers, etc. are often the sender in those examples. Those advisors have some information about the context which the subject does not have. E.g., a lawyer knows not just the law, but also the relevant precedents. In a normative setting, a subject should be able to reveal her secret to the advisor knowing that the advisor will only share that secret if it benefits the subject. In other words, normative fiduciary transfer happens when the sender is better informed than the subject and their interests align. [Fiduciary Transfer] Bob has a strategy of fiduciary transfer towards Alice if whenever she shares her secret a with Bob his strategy is to share a with Carol if and only if sharing will increase Alice's gain. In the Game of Fiduciary Transfer, Table <ref>, Alice and Carol do not know if their context is competitive (Table <ref>) or collaborative (Table <ref>). If collaborative, then Alice and Carol's gain can increase if they share a secret, whereas in a competitive context it will not. Alice makes her choice about secret sharing without knowing the context. Unlike Alice and Carol, Bob does know which context Alice and Carol have. When choosing whether to share Alice's secret with Carol he can consider what Alice and Carol would do after they learn their context. 
After all of the players have executed their secret sharing strategies, they all learn the context and choose their best strategy considering the secrets they know. The key feature of the Game of Fiduciary Transfer is that Bob will not choose N or F in a way that Carol can predict, and neither will Carol choose L or R in a way that Bob can predict. If Bob chooses N, for example, then Carol's dominant strategy would always be to choose L. That combination is so harmful to Bob that no other payoff he may get from Alice can make it preferable to choosing I. Hence, for Alice and Carol to collaborate, if their context is indeed collaborative, Carol must share her secret with Alice. Table <ref> presents the payoff-dominant Nash equilibrium in four different secret sharing scenarios: Firstly, if players each have their own secret then, regardless of Alice and Carol's context, their payoff-optimal Nash equilibrium is ⟨ a?T:B,b?N:F,c?L:R⟩ with a payoff of ⟨ 4.5, 2.5,2⟩. If Alice shares her secret with Bob then they can coordinate by choosing ⟨ a?T:B,a?N:F,c?L:R⟩. That would increase their payoffs to ⟨ 5, 3,2⟩ without altering their position in their respective contexts with Carol, who does not know a. When Bob knows Alice's secret he can choose if he shares it with Carol. By sharing Alice's secret with Carol, Bob ensures that Alice would never be able to make a choice that Carol cannot anticipate. If Alice and Carol are in a competitive context then that forces Alice to choose M because her potential gain from Bob cannot compete with her certain loss to Carol if she makes another choice. If Alice chooses M, then Carol would prefer to choose C and gain 2 in their context over choosing L or R based on her secret and gaining 1 from Bob. Hence, ⟨ M, I,C⟩, with a payoff of ⟨ 2, 0,2⟩, is the payoff dominant Nash equilibrium. Last, if Bob shares Alice's secret with Carol and if Alice and Carol are in a collaborative context then Alice and Carol can collaborate by choosing T and L or B and R. However, as we have seen, Carol cannot collaborate with Alice in a way which Bob can predict. So Carol cannot just choose L (or R) and cannot rely on Alice's secret, which is known to Bob. Carol's only way out is to share her own secret with Alice. In this way, Alice and Carol can both use Carol's secret, which Bob does not know. If they do then Bob's best strategy is to choose N or F in a way Carol cannot anticipate. Therefore ⟨ c?T:B,b?N:F,c?L:R⟩ is the payoff-dominant Nash equilibrium, with gain ⟨ 12.5, 3.5,9⟩. In the Game of Fiduciary Transfer Bob has a strategy of fiduciary transfer towards Alice. If Alice and Carol's context is competitive then by sharing Alice's secret with Carol Bob reduces Alice's payoff from 5 to 2 and his own revenue from 3 to 0. If the context is collaborative then by sharing that secret Bob causes Carol to share her own secret with Alice. This increases Alice's payoff from 5 to 12.5 and Bob's payoff from 3 to 3.5. Therefore, Bob would only share Alice's secret with Carol when that benefits Alice. §.§ Control An example of a control norm in a commercial setting is a case in which a customer must communicate with a service provider through the service of another company. For instance, the user of a website who communicates with an advertiser through the publisher's website. The user may choose to share some information with the publisher (e.g., their delivery address) in order to gain some of their services. 
The publisher often does not know if sharing that information with the advertiser would benefit or harm the user. It is normative for the publisher to ask users if they can share their information and to follow their dictates. [Control] Bob has a strategy of control towards Alice if when she shares her secret a and signal s with Bob his strategy is to share a with Carol if s=1 and keep it if s=0. Signalling only makes sense if Alice has information which Bob does not have. Suppose that with the same setup as in the Game of Fiduciary Transfer (Table <ref>) it is now Alice who knows her context with Carol and Bob who does not. Assume Alice can choose a signal as part of her secret sharing strategy. When she decides to share a she also shares s, which she chooses to be 0 or 1. Bob's secret sharing strategy can rely on s. So when Alice shares her secret with Bob his choices are: 1) to always keep a from Carol; 2) to share a with Carol regardless of s; 3) to share a when s is 1; or 4) to share a when s is 0. If Alice signals 1 when she is in a collaborative context with Carol and 0 when she is in a competitive context then Bob would share Alice's secret with Carol when Alice signals 1 and keep it when she signals 0. Hence, Bob has a strategy of Control towards Alice. From the analysis of the Game of Fiduciary Transfer we already know that keeping Alice's secret is harmful for Bob if Alice and Carol are in a collaborative context and that sharing the secret is harmful to Bob when Alice and Carol are in a competitive context. If Bob knows that Alice only signals 1 when she is in a collaborative context then his best sharing strategy is to share when the signal is 1 and to keep when it is 0. Since this strategy is also the best secret sharing strategy for Alice, it is a payoff-optimal Nash equilibrium. §.§ Notification A common real world example of notification is that in which a company is bought by another. When that happens, the context of the bought company and its new owner typically becomes collaborative. Such collaboration can be reinforced by sharing the bought company's secrets with its owner. However, sometimes in order to collaborate with its owner the bought company needs to validate that its customers have a chance to alter their own strategy. [Notification] Bob has a strategy of notification towards Alice if when she shares her secret a with Bob, his strategy is to signal 1 to her if he chooses to share that secret with Carol and to signal 0 to her if he chooses to keep it. The Game of Notification in Table <ref> attempts to capture this dynamic by requiring that Alice choose her strategy a priori, before any of the players knows if the context of Bob and Carol is collaborative. Alice's strategy can still depend on a signal from Bob, but Alice would not be able to change her strategy in response to the choices of Bob and Carol. When Alice does not share her secret with Bob the only strategy which guarantees Alice a positive payoff is M and the only Nash equilibrium is ⟨ M,I,C ⟩ whose gain is ⟨ 2,0,0⟩. By sharing her secret with Bob, Alice allows ⟨ a?T:B,a?N:F,C ⟩ whose gain is higher for her and for Bob, ⟨ 5,4,0⟩. When Bob and Carol are in a non-collaborative context, that is the payoff-dominant Nash equilibrium. When Bob and Carol are in a collaborative context, or if they change their context to a collaborative one, they would prefer that Carol choose L when Bob chooses F and R when Bob chooses N. However, that is only a Nash equilibrium if Alice chooses M. 
Otherwise, e.g., in ⟨ a?T:B,a?N:F,a?R:L⟩, the loss to Carol from choosing R when Alice chooses T is higher than Carol's gain from choosing R when Bob chooses N. Thus, there are two groups of Nash equilibria in the game when all players have Alice's secret: ⟨ a?T:B,a?N:F, C ⟩ with payoff ⟨ 5,4,0 ⟩ and ⟨ M,a?N:F, a?R:L ⟩ with payoff ⟨ 3,6,6 ⟩. Bob and Carol prefer the second and Alice prefers the first. However, Alice still prefers this second Nash equilibrium over ⟨ M,I,C⟩, which is her best option if she does not share her secret with Bob. Now consider the possibility that Bob signals to Alice s=1 if his context with Carol is collaborative and s=0 if non-collaborative. If Alice can trust that signal then her best strategy is s?M:a?T:B. Trust is critical because Bob's choice of what signal he sends comes after Alice has committed to the strategy and has shared her secret. If Alice shares her secret with Bob in the Notification Game then Bob has a strategy of notifying Alice whether he shared her secret with Carol. Bob would only share Alice's secret with Carol if his context with Carol is collaborative. If Bob shares the secret and then signals 0 then Alice's strategy is a?T:B and Carol would choose C. That would reduce Bob's payoff to 4 rather than the 6 he could gain if he signalled 1. If Bob does not share Alice's secret and still signals 1 then Alice chooses M. That would reduce Bob's gain to 0 rather than the 4 he could have gained if he signalled 0. § INFORMATIONAL NORMS AS MECHANISMS As stated earlier, an informational norm can be seen as a social mechanism whose purpose is to make sure individuals adopt certain normative behaviors. In a game where players do not have information-normative strategies, a mechanism is an adaptation of the given game which causes players to choose an informational-normative strategy. This paper focuses on two types of mechanisms: distributional mechanisms, in which the government can shift a player's payoff to another player or to itself, and communication mechanisms, in which the government can obstruct the passing of some messages. §.§ Distributional mechanisms Two prevalent methods of payoff distribution in the commercial world are revenue-sharing and taxation[We consider any kind of monetary punishment system as taxation for the purpose of this discussion, since it allows players to make rational decisions which consider the probability of getting caught.]. In the former the data subject is paid for the use of her secret. In the latter, the government extracts some of the payoff from some players. §.§.§ Information Ownership [Information ownership] Bob has a strategy of recognising Alice's ownership of her secret if he chooses to transfer to Alice some of the payoff which resulted from sharing her secret with Carol. In the Game of Information Ownership (Table <ref>) Alice and Carol's context is a game of privacy. If Alice does not share her secret with Bob then the Nash equilibrium of the game is ⟨ a?T:B,b?N:F,c?L:R⟩ with payoff ⟨ 6,1,6 ⟩. If Alice shares her secret with Bob then they can also choose ⟨ a?T:B,a?N:F,c?L:R⟩ with payoff ⟨ 7,2,6 ⟩. However, Bob would much rather that Alice choose M because the strategy profile ⟨ M,I,C⟩ gains him 10 from Carol. If Alice shares her secret with Bob then he can force ⟨ M,I,C⟩ by sharing Alice's secret with Carol. As the game is given, Alice sees her gain reduced from 6 if she does not share her secret with Bob, to 2 if she does share her secret with Bob (and he shares it with Carol). 
Therefore, as the game is given, Alice would not share her secret with Bob. If Alice and Bob could adopt a mechanism in which Bob transfers a payment of 5 to Alice if in their context they choose M and I (Table <ref>) then Alice's calculation would change. Now, in ⟨ M,I,C⟩ her gain is 7 whereas in ⟨ a?T:B,b?N:F,c?L:R⟩ it is 6. Alice no longer minds that Bob might share her secret with Carol. Since Bob increases his gain in ⟨ M,I,C⟩ to 5 compared to 1 in ⟨ a?T:B,b?N:F,c?L:R⟩, the mechanism works better for Bob as well. Last, since the total gain in ⟨ M,I,C⟩ is 12, whereas in ⟨ a?T:B,b?N:F,c?L:R⟩ it is 11, the mechanism is ethical from a utilitarian point of view. §.§.§ Taxation The same game which exemplifies Information Ownership can also present the value of taxation. Consider if, in the game in Table <ref>, the government placed a flat tax of 5 on Alice if she chooses anything but M. That would cause Alice to prefer ⟨ M,I,C ⟩ over ⟨ a?T:B,b?N:F,c?L:R ⟩ because in the former Alice's gain is 2 while in the latter it is 1. Since the total gain of ⟨ M,I,C ⟩ is higher than that of ⟨ a?T:B,b?N:F,c?L:R ⟩, that tax is also ethical from a utilitarian point of view[Similar examples can be given in which the tax also increases the total after-tax payoff of the players.]. §.§ Communication mechanisms Secret sharing was presented so far as optimal communication: Whenever a source player decides to share a secret the destination player immediately knows the value of that secret. There are many ways in which such perfect communication can be degraded. Firstly, the channel can be made interactive, requiring that the destination be open to receiving the secret, or else it does not matter if the source player chooses to share it. Secondly, the channel can be made noisy, such that when the origin shares a secret s the destination receives s̃ which is only equal to s with some probability. Thirdly, the bandwidth of the channel can be limited such that no more than k secrets can be shared even if the origin has n secrets she wishes to share. §.§.§ Interactive channel mechanism A real life example of making a channel interactive is the "fruit of the poisonous tree" doctrine <cit.>. According to that doctrine, the court (recipient) can choose not to legally know some facts which the prosecution (sender) decided to share about the defendant (subject). The doctrine is justified when ignoring those facts serves the greater social good of discouraging law enforcement from unlawful behavior. In the interactive channel mechanism example (Table <ref>) Bob prefers the Nash equilibrium ⟨ M, I, C⟩ which pays ⟨ 6, 7, 6⟩ over the Nash equilibrium ⟨ a?T:B, b?N:F,c?L:R ⟩ which pays ⟨ 9, 5, 12⟩ and over ⟨ a?T:B, a?N:F,c?L:R ⟩ which pays ⟨ 10, 6, 12⟩. If Bob learns Alice's secret then he can force the Nash equilibrium he prefers by sharing it with Carol. Knowing that, Alice would never choose to share her secret with Bob. But Bob might still (illicitly) observe it. Consider what would happen if the government enforces a communication mechanism by which Carol can choose whether she learns secrets Bob wants to share with her. Since Carol stands only to lose if she knows Alice's secret, she will choose not to learn it from Bob. When that is Carol's strategy Alice can safely share her secret with Bob, making ⟨ a?T:B, a?N:F,c?L:R ⟩ the payoff-optimal Nash equilibrium. Since the introduction of the mechanism increased the total payoff from 26 to 28, it is ethical from a utilitarian point of view. 
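As a small numeric check of the Information Ownership transfer described above, the following illustrative Python sketch uses only the payoffs quoted in the text for the Game of Information Ownership (the table itself is not reproduced here); the dictionary names are ours.

# Payoffs quoted above: <a?T:B, b?N:F, c?L:R> pays Alice 6 and Bob 1;
# the forced profile <M, I, C> pays Alice 2 and gains Bob 10 from Carol.
no_share = {"Alice": 6, "Bob": 1}
forced = {"Alice": 2, "Bob": 10}

transfer = 5  # Bob pays Alice 5 whenever their context plays (M, I)
with_ownership = {"Alice": forced["Alice"] + transfer,
                  "Bob": forced["Bob"] - transfer}

assert no_share["Alice"] > forced["Alice"]          # without the mechanism Alice keeps her secret
assert with_ownership["Alice"] > no_share["Alice"]  # with it, sharing pays Alice 7 > 6
assert with_ownership["Bob"] > no_share["Bob"]      # and Bob nets 5 > 1
print(with_ownership)  # {'Alice': 7, 'Bob': 5}

The transfer merely redistributes payoff within the profile, so the government-style constraint that a distributional mechanism cannot increase the total payoff is respected.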
§.§.§ Noisy channel Differential privacy <cit.> is possibly the best known example of a noisy channel mechanism. In the noisy channel mechanism example (Table <ref>) Alice can increase her expected payoff from 9 in the payoff-optimal Nash equilibrium ⟨ a?T:B,b?N:F,c?L:R⟩ to 10 in the payoff-optimal Nash equilibrium ⟨ a?T:B,a?N:F,c?L:R⟩ by sharing her secret with Bob. Bob, in turn, has no motivation to share Alice's secret with Carol because that would force the Nash equilibrium ⟨ M,I,C⟩ and reduce his payoff from 2 to 0. Assume the government enforces a noisy channel mechanism in which Bob can share not a but ã such that the chances that ã = a are 1/2+δ. Carol can still choose the strategy ã?L:R, rather than a?L:R in the given game. The payoff of the strategy profile ⟨ a?T:B,a?N:F,ã?L:R⟩ is ⟨ 10-16δ,2+12δ,8+28δ⟩. That payoff is higher for Bob and Carol than the payoff of ⟨ a?T:B, a?N:F,c?L:R⟩, and so long as 10-16δ > 4 it is also higher for Alice than her alternative gain if she chooses M. Hence, it is a payoff-optimal Nash equilibrium. The total payoff of this Nash equilibrium, 20+24δ, is higher than the total payoff of 18 that ⟨ a?T:B,a?N:F,c?L:R⟩ delivers. Therefore, the noisy channel mechanism is ethical from a utilitarian point of view. §.§.§ Bandwidth limitation Last, consider the same game which is described in Table <ref> but assume now Bob and Carol repeatedly play it with k different Alices. Each Alice has her own secret a_i and can independently choose her own strategy. Both Bob and Carol must choose one strategy each which simultaneously applies to all of the contexts. As in the noisy channel mechanism, each Alice can increase her expected payoff from 9 in the payoff-optimal Nash equilibria ⟨ a_i?T:B,b?N:F,c?L:R⟩ to 10 in the payoff-optimal Nash equilibria ⟨ a_i?T:B,a_i?N:F,c?L:R⟩ by sharing her secret with Bob. Bob, however, can share those secrets with Carol and force the Nash equilibria ⟨ M,a_i?N:F,a_i?L:R⟩ which pays Bob 6k rather than 2k. Since every Alice's payoff in ⟨ M,a_i?N:F,a_i?L:R⟩ is just 4, she will not share a_i with Bob in the first place. Assume that the government implements a communication mechanism which only allows Bob to share one bit with Carol. Bob can select that bit to be any of the secrets, or any function of the secrets. Specifically, consider Bob's strategy of sharing f such that f=0 if less than a fraction α of the a_i are 1, f=1 if more than 1-α are 1, and f=b, Bob's secret, if the fraction is between α and 1-α. With that strategy, the strategy profile ⟨ a_i?T:B,a_i?N:F,f?L:R⟩ is possible. The central limit theorem dictates that for a large enough k, the chance that the fraction of the a_i which are 1 falls below α or above 1-α is equal to the probability that a normally distributed variable N(0,1/4) falls outside the range (α,1-α). If α is taken to be 1/4 then with probability of approximately 68% (one σ) f=b. Thus, the expected payoff of every Alice in ⟨ a_i?T:B,a_i?N:F,f?L:R⟩ is higher than 68% of 10, which is higher than her alternative payoff if she chooses M. Since that strategy profile also increases Bob and Carol's payoff, it follows that if k is sufficiently large and α sufficiently small then ⟨ a_i?T:B,a_i?N:F,f?L:R⟩ is a Nash equilibrium. As in the noisy channel mechanism, the total expected payoff of this bandwidth limiting mechanism is higher than that of the original game, which means it is ethical from a utilitarian point of view. 
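The δ threshold implied by the noisy-channel analysis above can be checked mechanically. The short Python sketch below (ours) uses only the payoff expressions quoted in the text for ⟨a?T:B, a?N:F, ã?L:R⟩:

from fractions import Fraction

def noisy_payoffs(delta):
    # Payoffs of <a?T:B, a?N:F, a~?L:R> as quoted above.
    return (10 - 16 * delta, 2 + 12 * delta, 8 + 28 * delta)

# Alice prefers this profile over M (alternative gain 4) while
# 10 - 16*delta > 4, i.e. delta < 3/8; the total, 20 + 24*delta,
# always exceeds the 18 delivered by the noiseless channel.
for delta in (Fraction(1, 8), Fraction(1, 4), Fraction(3, 8)):
    a, b, c = noisy_payoffs(delta)
    print(delta, a > 4, a + b + c)
# prints: 1/8 True 23, then 1/4 True 26, then 3/8 False 29

In other words, the mechanism remains an equilibrium for Alice precisely when the channel is noisy enough (δ < 3/8), while the total welfare grows with δ.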
§ RELATED WORK Work related to this paper can largely be divided between that which aims to explain what privacy means and that which aims to explain how privacy can be retained. The first kind is more widely practiced by legal scholars <cit.>, sociologists <cit.>, economists <cit.> and a few exceptional scholars who thrive in more than one of those fields <cit.>. The second kind was mostly developed by technologists in the Computer and Data Sciences. §.§ Rigorous definitions of privacy A key objective of the philosophy of privacy is to provide a rigorous and useful definition of privacy. Gavison <cit.> described privacy as the meeting point of three vectors (knowledge, ability to affect, and attention to a person). Posner <cit.> placed privacy in a utilitarian economic framework, famously attesting that government regulation of privacy is an unnecessary intervention in a free market for information. While this claim was objected to almost immediately <cit.>, it was Kadane et al. <cit.> who finally refuted it by quantitatively showing that privacy does make perfect utilitarian sense in a game theoretic framework, if not in a Bayesian efficient market. One of the most important contributions to rigorously defining privacy was the development of Contextual Integrity by Nissenbaum <cit.>. Nissenbaum suggested inspecting privacy as the interaction of three social actors – subject, sender and recipient – in two separate social contexts: A social exchange in the first context leaves the sender with information about the subject. Then, privacy norms regulate the transfer of that information from the sender to the recipient in another social context. This work extends contextual integrity by placing it in a strict game theoretical setting. In such settings actors are rational decision makers who act to maximize their gains in an environment which contains secrets. Previous work on game theoretic privacy has built on Kadane et al. and has shown that privacy, like secrecy <cit.>, can be described as the strategy of players with respect to secrets. Anonymous <cit.> has shown that when one player has access to another player's secret, the first might still choose to respect the other's privacy. Thus, a privacy norm can emerge as the strategy in certain games. A similar result was later presented by Ulusoy and Pinar <cit.>, in a multi-player scenario. Still, Ulusoy and Pinar do not relate their result to contextual integrity, which is the main contribution of this work. The impact of privacy on games was studied by Gradwohl and Reingold <cit.> and Gradwohl and Smorodinsky <cit.>. The first work has shown that in multiplayer simultaneous games players would rather not expose their secret type. In the notation proposed here, this is related more to the concept of secrecy (Def. <ref>) than it is to privacy. The second paper begins by assuming players desire not to be identified. The authors then draw conclusions on the societal impact (pooling) of such games. This, however, assumes a privacy preference rather than explaining why such a preference may exist in the first place, as this paper does. §.§ Privacy preserving mechanisms The leading paradigm of privacy preserving mechanisms, initially proposed by Warner <cit.>, is that of a data collector who wishes to publish a statistic of the data of multiple subjects. Those subjects are concerned about the implications of the publication. The role of the mechanism is to contain that risk. 
Privacy preserving mechanisms can be divided into those which are operated by the data subject, without need to trust the collector, and those operated by a trusted collector. Warner's original work falls into the first category, as does most of the work on Secure Multiparty Computation <cit.>. Kantarcıoğlu et al. <cit.> and Anonymous <cit.> have independently identified that the outcome of the computation, rather than its security, might concern the data subjects. Anonymous offered a composition of security and k-anonymity as a solution <cit.>. Research on trusted collector mechanisms can be divided into noisy channel mechanisms (a.k.a. data perturbation) and mechanisms which rely on bandwidth limitation (aggregation). The first kind was initially suggested by Agrawal and Srikant <cit.> without proper analysis of the impact of noise on the players. Evfimievski et al. <cit.> were the first to try and quantify the impact of noise on the certainty of the recipient. Dwork, on her own <cit.> and with co-authors <cit.>, presented a full analysis of the impact of noise on the information transfer from the collector to the recipient. This included a proof of the infeasibility of zero leakage, a definition of the worst case model of Differential Privacy, and the first algorithms. Hundreds of studies, which cannot reasonably be surveyed here, have since validated the usefulness of differential privacy. Differential Privacy stops short of inspecting the impact of data leakage on the data subject. That analysis is partially fulfilled by Gilboa-Freedman and Smorodinsky <cit.>, who have shown that this impact can be different in different types of games. This paper follows a similar path. Gilboa-Freedman and Smorodinsky go beyond this work in inspecting the equivalence and non-equivalence of different privacy mechanisms. However, they focus primarily on preserving near-confidentiality whereas this work expands the analysis to other informational norms. Last, data aggregation was first proposed by Sweeney and Samarati <cit.> as a way of protecting data subjects from specific attacks by the recipient. Machanavajjhala et al. <cit.> developed a more elaborate concept of aggregation. Attempts to quantify the impact of k-anonymity have mostly drawn parallels to ϵ-Differential Privacy <cit.>. This paper proposes looking at multiparty games, rather than at noisy channels, as the adequate model in which the impact of k-anonymity may best be quantified. § DISCUSSION This paper presented a game theoretic transcription of Nissenbaum's contextual integrity model. Game theory allows explaining the existence of privacy norms in terms of players' payoff and social welfare. This utilitarian analysis of privacy is especially adequate when norms are set by a revenue maximizing corporation and its willing customers. One other benefit of a game theoretic model is that it allows analysis of privacy related situations in the sense of sufficiency and equivalence. It allows answering questions such as: How much information can a company share with advertisers before customers start changing their data sharing behavior? When is the threat of losing one's job as effective as a technology which limits one's access to individuals' data? How does one compare the risk of being identified on-line to other risks such as the risk of losing a key election? 
In terms of future research, we observe that the privacy preserving technologies which were developed in the last 30 years have focused on data subjects' control of their information and on data aggregator confidentiality. We hope that mathematical definitions of fiduciary transfer and of information ownership can lead to the development of technologies implementing those transmission principles as well.
http://arxiv.org/abs/2405.09906v1
20240516085424
Process-based Inference for Spatial Energetics Using Bayesian Predictive Stacking
[ "Tomoya Wakayama", "Sudipto Banerjee" ]
stat.ME
[ "stat.ME", "stat.AP", "stat.CO" ]
The University of Tokyo / University of California, Los Angeles Rapid developments in streaming data technologies have enabled real-time monitoring of human activity that can deliver high-resolution data on health variables over trajectories or paths carved out by subjects as they conduct their daily physical activities. Wearable devices, such as wrist-worn sensors that monitor gross motor activity, have become prevalent and have kindled the emerging field of “spatial energetics” in environmental health sciences. We devise a Bayesian inferential framework for analyzing such data while accounting for information available on specific spatial coordinates comprising a trajectory or path using a Global Positioning System (GPS) device embedded within the wearable device. We offer full probabilistic inference with uncertainty quantification using spatial-temporal process models adapted for data generated from “actigraph” units as the subject traverses a path or trajectory in their daily routine. Anticipating the need for fast inference for mobile health data, we pursue exact inference using conjugate Bayesian models and employ predictive stacking to assimilate inference across these individual models. This circumvents issues with iterative estimation algorithms such as Markov chain Monte Carlo. We devise Bayesian predictive stacking in this context for models that treat time as discrete epochs and that treat time as continuous. We illustrate our methods with simulation experiments and analysis of data from the Physical Activity through Sustainable Transport Approaches (PASTA-LA) study conducted by the Fielding School of Public Health at the University of California, Los Angeles. Process-based Inference for Spatial Energetics Using Bayesian Predictive Stacking Tomoya Wakayama and Sudipto Banerjee May 20, 2024 =================================================================================== § INTRODUCTION Spatial energetics is a rapidly emerging area in biomedical and health sciences that aims to examine how environmental characteristics, space, and time are linked to activity-related health behaviors <cit.>. Examples include, but are not limited to, using data from wearable devices as biomarkers and risk factors in studying adverse health outcomes for respiratory health. Inferential objectives for spatial energetics comprise two exercises: (i) estimate measured health variables, typically related to metabolic activities, over paths or trajectories traversed by subjects as they conduct their daily physical activities; and (ii) predict the health variables for a subject at arbitrary trajectories. Spatial-temporal process models seem a natural choice as they use space-time coordinates from Global Positioning Systems (GPS) embedded within actigraph units. Some salient features of spatial energetics require consideration. Unlike in some clinical studies associated with mobile health data where only the temporal nature of streaming data is of inferential interest, here inferential interest centers around estimation and prediction of metabolic measurements over arbitrary spatial trajectories or paths. 
This differs from customary geostatistics and spatial-temporal data analysis <cit.> where statistical inference proceeds from spatial-temporal processes {w(s,t) : s∈𝒮, t∈𝒯}, where 𝒮⊂ℝ^d with d=2 or 3 and 𝒯⊂ℝ^+∪{0}. For mobile health applications, the spatial domain is typically an arbitrary string of spatial coordinates defining the path or trajectory traversed by the subject. This trajectory is completely arbitrary and need not enjoy mathematically attractive features as are available for Riemannian manifolds to carry out inference <cit.>. Furthermore, treated as continuously evolving over time, the spatial coordinates are best considered as functions of time. The rapidly emerging literature on statistical analysis of streaming wearable device data has almost exclusively focused on longitudinal models and purely temporal processes <cit.>. Their analytical objectives are not concerned with the spatial attributes of trajectories. Given the nature of streaming spatial-temporal data, we explore models that treat (i) space as continuous and time as discrete, and (ii) both space and time as continuous. The terms “discrete” and “continuous” describe whether inference is sought at the same scale where the data are available or at arbitrary resolutions. The former is usually analyzed using spatial-temporal dynamic models <cit.>, while the latter employs spatial-temporal processes specified using appropriate covariance kernels. Gaussian processes are conspicuous in spatial-temporal data analysis, but have largely focused on Euclidean domains and, more generally, on compact Riemannian manifolds <cit.>. Mobility data, on the other hand, arise from completely arbitrary trajectories that do not satisfy the conditions on manifolds. Related literature on non-Euclidean domains includes <cit.>, <cit.> and <cit.>, who considered spatial modeling of data from rivers and streams, <cit.>, who reviewed spatial (discrete & continuous) temporal modeling of the trajectory of animal movement, and <cit.>, who analyzed mobility trajectories under flight-pause models. Our study differs from the aforementioned studies. While flight-pause models and animal movement models seek inference on the evolution of the path itself, we seek to analyze data on variables of interest that have been collected at high resolutions on subjects moving along trajectories. Second, unlike the modeling of phenomena on fixed geographic structures, such as river networks, where the domain of interest is modeled effectively as a fixed graph, our inferential interest lies in predicting the variables over arbitrary trajectories over whose shape or structure we have no control. We seek fully model-based uncertainty quantification in our inference, which will include predicting variables on hypothetical paths that have not generated any data yet. We underscore the inferential difficulties inherent in such models. Most notably, the stochastic process parameters are not identifiable and are difficult to estimate from finite realizations of the process <cit.>. This is manifested by poor convergence of samples from iterative Markov chain Monte Carlo (MCMC) algorithms or numerical instabilities in other iterative algorithms such as Integrated Nested Laplace Approximations or Variational Bayes <cit.>. 
Therefore, we develop and execute a computationally efficient Bayesian stacking approach for fast inference that relies on fixing some parameters to achieve analytically tractable distribution theory and then “stacks” over these analytical posterior distributions to obtain an averaged posterior <cit.>. The balance of our paper proceeds as follows. Section <ref> provides an overview of Bayesian hierarchical models that treat space as continuous and time as discrete. In particular, we show how to adapt familiar Bayesian dynamic linear models <cit.> to actigraph data. Section <ref> develops a spatial-temporal process model <cit.> that treats space and time as continuous. Section <ref> develops predictive stacking algorithms for the models we develop by averaging over sets of conjugate Bayesian models with accessible posterior distributions. Section <ref> collects some results on distribution theory and offers some theoretical insights. Sections <ref> and <ref> present simulation experiments and an illustrative application, respectively, for our methods. Section <ref> concludes the article with a discussion and pointers to future research. § CONTINUOUS SPACE AND DISCRETE TIME MODELS Broadly speaking, spatial-temporal models are classified according to whether space and time are modeled as continuous or discrete processes. Bayesian dynamic linear models, or DLMs <cit.>, are widely employed for analyzing temporal data by modeling time over a countable set of integers and space using a continuous random field evolving over the time steps. We first show how such models can be effectively employed for actigraph data. §.§ Spatial-temporal Bayesian DLMs Actigraph data, while by nature streaming in a continuum, are often recorded over a discrete set of epochs. Each epoch consists of a time-interval that can range from a few seconds to hours, or even days, depending upon the application. Let 𝒯 = {1,2,…,T} be a finite set of labels for epochs and y_t be an n_t× 1 vector consisting of measurements recorded by the actigraph at time t. A fairly flexible process-based model posits that y_t = X_tβ_t + z_t + η_t for each t∈𝒯, where X_t is an n_t× p matrix of explanatory variables, β_t is the corresponding p× 1 vector of slopes that depend on t and z_t is n_t× 1 consisting of random effects accounting for other extraneous effects at time t. We construct the Bayesian DLM as y_t = F_tθ_t + η_t, η_t ∼ N(0,σ^2 V_t) independently; θ_t = G_tθ_t-1 + η_θ,t, η_θ,t ∼ N(0, σ^2 S_t) independently, where F_t = [ X_t I_n ] is n_t× (p+n_t), and θ_t = [ β_t^⊤ z_t^⊤ ]^⊤ is (p+n_t)× 1. We specify the prior distributions σ^2∼ IG(n_σ/2, n_σ s_σ/2) and θ_0 |σ^2 ∼ N(m_0, σ^2 S_0) so that the joint distribution is from the Normal-IG family. The quantities G_t, n_σ, s_σ, m_0 and S_0 are constants, while {θ_t,σ,V_t,S_t} are unknown parameters. Two adaptations are relevant for spatial energetics. First, y_t consists of n_t measurements recorded at epoch t independently over a group of subjects. The collection of measurements {y_t : t∈𝒯} is called an actigraph time-sheet, where each epoch also provides values for the elements of X_t. Our available data, therefore, are {(y_t, X_t) : t ∈𝒯}. 
While each epoch implicitly contains information on the spatial locations for the subjects' measurements, the inferential goals for these population-level studies do not entail spatial attributes and, instead, are concerned with inferring about relationships between metabolic measurements (representing levels of physical activity) and environmental variables (representing green spaces, climate and weather, local topography, nature of activity being performed by the subject, etc.). It is reasonable to assume that V_t is diagonal since measurements on subjects are taken independently of each other and shared features across subjects are accounted for with explanatory variables and random effects in F_t. The covariance matrix for the elements of θ_t, S_t, is assumed to be diagonal if latent associations are adequately accounted for by G_t. Alternative models could specify S_t from design considerations or model it using an appropriate prior distribution <cit.>. The second adaptation of (<ref>) applies to actigraph data on a single subject. Now y_t(s) represents the rendered value of a variable of interest observed on a given subject at epoch t∈𝒯 and s is the spatial coordinates of the subject at that epoch. Our process model specifies y_t(s) = x_t(s)^⊤β_t + z_t(s) + η_t(s), where x_t(s) is a p× 1 vector consisting of p explanatory variables, β_t is the corresponding p× 1 vector of time-varying slopes, z_t(s) is a zero-centered stochastic process, and η_t(s) ∼ N(0,σ^2) i.i.d. Therefore, x_t(s)^⊤β_t represents the time-varying trend while z_t(s) models temporal evolution with spatial dynamics. Let χ = {s_i | i=1,…,n }⊂ℝ^2 be a finite set of distinct spatial locations where y_t(s) has been measured. Then, y_t and z_t are n× 1 vectors with elements y_t(s_i) and z_t(s_i), respectively, X_t is n× p with rows x_t(s_i)^⊤, and S_t = [ δ_β^2 I_p O; O δ_z^2 K_ϕ(χ) ], where K_ϕ(χ) = (K_ϕ(s_i,s_j))_s_i,s_j∈χ is n× n with elements K_ϕ(s_i, s_j) evaluated using a spatial correlation kernel with parameters ϕ, and δ_z^2 and δ^2_β act as relative variance scales with respect to σ^2. The model in (<ref>) readily facilitates comprehensive inference through MCMC or forward filtering-backward sampling <cit.>. However, with high-dimensional parameters, these methods require substantial computational resources that render their practical application challenging or even infeasible. Consequently, we devise Bayesian predictive stacking that exploits analytically tractable posterior distributions. §.§ Dynamic trajectory model A salient feature of actigraph data encoded with spatial positioning is that the locations themselves are functions of time. <cit.> and <cit.> have, therefore, modeled mobile data as processes primarily evolving over time, with the latter accounting for spatial variation using splines. Models that introduce spatial-temporal associations will need to construct processes over the collection of points {(γ(t),t) : t ∈𝒯}, where 𝒯⊂ℝ^+∪{0} and γ : 𝒯→ℝ^2. Consider a single subject who has worn an actigraph unit that has recorded measurements at each time point t. Typically, such data are received as averages over discrete epochs, so we define our temporal domain 𝒯 = {1,2,…,T} as a finite set of epochs spanning the entire duration of data collection from the device. 
Designing the data collection from a wearable device is, by itself, a meticulous exercise that needs to account for various extraneous factors including, but not limited to, the technologies of accelerometers as well as the specific clinical study under consideration <cit.>. Here, we will concern ourselves with Bayesian inference. Let y_t(γ(t)) denote the measurement on a given subject at time t and location γ(t) for each t=1,…,T, γ(t)∈ℝ^2, and consider the following regression model: y_t( γ(t) ) = x_t(γ(t))^⊤β_t + w_t(γ(t)) + η_1t(γ(t)), η_1t(γ(t)) ∼ N(0,σ^2) i.i.d., t=1,…,T, where x_t(γ(t)) is a p-dimensional explanatory variable, β_t is a p-dimensional time-varying regression coefficient, w_t(γ(t)) and η_1t are zero-centered spatial and white noise processes, respectively, at time t, and σ^2 is the variance of the white noise process. The domain of the process w_t(·) is not Euclidean, but an arbitrary trajectory defined by a string of coordinates mapped by γ(t). Furthermore, the subject may revisit the same location a number of times, yielding multiple values at the same location γ(t), making the spatial covariance matrix singular and, therefore, precluding legitimate probabilistic inference. We obviate this as follows. Let Γ = {γ(1),…,γ(T) } be a complete enumeration of spatial locations visited by the subject, of which n ≤ T are distinct spatial locations, and let Γ̃ = {γ̃_1,…,γ̃_n}⊆Γ be the subset of distinct locations. We define a latent process z_t(γ(t)) and the map w_t(γ(t)) = ∑_j=1^n b(γ(t), γ̃_j) z_t(γ̃_j), where b(γ(t),γ̃_j) : Γ×Γ̃→{0,1} is such that b(γ(t),γ̃_j) = 1 if γ(t) = γ̃_j and 0 otherwise. This yields w = Bz, where w = (w_1(γ(1)),…,w_T(γ(T)))^⊤ is T× 1, z = (z_1^⊤,…,z_T^⊤)^⊤ is nT× 1 with each z_t = (z_t(γ̃_1),…,z_t(γ̃_n))^⊤ being n× 1, and B is T× nT, whose (t,n(t-1)+j)th entry is b(γ(t),γ̃_j). This formulates a map of the latent spatial effects across all spaces and times, z, to those at the observed points, w. If β = (β_1^⊤,…,β_T^⊤)^⊤, then temporal autoregressive models for β and z are specified as β = (A ⊗ I_p)β + η_2 and z = (A⊗ I_n)z + η_3, respectively, where η_2 ∼ N(0,σ^2δ_β^2 (I_T ⊗ W_p)) and η_3 ∼ N(0,σ^2δ_z^2(I_T⊗ K_ϕ)), A = [ 0^⊤ 0; I_T-1 0 ]∈ℝ^T× T, W_p∈ℝ^p× p is a correlation matrix among the coefficients and ⊗ denotes the Kronecker product. We construct the following augmented model, Y = Xθ + η, η∼ N(0, σ^2 S), where Y = [ y; 0; 0 ] is (1+p+n)T× 1, X = [ ⊕_t=1^T x_t(γ(t))^⊤ B; I_pT O; O I_nT ] is (1+p+n)T× (p+n)T, θ = [ β; z ] and ⊕ is the block-diagonal matrix operator, so ⊕_t=1^T(x_t(γ(t))^⊤) is T× pT block diagonal with the x_t(γ(t))'s along the diagonal. Furthermore, S = I_T ⊕{δ_β^2 (I_pT - A⊗ I_p)^-1 (I_T⊗ W_p)(I_pT - A^⊤⊗ I_p)^-1}⊕{δ_z^2 (I_nT - A⊗ I_n)^-1 (I_T⊗ K_ϕ)(I_nT - A^⊤⊗ I_n)^-1}. We introduce the prior distribution σ^2∼ IG(a_σ, b_σ), where a_σ and b_σ are fixed rate and scale parameters for the inverse-Gamma distribution. W_p is assumed to be known and is taken as the identity matrix in the later experiments. The prior distribution for θ is absorbed into (<ref>) and fixing the values of {δ_β, δ_z, ϕ} yields the familiar Normal-IG conjugate posterior distribution for {θ,σ^2}, which is utilized in predictive stacking. § CONTINUOUS SPACE AND CONTINUOUS TIME TRAJECTORY MODEL We can treat actigraph data as a partial realization of a continuous spatial-temporal process. We write y(γ(t),t) to be the measurement that can exist, at a conceptual level, at any time t ∈ℝ^+ and at a continuous geographic point γ(t) ∈ℝ^2 at t. 
§ CONTINUOUS SPACE AND CONTINUOUS TIME TRAJECTORY MODEL

We can treat actigraph data as a partial realization of a continuous spatial-temporal process. We write y(γ(t),t) to be the measurement that can exist, at a conceptual level, at any time t ∈ℝ^+ and on a continuous geographic point γ(t) ∈ℝ^2 at t. We define the following regression model over space-time coordinates (γ(t),t) generated by a finite collection of n time points, y(γ(t),t) = x(γ(t),t)^β(t) + z(γ(t),t) + η_1(t), η_1(t) i.i.d.∼ N(0,σ^2), where x(γ(t),t) is p× 1 consisting of explanatory variables, β(t) is the corresponding p× 1 vector of slopes, z(γ(t),t) is a zero-centered spatial-temporal process and η_1(t) is the measurement error distributed as a zero-centered Gaussian distribution with variance σ^2. For the spatial-temporal process, we consider the following structure: z(γ(t),t) ind∼ GP(0, σ^2δ_z^2K_ϕ ), where the correlation kernel is K_ϕ((γ(t),t), (γ(t'),t') ) = 1/(ϕ_1 |t-t'|^2+1) exp( -ϕ_2 ‖γ(t)-γ(t')‖/√(1+ϕ_1 |t-t'|^2)), ϕ_1,ϕ_2∈ℝ^+, which represents the spatial-temporal correlation of the data. For each regression coefficient, we consider the following process: β_j(t) ind∼ GP(0, σ^2δ_β^2C_ξ ), for j=1,…,p, with the temporal correlation kernel C_ξ(t,t') = exp(-ξ^2|t-t'|^2). We note that K_ϕ is a positive-definite kernel, ensuring that the stochastic process (<ref>) is well-defined. In particular, it is crucial to note that even when γ(t)=γ(t'), i.e., the subject returns to the same location at a later time, the function ( 1+ϕ_1 |t-t'|^2)^-1 is positive-definite. This is seen by noting that ( 1+ϕ_1 |t-t'|^2)^-1 =∫_0^∞ e^- u ( 1+ϕ_1|t-t'|^2) du. Because |t-t'|^2 and 1 are conditionally negative-definite, the integrand is positive-definite from Schoenberg's theorem <cit.>, which implies that (1+ ϕ_1 |t-t'|^2)^-1 is also positive-definite. Assume that we observe y within finite space-time points (Γ,) = { (γ(t_i), t_i) | i=1,…,n }⊂ℝ^2×ℝ^+. Then, (<ref>),(<ref>) and (<ref>) for n space-time points result in the linear system: [ y; 0; 0 ]_Y = [ [x_1|⋯|x_p ] I_n; [I_n |⋯| I_n ] O; O I_n ]_X[ β_1; ⋮; β_p; z ]_θ + η, η∼ N(0, σ^2 S), where each x_j∈^n× n is diagonal with entries x_j(γ(t_i),t_i) for i=1,…,n and j=1,…,p; X is 3n × (p+1)n, θ is (p+1)n× 1 consisting of the pn× 1 vector of regression coefficients β = (β_1^,…,β_p^)^ and the n× 1 vector z = (z(γ(t_1),t_1),…,z(γ(t_n),t_n))^, K_ϕ(Γ,) = (K_ϕ((γ(t_i),t_i), (γ(t_j),t_j) )) and C_ξ() = (C_ξ (t_i,t_j)) are both n× n, where i,j=1,…,n and S = I_n ⊕(I_p⊗δ_β^2 C_ξ()) ⊕δ_z^2 K_ϕ(Γ,). We further assign the prior distribution σ^2∼ IG(a_σ, b_σ), where a_σ and b_σ are fixed rate and scale parameters for the inverse-Gamma distribution. As in (<ref>), the prior distribution for θ is absorbed into (<ref>) and the posterior distribution for {θ,σ^2} for any fixed set {δ_β, δ_z, ξ, ϕ} is in the Normal-IG family. We exploit these familiar distributions to devise stacked inference for the processes β_j(t) and z(γ(t),t).
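The kernels K_ϕ and C_ξ above are simple to evaluate; the following sketch (with illustrative names of our own choosing) computes them and the n× n Gram matrix K_ϕ(Γ,𝒯) that enters S:

```python
import numpy as np

def K_phi(g1, t1, g2, t2, phi1, phi2):
    """Space-time correlation kernel of this section:
    K((g,t),(g',t')) = (1 + phi1 |t-t'|^2)^{-1}
                       * exp(-phi2 ||g-g'|| / sqrt(1 + phi1 |t-t'|^2))."""
    u = 1.0 + phi1 * (t1 - t2) ** 2
    r = np.linalg.norm(np.asarray(g1, float) - np.asarray(g2, float))
    return np.exp(-phi2 * r / np.sqrt(u)) / u

def C_xi(t1, t2, xi):
    """Temporal kernel for the slope processes: exp(-xi^2 |t-t'|^2)."""
    return np.exp(-(xi ** 2) * (t1 - t2) ** 2)

def gram(points, phi1, phi2):
    """n x n matrix K_phi(Gamma, T) over space-time points [(g_i, t_i)]."""
    n = len(points)
    K = np.empty((n, n))
    for i, (gi, ti) in enumerate(points):
        for j in range(i, n):
            gj, tj = points[j]
            K[i, j] = K[j, i] = K_phi(gi, ti, gj, tj, phi1, phi2)
    return K
```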
§ PREDICTION VIA STACKING

We exploit the analytical closed forms for the posterior distributions and carry out inference using Bayesian stacking <cit.>. In both the discrete-time and continuous-time trajectory models, we are able to obtain closed-form posterior distributions if we fix some hyperparameters in the spatial-temporal covariance structures. We consider a collection of G models, {ℳ_1,…,ℳ_G}, where each ℳ_g is specified by fixing a set of parameters such that the corresponding posterior distribution given dataset , p_g(· ), is in closed form. To be specific, for the discrete time trajectory model in (<ref>), the posterior distribution for ℳ_g is p_g(θ,σ^2 |, δ^2_g, ϕ_g) = IG(σ^2 | a_σ^*, b_σ^*)× N(θ | m, σ^2 Σ) , where δ^2_g = {δ^2_β,g, δ^2_z,g} and ϕ_g are the fixed values of these parameters for ℳ_g. The posterior predictive distribution p_g(y_T+1 ( γ(T+1) )|) and the one for the latent process p_g(z_T+1) are both t-distributions with degrees of freedom, mean and scale supplied in Section <ref>. Similarly, for the continuous time model in (<ref>), the posterior distribution for ℳ_g is p_g(θ,σ^2 |, δ^2_g, ϕ_g, ξ_g ) = IG(σ^2 | a_σ^*, b_σ^*)× N(θ | m, σ^2 Σ) , where δ^2_g = {δ^2_β,g, δ^2_z,g}, ϕ_g = {ϕ_1,g ,ϕ_2,g} and ξ_g are the fixed values of these parameters for ℳ_g. Further, the posterior predictive distributions of y( γ(t_0), t_0 ) and z(γ(t_0),t_0) at the new time point t_0 are calculated from t-distributions, where the details of their arguments are provided in Section <ref>.

§.§ Predictive stacking of means

We divide the dataset into training data _train and validation data _valid. We denote the predictive random variable by ỹ_t(γ(t)) and ỹ(γ(t),t) at any given t for the discrete and continuous time settings, respectively. For the discrete model in (<ref>), we calculate the posterior predictive mean 𝔼_g [ỹ_t(γ(t)) |_train ] for each time point t in the validation dataset, where 𝔼_g[·] is the expectation with respect to the predictive density p_g(ỹ_t(γ(t)) |_train). Specifically, if ỹ_t is the vector with elements ỹ_t(γ(t)) and X̃_t is the matrix with rows x_t(γ(t))^ for each γ(t)∈_valid, then 𝔼_g [ỹ_t |_train] = X̃_tβ̂_t-1 + C_z0^C_z^-1ẑ_t-1, where β̂_t-1, C_z0, C_z and ẑ_t-1 are described in Proposition <ref> of Section <ref>. We write 𝔼_g [ỹ_t(γ(t)) |_train] to denote the element corresponding to γ(t) ∈_valid in (<ref>). Likewise, in the continuous time model in (<ref>), the posterior predictive mean is 𝔼_g [ỹ(γ(t), t) |_train] for each t in the validation set with 𝔼_g[·] defined with respect to p_g(ỹ(γ(t),t) |_train), which is available in closed form as 𝔼_g [ỹ(γ(t),t) |_train] = ∑_j=1^px_j,0 C_β0^C_β^-1β̂_j + C_z0^C_z^-1ẑ, where β̂_j, C_β 0, C_β and ẑ are defined in Proposition <ref> of Section <ref>. Predictive stacking calculates the optimal weights to be used for model averaging. For stacking of means, we predict using ∑_g=1^G a_g 𝔼_g [·|_train], where a_1,…,a_G are the weights for model averaging selected from Δ = {{a_g}_g=1^G |∑_g=1^Ga_g=1, a_g≥ 0 }, which yields a simplex of predictions on the candidate models {ℳ_g}_g=1^G. We determine the optimal weights {â_g}_g=1^G using the validation dataset as argmin_a_1,…,a_G∑_y∈_valid(y - ∑_g=1^G a_g 𝔼_g [ỹ|_train] )^2, where the sum is over all values of the outcome in the validation dataset. This is a quadratic programming problem <cit.>. The obtained weights are subsequently used to predict the outcomes using the stacked mean ∑_g=1^G â_g 𝔼_g [ỹ], where ỹ corresponds to a specified t_0 in the sequence of time-points in the discrete-time case, while in continuous time ỹ represents the value y(γ(t_0),t_0) for an arbitrary t_0∈ℝ^+. Algorithm <ref> summarizes these steps.
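The weight search over the simplex Δ can be carried out with any constrained optimizer. The sketch below is illustrative rather than the authors' implementation (which uses quadratic programming here and, for the log-score criterion developed in the next subsection, an adaptive barrier method); it uses SLSQP for both the means criterion above and the density criterion to come:

```python
import numpy as np
from scipy.optimize import minimize

def _simplex_opt(obj, G):
    """Minimize obj(a) over the simplex {a >= 0, sum(a) = 1} via SLSQP."""
    cons = ({'type': 'eq', 'fun': lambda a: np.sum(a) - 1.0},)
    res = minimize(obj, np.full(G, 1.0 / G), method='SLSQP',
                   bounds=[(0.0, 1.0)] * G, constraints=cons)
    return res.x

def stack_mean_weights(y_valid, pred_means):
    """Stacking of means: pred_means is (n_valid, G), with column g
    holding E_g[y | D_train] at the validation points."""
    return _simplex_opt(lambda a: np.sum((y_valid - pred_means @ a) ** 2),
                        pred_means.shape[1])

def stack_density_weights(pred_dens):
    """Stacking of distributions: pred_dens is (n_valid, G), with column g
    holding the predictive density p_g(y | D_train) at the validation points."""
    return _simplex_opt(lambda a: -np.sum(np.log(pred_dens @ a + 1e-300)),
                        pred_dens.shape[1])
```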
§.§ Predictive stacking of distributions

Spatial energetics seeks full predictive inference on the trajectories entailing interpolation of the latent process at arbitrary points, which subsequently drives predictions for the outcomes. We achieve this by stacking the posterior predictive distributions for each ℳ_g using _train, each of which is a multivariate t-distribution. Similar to the stacking of means, we seek optimal weights over the simplex Δ. Stacking maximizes the score function S ( ∑_g=1^G a_g p_g(·|_train) , q_t(·|_train)) to obtain the weights, where q_t is a posterior distribution with true underlying parameters. If we employ a logarithmic score, corresponding to the Kullback–Leibler divergence <cit.>, the weights are obtained as argmax_a_1,…,a_G∑_y∈_validlog( ∑_g=1^G a_g p_g( y |_train) ), where the logarithm acts on the pseudo-posterior probabilities given the weighted models. Thus, we define the distributional prediction by maximizing the pseudo-log joint posterior probability. Note that p_g(·|_train) is a multivariate t-distribution, and hence, evaluating the posterior probability of the validation data is readily available. This optimization problem can be solved as a linearly constrained problem via an adaptive barrier algorithm <cit.>. Algorithm <ref> presents the steps involved in stacking of predictive densities. All the G candidate models in Algorithms <ref> and <ref> can be computed in parallel, and the cost of computing the weights is negligible compared to that of the posterior distributions. Further, the optimization is supported by many packages in various statistical programming languages. In particular, for the subsequent illustrations we employed <cit.> in the statistical computing environment. By contrast, MCMC demands a substantial number of iterations for convergence, and the issue is exacerbated with larger values of n and T.

§.§ Reconstructing stacked posterior distributions

Once the stacking weights are calculated from either Algorithm <ref> or <ref>, we use them to reconstruct the posterior distributions of interest as p(·|) = ∑_g=1^G â_g p_g(·|) , where · represents the inferential quantity of interest. This embodies stacked inference for {θ,σ^2} in (<ref>) and (<ref>), predictions of the outcome y_t(γ(t)) at a future time point on a given trajectory or y(γ(t),t) for any arbitrary time point on a trajectory, and inference for the latent process z_t(γ(t)) or z(γ(t),t) in the discrete and continuous time settings, respectively.

§ THEORETICAL PROPERTIES

§.§ Distribution theory for Bayesian DLMs

Fixing δ_β,δ_z,ϕ yields familiar posterior distributions for θ_t and σ^2, which facilitate stacking. Here, we collect the key recursion equations customarily used in calculating the posterior distribution for (<ref>). Consider the model in (<ref>). Let _t denote all the data obtained until time t. Assume σ^2|_χ,t-1∼ IG(n_t-1/2, n_t-1s_t-1/2) and θ_t-1|σ^2, _χ,t-1∼ N(m_t-1,σ^2W_t-1). If δ_β,δ_z,ϕ,G_t are fixed, the following distributional results hold for t ≥ 1: σ^2 |_t ∼ IG( n_t/2,n_t s_t/2) ; θ_t|σ^2, _t∼ N(m_t,σ^2 W_t), where n_t = n_t-1+n, n_ts_t = n_t-1s_t-1 + (y_t - f_t )^Q_t^-1(y_t - f_t ), f_t = F_t G_t m_t-1, Q_t = F_t R_t F_t^ + I_n, m_t = G_tm_t-1 + R_t F_t^ Q_t^-1 (y_t-f_t), R_t = G_tW_t-1G_t^ + S and W_t = R_t - R_tF_t^Q_t^-1F_tR_t. The marginal posterior distribution of θ_t is t_n_t(m_t,s_tW_t). Propositions <ref> and <ref> provide the spatial and temporal posterior predictive distributions. Consider the setup for the model in (<ref>) adapted for spatial data over n locations χ = {s_1,…, s_n}. Let χ_0 be a set of n_0 locations where we seek to predict y_t(s) and X̃_0 be an n_0× p matrix of explanatory variables with rows x_t^(s) for s∈χ_0.
If y_0 and z_0 denote the n_0× 1 random variables corresponding to y_t(s) and spatial effects z_t(s) for all s∈χ_0, then the posterior predictive distributions are y_0|θ_t, z_0 , σ^2, _t ∼ N(X̃_0θ_t,(1:p) + z_0, σ^2 I_n_0 +σ^2 X̃_0 W_t,(1:p,1:p)X̃_0^), z_0|θ_t, σ^2, _t ∼ N(C_0^C^-1θ_t,(p+1:p+n), σ^2 (C_00 - C_0^C^-1C_0) ), where m_t,(1:p), m_t,(p+1:p+n) are the first p elements and the remaining elements of m_t, W_t,(1:p,1:p) is the top-left p× p square of W_t, C = (δ_z^2K_ϕ(s,s'))_s,s'∈χ, C_0=(δ_z^2K_ϕ(s,s_0))_s∈χ,s_0∈χ_0 and C_00 = (δ_z^2K_ϕ(s_0,s'_0))_s_0,s_0'∈χ_0. Combined with the result in Proposition <ref>, the marginal predictive distribution for y_t0 is t_n_t( X̃_0m_t,(1:p) + C_0^C^-1m_t,(p+1:p+n), s_t( I_n_0 + X̃_0 W_t,(1:p,1:p)X̃_0^+ C_00 - C_0^C^-1C_0) ). Consider the assumptions in Proposition <ref>. The one-step ahead forecast distribution for the state vector and the corresponding one-step ahead predictive distribution are y_t+1|θ_t+1, σ^2, _t ∼ N(F_t+1θ_t+1 , σ^2 I_n) and θ_t+1|σ^2, _t ∼ N(G_t+1m_t, σ^2 (G_t+1 W_t G_t+1^ + S_t+1) ), respectively. The marginal predictive distribution for y_t+1 is t_n_t( F_t+1G_t+1m_t, s_t (I_n+ F_t+1( G_t+1 W_t G_t+1^ + S_t+1)F_t+1^) ). A general h-step ahead forecast can be obtained using recursive calculations.

§.§ Distribution theory for discrete time trajectory model

Fixing δ_β, δ_z, and ϕ produces accessible posterior distributions for the trajectory regression model in (<ref>), which facilitates predictive stacking as discussed in Section <ref>. We present these posterior distributions below. The posterior distribution of (θ,σ^2) in (<ref>) and (<ref>) is given by p(θ,σ^2 |) = p(σ^2 |)× p(θ|σ^2, ) = IG(σ^2 | a_σ^*,b_σ^* ) × N(θ | m, σ^2 Σ), where a_σ^* = a_σ + T/2, b_σ^* = b_σ + (Y-Xm)^ S^-1 (Y-Xm), m=Σ X^S^-1Y and Σ^-1 = X^ S^-1 X. The marginal posterior distribution of θ is t_2a_σ^*(m, (b_σ^*/a_σ^*)Σ). The following proposition provides the posterior predictive distributions for future points on a trajectory in the discrete time setup. Consider the setup leading to (<ref>) and Proposition <ref>. Let Γ_0 = {γ(1),…,γ(T+1)} be an enumeration of spatial locations and Γ̃_0= {γ̃_1,…,γ̃_n_0}⊆Γ_0 be the set of n_0 distinct locations. Given a dataset obtained up to time T, the posterior predictive distributions at time T+1 in (<ref>) and (<ref>) are y_T+1 ( γ(T+1) )|β_T+1,z_T+1, σ^2, ∼ N(x_T+1(γ(T+1))^β_T+1 + B̃ z_T+1,σ^2 ), z_T+1|θ, σ^2, ∼ N( C_z0^C_z^-1z_T , σ^2(C_z00 - C_z0^C_z^-1C_z0)), β_T+1|θ, σ^2, ∼ N( β_T ,σ^2δ_β^2 W_p ), where C_z = (δ_z^2K_ϕ(γ,γ'))_γ,γ'∈Γ̃, C_z0=(δ_z^2K_ϕ(γ,γ_0))_γ∈Γ̃,γ_0∈Γ̃_0, C_z00 = (δ_z^2K_ϕ(γ_0,γ'_0))_γ_0,γ_0'∈Γ̃_0 and B̃ = (b(γ(T+1),γ̃_j))_j is the 1 × n_0 vector constructed from the kernel b(·,·) defined in Section <ref>. The marginal predictive distribution for y_T+1 ( γ(T+1) ) is t_2a_σ^* ( x_T+1(γ(T+1))^β̂_T + B̃C_z0^C_z^-1ẑ_T , (b_σ^*/a_σ^*) (1 + δ_β^2 x_T+1(γ(T+1))^ W_p x_T+1(γ(T+1)) +B̃( C_z00 - C_z0^C_z^-1C_z0)B̃^ )), where β̂_T and ẑ_T are the posterior means calculated as m in Proposition <ref>.

§.§ Distribution theory for continuous time trajectory model

To exploit familiar results concerning (<ref>) that are used for stacking, as discussed in Section <ref>, we fix δ_β,δ_z,ξ and ϕ. The analytical posterior distributions are described below. The posterior distribution of (θ,σ^2) in (<ref>)–(<ref>) is given by p(θ,σ^2 |) = p(σ^2 |)× p(θ|σ^2, ) = IG(σ^2 | a_σ^*,b_σ^* ) × N(θ | m , σ^2 Σ), where a_σ^* = a_σ + n/2, b_σ^* = b_σ + (Y-Xm)^ S^-1 (Y-Xm), m=Σ X^S^-1Y and Σ^-1 = X^ S^-1 X.
The marginal posterior distribution of θ is t_2a_σ^*(m, (b_σ^*/a_σ^*)Σ). The posterior predictive distributions for new points on a trajectory are obtained as follows. Consider the setup leading to (<ref>) and Proposition <ref>. Let (Γ_0, _0) be the collection of n_0 new space-time points on a trajectory, x_j,0 be an n_0× n_0 explanatory matrix at (Γ_0, _0) for j=1,…,p, y_0 and z_0 be n_0× 1 random variables corresponding to y(γ(t),t) and z(γ(t),t) for (γ(t),t) ∈Γ_0×_0, and each β_j,0 be n_0× 1 comprising β_j(t) for t∈_0. Then, the posterior predictive distributions are y_0|β_1,0, …,β_p,0 , z_0, σ^2, ∼ N( ∑_j=1^px_j,0β_j,0 + z_0, σ^2 I_n_0), z_0|θ,σ^2, ∼ N(C_z0^C_z^-1θ_np+1:np+n, σ^2(C_z00 - C_z0^C_z^-1C_z0) ), β_j,0|θ,σ^2, ∼ N(C_β0^C_β^-1θ_n(j-1)+1:nj, σ^2(C_β00 - C_β0^C_β^-1C_β0)), j=1,…,p, where C_z = (δ_z^2K_ϕ((γ(t),t),(γ(t'),t')))_t,t'∈, C_z 0=(δ_z^2K_ϕ((γ(t),t),(γ(t_0),t_0)))_t∈,t_0∈_0, C_z 00=(δ_z^2K_ϕ((γ(t_0),t_0),(γ(t_0'),t_0')))_t_0,t_0'∈_0, C_β = (δ_β^2C_ξ(t,t'))_t,t'∈, C_β0=(δ_β^2C_ξ(t,t_0))_t∈,t_0∈_0 and C_β00 = (δ_β^2C_ξ(t_0,t_0'))_t_0,t_0'∈_0. The marginal predictive distribution for y_0 is t_2a_σ^*( ∑_j=1^px_j,0 C_β0^C_β^-1β̂_j +C_z0^C_z^-1ẑ ,(b_σ^*/a_σ^*)(I_n_0 + C_z00 - C_z0^C_z^-1C_z0 + ∑_j=1^px_j,0 (C_β00 - C_β0^C_β^-1C_β0) x_j,0^ ) ), where ẑ and β̂_j for j=1,…,p are the posterior means calculated as m in Proposition <ref>.

§.§ Frequentist properties of posterior distributions

Theoretical investigations that shed light on the effectiveness of stacking in geostatistical settings have been undertaken by <cit.> in purely spatial contexts. Here, we investigate some theoretical results for the state-space setting. At the outset, it is worth recognizing that frequentist inference is rather limited because trajectory data, by definition, do not admit replicates at a single time point, while theoretical tractability requires us to consider multiple, say n, spatial locations at each time point. The relevant setting here is the second adaptation of (<ref>) discussed in Section <ref>, where we have multiple spatial locations at each epoch. For theoretical tractability, we consider spatial locations over Euclidean domains only. For this development, we denote the response and the spatial process as y_t(s) and z_t(s), respectively, where s is a generic spatial location in ℝ^d. We assume n replicates of spatial locations χ_n = {s_1,…,s_n} at each t and consider the model in (<ref>) without the trend, i.e., β_t=0. Let _χ_n,t now denote the entire dataset until time t with spatial replicates in χ_n. Hence, y_t = z_t + η_t, η_t i.i.d.∼ N(0,σ^2I_n); z_t = α z_t-1 + η_θ,t, η_θ,t i.i.d.∼ GP(0,σ^2δ_z^2K_ϕ(·,·)), where y_t and z_t are each n× 1 with elements y_t(s_i) and z_t(s_i), respectively, α is a fixed real number, and K_ϕ(s_i,s_j) = 2^1-ν/Γ(ν)( ‖s_i-s_j‖/ϕ)^ν 𝒦_ν( ‖s_i-s_j‖/ϕ) is the Matérn kernel <cit.> defined for any pair of spatial locations s_i and s_j in a bounded region ⊂^2. The parameters ϕ>0 and ν>0 model spatial decay and smoothness, respectively, and 𝒦_ν is the modified Bessel function of the second kind of order ν. Here, we fix ν, call (<ref>) the Matérn model with parameters {σ, ϕ, δ_z} and employ prior distributions σ^2∼ IG(n_σ/2, n_σs_σ/2) and z_0 |σ^2 ∼ N(m_0, σ^2S_0). Let {σ_*, ϕ_*, δ_z*} be the fixed values of the model parameters used to generate data from (<ref>), ϕ' and δ_z' be fixed values, ℙ_* be the probability law of y_t(s) corresponding to {σ_*, ϕ_*, δ_z*}, and ℙ' be the law corresponding to (σ_*, ϕ', δ_z').
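For later reference, a brief sketch of the Matérn kernel and of the forward recursions for the Matérn model (restated in the proofs below, with F_t = I_n and G_t = α I_n) is given; function names and the treatment of starting values are our own illustrative choices:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import gamma, kv

def matern(locs, phi, nu):
    """Matern correlation matrix K_phi(chi) over an (n x d) array of
    locations, with unit value on the diagonal."""
    d = cdist(locs, locs) / phi
    K = np.ones_like(d)
    off = d > 0
    K[off] = (2.0 ** (1.0 - nu) / gamma(nu)) * (d[off] ** nu) * kv(nu, d[off])
    return K

def forward_filter(ys, alpha, delta_z, K, m0, W0, n_sigma, s_sigma):
    """Recursions for the Matern model y_t = z_t + eta_t,
    z_t = alpha z_{t-1} + eta_{theta,t} (no trend): R_t, Q_t, m_t, W_t
    and the IG parameters (n_t, n_t s_t), following the proofs."""
    n = K.shape[0]
    m, W = m0.copy(), W0.copy()
    n_t, ns = n_sigma, n_sigma * s_sigma
    for y in ys:
        R = alpha ** 2 * W + delta_z ** 2 * K
        Q = R + np.eye(n)
        e = y - alpha * m                 # one-step forecast error
        Qinv_e = np.linalg.solve(Q, e)
        m = alpha * m + R @ Qinv_e
        W = R - R @ np.linalg.solve(Q, R)
        n_t += n
        ns += e @ Qinv_e
    return m, W, n_t, ns / n_t            # sigma^2 | D ~ IG(n_t/2, n_t s_t/2)
```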
We require the notion of equivalence of probability measures for subsequent results. Let P_1 and P_2 be two probability measures on the measurable space (Ω, ℱ). Measures P_1 and P_2 are termed equivalent, denoted P_1 ≡ P_2, if they are absolutely continuous with respect to each other. That is, P_1 ≡ P_2 if P_1(A) = 0 ⇔ P_2(A) = 0 for any A ∈ℱ. For any ϕ'>0, there exists δ_z' such that ℙ'≡ℙ_*. Lemma <ref> implies that if the parameters are fixed at values different from the true (data generating) parameters, then the incorrectly specified model is equivalent to the model with the true parameters with regard to the distribution of y. This is an extension of Theorem 2.1 in <cit.>. Based on this fact, the following result on the error variance is available. Assume that the fixed parameters are ϕ' and δ_z', satisfying ℙ'≡ℙ_*, and that max_s ∈min_1 ≤ i ≤ n |s - s_i| ≍ n^-1/d, where a_n≍ b_n means that a_n is bounded both above and below by b_n asymptotically. If we set m_0 = 0, S_0 = K_ϕ'(χ), n_σ < ∞, s_σ < ∞ and α<∞ in the Matérn model (<ref>), then the posterior distribution of σ^2 converges, as n →∞, to the degenerate distribution with entire mass at σ^2_∗, i.e., p(σ^2|_χ_n,t ) ⇝δ(σ^2_*), ℙ_*-a.s., where ⇝ denotes weak convergence of the probability measure and δ(x) is the Dirac measure at x. Hence, if the spatial locations are not overly concentrated within , as the number of replicates increases, the posterior distribution of σ^2 degenerates to a point-mass distribution at the true parameter value <cit.>. The assumption in Theorem <ref> is explained in the Appendix in more detail. Turning to prediction at a new point s_0∈, let z̃_t(s_0) and ỹ_t(s_0) be predictive random variables at any given time t, Z_tn(s_0) be a random variable with density p(z̃_t(s_0)|_χ_n,t) and Y_tn(s_0) be a random variable with density p(ỹ_t(s_0)|_χ_n,t). Let 𝔼_*[·] denote the expectation with respect to ℙ_*. The prediction errors 𝔼_* [(Z_tn(s_0)-z̃_t(s_0))^2] for the latent process and 𝔼_* [(Y_tn(s_0)-ỹ_t(s_0))^2] for the response are of interest. Under the assumptions in Theorem <ref>, the following two results hold for t=1,…,T: 𝔼_* [(Z_tn(s_0)-z̃_t(s_0))^2] = E_n,t^A + E_n,t^B and 𝔼_* [(Y_tn(s_0)-ỹ_t(s_0))^2] → 2σ_*^2 + E_n,t^A + E_n,t^B as n→∞, where E_n,t^A = σ_*^2(δ_z'^2 - δ_z'^2 {K_ϕ(s,s_0)}_s∈χ_n^ K_ϕ^-1(χ_n){K_ϕ(s,s_0)}_s∈χ_n + {K_ϕ(s,s_0)}_s∈χ_n^ K_ϕ^-1(χ_n) R_t(R_t + I_n)^-2R_t K_ϕ^-1(χ_n){K_ϕ(s,s_0)}_s∈χ_n)+o(1), E_n,t^B= 𝔼_* [(z̃_t(s_0)- {K_ϕ(s,s_0)}_s∈χ^ K_ϕ^-1(χ) m_t )^2], R_t = α ^2 W_t-1 + δ_z^'2 K_ϕ' (χ), W_t = R_t - R_t^Q_t^-1R_t and Q_t = R_t + I_n. The first result relates to the estimation of the spatial-temporal effects. The estimation error at any given time is decomposed into E_n,t^A and E_n,t^B. E_n,t^A is the variance term, and its asymptotic form can be stated explicitly. E_n,t^B is a bias term, representing the difference between the true random variable and the linear predictor based on the filtered mean m_t. The second result indicates that the prediction error can be expressed in terms of the measurement-error scale and these two terms. As the number of measurement points increases while the observed area remains fixed, more observed points lie close to the new point. Consequently, prediction accuracy is enhanced at new points, and E_n,t^A and E_n,t^B become small. An exploration of the convergence of E_n,t^B in a limited scenario is presented in the Appendix.
Additionally, because E_n,t^A lacks a closed form and is challenging to analyze theoretically, we conducted a numerical examination of the decay of E_n,t^A in the Appendix. We develop the following result concerning stacking. Under the assumptions in Theorem <ref>, if E_n,t^B→ 0 as n→∞, it holds that 𝔼_* [(ỹ_t(s_0) - ∑_g=1^G a_g 𝔼_g[ ỹ_t(s_0)|_χ_n,t ] )^2] →σ_*^2 , t=1,…,T, where {a_g}_g=1^G satisfies ∑_g=1^G a_g=1. This theorem ensures the validity of the stacking procedure. This result is not attributed to model averaging but stems from the asymptotic insignificance of parameter misspecification on the prediction model. However, we remark that this is an asymptotic result; for finite samples, the estimator of z exhibits bias and the prediction accuracy of y is not stable. In this sense, model averaging is effective. Furthermore, in the context of statistical learning theory, it is known that while the convex hull increases the flexibility of the model (or reduces the training error), it does not increase its Rademacher complexity, i.e., the generalization gap <cit.>. In other words, the stacked model has better predictive performance than a single model. For other theoretical justifications for stacking, see, e.g., <cit.>.

§ SIMULATION

We illustrate the implementation and inferential effectiveness of our proposed methods through numerical experiments. In Section <ref>, we explore in-fill prediction on a continuous trajectory, while Section <ref> illustrates the performance of our proposed methods.

§.§ In-fill prediction

We consider a single trajectory γ:𝒯→ℝ^2, where γ(t) signifies the correspondence between the continuous closed interval 𝒯 and a curve in ℝ^2. Because data are observed at discrete points on this trajectory, the accuracy of in-fill prediction is expected to improve as the number of discrete observed points grows. We consider interpolation of the outcome over a space-time trajectory composed of line segments. As an example of in-fill prediction, the conceptual diagram in Figure <ref> depicts T=20 observations in the interval 𝒯 = [1,20]. The left panel displays the locations at each time point with the line segments comprising the trajectory shown by a green line. The right panel illustrates the space-time coordinates where we seek to predict the outcome. We generate the data along a trajectory, which comprises our fixed domain, in accordance with the model in (<ref>)–(<ref>). We generate 300 points on the trajectory over t∈𝒯=[1,300] using γ(t) = γ(t-1) + 𝒲𝒩(0,1), where 𝒲𝒩(0,1) denotes white noise with zero mean and unit variance. We randomly select n space-time coordinates over the trajectory and generate n values of z(γ(t),t) from the Gaussian process with 0 mean and covariance kernel as in (<ref>). The true parameters defining the spatial-temporal process z(γ(t),t) in (<ref>) are ϕ_1=1/2 and ϕ_2=1/2. We then generate y(γ(t),t) from (<ref>) using elements of x(γ(t),t) generated from N(0,4) and β_j(t) using a zero-centered Gaussian process with covariance kernel in (<ref>) specified by ξ=1/2. Additionally, we set σ=1, δ_β=1, and δ_z=1. For our experiments, we randomly select n=20, 40,…, 200 points as training data. Based on these observations, we evaluate the performance of interpolation over trajectories by comparing the posterior predictive means of y(γ(t),t) and z(γ(t),t) over 100 randomly selected points on the trajectory that were excluded from the training data.
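A minimal sketch of this data-generating recipe follows (seed and n chosen for illustration; a small jitter is added to stabilize the Gaussian draws):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, p = 300, 100, 2
phi1 = phi2 = xi = 0.5

# Random-walk trajectory gamma(t) in R^2 over t = 1,...,300.
traj = np.cumsum(rng.normal(size=(T, 2)), axis=0)

# Randomly retain n space-time points on the trajectory.
idx = np.sort(rng.choice(T, size=n, replace=False))
g, t = traj[idx], (idx + 1).astype(float)

# Space-time correlation K_phi and temporal correlation C_xi (Section 3).
dt2 = (t[:, None] - t[None, :]) ** 2
dg = np.linalg.norm(g[:, None, :] - g[None, :, :], axis=-1)
Kz = np.exp(-phi2 * dg / np.sqrt(1.0 + phi1 * dt2)) / (1.0 + phi1 * dt2)
Cb = np.exp(-(xi ** 2) * dt2)

jitter = 1e-8 * np.eye(n)
z = rng.multivariate_normal(np.zeros(n), Kz + jitter)
beta = np.column_stack([rng.multivariate_normal(np.zeros(n), Cb + jitter)
                        for _ in range(p)])

# Outcome with sigma = delta_beta = delta_z = 1 and N(0, 4) covariates.
x = rng.normal(0.0, 2.0, size=(n, p))
y = np.sum(x * beta, axis=1) + z + rng.normal(size=n)
```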
For predictive stacking, a set of candidate parameters is specified with ϕ_i ∈{1, 1/5} for i=1,2 in (<ref>), ξ∈{1,1/5} in (<ref>), and {3,1/3} for both δ_β and δ_z in (<ref>) and (<ref>). We employ 20-fold cross-validation to obtain the stacking weights described in Sections <ref> and <ref> and use posterior estimates drawn from (<ref>). All our subsequent posterior summaries refer to (<ref>). Furthermore, we implement M_0, an oracle method with the true parameters assigned, and Bayesian model averaging (BMA, <cit.>) with a uniform prior on candidate models, which yields a weighted average of multivariate t-distributions; see the Appendix for details on BMA. As measures of performance, we adopted three metrics: mean squared prediction error (MSPE) and mean squared error for z (MSEz) for stacking of means, and mean log predictive density (MLPD) for predictive stacking of distributions. Figure <ref> illustrates the overall predictive error behavior, where we generated 50 different datasets and report the average of the aforementioned metrics over the different datasets. The left and center panels, which plot MSPE and MSEz, respectively, demonstrate that an increasing number of points in the training data (n) over the fixed domain (trajectory) enhances the precision of predictions for outcomes and spatial effects. Similarly, the right panel reveals improvement in predictive accuracy in terms of MLPD as n increases. Notably, stacking significantly outperforms BMA as its metrics approach those for the oracle model more rapidly with increasing n. While the results established in Theorems <ref>–<ref> apply to Euclidean domains for theoretical tractability, our empirical findings on non-Euclidean trajectories in this experiment still appear to be consistent with those theoretical results.

§.§ Estimation performances of proposed models

We conducted simulation experiments to assess both discrete- and continuous-time trajectory models, comparing their performance in terms of estimation errors and model fitting. First, we sampled data according to the discrete time trajectory model in (<ref>)–(<ref>). We generated n=T=50 and 70 points for two experiments on a trajectory using the same random walk model for γ(t) as in Section <ref>. We take p=2 in (<ref>) and generate initial values for the elements of β_0 and z_0 from N(0,4). With these fixed initial values, we sequentially generate β_t and z_t using the autoregressive specification (see Section <ref>) with δ_β = 1, δ_z = 1, σ = 1 and with K_ϕ taken as the Matérn kernel, introduced in Section <ref>, with ϕ = 1/7 and ν = 1. Each element of x_t(γ(t)) is generated from N(0,4) and fixed thereafter. Then, y_t(γ(t)) is generated using (<ref>). We generated 30 different datasets through the above procedure and analyzed each of them using the models in (<ref>) and (<ref>). For predictive stacking, we set candidate parameters in (<ref>) as ϕ∈{1, 1/10}, ν∈{3, 1/3} and {5,1/5} for both δ_β and δ_z. For (<ref>), we consider ϕ_i∈{3, 1/10} for i=1,2, ξ∈{3, 1/10}, and {3,1/3} for δ_β and δ_z. We employed 20-fold expanding window cross-validation and selected the stacking weights described in Sections <ref> and <ref>.
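The paper does not spell out the exact fold construction; one plausible expanding-window scheme, in which each validation block lies strictly in the future of its training window, is sketched below (for very small T, some folds may be empty and should be discarded):

```python
import numpy as np

def expanding_window_folds(T, n_folds=20):
    """Fold k trains on epochs [0, c_k) and validates on [c_k, c_{k+1}),
    so validation data always follow the training window in time."""
    c = np.linspace(T // (n_folds + 1), T, n_folds + 1, dtype=int)
    return [(np.arange(c[k]), np.arange(c[k], c[k + 1]))
            for k in range(n_folds)]

# Example: 20 folds over T = 70 epochs.
folds = expanding_window_folds(70)
```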
Additionally, we estimated a non-spatial dynamic linear model (NSDLM), y_t = x_t^⊤β_t + η_1t , η_1t i.i.d.∼ N(0,σ^2); β_t = β_t-1 + η_2t , η_2t i.i.d.∼ N(0,W ). We assign priors β_0∼ N(0,10I_2), W∼ IW(10,10I_2) and σ∼ IG(1/2,1/2). This model solely captures the temporal structure of the data without accounting for spatial locations. We use MSE to compare the posterior predictive means for y_T+1 with the underlying signals in (<ref>). Similarly, we use relative mean squared errors (rMSE), defined as ∑_t=1^T (ω̂_t - ω_t)^2/(∑_t=1^Tω_t^2), where ω̂_t denotes the posterior mean and ω_t is the true value, to compare our inferential effectiveness. We apply rMSE to each element of β_t and z_t, while we use (σ̂-σ)^2/σ^2 for σ. To evaluate model fit, we report MLPD, the deviance information criterion <cit.> and the widely applicable information criterion <cit.>. Table <ref> presents comparisons among the three models with regard to predictive evaluation for the datasets generated by the discrete-time model. The entries represent the average values of the metrics over 30 datasets. The discrete and continuous trajectory models clearly outperform NSDLM, as the latter is unable to capture spatial structure. We note insignificant differences between the discrete and continuous time models with respect to MSE of y (MSEy) and the rMSE of z (rMSEz). This indicates that both methods are nearly comparable in their ability to evaluate the true signal and spatial-temporal effects in this dataset and underscores the competitiveness of the continuous model. However, the discrete model outperforms the continuous model in terms of rMSE of the regression coefficients in β (rMSEβ_1 and rMSEβ_2), where the discrete model unsurprisingly captures its own data generating process better than the continuous model. For model evaluation, the discrete-time model excels in terms of DIC and WAIC, although the MLPDs are nearly equal across all the models. Figure <ref> presents the posterior means (solid curve) for y_t(γ(t)), z_t(γ(t)) and the two regression coefficients comprising each β_t as a function of time for one representative dataset with n=50 in the simulation experiment. Also shown are the true values generating the data in the form of a dashed line. We see that the posterior bands from the discrete model are more effective in containing the dashed line (truth), with the continuous model and the NSDLM still performing fairly well, although both prominently miss the truth for the second coefficient in β. Next, we generated 30 datasets from the continuous-time trajectory model described in (<ref>)–(<ref>). We generate n=T=50 and 70 points on the trajectory by the same random walk as in Section <ref>. The true parameters for the spatial-temporal process z(γ(t),t) in (<ref>) are ϕ_1 = 1/2, ϕ_2 = 1/2 and δ_z = 1. Then, we produce y from (<ref>) using σ = 1, each β_j generated from (<ref>) with ξ=1/2 and δ_β = 1, and the elements of x(γ(t),t) generated from N(0,4). We analyzed each of the 30 datasets using the continuous time and discrete time trajectory models. For predictive stacking, a set of candidate parameters of the discrete model in (<ref>) was set to ϕ∈{2, 1/2}, ν∈{2, 1/2}, δ_β∈{1/2,1/10} and δ_z∈{1/2,1/10}. For the continuous model in (<ref>), we set ϕ_i∈{1, 1/4 } for i=1,2, ξ∈{1,1/4}, and δ_β and δ_z in {3,1/3}. We employed 20-fold expanding window cross-validation to determine the stacking weights as in the earlier example. The results of the predictions by the three methods are presented in Table <ref>. As in Table <ref>, the continuous and discrete time models outperform NSDLM. The accuracy of the response variable y and the spatial-temporal effect z indicates that the continuous model performs better than the discrete model.
Regarding model fitting, Table <ref> again shows that both the continuous and discrete time models excel over NSDLM. Figure <ref> is the analogue of Figure <ref> for a representative dataset from the continuous time experiment, showing the posterior mean and 95% posterior interval for y(γ(t),t), z(γ(t),t) and the two slopes as a function of time. While all three methods seem comparable in their ability to capture the truth (dashed line), the precision for the continuous model is higher than that of the other two models, with NSDLM clearly having the widest bands for the regression slopes. Finally, we turn to the stacked posterior inference for σ^2 in Table <ref>. We present the posterior mean and the 95% credible intervals obtained from one representative dataset generated by the discrete-time model and another from the continuous-time model when both δ_β and δ_z are fixed at their true values and n=50. The 2× 2 table presents the stacked estimates for σ^2 when each of these models is estimated from the two representative datasets. We find that the credible intervals from the discrete and continuous time models are able to capture the true value (σ=1) for the data generated from them, which seems to be consistent with the result in Theorem <ref>, but may not be able to capture the true value of σ when they are not the data generating model. Furthermore, if δ_β and δ_z are not specified at their true values, inference for σ^2 suffers. This phenomenon has also been investigated by <cit.> and is largely attributable to the fact that σ^2 is not consistently estimable in Gaussian process models <cit.>.

§ APPLICATION

We apply the continuous space-time model (<ref>) to the actigraph dataset, sourced from the Physical Activity through Sustainable Transport Approaches in Los Angeles (PASTA-LA) study, and described in detail in <cit.>. Actigraph data are collected through wearable devices including sensors and a smartphone application, providing high-resolution and repeatable measurements for monitoring human activity. There is a growing body of research on statistical relationships involving physical activity measures, such as energy expenditure measures (EE) <cit.> and the Metabolic Equivalent of Task (MET) <cit.>. Here, we consider the instantaneous body vector Magnitude of Acceleration (MAG) as the primary endpoint of our analysis <cit.>. Further discussion about the conversion of MAG into energy expenditure measures is reported in <cit.>. A major aim of this study is to estimate an individual's MAG along a traversed path after accounting for the impact of explanatory variables (e.g., environmental features, risk factors) on metabolism. These estimates from the continuous time model, which represent statistical learning of an individual's physical activity profile on any given day, are then used to predict the subject's metabolic activity on any arbitrary trajectory and comprise a personalized recommendation system for the subject. The models we have developed here can help identify more effective pathways and environments to enhance physical activity levels of subjects and lead to overall improvements in metabolic levels. Focusing on one individual, which is often the goal in personalized health science research seeking data-driven recommendation systems for metabolic activity, we consider recordings of the MAG observed at 650 unevenly spaced time points. The left panel of Figure <ref> displays the values of the recorded MAG and the right panel plots the MAGs over the path traversed by this individual.
We fit the continuous time model in (<ref>) using “Slope”, representing the gradient at the spatial location, and “NDVI”, representing normalized vegetation index, which is a measure of greenness at the location, as the two explanatory variables in x(γ(t),t), where γ(t) is recorded as the subject's coordinates along the trajectory using information from GPS. For predictive stacking, the candidates for the parameters ϕ_1, ϕ_2, ξ were set to {100, 1} and those for δ_β,δ_z were set to {20,1/5}. Of the 650 observations, 130 were randomly selected and used as test data, and the remaining 520 were utilized as training data. We computed the posterior distributions for both the continuous time trajectory model and Bayesian linear regression on the training data and performed predictions. For comparison, we applied a Bayesian linear regression model to this data using the “brm” function from the <cit.> in and adopted default priors, that is, a t_3(0,2.5) prior for σ and flat priors for the coefficients of the explanatory variables. Table <ref> indicates the superiority of the proposed methods over Bayesian linear regression. Notably, there is a significant difference in MSPE, with the proposed model exhibiting considerably lower errors compared with Bayesian linear regression. The MLPD, indicating the quality of the posterior predictive distribution, also demonstrates that the proposed model outperforms Bayesian linear regression. This superior performance is likely attributable to accounting for the spatial-temporal structure in the data. The incorporation of spatial and temporal structure into a model allows for a more detailed depiction of the data, resulting in more precise predictions. By contrast, Bayesian linear regression does not account for this information, confirming its limitations for real data where the spatial-temporal information is significant. Figure <ref> presents three maps displaying spatial interpolation for the actigraph data. The left panel plots the 130 raw observations over a path traversed by the subject under consideration. The values of the (transformed) MAG are calibrated using colors shaded from deep blue (lowest) to red (highest). The middle panel displays the interpolated MAGs using the posterior predictive means from the stacked posterior distribution in (<ref>) derived from the continuous time model. We note that the estimated MAGs along this path effectively capture the features of the observed MAGs. The right panel depicts the posterior predictive means of the latent process using (<ref>) derived from the continuous time model. The variation in the right panel suggests that substantial spatial structure remains on the trajectory after accounting for “Slope” and “NDVI”. The middle and right panels serve distinct purposes. The former is useful for understanding MAG as a feature associated with a path or trajectory. Our framework is able to predict the MAG along a completely arbitrary path, where no measurements have been taken, had an individual with a given set of personalized health attributes traversed that path. The latter, on the other hand, helps investigators glean lurking factors that may explain some of the residual spatial structure on a trajectory after accounting for explanatory variables such as “Slope” or “NDVI”. Another key inferential element for spatial energetics is the estimate of an individual's daily profile of MAG, which allows medical professionals to recommend changes, if and as deemed appropriate, in the subject's daily mobility habits.
As in the preceding figure, here, too, these daily profiles can be plotted for the outcome or for the residual. Figure <ref> presents these plots. The left panel plots the posterior predictive mean and 95% credible interval band for the MAG along the hours of the day to elicit the daily physical activity pattern of the subject and to better distinguish times of higher activity from those with lesser activity. The right panel presents the analogous plot for the residual process after accounting for trajectory effects represented by slope and greenness. Given the complications associated with streaming measurements at high resolutions from wirelessly operating wearable devices, it is customary to encounter swaths of time intervals that lack recorded measurements, either due to technical malfunction or user behavior. Our model-based inferential framework uses the posterior predictive distributions to impute such missing values. The left panel in Figure <ref> presents the reconstructed MAG for the subject under consideration at some epochs using the continuous time model. The right panel presents posterior predictive credible intervals that reveal the model's ability to effectively capture the 130 held-out values for predictive validation. Finally, we compare the continuous time and discrete time models and the NSDLM using DIC and WAIC. For this, we extract 150 distinct time points from the above data, ensuring that they are equally spaced and compatible with the discrete-time trajectory model in (<ref>) and the NSDLM introduced in Section <ref>. For stacking of (<ref>), the candidates for the parameters were ϕ∈{1, 1/10}, ν∈{1, 1/3} and δ_β,δ_z∈{5,1/5}. Table <ref> reveals that the continuous time trajectory model is preferred (lower values) to the others in both of these metrics, while the discrete time model considerably outperforms NSDLM. In designing a physical activity recommendation system that trains one model, our analysis suggests that using the continuous time model is preferable, although the discrete time model should also be competitive.

§ SUMMARY

We have devised a Bayesian inferential framework for spatial energetics that aims to analyze data collected from wearable devices containing spatial information over paths or trajectories traversed by an individual. Data analytic goals include estimating underlying spatial-temporal processes over trajectories that are posited to be generating the observations. A salient requirement for appropriately modeling spatial dependence in such applications is to model spatial locations as a function over time. We introduce such dependence in two broad classes of models: one that treats time as discrete and another that treats time as continuous. The former builds on Bayesian dynamic linear models and the latter employs spatial-temporal covariance functions to specify the underlying process. For conducting inference, we propose Bayesian predictive stacking as an effective method, where fully tractable conjugate posterior distributions up to certain parameters are assimilated, or stacked, to deliver Bayesian inference using a stacked posterior. Our framework offers some theoretical results to justify why predictive stacking renders effective posterior inference.

§ COMPUTER PROGRAMS

Computer programs used in the manuscript for generating data for our simulation experiments in Section <ref> and the application presented in Section <ref> have been developed for execution in the statistical computing environment.
The programs are available for download in the publicly accessible GitHub repository https://github.com/TomWaka/BayesianStackingSpatiotemporalModelinghttps://github.com/TomWaka/BayesianStackingSpatiotemporalModeling. § ACKNOWLEDGMENTS Tomoya Wakayama was supported by research grants 22J21090 from JSPS KAKENHI and JPMJAX23CS from JST ACT-X. Sudipto Banerjee was supported, in part, by research grants R01ES030210 and R01ES027027 from the National Institute of Environmental Health Sciences (NIEHS), R01GM148761 from the National Institute of General Medical Science (NIGMS) and DMS-2113778 from the Division of Mathematical Sciences (DMS) of the National Science Foundation. § APPENDIX §.§ Notation The notation [A | B] represents a block matrix formed by horizontally concatenating A∈^p× q and B∈^p× r. Given a p× q matrix A and a m× n matrix B, A ⊗ B is the Kronecker product producing a pm × qn block matrix with (i,j)th block is a_ij B, while A⊕ B denotes the block diagonal matrix with A and B along the diagonal. A p-dimensional random variable x is said to be distributed as a multivariate t-distribution t_ν(μ,Σ) with parameters (ν,μ,Σ) if it has the density f(x; μ, Σ, ν) = Γ(ν+p/2)/Γ(ν/2)(νπ)^p/2det(Σ)^1/2(1+1/ν(x-μ)^⊤Σ^-1(x-μ))^-ν+p/2, where Γ(·) is the gamma function and det(·) is the determinant of a matrix. §.§ Proof of Lemma <ref> Let ℙ_t^* be the probability law endowed on a finite spatial realization of y_t(s) with true parameters and let ℙ_t' be that with parameters (σ_*, ϕ', δ_z'). For all t=1,…,T, the Matérn based model without trend (i.e., β=0) is y_t(s) = z_t(s) + η_1, η_1 i.i.d.∼ N(0,σ^2), z_t(s) ind∼ GP(0,σ^2 Δ_zt^2 K_ϕ(·,·) ), where Δ_zt^2 = ∑_j=1^t δ_z^2j (ασ)^2(j-1). Applying Theorem 2.1 in <cit.>, we obtain that ℙ_t' and ℙ_t^* are equivalent. The equivalence of the joint distributions follows. §.§ Intuitive images of observational assumption All theorems in Section <ref> place the following restriction on the nature of n spatial locations within a bounded region : max_s ∈min_1 ≤ i ≤ n |s - s_i| ≍ n^-1/d. Figure <ref> presents a depiction of the above assumption, which indicates that the spatial locations are scattered evenly. §.§ Proof of Theorem <ref> Before proceeding with the proof of Theorem <ref>, we recall the following lemma about the convergence of a random series. Let {X_i:i∈ℕ} be independent random variables with finite second moment and {a_i∈ℝ^+ :i∈ℕ} be an increasing positive number sequence such that a_i ↑∞. If ∑_i=1^∞ Var(X_i)/a_i^2<∞, it holds that ∑_i=1^n (X_i- 𝔼[X_i] ) /a_n→ 0 a.s.. Let Y_i = (X_i - 𝔼[X_i] ) /a_i. Since 𝔼[Y_i]=0 and ∑_i=1^∞ Var(Y_i)<∞, it follows from Kolmogorov's one series theorem (e.g., Theorem 2.5.6. in <cit.>) that ∑_i=1^∞ Y_i<∞ almost surely. Then, from Kronecker's lemma, a_n^-1∑_i=1^n (X_i- 𝔼[X_i] ) converges to 0 almost surely. We now present the main proof of Theorem <ref>. We rewrite the notation introduced in Proposition <ref> as n_t = n_t-1+n, n_ts_t = n_t-1s_t-1 + (y_t - f_t )Q_t^-1(y_t - f_t ), f_t = αm_t-1, Q_t = R_t + I_n, m_t = f_t + R_t Q_t^-1 (y_t-f_t), R_t = α ^2 W_t-1 + δ_z^'2 K_ϕ' (χ) and W_t = R_t - R_t^Q_t^-1R_t. Here, we prove n_t s_t - n_t-1s_t-1/n→ tσ_*^2 by mathematical induction on t. The base step t=1 yields n_1s_1 - n_0s_0 = (y_1 - αm_0)^ Q_1^-1 (y_1 - αm_0) = y_1 ^( (α ^2 + δ_z'^2 )K_ϕ' (χ)+I_n)^-1y_1 Let U_n be a unitary matrix such that U_n K_ϕ' U_n^ is diagonal and let λ_i^(n) be the ith eigenvalue of K_ϕ' for i=1…,n. 
Since U_n y_1 follows a zero-centered multivariate normal distribution with a diagonal covariance matrix whose ith diagonal is σ_*^2(1 + δ_z'^2λ_i^(n)) under ℙ', we obtain n_1s_1 - n_0s_0 = ∑_i=1^n σ_*^2(1 + δ_z'^2λ_i^(n)) /1+ (α ^2 + δ_z'^2) λ_i^(n) u_i^2, where u_i follows the standard normal distribution. Let A_i =(σ_*^2(1 + δ_z'^2λ_i^(n)) )/(1+ (α ^2 + δ_z'^2) λ_i^(n)). Because λ_i^(n)≤ C n i^-2ν/d - 1 for all i=1,…,n from Corollary 2 in <cit.>, (∑_i=1^n A_i )/ (nσ_*^2) converges to 1 as n→∞. By ∑_i=1^∞ A_i^2/i^2 <∞ and Lemma <ref>, we obtain n_1s_1 - n_0s_0/n = 1/n∑_i=1^n A_i u_i^2 → σ^2_*, ℙ'-a.s. Owing to the equivalence of ℙ_* and ℙ', n_1s_1 - n_0s_0/n→σ^2_* holds ℙ_*-almost surely. Then, as an inductive step, we assume that n_ts_t/n→ tσ_*^2 holds ℙ_*-almost surely and we consider the next period. We have n_t+1s_t+1 - n_ts_t = (y_t+1 - αm_t)^ Q_t+1^-1 (y_t+1 - αm_t), where Q_t+1 = (I_n + α^2W_t + δ_z'^2 K_ϕ' ) and W_t = R_t - R_t^Q_t^-1R_t. Let λ_t,i^(n) be the ith eigenvalue of W_t. Because W_t = (R_t^-1 + I_n)^-1 and R_t = α^2 W_t-1 +δ_z'^2 K_ϕ', it holds that, under ℙ', λ_t,i^(n) = (α^2+ δ_z'^2)λ_t,i^(n)/ 1 + (α^2+ δ_z'^2)λ_t,i^(n). Note that λ_t,i^(n)→(α^2+δ_z'^2)^tλ_i^(n) as i→∞. Then, 1/n(y_t+1 - αm_t)^ Q_t+1^-1 (y_t+1 - αm_t) = 1/ny_t+1^U_n^ U_n Q_t+1^-1U_n^ U_n y_t+1 - 2α/ny_t+1^ Q_t+1^-1m_t + α^2/nm_t^ Q_t+1^-1m_t = 1/ny_t+1^U_n^ U_n Q_t+1^-1U_n^ U_n y_t+1 +o(1) = 1/n∑_i=1^n σ_*^2 + v'^2λ_i^(n)/ 1 + α^2λ_t,i^(n) + δ_z'^2λ_i^(n) u_i^2 +o(1) where u_i∼ N(0,1). The second equality holds because when we recursively expand m_t by its definition, we find that m_t includes a factor of R_t. Then, the second and third terms are o(1) due to the eigenvalue decay of R_t. Hence, using Lemma <ref> and the equivalence of the distributions, we obtain 1/n∑_i=1^n σ_*^2 + v'^2λ_i^(n)/ 1 + α^2λ_t+1,i^(n) + δ_z'^2λ_i^(n) u_i^2  → σ^2_*, ℙ_*-a.s. Therefore, we have n_t+1s_t+1/n→ (t+1)σ_*^2, ℙ_*-a.s. This concludes the induction step and we conclude n_ts_t - n_t-1s_t-1/n→ tσ_*^2 holds for each t=1,2,…,T. From Chebyshev's inequality, p (σ^2 |_χ_n,t) ⇝δ(σ_*^2) holds ℙ_*-almost surely. §.§ Proof of Theorem <ref> Recall that n_t = n_t-1+n, n_ts_t = n_t-1s_t-1 + (y_t - f_t )Q_t^-1(y_t - f_t ), f_t = αm_t-1, Q_t = R_t + I_n, m_t = f_t + R_t Q_t^-1 (y_t-f_t), R_t = α ^2 W_t-1 + δ_z^'2 K_ϕ' (χ) and W_t = R_t - R_t^Q_t^-1R_t. We decompose the prediction error for the latent term z̃_t(s_0) as 𝔼_* [(Z_tn(s_0)-z̃_t(s_0))^2] = 𝔼_* [(Z_tn(s_0)- 𝔼[z̃_t(s_0)|_χ_n,t ] + 𝔼[z̃_t(s_0)|_χ_n,t ] - z̃_t(s_0))^2] = 𝔼_* [(Z_tn(s_0)- 𝔼[z̃_t(s_0)|_χ_n,t ])^2 + (z̃_t(s_0)-𝔼[z̃_t(s_0)|_χ_n,t ] )^2] = 𝔼_* [Var(Z_tn(s_0))]_E_n,t^A + 𝔼_* [(z̃_t(s_0)-𝔼[z̃_t(s_0)|_χ_n,t ] )^2]_E_n,t^B Focusing on the first term, E_n,t^A in (<ref>), we note the following distributions, Z_tn(s_0) |_χ_n,t,σ^2,m_t ∼ N( c_0^C^-1m_t, σ^2δ_z'^2 - c_0^C^-1c_0 ), m_t|σ^2,f_t,R_t ∼ N(f_t, σ^2 R_t(R_t + I_n)^-2R_t ), where C = σ^2δ_z'^2K_ϕ(χ_n) and c_0=(σ^2δ_z'^2K_ϕ(s,s_0))_s∈χ_n. According to the law of total variance <cit.>, Var(Z_tn(s_0)) = 𝔼[ Var(Z_tn(s_0)) |_χ_n,t, σ^2, m_t ] + Var( 𝔼[Z_tn(s_0)|_χ_n,t, σ^2, m_t] ) = σ^2(δ_z'^2 - δ_z'^2 {K_ϕ(s,s_0)}_s∈χ_n^ K_ϕ^-1(χ_n){K_ϕ(s,s_0)}_s∈χ_n + {K_ϕ(s,s_0)}_s∈χ_n^ K_ϕ^-1(χ_n) R_t(R_t + I_n)^-2R_t K_ϕ^-1(χ_n){K_ϕ(s,s_0)}_s∈χ_n^ ). From Theorem <ref> we know that p(σ^2 |_χ_n,t) →δ(σ^2_*) as n→∞ under ℙ_*. We also obtain E_n,t^A = σ_*^2(δ_z'^2 - δ_z'^2 {K_ϕ(s,s_0)}_s∈χ_n^ K_ϕ^-1(χ_n){K_ϕ(s,s_0)}_s∈χ_n + {K_ϕ(s,s_0)}_s∈χ_n^ K_ϕ^-1(χ_n) R_t(R_t + I_n)^-2R_t K_ϕ^-1(χ_n){K_ϕ(s,s_0)}_s∈χ_n^ )+o(1). 
Furthermore, the second term in (<ref>) can be represented as E_n,t^B = 𝔼_* [(z̃_t(s_0)- {K_ϕ(s,s_0)}_s∈χ_n^ K_ϕ^-1(χ_n) m_t )^2]. The prediction error of ỹ_t(s_0) can be decomposed as 𝔼_* [(Y_tn(s_0)-ỹ_t(s_0))^2] = 𝔼_* [(Y_tn(s_0)- 𝔼[ỹ_t(s_0)|_χ_n,t ] + 𝔼[ỹ_t(s_0)|_χ_n,t ] - ỹ_t(s_0))^2] = 𝔼_* [(Y_tn(s_0)- 𝔼[ỹ_t(s_0)|_χ_n,t ])^2 + (ỹ_t(s_0)-𝔼[ỹ_t(s_0)|_χ_n,t ] )^2] = 𝔼_* [Var(Y_tn(s_0))] + 𝔼_* [(Y_t(s_0)-𝔼[ỹ_t(s_0)|_χ_n,t ] )^2] n →∞⟶ 2σ_*^2 + E_n,t^A + E_n,t^B §.§ Proof of Theorem <ref> 𝔼_* [(ỹ_t(s_0) - ∑_g=1^G a_g 𝔼_g[ ỹ_t(s_0)|_χ_n,t ] )^2] = σ_*^2 + 𝔼_* [(z̃_t(s_0) - ∑_g=1^G a_g 𝔼_g[ z̃_t(s_0)|_χ_n,t ] )^2] =σ_*^2 + 𝔼_* [{∑_g=1^G a_g(z̃_t(s_0) - 𝔼_g[ z̃_t(s_0)|_χ_n,t ]) }^2]. By the Cauchy–Schwarz inequality, the second term is bounded by G(∑_g a_g^2)E_n,t^B, which converges to zero. §.§ In-fill prediction of the discrete DLM model Here, we examine the in-fill predictive performance for the discrete time Bayesian DLM. First, we introduce the data-generating process along with the model in (<ref>). In the period T = 20, we uniformly sample n = 50, 100, …, 500 spatial locations from the unit square [0,1]^2⊂^2 to generate training data (the left panel in Figure <ref>). The elements of initial values of the 2-dimensional state vectors β_0 and z_0 are randomly generated from N(0,4). With these initial values, we sequentially produce β_t and z_t using the autoregressive specification (see Section <ref>) with δ_β = 1, δ_z = 1, σ = 1 and K_ϕ taken as the Matérn kernel and ϕ=1/7 and ν=1. Here, matrices G_β,t and G_z,t are configured as identity matrices of sizes p and n, respectively, and are considered time-invariant. Then, we sampled each element of X_t from N(0,4) and y from (<ref>). In this setting, we consider the prediction of the one-step future data at all locations, including n observed and 100 newly sampled points, as shown in the right panel of Figure <ref>; the variation in the accuracy of spatial-temporal predictions with increasing spatial samples is of interest. We generated 50 different datasets using the above procedure and analyzed each dataset using the model in (<ref>). For predictive stacking, we defined a set of candidate parameters: ϕ∈{1/5, 1/10}, ν∈{2,1/2}, δ_β∈{2,1/2} and δ_z∈{2,1/2}. We employed 20-fold expanding window cross-validation and selected the stacking weights from Δ = {{a_g}_g=1^G |∑_g=1^Ga_g=1, a_g≥ 0 } to yield a simplex of the candidate predictions (<ref>). Furthermore, we implemented BMA with a uniform prior on candidate models and M_0, an oracle method with the true parameters assigned. As measures of performance, we adopted three metrics: mean squared prediction error (MSPE) and mean squared error for z (MSEz) for stacking of means, and mean log predictive density (MLPD) for stacking of distributions. Figure <ref> provides an overview of these predictions, where we report the average of the aforementioned metrics over the 50 datasets. The left and center panels illustrate the enhancement in predictions of both the outcome and spatial effects in the in-fill paradigm with more observed locations. The right panel demonstrates that distributional stacking consistently improves with an increasing n, as indicated by the log predictive density. These findings indicate that a higher number of spatial points (as long as the points are dispersed) improves predictions. Furthermore, the stacking results are generally better than those of BMA, underscoring the significance of weight determination. 
§.§ Discussion of E_n,t^A and E_n,t^B

Here, we elaborate on the asymptotic behaviors of E_n,t^A and E_n,t^B, introduced in Section <ref>. First, because E_n,t^A does not have a closed form, we numerically investigate its decay as n increases from 50 to 1600. The observation locations are uniformly sampled from the unit square [0,1]^2⊂^2. We compute E_n,t^A with δ_β=δ_z=σ=1, (ϕ, ν)∈{(1/2,1),(1/5,1/2), (1/10,1/3)} and T∈{2,20}. The upper panels in Figure <ref> illustrate the predictive variance of z in the absence of a trend, whereas the lower panels show the predictive variance of z when a trend is present. The training periods are 2 and 20 periods on the left and right sides, respectively. In all settings, the predictive variance decreases with an increasing sample size. The increase in T indicates a faster decrease in predictive variance, suggesting that the rate is influenced by the training period T. Next, we consider the decay of E_n,t^B. For simplicity, we assume T=1 and that the spatial domain is one-dimensional. Here, we denote χ_n = { i/n, i= -n, -n+1,…, n-1, n }⊂ℝ, χ̃_n = { i/n, i∈ℕ}⊂ℝ. Let _-0,t be the data in χ_n ∖{0} until time t, _-0,t be the data in χ̃_n ∖{0} until time t, e := 𝔼_* [ z̃(0) - 𝔼[ z̃(0) |_-0,t ] ], and ẽ := 𝔼_* [ z̃(0) - 𝔼[ z̃(0) |_-0,t ]]. The attenuation of E_n,t^B with larger n is then justified based on the following result from <cit.>. Assume |e - ẽ | → 0 as n→∞. Then, the following holds as n→∞: E_n,t^B→ 0.

§.§ Procedure of Bayesian Model Averaging

Bayesian Model Averaging for predictions is given by p(ỹ|) = ∑_g=1^G p_g(ỹ|)p(M_g |), where  is the dataset, M_g is the g-th candidate model, G is the number of candidate models, and p(M_g|) is the posterior probability of model M_g given by p(M_g|) = p_g()p(M_g)/∑_l=1^G p_l()p(M_l), where p(M_g) is the prior probability of model M_g and p_g() is the marginal likelihood of model M_g. If the prior is assumed to be uniform, p(M_g)=1/G for g=1,…,G.

§.§ Supplementary analysis of actigraph data

For the analyses presented in Section <ref>, we incorporated time-varying estimates of “Slope” and “NDVI”. We extracted 150 points from the 650 data points, as we did in Section <ref>, to ensure the applicability of the discrete-time model. We then applied the continuous-time trajectory model, the discrete-time trajectory model, and the DLM to this subset. Figure <ref> displays the posterior means and 95% credible bands for the slopes obtained from each model. The continuous-time and discrete-time trajectory models show narrower credible intervals than the DLM. This indicates that the trajectory models provide more accurate slope estimates than the DLM, since they account for spatial-temporal effects. We note that the aim of the current research requires accounting for these explanatory variables not only to improve the predictive inference presented in the manuscript, but also to infer about the underlying latent process posited to be generating the observations. Rather than identifying global statistical significance of explanatory variables on the subject's MAG, the time-varying impact of the explanatory variables enriches the predictive framework and better accounts for variation in the outcome, which, in turn, translates to improved estimation of the latent process as a spatially-temporally structured residual of the regression.
Hydrodynamic Edge Modes and Fragile Surface States of Symmetry Protected Integer Quantum Hall Effect of Bosons
Dylan Reynolds, Gustavo M. Monteiro, Sriram Ganeshan
arXiv:2405.10309 [cond-mat.mes-hall]
Department of Physics, City College, City University of New York, New York, NY 10031, USA CUNY Graduate Center, New York, NY 10031 Department of Physics and Astronomy, College of Staten Island, CUNY, Staten Island, NY 10314, USA Department of Physics, City College, City University of New York, New York, NY 10031, USA CUNY Graduate Center, New York, NY 10031 We adapt the fluid description of Fractional Quantum Hall (FQH) states, as seen in Monteiro et al. (2022)  <cit.>, to model a system of interacting two-component bosons. This system represents the simplest physical realization of an interacting bosonic Symmetry-Protected Topological (SPT) phase, also known as the integer quantum Hall effect (IQHE) of bosons. In particular, we demonstrate how the fluid dynamical boundary conditions of no-penetration and no-stress at a hard wall naturally give rise to the two counter-propagating boundary modes expected in these SPT phases. Moreover, we identify energy-conserving hydro boundary conditions that can either create a gap in these edge modes or completely isolate the edge states from the bulk, as described in Physical Review X 14, 011057 (2024), where they are termed fragile surface states. These fragile surface states are typically absent in K-matrix edge theories and require bulk dynamics to manifest. By leveraging insights from hydrodynamical boundary dynamics, we can further elucidate the intricate surface properties of SPTs beyond the usual topological quantum field theory based approaches. Hydrodynamic Edge Modes and Fragile Surface States of Symmetry Protected Integer Quantum Hall Effect of Bosons Sriram Ganeshan May 20, 2024 ============================================================================================================== § INTRODUCTION The discovery of topological insulators and superconductors has enlarged the notion of topological phases that owe their properties to symmetries  <cit.>. These topological phases are dubbed Symmetry-Protected Topological phases (SPTs), and their key features are bulk energy gaps and edge modes that are robust to symmetry-preserving perturbations. Subsequent works have generalized the ideas of SPT phases to quantum many-body states with interactions <cit.>. These interacting generalizations can be gapped short-range entangled without any intrinsic topological order but still possess robust edge modes and a bulk topological invariant. Interacting SPTs have been extensively classified using sophisticated mathematical tools such as group cohomology <cit.> and topological quantum field theory methods such as K-matrix theory <cit.>. Even though these frameworks capture all the essential topological features of the SPTs, it would be useful to quantify the microscopic dynamics in the context of Chern-Simons-Ginzburg-Landau (CSGL) field theories. CSGL theory has been developed in the context of the fractional quantum Hall state by Zhang, Hansson, and Kivelson <cit.>, and independently by Read <cit.>. The CSGL framework for SPTs is well understood, however the bosonic matter (GL) part of the CSGL theory is usually discarded while studying edge physics; typically the gauge invariance determines the boundary chiral dynamics in terms of additional edge fields. However, in the presence of bulk bosonic matter, the gauge invariance is preserved. Deriving the edge dynamics takes a different route, by utilizing the anomaly inflow principle, and does not need additional fields to be added to the edge. 
In our recent work <cit.>, we have used this anomaly inflow mechanism to identify the superfluid boundary conditions that are consistent with the expected chiral edge dynamics. We also derived the non-linear generalization of this chiral boson action, where the chiral boson fields emerge from bulk fields taken at the boundary to satisfy the appropriate fluid dynamical boundary conditions. In this work, we generalize the anomaly inflow approach to derive a hydrodynamical model with appropriate boundary conditions of a particular interacting SPT phase, dubbed the Integer Quantum Hall Effect (IQHE) for bosons, introduced by Senthil and Levin in Ref. <cit.>. To arrive at an SPT phase, they start with a two-component system of bosons (spinor bosons or a bilayer system) in a large magnetic field and illustrate how this system has integer Hall conductivity if the U(1)× U(1) symmetry is preserved. The existence of two counter-propagating edge modes, one carrying charge and the other pseudospin, is derived using the K-matrix Chern-Simons formalism, albeit after dropping the bosonic matter in the bulk. Subsequent work has shown how these phases can manifest in interacting lattice models and two-component Bose gases <cit.>. Here we keep the bosonic matter and investigate how a hydrodynamic framework captures the bulk and edge properties of this SPT phase. We extract the bulk conductivity σ_xy from the algebra of the fluid polarization of the total charge field, which is a uniquely hydrodynamical way of determining the bulk invariants. In particular, we show how the fluid dynamical boundary conditions at the hard wall such as no-penetration and no-stress boundary conditions lead to counter-propagating chiral edge modes, one carrying charge and the other pseudospin. We also see the existence of two counter-propagating Kelvin Modes, non-dispersive modes that tend to accompany the chiral boson mode in fluid descriptions of quantum systems but are not associated with any anomaly. We then outline two types of energy-conserving boundary conditions that couple both edges at the boundaries without altering the bulk physics. The first type is the partial slip boundary condition, where the tangential stress in one layer generates slip in the second layer, and vice versa. These boundary conditions open a gap in the spectrum. Remarkably, a second set of hydrodynamic boundary conditions results in the detachment of edge modes from the bulk. These isolated edge modes, which do not begin or end at the bulk bands, have recently been identified as fragile surface states in Ref. <cit.> for non-interacting topological insulators belonging to the non-Wigner-Dyson class. Within our framework, we demonstrate how the boundary conditions deform the edge U(1) symmetry, leading to the decoupling of edge states from the bulk. We emphasize that fragile surface states are beyond the scope of traditional edge theories within topological quantum field theory, such as the K-matrix formalism, partly due to the absence of bulk matter. In contrast, within the hydrodynamic framework, the edge theory is consistently derived in conjunction with the bulk matter, which appears to be a requirement for uncovering the fragile surface states. The benefits of a fluid dynamical approach to SPT phases are twofold. Firstly, it enables the systematic generalization of edge theories to include a richer class of surface phenomena, such as fragile states. 
Secondly, adopting a hydrodynamical approach may lead to the discovery of unique experimental signatures of the topological phase, imprinted in the non-universal matter dynamics accessible on ultracold atomic platforms. These platforms are likely where many of these phases will be realized in the near future. § MUTUAL COMPOSITE BOSON THEORY Following Ref. <cit.>, we examine a two-dimensional system of two-component bosons (for example spinor bosons or a bilayer system) subject to a large magnetic field with short-ranged repulsive interactions. The large magnetic field ensures each component is in a ν=1 integer quantum Hall phase. Ref. <cit.> considers a particular candidate state of this setup which is dubbed an integer quantum Hall effect of bosons, which is a U(1) symmetry-protected topological phase with Hall conductivity of σ_xy=2 e^2/h and zero thermal Hall conductivity κ_xy=0. In the absence of tunneling between the two components, an additional U(1) leads to a pseudo-spin Hall conductivity of -2e^2/h. Construction of such a candidate state was done using a two-component Chern-Simons-Ginzburg-Landau (CSGL) theory with a mutual Chern-Simons (CS) statistical term with a K-matrix of K=[ 0 1; 1 0 ] implementing the flux attachment. The role of the mutual CS term is to attach a flux quantum from one species to each boson of another species, resulting in a “mutual composite boson fluid” that experiences zero average flux. Following the quantum Hall logic, this mutual-Chern-Simons-Ginzburg-Landau (mCSGL) effective action can be written as, S_bulk =∫ d^2x dt[∑_aℒ_a + ℒ_int + ℒ_CS] , where the two individual component Lagrangians associated with the bosonic matter of each species, labeled by a=1,2, are given by ℒ_a =iħ(Φ^a)^† D_t^aΦ^a - ħ^2/2m|D_i^aΦ^a|^2 , the mutual Chern-Simons (CS) Lagrangian is given by ℒ_CS = ħ/4π∑_a,b=1^2ϵ^μνλK^abα^a_μ∂_να^b_λ , and the interaction Lagrangian ℒ_int is solely a function of |Φ^a|^2. A minimal coupling to the external electromagnetic vector potential A_μ and internal Chern-Simons statistical fields α^a_μ is included in the covariant derivatives, defined as D^a_μ=∂_μ - iq/ħA_μ + i α^a_μ. Both species have the same effective mass m and charge q. Greek indices μ, ν, λ run over t,x,y, while Latin indices i,j,k run over the spatial components x,y. The interaction term is assumed to introduce a non-zero vacuum expectation value for both species. This can be approximated by a local repulsive interaction, due to density fluctuation on top of a uniform density background, i.e., the jellium model, with local interactions. Therefore, we can express the interaction Lagrangian as ℒ_int = -∑_a,b V_ab(|Φ^a|^2-qB/2πħ)(|Φ^b|^2-qB/2πħ). Here, we are assuming that both fields have the same vacuum expectation value, which is given by qB/(2πħ), where B is the external magnetic field. This leads to two copies of an abelian Higgs mechanism, which can be seen explicitly if we express the scalar fields in their polar forms, that is, Φ^a =√(qB/2πħ)(1+n^a/2)e^iθ^a . The factor of 1/2 was introduced to give us |Φ^a|^2-qB/2πħ≈ (qB/2πħ) n^a. Using the Madelung variables defined in Eq. (<ref>), we see that the Lagrangian, up to quadratic order, becomes ℒ^(2)_a= -qB/2π[n^a(∂_tθ^a+α_0^a-q/ħA_0)+ħ/8m(∂_i n^a)^2+ħ/2m(∂_iθ^a+α_i^a-q/ħA_i)^2+α_0^a], ℒ^(2)_int= -q^2B^2/4π^2ħ^2∑_a,b V^ab n^a n^b . Note that ℒ_CS is already quadratic in fields. 
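The linear-order identification invoked above, |Φ^a|^2-qB/2πħ≈ (qB/2πħ) n^a, is quick to verify symbolically. A minimal sympy sketch (ρ_0 stands in for qB/2πħ; the symbol names are ours):

```python
import sympy as sp

# Madelung substitution Phi = sqrt(rho0)*(1 + n/2)*exp(I*theta):
# to linear order in the density fluctuation n, |Phi|^2 - rho0 ~ rho0*n.
n, theta = sp.symbols('n theta', real=True)
rho0 = sp.symbols('rho0', positive=True)
Phi = sp.sqrt(rho0) * (1 + n / 2) * sp.exp(sp.I * theta)

density = sp.simplify(Phi * sp.conjugate(Phi))   # rho0*(1 + n/2)**2
fluctuation = sp.expand(density - rho0)          # rho0*n + rho0*n**2/4
assert sp.simplify(fluctuation.coeff(n, 1) - rho0) == 0  # O(n) term is rho0*n
```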
The linearized equations of motion are thus ∂_tθ^a+α_0^a-q/ħA_0-ħ/4m∇^2 n^a+qB/πħ^2∑_b V^ab n^b = 0 , ∂_t n^a+ħ/m∂_i(∂_iθ^a+α_i^a-q/ħA_i) = 0 , qB/2π(1+n^a) - ħ/2πϵ_ij∑_bK^ab∂_iα_j^b = 0 , qB/m(∂_iθ^a+α_i^a-q/ħA_i) +ϵ_ij∑_bK^ab(∂_tα_j^b-∂_jα_0^b) = 0 . Here we have introduced ϵ_ij as the antisymmetric tensor in 2D and employed the K-matrix K^ab. Note that the above system constitutes two sets of equations, one for each species. For simplicity, we have assumed a uniform B field. From here on, our analysis differs from that presented in <cit.>. We focus on the superfluid hydrodynamics of bosonic matter subject to vorticity constraints enforced by the Chern-Simons terms in the presence of boundaries. This approach deviates from the traditional strategy, which involves considering the effective Chern-Simons theory without any bulk matter and deducing boundary dynamics through the enforcement of gauge invariance, which requires additional gapless degrees of freedom. The key result of this paper is that we derive the bulk and boundary topological properties directly from the superfluid hydrodynamics. This follows our recent work, which employed a similar strategy for a Laughlin state described by a CSGL action <cit.> and is in the same spirit as M. Stone's hydrodynamic interpretation of CSGL saddle point equations <cit.>. § BULK TOPOLOGICAL INVARIANTS FROM THE ALGEBRA OF FLUID POLARIZATION The governing equations of this system (<ref>)-(<ref>) admit an alternative formulation in terms of the fluid polarization. To construct the fluid polarization, we recognize q^2B/2πħ n^a as plasmon fluctuations of the model. These fluctuations can then be expressed in terms of polarization waves, under the identification q^2B/2πħn^a=-∂_iP_i^a , where P_i^a is the polarization field. Using equation (<ref>) and imposing that the polarization field must be gauge invariant, we find that P_i^a = q/2πϵ_ij∑_bK^ab(∂_jθ^b +α^b_j - q/ħA_j ) . That is, the polarization of one species is defined solely in terms of the opposite species. This can be made precise by shifting the Chern-Simons gauge field, that is, α_μ^a→α_μ^a + (q/ħ)A_μ, and identifying the terms of the form P_i^a E_i, where E_i is the external electric field. For more details, we refer to our previous work <cit.>. We can read the polarization algebra directly from the symplectic structure of the mutual Chern-Simons action, which gives us {P_i^a(x⃗) ,P_j^b(x⃗ ')} = q^2/2πħϵ_ijK^ab δ(x⃗-x⃗ ') . The polarization fields require the matter term ∂_iθ^a to ensure the consistency of the algebra between the polarization and the density fields, that is, {n^a(x⃗),P_j^b(x⃗ ')}=-2πħ/q^2B{∂_iP_i^a(x⃗),P_j^b(x⃗ ')}. We can decouple the polarization algebra by diagonalizing the K-matrix, which naturally introduces the polarization vectors for charge and pseudospin P_i^Q = P_i^1 + P_i^2 , P_i^S = P_i^1 - P_i^2 . As we will see in section <ref>, the same logic applied to the matter fields decouples the bulk equations. The decoupled polarization algebra is then expressed as {P_i^(Q)(x⃗) ,P_j^(Q)(x⃗ ')} = 2q^2/2πħϵ_ijδ(x⃗-x⃗ '), {P_i^(S)(x⃗) ,P_j^(S)(x⃗ ')} = -2q^2/2πħϵ_ijδ(x⃗-x⃗ '), {P_i^(Q)(x⃗) ,P_j^(S)(x⃗ ')} = 0 . Since polarization is crucially linked to the geometric Berry phase and associated Hall conductivity <cit.>, we can immediately read off the magnitude of the charge and pseudospin Hall conductivities, σ_xy^(Q)=-σ_xy^(S)=2q^2/2πħ, thereby quantifying the bulk invariant in terms of fluid variables. 
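The decoupling of the polarization algebra is controlled entirely by the K-matrix, since {P_i^a, P_j^b} ∝ ϵ_ij K^ab, so the charge and pseudospin combinations simply read off the quadratic forms u^T K u. A short symbolic check (sketch only; the ±2 outputs are the coefficients fixing the magnitudes of σ_xy^(Q) and σ_xy^(S)):

```python
import sympy as sp

# Mutual-CS K-matrix and the charge/pseudospin combinations
K = sp.Matrix([[0, 1], [1, 0]])
uQ = sp.Matrix([1, 1])    # P^Q = P^1 + P^2
uS = sp.Matrix([1, -1])   # P^S = P^1 - P^2

print((uQ.T * K * uQ)[0])  #  2 -> {P^Q, P^Q} ~ +2 (q^2/2 pi hbar) eps_ij
print((uS.T * K * uS)[0])  # -2 -> {P^S, P^S} ~ -2 (q^2/2 pi hbar) eps_ij
print((uQ.T * K * uS)[0])  #  0 -> charge and pseudospin sectors decouple
```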
In the subsequent sections, we extract the edge dynamics within the hydrodynamic equation by identifying the superfluid boundary conditions at the edge that are consistent with the anomalous edge dynamics and the bulk topological properties. § FIRST ORDER HYDRODYNAMICS AND THE CHOICE OF VELOCITY FIELD The equations defined in Eqs. (<ref>)-(<ref>) can be written in the form of hydrodynamic equations by identifying the velocity field. For the Laughlin state, the superfluid formulation was introduced by Stone in Ref. <cit.> leading to continuity and Euler equations. Within Stone's FQH fluid dynamics, the Euler equations possessed three derivatives of density fields (two derivatives of density in the stress tensor), which are the so-called “quantum pressure” terms. However, our recent works <cit.> have shown that one can change the velocity field definition such that the Euler equation possesses only second-order derivatives (one derivative of velocity in the stress tensor). This choice does not alter the bulk properties. The upshot is that the two physical boundary conditions applied to this velocity field will assume the familiar fluid dynamical forms of no-penetration, combined with either no-stress or no-slip. In the case of the bosonic integer quantum Hall (IQH) state, the velocity fields for the two components can be defined as follows: v_i^a=ħ/m(∂_iθ^a+α_i^a-q/ħA_i-1/2∑_b K^abϵ_ij∂_j n^b) . In terms of this velocity field, the system (<ref>)-(<ref>) becomes ϵ_ij∂_i v_j^a - 1/2ω_Bℓ_B^2∇^2 n^a - ω_B ∑_b K^ab n^b =0 , ∂_t n^a + ∂_i v_i^a = 0 , ∂_t v_i^a - ∂_j T_ij^a - ω_Bϵ_ij∑_b K^abv_j^b =0 , where we've introduced the length and time scales set by the magnetic length ℓ_B^2=ħ/qB and cyclotron frequency ω_B=qB/m. The linearized stress tensor is given by T_ij^a = -δ_ijP^a + 1/2ω_Bℓ_B^2 ∑_b K^ab(ϵ_ik∂_k v_j^b + ϵ_jk∂_i v_k^b ) , with pressure P^a =ℓ_B^2ω_B^2 n^a + 1/π m ℓ_B^2∑_b V^ab n^b , where we have used that V^ab=V^ba, which follows directly from ℒ_int. Note that the Lorentz force term in one system is sourced by the velocity of the other species. Additionally, the form of the stress tensor (<ref>) suggests that the off-diagonal hydrodynamic stresses exerted on one species originate entirely from the flow of the opposing species. Furthermore, this stress tensor takes the form of classical odd viscosity <cit.>, which stems from our definition of velocity [This is also true for all two-dimensional superfluids modeled by a Gross-Pitaevskii equation in terms of the Madelung variables, where the quantum pressure terms written in the standard velocity definition of u_i=∂_iθ take the form of odd viscosity in terms of v_i=(ħ/m)(∂_i θ-1/2ϵ_ij∂_j n) ]. § CHIRAL EDGE MODES FROM FLUID DYNAMICAL BOUNDARY CONDITIONS In addition to bulk equations, a fluid system must be accompanied by appropriate boundary conditions. In principle, there exists a family of boundary conditions that correspond to different physical scenarios, but typically we choose ones that capture the observed or expected physics near the boundary. This is also the case in conventional fluid dynamics where we pick no-penetration and no-slip boundary conditions (zero velocity) when we study the motion of a solid body in water and no-stress (force balance) at two-fluid interfaces such as oil in water. Note that two boundary conditions are required to consistently solve for the fields with second-order derivatives in the equations of motion. For quantum fluids defined in Eqs. 
(<ref>-<ref>) in their ground state, we enforce that the boundary conditions are energy-conserving. Even though energy dissipation can be introduced at the boundaries in some restricted sense <cit.>, we will not consider boundary conditions that do not conserve energy. To this end, we consider the additional energy conservation equation ∂_t ℋ + ∂_i 𝒬_i =0 , where ℋ contains typical kinetic terms, as well as any additional potential energy terms, and 𝒬_i is the energy current. To analyze the boundary conditions we take the fluid domain to be the lower half plane y ≤ 0, with a rigid interface along the x axis. Once ℋ and 𝒬_i are identified, we can enforce conservation of energy including both bulk and boundary terms in the following way dE/dt = ∫ d^2 x ∂_t ℋ = -∫ d^2x ∂_i𝒬_i = -∫ dx 𝒬_y |_y=0 , where we've used the divergence theorem and assumed all quantities vanish far from the boundary. For energy to be conserved we enforce that 𝒬_y |_y=0 = 0 . Along with particle number (or equivalently, mass or charge) conservation, this gives the second boundary condition required in a second-order system. The Hamiltonian for our linear system is [Note our Hamiltonian does not possess any terms of the form ∼ (∂_i n)^2, which typically arise in a linear superfluid Hamiltonian. In the two-fluid case, these terms cancel, as can be checked by examining the full nonlinear theory.] ℋ = ∑_a(m/2 v_i^av_i^a + m/2ω_B^2ℓ_B^2 (n^a)^2 ) +1/2πℓ_B^2∑_abV^abn^a n^b . Using the equations of motion we find the corresponding conserved current, satisfying (<ref>), to be 𝒬_j = - m ∑_av_i^a T_ij^a . The no-energy dissipation condition in Eq. (<ref>) imposes the following constraint, 𝒬_y|_y=0 = - m ∑_a (v_x^aT_xy^a+ v_y^aT_yy^a) =0. Within the above restriction, we can deduce energy-preserving fluid dynamical boundary conditions. One standard choice is the no-penetration condition, which says that the fluid does not flow into a hard wall v_y^a|_y=0 =0 . This eliminates the second term of (<ref>). The second boundary condition must then force the first term to vanish. In fact, there are two separate classes of boundary conditions that allow this to happen. The first case (Case I) is known as the partial slip condition, and can be expressed as T_xy^1|_y=0= - λ v_x^2|_y=0 , T_xy^2|_y=0=λ v_x^1|_y=0 . The parameter λ corresponds to an inverse slip length and it interpolates between the no-stress condition for λ=0 and the no-slip condition for λ→∞. For intermediate values of λ, energy conservation requires that the tangent stress of one component generates a partial slip in the second component. The second class of boundary conditions (Case II) is given by, v_x^1|_y=0=-γ v_x^2|_y=0 , T_xy^2|_y=0=γ T_xy^1|_y=0 . This condition, parameterized by γ, matches the tangent velocity and the tangent stress of the two layers. From the standpoint of energy conservation, all these boundary conditions are equally valid, though they result in different edge physics. We now investigate how these boundary conditions encapsulate the anomaly inflow mechanism of the topological phase. In our recent work, we demonstrated that if the system is anticipated to exhibit anomaly-induced chiral modes that propagate along the boundary, the no-stress condition is preferred over the no-slip condition. This is because the no-slip condition by definition forbids any chiral dynamics along the edge, whereas the no-stress condition results in chiral edge dynamics induced by the anomaly inflow mechanism <cit.>. 
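Both families of boundary conditions can be checked directly against the constraint 𝒬_y|_y=0 = 0: once no-penetration removes the v_y^a T_yy^a terms, the remaining tangential contribution cancels identically. A minimal symbolic sketch (symbol names are ours):

```python
import sympy as sp

m, lam, gam = sp.symbols('m lambda gamma', real=True)
v1x, v2x, T1xy = sp.symbols('v1x v2x T1xy', real=True)

# With v_y^a = 0, the energy flux reduces to Q_y = -m*(v1x*T1xy + v2x*T2xy).
# Case I (partial slip): T_xy^1 = -lam*v_x^2 and T_xy^2 = +lam*v_x^1
Qy_caseI = -m * (v1x * (-lam * v2x) + v2x * (lam * v1x))
print(sp.simplify(Qy_caseI))   # 0: no energy leaks through the wall

# Case II: v_x^1 = -gam*v_x^2 and T_xy^2 = gam*T_xy^1
Qy_caseII = -m * ((-gam * v2x) * T1xy + v2x * (gam * T1xy))
print(sp.simplify(Qy_caseII))  # 0: also energy conserving
```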
We first consider the two-component generalization of the boundary conditions considered for the Laughlin state in Ref. <cit.>. These conditions correspond to the λ=0 limit of the partial slip conditions and are given by, v_y^a|_y=0 =0 , T_xy^a|_y=0=0 . Using the continuity Eq. (<ref>) combined with the no-penetration condition, we observe that the no-stress condition can be written in a dynamical form as [∂_t n^a + 2 ∂_x v_x^a]_y=0=0 . In Sec. <ref>, we solve for the bulk and edge dispersion for the hydro equations (<ref>-<ref>) together with boundary conditions (<ref>). We then consider the most general boundary conditions and show how the edge dynamics change as a function of the boundary parameters λ and γ. In Sec. <ref> we construct an effective action for both cases, which requires the addition of an auxiliary chiral boson field at the edge to obtain the correct boundary conditions. Following Refs. <cit.>, the dynamical form of this boundary condition can be derived from an edge action that needs to be added to the original mutual CSGL action. This action turns out to be the chiral boson action, also coupled to the background density, S_edge=ħ/2∑_±∫ dx dt ∂_tϕ^± (∂_x ϕ^±-n_±/(√(2π)ℓ_B)), where ϕ^± are the auxiliary bosonic edge fields analogous to Wen's chiral boson action. See Ref. <cit.> for a detailed derivation of this action for the CSGL theory corresponding to the Laughlin state. We emphasize that even though it may appear that we have introduced an additional field ϕ in the edge action, ϕ is completely determined in terms of the density and velocity fields taken at the edge, resulting in the boundary equation given in Eq. <ref>. Furthermore, the linearized bulk and edge modes can be calculated directly from the hydro equations subject to the consistent boundary conditions without invoking the chiral boson action. § CHARGE AND PSEUDOSPIN BASIS The linear system (<ref>-<ref>) is naturally coupled, due to the structure of the K-matrix. We can decouple it into two independent systems by introducing the charge and pseudospin, with densities and velocities defined as ρ^Q = n^1 + n^2 , ρ^S = n^1 - n^2 , V_i^Q = v_i^1 + v_i^2 , V_i^S = v_i^1 - v_i^2 . Additionally, we take each species to have the same self-interaction energy V^11=V^22. This assumption can be relaxed but requires modification to the above definitions. The above linear transformation will decouple the system, and give the resulting modes a physical meaning: modes that carry charge, and modes that carry pseudospin. In these variables, we have two decoupled subsystems ϵ_ij∂_i V_j^α + L̂^αρ^α = 0 , ∂_t ρ^α + ∂_i V_i^α = 0 , ∂_t V_i^α + (c^α)^2∂_i ρ^α + L̂^αϵ_ijV_j^α = 0 , where α=Q,S. For brevity we've introduced the operator L̂^Q =-ω_B (1+1/2ℓ_B^2∇^2) for the charge system, and L̂^S =ω_B (1+1/2ℓ_B^2∇^2) for the pseudospin system. The only difference between the two systems is the sign of ω_B. We've also defined (c^Q)^2 = 1/π m ℓ_B^2(V^11 + V^12) + ℓ_B^2ω_B^2, (c^S)^2 = 1/π m ℓ_B^2(V^11 - V^12) + ℓ_B^2ω_B^2 , as the sound velocity associated with the charge and pseudospin, respectively (recall that with our assumptions V^11=V^22 and V^12=V^21). The boundary conditions can also be recast in the charge and pseudospin variables. For λ=0 in Case I, the boundary conditions remain decoupled V_y^α|_y=0 = 0 , [ ∂_t ρ^α + 2 ∂_x V_x^α]_y=0 =0 . 
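The decoupling used in this section can be checked symbolically: the charge/pseudospin combinations diagonalize the interaction matrix (once V^11=V^22 is assumed) as well as the K-matrix, which is what produces the two sound speeds c^Q and c^S. A quick sympy sketch (illustrative only; symbols are ours):

```python
import sympy as sp

V11, V12 = sp.symbols('V11 V12', real=True)
K = sp.Matrix([[0, 1], [1, 0]])
V = sp.Matrix([[V11, V12], [V12, V11]])   # assumes V^11 = V^22, V^12 = V^21

uQ = sp.Matrix([1, 1])    # charge:     rho^Q, V^Q
uS = sp.Matrix([1, -1])   # pseudospin: rho^S, V^S

# Both matrices share these eigenvectors, so the linear system splits in two:
print(K * uQ - uQ, K * uS + uS)                # zero: K uQ = +uQ, K uS = -uS
print(sp.simplify(V * uQ - (V11 + V12) * uQ))  # zero: fixes (c^Q)^2
print(sp.simplify(V * uS - (V11 - V12) * uS))  # zero: fixes (c^S)^2
```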
For a general λ value we still retain the no-penetration condition (<ref>), but the form of (<ref>) implies the no-stress condition is replaced by 1/2ω_Bℓ_B^2[ ∂_t ρ^α + 2 ∂_x V_x^α]_y=0 =-λ∑_βϵ^αβ V_x^β|_y=0. Physically this boundary condition implies that the stress generated by the edge charge (spin) generates a slip for the spin (charge) component. It's also clear that nonzero λ spoils the edge continuity equation, and as we will see, gaps out the edge modes. For Case II, we likewise keep the no-penetration condition, but the charge/spin variables (<ref>) can be rewritten as (γ+1)V_x^Q=(γ-1)V_x^S , along with a continuity equation of the form ∂_t [(1-γ)ρ^Q +(1+γ)ρ^S] + 2∂_x [(1-γ)V_x^Q+(1+γ)V_x^S]=0 . This indicates that one of the emergent edge U(1) symmetries is maintained for this class of boundary conditions, and a gapless edge mode is still present. However, an interesting aspect of this U(1) symmetry is that the γ coefficient can deform the edge charge and current in a way that need not respect spectral flow conditions with either the charge or pseudospin bulk bands. Consequently, this case leads to the so-called fragile surface states <cit.> that live as a separate band and do not begin or end at the bulk bands except at γ=± 1. § MODE STRUCTURE In this section, we explicitly solve for the mode structure of the bulk and boundary, derived from the fluid dynamical boundary conditions. We find the bulk modes of the system by expanding the fields η⃗ ^α=(n^α,V_x^α,V_y^α) as η⃗ ^α = ∫ dω d^2 q η̃ ^α e^-iω t + i q⃗·x⃗ , which readily gives the bulk dispersion ω^α = ±√((c^α)^2 q^2 + ω_B^2(1 - 1/2ℓ_B^2 q^2)^2), and corresponding eigenvectors η̃ ^α = (q^2 , ω q_x - iq_y L^α , ω q_y + iq_x L^α ) , where L^Q = -ω_B(1 - 1/2ℓ_B^2 q^2) for the Q system, and L^S = ω_B(1 - 1/2ℓ_B^2 q^2) for the S system. The structure of these bulk bands is the same for each subsystem, the only difference being the sound velocity. To prevent interband mixing and guarantee our state remains in the lowest Landau level, the original potentials must satisfy V^11 + V^12 < 2πħℓ_Bω_B, V^11 - V^12 < 2πħℓ_Bω_B. Additionally, we must have V^11 + V^12≥ 0 and V^11 - V^12≥ 0 for this state to be a stable minimum (again recall that with our assumptions V^11=V^22 and V^12=V^21). We now show how the expected edge modes arise from the hydrodynamic boundary conditions. For details of the edge mode calculations, we refer the reader to <cit.>. First, we expand the fields in modes localized near the hard wall boundary at y=0, η⃗ ^α = ∫ dω d k ∑_σ=1^2 C_σ^α η̃ ^α e^-iω t + i k x + s_σ y, where C_σ^α is some expansion coefficient. Here s_σ is a solution to the polynomial equation that arises by taking (q_x,q_y) → (k,-is) in (<ref>). Importantly, we require that s have a positive real part to guarantee solution decay into the bulk (lower half-plane). The quartic polynomial admits exactly two roots with positive real parts, leading to the two terms in (<ref>). The eigenvectors take the same form as (<ref>), with the same replacement (q_x,q_y) → (k,-is). First, we apply the no-penetration boundary conditions (<ref>). It's straightforward to show that this can be satisfied by setting V_y^α=0 in the entire fluid domain and taking the edge dispersions to be ω^Q =-c^Q k , ω^S =c^S k . These are known as Kelvin modes, in analogy with the coastal Kelvin modes present at the boundary of the shallow water model of ocean waves <cit.>. These modes are naturally non-dispersive, and as expected for an SPT phase, they are counter-propagating. 
A charge Kelvin wave propagates in the negative x direction, while a pseudospin Kelvin wave propagates in the positive x direction. The direction of these waves is ultimately set by the original external magnetic field, but is protected by the U(1) × U(1) symmetry associated with charge and pseudospin conservation. Alternatively, for the appropriate choice of expansion coefficients C_σ^α, we can satisfy the no-penetration boundary condition while keeping a non-vanishing V_y^α in the bulk. This leaves only a single overall amplitude, which allows us to write Ṽ_x^α at the boundary in terms of ñ^α. Comparing the components of the eigenvectors we can write Ṽ_x^α = χ^α(ω, k) ñ^α , where χ^α(ω, k) is the ratio of Ṽ_x^α to ñ^α evaluated at y=0. Again, for explicit details of this calculation see <cit.>. To gain insight into the behavior of the modes we can expand χ^α(ω, k) for small ω and k χ^Q(ω, k) = -c^Q (1 - 1/4ℓ_B^2 k^2 + ⋯) , χ^S(ω, k) = c^S (1 - 1/4ℓ_B^2 k^2 + ⋯). The above relations are useful for analyzing the various second boundary conditions (<ref>-<ref>). We now split the analysis into cases. §.§ Case I with λ=0 (No-Stress) First, we look at the no-stress condition which gives us the expected edge modes predicted in <cit.>. Using the relations (<ref>), (<ref>), and (<ref>), we find the following expansion for the edge mode dispersions ω^Q = -c^Q(2 k - ℓ_B^2/2 k^3 +⋯) , ω^S = c^S(2 k - ℓ_B^2/2 k^3 +⋯) . Following the terminology of the single fluid FQH analysis <cit.> we call these chiral boson modes. As with the Kelvin modes they counter-propagate, with the charged mode moving in the negative x direction, and the pseudospin mode moving in the positive x direction. For a full numerical solution of the Kelvin mode and the λ=0 chiral boson see Fig. <ref>. §.§ Case I with general λ: Gapped surface states Next, we analyze the more general condition for λ≠ 0. Using (<ref>) we write (<ref>) as a 2×2 system in Fourier space for the variables ρ̃^Q and ρ̃^S. The determinant of this system gives the defining relation (ω - 2 k χ^Q)(ω - 2 k χ^S)+4λ^2/ω_B^2ℓ_B^4χ^Qχ^S = 0 . For λ=0 we clearly recover (<ref>) and (<ref>). Remarkably, for nonzero λ we find that these modes are gapped, with a bandgap of size ω_0=(4λ/ω_Bℓ_B^2)√(c^Q c^S). In Fig. <ref> we give the full numerical solution, where we clearly see that increasing values of λ increases the gap size, and the two chiral boson modes, discussed above, are joined. For small ω and k we may use (<ref>) and (<ref>), which gives us the following leading-order dispersion ω = ±2 λ/ω_Bℓ_B^2√(c^Q c^S) - (c^Q-c^S) k + ⋯ . Note that c^Q-c^S is only nonzero in the presence of the off-diagonal potential term V^12. In fact, all odd powers of k contain the same overall prefactor. §.§ Case II with general γ: Fragile surface states Finally, we analyze the conditions (<ref>) and (<ref>), which, as we show, do not produce a gapped edge mode. We apply the same technique as above and find the defining relation to be (ω-2kχ^Q)(1-γ)^2χ^S - (ω-2kχ^S)(1+γ)^2χ^Q = 0. Here we see that γ=-1 gives a charge-carrying chiral boson mode, and γ=1 gives a pseudospin-carrying chiral boson mode, leading to either (<ref>) or (<ref>), but not both. For general γ this boundary condition interpolates between the two types of edge modes; see Fig. <ref> for the full numerical solution. Note that if we multiply both sides of Eq. (<ref>) by 1/γ^2 and absorb it into the (1±γ)^2 factor, we see a symmetry γ→ 1/γ that preserves the relation. 
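Before expanding analytically, both defining relations can be explored numerically in the leading-order limits χ^Q ≈ -c^Q and χ^S ≈ +c^S. A minimal numerical sketch (all parameter values are illustrative; units with ħ = m = 1 are assumed):

```python
import numpy as np

cQ, cS, wB, lB, lam = 1.0, 0.8, 1.0, 1.0, 0.1
ks = np.linspace(-0.5, 0.5, 201)

# Case I: (w + 2k cQ)(w - 2k cS) - (4 lam^2/(wB^2 lB^4)) cQ cS = 0 is a
# quadratic in w; its two branches are separated by a gap.
c0 = 4 * lam**2 * cQ * cS / (wB**2 * lB**4)
pair = np.array([np.sort(np.roots([1.0, 2 * k * (cQ - cS),
                                   -4 * k**2 * cQ * cS - c0]).real)
                 for k in ks])
print((pair[:, 1] - pair[:, 0]).min())            # direct gap from the bands
print(4 * lam * np.sqrt(cQ * cS) / (wB * lB**2))  # agrees with omega_0

# Case II is linear in w: a single gapless branch whose group velocity
# vanishes (flat band) at gamma = 0 and as gamma -> infinity.
for gam in (0.0, 0.5, 1.0, 2.0):
    vg = 8 * gam * cQ * cS / ((1 + gam)**2 * cQ + (1 - gam)**2 * cS)
    print(gam, vg)
```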
For small ω and k we find the leading-order dispersion ω = [4γ c^Q c^S/((1+γ)^2 c^Q + (1-γ)^2 c^S)](2k - ℓ_B^2/2 k^3 +⋯) . Note that for γ=0 and γ→∞, the band becomes flat and this mode detaches from the bulk. Furthermore, as γ transitions from the charged chiral boson (γ=-1) to the pseudospin chiral boson (γ=1), there must be a detachment and flat band leading to the fragile surface states for γ≠± 1. § BOUNDARY CONDITIONS FROM VARIATIONAL PRINCIPLE The set of possible boundary conditions discussed in this work can also be obtained from a variational principle. In the following, we will write an effective edge action that can be added to the original CSGL action, which, upon varying, yields the various hydro boundary conditions discussed in the previous sections. We emphasize that the effective action we derive contains several features of the chiral boson edge theory from the K-matrix formalism. However, this effective action is classical, and to make a more quantitative comparison with the chiral boson theory, we need to quantize this action. Quantization of this effective edge action is beyond the scope of this work and will be considered in a separate publication. The equations of motion (<ref>-<ref>) arise as the saddle points of the action (<ref>). This implies that after varying the action, the “coefficients” of the field variations on the bulk are set to zero. Keeping the variation of the fields at the boundary unconstrained, the boundary conditions are simply the equations of motion generated by the field variations projected at the boundary. For a fluid restricted to the lower half plane, we observe that the variation of the quadratic part of the action (<ref>) generates the following boundary terms: δ S_bulk =-ħ/4π∫ dtdx ∑_a[2qB/mδθ^a(∂_yθ^a+α_y^a-q/ħA_y)+qB/2mδ n^a∂_yn^a+∑_b K^ab(δα_x^a α_0^b-δα_0^a α_x^b)]_y=0. By definition, the bulk variation of the action vanishes on equations of motion. Note that the variation (<ref>) does not generate the boundary conditions discussed in this work. To obtain the hydro boundary conditions discussed in previous sections, we need to add a boundary action to S_bulk. Below we consider the edge action associated with the various hydro boundary conditions. §.§ No-penetration and no-slip boundary condition The no-penetration condition in the original CSGL variables can be expressed as, (∂_yθ^a+α_y^a-q/ħA_y+1/2∑_b K^ab∂_x n^b)|_y=0=0 . Note that the δθ^a variation only leads to the first three terms and the last term is missing. Furthermore, for this boundary equation to be consistent with one of the equations of motion (<ref>), we must also impose that ∑_bK^ab(∂_tα^b_x-∂_xα_0^b+qB/2m∂_x n^b)|_y=0=0 . The general solution for this expression can be written as (α_0^a-qB/2m n^a)|_y=0 =∂_tζ^a , α_x^a|_y=0 =∂_xζ^a , where ζ^a are general (undetermined) functions. It is straightforward to show that these equations along with the full form of the no-penetration condition (<ref>) can be obtained by adding the following boundary action to the system: S_edge =ħ/4π∫∑_a,bK^ab[qB/m(∂_xθ^a+1/2α_x^a-q/ħA_x)n^b+∂_tζ^aα_x^b-∂_xζ^a(α_0^b-qB/2mn^b)]_y=0dtdx . Equation (<ref>) arises from the variation of θ^a at the boundary, Eq. (<ref>) comes from the variation of the boundary field ζ^a, and Eqs. (<ref>-<ref>) are obtained from the variation of the gauge field α_μ^a projected at the boundary. The second hydrodynamic boundary condition arises from the variation of n^a taken at the boundary, which gives us [∑_b K^ab(∂_xθ^b+1/2α_x^b-q/ħA_x+1/2∂_xζ^b)-1/2∂_y n^a]_y=0=0 . 
This expression becomes the no-slip boundary condition for both charge and pseudospin components after using equation (<ref>). The first line in the edge action S_edge is somewhat unsettling since it does not come in the gauge invariant form ∂_xθ^a+α^a_x. Nevertheless, the gauge invariance of this action can be seen explicitly through a field redefinition, that is, after the replacements: α_0^a = qB/2m n^a+α̃_0^a , α_i^a =α̃_i^a , ζ^a = ζ̃^a+θ^a . Therefore, denoting S_CSGL=S_bulk+S_edge, the resulting action assumes the familiar form derived in our previous work <cit.>: S_CSGL =-qB/2π∫∑_a[n^a(∂_tθ^a+α̃_0^a-q/ħA_0)+α̃_0^a+qB/2mn^a+ħ/2m(∂_iθ^a+α̃_i^a-q/ħA_i)^2+ħ/8m(∂_i n^a)^2+∑_b n^a(ħ/4mK^abϵ_ij∂_iα̃^b_j+qB/2πħ^2 V^ab n^b)-ħ/2qBϵ_μνκ∑_b K^ab(α̃_μ^a+∂_μθ^a)∂_να̃_κ^b]d^3x +ħ/4π∫ dt dx∑_a,b K^ab[ζ̃^a(∂_xα̃_0^b-∂_tα̃_x^b)+qB/m(∂_xθ^a+α̃_x^a-q/ħA_x)n^b]_y=0. The no-penetration condition will always be obtained by varying the θ^a fields, and we will fix this as one of the boundary conditions, as we have done in previous sections. However, there are different possibilities for the second boundary condition. The simplest one is the no-slip condition, which arises naturally from the variation of the fluid density. However, the no-stress, partial slip, and fragile surface state boundary conditions are additional dynamical equations in disguise and require the introduction of auxiliary fields at the boundary, as we will outline below. For the single component Laughlin state, we have shown that the effective edge action includes an auxiliary field with the chiral boson action coupled to the background density at the edge <cit.>. We now develop a generalization of the Laughlin case to the two-component bosonic IQH state. §.§ Effective edge action for no-stress and partial slip boundary condition The presence of two components allows for a more general family of energy-conserving boundary conditions. The first case is that of partial slip, where the edge tangent stress of one component induces a tangent velocity or slip at the boundary of the other component. We will now deduce the effective action that generates the partial slip condition. The no-stress condition, where the two edges are completely decoupled, will be obtained by setting the slip length λ^-1 to infinity, that is, λ = 0. First, we need to express the partial slip condition in its dynamical form, which corresponds to: ∑_b [ħ/2mK^ab(∂_t n^b + 2 ∂_x v_x^b)+λ ϵ^ab v_x^b]_y=0=0 . From the last section, we obtained that δ S_bulk+δ S_edge=…+qB/4π∫ dt dx ∑_a,bδ n^a K^abv_x^b , together with the no-penetration condition. Since we do not want to spoil the latter, this additional action must be only a function of n^a and the auxiliary field ϕ^a. Following Refs. <cit.>, we see that such an action must be of the form S_CB =-qB/4π∫ dt dx∑_a,b[K^ab∂_tϕ^a(∂_xϕ^b+n^b|_y=0) - mλ/ħϵ^abϕ^a∂_tϕ^b ]. Combining S_CSGL+S_CB, we find that the variation of n^a projected on the boundary gives us v_x^a|_y=0=∂_tϕ^a, whereas the equation of motion for ϕ^a reads ∑_b[K^ab(∂_tn^b|_y=0+2∂_x∂_tϕ^b)+2mλ/ħϵ^ab∂_tϕ^b]=0 . Upon using Eq. (<ref>), this expression coincides with Eq. (<ref>). Note that when λ=0, we recover the result in Ref. <cit.>, which describes the no-stress condition. The limit λ→∞ can be taken upon the redefinition ϕ^a→√(λ) ϕ^a. This forces the first line in the action S_CB to vanish and we recover the no-slip condition. 
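As a consistency check, the ϕ^a equation of motion above, combined with the identification v_x^a|_y=0 = ∂_tϕ^a, should be nothing but the dynamical partial slip condition in disguise. A short symbolic verification (a sketch; the sign convention for ϵ^ab is fixed the same way in both expressions):

```python
import sympy as sp

t, x, m, lam, hbar = sp.symbols('t x m lambda hbar', positive=True)
n1, n2 = sp.Function('n1')(t, x), sp.Function('n2')(t, x)
p1, p2 = sp.Function('phi1')(t, x), sp.Function('phi2')(t, x)
K = sp.Matrix([[0, 1], [1, 0]])
eps = sp.Matrix([[0, 1], [-1, 0]])
n, phi = sp.Matrix([n1, n2]), sp.Matrix([p1, p2])

# Edge equation of motion obtained from S_CSGL + S_CB:
eom = K * (n.diff(t) + 2 * phi.diff(t).diff(x)) \
      + (2 * m * lam / hbar) * eps * phi.diff(t)

# Dynamical partial slip condition with v_x^a = dt(phi^a):
v = phi.diff(t)
slip = (hbar / (2 * m)) * K * (n.diff(t) + 2 * v.diff(x)) + lam * eps * v

print(sp.simplify(eom * hbar / (2 * m) - slip))   # zero vector: identical
```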
§.§ Effective edge action for fragile surface states The one-parameter family of boundary conditions described in equation (<ref>) that leads to the fragile surface states is particularly intriguing. This case can also be derived using a variational principle, offering insights into the origins of these fragile states. With this set of boundary conditions, only one of the equations is dynamical, necessitating just one auxiliary field. By rewriting the second equation in (<ref>) in its dynamical form, we obtain [∂_t(n^1-γ n^2)+2∂_x(v_x^1-γ v_x^2)]_y=0 =0 . The above equation expresses an emergent U(1) symmetry at the edge, but the γ parameter deforms the conserved local edge charge (n^1-γ n^2), in contrast to the bulk U(1) charges, which are fixed to be (n^1± n^2). Thus the edge U(1) is compatible with one of the bulk U(1) symmetries only for γ=± 1, for which the edge mode begins and ends at the corresponding bulk band. For values of γ∈ (-1,1), the edge mode decouples from the bulk bands, as shown in Fig. <ref>. The additional edge action that generates the fragile-state boundary conditions is given by S_chiral =qB/4π∫ dt dx[2γ∂_tϕ∂_xϕ-∂_tϕ (n^1-γ n^2)|_y=0]. Thus, varying the action S_CSGL+S_chiral gives us δ S_CSGL+δ S_chiral=…+qB/4π∫ dt dx [δ n^1(v_x^2-∂_tϕ) + δ n^2(v_x^1+γ∂_tϕ)+δϕ(∂_t(n^1-γ n^2)-4γ∂_x∂_tϕ)]_y=0. From the above variations, the boundary conditions (<ref>) leading to the fragile surface state follow naturally. § TOPOLOGICAL CHERN-SIMONS THEORY, CHIRAL BOSON AND NO-STRESS BOUNDARY CONDITION For a very large magnetic field the system has a very large gap, and bulk excitations are highly suppressed. This allows us to neglect the bulk dynamics and approximate the bulk Hamiltonian by zero. In other words, we can neglect the matter fields in the CSGL action and focus only on the topological Chern-Simons theory. However, the Chern-Simons action in the absence of matter is not gauge invariant in the presence of a boundary. To see this, note that under a gauge transformation α_μ^a→α_μ^a+∂_μχ^a, the mutual Chern-Simons term changes by a pure boundary contribution proportional to ∑_a,bK^ab∫ dt dx [χ^a(∂_tα_x^b-∂_xα_0^b)]_y=0. Restoring gauge invariance therefore requires additional chiral degrees of freedom at the edge, which is precisely the role played by the auxiliary chiral boson fields introduced above. § DISCUSSION AND OUTLOOK In this work, we consider a fluid dynamical description of the integer quantum Hall effect of bosons modeled by a two-component Chern-Simons-Ginzburg-Landau (CSGL) theory with a mutual Chern-Simons (CS) statistical term with a K-matrix of K=[ 0 1; 1 0 ] implementing the flux attachment. Contrary to the traditional approach of discarding the bulk bosonic matter and focusing only on the gauge-invariant chiral edge dynamics, we investigate the linearized superfluid hydrodynamics of the bulk bosonic matter subject to energy-conserving boundary conditions. We derive both bulk and edge topological properties within the hydrodynamical framework. We show that the bulk topological invariant is encoded in the fluid polarization algebra in the form of quantized Hall conductivity. We then deduce different kinds of energy-conserving hydro boundary conditions at the hard wall. Since the hydro equations are second-order in derivatives, we need two boundary conditions to determine the full solution. The first boundary condition is the no-penetration condition v^a_y|_y=0=0, which we do not change across different cases. The second boundary condition admits more possibilities and corresponds to four distinct types of edge dynamics consistent with energy conservation. The first possibility is the no-slip condition v^a_x|_y=0=0, which does not result in chiral edge dynamics. The second case is the no-stress condition, resulting in two counterpropagating chiral edge modes that we identify as chiral bosons disguised as hydro boundary conditions. 
The third case corresponds to the partial slip condition, where the tangent stress in one of the fluid components generates slip in the second component and vice versa. This case corresponds to the mixing of the two chiral modes, leading to the gapping of the two chiral boson modes. We point out that opening a gap in the quantized chiral boson theory requires nonlinear mixing of the edge modes via a sine-Gordon term. In terms of the fluid variables, the gapping is achieved in a much more straightforward way. The last case is the most interesting one and emphasizes the real power of our approach. In this case, we balance the tangent stress between the two layers and balance the tangent velocities (slips) across the two layers with a single parameter. This case results in a single chiral boson mode that, under general conditions, does not begin and end at the bulk bands. This case has been recently reported in Ref. <cit.> as fragile surface states manifesting in a non-Wigner-Dyson class of non-interacting topological insulators. It is interesting to note that the fragile surface states manifest in the presence of bulk matter and would be missed by the edge theories deduced from the K-matrix formalism, which does not include any bulk matter. We also obtain a symmetry perspective of the fragility of these edge states in terms of the density associated with the edge U(1) symmetry, where we can quantify the precise conditions under which the edge states detach from the bulk bands. We construct effective actions for all these boundary conditions that can be added to the original CSGL theory as the new starting point to understand the bulk and boundary properties of these states. We show the existence of non-anomalous counter-propagating Kelvin modes <cit.>, which are non-dispersive and seem to be present due to Lorentz invariance within the Kelvin mode solution. In the future, it would be interesting to investigate the general conditions for the presence of these fragile surface states across interacting topological phases that are amenable to the fluid dynamics treatment. We also aim to quantify the microscopic mechanisms that underpin these boundary conditions, which will allow us to construct microscopic lattice models that encode a wider class of boundary phenomena in these topological phases. The advantage of this fluid description is seen in the edge dispersions, whereby everything is written solely in terms of bulk parameters. The charged chiral boson mode, for example, has a leading-order group velocity of v_g^(Q)|_k=0 = -2√(1/π m ℓ_B^2(V^11+V^12) + ℓ_B^2ω_B^2) = -2c^Q. Thus, if one can experimentally measure the velocity of the edge modes, the interaction potentials and effective mass of the bulk composite bosons can be extracted. In theory, if one can find all four edge modes, the three interaction parameters can all be found, along with the effective mass. The experimental techniques discussed in <cit.> could be used in this case. It may also be interesting to explore the nature of two-fluid edge modes in the context of other systems. With regard to the shallow water model referenced above, it may be that the coupled dynamics of the ocean and atmosphere can be modeled as a two-fluid system in a similar fashion <cit.>. This would add to our understanding of topologically protected edge modes in the context of geophysics. Two-fluid models are also encountered in plasmas, where the ions and electrons are taken to be separate species. 
Our analysis presented here may aid in the understanding of edge-localized modes <cit.>, destructive bursts that potentially damage experimental devices. Lastly, two-fluid models have historically been introduced to describe superfluid helium, where one species is treated as a normal fluid and the other a true superfluid <cit.>. Though the theory of superfluid helium is well understood at this point, the edge modes should bear striking similarities to the ones we've discussed. Understanding this connection would strengthen our understanding of, and give further justification to, fluid descriptions of quantum phases. Apart from the direct application to IQH bosons and SPT phases we've outlined here, these potential applications highlight the importance of understanding chiral edge modes associated with two-fluid models. § ACKNOWLEDGMENTS We thank Matt Foster for useful comments and discussions. SG and DR are supported by NSF CAREER Grant No. DMR-1944967. Part of this work was performed at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611.
http://arxiv.org/abs/2405.10169v1
20240516150742
Most likely configurations for fermion localization in a Braneworld-$f(Q,B_Q)$
[ "A. R. P. Moreira", "Shi-Hai Dong", "M. E. Rodrigues" ]
gr-qc
[ "gr-qc" ]
allan.moreira@fisica.ufc.br Research Center for Quantum Physics, Huzhou University, Huzhou, 313000, P. R. China. Secretaria da Educação do Ceará (SEDUC), Coordenadoria Regional de Desenvolvimento da Educação (CREDE 9), Horizonte, Ceará, 62880-384, Brazil dongsh2@yahoo.com Research Center for Quantum Physics, Huzhou University, Huzhou, 313000, P. R. China. Centro de Investigación en Computación, Instituto Politécnico Nacional, UPALM, CDMX 07700, Mexico esialg@gmail.com Faculdade de Ciências Exatas e Tecnologia, Universidade Federal do Pará, Campus Universitário de Abaetetuba, 68440-000, Abaetetuba, Pará, Brazil Faculdade de Física, Programa de Pós-Graduação em Física, Universidade Federal do Pará, 66075-110, Belém, Pará, Brazil This study delves deeply into braneworld scenarios within modified gravity models, investigating their impact on particle localization and the structure of branes. Through a comprehensive blend of numerical analyses and theoretical inquiries, we unravel a nuanced correlation between deviations from standard General Relativity (GR) and the emergence of split branes. By employing probabilistic measurements, we pinpoint stable configurations that align with brane division intervals, thus challenging prevailing assumptions regarding the gravitational framework of our universe. Furthermore, our investigation extends to the localization of fermions within the brane, exposing intricate dynamics shaped by scalar field characteristics and modifications to gravitational models. By harnessing quantum information measurements, notably Shannon entropy, we discern heightened probabilities of fermion localization within the brane as gravitational models diverge from standard paradigms. This underscores the limitations of General Relativity in comprehensively describing the complexities inherent in our universe. Lastly, our exploration of massive fermions unveils their potential to breach the confines of the brane, hinting at promising avenues for future experimental endeavors aimed at probing the nature of extra dimensions and gravitational interactions. This suggests exciting prospects for advancing our understanding of fundamental physics beyond conventional boundaries. Most likely configurations for fermion localization in a Braneworld-f(Q,B_Q) Manuel E. Rodrigues May 20, 2024 ============================================================================= § INTRODUCTION Currently, there is a growing fascination with exploring alternative gravity models, a curiosity that echoes back to the early days of GR. Despite the groundbreaking insights provided by GR, persistent gaps in our understanding have remained since its inception. Modern physics faces numerous enigmas, including the perplexing accelerated expansion of the universe <cit.>, the elusive nature of dark matter, which defies explanation within the Standard Model <cit.>, the hierarchy problem <cit.>, and the mechanisms governing the baryon asymmetry of the cosmos. These unresolved puzzles serve as catalysts for exploring scenarios that transcend the boundaries of GR. The braneworld concept has led to significant theoretical advancements in addressing several of these persistent mysteries of GR <cit.>. Furthermore, this theory predicts the existence of a 750 GeV particle, which appeared tentatively in Large Hadron Collider data but was later attributed to a statistical fluctuation <cit.>. Within the realm of alternative gravitational theories, attention has gravitated toward diverse avenues beyond GR. 
For instance, Einstein-Cartan geometry <cit.>, and metric-based models like f(R) theories <cit.>, have piqued interest as potential deviations from the standard framework. Another intriguing approach is found in the teleparallel equivalent of general relativity (TEGR), which postulates gravity as a consequence of spacetime torsion rather than curvature <cit.>. In this model, the vielbein field serves as a dynamical variable, operating under the assumption of the absence of the Riemann curvature tensor. In more recent developments, the Symmetric Teleparallel Equivalent of General Relativity (STEGR) has emerged as a compelling alternative <cit.>. STEGR introduces the non-metricity tensor into the dynamics of gravitational degrees of freedom, setting it apart from TEGR. Variants such as the f(Q) gravity model have been proposed, offering increased degrees of freedom compared to GR, contingent upon coefficients within the Lagrangian <cit.>. Moreover, a gravitational model that has garnered considerable attention is f(Q,B_Q) gravity. This model has yielded significant results in addressing some of the unresolved puzzles within the framework of GR. In reference <cit.>, a study was undertaken on FLRW cosmology within the framework of the f(Q,B_Q) theory, exploring various families of connections. Additionally, investigations into the behavior of cosmological models of dark energy, as described by the same theory, were conducted in references <cit.> using a perfect fluid, and in reference <cit.> using quintessence. However, there remains much to be explored about this model. Hence, we are motivated to investigate a scenario involving extra dimensions. This work represents the first endeavor in the literature to examine the braneworld scenario within this gravitational model. The structure of this paper unfolds as follows: Section <ref> introduces the fundamental concepts crucial for constructing the braneworld within the framework of f(Q,B_Q) gravity. Moreover, it identifies the most probable configurations and assesses the stability of the model. In Section <ref>, the conditions governing the localization of fermions on the brane are scrutinized. Additionally, configurations with the highest likelihood of detecting massless fermions on the brane are investigated. Finally, our concluding remarks and future outlook are presented in Section <ref>. § BRANEWORLD - F(Q,B_Q) In this section, we will commence by elucidating the fundamental principles of symmetric teleparallel gravity and outlining the equations of motion relevant to f(Q,B_Q) gravity. When navigating through theories entrenched in Riemannian geometry, it becomes imperative to adhere to the metricity condition ∇_M g_NP=0. Here, g_NP represents the metric, while ∇_M denotes the covariant derivative employing the Levi-Civita connection Γ^P _MN as the affine connection. Notably, this condition aligns with the framework of GR. However, the same cannot be said for theories grounded in non-Riemannian geometry. Within this domain, noteworthy among the modified gravity models is STEGR, distinguished by the presence of a nonvanishing nonmetricity tensor <cit.> Q_MNP≡∇_M g_NP≠ 0. To characterize this tensor, let us delineate its independent traces, namely Q_M = g^NPQ_MNP=Q_M ^P _P, Q̄_M = g^NPQ_NMP=Q^P _MP. Moreover, the existence of the nonmetricity tensor necessitates a broader connection, Γ̃^P _MN=Γ^P _MN+L^P _MN, where Γ^P _MN is the Levi-Civita connection. The entity termed L^P _MN is commonly referred to as the distortion tensor. 
It is expressed in terms of the nonmetricity tensor <cit.> L^P _MN=1/2g^PQ(Q_QMN-Q_MQN-Q_NQM). In formulating a gravitational action tailored for STEGR, we introduce a comprehensive tensor that encapsulates not only the nonmetricity but also its autonomous traces and the distortion tensor. This tensor, termed the nonmetricity conjugate, plays a pivotal role within this framework <cit.> P^P _MN=-1/2L^P _MN+1/4(Q^P-Q̄^P)g_MN-1/8(δ^P_M Q_N+δ^P_N Q_M). Furthermore, upon contraction with the nonmetricity tensor, this entity yields the nonmetricity scalar Q=Q_PMNP^PMN. The Ricci scalar can be expressed as R=Q+B_Q, where B_Q=∇_M(Q^M-Q̄^M) is the boundary term, which underscores a notable facet of STEGR. This formulation indicates an equivalence between STEGR and GR, as the boundary term dissipates upon integration into the action. However, such parity does not extend to f(Q) and f(R) gravities due to the persistence of the boundary term. Nevertheless, an equivalence to f(R) gravity can be achieved if f(R) is considered as the special case f(Q+B_Q) of f(Q,B_Q). This study is primarily focused on f(Q,B_Q) gravity, representing yet another plausible extension of STEGR, S=∫ d^5x √(-g)[f(Q,B_Q)+2κ^2ℒ_m], where ℒ_m represents the matter Lagrangian. Varying the action (<ref>) with respect to the metric yields the following amended Einstein equation f_Q G_MN-1/2g_MN(f-f_Q-f_B B_Q)+2P^P _MN∂ _P (f_Q+f_B) -g_MN□ f_B+∇_M ∇_N f_B = κ^2 𝒯_MN, where 𝒯_MN=-2δℒ_m/δ g^MN+ g_MNℒ_m is the energy-momentum tensor. Furthermore, we denote f as shorthand for f(Q,B_Q), f_Q as the partial derivative of f with respect to Q, and f_B as the partial derivative of f with respect to B_Q, for the sake of simplicity. Conversely, varying the action (<ref>) with respect to the connection yields ∇_M∇_N[√(-g) P_K ^MN(f_Q+f_B)]=0. For an extra-dimensional scenario, we will use the Randall-Sundrum-like metric ds^2= e^2Aη_μνdx^μdx^ν+dy^2. Here, η_μν represents the Minkowski metric (the familiar 4D spacetime where we reside), e^2A denotes the warp factor, and y represents the extra dimension. The uppercase Latin indices M, N range from 0 to 4, denoting the bulk coordinate indices (in the 5D spacetime). The Greek indices μ, ν, ranging from 0 to 3, correspond to the indices on the brane. Furthermore, we will only consider a background scalar field capable of generating the brane, with a Lagrangian described as ℒ_m=-[1/2∇_Mϕ∇^Mϕ+V(ϕ)], where ϕ(y) is the scalar field, which, like the warp factor, depends only on the extra dimension. Consequently, for this metric, the scalar field and gravitational equations can be expressed as follows: ϕ^''+4A^'ϕ^' = dV/dϕ, 24A'[A”'(f_BB+f_BQ)+A'A”(8f_BB+11f_BQ+3f_QQ)] + 1/2f+(4f_B-3f_Q)(4A'^2+A”)+3A'f'_B+f”_B = κ^2(1/2ϕ'^2+V), -1/2f-4[A'f'_B+f_B(A”+4A'^2)-3A'^2f_Q] = κ^2(1/2ϕ'^2- V). In the equations above, the prime (') denotes the derivative with respect to the extra dimension. We need to define the form of the function f(Q,B_Q), as it will describe our gravitational model. To ensure generality, we will consider two models that generalize STEGR, namely, f_1(Q,B_Q)=Q+kB_Q^n and f_2(Q,B_Q)=Q+k_1B_Q+k_2B_Q^2. Here, the parameters n and k determine the deviation from the usual STEGR. These models become particularly interesting because we can recover STEGR when k=k_1=k_2=0. Conversely, when n=k=k_1=1 and k_2=0, we arrive at GR, i.e., f_1,2(Q,B_Q)=R. Furthermore, since our focus is on studying the localization of fermions on thick branes, we need to define the ansatz for the warp factor that best characterizes a thick brane. 
Therefore, we choose A(y)=-p lncosh(λ y), where the parameters λ and p control the width of the brane. This ansatz presents a more acceptable configuration of our universe <cit.>. With all these results in hand, we can finally embark on our study of the behavior of the braneworld scenario within these modified gravity models. To guide us, we can propose some basic questions and endeavor to address them comprehensively. Firstly, how does this gravitational change affect the structure of the brane? Does it modify the behavior of the background scalar field that generates the brane? Secondly, what is the most likely gravitational configuration to be found in nature, i.e., one that describes our world accurately? Thirdly, considering that the most likely way to experimentally prove the existence of extra dimensions is through theoretical predictions of particle locations on the brane, how do gravitational changes affect the locations of these particles, such as spin-1/2 fermions? Does the most probable configuration of the gravitational model align with the configuration offering the greatest chances of locating the particle on the brane? Fourthly, is there any possibility of the particle escaping the brane and then returning, as suggested by the tentative 750 GeV signal <cit.>? To begin, we analyze the energy densities of each gravitational model. The energy density is defined as ρ(y)=-e^2Aℒ_m. We plot the energy density behavior for the first gravitational model in Fig. <ref>. Note that for n=1, the energy density exhibits a division at the peak. A single peak bifurcates into two as we deviate from STEGR (by increasing the influence of the parameter k). This division occurs within the range 0.20<k<0.26 (Fig. <ref>a). A similar phenomenon occurs for n=2, although the division occurs in an interval closer to STEGR, specifically -0.006<k<-0.004 (Fig. <ref>b). For n=3, this interval becomes even narrower, 0.00004<k<0.00010 (Fig. <ref>c). We can infer that the higher the power in B_Q, the more rapidly the energy density peak of the model tends to split. In Fig. <ref>, we illustrate the behavior of the energy density for the second gravitational model. As anticipated, varying the parameters k_1,2 leads to a division of the energy density peak. When setting the value of k_2=-0.001, we observe that the division occurs within the range 0.15<k_1<0.25. Similarly, when setting k_1=0.01, the division takes place between -0.005<k_2<-0.003. Notably, the greater the deviation from STEGR, the more rapidly the energy density tends to split into two peaks. The intriguing aspect is that this division in the energy density signifies a split in the brane, indicating the emergence of a parallel world. In other words, as our models deviate further from the standard STEGR, the split in the brane becomes more pronounced. In GR, this phenomenon occurs when two background fields capable of generating the brane are added. However, in our models, we were able to observe a very pronounced split of the brane with just a single scalar field in the background. To delve deeper, we can identify which configurations are most probable in our models. To accomplish this, we employ a powerful mathematical tool known as Differential Configurational Entropy (DCE) <cit.>. DCE, a probabilistic measure grounded in information theory, facilitates the determination of the most stable configurations of the model. These stable configurations offer insights into the most likely configurations of the model <cit.>. 
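To make the DCE concrete before its formal definition in the equations that follow, here is a minimal numerical sketch of the pipeline: Fourier-transform the energy density, normalize to the modal fraction, and integrate -f̄ ln f̄. The toy double-peak profile standing in for the brane energy density and all parameter values are purely illustrative:

```python
import numpy as np

y = np.linspace(-20.0, 20.0, 4096)
dy = y[1] - y[0]
dw = 2 * np.pi / (len(y) * dy)     # spacing of the frequency grid

def dce(rho):
    # Discrete stand-in for F[w] = (1/sqrt(2 pi)) * int exp(i w y) rho(y) dy
    F = np.fft.fftshift(np.fft.fft(rho)) * dy / np.sqrt(2 * np.pi)
    f = np.abs(F)**2
    f = f / (f.sum() * dw)         # modal fraction, normalized to unit area
    fbar = f / f.max()
    out = np.zeros_like(fbar)
    mask = fbar > 0
    out[mask] = -fbar[mask] * np.log(fbar[mask])
    return out.sum() * dw          # S_C = -int fbar ln(fbar) dw

single = np.exp(-y**2)                              # one-peak toy "brane"
split = np.exp(-(y - 3)**2) + np.exp(-(y + 3)**2)   # split toy "brane"
print(dce(single), dce(split))    # the DCE separates the two configurations
```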
The DCE is constructed upon the energy density of the brane, which is mathematically represented as follows: S_C[f]=-∫f̅(ω)ln[f̅(ω)]dω. In this equation, f̅(ω) denotes a normalized function of the frequency ω. This function is defined as follows: f(ω)=|ℱ[ω]|^2/∫|ℱ[ω]|^2dω, Here, f(ω) denotes the modal fraction, quantifying the relative weight of each mode ω (≤ 1). The term ℱ[ω] represents the Fourier transform of the energy density and is defined as follows: ℱ[ω]=1/√(2π)∫ e^iω yρ(y) dy. It is crucial to note that f̅(ω)=f(ω)/f_max(ω), where f_max(ω) represents the maximum value of the modal fraction. Additionally, it is pertinent to highlight that the formulation (<ref>) holds validity solely for continuous functions within an open interval. For the first gravitational model, we numerically plot the shape of the modal fraction and the DCE in Fig. <ref>. In Fig. <ref>(a), it is evident that the modal fraction undergoes several modifications in its structure as we vary the value of k. This variation is linked to the deviation from the usual STEGR. The most intriguing aspect is demonstrated in the DCE plot. The steepest maximum corresponds to a trough. This valley signifies the most stable configuration of the model. The most likely configuration falls within the range of 0.20<k<0.26. It's noteworthy that this interval aligns with the one identified in the energy density as the landmark for the split in the brane. The same pattern emerges for n=2 in the range -0.006<k<-0.004 (Fig. <ref>(b)), and for n=3 in the range 0.00004<k<0.00010 (Fig. <ref>(c)). For the second gravitational model, the numerical plots depicted in Fig. <ref> illustrate the behavior of the modal fraction and the DCE. When we set the value of k_2=-0.001, we observe that the valley in the DCE plot appears in the range of 0.15<k_1<0.25 (Fig. <ref>a). Similarly, when we set k_1=0.01, the valley emerges in the range of -0.005<k_2<-0.003 (Fig. <ref>b). Once again, these intervals coincide with those marking the split in the brane. We can thus conclude that the most stable configurations of our models are those that coincide with the appearance of the split in the brane. This is evident in both f_1 and f_2. Moreover, these configurations are the most likely ones to describe the universe in which we live. This result is significant as it assures us that the usual STEGR is not the most likely configuration to describe our world. In other words, we do not inhabit the usual STEGR. Or rather, we do not live in the standard GR! § FERMION LOCALIZATION MECHANISM We go even further in our theoretical analyses. In this section, we will study the possibility of locating the particle in the brane. For this, let's consider spin-1/2 fermions. To facilitate fermion localization, we need to introduce a coupling to the spinor field. The simplest and most efficient coupling is the well-known Yukawa coupling, where the spinor is minimally coupled with the scalar field ϕ. Therefore, the scalar field solution must satisfy the following basic requirements to enable fermion localization in the brane: * The field ϕ must be asymmetric at the origin of the brane. This represents a phase transition constrained by the structure of the brane. * Asymptotically the field ϕ must tend to a constant, i.e., ϕ(y→±∞)→±ϕ_c. Thus, ∂ V(ϕ→±ϕ_c)/∂ϕ=0. This guarantees the physical sense of the model. 
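As an aside before solving for the scalar field: the DCE prescription defined at the start of this section is a short numerical pipeline, sketched below on a toy double-peak density standing in for the brane profiles. All names and parameter values here are illustrative assumptions, not solutions of the field equations.

```python
import numpy as np

# Toy stand-in for a brane energy density rho(y); the double-peak shape
# mimics the split profiles discussed above (parameters illustrative).
y = np.linspace(-20.0, 20.0, 2001)
def rho_toy(y, split=0.8):
    return np.exp(-(y - split)**2) + np.exp(-(y + split)**2)

def dce(y, rho, n_omega=1000, omega_max=10.0):
    """Differential configurational entropy of a density profile rho(y).

    Implements F[omega] = (1/sqrt(2*pi)) * int exp(i*omega*y) rho(y) dy,
    the modal fraction f = |F|^2 / int |F|^2 domega, the normalization
    fbar = f / f_max, and S_C = -int fbar * ln(fbar) domega.
    """
    omega = np.linspace(-omega_max, omega_max, n_omega)
    F = np.trapz(np.exp(1j * np.outer(omega, y)) * rho, y, axis=1) / np.sqrt(2 * np.pi)
    f = np.abs(F)**2
    f /= np.trapz(f, omega)            # modal fraction
    fbar = f / f.max()                 # normalized modal fraction
    integrand = np.where(fbar > 0, fbar * np.log(fbar), 0.0)
    return -np.trapz(integrand, omega)

# DCE as the single peak (split=0) bifurcates into two (split>0).
for split in (0.0, 0.8, 1.6):
    print(split, dce(y, rho_toy(y, split)))
```

In practice one would feed in the energy density obtained from the field equations for each (n, k) or (k_1, k_2) and locate the valley of S_C over the parameter range.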
To obtain the value of the scalar field we use Eq. (<ref>), which leads to the first-order differential equation: ϕ'^2=1/κ^2{24(8f_BB+11f_BQ+3f_QQ)A'^2A''+f_BB''-3f_QA''+2A'[f_B'+12(f_BB+f_BQ)A''']}. However, the solution to Eq. (<ref>) is not so simple: it depends on our choice for the form of f(Q,B_Q). Therefore, our analysis is carried out numerically. In Fig. <ref>, we plot the scalar field solution for n=2 and 3 of f_1. These solutions are referred to as kink-type solutions. They are not precisely topological kinks, but they exhibit similar behavior. Noticeably, for a gravitational model close to STEGR, the scalar field solution presents only one domain wall. These outcomes are anticipated. However, something unusual occurs when we modify our gravitational model. For n=2, as we increase the value of k, the emergence of new domain walls becomes evident. This causes our field solution to undergo a phase transition, transitioning from a kink-type solution to a double-kink-type solution. This transition is linked to the brane split that occurs in the interval -0.006<k<-0.004. Similarly, the same phenomenon occurs for n=3 in the range 0.00004<k<0.00010. For f_2, we present the scalar field solution in Figure <ref>. It is evident that as we deviate our gravitational model from the usual STEGR, new structures emerge in the brane. New domain walls appear, transforming our kink-type solution into a double-kink-type solution as we vary 0.15<k_1<0.25 at fixed k_2=-0.001, and -0.005<k_2<-0.003 at fixed k_1=0.01. Notably, this interval coincides with that of the brane split, indicating that the background field senses the split in the brane. Hence, considering a minimal Yukawa coupling, the action describing spin-1/2 fermions through a 5-dimensional Dirac field becomes: 𝒮_1/2=∫√(-g)(Ψ̄iΓ^M D_MΨ -ξϕΨ̄Ψ)d^5x. Here, ξ represents a dimensionless coupling constant. The covariant derivative D_M is expressed as D_M=∂_M +Ω_M, where Ω_M denotes the torsion-free spin connection, defined in terms of the Levi-Civita connection: Ω_M=1/4(Γ_M ^N̄Q̄) Γ_N̄Γ_Q̄. The curved Dirac matrices Γ^M are derived from the flat Dirac matrices Γ^M̄ and the vielbeins E_M̄ ^M, following the relation: Γ^M=E_M̄ ^M Γ^M̄. These matrices satisfy the Clifford algebra {Γ^M,Γ^N}=2g^MN. The vielbeins establish a tangent space and relate to the metric through: g_MN=η_M̄N̄E^M̄_M E ^N̄_N. We use barred capital Latin indices (M̄,N̄,...=0,1,2,3,4) to represent the coordinates of the tangent space. To simplify, a coordinate transformation dz=e^-A(y)dy is applied, so that the metric takes the conformally flat form: ds^2=e^2A(η_μνdx^μ dx^ν+dz^2). Therefore, the Dirac equation (<ref>) can be expressed as: [γ^μ∂_μ+γ^4(∂_z+2Ȧ)-ξ e^Aϕ]ψ=0. Here, the spinor representation is defined as: Ψ≡Ψ(x,z)=(ψ; 0), Γ^μ=(0, γ^μ; γ^μ, 0), Γ^z=(0, γ^4; γ^4, 0), and the overdot ( ˙ ) denotes a derivative with respect to z. Through the Kaluza-Klein decomposition of the spinor ψ=∑_n[ψ_L,n(x)φ_L,n(z)+ψ_R,n(x)φ_R,n(z)], we obtain the coupled equations [∂_z+ξ e^A ϕ]φ_L(z) = m φ_R(z), [∂_z-ξ e^Aϕ]φ_R(z) = -m φ_L(z), where γ^4ψ_R,L=±ψ_R,L represent the right-handed and left-handed components of the Dirac field, and γ^μ∂_μψ_R,L=mψ_L,R holds. Decoupling equation (<ref>) results in Schrödinger-like equations: [-∂^2_z+V_L(z)]φ_L(z) = m^2 φ_L(z), [-∂^2_z+V_R(z)]φ_R(z) = m^2 φ_R(z), where V_R,L(z)=U^2 ±∂_zU represents the effective potential, and U=ξ e^A ϕ is the superpotential.
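A minimal numerical sketch of the last step, passing to the conformal coordinate z and assembling V_R,L(z)=U^2±∂_zU, is given below. The kink profile and all parameter values are hypothetical placeholders for the numerically obtained solutions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.interpolate import interp1d

# Illustrative inputs: the warp factor ansatz from the text and a toy
# kink profile standing in for the numerically obtained phi(y).
p_, lam, xi = 1.0, 1.0, 1.0              # hypothetical parameter values
y = np.linspace(-5.0, 5.0, 6001)
A = -p_ * np.log(np.cosh(lam * y))
phi = np.tanh(lam * y)                   # toy stand-in for the kink solution

# Conformal coordinate z(y) from dz = exp(-A) dy, with z(0) = 0 by symmetry.
z_of_y = cumulative_trapezoid(np.exp(-A), y, initial=0.0)
z_of_y -= np.interp(0.0, y, z_of_y)

# Superpotential U(z) = xi * exp(A) * phi, resampled on a uniform z grid.
z = np.linspace(z_of_y[0], z_of_y[-1], 6001)
U = interp1d(z_of_y, xi * np.exp(A) * phi, kind='cubic')(z)
dU = np.gradient(U, z)

V_L = U**2 - dU    # effective potential felt by left-handed modes
V_R = U**2 + dU    # effective potential felt by right-handed modes
print(V_L.min(), V_R.min())
```

The same grids can then be reused to evaluate the zero mode e^{±∫U dz} and the massive-mode equations below.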
It is worth noting that this equation takes the form of supersymmetric quantum mechanics (SUSY-type), ensuring the absence of tachyonic Kaluza-Klein (KK) states. Additionally, the supersymmetric structure permits the existence of a well-localized massless mode of the form: φ_R0,L0(z)∝ e^±∫ Udz. Only left-chirality fermions are localized, consistent with what is obtained in GR. For f_1, we plot the behavior of the effective potential and the massless fermionic mode in Fig. <ref>. As we increase the value of k for n=1, we observe that the potential tends to become more confining and the massless mode more localized. The same trend holds for n=2 and 3. However, something unexpected occurs: the potential well divides as k increases. This division occurs in the same range of values as the brane split. This indicates that brane splitting interferes with fermion localization. Furthermore, we can conclude that the modification of the gravitational model can make the massless fermion more or less localized on the brane. This assertion is confirmed through the behavior of the effective potential and the massless mode for the f_2 model. As we further modify the model, the potential well splits. The fermionic mode senses these changes and becomes more localized but with a flattened peak (Fig. <ref>). Furthermore, we can utilize quantum information measures to ascertain the conditions that promote fermion localization in our model, that is, the configurations most likely to confine the massless fermion to the brane. Thus, our discussion will delve deeper into the fundamental concepts of Shannon entropy and its application to our specific models <cit.>. To define the Shannon entropy, we apply the Fourier transform to the massless mode: |φ_L0,R0(p_z)|^2=1/√(2π)∫_-∞^∞|φ_L0,R0(z)|^2 e^-ip_z z dz, where p_z signifies the coordinate within momentum space (or reciprocal space). This transformation allows us to define the Shannon entropy for both position and momentum spaces: S_z = -∫_-∞^∞|φ_L0,R0(z)|^2ln|φ_L0,R0(z)|^2dz, S_p_z = -∫_-∞^∞|φ_L0,R0(p_z)|^2ln|φ_L0,R0(p_z)|^2dp_z. These entropy measures yield an uncertainty relation known as the BBM relation <cit.>, named after its proposers Beckner, Bialynicki-Birula, and Mycielski. Notably, this entropic uncertainty relation serves as a more effective alternative to the Heisenberg uncertainty principle. The BBM uncertainty relation is expressed as: S_z+S_p_z≥ D(1+lnπ). Here, D denotes the number of dimensions capable of perceiving changes in the system information. In our model, only the extra dimension possesses the capability to sense the entropic modifications of the system (D=1), i.e., S_z+S_p_z≥ 2.14473. The Shannon information measures are explored numerically in Tables <ref> and <ref>. For f_1, it is noticeable that as we increase the value of k (the deviation from STEGR), the Shannon measure in position space decreases, indicating greater certainty about the fermion's location on the brane. Conversely, in momentum space, the Shannon measure tends to increase, suggesting greater uncertainty in the fermion's momentum. This trend intensifies as we further modify our gravitational model (by increasing the value of n). Additionally, the total entropy of the system (S_z+S_p_z) decreases (Tab. <ref>). The same behavior is observed for f_2 (Tab. <ref>). This leads us to conclude that as our gravitational model deviates from the usual STEGR, the likelihood of locating fermions on the brane increases with minimal loss of information about their location.
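The entropic pipeline just described is easy to reproduce. The sketch below evaluates S_z and S_{p_z} for a toy zero mode (a Gaussian, which saturates the BBM bound) rather than for the actual localized modes; in practice phi0 would come from integrating U(z) as above.

```python
import numpy as np

# Toy zero mode on a z grid; a Gaussian is used purely to exercise the
# pipeline and to check against the BBM saturation case.
z = np.linspace(-20.0, 20.0, 2001)
phi0 = np.exp(-z**2 / 2.0)

# Position-space Shannon entropy of the normalized density |phi0|^2.
prob_z = np.abs(phi0)**2
prob_z /= np.trapz(prob_z, z)
S_z = -np.trapz(np.where(prob_z > 0, prob_z * np.log(prob_z), 0.0), z)

# Momentum-space amplitude via a direct Fourier transform on a p_z grid.
pz = np.linspace(-10.0, 10.0, 1001)
phi0_p = np.trapz(np.exp(-1j * np.outer(pz, z)) * phi0, z, axis=1) / np.sqrt(2 * np.pi)
prob_p = np.abs(phi0_p)**2
prob_p /= np.trapz(prob_p, pz)
S_p = -np.trapz(np.where(prob_p > 0, prob_p * np.log(prob_p), 0.0), pz)

# BBM bound in D = 1: S_z + S_p >= 1 + ln(pi) ~ 2.14473 (equality here).
print(S_z + S_p, 1.0 + np.log(np.pi))
```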
This finding is significant and clearly indicates that to detect such fermions in our world, we need to consider new phenomenological parameters different from those proposed by GR. Indeed, everything suggests that GR alone is insufficient to explain and describe the world in which we live. Finally, as the mass of the fermion is linked to its energy, we can infer that the greater its mass, the greater its energy. Consequently, the higher the energy of the fermion, the higher the probability that it will be able to escape from the brane into the bulk. The study of the behavior of massive fermions is of paramount importance for the future detection of these particles at the LHC, thereby proving the existence of extra dimensions. It is noteworthy that some particles with similar behavior have already been detected at the LHC, but as the phenomenon occurred only a few times, it was not sufficient to validate the theory conclusively. All of this underscores that we are progressing in the right direction with our studies. Localization of massive modes is achievable via numerical methods by imposing boundary conditions <cit.> such as: φ_even(0) = c, ∂_zφ_even(0)=0, φ_odd(0) = 0, ∂_zφ_odd(0)=c. These boundary conditions are selected because of the even nature of the effective potentials V_R,L(z). Additionally, conditions (<ref>) ensure that the solutions φ_R,L(z) display behavior corresponding to either even wave functions φ_even or odd wave functions φ_odd. We numerically plot the profiles of the massive fermionic modes. Interestingly, these modes exhibit solutions resembling free waves. For f_1 (Fig. <ref>), as we increase the influence of k, we observe that the amplitudes of the waves near the brane tend to increase, suggesting possible resonant modes <cit.>. The same trend is observed for f_2 (Fig. <ref>). This indicates that the massive fermions sense the division of the brane, which tends to confine them but without success. As the fermion's energy is proportional to its mass, higher energy fermions have a greater likelihood of escaping into the bulk. Moreover, besides potentially escaping into the bulk, these fermions may also interfere with gravitational wave measurements obtained by the Laser Interferometer Gravitational-Wave Observatory (LIGO), as they exhibit modes resembling free waves. § FINAL REMARKS In conclusion, armed with our extensive findings, we embark on a comprehensive exploration of braneworld behavior within modified gravity models. To guide our inquiry, we pose fundamental questions and endeavor to address them systematically. First and foremost, we scrutinize the influence of gravitational alterations on brane structure and the behavior of the underlying scalar field. Secondly, we seek to ascertain the most plausible gravitational configurations that may characterize our universe. Thirdly, we probe the impact of gravitational modifications on the spatial distribution of particles, such as spin-1/2 fermions, crucial for experimental validation of extra dimensions. Our analysis reveals intriguing insights, depicted graphically for clarity. In the case of the first gravitational model, distinct energy density peaks emerge with deviations from the standard scenario, indicative of a split in the brane. Moreover, utilizing the DCE, we discern stable configurations coinciding with the brane split intervals, affirming their likelihood in our models. 
Similarly, for the second gravitational model, analogous observations reinforce the association between stable configurations and brane division. Furthermore, our examination extends to fermion localization within the brane, necessitating careful consideration of scalar field characteristics. Notably, left-chirality fermions exhibit localization, influenced by the gravitational model's deviations from the usual STEGR. Numerical simulations elucidate the emergence of domain walls and phase transitions within the brane, indicative of gravitational model modifications. Delving deeper, we leverage quantum information measurements, particularly Shannon entropy, to assess fermion localization probabilities. Strikingly, as the gravitational model diverges from the standard, the certainty of fermion localization within the brane increases, suggesting the inadequacy of GR to fully explain our universe. Lastly, we explore the implications of massive fermions, crucial for future experimental endeavors, such as those conducted at the LHC. Our numerical simulations elucidate wave-like modes of massive fermions, highlighting their potential to escape the brane and interfere with gravitational wave measurements. In essence, our comprehensive investigation underscores the necessity of considering modified gravitational models to accurately describe the complex dynamics of our universe. It not only challenges the conventional framework of GR but also offers promising avenues for future theoretical and experimental explorations, steering us closer to a comprehensive understanding of our cosmos. § ACKNOWLEDGMENTS S.H. Dong acknowledges the partial support of project 20240220-SIP-IPN, Mexico, and commenced this work on a research stay in China. M.E.R. expresses gratitude to Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq, Brazil, for partial financial support. 99 Perlmutter1999 S. Perlmutter et al. [Supernova Cosmology Project], Astrophys. J. 517, 565 (1999). Riess1999 A. G. Riess, A. V. Filippenko, W. Li, R. R. Treffers, B. P. Schmidt, Y. Qiu, J. Hu, M. Armstrong, C. Faranda and E. Thouvenot, Astron. J. 118, 2675 (1999). Gonzalez-Gaitan2011 S. Gonzalez-Gaitan, A. Conley, F. B. Bianco, D. A. Howell, M. Sullivan, K. Perrett, R. Carlberg, P. Astier, D. Balam and C. Balland, et al. Astrophys. J. 745, 44 (2012). Ganeshalingam2011 M. Ganeshalingam, W. Li and A. V. Filippenko, Mon. Not. Roy. Astron. Soc. 416, 2607(2011). Nojiri:2017ncd S. Nojiri, S. D. Odintsov and V. K. Oikonomou, Phys. Rept. 692, 1 (2017). Boehm:2000gq C. Boehm, P. Fayet and R. Schaeffer, Phys. Lett. B 518, 8(2001). SupernovaSearchTeam:1998fmf A. G. Riess et al. [Supernova Search Team], Astron. J. 116, 1009(1998). rs L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 4690 (1999). rs2 L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 3370 (1999). Gremm1999 M. Gremm, Phys. Lett. B 478, 434 (2000). CastilloFelisola2004 O. Castillo-Felisola, A. Melfo, N. Pantoja and A. Ramirez, Phys. Rev. D 70, 104029(2004). Navarro2004 I. Navarro and J. Santiago, JHEP 02, 007 (2005). BarbosaCendejas2005 N. Barbosa-Cendejas and A. Herrera-Aguilar, JHEP 10, 101 (2005). Bazeia2007 D. Bazeia, A. R. Gomes and L. Losano, Int. J. Mod. Phys. A 24, 1135 (2009). Liu2011wi Y. X. Liu, Y. Zhong, Z. H. Zhao and H. T. Li, JHEP 06, 135(2011). fR1 D. Bazeia, L. Losano, R. Menezes, G. J. Olmo and D. Rubiera-Garcia, Eur. Phys. J. C 75(12), 569 (2015). fR2 Z. G. Xu, Y. Zhong, H. Yu and Y. X. Liu, Eur. Phys. J. C 75(8), 368 (2015). tensorperturbations W. D. Guo, Q. M. 
Fu, Y. P. Zhang and Y. X. Liu, Phys. Rev. D 93, 044002 (2016). ftnoncanonicalscalar J. Wang, W. D. Guo, Z. C. Lin and Y. X. Liu, Phys. Rev. D 98, 084046 (2018). ftborninfeld K. Yang, W. D. Guo, Z. C. Lin and Y. X. Liu, Phys. Lett. B 782, 170 (2018). ftmimetic W. D. Guo, Y. Zhong, K. Yang, T. T. Sui and Y. X. Liu, Phys. Lett. B 800, 135099 (2020). Belchior:2023xgn F. M. Belchior, A. R. P. Moreira, R. V. Maluf and C. A. S. Almeida, Phys. Lett. B 843, 138029 (2023). Moreira:2023uys A. R. P. Moreira, F. M. Belchior, R. V. Maluf and C. A. S. Almeida, Eur. Phys. J. Plus 138 (8), 730 (2023). Moreira:2023pes A. R. P. Moreira, F. M. Belchior, R. V. Maluf and C. A. S. Almeida, Eur. Phys. J. C 83(1), 48 (2023). Belchior:2023gmr F. M. Belchior, A. R. P. Moreira, R. V. Maluf and C. A. S. Almeida, Eur. Phys. J. C 83 (5), 388 (2023). Arun:2016ela M. T. Arun and D. Choudhury, JHEP 04, 133 (2016). Arun:2017zap M. T. Arun, D. Choudhury and D. Sachdeva, JCAP 10, 041 (2017). Arun:2015ubr M. T. Arun and P. Saha, Pramana 88(6), 93 (2017). Hehl:1976kj F. W. Hehl, P. Von Der Heyde, G. D. Kerlick and J. M. Nester, Rev. Mod. Phys. 48, 393-416 (1976). DeFelice:2010aj A. De Felice and S. Tsujikawa, Living Rev. Rel. 13, 3 (2010). Aldrovandi R. Aldrovandi and J. Pereira, Teleparallel Gravity. An Introduction, Fundamental Theories of Physics Vol. 173 (Springer, Dordrecht, 2014). Nester:1998mp J. M. Nester and H. J. Yo, Chin. J. Phys. 37 (1999), 113. Hohmann:2018wxu M. Hohmann, C. Pfeifer, J. Levi Said and U. Ualikhanova, Phys. Rev. D 99, no.2, 024009 (2019). Soudi:2018dhv I. Soudi, G. Farrugia, V. Gakis, J. Levi Said and E. N. Saridakis, Phys. Rev. D 100, no.4, 044008 (2019). BeltranJimenez:2017tkd J. Beltrán Jiménez, L. Heisenberg and T. Koivisto, Phys. Rev. D 98 (2018) no.4, 044048. BeltranJimenez:2019esp J. Beltrán Jiménez, L. Heisenberg and T. S. Koivisto, Universe 5 (2019) no.7, 173. Bhar:2023xku P. Bhar and J. M. Z. Pretel, Phys. Dark Univ. 42, 101322 (2023). Mussatayeva:2023aoa A. Mussatayeva, N. Myrzakulov and M. Koussour, Phys. Dark Univ. 42, 101276 (2023). Bajardi:2020fxh F. Bajardi, D. Vernieri and S. Capozziello, Eur. Phys. J. Plus 135 (2020) no.11, 912. Capozziello:2022tvvi S. Capozziello and M. Shokri, Phys. Dark Univ. 37 (2022), 101113. Capozziello:2022wgl S. Capozziello and R. D'Agostino, Phys. Lett. B 832 (2022), 137229. BeltranJimenez:2019tme J. Beltrán Jiménez, L. Heisenberg, T. S. Koivisto and S. Pekar, Phys. Rev. D 101 (2020) no.10, 103507. Bhar:2023zwi P. Bhar, Eur. Phys. J. C 83, no.8, 737 (2023). Atayde:2023aoj L. Atayde and N. Frusciante, Phys. Rev. D 107, no.12, 124048 (2023). Koussour:2023rly M. Koussour and A. De, Eur. Phys. J. C 83, no.5, 400 (2023). Bajardi:2023vcc F. Bajardi and S. Capozziello, Eur. Phys. J. C 83, no.6, 531 (2023). Lin:2021uqa R. H. Lin and X. H. Zhai, Phys. Rev. D 103, no.12, 124001 (2021). Mustafa:2021ykn G. Mustafa, Z. Hassan, P. H. R. S. Moraes and P. K. Sahoo, Phys. Lett. B 821, 136612 (2021). 39 A. Paliathanasis, Phys. Dark Univ. 43, 101388 (2024). 39.1 A. De, T. H. Loo and E. N. Saridakis, JCAP 03, 050 (2024). 39.2 G. N. Gadbail, A. De and P. K. Sahoo, Eur. Phys. J. C 83, no.12, 1099 (2023). 40.1 A. Pradhan, A. Dixit, M. Zeyauddin and S. Krishnannair, “A flat FLRW dark energy model in f(Q,C)-gravity theory withobservational constraints,” (2024). 2.1 D. C. Maurya, Astron. Comput. 46, 100798 (2024). GS M.Gleiser and N. Stamatopoulos, Phys. Lett. B 713, 304 (2012). Correa2015a R. A. C. Correa, P. H. R. S. Moraes, A. de Souza Dutra and R. da Rocha, Phys. Rev. 
D 92, 126005 (2015). Correa2015c R. A. C. Correa and R. da Rocha, Eur. Phys. J. C 75, 522 (2015). Correa2015b R. A. C. Correa and P. H. R. S. Moraes, Eur. Phys. J. C 76, 100 (2016). Correa2016b R. A. C. Correa, P. H. R. S. Moraes, A. de Souza Dutra, W. de Paula and T. Frederico, Phys. Rev. D 94, 083509 (2016). Correa2016a R. A. C. Correa, D. M. Dantas, P. H. R. S. Moraes, A. S. Dutra and C. A. S. Almeida, Ann. Phys. 530, 1700188 (2018). Moreira:2021cta A. R. P. Moreira, F. C. E. Lima and C. A. S. Almeida, Int. J. Mod. Phys. D 31, no.10, 2250080 (2022). Moreira:2022zmx A. R. P. Moreira, F. C. E. Lima and C. A. S. Almeida, Int. J. Mod. Phys. D 32, no.04, 2350013 (2023). Shannon C. E. Shannon, The Bell System Tecnical Journal 27, 379 (1948). Beckner W. Beckner, Ann. of Math. 102, 159 (1975). Bialy I. Bialynicki-Birula, J. Mycielski, Commun. Math. Phys. 44, 129 (1975). Liu2009 Y. X. Liu, J. Yang, Z. H. Zhao, C. E. Fu and Y. S. Duan, Phys. Rev. D 80, 065019 (2009). Liu2009a Y. X. Liu, H. T. Li, Z. H. Zhao, J. X. Li and J. R. Ren, JHEP 10, 091 (2009). Moreira20211 A. R. P. Moreira, J. E. G. Silva and C. A. S. Almeida, Eur. Phys. J. C 81, no.4, 298 (2021). Moreira:2021wkj A. R. P. Moreira, J. E. G. Silva and C. A. S. Almeida, Annals Phys. 442, 168912 (2022). Moreira:2023pkh A. R. P. Moreira and S. H. Dong, Eur. Phys. J. C 83, no.11, 1064 (2023).
http://arxiv.org/abs/2405.09397v1
20240515145122
Sharp PDE estimates for random two-dimensional bipartite matching with power cost function
[ "Luigi Ambrosio", "Federico Vitillaro", "Dario Trevisan" ]
math.AP
[ "math.AP", "math.CO", "math.PR" ]
We investigate the random bipartite optimal matching problem on a flat torus in two dimensions, considering general strictly convex power costs of the distance. We extend the successful ansatz first introduced by Caracciolo et al. for the quadratic case, involving a linear Poisson equation, to a non-linear equation of q-Poisson type, allowing for a more comprehensive analysis of the optimal transport cost. Our results establish new asymptotic connections between the energy of the solution to the PDE and the optimal transport cost, providing insights on their asymptotic behavior. § INTRODUCTION AND MAIN RESULT Let (X_1, …, X_n), (Y_1, …, Y_n) be two sets of n random points, independent and uniformly distributed on the flat torus 𝕋=ℝ^2/ℤ^2, i.e., with common law given by the Lebesgue measure 𝔪 on 𝕋. The random bipartite optimal matching problem concerns the study of the optimal coupling (with respect to a certain cost function) of these points, that is, the optimal transport from the empirical measure μ^n:=1/n∑_i=1^n δ_X_i to ν^n := 1/n∑_i=1^n δ_Y_i, in particular in the limit n ≫ 1. For the quadratic cost c(x,y):=𝐝(x,y)^2, where 𝐝 is the quotient (flat) distance in 𝕋, the seminal paper <cit.> gave a very appealing PDE ansatz on the asymptotics of the expectation of the optimal transport cost, based on a linearization of the Monge-Ampère equation. While it was already known in the literature that, for the cost c=𝐝^p in dimension d=2, the expectation of the optimal transport cost behaves like (n^-1ln n )^p/2 (for p=1 since <cit.>), in <cit.> they managed to predict the limit coefficient as 1/(2π) in the case p=2, exploiting Fourier analysis and a renormalization procedure. This prediction was then rigorously proven in <cit.>, together with a new PDE proof of the classical bounds in <cit.>. Since then, several works have used this PDE ansatz to estimate, with different degrees of sharpness, the asymptotics of random optimal matching costs and of their solutions, in several settings. Focusing only on the two-dimensional case, but possibly including more general manifolds than 𝕋, we mention here the rigorous results <cit.> as well as further intriguing predictions from the physical literature <cit.> and refer e.g. to the contribution <cit.> for a more general overview of the subject. The aim of the present work is to establish new asymptotic connections between the solution of a "linearized PDE" and the expectation of the optimal transport cost, on 𝕋, for general p>1, extending the main results in <cit.>. Let us mention here that other recent works have focused on two-dimensional random optimal matching problems beyond the quadratic cost, in particular <cit.>, where the quantitative harmonic approximation techniques – originally in <cit.>, see also the exposition <cit.> – are extended to any p>1, and the preprint <cit.>, where the existence of a p-cyclically monotone stationary matching from a Poisson point process to the Lebesgue measure is ruled out for any p>1 – the quadratic case is covered in <cit.>. In order to describe our results informally, we may treat the empirical measures μ^n = ρ_0𝔪, ν^n=ρ_1𝔪 as absolutely continuous with respect to 𝔪. This will be made rigorous by a regularization with the heat kernel P_t on 𝕋, as performed in <cit.>.
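The (n^{-1} ln n)^{p/2} rate just recalled can be probed by brute force. The following Monte-Carlo sketch assumes the POT library (`ot`) for exact discrete optimal transport; sample sizes are kept small because the exact solver scales poorly, and the seed and repetition count are arbitrary.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed installed)

rng = np.random.default_rng(0)

def torus_cost(X, Y, p):
    """Pairwise cost d(x,y)^p for the flat torus [0,1)^2."""
    diff = np.abs(X[:, None, :] - Y[None, :, :])
    diff = np.minimum(diff, 1.0 - diff)       # wrap-around distance per axis
    return (diff**2).sum(axis=2) ** (p / 2.0)

p = 1.5
for n in (100, 400, 1600):
    costs = []
    for _ in range(10):
        X, Y = rng.random((n, 2)), rng.random((n, 2))
        a = b = np.full(n, 1.0 / n)
        costs.append(ot.emd2(a, b, torus_cost(X, Y, p)))  # exact W_p^p
    # Ratio against the conjectured (log n / n)^(p/2) rate.
    print(n, np.mean(costs) / (np.log(n) / n) ** (p / 2.0))
```

An approximately constant ratio across n is consistent with the scaling discussed above, though of course it says nothing about the existence of the limit.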
We first recall (see e.g., <cit.>) that the Kantorovich potential ϕ is related to the optimal transport map T by the identity T(x)=x-|∇ϕ(x)|^q-2∇ϕ(x), where, throughout the paper, q= p/(p-1) denotes the dual exponent of p. Then, the Monge-Ampère equation takes the form ρ_1(x-|∇ϕ (x)|^q-2∇ϕ(x)) det(∇(x-|∇ϕ (x)|^q-2∇ϕ (x)))=ρ_0(x). This PDE contains three non-linearities: the determinant, the dependence of ρ_1 on ∇ϕ and finally, when p≠ 2, the nonlinear term |∇ϕ|^q-2∇ϕ. Our main result shows that in order to obtain a good first-order approximation of the expected value of the transport cost it is sufficient to remove only the first two non-linearities, keeping the third one. This invokes the "linearized" (but still nonlinear!) PDE of q-Poisson type - div(|∇ϕ|^q-2∇ϕ)=ρ_1-ρ_0, ϕ∈ H^1,q(𝕋) in the sense of distributions, namely ∫_𝕋 |∇ϕ|^q-2⟨∇ϕ, ∇η⟩ d𝔪=∫_𝕋 (ρ_1-ρ_0)η d𝔪 ∀η∈ H^1,q(𝕋), where we always assume, to ensure uniqueness, that ∫_𝕋ϕ d𝔪=0, yielding the approximation |T(x) - x|^p ≈ | |∇ϕ(x)|^q-2∇ϕ(x) |^p = |∇ϕ(x)|^q. Our main result makes precise such approximation (see <ref> for more details on the notation). If (X_i)_i=1^∞ and (Y_i)_i=1^∞ are independent and identically distributed random variables with law 𝔪 on 𝕋, then lim_n→∞(n/ln n)^p/2 𝔼|W_p^p(μ^n,ν^n)-∫_𝕋|∇ϕ_n|^q d𝔪|=0 where μ^n=1/n∑_i=1^nδ_X_i, ν^n=1/n∑_i=1^nδ_Y_i, and ϕ_n is the solution to (<ref>) with random right hand side ρ_1-ρ_0 = ρ_1,n - ρ_0,n and ρ_0,n=P_t_nμ^n, ρ_1,n=P_t_nν^n, provided t_n≫ n^-1ln n and ln (nt_n)≪ln n. For instance, a good choice of the intermediate regularization scale t_n in the main result would be t_n=n^-1(ln n)^β with β>1. Thanks to this result, the existence of the limit lim_n→∞𝔼 W_p^p(μ^n,ν^n)/((ln n)/n)^p/2 is equivalent to the existence of the limit when, in the numerator, W_p^p(μ^n,ν^n) is replaced by ∫_𝕋|∇ϕ_n|^q d𝔪, with ϕ_n solutions to the PDE (<ref>) with a random right hand side (<ref>). It would be interesting to prove or disprove the existence of the limit thanks to this reduction to a stochastic PDE. In order to prove <ref>, the only probabilistic ingredients (see <ref>) will consist in checking that as n →∞, with high probability the densities ρ_i,n in (<ref>), for i=0,1, are both sufficiently close to the constant density (<ref>), as well as not too far from μ^n and ν^n in the Wasserstein sense (<ref>), collecting and slightly extending some results from <cit.> and <cit.>. Then, in <ref> and <ref> we will focus our efforts on showing the following deterministic result. Let p>1, let ϕ be a solution of (<ref>), and let c:=2max_i=0,1‖ρ_i-1‖_L^∞(𝕋). Then there exist δ=δ(c,p) and δ̃=δ̃(c,p) such that δ+δ̃→ 0 as c → 0 and (1-δ̃) ∫_𝕋 |∇ϕ|^q d𝔪 ≤ W_p^p(ρ_0𝔪 , ρ_1𝔪 ) ≤ (1+δ) ∫_𝕋 |∇ϕ|^q d𝔪. This result actually holds, with the same proof, on any d-dimensional torus. The extension to the setting of compact Riemannian manifolds (along the lines of <cit.>), possibly with boundary, is beyond the scope of this note, and requires in particular the understanding, in that more general setting, of the stability of the estimates from above for the Riemannian analogue of the operator div(|∇ϕ|^q-2∇ϕ) under the action of the Hopf-Lax semigroup, even after shocks. Acknowledgements. The authors thank N. Gigli for having pointed out to them <cit.> where, on Riemannian manifolds, via the theory of characteristics, the preservation of upper bounds on the q-Laplacian under the action of the p-Hopf-Lax semigroup is shown, even in the nonlinear case p≠ 2, before shocks.
Eventually, in the case of the flat space , the proof we gave in <ref> does not use this computation and works even beyond shocks, using solutions in the viscosity sense. L.A. and F.V. acknowledge the PRIN Italian grant 202244A7YL “Gradient Flows and Non-Smooth Geometric Structures with Applications to Optimization and Machine Learning”. D.T. acknowledges the MUR Excellence Department Project awarded to the Department of Mathematics, University of Pisa, CUP I57G22000700001, the HPC Italian National Centre for HPC, Big Data and Quantum Computing - Proposal code CN1 CN00000013, CUP I53C22000690001, the PRIN Italian grant 2022WHZ5XH - “understanding the LEarning process of QUantum Neural networks (LeQun)”, CUP J53D23003890006, the INdAM-GNAMPA project 2023 “Teoremi Limite per Dinamiche di Discesa Gradiente Stocastica: Convergenza e Generalizzazione”, INdAM-GNAMPA project 2024 “Tecniche analitiche e probabilistiche in informazione quantistica” and the project G24-202 “Variational methods for geometric and optimal matching problems” funded by Università Italo Francese. Research also partly funded by PNRR - M4C2 - Investimento 1.3, Partenariato Esteso PE00000013 - "FAIR - Future Artificial Intelligence Research" - Spoke 1 "Human-centered AI", funded by the European Commission under the NextGeneration EU programme. § PRELIMINARIES §.§ The Wasserstein distance Given probability measures μ, ν on and p ≥ 1 we define the p-Wasserstein distance between μ and ν as W_p(μ,ν):=.min{(∫_×𝐝(x,y)^p dπ(x,y) )^1/p|π_1=μ,π_2=ν}. We refer to <cit.> for an introduction to the subject. In particular, we will use throughout that W_p enjoys the triangle inequality. Moreover, we recall here for later use the following consequence of the Benamou-Brenier formula, see e.g. <cit.>, <cit.> or <cit.>. Let μ = ρ_0, ν=ρ_1 be absolutely continuous with respect to and let ϕ be a solution to (<ref>) with q=2. Then, for every p ≥ 1 there exists a constant C = C(, p)< ∞ such that W_p^p(μ,ν)≤ C (ess-infρ_1 )^1-p∫_|∇ϕ|^p d . We notice that the bound above is asymmetric in the roles of μ and ν, since only ρ_1 is required to be (essentially) bounded from below. In some sense, our work aims to sharpen (<ref>) by replacing the linear Poisson equation with the non-linear q-Poisson one, and indeed <ref> below is proved using a similar argument. However, (<ref>) is useful as one can combine it with harmonic analysis tools, as done e.g. in <cit.>. For example, for any p>1, by the classical boundedness of the Riesz transform operator ∇ (-Δ)^-1/2 on , where (-Δ)^-1/2 is defined as a Fourier multiplier, one can further bound from above ∫_|∇ϕ|^p d ≤ C ∫_ |(-Δ)^-1/2 (ρ_1-ρ_0)|^p d , where C = C(, p)<∞. Hence, from (<ref>) we further deduce the upper bound W_p^p(μ,ν)≤ C (ess-infρ_1)^1-p∫_ |(-Δ)^-1/2 (ρ_1-ρ_0)|^p d , where again C = C(, p)< ∞. §.§ Viscosity solutions Viscosity solutions are designed to give a suitable notion of solution (with good properties such us uniqueness, stability and comparison principles) for general non-linear equations for which the distributional point of view does not make sense, as fully nonlinear PDE's. However, this notion reveals to be useful also for PDE's having a distributional formulation. This is the case of the q-Laplace (also called q-Poisson) equation considered in this paper, associated to the differential operator -Δ_q u:=- div(|∇ u|^q-2∇ u). Actually, we will just deal with supersolutions. Let g:→. 
We say that a function f: → (-∞, +∞] is a viscosity supersolution for the equation -Δ_q u+g=0, and we write -Δ_q u+g≥ 0 in the viscosity sense if the following conditions hold: * f is lower semicontinuous, f ≢+∞, and * whenever x_0∈ and φ∈ C^2() are such that f-φ has a local minimum at x_0 and ∇φ(x_0) ≠ 0, we have -Δ_q φ(x_0)+g(x_0) ≥ 0. <ref> is adapted to the special form of the q-Laplace PDE. Indeed, the additional requirement ∇φ(x_0) ≠ 0 (not present in the general theory of viscosity solutions, see for instance <cit.>) is due to the fact that the expression Δ_q φ=|∇φ|^q-4[ |∇φ|^2 Δφ+ (q-2) ∑_i, j=1^n ∂φ/∂ x_i∂φ/∂ x_j∂^2 φ/∂ x_i ∂ x_j] is singular at the critical points of φ, when 1<q<2. With this convention, any f∈ C^2() satisfying -Δ_q f+g≥ 0 in the pointwise sense is also a viscosity supersolution. This follows from the fact that if we call F_q(v, S):(^2∖{0})×^2 × 2()→ the differential operator such that F_q(∇ u, ∇^2 u)=-Δ_q u, then F is non-increasing with respect to S (just look at (<ref>)). It follows that if f-φ has a local minimum at x_0 with ∇φ(x_0)≠ 0, then ∇ f(x_0)=∇φ(x_0)≠ 0 and F_q(∇φ(x_0), ∇^2 φ(x_0))+g(x_0)≥ F_q(∇ f(x_0), ∇^2 f(x_0))+g(x_0)≥ 0 as ∇^2 f(x_0)≥∇^2φ(x_0). §.§ Hopf-Lax semigroup Given f:→ lower semicontinuous, let u=Q_tf be the Hopf-Lax semigroup associated to the Hamilton-Jacobi equation ∂_t u + |∇ u|^q/q=0, that is (Q_tf)(x)=min_y ∈{f(y)+ 𝐝^p(x,y)/pt^p-1}. The following properties of the semigroup Q_tf, with Q_0f=f, are well-known, see for instance Proposition 3.3 in <cit.> for a detailed proof. Let f:→ be Lipschitz. Then the functions Q_tf are Lipschitz, uniformly with respect to t∈ [0,1], t↦ Q_tf is Lipschitz from [0,1] to C() and the PDE (<ref>) is satisfied almost everywhere in (0,1)×. §.§ Heat kernel on We recall that the heat kernel on the torus =^2/^2 is given by p_t(x):=∑_𝐧∈^2p_t(x+𝐧), where p_t(x)=1/4π t e^-|x|^2/4t, x∈^2, is the Euclidean heat kernel. Given a probability measure μ in , we denote by P_tμ≪ the probability measure having density ρ(x)=∫_ p_t(x-y)μ(y). Let us recall that (P_t)_t ≥ 0 defines a symmetric Markov (convolution) semigroup, P_s+t = P_s ∘ P_t with (unique) invariant measures and generator given by the (distributional) Laplacian. Let us recall the following deterministic dispersion bound, directly coming from the coupling Σ=∫_ p_t(z)Σ_z(z) with Σ_z=(Id×τ_z)_#μ between μ and P_tμ (where τ_z is the shift map): W_p(μ,P_tμ)≤ C_0√(t) ∀ t>0, with C_0=C_0()=(∫_|z|^pp_1(z)(z))^1/p for any probability measure μ in . A remarkable fact, first noticed in <cit.>, is that the dispersion bound above can be significantly improved (in average) when applied to empirical measures μ^n as in (<ref>). For every p ≥ 1 there exists positive constant C_1(,p), C_2(,p) such that the following holds. If t=α/n≤1/2 with α≥ C_1(,p)ln n, then W_p^p(μ^n,P_tμ^n)≤ C_2(,p)( lnα/n)^p/2. The case p=2 is established in <cit.>, and by the Hölder inequality, it entails the thesis for every 1 ≤ p <2: W_p^p(μ^n,P_tμ^n)≤ (C_2)^p/2(lnα)^p/2/n^p/2, t=α/n, α≥ C_1ln n. Therefore, it is sufficient to consider the case p ≥ 2. To this aim, we combine the argument from <cit.> with the application of Rosenthal's inequality, from <cit.>, where upper bounds for the random bipartite matching cost are proved for any p ≥ 2. 
By the triangle inequality and the elementary bound |x+y|^p ≤ 2^p-1 ( |x|^p+ |y|^p) for some C = C(p)<∞, we find W_p^p(μ^n,P_tμ^n) ≤ 2^p-1( W_p^p(μ^n,P_1/nμ^n) + W_p^p(P_1/nμ^n, P_tμ^n )) ≤ 2^p-1( C_0 n^-p/2 + W_p^p(P_1/nμ^n, P_tμ^n )), having used (<ref>) in the second inequality. Thus, we are reduced to bound from above the expectation of W_p^p(P_1/nμ^n, P_tμ^n ). Since this random variable is always bounded from above by ()^p, by choosing e.g. d=1/2 in (<ref>) of <ref> below, we see that, if we pick C_1 = (ln a)^-1 K sufficiently large – precisely such that 5-Kd^2<p/2, we can safely reduce ourselves to argue on the event ρ_t,n-1≤ 1/2, so that P_t μ^n = ρ_t,n has a density uniformly bounded from below by 1/2. On such event, we use (<ref>) (with μ = P_1/nμ^n and ν = P_tμ^n), and we find W_p^p(P_1/nμ^n, P_tμ^n ) ≤ C ∫_| (-Δ)^-1/2 ( ρ_1/n,n - ρ_t,n) |^p . where C = C(, p)<∞. By the linearity of the operator (-Δ)^-1/2, we collect the identity (-Δ)^-1/2 ( ρ_1/n,n - ρ_t,n) (x)= 1/n∑_i=1^n [(-Δ)^-1/2 (p_1/n-p_t)] (X_i- x), and notice that, for each x∈, the random variables φ_i(x):= [(-Δ)^-1/2 (p_1/n-p_t)] (X_i- x), for i=1,…, n are independent and centered. After taking expectation in (<ref>), we see that the thesis amounts to bound from above the quantity ∫_| 1/n∑_i=1^n φ_i(x) |^p (x), where we recognize, for every x, the p-th moment of a sum of independent centered random variables. By Rosenthal's inequality, <cit.>, we have for some constant C = C(p)<∞, | 1/n∑_i=1^n φ_i(x) |^p ≤ C [ 1/n^p-1| φ(x) |^p + 1/n^p/2| φ(x) |^2 ^p/2], where we write φ := [(-Δ)^-1/2 (p_1/n-p_t)] (X_1- x). To conclude, we follow very closely the argument in <cit.> from eq. (34) onwards (in the case d=2), so we omit some details. We collect first the uniform bound, valid for 0 < s ≤ 1/2: sup_z ∈| (-Δ)^-1/2 (p_s - 1) (z) | ≤C/s^1/2, which we apply in particular to s ∈{1/n, t}, yielding sup_z ∈| φ (z) | ≤sup_z ∈| (-Δ)^-1/2 (p_1/n - 1) (z) | + sup_z ∈| (-Δ)^-1/2 (p_t - 1) (z) | ≤ Cn^1/2. Then, by the representation (-Δ)^-1 = ∫_0^∞ P_s ds, we find for any f ∈ L^2() with ∫_ f = 0 that ∫_ [(-Δ)^-1/2f]^2 = ∫_ f (-Δ)^-1 f = ∫_0^∞∫_ f P_sf s= ∫_0^∞∫_ (P_s/2f)^2 s. We use this identity in our case, i.e., with f = p_1/n-p_t, yielding | φ(x) |^2 = ∫_ [ (-Δ)^-1/2(p_1/n - p_t)]^2(y-x) (y) = ∫_0^∞∫_ (p_s/2+1/n(y-x) - p_s/2+t(y-x) )^2 (y) s = ∫_0^∞ [ p_s+2/n(0) + p_s+2t(0) - 2 p_s+t+1/n(0)] s = O( -log(2/n) - log(2t) +2 log(t+1/n) + 1 ) = O( lnα), where in developing the square we invoked the semigroup property (so that, for any t_1, t_2 >0, ∫_ p_t_1(y-x) p_t_2(y-x) (y)=P_t_1+t_2δ_x(x)=p_t_1+t_2(0)), and the final asymptotics can be computed directly from (<ref>). Combining these bounds, we find |φ(x) |^p ≤sup_z∈|φ(z) |^p-2| φ(x) |^2 ≤ C n^(p-2)/2lnα , and therefore we bound from above the right hand side in (<ref>) with [ 1/n^p-1| φ(x) |^p + 1/n^p/2| φ(x) |^2 ^p/2] ≤ C lnα/n^p/2 + C ( lnα/n)^p/2 and the thesis follows. In the proof above we used a regularizing property of the heat semigroup, when acting on empirical measures, as established in <cit.> (see Theorem 3.3 and Remark 3.17 therein), that we report here. If μ^n are as in (<ref>) and P_tμ^n=ρ_t,n, then {ρ_t,n-1_∞>d}≤C_3()/d^2t^3a^-nt d^2 for some C_3()>0 and a=a()>1. In particular, if d≥ n^-1 and t=(ln a)^-1K n^-1ln n with K≥ 1, then {ρ_t,n-1_∞>d}≤ C_3()(ln a)^3 n^5-Kd^2. 
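The smoothing behind these statements is easy to probe empirically. Below is a minimal Monte-Carlo sketch, with illustrative constants (K=5, a truncated image sum, a coarse evaluation grid), of how the sup-norm deviation of ρ_{t,n} from the constant density shrinks when t is of order n^{-1} log n; it is a numerical illustration, not a proof device.

```python
import numpy as np

rng = np.random.default_rng(1)

def heat_kernel_torus(dx, dy, t, images=2):
    """p_t on the torus R^2/Z^2, truncating the sum over periodic images."""
    k = np.arange(-images, images + 1)
    gx = np.exp(-(dx[..., None] + k) ** 2 / (4 * t)).sum(axis=-1)
    gy = np.exp(-(dy[..., None] + k) ** 2 / (4 * t)).sum(axis=-1)
    return gx * gy / (4 * np.pi * t)

def sup_deviation(n, t, grid=32):
    """Estimate ||rho_{t,n} - 1||_inf, rho_{t,n} the density of P_t mu^n."""
    pts = rng.random((n, 2))
    g = (np.arange(grid) + 0.5) / grid
    ex, ey = np.meshgrid(g, g, indexing="ij")
    dx = ex.ravel()[:, None] - pts[None, :, 0]
    dy = ey.ravel()[:, None] - pts[None, :, 1]
    rho = heat_kernel_torus(dx, dy, t).mean(axis=1)
    return np.abs(rho - 1.0).max()

# t of order K * log(n) / n: deviations shrink as n grows, in line with
# the concentration statement above (K = 5 is an arbitrary choice).
for n in (500, 2000, 4000):
    t = 5.0 * np.log(n) / n
    print(n, round(t, 4), sup_deviation(n, t))
```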
Our strategy for proving <ref> will be to adjust the parameters K=K_n→∞ and d=d_n→ 0 in such a way that Kd is sufficiently large, so that the probability of the deviation from the constant density 1 will have the power like decay we need with respect to n. We will also need L^p estimates on ρ_t,n, provided by the following proposition. Let t_n as in <ref>, and K=K_n related to t_n as in <ref>. Fixed k>0, take c_n → 0^+ such that lim inf_n K_nc_n^2 > k+5. Then sup{n^k1_{ρ_t_n,n-1_∞ >c_n}∫_|ρ_t_n,n-1|^p: n≥ 2}<∞. In this proof C denotes a positive constant, depending only on . Arguing as in the proof of Theorem 3.3 in <cit.>, the bounds Y^2≤C/t, |Y|≤C/t t∈ (0,1) for the random variables Y=Y_i=p_t(X_i,y)-1, together with Bernstein's inequality yield {|ρ_t,n(y)-1|>ξ}≤ Cexp(-nctξ) ∀ t∈ (0,1), ξ>1 for all y∈. For our choice of t=t_n, Fubini's theorem and Cavalieri's formula yield n^k∫_{|ρ_t_n,n-1|>1} |ρ_t_n,n-1|^p≤ C∫_1^∞ n^k-c(ln a)^-1K_nξ·ξ^p-1ξ. Thus, for n ≫ 1 n^k∫_{|ρ_t_n,n-1|>1}|ρ_t_n,n-1|^p≤ C∫_ 1^∞ n^ξ (k-c(ln a)^-1K_n)ξ^p-1 dξ ≤ C∫_1^∞ 2^-ξξ^p-1ξ < ∞. On the other hand, exploiting <ref> along with (<ref>) we get n^k1_{ρ_t_n,n-1_∞ >c_n }∫_{|ρ_t_n,n-1|≤ 1}|ρ_t_n,n-1|^p≤ n^k {ρ_t_n,n-1_∞ >c_n} ≤ C_3()(ln a)^3 n^k+5-K_nc_n^2→ 0. § PROPAGATION OF Q-LAPLACIAN ESTIMATES AND DIFFERENTIATION OF ∫ |∇ Q_TΦ|^Q Recalling the definition of c≥ 0 in <ref>, ϕ∈ H^1,q() satisfies in a distributional sense the inequality - div(|∇ϕ|^q-2∇ϕ)+ c≥ 0. Namely, for every nonnegative η∈ C^∞() we have ∫_ |∇ϕ|^q-2⟨∇ϕ, ∇η⟩+c ∫_η≥ 0. In order to control the time derivative of ∫_ |∇ Q_tϕ|^q, we would like to show that (<ref>) propagates with the Hopf-Lax semigroup, that is, it is satisfied also by Q_tϕ for any t ∈ (0,1). The proof of this stability property becomes much easier if we understand (<ref>) in the viscosity sense; this is possible thanks to the following result (see <cit.>, <cit.> for the homogeneous case g=0 and <ref> below): Let f∈ H^1,q() and g:→ be continuous. Then -Δ_q f+g≥ 0 in the viscosity sense, according to <ref>, if and only if -Δ_q f+g≥ 0 in the sense of distributions. We are going to use <ref> both ways: first we pass from the distributional sense for ϕ, granted by (<ref>), to the viscosity sense, then we pass from the viscosity sense to the distributional sense for Q_sϕ in the proof of <ref>. Then, let us show propagation of the estimate -Δ_q ϕ +c≥ 0 to Q_tϕ in the viscosity sense. Actually it will be usefult to prove this property for the Hopf-Lax semigroup associated to any power r>1. We provide a direct proof, even though the statement could directly follow by the general fact that viscosity supersolutions to -Δ_q +c≥ 0 are stable under translations in the dependent and independent variables, and infimum. Let f:→ be lower semicontinuous and satisfying -Δ_q f+c≥ 0 in the viscosity sense and r∈ (1,∞). Then for all t>0 the function f_t(x):=min_y ∈{ f(y)+𝐝^r(x,y)/rt^r-1}. still satisfies -Δ_q f_t+c≥ 0 in the viscosity sense. Given t>0 and x_0∈, let y_0∈ be a point where the minimum in the definition of f_t is attained, so that f_t(x_0)=f(y_0)+𝐝^r(x_0,y_0)/rt^r-1. Consider φ∈ C^2() such that f_t-φ has a local minimum in x_0 and, with no loss of generality, assume that the minimum is global and f_t(x_0)=φ(x_0). If we set ψ(x):=φ(x-y_0+x_0), we claim that ϕ-ψ has a minimum in y_0, equal to -𝐝^r(x_0,y_0)/(rt^r-1). From this we would obtain F_q(∇ψ(y_0),∇^2ψ(y_0))≤ c thus F_q(∇φ(x_0),∇^2φ(x_0))≤ c. 
To prove the claim, we notice that ϕ(y_0)-ψ(y_0)=ϕ(y_0)-φ(x_0)=ϕ(y_0)-f_t(x_0) =-1/rt^r-1𝐝^r(x_0,y_0), while in the other hand f_t(x)≥φ(x) implies ϕ(y)+1/rt^r-1𝐝^r(x,y) ≥φ(x) ∀ x, y. Choosing y=x-x_0+y_0 (understanding the sum modulo ^2) we obtain ϕ(y)-ψ(y)≥ - 1/rt^r-1𝐝^r(x_0,y_0) ∀ y, as desired. We can use <ref> to provide a sketchy proof of the implication from viscous to distributional granted, also in the converse direction, by <ref>. Indeed, we can use the Hopf-Lax semigroup with power r=2 to obtain that f_s=Q_sf still satisfy -Δ_q f_s+c≥ 0 in the viscosity sense and C^1,1 regularity of f_s. Since f_s→ f in H^1,q() as s→ 0^+, it is then sufficient to show that -Δ_q f_s+c≥ 0 in the sense of distributions. Here we can use the C^1,1 regularity of f_s to build appropriate test functions ϕ, of the form ϕ(x)=f_s(x_0)+⟨∇ f_s(x_0),x-x_0⟩+1/2⟨∇^2 f_s(x_0)(x-x_0),(x-x_0)⟩-ϵ|x-x_0|^2 at any point x_0∈ where ∇ f_s(x_0)≠ 0 and ∇^2f_s(x_0) exists. This leads to the validity of -Δ_q f_s+c≥ 0 almost everywhere in the open set Ω_s={|∇ f_s|≠ 0}. Then, one obtains the validity of the inequality in the sense of distributions first in Ω_s and then on the whole of , using the fact that the flux of the continuous vector field |∇ f_s|^q-2∇ f_s is null on the boundary (because q>1). If Ω_s is not smooth one can perform a further approximation, since 1/ϵ∫_0^ϵ∫_{|∇ f_s|=τ}|∇ f_s|^q-1^1τ= ∫_{0<|∇ f_s|<ϵ}|∇ f_s|^q tends to 0 as ϵ→ 0. Now, we apply <ref> with f=ϕ and r=p in order to estimate the variation in time of ∫_ |∇ Q_tϕ|^q. Let Λ(t):=∫_ |∇ Q_tϕ|^q with ϕ as in (<ref>) and c=ρ_1-ρ_0_∞. Then Λ is Lipschitz in [0,1] and / tΛ(t) ≤ cΛ(t) for almost every t∈ (0,1). In particular ∫_ |∇ Q_tϕ|^q ≤ e^ct∫_ |∇ϕ|^q ∀ t∈ [0,1]. Thanks to <ref>, f_t=Q_tϕ satisfy -Δ_q f_t+c≥ 0 in the viscosity sense. Therefore <ref> grants this property also in the sense of distributions, namely (notice that the improvement from C^∞() to H^1,q() follows by density and L^p integrability of |∇ f_t|^q-2∇ f_t) ∫_ |∇ f_t|^q-2⟨∇ f_t, ∇η⟩+c ∫_η≥ 0 ∀η∈ H^1,q(), η≥ 0. First, we note that, by <ref>, the distribution T:=-÷(|∇ f_t|^q-2∇ f_t)+c is non-negative. Thus, if η∈ C^∞() Tη≤ Tη_∞ 1=η_∞c and then T is represented by a nonnegative finite measure with mass less or equal than c (here we used that ÷(|∇ f_t|^q-2∇ f_t)1=0 and therefore T1=c). It follows that μ_t:=÷(|∇ f_t|^q-2∇ f_t) is a signed measure with μ_t≤ 2c. By the convexity of y ↦ |y|^q we then infer Λ(t)-Λ(s) ≥ q∫_ |∇ f_s|^q-2∇ f_s∇(f_t-f_s)=q∫_ (f_s-f_t) μ_s≥ -2cqf_t-f_s_∞ for every s, t ∈ [0,1]. From the Lipschitz regularity of the initial datum ϕ (which follows by <cit.>) and <ref> we deduce that the map t ↦ f_t is Lipschitz with respect to the sup norm, let us say with constant L. Hence, exchanging the roles of t and s we conclude that |Λ(t)-Λ(s)| ≤ 2cqL|t-s|, as we desired. Now, we can refine (<ref>) as follows. Let t∈ (0,1) be a differentiability point for Λ such that -q/ tf_t=|∇ f_t|^q a.e. in . Thanks to Rademacher's theorem and <ref>, both properties are satisfied for a.e. t∈ (0,1). For s≥ t, using the inequality f_t≥ f_s granted directly from the definition (<ref>), as well as the inequality -÷(|∇ f_s|^q-2∇ f_s) +c≥ 0 in the sense of distributions, we get Λ(s)-Λ(t) ≤ -q∫_ |∇ f_s|^q-2∇ f_s∇(f_t-f_s) = q∫_÷ (|∇ f_s|^q-2∇ f_s)(f_t-f_s) ≤ cq∫_ (f_t-f_s) , so that / tΛ(t)=lim_s → t^+Λ(s)-Λ(t)/s-t≤lim_s → t^+ cq ∫_ f_t-f_s/s-t=-cq∫_/ t f_t =c∫_ |∇ f_t|^q , which proves that Λ'(t)≤ cΛ(t). Finally, the validity of (<ref>) follows by Gronwall's Lemma. 
§ PROOF OF <REF> In this section ϕ, c are as in <ref>. §.§ Upper bound The upper bound in <ref> can be obtained immediately by repeating the argument in Proposition 2.3 of <cit.>, involving duality and the Hopf-Lax formula. We still give the proof here for the sake of completeness. Since is compact, the duality formula for W_p^p can be written in the form 1/p W_p^p(ρ_0 , ρ_1 )=sup{ -∫_ f ρ_0 +∫_ (Q_1 f) ρ_1 : f:→ Lipschitz}. There exists δ(c,p) such that δ(c,p) → 0 as c→ 0 and W_p^p(ρ_0 , ρ_1 ) ≤ (1+δ(c,p)) ∫_ |∇ϕ|^q . Let us bound uniformly the argument of the supremum in (<ref>), for f:→ Lipschitz, exploiting the PDE (<ref>) satisfied by ϕ, the fact that Q_t f solves (<ref>) almost everywhere in (0,1)× and dominated convergence to put / s under the integral sign. If we set ρ_t:=tρ_1+(1-t)ρ_0 per t ∈ (0,1), then: ∫_ (ρ_1 Q_1f-ρ_0f ) =∫_0^1 / s∫_ρ_s Q_sf s =∫_0^1∫_(ρ_s / s Q_sf + (ρ_1-ρ_0)Q_sf ) s =∫_0^1∫_( -1/q |∇ Q_s f|^q ρ_s+|∇ϕ|^q-2⟨∇ϕ, ∇ Q_s f ⟩) s ≤∫_0^1∫_( -1/q |∇ϕ|^q ρ_s^-q/q-1ρ_s+|∇ϕ|^q ρ_s^-1/q-1) s =1/p∫_( ∫_0^1 ρ_s^-1/q-1 s ) |∇ϕ|^q , where for the inequality we used that v=ρ_s^-1/q-1∇ϕ minimizes v ↦1/q |v|^q ρ_s-|∇ϕ|^q-2⟨∇ϕ, v ⟩. In conclusion W_p^p(ρ_0 , ρ_1 ) ≤∫_ M_q(ρ_0, ρ_1) |∇ϕ|^q , where M_q(ρ_0, ρ_1)(x)=∫_0^1 ρ_s(x)^-1/q-1 s ≲ 1 as c→ 0. More precisely, since ρ_i-1_∞≤ c/2, for c<2 one has M_q(ρ_0, ρ_1)(x) ≤ 1+δ(c,p) with δ(c,p)=(1-c/2)^-1/q-1- 1. §.§ Lower bound There exists δ(c,p) such that δ(c,p) → 0 as c→ 0 and W_p^p(ρ_0 , ρ_1 ) ≥ (1-δ(c,p)) ∫_ |∇ϕ|^q . From the duality formula, with an integration by parts and Fubini's theorem, we get 1/p W_p^p(ρ_1,ρ_0) ≥ -∫_ϕρ_0+∫_ (Q_1ϕ)ρ_1= ∫_ϕ(ρ_1-ρ_0)+∫_ (Q_1ϕ-ϕ)ρ_1 = ∫_|∇ϕ|^q-1/q∫_0^1∫_|∇ Q_sϕ|^qρ_1 s. Now we can use first the inequality ρ_1-1_∞≤ c/2 to replace ρ_1 with 1 and then <ref> with c≥ρ_1-ρ_0_∞ to estimate 1/p W_p^p(ρ_1,ρ_0)≥1/p∫_|∇ϕ|^q- c e^c/2+e^c-1/q∫_|∇ϕ|^q, so that δ(c,p)=(p-1)(c e^c/2+e^c-1). § PROOF OF <REF> In this section we adopt the notation in the statement of <ref>. §.§ Upper bound Since ln (nt_n)≪ln n, using <ref> and the triangle inequality for W_p, arguing as in <cit.>, the proof of the upper bound reduces to the following estimate: lim sup_n→∞(n/ln n)^p/2(W_p^p(P_t_nμ^n,P_t_nν^n)-∫_|∇ϕ|^q)≤ 0 where ϕ is the solution to (<ref>) with right hand side ρ_0,n=P_t_nμ^n, ρ_1,n=P_t_nν^n. Now, since t_n≫ n^-1ln n we can use <ref> to write t_n as (ln a)^-1K_n n^-1ln n with K_n≥ 1 and c_n→ 0 in such a way that K_n c_n^2>2p+10, so that {ρ_i,n-1_∞>c_n/2}≤ C_3()(ln a)^3 n^5-K_nc_n^2/4=O(n^-p/2) i=0,1. Since W_p(μ,ν)≤ diam() for any pair of probability measures μ, ν, it follows that the contribution to (<ref>) of the event {max_iρ_i,n-1_∞>c_n/2} is null and in the complementary event we can use <ref> to conclude. §.§ Lower bound Recall that the semigroup P_t is contractive in with respect to any W_p distance; this can be easily proved taking any coupling Σ between μ and ν and considering the average Σ̅=∫Σ_zp_t(z)(z) of the shifted couplings Σ_z:=(τ_z×τ_z)_#Σ with τ_z(x)=x+z, which provides a coupling between P_tμ and P_tν with the same cost. Therefore, the lower bound lim inf_n→∞(n/ln n)^p/2(W_p^p(μ^n,ν^n)-∫_|∇ϕ|^q)≥ 0 can be deduced from lim inf_n→∞(n/ln n)^p/2(W_p^p(ρ_0,n,ρ_1,n)-∫_|∇ϕ|^q)≥ 0. Now, recall that the solution ϕ to (<ref>) is the unique minimizer of the functional Λ_q(f):=∫_1/q|∇ f|^q-f(ρ_1-ρ_0)=∫_1/q|∇ f|^q-(f-f̅)(ρ_1-ρ_0) whose minimum value is nonpositive. 
Hence, from the Sobolev embedding we obtain 1/q∫_𝕋|∇ϕ|^q d𝔪≤‖ρ_1-ρ_0‖_p‖ϕ‖_q≤ c_S‖ρ_1-ρ_0‖_p(∫_𝕋|∇ϕ|^q d𝔪)^1/q and then the deterministic upper bound ∫_𝕋|∇ϕ|^q d𝔪≤ (‖ρ_1-ρ_0‖_p c_S q)^p. As in the proof of the upper bound, since nt_n≫ln n, we can use this time <ref>, which provides an estimate in expectation on ‖ρ_i,n-1‖_p^p, to show that the contribution to (<ref>) of the event {max_i‖ρ_i,n-1‖_∞>c_n/2} is null (if we also require K_nc_n^2>2p+20 in order to satisfy (<ref>) with c_n/2 and k=p/2) and in the complementary event we can use Theorem <ref> to conclude.
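Since the lower-bound argument runs through the variational characterization of ϕ as the minimizer of Λ_q, it may help to see that functional minimized numerically. The sketch below uses plain gradient descent on a periodic grid with an ad hoc, untuned step size; it is written for q ≥ 2 (here q=3, the dual of p=1.5), since for 1<q<2 the factor |∇f|^{q-2} degenerates at critical points and would need an ε-regularization.

```python
import numpy as np

rng = np.random.default_rng(2)

def grad_periodic(f, h):
    """Forward differences with periodic boundary conditions."""
    return (np.roll(f, -1, 0) - f) / h, (np.roll(f, -1, 1) - f) / h

def div_periodic(vx, vy, h):
    """Backward differences; the negative adjoint of grad_periodic."""
    return (vx - np.roll(vx, 1, 0)) / h + (vy - np.roll(vy, 1, 1)) / h

def minimize_lambda_q(rhs, q, h, step, iters):
    """Plain gradient descent on Lambda_q(f) = int |grad f|^q/q - f*rhs."""
    f = np.zeros_like(rhs)
    for _ in range(iters):
        fx, fy = grad_periodic(f, h)
        mag = (fx**2 + fy**2) ** ((q - 2) / 2.0)   # |grad f|^(q-2); q >= 2 here
        f -= step * (-div_periodic(mag * fx, mag * fy, h) - rhs)
        f -= f.mean()                              # enforce int f = 0
    return f

N = 32; h = 1.0 / N
p = 1.5; q = p / (p - 1.0)                         # dual exponent, q = 3
rhs = rng.standard_normal((N, N)); rhs -= rhs.mean()   # toy rho_1 - rho_0
f = minimize_lambda_q(rhs, q, h, step=1e-4, iters=40000)
fx, fy = grad_periodic(f, h)
print(h**2 * ((fx**2 + fy**2) ** (q / 2.0)).sum())     # proxy for int |grad f|^q
```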
http://arxiv.org/abs/2405.10047v1
20240516123330
Stellar Chromospheric Activity Database of Solar-like Stars Based on the LAMOST Low-Resolution Spectroscopic Survey: II. the bolometric and photospheric calibration
[ "Weitao Zhang", "Jun Zhang", "Han He", "Ali Luo", "Haotong Zhang" ]
astro-ph.SR
[ "astro-ph.SR" ]
School of Physics and Optoelectronics Engineering, Anhui University, Hefei 230601, China; zjun@ahu.edu.cn National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China; hehan@nao.cas.cn University of Chinese Academy of Sciences, Beijing 100049, China CAS Key Laboratory of Optical Astronomy, Chinese Academy of Sciences, Beijing 100101, China Insights into the dependence of stellar magnetic activity on stellar parameters can be gained from chromospheric activity studies based on large-scale spectroscopic surveys. The main objective of this project is to provide the chromospheric activity parameters database for the LAMOST Low-Resolution Spectroscopic Survey (LRS) spectra of solar-like stars and explore the overall property of stellar chromospheric activity. The CaII H and K lines are employed to construct indicators for assessing and studying the chromospheric activity of solar-like stars. We investigate the widely used bolometric and photospheric calibrated chromospheric activity index R'_ HK, derived from the method in the classic literature (R'_ HK,classic) and the method based on the PHOENIX model (R'_ HK,PHOENIX). Since the detailed stellar atmospheric parameters, effective temperature (T_ eff), surface gravity (log g), and metallicity ([Fe/H]), are available for LAMOST, we estimate the chromospheric activity index R'_ HK,PHOENIX, along with the corresponding bolometric calibrated index R_ HK,PHOENIX, taking these parameters into account. We provide the database of the derived chromospheric activity parameters for 1,122,495 LAMOST LRS spectra of solar-like stars. Our calculations show that log R'_ HK,PHOENIX is approximately linearly correlated with log R'_ HK,classic. The results based on our extensive archive support the view that the dynamo mechanism of solar-like stars is generally consistent with that of the Sun, and the value of the solar chromospheric activity index is located at the midpoint of the solar-like star sample. We further investigate the proportions of solar-like stars with different chromospheric activity levels (very active, active, inactive and very inactive). The investigation indicates that the occurrence rate of high levels of chromospheric activity is lower among stars with effective temperatures between 5600 and 5900 K. § INTRODUCTION Stellar chromospheric activity, known as the manifestation of stellar magnetic activity, is expected to reveal the underlying physical mechanisms of stars <cit.>. The emission in the line cores of CaII H and K lines is commonly recognized to be sensitive to stellar chromospheric activity. An empirical chromospheric activity index S_ MWO was introduced to quantify the emission of the CaII H and K lines observed at the Mount Wilson Observatory (MWO) <cit.>. Since S_ MWO is defined as the ratio between the emission flux in the line cores of the CaII H and K lines and the pseudo-continuum flux (the flux in two 20 Å reference bands on the violet and red sides), it is concise and effective for characterizing the stellar activity cycle <cit.>.
However, S_ MWO is related to the continuum flux which is governed by the stellar effective temperature (or equivalently, the color index) <cit.>. As a result, it would be inflexible for comparing the emission of CaII H and K lines among stars of different spectral types. The ratio between the stellar surface flux in the line core of CaII H and K lines and the stellar bolometric flux, denoted as R_ HK, is considered to be marginally affected by the stellar effective temperature (or the color index) and can be derived from S_ MWO <cit.>. <cit.> and <cit.> introduced the bolometric factor C_ cf (depends on the color index B-V) and the factor K to convert S_ MWO to the stellar surface flux in the line cores of CaII H and K lines. Meanwhile, the photospheric fluxes contained in the line cores of CaII H and K lines could not be ignored, especially for solar-like stars <cit.>. The photospheric contribution R_ phot, which represents the photospheric flux normalized by the stellar bolometric flux, can analogously be deduced as a function of B-V <cit.>. Subtracting R_ phot from R_ HK, one can derive the widely used bolometric and photospheric calibrated chromospheric activity index R'_ HK. The R'_ HK is frequently employed to characterise the relationships between stellar chromospheric activity and other stellar properties such as rotation period <cit.> and stellar age <cit.>. The derivation of R'_ HK may be influenced by the bolometric factor C_ cf, the value of K, the photospheric contribution R_ phot and S_ MWO. <cit.> compared the relationship between the Hα line and the CaII H and K lines, where they recalibrated the C_ cf to the range of 0.45 ≤ B-V ≤ 1.81. <cit.> and <cit.> concentrated on the relationship of R'_ HK and the rotation period for M dwarfs. <cit.> extended the bolometric factor C_ cf and the photospheric contribution R_ phot to B-V = 1.90 using the empirical spectral library. <cit.> derived the equations of C_ cf and R_ phot based on the empirical and synthetic spectral library, respectively. <cit.> and <cit.> have provided estimates of C_ cf and R_ phot as functions of effective temperature. The stellar surface flux now is relatively accurately determined in synthetic spectral model such as ATLAS, PHOENIX and MARCS <cit.>. The synthetic spectral library PHOENIX was widely used in the calculation of chromospheric activity index based on the CaII H and K lines, e.g., <cit.> estimated the stellar surface flux as a formula of B-V, and <cit.> derived the relationship between C_ cf and the stellar effective temperature. <cit.> directly cross-matched each observed spectrum with the synthetic spectral library PHOENIX to derive an empirical chromospheric basal flux line. In addition, the PHOENIX spectral library is also used to deduce the photospheric contribution (e.g., ). <cit.> pointed out that the photospheric contribution derived from the PHOENIX library is higher than that obtained from empirical spectra by <cit.>. With the development of large-scale photometric and spectroscopic surveys, statistical investigation of stellar chromosphere may disclose some novel phenomena <cit.>. The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST, also named the Guoshoujing Telescope) has released massive spectral data since its pilot survey started in 2011 <cit.>. The spectra released by the Low-Resolution Spectroscopic Survey (LRS) of LAOMST cover the wavelength from 3700 to 9100 Å with a spectral resolving power (R = λ/Δλ) of about 1800 <cit.>. 
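For concreteness, the classic calibration chain recalled above can be sketched in a few lines. The polynomial fits below are the commonly quoted Noyes et al. (1984) forms for C_cf and R_phot, together with the conventional factor 1.34×10^-4; they are assumptions of this sketch (valid roughly for 0.4 ≲ B-V ≲ 1.0) and should be checked against the calibration actually adopted.

```python
import numpy as np

def log_Ccf_noyes(bv):
    """Commonly quoted Noyes et al. (1984) fit for the bolometric factor."""
    return 1.13 * bv**3 - 3.91 * bv**2 + 2.84 * bv - 0.47

def log_Rphot_noyes(bv):
    """Commonly quoted Noyes et al. (1984) fit for the photospheric term."""
    return -4.898 + 1.918 * bv**2 - 2.893 * bv**3

def r_prime_hk_classic(s_mwo, bv):
    """R'_HK = 1.34e-4 * Ccf * S_MWO - R_phot (classic calibration chain)."""
    r_hk = 1.34e-4 * 10.0 ** log_Ccf_noyes(bv) * s_mwo
    return r_hk - 10.0 ** log_Rphot_noyes(bv)

# Solar-like check: S ~ 0.17 and B-V ~ 0.65 give log R'_HK near -4.94.
print(np.log10(r_prime_hk_classic(0.17, 0.65)))
```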
A number of investigations of chromospheric activity have profited from the various spectral lines recorded by LAMOST, such as the CaII H and K lines, the Hα line and the CaII infrared triplet (IRT) lines (e.g., ). In our previous work (, hereafter Paper I), we investigated the CaII H and K lines of LAMOST LRS spectra and provided a stellar chromospheric activity database, in which the S index values of LAMOST were calibrated to the scale of MWO. In this work, we are dedicated to describing the chromospheric activity of solar-like stars based on the bolometric and photospheric calibrated indexes of LAMOST LRS spectra <cit.>. This paper is organized as follows. In Section <ref>, we describe the spectral data used in this work. In Section <ref>, the detailed procedures of deriving the chromospheric activity indexes and their uncertainties are illustrated. In Section <ref>, we present the database provided in this work and discuss the chromospheric activity based on the database. Finally, we provide a brief summary and conclusion of this work in Section <ref>. § DATA COLLECTION OF SOLAR-LIKE STARS We use the LAMOST LRS spectra in the Data Release 8 (DR8) v2.0[<http://www.lamost.org/dr8/v2.0/>], which were observed between October 2011 and May 2020. The LAMOST DR8 v2.0 comprises 10,633,515 LRS spectra, among which 6,684,413 spectra with determined stellar parameters have been published in the LAMOST LRS Stellar Parameter Catalog of A, F, G and K Stars (hereafter referred to as the LAMOST LRS AFGK Catalog). The stellar parameters such as effective temperature (T_ eff), surface gravity (log g), metallicity ([Fe/H]), heliocentric radial velocity (V_r) and their corresponding uncertainties are provided by the LAMOST Stellar Parameter Pipeline (LASP) <cit.>. We select the spectra of solar-like stars <cit.> by effective temperatures around that of the Sun (T_ eff,⊙ = 5777 K, adopted as in ), in the range of 4800 ≤ T_ eff≤ 6300 K, and metallicities around that of the Sun ([Fe/H]_⊙=0.0 dex), in the range of -1.0<[Fe/H]<1.0 dex. The spectra of main-sequence stars are empirically separated from the giant sample by the criterion on log g adopted in Paper I: log g ≥ 5.98-0.00035 × T_ eff. The uncertainties of the chromospheric activity indexes are related to the uncertainties of the spectral fluxes and the corresponding stellar parameters derived from the LRS spectra, which are predominantly impacted by the signal-to-noise ratio (S/N) of the spectra. The precision of the spectral fluxes of CaII H and K lines and of the stellar parameters is primarily affected by the S/N in the g and r bands of LRS. Therefore, the high-S/N spectra of solar-like stars are selected by the S/N thresholds S/N_g ≥ 50.00 and S/N_r ≥ 71.43 adopted in Paper I. A total of 1,149,216 spectra of solar-like stars are picked out from the LAMOST LRS AFGK Catalog. The band of CaII H and K lines used to derive the chromospheric activity index refers to the vacuum wavelength range of 3892.17-4012.20 Å (see Paper I). The spectra in this band with zero or negative flux are discarded. We eventually analyze the chromospheric activity based on 1,122,495 LAMOST LRS spectra of solar-like stars. The distribution of the selected spectral samples is shown in Figure <ref>, where the gray dots represent the samples in the LAMOST LRS AFGK Catalog. Since abundant stellar information is available in Gaia DR3 <cit.>, we identified 861,505 solar-like stars in the selected spectra with gaia_source_id obtainable in the LAMOST LRS AFGK Catalog.
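As a concrete illustration of the selection criteria above, a minimal sketch of the sample cuts is given below. The catalog file name and the column names (teff, logg, feh, snr_g, snr_r) are hypothetical placeholders, and the sketch is an illustration of the criteria rather than the actual selection pipeline.

```python
import pandas as pd

# Hypothetical CSV export of the LAMOST LRS AFGK Catalog; column names are assumed.
catalog = pd.read_csv("lamost_lrs_afgk_catalog.csv")

mask = (
    catalog["teff"].between(4800, 6300)                          # solar-like T_eff range [K]
    & (catalog["feh"] > -1.0) & (catalog["feh"] < 1.0)           # solar-like [Fe/H] range [dex]
    & (catalog["logg"] >= 5.98 - 0.00035 * catalog["teff"])      # main-sequence cut (Paper I)
    & (catalog["snr_g"] >= 50.00) & (catalog["snr_r"] >= 71.43)  # high-S/N thresholds
)
solar_like = catalog[mask]
print(f"{len(solar_like)} spectra of solar-like stars selected")
```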
Given that LAMOST is dedicated to spectral surveys of large sky areas, 81% of the solar-like stars have been observed only once. Figure <ref> displays the histogram of the observation numbers for the 861,505 solar-like stars. In Figure <ref>, we show the histograms of T_ eff, log g, [Fe/H] and V_r for the 861,505 solar-like stars used in this work. If one star is observed more than once, the values of T_ eff, log g, [Fe/H] and V_r are taken as the median of the corresponding values of the multiple observed spectra. As illustrated in the LAMOST DR8 release note [<https://www.lamost.org/dr8/v2.0/doc/release-note>], the uncertainties of T_ eff, log g, [Fe/H] and V_r are relatively high for S/N_r less than 30 and are relatively accurate for S/N_r greater than 50. Based on our aforementioned S/N thresholds of spectral selection (S/N_g ≥ 50.00 and S/N_r ≥ 71.43), the uncertainties of T_ eff, log g, [Fe/H] and V_r for the selected stars, which are provided by LASP, are approximately distributed around 25 K, 0.035 dex, 0.025 dex, and 3.5 km s^-1, respectively. The LAMOST DR8 release note also provides the parameter comparisons between LAMOST DR8 v2.0 LRS and DR16 <cit.> of the Sloan Digital Sky Survey <cit.>. Since the effective temperature plays an important role in Section <ref>, we compare the T_ eff provided by LASP with the results of <cit.> in Appendix <ref>. § EVALUATION OF THE CHROMOSPHERIC ACTIVITY INDEX OF SOLAR-LIKE STARS The following steps are taken in the derivation of the chromospheric activity index R'_ HK of solar-like stars based on the CaII H and K lines of LAMOST LRS: (1) definition of S_ MWO for LAMOST LRS, (2) conversion of S_ MWO to R_ HK, (3) derivation of the bolometric and photospheric calibrated chromospheric activity index R'_ HK, (4) estimation of the uncertainties of the chromospheric activity indexes. §.§ Definition of S_MWO for LAMOST LRS The stellar chromospheric activity index has been widely studied and extended based on the emission of the CaII H and K lines (e.g., ). In 1966, the two-channel HKP-1 spectrophotometer was employed at the Mount Wilson Observatory to monitor the emission of stellar CaII H and K lines <cit.>. One channel was used to collect data in the 25 Å rectangular bands located on the red and violet sides of the CaII H and K lines. The counts in the reference bands of this channel were noted as N_ℛ𝒱. The other channel was used to measure the emission in the 1 Å rectangular bands centered at the CaII H or K line. After completing the counts in either the H or K line, the relative instrumental fluxes F_ℋ = N_ℋ/N_ℛ𝒱 or F_𝒦 = N_𝒦/N_ℛ𝒱 could be collected, where N_ℋ and N_𝒦 are the counts in the 1 Å rectangular bands centered at the CaII H and K lines. <cit.> employed F = 1/2(F_ℋ+F_𝒦) to assess the emission of CaII H and K lines collected by the HKP-1. In view of the instrumental effects and certain limitations of the HKP-1, <cit.> introduced the HKP-2, a four-channel spectrophotometer, in 1977. The H and K channels collected the two 1.09 Å full width at half maximum (FWHM) triangular bandpasses in the line cores of CaII H and K lines centered at the air wavelengths of 3968.47 and 3933.66 Å, respectively. In addition, the R and V channels measured the two 20 Å rectangular bandpasses on the red and violet sides of the CaII H and K lines (wavelength ranges in air being 3991.07–4011.07 Å and 3891.07–3911.07 Å, respectively).
The H, K, R and V channels are exposed sequentially and rapidly, with the exposure time of the H and K channels being eight times that of the R and V channels. To align the HKP-2 data with the HKP-1 data, <cit.> performed a calibration of the HKP-2 data by S_ MWO = α· (N_ H+N_ K)/(N_ R+N_ V), where N_ H, N_ K, N_ R and N_ V are the counts in the H, K, R and V channels of HKP-2, respectively, and the scaling factor α=2.4 is applied to ensure the consistency of the results between HKP-2 and HKP-1 <cit.>. Previous studies have utilized various bandpasses and definitions of the chromospheric S index to calibrate their measurements from different instruments to the scale of MWO <cit.>. We discussed two typical definitions of the S index in Paper I, namely S_ rec and S_ tri, which are computed from the CaII H and K lines using 1 Å rectangular bandpasses and 1.09 Å FWHM triangular bandpasses, respectively. In conclusion, these two definitions of the S index are comparable for investigating the stellar chromospheric activity based on the CaII H and K lines observed by LAMOST LRS. The S_ tri is defined as S_ tri = (H_ tri + K_ tri)/(R+V), where R and V represent the mean fluxes in the 20 Å rectangular bandpasses centered at the vacuum wavelengths of 4002.20 and 3902.17 Å, and H_ tri and K_ tri are the mean fluxes in the 1.09 Å FWHM triangular bandpasses centered at the vacuum wavelengths of 3969.59 and 3934.78 Å <cit.>. Since the vacuum wavelength is adopted in LAMOST LRS spectra, the above vacuum wavelength values of the bandpass centers are converted from the corresponding wavelength values in air, see Paper I. The relationship between the value of vacuum wavelength and air wavelength is obtained from <cit.>. A denser and uniform wavelength sampling is instrumental in the integration of spectral flux. To estimate the mean fluxes in each bandpass, the spectra were linearly interpolated to a wavelength step of 0.001 Å. The wavelength shift caused by radial velocity cannot be ignored, because the bandpasses used for the calculation are narrow. We calibrate the spectral wavelength to the values in the rest frame before the calculation of the chromospheric activity index. The pretreatment of the wavelength based on V_r is introduced in Paper I. In Paper I, 65 common stars were identified by cross-matching the database in that work and the S_ MWO catalog of MWO in <cit.>. A relationship between the S indexes of LAMOST and S_ MWO was introduced to calibrate the results of LAMOST to the scale of MWO. The relationship between S_ tri and S_ MWO can be expressed by an exponential formula S_ MWO = e^ 6.913 S_ tri-3.348, and the detailed technological process can be found in Paper I. The histogram of σ_S_ MWO/S_ MWO for stars with more than one observation is shown in Figure <ref>, where σ_S_ MWO represents the standard deviation of S_ MWO. For the majority of the samples (98.98%), the values of σ_S_ MWO/S_ MWO are less than 0.1. §.§ Conversion of S_MWO to R_HK To connect S_ MWO with physical quantities, it can be described by the stellar surface fluxes as S_ MWO = 8α·ℱ_ HK/ℱ_ RV, where ℱ_ HK is the stellar surface flux in the 1.09 Å FWHM H and K bands, and ℱ_ RV represents the stellar surface flux in the 20 Å R and V bands <cit.>. The constant 8 comes from the aforementioned different exposure times in HKP-2, and α represents the scaling factor in Equation <ref> <cit.>.
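Before turning to ℱ_ RV, the S_ tri measurement and its calibration to the MWO scale described above can be sketched as follows. This is an illustrative implementation (assuming rest-frame vacuum wavelengths already interpolated to a 0.001 Å step), not the actual pipeline of Paper I.

```python
import numpy as np

def mean_flux_rect(wave, flux, center, width=20.0):
    """Mean flux in a rectangular bandpass of full width `width` (Angstrom)."""
    m = np.abs(wave - center) <= width / 2.0
    return flux[m].mean()

def mean_flux_tri(wave, flux, center, fwhm=1.09):
    """Triangle-weighted mean flux: the weight is 1 at `center` and falls
    linearly to 0 at center +/- `fwhm`, so the full width at half maximum is `fwhm`."""
    w = np.clip(1.0 - np.abs(wave - center) / fwhm, 0.0, None)
    return np.sum(w * flux) / np.sum(w)

def s_indexes(wave, flux):
    """wave: rest-frame vacuum wavelengths [A] on a dense (0.001 A) grid."""
    H = mean_flux_tri(wave, flux, 3969.59)    # Ca II H line core (vacuum center)
    K = mean_flux_tri(wave, flux, 3934.78)    # Ca II K line core (vacuum center)
    R = mean_flux_rect(wave, flux, 4002.20)   # red 20 A reference band
    V = mean_flux_rect(wave, flux, 3902.17)   # violet 20 A reference band
    s_tri = (H + K) / (R + V)
    s_mwo = np.exp(6.913 * s_tri - 3.348)     # calibration to the MWO scale (Paper I)
    return s_tri, s_mwo
```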
The ℱ_ RV mainly depends on the stellar atmospheric parameters, and thus can be derived from empirical spectral libraries (e.g., ) or synthetic spectral libraries (e.g., ). By combining the S_ MWO with the corresponding stellar continuum spectra of an empirical or synthetic spectral library, a new activity index R_ HK can be constructed as follows, which describes the emission of CaII H and K lines more physically than S_ MWO (e.g., ). The stellar surface flux ℱ_ HK in the 1.09 Å FWHM H and K bands can be normalized by the bolometric flux as R_ HK = ℱ_ HK/σ T_ eff^4, where T_ eff is the stellar effective temperature and σ = 5.67 × 10^-5 erg cm^-2 s^-1 K^-4 is the Stefan-Boltzmann constant. R_ HK is not related to the continuum flux around the CaII H and K lines, which is governed by the stellar effective temperature; thus it can be used to compare stars with different spectral types. A widely used form of the relationship between S_ MWO and R_ HK can be expressed as R_ HK = K ·σ^-1· 10^-14· C_ cf· S_ MWO, where C_ cf is the bolometric factor, and 10^-14 is an arbitrary factor <cit.>. The factor K is in units of erg cm^-2 s^-1, and was introduced by <cit.> to convert the relative flux F_ HK (in arbitrary units) F_ HK = C_ cf· T_ eff^4 · 10^-14· S_ MWO to the stellar surface flux ℱ_ HK by ℱ_ HK = K · F_ HK. The factor C_ cf in Equation <ref> was derived in the pioneering studies of <cit.> and <cit.> (see e.g., ). <cit.> first introduced and deduced the C_ cf as a function of B-V for main-sequence stars with 0.45 ≤ B-V ≤ 1.5. Subsequently, <cit.> broadened the C_ cf to 0.3 ≤ B-V ≤ 1.6 for FGK type main-sequence stars. To describe the chromospheric activity of M dwarfs, <cit.> calibrated C_ cf to the range 0.45 ≤ B-V ≤ 1.81 for stars with spectral types ranging from F6 to M5, and <cit.> calibrated the C_ cf in the range of 0.4 ≲ B-V ≲ 1.9. On the other hand, <cit.> derived the C_ cf in the range of 0.54 ≤ B-V ≤ 1.90 to include M dwarfs, and they prefer the forms of C_ cf described by the color indexes I-K and V-K. <cit.> calibrated the C_ cf as a function of T_ eff. The preceding calibrations of C_ cf are based on empirical methods; C_ cf can also be obtained from synthetic spectral libraries. The PHOENIX model atmospheres were utilized by <cit.> to obtain ℱ_ RV for main-sequence stars, and the relation between ℱ_ RV and B-V is given as log (ℱ_ RV/19.2) = 8.33 - 1.79(B - V), where 0.44 ≤ B-V ≤ 1.6, and the constant 19.2 is equal to the scaling factor 8α in Equation <ref>. ℱ_ RV is comparable with C_ cf, and can be converted to C_ cf through C_ cf = ℱ_ RV/(8α· K · T^4_ eff· 10^-14), based on Equations <ref>, <ref> and <ref>. Through matching the PHOENIX spectral library with observed spectra, <cit.> introduced a quadratic formula of C_ cf within 0.44 ≤ B-V ≤ 1.33 for luminosity classes V and IV with log g between 5.0 and 3.5 dex. <cit.> derived the C_ cf as a fifth-order function of T_ eff based on the synthetic spectral library of PHOENIX. The C_ cf in the different studies described above can generally be expressed with a polynomial log C_ cf (X) = ∑_i=0^5 C_iX^i , where X represents B-V or T_ eff, and C_i (i=0,...,5) are the corresponding coefficients, which are presented in Table <ref>. The S_ MWO can be derived from observed spectra, and C_ cf can be estimated from the stellar color index or T_ eff as described above. The remaining coefficient in Equation <ref> to be determined is the factor K.
<cit.> deduced K = (0.76 ± 0.11) × 10^6 erg cm^-2 s^-1 based on the investigation of <cit.>, thus K ·σ^-1· 10^-14 = 1.34 × 10^-4, which is frequently adopted in the relevant works (e.g., ). <cit.> derived K = (1.29 ± 0.19) × 10^6 erg cm^-2 s^-1 based on the solar S index S_ MWO,⊙=0.160 <cit.> and the color index of the Sun (B-V)_⊙ = 0.665 <cit.>. Additionally, <cit.> conducted a recalibration of the K value, obtaining a result of (1.07 ± 0.13) × 10^6 erg cm^-2 s^-1. They pointed out that the discrepancy between their result and the K value reported by <cit.> is mainly due to the adoption of a different solar B-V value, which they took to be 0.642 <cit.>. K ·σ^-1· 10^-14 = 1.887 × 10^-4 has gradually been adopted in recent works (e.g., ). In this work, the value of R_ HK derived from the method in the classic literature for the LRS spectra is denoted as R_ HK,classic, which is calculated by utilizing the C_ cf from <cit.> (row 2 in Table <ref>) and K = 0.76 × 10^6 erg cm^-2 s^-1 from <cit.>. Since the value of B-V is needed for the estimation of R_ HK,classic, we use the relation between T_ eff and B-V introduced in <cit.> to transform T_ eff to B-V when calculating R_ HK,classic, which is based on the research of <cit.>. The transformation is given by log T_ eff = 3.908 - 0.234(B-V), in the range 0.4 < B-V < 1.4. Based on Equations <ref> and <ref>, we can express R_ HK by S_ MWO and ℱ_ RV as R_ HK = S_ MWO·ℱ_ RV/8α·1/σ T_ eff^4. As described above, recent studies have demonstrated that the PHOENIX model is a useful tool for deriving the stellar surface flux ℱ_ RV. In this work, besides R_ HK,classic we also utilize the spectral library of PHOENIX to estimate ℱ_ RV, and then derive R_ HK through Equation <ref>, denoted as R_ HK,PHOENIX. Because the detailed stellar atmospheric parameters (T_ eff, log g and [Fe/H]) are available for LAMOST (see Section <ref>), and the ℱ_ RV values estimated from the PHOENIX synthetic spectral library are related to these stellar parameters, we evaluate the values of R_ HK,PHOENIX taking these parameters into account. <cit.> published a high-resolution synthetic spectral library[<http://phoenix.astro.physik.uni-goettingen.de/>] based on version 16 of the PHOENIX model atmospheres. The stellar atmospheric parameter space of their library covers 2300 ≤ T_ eff≤ 12000 K, 0.0 ≤log g ≤ 6.0 dex and -4.0 ≤ [Fe/H]≤ 1.0 dex. In <cit.>, a comparison between the PHOENIX synthetic spectral library and empirical spectra was conducted, and their results show that the spectra of <cit.> exhibit good consistency with the empirical spectra in the effective temperature range down to about 4000 K. Considering the stellar parameter space of the LAMOST LRS spectra of solar-like stars used in this work as described in Section <ref>, we utilize the spectra in <cit.> within the range of 4800 ≤ T_ eff≤ 6300 K, 3.5 ≤log g ≤ 5.0 dex and -1.0 ≤ [Fe/H]≤ 1.0 dex. A total of 320 high-resolution synthetic spectra in this parameter range are collected to calculate the value of ℱ_ RV. We fitted log ℱ_ RV by a ternary quadratic polynomial log ℱ_ RV = -138.7639 + 70.3122 X + 0.3893 Y - 2.3216 Z - 0.0806 X · Y - 0.5840 X · Z + 0.0124 Y · Z - 8.2986 X^2 - 0.01242 Y^2 - 0.0351 Z^2, where X, Y and Z represent log T_ eff, log g and [Fe/H], respectively. The fitting coefficients are calculated by the nonlinear least-squares method through the Python function curve_fit of scipy.optimize <cit.>. In Figure <ref>, we present the relationships between ℱ_ RV and B-V (or T_ eff) adopted in different studies.
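As a concrete illustration of the PHOENIX-based conversion, the fitted polynomial above can be combined with Equations <ref> and <ref> in a few lines. The sketch below is illustrative only, but when fed with the solar parameters and S_ MWO,⊙ = 0.1694 it reproduces the solar values quoted below (ℱ_ HK,PHOENIX ≈ 2.423 × 10^6 erg cm^-2 s^-1 and log R_ HK,PHOENIX ≈ -4.416).

```python
import numpy as np

SIGMA = 5.67e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
ALPHA = 2.4       # HKP-2 scaling factor, so 8 * ALPHA = 19.2

def log_f_rv(teff, logg, feh):
    """Ternary quadratic fit of log F_RV to the PHOENIX grid (equation above)."""
    x, y, z = np.log10(teff), logg, feh
    return (-138.7639 + 70.3122 * x + 0.3893 * y - 2.3216 * z
            - 0.0806 * x * y - 0.5840 * x * z + 0.0124 * y * z
            - 8.2986 * x**2 - 0.01242 * y**2 - 0.0351 * z**2)

def r_hk_phoenix(s_mwo, teff, logg, feh):
    f_rv = 10.0 ** log_f_rv(teff, logg, feh)   # surface flux in the R and V bands
    f_hk = s_mwo * f_rv / (8.0 * ALPHA)        # surface flux in the H and K bands
    return f_hk / (SIGMA * teff**4)            # bolometric normalization

# Solar check: expect F_HK ~ 2.423e6 erg cm^-2 s^-1 and log R_HK ~ -4.416.
print(np.log10(r_hk_phoenix(0.1694, 5777.0, 4.44, 0.0)))  # -> about -4.416
```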
The ℱ_ RV values of the PHOENIX spectra used for deriving Equation <ref> are exhibited with the gray circles. The black solid line in Figure <ref> is derived from Equation <ref> with log g = 4.44 dex and [Fe/H]=0.0 dex (solar parameters). It can be seen from Figure <ref> that the results of <cit.> (K = 1.07 × 10^6 erg cm^-2 s^-1 and C_ cf taken from <cit.>) using an empirical spectral library are relatively close to the results calculated from the PHOENIX synthetic spectral library. The ℱ_ HK,PHOENIX value of the Sun is estimated to be (2.423 ± 0.007) × 10^6 erg cm^-2 s^-1, which is calculated by Equations <ref> and <ref> with T_ eff = 5777 K, log g=4.44 dex, [Fe/H] = 0.0 dex and S_ MWO,⊙=0.1694 ± 0.0005. The selected values of solar T_ eff and log g follow <cit.>. The S_ MWO,⊙ is the mean value of the solar S index obtained from the MWO HKP-2 measurements by <cit.>. The solar ℱ_ HK values estimated by <cit.>, <cit.> and <cit.> are (2.17 ± 0.32) × 10^6, (2.12 ± 0.25) × 10^6 and (2.47 ± 0.10) × 10^6 erg cm^-2 s^-1, respectively. Our evaluation of the solar ℱ_ HK value is consistent with those estimated in previous investigations, with a slight deviation. The deviation may originate from the different spectral fluxes in the PHOENIX model, the different choices of solar atmospheric parameters, and the different values of S_ MWO,⊙. As a result, the values of R_ HK,PHOENIX are relatively higher than R_ HK,classic, with a boost factor β = 1.6. Figure <ref> displays a comparison between the values of log (1/βR_ HK,PHOENIX) and log R_ HK,classic for the LAMOST LRS spectra of solar-like stars used in this work. The correlation between them can be fitted by a linear formula log R_ HK,classic = 1.027log (1/βR_ HK,PHOENIX)+0.135. §.§ Derivation of the Bolometric and Photospheric Calibrated Chromospheric Activity Index R'_HK The emission flux of CaII H and K lines is known to comprise the fluxes of the stellar photosphere and chromosphere <cit.>. To acquire a purer chromospheric activity index, we should subtract the photospheric contribution from R_ HK. The photospheric and bolometric calibrated chromospheric activity index R'_ HK <cit.> is defined as R^'_ HK = R_ HK - R_ phot, where R_ HK has been described and derived in Section <ref>, and R_ phot represents the photospheric contribution, which is the ratio between the photospheric flux and the bolometric flux R_ phot = ℱ_ phot/σ T^4_ eff. R_ phot can be derived from empirical spectral libraries (e.g., ) or synthetic spectral libraries (e.g., ). The values of log R_ phot in the literature that can be expressed in the polynomial form log R_ phot = ∑_i=0^5 P_iX^i are presented in Table <ref>. In Equation <ref>, X represents B-V or T_ eff and the P_i (i=0,...,5) are the coefficients of the polynomial, which are given in Table <ref>. <cit.> distilled the result of <cit.> and expressed the relation between log R_ phot and B-V via a cubic polynomial log R_ phot = -4.898+1.918(B-V)^2-2.893(B-V)^3, for main-sequence stars with B-V > 0.44 (see the first row of Table <ref>). They noted that R_ phot becomes negligible for the case of B-V ≳ 1.0. This expression of R_ phot in Equation <ref> has been widely adopted to derive R'_ HK in the majority of studies (e.g., ), while a simpler linear form of log R_ phot is also available (used in ). By cross-matching 72 stars with <cit.>, <cit.> fitted the log R_ phot into a formula of T_ eff log R_ phot = -4.78845 - 3.70700/[1 + (T_ eff/4598.92)^17.527], for stars in the range of 4350 ≤ T_ eff≤ 6500 K.
Based on the inactive stars observed in HARPS spectra, <cit.> empirically fitted the R_ phot for main-sequence stars in the range 0.4 ≲ B-V ≲ 1.9, which is expressed by an exponential function R_ phot = 1.48 × 10^-4· e^-4.3658(B-V). <cit.> adopted the synthetic spectra of PHOENIX to deduce the photospheric flux ℱ_ phot, which is expressed by a linear equation for main-sequence stars in the range of 0.44 ≤ B-V < 1.28 as log ℱ_ phot = 7.49 - 2.06(B-V), which can be converted to R_ phot by Equation <ref>. <cit.> derived the R_ phot in the range of 0.54 ≤ B-V ≤ 1.90 based on the BT-Settl model <cit.>. Besides, <cit.> employed the PHOENIX model to deduce a fifth-order equation that expresses log R_ phot as a function of T_ eff. For the same stellar B-V or T_ eff value, the values of R_ phot deduced from synthetic spectra in <cit.>, <cit.> and <cit.> are generally higher than the empirical calibration of <cit.>. <cit.> thus introduced an offset 0.4612 to scale their result to <cit.>. As with R_ HK described in Section <ref>, we present two kinds of estimates of R'_ HK, denoted as R'_ HK,classic and R'_ HK,PHOENIX, respectively. R'_ HK,classic is calculated using R_ HK,classic and the photospheric contribution derived from Equation <ref> with B-V estimated from Equation <ref>, while R'_ HK,PHOENIX is computed based on R_ HK,PHOENIX and the photospheric contribution R_ phot,PHOENIX estimated as follows. Because the values of R_ HK,PHOENIX are approximately β times larger than the values of R_ HK,classic, we introduce a β-coefficient to scale R_ phot,classic to R_ phot,PHOENIX, and the corresponding log R_ phot,PHOENIX can be expressed by log R_ phot,PHOENIX = log(β· R_ phot,classic) =-4.694+1.918(B-V)^2-2.893(B-V)^3, for B-V>0.44. In Figure <ref>, we present the relations between log R_ phot and B-V (or T_ eff) adopted in different studies. As discussed above, the R_ phot,PHOENIX is scaled from the results of <cit.> using a scale factor β related to the method based on the PHOENIX model. Hence, the red solid curve in Figure <ref> differs from those obtained in <cit.> and <cit.>. Since the detailed stellar atmospheric parameters (T_ eff, log g and [Fe/H]) are available for LAMOST, we estimate the B-V in Equation <ref> by considering not only T_ eff, but also the stellar atmospheric parameters log g and [Fe/H]. Based on the InfraRed Flux Method, <cit.> found that there is very little dependence of B-V on log g and provided a relation between T_ eff, B-V, and [Fe/H], with B-V and [Fe/H] in the ranges of 0.18 ≤ B-V ≤ 1.29 and -5.0 ≤ [Fe/H]≤ 0.4 dex. We examine the extendability of the [Fe/H] upper boundary and still employ the relation to obtain the B-V for the small number of spectra with [Fe/H] slightly exceeding 0.4 dex. After obtaining B-V from T_ eff and [Fe/H], we then estimate the photospheric contribution R_ phot,PHOENIX based on Equation <ref>. Because both R_ HK,PHOENIX and R_ phot,PHOENIX are about β times higher than the corresponding classic indexes, to be consistent with the classic studies, we calculate R'_ HK,PHOENIX by R^'_ HK,PHOENIX = 1/β(R_ HK,PHOENIX - R_ phot,PHOENIX). In Figure <ref>, we present a comparison between the values of the two indexes log R'_ HK,PHOENIX and log R'_ HK,classic for the LAMOST LRS spectra of solar-like stars studied in this work. As shown in Figure <ref>, there exists a linear correlation between these two quantities; the fitting formula is log R'_ HK,classic = 0.999log R'_ HK,PHOENIX+0.009.
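A minimal sketch of this final step, assuming R_ HK,PHOENIX has already been computed and B-V has been obtained from T_ eff and [Fe/H] as described above, might look as follows. With the solar values discussed in the next paragraph (B-V = 0.653 and log R_ HK,PHOENIX ≈ -4.416), it returns log R'_ HK,PHOENIX ≈ -4.96.

```python
import numpy as np

BETA = 1.6  # scaling factor between the PHOENIX-based and classic indexes

def log_r_phot_phoenix(b_v):
    """Photospheric contribution scaled by beta (valid for B-V > 0.44)."""
    return -4.694 + 1.918 * b_v**2 - 2.893 * b_v**3

def r_prime_hk_phoenix(r_hk_phoenix, b_v):
    """Bolometric and photospheric calibrated index, rescaled to the classic level."""
    r_phot = 10.0 ** log_r_phot_phoenix(b_v)
    return (r_hk_phoenix - r_phot) / BETA

# Solar check: expect log R'_HK,PHOENIX ~ -4.96.
print(np.log10(r_prime_hk_phoenix(10.0 ** -4.416, 0.653)))
```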
<cit.> estimated the average value of log R'_ HK,⊙ as -4.9427 ± 0.0072 based on the S_ MWO,⊙ over solar cycles 15-24 and (B-V)_⊙ = 0.653 ± 0.003. Taking T_ eff = 5777 K, log g=4.44 dex, [Fe/H] = 0.0 dex and the same solar B-V, we obtain log R'_ HK,PHOENIX=-4.9599 ± 0.0051 for the Sun. §.§ Estimation of the Uncertainty of Chromospheric Activity Indexes We estimated the uncertainties of log R_ HK,classic, log R'_ HK,classic, log R_ HK,PHOENIX, and log R'_ HK,PHOENIX with Monte Carlo error propagation. Because the log R_ HK,PHOENIX values are calculated by Equation <ref>, the uncertainties of log R_ HK,PHOENIX arise from the uncertainties of S_ MWO and ℱ_ RV. The uncertainties of log R_ HK,classic predominantly arise from the uncertainties of S_ MWO and C_ cf as presented in Equation <ref>. As illustrated in Paper I, we estimated the uncertainties of S_ MWO by considering the impact of the uncertainties of the spectral flux, the discretization in spectral data, and the uncertainty of radial velocity. Regarding the uncertainties of ℱ_ RV, they are affected by the uncertainties of the stellar atmospheric parameters T_ eff, log g and [Fe/H] through Equation <ref>. Since we calculate the value of C_ cf through the B-V value derived from Equation <ref>, the uncertainties of C_ cf are influenced by the uncertainties of B-V, which are propagated from the uncertainties of T_ eff. Figure <ref>(a) illustrates the histograms of the uncertainties of log S_ MWO, log C_ cf, log R_ HK,classic, log R_ phot,classic and log R'_ HK,classic, while Figure <ref>(b) shows the uncertainties for log S_ MWO, log ℱ_ RV, log R_ HK,PHOENIX, log R_ phot,PHOENIX and log R'_ HK,PHOENIX. As shown in Figure <ref>, the uncertainties of log R_ HK,PHOENIX and log R_ HK,classic are both primarily governed by the uncertainties of S_ MWO. The uncertainties of log S_ MWO, log R_ HK,classic, log R'_ HK,classic, log R_ HK,PHOENIX and log R'_ HK,PHOENIX are distributed around 0.030, 0.030, 0.065, 0.030 and 0.065, respectively. § RESULTS AND DISCUSSION §.§ Stellar Chromospheric Activity Database In Section <ref>, we investigate the stellar chromospheric activity through two kinds of methods. The chromospheric activity parameters derived from the method in the classic literature are denoted with classic, while those derived from the method based on the PHOENIX model are denoted with PHOENIX. We provide the database of chromospheric activity parameters for 1,122,495 LAMOST LRS spectra of solar-like stars, which is available at <https://doi.org/10.5281/zenodo.8378849> (compiled in a CSV format file: CaIIHK_Activity_Indexes_LAMOST_DR8_LRS.csv). The database mainly includes the chromospheric activity parameters S_ tri, S_ MWO, R_ HK,classic, R'_ HK,classic, R_ HK,PHOENIX and R'_ HK,PHOENIX, as well as their uncertainties. The columns in the catalog of the database are presented in Table <ref>. The log R'_ HK,classic and log R'_ HK,PHOENIX values of 743 and 821 spectra, respectively, are not available (recorded as '-9999' in the database). One of the reasons is that the values of the stellar parameters exceed the valid range of the empirical formula of R_ phot (0 and 13 spectra for classic and PHOENIX, respectively). The other reason is that the estimated value of the photospheric contribution is larger than the value of R_ HK for very few spectra (743 and 808 spectra for classic and PHOENIX, respectively).
This situation can occur because the photospheric contributions are determined empirically, leading to overestimates for some stars, or because of uncertainties in the evaluation of R_ HK. These spectra are not involved in the subsequent discussion. In Sections <ref> and <ref>, we have performed a comparison between log R_ HK,classic and log R_ HK,PHOENIX, and between log R'_ HK,classic and log R'_ HK,PHOENIX. The results indicate that log R_ HK,PHOENIX and log R'_ HK,PHOENIX are approximately linearly correlated with log R_ HK,classic and log R'_ HK,classic, respectively (see Figures <ref> and <ref>). In the next subsection, we discuss the distribution of chromospheric activity primarily based on R_ HK,PHOENIX and R'_ HK,PHOENIX. §.§ Distribution of Chromospheric Activity Index Among the 1,122,495 LAMOST LRS spectra of solar-like stars, there are 861,505 stars with 'gaia_source_id' available in the LAMOST LRS AFGK Catalog. In this section, we investigate the distribution of the chromospheric activity index based on these stars. If a star is recorded more than once in our dataset, we use the median values of the chromospheric activity parameters from the multiple observed spectra. In Paper I, we have calibrated the S index of LAMOST to S_ MWO, and we also compare the R'_ HK with the results in the literature, as illustrated in Appendix <ref>. There is an approximate consistency between our R'_ HK values and those from other instruments for the common targets. In Figure <ref>, we display the distribution of log R_ HK,PHOENIX with T_ eff for the 861,505 solar-like stars. The solar value of log R_ HK,PHOENIX (-4.416 ± 0.001) is displayed in Figure <ref> with a '⋆' symbol, which is calculated by Equation <ref> with the solar ℱ_ HK,PHOENIX = (2.423 ± 0.007) × 10^6 erg cm^-2 s^-1 given in Section <ref>. It is not surprising that the distribution trend of log R_ HK,PHOENIX shows a clear correlation with T_ eff. Although R_ HK,PHOENIX is the bolometrically calibrated surface flux, it still contains the photospheric contribution, which is related to the stellar spectral type. As mentioned in Section <ref>, it is necessary to further remove the contribution of the photosphere to obtain R'_ HK,PHOENIX. The histograms of the log R'_ HK,PHOENIX values are exhibited in Figures <ref>(a) and (b) with linear-scale and logarithmic-scale vertical axes, respectively. The peak of the distribution is at about -4.90. This peak value is close to the solar log R'_ HK,PHOENIX=-4.9599 as given in Section <ref>. The Vaughan-Preston gap (VP gap, ), known as the bimodal distribution of chromospheric activity, is not observed in Figure <ref>. The separation of log R'_ HK values between active and inactive stars may suggest the existence of different dynamo mechanisms <cit.>. <cit.> investigated a global sample of 4451 cool stars from high-resolution HARPS spectra and concluded that the VP gap is not pronounced. A significant proportion of the stars have intermediate activity levels around log R'_ HK=-4.75 in <cit.>. In contrast, the bimodal distribution of chromospheric activity in <cit.> is relatively significant, based on 1674 F-, G- and K-type stars from the HARPS sample. <cit.> and <cit.> proposed that the VP gap tends to appear for stars with [Fe/H] greater than -0.2, a criterion that cannot be directly applied to the stars in this work. The VP gap is also influenced by the rotation rate (e.g. ), and the relationship between rotation and stellar chromospheric activity in LAMOST samples will be investigated in the future.
<cit.> suggested that the VP gap might originate from different dynamo mechanisms or statistical bias. The absence of the VP gap in the distribution of chromospheric activity for our solar-like stars could be attributed to three possible factors: 1) a gradual diminishing of chromospheric activity during the evolution of solar-like stars; 2) the influence of different stellar properties on the bimodal distribution of the chromospheric activity within our samples, which should be explored in more detail in the future; or 3) the loss of some information in the spectral profile due to the limited resolution of LAMOST LRS spectra <cit.>. We display the distributions of log R'_ HK,PHOENIX with T_ eff, log g and [Fe/H] in Figures <ref>(a), (b) and (c), respectively. To show the trends of log R'_ HK,PHOENIX with these stellar atmospheric parameters, the log R'_ HK,PHOENIX values are uniformly divided into equal-width bins for 4800 < T_ eff < 6300 K, 3.9<log g<4.8 dex and -1.0<[Fe/H]<0.6 dex with steps of 50 K, 0.1 dex and 0.1 dex, respectively; and the fitted median values of the log R'_ HK,PHOENIX in each bin with T_ eff, log g and [Fe/H] are marked by the black dashed lines in Figure <ref>. The formulas of these fitted trends are expressed by the following quadratic polynomials log R'_ HK,PHOENIX = 4.367 - 3.397 × 10^-3 T_ eff +3.094 × 10^-7 T^2_ eff , log R'_ HK,PHOENIX = 1.595 -3.166 log g +0.384 (log g)^2 , log R'_ HK,PHOENIX = -4.912 -0.065 [Fe/H] +7.894 × 10^-3 [Fe/H]^2 . As shown in Figure <ref>, the median values of log R'_ HK,PHOENIX with T_ eff have a minimum at about T_ eff=5500 K, while the dependence of the median values of log R'_ HK,PHOENIX on log g and [Fe/H] is relatively weak. Besides, it can be seen that the solar log R'_ HK,PHOENIX value is approximately on the fitting lines of the median log R'_ HK,PHOENIX values for all three parameters, and the value of the solar chromospheric activity index is located at the midpoint of the solar-like star sample. This result based on our extensive archive supports the view that the dynamo mechanism of solar-like stars is generally consistent with that of the Sun. <cit.> classified the stellar chromospheric activity into four levels: very active (log R'_ HK larger than -4.20), active (log R'_ HK from -4.75 to -4.20), inactive (log R'_ HK from -5.10 to -4.75) and very inactive (log R'_ HK less than -5.10). Following this classification, based on the log R'_ HK,PHOENIX values of the 861,505 stars, we obtain the proportions of very active, active, inactive and very inactive solar-like stars as 1.03%, 21.68%, 65.27% and 12.03%, respectively. For the values of log R'_ HK,classic, the proportions are 1.07%, 24.53%, 62.98% and 11.41%, respectively. The proportions of stars in the different stellar chromospheric activity classes are 2.6%, 27.1%, 62.5% and 7.9% in <cit.>, and 1.2%, 28.5%, 66.9% and 3.5% in <cit.>. When using a threshold of log R'_ HK=-4.75 to classify stars as active and inactive, <cit.> and <cit.> found that 29.7% of stars are classified as active. Classifying the solar-like stars studied in this work with log R'_ HK,PHOENIX=-4.75 and log R'_ HK,classic=-4.75, we obtain the proportions of active solar-like stars as 22.71% and 25.60%, respectively. These proportions are relatively consistent with the results of <cit.> and <cit.>. In Figure <ref>, we show the distributions of log R'_ HK,PHOENIX values in the T_ eff vs. log g, T_ eff vs. [Fe/H], and [Fe/H] vs. log g parameter spaces.
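A minimal sketch of the four-level classification used above and of the resulting proportions, assuming an array of log R'_ HK values, is given below; the mock values in the example are illustrative placeholders for the per-star medians.

```python
import numpy as np

LEVELS = ("very active", "active", "inactive", "very inactive")

def activity_level(log_r_prime_hk):
    """Four-level classification with the thresholds quoted above."""
    if log_r_prime_hk > -4.20:
        return "very active"
    if log_r_prime_hk > -4.75:
        return "active"
    if log_r_prime_hk > -5.10:
        return "inactive"
    return "very inactive"

def level_proportions(log_r_values):
    labels = [activity_level(v) for v in log_r_values]
    return {lev: labels.count(lev) / len(labels) for lev in LEVELS}

# Example with mock values; the real input would be the 861,505 stellar medians.
print(level_proportions([-4.1, -4.5, -4.9, -5.2, -4.8]))
```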
The stellar chromospheric activity levels of very active, active, inactive, and very inactive are indicated by different colors. It can be seen from Figure <ref> that the higher the stellar chromospheric activity level, the narrower the distribution area in the parameter spaces. Since the LAMOST LRS spectra of solar-like stars with determined stellar atmospheric parameters are sufficiently numerous, we further investigate the relations between the proportions of solar-like stars with different chromospheric activity levels (classified by R'_ HK,PHOENIX) and the stellar atmospheric parameters (T_ eff, log g and [Fe/H]). The proportions of very active, active, inactive and very inactive solar-like stars with different stellar atmospheric parameters are shown in Figure <ref>. The proportion values in Figure <ref> are obtained by dividing the T_ eff, log g and [Fe/H] into bins with step sizes of 100 K, 0.1 dex and 0.1 dex, respectively; and the central values of each bin are used to represent the corresponding stellar atmospheric parameters. Figure <ref>(a) shows that the proportions of inactive solar-like stars exhibit a relatively stable trend within the T_ eff range of 4800 to 6000 K. For the very inactive solar-like stars, there is an increasing trend in the proportions as the T_ eff decreases within the T_ eff range from 6300 to 5650 K, and the proportions decrease within the T_ eff range from 5650 to 4800 K. In contrast, the proportions of active solar-like stars exhibit a decreasing trend with decreasing T_ eff from 6300 to 5650 K, and the decreasing trend of the proportions of active solar-like stars is reversed for T_ eff lower than 5650 K. The proportions of very active solar-like stars are almost stable for T_ eff>5900 K, while they increase for T_ eff<5600 K. The minimum value of the proportions of very active solar-like stars is around T_ eff=5700 K. Based on the proportions of active and very active solar-like stars, we conclude that the occurrence rate of high levels of chromospheric activity is lower among the stars with effective temperatures between 5600 and 5900 K. The relations between log g and the proportions of solar-like stars with different chromospheric activity levels are displayed in Figure <ref>(b). The proportions of different chromospheric activity levels of solar-like stars appear to be relatively stable in the range of 3.9 < log g< 4.5 dex. When log g>4.5 dex, the proportions of active solar-like stars exhibit an increasing trend, whereas the proportions of very inactive, inactive and very active solar-like stars decrease. <cit.> and <cit.> detected that the distribution of log R'_ HK varies among stars with different levels of metallicity, and the bimodal distribution <cit.> is observed in dwarf stars with [Fe/H] greater than -0.2 dex. In the research of <cit.>, the majority of stars with [Fe/H]>0.1 dex were found to be inactive. In Figure <ref>, the bimodal distribution of log R'_ HK does not exist in our solar-like star sample of LAMOST LRS. However, as shown in Figure <ref>(c), when [Fe/H]>0.1 dex, there is a decrease in the proportions of active solar-like stars. This decreasing trend ceases and the proportions of active solar-like stars become relatively stable when [Fe/H] reaches 0.3 dex. § SUMMARY AND CONCLUSION In this work, we identify 1,122,495 high-quality LRS spectra of solar-like stars from LAMOST DR8 and provide a database of stellar chromospheric activity parameters based on this spectral sample.
The database contains the stellar chromospheric activity parameters S_ tri, S_ MWO, R_ HK and R'_ HK, as well as their uncertainties. R_ HK and R'_ HK are derived from the method in the classic literature (denoted with classic) and the method based on the PHOENIX model (denoted with PHOENIX). When converting the S_ MWO to the bolometric calibrated index R_ HK, the R_ HK,classic values are estimated based on the bolometric factor C_ cf from <cit.> and the K factor from <cit.>, while the R_ HK,PHOENIX values are derived from the stellar surface flux ℱ_ RV. The values of R_ HK,PHOENIX are approximately β=1.6 times larger than the values of R_ HK,classic. For the corresponding photospheric contribution R_ phot, the R_ phot,classic is deduced based on <cit.>, and the R_ phot,PHOENIX is scaled by a factor of β from the R_ phot,classic. The bolometric and photospheric calibrated chromospheric activity index R'_ HK is consequently derived by eliminating the photospheric contribution from R_ HK. Our calculations show that log R_ HK,PHOENIX and log R'_ HK,PHOENIX are approximately linearly correlated with log R_ HK,classic and log R'_ HK,classic, respectively. We explore the overall properties of stellar chromospheric activity based on the 861,505 solar-like stars in the database. The results show that the median values of log R'_ HK,PHOENIX with T_ eff have a minimum at about T_ eff=5500 K, while the dependence of the median values of log R'_ HK,PHOENIX on log g and [Fe/H] is relatively weak. The value of the solar chromospheric activity index is located at the midpoint of the solar-like star sample. This result from our extensive archive supports the view that the dynamo mechanism of solar-like stars is generally consistent with that of the Sun. The absence of the VP gap in the distribution of chromospheric activity for our solar-like stars could be attributed to three possible factors: 1) a gradual diminishing of chromospheric activity during the evolution of solar-like stars; 2) the influence of different stellar properties on the bimodal distribution of the chromospheric activity within our samples, which should be explored in more detail in the future; or 3) the loss of some information in the spectral profile due to the limited resolution of LAMOST LRS spectra. We explore the proportions of solar-like stars with different chromospheric activity levels (very active, active, inactive and very inactive). Based on the values of log R'_ HK,PHOENIX, we obtain the proportions of very active, active, inactive and very inactive solar-like stars as 1.03%, 21.68%, 65.27% and 12.03%, respectively. For the values of log R'_ HK,classic, the proportions are 1.07%, 24.53%, 62.98% and 11.41%, respectively. It is observed that the higher the stellar chromospheric activity level, the narrower the distribution area in the T_ eff vs. log g, T_ eff vs. [Fe/H], and [Fe/H] vs. log g parameter spaces. We further investigate the relation between the proportions of solar-like stars with different chromospheric activity levels (classified by R'_ HK,PHOENIX) and the stellar atmospheric parameters (T_ eff, log g and [Fe/H]). Based on the proportions of active and very active solar-like stars, it is concluded that the occurrence rate of high levels of chromospheric activity is lower among the stars with effective temperatures between 5600 and 5900 K.
It is found that when log g>4.5 dex, the proportions of active solar-like stars exhibit an increasing trend, whereas the proportions of very inactive, inactive and very active solar-like stars decrease. It is discovered that there is a decrease in the proportions of active solar-like stars when [Fe/H]>0.1 dex. This decreasing trend ceases and the proportions of active solar-like stars become relatively stable when [Fe/H] reaches 0.3 dex. The chromospheric activity database of the LAMOST LRS spectra of solar-like stars provided in this work includes the most commonly used chromospheric activity parameters such as S_ MWO, R_ HK and R'_ HK. The relationship between chromospheric activity and other stellar magnetic manifestations (such as stellar rotation period and age) can be further investigated. Additionally, the database can be used to investigate the relationship between stellar and solar activity for a better understanding of the stellar-solar connection. The database may also contribute to the discovery of new solar-type stars accommodating potentially habitable exoplanetary systems. This work is supported by the National Key R&D Program of China (2019YFA0405000) and the National Natural Science Foundation of China (12073001 and 11973059). W.Z. and J.Z. thank the support of the Anhui Project (Z010118169). H.H. acknowledges the CAS Strategic Pioneer Program on Space Science (XDA15052200) and the B-type Strategic Priority Program of the Chinese Academy of Sciences (XDB41000000). Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. § ACCURACY OF STELLAR PARAMETERS We identify 3806 common stars in <cit.> and compare their effective temperature values with those in our database, as shown in Figure <ref>. For T_ eff in the range of 4800 to 6300 K, the T_ eff values provided by LASP are approximately consistent with the results in <cit.>, generally with Δ T_ eff less than 120 K. The T_ eff,A2020 values are obtained from various observation instruments and are taken from the survey with the highest spectral resolution when the same sources were observed in multiple surveys <cit.>. Differences in observation instruments and estimation methods contribute to the discrepancies between T_ eff,LASP and T_ eff,A2020. Although there are some differences between T_ eff,LASP and T_ eff,A2020, the corresponding log R'_ HK,PHOENIX values estimated based on T_ eff,LASP and T_ eff,A2020 exhibit approximate consistency, as shown in Figure <ref>, where the Δlog R'_ HK,PHOENIX values are generally less than 0.05. § CALIBRATION OF CHROMOSPHERIC ACTIVITY INDEX We cross-match the Gaia DR3 source identifiers in this paper with stars in <cit.>, <cit.>, <cit.> and  <cit.>, and find 23 common stars (16 stars in and are also studied in ). Figure <ref> shows the comparison between log R'_ HK,PHOENIX and log R'_ HK,paper. As can be seen in Figure <ref>, our results show an approximate agreement with values from other instruments. The values used in Figure <ref> are recorded in Table <ref>.
§ DISTRIBUTION OF CHROMOSPHERIC ACTIVITY INDEX FOR STARS WITH MULTIPLE OBSERVATIONS The histogram of σ_log R'_ HK,PHOENIX/log R'_ HK,PHOENIX for stars with more than one observation is shown in Figure <ref>, where σ_log R'_ HK,PHOENIX represents the standard deviation of log R'_ HK,PHOENIX. The stars with σ_log R'_ HK,PHOENIX/log R'_ HK,PHOENIX<0.2 account for 95.5% of the stars with more than one observation. Additionally, Figure <ref> displays the distributions of log R'_ HK,PHOENIX with T_ eff (a), log g (b) and [Fe/H] (c) for solar-like stars with more than one observation and σ_log R'_ HK,PHOENIX/log R'_ HK,PHOENIX<0.2. The envelopes in Figure <ref> are similar to those in Figure <ref>. The distributions of log R'_ HK,PHOENIX values in the (a) T_ eff vs. log g, (b) T_ eff vs. [Fe/H], and (c) [Fe/H] vs. log g parameter spaces for solar-like stars with more than one observation and σ_log R'_ HK,PHOENIX/log R'_ HK,PHOENIX<0.2 are shown in Figure <ref>.
http://arxiv.org/abs/2405.10262v1
20240516171325
Two-Phase Dynamics of Interactions Explains the Starting Point of a DNN Learning Over-Fitted Features
[ "Junpeng Zhang", "Qing Li", "Liang Lin", "Quanshi Zhang" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CV" ]
Two-Phase Dynamics of Interactions Explains the Starting Point of a DNN Learning Over-Fitted Features Junpeng Zhang Qing Li Liang Lin Quanshi Zhang May 20, 2024 ============================================================ This paper investigates the dynamics of a deep neural network (DNN) learning interactions. Previous studies have discovered <cit.> and mathematically proven <cit.> that given each input sample, a well-trained DNN usually only encodes a small number of interactions (non-linear relationships) between input variables in the sample. A series of theorems have been derived to prove that we can consider the DNN's inference equivalent to using these interactions as primitive patterns for inference. In this paper, we discover that the DNN learns interactions in two phases. The first phase mainly penalizes interactions of medium and high orders, and the second phase mainly learns interactions of gradually increasing orders. We can consider the two-phase phenomenon as the starting point of a DNN learning over-fitted features. Such a phenomenon has been widely shared by DNNs with various architectures trained for different tasks. Therefore, the discovery of the two-phase dynamics provides a detailed mechanism for how a DNN gradually learns different inference patterns (interactions). In particular, we have also verified the claim that high-order interactions have weaker generalization power than low-order interactions. Thus, the discovered two-phase dynamics also explains how the generalization power of a DNN changes during the training process. § INTRODUCTION Most existing studies <cit.> considered the generalization power of an AI model as an intrinsic property of the entire model. However, in this study, let us revisit the generalization power of DNNs from a new perspective, i.e., directly quantifying the primitive inference patterns encoded by a DNN to explain the DNN's overfitting. There have been two substantial advances in this new direction in recent years. • Background 1: Proving that the complex inference score of a DNN can be faithfully explained by primitive inference patterns. Explaining the inference of a DNN as symbolic inference patterns is a fundamental yet counter-intuitive problem in the field of explainable artificial intelligence (XAI). Fortunately, it has been experimentally discovered <cit.> and mathematically proved <cit.> that given a specific input sample, a well-trained DNN usually only encodes a small number of interactions. Each interaction is a metric to measure a non-linear relationship between a specific set S of input variables. As Figure <ref> shows, the DNN may encode an interaction between the four words of S = {raining, cats, and, dogs} (i.e., four input variables) in the input sentence. The co-appearance of the four words triggers this interaction and makes an effect I(S) that pushes the DNN's output towards the meaning of “heavy rain.” It is proven <cit.> that almost all subtle changes of network outputs w.r.t. any random masking of input variables can be mimicked by these interaction effects, just like primitive inference patterns. • Background 2: The interaction enables us to analyze the specific generalization power of each input sample in a much more fine-grained manner, instead of simply projecting the entire input sample into a single high-dimensional feature. To this end, <cit.> have found that high-order interactions often have weaker generalization power than low-order interactions.
The order of an interaction is defined as the number of variables in the interaction, order(S) =| S |, to represent the complexity of the interaction. Discovering the two-phase phenomenon of the change of the interaction’s complexity during the training process. Based on the above findings, in this paper, we hope to explore a new issue, i.e., identifying the exact starting point (epoch) at which a DNN begins to learn over-fitted features. Specifically, as shown in Figure <ref>, we discover a two-phase phenomenon in the training process of DNNs, which reveals the hidden factors that push the DNN from under-fitting to over-fitting throughout the entire training process. (1) Before the training process: a DNN with initialized parameters mainly encodes interactions of medium orders and seldom encodes interactions of very high orders or very low orders. The distribution of interactions over different orders looks like a fusiform. (2) In the first phase: higher-order interactions are gradually eliminated. Eventually, at the end of the first phase, the DNN mainly encodes only low-order interactions. (3) In the second phase: the DNN learns interactions of gradually increasing orders. The gradual increase of the interaction order is a typical phenomenon of learning over-fitted features. Alignment between the two-phase training process and the loss gap. When we use the gap between training and testing losses to measure the overfitting level, we find that the dynamics of the overfitting level is consistently aligned with the dynamics of the two-phase training process. As Figure <ref> shows, during the entire first phase, the gap between the training loss and the testing loss is relatively small, i.e., the DNN has not learned over-fitted features. In contrast, shortly after entering the second phase, the gap widens rapidly, which indicates the beginning of learning over-fitted features. Identifying the starting point of learning over-fitted features. Unlike traditionally considering overfitting as a property of the entire model, we aim to identify the starting point of overfitting for each specific training sample. Typically, the starting point of learning over-fitted features for the DNN occurs shortly into the second phase. Note that the starting point does not mean the entire model has been significantly biased. Instead, both the generalizable features and non-generalizable (over-fitted) features are learned simultaneously in the second phase, and the testing accuracy still keeps increasing. Specifically, in the second phase, the DNN begins to shift its attention from exclusively learning low-order interactions to learning interactions of slightly higher orders. Therefore, we consider that in the second phase the DNN actually learns more complex interactions and moves from under-fitting towards over-fitting. More crucially, the above two-phase phenomenon is widely observed in various DNNs trained for different tasks. To this end, we conducted experiments on seven different DNNs trained on six different datasets, covering image classification, natural language processing, and 3D point cloud classification. In these experiments, we verified both the relationship between the order of interactions and their generalization power, and the two-phase dynamics of learning interactions.
In sum, although previous work <cit.> has found that the inference score of a DNN can be explained as the sum of the effects of a small number of interactions, and has found the weak generalization power of high-order interactions, our study has substantially extended the understanding of the generalization power of DNNs. The discovered two-phase dynamics of interactions explains the detailed mechanical factors that potentially determine how the generalization power changes during the training process. Specifically, in the first phase, the DNN removes the noise and learns low-order interactions to improve generalization power. In the second phase, the DNN gradually learns more complex interactions, and the feature representations of the DNN gradually change from underfitting to overfitting. In addition, our research also shows that the two-phase phenomenon is temporally aligned with the dynamics of the gap between the testing and training losses. We believe that the above findings explain the starting point of a DNN learning over-fitted features. § RELATED WORK Post-hoc explanation of DNNs is a classical direction of explainable AI. However, a disappointing view of the faithfulness of post-hoc explanations of DNNs has been widely held for years <cit.>. Fortunately, recent progress <cit.> in explaining the interactions encoded by a DNN provides a new perspective to analyze the DNN, and a series of properties (e.g., the sparsity property and the universal matching property) of interactions have been proven to mathematically guarantee that the interactions can faithfully represent primitive inference patterns of the DNN. The theoretical system of interactions mainly makes breakthroughs in the following three aspects. (1) It has been mathematically proven <cit.> that a DNN’s inference logits can be faithfully explained through interactions, i.e., given a specific input sample, a sufficiently-trained DNN usually only encodes a small number of interactions for inference, and the DNN's outputs on all different masked states of the sample can be universally matched by these interactions. (2) It has been demonstrated that interactions can explain the generalization power <cit.>, and the adversarial robustness/transferability of a DNN <cit.>. (3) It has been found that twelve approaches for improving adversarial transferability all share a common mechanism <cit.>, i.e., they all implicitly reduce interactions between adversarial perturbations, while fourteen attribution methods can all be explained as re-allocations of interaction effects <cit.>. Therefore, in this study, we further develop the interaction theory to directly use the primitive inference patterns encoded by a DNN to explain the generalization power of the DNN, i.e., identifying the exact starting point of learning over-fitted features for each input sample. Compared to traditional analyses of the generalization power of a DNN, we believe that decomposing a DNN's exact inference score into interactions provides much deeper insights into the essential factors that determine a DNN’s generalization power. First, previous work <cit.> mainly analyzed already converged DNNs, instead of estimating the time at which the learning of over-fitted features starts. Therefore, we explore the dynamics of the order (complexity) of interactions encoded by a DNN to analyze the change of the generalization power of a DNN. Second, previous work considered overfitting as an overall property of a model.
For example, studies explaining DNNs through theoretical generalization bounds <cit.> or the smoothness of the loss landscape <cit.> often project each sample to a high-dimensional feature point in the feature space. However, we find that such a claim is usually valid for shallow models, but for complex DNNs, different training samples usually have different starting points of overfitting. Thus, we propose to evaluate whether a DNN has begun to learn over-fitted features by evaluating the distribution of interactions over different orders. § ANALYZING THE STARTING POINT OF LEARNING OVER-FITTED FEATURES §.§ Preliminaries: interactions Let us consider a DNN v and an input sample 𝐱 = [x_1, x_2, ..., x_n]^T, which contains n input variables and is indexed by N = {1, 2, ..., n}. Let v(𝐱)∈ℝ be a scalar[The DNN's output score can be defined in different forms. For example, for multi-class classification, v(𝐱) can be defined as either v(𝐱) = logp(y=y^*|𝐱)/1-p(y=y^*|𝐱) or the scalar output corresponding to the ground-truth label before the softmax layer, where p(y=y^*|𝐱) represents the probability of the ground-truth label.] output of the DNN. We can decompose v(𝐱) into a set of AND interactions I_and(S|𝐱) with S ∈Ω_and and a set of OR interactions I_or(S|𝐱) with S ∈Ω_or by following <cit.>, as follows, v(𝐱) = ∑_S ∈Ω_and I_and(S|𝐱) + ∑_S ∈Ω_orI_or(S|𝐱) + v(∅). The above AND-OR interactions can be understood as follows. Theorem <ref> shows that an AND interaction represents an AND relationship between input variables in S ⊆ N encoded by the DNN (e.g., the compositional relationship between image regions encoded for image classification or the interactions between words for natural language processing). For example, given an input sentence 𝐱 = “it was raining cats and dogs outside,” the DNN may encode an interaction between S={raining, cats, and, dogs} to make an effect I(S |𝐱) that pushes the DNN's output towards the meaning of “heavy rain.” If any input variable in S is masked[A variable in S is masked means that the variable in S is replaced by a baseline value. The baseline value is usually set to the mean value of this variable over different samples <cit.>.], this numerical effect will be removed from the DNN's output. Similarly, an OR interaction I_or(S|𝐱) represents an OR relationship between input variables in S. For example, given an input sentence 𝐱 = “the movie is disappointing and uninspiring,” the presence of either word in S = {disappointing, uninspiring} will push the DNN's output towards the meaning of negative sentiment. The AND interaction and the OR interaction for ∀∅≠ S⊆ N are defined as I_and(S |𝐱) =∑_T ⊆ S (-1)^| S | - | T | v_and(𝐱_T), I_or(S |𝐱) = - ∑_T ⊆ S (-1)^| S | - | T | v_or(𝐱_N \ T), where 𝐱_T represents a masked input sample in which we mask<ref> the input variables in N ∖ T. The DNN's output v(𝐱_T) is decomposed into two components v(𝐱_T) = v_and(𝐱_T) + v_or(𝐱_T) + v(∅). It is easy to prove[See appendix <ref> for the detailed proof.] that the component v_and(𝐱_T) only contains AND interactions v_and(𝐱_T) = ∑_∅≠ S^'⊆ T I_and(S^'|𝐱_T), and the component v_or(𝐱_T) only contains OR interactions v_or(𝐱_T) = ∑_S⊆ N:S ∩ T ≠∅ I_or(S|𝐱_T). How to compute AND-OR interactions. In order to compute AND-OR interactions, we set the component v_and(𝐱_T) = 0.5· v(𝐱_T) - 0.5· v(∅) + γ_T and set v_or(𝐱_T) = 0.5· v(𝐱_T) - 0.5· v(∅) - γ_T. The parameters {γ_T} are learned to obtain the sparsest interactions via min_{γ_T}∑_S| I_and(S|𝐱) | + | I_or(S|𝐱) | <cit.>.
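To make the definition of I_and concrete, the following brute-force sketch enumerates the inclusion-exclusion sum over all subsets for a toy model with a handful of input variables (the 2^n cost limits it to small n). The toy v_and function below is a hypothetical stand-in for the AND component extracted from a real DNN.

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of the index tuple s, as tuples."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def and_interactions(v_and, n):
    """Brute-force AND interactions I_and(S|x) for all non-empty S.

    v_and: maps a frozenset T (the unmasked variables) to the AND component
    of the network output on the masked sample x_T.
    """
    N = tuple(range(n))
    I = {}
    for S in subsets(N):
        if not S:
            continue
        I[frozenset(S)] = sum(
            (-1) ** (len(S) - len(T)) * v_and(frozenset(T)) for T in subsets(S)
        )
    return I

# Toy check: if v_and(x_T) = 1 exactly when variables {0, 1} are both unmasked,
# the only non-zero interaction is I({0, 1}) = 1, i.e., a pure AND pattern.
v = lambda T: 1.0 if {0, 1} <= T else 0.0
print(and_interactions(v, 3)[frozenset({0, 1})])  # -> 1.0
```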
Theoretical guarantee of taking interactions as a faithful explanation of the DNN's inference logic. The following four properties ensure the faithfulness of interaction-based explanation. • Sparsity property. Theorem <ref> shows that the inference of a sufficiently trained DNN is equivalent to an interaction-based model using a small number of interactions for inference. Theoretically, there are totally 2^n different subsets S⊆ N of input variables, but only a few subsets of input variables have salient interaction effects. The DNN only encodes a few salient AND interactions in Ω_and = {S⊆ N :| I_and(S | x)| >τ} and a few salient OR interactions in Ω_or = {S⊆ N :| I_or(S | x)| >τ}, where τ represents a threshold. More precisely, there are only 𝒪(n^κ / τ) ≪ 2^n interactions with absolute effects greater than the threshold τ. These interactions are sparse, because κ empirically ranges in [0.9, 1.2]. • Universal matching property. Theorem <ref> shows that given a specific input sample 𝐱, there are a total of 2^n binary masks corresponding to 2^n different subsets S ⊆ N of input variables. Then, the DNN's outputs v(𝐱_S) on all masked samples can be accurately matched by the numerical effects of a small number of salient interactions. • Transferability property. Empirical studies <cit.> have demonstrated the transferability of interactions across different input samples in classification tasks. Specifically, it has been observed that there are a set of common salient interactions shared by different samples in the same category, i.e., these interactions are frequently extracted by the DNN from different input samples. • Discrimination property. Empirical studies <cit.> also have demonstrated the discrimination property of interactions in classification tasks. As mentioned above, a specific salient interaction I(S|𝐱) can be extracted from different samples. Then, this interaction usually pushes the classification score to the same category, i.e., the numerical effects of this interaction on different samples are usually consistently positive (or consistently negative) to the classification score. Let us be given an input sample 𝐱 with n input variables. Let us use a threshold τ to select a set of salient AND interactions Γ, subject to | I_and(S|𝐱)| > τ. If the DNN's outputs score v(𝐱_T) on all 2^n samples {𝐱_T | T ⊆ N} are relatively stable[ The relatively stable output of a DNN on input samples with different masking {𝐱_T | T ⊆ N} can be represented as the following three conditions in mathematics. (1) High-order interactions were not encoded by the DNN. (2) Let {𝐱_T:| T |=n-m} denotes the set of masking samples, where we mask m input variables. In this way, the average output score of the DNN 𝔼_T[v(𝐱_T)] over {𝐱_T:| T |=n-m} monotonically decreases as m increases. (3) The decreasing speed of 𝔼_T[v(𝐱_T)] is polynomial. Please see Appendix <ref> for details.], then the upper bound of the number of salient interactions |Γ| is 𝒪(n^κ/τ). A DNN’s outputs on all 2^n masked samples {x_T | T ⊆ N} can be universally matched by the numerical effects of a small number of salient interactions. v(𝐱_T) = ∑_S⊆ N I_and(S |𝐱) ·1(𝐱_T triggers AND relationship S)_v_and(𝐱_T) + ∑_S ⊆ N I_or(S|𝐱) ·1(𝐱_T triggers OR relationship S)_v_or(𝐱_T) + v(𝐱_∅) = ∑_∅ S ⊆ TI_and(S|𝐱) + ∑_S ⊆ N: S∩ T ∅ I_or(S|𝐱) + v(𝐱_∅) ≈∑_S ∈Ω_and:S ⊆ TI_and(S|𝐱) + ∑_S ∈Ω_or:S ∩ T ∅ I_or(S |𝐱) + v(𝐱_∅). The order of an interaction. 
We define the order of an interaction I(S |𝐱) as the number of input variables in S ⊆ N, i.e., order(S) = | S |, which reflects the complexity of the interaction. Specifically, the lower order means that the interaction contains fewer input variables, thereby being simpler. §.§ The two-phase phenomenon In this study, we analyze the dynamics of the generalization power of a DNN during the training process. Specifically, since <cit.> have found that high-order interactions have weaker generalization power than low-order interactions, we aim to analyze the change of the distribution of salient interactions over different orders during the training process, in order to analyze the change of the generalization power of the DNN. As Figure <ref> shows, the distribution of interactions over different orders is quantified by the strength of interactions over different orders. Specifically, due to the sparsity property of interactions, we only focus on salient interactions, which are considered primitive inference patterns encoded by the DNN, and ignore non-salient ones. Salient interactions are defined as Ω_and={S⊆ N:| I_and(S |𝐱)| >τ} and Ω_or = {S⊆ N :| I_or(S |𝐱)| >τ}, where τ represents a threshold. Then, given all salient AND-OR interactions of the k-th order, we quantify the strength of all positive salient interactions J_pos^(k) and the strength of all negative salient interactions J_neg^(k), as J_pos^(k) = ∑_S ∈Ω_and∪Ω_or:| S | = kmax(I(S |𝐱), 0) and J_neg^(k) = -∑_S ∈Ω_and∪Ω_or: | S | = k|min(I(S |𝐱), 0) |. In this way, Figure <ref> visualizes the strength of interactions over different orders encoded by DNNs trained at different epochs, which reflect the following two-phase dynamics during the training process. • Before the training process, as Figure <ref> shows, an initialized DNN encodes mostly medium-order interactions and rarely encodes high-order and low-order interactions, and the distribution of interactions over different orders looks like a fusiform. Considering that the DNN with initialized parameters encodes noisy patterns, we can prove the fusiform-like distribution of interactions over different orders. These interactions are extracted from output noises of the initialized DNN, thereby having little generalization power. • In the first phase of the training process, as Figure <ref> shows, the strength of the high-order and medium-order interactions encoded by the DNN gradually decreases, while the strength of the low-order interactions gradually increases. Eventually, the high-order and medium-order interactions are gradually eliminated, and the DNN encodes only low-order interactions. In comparison, the low-order interactions learned during the first phase usually have high generalization power. Therefore, the first phase can be considered mainly to eliminate noisy high-order and medium-order interactions, and gradually the DNN only encodes the simplest low-order interactions with high generalization power. • In the second phase of the training process, as Figure <ref> shows, the DNN encodes interactions of gradually increasing orders (complexity) during the training process. Gradually learning more and more complex interactions makes the over-fitting risk gradually increase. Thus, this is a slow process from under-fitting to over-fitting. §.§ Experiments • Is the two-stage phenomenon during the training process ubiquitously observed on most DNNs on different datasets for various tasks? 
We trained LeNet <cit.>, VGG-11/13/16 <cit.>, and AlexNet <cit.> on image datasets (the CIFAR-10 dataset <cit.>, the MNIST dataset <cit.>, the CUB200-2011 dataset <cit.> with background regions around the bird cropped away, and the Tiny-ImageNet dataset <cit.>); we trained the Bert-Medium/Tiny models <cit.> on a natural language dataset (the SST-2 dataset <cit.>); and we trained DGCNN <cit.> on a 3D point cloud dataset (the ShapeNet dataset <cit.>). Figure <ref> shows the distribution of salient interactions over different orders extracted from different DNNs at different training epochs. We found the two-stage phenomenon on all these DNNs. • Do high-order interactions have weaker generalization power than low-order interactions? If an interaction frequently extracted from the training samples can also be frequently observed in testing samples, then this interaction is considered generalizable to testing samples. Thus, we follow <cit.> and use the Jaccard similarity between the distribution of interactions extracted from training samples and the distribution of interactions extracted from testing samples to measure the generalization power of interactions. Specifically, given each input sample 𝐱 with n input variables, we vectorize all interactions of the k-th order extracted from 𝐱 as 𝐈^(k)(𝐱)=[I(S_1|𝐱), I(S_2|𝐱), ..., I(S_d|𝐱)]^T, where S_1, S_2, ..., S_d denote all d = C_n^k interactions of the k-th order. Then, we compute the average interaction vector of the k-th order over all samples in the category c as 𝐈^(k)_c=𝔼_𝐱∈ C[𝐈^(k)(𝐱)] to represent the distribution of k-th order interactions extracted from samples in the category c, where C denotes the set of samples in the category c. Then, we compute the Jaccard similarity between the average interaction vector of the k-th order over training samples 𝐈^(k)_c, train and the average interaction vector of the k-th order over testing samples 𝐈^(k)_c, test to evaluate the generalization power of interactions of the k-th order w.r.t. the classification of the category c, i.e., Sim(𝐈^(k)_c, train, 𝐈^(k)_c, test) = ‖min(𝐈̃^(k)_c, train, 𝐈̃^(k)_c, test)‖_1 / ‖max(𝐈̃^(k)_c, train, 𝐈̃^(k)_c, test)‖_1, where 𝐈̃^(k)_c, train = [max(𝐈^(k)_c, train, 0)^T, max(-𝐈^(k)_c, train, 0)^T]^T and 𝐈̃^(k)_c, test = [max(𝐈^(k)_c, test, 0)^T, max(-𝐈^(k)_c, test, 0)^T]^T project the two d-dimensional interaction vectors to two 2d-dimensional non-negative vectors, so that the Jaccard similarity is well defined. A large Jaccard similarity indicates a stronger generalization power. We conducted experiments to compute Sim(𝐈^(k)_c, train, 𝐈^(k)_c, test) for interactions of different orders. We tested LeNet trained on the MNIST dataset, VGG-11 trained on the CIFAR-10 dataset, VGG-13 trained on the CUB200-2011 dataset, and VGG-13 trained on the Tiny-ImageNet dataset. In order to reduce the computational cost, we only computed the mean of the Jaccard similarity 𝔼_c[Sim(𝐈^(k)_c, train, 𝐈^(k)_c, test)] over the first 10 categories. Figure <ref> shows that the Jaccard similarity of the interactions kept decreasing as the order of the interactions increased. This verifies that the generalization power of high-order interactions is usually weaker than that of low-order interactions. • Further evidence that high-order interactions have weak generalization power. We visualized the distribution of interactions over different orders encoded by a DNN that was trained with noisy labels.
Specifically, we trained VGG-11 on the MNIST dataset, VGG-13 on the MNIST dataset, and VGG-13 on the CIFAR-10 dataset. In particular, we assigned incorrect labels to a small fraction (0.17%) of the training samples. In this way, the original samples in the dataset could be considered simple samples, while the few samples with incorrect labels corresponded to hard samples that were supposed to cause over-fitting of the DNN. The details of how we assigned incorrect labels to training samples are provided in Appendix <ref>. Figure <ref> compares the complexity (order) of interactions extracted from original samples with that of interactions extracted from incorrectly labeled samples. We found that LeNet, VGG-11, and VGG-13 used much more complex interactions (interactions of much higher orders) for the classification of incorrectly labeled samples than for the classification of original samples. This further validated that high-order interactions usually have weaker generalization power than low-order interactions. Discussion: Interactions can explain the underlying metrics for the generalization power of a DNN. (1) The first metric is the order (complexity) of interactions. <cit.> have found that high-order interactions have poor generalization power. If a DNN encodes too many high-order interactions for a given input sample, it suggests that the DNN has poor generalization power on this sample. (2) For interactions of each order, the offset between positive and negative interactions is another metric that indicates the generalization power of a DNN. If the interactions extracted from a given input sample almost offset each other, then the DNN may have poor generalization power on this sample. These two metrics provide new insights into the generalization power of the DNN. Unlike traditional studies <cit.>, the order and the positive-negative offset of interactions are the first metrics to bridge the generalization power of an entire black-box model with its detailed inference patterns (interactions). Our research shows that a high confidence score does not always represent a faithful inference. Let us consider the following two cases. (1) An inference with a high classification confidence may not necessarily be reliable, because the DNN may use considerably many high-order interactions for inference, or may have a significant offset between positive and negative interactions. In that case, the DNN is probably over-fitted. (2) Not-so-confident classifications are not necessarily unreliable. Sometimes, the DNN may have learned very few interactions to classify an input sample, but all learned interactions are simple and generalizable. §.§ Proof and discussion Proof of the fusiform-like distribution of interactions over different orders when network parameters are randomly initialized. We consider that the initialized DNN mainly encodes noisy patterns, thereby generating random interactions. Thus, Theorem <ref> proves that the distribution of interactions encoded by the initialized DNN has a fusiform shape. Let us assume that all interactions encoded by a DNN with randomly initialized parameters represent noises and follow a Gaussian distribution, ∀ S⊆ N, I(S|𝐱) ∼𝒩(0, σ^2 ·𝐈), where 𝐈 represents the identity matrix. Then, let Ψ_pos^(k) = ∑_S ⊆ N:| S | = k max(I(S |𝐱), 0) and Ψ_neg^(k) = ∑_S ⊆ N:| S | = k min(I(S |𝐱), 0) denote the strength of all positive and the strength of all negative AND-OR interactions of the k-th order.
Then, the means of Ψ_pos^(k) and Ψ_neg^(k) encoded by an initialized DNN are 𝔼[Ψ_pos^(k)] = C_n^k ·√(σ / 2π) and 𝔼[Ψ_neg^(k)] = -C_n^k ·√(σ / 2π). Alignment between the two-phase phenomenon and the gap between testing and training losses during the training process. The gap between the training and testing losses is the most widely used metric for the over-fitting level of a model. To this end, we have found that the two-phase phenomenon and the gap between the testing and training losses are aligned temporally during the training process. We followed the experimental settings in Section <ref>. Figure <ref> shows the curves of the training loss, the testing loss, and the gap between the two losses. It also shows the distribution of interactions over different epochs. We also annotated the epoch when the gap between the testing and training losses began to increase. The annotated epoch was taken as the end of the first phase and the beginning of the second phase. All these DNNs exhibited the two-phase phenomenon, which was aligned with the gap between the testing and training losses. In this way, we can understand the above phenomenon as follows. Before the training process, the interactions encoded by the initialized DNN all represented random noises, and the distribution of the interactions over different orders looked like a fusiform. During the first phase, the DNN penalized interactions of medium and high orders, and gradually learned the simplest (low-order) interactions. In particular, most DNNs removed high-order interactions and only encoded interactions of the lowest orders just before the annotated epoch (i.e., the epoch at which the gap between the testing and training losses began to increase). Then, in the second phase, the DNN encoded interactions of gradually increasing orders. Because <cit.> have found that high-order interactions usually have weaker generalization power than low-order interactions, we can consider that in the second phase, the DNN first learned interactions with the strongest generalization power and then gradually shifted its attention to somewhat more complex yet less generalizable interactions. Some DNNs were finally over-fitted and encoded many interactions of medium and high orders. Identifying the starting point of learning over-fitted features. For most shallow models (e.g., the support vector machine), the generalization power of a model is usually considered an intrinsic property of the model over the entire testing dataset. However, for deep models, the generalization power of a DNN has become a more complex problem. For example, <cit.> have found that given different input samples, a DNN may exhibit entirely different generalization power. According to both the findings in <cit.> and the experiments in Figures <ref> and <ref>, given some input samples, the DNN uses low-order interactions for inference, and we can consider that the inference is conducted on relatively simple and generalizable features. Whereas, given other input samples, the DNN triggers interactions of medium and high orders, and we can consider that the DNN uses over-fitted features for inference. In this way, interactions offer a new perspective for analyzing the DNN's generalization power on each input sample, making under-fitting and over-fitting no longer two contradictory issues.
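As a side note, the annotated epoch used in the alignment analysis above can be computed mechanically from the two loss curves. The paper does not spell out the exact rule, so the following sketch is our own hedged reading; the smoothing window and the "keeps rising" test are assumptions.

```python
import numpy as np

def annotate_epoch(train_loss, test_loss, window=5):
    """Return the (approximate) first epoch at which the smoothed
    test-train loss gap starts to increase, taken as the end of the
    first phase; `window` is an assumption, not from the paper."""
    gap = np.asarray(test_loss, dtype=float) - np.asarray(train_loss, dtype=float)
    kernel = np.ones(window) / window
    smooth = np.convolve(gap, kernel, mode="valid")   # suppress epoch-level jitter
    rising = np.diff(smooth) > 0
    for t in range(max(len(rising) - window, 0)):
        if rising[t:t + window].all():                # gap rises for `window` epochs in a row
            return t + window // 2                    # roughly undo the smoothing offset
    return None
```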
Empirically, a DNN usually begins to learn over-fitted features shortly after entering the second phase of the training process. • Under-fitting can be understood as the lack of sufficient generalizable interactions (usually low-order interactions), while over-fitting refers to the encoding of non-generalizable interactions (usually high-order interactions). • The beginning of learning over-fitted features does not mean that the DNN stops learning meaningful features. Over-fitted features and normal features may be learned simultaneously in the training process, i.e., low-order interactions and medium/high-order interactions are optimized simultaneously in the second phase. § CONCLUSION In this paper, we have used interactions to explain the primitive inference patterns used by a DNN, and we have discovered the two-phase dynamics of a DNN learning interactions of different complexities (orders). Specifically, we have discovered, and later proven, that before the training process, a DNN with randomly initialized parameters mainly encodes interactions of medium orders. Then, the training process has two phases. The first phase mainly penalizes interactions of medium and high orders, and the second phase mainly learns interactions of gradually increasing orders. More interestingly, the two-phase dynamics is temporally aligned with the change of the gap between the testing and training losses during the learning process. In other words, the two-phase dynamics of interactions can be considered a fine-grained mechanism for the change of the generalization power of a DNN during the training process: it illustrates how a DNN learns detailed generalizable and over-fitted interactions in different epochs, and how a DNN's feature representation changes from under-fitting to over-fitting. Various ablation studies have shown that high-order interactions usually have weaker generalization power than low-order interactions. Furthermore, we have conducted experiments to extract interactions from DNNs with various architectures trained for different tasks. The two-phase dynamics has been successfully verified on all these DNNs. § EXPERIMENTAL DETAIL §.§ Training settings. In this paper, we trained various DNNs for different tasks. Specifically, for the image classification task, we trained LeNet and VGG-11/13 on the MNIST dataset with a learning rate of 0.01. We trained LeNet and VGG-11/13 on the CIFAR-10 dataset with a learning rate of 0.01. We trained AlexNet and VGG-13/16 on the CUB200-2011 dataset with a learning rate of 0.01. We trained AlexNet and VGG-13/16 on the Tiny-ImageNet dataset with a learning rate of 0.001. For natural language processing tasks, we trained the Bert-Tiny and Bert-Medium models on the SST-2 dataset with a learning rate of 0.01. For point-cloud classification tasks, we trained DGCNN on the ShapeNet dataset with a learning rate of 0.01. All DNNs were trained using the SGD optimizer for 512 epochs. §.§ Details about how to calculate interactions for different DNNs. • For image data in different image datasets, since the computational cost of calculating interactions on all input variables was intolerable, we applied a sampling-based approximation method to calculate I(S |𝐱). Specifically, we took the output after the second ReLU layer for VGG-11, VGG-13, and VGG-16, and the output after the first ReLU layer for the other DNNs, as the middle features of the DNNs.
For the CIFAR-10 dataset, the CUB200-2011 dataset, and the Tiny-ImageNet dataset, we uniformly split each middle feature into 8 × 8 patches. Furthermore, we randomly sampled 10 patches from the central 6 × 6 region to calculate interactions (i.e., we did not sample patches on the edges of an image), and considered these patches as input variables for each middle feature. Similarly, for the MNIST dataset, we uniformly split each middle feature into 7 × 7 patches and randomly sampled 10 patches from the central 5 × 5 region. We used 0 as the baseline value to mask the variables in N \ T. • For natural language data in the SST-2 dataset, we considered the input tokens as input variables for each input sentence, and we randomly sampled 10 tokens to calculate interactions. We used the “mask” token with token id = 103 to mask the tokens in N \ T. • For 3D point cloud data in the ShapeNet dataset, we clustered all the points into 30 clusters and considered these clusters as input variables for each 3D point cloud. We then randomly selected 10 clusters as variables to calculate interactions. We used the average value of each cluster to mask the corresponding cluster in N \ T. § DETAIL OF THREE CONDITIONS FOR THE RELATIVELY STABLE OUTPUT OF A DNN <cit.> have formulated the sparsity property of AND interactions, which relies on the following three mathematical conditions. Condition 1. Interactions beyond the M-th order are not encoded by the DNN: ∀ S∈{S⊆ N : | S|≥ M+1}, I_and(S|𝐱)=0. Condition 1 suggests that interactions beyond the M-th order are not encoded by the DNN. This is because such interactions typically denote very complex and over-fitted patterns, which are unnecessary and unlikely for the DNN to learn in practical applications. Condition 2. Let us consider the average network output over all masked samples 𝐱_S with |S|=k unmasked input variables. This average network output monotonically increases as k increases: ∀ k' ≤ k, we have u̅^(k')≤u̅^(k), where u̅^(k) ≔ 𝔼_|S|=k[v(𝐱_S)-v(𝐱_∅)]. Condition 2 implies that a well-trained DNN is likely to have higher classification confidence for input samples that are less masked. Condition 3. Given the average network output u̅^(k) of samples with k unmasked input variables, there is a polynomial lower bound for the average network output of samples with k' (k'≤ k) unmasked input variables: ∀ k' ≤ k, u̅^(k')≥ (k'/k)^p u̅^(k), where p>0 is a positive constant. Condition 3 implies that the classification confidence of the DNN does not significantly degrade on masked input samples. The classification/detection of masked/occluded samples is common in real scenarios. In this way, a well-trained DNN usually learns to classify a masked input sample based on local information (which can be extracted from unmasked parts of the input) and thus should not yield a significantly low confidence score on masked samples. § STRATEGIES OF ADDING NOISY LABELS We trained VGG-11 and LeNet on the MNIST dataset, then used the well-trained VGG-11 and LeNet to find the 100 samples with the lowest classification confidence for each category in the training set, and set their corresponding labels to the second-best category. Similarly, we trained VGG-11 and VGG-13 on the CIFAR-10 dataset, then used the well-trained VGG-11 and VGG-13 to find the 100 samples with the lowest classification confidence for each category in the training set, and set their corresponding labels to the second-best category.
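A minimal sketch of the relabeling strategy just described is given below; it assumes `probs` holds the reference model's softmax outputs on the training set, and the handling of the "second-best category" is our interpretation, not taken from the paper.

```python
import numpy as np

def add_noisy_labels(probs, labels, per_class=100):
    """For each category, relabel the `per_class` training samples with the
    lowest confidence on their ground-truth class to the second-best class.
    probs: (num_samples, num_classes) array; labels: (num_samples,) int array."""
    new_labels = labels.copy()
    conf = probs[np.arange(len(labels)), labels]      # confidence on the true class
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        worst = idx[np.argsort(conf[idx])[:per_class]]
        for i in worst:
            ranked = np.argsort(probs[i])[::-1]       # classes sorted by confidence
            # "second-best" = runner-up to the ground-truth class (our reading)
            new_labels[i] = ranked[1] if ranked[0] == labels[i] else ranked[0]
    return new_labels
```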
§ PROOF DETAILS §.§ Proof of Theorem 3 in the main paper Theorem 3. Let us assume that all interactions encoded by a DNN with randomly initialized parameters represent noises and follow a Gaussian distribution, ∀ S⊆ N, I(S|𝐱) ∼𝒩(0, σ^2 ·𝐈), where 𝐈 represents the identity matrix. Then, let Ψ_pos^(k) = ∑_S ⊆ N:| S | = k max(I(S |𝐱), 0) and Ψ_neg^(k) = ∑_S ⊆ N:| S | = k min(I(S |𝐱), 0) denote the strength of all positive and the strength of all negative AND-OR interactions of the k-th order. Then, the means of Ψ_pos^(k) and Ψ_neg^(k) encoded by an initialized DNN are 𝔼[Ψ_pos^(k)] = C_n^k ·√(σ / 2π) and 𝔼[Ψ_neg^(k)] = -C_n^k ·√(σ / 2π). Proof. Let d = C_n^k denote the number of interactions of the k-th order. Under the Gaussian assumption above, the mean absolute deviation (MAD) of each interaction is 𝔼[| I(S|𝐱) |] = √(2 σ / π). In this way, 𝔼[Ψ_pos^(k)] = 𝔼[ ∑_S ⊆ N:| S | = k max(I(S |𝐱), 0) ] = C_n^k · P(I(S|𝐱) > 0) ·𝔼[| I(S|𝐱) |] = C_n^k ·(1/2)·√(2 σ / π) = C_n^k ·√(σ / 2π), and 𝔼[Ψ_neg^(k)] = 𝔼[ ∑_S ⊆ N:| S | = k min(I(S |𝐱), 0) ] = C_n^k · P(I(S|𝐱) < 0) · ( -𝔼[| I(S|𝐱) |] ) = -C_n^k ·(1/2)·√(2 σ / π) = -C_n^k ·√(σ / 2π). Therefore, Theorem 3 holds. §.§ Proof that the AND component and the OR component only contain AND interactions and OR interactions, respectively. The DNN's output v(𝐱_T) can be decomposed into two components, v(𝐱_T) = v_and(𝐱_T) + v_or(𝐱_T) + v(∅). It is easy to prove that the component v_and(𝐱_T) only contains AND interactions, v_and(𝐱_T) = ∑_∅≠ S^'⊆ T I_and(S^'|𝐱_T), and that the component v_or(𝐱_T) only contains OR interactions, v_or(𝐱_T) = ∑_S⊆ N:S ∩ T ≠∅ I_or(S|𝐱_T). According to the definition of the AND interaction, we have, ∀ T ⊆ N, ∑_S^'⊆ T I_and(S^'|𝐱) = ∑_S^'⊆ T ∑_L ⊆ S^' (-1)^| S^'| - | L | ( v_and(𝐱_L) - v_and(𝐱_∅) ) = ∑_L ⊆ T ∑_S^'⊆ T : S^'⊇ L (-1)^| S^'| - | L| ( v_and(𝐱_L) - v_and(𝐱_∅) ) = ∑_L ⊆ T ∑_t=| L|^| T| ∑_S^'⊆ T: S^'⊇ L, | S^'| = t (-1)^t - | L| ( v_and(𝐱_L) - v_and(𝐱_∅) ) = ∑_L ⊆ T ( v_and(𝐱_L) - v_and(𝐱_∅) ) ∑_j=0^| T| - | L| C_| T| - | L|^j (-1)^j = v_and(𝐱_T) - v_and(𝐱_∅), since the inner alternating sum vanishes unless L = T. Therefore, we have v_and(𝐱_T) = ∑_S^'⊆ T I_and(S^'|𝐱). According to the definition of the OR interaction, we have, ∀ T ⊆ N, ∑_S⊆ N:S ∩ T ≠∅ I_or(S|𝐱_T) = ∑_S⊆ N:S ∩ T ≠∅ [ - ∑_L ⊆ S (-1)^| S | - | L | v_or(𝐱_N ∖ L) ] = - ∑_L ⊆ N ∑_S: S ∩ T ≠∅, S ⊇ L (-1)^| S | - | L | v_or(𝐱_N ∖ L) = - [ ∑_| S' | = 1^| T | C_| T |^| S' | (-1)^| S' | ] · v_or(𝐱_T) (the term with L = N ∖ T) - v_or(𝐱_∅) (the term with L = N) - ∑_L ∩ T ≠∅, L ≠ N [ ∑_S' ⊆ N ∖ T ∖ L ( ∑_| S”| = 0^| T | - | T ∩ L | C_| T | - | T ∩ L |^| S”| (-1)^| S' | + | S”| ) ] · v_or(𝐱_N ∖ L) - ∑_L ∩ T = ∅, L ≠ N ∖ T [ ∑_S' ⊆ N ∖ T ∖ L ( ∑_| S”| = 0^| T | C_| T |^| S”| (-1)^| S' | + | S”| ) ] · v_or(𝐱_N ∖ L) = - (-1) · v_or(𝐱_T) - v_or(𝐱_∅) - ∑_L ∩ T ≠∅, L ≠ N [ ∑_S' ⊆ N ∖ T ∖ L 0 ] · v_or(𝐱_N ∖ L) - ∑_L ∩ T = ∅, L ≠ N ∖ T [ ∑_S' ⊆ N ∖ T ∖ L 0 ] · v_or(𝐱_N ∖ L) = v_or(𝐱_T) - v_or(𝐱_∅). Therefore, we have v_or(𝐱_T) = ∑_S⊆ N:S ∩ T ≠∅ I_or(S|𝐱_T). Then, the component v_and(𝐱_T) only contains AND interactions, v_and(𝐱_T) = ∑_S^'⊆ T I_and(S^'|𝐱), and the component v_or(𝐱_T) only contains OR interactions, v_or(𝐱_T) = ∑_S⊆ N:S ∩ T ≠∅ I_or(S|𝐱_T).
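As a quick Monte-Carlo sanity check of Theorem 3 (our own illustration), one can sample i.i.d. Gaussian "interactions" and compare the empirical mean of Ψ_pos^(k) against C_n^k·√(σ/2π). Note that, as in the proof above, σ plays the role of the variance parameter, so that the MAD equals √(2σ/π).

```python
import numpy as np
from math import comb, pi, sqrt

def check_theorem3(n=10, k=3, sigma=0.5, trials=20000, seed=0):
    """Empirical E[Psi_pos^(k)] vs. the closed form C(n,k)*sqrt(sigma/(2*pi))."""
    rng = np.random.default_rng(seed)
    d = comb(n, k)                                   # number of k-th order interactions
    I = rng.normal(0.0, sqrt(sigma), size=(trials, d))
    empirical = np.maximum(I, 0.0).sum(axis=1).mean()
    closed_form = d * sqrt(sigma / (2 * pi))
    return empirical, closed_form

print(check_theorem3())   # the two values should agree to a few decimal places
```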
http://arxiv.org/abs/2405.09349v1
20240515135812
Optimal constants of smoothing estimates for the 3D Dirac equation
[ "Makoto Ikoma", "Soichiro Suzuki" ]
math.AP
[ "math.AP", "math.CA", "33C55, 35B65, 35Q41, 42B10" ]
Optimal constants of smoothing estimates for the 3D Dirac equation. Makoto Ikoma, Graduate School of Mathematics, Nagoya University, Furocho, Chikusaku, Nagoya, Aichi, 464-8602, Japan (ikma.m18005d@gmail.com). Soichiro Suzuki, Department of Mathematics, Chuo University, 1-13-27, Kasuga, Bunkyo-ku, Tokyo, 112-8551, Japan (soichiro.suzuki.m18020a@gmail.com). The second author was supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number JP23KJ1939. 2020 Mathematics Subject Classification: 33C55, 35B65, 35Q41, 42B10. Abstract. Recently, Ikoma (2022) considered optimal constants and extremisers for the 2-dimensional Dirac equation using the spherical harmonics decomposition. Though the argument is valid in any dimension d ≥ 2, the case d ≥ 3 has remained open, since it leads to an overly complicated calculation: determining all eigenvalues and eigenvectors of infinite-dimensional matrices. In this paper, we give optimal constants and extremisers of smoothing estimates for the 3-dimensional Dirac equation. In order to prove this, we construct a certain orthonormal basis of spherical harmonics. With respect to this basis, the infinite-dimensional matrices become block diagonal, so that eigenvalues and eigenvectors can be found easily. § INTRODUCTION The Kato–Yajima smoothing estimates are among the fundamental results in the study of dispersive equations such as Schrödinger equations and Dirac equations; they were first observed by <cit.> and have been studied by numerous researchers. At first, we consider the following Schrödinger-type equation: i ∂_t u(x, t) = ϕ(D) u(x, t) , (x, t) ∈^d × , u(x, 0) = f(x) , x ∈^d , where ϕ(D) denotes the Fourier multiplier operator whose symbol is ϕ(ξ), that is, ℱ(ϕ(D) f)(ξ) = ϕ(ξ) f̂(ξ) . In particular, (<ref>) becomes the free Schrödinger equation if ϕ(r) = r^2 and the relativistic Schrödinger equation if ϕ(r) = √(r^2 + m^2), respectively. The (global) smoothing estimate for the Schrödinger-type equation is expressed as ∫_t ∈∫_x ∈^d |ψ(D) e^itϕ(D)f(x)|^2 w(x) dx dt ≤ C ‖f‖^2_L^2(^d) . Here the functions w and ψ are called the spatial weight and the smoothing function, respectively, and ϕ is called the dispersion relation. We write C_d(w,ψ,ϕ) for the optimal constant of the inequality (<ref>); in other words, C_d(w,ψ,ϕ) ≔ sup_f∈ L^2(^d), f≢0 ∫_t ∈∫_x ∈^d |ψ(D)e^itϕ(D)f(x)|^2 w(x) dx dt / ‖f‖^2_L^2(^d) . For the Schrödinger equation, it is classically known that the smoothing estimate (<ref>) holds in the following cases: d ≥ 2, s > 1 , ( w(r), ψ(r), ϕ(r) ) = ( (1+r^2)^-s/2, r^1/2, r^2 ) , d ≥ 2, 1 < s < d, ( w(r), ψ(r), ϕ(r) ) = ( r^-s, r^(2-s)/2, r^2 ) , d ≥ 3, ( w(r), ψ(r), ϕ(r) ) = ( (1+r^2)^-1, (1+r^2)^1/4, r^2 ) . The cases (<ref>) and (<ref>) were originally established by <cit.>, and (<ref>) is due to <cit.>. <cit.> obtained optimal constants in the cases (<ref>) with d ≥ 3 and (<ref>) with d ≥ 3 and s = 2.
Recently, <cit.> and <cit.> extended 's results, and also obtained optimal constants in the case (<ref>). [<cit.>] In the case (<ref>) with d ≥ 3 and s = 2, we have C_d(w,ψ,ϕ) = π / 2 . [<cit.>] In the case (<ref>) with d ≥ 3, we have C_d(w,ψ,ϕ) = √(π)Γ( (s-1)/2 ) / 2 Γ(s) . Note that letting s = 2 recovers Theorem <ref>. [<cit.>] In the case (<ref>) with d ≥ 3 and s = 2, we have C_d(w,ψ,ϕ) = π / (d-2) . [<cit.>] Let d ≥ 2, 1 < s < d, a > 0, and suppose that ( w, ψ, ϕ ) satisfy w(x) = r^-s, ψ(r)^2 = a r^1-sϕ'(r) . Then we have C_d(w,ψ,ϕ) = 2^2-s a πΓ(s-1) Γ((d-s)/2) / ( Γ(s/2) )^2 Γ( (d+s)/2 - 1 ) . Particularly, in the case (<ref>), we have C_d(w,ψ,ϕ) = 2^1-sπΓ(s-1) Γ((d-s)/2) / ( Γ(s/2) )^2 Γ( (d+s)/2 - 1 ) . Note that letting d ≥ 3, s = 2 recovers Theorem <ref>. [<cit.>] In the case (<ref>) with d = 3, 5, we have C_3(w,ψ,ϕ) = π , C_5(w,ψ,ϕ) = π / 2 . These results can be proved by using the following abstract result for the Schrödinger-type equation established by <cit.>: [<cit.>] Let d ≥ 2 and suppose some reasonable assumptions ( w, ψ, ϕ ) (see Assumption <ref> for details). Then we have C_d(w,ψ,ϕ) = 1/(2π)^d-1sup_k∈sup_r>0λ_k(r), where w( · )(ξ) = F_w(12ξ^2) , λ_k(r)^d-2r^d-1ψ(r)^2/ϕ^'(r)∫_-1^1F_w(r^2(1-t))p_d,k(t)(1-t^2)^d-3/2 dt. Here p_d,k is the Legendre polynomial of degree k in d dimensions, which may be defined in a number of ways, for example, via the Rodrigues formula, (1-t^2)^d-3/2p_d,k(t) = (-1)^kΓ(d-1/2)/2^kΓ(k+d-1/2)d^k/dt^k(1-t^2)^k+d-3/2. Since we are interested in explicit constants, here we clarify that the Fourier transform in this paper is defined by f̂(ξ) ∫_x ∈^df(x)e^-ix·ξ dx . In this case, the Plancherel theorem states that f̂_L^2(^d)^2 = (2 π)^d f_L^2(^d)^2 . Now we are going to discuss the free Dirac equation. Let d ≥ 1 and write N 2^⌊(d+1)/2⌋. The d-dimensional free Dirac equation with mass m ≥ 0 is given by i∂_t u(x,t) = H_m u(x, t), (x, t) ∈^d × , u(x,0) = f(x) , x ∈^d . Here u and f are ^N-valued, and the Dirac operator H_m is defined by H_m α· D + mβ = ∑_j=1^dα_jD_j + mβ , where α_1, α_2, …, α_d, α_d+1=β are N× N Hermitian matrices satisfying the anti-commutation relation α_jα_k + α_kα_j = 2δ_jkI_N. Note that we have H_m^2 = ( - Δ + m^2 ) _N. Hereinafter, we always suppose that ϕ(r) = √(r^2 + m^2). For the Dirac equation, the smoothing estimate is given by ∫_t ∈∫_x ∈^dψ(D)e^-itH_mf(x)^2 w(x) dx dt ≤ Cf^2_L^2(^d,^N) . Recently, <cit.> studied the optimal constant of inequality (<ref>) below: C_d(w,ψ,m) sup_f∈ L^2(^d,^N) f≢0∫_t ∈∫_x ∈^dψ(D)e^-itH_mf(x)^2 w(x) dx dt /f^2_L^2(^d,^N) , and <cit.> did the same for radial initial data, C_d(w,ψ,m) sup_f∈ L^2(^d,^N) f≢0, f :radial∫_t ∈∫_x ∈^dψ(D)e^-itH_mf(x)^2 w(x) dx dt /f^2_L^2(^d,^N) . They obtained the following results, which are analogous to Theorem <ref>. [<cit.>] Let d = 2. Then we have C_2(w,ψ,m) = 1/2πsup_ k ∈sup_r>0_k(r), where _k(r) 1/2( λ_k(r) + λ_k+1(r) + m/√(r^2+m^2)λ_k(r) - λ_k+1(r) ) . [<cit.>] Let d ≥ 2. Then we have C_d(w,ψ,m) ≤ C_d(w,ψ,ϕ) = 1/(2π)^d-1sup_ k ∈sup_r>0λ_k(r) . In other words, the optimal constant for the Dirac equation is less than or equal to that for the relativistic Schrödinger equation. [<cit.>] Let d ≥ 2. Then we have C_d, (w,ψ,m) = 1/(2π)^d-1sup_r>0_(r), where _(r)1/2( λ_0(r) + λ_1(r) + m^2/r^2 + m^2( λ_0(r) - λ_1(r) ) ). As a consequence, we have C_d(w,ψ,m) ≥1/(2π)^d-1sup_r>0_(r) . In this paper, we show that Theorem <ref> also holds in the physically most important case d=3: Let d = 3. 
Then we have C_3(w,ψ,m) = 1/(2π)^2sup_ k ∈sup_r>0_k(r), where _k(r) 1/2( λ_k(r) + λ_k+1(r) + m/√(r^2+m^2)λ_k(r) - λ_k+1(r) ) . As a consequence, we obtain the equivalence of smoothing estimates for the Dirac equation and the relativistic Schrödinger equation: Let d = 2, 3. Then we have 1/2 C_d(w,ψ,ϕ) ≤C_d(w,ψ,m) ≤ C_d(w,ψ,ϕ). We note that the smoothing estimate for the relativistic Schrödinger equation holds in the following cases, for example: d ≥1, m = 0, s > 1, ( w(r), ψ(r), ϕ(r) ) = ( (1+r^2)^-s/2, 1, r ) , d = 2, m > 0, s > 1/2 , ( w(r), ψ(r), ϕ(r) ) = ( (1+r^2)^-s/2, 1, √(r^2 + m^2) ) , d ≥3, m > 0, ( w(r), ψ(r), ϕ(r) ) = ( (1+r^2)^-1, 1, √(r^2 + m^2) ) , d ≥2, m ≥0, 1 < s < d, ( w(r), ψ(r), ϕ(r) ) = ( r^-s, r^1 - s/2 (r^2+m^2)^-1/4, √(r^2 + m^2) ) . See <cit.> for (<ref>), (<ref>), (<ref>). The case (<ref>) is immediate from Theorem <ref>. We give the explicit value of C_d(w,ψ,m) in the case (<ref>): In the case (<ref>), we have sup_ k ∈sup_r>0_k(r) = (c_0 + c_1) / 2 , m = 0 , c_0 , m > 0 , where c_k 2^1-s (2π)^dΓ(s-1) Γ((d-s)/2 + k) / ( Γ(s/2) )^2 Γ( (d + s)/2 + k - 1 ) . For example, if d ≥ 3 and s = 2, then c_k = (2π)^d/d + 2k - 2 . As a consequence, in the case (<ref>) with d = 2, 3, we have (2π)^d-1C_d(w,ψ,m) = (c_0 + c_1) / 2 , m = 0 , c_0 , m > 0 . In particular, since (2π)^d-1C_d(w,ψ,ϕ) = c_0 (Theorem <ref>) and 0 < c_1 < c_0, we have C_d(w,ψ,m) < C_d(w,ψ,ϕ) if m = 0 and C_d(w,ψ,m) = C_d(w,ψ,ϕ) if m > 0. Furthermore, in the case (<ref>) with d ≥ 4 and m > 0, we also have (2π)^d-1C_d(w,ψ,m) = c_0 . In order to prove our results, we discuss as follows. In Section <ref>, we introduce some notation and basic facts as preliminaries. In particular, Lemma <ref>, which follows from the spherical harmonics decomposition (Theorem <ref>) and the Funk–Hecke theorem (Theorem <ref>), plays an important role. Though Lemma <ref> holds for any orthonormal bases of spherical harmonics, we will choose a certain basis to avoid a tedious calculation. To illustrate our idea, we give a simplified proof of Theorem <ref> (the optimal constant in the case d = 2) in Section <ref>. We will see that expressing the spherical harmonics decomposition of f ∈ L^2(^2, ^2) as f(ξ) = r^-1/2∑_k=-∞^∞1/√(2π)[ e^i k θ 0; 0 e^i (k+1) θ ]k(r) , ξ = (r cosθ, r sinθ) instead of as the usual form (which is used by <cit.>), f(ξ) = r^-1/2∑_k=-∞^∞1/√(2π) e^i k θk(r) , ξ = (r cosθ, r sinθ) , simplifies the proof significantly. The main advantage of using k(θ) 1/√(2π)[ e^i k θ 0; 0 e^i (k+1) θ ] is the following identity: ( cosθ[ 0 1; 1 0 ] + sinθ[ 0 -i; i 0 ]) k(θ) = k(θ) [ 0 1; 1 0 ] . In Section <ref>, we first give a certain expression of the spherical harmonic decomposition of f ∈ L^2(^3, ^2), f(ξ) = r^-1∑_k = 0^∞kn(θ, φ) kn(r) , ξ = (r sinθcosφ , r sinθsinφ, cosθ) , where kn(θ, φ) satisfies ( sinθcosφ[ 0 1; 1 0 ] + sinθsinφ[ 0 -i; i 0 ] + cosθ[ 1 0; 0 -1 ]) kn(θ, φ) = kn(θ, φ) [ 0 1; 1 0 ] . Once it is established, the proof of our main result Theorem <ref> is similar to that of Theorem <ref> given in Section <ref>. Finally, we prove Theorem <ref> in Section <ref>. The proof is similar to that of Theorem <ref>. § PRELIMINARIES Thoroughout the paper, we assume the following: The spatial weight w, the smoothing function ψ, and the dispersion relation ϕ satisfy the following conditions. * The spatial weight w( · ) is positive, radial, and its Fourier transform (in the sense of tempered distributions) coincides some locally bounded function on ^d ∖{0}, in which case we write w( · )(ξ) = F_w(12ξ^2) . 
*(<ref>) Furthermore, F_w is sufficiently nice to guarantee the continuity of λ_k (0, ∞) → for each k∈. Here, recall that λ_k(r)^d-2r^d-1ψ(r)^2/ϕ^'(r)∫_-1^1F_w(r^2(1-t))p_d,k(t)(1-t^2)^d-3/2 dt. <ref> * The smoothing function ψ (0, ∞) → is infinitely differentiable and non-negative. * * For the Schödinger-type equation, the dispersion relation ϕ (0, ∞) → is continuously differentiable and injective. * For the Dirac equation, the dispersion relation ϕ (0, ∞) → is given by ϕ(r) √(r^2 + m^2). Note that it is continuously differentiable and injective. Under the assumption, we define the linear operator L^2(^d, ^N) → L^2(^d+1, ^N) by ( f)(x,t) w(x)^1/2∫_ξ∈^d e^ix ·ξψ(ξ)e^-itA_ξf(ξ) dξ , where A_ξα·ξ + mβ = ∑_j=1^dα_j ξ_j + mβ . Notice that the smoothing estimate (<ref>) is equivalent to f̂_L^2(^d+1, ^N)^2 ≤ C f_L^2(^d)^2 = (2 π)^d C f̂_L^2(^d, ^N)^2 , and so that _L^2(^d, ^N) → L^2(^d+1, ^N)^2 = (2 π)^d C_d(w,ψ,m). For simplicity, hereinafter we write _L^2(^d, ^N) → L^2(^d+1, ^N). We note that is independent of the choice of α_1, α_2, …, α_d, α_d+1 = β. To see this, let {α_j^(1)}_j=1^d+1 and {α_j^(2)}_j=1^d+1 be N× N Hermitian matrices satisfying the anti-commutation relation. Then the so-called fundamental theorem on Dirac gamma matrices states that there exists a unitary matrix U satisfying α_j^(2) = U^-1α_j^(1) U, which implies A_ξ^(2) = ∑_j=1^dα_j^(2)ξ_j + mβ^(2) = U^-1( ∑_j=1^dα_j^(1)ξ_j + mβ^(1)) U = U^-1 A_ξ^(1) U . Therefore, we have ^(2) f(x,t) = w(x)^1/2∫_^d e^ix ·ξψ(ξ)e^-itA_ξ^(2)f(ξ) dξ = U^-1 w(x)^1/2∫_^d e^ix ·ξψ(ξ)e^-itA_ξ^(1) Uf(ξ) dξ = U^-1^(1) Uf(x,t) and so that ^(2) f_L^2(^d+1, ^N) = ^(1) Uf_L^2(^d+1, ^N), which shows the independence. In order to compute f_L^2(^d+1, ^N), we will use the spherical harmonics decomposition (Theorem <ref>) and the Funk–Hecke theorem (Theorem <ref>). For each k ∈, let _k(^d) and {kn}_1 ≤ n ≤ d_k be the space of homogeneous harmonic polynomials of degree k on ^d and its orthonormal basis, respectively. Here, the inner product of P, Q ∈_k(^d) is defined by PQ__k(^d)P^d-1Q^d-1_L^2(^d-1) = ∫_θ∈^d-1 P(θ) Q(θ) dσ(θ) , as usual. The spherical harmonics decomposition and the Funk–Hecke theorem are as follows: For any f ∈ L^2(^d), there uniquely exists {kn}_ k ∈, 1 ≤ n ≤ d_k ⊂ L^2() satisfying f(ξ) = ξ^-(d-1)/2∑_k=0^∞∑_n=1^d_kkn(ξ / ξ) kn(ξ) , f_L^2(^d)^2 = ∑_k=0^∞∑_n=1^d_kkn_L^2()^2 . Conversely, for any {kn}_ k ∈, 1 ≤ n ≤ d_k ⊂ L^2() satisfying ∑_k=0^∞∑_n=1^d_kkn_L^2()^2 < ∞ , the function f given by (<ref>) is in L^2(^d) and (<ref>) holds. Let d ≥ 2, k ∈, F ∈ L^1 ( [-1, 1], (1-t^2)^(d-3)/2 dt ) and write μ_k[F] ^d-2∫_-1^1 F(t) p_d, k(t) (1-t^2)^d-3/2 dt. Here recall that p_d, k denotes the Legendre polynomial of degree k in d dimensions; see (<ref>). Then, for any P ∈_k(^d) and ω∈^d-1, we have ∫_θ∈^d-1 F(θ·ω) P(θ) dσ(θ) = μ_k[F] P(ω) . Note that the function λ_k defined in (<ref>) satisfies λ_k(r) = r^d-1ψ(r)^2/ϕ^'(r)μ_k[F_w( r^2 (1 - ) ) ] , in other words, it satisfies r^d-1ψ(r)^2/ϕ^'(r)∫_θ∈^d-1 F_w( r^2(1 - θ·ω) ) P(θ) dσ(θ) = λ_k(r) P(ω) for each k ∈, P ∈_k(^d) and ω∈^d-1. Using these facts, we can obtain the following important lemma. Suppose that Assumption <ref> holds. Let f ∈ L^2(^d, ^N) and define ∈ L^2(^d, ^N) by (ξ) 1/2( f(ξ) ±1/ϕ(ξ)( m β f(ξ) + ∑_j=1^dα_j ξ_j f(ξ) ) ) . If (ξ) = ξ^-(d-1)/2∑_k=0^∞∑_n=1^d_kkn(ξ / ξ) kn(ξ) , then we have f_L^2(^d+1, ^N)^2 = 2 π∑_k=0^∞∑_n=1^d_k∫_0^∞λ_k (r) ( kn(r)^2 + kn(r)^2 ) dr . We omit the proof of Lemma <ref>. See <cit.>. 
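Before specializing to d = 2 and d = 3, it may help to see the Funk–Hecke theorem in action numerically. The sketch below is our own illustration (the quadrature grid and test function are arbitrary choices): it checks the identity in d = 3, where p_{3,k} is the classical Legendre polynomial P_k, the weight (1-t^2)^{(d-3)/2} equals 1, |S^{d-2}| = |S^1| = 2π, and the zonal harmonic P_k(θ_3) is used as the degree-k spherical harmonic.

```python
import numpy as np
from scipy.special import eval_legendre

def funk_hecke_check(F, k=2, omega=(0.3, 0.4, np.sqrt(0.75)), n=400):
    """Compare lhs = ∫_{S^2} F(θ·ω) P_k(θ_3) dσ(θ) with
    rhs = μ_k[F]·P_k(ω_3), where μ_k[F] = 2π ∫_{-1}^{1} F(t) P_k(t) dt (d = 3)."""
    th = (np.arange(n) + 0.5) * np.pi / n              # midpoint grid in θ
    ph = (np.arange(2 * n) + 0.5) * np.pi / n          # midpoint grid in φ, up to 2π
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    x, y, z = np.sin(TH) * np.cos(PH), np.sin(TH) * np.sin(PH), np.cos(TH)
    dA = np.sin(TH) * (np.pi / n) ** 2                 # surface measure weights
    w = np.asarray(omega, dtype=float)                 # a unit vector
    lhs = np.sum(F(x * w[0] + y * w[1] + z * w[2]) * eval_legendre(k, z) * dA)
    t = -1.0 + (np.arange(4000) + 0.5) * (2.0 / 4000)  # midpoint grid in t
    mu = 2 * np.pi * np.sum(F(t) * eval_legendre(k, t)) * (2.0 / 4000)
    return lhs, mu * eval_legendre(k, w[2])

print(funk_hecke_check(np.exp))   # the two values should nearly coincide
```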
§ IN THE CASE D=2 In this section, we use the usual Pauli matrices for {α_j }_j=1^3: α_1 = σ_1 = [ 0 1; 1 0 ], α_2 = σ_2 = [ 0 -i; i 0 ], β = σ_3 = [ 1 0; 0 -1 ]. At first we prove the following lemma: Let k(θ) 1/√(2π)[ e^i k θ 0; 0 e^i (k+1) θ ] . Then the following hold: For any f ∈ L^2(^2, ^2), there uniquely exists {k}⊂ L^2(_>0, ^2) satisfying f(ξ) = r^-1/2∑_k=-∞^∞k(θ) k(r) , ξ = (r cosθ, r sinθ) , <ref>.i f_L^2(^2, ^2)^2 = ∑_k=-∞^∞k_L^2(, ^2)^2 . <ref>.ii Conversely, for any {k}⊂ L^2(_>0, ^2) satisfying ∑_k=-∞^∞k_L^2(, ^2)^2 < ∞ , the function f given by (<ref>) is in L^2(^2, ^2) and (<ref>) holds. For simplicity, we define λ_k(r) for k ≤ -1 by λ_k(r) λ_k(r). Let f ∈ L^2(^2, ^2) and decompose as (ξ) = r^-1/2∑_k=-∞^∞k(θ) k(r) . Then we have f_L^2(^3, ^2)^2 = 2 π∑_k = -∞^∞∫_0^∞( k(r) k(r) k(r) + k(r) k(r) k(r)) dr , where k(r) [ λ_k(r) 0; 0 λ_k+1(r) ] . We have (σ_1 cosθ + σ_2 sinθ) k(θ) = k(θ) σ_1 . (<ref>) and (<ref>) are immediate from Theorem <ref> and Lemma <ref>, respectively. (<ref>) is also quite easy: (σ_1 cosθ + σ_2 sinθ) k(θ) = 1/√(2π)( cosθ[ 0 1; 1 0 ] + sinθ[ 0 -i; i 0 ]) [ e^i k θ 0; 0 e^i (k+1) θ ] = 1/√(2π)[ 0 e^- i θ; e^iθ 0 ][ e^i k θ 0; 0 e^i (k+1) θ ] = 1/√(2π)[ 0 e^i k θ; e^i (k+1) θ 0 ] = k(θ) σ_1 . Using Lemma <ref>, we prove the following result: We have _L^2(^2, ^2) → L^2(^3, ^2)^2 = 2 π , where sup_k ∈sup_r > 0k(r), k(r) 1/2 ( λ_k(r) + λ_k+1(r) ) + m/2ϕ(r)λ_k(r) - λ_k+1(r) . Regarding extremisers, let f ∈ L^2(^2, ^2) be such that f(ξ) = r^-1/2∑_k=-∞^∞k(θ) k(r) , ξ = (r cosθ, r sinθ) . Then the following are equivalent: The equality f_L^2(^3, ^2)^2 = 2 πf_L^2(^2, ^2)^2 holds. The functions {k}_k ∈ satisfy k⊂_k r > 0 k(r) = , k(r) ∈ W_k(r) , a.e. r > 0 for each k ∈, where W_k(r) = ^2, m ( λ_k(r) - λ_k+1(r) ) = 0 , the eigenspace of m σ_3 + r σ_1 associated with ϕ(r) , m ( λ_k(r) - λ_k+1(r) ) > 0 , the eigenspace of m σ_3 + r σ_1 associated with - ϕ(r) , m ( λ_k(r) - λ_k+1(r) ) < 0 , As a consequence, extremisers exist if and only if there exists k ∈ such that _k > 0. Let f ∈ L^2(^2, ^2) be such that f(ξ) = r^-1/2∑_k=-∞^∞k(θ) k(r) , ξ = (r cosθ, r sinθ) , f_L^2(^2, ^2)^2 = ∑_k=-∞^∞k_L^2(, ^2)^2 . At first we need to compute . By (<ref>) and σ_3 k = kσ_3, we have (ξ) = 1/2 ξ^1/2( f(ξ) ±1/ϕ(ξ)( m σ_3 f(ξ) + ∑_j=1^2σ_j ξ_j f(ξ) ) ) = 1/2r^1/2∑_k=-∞^∞( k(θ) ±1/ϕ(r) ( mσ_3 k(θ) + r k(θ) σ_1 ) ) k(r) = 1/2r^1/2∑_k=-∞^∞k(θ) ( ±1/ϕ(r) ( mσ_3 + r σ_1 ) ) k(r) . Therefore, (<ref>) implies f_L^2(^3, ^2)^2 = 2 π∑_k=-∞^∞∫_0^∞k(r) k(r) k(r) dr , where 2 k(r) 1/2( + 1/ϕ ( mσ_3 + r σ_1 ) ) k( + 1/ϕ ( mσ_3 + r σ_1 ) ) + 1/2( - 1/ϕ ( mσ_3 + r σ_1 ) ) k( - 1/ϕ ( mσ_3 + r σ_1 ) ) = k + 1/ϕ^2 ( mσ_3 + r σ_1 ) k ( mσ_3 + r σ_1 ) = k + 1/ϕ^2 (m^2 σ_3 kσ_3 + mr ( σ_3 kσ_1 + σ_1 kσ_3 ) + r^2 σ_1 kσ_1 ) = k + 1/ϕ^2 (m^2 k + mr (λ_k - λ_k+1) σ_1 + (ϕ^2 - m^2) σ_1 kσ_1 ) = k + σ_1 kσ_1 + m/ϕ^2 (m ( k - σ_1 kσ_1 ) + r (λ_k - λ_k+1) σ_1 ) = (λ_k(r) + λ_k+1(r)) + m/ϕ(r)^2 (λ_k(r) - λ_k+1(r)) (m σ_3 + r σ_1) . Now we need to determine the maximal eigenvalue of k(r) and its associated eigenspace. Since eigenvalues of the matrix m σ_3 + r σ_1 are ±ϕ(r), we conclude that the maximal eigenvalue of k(r) and its associated eigenspace are k(r) = 1/2 ( λ_k(r) + λ_k+1(r) ) + m/2ϕ(r)λ_k(r) - λ_k+1(r) and W_k(r) = ^2, m ( λ_k(r) - λ_k+1(r) ) = 0 , the eigenspace of m σ_3 + r σ_1 associated with ϕ(r) , m ( λ_k(r) - λ_k+1(r) ) > 0 , the eigenspace of m σ_3 + r σ_1 associated with - ϕ(r) , m ( λ_k(r) - λ_k+1(r) ) < 0 , respectively. 
Therefore, we have f_L^2(^3, ^2)^2 = 2 π∑_k=-∞^∞∫_0^∞k(r) k(r) k(r) dr ≤ 2 π∑_k=-∞^∞∫_0^∞k(r) k(r)^2 dr ≤ 2 πf_L^2(^2, ^2)^2 and hence ^2 ≤ 2 π . To see the equality ^2 = 2 π, we will show that for any ε > 0, there exists f ∈ L^2(^2, ^2) ∖{0} such that f^2_L^2(^3, ^2)≥ 2 π ( - ε ) f_L^2(^2, ^2)^2. Fix ε > 0. Then, by the definition of and the continuity of k, we can choose k ∈ such that the Lebesgue measure of _k(ε) r > 0_k(r) ≥ - ε is nonzero (possibly infinite). Now let f ∈ L^2(^2, ^2) ∖{ 0 } be f(ξ) = k(θ) k(r) with k∈ L^2(, ^2) satisfying k⊂_k , k(r) ∈ W_k(r) , a.e. r > 0 . Then we have f_L^2(^3, ^2)^2 = 2 π∫_0^∞k(r) k(r) k(r) dr = 2 π∫_0^∞k(r) k(r)^2 dr ≥ 2 π ( - ε) f_L^2(^2, ^2)^2 , hence the equality ^2 = 2 π holds. Using a similar argument, we can show that (<ref>)(<ref>). Suppose that {k}_k ∈ satisfies k⊂_k , k(r) ∈ W_k(r) , a.e. r > 0 for each k ∈. Then we have f_L^2(^3, ^2)^2 = 2 π∫_0^∞∑_k=-∞^∞k(r) k(r) k(r) dr = 2 π∫_0^∞∑_k=-∞^∞k(r) k(r)^2 dr = 2 πf_L^2(^2, ^2)^2 . Finally, we show that (<ref>)(<ref>). Suppose that the equality f_L^2(^3, ^2)^2 = 2 πf_L^2(^2, ^2)^2 holds. Then, since 2π∑_k ∈∫_0^∞( k(r) ^2 - k(r) k(r) k(r) ) dr = 2 πf_L^2(^2, ^2)^2 - f_L^2(^3, ^2)^2 = 0 and k(r) ^2 - k(r) k(r) k(r) ≥ 0 , we obtain k(r) ∈ W_k(r) , k(r) ^2 - k(r) k(r) k(r) = ( - _k(r) ) k(r) ^2 = 0 for almost every r > 0 and each k ∈. On the other hand, the definition of _k implies that - _k(r) > 0 for any r ∈∖_k and each k ∈. Therefore, we conclude that k(r) = 0 holds for almost every r ∈∖_k and each k ∈. § IN THE CASE D=3 In this section, we use the following representation: α_j = σ_1 ⊗σ_j = [ σ_j; σ_j ], β = σ_3 ⊗ = [ ; - ], where σ_1 = [ 0 1; 1 0 ], σ_2 = [ 0 -i; i 0 ], σ_3 = [ 1 0; 0 -1 ]. In the case d = 3, it is known that the following functions {kn}_n≤ k form an orthonormal basis for _k(^3) for each k ≥ 0: kn(θ, φ) = 1/√(2π) (-1)^(n + n)/2kn ( sinθ )^nn + 1/2k - n (cosθ) e^i n φ, where pn denotes the Gegenbauer polynomial, which is defined by the following recurrence relation: p-1(x) = 0 , p0(x) = 1 , n pn+1(x) = 2 (n + p) x pn(x) - ( n + 2p - 1) pn-1(x) , and kn is the normalizing constant given by kn (2n-1)!! ( (k + 1/2) (k - n)! / (k + n)! )^1/2 . At first we prove the following lemma: Let kn(θ, φ) = [ kn(θ, φ) 0; 0 kn+1(θ, φ) ] . Then there exist matrices {kn}⊂ M_2 × 2() satisfying the following properties: ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ) kn(θ, φ) = k+1n(θ, φ) kn + k-1n(θ, φ) k-1n, k+1nkn = , knkn + k-1nk-1n = , kn = 0. At first we prove (<ref>) for n ≥ 0. By (<ref>), we have kn(θ, φ) = [ kn(θ, φ) 0; 0 kn+1(θ, φ) ] = n(φ) [ (-1)^nkn ( sinθ )^nn + 1/2k - n (cosθ) 0; 0 (-1)^n+1kn+1 ( sinθ )^n+1n + 3/2k - n - 1 (cosθ) ] n(φ) kn(θ) , hence Lemma <ref> implies ( σ_1 sinθcosφ + σ_2 sinθsinφ) kn (θ, φ) = ( σ_1 cosφ + σ_2 sinφ) n(φ) kn(θ) sinθ = n(φ) σ_1 kn(θ) sinθ . Since σ_3 kn(θ, φ) cosθ = n(φ) kn(θ) σ_3 cosθ , we obtain ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ) kn(θ, φ) = n(φ) ( σ_1 kn(θ) sinθ + kn(θ) σ_3 cosθ ) . Therefore, it is enough to show that σ_1 kn(θ) sinθ + kn(θ) σ_3 cosθ = k+1n(θ) kn + k-1n(θ) k-1n . In order to prove this, we use the following identities of the Gegenbauer polynomials: (n + p) pn(x) = p ( p+1n(x) - p+1n-2(x) ) , 4p (n + p) (1 - x^2) p+1n-1(x) = (n + 2p - 1)(n + 2p) pn-1(x) - n (n + 1) pn + 1(x) , 2 (n + p) x pn(x) = (n+1) pn+1(x) + ( n + 2p - 1) pn-1(x) . 
Using (<ref>) and (<ref>), we obtain (2k + 1) (-1)^nkn ( sinθ )^n+1n + 1/2k - n (cosθ) = (-1)^nkn ( sinθ )^n+1 (2n + 1) ( n+3/2(k+1)-(n+1)(cosθ) - n+3/2(k-1) - (n+1)(cosθ) ) = (-1)^n ( sinθ )^n+1 (2n + 1) ( knn+3/2(k+1)-(n+1)(cosθ) -knn+3/2(k-1) - (n+1)(cosθ) ) and (2n + 1) (2k + 1) (-1)^n+1kn+1 ( sinθ )^n + 2n+3/2k - (n+1) (cosθ) = (-1)^n+1 (2n + 1) (2k + 1) ( sinθ )^nkn+1 ( 1 - (cosθ)^2 ) n+3/2k - (n+1) (cosθ) = (-1)^n+1 ( sinθ )^nkn+1 ( (k + n )(k + n + 1) n + 1/2(k-1) - n (cosθ) - (k - n)(k - n + 1) n+1/2(k+1) - n (cosθ) ) = (-1)^n+1 ( sinθ )^n( (k + n )(k + n + 1) kn+1n + 1/2(k-1) - n (cosθ) - (k - n)(k - n + 1) kn+1n+1/2(k+1) - n (cosθ) ) , respectively. Combining these, we get kn(θ) sinθ = (-1)^n ( sinθ )^n[ - k+1n+1 ( sinθ ) n+3/2(k+1)-(n+1)(cosθ) 0; 0 k+1nn+1/2(k+1) - n (cosθ) ][ - (2n+1) kn/ (2k+1 )k+1n+1 0; 0 (k - n )(k - n + 1) kn+1/ (2n + 1) (2k + 1) k+1n ] + (-1)^n ( sinθ )^n[ - k-1n+1 ( sinθ ) n+3/2(k-1) - (n+1)(cosθ) 0; 0 k-1nn + 1/2(k-1) - n (cosθ) ][ (2n+1) kn/ (2k+1 )k-1n+1 0; 0 - (k + n )(k + n + 1) kn+1/ (2n + 1) (2k + 1) k-1n ] = σ_1 k+1n(θ) σ_1 [ - (2n+1) kn/ (2k+1 )k+1n+1 0; 0 (k - n )(k - n + 1) kn+1/ (2n + 1) (2k + 1) k+1n ] + σ_1 k-1n(θ) σ_1 [ (2n+1) kn/ (2k+1 )k-1n+1 0; 0 - (k + n )(k + n + 1) kn+1/ (2n + 1) (2k + 1) k-1n ] = σ_1 k+1n(θ) [ 0 (k - n )(k - n + 1) kn+1/ (2n + 1) (2k + 1) k+1n; - (2n+1) kn/ (2k+1 )k+1n+1 0 ] + σ_1 k-1n(θ) [ 0 - (k + n )(k + n + 1) kn+1/ (2n + 1) (2k + 1) k-1n; (2n+1) kn/ (2k+1 )k-1n+1 0 ] . Furthermore, (<ref>) implies (2k + 1) (-1)^nkn ( sinθ )^nn + 1/2k - n (cosθ) cosθ = (-1)^n ( sinθ )^n( (k-n+1) knn+1/2(k+1)-n(cosθ) + (k+n) knn + 1/2(k-1)-n(θ) ) and so that kn(θ) σ_3 cosθ = (-1)^n ( sinθ )^n[ k+1nn+1/2(k+1)-n(cosθ) 0; 0 - k+1n+1 ( sinθ ) n+3/2(k+1)-(n+1)(cosθ) ][ (k-n+1) kn/ (2k+1) k+1n 0; 0 (k-n) kn+1/ (2k+1) k+1n+1 ]σ_3 + (-1)^n ( sinθ )^n[ k-1nn+1/2(k-1)-n(cosθ) 0; 0 - k-1n+1 ( sinθ ) n+3/2(k-1)-(n+1)(cosθ) ][ (k+n) kn/ (2k+1) k-1n 0; 0 (k+n+1) kn+1/ (2k+1) k-1n+1 ]σ_3 = k+1n(θ) [ (k-n+1) kn/ (2k+1) k+1n 0; 0 - (k-n) kn+1/ (2k+1) k+1n+1 ] + k-1n(θ) [ (k+n) kn/ (2k+1) k-1n 0; 0 - (k+n+1) kn+1/ (2k+1) k-1n+1 ] . Therefore, we conclude that σ_1 kn(θ) sinθ + kn(θ) σ_3 cosθ = k+1n(θ) [ (k-n+1) kn/ (2k+1) k+1n (k - n )(k - n + 1) kn+1/ (2n + 1) (2k + 1) k+1n; - (2n+1) kn/ (2k+1 )k+1n+1 - (k-n) kn+1/ (2k+1) k+1n+1 ] + k-1n(θ) [ (k+n) kn/ (2k+1) k-1n - (k + n )(k + n + 1) kn+1/ (2n + 1) (2k + 1) k-1n; (2n+1) kn/ (2k+1 )k-1n+1 - (k+n+1) kn+1/ (2k+1) k-1n+1 ] holds. Now let kn[ (k-n+1) kn/ (2k+1) k+1n (k - n )(k - n + 1) kn+1/ (2n + 1) (2k + 1) k+1n; - (2n+1) kn/ (2k+1 )k+1n+1 - (k-n) kn+1/ (2k+1) k+1n+1 ] . 
Then we have k-1n - [ (k+n) kn/ (2k+1) k-1n - (k + n )(k + n + 1) kn+1/ (2n + 1) (2k + 1) k-1n; (2n+1) kn/ (2k+1 )k-1n+1 - (k+n+1) kn+1/ (2k+1) k-1n+1 ] = [ (k-n) k-1n/ (2k-1) kn - (2n+1) k-1n/ (2k-1 )kn+1; (k - n - 1)(k - n) k-1n+1/ (2n + 1) (2k - 1) kn - (k-n-1) k-1n+1/ (2k-1) kn+1 ] - [ (k+n) kn/ (2k+1) k-1n - (k + n )(k + n + 1) kn+1/ (2n + 1) (2k + 1) k-1n; (2n+1) kn/ (2k+1 )k-1n+1 - (k+n+1) kn+1/ (2k+1) k-1n+1 ] = [ (k-n) k-1n/ (2k-1) kn - (k+n) kn/ (2k+1) k-1n (k + n )(k + n + 1) kn+1/ (2n + 1) (2k + 1) k-1n - (2n+1) k-1n/ (2k-1 )kn+1; (k - n - 1)(k - n) k-1n+1/ (2n + 1) (2k - 1) kn - (2n+1) kn/ (2k+1 )k-1n+1 (k+n+1) kn+1/ (2k+1) k-1n+1 - (k-n-1) k-1n+1/ (2k-1) kn+1 ] = [ (2k+1) (k-n) ( k-1n )^2 - (2k-1) (k+n) ( kn )^2 / (2k-1) (2k+1) knk-1n (2k-1)(k + n )(k + n + 1) ( kn+1 )^2 - (2n+1)^2 (2k + 1) ( k-1n )^2 / (2n + 1) (2k + 1) (2k-1 )kn+1k-1n; (2k+1)(k - n )(k - n - 1) ( k-1n+1 )^2 - (2n+1)^2 (2k - 1) ( kn )^2 / (2n + 1) (2k + 1) (2k-1 )knk-1n+1 (2k-1) (k+n+1) ( kn+1 )^2 - (2k+1) (k-n-1) ( k-1n+1 )^2 / (2k-1) (2k+1) kn+1k-1n+1 ] , and substituting the explicit expression for the normalizing constant (<ref>) shows that (2k+1) (k-n) ( k-1n )^2 - (2k-1) (k+n) ( kn )^2 = (2k+1) (k-n) ( (2n-1)!! )^2 (k - 1/2) (k - n - 1)! / (k + n - 1)! - (2k-1) (k+n) ( (2n-1)!! )^2 (k + 1/2) (k - n)! / (k + n)! = 0, that (2k-1)(k + n )(k + n + 1) ( kn+1 )^2 - (2n+1)^2 (2k + 1) ( k-1n )^2 = (2k-1) (k+n) (k+n+1) ( (2n+1)!! )^2 (k + 1/2) (k - n - 1)! / (k + n + 1)! - (2n+1)^2 (2k + 1) ( (2n-1)!! )^2 (k - 1/2) (k - n - 1)! / (k + n - 1)! = 0, that (2k+1)(k - n )(k - n - 1) ( k-1n+1 )^2 - (2n+1)^2 (2k- 1) ( kn )^2 = (2k+1) (k-n) (k-n-1) ( (2n+1)!! )^2 (k + 1/2) (k - n - 2)! / (k + n)! - (2n+1)^2 (2k - 1) ( (2n-1)!! )^2 (k + 1/2) (k - n)! / (k + n)! = 0, and that (2k-1) (k+n+1) ( kn+1 )^2 - (2k+1) (k-n-1) ( k-1n+1 )^2 = (2k-1) (k+n+1) ( (2n+1)!! )^2 (k + 1/2) (k - n - 1)! / (k + n + 1)! - (2k+1) (k-n-1) ( (2n+1)!! )^2 (k - 1/2) (k - n - 2)! / (k + n)! = 0 . Consequently, we obtain ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ) kn(θ, φ) = k+1n(θ, φ) kn + k-1n(θ, φ) k-1n for n ≥ 0. Now we consider the case n ≤ -1. In this case, kn(θ, φ) = k-n (-θ, -φ) implies kn(θ, φ) = σ_1 k-(n+1) (-θ, -φ) σ_1 and so that ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ) kn(θ, φ) = ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ) σ_1 k-(n+1) (-θ, -φ) σ_1 = σ_1 ( σ_1 sinθcosφ - σ_2 sinθsinφ - σ_3 cosθ) k-(n+1) (-θ, -φ) σ_1 = - σ_1 ( σ_1 sin(-θ)cos(-φ) + σ_2 sin(-θ)sin(-φ) + σ_3 cos(-θ)) k-(n+1) (-θ, -φ) σ_1 = - σ_1 ( k+1-(n+1)(-θ, -φ) k-(n+1) + k-1-(n+1)(-θ, -φ) k-1-(n+1)) σ_1 = - k+1n (θ, φ) σ_1 k-(n+1)σ_1 - k-1n (θ, φ) σ_1 k-1-(n+1)σ_1 . Thus, by letting kn - σ_1 k-(n+1)σ_1 , we have the desired result for n ≤ -1. Finally, we prove (<ref>), (<ref>) and (<ref>). Since ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ)^2 = , we have ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ)^2 kn = kn . On the other hand, using (<ref>) twice, we also have ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ)^2 kn = ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ) ( k+1nkn + k-1nk-1n ) = k+2nk+1nkn + kn ( knkn + k-1nk-1n ) + k-2nk-2nk-1n, hence (<ref>) and (<ref>) hold. Moreover, since kn, k+1n and k+1nkn =, we obtain kn = k+1n = 0, which shows (<ref>). As a consequence of (<ref>), (<ref>) and (<ref>), we obtain a certain orthonormal basis of ^2: There exist {kn}, {kn}⊂^2 such that kn = kn = 1 , knkn = 0 , knkn = k-1nkn = , k+1n = knkn = knknk+1n, kn = knk+1n = knknkn . Since kn = 0, we can take kn∈^2 satisfying knkn = and kn = 1 for each k, n. Now let knknk+1n. 
Then we have knkn = knknk+1n = ( k+1nk+1n + knkn ) k+1n = k+1n , k-1nkn = k-1nknk+1n = , knkn = knknk+1n = knknk+1n = 0 , kn^2 = knknk+1n = knknk+1n = u_k+1^n^2 = 1 . Now we prove the following lemma, which is analogue to Lemma <ref>: Let kn(θ, φ) ( kn(θ, φ) kn , k+1n(θ, φ) k+1n ) . Then the following hold: For any f ∈ L^2(^3, ^4), there uniquely exists {kn}⊂ L^2(_>0, ^4) satisfying f(ξ) = r^-1∑_k = 0^∞[ kn(θ, φ) ; kn(θ, φ) ]kn(r) , ξ = (r sinθcosφ , r sinθsinφ, cosθ) , <ref>.i f_L^2(^3, ^4)^2 = ∑_k = 0^∞kn_L^2(, ^4)^2 . <ref>.ii Conversely, for any {kn}⊂ L^2(_>0, ^4) satisfying ∑_k = 0^∞kn_L^2(, ^4)^2 < ∞ , the function f given by (<ref>) is in L^2(^3, ^4) and (<ref>) holds. Let f ∈ L^2(^3, ^4) and decompose as (ξ) = r^-1∑_k = 0^∞[ kn(θ, φ) ; kn(θ, φ) ]kn(r) . Then we have f_L^2(^4, ^4)^2 = 2 π∑_k = 0^∞∫_0^∞( [ k(r) ; k(r) ]kn(r) kn(r) + [ k(r) ; k(r) ]kn(r) kn(r)) dr , where k(r) [ λ_k(r) 0; 0 λ_k+1(r) ] . We have ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ) kn(θ, φ) = kn(θ, φ) σ_1 . (<ref>) and (<ref>) are immediate from Theorem <ref> and Lemma <ref>, respectively. (<ref>) follows from Lemma <ref> and Corollary <ref>. Since ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ) knknLemma <ref> = ( k+1nkn + k-1nk-1n ) kn Corollary <ref> =k-1nk-1n , ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ) knknLemma <ref> = ( k+1nkn + k-1nk-1n ) kn Corollary <ref> =k+1nk+1n , we have ( σ_1 sinθcosφ + σ_2 sinθsinφ + σ_3 cosθ) kn = ( k+1nk+1n, knkn ) = knσ_1 . Now we prove the following result: We have _L^2(^3, ^4) → L^2(^4, ^4)^2 = 2 π , where sup_k ∈sup_r > 0k(r), k(r) 1/2 ( λ_k(r) + λ_k+1(r) ) + m/2ϕ(r)λ_k(r) - λ_k+1(r) . Regarding extremisers, let f ∈ L^2(^3, ^4) be such that f(ξ) = r^-1∑_k = 0^∞[ kn(θ, φ) ; kn(θ, φ) ]kn(r) . Then the following are equivalent: The equality f_L^2(^4, ^4)^2 = 2 πf_L^2(^3, ^4)^2 holds. The functions {kn} satisfy kn⊂_k r > 0 k(r) = , kn(r) ∈ W_k(r) , a.e. r > 0 for each k ≥ 0 and - k - 1 ≤ n ≤ k, where W_k(r) = ^4, m ( λ_k(r) - λ_k+1(r) ) = 0 , the eigenspace of m ⊗σ_3 + r σ_2 ⊗σ_2 associated with ϕ(r), m ( λ_k(r) - λ_k+1(r) ) > 0 , the eigenspace of m ⊗σ_3 + r σ_2 ⊗σ_2 associated with - ϕ(r), m ( λ_k(r) - λ_k+1(r) ) < 0 , As a consequence, extremisers exist if and only if there exists k ∈ such that _k > 0. Let f ∈ L^2(^3, ^4) be such that f(ξ) = r^-1∑_k = 0^∞[ kn(θ, φ) ; kn(θ, φ) ]kn(r) , ξ = (r sinθcosφ , r sinθsinφ, cosθ) , f_L^2(^3, ^4)^2 = ∑_k = 0^∞kn_L^2(, ^4)^2 . At first we need to compute . By (<ref>), we have ∑_j=1^3α_j ξ_j f = ∑_k = 0^∞(σ_1 ⊗∑_j=1^3σ_j ξ_j ) (⊗kn) kn = ∑_k = 0^∞ (σ_1 ⊗knσ_1) kn = ∑_k = 0^∞ (⊗kn) α_1 kn and so that = 1/2( f ±1/ϕ( m β f + ∑_j=1^3α_j ξ_j f ) ) = 1/2r∑_k=0^∞[ kn ; kn ](_4 ±1/ϕ (m β + r α_1)) kn . Therefore, (<ref>) implies f_L^2(^4, ^4)^2 = 2 π∑_k = 0^∞∫_0^∞k(r) kn(r) kn(r) dr , where 2 k(r) 1/2 (_4 + 1/ϕ (m β + r α_1)) (⊗k) (_4 + 1/ϕ (m β + r α_1)) + 1/2 (_4 - 1/ϕ (m β + r α_1)) (⊗k) (_4 - 1/ϕ (m β + r α_1)) = ⊗k + 1/ϕ^2 (m σ_3 ⊗ + r σ_1 ⊗σ_1) (⊗k) (m σ_3 ⊗ + r σ_1 ⊗σ_1) = ⊗k + 1/ϕ^2 (m^2 ⊗k + mr ( σ_3 σ_1 ⊗kσ_1 + σ_1 σ_3 ⊗σ_1 k ) + r^2 ⊗σ_1 kσ_1 ) = ⊗k + 1/ϕ^2 (m^2 ⊗k + i mr σ_2 ⊗ ( kσ_1 - σ_1 k ) + (ϕ^2 - m^2) ⊗σ_1 kσ_1 ) = ⊗ (k + σ_1 kσ_1) + m/ϕ^2 (m ⊗ ( k - σ_1 kσ_1 ) + i r σ_2 ⊗ ( kσ_1 - σ_1 k ) ) = (λ_k + λ_k+1) _4 + m/ϕ^2 (λ_k - λ_k+1) (m ⊗σ_3 + r σ_2 ⊗σ_2 ) . Now we need to determine the maximal eigenvalue of k(r) and its associated eigenspace. 
Since the eigenvalues of the matrix m 𝟙_2 ⊗ σ_3 + r σ_2 ⊗ σ_2 are ±ϕ(r), we conclude that the maximal eigenvalue of M_k(r) and its associated eigenspace are
Λ_k(r) = (1/2) ( λ_k(r) + λ_{k+1}(r) ) + ( m / 2ϕ(r) ) |λ_k(r) - λ_{k+1}(r)|
and
W_k(r) = ℂ^4 if m ( λ_k(r) - λ_{k+1}(r) ) = 0 ; the eigenspace of m 𝟙_2 ⊗ σ_3 + r σ_2 ⊗ σ_2 associated with ϕ(r) if m ( λ_k(r) - λ_{k+1}(r) ) > 0 ; the eigenspace of m 𝟙_2 ⊗ σ_3 + r σ_2 ⊗ σ_2 associated with -ϕ(r) if m ( λ_k(r) - λ_{k+1}(r) ) < 0 ,
respectively. Therefore, we have
‖T f‖_{L^2(ℝ^4, ℂ^4)}^2 = 2π ∑_{k=0}^∞ ∑_{n=-k-1}^{k} ∫_0^∞ ⟨M_k(r) h_k^n(r), h_k^n(r)⟩ dr ≤ 2π ∑_{k=0}^∞ ∑_{n=-k-1}^{k} ∫_0^∞ Λ_k(r) ‖h_k^n(r)‖^2 dr ≤ 2π 𝔪 ‖f‖_{L^2(ℝ^3, ℂ^4)}^2
and hence ‖T‖^2 ≤ 2π 𝔪. The equality ‖T‖^2 = 2π 𝔪 and the characterization of extremisers (i) ⇔ (ii) can be proved by the same argument as in the proof of Theorem <ref>, so we omit the details.
§ EXPLICIT VALUES
In this section, we prove the following result:
In the case (<ref>), we have
sup_{k ∈ ℕ} sup_{r>0} Λ_k(r) = (c_0 + c_1)/2 if m = 0 , and c_0 if m > 0 ,
𝒵_k = { r > 0 : Λ_k(r) = 𝔪 } = ℝ_{>0} if m = k = 0 , and ∅ otherwise,
where
c_k := 2^{1-s} (2π)^d Γ(s-1) Γ((d-s)/2 + k) / ( ( Γ(s/2) )^2 Γ( (d+s)/2 + k - 1 ) ) .
As a consequence, we have ‖T‖_{L^2(ℝ^d, ℂ^N) → L^2(ℝ^{d+1}, ℂ^N)}^2 = 2π 𝔪 in the case (<ref>) with d = 2, 3, and the equality ‖T f‖_{L^2(ℝ^{d+1}, ℂ^N)}^2 = 2π 𝔪 ‖f‖_{L^2(ℝ^d, ℂ^N)}^2 holds if and only if:
d = 2, m = 0 : f(ξ) = r^{-1/2} Ψ_0(θ) h_0(r) ,
d = 3, m = 0 : f(ξ) = r^{-1} ∑_{n=-1}^{0} ( 𝟙_2 ⊗ Ψ_0^n(θ, φ) ) h_0^n(r) ,
d = 2, 3, m > 0 : f = 0 .
Furthermore, in the case (<ref>) with d ≥ 4 and m > 0, we have ‖T‖_{L^2(ℝ^d, ℂ^N) → L^2(ℝ^{d+1}, ℂ^N)}^2 = 2π 𝔪 = 2π c_0 .
We use the following lemma:
[<cit.>] Let d ≥ 2, 1 < s < d, a > 0, and suppose that (w, ψ, ϕ) satisfy w(x) = r^{-s}, ψ(r)^2 = a r^{1-s} ϕ'(r). Then we have
λ_k(r) = a c_k = 2^{1-s} a (2π)^d Γ(s-1) Γ((d-s)/2 + k) / ( ( Γ(s/2) )^2 Γ( (d+s)/2 + k - 1 ) ) .
Furthermore, { c_k }_{k ∈ ℕ} is strictly decreasing. For example, if d ≥ 3 and s = 2, then
λ_k(r) = a c_k = a (2π)^d / (d + 2k - 2) .
The proof of Lemma <ref> can be found in <cit.>. Note that Theorem <ref> is immediate from Theorem <ref> and Lemma <ref>. In fact, we have
C_d(w, ψ, ϕ) = (1/(2π)^{d-1}) sup_{k ∈ ℕ} sup_{r>0} λ_k(r) = (1/(2π)^{d-1}) a c_0 = 2^{2-s} a π Γ(s-1) Γ((d-s)/2) / ( ( Γ(s/2) )^2 Γ( (d+s)/2 - 1 ) ) .
Similarly, Theorem <ref> follows from Theorems <ref>, <ref>, <ref>, <ref>, <ref> and Lemma <ref>. By Lemma <ref>, we have
Λ_k(r) = (1/2) ( c_k + c_{k+1} + ( m / √(r^2 + m^2) ) ( c_k - c_{k+1} ) )
and so that
sup_{k ∈ ℕ} sup_{r > 0} Λ_k(r) = sup_{k ∈ ℕ} lim_{r → +0} Λ_k(r) = sup_{k ∈ ℕ} (c_k + c_{k+1})/2 if m = 0 , and sup_{k ∈ ℕ} c_k if m > 0 ; that is, (c_0 + c_1)/2 if m = 0 , and c_0 if m > 0 , while { r > 0 : Λ_k(r) = 𝔪 } = ℝ_{>0} if m = k = 0 , and ∅ otherwise.
Combining these with Theorems <ref>, <ref>, we obtain the desired result in the case d = 2, 3. In the case d ≥ 4, notice that Theorems <ref>, <ref> and <ref> imply
2π sup_{r>0} Λ_*(r) ≤ ‖T‖_{L^2(ℝ^d, ℂ^N) → L^2(ℝ^{d+1}, ℂ^N)}^2 ≤ 2π sup_{k ∈ ℕ} sup_{r>0} λ_k(r) = 2π c_0 .
By Lemma <ref>, we have
Λ_*(r) = (1/2) ( c_0 + c_1 + ( m^2 / (r^2 + m^2) ) ( c_0 - c_1 ) )
and so that
sup_{r>0} Λ_*(r) = (c_0 + c_1)/2 if m = 0 , and c_0 if m > 0 .
Therefore, we conclude that ‖T‖_{L^2(ℝ^d, ℂ^N) → L^2(ℝ^{d+1}, ℂ^N)}^2 = 2π c_0 holds when m > 0.
§ ACKNOWLEDGMENTS
The authors would like to thank Mitsuru Sugimoto (Nagoya University) and Yoshihiro Sawano (Chuo University) for valuable discussions.
http://arxiv.org/abs/2405.10020v1
20240516120202
Natural Language Can Help Bridge the Sim2Real Gap
[ "Albert Yu", "Adeline Foote", "Raymond Mooney", "Roberto Martín-Martín" ]
cs.RO
[ "cs.RO", "cs.CL", "cs.CV", "cs.LG", "I.2.9; I.2.7; I.2.6" ]
Natural Language Can Help Bridge the Sim2Real Gap Albert Yu, Adeline Foote, Raymond Mooney, and Roberto Martín-Martín UT Austin May 20, 2024 ==================================================================================== The main challenge in learning image-conditioned robotic policies is acquiring a visual representation conducive to low-level control. Due to the high dimensionality of the image space, learning a good visual representation requires a considerable amount of visual data. However, when learning in the real world, data is expensive. Sim2Real is a promising paradigm for overcoming data scarcity in the real-world target domain by using a simulator to collect large amounts of cheap data closely related to the target task. However, it is difficult to transfer an image-conditioned policy from sim to real when the domains are very visually dissimilar. To bridge the sim2real visual gap, we propose using natural language descriptions of images as a unifying signal across domains that captures the underlying task-relevant semantics. Our key insight is that if two image observations from different domains are labeled with similar language, the policy should predict similar action distributions for both images. We demonstrate that training the image encoder to predict the language description or the distance between descriptions of a sim or real image serves as a useful, data-efficient pretraining step that helps learn a domain-invariant image representation. We can then use this image encoder as the backbone of an IL policy trained simultaneously on a large amount of simulated and a handful of real demonstrations. Our approach outperforms widely used prior sim2real methods and strong vision-language pretraining baselines like CLIP and R3M by 25 to 40%. § INTRODUCTION Recently, visual imitation learning (IL) has achieved significant success on manipulation tasks in household environments <cit.>. However, these methods rely on large amounts of data in very similar domains to train data-hungry image-conditioned policies <cit.>. Some researchers are attempting to generalize visual IL to any target domain by collecting large datasets of demonstrations from mixed domains. In this work, we explore a different approach: can we transfer a policy trained on cheaply acquired, diverse simulation data to a real-world target task with just a few demonstrations? A solution to effectively leverage cheap sim data while successfully fitting scarce real-world demonstrations is to create a domain-agnostic visual representation and use it for policy training. Such a representation should enable the policy to use the simulation image-action data as an inductive bias to learn with few-shot real world data. This representation must allow the policy to tap into the right distribution of actions by being broad enough to capture the task-relevant semantic state from image observations, yet fine-grained enough to be conducive to low-level control. For instance, a sim and real image observation, both showing the robot gripper a few inches above a pan handle, should lie close together in the image embedding space to lead to similar actions, even if the two images have large differences in pixel space. How might we acquire supervision for learning such a visual representation? Language is an ideal medium for providing it. 
Descriptions of task-relevant features in image observations, such as whether or not a gripper is close to a pan handle, serve as a unifying signal to align the representations of images between sim and real. We hypothesize that if a sim and real image have similar language descriptions (e.g., "the gripper is open and right above the pan handle"), then their underlying semantic states are also similar, and thus the actions the policy predicts conditioned on each image should also be semantically similar (e.g., moving downward to reach the pan handle). The pretrained embedding space of large language models (LLMs) offers a well-tuned signal that can be leveraged to measure the semantic similarity between real and sim images via their associated language descriptions (see Fig. <ref>). This simple insight allows us to learn a domain-agnostic visual representation to bridge the visual sim2real gap. A popular paradigm in foundation model research is to first pretrain the backbone on large datasets, and then add and train a task-specific head to process the backbone outputs to perform a downstream task. We borrow from this paradigm by first pretraining an image encoder to predict the pretrained embeddings of language descriptions of images from roughly 50 trajectories in sim and real, with language labels on each image. Then we use this image encoder as the backbone of our IL policy and train on action-labeled data from both the sim and real domains simultaneously, where we only need a few action-labeled demonstrations from the real world. In this paper, we introduce a lightweight framework for transferring between any two domains that have large visual differences but contain data across a similar distribution of tasks. Our approach has the following main advantages over prior sim2real efforts: * Alleviates the need for the engineering-intensive task of system identification, or more broadly trying to exactly match a sim environment to the real environment both visually and semantically. * Enables sim2real transfer on tasks involving deformable objects that are hard to simulate with the same dynamics and visual appearance as the real-world version of the objects. * Bridges a wide sim2real gap that includes differences in: camera point-of-view (1st vs 3rd person), friction and damping coefficients, task goals, robot control frequencies, and initial robot and object position distributions. In the few-shot setting, on long-horizon multi-step real-world tasks, these advantages enable our approach to outperform prior SOTA methods in sim2real and vision representation learning by 25-40%. To our knowledge, this is the first work that shows that using language to learn a domain-invariant visual representation can help improve the sample efficiency and performance of sim2real transfer. § RELATED WORK Our main contribution is a method to learn domain-invariant image representations by exploiting natural language descriptions as a bridge between domains for sim2real transfer. While we believe this has not been explored before, significant related research has been done in vision-language pretraining, domain-invariant representations for control, and sim2real techniques. §.§ Vision Pretraining for Robotics Various works have found that vision-only pretraining improves performance on image-based robotic policies.
Prior work has explored pretraining objectives including masked image modeling <cit.>, image reconstruction <cit.>, contrastive learning <cit.>, video frame temporal ordering <cit.>, future frame prediction <cit.>, and image classification <cit.> on internet-scale datasets such as ImageNet <cit.>, Ego4D <cit.>, Something-Something <cit.>, and Epic Kitchens <cit.>. While these vision-only pretraining objectives learn good representations for robotic control within a specific domain distribution (such as the real world), they are not necessarily robust to the wide domain shifts encountered during sim2real. In vision-language pretraining, contrastive learning <cit.> has been shown to learn valuable representations for robotic tasks <cit.>. However, these pretrained visual representations are often overly influenced by the semantics of language captions. This results in a representation that is too object-centric to differentiate between different frames of a robot demonstration, lacking the level of granularity needed for spatial-temporal understanding. R3M <cit.> addresses this by learning semantics from language labels of videos but also training with a time contrastive loss between video frames. Prior work in multimodal representations <cit.> found language to be effective in aligning representations learned across multiple modalities including depth and audio. Instead of using language to bridge modalities, our approach uses language to bridge visual representations between domains. §.§ Sim2Real While we approach sim2real through vision-language pretraining, there are many alternative, well-researched techniques. Domain randomization <cit.> involves varying physical parameters and visual appearances of the simulation to train a policy that functions in a wide distribution of domains that hopefully also covers the target domain distribution. However, domain randomization requires a large amount of diverse training data and attempts to be simultaneously performant in an overly broad distribution of states, leading to a suboptimal and conservative policy that takes longer to train. System identification <cit.> involves tuning the simulation parameters to match the real world in order to create a custom-tailored simulation environment that easily transfers to the real domain. However, this process is very engineering intensive and time consuming, and it may be intractable to simulate all real world physical interactions with high fidelity and throughput. In contrast, our sim2real approach can handle potentially large discrepancies between the source and target domains and, given a few target-task demonstrations, requires neither system identification nor domain randomization. §.§ Domain-Invariant Representations Several methods have been proposed to learn domain invariant representations. The domain-adaptation community has extensively researched using Generative Adversarial Networks (GANs) to map images from one distribution into another, using pixel space as a medium of common representation <cit.>. However, GANs require a large training dataset and are notorious for unstable training. Additionally, enforcing similarity on the input image side at the pixel level is less efficient than our method, which encourages cross-domain distributional similarity in a compact, low-dimensional image encoder space.
Furthermore, researchers in self-driving have studied using semantic segmentation and depth maps <cit.> as a common representation space between domains, though their effectiveness has only been demonstrated in navigation tasks with binary segmentation masks, which is too simplified for the long-horizon manipulation tasks we consider. §.§ Language and Robotics A growing body of work has investigated training multitask robotic policies conditioned on language instruction embeddings <cit.>, or a combination of language instructions and goal images/demonstrations <cit.>. Our approach also involves learning a language-conditioned policy, but unlike prior work, our main novelty is using language for a second use-case: as scene descriptors during pretraining to pull together semantically similar image observations between two visually dissimilar domains. Language has also been used for reward shaping in RL <cit.>, and as a high-level planner in long-horizon tasks <cit.>. These areas of research are more ancillary to our contributions, as we demonstrate our approach with IL instead of RL and with fine-grained manipulation tasks that do not require extensive planning. § PROBLEM DESCRIPTION In this work, we address the problem of few-shot visual imitation-learning (IL): learning a visuomotor manipulation policy in the real world based on a few real-world demonstrations. We assume access to a large amount of simulation data and cast few-shot IL as a sim2real problem. More concretely, we render the few-shot IL problem as a k+1 multi-task IL problem: k tasks from simulation and the target task (with a few demonstrations) in the real world. In general terms, we assume a source domain in which data can be acquired cheaply and a target domain where data is expensive to collect. In our setting, we consider access to two datasets across two domains: 𝒟_source, which spans multiple tasks in the source domain, and 𝒟_target, which contains a small number of demonstrations of the target task in the target domain we want to transfer to. Thus, we assume that |𝒟_source| >> |𝒟_target|, due to how expensive target domain data collection is (such as in the real world). We make two assumptions about the two domains. First, we assume the source and target tasks are all of the same general structure, such as multi-step pick-and-place task compositions, but with different objects and containers across different subtasks. Otherwise, transfer would be infeasible in the low-data regime if the source and target domain tasks lack similarity. Second, to train a common policy for both domains, we assume the domains share state and action space dimensionality. We make no further assumptions about the similarity between the two domains. All of our datasets are in the form of expert trajectories. Each trajectory, τ = {(I_t, s_t, [a_t, l_t], l_task)}, is a sequence of tuples containing an image observation, I_t (128×128 RGB), robot proprioceptive state, s_t (end effector position and joint angles), and a language instruction of the task, l_task. Note that l_task is the same over all timesteps of all trajectories in a given task. [a_t, l_t] denotes that a trajectory may optionally also include robot actions (in which case we consider the trajectory a full demonstration) and/or a language description of the image I_t. In the following sections, we identify with τ[L] a trajectory with language descriptions l_t, but no actions a_t.
Similarly, τ[A] is a full demonstration with actions, a_t, but no language descriptions, l_t, and τ[AL] is a demonstration that contains both actions, a_t, and language, l_t. The language labels for images can be automatically generated from a programmatic function that maps image observations to language scene descriptions depending on the relative position between the robot and the objects in the scene. We elaborate on these language labels and how to automatically collect them in Section <ref>. Note that these language scene descriptions, l_t, are different from the language instruction associated with each task, l_task. Different data elements and types of trajectories will be used during pretraining and policy few-shot training: during pretraining, we use τ[L] image-language (I_t, l_t) pairs from 𝒟_source ∪ 𝒟_target. During policy learning, we use τ[A] data: (I_t, s_t, a_t, l_task) tuples from 𝒟_source ∪ 𝒟_target. In the next section, we explain how these two steps are defined for our method. § FEW-SHOT IL WITH SIM & REAL In our method, we adopt the common pretrain-then-finetune learning paradigm (see Fig. <ref>). First, we pretrain an image backbone encoder on cross-domain language-annotated image data (Sec. <ref>). Then, we freeze this encoder and train a policy network composed of trainable adapter modules and a policy head to perform behavioral cloning (BC) <cit.> on action-labeled data from both domains (Sec. <ref>). To leverage the simulation data, we train a k+1 multi-task BC policy that learns k tasks in the source domain (sim) and 1 in the target domain (real, few-shot). §.§ Automatic Language Labeling of Images To acquire image-language pairs for pretraining, we implement an automated pipeline for labeling the images of a trajectory that occurs synchronously during scripted policy demonstration collection (see Appendix <ref>). Each if-case in the scripted policy corresponds to a stage index. We define a list of template strings describing the scene for each of these stages, so the stage indexes into the template string list, giving us our language annotation for the image. See Table <ref> in the Appendix for all template strings, and Appendix <ref> for details about our language labeling procedure. However, our language labeling process need not be synchronously coupled with scripted policy demonstration collection. We also implemented a labeling process using off-the-shelf vision-language models to detect the location of objects and the gripper in the image to predict the stage number. This process can be run on previously-collected trajectories and requires only the images of a trajectory, without need for additional action or state information. We describe this second process in Appendix <ref>. Empirically, using language from this second, more scalable automated approach does not degrade the performance of our method.
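To make the first, stage-indexed scheme concrete, the stage-to-string lookup can be as simple as the following sketch; the thresholds live in the scripted policy itself, and the template strings and object names here are illustrative stand-ins, not the exact entries of Table <ref>:

TEMPLATES = [
    "the gripper is open and far from the {obj}",
    "the gripper is open and right above the {obj}",
    "the gripper is closed around the {obj}",
    "the gripper is carrying the {obj} towards the {target}",
    "the {obj} is on the {target} and the gripper is open",
]

def label_image(stage, obj, target):
    # The scripted policy's current if-case index selects the template.
    return TEMPLATES[stage].format(obj=obj, target=target)

# Inside the scripted-policy loop, e.g.:
# l_t = label_image(stage, obj="bread", target="coaster")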
§.§ Cross-Domain Image-Language Pretraining After collecting trajectories with language labels, our first step involves learning a domain-invariant representation that will enable leveraging simulation data for few-shot IL. For that, we need to learn an image observation encoder, f_cnn: I_t → ℝ^{d_cnn}, that attains the following property: it should preserve the semantic similarity of scenes in images between the two domains. For instance, if both image I^s from 𝒟_source (sim) and image I^t from 𝒟_target (real world) show the robot's gripper open and a few inches above the object to grasp, even if from different viewing angles, then we want their image embeddings to be close together in the learned image encoding space. This will facilitate policy learning later, as the policy will need to draw from a similar distribution of actions for similar scene semantics, which are now already mapped into similar visual features. Theoretically, off-the-shelf pretrained vision-language models (VLMs) <cit.> should already possess these properties as they were trained on a massive distribution of image and language data. However, in the context of robot manipulation, pretrained VLMs tend to encode all observations of the trajectory into a very narrow region of the embedding space without sufficient distinction for task-relevant, semantic aspects of the image such as the location of the gripper in relation to the manipulated objects. This renders them unsuitable without additional finetuning for our application (see Sec. <ref>). We therefore propose an alternative approach to obtain a visual representation with the aforementioned desired property. We train a ResNet-18 <cit.> from scratch as our image encoder using image-language tuples (I^s, l^s) from 𝒟_source and (I^t, l^t) from 𝒟_target. We denote this vision-language pretraining dataset as 𝒟_VL = { (I^d, l^d): (I^d, l^d) ∈ 𝒟_source ∪ 𝒟_target }, where d is either the source or target domain. The images are observations collected during 100 demonstrations from each of the tasks in 𝒟_source and 25-100 demonstrations from 𝒟_target, totaling around 10k images per domain. We assume that the two sets of language descriptions in 𝒟_source and 𝒟_target are similarly distributed; otherwise, language may not help learn domain-invariant features between the two domains. To effectively leverage language as a bridge between visually different domains, we need a well-tuned (frozen) language model, f_lang: l → ℝ^{d_lang}, to map strings to d_lang-dimensional language embeddings. We use off-the-shelf miniLM <cit.>, since prior work <cit.> has demonstrated its effectiveness for language-conditioned control policies compared to other small, off-the-shelf language models. Given the data and the language embedding described above, we propose two variants of the image-language pretraining step that can obtain a sim-real agnostic representation based on language supervision (see Fig. <ref>(i)A-B): §.§.§ Language-Regression Our first variant is a straightforward use of language supervision to shape the image embedding space: predicting the language embedding of the description, l^d, given the embedding of the corresponding image, I^d. We sample image-language pairs from the 𝒟_VL dataset defined above: (I^d, l^d) ∼ 𝒟_VL. Let g: ℝ^{d_cnn} → ℝ^{d_lang} be a single linear layer (language predictor in Fig. <ref>(i)(A)) trained to minimize the following loss: ℒ_cnn,reg(𝒟_VL) = ‖ g(f_cnn(I^d)) - f_lang(l^d) ‖_2^2 We use the loss to train both the language predictor and the CNN backbone. The loss provides strong language supervision by encouraging f_cnn to directly regress toward the frozen language embeddings of the image descriptions, effectively making the pretrained image encoder reflect the LLM embedding space.
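A minimal PyTorch-style sketch of this objective follows; the callables f_cnn (trainable ResNet trunk) and f_lang (frozen sentence encoder) and the embedding sizes 512/384 are illustrative assumptions, not values prescribed by the text:

import torch
import torch.nn as nn

d_cnn, d_lang = 512, 384          # illustrative embedding sizes
g = nn.Linear(d_cnn, d_lang)      # trainable language predictor

def regression_loss(f_cnn, f_lang, images, descriptions):
    z_img = g(f_cnn(images))                 # (B, d_lang)
    with torch.no_grad():
        z_lang = f_lang(descriptions)        # frozen sentence embeddings
    return ((z_img - z_lang) ** 2).sum(dim=-1).mean()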
§.§.§ Language-Distance Learning We also experiment with a second variant of image-language pretraining that incorporates language with a softer form of supervision. We posit that the exact values of the language embeddings do not themselves convey meaning; rather, it is the pairwise distances between two language embeddings that convey key information about the semantic similarity between two corresponding images. Thus, we design an objective to regress the image embedding distances between a pair of images from the two domains to their corresponding language distance: ℒ_cnn,dist(𝒟_VL) = ‖ f_cnn(I^s)^⊤ f_cnn(I^t) - d(l^s, l^t) ‖_2^2 where the language distance function we use, d: l × l → ℝ, is BLEURT <cit.>, a learned similarity score between two strings commonly used in the NLP community. We normalize d(·, ·) between 0 and 1 for all possible (l^s, l^t) pairs in our image-language dataset, where 1 indicates the highest similarity between any two strings in the dataset. The output of f_cnn is unit normalized before taking the dot product. When comparing both variants (see Sec. <ref>) we would like to assess if the additional degrees of freedom from the looser distance supervision are beneficial later on for policy training.
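A sketch of this second objective, under the conventions just stated (unit-normalized image embeddings; BLEURT scores precomputed and pre-normalized per sim/real pair, which is an assumption about the data pipeline):

import torch
import torch.nn.functional as F

def distance_loss(f_cnn, sim_images, real_images, bleurt_scores):
    z_s = F.normalize(f_cnn(sim_images), dim=-1)    # unit-normalised
    z_t = F.normalize(f_cnn(real_images), dim=-1)
    similarity = (z_s * z_t).sum(dim=-1)            # dot products
    return ((similarity - bleurt_scores) ** 2).mean()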
§.§ Multitask, Multidomain Behavioral Cloning Our second step involves learning a multi-domain, multi-task, language-conditioned BC policy (see Fig. <ref>(ii)). By leveraging our learned domain-invariant representation for robotic control, this policy should be able to perform well on the real-world task with only a few demonstrations, thanks to the additional information it can extract from simulation. During this phase of policy learning, we freeze all but the last layer to preserve the semantic scene information encoded in the learned, domain-invariant representation, f_cnn, while enabling the network to adapt to the new downstream task of low-level control. We also insert trainable FiLM layer blocks <cit.> as adapter modules in f_cnn to process the language instruction embeddings between the frozen convolution layers. Finally, we include a few fully-connected layers as a policy head to process the image feature, f_cnn(I_t), and proprioceptive state, s_t, and train the resulting policy π with BC loss to predict the mean and standard deviation of a multivariate Gaussian action distribution, as described below. Let our training dataset 𝒟_BC = 𝒟_source ∪ 𝒟_target be a set of demonstrations τ^d, for domain d ∈ {source, target}. As explained in Sec. <ref>, each demonstration is a sequence of tuples x_t = (I_t^d, s_t^d, a_t^d, l_task) containing the image observation, proprioceptive state, language instruction for the task, and action at timestep t. We train with the following standard BC negative log probability loss <cit.>: ℒ_π(𝒟_BC) = (1/B) ∑_{x_t ∼ τ^d, τ^d ∼ 𝒟_BC} -log π(a_t^d | f_cnn(I_t^d), s_t^d, l_task) where B denotes the batch size. The policy is trained on k+1 tasks: k from 𝒟_source (thousands of trajectories per task) and 1 from 𝒟_target (≤ 100 trajectories, see Sec. <ref>). In each batch, we sample m tasks uniformly at random from the k+1 tasks, and then query 𝒟_BC for a fixed number of transitions from trajectories for each of the m selected tasks. We hypothesize that cross-domain image-language pretraining (Sec. <ref>) improves policy learning because it helps ensure that image observations of different domains depicting semantically similar scenes map into similar regions of the learned embedding space. This accelerates learning not only on 𝒟_source data but also helps alleviate data scarcity in 𝒟_target because the pretrained image backbone encodes images into an in-distribution region of the learned image embedding space, alleviating common issues with visual distribution shift and enabling our method to leverage simulation data to compensate for the lack of real-world action-labeled data, improving sim2real transfer.
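The training step itself is compact; a sketch under the stated conventions (a Gaussian policy head returning a torch.distributions.Normal, a FiLM-conditioned backbone callable; the batch construction with m tasks per iteration happens outside this function, and the dictionary keys are illustrative):

import torch

def bc_loss(policy, f_cnn, batch):
    # batch holds transitions from m tasks sampled across both domains:
    # images I_t, proprioception s_t, task-instruction embeddings, actions a_t.
    feats = f_cnn(batch["images"], batch["task_lang"])   # FiLM-conditioned
    dist = policy(feats, batch["proprio"], batch["task_lang"])
    return -dist.log_prob(batch["actions"]).sum(dim=-1).mean()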
§ EXPERIMENTAL SETUP We evaluate our method in two settings: a sim2sim setting, where we test the transfer abilities between two simulated domains with visual and physical differences, and a sim2real setting, where the few-shot IL is defined in the real world and we use simulation to address the data scarcity. Sim2sim serves as a platform to evaluate in depth the behavior of our method with a fully controlled domain gap, while sim2real is our setting of interest for this work. We will use three task suites that we explain below. See Figure <ref> in the Appendix for detailed frame rollouts of each task. In a slight overload of notation from Sec. <ref>, here we use 𝒟_source and 𝒟_target to denote the source- and target-domain datasets in both settings. §.§ Sim2Sim and Sim2Real Environment Differences In sim2sim, the source and target domains are both sim environments with large differences in camera point-of-view (third person vs. first person), joint friction, and damping. In the sim2real setting, we employ a setup with a wide gap that we aim to bridge using language, which includes differences in control frequency, task goals, visual observation appearance, objects, and initial positions. More details on the two environments in each setting can be found in Appendix <ref>. §.§ Evaluation Metrics For all sim2sim and sim2real experiments, we measure task success rate. In sim2sim, this is calculated through ten evaluation trials for each of two seeds per task, for a total of 20 trials per table entry. In each set of ten trials, we place the object in the same ten initial positions and orientations, evenly distributed through the range of valid initial object positions. In sim2real, we also run two seeds per setting and take a success rate averaged over 720 trials between the two seeds in the final few hundred epochs of training. §.§ Data §.§.§ Environments For each of our tasks, we design simulation environments on top of Robosuite <cit.> in MuJoCo <cit.>. For the real environment, we use Operational Space Control <cit.> to control the position of the end-effector of the robot in Cartesian space. In both simulation and real, we work with a 7-DOF Franka Emika Panda arm and use a common action space consisting of the continuous xyz delta displacement and a continuous gripper closure dimension (normalized from [-1, 0]). The robot proprioception space is 22-dimensional, consisting of the robot's xyz end-effector position, gripper state, and sine and cosine transformations of the 7 joint angles. The image observation space is 128 × 128 RGB images. §.§.§ Overview of Tasks For each task suite, we collect data from the simulated source domain and from the real target domain (for sim2real) or sim target domain (for sim2sim). All demonstrations in sim and real are collected with a scripted policy (see Appendix for further details). Sim trajectories range from 200-320 timesteps long, operated at 50 Hz, while real trajectories run at 2 Hz and range from 18-45 timesteps. Our three task suites allow us to test the effectiveness of our method for sim2real in a wide variety of control problems, ranging from simple stacking in task suite 1, to multi-step long-horizon pick and place in task suite 2, to deformable, hard-to-simulate objects in task suite 3. §.§ Task Suite 1: Stack Object In our first suite of tasks, the robot must move an object to a target. In the simulated source domain, the target is on top of a wooden coaster, and there are four objects: milk carton, soda can, bread, and cereal box, which correspond to the four tasks. Both the object and coaster positions are randomized over the entire workspace. We collect and train on 400 demonstrations per task (1600 total) as our simulation data. §.§.§ Sim2Sim For sim2sim experiments on this task suite, we define a new simulated target environment with differences from the source as enumerated in Sec. <ref>. Policies are trained with the 1600 𝒟_source demonstrations and 100 target task demonstrations. §.§.§ Sim2Real For sim2real, the source domain remains the same as in sim2sim. The target is a real-world environment in which the object is randomly placed on the left mat and the target task is to move the object onto the right mat and open the gripper by the end of 20 timesteps. §.§ Task Suite 2: Multi-step Pick and Place Our second suite of tasks is longer-horizon. In simulation, the robot must first put an object in the pot, then grasp the pot by its handle and move it onto the stove. We categorize this as a 2-step pick-and-place task. We use the same four object-task mappings from Sec. <ref>. The object, pot, and stove locations are all randomized within a quadrant of the workspace. Since this task is longer horizon, we train on more data: 1,400 trajectories per task in 𝒟_source. §.§.§ Sim2Sim Similar to the stacking task, in the sim2sim setting, we define a new target environment with differences from the source enumerated in Sec. <ref> and evaluate over the four tasks when given 100 target-task demonstrations. §.§.§ Sim2Real In the sim2real setup, the source domain remains the same, while the target is the real task of putting a carrot into a bowl, then putting the bowl onto a plate (see Fig. <ref>), and ending with the gripper open after 50 timesteps. In addition to success rate (Section <ref>), we measure average number of subtasks completed, allowing partial credit if the robot only succeeds in placing the carrot in the bowl, but not if only the bowl is placed on the plate. §.§ Task Suite 3: Wrap Wire Our final suite of tasks involves wrapping a long deformable wire around a fixed object. In the simulated source domain, we approximate a wire with a chain of spheres connected with free joints, and the task is to wrap the chain around a fixed cylinder (see Fig. <ref>). A trajectory is successful if the first link of the chain has traveled ≥ 5π/3 radians (5/6ths of a full revolution) around the cylinder. Our simulation data consists of two tasks: wrapping counterclockwise and clockwise. The initial position of the end of the chain is randomized over a region to the left of the cylinder. 𝒟_source contains 400 trajectories per task. §.§.§ Sim2Sim For our sim2sim target environment, we again apply the changes specified in Sec. <ref>. We additionally swapped the spheres for capsules and changed the color and texture of the table, robot arm, and objects. This task has a wider sim2sim gap from additional visual and dynamics changes. §.§.§ Sim2Real In our sim2real experiments, the target task is to first grasp the plug, then wrap the cord around the base of a blender in the middle of the workspace, and finally put the plug down, similar to what one might do before putting the appliance away. Like the sim environment, we define success if the following two conditions are met: (1) the plug travels ≥ 5π/3 radians around the blender, and (2) the plug is placed and the gripper is open at the end of 50 timesteps.
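Both wrap-wire success tests reduce to the same computation: accumulate the signed angle the tracked point (first chain link in sim, plug in real) sweeps around the central axis and compare against 5π/3. A minimal numpy sketch, where the trajectory array and threshold follow the definition above and everything else is an implementation assumption:

import numpy as np

def wrapped_angle(xy):
    # xy: (T, 2) positions of the tracked link/plug relative to the axis.
    theta = np.arctan2(xy[:, 1], xy[:, 0])
    dtheta = np.diff(theta)
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi   # step-wise unwrap
    return abs(dtheta.sum())

def wrap_success(xy):
    return wrapped_angle(xy) >= 5 * np.pi / 3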
§.§ Baselines To evaluate the effectiveness of our method, we consider two sets of baselines: non-pretrained baselines where the CNN is initialized from scratch, and baselines with pre-trained visual encoders. For the non-pretrained baselines, we examine training with only target-domain data, and training with both source- and target-domain data. This enables us to understand the benefits of our proposed training procedure. In sim2real, we also compare to three popular prior sim2real approaches: * MMD <cit.>, which aims to minimize the distance between the mean embedding of all sim images and all real images of a batch to prevent the real images from being out-of-distribution relative to the sim images. * Domain randomization <cit.> of the colors, textures, and physics of the environment. * Automatic Domain Randomization with Random Network Adversary (ADR+RNA) <cit.>, which keeps increasing/decreasing domain randomization bounds depending on the agent's performance, and also introduces a randomly initialized network for each trajectory to inject correlated noise into the agent's action conditioned on the state input. For the pretrained baselines, we consider two strong foundation models as the visual backbone, CLIP <cit.> and R3M <cit.>, commonly used visual representations for robotics that are, like our approach, also shaped by language descriptions of images/videos. For each task in the sim2real setting, we train and evaluate with 25, 50, or 100 target-task demonstrations. §.§ Our Method Variants and Ablations In our evaluations, we compare language regression (Section <ref>) and language distance (Section <ref>), the two pretraining variants of our approach. We also ablate away the effects of language on our pretraining approach in a method called "stage classification," where the pretraining task is to predict the stage index of an image (see Section <ref>) instead of the language embedding or embedding distance. § EXPERIMENTAL RESULTS Our results for sim2sim experiments are shown in Table <ref>, and the results for sim2real are shown in Table <ref>. In both tables, the methods (rows) are grouped into non-pretrained baselines, our method variants and ablations, and pretrained SOTA baselines. In sim2real, we additionally include a group of three rows to show the performance of prior sim2real approaches. §.§ Experimental Questions and Analysis Across the three task suites in both sim2sim and sim2real, our method generally achieves the highest success rates. To further analyze the effectiveness of our method, we pose and investigate the following experimental questions. What is the impact of our pretraining approach? Our method nearly doubles the success rate of both non-pretrained baselines in most task suites in sim2sim and sim2real. This indicates that our method can bridge a wide sim2real gap. One factor that may allow our method to perform well is that image observations with similar language descriptions may also have similar action labels. In Appendix <ref>, we further investigate this hypothesis with an analysis of the action distributions between images, split by their language descriptions. Between the non-pretrained baselines, training on sim demonstrations in sim2real provides little benefit on stack object, increases average performance by ≈20% on multi-step pick-and-place, but decreases average performance by ≈10% on wrap wire. However, in sim2sim, it provides a 10-15% increase on most tasks. This suggests that the sim2sim gap is small enough to benefit from using 𝒟_source even without pretraining, but that the sim2real gap is large enough for pretraining to be needed to leverage 𝒟_source. How does our method compare to prior sim2real baselines?
Our method outperforms all of the prior sim2real baselines we tested against (second row-group in Table <ref>), which collectively do relatively poorly in most settings, highlighting the difficulty of the sim2real problem in our setup. MMD averages the best performance across the three sim2real baselines and even achieves competitive performance on the easiest task of stacking an object. However, on the two other more difficult tasks, its performance does not scale well with more trajectories, which we suspect arises from stability issues in trying to push together the mean of all sim and real image embeddings in each batch. Domain randomization only exacerbates the sim2real gap since enabling all randomizations does not move the distribution of simulation trajectories closer to the real world trajectories, due to the large visual dissimilarity between our simulation and real environments. ADR+RNA, which only randomizes the environment as much as possible without severely hurting the scripted policy performance, averages slightly better performance than domain randomization, perhaps because the data is less diverse and easier to fit a policy to than the data from full-scale domain randomization. How does our method compare to prior vision-language pretrained representations? In sim2real, our method outperforms both pretrained baselines across the board, including R3M, which is the strongest baseline on stack object and multi-step pick-and-place. When trained on increasing amounts of real-world data, both R3M and CLIP tend to plateau: CLIP performs no better than 40% on any task, R3M has an apparent ceiling of 65%, while our method achieves up to 90%. This suggests that CLIP and R3M do not scale as well as our method when provided more data, despite being pretrained on internet-scale video and image data while our method was pretrained on images from just a few hundred sim and real trajectories. In sim2sim, our method also outperforms R3M and CLIP across the board. Averaging the performance on stacking and multi-step pick-and-place, our method outperforms R3M by 15-30% and CLIP by 25-40%. On the wrap wire task, our method and R3M perform comparably, probably because the task is quite a bit easier for all methods in simulation. What is the effect of language in learning shared representations? We ablate the effect of language on our pretraining as the "stage classification" row in Tables <ref> and <ref>, as mentioned in Section <ref>. In sim2sim, we see similar performance in language regression pretraining and stage classification pretraining. However, in sim2real, where the domain gap is larger, we see language providing a measurable benefit in all task suites, especially in multi-step pick-and-place, perhaps because pretraining with language leverages similarities in language descriptions between the first and second steps of the pick-and-place task. How do our two image-language pretraining variants compare? We compare our two pretraining variants introduced in Sections <ref> and <ref>, where language regression directly predicts language embeddings while language distance is encouraged to maintain pairwise distances based on BLEURT similarity scores. Again in sim2sim, there is no clear winner between the two, but in sim2real, language regression performs better on average. This suggests that when performing language pretraining for visual representations, the more constraining regression loss outperforms the less constraining distance-matching loss.
§.§ Additional Experimental Questions and Results Finally, we examine a few questions to better understand the performance of our method under slight changes to the data and problem setup. What is the effect of pretraining on image-language pairs where the language granularity is reduced? We evaluate the impact of reduced language granularity on performance. See Appendix <ref> for results. How does our method perform if we cannot pretrain directly on image-language pairs from the target task? There are scenarios in which we might not have access to the real-world target task during the pretraining phase, as pretraining is often done without knowledge of the downstream task. To investigate this, we introduce a real-world prior task that we pretrain on, and use real-world target task data only during imitation learning. The advantage of this problem setup is that we can reuse the same f_cnn for multiple downstream real-world target tasks as long as they are sufficiently similar to the real-world prior task. In this modified problem setup, our method still mostly outperforms all baselines, which demonstrates that our method does not overfit to the real-world task it sees during pretraining. See Appendix <ref> for full results. § CONCLUSION Vision-based policies struggle with distributional shift during sim2real transfer. To address this challenge, we introduced a low-data-regime visual pretraining approach that leverages language to bridge the sim2real visual gap with only 25-100 real-world trajectories with automatically generated language labels. We evaluate the effectiveness of our approach on multi-step long-horizon tasks and hard-to-simulate deformable objects. In the few-shot setting, our approach outperforms state-of-the-art vision-language foundation models and prior sim2real approaches across 3 task suites in both sim2sim and sim2real. § LIMITATIONS AND FUTURE WORK One of the main limitations of our work is that the learned representation may have limited generalizability compared to existing pretraining methods that leverage internet-scale data to enable a large degree of generalization. Our approach targets a specific distribution and domain of real-world tasks and operates in the low-data regime for both pretraining and policy learning, so it does not yield general-purpose visual representations that can be applied to a wide distribution of target tasks. Our method also assumes that the template language descriptions used by the automatic labelling process describe similar aspects of images across the two domains, and may perform worse if the language between sim and real described images at extremely different levels of granularity. Furthermore, our approach relies on segmenting all trajectories of a task into stages of a certain granularity so that the associated template language is diverse enough to prevent the learned visual representation from mapping the entire input image distribution to a collapsed point. On contact-rich tasks involving continuous motions or complex object deformations, it may be harder to segment a trajectory and label these segments with language. Another avenue for future work involves exploring sim2real by combining existing pretraining approaches such as time-contrastive learning and masked image modeling in conjunction with the language-based pretraining we propose, as adding temporal or masked prediction terms to the objective may enable more fine-grained representations that complement the coarseness of language.
§ APPENDIX §.§ Scripted Policy for Real-World Data Collection §.§ Detailed Policy Network Architecture & Hyperparameters For the policy backbone, we use a ResNet-18 architecture but modified the strides and number of channels to adapt the network to our 128 × 128 × 3 image size. Hyperparameters are shown in Table <ref>. A detailed layer-by-layer architecture figure of our policy is shown in Figure <ref>. During policy training, only the last CNN layer, FiLM blocks, and policy head (FC layers) are finetuned, while all other layers are kept frozen. §.§ Does Language Similarity Imply Action Distribution Similarity? We hypothesize that one of the ways language is an effective bridge for sim2real transfer is that the sim and real action distributions of the demonstrations are similar when the image observations have similar language descriptions. Figure <ref> shows the action distribution similarities between sim and real when the language descriptions are similar (top row), and when the language descriptions are different (bottom row). Each column represents a component of the action distribution. We plot three components: z-axis actions, xy-magnitude (which is the ℓ2 norm of the (x, y) action dimensions), and the gripper dimension. We observe that action distributions are indeed more similar for images described by similar language than for images described by different language. §.§ Task and Data Details Figure <ref> provides film strips of trajectories from the source domain data 𝒟_source, the target domain prior task data 𝒟_prior, and the target domain target task data 𝒟_target, for each of the three task suites. §.§ Training Hyperparameters Table <ref> shows our BC training hyperparameters. In each training iteration, we sample 4 random tasks from our training buffer and get 57 samples per task, for a total batch size of 228. §.§ Sim2Sim and Sim2Real Differences In our sim2sim experiments, the source and target are both sim environments with the following differences: * Camera point-of-view: source image observations are third person (looking toward the robot), and target image observations are first person (over the shoulder), a large change of viewing angle. * Friction and Damping: Joint friction and damping coefficients are 5× and 50× higher, respectively, in the target than in the source, which significantly changes the dynamics. In our sim2real experiments, the sim (source) and real (target) domains have the following differences: * Control frequency: The simulated policy runs at 50 Hz while the real world policy runs at 2 Hz. * Objects: The objects in the scene in each task differ between simulation and real data, except the robot itself. * Visual Observation: Backgrounds and camera angles are markedly different between the two domains. * Initial positions: The initial object and robot positions are different across sim and real. §.§ Labeling Image Observations with Language §.§.§ Language Labeling during Scripted-Policy Data Collection We automatically label image observations with language descriptions during the scripted policy data collection process. Each image is assigned a stage number based on the if-case of the scripted policy, which corresponds to a semantic positional arrangement between the gripper and the relevant objects in the scene. Stage numbers map 1-to-1 to the template language strings shown in Table <ref>.
For example, for the pick-and-place/stack object task, we define 7 stages and 7 corresponding language string templates, where the first stage is when the gripper moves toward a point above the object, the second stage is when the gripper moves downward toward the object, and so on. For the 2-step pick-and-place task, we use 14 stages: two consecutive lists of the 7 individual pick-place string templates, where the object and container variables of each template are filled in with the proper names. Though our approach to labeling image observations with language was done during demonstration collection, we emphasize that images can be automatically labeled with language in hindsight after demonstrations are collected. For instance, one can run an object detector on the images to estimate the position of the gripper in relation to the scene objects. This information can be used to determine what stage in a pick-and-place trajectory an image observation falls into. §.§.§ Alternative Approach: Language Labeling with Off-the-Shelf VLMs To relax the requirement that our automated language labeling process must occur synchronously with a scripted policy collecting demonstrations, we implemented an alternative approach that is decoupled from the demonstration collection process. First, we use an off-the-shelf open-vocabulary object detector model, GroundingDINO <cit.>, to output bounding boxes for the relevant objects in the scene. No finetuning of GroundingDINO is required. Second, we train a CNN-based gripper state predictor to predict the gripper position (x, y, z) as well as whether the gripper is opened or closed in a given image. This network is trained on previously collected (image, gripper position, gripper opened/closed) data from 100 trajectories, and takes one minute to train on a single A5000 GPU. Using these two models, we can get the gripper state and position relative to the objects, enabling us to predict a stage number that corresponds fairly closely with the actual stage number as output by our scripted policy. Finally, we verified that training our method on VLM-derived language annotations does not degrade performance.
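The geometric rule that turns these detections into a stage index can be very small. The sketch below shows the idea with illustrative thresholds and a compressed 4-stage scheme rather than the 7 stages above; the detector outputs (box centres) and gripper-state prediction are taken as inputs, so it makes no assumption about the detector APIs themselves:

import numpy as np

def predict_stage(gripper_xyz, gripper_closed, obj_xy, target_xy, eps=0.05):
    near_obj = np.linalg.norm(np.asarray(gripper_xyz[:2]) - obj_xy) < eps
    near_tgt = np.linalg.norm(np.asarray(gripper_xyz[:2]) - target_xy) < eps
    if not gripper_closed:
        return 1 if near_obj else 0   # descending to grasp vs. approaching
    return 3 if near_tgt else 2       # placing vs. carrying towards target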
§.§ Impact of Language Granularity on Performance To examine the impact of decreasing language granularity on performance, we segment the trajectories into fewer and fewer stages, until the extreme case where the entire trajectory has only a single stage, which means that all images across all trajectories of a task have the same exact language description embedding. The language descriptions we use for each stage, for varying numbers of stages per task, are displayed in Tables <ref> (2-step pick-and-place) and <ref> (wire wrap). Results are shown in Table <ref>. The trend is noisy, but in general, decreasing language granularity hurts performance slightly. Still, our method is robust to lower granularity, which matches our hypothesis that our pretraining approach provides significant performance gains simply by pushing sim and real images into a similar embedding distribution even if the language granularity is extremely coarse. §.§ Sim2real Results with no Pretraining on 𝒟_target In Tables <ref> and <ref>, we presented results in a setting where we both pretrained and did policy learning on two datasets, 𝒟_source and 𝒟_target. Sometimes it is unrealistic to assume that during pretraining, we have access to the downstream target task we are ultimately interested in. In such scenarios, it may be more realistic to assume we instead have real-world data for a prior task, 𝒟_prior. Thus, in this setting, we experiment with pretraining on 𝒟_source ∪ 𝒟_prior and training our policy on 𝒟_source ∪ 𝒟_target. Our method uses extra language labels during pretraining that the baselines do not have access to. While these language labels can be acquired at scale, to compensate for this data advantage, we decided to give all baselines an augmented dataset that includes the 𝒟_prior action-labeled demonstrations, in addition to the target task data 𝒟_target. Note that our method is not given action-labeled 𝒟_prior data: it uses 𝒟_prior only as images with language labels during image-language pretraining (Sec. <ref>), not during BC policy learning. Therefore, the baselines in a sense serve as upper bounds as they are given |𝒟_prior| = 50 additional action-labeled demonstrations. In other words, during policy learning, the baselines train on action-labeled demonstrations from 𝒟_source ∪ 𝒟_prior ∪ 𝒟_target while ours is only trained on 𝒟_source ∪ 𝒟_target. Results are shown in Table <ref>. How different are 𝒟_prior and 𝒟_target? In 𝒟_prior and 𝒟_target for stack object and 2-step pick-and-place, the robot interacts with different objects in the two real-world tasks. Instead of a carrot as in 𝒟_target, in 𝒟_prior the robot interacts with a paper box for the stack object task suite and a rigid toy wooden block for 2-step pick-and-place. In sim2sim on wire wrap, 𝒟_prior contains data of the beads being wrapped clockwise, instead of counterclockwise as in 𝒟_target. In sim2real for wire wrap, the plug, cord, and blender in 𝒟_target are replaced by a wooden block, ethernet cable, and spool, respectively, in the 𝒟_prior data. The differences between 𝒟_prior and 𝒟_target can be visually examined in Figure <ref>. What trends are different between Table <ref> (with 𝒟_prior) and Table <ref> (without 𝒟_prior)? Most of the trends are similar. Re-examining our main experimental questions, we see that our method still nearly doubles the success rate of both non-pretrained baselines, outperforms all three prior sim2real baselines, and that using language regression is important to achieve the most gains from pretraining (language regression outperforms stage classification and language distance, on average). However, in this new problem setting in sim2real, R3M outperforms our method in the lowest data regime with 25 target task demonstrations, perhaps because of the additional 50 𝒟_prior demonstrations that our method does not train on. However, on 50 and 100 trajectories for the longer-horizon multi-step pick and place task, our method achieves higher sim2real performance than the best of either pretrained baseline.
http://arxiv.org/abs/2405.09364v1
20240515141652
Orbital Stability Study of the Taiji Space Gravitational Wave Detector
[ "Yu-Yang Zhang", "Geng Li", "Bo Wen" ]
gr-qc
[ "gr-qc", "hep-ex", "hep-ph" ]
^1 School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China ^2 University of Chinese Academy of Sciences, Beijing 100049, China ^3 Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China ^4 Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China ^5 National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China Space-based gravitational wave detection is extremely sensitive to disturbances. The Keplerian configuration cannot accurately reflect the variations in spacecraft configuration. Planetary gravitational disturbances are one of the main sources. Numerical simulation is an effective method to investigate the impact of perturbation on spacecraft orbits. This study shows that, in the context of the Taiji project, Earth's gravity is an essential factor in the change in heliocentric formation configuration, contributing to the relative acceleration between spacecrafts in the order of 𝒪(10^-6) m· s^-2. Considering 00:00:00 on 27 October 2032 as the initial orbiting moment, under the influence of Earth's gravitational perturbation, the maximum relative changes in armlengths and variation rates of armlengths for Taiji are 1.6× 10^5 km and 32 m· s^-1, respectively, compared with the unperturbed Keplerian orbit. Additionally, by considering the gravitational perturbations of Venus and Jupiter, the maximum variations in armlength and relative velocity for Taiji are reduced by 16.01% and 17.45%, respectively, compared with when only the Earth's gravity is considered. The maximum amplitude of the formation motion indicator changes with the orbit entry time. Results show that the relative velocity increase between the spacecrafts is minimal when the initial orbital moment occurs in July. Moreover, the numerical simulation results differ slightly when different ephemerides are used. The differences between ephemerides DE440 and DE430 are smaller than those between DE440 and DE421. Orbital Stability Study of the Taiji Space Gravitational Wave Detector Yu-Yang Zhang ^1,2,3[zhangyuyang21@mails.ucas.ac.cn], Geng Li ^2,4[ligeng@ucas.ac.cn (Corresponding Author)] and Bo Wen ^1,2,5[wenbo21@mails.ucas.ac.cn] April 25, 2024. ============================================================================================================================================================= § INTRODUCTION On 11 February 2016, the Laser Interference Gravitational-Wave Observatory (LIGO) <cit.> announced the direct observation of gravitational wave signals, opening a new era of physics <cit.>. Compared with ground-based gravitational wave detection, space-based detection can overcome the limitations imposed by ground noise <cit.> and interferometer scale, enabling the detection of gravitational waves in the medium- and low-frequency bands <cit.>. The success of the Laser Interferometer Space Antenna Pathfinder (LPF) validates the feasibility of gravitational wave detection in space <cit.>. The subsequent detection plan, the Laser Interferometer Space Antenna (LISA) project, is currently under pre-research and simulation, with a targeted launch date of around 2035 <cit.>. In 2016, the Chinese Academy of Sciences proposed the Taiji Space Gravitational Wave Detection Program <cit.>. The program aims to use Taiji spacecrafts to form a formation of three spacecrafts in Earth-like orbits around the Sun, describing an equilateral triangle with a side length of 3 million km <cit.>.
These three spacecrafts would be distributed approximately 20^∘ in front of (or behind) the Earth, and the angle between the spacecraft formation plane and the ecliptic plane would be 60^∘ <cit.>. Space-based gravitational wave detection detects gravitational waves through laser interferometry <cit.>. As a gravitational wave passes through, it weakly distorts space, causing a small change in the length of the interferometer arms. By measuring the interference effect of the laser beam in the interferometer, this small change in armlength can be detected, and a gravitational wave signal is obtained. In space-based gravitational wave detection, the length of the laser interference arm changes with time <cit.>. The instability of the laser frequency is one of the main sources of noise. Time-delay interferometry (TDI) technology <cit.> has been proposed to reduce this measurement noise. The implementation of TDI requires precise absolute ranging between spacecrafts and real-time communication. Thus, the stability of spacecraft orbits should be investigated.

Spacecrafts operate in a complicated gravitational environment. The orbit calculations must consider a variety of disturbances, such as planetary gravity, planetary tidal forces, lunar gravity, solar radiation pressure, and post-Newtonian effects <cit.>. Planetary gravity is one of the dominant factors; in particular, the gravities of Earth, Venus, and Jupiter have a significant impact on orbital stability <cit.>. Furthermore, to meet the requirements for gravitational wave detection within the targeted frequency bands, stringent constraints are imposed on the parameters governing the configuration of the Taiji formation <cit.>.

∙ To mitigate significant shifts within the sensitive frequency band, the armlength of the Taiji constellation must be kept within a range of 3 ± 0.035 million kilometers.

∙ The Taiji constellation does not need to maintain a rigid geometry, but the rate of change of the armlength is limited to under 10 m/s.

∙ Due to the design requirements of the optical system, the breathing angle of the Taiji constellation should be maintained at 60 ± 1 degrees.

To meet the aforementioned detection requirements, the Taiji formation needs to maintain orbital stability. Gravitational perturbations from celestial bodies induce changes in spacecraft velocities, thereby altering their orbits. Over time, these variations may cause the armlengths, breathing angles, and relative velocities of the spacecrafts to exceed the maximum acceptable limits. Once these thresholds are surpassed, the target frequency band of Taiji's gravitational wave detection can no longer be probed effectively. Hence, assessing the impact of planetary gravitational perturbations on satellite formations is crucial for estimating the experimental duration and facilitating subsequent orbit optimization efforts.

This study calculates the variations in armlength, armlength variation rate, and breathing angle of the Taiji constellation under planetary gravitational perturbations. Employing ephemeris DE440 and taking the Keplerian orbit of Taiji as a baseline, we evaluated these variations under the influence of the gravities of Venus, Earth, and Jupiter. Our analysis showed that the variations in the heliocentric formation configuration are primarily influenced by Earth's gravitational perturbations.
Numerical simulations based on ephemeris DE440 show that the contribution of the Earth's gravity to the relative acceleration between spacecrafts is approximately 𝒪(10^-6) m·s^-2. Considering 00:00:00 on 27 October 2032 as the initial orbiting moment as an example, when only the Earth's gravitational perturbation is considered, within six years the maximum amplitude of the relative variation in the armlength is 1.6×10^5 km, the maximum amplitude of the relative variation in the breathing angle is 3.2^∘, and the maximum amplitude of the relative variation in the armlength variation rates is 32 m·s^-1, compared with the unperturbed Keplerian configuration. By adding the gravitational perturbations of Venus and Jupiter, the variations in armlength and relative velocity are reduced by 16.01% and 17.45%, respectively, compared with when only the Earth's gravitational perturbation is considered.

Results showed that the maximum amplitudes of the Taiji constellation armlengths, armlength variation rates, and breathing angles varied with the orbital entry moment. The maximum amplitude of the inter-spacecraft armlength change rate is smallest when the initial orbital moment occurs in July. When the orbital entry moment is 00:00:00 on 1 July 2032, the maximum amplitude of the reduced armlengths is approximately 8×10^4 km, the maximum amplitude of the reduced breathing angles is approximately 1.6^∘, and the maximum armlength variation rate is approximately 18.1 m·s^-1 within six years. We show that when different ephemerides are used for the calculations, the constellation armlengths deviate on the order of 𝒪(1) m, the constellation armlength variation rates deviate on the order of 𝒪(10^-7) m·s^-1, and the relative accelerations deviate on the order of 𝒪(10^-13) m·s^-2 within six years. The differences between ephemerides DE440 and DE430 are smaller than those between DE440 and DE421.

The remainder of the paper is organized as follows: In Section <ref>, we derive the expressions for the positions and velocities of the spacecrafts in the Keplerian configuration. The armlengths, armlength variation rates, inter-spacecraft relative accelerations, and breathing angles are determined. In Section <ref>, we discuss the effect of planetary gravitational perturbations on the heliocentric formation configuration and the effect of different orbit entry moments. In Section <ref>, we compare the calculation results obtained using different ephemerides. In Section <ref>, we present the conclusions.

§ THE KEPLERIAN ORBITAL CONFIGURATION OF TAIJI

§.§ The Keplerian Orbit

Several orbits have been designed for the Taiji heliocentric formation configuration. We adopted the Keplerian orbital configuration model proposed in ref. <cit.> as part of the Taiji program preview, as shown in Figure <ref>. In the Keplerian configuration, the heliocentric reference system ECLIPJ2000 is chosen, the argument of periapsis is denoted as ω, and the longitude of the ascending node is denoted as Ω. When t = 0, spacecraft 1 (SC1) is at the highest point of the orbit and is located in the XZ plane, which determines both ω and Ω of SC1 to be 270^∘ <cit.>. As the three orbits have rotational symmetry about the Z-axis with a rotation angle of 120^∘, the ω values of spacecraft 2 (SC2) and spacecraft 3 (SC3) are the same as that of SC1, ω = 270^∘. SC2 has Ω = 30^∘, and SC3 has Ω = 150^∘. Notably, the orbital semi-major axis of the three spacecrafts is a, the eccentricity is e, and the inclination is ε.
The spacecraft position vectors r_i = (x_i, y_i, z_i) (i = 1, 2, 3) and the spacecraft velocity vectors v_i = (v_xi, v_yi, v_zi) (i = 1, 2, 3) can be expressed as follows:

x_i = a(e + cos E_i)cosε cos[(i-1)2π/3] - a√(1-e^2) sin E_i sin[(i-1)2π/3],
y_i = a(e + cos E_i)cosε sin[(i-1)2π/3] + a√(1-e^2) sin E_i cos[(i-1)2π/3],
z_i = a(e + cos E_i)sinε,

and

v_xi = -a sin E_i cosε cos[(i-1)2π/3] dE_i/dt - a√(1-e^2) cos E_i sin[(i-1)2π/3] dE_i/dt,
v_yi = -a sin E_i cosε sin[(i-1)2π/3] dE_i/dt + a√(1-e^2) cos E_i cos[(i-1)2π/3] dE_i/dt,
v_zi = -a sin E_i sinε dE_i/dt,

where E_i is the eccentric anomaly, which can be obtained from the Kepler equation

E_i + e sin E_i = ω t - (i-1)2π/3,

where ω = √(G(m_s+m_i)/a^3) is the average angular velocity (mean motion) of the spacecraft. In the Taiji program, the semi-major axis is a = 1 AU, the orbital eccentricity is e ≈ 5.789×10^-3, and the orbital inclination ε is given by Equation (<ref>) <cit.>

cosε = (√(3)/3)(√(3) + 2αcosϕ)/(1+e), sinε = (√(3)/3)·2αsinϕ/(1+e),

where ϕ is the angle between the plane of the spacecraft formation and the ecliptic plane, and α is a small parameter for the expansion,

ϕ = π/3 + 5√(3)e/8, α = √(3)/2(√(e^2 + 2e + cos^2ϕ) - cosϕ).

The spacecraft position r_i and velocity v_i vary with time t, as expressed in the above equations.

§.§ Configuration Parameter

The armlength l_ij, armlength variation rate v_ij, relative acceleration between the spacecrafts l̈_ij, and breathing angle β_i of the constellation are as follows:

l_ij = |r_i - r_j|,
l̇_ij = (ṙ_i - ṙ_j)·(r_i - r_j)/l_ij = v_ij,
β_i = arccos[(r_j - r_i)·(r_k - r_i)/(l_ij l_ik)],
l̈_ij = [(r̈_i - r̈_j)·(r_i - r_j) + |ṙ_i - ṙ_j|^2(1 - cos^2θ)]/l_ij,

where θ is the angle between the vector ṙ_i - ṙ_j and the vector r_i - r_j, and i, j, k = 1, 2, 3 are mutually distinct. By defining the reduced armlength as l̃_ij = l_ij - 3×10^6 km and the reduced breathing angle as β̃_i = β_i - 60^∘, we obtain the variation of the Taiji constellation reduced armlengths, armlength variation rates, and reduced breathing angles within six years, as shown in Figures <ref> and <ref>. In the Keplerian orbital configuration, the reduced armlength amplitude is approximately 1×10^4 km, the amplitude of the armlength variation rate is 1.2 m/s, the breathing angle amplitude is -0.24^∘ ∼ 0.24^∘, and the relative acceleration amplitude is approximately 4.5×10^-7 m/s^2.
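For illustration, the following minimal Python sketch evaluates the expressions above: it solves the Kepler equation by Newton iteration and computes an armlength and a breathing angle. It is a reconstruction for illustration only; NumPy, the numerical value of GM_sun, and the neglect of the spacecraft masses in the mean motion are our assumptions, not taken from the paper.

```python
import numpy as np

AU = 1.495978707e11                       # m
a, e = 1.0 * AU, 5.789e-3                 # semi-major axis and eccentricity from the text

# Inclination eps from the small-parameter expansion given above
phi = np.pi / 3 + 5 * np.sqrt(3) * e / 8
alpha = np.sqrt(3) / 2 * (np.sqrt(e**2 + 2 * e + np.cos(phi)**2) - np.cos(phi))
eps = np.arctan2(2 * alpha * np.sin(phi), np.sqrt(3) + 2 * alpha * np.cos(phi))

def kepler_E(M, e, tol=1e-14):
    """Solve E + e*sin(E) = M (aphelion convention used in the text) by Newton iteration."""
    E = np.array(M, dtype=float)
    for _ in range(50):
        dE = (E + e * np.sin(E) - M) / (1 + e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def positions(t, omega):
    """Heliocentric positions of SC1..SC3 at time t [s]; one row per spacecraft."""
    i = np.arange(1, 4)
    u = (i - 1) * 2 * np.pi / 3
    E = kepler_E(omega * t - u, e)
    x = a * (e + np.cos(E)) * np.cos(eps) * np.cos(u) - a * np.sqrt(1 - e**2) * np.sin(E) * np.sin(u)
    y = a * (e + np.cos(E)) * np.cos(eps) * np.sin(u) + a * np.sqrt(1 - e**2) * np.sin(E) * np.cos(u)
    z = a * (e + np.cos(E)) * np.sin(eps)
    return np.stack([x, y, z], axis=-1)

mu_s = 1.32712440018e20                   # GM_sun [m^3 s^-2] (assumed; spacecraft mass neglected)
omega = np.sqrt(mu_s / a**3)              # mean motion

r = positions(86400.0, omega)             # constellation one day after t = 0
l12 = np.linalg.norm(r[0] - r[1])         # armlength l_12
v1, v2 = r[1] - r[0], r[2] - r[0]
beta1 = np.degrees(np.arccos(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))
print(f"l_12 = {l12 / 1e9:.4f} million km, beta_1 = {beta1:.3f} deg")
```

The printed values should stay close to 3 million km and 60 degrees, consistent with the reduced-parameter amplitudes quoted above.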
§ TAIJI HELIOCENTRIC FORMATION CONFIGURATION

§.§ Influence of the Gravitational Field of the Solar System Bodies

As we aim to evaluate the influence of the gravitational fields of celestial bodies on the configuration of the spacecraft formation, we consider only the Newtonian gravitational forces of the Sun and nine other bodies of the solar system: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and the Moon (k = 1, 2, ..., 9). The motion equation of the spacecraft in the heliocentric system ECLIPJ2000 can be expressed as

r̈_i = -μ_s r_i/|r_i|^3 + ∑_k μ_k (r_k - r_i/|r_k - r_i|^3 - r_k/|r_k|^3),

where r_i is the spacecraft position vector, μ_s = GM_s, μ_k = GM_k, s denotes the Sun, and G is the gravitational constant. For this specific formation configuration, the initial mission time is 00:00:00 on 27 October 2032 (heliocentric ecliptic coordinate system), and the initial positions and velocities are presented in Table <ref>. The acceleration imparted by each planet on SC1 is

a_i,k = μ_k/|r_k - r_i|^2.

We obtain the planet positions from the DE440 ephemeris, substitute them into Equation (<ref>), and obtain the result shown in Figure <ref>. The figure shows that the gravitational attractions exerted by the Earth, Venus, and Jupiter on the spacecrafts are greater than those of the other celestial bodies in the solar system and may induce greater variations in the spacecraft formation configuration. The contribution of the gravitational disturbances of the Earth, Venus, and Jupiter to the relative acceleration between spacecrafts obtained by numerical simulation is shown in Figure <ref>. As shown in Figures <ref> and <ref>, the gravitational perturbations of Venus, Earth, and Jupiter cause variations in the spacecraft orbits. The maximum variation is of 𝒪(10^-7) m/s^2. The Earth's gravitational perturbation has the greatest effect on the heliocentric formation configuration, of 𝒪(10^-6) m/s^2. Subsequently, we examine the variations in the spacecraft formation configuration parameters perturbed solely by Earth's gravity.

§.§ Influence of the Gravitational Perturbation of the Earth

Considering the gravitational perturbation of the Earth, the motion equation of the spacecraft can be expressed as

r̈_i = -μ_s r_i/|r_i|^3 + μ_3(r_3 - r_i/|r_3 - r_i|^3 - r_3/|r_3|^3).

Substituting the initial conditions in Table <ref> into Equation (<ref>), the spacecraft position vector r_i(t) can be obtained. Substituting r_i(t) into Equation (<ref>) and comparing with the Keplerian orbits, the result is shown in Figure <ref>. Considering the Earth's gravitational perturbations, the orbits of the spacecrafts in the heliocentric formation deviate from the Kepler orbits. This deviation changes the spacecraft formation configuration and increases the magnitude of the configuration parameters. Specifically, within six years the maximum amplitude of the relative change in armlength between spacecrafts is 1.6×10^5 km, the maximum amplitude of the relative change in relative velocity between spacecrafts is 32 m·s^-1, and the maximum relative change in the breathing angle is 3.2^∘.

§.§ Influence of the Gravitational Perturbation of Venus and Jupiter

To obtain accurate configuration parameter variations, the influence of the gravities of Venus and Jupiter on the spacecraft formation configuration must also be considered. Choosing k = 2, 3, 5 to represent Venus, Earth, and Jupiter in the equation, respectively, the following acceleration is obtained for a single spacecraft in the heliocentric formation configuration:

r̈_i = -μ_s r_i/|r_i|^3 + μ_2(r_2 - r_i/|r_2 - r_i|^3 - r_2/|r_2|^3) + μ_3(r_3 - r_i/|r_3 - r_i|^3 - r_3/|r_3|^3) + μ_5(r_5 - r_i/|r_5 - r_i|^3 - r_5/|r_5|^3).

Substituting the initial conditions in Table <ref> into Equation (<ref>), the spacecraft position vector r_i(t) can be obtained; a numerical sketch of this integration is given below.
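The sketch assumes SciPy's DOP853 Runge–Kutta integrator (in the spirit of the high-order method cited above), approximate GM values, and a placeholder for the ephemeris-interpolated planet positions; it is an illustration, not the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_s = 1.32712440018e20                                   # GM_sun [m^3 s^-2] (assumed)
mu = {"venus": 3.2486e14, "earth": 3.9860e14, "jupiter": 1.2669e17}  # approximate GM [m^3 s^-2]

def planet_pos(name, t):
    """Heliocentric position [m] of a planet at t seconds past the entry epoch.
    Placeholder: in the actual simulation this would be interpolated from DE440."""
    raise NotImplementedError

def rhs(t, y):
    """Right-hand side of the perturbed equation of motion for one spacecraft."""
    r, v = y[:3], y[3:]
    acc = -mu_s * r / np.linalg.norm(r)**3                # solar two-body term
    for name, mu_k in mu.items():                         # k = 2, 3, 5 perturbations
        rk = planet_pos(name, t)
        acc += mu_k * ((rk - r) / np.linalg.norm(rk - r)**3 - rk / np.linalg.norm(rk)**3)
    return np.hstack([v, acc])

# y0 = np.hstack([r0, v0])   # initial state of one spacecraft from Table <ref>
# sol = solve_ivp(rhs, (0.0, 6 * 365.25 * 86400.0), y0, method="DOP853",
#                 rtol=1e-12, atol=1e-3, dense_output=True)
```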
Substituting r_i(t) into Equation (<ref>) and comparing with the perturbation of the spacecraft formation configuration parameters caused exclusively by the Earth's gravity, the results are shown in Figure <ref>. One can see that, in addition to the gravitational perturbation of the Earth, the configuration parameters change further when the gravitational perturbations of Venus and Jupiter are included. The maximum relative variations in inter-spacecraft armlength and relative velocity are 5×10^4 km and 9 m·s^-1, respectively, within six years. The maximum relative variation in the breathing angle is approximately 0.8^∘. However, the relative magnitude of these changes does not accurately reflect the influence of the perturbations of Venus and Jupiter. Figure <ref> therefore compares the changes in the spacecraft formation configuration parameters in three cases: the unperturbed Keplerian configuration, adding only the gravitational perturbation of the Earth, and additionally adding the perturbations of Venus and Jupiter. As shown in Figure <ref>, when considering the gravitational disturbances of Venus and Jupiter in addition to that of the Earth, the reduced armlengths and armlength variation rates decrease by 16.01% and 17.45%, respectively. The findings indicate that when the orbit entry time is set to 00:00:00 on 27 October 2032, the Earth's gravity amplifies the variations in the constellation armlength, armlength change rate, and breathing angle, whereas the gravitational perturbations of Venus and Jupiter reduce the magnitude of the constellation configuration parameters.

§.§ Orbital Entry Moment

The orbital entry moment is an optimization variable that affects the orbit design, and different initial orbiting moments change the initial positions and velocities of the spacecrafts. In this section, different initial positions and velocities are obtained by changing the initial orbital entry moment of the formation, and the changes in the configuration parameters at the different orbital entry moments are analyzed. The specified orbiting time is set as 00:00:00 on the first day of each month in 2032. The simulation assesses the variations in the armlengths, breathing angles, and armlength variation rates over the six years following the different orbiting times, as shown in Figure <ref>. Notably, variations in the maximum amplitudes of the configuration parameters are observed for the different orbital entry moments. Specifically, of all the entry moments in the year, July yields the smallest maximum amplitudes of the armlength and relative velocity between the spacecrafts. Consequently, selecting July as the moment of orbit entry is conducive to extending the mission duration.

Through this sensitivity analysis of the initial orbital entry moment of the heliocentric formation, the orbital entry moment chosen was 00:00:00 on 1 July 2032. The initial positions and velocities of the spacecrafts are presented in Table <ref>. By substituting the parameters in Table <ref> into Equation (<ref>), the position vector r_i(t) of the spacecraft can be determined, which is then used in Equation (<ref>) to calculate the variation in the spacecraft formation configuration parameters, as shown in Figure <ref>. The findings indicate that selecting the orbital entry moment as 00:00:00 on 1 July 2032, while accounting for the gravitational perturbations of Venus, Earth, and Jupiter, results in a maximum amplitude of approximately 8×10^4 km in the reduced armlengths, a maximum armlength variation rate of 18.1 m·s^-1, and a maximum amplitude of approximately 1.6^∘ in the reduced breathing angles over six years.

§ SIMULATION WITH DIFFERENT EPHEMERIDES

The accuracy of the numerical simulation results depends on that of the planetary position data. Differences in planetary orbital positions may occur between ephemerides due to variations in the chosen fitting data and the integration method used <cit.>. These deviations can affect the numerical simulation of the spacecraft configuration parameters. Therefore, the biases in the numerical simulations caused by using different ephemerides as input should be extensively investigated. We obtained planetary position data from the ephemerides DE440, DE430, and DE421.
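As an illustration of how such an ephemeris comparison can be set up, the sketch below reads two DE kernels with the jplephem package and compares the heliocentric position of the Earth; the choice of package, the kernel file names, and the approximate Julian date of the entry epoch are our assumptions.

```python
import numpy as np
from jplephem.spk import SPK

def helio_pos(kernel, jd, body):
    """Heliocentric position [km] of Venus, Earth, or the Jupiter barycenter at Julian date jd."""
    sun = kernel[0, 10].compute(jd)
    if body == "earth":                 # SSB -> EMB plus EMB -> Earth, minus SSB -> Sun
        return kernel[0, 3].compute(jd) + kernel[3, 399].compute(jd) - sun
    return kernel[0, {"venus": 2, "jupiter": 5}[body]].compute(jd) - sun

de440 = SPK.open("de440.bsp")           # kernels assumed to be downloaded from JPL/NAIF
de421 = SPK.open("de421.bsp")

jd0 = 2463532.5                         # approximately 2032-10-27 00:00 (our estimate)
jd = jd0 + np.linspace(0.0, 6 * 365.25, 2000)
diff = np.linalg.norm(helio_pos(de440, jd, "earth") - helio_pos(de421, jd, "earth"), axis=0)
print(f"max |dr_Earth| DE440 vs DE421 over six years: {diff.max() * 1e3:.1f} m")
```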
In the heliocentric equatorial coordinate system J2000, the maximum disparities in the coordinates of Venus, Earth, and Jupiter relative to the solar center of mass across the various ephemerides are presented in Table <ref>. The maximum coordinates of the three planets differ significantly between ephemerides: Venus and Earth differ at the 100 m level, while the difference for Jupiter is on the order of tens of kilometers. Discrepancies in the planetary coordinates between ephemerides cause variations in the forces acting on the spacecrafts. Consequently, these variations may alter the relative acceleration between the spacecrafts, thereby affecting the overall configuration of the formation. The three ephemerides DE421, DE430, and DE440 were used for numerical simulations with an initial entry time of 00:00:00 on 1 July 2032. Figure <ref> shows the differences in the spacecraft armlengths, armlength variation rates, and relative accelerations within six years obtained via numerical simulation using the different ephemerides. Specifically, the difference between DE440 and DE430 is relatively small: approximately 0.5 m in armlength over six years, 1×10^-7 m/s in armlength variation rates, and 3×10^-14 m/s^2 in relative acceleration. Conversely, the difference between DE421 and DE440 is larger: approximately 2.5 m in armlength, 5×10^-7 m/s in armlength variation rates, and 1×10^-13 m/s^2 in relative acceleration within six years. Notably, the period of the differences in the spacecraft configuration parameters obtained via numerical simulation with different ephemerides is approximately one year; the corresponding frequency is thus far below the gravitational wave frequency band targeted by the Taiji project.

§ CONCLUSIONS

Using the DE series of ephemerides, we investigated the influence of celestial gravitational perturbations on the heliocentric formation configuration. Specifically, the influence of the major planets in the solar system on the orbital stability of the Taiji space gravitational wave detector was investigated. Ephemeris DE440 was used to interpolate the planetary orbital positions during the mission, and a high-order Runge–Kutta numerical integration method was used to integrate the equations of motion under planetary gravity. The trajectory of each spacecraft in the heliocentric formation configuration under the influence of the perturbations was numerically simulated. In our investigation, to streamline the model, solar system bodies are regarded as point masses, with ancillary factors such as their geometric characteristics and rotational dynamics being omitted. These aspects will be scrutinized in forthcoming research. Results showed that the gravitational acceleration affecting an individual spacecraft within the formation due to planetary gravity is approximately 10^-7 m·s^-2, and the relative acceleration between spacecrafts due to planetary gravity is approximately 10^-6 m·s^-2. The variation of the heliocentric formation configuration is mainly affected by the gravity of the Earth.
Considering 00:00:00 on 27 October 2032 as the initial orbit entry moment as an example, when only the Earth's gravitational perturbation is considered, the maximum amplitude of the relative change in the armlength within six years was 1.6×10^5 km, the maximum amplitude of the relative change in the breathing angle was 3.2^∘, and the maximum amplitude of the relative change in the armlength variation rates was 32 m·s^-1. Further, we discussed the changes in the spacecraft formation configuration when the gravitational perturbations of Venus and Jupiter are added. The results showed that, compared with the case where only the Earth's gravitational perturbation was considered, after the gravitational perturbations of Venus and Jupiter were added, within six years the maximum amplitude of the relative change in armlength was approximately 5×10^4 km, the maximum amplitude of the relative change in the breathing angle was approximately 0.8^∘, and the maximum amplitude of the relative change in the armlength variation rates was 9 m/s. After adding the gravitational perturbations of Venus and Jupiter, the variations in armlength and relative velocity were reduced by 16.01% and 17.45%, respectively, compared with when only the Earth's gravitational perturbation was considered.

Variations in the maximum amplitudes of the formation configuration parameters were observed when entering orbit at different times. The smallest increase in the armlength variation rates between the spacecrafts occurred when the initial orbiting time was in July. Consequently, selecting July as the initial orbital insertion time extends the experiment duration. Considering 00:00:00 on 1 July 2032 as the initial orbit entry moment, under the gravitational perturbations of Venus, Earth, and Jupiter, and using the DE440 ephemeris, the maximum amplitude of the reduced armlength was 8×10^4 km, the maximum amplitude of the reduced breathing angle was 1.6^∘, and the maximum armlength variation rate was 18.1 m·s^-1 within six years.

Different ephemeris data were also used to perform the numerical simulations. By comparison, the maximum disparities in the orbital positions of Venus and Jupiter were 300 m and 60 km, respectively. The armlength deviation over the six-year simulation period was approximately 1 m, the deviation in the armlength variation rates was on the order of 𝒪(10^-7) m·s^-1, and the deviation in the relative accelerations was approximately 1×10^-13 m·s^-2. The differences between ephemerides DE440 and DE430 are smaller than those between DE440 and DE421. Nonetheless, the impact of discrepancies in high-frequency bands induced by factors such as planetary rotation remains unexplored and warrants consideration in future studies.

GW150914 Abbott, B.P.; Abbott, R.; Abbott, T.; Abernathy, M.R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.X.; et al. Observation of Gravitational Waves from a Binary Black Hole Merger. Phys. Rev. Lett. 2016, 116, 061102. GW170608 Abbott, B.P.; Abbott, R.; Abbott, T.D.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.X.; Adya, V.B.; et al. GW170608: Observation of a 19-solar-mass Binary Black Hole Coalescence. Astrophys. J. Lett. 2017, 851, L35. GW170814 Abbott, B.P.; et al. GW170814: A Three-Detector Observation of Gravitational Waves from a Binary Black Hole Coalescence. Phys. Rev. Lett. 2017, 119, 141101. GWO1 Punturo, M.; Abernathy, M.; Acernese, F.; Allen, B.; Andersson, N.; Arun, K.; Barone, F.; Barr, B.; Barsuglia, M.; Beker, M.; et al. The Einstein Telescope: A third-generation gravitational wave observatory. Class.
Quant. Grav. 2010, 27, 194002. GWO2 Amaro-Seoane, P.; Aoudia, S.; Babak, S.; Binetruy, P.; Berti, E.; Bohe, A.; Caprini, C.; Colpi, M.; Cornish, N.J.; Danzmann, K.; et al. Low-frequency gravitational-wave science with eLISA/NGO. Class. Quant. Grav. 2012, 29, 124016. GWO3 Punturo, M.; Abernathy, M.; Acernese, F.; Allen, B.; Andersson, N.; Arun, K.; Barone, F.; Barr, B.; Barsuglia, M.; Beker, M.; et al. The third generation of gravitational wave observatories and their science reach. Class. Quant. Grav. 2010, 27, 084007. LISA_conclusion Xie, X.; Jiang, F.; Li, J. Design and optimization of stable initial heliocentric formation on the example of LISA. Adv. Space Res. 2023, 71, 420–438. JOFFRE20213868 Joffre, E.; Wealthy, D.; Fernandez, I.; Trenkel, C.; Voigt, P.; Ziegler, T.; Martens, W. LISA: Heliocentric formation design for the laser interferometer space antenna mission. Adv. Space Res. 2021, 67, 3868–3879. LISA_formation2 Dhurandhar, S.V.; Nayak, K.R.; Koshti, S.; Vinet, J.Y. Fundamentals of the LISA stable flight formation. Class. Quant. Grav. 2005, 22, 481–488. LISAPathfinder1 McNamara, P.; Vitale, S.; Danzmann, K. LISA Pathfinder. Class. Quant. Grav. 2008, 25, 114034. LISAPathfinder2 Armano, M.; Audley, H.; Auger, G.; Baird, J.T.; Bassan, M.; Binetruy, P.; Born, M.; Bortoluzzi, D.; Brandt, N.; Caleno, M.; et al. Sub-Femto-g Free Fall for Space-Based Gravitational Wave Observatories: LISA Pathfinder Results. Phys. Rev. Lett. 2016, 116, 231101. LISAPathfinder3 Anza, S.; Armano, M.; Balaguer, E.; Benedetti, M.; Boatella, C.; Bosetti, P.; Bortoluzzi, D.; Brandt, N.; Braxmaier, C.; Caldwell, M.; et al. The LTP experiment on the LISA Pathfinder mission. Class. Quant. Grav. 2005, 22, S125–S138. LISAPathfinder4 Armano, M.; Bortoluzzi, D.; Hoyle, C.D.; Vitale, S. Gravitational compensation for the LISA pathfinder. Class. Quant. Grav. 2005, 22, S501–S507. bayle2022overview Bayle, J.; Bonga, B.; Caprini, C.; Doneva, D.; Muratore, M.; Petiteau, A.; Rossi, E.; Shao, L. Overview and progress on the laser interferometer space antenna mission. Nat. Astron. 2022, 6, 1334–1338. Hu:2017mde Hu, W.-R.; Wu, Y.-L. The Taiji Program in Space for gravitational wave physics and the nature of gravity. Natl. Sci. Rev. 2017, 4, 685–686. WYL Wu, Y.L. Space gravitational wave detection in China. In Presentation to 1st eLISA Consortium Meeting; APC-Paris; ESA: Paris, France, 2012. taiji2 Gong, Y.; Luo, J.; Wang, B. Concepts and status of Chinese space gravitational wave detection projects. Nat. Astron. 2021, 5, 881–889. Luo:2019zal Luo, Z.; Guo, Z.; Jin, G.; Wu, Y.; Hu, W. A brief analysis to Taiji: Science and technology. Results Phys. 2020, 16, 102918. taijiangle20 Cai, R.G.; Guo, Z.K.; Hu, B.; Liu, C.; Lu, Y.; Ni, W.T.; Ruan, W.H.; Seto, N.; Wang, G.; Wu, Y.L. On networks of space-based gravitational-wave detectors. Fundam. Res. 2023, 5. https://doi.org/10.1016/j.fmre.2023.10.007. LISA:2017pwj Amaro-Seoane, P.; Audley, H.; Babak, S.; Baker, J.; Barausse, E.; Bender, P.; Berti, E.; Binetruy, P.; Born, M.; Bortoluzzi, D.; et al. Laser Interferometer Space Antenna. arXiv 2017, arXiv:1702.00786. armlength Martens, W.; Joffre, E. Trajectory Design for the ESA LISA Mission. J. Astronaut. Sci. 2021, 68, 402–443. TDI Tinto, M.; Dhurandhar, S.V. Time-delay interferometry. Living Rev. Rel. 2021, 24, 1. Otto2015TimedelayIS Otto, M. Time-Delay Interferometry Simulations for the Laser Interferometer Space Antenna. 2015. https://doi.org/10.15488/8545. Pucacco_2010 Pucacco, G.; Bassan, M.; Visco, M.
Autonomous perturbations of LISA orbits. Class. Quant. Grav. 2010, 27, 235001. Halloin_2017 Halloin, H. Optimizing orbits for (e)LISA. J. Phys. Conf. Ser. 2017, 840, 012048. Taiji_constraint Qiao, D.; Jia, F.; Li, X.; Zhou, X. A Review of Orbital Mechanics for Space-Based Gravitational Wave Observatories. Space Sci. Technol. 2023, 265, 0015. ruan2020lisa Ruan, W.H.; Liu, C.; Guo, Z.K.; Wu, Y.L.; Cai, R.G. The LISA–Taiji network. Nat. Astron. 2020, 4, 108–109. Taiji_analysis Wu, B.; Huang, C.G.; Qiao, C.F. Analytical analysis on the orbits of Taiji spacecrafts. Phys. Rev. D 2019, 100, 122001. formation Nayak, K.R.; Koshti, S.; Dhurandhar, S.V.; Vinet, J.Y. On the minimum flexing of LISA's arms. Class. Quant. Grav. 2006, 23, 1763–1800. Uncertainties_of_ephemeris Folkner, W.M. Uncertainties in the JPL planetary ephemeris. 2011. https://api.semanticscholar.org/CorpusID:220827130. DE440 Park, R.S.; Folkner, W.M.; Williams, J.G.; Boggs, D.H. The JPL planetary and lunar ephemerides DE440 and DE441. Astron. J. 2021, 161, 105. DE430 Folkner, W.M.; Williams, J.G.; Boggs, D.H.; Park, R.S.; Kuchynka, P. The planetary and lunar ephemerides DE430 and DE431. The Interplanetary Network Progress Report 2014, 42-196. DE421 Folkner, W.M.; Williams, J.G.; Boggs, D.H. The planetary and lunar ephemeris DE 421. The Interplanetary Network Progress Report 2009, 42, 1–34. DOP853 Hairer, E.; Norsett, S.; Wanner, G. Solving Ordinary Differential Equations I: Nonstiff Problems. SIAM Rev. 1990, 32, 485–486.
http://arxiv.org/abs/2405.09263v1
20240515112619
Polydisperse versus monodisperse microbubbles: A simulation study for contrast-enhanced ultrasound imaging
[ "Agisilaos Matalliotakis", "Martin D. Verweij" ]
physics.med-ph
[ "physics.med-ph", "physics.comp-ph" ]
Department of Imaging Physics, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, the Netherlands Department of Imaging Physics, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, the Netherlands Department of Biomedical Engineering, Erasmus University Medical Center, 3000 CA Rotterdam, the Netherlands Corresponding author: M.D.Verweij@tudelft.nl

Objective: Contrast-enhanced ultrasound (CEUS) presents distinct advantages in diagnostic echography. Utilizing microbubbles (MBs) as conventional contrast agents enhances vascular visualization and organ perfusion, facilitating real-time, non-invasive procedures. There is a current tendency to replace the traditional polydisperse MBs with novel monodisperse formulations in an attempt to optimize contrast enhancement and guarantee consistent behavior and reliable imaging outcomes. This study investigates the contrast enhancement achieved by monodisperse MBs of different sizes, and their influence on the nonlinear imaging artifacts observed in traditional CEUS. Methods: To explore the differences between monodisperse and polydisperse populations without excessive experimentation, numerical simulations are employed for delivering precise, objective, and expeditious results. The Iterative Nonlinear Contrast Source (INCS) method has previously demonstrated its efficacy in simulating ultrasound propagation in large populations in which each bubble has individual properties and several orders of multiple scattering are significant. Therefore, this method is employed to realistically simulate both monodisperse and polydisperse MBs. Results: Our findings in CEUS imaging indicate that scattering from resonant monodisperse microbubbles is 11.8 dB stronger than scattering from the polydisperse population. Furthermore, the amplitude of the nonlinear imaging artifacts downstream of the monodisperse population is 19.4 dB stronger compared with the polydisperse suspension. Conclusion: Investigating the impact of multiple scattering on polydisperse populations compared to various monodisperse suspensions reveals that monodisperse MBs are more effective contrast agents, especially when on resonance. Despite the strong signal-to-noise ratio of monodisperse populations, the imaging artifacts due to nonlinear wave propagation are also enhanced, resulting in more misclassification of tissue as MBs.

Polydisperse versus monodisperse microbubbles: A simulation study for contrast-enhanced ultrasound imaging

A. Matalliotakis and M.D. Verweij

§ INTRODUCTION

Achieving superior deep tissue imaging of blood vessels with ultrasound remains a challenge in medical diagnostics. Contrast-enhanced imaging, particularly using MBs, has emerged as a promising solution <cit.>. These gas-filled microspheres, stabilized with a lipid or protein shell, enhance blood contrast for improved organ and lesion visualization. MBs, characterized by small size, biocompatibility, and vascular navigability, resonate in the ultrasound frequency range (1-10 MHz). Their efficient sound scattering in both fundamental and harmonic modes, driven by the substantial acoustic impedance difference with their surroundings and their highly nonlinear oscillatory behavior <cit.>, enhances image quality. As ultrasound waves propagate through a resonant MB suspension, they undergo nonlinear distortion due to nonlinear MB scattering influenced by size, shell characteristics, ultrasound pressure and frequency <cit.>.
Because of these properties, MBs are also efficient contrast agents in various applications besides CEUS, such as ultrasound localization microscopy <cit.>. As a drawback, wave distortion extends beyond an MB suspension, and this leads to the misidentification of tissue as MBs, diminishing the specificity of CEUS imaging <cit.>. Narrowing the size distribution of the MB population might be a way to provide improved acoustic scattering, reduce imaging artifacts, and enhance scattering homogeneity. Historically, polydisperse MBs with varying size distributions (typical radii 0.5 to 15 μm) have been standard in ultrasound contrast imaging <cit.>. Recent technological breakthroughs have introduced the possibility of using monodisperse, i.e. uniformly sized, MBs <cit.>. Studies highlight the superiority of monodisperse MBs <cit.>, offering enhanced predictability, improved acoustic performance, and clearer imaging signals <cit.>. Nevertheless, we think that it is important to shed more light on the effect of monodisperse MBs as contrast agents for deep vessel imaging, especially regarding the generation of clearer echoes and the reduction of imaging artifacts.

The use of computational tools is an efficient way to perform comprehensive investigations without performing extensive measurements. Initially, studies focused on the collective behavior of bubbly media for marine applications <cit.>. Effective medium theory facilitated 1D computational studies for both monodisperse <cit.> and polydisperse <cit.> MB suspensions in medical ultrasound, including high intensity focused ultrasound (HIFU) <cit.>. Previous models successfully captured nonlinear ultrasound propagation through uniform MB distributions in two dimensions using iterative schemes <cit.>. Challenges arise when coupling the nonlinear dynamics of multiple MBs in realistic 3D simulations, due to the complexity of the coupled Rayleigh-Plesset equation <cit.>. Another difficulty arises when the number of polydisperse MBs is small and the use of averaged quantities becomes questionable. Various computational methods have been explored to understand the dynamics of polydisperse and monodisperse MB populations. Among these, the INCS method has demonstrated efficacy in simulating bubble cloud behavior in a three-dimensional domain, enabling the generation and comparison of echoes produced by dense monodisperse MB populations while considering multiple scattering <cit.>. This is crucial for optimizing CEUS applications and reducing the need for excessive experimentation.

The aim of this numerical study is to investigate the efficacy of monodisperse and polydisperse populations when used as contrast agents for deep tissue imaging. More precisely, this article discusses the extension of the INCS method to simulate the behaviour of a population of polydisperse scatterers. Furthermore, the effectiveness of the extended INCS method is illustrated by simulating the multiple scattering occurring inside a population of polydisperse MBs, each with individual properties represented by its own Marmottant model <cit.>. INCS is based on an iterative scheme for computing the scattered acoustic signals <cit.>. Numerically, the accuracy of the final result is improved after each iteration. In a physical sense, each iteration adds an extra order of multiple scattering corresponding to an additional path of wave propagation.
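The structure of such an iterative scheme can be sketched generically as follows. This is our own schematic of a Neumann-type iteration, not the actual INCS implementation; the Green's function operator and the contrast source evaluation are problem-specific stand-ins.

```python
import numpy as np

def incs_iterate(p0, apply_green, contrast_source, n_iter=10, tol=1e-6):
    """Neumann-type scheme: p(j) = p(0) + G[S_cs(p(j-1))].
    apply_green:     applies the Green's function of the linear background medium,
    contrast_source: evaluates the contrast source term for a given field estimate.
    Each pass adds one extra order of multiple scattering to the solution."""
    p_prev = p0
    for _ in range(n_iter):
        p = p0 + apply_green(contrast_source(p_prev))
        rrmse = np.linalg.norm(p - p_prev) / np.linalg.norm(p0)
        p_prev = p
        if rrmse < tol:        # remaining scattering orders are insignificant
            break
    return p_prev
```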
First, in Section <ref>, the fundamental theory behind the INCS method is explained, followed by its extension with the introduction of polydisperse point scatterers. In Section <ref>, the configurations for the numerical experiments are discussed. Next, in Section <ref>, the results from the numerical simulations for each test case are presented. Concluding remarks are given in Section <ref>.

§ INCLUSION OF A POLYDISPERSE MB POPULATION

§.§ Linear Field

The linear pressure field generated by an external source in a linear, lossless, homogeneous acoustic medium is described by the wave equation

c^-2_0 ∂^2 p(x,t)/∂ t^2 - ∇^2 p(x,t) = S_pr(x,t).

Here, x [m] is the Cartesian position vector, and t [s] is the time. The symbol p(x,t) [Pa] indicates the acoustic pressure, c_0 = 1/√(ρ_0κ_0) [m/s] is the small signal sound speed in the background medium, where ρ_0 [kg·m^-3] is the mass density and κ_0 [Pa^-1] is the compressibility. The Laplacian operator ∇^2 generates the sum of the second order spatial derivatives. The acoustic field is generated by the primary source term S_pr, which can for example describe a jump condition for either the velocity or the pressure. These jump conditions can be used to represent a source with a plane aperture, e.g. a phased array transducer.

§.§ Nonlinear field due to contrast agents

In medical ultrasound, nonlinearities arising from contrast media can have a significant impact on the propagation of the acoustic signals. To incorporate any phenomena that affect the pressure field, it is possible to extend Eq. (<ref>) with a contrast source term S_cs:

c^-2_0 ∂^2 p/∂ t^2 - ∇^2 p = S_pr + S_cs(p).

With this approach, multiple contrast sources can be accommodated that represent global nonlinear effects <cit.>, attenuation <cit.>, inhomogeneous medium properties <cit.>, or local nonlinear effects <cit.>. In contrast-enhanced imaging, the nonlinear oscillatory behavior of the MBs influences the pressure field. To include the contribution of a population of N MBs, each will be described as a point scatterer, and the source term will be written as <cit.>

S_cs(x,t) = ∑_i=1^N S_MB_i = ρ_0 ∑_i=1^N [d^2 V^(i)(x_sc^(i),t)/dt^2] δ(x - x_sc^(i)),

where V^(i) is the volume of the ith MB, x_sc^(i) is the position vector of its center, and δ is the Dirac delta distribution. Each scatterer's volume depends on the bubble radius R as a function of time, which in our case will be calculated by solving the Marmottant equation <cit.>. In the case of a population of monodisperse MBs, the rest radius R_0 is the same for all the scatterers, whereas for a polydisperse distribution, each scatterer has its own rest radius R_0^(i).

§ CONFIGURATIONS USED IN THE SIMULATIONS

§.§ Simulation of pressure fields

§.§.§ Incident field and contrast domain

In this study, we consider the computational domain and the domain for the contrast media as depicted in Fig. <ref>(a). This configuration is used in Secs. <ref> and <ref> for the INCS validation and the comparison between different populations, respectively. A computational domain with dimensions X×Y×Z = 20 mm × 20 mm × 30 mm is used. The scatterers are placed in a domain with dimensions X×Y×Z = 15 mm × 15 mm × 4.4444 mm, resulting in a 1 ml volume. These configuration choices are made to simplify the comparison between polydisperse and monodisperse populations. The incident pressure field is a plane wave generated at z = 0 and propagating in the positive z-direction. A plane wave is used to let all the scatterers experience the same incident pressure.
The temporal signature of the incident pressure is

p(t) = P_0 exp[-((t - T_d)/(T_w/2))^2] sin[2π f_0 (t - T_d)],

where T_w = 3/f_0 is the width and T_d = 6/f_0 is the delay of a Gaussian envelope with a duration of 12/f_0, and f_0 = 1 MHz is the center frequency. The peak pressure is P_0 = 200 kPa. The scatterers are embedded in water with a density of ρ = 1060 kg/m^3 and a speed of sound of c_0 = 1482 m/s. In the considered situations, water has negligible losses, and nonlinear effects will be hardly noticeable. Therefore, we assume that the embedding medium is lossless and linear. A sampling frequency of 18 MHz was used as the basis for the discretization of the spatiotemporal domain <cit.>.

§.§.§ Configuration for validation

To validate INCS, we compare our results with those following from effective medium theory. The analytical expressions that describe the effective behavior of a population of isotropic linear scatterers (LSs) are derived from Foldy <cit.>. The same approach has been used in a previous publication for a monodisperse population of scatterers <cit.>, but here we consider a polydisperse population. For the INCS implementation, we assume that the contrast source term for each LS is given by

S_sc(x,t) = -f(R_0) [V_0 ρ_0/(ρ_1 c_1^2)] ∂^2 p(x_sc,t)/∂ t^2 δ(x - x_sc),

where R_0 is the rest radius, V_0 is the initial volume, and f(R_0) is the polydispersity coefficient given by

f(R_0) = k/(R_0/R_0,ref)^γ.

Here, k is a constant to adjust the scattering strength if necessary, and γ is the polydispersity scale parameter that controls the scattering distribution of the population. In the case of a plane wave excitation as in Eq. (<ref>), the scattered pressure is given by

p_sc(x,ω) = f(R_0) [V_0 ρ_0/(ρ_1 c_1^2)] ω^2 p(ω) e^-ikr/(4π r) = g(R_0,ω) p(ω) e^-ikr/r,

where g(R_0,ω) is the scattering strength of an individual LS, and r is the distance from the scatterer. We follow this approach in order to match the variables as defined previously by Foldy <cit.>. In this study we assume k = 0.6, γ = 2 and R_0,ref = 1 μm, resulting in a scattering strength that is a linear function of the (fictitious) radius of a scatterer. Thus, there will not be extremely large differences in the scattering strength within the polydisperse population. For the polydisperse populations considered in this paper, the number density of the microbubbles varies with the rest radius R_0 according to the gamma distribution

n(R_0) = (N/V) R_0^(α-1) e^-R_0/b/(b^α Γ(α)),

where N is the total number of scatterers, and V is the volume in which the homogeneous population resides. Furthermore, α and b are the shape and scale parameters, and Γ is the gamma function <cit.>. In our case we take α = 2.24 and b = 1.23 μm. The total density of the microbubbles with radii between R_0,min and R_0,max is given by

n_tot = ∫_R_0,min^R_0,max n(R_0) dR_0,

where R_0,min and R_0,max are the minimum and maximum radii considered to be present within the polydisperse distribution. For the given parameters α and b, virtually all microbubbles are taken into account if we take R_0,min = 0.5 μm and R_0,max = 15 μm.
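To make these definitions concrete, the sketch below draws rest radii from the truncated gamma distribution and builds the incident pulse; it is an illustration under our own assumptions (NumPy and rejection sampling), not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
alpha_g, b = 2.24, 1.23            # gamma shape and scale [um], as in the text
R_min, R_max = 0.5, 15.0           # truncation radii [um]

def sample_rest_radii(n):
    """Draw n rest radii R0 [um] from the gamma distribution, truncated to [R_min, R_max]."""
    radii = np.empty(0)
    while radii.size < n:
        draw = rng.gamma(alpha_g, b, size=2 * n)
        radii = np.concatenate([radii, draw[(draw >= R_min) & (draw <= R_max)]])
    return radii[:n]

# Incident pulse of the equation above: Gaussian-windowed sine burst
f0, P0 = 1e6, 200e3                # center frequency [Hz] and peak pressure [Pa]
Tw, Td = 3 / f0, 6 / f0
t = np.arange(0.0, 12 / f0, 1 / 18e6)   # 18 MHz sampling over the 12/f0 pulse duration
p_inc = P0 * np.exp(-((t - Td) / (Tw / 2))**2) * np.sin(2 * np.pi * f0 * (t - Td))
```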
§.§.§ Types of monodisperse and polydisperse suspensions

To compare the efficiency of monodisperse and polydisperse MB populations, we take into account four distinct populations:

* A monodisperse population of MBs with a rest radius R_0=4 μm and a resonance frequency f_res=0.8 MHz (below the center excitation frequency);
* A monodisperse population of MBs with a rest radius R_0=3.2 μm and a resonance frequency f_res=1 MHz (at the center excitation frequency);
* A monodisperse population of MBs with a rest radius R_0=1 μm and a resonance frequency f_res=3.9 MHz (above the center excitation frequency);
* A polydisperse population of MBs with rest radii between R_0,min=0.5 μm and R_0,max=15 μm, distributed as described in Section <ref>, corresponding to resonance frequencies between f_res=0.3 MHz and 10 MHz (a number of MBs will be near resonance, others will be above or below resonance).

In our simulations, we use high driving pressures to activate the nonlinear oscillatory behaviour of the MBs, and therefore the contribution of the shell stiffness becomes unimportant. As a result, the resonance frequency of the MBs shifts towards the resonance of an uncoated bubble <cit.>. Thus, we have approximated the resonance frequency by the eigenfrequency <cit.>

f_res = (1/2π R_0)√((1/ρ_0)[3γ P_amb + (3γ-1)2σ_w/R_0]),

where R_0 is the initial radius of the MB, γ=1.07 is the polytropic exponent of the gas encapsulated in the bubble, and P_amb=101.3 kPa is the static ambient pressure. The center excitation frequency f_0=1 MHz corresponds to the resonance frequency of an uncoated MB of radius R_0=3.2 μm. For solving the Marmottant equation <cit.>, we further use the gas core viscosity μ=2×10^-3 Pa·s, the effective surface tension σ(R)=0.036 N/m, the shell elasticity χ=0.4 N/m, and the surface tension of the gas-water interface σ_w=0.072 N/m <cit.>. The shell viscosity is given by κ_s=1.5×10^-9 exp(8×10^5 R_0) <cit.>. With these parameters, the oscillatory behavior and the frequency spectrum of a single MB excited with a driving pressure P_0=200 kPa and a center frequency f_0=1 MHz are depicted in Fig. <ref>.

§.§ Simulation of CEUS imaging

To actually see the difference between monodisperse and polydisperse populations for contrast-enhanced imaging, it is necessary to visualize the reconstructed beamformed images obtained from the scattered radio frequency (RF) data generated by a realistic configuration. To mimic tissue with an enclosed vessel, we distribute LSs around a cylindrical population of MBs, as depicted in Fig. <ref>(b). We need to take into account all the relevant phenomena that occur during the propagation of ultrasound through the populations of scatterers inside the water background medium. Based on this, the full nonlinear wave equation is given by

c^-2_0 ∂^2_t p - ∇^2 p = S_pr + S_MBs(p) + S_LSs(p) + S_nl(p) + S_ℒ(p),

where S_MBs is the contrast source term for the MB population <cit.>, S_LSs is the contrast source term for the LS population <cit.>, and S_nl and S_ℒ are the terms for global <cit.> and local medium nonlinearities <cit.>, respectively. Equation (<ref>) is solved iteratively using a Neumann scheme, as described in previous publications <cit.>. The incident pressure field is computed for a P4-1 probe (Verasonics, Washington, USA). The transducer elements have a height of H_el = 16 mm, a width of W_el = 0.245 mm, and a pitch of D_tr = 0.295 mm. The transmitted pulse is given by Eq.
(<ref>), with center frequency f_0=2.5 MHz and a peak pressure at the transducer surface of P_0=200 kPa, to activate the nonlinear behavior of the monodisperse MBs. Next, the domain of the MB population is a cylinder with center (x, y, z)=(0, 0, 22.5) mm, a diameter of 5 mm, and a length of 10 mm, as illustrated in Fig. <ref>(b). This corresponds to a total volume of 0.2 ml. The domain of LSs surrounding the MBs is a cuboid of X×Y×Z = 8 mm × 10 mm × 12 mm, corresponding to a volume of 0.76 ml centered at (x, y, z)=(0, 0, 24) mm, as depicted in Fig. <ref>(b). Furthermore, the background medium is water, with a coefficient of nonlinearity of β=3.21.

To accurately solve the full nonlinear wave equation up to the second harmonic frequency (h=2) of the incident pressure pulse, we need a Nyquist frequency of at least F_nyq = (h+1.5)f_0 = 3.5 f_0. To also safely capture the higher harmonics of the MB scattering, we used F_nyq = 5 f_0 = 12.5 MHz. Thus, the sampling frequency used for discretizing the spatiotemporal domain is F_s = 2F_nyq = 25 MHz. Furthermore, we need at least j=h+1=3 iterations for an accurate prediction of the second harmonic <cit.>. We take j=10 iterations to ensure that the relative root mean square error between successive iterations is below 10^-6. This also implies that our simulations account for MB interactions up to ninth order multiple scattering <cit.>. We compare CEUS imaging with two different microbubble populations:

* A resonant monodisperse population of MBs with a rest radius R_0=1.4 μm and a resonance frequency f_res=2.5 MHz (at the center excitation frequency);
* A polydisperse population of MBs with rest radii between R_0=0.5 μm and 15 μm, distributed as described in Section <ref>, and resonance frequencies between f_res=0.3 MHz and 10 MHz.

Each LS has a scattering strength which can be computed through Eq. (<ref>), with a polydispersity coefficient f=1. For the beamforming process, we use the MUST toolbox <cit.> after employing the amplitude modulation (AM) technique and a virtual point source formulation as described by Garcia et al. <cit.>.

§ NUMERICAL RESULTS

§.§ Comparison of INCS and effective medium theory

In this section, we assume that there are N=10^6 microbubbles located in the V=1 ml volume indicated in Fig. <ref>. The suspension has a type 4 polydisperse distribution, as described in Section <ref>. The total gas volume corresponds to 7.41×10^-6 ml. It is assumed that the gas inside the bubbles is C_4F_10, with a density ρ_1 = 10 kg/m^3 and a speed of sound c_1 = 100 m/s. As we want to perform a simplified comparison with effective medium theory, we do not take into account the resonance frequency and the nonlinear behavior of the microbubbles. Instead, we assume that each bubble can be described by its scattering behavior as described in Eqs. (<ref>) to (<ref>). In other words, we are only interested in the scattered signal of each point scatterer. The maximum of the incident pressure, P_0 = 200 kPa, will not affect the final result because we operate in the linear regime. According to Foldy's theory <cit.>, the effect of a polydisperse population of scatterers is represented by replacing the wave number k_0 in the scattering domain by a corrected wave number k according to

k^2 = k_0^2 + 4π ∫_R_0,min^R_0,max g(R_0,ω) n(R_0) dR_0,

where g(R_0,ω) [m] is derived from Eq. (<ref>), and n(R_0) is computed through Eq. (<ref>).
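A small numerical check of this relation is sketched below: it evaluates the integral for the parameters given above and converts the corrected wavenumber into an effective wave speed and an extra traversal delay. SciPy's quad routine and the slab thickness are our own choices for this illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as gamma_fn

# Parameters from the text
k_c, gam, R_ref = 0.6, 2.0, 1e-6        # f(R0) = k / (R0 / R_ref)^gamma
alpha_g, b = 2.24, 1.23e-6              # gamma-distribution shape and scale [m]
N, V = 1e6, 1e-6                        # 10^6 scatterers in 1 ml
rho0, c0 = 1060.0, 1482.0               # background medium
rho1, c1 = 10.0, 100.0                  # C4F10 gas
omega = 2 * np.pi * 1e6                 # angular excitation frequency
L = 4.4444e-3                           # thickness of the scattering slab [m]

def g(R0):                              # scattering strength g(R0, omega) of one LS
    V0 = 4.0 / 3.0 * np.pi * R0**3
    return (k_c / (R0 / R_ref)**gam) * V0 * rho0 / (rho1 * c1**2) * omega**2 / (4 * np.pi)

def n(R0):                              # gamma-distributed number density per unit radius
    return N / V * R0**(alpha_g - 1) * np.exp(-R0 / b) / (b**alpha_g * gamma_fn(alpha_g))

integral, _ = quad(lambda R: g(R) * n(R), 0.5e-6, 15e-6)
k0 = omega / c0
k_eff = np.sqrt(k0**2 + 4 * np.pi * integral)
c_eff = omega / k_eff
print(f"integral = {integral:.3e} m^-2, c_eff = {c_eff:.1f} m/s, "
      f"extra delay = {(L / c_eff - L / c0) * 1e6:.3f} us")   # text: ~2.3e5 m^-2, 1375.5 m/s
```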
The shift in wavenumber corresponds to a shift in wave speed and, as a consequence, to a time shift of the wave that has traversed the scattering domain. In the case considered in this subsection, the integral amounts to 2.3×10^5 m^-2. This yields a wave speed of 1375.5 m/s in the scattering domain, while the speed in the medium without scatterers is 1482 m/s. Since the scattering domain has a length of 4.4444 mm, the additional time delay caused by the scattering domain, as predicted by the theory of Foldy, is Δt_Foldy = 0.228 μs. We have also determined the time delay between the incident wave p^(0) and the wave with all significant orders of scattering p^(8) from Fig. <ref>(b), by looking at the shift in the zero crossings around 13 μs. This is found to be Δt_INCS = 0.232 μs. Thus, the difference between the time delay predicted by the theory of Foldy and that of our method is only 1.75%. Furthermore, since the wavenumber derived from Eq. (<ref>) lacks an imaginary component in our specific case, according to Foldy's theory <cit.>, the wave traversing the scattering domain is not subject to attenuation. As illustrated in Fig. <ref>(b), in our approach the later iterations correct the larger amplitudes observed in earlier iterations, and iteration p^(8) has the same amplitude as the incident field p^(0). This consistency in both time delay and wave amplitude across a scattering domain indicates a good quantitative agreement between our method and Foldy's effective medium theory in the case of a polydisperse distribution of scatterers.

§.§ Plane wave: monodisperse vs polydisperse

We continue with a comparison between the four different populations of MBs mentioned in Sec. <ref>. To start, our reference is the type 2 monodisperse resonant population, for which we use 35,000 MBs, resulting in a total gas volume of 4.6×10^-6 ml. To achieve a fair comparison, the total gas volume concentration of the MB suspension should be the same in all cases <cit.>. Therefore, the type 1 monodisperse population contains 17,500 MBs, the type 3 monodisperse population consists of 10^5 MBs, and the type 4 polydisperse population counts 20,000 MBs. The bubble populations are placed in the volume V=1 ml as indicated in Fig. <ref>(a).

§.§.§ Scattered pressure field: Full spectrum

The scattered pressure field in each case is depicted in Fig. <ref>. At first sight, the scattered pressure generated by the resonant MBs (type 2, R_0=3.2 μm) is the strongest among all the cases, with a peak pressure of +1.1 dB relative to the peak incident pressure P_0. Next, the case below resonance (type 1, R_0=4 μm) follows with a relative peak amplitude of -1.13 dB. Although these MBs have a resonance frequency that is still close to the excitation frequency, their peak amplitude is significantly smaller than that of the resonant MBs. The third case, with a relative peak amplitude of -1.45 dB, is the one above resonance (type 3, R_0=1 μm), and the last one is the polydisperse distribution (type 4), with a peak pressure of -5.47 dB. These results demonstrate that the closer the resonance frequency is to the excitation frequency, the stronger the scattered pressure field, with the scattering of the resonant contrast agents being the highest. Another observation is that the beam profile is smoother if the bubbles are smaller. This is because more MBs are necessary to achieve the same gas volume concentration, and a higher number of scatterers yields a smoother beam profile of the scattered field.
Finally, as the incident wave propagates through each MB population, it undergoes attenuation and speed of sound variations, resulting in a shift in the resonance frequency of the MBs.

§.§.§ Scattered pressure field: Harmonics

In this section we look at the different harmonics of the excitation pulse that are present in the scattered pressure field. These are obtained by decomposing the scattered signal into specific frequency bands using a 4th-order Butterworth filter. These frequency bands are (i) the fundamental F0 [0.7, 1.3] MHz, (ii) the second harmonic 2H [1.7, 2.3] MHz, and (iii) the third harmonic 3H [2.7, 3.3] MHz, where the intervals define the cutoff frequencies of the applied filter. Figure <ref> shows the harmonic contributions of the scattered pressure field for each of the considered populations.

In the fundamental (F0) frequency band (top row of Fig. <ref>), we observe that the strongest scattered field is generated by the type 2 resonant MB suspension, with a peak amplitude of -1.23 dB. The type 1 population with the below-resonance oscillating MBs has the second highest peak pressure of -2.61 dB, as its resonance frequency is closer to the excitation frequency than in the two remaining cases. A significant observation is that the scattered field from the type 4 polydisperse population has a peak amplitude of -7.1 dB and is stronger than that of the type 3 above-resonance MBs, which have a peak pressure of -8.74 dB. This can be explained by the presence of MBs with a resonance frequency around 1 MHz in the polydisperse suspension.

In the second harmonic (2H) frequency band (middle row of Fig. <ref>), we observe that the scattered field of the type 2 resonant MBs is still the highest of all four distinct cases. The peak amplitude in this case is -10.3 dB. The peak pressure of the type 3 above-resonance oscillating MBs is -12.09 dB, which is larger than the respective value of -15.82 dB of the type 1 population with the below-resonance oscillating MBs. This is because the resonance frequency of the former is closer to the 2H frequency band around 2 MHz. The type 4 polydisperse distribution shows the weakest peak pressure amplitude of -18.89 dB. Compared to the monodisperse populations, hardly any constructive interference is observed below the polydisperse suspension, due to the varying phases of the oscillations that result from the different sizes of the contrast bubbles.

Finally, for the third harmonic (3H) frequency band (bottom row of Fig. <ref>), the type 3 above-resonance MBs exhibit the strongest scattered pressure field, with a peak amplitude of -17.49 dB, as their resonance frequency of 3.9 MHz is closest to the 3H frequency band. Still, the type 2 resonant MBs scatter the second highest pressure field, with a peak amplitude of -18.49 dB. Inside the MB suspension, the type 4 polydisperse MBs give a peak pressure of -22.66 dB. This is stronger than the peak of the pressure field of the type 1 below-resonance oscillating MBs (-25.91 dB), because the smaller MBs with a resonance frequency close to 3 MHz add to the strong scattering of the larger MBs. Similar to the 2H band, the type 4 polydisperse MBs hardly yield constructive interference below the suspension, in contrast to the type 1, 2 and 3 monodisperse MBs. This observation suggests that the uniformity of the size distribution of a population has an impact on the nonlinear imaging artifacts downstream of the population.
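A sketch of this band decomposition is given below; the zero-phase filtfilt filtering is our own choice (the paper specifies only the filter order and bands), so it is an illustration rather than the exact processing used.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def harmonic_band(signal, fs, band):
    """Extract one harmonic band with a 4th-order Butterworth bandpass filter."""
    b, a = butter(4, np.asarray(band) / (fs / 2), btype="bandpass")
    return filtfilt(b, a, signal)      # zero-phase filtering (our assumption)

fs = 18e6                               # sampling frequency of the simulations [Hz]
bands = {"F0": (0.7e6, 1.3e6), "2H": (1.7e6, 2.3e6), "3H": (2.7e6, 3.3e6)}
# components = {name: harmonic_band(p_sc, fs, band) for name, band in bands.items()}
```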
The cumulative scattered pressure field is the sum of the signals emitted by all the MBs in the population, taking into account their individual positions and therefore all phase delays. A simplified view is to linearly project the behavior of a single MB onto the behavior of a whole population of MBs. Indeed, the simulated pressure fields of the populations show behavior similar to the projected response of the single MB in Fig. <ref>.

§.§.§ Total pressure field: Attenuation and speed of sound variations

To show the influence of the nonlinear microbubble behavior on a propagating pressure wave, in Fig. <ref> we show the temporal signatures and the respective frequency spectra after traversing each type of MB population. From Fig. <ref>(a) it is clear that the type 2 monodisperse resonant population (black line) causes the most nonlinear distortion. The distortion takes place mainly after the second cycle, as the MBs need to attain a large oscillation amplitude before they demonstrate significant nonlinear behavior. The influence of the nonlinear bubble oscillations on the propagation through each of the other three populations is much less visible in the time domain. By observing the frequency spectra in Fig. <ref>(b), we can better see the effect of the nonlinear bubble behavior. Similar to Sec. <ref>, the type 2 population of monodisperse resonant MBs shows a shift of energy from the fundamental to the second and higher harmonics. Furthermore, the maximum spectral amplitude of the fundamental is about equal for the other types of populations. The type 3 population of monodisperse above-resonance MBs shows a strong second harmonic, and the highest third harmonic of all the populations, even higher than for the type 2 population.

To quantify the attenuation and speed of sound changes in the fundamental frequency band, we have subjected the temporal signatures in Fig. <ref>(a) to a Butterworth filter of 8th order with a frequency pass band of [0.75, 1.25] MHz. The results are plotted in Fig. <ref>. For the type 1 population of bubbles that are below resonance, there is a decrease in peak pressure of 92.2 kPa relative to the incident field, and the speed of sound has increased to 1517 m/s. For the type 2 population with resonant bubbles, the peak pressure undergoes a drop of 126.9 kPa, and the speed of sound has remained at 1482 m/s. For the type 3 population of bubbles that are above resonance, the peak pressure experiences a drop of 19.9 kPa, and the speed of sound has decreased to 1458 m/s. Finally, for the type 4 polydisperse population, the decay in peak pressure is 44.8 kPa, and the speed of sound has increased to 1497 m/s. We observe that the differences for the type 3 microbubbles are the smallest of all the populations, because their strongest effect is mainly on the second harmonic. As in previous studies <cit.>, the INCS simulations demonstrate that for MBs with a resonance frequency below the excitation frequency there is an increase of the wave speed, whereas for a resonance frequency above the excitation frequency there is a decrease of the wave speed. Finally, for MBs with a resonance frequency equal to the excitation frequency, the wave speed is equal to the speed of sound of the background medium.
§.§.§ Total pressure field: Convergence behavior To quantify the numerical performance of our scheme, we analyzed the difference between successive iterations using the Relative Root Mean Square Error (RRMSE), RRMSE = √( ∫_𝒳_𝒸𝒹∫_𝒯_𝒸𝒹 [p^(j)(𝐱,t) - p^(j-1)(𝐱,t)]^2 dt d𝐱 / ∫_𝒳_𝒸𝒹∫_𝒯_𝒸𝒹 [p^(0)(𝐱,t)]^2 dt d𝐱 ), where 𝒳_𝒸𝒹 is the spatial computational domain, 𝒯_𝒸𝒹 is the temporal computational domain, j is the iteration number, and p^(j) is the total pressure obtained in the jth iteration. The decay of the RRMSE is illustrated in Fig. <ref> as a function of the number of iterations. A first observation is that after a certain number of successive iterations, the error tends to stabilize at a level of 10^-5 or below. At this point, it can be inferred that incorporating additional multiple scattering orders will not yield further enhancements to the solution, indicating that the remaining scattering orders are insignificant. Upon reaching this stage, it is assumed that the iterative process has converged to the lowest achievable error. For the type 2 monodisperse resonant MBs, it turns out that the initial iterations even show an RRMSE above 1. This indicates that the first multiple scattering orders are highly significant. Moreover, for these MBs more iterations are needed to reach convergence, and therefore more multiple-scattering orders should be included to achieve an accurate result. A general observation is that the closer the resonance frequency of the population is to the excitation frequency, the more iterations need to be taken into account. This can be explained by the fact that stronger close-range interactions occur in populations with resonant MBs due to the stronger scattering strength, making higher scattering orders more important. For the type 3 above-resonance monodisperse MBs, the RRMSE of the initial iterations is also above 1. This is due to the larger number of scatterers that is needed to achieve the same gas volume concentration, which corresponds to a higher number of bubble-bubble interactions at short distances. Finally, the type 4 polydisperse MBs yield a faster convergence (in the 13th iteration) than every type of monodisperse suspension, demonstrating the relative significance of multiple scattering for the monodisperse populations. §.§ CEUS imaging §.§.§ Scattered pressure fields In this section we compare the nonlinear scattering coming from suspensions of type 5 resonant monodisperse MBs and type 6 polydisperse MBs when these are surrounded by linear scatterers, as illustrated in Fig. <ref>(b). To resemble an in vivo setting and match the gas volume concentration, for the type 5 suspension we use a concentration of 5× 10^5 ml^-1 MBs with 1.4 μm rest radius, corresponding to a total gas volume of 5.8×10^-6 ml. For the type 6 suspension, we use 3.1× 10^4 ml^-1 MBs, corresponding to the same total gas volume. First, the total pressure fields in these configurations are computed for three different excitations: field p_1 is due to a double amplitude excitation (full aperture), and the fields p_2 and p_3 result from two single amplitude excitations (odd elements and even elements, respectively). After employing the AM procedure, the peak residual AM pressures are as shown in Fig. <ref>. For the monodisperse case in Fig. <ref>(a), nonlinear effects accumulate in the suspension and propagate into the area below the population. The peak AM residual pressure is -3.7 dB relative to the pressure at the source surface P_0.
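For reference, the AM residual itself is a one-line operation; a minimal sketch with hypothetical input arrays (all sampled on a common time axis) reads:

```python
import numpy as np

def am_residual(p1, p2, p3):
    """Amplitude-modulation residual p1 - (p2 + p3); it vanishes for
    perfectly linear propagation and scattering."""
    return p1 - (p2 + p3)

def peak_db(p, p_ref):
    """Peak amplitude of a field p in dB relative to a reference pressure."""
    return 20 * np.log10(np.max(np.abs(p)) / p_ref)
```

For instance, peak_db(am_residual(p1, p2, p3), P0), with P0 the source-surface pressure, yields residual levels of the kind quoted in this section.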
For the polydisperse case in Fig. <ref>(b), on the other hand, the residual pressure field shows a relative peak amplitude of -19.9 dB, which is 6.5 times smaller than the respective value for the monodisperse suspension. Most MBs in the polydisperse suspension are less efficient scatterers than the MBs in the resonant monodisperse population and, more importantly, bubbles with different sizes will cause nonlinear scattering with different phases. Therefore, the nonlinearities due to scattering do not propagate outside the MB domain. These results indicate that in CEUS the nonlinear wave propagation artifacts will be stronger for a resonant monodisperse population than for a polydisperse population. To demonstrate what this means for the AM imaging process, in Fig. <ref> we compare the time signatures of the double amplitude pulse p_1, the sum p_2+p_3 of the two single amplitude pulses, and the AM residual p_1 - (p_2+p_3), for both the type 5 monodisperse and the type 6 polydisperse case. We depict the temporal signatures for the center of the aperture of the linear array. In Fig. <ref>(a), the AM residual of the monodisperse population is a strong signal with a peak pressure of 1.5 kPa, compared to 2.11 kPa for the incident double excitation field. The sum of the two single amplitude signals matches the waveform of the double amplitude signal only in the beginning of the pulse, which corresponds to the scattering of the LSs that are present above the MB suspension. The AM residual signal is stronger at the end of the pulses, which reflects the propagation of the nonlinear scattering of the MBs to the LSs that are located below the MB suspension. In contrast, Fig. <ref>(b) shows that for the polydisperse case the peak pressure of the AM residual is 0.35 kPa, which is 4.3 times smaller than the respective value of the type 5 monodisperse population. Moreover, the sum of the single amplitude signals overlaps with the double amplitude signal, both in the beginning (scattering from the LSs above the MB suspension) and in the end (scattering from the LSs below the MB suspension) of the pulses. This indicates that the nonlinear scattering that propagates below the polydisperse MB suspension is relatively small. §.§.§ Effect of size distribution on imaging artifacts To assess the imaging effects of the nonlinear fields below each MB population, it is necessary to generate the reconstructed B-mode (single-shot) images and the images that are obtained after employing the AM procedure. The results are depicted in Fig. <ref>. To achieve this, we placed 7× 10^5 ml^-1 tissue-mimicking linear scatterers (grey) around the MB suspension. Figures <ref>(a) and (b) depict the B-mode images for the configuration with a resonant monodisperse MB population and a polydisperse population, respectively. In both cases the backscattering from the tissue-mimicking LSs and the MBs is indistinguishable, because the areas with LSs and MBs have a similar echogenicity, independent of the size distribution. This demonstrates that B-mode imaging does not allow disentangling nonlinear MB scattering from tissue-mimicking scattering. Figures <ref>(c) and (d) show the AM images for the configuration with a resonant monodisperse MB population and a polydisperse population, respectively. Employing the AM sequence for imaging a monodisperse MB population generates an image with significant nonlinear artifacts below the MB area, meaning that tissue scatterers get misclassified as MBs.
On the contrary, applying the AM sequence for imaging the polydisperse population delivers an image with much higher specificity. The peak amplitude in the image of the monodisperse area (0 dB) is stronger than in the image of the polydisperse area (-11.8 dB). The peak value of the nonlinear artifact level is -10.04 dB for the monodisperse population and -29.4 dB for the polydisperse population. This is an indication that monodisperse MBs are more efficient scatterers than polydisperse populations, especially in applications that require enhanced deep-tissue imaging. A drawback of CEUS with monodisperse MBs is that the artifacts generated by the propagation of nonlinear scattering into the area below the MBs are of comparable magnitude and can lead to misclassification of tissue as contrast agents. § CONCLUSIONS We simulated AM ultrasound imaging of both monodisperse and polydisperse MBs using the INCS method, taking into account all the relevant physical phenomena occurring during ultrasound propagation through a MB population. We highlighted the significance of multiple scattering in monodisperse populations. Resonant monodisperse MBs are shown to be the most efficient scatterers, which corresponds to a high sensitivity for CEUS. This property is crucial for optimizing contrast enhancement, guaranteeing consistent behavior and reliable imaging outcomes, especially compared to using polydisperse contrast agents. The drawback of resonant monodisperse MBs is the generation of imaging artifacts, which reduce the specificity of CEUS. This research approach is useful for optimizing CEUS imaging by designing the size distribution and parameters of a MB population through simulations. § ACKNOWLEDGMENTS This research was supported by the project "Optoacoustic sensor and ultrasonic MBs for dosimetry in proton therapy" of the Dutch National Research Agenda, which is partly financed by the Dutch Research Council (NWO). The authors thank N. de Jong for his involvement in this research. [Averkiou2020] M. Averkiou, M. Bruce, J. Powers, P. Sheeran, and P. Burns, "Imaging methods for ultrasound contrast agents," Ultrasound Med. Biol. 46(3), 498–517 (2020). https://doi.org/10.1016/j.ultrasmedbio.2019.11.004 [Versluis2020] M. Versluis, E. Stride, G. Lajoinie, B. Dollet, and T. Segers, "Ultrasound contrast agent modeling: A review," Ultrasound Med. Biol. 46(9), 2117–2144 (2020). https://doi.org/10.1016/j.ultrasmedbio.2020.04.014 [deJong2002] N. de Jong, A. Bouakaz, and P. Frinking, "Basic acoustic properties of microbubbles," Echocardiography 19, 229–240 (2002). https://doi.org/10.1046/j.1540-8175.2002.00229.x [Marmottant2005] P. Marmottant, S. van der Meer, M. Emmer, M. Versluis, N. de Jong, S. Hilgenfeldt, and D. Lohse, "A model for large amplitude oscillations of coated bubbles accounting for buckling and rupture," J. Acoust. Soc. Am. 118(6), 3499–3506 (2005). https://doi.org/10.1121/1.2109427 [Emmer2009] M. Emmer, H.J. Vos, D.E. Goertz, A. van Wamel, M. Versluis, and N. de Jong, "Pressure-dependent attenuation and scattering of phospholipid-coated microbubbles at low acoustic pressures," Ultrasound Med. Biol. 35(1), 102–111 (2009). https://doi.org/10.1016/j.ultrasmedbio.2008.07.005 [Sojahrood2015] A. Sojahrood, O.
Falou, R. Earl, R. Karshafian, and M. Kolios, "Influence of the pressure-dependent resonance frequency on the bifurcation structure and backscattered pressure of ultrasound contrast agents: a numerical investigation," Nonlinear Dyn. 80, 889–904 (2015). https://doi.org/10.1007/s11071-015-1914-7 [Sojahrood2023] A.J. Sojahrood, Q. Li, H. Haghi, R. Karshafian, T.M. Porter, and M.C. Kolios, "Probing the pressure dependence of sound speed and attenuation in bubbly media: Experimental observations, a theoretical model and numerical calculations," Ultrason. Sonochem. 95, 106319 (2023). https://doi.org/10.1016/j.ultsonch.2023.106319 [Errico2015] C. Errico, J. Pierre, S. Pezet, Y. Desailly, Z. Lenkei, O. Couture, and M. Tanter, "Ultrafast ultrasound localization microscopy for deep super-resolution vascular imaging," Nature 527, 499–502 (2015). https://doi.org/10.1038/nature16066 [Tang2006] M. Tang and R. Eckersley, "Nonlinear propagation of ultrasound through microbubble contrast agents and implications for imaging," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 53(12), 2406–2415 (2006). https://doi.org/10.1109/TUFFC.2006.189 [Lindner2004] J.R. Lindner, "Microbubbles in medical imaging: current applications and future directions," Nat. Rev. Drug Discov. 3(6), 527–532 (2004). https://doi.org/10.1038/nrd1417 [Frinking2020] P. Frinking, T. Segers, Y. Luan, and G. Tranquart, "Three decades of ultrasound contrast agents: A review of the past, present and future improvements," Ultrasound Med. Biol. 46(4), 892–908 (2020). https://doi.org/10.1016/j.ultrasmedbio.2019.12.008 [Segers2016] T. Segers, N. de Jong, and M. Versluis, "Uniform scattering and attenuation of acoustically sorted ultrasound contrast agents: Modeling and experiments," J. Acoust. Soc. Am. 140(4), 2506–2517 (2016). https://doi.org/10.1121/1.4964270 [Helbert2020] A. Helbert, E. Gaud, T. Segers, C. Botteron, P. Frinking, and V. Jeannot, "Monodisperse versus polydisperse ultrasound contrast agents: In vivo sensitivity and safety in rat and pig," Ultrasound Med. Biol. 46(12), 3339–3352 (2020). https://doi.org/10.1016/j.ultrasmedbio.2020.07.031 [Segers2018] T. Segers, P. Kruizinga, M. Kok, G. Lajoinie, N. de Jong, and M. Versluis, "Monodisperse versus polydisperse ultrasound contrast agents: Non-linear response, sensitivity, and deep tissue imaging potential," Ultrasound Med. Biol. 44(7), 1482–1492 (2018). https://doi.org/10.1016/j.ultrasmedbio.2018.03.019 [vanElburg2023] B. van Elburg, J. Deprez, M. van den Broek, S. De Smedt, M. Versluis, G. Lajoinie, I. Lentacker, and T. Segers, "Dependence of sonoporation efficiency on microbubble size: An in vitro monodisperse microbubble study," J. Control. Release 363, 747–755 (2023). https://doi.org/10.1016/j.jconrel.2023.09.047 [Foldy1945] L.L. Foldy, "The multiple scattering of waves," Phys. Rev. 67, 107–119 (1945). https://doi.org/10.1103/PhysRev.67.107 [Foldy1947] E.L. Carstensen and L.L.
Foldy, "Propagation of sound through a liquid containing bubbles," J. Acoust. Soc. Am. 19, 481–501 (1947). https://doi.org/10.1121/1.1916508 [Stride2005] E. Stride and N. Saffari, "Investigating the significance of multiple scattering in ultrasound contrast agent particle populations," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 52, 2332–2345 (2005). https://doi.org/10.1109/tuffc.2005.1563278 [Hibbs2007] K. Hibbs, J. Mari, E. Stride, R. Eckersley, A. Noble, and M. Tang, "Nonlinear propagation of ultrasound through microbubble clouds: A novel numerical implementation," 2007 IEEE Ultrasonics Symposium Proceedings (2007). https://doi.org/10.1109/ULTSYM.2007.502 [Ando2011] K. Ando, T. Colonius, and C. Brennen, "Numerical simulation of shock propagation in a polydisperse bubbly liquid," Int. J. Multiph. Flow 37(6), 596–608 (2011). https://doi.org/10.1016/j.ijmultiphaseflow.2011.03.007 [Ovenden2017] N. Ovenden, J. O'Brien, and E. Stride, "Ultrasound propagation through dilute polydisperse microbubble suspensions," J. Acoust. Soc. Am. 142(3), 1236–1248 (2017). https://doi.org/10.1121/1.4998574 [Vanhille2019] Vanhille and H. Hynynen, "Numerical simulations of the nonlinear interaction of a bubble cloud and a high intensity focused ultrasound field," Acoustics 1, 825–836 (2019). https://doi.org/10.3390/acoustics1040049 [Pinton2009] G. Pinton, J. Dahl, S. Rosenzweig, and G. Trahey, "A heterogeneous nonlinear attenuating full-wave model of ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 56, 474–488 (2009). https://doi.org/10.1109/TUFFC.2009.1066 [Joshi2017] A. Joshi, B. Lindsey, P. Dayton, G. Pinton, and M. Muller, "An iterative fullwave simulation approach to multiple scattering in media with randomly distributed microbubbles," Phys. Med. Biol. 62, 4202–4217 (2017). https://doi.org/10.1088/1361-6560/aa6523 [Haghi2019] H. Haghi, A.J. Sojahrood, and M.C. Kolios, "Collective nonlinear behavior of interacting polydisperse microbubble clusters," Ultrason. Sonochem. 58, 104708 (2019). https://doi.org/10.1016/j.ultsonch.2019.104708 [Bubble_Cloud] A. Matalliotakis and M.D. Verweij, "Computation of ultrasound propagation in a population of nonlinearly oscillating microbubbles including multiple scattering," J. Acoust. Soc. Am. 153(4), 2209–2222 (2023). https://doi.org/10.1121/10.0017770 [Koos_Thesis] J. Huijssen, "Modeling of nonlinear medical diagnostic ultrasound," Ph.D. thesis, Delft University of Technology (2008). http://resolver.tudelft.nl/uuid:3a01d973-d125-430f-82e2-fb83cc9239fb [Huijssen2010] J. Huijssen and M.D. Verweij, "An iterative method for the computation of nonlinear, wide-angle, pulsed acoustic fields of medical diagnostic transducers," J. Acoust. Soc. Am. 127(1), 33–44 (2010). https://doi.org/10.1121/1.3268599 [Libe_Thesis] L. Demi, "Modeling nonlinear propagation of ultrasound through inhomogeneous biomedical media," Ph.D. thesis, Delft University of Technology (2013). https://repository.tudelft.nl/islandora/object/uuid:01b3942b-ffaa-4a27-be64-ea00f292bf5f/datastream/OBJ/download [Libe2009] L. Demi, M.D. Verweij, J. Huijssen, N. de Jong, and K.W.A.
van Dongen, "Attenuation of ultrasound pressure fields described via a contrast source formulation," Proceedings of the 2009 IEEE Ultrasonics Symposium, 1590–1593 (2009). https://doi.org/10.1109/ULTSYM.2009.5441906 [Libe2011] L. Demi, K.W.A. van Dongen, and M.D. Verweij, "A contrast source method for nonlinear acoustic wave fields in media with spatially inhomogeneous attenuation," J. Acoust. Soc. Am. 129(3), 1221–1230 (2011). https://doi.org/10.1121/1.3543986 [LocalNL2023] A. Matalliotakis, D. Maresca, and M.D. Verweij, "Nonlinear interaction of two cross-propagating plane waves," arXiv (2023). https://doi.org/10.48550/arXiv.2312.00445 [Gamma2013] M. Abramowitz and I. Stegun, Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables, Dover Books on Mathematics (2013 printing of the 1972 edition). https://personal.math.ubc.ca/~cbm/aands/abramowitz_and_stegun.pdf [Overvelde2010] M. Overvelde, V. Garbin, J. Sijl, B. Dollet, N. de Jong, D. Lohse, and M. Versluis, "Nonlinear shell behavior of phospholipid-coated microbubbles," Ultrasound Med. Biol. 36(12), 2080–2092 (2010). https://doi.org/10.1016/j.ultrasmedbio.2010.08.015 [Segers2018a] T. Segers, E. Gaud, M. Versluis, and P. Frinking, "High-precision acoustic measurements of the nonlinear dilatational elasticity of phospholipid coated monodisperse microbubbles," Soft Matter 14, 9550–9561 (2018). https://doi.org/10.1039/C8SM00918J [XwaveMBs2023] A. Matalliotakis, R. Waasdorp, M.D. Verweij, and D. Maresca, "Impact of wavefront shape on nonlinear ultrasound imaging of monodisperse microbubbles," arXiv (2023). https://doi.org/10.48550/arXiv.2403.01452 [MUSTToolbox] D. Garcia, "Make the most of MUST, an open-source MATLAB UltraSound Toolbox," IEEE International Ultrasonics Symposium (2021). https://doi.org/10.1109/IUS52206.2021.9593605 [DAS2021] V. Perrot, M. Polichetti, F. Varray, and D. Garcia, "So you think you can DAS? A viewpoint on delay-and-sum beamforming," Ultrasonics 111, 106309 (2021). https://doi.org/10.1016/j.ultras.2020.106309
http://arxiv.org/abs/2405.08860v1
20240514180000
Unearthing the intersections: positivity bounds, weak gravity conjecture, and asymptotic safety landscapes from photon-graviton flows
[ "Benjamin Knorr", "Alessia Platania" ]
hep-th
[ "hep-th", "gr-qc", "hep-ph" ]
§ INTRODUCTION One of the most challenging open problems in theoretical physics is to understand the gravitational interaction at the smallest distance scales. At Planckian scales, we expect gravity to shed off its classical behavior and display quantum properties, similar to other gauge and matter fields. Different approaches to quantum gravity (QG) try to describe the phenomena at these scales starting from a variety of fundamental ideas and based on seemingly different frameworks. Fundamental research in each of these theories entails the investigation of their ultraviolet (UV) details and, ideally, the use of top-down strategies to derive predictions from scratch. Effective field theory (EFT), on the other hand, is a powerful mathematical formalism to describe physical phenomena involving particles and fields within a given energy range — usually below a given cutoff scale where new physics becomes relevant. It serves as a pragmatic approach for modeling complex systems while incorporating the effects of higher-energy degrees of freedom through systematic expansions. The challenge lies in bridging the gap between these two frameworks: QG, which governs the behavior of spacetime and gravitational interactions at the Planck scale, and EFT, which describes gravity-matter systems at much lower energy scales. One way to connect QG to EFT is through decoupling <cit.>. This involves constructing EFT that capture the low-energy dynamics of gravity while incorporating the effects of quantum fluctuations at shorter distances. By integrating out the high-energy modes, one can derive EFT that manifest as low-energy approximations to the underlying QG theory. A paradigmatic example of this mechanism is realized within the asymptotic safety (AS) program for QG <cit.>, which builds on the framework of quantum field theory (QFT), and on the idea that gravity could be UV-complete with respect to an interacting fixed point of the gravitational renormalization group (RG) flow. Within the AS approach, much effort has been put into corroborating the existence of this fixed point <cit.>, as well as into assessing its unitarity <cit.> and its compatibility with matter <cit.>. Although RG trajectories have been found that connect the fixed point with the general relativity (GR) regime, and despite the natural embedding of the decoupling mechanism in AS, a systematic study of the QG-EFT map in AS is missing. Similarly, most of the other QG approaches have devoted their main focus to understanding the UV details of the theory, often in isolation from matter. The swampland program <cit.> plays a crucial role in this endeavor, by providing constraints and guiding principles to construct EFT that arise from consistent UV completions of gravity. At its core, the swampland program seeks to identify the theoretical constraints — formulated in terms of a set of conjectures — that any consistent quantum theory of gravity must satisfy. Its name, “swampland”, metaphorically reflects the idea that not all EFT can arise from a consistent UV theory. The program aims to delineate which theoretical frameworks belong to the swampland of inconsistent theories, and which ones originate from fundamental theories. The latter set identifies the “landscape” of consistent EFT. Whether such a landscape solely identifies consistent EFT stemming from string theory (ST), or more generally the set of EFT generated by any consistent QG theory, is an open question.
According to the String Lamppost Principle <cit.>, all such EFT coming from consistent QG theories must have a stringy origin. Although the swampland program has sparked intense debate in the past years due to its use of conjectures, it has also inspired numerous research efforts. Its implications extend beyond the confines of ST, influencing broader discussions in cosmology, particle physics, and beyond <cit.>. In particular, despite the swampland program having emerged in the context of ST, the general idea of selecting and constraining the set of EFT compatible with and stemming from different UV completions of gravity can <cit.>, and should, be extended to other approaches to QG. Deriving the landscapes of different approaches could indeed allow for (i) a more efficient comparison with constraints derived from EFT <cit.>, and (ii) a more direct and clean dictionary to compare the predictions of different QG approaches <cit.> and frameworks, which in the UV hardly talk to each other. Constructing the QG-EFT map and generalizing the notion of landscape to other QG theories <cit.> would additionally allow one to investigate the validity of swampland conjectures beyond string models <cit.>, and thus to test the String Lamppost Principle <cit.>. In a similar direction, analyzing the intersections between different QG landscapes could reveal non-trivial connections between theories. For instance, if the AS landscape were to lie inside the string landscape, or if the two had a non-trivial intersection <cit.>, this could indicate that AS is realized in the form of “effective AS” <cit.>, i.e., as a low-energy approximation of ST <cit.>. This overarching idea, which generalizes the big picture of the swampland program, is illustrated in <Ref>. In this work, we take key steps in the realization of this program, by extending the investigations of <cit.> to a more sophisticated system and a full-fledged non-perturbative RG computation, and by comparing the AS landscape not only with swampland conjectures — specifically, the weak gravity conjecture (WGC) <cit.> — but also, for the first time, with positivity bounds <cit.>. Concretely, we will focus on a photon-graviton system at fourth order in a derivative expansion, including only couplings that cannot be removed by a local field redefinition. Explicitly, the dynamics is encoded in the effective action Γ = ∫ d^4x √(g) [ R/(16π G_N) + G_𝔈 𝔈 - ℱ + G_ℱ² ℱ² + G_𝒢 𝒢 + G_CFF C^μνρσ F_μν F_ρσ ] , with ℱ = (1/4) F^μν F_μν , 𝒢 = (1/4) F^μ_ν F^ν_ρ F^ρ_σ F^σ_μ , 𝔈 = R_μνρσ R^μνρσ - 4 R_μν R^μν + R^2 . Here, F_μν is the Abelian field strength tensor and C_μνρσ is the Weyl tensor. Along the lines of <cit.>, we compute the beta functions and determine the AS landscape as the set of EFT — parameterized by the relevant dimensionless Wilson coefficients — that are connected to UV-complete RG trajectories. We will then compare the Wilson coefficients in the landscape with standard positivity bounds <cit.> and with a family of entropy-based positivity constraints <cit.>, which contains the WGC for black holes in the presence of higher derivatives <cit.> as a particular case. Our system has two viable gravitational UV fixed points, hence the AS landscape consists of two sub-landscapes. One fixed point comes with a single relevant direction, so that, once the scale of QG is fixed, the resulting sub-landscape is a zero-parameter theory: a single point in the space of dimensionless Wilson coefficients. The second fixed point has two relevant directions, so that the corresponding sub-landscape is a line.
In particular, this line is nearly straight. The straight approximation only ceases to work in a small region where the sub-landscape of the second fixed point bends to continuously connect to the single-point sub-landscape coming from the most predictive fixed point. Globally, the entire AS landscape falls onto a plane — a feature already observed in a previous work <cit.>. Whether this is a coincidence or a universal feature of AS is to be assessed by more extensive studies. In agreement with general expectations from EFT <cit.>, we find that Planck-scale suppressed violations of weak gravity and positivity constraints can occur across the landscape. We will show that such violations are minimized by the most predictive sub-landscape. Finally, our work also highlights the special role played by the Euler coupling in AS: while it is attached to a topological invariant and is thus generally unconstrained by the RG flow <cit.>, it can enter off-shell positivity constraints such as those based on black hole entropy <cit.>. Such positivity bounds can thus entail constraints on the Euler coupling, rather than tests of AS. Our paper is organized as follows. In <Ref>, we discuss the concepts that are important to this work in more detail: landscapes, positivity bounds, and the weak gravity conjecture. <Ref> contains our results on the fixed point structure, and an in-depth discussion of the ensuing landscapes. With these results in hand, in <Ref> and <Ref> we confront our results with positivity, entropy, and weak gravity bounds. Finally, in <Ref> we summarize and discuss our results. The two appendices contain some more details about our technical setup and an analytical result. § LANDSCAPES, POSITIVITY BOUNDS, AND THE WEAK GRAVITY CONJECTURE This section introduces the basic concepts that we will need and use throughout the manuscript. This includes the notion of landscapes in QG and an overview of the weak gravity and positivity bounds that we will intersect with the AS landscape. §.§ Landscapes in String Theory and Asymptotic Safety The concept of landscape was first introduced in the context of ST <cit.>, to indicate the set of infrared (IR) EFT that admit a consistent UV completion in the presence of gravity. The swampland is the complement of this set — that is, the set of EFT that cannot be UV-completed when coupled to gravity. The difficulties in tracing the set of theories belonging to the string landscape via top-down derivations have led to the idea at the core of the swampland program <cit.>: to determine the conditions or criteria that precisely select the EFT belonging to the landscape. Such conditions are better known as “swampland conjectures”, and stem from recurring patterns in stringy constructions, black hole physics, and the interrelations between different conjectures. In the following, we will assume that swampland conjectures precisely identify the string landscape. While the concept of landscape — seen as the set of EFT stemming from consistent gravity-matter UV completions — is universal, the strategy to determine it may depend on the specific approach to QG. For a given approach to QG, the goal is to predict the set of allowed Wilson coefficients in the corresponding low-energy EFT. The first generalization of the concept of the QG landscape beyond ST has been put forth in <cit.>, in the context of AS <cit.>. As argued in <cit.>, the AS version of the string landscape is the set of Wilson coefficients in the effective action stemming from asymptotically safe RG trajectories, cf.
<Ref>. If more than one UV fixed point exists, we will define the landscape as the union of the “sub-landscapes” identified by each viable UV fixed point. These sub-landscapes are in general not contiguous, but can never overlap: they will typically be disjoint sets. It is noteworthy that the concept of landscape in AS is amenable to explicit computation. The functional renormalization group (FRG) <cit.> (see <Ref> for a brief review) is typically used in the field of AS to corroborate the existence of interacting fixed points of the RG flow, i.e., possible UV completions of the theory. In a nutshell, the idea of the FRG is to perform the path integral in the Wilsonian spirit by integrating out modes above an IR cutoff scale k. All couplings thus acquire an RG-scale dependence, and in the limit k→0 — corresponding to all fluctuations having been integrated out — the standard effective action Γ is obtained. As a consequence, the FRG also provides a clear recipe to compute the Wilson coefficients belonging to the landscape <cit.>: they are the IR (k→0) limits of the FRG-running couplings G_i(k), for all RG trajectories departing from a given fixed point. We shall return to this point in the next section, in the light of the definitions of positivity and entropy-based bounds. §.§ Positivity bounds Positivity bounds originally arose in the context of EFT. The underlying idea is that consistency conditions for a fundamental theory, including locality, unitarity, analyticity, and Lorentz invariance, constrain the scattering amplitudes of the theory, even at scales above the cutoff where the EFT is valid. More precisely, without knowing the details of the UV completion of the EFT, one can still infer constraints on the IR physics that follow from these consistency conditions. One strategy for deriving positivity bounds is to use the optical theorem together with the (assumed or known) branch cut and pole structure of scattering amplitudes in order to find contour integrals that are positive. This can then be mapped onto constraints on specific combinations of Wilson coefficients that are, by construction, gauge- and reparameterization-invariant. For a recent overview of the topic, see, e.g., <cit.>. For the theory that we consider in this work, the effective action is given by (<ref>), and the relevant dimensionless Wilson coefficients are c_ℱ² = G_ℱ²/(16π G_N)^2 , c_𝒢 = G_𝒢/(16π G_N)^2 , c_C = G_CFF/(16π G_N) . Positivity bounds on these coefficients have been discussed in <cit.>. In <cit.>, bounds for the first two Wilson coefficients were derived, together with constraints for other operators that we do not consider in this work. With the EFT cutoff Λ, the dimensionless quantities f_2 = (16π G_N)^2 Λ^4/2 (c_ℱ² + c_𝒢) , g_2 = (16π G_N)^2 Λ^4/2 (c_ℱ² + 3 c_𝒢) , need to satisfy g_2 > |f_2| . On the other hand, in <cit.>, also the third Wilson coefficient has been included, and the scale has been chosen explicitly in terms of the Planck scale. In terms of the combinations α_1 = (G_ℱ² + 2 G_𝒢)/(16π G_N)^2 , α_2 = G_𝒢/(16π G_N)^2 , α_3 = G_CFF/(4π G_N) , the two bounds of <cit.> read[The first of these two bounds actually originates from two independent constraints, namely c_ℱ² + 2c_𝒢 - 2c_C > 0 and c_ℱ² + 2c_𝒢 + 2c_C > 0, whose combination gives the first bound.] c_ℱ² + 2c_𝒢 - 2|c_C| > 0 , c_𝒢 > 0 . It is however important to stress that most of the literature, including the above works, explicitly excludes the massless graviton in the derivation of positivity bounds. This is partially related to the technical difficulties originating from subtracting the massless graviton pole and treating the IR logarithms.
In connection to our work, this has two important implications. First, while there has been some effort in addressing these issues, see, e.g., <cit.>, the situation is not settled yet. In particular, since we allow for massless graviton fluctuations, we will observe a logarithmic running in the IR, and we will have to discuss how to treat the resulting logarithms in our RG flow when defining the Wilson coefficients. Second, and most importantly, it is not obvious that the strict inequalities above also apply to theories with massless degrees of freedom. The expectation <cit.> is that such positivity bounds may be violated once gravity is turned on. The violation would result in a weakening (or even a complete removal <cit.>) of the positivity bounds: sums of dimensionless Wilson coefficients which ought to be positive in unitary, gravity-free theories could actually be slightly negative <cit.>. The amount of allowed violation is not precisely settled but, when present <cit.>, it is generally expected to be an 𝒪(1) quantity in appropriate units. Specifically, within our system given by the action (<ref>), any potential violation of positivity bounds should naturally be Planck-mass suppressed, as long as the dimensionless Wilson coefficients (<ref>) are of 𝒪(1). To illustrate this argument, let us consider photon-photon scattering. With our effective action (<ref>), the two-to-two scattering amplitude in the low-energy and forward-scattering limit reads, structurally <cit.>, 𝒜 ∼ - G_N s^2/t + c s^2 + … , where the first term originates from the massless graviton pole, c is a linear combination of the couplings G_ℱ² and G_𝒢, and s, t are the standard Mandelstam variables. Positivity bounds are tied to the positivity of c. Using our definitions for the dimensionless Wilson coefficients (<ref>), and assuming c_i ∼ -𝒪(1), we have that 𝒜 ∼ - G_N s^2/t - 𝒪(1) G_N^2 s^2 + … ∼ - s^2/(t M_Pl^2) - 𝒪(1) s^2/M_Pl^4 + … . This entails that, as long as energies do not reach the Planck scale, s ≪ M_Pl^2, any potential violation is significantly suppressed. As a matter of fact, if a second mass scale M ≪ M_Pl is present, it is conjectured that only a weaker positivity bound needs to be fulfilled <cit.>, c ≳ - 𝒪(1)/(M^2 M_Pl^2) , compared to which any potential violation in our system is once again suppressed by a factor of M^2/M_Pl^2. §.§ Entropy positivity bounds and electric weak gravity conjecture The WGC is among the most important and best-understood criteria within the swampland program. In the context of ST, it is strictly related to the No Global Symmetries conjecture, but it can also be motivated by the requirement that no black hole remnants are formed, since these might lead to consistency issues at the EFT level <cit.>. In its simplest form, its electric version states that in any consistent U(1)-theory coupled to gravity, there must be at least one electrically charged state whose dimensionless mass-to-charge ratio is bounded by an order-one number, m ≤ √(2) Q M_Pl , where Q = qg is the unquantized charge of the state with mass m, and g is the U(1)-gauge coupling. Equivalently, there must be an electrically charged state whose charge-to-mass ratio is bounded by that of an extremal charged black hole, Q/M ≤ Q_extr/M_extr . At variance with the condition (<ref>), the bound above is not universal; rather, it is generally modified by higher-derivative terms in the effective action, in the gravitational or in the U(1) sector <cit.>.
Specifically, in a generic photon-graviton EFT, the condition (<ref>) becomes Q/M ≤ Q_extr/M_extr (1 - Δ/M^2) , where Δ entails a non-trivial combination of Wilson coefficients, and the extremality parameter is in the range ξ = √(1 - Q^2/M^2) ∈ [0,1], or in [0,1/2] for black holes with positive specific heat <cit.>. Following the derivations in <cit.>, for a theory of the type (<ref>) considered in this paper, the combination reads <cit.> Δ = (1-ξ)^2 d_0 + 20 ξ (8π G_N) G_𝔈 - 5 ξ (1-ξ) (16π G_N G_𝔈 + G_CFF) , with d_0 = G_ℱ²/(32π G_N) + G_𝒢/(16π G_N) - G_CFF = 8π G_N (c_ℱ² + 2c_𝒢 - 2c_C) , and must be non-negative for all ξ in the allowed range for the entropy positivity bounds of <cit.> to be fulfilled. We note that despite the Euler coupling G_𝔈 not entering field equations or scattering amplitudes — at least not trivially — it can affect off-shell quantities like the black hole entropy. Thus, it can generally impact the family of positivity bounds attached to the condition Δ > 0 . Nonetheless, G_𝔈 non-trivially drops out of the linear combination of Wilson coefficients d_0, which is strictly related to the electric WGC. Indeed, in the case of a highly charged black hole (ξ ≪ 1), the family of positivity bounds (<ref>) is proportional to the extremality condition of charged black holes <cit.> and yields the electric WGC for black holes in the presence of higher-derivative corrections <cit.>, d_0 > 0 . This is the condition that we will consider in this manuscript. Similarly to positivity bounds, in the presence of gravitational quantum fluctuations, small violations of the WGC are allowed and compatible with unitarity and causality <cit.>. § COMPUTING THE AS LANDSCAPE In this section, we compute the landscape of EFT stemming from asymptotically safe photon-graviton flows. The dynamics is encoded in the action (<ref>). For the computation of the beta functions, we will work with its Euclidean counterpart, Γ_k = ∫ d^4x √(g) [ -R/(16π G_N) + G_𝔈 𝔈 + ℱ + G_ℱ² ℱ² + G_𝒢 𝒢 + G_CFF C^μνρσ F_μν F_ρσ ] , where all couplings G_i(k) ≡ {G_N, G_𝔈, G_ℱ², G_𝒢, G_CFF} now depend on the RG scale k. As remarked in the introduction, we work in the same spirit as in the EFT literature and remove all inessential operators across all scales by an appropriate k-dependent field redefinition <cit.> (see <Ref> for additional details). §.§ Defining the Wilson coefficients As anticipated in <Ref>, identifying the AS landscape boils down to computing the set of IR endpoints (k→0 limit of the FRG flow) of all asymptotically safe trajectories <cit.> (cf. <Ref>). Each endpoint corresponds to an EFT in the landscape, and is uniquely described by the (generally dimensionful) Wilson coefficients W_G_i ≡ lim_k→0 G_i(k) . The flow has to be computed for the dimensionless counterparts of the couplings G_i(k), i.e., g(k) = G_N(k) k^2 , g_ℱ²(k) = G_ℱ²(k) k^4 , g_𝒢(k) = G_𝒢(k) k^4 , g_CFF(k) = G_CFF(k) k^2 . The flow for these four dimensionless couplings has been computed with the help of the Mathematica package xAct <cit.> and a well-tested code <cit.>. Some details on the computation are reported in <Ref>. The complete set of beta functions can be found in the accompanying notebook <cit.>. In terms of the above running dimensionless couplings, the dimensionless Wilson coefficients in (<ref>) originate from the following limits: c_ℱ² = lim_k→0 g_ℱ²(k)/(16π g(k))^2 ≡ lim_k→0 G_ℱ²(k)/(16π G_N(k))^2 , c_𝒢 = lim_k→0 g_𝒢(k)/(16π g(k))^2 ≡ lim_k→0 G_𝒢(k)/(16π G_N(k))^2 , c_C = lim_k→0 g_CFF(k)/(16π g(k)) ≡ lim_k→0 G_CFF(k)/(16π G_N(k)) .
These are the dimensionless ratios of couplings that enter scattering amplitudes in our system, and thus the corresponding positivity bounds. However, some caveats and ambiguities remain. Hence, prior to computing the AS landscape, we need to discuss some aspects of the definition of the Wilson coefficients in our setup: * Wick rotation: Our computation of the beta functions has been performed in Euclidean signature, as is standard in the field (for recent progress in defining flows directly in Lorentzian signature, see <cit.>). We thus have to perform a Wick rotation to relate our couplings to the couplings in the literature on positivity bounds and the WGC <cit.>. It is known that the Wick rotation comes with many issues in gravitational theories <cit.>. We implement the Wick rotation pragmatically by the rule that spacetime curvature tensors get a minus sign, and field strength tensors a factor of i. In practice, this only affects the Einstein-Hilbert term and the kinetic term of the photon in our action. As the end result of this procedure, our couplings do not have to be modified and can be used directly in all bounds. * Definition of Wilson coefficients in the presence of logarithms: Some of the couplings in our system display a logarithmic running originating from massless graviton fluctuations. Such a logarithmic running introduces an ambiguity in the definition of the Wilson coefficients, since one can always redefine the scale in the logarithm, so that a_1 + log(k^2/k_1^2) = a_2 + log(k^2/k_2^2). This makes it necessary to adopt a prescription to subtract the logarithm and thus to define the Wilson coefficients uniquely (see also <cit.>). Grounded on this issue, and in anticipation of the results, it turns out to be useful to introduce the two combinations g_±(k) = (g_ℱ²(k) ± g_𝒢(k))/2 , and the corresponding dimensionless Wilson coefficients c_± = lim_k→0 g_±(k)/(16π g(k))^2 ≡ lim_k→0 G_±(k)/(16π G_N(k))^2 . The underlying reason is that both g_ℱ² and g_𝒢 run logarithmically in the IR, but with opposite signs. Thus, upon combining them, only g_- shows a logarithmic running in the IR, whereas g_+ does not, allowing us to eliminate one of the two logarithmic ambiguities. In practice, we will subtract the logarithmic running to obtain the Wilson coefficient c_-. To parameterize the ambiguity, we will fix our reference scale to be the Planck mass and introduce a parameter, 𝔠, multiplying G_N inside the logarithm: c_- ≡ (c_ℱ² - c_𝒢)/2 - N ln[𝔠 G_N k^2] . In computing the landscape, the coefficient N has to be chosen appropriately to remove the universal IR logarithmic running. This will be shown explicitly in <Ref>, cf. (<ref>). Different choices of 𝔠 then correspond to different prescriptions to subtract the logarithm and define the Wilson coefficient c_-. * Role of the Euler coupling: In four dimensions, the Euler term 𝔈 is topological, and thus the corresponding coupling G_𝔈 does not enter the right-hand side of the flow equation in conventional treatments. There are two consequences of this. First, the coupling will generically not show a fixed point <cit.>, but run off to ±∞ for k→∞. Modulo the reconstruction problem <cit.>, this suggests that certain topologies are preferred in the Euclidean path integral, or that only topologies with vanishing Euler term contribute to the Lorentzian path integral <cit.>.
Second, even disregarding the first issue, without any additional constraint we cannot uniquely define the Wilson coefficient of the Euler coupling, as it can be shifted by an arbitrary amount while still fulfilling the same RG equation. This problem is exacerbated by the fact that the coupling also runs logarithmically in the IR, giving rise to the same ambiguity as for g_-. This is generally not a problem, as the Euler coupling typically does not enter scattering amplitudes in four dimensions. Nonetheless, as we shall see, the above issues lead to ambiguities at the level of the bounds involving off-shell quantities. With these caveats and ambiguities in mind, in the next subsections we will continue our discussion by investigating the analytical properties of two important limits of the FRG flow: the fixed point structure in the UV, and the universal running in the IR. §.§ Fixed points, UV behavior, and free parameters Obtaining the AS landscape requires in the first place a UV fixed point from which the flow emanates. The fixed points of the RG flow are given by the zeros of the vector field of beta functions β⃗(g⃗) of all dimensionless couplings g⃗. Along with the fixed points, the leading-order behavior of the flow about a fixed point contains information about the predictivity of the corresponding theory, which, in turn, is related to the dimensionality of the corresponding sub-landscape. Indeed, linearizing the beta functions about a fixed point g⃗^ ∗ and solving the flow yields the leading-order scaling g⃗ = g⃗^ ∗ + ∑_i c_i e⃗_i (k/k_0)^-θ_i , where the e⃗_i are the unit eigenvectors of the stability matrix ∂β⃗/∂g⃗, the c_i are integration constants, k_0 is an arbitrary reference scale, and the critical exponents θ_i are defined as minus the eigenvalues of the stability matrix. The number of relevant directions equals the number of positive critical exponents and measures the predictivity of the theory generated by the UV fixed point: RG trajectories reaching the fixed point in the limit k→∞ are those whose integration constants c_i multiplying a positive power of k are zero. Thus, the fewer relevant directions a fixed point has, the fewer integration constants need to be varied to identify an RG trajectory, resulting in a higher predictive power. For asymptotically safe trajectories, the number of non-vanishing integration constants equals the number N of relevant directions of the flow. We note however that one of the integration constants associated with the relevant couplings can be absorbed to redefine the reference scale, k_T ≡ c_i^(1/θ_i) k_0 . The possibility of redefining the reference scale, or, equivalently, of adding an arbitrary shift to the RG time t = log(k/k_0), is related to the invariance of the flow under shifts of the RG time t. At the same time, k_T has the interpretation of a transition scale from the fixed-point (QG) regime to the IR scaling regime. Indeed, as we shall see in this section, in a gravitational theory k_T is also the mass scale defining the Newton coupling. In any theory with dimensionful couplings, one needs at least one arbitrary unit mass scale with respect to which other dimensionful quantities can be measured: one of the integration constants of the full theory can be eliminated to define such a scale.
Thus, contrary to the common folklore that N relevant directions correspond to N free parameters to be measured to fix the theory, the need for a unit of measure reduces the number of free parameters to N-1 dimensionless quantities. This number also defines the dimensionality of the landscape in the space of dimensionless Wilson coefficients. The only exception to this argument is scale-free theories: if our universe were a conformal field theory, then all measurable quantities would be dimensionless, in which case the need to introduce one arbitrary mass unit scale would disappear. Following this general discussion, in the next subsection we shall present the fixed point structure of our system. §.§ Fixed point structure The beta functions for the dimensionless couplings {g, g_+, g_-, g_CFF} are rational functions, and their fixed points correspond to their common zeros. Here we are excluding the Euler coupling g_𝔈 from the set, since by construction its beta function can only vanish if all other couplings conspire so that β_g_𝔈 = 0. This is because, as mentioned in <Ref>, the Gauss-Bonnet operator is a topological invariant and thus the Euler coupling cannot enter the right-hand side of the flow equation. As a consequence, g_𝔈 cannot influence the flow of the other couplings, nor of itself, and hence it generically has no fixed points <cit.>. The fixed points of our system that are reliable in our approximation are reported in <Ref>, together with the corresponding critical exponents. Despite the beta functions being rational functions, the search for fixed points generally requires resorting to numerical methods. Yet, when turning off Newton's coupling, g=0, one can find the zeros of the remaining three beta functions analytically. This identifies the “pure” matter fixed point (MFP). Next to the standard Gaussian fixed point (GFP, first entry in <Ref>), with critical exponents θ given by the classical mass dimensions of the couplings, we find two real MFPs. The first one is denoted by “MFP” in <Ref>, and has a single relevant direction. The second one has highly non-canonical critical exponents, making it unreliable in our approximation, and it is also shielded from the GFP by a singularity in the beta functions, meaning that it cannot have a standard IR behavior. We thus excluded this second matter fixed point from <Ref>. To search for fully interacting fixed points, we employed a numerical fixed point search together with a random grid of starting points for the Newton-Raphson iteration, focusing on the region close to the GFP. For this, we first used a hypercube around the GFP with 10^6 randomly distributed starting points. We then successively enlarged the cuboid in all directions, except the g-direction, which we restricted to the interval [0,2π]. This is because, on the one hand, we are only interested in fixed points with positive Newton's coupling, and on the other hand, there is a singular line at g=2π in the flow, beyond which fixed points cannot be connected to a standard IR behavior.[This should be understood as a maximum value for the first singularity encountered when increasing g from 0 to positive values, and it stems from our renormalization condition for λ, see <Ref>.] The largest cuboid that we investigated was g ∈ [0,2π], g_± ∈ [-100π^2, 100π^2], g_CFF ∈ [-100,100].[The factors of π are chosen for convenience — rescaling the couplings {g, g_±, g_CFF} → {π g̃, π^2 g̃_±, g_CFF} removes all factors of π in the beta functions.] With this method, we identified two reliable fixed points that are connected to the GFP in the IR.
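The search strategy just described is conceptually simple; the following schematic sketch is our own illustration, in which the toy beta functions stand in for the lengthy rational beta functions of the actual system (available in the paper's accompanying notebook), and the box, seed, and tolerances are arbitrary choices. It combines a multi-start Newton-type root search with the extraction of critical exponents from the stability matrix:

```python
import numpy as np
from scipy.optimize import fsolve

def beta(g):
    # Toy two-coupling system standing in for the actual beta functions
    g1, g2 = g
    return np.array([2*g1 - g1**2, 4*g2 - g1*g2 - 3*g2**2])

def stability_matrix(g, eps=1e-7):
    """Finite-difference Jacobian d(beta)/d(g) at the point g."""
    n = len(g)
    J = np.zeros((n, n))
    for i in range(n):
        dg = np.zeros(n)
        dg[i] = eps
        J[:, i] = (beta(g + dg) - beta(g - dg)) / (2 * eps)
    return J

rng = np.random.default_rng(0)
fixed_points = set()
for _ in range(1000):                      # random grid of starting points
    g0 = rng.uniform([0.0, -10.0], [2*np.pi, 10.0])
    sol, info, ok, msg = fsolve(beta, g0, full_output=True)
    if ok == 1 and np.allclose(beta(sol), 0.0, atol=1e-10):
        fixed_points.add(tuple(np.round(sol, 8)))

for fp in sorted(fixed_points):
    # Critical exponents = minus the eigenvalues of the stability matrix
    theta = -np.linalg.eigvals(stability_matrix(np.array(fp)))
    print("fixed point:", fp, " critical exponents:", np.round(theta.real, 4))
```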
Both fixed points are displayed in <Ref>. The first one (FP1) comes with a single relevant direction, and thus gives rise to the most predictive theory. By contrast, the second fixed point (FP2) has two relevant directions, with a peculiar hierarchy between the two relevant critical exponents: θ_1 is an order of magnitude larger than θ_2. We anticipate here that this makes the computation of the sub-landscape resulting from this fixed point numerically challenging. Finally, in agreement with the discussion above and with previous computations in AS <cit.>, we find that k ∂_k g_𝔈(k) > 0 at both FP1 and FP2. This entails that the coupling g_𝔈 runs to +∞ for k→∞, and does not have a common fixed point with the other couplings, as expected. To summarize, our system has three fixed points that can serve as a UV completion of the theory, and that are connected to the GFP (g_i=0) in the IR.[For general theories including couplings with positive mass dimension (e.g., a cosmological constant), the condition of a UV fixed point being connected to the GFP generalizes to the existence of RG trajectories departing from it and reaching or passing close to the GFP in the IR.] For the case of the fully interacting fixed points (FP1 and FP2), this is needed to recover GR at low energies. The MFP, lying at g^∗=0, instead corresponds to a gravity-free UV completion of self-interacting photons. In the next subsection, we will show how to parameterize the IR behavior of the flow, which is key to computing the sub-landscapes stemming from the three possible UV completions presented in this subsection. In particular, an important role will be played by the so-called “separatrices” — critical RG trajectories connecting pairs of fixed points and separating different behaviors of the flow. We will then describe the sub-landscapes identified by the three interacting fixed points in <Ref>, as well as the global geometry of the AS landscape. §.§ Computing the landscape: parameterizing the IR behavior of the flow Parameterizing the IR behavior of the beta functions is a necessary step to efficiently compute the AS landscape. The strategy is to expand the beta functions in the IR about the GFP, using ansätze of the form a (k/k_0)^(-d_c) (b + c log(k/k_ref)) for the running of the couplings, with d_c being the classical mass dimension of the corresponding coupling. Inserting these into the beta functions, one then extracts the coefficients a, b, c. This procedure yields the following scalings in the limit k→0, which are IR-universal, i.e., independent of the specific UV completion: g(k) ≃ g_IR (k/k_0)^2 , g_+(k) ≃ g_IR^2 (k/k_0)^4 f_+ , g_-(k) ≃ g_IR^2 (k/k_0)^4 ( f_- - (548/15) ln[c_l g_IR (k/k_0)^2] ) , g_CFF(k) ≃ g_IR (k/k_0)^2 f_c . In this, c_l parameterizes the ambiguity in defining the argument of the logarithm, in accordance with the discussion in <Ref>. The remaining parameters {f_±, f_c} are to be determined by the RG flow: they depend on the fixed point one starts with and on the specific RG trajectory departing from it. As the running dimensionful Newton coupling is G(k) ≡ g(k) k^-2, the combination g_IR k_0^-2 defines the Newton coupling as G_N ≡ lim_k→0 g(k) k^-2 = g_IR k_0^-2 . In turn, G_N sets the scale of QG, and thereby provides a scale with respect to which all other dimensionful quantities can be meaningfully defined. Indeed, while the unit scale G_N is arbitrary, all other Wilson coefficients can be defined via the dimensionless ratios (<ref>) and (<ref>) — the only quantities that are relevant to assess the positivity bounds.
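To make the procedure concrete, the sketch below (again with toy beta functions in place of the full system; the fixed point, eigendirection, and tolerances are illustrative assumptions, and the full computation requires far higher working precision, cf. the discussion of FP2 below) integrates the flow from a small perturbation of a UV fixed point along its relevant direction down to the deep IR, and reads off a dimensionless ratio of the type (<ref>) at the endpoint:

```python
import numpy as np
from scipy.integrate import solve_ivp

def beta(t, g):
    # Toy stand-ins for the beta functions of (g, g_+)
    g1, g2 = g
    return [2*g1 - g1**2, 4*g2 - g1*g2 - 3*g2**2]

g_star = np.array([2.0, 2.0/3.0])   # toy UV fixed point
e_rel = np.array([1.0, 0.0])        # assumed relevant eigendirection
g_uv = g_star - 1e-6 * e_rel        # perturb the flow off the fixed point

# Flow towards the IR: the RG time t = ln(k/k0) runs to large negative values
sol = solve_ivp(beta, (0.0, -20.0), g_uv, rtol=1e-12, atol=1e-40)
g1_ir, g2_ir = sol.y[:, -1]

# Dimensionless Wilson coefficient, schematically c_+ = lim g_+/(16*pi*g)^2;
# both couplings vanish in the IR, but the ratio converges
print("c_+ (toy):", g2_ir / (16*np.pi*g1_ir)**2)
```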
§.§ Sub-landscape from FP1 The most predictive UV completion in our setup is provided by FP1, as it comes with one relevant direction only. The free parameter attached to its only relevant direction fixes the scale of QG, G_N, and thereby also fixes the units used to define all other Wilson coefficients. All remaining dimensionless Wilson coefficients are thus predicted uniquely. The single relevant direction and the presence of an IR fixed point (the GFP) imply that there are only two trajectories emanating from FP1: one diverging, and one approaching the GFP, as k→0. The latter trajectory (or, more precisely, the family of trajectories for different values of the QG scale) corresponds to the separatrix between FP1 and the GFP in theory space. The flow of the dimensionless couplings along this separatrix is shown in <Ref>. The sub-landscape stemming from FP1 is thus a single point, and it corresponds to the k→0 limit of the FP1-GFP separatrix. The separatrix can be computed numerically by integrating the beta functions with initial conditions close to FP1, perturbed in the direction of the eigenvector corresponding to the positive critical exponent. The only EFT in the sub-landscape is identified by the following Wilson coefficients: c_+ = 0.00792 , c_C = 0.000550 . The Wilson coefficient c_- suffers from the aforementioned ambiguity due to its logarithmic running. Trivially, this coefficient depends logarithmically on the parameter 𝔠. For 𝔠 = 16π, we obtain c_- = 0.0955 . We shall use these Wilson coefficients in the next section to investigate the compatibility with positivity bounds and the weak gravity conjecture. §.§ Sub-landscape from FP2 We move on to discuss the sub-landscape connected to FP2. This fixed point has two relevant directions. Since, following the earlier discussion, one of the free parameters sets the QG scale, the ensuing sub-landscape is one-dimensional: it is a line in the space of dimensionless Wilson coefficients. Here, all RG trajectories fall into three classes: those that approach the GFP in the IR, those that diverge, and the two boundary trajectories between these two behaviors. The first set makes up the sub-landscape. Before we discuss our results, let us briefly point out a technical difficulty. Since the positive critical exponents of this fixed point are one order of magnitude apart, see <Ref>, the numerical determination of the sub-landscape requires high precision.[Specifically, a working precision of 32 digits was required for our data points when using Mathematica's NDSolve routine. We then chose initial conditions on a small circle about FP2 within its critical hypersurface, parameterized by an angle. We picked one specific trajectory that is well within the sub-landscape. From there, we changed the angle by a step of π/10. If the successive trajectory still flows into the GFP, we continue the procedure by changing the angle by the same amount; otherwise, we discard the trajectory, halve the step size, and compute the next trajectory. We repeated this procedure until the step size fell below 10^-18 π. Finally, we added some additional trajectories to improve the density of data points in the sub-landscape.] Trajectories that are close to each other near the fixed point can vastly differ in their IR physics. In practice, this issue is realized in different ways within the sub-landscape attached to FP2.
The different behaviors arise because the above-mentioned boundary trajectories are separatrices that connect FP2 to FP1 and to the MFP, respectively.[This means that the part of theory space that is both UV-complete and connected to the GFP in the IR looks very similar to that of the system of a shift-symmetric scalar field coupled to gravity at the same order of the derivative expansion, see <cit.>.] As a consequence, going towards the separatrix to FP1, the Wilson coefficients approach those of FP1. On the other hand, approaching the MFP, the Wilson coefficients become larger and larger. The reason is that one approaches the separatrix between the MFP and the GFP. For this trajectory, Newton's coupling vanishes identically, so our definition of Wilson coefficients is ill-defined and only their ratios remain well-defined, cf. <Ref>. With this remark out of the way, let us discuss the sub-landscape of FP2. In <Ref> we show different views of the Wilson coefficients within this sub-landscape (purple dots), as well as the EFT generated by FP1 (red dot). The overall geometry resembles that of a stretched-out candy cane: most of this sub-landscape is approximated by a straight line (the extended “strabe”). At one end, this extended strabe terminates at the sub-landscape of the MFP, which we will discuss in the next subsection. At the other end, there is a small curved part (the “warble”) that connects it to the sub-landscape from FP1, and gives the characteristic candy cane shape. A thorough analysis of our data points shows that the extended strabe part of the sub-landscape can be approximated by a square-root-plus-linear fit in the {w_+, w_-}-plane, and by quadratic fits along the w_C-direction.[The form of these functions was selected by the fact that they fit large parts of the sub-landscape, significantly beyond the selection of data points used to obtain them.] Concretely, fitting the 10 data points furthest away from the warble region, we find w_- ≈ 0.8022 w_+ + 0.4353 √(-w_+) + 0.03453 , and w_+ ≈ - 1666 w_C^2 + 32.78 w_C + 0.1566 , w_- ≈ - 1359 w_C^2 + 5.930 w_C + 0.2341 . These fits are valid in the limit of large Wilson coefficients, as w_±, w_C → -∞, so that, to leading order, the extended strabe is a straight line in {w_±, w_C^2}. The rather large coefficients in the quadratic fits are entirely due to the fact that w_C is much smaller than w_± for our data points. Remarkably, the entire sub-landscape originating from FP2 approximately lies on a plane described by w_C = 0.0049155 + 0.038755 w_+ - 0.047318 w_- . This is to say that the candy cane is not bent significantly. For the points of the sub-landscape that were computed, the distance from this plane does not exceed 0.000057. Moreover, as w_C covers the small range [-0.043, 0.00026] for our data points, the shape of the sub-landscape is approximately preserved when projecting it onto the two-dimensional space {w_+, w_-}, see <Ref>. We remark once again that, to determine all the above relations, we used c_l = 16π to fix the logarithmic ambiguity in w_-. Similar findings were also obtained in the first paper that computed a landscape from the RG flow of an asymptotically safe gravitational theory <cit.>. Concretely, in <cit.> the one-loop flow of quadratic gravity was investigated, and the resulting two-dimensional landscape was approximately a plane. This is an intriguing and highly non-trivial feature that deserves further investigation. 
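As a rough illustration of how the above fits can be obtained, the following Python sketch performs the square-root-plus-linear fit and the plane fit. The arrays wp, wm and wc are hypothetical stand-ins generated from the quoted coefficients themselves; with real landscape points, they would be replaced by the computed Wilson coefficients of the 10 data points furthest from the warble.

import numpy as np
from scipy.optimize import curve_fit

wp = -np.linspace(5.0, 50.0, 10)                     # stand-in data
wm = 0.8022 * wp + 0.4353 * np.sqrt(-wp) + 0.03453
wc = 0.0049155 + 0.038755 * wp - 0.047318 * wm

def sqrt_linear(w_plus, a, b, c):
    return a * w_plus + b * np.sqrt(-w_plus) + c

pars, _ = curve_fit(sqrt_linear, wp, wm)
print(pars)                     # ~ (0.8022, 0.4353, 0.03453)

# Plane fit w_C = p0 + p1 w_+ + p2 w_- via linear least squares; the ratio
# -p1/p2 estimates the asymptotic slope y = g_-/g_+ (cf. next subsection).
A = np.column_stack([np.ones_like(wp), wp, wm])
(p0, p1, p2), *_ = np.linalg.lstsq(A, wc, rcond=None)
print(p0, p1, p2, -p1 / p2)     # -p1/p2 ~ 0.819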
§.§ Sub-landscape from MFP The fixed point MFP lies at g=0 and thus it involves no gravity. Nonetheless, since it acts as one of the boundaries of the sub-landscape attached to FP2, we will briefly discuss the properties of its sub-landscape. First of all, since g=0 at the fixed point, the separatrix between the MFP and the GFP has g=0 at all scales, corresponding to a pure matter theory. This also entails that we have to use a different dimensionful coupling to set the scale. Second, and intriguingly, on this separatrix we have an exact relation between g_+ and g_-, g_-/g_+ = y ≈ 0.826 , where y is the real root of the polynomial (-73 + 131y - 59y^2 + 9y^3). This relation is fulfilled along the whole trajectory, which reduces the number of independent couplings to two. Stated differently, the Wilson coefficient resulting from the ratio g_-/g_+ is exactly y. Finally, since the fixed point has one relevant direction, the sub-landscape has only one non-trivial dimensionless Wilson coefficient. We find that G_CFF/√(-G_+) ≈ -0.0290 . We were able to compute its exact value in closed form as well. Since only limited insights can be gained from it, we present it in (<ref>) in <Ref>. The importance of the relations (<ref>) and (<ref>) derives from the fact that, since the sub-landscape of FP2 is bounded by the MFP, these relations must be fulfilled at the asymptotic end of the extended strabe. For our data points, this is not yet the case — for the last data point, we find g_-/g_+ ≈ 0.5997 , g_CFF/√(-g_+) ≈ -0.02068 . This simply signals that our data at the open end of the strabe are not yet in the asymptotic regime, so they do not yet match the scaling of the MFP. This emphasizes the aforementioned numerical challenge of mapping out the full sub-landscape of FP2. We can nevertheless try to extract the exact limits (<ref>) and (<ref>) from our fits of the extended strabe, Eqs. (<ref>) and (<ref>). While using the square-root-plus-linear fit (<ref>) yields y ≈ 0.8022, employing the ratio of the quadratic fits (<ref>) and sending w_C → -∞, we get y ≈ 0.816. Both values are very close to the exact value. Likewise, using the quadratic fit for w_+ to compute the ratio (<ref>), we obtain -0.0245, which differs from the exact value by about 16%. Last but not least, we can also use the plane equation (<ref>) to estimate (<ref>), which is given by the ratio of the coefficients in front of w_±. From this, we get y ≈ 0.819. This lends more support to the idea that the entire sub-landscape of FP2, including the asymptotic region close to the MFP, indeed approximately lies on a plane. 
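The exact value of y quoted above can be checked with a few lines of Python:

import numpy as np

# y = g_-/g_+ is the unique real root of 9 y^3 - 59 y^2 + 131 y - 73
# (numpy.roots takes coefficients in descending powers).
roots = np.roots([9.0, -59.0, 131.0, -73.0])
y = roots[np.isclose(roots.imag, 0.0)].real
print(y)   # ~ 0.826; the other two roots form a complex-conjugate pair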
§ COMPARING THE AS LANDSCAPE WITH POSITIVITY, ENTROPY, AND WEAK GRAVITY BOUNDS In terms of the dimensionless Wilson coefficients (<ref>) and (<ref>), the bounds introduced in <Ref> read Positivity bounds: 𝒫_1 = (16π G_N)^2 Λ^4 ( 2 w_+ - w_- - |w_+| ) > 0 , 𝒫_2 = 3 w_+ - w_- - 2 |w_C| > 0 , 𝒫_3 = w_+ - w_- > 0 , Entropy bound: 𝒫_E = 8π G_N ( (1-ξ)^2 (3 w_+ - w_-) - 2 (1-ξ)(1+4ξ) w_C + 10 ξ (1+ξ) w_GB ) > 0 , WGC: 𝒫_WGC = 8π G_N ( 3 w_+ - w_- - 2 w_C ) > 0 . Inasmuch as the only scale of our system is the Planck mass, we will set Λ^2 = 1/(16π G_N), so that in the first bound all multiplicative factors disappear. At this point, it is important to highlight that while the individual Wilson coefficients w_i are neither gauge/parameterization independent, nor invariant under field redefinitions, the combinations of them appearing in scattering amplitudes and also in the 𝒫_i are. In this section we will compare the above conditions with the AS landscape derived in the previous section. Our results are summarized in <Ref>, <Ref>, and <Ref>. 
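A minimal Python sketch evaluating these conditions at a point of the landscape reads as follows. It uses units with 8π G_N = 1 and Λ^2 = 1/(16π G_N), so that all prefactors drop out, and it takes the Euler coefficient w_GB (unconstrained by the flow) and the extremality parameter ξ as free inputs; the sample values are the FP1 prediction quoted in the previous section.

def bounds(wp, wm, wc, w_gb, xi):
    # Positivity bounds P_1..P_3, the WGC, and the entropy bound P_E(xi),
    # in units 8*pi*G_N = 1 with Lambda^2 = 1/(16*pi*G_N).
    P1 = 2.0 * wp - wm - abs(wp)
    P2 = 3.0 * wp - wm - 2.0 * abs(wc)
    P3 = wp - wm
    P_wgc = 3.0 * wp - wm - 2.0 * wc
    P_E = ((1.0 - xi) ** 2 * (3.0 * wp - wm)
           - 2.0 * (1.0 - xi) * (1.0 + 4.0 * xi) * wc
           + 10.0 * xi * (1.0 + xi) * w_gb)
    return {"P1": P1, "P2": P2, "P3": P3, "P_WGC": P_wgc, "P_E": P_E}

# FP1 prediction (c_l = 16*pi); all entries come out slightly negative,
# i.e., Planck-scale suppressed violations.
print(bounds(wp=0.00792, wm=0.0955, wc=0.000550, w_gb=0.0, xi=0.1))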
The comparison of the global AS landscape of our system with the positivity bounds 𝒫_i and the WGC condition 𝒫_WGC shows that the arguments and conjectures in <cit.> are strictly realized within our system: Planck-scale suppressed violations of both positivity bounds and the WGC occur across the entire landscape.[Whether positivity bounds are strictly fulfilled in some parts of the landscape may also depend on the approximations employed and, in truncated systems, on the choice of gauge and parameterization to set up the flow <cit.>.] This is visually evident in <Ref>, where we show the projection of the AS landscape onto the plane {w_+, w_-}, together with the regions where positivity bounds and the WGC hold. We need to emphasize, though, that the theory space is parameterized via dimensionless Wilson coefficients, so that all numbers are expressed in units of the Newton coupling, cf. (<ref>) and (<ref>). Focusing on the violation of the WGC, the comparison of our results with those in <cit.> indicates the importance of the U(1) sector and of non-perturbativity in assessing the validity of the WGC. Indeed, <cit.> assumed electromagnetic duality and investigated the intersections between the AS landscape in one-loop quadratic gravity <cit.> and some swampland conjectures, including the WGC. The AS landscape of <cit.> was fully located inside the region 𝒫_WGC > 0, so that the WGC was strictly valid throughout the landscape. The deviation of our result from that in <cit.> is to be attributed to several factors, from the improved computational setup (full-fledged FRG versus perturbative computation), to the inclusion of the U(1) sector with essential couplings only, and the different universality class of the UV fixed point <cit.>. The dimensionless amount of violation for each 𝒫_i is shown in <Ref>. Each dot quantifies the dimensionless deviation from positivity of 𝒫_i at a point of the rectified version of the landscape, where, aiming at a better visualization, all data points have been deformed to be equidistant. The more negative 𝒫_i is, the larger the violation. In particular, the only EFT predicted by FP1 — denoted by a red dot — is the one where the violation is smallest. Moving along the landscape, away from the red dot, the violation gets larger and larger. Let us also briefly discuss the positivity bounds for the sub-landscape deriving from the MFP. Due to the exact expressions for the Wilson coefficients, (<ref>) and (<ref>), and the fact that g_+ < 0 along the separatrix, we straightforwardly find that all positivity bounds are violated. Since this theory excludes gravitational fluctuations, so that standard positivity bounds should apply, this entails that the UV completion of self-interacting photons provided by the MFP is likely not unitary. From this perspective, AS might act as a unitarizer of the photonic theory through the fixed points FP1 and FP2. So far we have discussed the comparison of the AS landscape with the bounds 𝒫_{1,2,3} and 𝒫_WGC. We now turn our attention to the family of entropy-based positivity bounds 𝒫_E(ξ). This constraint deserves a separate discussion because of its dependence on an external parameter and, more notably, due to the appearance of the Euler Wilson coefficient w_GB. As already discussed, the Euler coupling is special in purely field-theoretic setups, because its flow typically does not have non-trivial fixed points, and hence the corresponding Wilson coefficient w_GB is unconstrained. This atypical behavior and the lack of RG-induced constraints are generally not considered as issues, inasmuch as w_GB does not enter on-shell quantities like scattering amplitudes. Yet, the Gauss–Bonnet term impacts off-shell quantities like the entropy <cit.>, even in four spacetime dimensions. It can consequently enter positivity bounds such as 𝒫_E(ξ). Such bounds can then result in constraints on the Euler Wilson coefficient, as shown in <Ref>. The individual constraints depend both on the extremality parameter ξ and on the point of the landscape. The bounds become stronger for points of the landscape further away from the single-point landscape and for smaller values of ξ. Thus, requiring that they all hold results in the requirement that the strongest of them is satisfied. In our case the strongest of the entropy bounds is realized in the limit ξ→0^+, in which case the Euler Wilson coefficient is constrained to be w_GB → +∞. At the same time, setting ξ = 0 in the entropy positivity bounds 𝒫_E(ξ) yields the WGC and decouples the Euler Wilson coefficient. This discontinuity is due to the slight violation of the WGC: if 𝒫_WGC were positive, the limit ξ→0^+ would instead have implied w_GB > -∞, i.e., the absence of a bound. Once again, this result is to be contrasted with the conclusions drawn in <cit.>. The use of a one-loop approximation in pure quadratic gravity has the effect of generating a fixed point for the Euler coupling, which is thus constrained to be positive in the landscape. In <cit.> this entailed the possibility of testing the validity of the entropy bounds 𝒫_E(ξ) within the AS landscape, in the Stelle universality class <cit.>. Overall, our results point to an intriguing lesson on the role of the Euler coupling in AS. Wilson coefficients associated with boundaries or spacetime topology, like the Euler coupling in our case, remain unbounded by the RG flow. Conditions like the entropy bounds 𝒫_E(ξ) > 0 thus turn into constraints for these couplings. Predictivity thus hinges on a better understanding of the role of boundaries in AS, and perhaps on the necessity of relating bulk and boundary Wilson coefficients via an AS version of the holographic principle. § POSITIVITY BOUNDS AND WGC BEYOND WILSON COEFFICIENTS: FLOWING CONDITIONS Motivated by the cases of Ward identities <cit.> and the C-theorem <cit.>, in this section we take a more general perspective and ask whether positivity bounds and the WGC should hold at all FRG scales k ≥ 0, or whether they only apply to fully renormalized quantities. Along these lines, there are several questions about the validity of positivity and the WGC along the flow: what is the relation between their realization along a given RG trajectory at finite k and at its IR endpoint, i.e., in the limit k→0? Is it true that if positivity bounds are fulfilled/violated at the fixed point, they are also fulfilled/violated across the corresponding landscape? While we will not try to prove any general statements, we will investigate the questions above for a sample of UV-complete trajectories that flow into the GFP in the IR. To this end, we investigate a “flowing” version of the Wilson coefficients, defined by removing the k→0 limit in the original definitions, (<ref>) and (<ref>), but still subtracting the logarithm along the whole flow as in (<ref>). We then insert these into the positivity bounds 𝒫_i and study them as a function of k. As a first example, we consider the flowing conditions for the separatrix between FP1 and the GFP. 
For this trajectory, g_+ > 0 and g_CFF > 0, so that 𝒫_1 ≡ 𝒫_3 and 𝒫_2 ≡ 𝒫_WGC/(8π G_N), and we are left with two independent bounds. As shown in <Ref>, in this case the positivity bounds are (Planck-scale) violated along the entire flow. A second set of examples can be extracted from the flow emanating from FP2. Trajectories originating at FP2 start out with a stronger violation of the flowing positivity bounds than the trajectory starting at FP1. The flow then behaves differently depending on its specific initial conditions: for trajectories ending up in the warble part of the landscape, the violation decreases, and approaches that of the FP1–GFP separatrix. Near the crossover between warble and extended strabe, the violation stays approximately constant along the flow. Finally, in the extended strabe region, the violation increases along the flow. For these trajectories, g_+ and g_CFF can be negative along the flow, so we do not necessarily have a degeneracy of the positivity bounds as for the separatrix between FP1 and the GFP. In <Ref> we show these different behaviors. In summary, in our system it is indeed true that the positivity bounds are Planck-scale violated along the whole flow for all trajectories that are UV-complete and end up in the GFP in the IR. This behavior would support the idea that a relationship exists between the fulfillment/violation of positivity bounds at non-zero k (including at the fixed point) and in the limit k→0, where all quantum fluctuations are integrated out. While we refrain from drawing conclusions from these few examples, the above behavior once again emphasizes that FP1 comes with the least amount of violation of the bounds, even along the flow connecting it to the IR. § SUMMARY AND CONCLUSIONS In this work, we computed the landscape of EFTs stemming from UV-complete (in particular, asymptotically safe) photon-graviton flows, and confronted it with entropy-based bounds that include the WGC as a particular case <cit.>, and, for the first time, with positivity bounds <cit.>. The key idea is to generalize the notion of the string landscape, which emerged within the swampland program <cit.>, to other approaches to QG and in particular to AS <cit.>. This has the ultimate goal of identifying the intersections of QG landscapes, potentially highlighting connections between different approaches <cit.>, systematically testing their consistency with bounds stemming from EFT <cit.>, and assessing the validity of swampland conjectures beyond ST, especially in the light of the String Lamppost Principle <cit.>. As a first step in this endeavor, we focused on a gravitational system non-minimally coupled to an Abelian gauge field in four spacetime dimensions. Our dynamics include (essential <cit.>) operators up to the fourth order in a derivative expansion, totaling five interaction couplings. Within this setup, we found that the beta functions provide two gravitational fixed points that can serve as UV completions of the coupled theory, see <Ref>. The first has only one relevant direction, whose corresponding free parameter sets the scale of QG and the unit with respect to which all other dimensionful quantities are measured. The ensuing sub-landscape is zero-dimensional, namely, it is a single point in the space of dimensionless Wilson coefficients. The second fixed point has two relevant parameters. Given that one of them sets the unit mass scale, its corresponding sub-landscape is a line in the shape of a stretched-out candy cane, see <Ref>. 
The tip of its small hook (the “warble”) meets the sub-landscape point of the first fixed point, so that the two sub-landscapes are continuously connected. A large part of the whole landscape is nearly a straight line (the extended “strabe”), whose other end is connected to a pure photon fixed point. Intriguingly, the entire landscape can be approximately embedded into a plane — a non-trivial feature that has been observed before <cit.>. Whether this is a coincidence or a universal feature of AS is unclear, and deserves further systematic investigation. We then confronted the set of Wilson coefficients within the AS landscape with a collection of bounds stemming from different considerations, from positivity bounds motivated by unitarity and causality properties of scattering amplitudes <cit.>, to a family of constraints based on the positivity of the entropy of black holes under the addition of higher-order curvature operators. The latter also includes (one form of) the electric WGC as a particular case <cit.>. While positivity bounds are typically derived by excluding graviton fluctuations, they may nevertheless indicate whether parts of the landscape violate either unitarity or assumptions like low-spin dominance or the Regge limit, which are key in the derivation of positivity bounds <cit.>. The motivation for this is the fact that violations of such bounds are generally expected upon the inclusion of gravitational fluctuations <cit.>, and are Planck-scale suppressed as long as the coefficients are of order one, cf. Eq. (<ref>). Similar considerations also apply to the violation of the WGC, where the allowed amount of negativity in the presence of a new physics scale was estimated in <cit.>. We found that both the positivity bounds of <cit.> and the WGC are violated across the entire AS landscape, but that indeed for large parts of it, the violation is Planck-scale suppressed. This is consistent with general expectations <cit.>. At the sub-landscape point of the most predictive fixed point, the violation is maximally suppressed. The violation slightly increases along the warble part of the other sub-landscape. The further along the extended strabe one progresses, the larger the coefficients of the violations become. This behavior is not surprising, since in the asymptotic limit of the extended strabe, the pure photon (i.e., gravity-free) fixed point <cit.> is approached, where standard positivity bounds are violated. A more thorough investigation, along the lines of <cit.>, is needed to further understand the properties of this fixed point and its unitarity. We can also turn our perspective around: starting from this non-unitary self-interacting photon theory, Asymptotically Safe Gravity acts as a “unitarizer”, since the addition of gravity and the requirement of maximizing predictivity bring the theory closer to fulfilling standard unitarity bounds.[This goes along with the idea that AS provides an avenue to solve the triviality problem of the U(1) sector <cit.>.] Indeed, as already highlighted, the most predictive theory (the single-point landscape) is the one minimizing the already Planck-scale-suppressed violations of positivity bounds, suggesting an ideal scenario which combines high predictivity with the eventual fulfillment of modified positivity bounds. Along with positivity bounds, we also investigated a family of entropy-based positivity constraints, which include the WGC as a particular case <cit.>. 
Since these are off-shell bounds, they can (and do) include the coupling of the topological Euler term. Since the Gauss–Bonnet operator is a topological invariant, its coupling does not appear in any beta function, and thus generically does not have a fixed point <cit.>. As a consequence, the landscape value of the Euler coupling is unconstrained and, within the current setup and state of the art, such entropy-based bounds put constraints on the Euler coupling, rather than being a direct test of AS. Our results are based on a setup that should be improved in future work. First, our computations were performed in Euclidean signature, which made a Wick rotation necessary in order to relate the resulting AS landscape to entropy and positivity bounds. Recent progress in the field <cit.> has shown that the spacetime signature does not strongly impact the flow, but it would be noteworthy to investigate whether this statement applies to the whole AS landscape. Second, approximations had to be made, in our case by truncating the effective action to fourth order in derivatives. It is conceivable that improved approximations <cit.> and a better understanding of the gauge and parameterization dependence of the flow <cit.> could push the most predictive fixed point into a regime where standard positivity bounds are satisfied. Third, to achieve a more realistic description <cit.>, an extended field content has to be included, which would contribute to the relevant Wilson coefficients. Last but not least, grounded in the discussion above, the role of the Euler coupling has to be clarified within AS: it has to be understood what AS can, or cannot, say about black hole entropy (along the lines of <cit.>) and, more generally, whether a relation exists between bulk and boundary couplings, which could enhance the predictive power of off-shell quantities in this approach. Finally, what would it mean if all these shortcomings were addressed, and AS still violated positivity bounds? There are different options in this case. The violation could simply indicate that AS does not fulfill at least one of the assumptions underlying the derivation of positivity bounds. Potential candidates for these include the violation of microcausality or locality, which is not unrealistic for a theory of QG. Alternatively, as mentioned earlier, the violation could be related to the low-spin dominance hypothesis not being realized, which is assumed for the derivation, but whose violation would not pose any obvious problems for QG. Additionally, a violation could also mean that the existence of a UV fixed point is not enough to obtain a unitary and causal theory. Indications for the latter were indeed found in the study of general non-perturbative scattering amplitudes <cit.>. All these possibilities point to two key aspects within this endeavor: on the EFT side, the necessity of understanding positivity bounds in the presence of gravitational fluctuations and beyond perturbative settings <cit.>, with an improved understanding of the relation between violations and their sources; on the AS side, the importance of mapping the UV features of the AS fixed point to the IR properties of the corresponding landscape <cit.>. The authors would like to thank I. Basile, G. N. Remmen, C. de Rham, A. Tokareva, and A. Tolley for interesting discussions, and I. Basile for feedback on the manuscript. The research of A.P. is supported by a research grant (VIL60819) from VILLUM FONDEN. A.P. 
also acknowledges support by the Perimeter Institute for Theoretical Physics during the development of this project. B.K. is grateful for the hospitality of the Perimeter Institute, where part of this work was carried out. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities. The research of B.K. is supported by Nordita. A.P. is grateful to Nordita for support within the “Nordita Distinguished Visitors” program and for hospitality during the early stages of this work. Nordita is supported in part by NordForsk. § PHOTON-GRAVITON FLOWS: SETUP AND DEFINITIONS In this section, we review the framework of the FRG and briefly discuss its modifications when accounting for field redefinitions. Afterward, we motivate our ansatz (<ref>) for the effective action, and give details on the chosen gauge fixing and regularization. §.§ The Functional Renormalization Group The FRG is a powerful theoretical framework that is extensively employed in condensed matter physics, quantum field theory, and beyond <cit.>, as a prescription to evaluate the path integral and to investigate non-perturbative RG flows. Unlike traditional RG techniques, which focus on coarse-graining in momentum space, the FRG operates directly in the space of actions, offering a more versatile approach to study strongly interacting systems. Specifically, the FRG is based on a modification of the effective action by an IR regulator. The latter has the dual purpose of providing a prescription to regularize the path integral and of implementing the Wilsonian idea of shell-by-shell integration of modes. With the regularization implemented, the path integral can be translated into an exact functional differential equation <cit.> for the modified effective action Γ_k, k ∂_k Γ_k = 1/2 STr{ (Γ_k^(2) + ℛ_k)^{-1} k ∂_k ℛ_k } . Here, Γ_k^(2) is the second functional derivative of Γ_k with respect to the underlying fields, ℛ_k is the regulator implementing the successive mode integration, and STr denotes a so-called “super-trace”: a sum over discrete indices and an integration over continuous variables, together with a minus sign for Grassmann-valued fields. By construction, in the limit k→0, we obtain the fully renormalized couplings and thus the corresponding Wilson coefficients. An essential ingredient for theories with gauge symmetries like gravity is the background field formalism. To be able to define a gauge fixing and a regularization, it is necessary to split the metric g into an arbitrary background metric g̅ and fluctuations h about it, g_μν = g̅_μν + h_μν . In our computation, we furthermore restrict ourselves to the so-called background field approximation, where we set h to zero after having computed the two-point function Γ_k^(2). For an overview of how to go beyond this, see <cit.>. Our focus in this work is to compute the landscape of AS, and to evaluate IR constraints like positivity bounds. To simplify the technical setup, we implement field redefinitions to eliminate as many inessential running couplings <cit.> as possible. One minimal way of eliminating inessential couplings within the FRG has been introduced in <cit.>, and is based on an appropriate k-dependent field redefinition at the level of the flow equation, k ∂_k Γ_k + Ψ_k Γ_k^(1) = 1/2 STr{ (Γ_k^(2) + ℛ_k)^{-1} [ k ∂_k + 2 Ψ_k^(1) ] ℛ_k } . 
This equation is based on a more general flow equation <cit.>, see <cit.> for earlier applications. The modification resides in the RG kernel Ψ_k, which is the expectation value of the flow of the redefined microscopic field. In practice, an ansatz is chosen for Ψ_k, and in principle one has to check a posteriori whether this indeed corresponds to a well-defined field redefinition. Couplings that can be removed by Ψ_k are inessential, whereas all other couplings are the essential ones. The latter are also the only ones that can appear in observables like scattering amplitudes. The “essential” version of the FRG flow has been applied widely, see <cit.>. A critical discussion is given in <cit.>. §.§ Action, gauge fixing, regularization and field redefinitions The flow equations (<ref>) and (<ref>) can usually not be solved exactly for a generic Γ_k. The standard approach is to specify symmetry, field content, an ordering criterion for the operators, and a truncation order that is typically dictated by the computational limitations. The elimination of inessential couplings via the flow equation (<ref>) is a powerful instrument to push both truncation order and number of non-minimally coupled fields beyond previous computational limitations. In our work, we employ a derivative expansion of the effective action, and we include all essential couplings in QG coupled to an Abelian gauge field with up to four derivatives. To this order, the essential part of the action reads Γ_k = ∫ d^4x √(g) [ 1/(16π G_N) ( 2Λ - R ) + G_F 𝒪_F ] + ∫ d^4x √(g) [ G_+ 𝒪_+ + G_- 𝒪_- + G_CFF C^μνρσ F_μν F_ρσ ] , where, as usual, all couplings depend on the scale k, and the operators 𝒪_F, 𝒪_+, and 𝒪_- are those introduced in (<ref>). Compared to the action in (<ref>), the above effective action includes a cosmological constant. Although the latter is the lowest-order operator in a derivative expansion, observationally Λ G_N ≪ 1, and thus, in a first approximation, we will consider Λ = 0 in the limit k→0. Based on these considerations, and following the argumentation of <cit.>, we use the field redefinition to fix λ to be proportional to g, entailing a vanishing dimensionful cosmological constant for the effective action (<ref>). Since our system admits gauge symmetries, the effective action has to be complemented by a gauge fixing term. We employ the harmonic gauge in both sectors, so that Γ_gf = ∫ d^4x √(g̅) [ 1/(32π G_N) (D̅^α h_μα - 1/2 D̅_μ h)(D̅_β h^μβ - 1/2 D̅^μ h) + 1/2 (D̅^μ a_μ)^2 ] . This gives rise to a Faddeev–Popov ghost action of the form Γ_c = 1/√(G_N) ∫ d^4x √(g̅) c̅_μ [ Δ̅ δ^μ_ν - R̅^μ_ν ] c^ν + ∫ d^4x √(g̅) b̅ Δ̅ b , where Δ̅ = -D̅^2. The next ingredient that we need to compute the RG flow is the regulator. For this, we follow the argumentation of <cit.> and introduce the “natural” endomorphisms in all sectors. This entails ℛ_k^h = 1/(32π G_N) [ r_k(Δ̅_2) Π^TL - r_k(Δ̅) Π^Tr ] , ℛ_k^a = r_k(Δ̅_a) 𝟙 , ℛ_k^c = r_k(Δ̅_c) 𝟙 , ℛ_k^b = r_k(Δ̅) , where Π^TL and Π^Tr denote the traceless and trace projectors, respectively, and we have identified all regulator shape functions r_k for simplicity. Eventually, we employed the Litim regulator <cit.>, r_k(x) = (k^2 - x) θ(1 - x/k^2) . The operators used in the regulators read (Δ̅_2)_μν^ρσ = ( Δ̅ + 2/3 R̅ ) Π^TL_μν^ρσ - 2 C̅_(μ^ρ_ν)^σ , (Δ̅_a)_μ^ν = Δ̅ δ_μ^ν + R̅_μ^ν , (Δ̅_c)_μ^ν = Δ̅ δ_μ^ν - R̅_μ^ν . The final ingredient for the RG flow of essential couplings is the specification of the RG kernel. Since we have two different fields, we can perform a field redefinition in the combined field space. 
At the order that we consider here, the most general corresponding RG kernels are Ψ^g_μν = γ_g g_μν + γ_R R g_μν + γ_S S_μν + γ_F^2 F_ρσ F^ρσ g_μν + γ_F^2^TL ( F_μα F^α_ν - 1/4 F_ρσ F^ρσ g_μν ) , Ψ^a_μ = γ_a a_μ + γ_DF D^α F_μα . Here, we set up the metric RG kernel in a way that splits it into trace and traceless parts, which disentangles the equations for the gamma functions maximally. To read off the beta and gamma functions, we complete the monomials in our action by the following invariants to form a basis: R^2 , S^μν S_μν , F^μν Δ F_μν , R F^μν F_μν , S^μν F_μα F^α_ν . This completes the discussion of the setup. With these ingredients as a starting point, the computation of the RG flow has been performed with the help of the Mathematica package xAct <cit.> and a well-tested code <cit.>. The complete set of beta functions can be found in the accompanying notebook <cit.>. § ANALYTIC WILSON COEFFICIENT IN THE PURE MATTER THEORY The analytic expression for the Wilson coefficient given in (<ref>) is G_CFF/√(-G_+) = -√(a_1/π) Γ(a_2)/Γ(a_2+1/2) [ _2F_1( 1/2, a_3, a_2+1/2 | z ) + a_4 _2F_1( 1/2, a_3+1, a_2+3/2 | z ) ] , where the numbers a_1, a_2, a_3, a_4 and z are roots of low-order polynomials that can be obtained as follows. The number a_1 is the real root of the polynomial 10793861 + 18949368102912 x + 30552729884565700608 x^2 - 1619614712642678776922112 x^3 - 9100076856720841554681397248 x^4 - 16902672123436474482083424632832 x^5 + 3161447767348821628323748676370432 x^6 near the point a_1 ≈ 0.0000177. For a_2 and a_3, we define the polynomial 413044310016 - 10879053926400 x + 158650842599040 x^2 - 1291967177365900 x^3 + 4894717105389125 x^4 - 8447086388113750 x^5 + 5813784153762500 x^6 . The number a_2 is then the real root of this polynomial near a_2 ≈ 0.113, whereas a_3 is minus the other real root of this polynomial, a_3 ≈ -0.313. Next, the number a_4 is the real root of the polynomial 307642010808877056 + 179837560477922623488 x + 48915980589977271545856 x^2 + 5843687635359647850841088 x^3 + 826964630953265786764096 x^4 + 39539603437261811380528 x^5 + 646308740102823594591 x^6 near the point a_4 ≈ -0.00395. Finally, the number z is the real root of the polynomial 5967 - 7673436 x + 3561700932 x^2 - 554287751986 x^3 + 3561700932 x^4 - 7673436 x^5 + 5967 x^6 close to z ≈ 0.00303.
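As a sanity check on the numbers quoted above, the constants can be recovered by arbitrary-precision root finding; a sketch using Python's mpmath (coefficient lists as in the text, in ascending powers, reversed for polyroots; high working precision guards against the large coefficient spread):

from mpmath import mp, mpf, polyroots

mp.dps = 60

def real_root_near(ascending, seed):
    # polyroots expects coefficients in descending powers
    roots = polyroots(list(reversed(ascending)), maxsteps=200)
    reals = [mp.re(r) for r in roots if abs(mp.im(r)) < mpf('1e-30')]
    return min(reals, key=lambda r: abs(r - seed))

p_z = [5967, -7673436, 3561700932, -554287751986,
       3561700932, -7673436, 5967]
print(real_root_near(p_z, mpf('0.00303')))   # z ~ 0.00303

The remaining constants a_1, ..., a_4 follow analogously from their respective polynomials and seeds.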
arXiv:2405.10282v1 [quant-ph], 16 May 2024
GKLS Vector Field Dynamics for Gaussian States
Hans Cruz-Prado, Octavio Castaños, Giuseppe Marmo, Francisco Nettel
Hans Cruz-Prado (hans@ciencias.unam.mx), Departamento de Física, Facultad de Ciencias, Universidad Nacional Autónoma de México, A. P. 70543, Ciudad de México, 04510 México. Octavio Castaños (ocasta@nucleares.unam.mx), Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, A. P. 70543, Ciudad de México 04510, México; on sabbatical leave at the Universidad de Granada, España. Giuseppe Marmo (marmo@na.infn.it), INFN-Sezione di Napoli, Complesso Universitario di Monte S. Angelo Edificio 6, via Cintia, 80126 Napoli, Italy, and Dipartimento di Fisica “E. Pancini”, Università di Napoli Federico II, Complesso Universitario di Monte S. Angelo Edificio 6, via Cintia, 80126 Napoli, Italy. Francisco Nettel (fnettel@ciencias.unam.mx), Departamento de Física, Facultad de Ciencias, Universidad Nacional Autónoma de México, A. P. 70543, Ciudad de México, 04510 México. Abstract: We construct the vector field associated with the GKLS generator for systems described by Gaussian states. This vector field is defined on the dual space of the algebra of operators, restricted to operators quadratic in position and momentum. It is shown that the GKLS dynamics accepts a decomposition principle, that is, this vector field can be decomposed into three parts: a conservative Hamiltonian component, a gradient-like vector field, and a Choi–Kraus vector field. The last two terms are considered a “perturbation” associated with dissipation. Examples are presented for a harmonic oscillator with different dissipation terms. Keywords: Open quantum systems; Quantum dissipative dynamics; Gaussian states; Non-unitary evolution; Statistical mixed states. § MOTIVATION AND PREVIOUS WORKS The geometric formulation of quantum mechanics makes use of structures and methods used in classical dynamics and, to a lesser extent, of those appearing in the theory of general relativity, suggesting a powerful framework for approaching both the conceptual and the mathematical foundations of quantum mechanics. Such a geometric approach consists in describing all the fundamental properties of the quantum theory through the use of geometric structures on an appropriate classical manifold. This perspective of the quantum theory was formally introduced by the pioneering works of Strocchi <cit.>, Cantoni <cit.>, Kramer–Saraceno <cit.> and Cirelli–Lanzavecchia–Mania <cit.>. Strocchi realized that the space of expectation values for Hamiltonian systems quadratic in position and momentum allows one to introduce a set of complex coordinates and to obtain a “classical” form of quantum mechanics, establishing a direct connection between Poisson brackets and commutators, as well as between canonical transformations that preserve the Jordan product and the unitary representation of operators. Following the idea of a “classical” form of quantum mechanics proposed by Strocchi, it was soon recognized that for Hamiltonian systems with a group structure G, i.e., when the Hamiltonian is written in terms of the generators of a Lie algebra 𝔤_G, it is possible to define a dual Lie algebra 𝔤_G^∗ of linear functions which are related to the observables associated to the operators in 𝔤_G; see Ref. <cit.> and references therein. The operator basis satisfies a set of commutation relations, while the linear functions satisfy a set of corresponding Poisson brackets. 
The geometry of the dual Lie algebra 𝔤^∗_G is described by the symplectic structures on orbits under G, which are characterized as submanifolds of 𝔤^∗_G given by the quotient space G/H_s, with H_s the stability group of the fiducial state. By considering coherent states of the Lie group G, one arrives at symplectic submanifolds to describe the corresponding “classical” Hamiltonian evolution, which in general is integrable. On the other hand, the geometrical study of quantum mechanics by Cirelli–Lanzavecchia–Mania is motivated by the physically equivalent description of a quantum system by means of the Hilbert space ℋ or by its projective Hilbert space ℙℋ. Let us recall that Hilbert spaces were introduced by Dirac as a consequence of the linear superposition principle for wave functions satisfying the interference phenomena <cit.>; however, the arbitrary global phase of the wave functions leads to the ray concept (an equivalence class of wave functions) and the definition of the projective Hilbert space ℙℋ. Then, in Ref. <cit.> it was shown that the projective space of a complex Hilbert space is a Kähler manifold and also the set of pure states of the von Neumann algebra, thus showing a link between these structures. This natural Kähler structure is defined by a symplectic form, a Riemannian metric called the Fubini–Study metric and a complex structure on the manifold of pure states. Nowadays, there is a growing interest in the geometric description of open quantum systems and their dynamics; specifically, there have been great advances in the geometric study of the general master equation governing the Markovian dynamics of finite quantum systems <cit.>. From here onward, we restrict our considerations to finite-dimensional Hilbert spaces. Following <cit.>, we can establish the kinematics for the n-level quantum systems. In this case, we are dealing with systems whose observables are elements of a finite C^∗-algebra, i.e., being ℳ_n(ℂ) the C^∗-algebra of (n × n) complex matrices, the observables are elements of the finite algebra 𝒜_n = ℳ_n(ℂ), for n ≥ 2. Thus, the space of observables 𝒪_n is identified with the subset of 𝒜_n consisting of self-adjoint elements, i.e., 𝒪_n ⊂ 𝒜_n. In addition, 𝒜_n possesses a natural Hilbert inner product ⟨â, b̂⟩ := Tr{â^† b̂} and hence 𝒜_n is a complex Hilbert space. If 𝒜^∗_n is the dual algebra of 𝒜_n, it is known that for every ξ ∈ 𝒜^∗_n there is a unique ξ̂ ∈ 𝒜_n such that ξ(â) = ⟨ξ̂, â⟩ for all â ∈ 𝒜_n; then, the space of states 𝒮 of 𝒜_n is defined as 𝒮 := {ρ ∈ 𝒪_n^∗ ⊂ 𝒜_n^∗ | ρ(â â^†) ≥ 0 , ρ(𝕀̂) = 1} , where 𝕀̂ ∈ 𝒜_n is the identity operator and â ∈ 𝒜_n. Therefore, for each quantum state ξ ≡ ρ ∈ 𝒪_n^∗ there is a corresponding ρ̂ ∈ 𝒪_n defined as the self-adjoint semi-positive matrix such that Tr{ρ̂ 𝕀̂} = 1, meaning that ρ̂ is a density operator. Furthermore, from the one-to-one correspondence between elements of 𝒪_n^∗ and elements of 𝒪_n, it follows that 𝒮 may be decomposed as 𝒮 = ⋃_{r=1}^n 𝒮_r , with 𝒮_r = {ρ ∈ 𝒮 | rank(ρ) = r } , where rank(ρ) denotes the rank of ρ, defined as the matrix rank of the density matrix ρ̂ ∈ 𝒪_n. Then, as has been shown in Refs. <cit.>, 𝒮_r is a homogeneous space of the Lie group SL(𝒜_n) := { g ∈ 𝒜_n | det(g) = 1} , and therefore is a differential manifold; in those references, it is also proved that on each 𝒮_r there is an action of the compact Lie subgroup SU(𝒜_n) := {𝐔 ∈ 𝒜_n | 𝐔𝐔^† = 𝕀̂, det(𝐔) = 1} , where SU(𝒜_n) ⊂ SL(𝒜_n), and the orbits of this action are known in quantum theory as isospectral manifolds. In particular, the manifold of pure states, i.e. 
rank(ρ) = 1, turns out to be a homogeneous space for SU(𝒜_n) and SL(𝒜_n). Once the manifold of states is characterized, relevant geometric structures can be introduced. Each isospectral manifold is endowed with a Kähler structure; thus, through the symplectic form, it is possible to define a Hamiltonian dynamics on these manifolds which corresponds to the unitary evolution, see Refs. <cit.>. In this work, we will introduce more general geometric structures on 𝒮, so that a more general dynamical evolution can be defined than the symplectic one. This new dynamics is defined on the entire manifold of states, hence it represents the evolution of open quantum systems: for instance, the dynamics associated with the Markovian evolution introduced in Refs. <cit.> by means of the Gorini–Kossakowski–Lindblad–Sudarshan (GKLS) master equation L(ρ̂) = - (ı/ħ) [Ĥ, ρ̂]_- - (1/2) ∑^3_j [ v̂^†_j v̂_j , ρ̂ ]_+ + ∑^3_j v̂_j ρ̂ v̂^†_j , where ρ̂ is the density operator associated to a quantum state ρ ∈ 𝒮, Ĥ ∈ 𝒪_n is the Hamiltonian operator and v̂_j ∈ 𝒜_n is an operator introducing dissipation into the system; here, [·, ·]_- and [·, ·]_+ denote the commutator and the anti-commutator in 𝒜_n. The geometric description of the GKLS master equation means to describe the GKLS equation of motion through a vector field Γ in the affine space 𝔗_n^1 ⊂ 𝒜^∗_n defined as 𝔗_n^1 = {ξ ∈ 𝒜_n^∗ | Tr{ξ̂ 𝕀̂} = 1 } . From the Lie–Jordan algebra structure of 𝒪_n it is shown in Ref. <cit.> that the affine vector field Γ can be decomposed as Γ = X_H + Y_b + Z_𝒦 , where X_H is the Hamiltonian vector field associated with the first term of the GKLS generator in (<ref>), and Y_b + Z_𝒦 is related to the dissipative part of the GKLS generator given by the second and third terms in (<ref>). Then, the Hamiltonian vector fields are tangent to 𝔗_n^1; more precisely, they are tangent to the manifolds of quantum states with fixed rank. On the other hand, the vector field Y_b + Z_𝒦 generates a dynamical evolution that changes the rank and the spectrum of the density matrix, that is, it represents a dissipation term in the dynamics. There are two types of systems as regards the encoding of quantum information: i) systems with a finite spectrum, i.e., in the form of q-bits or q-dits, and ii) systems with an infinite spectrum, such as those given in the form of the position or momentum representations. For instance, one may consider Gaussian states, which emerge naturally for Hamiltonians quadratic in the position and momentum variables <cit.>, or those associated with Hamiltonians which are described employing algebraic structures, whose states are called generalized coherent states <cit.>. All of them, even the non-linear coherent states <cit.>, constitute examples of quantum states whose properties may be described by finite-dimensional smooth manifolds. In particular, the generalized coherent states, solutions to the Schrödinger equation for Hamiltonians quadratic in the position and momentum operators, have been extensively studied due to their broad application in quantum optics and quantum technologies. In these cases, one may express the density matrix in the position representation in the general form ⟨ q' | ρ̂ | q ⟩ = (1/√(2 π σ^2_q)) exp{ (1/(2 σ^2_q)) [ (σ_qp/ħ)(q - q') - (ı/2)( q + q' - 2 ⟨q̂⟩ ) ]^2 - (σ^2_p/(2 ħ^2)) (q - q')^2 + (ı/ħ) ⟨p̂⟩ (q - q') } , which corresponds to a Gaussian density matrix. These normalized positive functionals are parametrized by the first (⟨q̂⟩, ⟨p̂⟩) and second (σ_q^2, σ_p^2, σ_qp) moments. 
The second moments are constrained by the saturated Robertson–Schrödinger uncertainty relation, i.e., σ_q^2 σ_p^2 - σ^2_qp = ħ^2/4. The first and second moments parametrize the space of states, defining a finite-dimensional manifold with a quantum evolution that can be described through a symplectic evolution; for details see Ref. <cit.>. For the Gaussian density matrix given in (<ref>) it is possible to construct the GKLS dynamical vector field Γ defined on the space of parameters, which accepts the same decomposition as the vector field in (<ref>). To see this, notice that the density matrix in (<ref>) can describe states with statistical mixture, as has been remarked in Ref. <cit.>, where the degree of mixing is given by a parameter r ∈ [1, ∞), such that Tr{ρ̂^2} = 1/r and σ_q^2 σ_p^2 - σ^2_qp = ħ^2 r^2/4 . Consequently, for r = 1 we have pure states, and only in this case can the density operator be factorized as ρ̂ = | α⟩⟨α |, where | α⟩ is the so-called generalized coherent state. In this form, the GKLS dynamics acting on the Gaussian states may modify the parameter r, introducing a change in the statistical mixture. In this work, we aim to find the GKLS vector field for Gaussian states following the same procedure presented in <cit.>; nevertheless, it is immediate to notice that the infinite-dimensional representation in (<ref>) for Gaussian states is inappropriate for such a task. In the q-bit case, it is well known that states described by ρ can be mapped to points in a ball 𝐁 of radius one. Then, the GKLS dynamics dictated by the master equation (<ref>) for the operator ρ̂ induces a GKLS vector field Γ which determines the dynamical evolution of states. Therefore, one may notice that deducing the GKLS dynamics for Gaussian states following this procedure is not straightforward, as such states do not possess a finite-dimensional representation. However, an interesting observation can be made: the space of states can be represented in a finite-dimensional space of parameters. To see this, let us restrict our problem to Gaussian states with vanishing first moments; then, the space of parameters is the solid hyperboloid 𝐇 = { (y^1, y^2, y^3) ∈ ℝ^3 | (y^1)^2 - (y^2)^2 - (y^3)^2 ≥ 1 } , where (y^1, y^2, y^3) are related to the second moments by y^1 = (σ^2_p + σ^2_q)/ħ , y^2 = 2 σ_qp/ħ , y^3 = (σ^2_p - σ^2_q)/ħ . Then, there is a clear analogy between the ball 𝐁 for the q-bit states and the solid hyperboloid 𝐇 for Gaussian states, in the sense that quantum states are represented as points in these finite-dimensional manifolds. Furthermore, while on 𝐁 there is a natural action of the SU(2) Lie group, on 𝐇 there is a smooth action of the SL(2,ℝ) Lie group. This fact allows one to define the immersion of the space of (2 × 2)-matrices into the space of parameters 𝐇, i.e., m : ℳ_2 → 𝐇, that is, ξ ↦ (y^1, y^2, y^3), where ξ is an element of the Lie algebra sl(2,ℝ)^∗; note that ξ is not a state. Using this immersion, the evolution in the space of matrices determines the dynamics in the space of parameters and, consequently, for the Gaussian states. As we will show, this allows us to follow the same procedure proposed for the n-level systems presented in <cit.>. 
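A minimal Python sketch of this parameterization, assuming ħ = 1 and hypothetical second moments, maps the moments to a point of 𝐇 and recovers the mixing parameter r:

import numpy as np

hbar = 1.0

def hyperboloid_point(s_q2, s_p2, s_qp):
    # (y1, y2, y3) from the second moments, as in the relations above
    return np.array([(s_p2 + s_q2) / hbar,
                     2.0 * s_qp / hbar,
                     (s_p2 - s_q2) / hbar])

s_q2, s_p2, s_qp = 1.3, 0.9, 0.4          # hypothetical second moments
y = hyperboloid_point(s_q2, s_p2, s_qp)
r2 = y[0]**2 - y[1]**2 - y[2]**2          # = 4 (s_q2 s_p2 - s_qp^2) / hbar^2
r = np.sqrt(r2)                           # r >= 1 for a physical state
print(y, r, 1.0 / r)                      # Tr(rho^2) = 1/r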
Now, to ensure that the dynamics is the one associated to the GKLS generator (<ref>), we restrict our problem to only consider operators which are quadratic in the position and momentum operators, such that they are elements of the 𝔰𝔩(2,ℝ) Lie algebra. As is well known, 𝔰𝔩(2,ℝ) accepts a (2 × 2)-matrix representation, an important feature for our purposes. Then, we claim that the GKLS evolution for ξ̂ can be determined through the master equation L(ξ̂) = - (ı/ħ) [Ĥ, ξ̂]_- - (1/2) ∑^3_j [ v̂^†_j v̂_j , ξ̂ ]_+ + ∑^3_j v̂_j ξ̂ v̂^†_j , where Ĥ and v̂_j are elements of 𝔰𝔩(2,ℝ) and, consequently, they can be represented by (2 × 2)-matrices. Therefore, we can follow the same procedure as for the q-bit system considered in Ref. <cit.>, up to the appropriate modifications, to obtain the GKLS vector field for the Gaussian states. The paper is organized as follows. In Section <ref> we review the case of one q-bit systems, starting with the description of the space of quantum states as a manifold with boundary and establishing its kinematic properties; in particular, we describe its foliation in terms of two-spheres. In Subsection <ref>, we describe the Hamiltonian dynamics on each isospectral submanifold through its Kähler structure. In Subsection <ref>, we analyze the quantum systems from the point of view of observables and use the Lie–Jordan algebra to define the two relevant geometric structures: a skew-symmetric bivector field, associated to the Lie product, which defines a Poisson structure for the space of functions on the dual algebra, and a symmetric bivector field which defines a symmetric product; both are realizations of the algebra on the space of linear functions. In Subsection <ref>, we construct the GKLS vector field and find the decomposition (<ref>); to do so, we perform a reduction procedure and find the Choi–Kraus vector field associated to the completely positive map 𝒦 in (<ref>). In the last subsection, <ref>, we present two examples for a damping process of a two-level system. In Section <ref>, and its subsections, our main results are presented, where the procedure reviewed in Section <ref>, with the appropriate tuning, is used to determine the GKLS vector field for the Gaussian states. It is worth mentioning that adapting the procedure to an infinite-dimensional state space is not straightforward, and some modifications had to be considered. Finally, in Section <ref>, we present our conclusions and some of the lines of research that will be developed in a set of future works. § ON THE KINEMATICS AND DYNAMICS OF DISSIPATIVE ONE Q-BIT SYSTEMS In this section, the GKLS dynamical vector field for one q-bit systems is constructed in detail to exemplify the ideas that we will apply in Section <ref> for the Gaussian density matrices. It is well known that, in general, the quantum space for the q-bit can be immersed in the space of 2 × 2 Hermitian matrices, where the basis may be provided by the Pauli matrices σ_1 = ( [ 0 1; 1 0 ]) , σ_2 = ( [ 0 -ı; ı 0 ]) , σ_3 = ( [ 1 0; 0 -1 ]) , and the identity matrix σ_0 = ( [ 1 0; 0 1 ]) . Thus, an arbitrary density matrix may be expressed as[It is important to mention that here, and in the following, {σ_μ}_μ=0,1,2,3 denotes the basis for 𝒪_2^∗, which is dual to the basis {σ̂^μ}_μ=0,1,2,3 of 𝒪_2, defined in the introduction as the set of self-adjoint operators. This distinction between elements of the algebra and its dual will be highlighted by using a hat to denote operators.] ρ = (1/2) (σ_0 + x^k σ_k) = (1/2) x^μ σ_μ , where k = 1,2,3, μ = 0,1,2,3, and from the normalization condition Tr{ρ̂ σ̂^0} = 1 it follows that x^0 = 1. In this expression, and from here on, Einstein's summation convention over repeated indices is assumed[Throughout the paper Greek indices will run from 0 to 3, while Latin indices will run from 1 to 3.]. 
Thus, for example, every pure quantum state is represented by a point (x^1, x^2, x^3) on the unit sphere, such that x^k = Tr{σ̂^k ρ̂}. Here the purity condition ρ̂^2 = ρ̂ defines the unit sphere S^2 = { (x^1, x^2, x^3) ∈ ℝ^3 | (x^1)^2 + (x^2)^2 + (x^3)^2 = 1 } , which is known as the Bloch sphere. On the other hand, for mixed states one has that the mixture condition Tr{ρ̂^2} < 1 defines the constraint (x^1)^2 + (x^2)^2 + (x^3)^2 < 1 , where the maximally mixed state corresponds to x^1 = x^2 = x^3 = 0. Therefore, a q-bit state may always be represented by a point 𝐱 = (x^1, x^2, x^3) on the solid ball 𝐁 = { (x^1, x^2, x^3) ∈ ℝ^3 | (x^1)^2 + (x^2)^2 + (x^3)^2 ≤ 1 } . In the literature, the points 𝐱 ∈ 𝐁 are called Bloch vectors or polarization vectors <cit.>. The space of 2-level quantum systems is made up of two strata: 𝒮_1, the unit sphere S^2, which is the space of pure states, and 𝒮_2, the open interior of the ball, which is the space of mixed states. Therefore, the quantum space of q-bit systems is a manifold with boundary. As a final remark, let us notice that the manifold 𝐁 is a foliated space, i.e., 𝐁 = ⋃_{r ∈ [0, 1]} ℓ_r , where the leaves of the foliation correspond to ℓ_r = { (x^1, x^2, x^3) ∈ ℝ^3 | (x^1)^2 + (x^2)^2 + (x^3)^2 = r^2, r ≤ 1 } . A schematic picture of this foliation is displayed in Fig. <ref>. It is important to note that we have a singular foliation, i.e., the leaves are not all of the same dimension, having a singular point at the origin. Nevertheless, removing the origin one has a regular foliation given by the family {ℓ_r} of disjoint subsets, with r ∈ (0, 1], where the ℓ_r are the leaves of the foliation, on which a differential structure can be given. In general, this foliation is a consequence of the smooth action of SU(𝒜_2). In the literature, the leaves of this foliation are the so-called manifolds of isospectral states <cit.>. §.§ Dynamical study of one q-bit systems from a state point of view Once we have introduced the manifolds of isospectral states, in this section we proceed to define the symplectic dynamics and the gradient vector field on these manifolds. To do that, let us note that each manifold of isospectral states is endowed with a Kähler structure, i.e., there is a symplectic form ω_r, a Riemannian metric g_r, known as the Fubini–Study metric, and a complex structure J, all of them defined globally on each isospectral manifold <cit.>. Moreover, the symplectic form and the Riemannian metric define a Hamiltonian vector field 𝕏_H and a gradient vector field 𝕐_H. Given a real function f_Ĥ ∈ 𝒪^∗_2 defined as the expectation value of the observable Ĥ ∈ 𝒪_2, that is, f_Ĥ := Tr{ρ̂ Ĥ} , then the Hamiltonian vector field and the gradient vector field are defined intrinsically by ω_r(𝕏_H, ·) = d f_Ĥ and g_r(𝕐_H, ·) = d f_Ĥ , respectively, along with the property J(𝕏_H) = 𝕐_H , which provides an intrinsic definition of the complex structure tensor J. To give the coordinate expression for all these definitions, let us consider the coordinate charts (U_1, ϕ_1), (U_2, ϕ_2) for the leaf ℓ_r, with U_1, U_2 ⊂ ℓ_r and ϕ_1 : U_1 → ℂ : (x^1, x^2, x^3) ↦ z , ϕ_2 : U_2 → ℂ : (x^1, x^2, x^3) ↦ ζ , where the complex parameters z and ζ are given by z = (x^1 - ı x^2)/(r - x^3) , ζ = (x^1 + ı x^2)/(r + x^3) . In this manner, the set { (U_1, ϕ_1), (U_2, ϕ_2) } constitutes an atlas for each leaf ℓ_r. In quantum mechanics these coordinates are employed to describe atomic coherent states <cit.> or spin coherent states <cit.>, see also <cit.>. 
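As a quick numerical illustration of the atlas (a Python sketch with an arbitrary sample point), one can check that on the overlap of the two charts the transition function is ζ = 1/z, and that the inverse projection quoted in the next paragraph reproduces the Cartesian coordinates:

import numpy as np

def charts(x, r):
    x1, x2, x3 = x
    z = (x1 - 1j * x2) / (r - x3)      # chart (U_1, phi_1)
    zeta = (x1 + 1j * x2) / (r + x3)   # chart (U_2, phi_2)
    return z, zeta

r = 0.8
x = np.array([0.3, 0.2, -0.5])
x *= r / np.linalg.norm(x)             # place the point on the leaf ell_r
z, zeta = charts(x, r)
print(np.isclose(zeta, 1.0 / z))       # True: transition function zeta = 1/z

# Round trip through the inverse stereographic projection:
x1 = r * (z + z.conjugate()) / (1.0 + z * z.conjugate())
print(np.isclose(x1.real, x[0]))       # True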
Geometrically, the atlas { (U_1, ϕ_1), (U_2, ϕ_2) } corresponds to the stereographic projection from the “north pole” and the “south pole” of the sphere onto the equatorial plane, respectively. Notice that, using the Cartesian coordinates (x^1, x^2, x^3), we are giving an extrinsic geometric description of the system, in which case one obtains linear equations of motion. On the other hand, the stereographic projection atlas constitutes an intrinsic geometric description and, as we will see, the equations of motion are non-linear. Now, in the coordinate chart (U_1, ϕ_1) the symplectic form and the Riemannian metric are given by ω_r = - (ı ħ r^2/(1 + z̄ z)^2) dz̄ ∧ dz , and g_r = - (ħ r^2/(1 + z̄ z)^2) dz̄ ⊗_s dz , respectively. In these definitions one considers the wedge product dz̄ ∧ dz = dz̄ ⊗ dz - dz ⊗ dz̄ together with the symmetric product dz̄ ⊗_s dz = dz̄ ⊗ dz + dz ⊗ dz̄, while the complex structure has the form J_r = (1/ı)( dz ⊗ ∂/∂z - dz̄ ⊗ ∂/∂z̄ ) . To express the density matrix (<ref>) in this coordinate system, one has to take into account the inverse of the stereographic projection to obtain the relations x^1 = r (z + z̄)/(1 + z z̄) , x^2 = - (r/ı) (z - z̄)/(1 + z z̄) and x^3 = r (z z̄ - 1)/(1 + z z̄) , and then, by direct substitution, the density matrix has the form ρ = 1/(1 + z̄ z) ( [ (1/2)(1 - r) + (1/2)(1 + r) z̄ z r z; r z̄ (1/2)(1 + r) + (1/2)(1 - r) z̄ z ]) , where the dependence on the r-parameter is explicit. From the density matrix expressed in Eq. (<ref>), the expectation value of an arbitrary observable operator can be obtained. In general, a self-adjoint operator can be written as Ĥ = H_μ σ̂^μ; thus, its expectation value corresponds to f_Ĥ := Tr{ρ̂ Ĥ} = H_0 + H_k x^k , and taking into account the transformations in (<ref>), it can be expressed as f_Ĥ = (1/(1 + z̄ z)) [ (H_0 + r H_3) z z̄ + r (H_1 + ı H_2) z + r (H_1 - ı H_2) z̄ + (H_0 - r H_3) ] , where f_Ĥ shows the explicit dependence on the parameter r. The Hamiltonian and gradient vector fields in these coordinates take the form 𝕏_H = 𝕏_z ∂/∂z + 𝕏_z̄ ∂/∂z̄ and 𝕐_H = 𝕐_z ∂/∂z + 𝕐_z̄ ∂/∂z̄ , where the components 𝕏_z̄ and 𝕐_z̄ are the complex conjugates of 𝕏_z and 𝕐_z, respectively. These components may be directly computed by means of the definitions in Eq. (<ref>), obtaining 𝕏_z = - (ı/(ħ r^2)) (1 + z̄ z)^2 ∂f_Ĥ/∂z̄ = (ı/(ħ r)) [ (H_1 + ı H_2) z^2 - 2 H_3 z - (H_1 - ı H_2) ] , and 𝕐_z = - (1/(ħ r^2)) (1 + z̄ z)^2 ∂f_Ĥ/∂z̄ = (1/(ħ r)) [ (H_1 + ı H_2) z^2 - 2 H_3 z - (H_1 - ı H_2) ] . Then, an important consequence of giving an intrinsic description of the manifolds of isospectral states is obtaining a non-linear evolution equation; in our case of interest we have obtained the non-linear Riccati equation ż = (ı/(r ħ)) [ (H_1 + ı H_2) z^2 - 2 H_3 z - (H_1 - ı H_2) ] , as the Hamiltonian evolution of the quantum states; besides, note that this evolution is independent of H_0. On the other hand, the equations of motion in the extrinsic geometric description with coordinates (x^1, x^2, x^3) are given by the system of linear equations ẋ^1 = - (2/(ħ r)) (x^2 H_3 - x^3 H_2) , ẋ^2 = - (2/(ħ r)) (x^3 H_1 - x^1 H_3) , ẋ^3 = - (2/(ħ r)) (x^1 H_2 - x^2 H_1) , whose solutions correspond to the integral curves of the Hamiltonian vector field 𝕏_H = - (2/(ħ r)) ϵ^kj_l x^l H_k ∂/∂x^j , where ϵ^kj_l is the Levi-Civita symbol[The convention for the Levi-Civita symbol is the following: ϵ^kj_l = 1 if (k,j,l) is (1,2,3), (2,3,1), (3,1,2); -1 if (k,j,l) is (2,1,3), (3,2,1), (1,3,2); and 0 if k=j, j=l, or l=k.]. 
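The coordinate expression for 𝕏_z can be checked symbolically; a short sympy sketch verifying that the derivative formula reproduces the Riccati right-hand side (z and z̄ treated as independent variables, as usual):

import sympy as sp

z, zb = sp.symbols('z zb')
r, hbar = sp.symbols('r hbar', positive=True)
H0, H1, H2, H3 = sp.symbols('H0 H1 H2 H3', real=True)

# Expectation value f_H in the chart (U_1, phi_1)
f_H = ((H0 + r * H3) * z * zb + r * (H1 + sp.I * H2) * z
       + r * (H1 - sp.I * H2) * zb + (H0 - r * H3)) / (1 + z * zb)

# X_z = -(i/(hbar r^2)) (1 + zb z)^2 d f_H / d zb
X_z = -sp.I / (hbar * r**2) * (1 + z * zb)**2 * sp.diff(f_H, zb)
riccati = sp.I / (hbar * r) * ((H1 + sp.I * H2) * z**2
                               - 2 * H3 * z - (H1 - sp.I * H2))
print(sp.simplify(X_z - riccati))   # 0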
Alternatively, the evolution of the quantum system can be described directly by the so-called Poisson brackets and Jordan brackets on the manifolds of isospectral states <cit.>. Given the expectation values f_â and f_b̂ associated to the quantum observables â and b̂, one can define the Poisson brackets and Jordan brackets through the relations { f_â , f_b̂}_ω = ω_ℓ_r(X_b , X_a ) , ⟨⟨ f_â , f_b̂⟩⟩_g = g_ℓ_r(Y_a, Y_b ) , respectively, where these brackets satisfy the relations[Notice that in the definition of the Hamiltonian vector field we are using the convention ω(X_H, ·) = dH , which has a minus sign with respect to the more common choice in Classical Mechanics, i.e., ω(X_H, ·) = - dH , as is taken, for instance, in <cit.>, but we follow the same definition of the Poisson bracket from the symplectic form, {f, g} = ω(X_g, X_f) = -ω(X_f, X_g) . ] { f_â, f_b̂}_ω = - 1/r f_[[ â, b̂ ]] ⟨⟨ f_â , f_b̂⟩⟩_g = - f_â⊙b̂ + 2/ħ r^2 f_â f_b̂ , with [[ â, b̂ ]] = ı/ħ( â b̂ - b̂ â ) and â⊙b̂ = 1/ħ( â b̂ + b̂ â ) , defining the Lie and Jordan products, respectively. Using the Poisson and the Jordan brackets means that we are describing the quantum systems from the point of view of the observables f_â as the primary objects; hence states and dynamics are derived from them. The observables in quantum mechanics constitute a Lie–Jordan algebra (_n, ⊙, [[·,·]]) where the products satisfy the following compatibility conditions [[ â, b̂⊙ĉ ]] = [[ â, b̂ ]] ⊙ĉ + b̂⊙ [[ â,ĉ ]] , and (â⊙b̂) ⊙ĉ - â⊙ (b̂⊙ĉ) = [[ b̂, [[ ĉ , â]] ]] . Thus, with respect to the Lie algebra structure, observables appear as infinitesimal generators of one-parameter groups of transformations, and with respect to the Jordan structure observables appear as measurable quantities with outcomes given by real numbers, more specifically, probability measures on the real line. Including r as a dynamical variable means that the differential manifold of quantum states is odd-dimensional; therefore, a symplectic form cannot be defined. Nevertheless, this alternative description in terms of a Poisson structure allows us to extend our definition of the Hamiltonian vector field to all of 𝐁. Furthermore, as we will see in the following section, from this perspective we have a direct route to the GKLS dynamics. To conclude this subsection, let us introduce two important geometrical objects: the skew-symmetric bivector Λ_ℓ_r and the symmetric bivector G_ℓ_r, which in terms of the stereographic coordinates take the form Λ_ℓ_r = - ı/r^2 ħ(1 + z̅ z)^2 ∂/∂z̅∧∂/∂ z and G_ℓ_r = - 1/r^2 ħ(1 + z̅ z)^2 ∂/∂z̅⊗_∂/∂ z , such that the Poisson and the Jordan brackets may be defined as { f_â, f_b̂}_ω = Λ_ℓ_r ( df_â , df_b̂) ⟨⟨ f_â, f_b̂⟩⟩_g = G_ℓ_r ( df_â , df_b̂ ) , respectively. These geometrical objects will be relevant in the GKLS evolution.

§.§ Dynamical study of one q-bit systems from an observable point of view

The symplectic dynamics and the gradient vector field in (<ref>) are tangent to the manifolds of isospectral states, i.e., the quantum states resulting from the evolution of states with initial conditions on ℓ_r remain on such manifolds; however, the dynamical evolution given by the GKLS master equation changes the rank and the spectrum of quantum states, i.e., the description of dissipative phenomena leads us to consider an evolution that is transversal to the leaves {ℓ_r }. It is important to mention that the dynamical evolution for any initial conditions must be constrained to the space of quantum states.
The evolution of a quantum state determined by the GKLS master equation is a non-unitary, completely positive and trace-preserving evolution of a quantum system. As was mentioned in the introduction, the geometrical formulation of the dynamics of open quantum systems generated by the GKLS equation is given by an affine vector field Γ, which may be decomposed as Γ = X_H + Y_b + Z_𝒦, where X_H is a Hamiltonian vector field on 𝐁 which describes a conservative dynamics, and the term Y_b + Z_𝒦 is a perturbation term that determines the dissipative part of the dynamics. The two latter vector fields produce different effects in the dynamical evolution of the quantum states. On the one hand, Y_b is a gradient-like vector field whose flow changes the spectrum but preserves the rank of the density matrix ρ; on the other hand, Z_𝒦 is responsible for the change of rank, that is, only through Z_𝒦 can the statistical mixture of the initial state change. To determine the vector field Γ we need to extend the Hamiltonian vector field to the whole ball 𝐁 and introduce the concept of a gradient-like vector field. We will adopt the point of view of observables as the primary objects from which the dynamics is obtained; as a starting point, we establish the Lie–Jordan algebra structure (_2 , ⊙, [[·,·]]) of the space of observables from the Lie and Jordan products in _2. To every element â∈_2 there corresponds a linear function f̃_â on _2^∗ given by f̃_â(ρ) := {â ρ̂}, where ρ̂ is the density matrix. Conversely, any linear function f̃_â∈^∗_2 maps to an element â∈_2. Therefore, the space of linear functions on ^∗_2, together with the products defined as {f̃_â, f̃_b̂} = f̃_[[ â, b̂ ]] ⟨⟨f̃_â, f̃_b̂⟩⟩= f̃_â⊙b̂ , constitutes a realization of the Lie–Jordan algebra (_2,⊙, [[·, ·]]). From this algebraic structure, it is possible to define symmetric and skew-symmetric contravariant tensor fields (bivectors) on ^∗_2. The skew-symmetric bivector Λ̃ and the symmetric bivector G̃ are uniquely determined by their linear action on the one-forms df̃_â, which at each point in ^∗_2 are elements of the cotangent space, i.e., Λ̃(df̃_â, df̃_b̂) := f̃_ [[â , b̂]] and G̃(df̃_â, df̃_b̂) := f̃_â⊙b̂ . Hence, Hamiltonian and gradient-like vector fields can be defined as X̃_H = Λ̃( df̃_Ĥ, · ) and Ỹ_b = G̃( df̃_b̂, · ) , respectively. Notice that the bivectors Λ̃ and G̃ in (<ref>) are different from the bivectors Λ_ℓ_r and G_ℓ_r defined in (<ref>): the latter are defined on a single isospectral manifold with fixed r, while Λ̃ and G̃ are defined for any linear function in the dual space ^∗_2 with r variable. This procedure allows us to define a Hamiltonian vector field X̃_H by means of the Poisson bivector Λ̃ deduced from the Lie algebra structure of _2, which is less restrictive and more general than the definition of a Hamiltonian vector field in terms of the symplectic form. Moreover, the Jordan product allows us to introduce the gradient-like vector field Ỹ_b. We can introduce Cartesian coordinates {x^μ} associated to the basis {σ̂^μ} of _2 by the mapping (<ref>); then the coordinate functions on ^∗_2 are x^μ := f_σ̂^μ = {σ̂^μρ̂} .
The tensor fields (<ref>) in this coordinate basis are Λ̃ = c_η^μν x^η∂/∂ x^μ∧∂/∂ x^ν and G̃ = d_η^μν x^η∂/∂ x^μ⊗∂/∂ x^ν , where the structure constants c_η^μν and d_η^μν are defined uniquely by the Lie and the Jordan products, i.e., for the Lie product we have [[σ̂^μ, σ̂^ν ]] = c_η^μνσ̂^η where c_η^μν = 0 for c_η^0 ν, c_η^μ 0, c_0^μν , - 2/ħϵ^kj_l for k, j, l = 1,2,3 , where ϵ^kj_l is the Levi-Civita symbol. For the Jordan product we have σ̂^μ⊙σ̂^ν = d_η^μνσ̂^η with d_η^μν = 0 for d_l^0 0 and d_l^k j , 2/ħδ^μν for d_0^μν , 2/ħδ^μ_η for d_η^μ 0 = d_η^0 μ , where δ^μν and δ^μ_η are Kronecker delta functions. Let us now compute the coordinate expressions for the Hamiltonian and the gradient-like vector fields using the definitions in (<ref>); that is, considering the expectation values f̃_Ĥ = H_μ x^μ and f̃_b̂ = b_μ x^μ we obtain X̃_H = c_σ^μν H_μ x^σ∂/∂ x^ν and Ỹ_b = d_σ^μν b_μ x^σ∂/∂ x^ν , respectively. Because the dynamical trajectories of quantum states under the action of Γ in Eq. (<ref>) must remain in the space of physical states, it is necessary to further constrain the manifold on which the Hamiltonian and gradient-like vector fields defined by Λ̃ and G̃ act. Thus, we must consider the affine subspace _2^1 = {ξ∈_2^∗ | {ξ̂ 𝕀} = 1 } , that is, all those elements in _2^∗ with x^0 = 1. This fact allows us to introduce the canonical immersion i : _2^1 →_2^∗, such that the pullback i^∗f̃_â = f_â = a_0 + a_k x^k of a linear function f̃_â = a_μ x^μ associated to â∈_2 is an affine function on _2^1. Consequently, we can define symmetric and skew-symmetric tensor fields Λ and G on _2^1 through a reduction procedure for the bivectors Λ̃ and G̃. Then, as has been pointed out in Ref. <cit.>, the algebra ℱ(_2^1) of functions on the affine space _2^1 may be identified with the quotient space ℱ(_2^∗)/ℐ, where ℱ(_2^∗) is the algebra of smooth functions on _2^∗ and ℐ is the closed linear subspace of smooth functions vanishing on the affine space. The quotient space ℱ(_2^∗)/ℐ≅ℱ(_2^1) inherits the vector space structure from ℱ(_2^∗). For ℱ(_2^1) to also inherit the algebra structure with respect to the relevant algebraic product, the subspace ℐ must be an ideal of ℱ(_2^∗). For the Poisson product the reduction is straightforward. Considering f̃_b̂ = (1 - x^0) b_k x^k in ℐ, it can be shown by direct calculation that for an arbitrary f̃_â∈ℱ(_2^∗), the realization of the Poisson bracket through the bivector Λ̃ gives Λ̃(df̃_â, df̃_b̂ ) = (1-x^0) c^k l_j x^j(a_k b_l -b_k a_l) . Thus, Λ̃(df̃_â, df̃_b̂ ) vanishes on _2^1, meaning that the Poisson product defined by (<ref>) and (<ref>) is such that {f̃_b̂, f̃_â}∈ℐ for f̃_b̂∈ℐ; therefore, ℐ is an ideal of ℱ(_2^∗). Then, reducing the bivector field Λ̃ we can find a bivector field Λ that permits us to define the Poisson product on the affine space _2^1. To perform the reduction we choose x^k = f_σ̂^k as coordinates on _2^1; their differentials form a basis for the cotangent space at each point of _2^1 and, through the pullback induced by the immersion i, we have the reduced bivector Λ on _2^1 given by Λ(df_σ̂^k, df_σ̂^l) := f_ [[ σ̂^k , σ̂^l]] = c^kl_j f_σ̂^j . Thus, the explicit expression of Λ in Cartesian coordinates is Λ = - 2/ħ ϵ^kj_l x^l ∂/∂ x^k∧∂/∂ x^j . Using this Poisson bivector Λ, it is possible to define the Hamiltonian vector field associated with the linear function f_Ĥ = H_0 + H_k x^k by X_H := Λ( df_Ĥ , · ) = - 2/ħ ϵ^kj_l H_k x^l ∂/∂ x^j .
In addition, it is direct to note that the function f_Ĉ = (f_σ̂^1)^2 + (f_σ̂^2)^2 + (f_σ̂^3)^2 = (x^1)^2 + (x^2)^2 + (x^3)^2 is a constant of motion, as Λ( df_Ĥ , df_Ĉ) = 0. This implies that the Hamiltonian vector field is tangent to the spheres defined by f_Ĉ = r^2. In this case, the affine space _2^1 actually corresponds to the foliated ball 𝐁, already defined in Eq. (<ref>), whose center has been fixed at the origin of the Cartesian space ^3. The gradient-like vector field Ỹ_b defined on ^∗_2 from the bivector G̃ in (<ref>) has integral curves that do not preserve the trace of ξ; therefore, starting with initial conditions on _2 ^1, the dynamics provided by Ỹ_b may lead to non-physical states with Tr{ξ̂ 𝕀}≠ 1. In particular, starting on the boundary of _2^1 (the boundary of 𝐁), the integral curves may end up outside of 𝐁. In order to avoid this behaviour, we proceed to perform the reduction of the symmetrical bivector G̃. For f̃_â = a_μ x^μ∈^∗ and f̃_b̂ =(1 - x^0) b_k x^k ∈ℐ, it is not difficult to find that G̃(df̃_â, df̃_b̂ ) = 2/ħ (1-x^0) (x^0 a_k b^k + a_0 b_k x^k) - 2/ħ a_0 x^0 b_k x^k - 2/ħ a_k x^k b_l x^l , which does not vanish on _2^1, meaning that the Jordan product ⟨⟨f̃_â, f̃_b̂⟩⟩ defined in (<ref>) and (<ref>) is not an element of the ideal ℐ. To amend this problem we must modify G̃ to obtain a symmetrical bivector field whose associated product makes the affine closed subspace ℐ an ideal of ℱ(_2^1). However, in doing so, we must give up having a Jordan product realized in the space of linear functions. Thus, the modified symmetric bivector field is given by <cit.> ℛ̃ = G̃ - 2/ħΔ̃⊗Δ̃ , where Δ̃ = x^μ∂/∂ x^μ is the Euler–Liouville vector field. Then, it can be verified that ℛ̃(df̃_â, df̃_b̂ ) = 2/ħ(1-x^0 ) (x^0 δ^kl a_k b_l + (1 - 2 x^0) a^0 b_k x^k - 2 a_k x^k b_l x^l ) ∈ℐ . Hence, taking into account the pullback from the immersion map i, it is straightforward to obtain the reduced symmetric tensor field ℛ as ℛ(df_σ̂^k, df_σ̂^l) := f_σ̂^k ⊙σ̂^l - 2/ħ f_σ̂^k f_σ̂^l , where it can be seen that ℛ does not lead to a Jordan product realization in the space of linear functions ℱ(_2^1). The expression of this symmetric bivector field in Cartesian coordinates is ℛ = 2/ħ( δ^kl∂/∂ x^k⊗∂/∂ x^l - Δ⊗Δ) = 2/ħ( δ^kl - x^k x^l ) ∂/∂ x^k⊗∂/∂ x^l , where Δ = x^k ∂/∂ x^k is the Euler–Liouville vector field on the affine space. Now that the vector space and the algebraic structure are compatible, the gradient-like vector field Y_b associated with the linear function f_b̂ = b_0 + b_k x^k is defined as Y_b := ℛ( df_b̂ , · ) = 2/ħ( δ^kl - x^k x^l ) b_k ∂/∂ x^l . Note that this gradient-like vector field is quadratic in the Cartesian coordinate system adapted to _2^1. To analyze the behavior of the gradient-like vector field, one may compute the Lie derivative of r along the direction of the vector field Y_b, i.e., _Y_b r = Y_b(r), to obtain that _Y_b r = 2/ħ r ( 1 - r^2 ) b_k x^k . Then, the Lie derivative is different from zero if and only if r ≠ 1; consequently, the gradient-like vector field is transversal to the leaves ℓ_r for r≠ 1 and is tangent only to the unit sphere, which is the boundary of 𝐁. This fact allows us to observe that Y_b does not change the rank of the density matrix at the boundary, because a pure state remains pure under its evolution.
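The transversality property just derived can also be verified symbolically; the following sketch (a minimal check, with symbolic b_k and ħ) computes the Lie derivative _Y_b r for the gradient-like field Y_b and compares it with the closed-form expression above.

```python
import sympy as sp

x1, x2, x3, b1, b2, b3 = sp.symbols('x1 x2 x3 b1 b2 b3', real=True)
hbar = sp.symbols('hbar', positive=True)
x = sp.Matrix([x1, x2, x3]); b = sp.Matrix([b1, b2, b3])
r = sp.sqrt(x.dot(x))

# components of Y_b = (2/hbar)(delta^{kl} - x^k x^l) b_k d/dx^l
Y = (2 / hbar) * (b - x * x.dot(b))
lie_Yr = sum(Y[k] * sp.diff(r, x[k]) for k in range(3))
expected = (2 / (hbar * r)) * (1 - r**2) * x.dot(b)
print(sp.simplify(lie_Yr - expected))   # -> 0, confirming L_{Y_b} r
```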
§.§ GKLS evolution on one q-bit systems

As we have seen in the previous subsection, for pure states the Hamiltonian and gradient-like vector fields give rise to a dynamical evolution that preserves the purity of the density matrix, that is, given initial conditions on the boundary of 𝐁, the integral curves of X_H and Y_b remain on it. Consequently, if the system is prepared in a pure state, the Hamiltonian and gradient-like evolution take it to another pure state. Now, we want to introduce a vector field that not only is transversal to the isospectral manifolds but is also capable of changing the rank of the density matrix for pure states. For the finite-dimensional case, it is known that the GKLS generator 𝐋 yields a quantum dynamical evolution described by linear equations <cit.>. The GKLS master equation has the general form 𝐋(ξ̂) = - ı/ħ [Ĥ, ξ̂ ]_- - 1/2∑^3_j [ v̂^†_j v̂_j , ξ̂ ]_+ + ∑^3_j v̂_j ξ̂ v̂^†_j , which may be expressed in terms of the Lie and the Jordan products, see Eq. (<ref>), as the linear operator 𝐋(ξ̂) = - [[Ĥ, ξ̂ ]] - ħ/2 V̂⊙ξ̂ + 𝒦̂ ( ξ̂ ) , where Ĥ∈_2, setting V̂ = ∑^3_j v̂^†_j v̂_j with v̂_j ∈_2, and 𝒦̂(ξ̂) is a completely positive map given by 𝒦̂ (ξ̂) = ∑^3_j v̂_j ξ̂ v̂^†_j . 𝒦̂ (ξ̂) is also called the Choi–Kraus map <cit.>. We can define the vector field associated to the GKLS generator 𝐋 using the following result <cit.>: consider the linear map A: ^∗_2 →^∗_2 given by A(ξ) = A^μ_ ν ξ^ν e_μ , where A^μ_ ν is the matrix representing the linear transformation and { e_μ} denotes an orthonormal basis for ^∗_2. Thus, if { x^μ} are the Cartesian coordinates associated to the orthonormal basis { e_μ}, it is possible to associate to A a vector field Z_A on _2^∗ (identified with a cross-section of T_2^∗) as Z_A := A^μ_ ν x ^ν∂/∂ x^μ. Moreover, from this definition it is direct to check that, given linear maps  and B̂, one has Z_A + B = Z_A + Z_B and Z_c A = c Z_A, with c ∈ℝ. Then, the vector field on ^∗_2 associated to the GKLS generator 𝐋 is <cit.> Γ̃ = X̃_H - ħ/2Ỹ_V + Z̃_𝒦, where X̃_H is the Hamiltonian vector field associated to Ĥ given by X̃_H = Λ̃(df_Ĥ, ·); Ỹ_V is the gradient-like vector field associated with V̂ by means of Ỹ_V = G̃(df_V̂, ·); and Z̃_𝒦 is the linear vector field associated with the completely positive map 𝒦̂. To find Γ̃ we start by expressing the operators in the linear mapping (<ref>) in the form of (<ref>). Thus, given the Hamiltonian operator Ĥ = H_μ σ̂^μ and the density operator ξ̂ = 1/2 x_ν σ̂^ν, we find that the products can be expressed as [[Ĥ, ξ̂ ]] = 1/2 c^μν_η H_μ x_ν σ̂^η and V̂⊙ξ̂ = 1/2 d^μν_η V_μ x_ν σ̂^η . Once these linear maps are defined, we may associate the corresponding linear vector fields on ^∗_2 from the definition (<ref>). Then, taking into account that x^μ = δ^μν x_ν and σ_μ = δ_μνσ̂^ν, we obtain Z_[[ H, ξ]] = δ_νσ δ^ηλ c^μν_η H_μ x^σ∂/∂ x^ν = - c_σ^λν H_λ x^σ∂/∂ x^ν and Z_V ⊙ξ = δ_νσ δ^ηλ d^μν_η V_μ x^σ ∂/∂ x^ν = d_σ^λν V_λ x^σ ∂/∂ x^ν . On the other hand, to compute the completely positive map 𝒦̂(ξ̂), let us note that it may be expressed in the form 𝒦̂( ξ̂ ) = 1/2 𝒦_μ σ̂^μ = 1/2 {𝒦̂( ξ̂ ) σ̂^ν}δ_νμ σ̂^μ , and by straightforward calculation we obtain {𝒦̂( ξ̂ ) σ̂^0 } = ∑^3_j {v̂_j ξ̂ v̂^†_j σ̂^0 } = {∑^3_j v̂^†_j v̂_j ξ̂} = f̃_V̂ , with f̃_V̂ = {ξ̂ V̂}. Now, for the remaining components we have that {𝒦̂( ξ̂ ) σ̂^k } = ∑^3_j {v̂_j ξ̂ v̂^†_j σ̂^k } = 1/2{𝒦̂(σ̂^η) σ̂^k } x_η = 1/2 𝒦^η k x_η , where we have considered the definition 𝒦^η k = {𝒦̂(σ̂^η) σ̂^k }.
Consequently, the completely positive map 𝒦̂( ξ̂ ) has the form 𝒦̂( ξ̂ ) = 1/2f̃_V̂ σ̂^0 + 1/4 𝒦^η k x_η δ_k mσ̂^m = 1/2f̃_V̂ σ̂^0 + 1/4 𝒦^η_ m x_η σ̂^m . Given this linear map we can now proceed to obtain the associated vector field Z̃_𝒦 = f̃_V̂ ∂/∂ x_0 + 1/2 𝒦^η_ m δ_ημ x_μ δ^m k∂/∂ x^k = f̃_V̂ ∂/∂ x_0 + 1/2𝒦_μ^ k x^μ ∂/∂ x^k ; then, the following linear vector field is associated to the GKLS generator 𝐋: Γ̃ = - Z_[[H, ξ ]] - ħ/2 Z_V ⊙ξ + Z̃_𝒦 . Comparing with the coordinate expression of the vector field in Eq. (<ref>) we observe that Z_[[H , ξ ]] = - X̃_H and Z_V ⊙ξ = Ỹ_V. Then, we have obtained an expression in terms of the coordinate basis for the vector field Γ̃ in Eq. (<ref>) associated to the GKLS master equation. As was done in the previous subsection, we need to find the vector field whose integral curves lie entirely in the affine space _2^1. To do so, we only need to apply the reduction procedure to Γ̃ by taking into account the immersion i : _2^1 →_2^∗. Before performing the reduction, let us first express Γ̃ as follows Γ̃ = X̃_H - ħ/2( Ỹ_V - 2/ħ f̃_V̂Δ̃) + Z̃_𝒦 - f̃_V̂Δ̃ . Notice that the last term on the right-hand side of the equation above is given in terms of the Cartesian coordinates as Z̃_𝒦 - f̃_V̂Δ̃ = ( 1 - x^0 ) f̃_V̂∂/∂ x^0 + 1/2𝒦^k_ μ x^μ ∂/∂ x^k - f̃_V̂ x^k ∂/∂ x^k . Now the projection of the vector field Γ̃ onto the vector field Γ can be easily obtained by just setting x^0 =1. Thus, the first term on the right-hand side of (<ref>) vanishes on _2^1. Therefore, Z̃_𝒦̂ - f̃_V̂Δ̃ is projected onto Z_𝒦 = 1/2𝒦^k_ μ x^μ ∂/∂ x^k - f_V̂ x^k ∂/∂ x^k , which will be called the Choi–Kraus vector field. On the other hand, the vector field X̃_Ĥ is projected onto X_H and Ỹ_V - 2/ħ f̃_VΔ̃ is projected onto Y_V. Then, upon the reduction we have that the quantum dynamical evolution generated by the GKLS generator 𝐋 is described by the GKLS vector field Γ = X_H - ħ/2 Y_V + Z_𝒦 on _2^1. Considering the Hamiltonian operator Ĥ = H_μσ̂^μ and V̂ = V_μσ̂^μ, the GKLS vector field in Cartesian coordinates takes the following form Γ = - 2/ħ ϵ^kj_l H_k x^l ∂/∂ x^j - δ^k j V_k ∂/∂ x^j + 1/2𝒦^j_ μ x^μ ∂/∂ x^j - V_0 x^k ∂/∂ x^k . It is interesting that the nonlinear terms in Z_𝒦 and Y_V cancel out in the sum - ħ/2 Y_V + Z_𝒦; hence, the vector field Γ for a q-bit system in Cartesian coordinates is linear. Therefore, the GKLS dynamics accepts a decomposition principle, i.e., the conservative part is given by the Hamiltonian part as a reference dynamics, while the sum of the gradient-like and the Choi–Kraus vector fields can be considered as a “perturbation term” associated with dissipation.

§.§ Damping phenomena in two-level atomic system

In this subsection we analyze two simple cases to illustrate the use of the GKLS vector field to determine the dynamics of a physical system. Let us consider a two-level atom with ground state | 1 ⟩ and excited state | 2 ⟩, i.e., we are considering eigenstates of the Hamiltonian Ĥ_A with eigenvalues E_1 and E_2, respectively. This allows us to define the transition operators σ̂_i j = | i ⟩⟨ j | , with i, j = 1,2. Then, Ĥ_A may be written as Ĥ_A = ∑_i E_i | i ⟩⟨ i | = ∑_i E_i σ̂_ii = 1/2(E_1 + E_2) σ̂^0 + 1/2(E_1 - E_2) σ̂^3 . Now, defining ħ ν = E_1 - E_2 and, for simplicity, ignoring constant terms, the atomic Hamiltonian takes the form Ĥ_A = 1/2 ħ ν σ̂^3 , with ħ ν the transition energy between the states.
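Before turning to the examples, the linearity (more precisely, affinity) of Γ claimed above can be checked numerically; the sketch below (with ħ = 1, a randomly generated Hamiltonian and a single Kraus-type operator, all purely illustrative) extracts the Bloch-vector velocity x^k ↦ Tr{σ̂^k 𝐋(ρ̂)} and verifies that it respects affine combinations.

```python
import numpy as np

rng = np.random.default_rng(0)
s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = 0.5 * (A + A.conj().T)                                   # a random Hamiltonian
v = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # one Kraus-type operator

def L(rho):
    # GKLS generator: Hamiltonian part plus dissipator (hbar = 1)
    out = -1j * (H @ rho - rho @ H)
    return out + v @ rho @ v.conj().T - 0.5 * (v.conj().T @ v @ rho + rho @ v.conj().T @ v)

def gamma(x):
    # Bloch-vector velocity extracted from the master equation
    rho = 0.5 * (s[0] + x[0] * s[1] + x[1] * s[2] + x[2] * s[3])
    return np.real([np.trace(s[k] @ L(rho)) for k in (1, 2, 3)])

x1, x2 = rng.normal(size=3), rng.normal(size=3)
lhs = gamma(0.3 * x1 + 0.7 * x2)               # affine combination (0.3 + 0.7 = 1)
rhs = 0.3 * gamma(x1) + 0.7 * gamma(x2)
print(np.max(np.abs(lhs - rhs)))               # ~ 0: Gamma is an affine vector field
```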
Furthermore, taking into account the Pauli matrices we can express the transition operators as σ̂_12 = σ̂_+ = 1/2(σ̂^1 + ı σ̂^2 ) and σ̂_21 = σ̂_- = 1/2(σ̂^1 - ı σ̂^2 ) , which represent the transitions between states, i.e., σ̂_21 = σ̂_- represents the transition |1 ⟩→ |2 ⟩ and σ̂_12 = σ̂_+ the transition |2 ⟩→ |1 ⟩. Then, associated to the Hamiltonian Ĥ_A we have the vector field X_Ĥ_A = - ν ( x^2 ∂/∂ x^1 - x^1 ∂/∂ x^2) . Let us now introduce a dissipative evolution by means of the GKLS formalism by considering the simplest case, corresponding to v̂_1 = √(γ/2) σ̂^3, where γ is a constant with dimensions of frequency that modulates the dissipation, called the damping parameter. In this case, we have that V̂ = v̂^†_1 v̂_1 = γ/2 σ̂^0, thus Y_V̂ = 0; therefore, the dissipation added to the Hamiltonian system is exclusively determined by the Choi–Kraus vector field. To compute the vector field Z_𝒦 in Eq. (<ref>), we first note that f_V̂ = γ/2 and 𝒦^k_0 = 0; then 1/2𝒦^k_j x^j ∂/∂ x^k = 1/2{𝒦̂(σ_j) σ̂^k } x^j ∂/∂ x^k = γ/4δ_j l{σ̂^3 σ̂^l σ̂^3 σ̂^k } x^j ∂/∂ x^k . Taking into account the property {σ̂^i σ̂^j σ̂^k σ̂^l } = 2( δ^i jδ^k l - δ^i kδ^j l + δ^i lδ^j k ) it is direct to show that 1/2𝒦^k_j x^j ∂/∂ x^k = γ x^3 ∂/∂ x^3 - γ/2 x^k ∂/∂ x^k . Then, the Choi–Kraus vector field in Cartesian coordinates takes the following form Z_𝒦 = - γ( x^1 ∂/∂ x^1 + x^2 ∂/∂ x^2) . Having calculated the Hamiltonian and Choi–Kraus terms, we can finally give the expression for the GKLS vector field Γ = - ( ν x^2 + γ x^1) ∂/∂ x^1 + ( ν x^1 - γ x^2) ∂/∂ x^2 . The vector field Γ is displayed in Fig. <ref>, where the ball 𝐁 is also plotted considering the Cartesian coordinates (x^1, x^2, x^3). We observe that the line (0, 0, x^3) consists of singular points and behaves as an attractor, so any pure state converges to a state with some statistical mixture. In particular, for initial conditions with x^3 = 0, the evolution ends at the maximally mixed state. Another interesting case is to consider the transition operators to model a different dissipative dynamics. Let v̂_1 = √(2 γ) σ̂_+, so that V̂ = v̂^†_1 v̂_1 = γ (σ̂^0 - σ̂^3) and f_V̂ = γ (1 - x^3). By a straightforward calculation we find that - δ^k j V_k ∂/∂ x^j = γ ∂/∂ x^3 , - V_0 Δ = - γ x^k ∂/∂ x^k and 1/2𝒦^k_μ x^μ ∂/∂ x^k = 1/2{𝒦̂(σ_0) σ̂^k } ∂/∂ x^k + 1/2{𝒦̂(σ_j) σ̂^k } x^j ∂/∂ x^k = γ (1 - x^3) ∂/∂ x^3 . Then, the GKLS vector field is given by Γ = - ( ν x^2 + γ x^1 ) ∂/∂ x^1 + (ν x^1 - γ x^2 ) ∂/∂ x^2 +2 γ (1 - x^3) ∂/∂ x^3 . We see that the components in the x^1 and x^2 directions are the same as those of the previous case, cf. (<ref>); nevertheless, in the present case Γ has a component in the x^3 direction. The integral curves corresponding to (<ref>) are the solutions to the linear system of equations ẋ^1 = - ν x^2 - γ x^1 , ẋ^2 = ν x^1 - γ x^2 , ẋ^3 = 2 γ (1 - x^3) . From this vector field we see that there is only one singular point, at (0,0,1), i.e., the “north pole” of the sphere, which is an attractor. The behaviour of this vector field is illustrated in Fig. <ref>. We notice that, regardless of the initial condition, the evolution converges to the “north pole” of the sphere.

§ GKLS DYNAMICS ON GAUSSIAN STATES

In this section, we are interested in obtaining the GKLS vector field for a class of systems whose quantum states are described by Gaussian states. We will follow, in general, the same steps taken for constructing the vector field for the q-bit systems, and we will apply the result to describe a quantum harmonic oscillator with dissipation.
Before we begin the procedure to construct the GKLS vector field, let us first review some important properties of Gaussian states. A Gaussian state in the position space representation has the general form ⟨ q' | ρ̂ | q ⟩ = 1/√(2 π σ^2_q)exp{1/2 σ^2_q[ σ_qp/ħ (q - q' ) - ı/2 ( q + q' - 2 ⟨q̂⟩ ) ]^2 - σ^2_p/ 2 ħ^2 (q - q')^2 + ı/ħ ⟨p̂⟩ (q - q') } , where q and q' are the coordinates in the position space for two different points. The uncertainty for each operator is defined as σ^2_q = ⟨q̂^2 ⟩ - ⟨q̂⟩^2 σ^2_p = ⟨p̂^2 ⟩ - ⟨p̂⟩^2 , and the correlation between the position and momentum operators is σ_qp = 1/2⟨q̂ p̂ + p̂ q̂⟩ - ⟨q̂⟩⟨p̂⟩ . For simplicity, in the following we will consider that the expectation values of position and momentum are zero, i.e., (⟨q̂⟩ , ⟨p̂⟩) = (0,0). A simplified expression of the Gaussian state can be obtained in the Wigner representation W_r (q,p) of ρ̂ by applying the Wigner–Weyl transformation <cit.> to obtain the Wigner quasi-distribution function[The Wigner–Weyl transform of ρ̂ is given by W(q,p) = 1/2πħ∫_-∞^∞ e^-ipy/ħ⟨ q + y/2|ρ̂|q - y/2⟩ dy . ] W_r( q , p ) = 1/π ħ rexp{ - 2/ħ^2 r^2[ σ_p^2 q^2 - 2 σ_qp q p + σ_q^2 p^2 ] } , where (q,p) denote points in phase space and the parameter r ∈ [1 , ∞), related to the Robertson–Schrödinger uncertainty relation, is defined as σ_q^2 σ_p^2 - σ^2_qp = ħ^2 r^2/4 . In addition, it can be shown by direct calculation that the statistical mixture of the Gaussian density matrix is parametrized by r, i.e., {ρ̂^2 } = 1/r . In particular, when r=1 the Gaussian density matrix, or the Wigner function, corresponds to the generalized coherent states, see for instance Ref. <cit.>. Hence, in this case the Gaussian density matrix may be factorized as ρ̂ = | 0 ⟩⟨ 0 |, where | 0 ⟩ denotes the vacuum state in the Fock basis. The normalized positive functional ⟨ q' | ρ̂ | q ⟩, or equivalently its associated Wigner function W_r( q , p ), describing the quantum states is parametrized by ( σ_q^2, σ_p^2, σ_qp ), and these parameters are constrained by the relation (<ref>). We are interested in defining a manifold for the entire space of states parametrized by the uncertainties and the correlation. In order to do so, we consider the immersion of a finite-dimensional manifold into the space of normalized positive functionals ℒ^1 by means of a Weyl map <cit.>. To introduce the Weyl map, it is important to first make some remarks about the space of parameters. To describe the immersion of the space of second moments ( σ_q^2, σ_p^2, σ_qp ), one introduces the following set of coordinates y^1 = 1/ħ (σ^2_p + σ^2_q) , y^2 = 2/ħ σ_qp , y^3 = 1/ħ (σ^2_p - σ^2_q), such that the constraint (<ref>) defines the manifold for a fixed value of r as h_r = { (y^1, y^2, y^3) ∈^3 | (y^1)^2 - (y^2)^2 - (y^3)^2 = r^2 } , where y^1 > 0. The manifolds h_r are consequently upper hyperboloids in ^3 known as pseudo-spheres, in analogy to the spheres in the q-bit case <cit.>. From the expression (<ref>), it is clear that each manifold h_r has a different degree of statistical mixture, while the purity condition {ρ̂^2} = 1 identifies the hyperboloid H^2 = { (y^1, y^2, y^3) ∈^3 | (y^1)^2 - (y^2)^2 - (y^3)^2 = 1 } . On the other hand, the mixing condition {ρ̂^2} < 1 identifies the mixed states as the set defined by (y^1)^2 - (y^2)^2 - (y^3)^2 > 1 , where the maximally mixed state is reached in the limit r →∞.
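The change of coordinates just introduced is easy to exercise numerically; the following minimal sketch (with ħ = 1 and sample second moments chosen only for illustration) maps (σ_q^2, σ_p^2, σ_qp) to (y^1, y^2, y^3) and verifies that the resulting point lies on the leaf h_r fixed by the Robertson–Schrödinger relation.

```python
import numpy as np

hbar = 1.0                        # working in units with hbar = 1
s_q2, s_p2, s_qp = 0.8, 0.9, 0.3  # sample second moments satisfying the RS relation

r = 2.0 * np.sqrt(s_q2 * s_p2 - s_qp**2) / hbar
y = np.array([(s_p2 + s_q2) / hbar, 2.0 * s_qp / hbar, (s_p2 - s_q2) / hbar])
print(r >= 1.0)                              # True: the state is physical
print(y[0]**2 - y[1]**2 - y[2]**2 - r**2)    # ~ 0: (y1, y2, y3) lies on h_r
```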
Therefore, a general Gaussian density matrix is parametrized by points 𝐲 = (y^1, y^2, y^3) in the three-dimensional differential manifold 𝐇 = { (y^1, y^2, y^3) ∈^3 | (y^1)^2 - (y^2)^2 - (y^3)^2 ≥ 1 } . The space 𝐇 is a differential manifold with boundary and it can be described as the foliation given by the hyperboloids (<ref>) labeled by r as 𝐇 = ⋃_r ≥ 1 h_r , where the leaves of this foliation are defined in Eq. (<ref>) and on each leaf a differential structure can be given. A schematic picture of this foliation is displayed in Fig. <ref>. The foliated space 𝐇 can be identified with the orthochronous Lorentz group SO(1,2), which acts smoothly on itself. Equivalently, one may employ the two-fold covering group SU(1,1) of SO(1,2) or one of its isomorphic groups SL(2,ℝ) and Sp(2, ℝ) <cit.>. There is a clear analogy between the ball 𝐁 employed in the description of q-bit systems and the solid hyperboloid 𝐇, in the sense that they are manifolds containing the information of the quantum states and both manifolds are foliated such that each leaf of the foliation has a fixed degree of statistical mixture. We are interested in describing the immersion of each leaf h_r into the space of normalized positive functionals ℒ^1; then, we will need to find a set of coordinates for h_r. For instance, one may introduce hyperbolic coordinates by considering the mapping ν : h_r →ℂ, where each point (y^1, y^2, y^3) on h_r is described by a point in the complex plane κ_r = (τ/2) e^ıφ, given by y^1 = r coshτ , y^2 = r sinhτcosφ , y^3 = r sinhτsinφ . The coordinates (τ , φ ) are called squeezing parameters in quantum optics <cit.>. Now we are in a position to introduce the Weyl map from h_r to the set of unitary operators Ŝ(κ_r) ≡exp[ κ̅_r K̂_- - κ_r K̂_+], known as squeezing operators, where the operators K̂_± together with K̂_0 are the generators of the Lie algebra 𝔰𝔲(1,1) satisfying the commutation relations [K̂_+, K̂_-] = - 2 K̂_0 , [K̂_0, K̂_±] = ± K̂_± . Therefore, we have defined the map W : h_r →𝒰(ℋ): κ_r ↦Ŝ(κ_r), where 𝒰(ℋ) denotes the set of unitary operators on a Hilbert space ℋ, and the unitary representation of h_r is given by the squeezing operator Ŝ(κ_r). Furthermore, the Weyl map allows us to introduce the immersion w : h_r →ℒ^1 , defined in the following form: given a fiducial state ρ̂_0 ∈ℒ^1, the action of the map w is w : κ_r ↦ρ̂(κ_r) ≡Ŝ(κ_r) ρ̂_0 Ŝ^†(κ_r) . To be more specific, let us consider as fiducial state the Gaussian state with zero squeezing parameters (τ_r, φ_r) = (0, 0), i.e., the state with zero correlation σ̃_qp = 0; therefore, the fiducial Gaussian state in the position representation is ⟨ q' | ρ̂_0 | q ⟩ = 1/√(2 π σ̃^2_q)exp{ - 1/8 σ̃^2_q ( q + q' )^2 - σ̃^2_p/ 2 ħ^2 (q - q')^2 } . Acting with the squeezing operator produces a translation in h_r from (0, 0) to (τ_r, φ_r), which in terms of the uncertainties and the correlation is given by the map ( σ̃^2_q, σ̃^2_p, 0) ↦ (σ^2_q, σ^2_p, σ_qp) defined by the transformation σ^2_q = σ̃^2_q[ coshτ_r - cosφ_r sinhτ_r ] , σ^2_p = σ̃^2_p[ coshτ_r + cosφ_r sinhτ_r ] , σ_qp = - ħ r/2 sinφ_r sinhτ_r , obtaining the Gaussian density matrix ⟨ q' | ρ̂(κ_r) | q ⟩ = 1/√(2 π σ^2_q)exp{1/2 σ^2_q[ σ_qp/ħ (q - q' ) - ı/2 ( q + q' ) ]^2 - σ^2_p/ 2 ħ^2 (q - q')^2 } .
Let us note that, because of the high nonlinearity of the squeezing parameters (τ_r, φ_r), it will prove convenient to employ a set of coordinates adapted to the upper half complex plane HP_r = {𝒞∈ℂ | 𝒞_2 > 0 } , where 𝒞 = 𝒞_1 + ı 𝒞_2, and defined by the relations y^1 = ı r 1 + 𝒞̅𝒞/𝒞 - 𝒞̅ , y^2 = ı r 𝒞 + 𝒞̅/𝒞 - 𝒞̅ , y^3 = ı r 𝒞̅𝒞 - 1/𝒞 - 𝒞̅ . The two-dimensional space for r = 1 described by these coordinates is known in the literature as the Siegel upper half plane <cit.>. The infinite-dimensional representations in Eqs. (<ref>) and (<ref>) of the Gaussian state ρ̂ are not adequate to obtain the GKLS vector field as was done for the q-bit case. Nevertheless, the smooth action of SL(2, ℝ) on the space of parameters 𝐇 makes it possible to define an immersion of the space of 2 × 2 matrices into the space of parameters (y^1, y^2, y^3) by m : ℳ_2 →𝐇 : ξ̂↦ (y^1, y^2, y^3) . Thus, any dynamical evolution in the space of matrices induces dynamics in the space of parameters, describing geometrically the evolution of Gaussian states. In particular, this perspective allows us to apply the same procedure as in the q-bit case to obtain the corresponding GKLS vector field. Restricting to observables quadratic in position and momentum, the set of operators can be identified with a subset of 𝔰𝔩(2, ℝ). The basis for the Lie algebra 𝔰𝔩(2, ℝ) of observables {L̂^k} can be chosen as L̂^1 = 1/4( p̂^2 + q̂^2 ) , L̂^2 = 1/4( q̂ p̂ + p̂ q̂) , and L̂^3 = 1/4( p̂^2 - q̂^2 ) . On the other hand, if 𝔰𝔩^∗(2,ℝ) is the dual Lie algebra, then for every ξ∈𝔰𝔩^∗(2,ℝ) there is a unique ξ̂∈𝔰𝔩(2,ℝ) such that ⟨ξ,â⟩ := {â ξ̂} . The advantage of considering 𝔰𝔩(2,ℝ) resides in the fact that it can be given a finite matrix representation in terms of 2 × 2 matrices[ Notice that the matrix representation is not unique, as there are isomorphisms between the Lie algebras 𝔰𝔩(2,ℝ) = 𝔰𝔭(2,ℝ), 𝔰𝔲(1,1) and 𝔰𝔬(1,2), where the correspondence is given as follows: 𝔰𝔩(2,ℝ) 𝔰𝔲(1,1) 𝔰𝔬(1,2) J_1 ↔ ı/2( [ 0 1; 1 0; ]) ↔ 1/2 ı( [ 0 - ı; ı 0; ]) ↔ 1/ı( [ 0 0 1; 0 0 0; 1 0 0; ]) J_2 ↔ ı/2( [ 1 0; 0 -1; ]) ↔ - 1/2 ı( [ 0 1; 1 0; ]) ↔ 1/ı( [ 0 -1 0; -1 0 0; 0 0 0; ]) J_0 ↔ ı/2([ 0 1; -1 0; ]) ↔ 1/2([ 1 0; 0 -1; ]) ↔ 1/ı( [ 0 0 0; 0 0 1; 0 -1 0; ]) These operators satisfy the commutation relations [J_0, J_1] = ı J_2 , [J_1, J_2] = - ı J_0 and [J_2, J_0] = ı J_1 . For instance, if we are working with creation and annihilation operators instead of quadratic operators in position and momentum, it is more convenient to consider the 𝔰𝔲(1,1) matrix realization.] L̂_1 = ı ħ/2 ( [ 0 1; -1 0; ]) , L̂_2 = ı ħ/2 ( [ 1 0; 0 -1; ]) , L̂_3 = ı ħ/2 ( [ 0 1; 1 0; ]) . As we have seen in Section <ref>, at some point we will employ the Lie–Jordan algebra structure to define Hamiltonian and gradient-like dynamics on the space of parameters. Thus, it is necessary to enlarge our basis of matrices to include the element L̂_0 = ħ/2 ( [ 1 0; 0 1; ]) . Then, we can define the matrix ξ as ξ = 1/2 ħ y^μ L_μ , where μ =0,1,2,3, L_μ = g_μνL̂^ν and y^μ≡ g^μνy_ν, with g^μν the entries of the matrix diag(1,1,-1,-1). The normalization condition {ξ̂} = 1 fixes y^0 = 2 and the components (y^1, y^2, y^3) can be obtained as y^k = {L̂^k ξ̂} .
These components depend on the second moments and the correlation, as can be seen from the relations (<ref>), which in this representation read σ_q^2 = Tr {q̂^2 ξ̂}, σ_p^2 =Tr {p̂^2 ξ̂} and σ_qp = Tr {1/2(q̂p̂ + p̂q̂) ξ̂} , coinciding with the expectation values for the operators q̂^2, p̂^2 and 1/2(q̂p̂ + p̂q̂) obtained from the Gaussian states, as is easily verified using, for instance, the Wigner function to compute them. It is important to note that ξ is not properly a state, because it is not positive definite and its matrix representation is not self-adjoint; however, from the relation (<ref>) one may establish the map between operators and linear functions by f_â := Tr{â ξ̂} , where â is an operator in 𝔰𝔩(2, ℝ) and f_â its corresponding dual. Based on these results, we claim that the GKLS dynamics can be determined from the GKLS equation for ξ, making it possible to follow the procedure for finite-dimensional representations and use the Lie–Jordan algebra structure to construct the GKLS vector field on the space of parameters.

§.§ Dynamical study of Gaussian density matrix systems from a state point of view

In quantum mechanics, under the usual probabilistic interpretation, any transformation of a state is described by a one-parameter group of unitary transformations; in particular, for the time evolution of a state, probability conservation is secured by the Schrödinger equation. This interpretation then requires the infinitesimal generator to be a self-adjoint operator acting on the separable complex Hilbert space associated with the physical system. Let us describe, for the case of Gaussian density matrices, how this unitary evolution can be cast in terms of a Hamiltonian vector field. To do so, we will make use of the Kähler structure that the upper-half hyperboloid h_r bears, which later will prove to be useful when we extend the dynamics adding dissipation as a perturbation term. As we have seen in the previous subsection, the space 𝐇 of all the quantum states described by a Gaussian density matrix is foliated by leaves labeled by r. Each leaf h_r is endowed with a Kähler structure (ω_h_r, g_h_r, J_h_r); then it is possible to introduce symplectic and gradient dynamics on it by means of the definitions ω_h_r( 𝕏_H , · ) = df_Ĥ , g_h_r( 𝕐_H , · ) = df_Ĥ , where f_Ĥ is a smooth function on h_r. These functions are determined by the expectation value of a Hamiltonian operator as follows. First of all, let us restrict our study to Hamiltonian operators which are quadratic in the position and momentum operators, with the general form Ĥ = 1/2[ H_1 p̂^2 + V(q̂ p̂ + p̂ q̂) + H_2 q̂^2 ] , where H_1, H_2 and V are real constants. Therefore, the space of observables 𝔇 is identified with the subset of the 𝔰𝔩(2, ℝ) Lie algebra consisting of self-adjoint elements. Thus, the expectation value of the Hamiltonian operator Ĥ, defined as f_Ĥ := {ρ̂ Ĥ}, is given by f_Ĥ = 1/2[ H_1 ⟨p̂^2 ⟩+ V ⟨q̂ p̂ + p̂ q̂⟩ + H_2 ⟨q̂^2 ⟩] and from the definitions (<ref>) and (<ref>) it is not difficult to find that f_Ĥ = 1/2[ H_1 σ^2_p + 2 V σ_qp + H_2 σ_q^2 ] . Determining the Hamiltonian and gradient dynamics boils down to obtaining the vector fields 𝕏_H and 𝕐_H as defined in (<ref>). We can employ the coordinates adapted to the complex upper half-plane defined in (<ref>), for which the symplectic form and the Riemannian metric take the form ω_h_r = ı ħ/2r^2/(𝒞̅ - 𝒞)^2 d𝒞̅∧ d𝒞 and g_h_r = ħ/2r^2/(𝒞̅ - 𝒞)^2 d𝒞̅⊗_ d𝒞 , with complex structure J_h_r = 1/ı( d𝒞⊗∂/∂𝒞 - d𝒞̅⊗∂/∂𝒞̅) .
The expectation value of the Hamiltonian operator f_Ĥ in (<ref>) has the following expression in the 𝒞 coordinate system f_Ĥ = ı ħ/2r/𝒞 - 𝒞̅[ H_1 𝒞̅𝒞 + V ( 𝒞 + 𝒞̅ ) + H_2 ] , where there is a clear dependence on the parameter r. We may now proceed to compute the Hamiltonian and the gradient vector fields considering that they take the following generic form 𝕏_H = 𝕏_𝒞∂/∂𝒞+𝕏_𝒞̅∂/∂𝒞̅ and 𝕐_H = 𝕐_𝒞∂/∂𝒞+𝕐_𝒞̅∂/∂𝒞̅ , where the components 𝕏_𝒞̅ and 𝕐_𝒞̅ are the complex conjugates of 𝕏_𝒞 and 𝕐_𝒞, respectively; then, employing the definition in Eq. (<ref>), one may find that the components of these vector fields are given by 𝕏_𝒞 = - 2/ı ħ(𝒞̅ - 𝒞)^2/r^2 ∂ f_H/∂𝒞̅ = -1/r[ H_1 𝒞^2 + 2 V 𝒞 + H_2 ] and 𝕐_𝒞 = 2/ħ(𝒞̅ - 𝒞)^2/ r^2 ∂ f_H/∂𝒞̅ = ı/r[ H_1 𝒞^2 + 2 V 𝒞 + H_2 ] . Therefore, the symplectic evolution in these coordinates is described by the nonlinear Riccati equation 𝒞̇ = - 1/r[ H_1 𝒞^2 + 2 V 𝒞 + H_2 ] . On the other hand, the equations of motion in terms of the Euclidean coordinates (y^1,y^2, y^3) are given by the linear system of equations ẏ^1 = 1/r [(H_1 -H_2) y^2 - 2 V y^3] , ẏ^2 = 1/r [(H_1 + H_2) y^3 + (H_1 - H_2) y^1] , ẏ^3 = - 1/r [2 V y^1 + (H_1 + H_2) y^2] , whose solutions correspond to the integral curves of the Hamiltonian vector field 𝕏_H = 1/r (y^2 H_3 - y^3 H_2) ∂/∂ y^1 + 1/r (y^3 H_1 + y^1 H_3) ∂/∂ y^2 - 1/r (y^1 H_2 + y^2 H_1) ∂/∂ y^3 , where the H_k are the components of Ĥ in the basis {L̂^k }; in terms of the constants in (<ref>) they read (H_1 + H_2, 2V, H_1 - H_2). Let us now introduce the Poisson and Jordan brackets on h_r and establish their relation to the algebraic structures of the space of operators. Thus, on the hyperboloid the Poisson bracket is defined by means of the symplectic form (<ref>) defined on each leaf h_r by { f_â , f_b̂}_h_r = ω_h_r(𝕏_b , 𝕏_a ) . Then, it can be shown by direct calculation that the Poisson bracket satisfies the following relation { f_â , f_b̂}_h_r = 1/r f_[[ â, b̂ ]] , where [[·,·]] is the Lie product defined in Eq. (<ref>); to obtain this identity we have employed the basis {L̂^k } in Eq. (<ref>), satisfying the commutation relations [[L̂^1, L̂^2]] = L̂^3 , [[L̂^2, L̂^3]] = - L̂^1 and [[L̂^1, L̂^3]] = - L̂^2 . On the other hand, the Jordan bracket is defined through the relation ⟨⟨ f_â , f_b̂⟩⟩_h_r = g_h_r(𝕐_a , 𝕐_b ) . Hence, taking into account this definition, it can be shown by direct substitution that the Jordan bracket satisfies the following relation ⟨⟨ f_â , f_b̂⟩⟩_h_r = 1/2 f_â⊙b̂ - 4/ħ r^2 f_â f_b̂ , where ⊙ is the Jordan product defined in Eq. (<ref>), which for the basis {L̂^k } obeys the relations L̂^j ⊙L̂^k = ħ/2 g^jk 𝕀̂ with 𝕀̂ the identity operator and g^jk being the entries of the diagonal matrix g = ( [ 1 0 0; 0 -1 0; 0 0 -1 ]) . To conclude this subsection, let us introduce a couple of bivector fields which will allow us to extend the Hamiltonian and gradient dynamics to the whole space 𝐇. These two tensor fields will be relevant in introducing the GKLS dynamics in the following subsections. On the one hand, the skew-symmetric bivector field will permit us to describe the Hamiltonian dynamics on 𝐇, where a symplectic form cannot be defined; in this sense, this is a more general geometric field which can be used to define a Poisson bracket. On the other hand, the symmetric bivector field will serve to generalize the gradient dynamics, although some redefinitions will be necessary to establish the GKLS dynamics properly.
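The tangency of 𝕏_H to the leaves can be confirmed symbolically; the sketch below (with symbolic components H_k and leaf label r) verifies that the flow annihilates the invariant (y^1)^2 - (y^2)^2 - (y^3)^2.

```python
import sympy as sp

y1, y2, y3, H1, H2, H3 = sp.symbols('y1 y2 y3 H1 H2 H3', real=True)
r = sp.symbols('r', positive=True)

# components of X_H in the basis {L^k}, as given above
X = sp.Matrix([(y2 * H3 - y3 * H2) / r,
               (y3 * H1 + y1 * H3) / r,
               -(y1 * H2 + y2 * H1) / r])
C = y1**2 - y2**2 - y3**2      # the invariant labelling the leaves h_r
gradC = sp.Matrix([sp.diff(C, v) for v in (y1, y2, y3)])
print(sp.simplify(X.dot(gradC)))   # -> 0: the flow is tangent to each h_r
```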
The skew-symmetric bivector field in the 𝒞 coordinates is given by the expression Λ_h_r = 2 ı/ħ(𝒞̅ - 𝒞)^2/r^2∂/∂𝒞̅∧∂/∂𝒞 and is such that the Poisson brackets can be defined as { f_â, f_b̂}_h_r : = Λ_h_r (df_â, df_b̂) , while the symmetric bivector field takes the form G_h_r = 2/ħ(𝒞̅ - 𝒞)^2/r^2∂/∂𝒞̅⊗_∂/∂𝒞 , and the Jordan bracket is defined by the following relation ⟨⟨ f_â, f_b̂⟩⟩_h_r : = G_h_r ( df_â , df_b̂ ) . Therefore, from the state point of view we have established the symplectic evolution and the gradient vector field, which are tangent to the manifolds h_r; however, to introduce the GKLS evolution it is necessary to generalize these definitions to the whole manifold 𝐇. To achieve this it is necessary to adopt the observable point of view, as we will see in the next subsection.

§.§ Dynamical study of Gaussian density matrix systems from an observable point of view

In the previous subsections, as well as in the q-bit case, we have seen that the symplectic and gradient dynamics generated by the corresponding vector fields preserve the statistical mixture on each leaf h_r, that is, the parameter r remains unchanged during this type of evolution. This means that mixed states with initial conditions described by a density matrix fulfilling (<ref>) preserve such a constraint under this dynamics. However, we aim to find a dynamical evolution for the states that does not preserve the statistical mixture of the Gaussian density matrix; in geometric language, we are looking for dynamics described by a vector field transverse to the leaves of constant r, that is, to the hyperboloids {h_r }. This evolution will take place in the manifold 𝐇 and the change in the parameter r will reflect the fact that the mixture of states is changing in time. Then, similarly to what has been done in the q-bit case, we shall establish the dynamics generated by the GKLS equation in geometrical terms, finding an affine vector field Γ which accepts a decomposition in terms of a Hamiltonian dynamics, a gradient-like vector field and a Choi–Kraus vector field, as expressed in Eq. (<ref>). To do so, we will extend the definition of the Hamiltonian vector field and introduce the gradient-like vector field considering a description in terms of the observables, employing the Lie–Jordan structure (𝔇, ⊙, [[·,·]]) of the space of self-adjoint operators. From the observable point of view, there is a one-to-one correspondence between each operator â∈𝔇⊂𝔰𝔩(2,ℝ) and a function f̃_â := {âρ̂} in the dual space 𝔇^∗; therefore, the space of functions on 𝔇^∗ provides a realization of the Lie–Jordan algebra given in Eq. (<ref>). We may introduce a skew-symmetric and a symmetric (2,0) tensor field on 𝔇^∗ by means of its Lie and Jordan algebraic structures, respectively. At each point of 𝔇^∗ we have a cotangent space whose elements are the 1-forms df̃_â of the linear functions on 𝔇^∗ associated to any operator â∈𝔇; then, we can define these bivector fields as Λ̃(df̃_â, df̃_b̂) := f̃_ [[â , b̂]] and G̃(df̃_â , df̃_b̂) := f̃_â⊙b̂ , where f̃_â and f̃_b̂ are functions on 𝔇^∗. These tensor fields allow us to define a Hamiltonian vector field X̃_H and a gradient-like vector field Ỹ_b on 𝔇^∗ as follows X̃_H = Λ̃( df̃_Ĥ, · ) and Ỹ_b = G̃( df̃_b̂, · ) , where X̃_H and Ỹ_b are associated to the operators Ĥ and b̂, respectively. Therefore, we obtain a Hamiltonian vector field associated to the Lie structure and a gradient-like vector field associated to the Jordan algebra structure.
This Hamiltonian vector field is the infinitesimal coadjoint action of the Lie group SL(2, ℝ) on the dual of its Lie algebra, i.e., the solutions to the equations of motion lie on the coadjoint orbits of SL(2, ℝ). To describe the Lie–Jordan algebraic structure, a fourth element has to be added to the generators of the Lie algebra; thus, a basis of the Lie–Jordan algebra is given by {L̂^μ}, where L̂^0 = ħ/2𝕀̂, with 𝕀̂ the identity element of the algebra, and {L̂^k } are the operators defined in Eq. (<ref>). The Lie bracket of the set of generators defines the Lie algebra through the specification of its structure constants; then, for this basis the Lie product is such that [[ L̂^μ , L̂^ν ]] = c^μν_σ L̂^σ where c_σ^μν = 0 for c_σ^0 ν, c_σ^μ 0, c_0^μν c^k j_l given by Eq. (<ref>) , where k, j and l denote the values 1,2,3 of μ, ν and σ, respectively, and likewise the Jordan product must fulfil L̂^μ⊙L̂^ν = d_σ^μνL̂^σ with d_σ^μν = 0 for d_l^0 0 and d_l^k j , g^μν for d_0^μν , δ^μ_σ for d_σ^μ 0 = d_σ^0 μ , where g^μν are the components of the matrix g = ( [ 1 0 0 0; 0 1 0 0; 0 0 -1 0; 0 0 0 -1 ]) . We want to define vector fields on 𝔇^∗ to describe the dynamical evolution of systems; then, it is convenient to introduce Cartesian coordinates on 𝔇^∗ associated to the basis {L̂^μ} of 𝔇 by f_L̂^μ = {L̂^μρ̂} = ħ/4 y^μ , where {y^k}_k=1,2,3 are directly connected to the second moments by the identities in (<ref>). Hence, given the Cartesian coordinate system {ħ/4 y^μ}, we proceed to compute the coordinate expression of the tensor fields Λ̃ and G̃ by means of the definition (<ref>), obtaining Λ̃ = 4/ħ c_σ^μν y^σ∂/∂ y^μ∧∂/∂ y^ν and G̃ = 4/ħ d_σ^μν y^σ∂/∂ y^μ⊗∂/∂ y^ν . Given the Hamiltonian operator Ĥ = H_μL̂^μ and an arbitrary operator b̂ = b_μL̂^μ, their duals (expectation values) correspond to f̃_Ĥ = ħ/4 H_μ y^μ and f̃_b̂ =ħ/4 b_μ y^μ; then, the Hamiltonian and the gradient-like vector fields are X̃_H = c_σ^μν H_μ y^σ∂/∂ y^ν and Ỹ_b = d_σ^μν b_μ y^σ∂/∂ y^ν , respectively. The orbits obtained from the Hamiltonian and gradient-like vector fields must lie entirely on the space of quantum states 𝐇, defined by the constraint ρ(𝕀̂) = {ρ̂ 𝕀} = 1, that is, ρ fulfills the definition of a physical state. The space of quantum states is a convex subset of a closed affine subspace of 𝔇^∗; then, in the case of systems described by a Gaussian density matrix, the dynamical trajectories followed by the quantum states are constrained to satisfy the Robertson–Schrödinger uncertainty relation σ_q^2 σ_p^2 - σ^2_qp≥ħ^2/4 . Therefore, the affine subspace of quantum states is the subset of 𝔇^∗ defined as 𝔇^1 := {f̃_â = a_μ f_L̂^μ∈ℱ(𝔇^∗) | g^k jf̃_L̂^kf̃_L̂^j≥1/4 (f̃_L̂^0)^2 } , which in terms of the coordinate system {ħ/4y^μ} can be characterized by those points in 𝔇^∗ such that y^0 = 2. Then, it is possible to introduce the canonical immersion i : 𝔇^1 →𝔇^∗, such that f_â = ħ/2 a_0 + ħ/4a_k y^k is the pullback of the linear function f̃_â = ħ/4 a_μ y^μ, i.e., f_â = i^* f̃_â. Consequently, we can define skew-symmetric and symmetric tensor fields Λ and G on 𝔇^1 by performing a reduction of the algebraic structures given in (<ref>). Following a similar procedure as in the q-bit case, the space of functions ℱ(𝔇^1) can be described as the quotient space ℱ(𝔇^∗)/ℐ, where ℐ⊂ℱ(𝔇^∗) is the closed linear subspace of functions vanishing on 𝔇^1. Now, in order to have an algebraic structure on ℱ(𝔇^∗)/ℐ, the subspace ℐ must be an ideal of ℱ(𝔇^∗).
One may prove directly that for an element f̃_â in 𝔇^∗ and an element f̃_b̂ = (1 - y^0/2) b_k y^k in ℐ we have that Λ̃(df̃_â, df̃_b̂ ) = ħ/4(1 - y^0/2) c^k l_m y^m (a_k b_l -b_k a_l) , which vanishes on 𝔇^1, meaning that the Poisson product of f̃_â and f̃_b̂ defined through Λ̃ belongs to ℐ; hence ℐ is an ideal of ℱ(𝔇^∗). Thus, because the differentials of the coordinate functions {ħ/4y^k} form a coordinate basis for the cotangent space at each point of 𝔇^1, by means of the immersion i we define the Poisson bivector field on 𝔇^1 as Λ(df_L̂^k, df_L̂^j) := f_ [[ L̂^k , L̂^j]] = c^k j_l f_L̂^l , or equivalently Λ(dy^k, dy^j) = 4/ħ c^k j_l y^l ; hence, Λ has the explicit form Λ := 4/ħ c^k j_l y^l ∂/∂ y^k∧∂/∂ y^j =- 4/ħ( y^1 ∂/∂ y^2∧∂/∂ y^3 + y^2 ∂/∂ y^1∧∂/∂ y^3 - y^3 ∂/∂ y^1∧∂/∂ y^2) . Given the reduced bivector field Λ on 𝔇^1, we define the Hamiltonian vector field X_Ĥ := Λ(df_Ĥ, · ), whose coordinate expression can be obtained considering the linear function i^* f̃_Ĥ = f_Ĥ = ħ/2 H_0 + ħ/4 H_k y^k associated to the Hamiltonian operator, that is, X_H = c^k j_l H_k y^l ∂/∂ y^j = - (y^3 H_2 - y^2 H_3) ∂/∂ y^1 + (y^1 H_3 + y^3 H_1) ∂/∂ y^2 - (y^1 H_2 + y^2 H_1) ∂/∂ y^3 . An additional advantage of deducing the dynamics from an observable (algebraic) point of view is that a constant of motion can be obtained from the Casimir operator, which in this case is Ĉ = (L̂^1)^2 - (L̂^2)^2 - (L̂^3)^2, and whose expectation value gives the constant of motion f_Ĉ = (f_L^1)^2 - (f_L^2)^2 - (f_L^3)^2 = (ħ/4)^2 [ (y^1)^2 - (y^2)^2 - (y^3)^2 ] . This constant of motion allows us to verify easily that Λ( df_H , df_C) = 0; therefore, the Hamiltonian vector field X_Ĥ is tangent to the manifolds {h_r }. Moreover, from this result it can be seen that the affine space 𝔇^1 is actually the manifold 𝐇 defined in Eq. (<ref>). As for the gradient-like vector field Ỹ_b̂ = G̃(df̃_b̂, · ), it is not difficult to show that Ỹ_b̂(f_Ĉ) ≠ 0; then, it can be shown that upon reduction of Ỹ_b̂ we will obtain a vector field Y_b̂ that generates a dynamical evolution that is not necessarily restricted to the manifold 𝐇; in particular, we are interested in vector fields that are tangent to the space of pure states. To amend this situation, we proceed to perform a reduction procedure for the symmetrical bivector G̃; however, it is not difficult to show that ℐ is not an ideal of ℱ(𝔇^∗) under this product, because G̃(df̃_â, df̃_b̂ ) = ħ/4( 1-y^0/2) (y^0 g^kl a_k b_l + a_0 b_k y^k) - ħ/8 a_0 y^0 b_k y^k - ħ/8 a_k y^k b_n y^n for f̃_b̂∈ℐ and an arbitrary f̃_â∈ℱ(𝔇^∗). Then, it is necessary to modify this symmetrical bivector. Following the procedure for the q-bit case and taking into account the result in (<ref>), one may start by modifying the bivector as G̃ - 4/ħΔ̃⊗Δ̃ with Δ̃ = y^μ∂/∂ y^μ; however, it takes a straightforward calculation to show that on the affine space G̃(df̃_â, df̃_b̂ ) - 4/ħΔ̃(df̃_â) Δ̃(df̃_b̂ ) ≠ 0; hence, its associated algebraic product is not in the subset ℐ. Furthermore, it can be shown that there is no linear combination of the tensor fields G̃ and Δ̃⊗Δ̃ which defines a product such that ℐ is an ideal. Then, it is necessary to find a fine-tuned definition of a tensor field that allows us to properly accomplish the reduction process.
A possibility is to slightly modify G̃ and introduce a different vector field Δ̃ to define the following tensor field ℛ̃ := 1/2 𝒢 - 4/ħ Δ̃⊗Δ̃ , where now Δ̃ = y^0/4∂/∂ y^0 + y^k ∂/∂ y^k and 𝒢 = 4/ħ d̃_η^μν y^η ∂/∂ y^μ⊗∂/∂ y^ν , with d̃^00_0 = d^00_0 - 3/2 y^0 and all the other constants in (<ref>) remaining the same, i.e., d̃^μ k_0 = d^μ k_0 and d̃^μν_k = d^μν_k. Notice that the vector field Δ̃ is a modification of the Euler–Liouville vector field. This tensor field defines a product on the space of functions ℱ(𝔇^∗) with respect to which ℐ is an ideal; in fact, it is direct to check that ℛ̃(df̃_â, df̃_b̂ ) = ( 1-y^0/2) (ħ/8 y^0 g^kl a_k b_l + ħ/8 a_0 b_k y^k (1- y^0) - 5 ħ/16 a_k y^k b_n y^n ) +3 ħ/32 b_k y^k ( 1 - (y^0)^2/4) , which is an element of the ideal. In particular, it will prove useful to have the tensor field ℛ̃ expressed in terms of the bivector field G̃ introduced in (<ref>) by ℛ̃ = 1/2G̃ - 3/ħ ∂/∂ y^0⊗∂/∂ y^0 - 4/ħΔ̃⊗Δ̃ . In this manner, the gradient-like vector field associated to f̃_b̂ = ħ/4 b_μ y^μ is ℛ̃(df̃_b̂ , · ) = 1/2Ỹ_b̂ - 3/4 b_0 ∂/∂ y^0 - (b_0 y^0/4 + b_k y^k) Δ̃ , where the gradient-like vector field Ỹ_b̂ is defined in (<ref>). Once we have defined the symmetrical product ℛ̃, we may proceed to perform the reduction by means of the immersion i^* f̃_Ĥ = f_Ĥ = ħ/2 H_0 + ħ/4 H_k y^k, to obtain the reduced symmetric bivector ℛ defined by ℛ(df_L̂^k, df_L̂^j) = 1/2 f_L̂^k ⊙L̂^j - 4/ħ f_L̂^k f_L̂^j , which in Cartesian coordinates takes the form ℛ = 4/ħ( g^k j - y^k y^j) ∂/∂ y^k⊗∂/∂ y^j , where Δ = y^k ∂/∂ y^k denotes the Euler–Liouville vector field on the affine space. Now, by means of ℛ we can define the gradient-like vector field associated to the linear function f_b̂ = ħ/2 b_0 + ħ/4 b_k y^k, which results in Y_b := ℛ( df_b̂ , · ) = (g^kj - y^k y^j ) b_k ∂/∂ y^j , where it is important to note that the last term in this gradient-like vector field is quadratic with respect to the Cartesian coordinate system. To finalize this subsection, we want to verify that the gradient-like vector field Y_b generates a dynamical evolution that is transverse to the leaves h_r but tangent to H^2 when r=1, that is, when the initial conditions are those of a pure state; this means that the states reached from a dynamical evolution dictated by Y_b, regardless of the initial conditions, are contained within 𝐇. To do so, we compute the Lie derivative of f_Ĉ along the direction of the gradient-like vector field Y_b, obtaining _Y_b f_Ĉ = ħ^2/2(1 - r^2 ) b_k y^k . Thus, we observe that the Lie derivative is different from zero if and only if r ≠ 1; therefore, the gradient-like vector field Y_b is transversal to the leaves h_r and is tangent to the hyperboloid H^2. Moreover, when r = 1, the gradient-like vector field Y_b is identical to the gradient vector field 𝕐_b, meaning that the dynamics generated by Y_b does not change the rank of the density matrix, because a pure state remains pure under its evolution.

§.§ GKLS dynamics for Gaussian density matrix states

In this subsection we construct a geometric description of the GKLS dynamics of physical systems described by a Gaussian density matrix. This is done by introducing a vector field defined from the GKLS master equation; this vector field is transverse to the foliation {h_r} and can describe the evolution of a pure state into a statistical mixture.
To introduce the GKLS vector field we will follow the same procedure as for the q-bit system in the previous section; to achieve this, it is convenient to consider the immersion of the space of (2 × 2)-matrices into the space of parameters (y^1, y^2, y^3) established in Eq. (<ref>). Let ξ̂∈𝔰𝔩(2, ℝ) be an operator defined as ξ̂ = 1/2ħ y_μL̂^μ , where the coefficients {y_μ} are related to a set of coordinate functions on the dual space. Then, the map between operators and linear functions is given by the following definition: let T̂ be an operator in 𝔰𝔩(2, ℝ) and f̃_T̂ its corresponding dual; then the relation between them is given by f̃_T̂ := Tr{T̂ ξ̂} . This relation can also be defined as the natural pairing between elements of the algebra and its dual ξ(T̂ ) := ⟨ξ , T̂⟩ = {T̂ ξ̂} , where ξ∈𝔰𝔩^∗(2, ℝ) is the corresponding dual to ξ̂, given by ξ = 1/2ħ y^μ L_μ , where L_μ≡ g_μνL̂^ν, y^μ≡ g^μνy_ν and g_μν are the components of the matrix (<ref>). Thus, in terms of its matrix representation, ξ takes the following form ξ = 1/2ħ y^μ L_μ = ı/2 ħ ( [ -σ_qp - ı ħ σ_q^2; -σ_p^2 σ_qp - ı ħ; ]) . Then, we can define a coordinate system {ħ/4 y^μ} for the dual space 𝔰𝔩^∗(2, ℝ), related to the generators {L̂^μ} of the Lie–Jordan algebra through the map (<ref>), f_L̂^μ = ⟨ξ , L̂^μ⟩ = {L̂^μ ξ̂} = ħ/4 y^μ . Consequently, we have that the expectation value of an arbitrary operator â = a_μ L̂^μ∈𝔇 corresponds to f̃_â = ⟨ξ , â⟩ = ħ/4 a_μ y^μ. Therefore, although ξ̂ is not a density matrix, through the constraint det{1/ı(ξ̂ - L̂^0)}≥ 0 we can state the condition for having physical states, and we are able to establish a one-to-one connection between operators and their expectation values which is equivalent, in the sense that it reproduces the same mapping between 𝔇 and 𝔇^∗, to the one the density matrix would provide. The reason to proceed in this manner is that using ξ̂ instead of ρ̂ has the advantage of greatly simplifying the calculations. Then, having established a map between 𝔇 and 𝔇^∗ using the matrix representation of the Jordan–Lie algebra, we may now introduce the GKLS generator 𝐋 acting on ξ̂, 𝐋(ξ̂) = - ı/ħ [Ĥ, ξ̂ ]_- - 1/2∑^3_j [ v̂_j^† v̂_j , ξ̂ ]_+ +∑^3_j v̂_j ξ̂ v̂_j^† , where Ĥ∈𝔇 is the Hamiltonian operator and v̂_j an arbitrary element of the complexified algebra 𝔰𝔩(2, ℂ). This generator can be expressed in terms of the Lie and the Jordan products 𝐋(ξ̂) = - [[ Ĥ, ξ̂ ]] - ħ/2 V̂⊙ξ̂ + 𝒦̂( ξ̂ ) , where V̂ = ∑_j v̂_j^†v̂_j and the completely positive map is 𝒦̂( ξ̂ ) = ∑_j v̂_j ξ̂ v̂_j^† . Let us now, in analogy with the analysis performed for the q-bit system, propose that the vector field on 𝔇^∗ associated with the 𝐋 generator takes the form Γ̃ = X̃_H - ħ/2 Ỹ_V + Z̃_𝒦 , where X̃_H is the Hamiltonian vector field Λ̃( df̃_Ĥ, · ) = X̃_H; Ỹ_V is the gradient-like vector field G̃( df̃_V̂, · ) = Ỹ_V; and Z̃_𝒦 is the vector field associated with the completely positive map 𝒦̂. To find the explicit form of the GKLS vector field we consider the master equation for ξ̂ in terms of the Lie and Jordan products and the completely positive map (<ref>); expressing the Hamiltonian operator as Ĥ = H_μ L̂^μ, the Lie product of Ĥ and ξ̂ is [[ Ĥ, ξ̂ ]] = c_σ^μν H_μ y_ν L̂^σ , while the Jordan product of V̂ = V_μL̂^μ and ξ̂ is expressed as V̂⊙ξ̂ = d^μν_σ V_μ y_νL̂^σ . Finally, we compute explicitly the completely positive map 𝒦̂( ξ̂ ) in terms of the generators of 𝔰𝔩(2, ℝ).
Expressing it as 𝒦̂( ξ̂ ) = 4/ħ 𝒦_μ L̂^μ = 4/ħ ⟨ L_μ , 𝒦̂( ξ̂ ) ⟩ L̂^μ , where the coefficients 𝒦_μ are given by 𝒦_μ = ⟨ L_μ , 𝒦̂( ξ̂ ) ⟩ = 1/2ħ Tr{∑_A v̂_A L̂^ν v̂_A^† L̂^α} y_ν g_αμ , which can also be expressed as 𝒦_μ = 𝒦^ν_μ y_ν with 𝒦^ν_μ = 1/2ħ Tr{∑_A v̂_A L̂^ν v̂_A^† L̂^α} g_αμ . Therefore, we finally have that 𝒦̂(ξ̂) = 4/ħ 𝒦^μ_ν y_μ L̂^ν . Once all the linear maps in (<ref>) are expressed in terms of the algebra generators, we are in a position to associate a linear vector field on the dual space to each one of them. To do so, we recall that to a linear map  acting on ξ̂, given by Â(ξ̂) = 𝒜^μ_ν y_μ L̂^ν, we can associate a vector field on the dual space given by Z_A = 𝒜_μ^ν y^μ ∂/∂ y^ν , where 𝒜_μ^ν ≡ g_αμ g^βν 𝒜^α_β. Then, the vector field associated to the GKLS generator 𝐋 in (<ref>) is given by Γ̃ = − Z_[[H, ξ ]] − ħ/2 Z_V⊙ξ + Z̃_𝒦 , where each vector field that composes Γ̃ is given by Z_[[H , ξ]] = − c_σ^μν H_μ y^σ ∂/∂ y^ν , Z_V⊙ξ = d_σ^μν V_μ y^σ ∂/∂ y^ν and Z̃_𝒦 = 4/ħ 𝒦_μ^ν y^μ ∂/∂ y^ν , where to arrive at this result we have used that L_ν = g_νη L̂^η and y_ν ≡ g_νη y^η, as well as g_να g^σβ c^μν_σ = −c^μβ_α, g_να g^σβ d^μν_σ = d^μβ_α and 𝒦_μ^ν = g_μα g^νβ 𝒦^α_β. Then, comparing with the coordinate expression of the vector field in Eq. (<ref>), it is direct that Z_[[H, ξ ]] = − X̃_H and Z_V⊙ξ = Ỹ_V. We have thus obtained an expression in terms of the coordinate basis for the vector field Γ̃ in Eq. (<ref>) associated to the GKLS master equation. Now, we would like to apply the reduction procedure to Γ̃ and obtain a vector field Γ on the affine space which generates a dynamical evolution whose orbits lie entirely in the space of physical quantum states. To accomplish this reduction by means of the immersion i of the affine space into the dual space, let us first express Γ̃ as Γ̃ = X̃_H − ħ [ 1/2 Ỹ_V̂ − 3/4 V_0 ∂/∂ y^0 − (V_0 y^0/4 + V_k y^k) Δ̃ ] + Z̃_𝒦 − 3ħ/4 V_0 ∂/∂ y^0 − ħ (V_0 y^0/4 + V_k y^k) Δ̃ ; at this point it is convenient to substitute the vector field Z̃_𝒦 expressed as follows Z̃_𝒦 = 2 f̃_V̂ ∂/∂ y^0 + 4/ħ 𝒦_μ^k y^μ ∂/∂ y^k , and then, setting y^0 = 2, we obtain the reduced vector field Γ as Γ = X_H − ħ Y_V + Z_𝒦 , where the Choi–Kraus vector field is identified as Z_𝒦 = 4/ħ 𝒦_μ^k y^μ ∂/∂ y^k − ħ (V_0/2 + V_k y^k) Δ , where Δ = y^k ∂/∂ y^k is the Euler–Liouville vector field on the affine space and Y_V is the reduced gradient-like vector field given by Eq. (<ref>). It is convenient to have an explicit formula for the coefficients 𝒦_μ^ν; thus, from (<ref>) we obtain 𝒦_μ^k = 1/2ħ Tr{∑_A v̂_A L̂^α v̂_A^† L̂^k} g_αμ . We can express the GKLS vector field Γ in Cartesian coordinates considering the expectation values f_Ĥ = ħ/2 H_0 + ħ/4 H_k y^k for the Hamiltonian operator and f_V̂ = ħ/2 V_0 + ħ/4 V_k y^k. Then, finally, we obtain the result we were looking for: the GKLS vector field is given by the expression Γ = c^kj_l H_k y^l ∂/∂ y^j − ħ g^kj V_k ∂/∂ y^j + 4/ħ 𝒦_μ^j y^μ ∂/∂ y^j − ħ/2 V_0 Δ , which is a linear vector field because the nonlinear terms in Z_𝒦 and Y_V cancel out in the sum.

§.§ Damping phenomena for the harmonic oscillator dynamics

In this subsection we analyze, as an example, the GKLS dynamics considering the fiducial state described by the Gaussian density matrix (<ref>). Let us consider as the conservative system the harmonic oscillator, with Hamiltonian operator Ĥ_Osc = 1/2 (p̂^2 + q̂^2) = 2 L̂^1 , where we have set all the parameters to unity, i.e., m = 1 and ω = 1. Thus, from (<ref>) the Hamiltonian vector field has the form X_H = 2 ( y^3 ∂/∂ y^2 − y^2 ∂/∂ y^3 ) .
To introduce dissipation, let us first consider the operator v̂_1 = √γ/ħ L̂^1, and hence we have that V̂ = γ/2ħ L̂^0; accordingly, by the definition of the gradient-like vector field in Eq. (<ref>), Y_V vanishes. Therefore, the Choi–Kraus vector field Z_𝒦 is the only term introducing dissipation into the system. To calculate Z_𝒦 we first need to compute the coefficients 𝒦_μ^ν for v̂_1, which can be obtained from the expression 𝒦_μ^ν = γ/2ħ^3 g_μα Tr{L̂^1 L̂^α L̂^1 L̂^ν} . Then, from (<ref>) we find the Choi–Kraus vector field Z_𝒦 = − γ/2 ( y^2 ∂/∂ y^2 + y^3 ∂/∂ y^3 ) . Finally, we obtain the GKLS vector field Γ = X_H + Z_𝒦 = − ( γ/2 y^2 − 2 y^3 ) ∂/∂ y^2 − ( γ/2 y^3 + 2 y^2 ) ∂/∂ y^3 . Then, the system of equations of motion is ẏ^1 = 0 , ẏ^2 = − γ/2 y^2 + 2 y^3 , ẏ^3 = − γ/2 y^3 − 2 y^2 , and its solution for initial conditions at t = 0, denoted as (y^1_0, y^2_0, y^3_0), is described by harmonic functions with the damping factor modulated by γ: y^1(t) = y^1_0 , y^2(t) = e^−γ t/2 (y^2_0 cos 2t + y^3_0 sin 2t) , y^3(t) = e^−γ t/2 (y^3_0 cos 2t − y^2_0 sin 2t) . The GKLS vector field is plotted in Fig. <ref>, where the damping parameter has been set to unity, γ = 1. In this figure we observe that, regardless of the initial conditions, the vector field converges asymptotically to the line (y^1, 0, 0), which is a singular line for Γ. Moreover, we see that the pure coherent state given by the point (1, 0, 0) is not affected by this dynamics. As a second example of damping, we consider again the harmonic oscillator system in Eq. (<ref>), but now the dissipation term is given by v̂_1 = √γ K̂_+ , where the operators K̂_+ and K̂_− are given in terms of the algebra generators as K̂_+ = 1/ıħ ( L̂^3 + ı L̂^2 ) and K̂_− = 1/ıħ ( L̂^3 − ı L̂^2 ) , and are such that K̂_− = K̂_+^†. Thus, by direct calculation we find that V̂ = γ K̂_− K̂_+ = γ/ħ (L̂^0 − L̂^1), and then the gradient-like vector field can be computed by means of the result in (<ref>): Y_V = γ/ħ [ ((y^1)^2 − 1) ∂/∂ y^1 + y^1 y^2 ∂/∂ y^2 + y^1 y^3 ∂/∂ y^3 ] . From (<ref>) and (<ref>) we find that the Choi–Kraus term is Z_𝒦 = γ ( (y^1)^2 − y^1 + 1 ) ∂/∂ y^1 + γ ( y^1 y^2 − y^2/2 ) ∂/∂ y^2 + γ ( y^1 y^3 − y^3/2 ) ∂/∂ y^3 . Finally, the dynamical evolution is determined by the GKLS vector field Γ = X_H − ħ Y_V + Z_𝒦, whose Cartesian coordinate expression is Γ = −γ (y^1 − 2) ∂/∂ y^1 − ( γ/2 y^2 − 2 y^3 ) ∂/∂ y^2 − ( γ/2 y^3 + 2 y^2 ) ∂/∂ y^3 . Notice that the components in the directions of y^2 and y^3 are equal to those of the GKLS vector field (<ref>) in the previous example; however, in this case, there is a non-vanishing component in the y^1 direction. The integral curves of this vector field are solutions to the linear system of equations ẏ^1 = −γ (y^1 − 2) , ẏ^2 = −γ/2 y^2 + 2 y^3 , ẏ^3 = −γ/2 y^3 − 2 y^2 , and are given by y^1(t) = (y^1_0 − 2) e^−γ t + 2 , y^2(t) = e^−γ t/2 (y^2_0 cos 2t + y^3_0 sin 2t) , y^3(t) = e^−γ t/2 (y^3_0 cos 2t − y^2_0 sin 2t) , where again (y^1_0, y^2_0, y^3_0) are the initial conditions at t = 0. The GKLS vector field for this case is displayed in Fig. <ref>; here we observe that there is a unique singular point at (2, 0, 0). For any initial condition, every solution converges asymptotically to this singular point.
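The closed-form trajectories in both examples are simple to sanity-check numerically. The following Python sketch (an illustration added here, not part of the original analysis; the value γ = 1 and the initial condition are arbitrary choices) integrates the GKLS vector field of the second example and compares the result against the quoted solution, confirming the asymptotic approach to the singular point (2, 0, 0).

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.0  # damping parameter, set to unity as in the figures

# GKLS vector field of the second example: dy/dt = Gamma(y)
def Gamma(t, y):
    y1, y2, y3 = y
    return [-gamma * (y1 - 2.0),
            -0.5 * gamma * y2 + 2.0 * y3,
            -0.5 * gamma * y3 - 2.0 * y2]

# closed-form solution quoted in the text
def exact(t, y0):
    y1_0, y2_0, y3_0 = y0
    e1, e2 = np.exp(-gamma * t), np.exp(-0.5 * gamma * t)
    return np.array([(y1_0 - 2.0) * e1 + 2.0,
                     e2 * (y2_0 * np.cos(2 * t) + y3_0 * np.sin(2 * t)),
                     e2 * (y3_0 * np.cos(2 * t) - y2_0 * np.sin(2 * t))])

y0 = [1.0, 0.5, -0.3]
ts = np.linspace(0.0, 10.0, 200)
sol = solve_ivp(Gamma, (0.0, 10.0), y0, t_eval=ts, rtol=1e-10, atol=1e-12)
err = max(np.max(np.abs(sol.y[:, i] - exact(t, y0))) for i, t in enumerate(ts))
print(f"max deviation from closed form: {err:.2e}")  # small numerical error
print("late-time state:", sol.y[:, -1])              # approaches (2, 0, 0)
```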
§ CONCLUSIONS AND PERSPECTIVES

In this work, we have obtained in detail the GKLS vector field for systems described by Gaussian density matrices. In the first part, we thoroughly reviewed the case of two-level systems, the q-bit, to introduce the concepts and procedures presented in <cit.>, which are necessary to accomplish the same task for the dissipative dynamics of the Gaussian case. Although the two cases can be worked out similarly, there are some differences that must be taken into account. In both cases, we start by introducing the space of quantum states and recognize it as a parametrized space that can be described as a manifold with boundary. The purity condition ρ^2 = ρ determines the boundary of the manifold of states, while the mixture condition Tr{ρ^2} < 1 determines its interior; that is, the boundary is the space of pure states and the open interior describes those states which are statistical mixtures. As a first distinct feature, the manifold representing the quantum states of the q-bit is compact, while in the case of the Gaussian system it is open, and the isospectral submanifolds are hyperboloids (<ref>) instead of Bloch spheres. To describe a dynamical evolution that allows a change in the degree of statistical mixture of the states, we have determined the GKLS vector field. The integral curves of this vector field are, in general, transversal to the foliation, making it possible to evolve from a pure or mixed state to another mixed state. This was done consistently, imposing that the orbits of this vector field be constrained to remain in the space of quantum states. To find the GKLS vector field in the Gaussian case, we followed closely the construction for n-level systems presented in <cit.>. Thus, we considered the description of the space of quantum states from an observable point of view, which offers the advantage of endowing the space of observables with a Lie–Jordan algebra structure. In particular, for the Gaussian density matrix case, we restricted our study to operators quadratic in the position and momentum operators. The space of observables, that is, the space of real functions on the dual of the Lie–Jordan algebra, provides a realization of the Lie–Jordan algebra of operators; thus, it is possible to define geometric structures on the space of observables from these algebraic products. Namely, associated with the Lie product it is possible to define a (skew-symmetric) Poisson bivector field, and with the Jordan product a corresponding symmetric bivector field. The Poisson bivector field endows the space of states with a Poisson structure, and a Hamiltonian vector field can be defined on the entire space, generalizing the symplectic structure, in particular, to the case of odd-dimensional spaces. On the other hand, the symmetric bivector field defines a gradient-like vector field that is transverse to the leaves of the foliation. To constrain the orbits associated with the Hamiltonian and gradient-like vector fields to the space of quantum states, it is necessary to define such tensors on this space by means of a reduction procedure. This reduction allows us to find a new symmetric bivector which yields a vector field that is transverse to the foliation but tangent to the leaf of pure states. The orbits of the vector field defined from this modified bivector define a dynamics such that a pure state remains pure under this evolution. To amend this, a vector field associated with a completely positive map from the dual space to itself is introduced, defining in this way the GKLS vector field associated with the corresponding GKLS operator.
Obviously, in the Gaussian case, the GKLS vector field is different from the one for the q-bit. It is worth mentioning that, to obtain the GKLS vector field for Gaussian density states, we have defined a new Hermitian operator (<ref>) in terms of the algebra generators to establish the mapping (<ref>) between operators in the algebra and linear functions on its dual. This operator allows us to reproduce the expectation value through a simple operation; hence, it is a simpler but equivalent way of establishing the relation between an operator and its dual. Therefore, we have shown that, in the case of states described by a Gaussian density matrix, the GKLS dynamics also admits a decomposition principle, i.e., a conservative Hamiltonian part as a reference dynamics, while the sum of the gradient-like and the Choi–Kraus vector fields is considered as a “perturbation term” associated with dissipation. In this sense, we have seen that the very concept of dissipation is not associated with the GKLS vector field itself, but rather with the decomposition of this vector field in terms of the relevant geometric structures: the Hamiltonian vector field, the gradient-like vector field and the Choi–Kraus vector field. There are still many investigations to pursue in the study of the GKLS evolution of Gaussian density matrices. For instance, an immediate question is whether this procedure can be generalized to address systems with more dimensions. The generalization of this procedure to n levels has already been obtained in Ref. <cit.>, while the general form of the Gaussian density matrix with a statistical mixture in n dimensions is well known and has been constructed employing a generalization of the so-called covariance matrix; see <cit.> and references therein. Thus, taking these results into account, one could proceed directly to construct the GKLS vector field for more degrees of freedom. This will be presented in a forthcoming work. Another important generalization of our procedure is to consider Gaussian density matrices with non-vanishing first moments. This implies considering observables at most quadratic in the position and momentum operators, i.e., one may include linear operators such as the creation and annihilation operators. However, including linear operators means that we are no longer dealing with the same Lie–Jordan algebra, but rather with a semidirect product of algebras, specifically, of this algebra with the Heisenberg–Weyl algebra. Then, for this kind of system, one may try to obtain the GKLS vector field following the procedures in this paper. This will be studied in a future work. Finally, in this work we have only considered the Gaussian density matrix as a fiducial state, so a natural question is whether different fiducial states can be chosen. In particular, it might be possible to consider the following Wigner functions. Let us define I(q, p, t) := 4/ħ^2 r^2 [ σ_p^2 q^2 − 2 σ_qp q p + σ_q^2 p^2 ] ; then, one may introduce the following family of Wigner functions W_n(q, p) = (−1)^n/πħr e^−I(q,p,t)/2 L_n[ I(q, p, t) ] , where L_n denotes the Laguerre polynomials. Then, r = 1 corresponds to Wigner functions associated with the Fock states; however, for r ≠ 1 these states satisfy the Robertson–Schrödinger uncertainty relation σ_q^2 σ_p^2 − σ^2_qp = ħ^2 r^2/4 , and the mixture condition Tr{ρ̂^2} = 1/r . Actually, for n = 0, one obtains the Gaussian density matrix studied in this contribution.
This set of states has the same space of parameters analyzed for the Gaussian density matrix, and hence our results might be applied directly, i.e., the GKLS vector field obtained in (<ref>) could be used with fiducial states in the set { W_n(q, p) }. These results will be studied in future contributions.

§ ACKNOWLEDGEMENTS

H. Cruz-Prado is grateful for the scholarship provided by CONAHCyT México, with reference number 379177. O. Castaños thanks support from PASPA of DGAPA-UNAM.

§ REFERENCES

Strocchi-1966 F. Strocchi, Rev. Mod. Phys., 38 (1) (1966). Cantoni-1978 V. Cantoni, Rendiconti del Seminario Matematico e Fisico di Milano, 48 (1978) 35–42. Kramer-1980 P. Kramer and M. Saraceno, Geometry of the time-dependent variational principle in quantum mechanics (Springer-Verlag, Berlin Heidelberg, 1980). Cirelli-1983 R. Cirelli, P. Lanzavecchia and A. Mania, J. Phys. A: Math. Gen., 16 (16) (1983) 3826. Moshinsky-1971 M. Moshinsky and C. Quesne, J. Math. Phys., 12 (8) (1971) 1772–1780. Mello-1975 P. A. Mello and M. Moshinsky, J. Math. Phys., 16 (10) (1975) 2017–2028. Dirac-1981 P. A. M. Dirac, The principles of quantum mechanics (Oxford University Press, 1981). Grabowski-2005 J. Grabowski, M. Kuś, and G. Marmo, J. Phys. A Math. and Theor., 38 (2005) 10217. Aniello-2011 P. Aniello, J. Clemente-Gallardo, G. Marmo, G. F. Volkert, (2011) arXiv preprint arXiv:1101.0625. Ciaglia-2017 F. M. Ciaglia, F. Di Cosmo, A. Ibort, M. Laudato and G. Marmo, Open Syst. Inf. Dyn., 24 (3) (2017) 1740003. Chruscinski-2019 D. Chruściński, F. M. Ciaglia, A. Ibort, G. Marmo, and F. Ventriglia, Ann. Phys., 400 (2019) 221–245. Gorini-1976 V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, J. Math. Phys., 17 (1976) 821. Lindblad-1976 G. Lindblad, Commun. Math. Phys., 48 (1976) 119. Malkin-1969 I. A. Malkin, V. I. Man'ko and D. A. Trifonov, Phys. Lett. A, 30 (1969) 414. Malkin-1973 I. A. Malkin, V. I. Man'ko and D. A. Trifonov, J. Math. Phys., 14 (5) (1973) 576–582. Perelomov-1972 A. M. Perelomov, Commun. Math. Phys., 26 (1972) 222. Arecchi-1972 F. T. Arecchi, E. Courtens, R. Gilmore, and H. Thomas, Phys. Rev. A, 6 (6) (1972) 2211–2237. Onofri-1975 E. Onofri, J. Math. Phys., 16 (1975) 1087–1089. Manko-1997 V. I. Man'ko, G. Marmo, E. C. G. Sudarshan, and F. Zaccaria, Phys. Scripta, 55 (1997) 528. Aniello-2000 P. Aniello, V. Man'ko, G. Marmo, S. Solimeno, and F. Zaccaria, J. Opt. B Quantum Semiclass., 2 (2000) 718–725. Aniello-2009 P. Aniello, G. Marmo, and G. F. Volkert, Int. J. Geom. Meth. Mod. Phys., 07 (2009) 369–383. Cruz-2021 H. Cruz-Prado, G. Marmo, D. Schuch and O. Castaños, J. Math. Phys., 62 (4) (2021) 042105. Ferraro-2005 A. Ferraro, S. Olivares and M. A. G. Paris, Gaussian states in continuous variables quantum information (Napoli Series on Physics and Astrophysics, Bibliopolis, Naples, 2005). Scully-1999 M. O. Scully and M. S. Zubairy, Quantum Optics (Cambridge University Press, Cambridge, UK, 1999). Radcliffe-1971 J. M. Radcliffe, J. Phys. A, 4 (1971) 313. Perelomov-1977 A. M. Perelomov, Soviet Physics Uspekhi, 9 (1977) 703. Ercolessi-2010 E. Ercolessi, G. Marmo, G. Morandi, From the equations of motion to the canonical commutation relations, Riv. Nuovo Cimento 33 (8) (2010). Zhang-1990 W. M. Zhang, D. H. Feng and R. Gilmore, Rev. Mod. Phys., 62 (1990) 867. Cruz-2020 H. Cruz-Prado, G. Marmo and D. Schuch, J. Phys. Conf. Ser., 1612 (2020) 012010. Arnold-2013 V. I. Arnol'd, Mathematical methods of classical mechanics (Springer Science & Business Media, 2013). Kraus-1971 K. Kraus, Ann. Phys., 64 (1971) 311. Choi-1972 M. D.
Choi, J. Canad. Math., 24 (1972) 520. Choi-1975 M. D. Choi, Lin. Alg. Appl., 10 (1975) 285. Carinena-2015 J. F. Cariñena, A. Ibort, G. Marmo and G. Morandi, Geometry from dynamics, classical and quantum. (Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 2015). Weyl-1927 H. Weyl, The Theory of Groups and Quantum Mechanics (Dover Publications, New York, 1950). Cruz-2015 H. Cruz, D. Schuch, O. Castaños and O. Rosas-Ortiz, Ann. Phys., 360 (2015) 44-60. Balazs-1986 N. L. Balazs and A. Voros, Phys. Rep., 143 (3) (1986) 109-240 . Siegel-1943 C. L. Siegel, Am. J. Math., 22 (1) (1943) 1–86. Simon-1987 R. Simon, E. C. G. Sudarshan, and N. Mukunda, Phys. Rev. A 36 (1987) 3868. Kastrup-2003 H. A. Kastrup, Fortschritte der Physik: Progress of Physics, 51 (10-11) (2003) 975–1134. Wolf-1986 Lie Methods in Optics, Proceedings of the CIFMO-CIO Workshop Held at León, México, January 7-10, 1985 (Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1986) Balachandran-2010 A. P. Balachandran, S.G. Jo and G. Marmo, Group Theory and Hopf Algebras: Lectures for Physicists (World Scientific 2010).
The radius of statistical efficiency

Joshua Cutler (Department of Mathematics, U. Washington, Seattle, WA 98195), Mateo Díaz (Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD 21218, USA), Dmitriy Drusvyatskiy (Department of Mathematics, U. Washington, Seattle, WA 98195). Research of Drusvyatskiy was supported by the NSF DMS-2306322, NSF CCF 1740551, and AFOSR FA9550-24-1-0092 awards.

Classical results in asymptotic statistics show that the Fisher information matrix controls the difficulty of estimating a statistical model from observed data. In this work, we introduce a companion measure of robustness of an estimation problem: the radius of statistical efficiency (RSE) is the size of the smallest perturbation to the problem data that renders the Fisher information matrix singular. We compute RSE up to numerical constants for a variety of test bed problems, including principal component analysis, generalized linear models, phase retrieval, bilinear sensing, and matrix completion. In all cases, the RSE quantifies the compatibility between the covariance of the population data and the latent model parameter. Interestingly, we observe a precise reciprocal relationship between RSE and the intrinsic complexity/sensitivity of the problem instance, paralleling the classical Eckart–Young theorem in numerical analysis.

§ INTRODUCTION

A central theme in computational mathematics is that the numerical difficulty of solving a given problem is closely linked to both (i) the sensitivity of its solution to perturbations and (ii) the shortest distance of the problem to an ill-posed instance. As a rudimentary example, consider solving an m × d linear system Ax = b. The celebrated Eckart–Young theorem asserts the equality: min_{B ∈ ℝ^m×d} { ‖A − B‖_F : B is singular } = σ_min(A) , where the left-hand side is the distance to ill-posedness and the right-hand side measures difficulty/sensitivity. Although the proof is elementary, the conclusion is intriguing since it equates two conceptually distinct quantities. Namely, the reciprocal of the minimal singular value, 1/σ_min(A), is classically known to control both the numerical difficulty of solving the linear system Ax = b and the Lipschitz stability of the solution to perturbations in the data. In contrast, the left side of the equation (<ref>) is geometric; it measures the smallest perturbation to the data that renders the problem ill-posed. The exact equality in (<ref>) is somewhat misleading because it is specific to linear systems. We would expect that for more sophisticated problems, the difficulty/sensitivity of the problem should be inversely proportional to the distance to ill-posedness. This is indeed the case for a wide class of problems in numerical analysis <cit.> and optimization <cit.>, including computing eigenvalues and eigenvectors, finding zeros of polynomials, pole assignment in control systems, conic optimization, nonlinear programming, and variational inequalities. Despite this impressive body of work, this line of research is largely unexplored in statistical contexts.
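As a quick numerical illustration of the Eckart–Young identity (added here for concreteness; the random test matrix is an arbitrary choice), the minimizing perturbation is obtained by deleting the smallest rank-one piece of the SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# SVD gives the smallest singular triple (u, s_min, v)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_min, u, v = s[-1], U[:, -1], Vt[-1, :]

# Eckart-Young witness: subtract the smallest rank-one piece of A
B = A - s_min * np.outer(u, v)

print(np.linalg.matrix_rank(B) < 3)                      # True: B is singular
print(np.isclose(np.linalg.norm(A - B, "fro"), s_min))   # True: distance = sigma_min(A)
```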
Therefore, here we ask: Is there a succinct relationship between complexity, sensitivity, and distance to ill-posedness for problems in statistical inference and learning? We will see that in a certain precise sense the answer is indeed yes for a wide class of problems. The starting point for our development is that the statistical difficulty of estimation is tightly controlled by (quantities akin to) the Fisher information matrix for maximum likelihood estimation. This connection is made precise for example by the Cramér-Rao lower bound <cit.> and the local asymptotic minimax theory of Hájek and Le Cam <cit.>. From an optimization viewpoint, the minimal eigenvalue of the Fisher information matrix is closely related to the quadratic growth constant of the objective function modeling the learning problem at hand. In particular, (near-)singularity of this matrix signifies that the problem is ill-conditioned. Inspired by this observation, we introduce a new measure of robustness associated to an estimation task: the radius of statistical efficiency (RSE) is the size of the smallest perturbation to the problem data that renders the Fisher information matrix singular. Thus a large RSE signifies the existence of a large neighborhood around the problem instance comprised only of well-posed problems. We compute RSE for a variety of test bed problems, including principal component analysis (PCA), generalized linear models, phase retrieval, bilinear sensing, and rank-one matrix completion. In all cases, the RSE exhibits a precise reciprocal relationship with the statistical difficulty of solving the target problem, thereby directly paralleling the Eckart–Young theorem and its numerous extensions in numerical analysis and optimization. Moreover, we provide gradient-based conditions for general estimation problems which ensure the validity of such a reciprocal relationship. Slope-based criteria for error bounds, due to Ioffe <cit.> and Azé-Corvellec <cit.>, play a key role in this development. Before delving into the technical details, we illustrate the main thread of our work with two examples—linear regression and PCA—where the conclusions are appealingly simple to state.

Linear regression. The problem of linear regression is to recover a vector β_⋆ ∈ ℝ^d from noisy linear measurements y = ⟨x, β_⋆⟩ + ε, where x ∈ ℝ^d is drawn from a probability distribution 𝒫 and ε is a zero-mean noise variable that is independent of x. The standard approach to this task is to minimize the mean-squared error min_{β ∈ ℝ^d} 1/2 𝔼_{x,y} (⟨x, β⟩ − y)^2. Classical results show that the asymptotic performance of estimators for this problem is tightly controlled by Σ^−1, where Σ := 𝔼_{x∼𝒫} xx^⊤ is the second moment matrix of the population. The closer the matrix Σ is to being singular, the more challenging the problem (<ref>) is to solve, requiring a higher number of samples. Seeking to estimate a neighborhood of well-posed problems around 𝒫, the RSE is defined to be the minimal Wasserstein-2 distance from 𝒫 to distributions with a singular second moment matrix. We will see that for linear regression (<ref>), RSE is simple to compute: RSE(𝒫) = √(λ_min(Σ)). That is, the simplest measure of ill-conditioning of the target problem, 1/√(λ_min(Σ)), has a geometric interpretation as the reciprocal of the distance to the nearest ill-posed problem. The representation of RSE in (<ref>) is not specific to linear regression and holds much more generally for (quasi) maximum likelihood estimation <cit.> with strongly convex cumulant functions.
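To make the formula RSE(𝒫) = √(λ_min(Σ)) concrete, the following sketch (illustrative only; the spectrum of Σ is a synthetic choice) computes the RSE of a linear regression instance and exhibits a witness perturbation: transporting each sample x to (I − uu^⊤)x, where u is the bottom eigenvector of Σ, collapses the second moment matrix to a singular one at Wasserstein-2 cost √(𝔼⟨u, x⟩^2) = √(λ_min(Σ)).

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 200_000

# synthetic population: x ~ N(0, Sigma) with a prescribed spectrum
eigvals = np.array([4.0, 2.0, 1.0, 0.25])
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
Sigma = Q @ np.diag(eigvals) @ Q.T
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

rse = np.sqrt(eigvals.min())           # RSE = sqrt(lambda_min(Sigma))

# witness: transport x -> (I - uu^T) x, killing the bottom eigendirection
u = Q[:, np.argmin(eigvals)]
X_flat = X - np.outer(X @ u, u)

w2_cost = np.sqrt(np.mean(np.sum((X - X_flat) ** 2, axis=1)))
print(f"RSE formula    : {rse:.4f}")
print(f"transport cost : {w2_cost:.4f}")   # matches RSE up to sampling error
print(f"new lambda_min : {np.linalg.eigvalsh(X_flat.T @ X_flat / n)[0]:.2e}")
```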
Principal component analysis (PCA). Principal component analysis seeks to find a q-dimensional subspace 𝒱 ⊂ ℝ^d that captures most of the variance of a centered random vector x drawn from a probability distribution 𝒫. Analytically, this amounts to solving the problem max_{R ∈ Gr(q, d)} 𝔼_{x∼𝒫} ‖Rx‖^2_2 , where the Grassmannian manifold Gr(q, d) consists of all orthogonal projection matrices R ∈ ℝ^d×d onto q-dimensional subspaces of ℝ^d. The column space of the optimal matrix R is called the top q principal subspace. Intuitively, the hardness of the problem is governed by the gap λ_q − λ_q+1 between the q'th and (q+1)'th eigenvalues of Σ = 𝔼_𝒫 xx^⊤: the smaller the gap is, the higher the number of samples required for estimation, and this can indeed be made rigorous. Again, RSE by definition is the minimal Wasserstein-2 distance from 𝒫 to a distribution whose covariance has equal q'th and (q+1)'st eigenvalues. We will show that RSE admits the simple form RSE(𝒫) = 1/√2 ( √(λ_q(Σ)) − √(λ_q+1(Σ)) ). In particular, the expression (<ref>) endows the gap √(λ_q(Σ)) − √(λ_q+1(Σ))—the reciprocal of the problem's complexity—with a geometric meaning as the distance to a nearest ill-conditioned problem instance. The two examples of linear regression and PCA can be understood within the broader context of stochastic optimization: min_{β ∈ ℬ} f(β) where f(β) = 𝔼_{z∼𝒫} ℓ(β; z). (SP(𝒫)) Here z is data drawn from a distribution 𝒫, the function ℓ(·; z) is a loss parameterized by z, and ℬ ⊂ ℝ^d is a smooth manifold of allowable model parameters. For example, <ref> may correspond to least-squares regression or maximum likelihood estimation. In both cases, the asymptotic efficiency of estimators for finding the minimizer β̄ of <ref> is tightly controlled by the following matrix akin to the Fisher information: ℐ(𝒫, β̄) = P_𝒯 ∇^2_ββ ℒ(β̄, 𝒫, λ_⋆) P_𝒯. Here, P_𝒯 is the projection onto the tangent space of ℬ at β̄ and ℒ(β, 𝒫, λ_⋆) is the Lagrangian function for <ref> with optimal multiplier λ_⋆. For simplicity, we will abuse notation and call ℐ(𝒫, β̄) the Fisher information matrix. The matrix ℐ(𝒫, β̄) plays a central role in estimation, as highlighted by the lower bounds of Cramér-Rao and Hájek-Le Cam <cit.>, as well as their recent extensions to stochastic optimization of Duchi-Ruan <cit.>. Moreover, the minimal nonzero eigenvalue of ℐ(𝒫, β̄) controls both the coefficient of quadratic growth of the objective function in <ref> and the Lipschitz stability of the solution under linear perturbations. In particular, the problem <ref> becomes ill-conditioned when the minimal eigenvalue of ℐ(𝒫, β̄) is small. In summary, the matrix ℐ(𝒫, β̄) tightly controls the difficulty of solving <ref>. Consequently, it is appealing to consider as a measure of robustness of <ref> the size of the smallest perturbation to the data 𝒫, say in the Wasserstein-2 distance W_2(·,·), that renders the Fisher information matrix singular. This is the viewpoint we explore in the current work, and we call this quantity the radius of statistical efficiency (RSE). We choose to use the Wasserstein-2 distance in the definition of RSE, as opposed to other metrics on measures, because it leads to concise and easily interpretable estimates in examples. In the rest of the paper, we study basic properties of RSE and compute it up to numerical constants for a variety of test bed problems: generalized linear models, PCA, phase retrieval, blind deconvolution, and matrix completion. In all cases, the RSE translates intuitive measures of “well-posedness” into quantified neighborhoods of stable problems.
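For intuition about the PCA formula (an added illustration with synthetic eigenvalues), one can check it against an explicit witness perturbation: rescaling the q'th and (q+1)'st eigendirections so that their standard deviations meet at the midpoint (√λ_q + √λ_q+1)/2 equalizes the two eigenvalues, at squared Wasserstein-2 cost exactly ½(√λ_q − √λ_q+1)^2.

```python
import numpy as np

lam = np.array([5.0, 3.0, 1.5, 0.5])       # spectrum of Sigma (descending)
q = 2                                       # dimension of the principal subspace

s_q, s_q1 = np.sqrt(lam[q - 1]), np.sqrt(lam[q])
rse = (s_q - s_q1) / np.sqrt(2)             # claimed RSE formula

# witness perturbation: rescale the q'th and (q+1)'st eigendirections so both
# standard deviations equal the midpoint m; the resulting covariance has
# lambda_q = lambda_{q+1} = m^2, i.e., it is ill-conditioned for PCA.
m = (s_q + s_q1) / 2
cost_sq = (m - s_q) ** 2 + (m - s_q1) ** 2  # E||x - T(x)||^2 for the rescaling map
print(f"RSE formula  : {rse:.6f}")
print(f"witness cost : {np.sqrt(cost_sq):.6f}")   # identical
print(f"new eigenpair: {m**2:.4f} = {m**2:.4f}")
```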
Moreover, in all cases we show a reciprocal relationship between the minimal eigenvalue of ℐ(𝒫, β̄) and RSE, thereby paralleling the Eckart–Young theorem in numerical analysis and optimization.

Outline. The remainder of this section covers related work and the basic notation we use. Section <ref> formally describes the radius of statistical efficiency and establishes a few general-purpose results relating RSE to the minimal eigenvalue of the Fisher information matrix. The subsequent sections characterize RSE for several problems: PCA (Section <ref>), generalized linear models and (quasi) maximum likelihood estimation (Section <ref>), rank-one matrix regression (Section <ref>). Section <ref> closes the paper with conclusions. All proofs appear in the appendix in order to streamline the reading.

§.§ Related work

Our work is closely related to a number of topics in statistics and computational mathematics.

Local minimax lower bounds in estimation. There is a rich literature on minimax lower bounds for statistical estimation problems; we refer the reader to <cit.> for a detailed treatment. Typical results of this type lower-bound the performance of any statistical procedure on a worst-case instance for that procedure. Minimax lower bounds can be quite loose as they do not consider the complexity of the particular problem that one is trying to solve, but rather that of an entire problem class to which it belongs. More precise local minimax lower bounds, as developed by Hájek and Le Cam <cit.>, provide much finer problem-specific guarantees. Simply put, a single object akin to the Fisher information matrix controls both the difficulty of estimation from finitely many samples and the stability of the model parameters to perturbation of the density. Extensions of this theory to stochastic nonlinear programming were developed by Duchi and Ruan <cit.> and extended to decision-dependent problems in <cit.> and to a wider class of (partly smooth) problems in <cit.>. In particular, it is known that popular algorithms such as sample average approximation <cit.> and stochastic gradient descent with iterate averaging <cit.> match the asymptotic local lower bound, and are therefore asymptotically optimal. Weaker ad hoc results, based on the Cramér-Rao lower bound, have been established for a handful of problems <cit.>.

Radius theorems. The classical numerical analysis literature emphasizes the close interplay between the efficiency of numerical algorithms and their sensitivity to perturbation. Namely, problems with solutions that change rapidly under small perturbations are typically difficult to solve. Examples of this phenomenon abound in computational mathematics; e.g. eigenvalue problems and polynomial equations <cit.> and optimization <cit.>. Motivated by this observation, Demmel in <cit.> introduced a new robustness measure, called the radius of regularity, which measures the size of a neighborhood around a problem instance within which all other problem instances are stable. A larger neighborhood thereby signifies a more robust problem instance. Estimates on the radius of regularity have now been computed for a wealth of computational problems; e.g. solving polynomial systems <cit.>, linear and conic programming <cit.>, and nonlinear optimization <cit.>. The radius of statistical efficiency, introduced here, serves as a direct analogue for statistical estimation.

Conditioning and radius theorems in recovery problems.
Several condition numbers—controlling the convergence of first-order methods—are closely related to notions of strong identifiability, e.g., the restricted isometry property (RIP), in the context of statistical recovery problems <cit.>. A few works <cit.> have established connections between these notions of strong identifiability and a suitably-defined radius to ill-posed instances. In particular, <cit.> established a connection between Renegar's conic distance to infeasibility <cit.> and the null space property <cit.> in compressed sensing. In a similar vein, <cit.> linked the ℓ_1-distance to ill-posed problems with the RIP for generalized rank-one matrix completion. Finally, <cit.> defined a condition number for the LASSO variable selection problem via the reciprocal of the distance to ill-posedness, designed an algorithm whose complexity depends solely on this condition number, and proved an impossibility result for instances with infinite condition number. The radius of statistical efficiency, defined in this work, is distinct from and complementary to these metrics.

Error bounds. The basic question we explore is the relationship between the minimal eigenvalue of the Fisher information matrix (complexity) and the distance to the set where this eigenvalue is zero (RSE). The theory of error bounds addresses exactly questions of this type, namely, when the function's value polynomially bounds the distance to the set of minimizers. See for example the authoritative monographs on the subject, <cit.> and <cit.>. Indeed, Demmel's original work <cit.> makes heavy use of this interpretation. We explore this path here as well when developing infinitesimal characterizations of RSE in Theorem <ref>. The added complication is that the functions we deal with are defined over a complete metric space, and therefore the techniques we use rely on variational principles (à la Ekeland) and computations of the slope.

§.§ Notation

Linear algebra. Throughout, we let ℝ^d denote the standard d-dimensional Euclidean space with dot product ⟨x, y⟩ = x^⊤y and norm ‖x‖_2 = √(⟨x, x⟩). The unit sphere in ℝ^d will be denoted by 𝕊^d−1, while the set of nonnegative vectors will be written as ℝ^d_+. The symbol ℝ^m×n will denote the Euclidean space of real m × n matrices, endowed with the trace inner product ⟨X, Y⟩ = Tr(X^⊤Y). The symbol ⊗ denotes the Kronecker product. The Frobenius and operator norms will be written as ‖·‖_F and ‖·‖_op, respectively. The singular values of a matrix A ∈ ℝ^m×n will be arranged in nonincreasing order: σ_1(A) ≥ σ_2(A) ≥ … ≥ σ_m∧n(A). The space of real symmetric d × d matrices is denoted by 𝐒^d and is equipped with the trace product as well. The cone of d × d positive semidefinite matrices will be written as 𝐒^d_+. The eigenvalues of a matrix A ∈ 𝐒^d will be arranged in nonincreasing order: λ_1(A) ≥ λ_2(A) ≥ … ≥ λ_d(A). For any subspace 𝒱 ⊂ ℝ^d, the symbol P_𝒱: ℝ^d → 𝒱 will denote the orthogonal projection onto 𝒱. The compression of any matrix A ∈ ℝ^d×d to 𝒱, denoted A|_𝒱, is the map A|_𝒱 := P_𝒱 A P_𝒱.

Probability theory. We will require some background on the Wasserstein geometry of the space of probability measures on ℝ^d. In order to streamline the reading, we record here only the most essential notation that we will need. A detailed review of Wasserstein geometry appears in Section <ref>. To this end, we let 𝒫_p(ℝ^d) be the space of measures μ on ℝ^d with a finite p'th moment, 𝔼_μ ‖x‖_p^p < ∞. The subset of measures of 𝒫_p(ℝ^d) that are centered, meaning 𝔼_μ[x] = 0, will be written as 𝒫_p^∘(ℝ^d).
When the space ℝ^d is clear from context, we use the shorthand 𝒫_p and 𝒫_p^∘. A convenient metric on 𝒫_p is furnished by the Wasserstein distance W_p(μ, ν); see Section <ref> for details. The distance function to a set of measures Q ⊂ 𝒫_p is defined by W_p(μ, Q) = inf_{ν∈Q} W_p(μ, ν). The symbol Σ_μ = 𝔼_μ xx^⊤ will denote the second moment matrix of any measure μ ∈ 𝒫_2. In the rest of the paper, we will use the symbol 𝒫 to denote a distinguished measure associated with the problem of interest, while we use μ as a placeholder for arbitrary measures.

§ THE DISTANCE TO ILL-CONDITIONED PROBLEMS

In this section, we formally define the radius of statistical efficiency (RSE) and develop some techniques for computing it. Throughout, we will focus on the stochastic optimization problem min_{β ∈ ℬ} f(β) where f(β) = 𝔼_{z∼𝒫} ℓ(β; z). (SP(𝒫)) Here, the set ℬ ⊂ ℝ^d is a C^2 manifold and z is drawn from a distribution 𝒫 ∈ 𝒫_2(𝒵), where 𝒵 is a finite-dimensional Euclidean space equipped with its Borel σ-algebra. We assume that the function ℓ(β; z) is measurable and twice differentiable in β for every z and that f is C^2-smooth. We also make the blanket assumption that the gradient and Hessian of ℓ(·; z) are 𝒫-integrable. The difficulty of solving the problem <ref> from finitely many samples z_1, z_2, …, z_n iid∼ 𝒫 is tightly controlled by a matrix akin to the Fisher information. This object, which we now describe, plays a central role in our work. Let β̄ be a minimizer of <ref> and define the solution map: σ(v) = argmin_{β ∈ ℬ ∩ B_ε(β̄)} f(β) − ⟨v, β⟩. Thus, the set σ(v) is comprised of all solutions to a problem obtained from <ref> by subtracting a linear/tilt perturbation ⟨v, β⟩. Clearly, a desirable property is for σ to be single-valued and smooth. With this in mind, we introduce the following notion due to Poliquin and Rockafellar <cit.>. (Tilt-stable minimizer) The point β̄ is a tilt-stable minimizer of <ref> if the map σ(·) satisfies σ(0) = β̄ and is single-valued and C^1-smooth on some neighborhood of the origin. Then the regularity modulus of the problem is defined to be REG(𝒫) = ‖∇σ(0)‖_op. If β̄ is not tilt-stable, we call β̄ unstable and set REG(𝒫) = +∞. In particular, we will regard REG(𝒫) as the measure of difficulty of solving the problem <ref>. When ℬ is the whole space, β̄ is a tilt-stable minimizer if and only if the Hessian ∇^2 f(β̄) is nonsingular, in which case the equality ∇σ(0) = [∇^2 f(β̄)]^−1 holds <cit.>. In particular, when <ref> corresponds to maximum likelihood estimation, the matrix ∇σ(0) reduces to the inverse of the Fisher information. More generally, tilt-stability can be characterized either in terms of definiteness of the covariant Hessian or of the Hessian of the Lagrangian on the tangent space to ℬ. Since we will use both of these viewpoints, we review them now. The reader may safely skip this discussion during the first reading since it will not be used until the appendix.

Lagrangian characterization. Let G = 0 be the local defining equations for ℬ around β̄. That is, G: ℝ^d → ℝ^m is a C^2-smooth map with surjective Jacobian ∇G(β̄) and such that the two sets ℬ and {β : G(β) = 0} coincide near β̄. Then the tangent space to ℬ at β̄ is 𝒯 = Null(∇G(β̄)). Define the Lagrangian function ℒ(β, λ) := f(β) + ⟨λ, G(β)⟩. First-order optimality conditions at β̄ ensure that there exists a unique vector λ_⋆ ∈ ℝ^m satisfying ∇_β ℒ(β̄, λ_⋆) = 0. Define the matrix ℐ(𝒫, β̄) := P_𝒯 · ∇^2_ββ ℒ(β̄, λ_⋆) · P_𝒯 , where P_𝒯 is the orthogonal projection onto the tangent space 𝒯. If β̄ is a local minimizer of the problem, then ℐ(𝒫, β̄) is positive semidefinite. Conversely: ℐ(𝒫, β̄) is positive definite on 𝒯 if and only if β̄ is a tilt-stable minimizer.
Moreover, in this case the equality ∇σ(0) = ℐ(𝒫, β̄)^† holds, where † denotes the Moore–Penrose inverse. In particular, one may regard ∇σ(0) as akin to the inverse of the Fisher information matrix for MLE. Note that when β̄ is tilt-stable, the reciprocal of the regularity modulus, 1/REG(𝒫), coincides with the minimal nonzero eigenvalue of ℐ(𝒫, β̄).

Intrinsic characterization. Often, the defining equations of the manifold ℬ are either unknown or difficult to work with. In this case, tilt-stability can be characterized through second-order expansions along curves. Namely, for any tangent vector u ∈ 𝒯, there exists a C^2-smooth curve γ_u: (−ϵ, ϵ) → ℬ for some ϵ > 0 satisfying γ_u(0) = β̄ and γ̇_u(0) = u. The covariant Hessian of f at β̄ is the unique symmetric bilinear form ∇^2_ℬ f(β̄): 𝒯 × 𝒯 → ℝ satisfying ∇^2_ℬ f(β̄)[u, u] = (f∘γ_u)″(0). It is classically known that the equality ∇^2_ℬ f(β̄)[u, u] = u^⊤ · ℐ(𝒫, β̄) · u holds for all u ∈ 𝒯, where the matrix ℐ(𝒫, β̄) is defined in (<ref>). Consequently, ∇^2_ℬ f(β̄) is positive semidefinite when β̄ is a local minimizer. Conversely: ∇^2_ℬ f(β̄) is positive definite on 𝒯 if and only if β̄ is a tilt-stable minimizer. In this case, identifying ∇^2_ℬ f(β̄) with a matrix, the equality ∇σ(0) = (P_𝒯 ∇^2_ℬ f(β̄) P_𝒯)^† holds. In particular, the equality REG(𝒫)^−1 = λ_min(∇^2_ℬ f(β̄)) holds. The sensitivity matrix ∇σ(0) figures prominently in the asymptotic performance of estimation procedures. Notably, building on classical ideas due to Hájek and Le Cam, the recent paper of Duchi and Ruan <cit.> established a lower bound on the asymptotic covariance of arbitrary estimators β_n of β̄ using n samples z_1, …, z_n. The precise lower bound is quite technical, and we refer the interested reader to their paper. In summary, their result shows that if β̄ is a tilt-stable minimizer, then the asymptotic covariance of √n (β_n − β̄) is lower bounded in the Loewner order by the matrix Γ := ∇σ(0) · Cov(∇ℓ(β̄, z)) · ∇σ(0). Moreover, in typical settings the expression in (<ref>) simplifies to the equality Γ = ∇σ(0); this is the case for example for (quasi) maximum likelihood estimation (Section <ref>) and rank-one matrix regression problems (Section <ref>). Thus, asymptotically the best error that any estimator can achieve in the direction u is on the order n^−1/2 · u^⊤ Γ u. The direction u with the worst error matches the top eigenvector of Γ, and the number of samples necessary to find an accurate approximation of β̄ grows with λ_max(Γ). Reassuringly, typical algorithms such as sample average approximation <cit.> and the stochastic projected gradient method <cit.> match the lower bound (<ref>) and are thus asymptotically optimal.
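The tilt map σ(·) is easy to probe numerically in the unconstrained case. The sketch below (added here; the quartic objective is a toy of our own choosing) approximates the Jacobian ∇σ(0) by finite differences and confirms that it matches the inverse Hessian [∇^2 f(β̄)]^−1.

```python
import numpy as np
from scipy.optimize import minimize

# toy smooth objective with unique minimizer at the origin
A = np.diag([3.0, 1.0, 0.2])
f = lambda b: 0.5 * b @ A @ b + 0.25 * np.sum(b ** 4)

def sigma(v):
    """Tilt map: minimizer of f(b) - <v, b>."""
    res = minimize(lambda b: f(b) - v @ b, np.zeros(3), method="BFGS",
                   options={"gtol": 1e-12})
    return res.x

# finite-difference Jacobian of sigma at v = 0
eps, d = 1e-6, 3
J = np.column_stack([(sigma(eps * e) - sigma(-eps * e)) / (2 * eps)
                     for e in np.eye(d)])

print(np.round(J, 4))                 # ~ diag(1/3, 1, 5) = inverse Hessian at 0
print(np.round(np.linalg.inv(A), 4))  # Hessian of f at 0 is A (quartic term vanishes)
```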
In the rest of the paper, we analyze data distributions 𝒫′, nearest to a fixed measure 𝒫, for which the problem SP(𝒫′) admits unstable minimizers. The formal definition will depend on whether the learning problem is of supervised or unsupervised type. We now describe these two settings in turn. The goal of unsupervised learning is to learn some property of a distribution 𝒫 from finitely many samples z_1, …, z_n iid∼ 𝒫. Dimension reduction with principal component analysis (PCA) is a primary example. In this case, the solution of <ref> strongly depends on the distribution 𝒫. Therefore a natural measure of robustness of the problem is the size of the smallest perturbation in the Wasserstein-2 distance W_2(𝒫′, 𝒫) so that the problem SP(𝒫′) has an unstable minimizer.

Radius of statistical efficiency (unsupervised). Consider the problem <ref> and let 𝒟 ⊂ 𝒫_2(𝒵) be a distinguished set of distributions. Define the set of ill-conditioned distributions as ℰ = {𝒫′ ∈ 𝒟 : there exists a minimizer of SP(𝒫′) that is unstable}. The radius of statistical efficiency of 𝒫 is defined to be RSE(𝒫) := W_2(𝒫, ℰ).

Problems of supervised learning are distinctly different. The data consists of pairs z = (x, y) ∼ 𝒫, where x ∼ 𝒫_x are called feature vectors and y ∼ 𝒫_y|x are the labels. We will assume that the conditional distribution 𝒫_y|x depends on the features and a latent parameter β^⋆. A typical example is the setting of regression under a model y = g(x, β^⋆) + ε, where ε is a noise variable that is independent of x. The goal of the corresponding optimization problem <ref> is to recover β^⋆. In contrast to unsupervised learning, the latent parameter is fixed a priori and is not a function of the data distribution. Therefore a natural measure of robustness of the problem is the size of the smallest perturbation to the feature vectors W_2(𝒫′_x, 𝒫_x) so that the problem SP(𝒫′_x × 𝒫_y|x) has an unstable minimizer. Notice that the conditional distributions of y given x coincide for the two measures 𝒫 and 𝒫′ = 𝒫′_x × 𝒫_y|x. This type of shift, exclusively in the feature data, appears often in the literature under the name of covariate shift <cit.>.

Radius of statistical efficiency (supervised). Consider the problem <ref> with 𝒫 = 𝒫_x × 𝒫_y|x and let β̄ be its minimizer. Let 𝒟 ⊂ 𝒫_2 be a distinguished set of distributions. Define the set of ill-conditioned distributions as ℰ = {𝒫′_x ∈ 𝒟 : β̄ is not a tilt-stable minimizer of SP(𝒫′_x × 𝒫_y|x)}. The radius of statistical efficiency of 𝒫 is defined to be RSE(𝒫) := W_2(𝒫_x, ℰ).

Evidently, the quantity RSE(𝒫) measures the robustness of the problem because it quantifies the size of a neighborhood around 𝒫 within which all problem instances are stable. There is a small nuance in formalizing this statement due to a lack of compactness in the Wasserstein space. Namely, we have to impose the minor assumption that any sequence of measures ν_i for which the problem SP(ν_i) becomes progressively harder (REG(ν_i) → ∞) must approach the set of ill-conditioned distributions (RSE(ν_i) → 0). In all examples we consider, this holds at least on bounded sets 𝒟′ ⊂ 𝒫_2. The proof of this elementary observation appears in Appendix <ref>. Fix a set 𝒟′ ⊂ 𝒟 and suppose that for any sequence of measures ν_i ∈ 𝒟′ ∖ ℰ the implication holds: REG(ν_i) → ∞ ⟹ RSE(ν_i) → 0. Then for any measure μ ∈ 𝒟′ ∖ ℰ and any radius 0 < r < RSE(μ), we have sup_{ν ∈ 𝒟′ : W_2(ν, μ) ≤ r} REG(ν) < +∞. Moreover, if for some c, q > 0 the inequality RSE(ν)^q ≤ c · REG(ν)^−1 holds for all ν ∈ 𝒟′ ∖ ℰ, then the supremum in (<ref>) is upper bounded by c · (RSE(μ) − r)^−1/q.

At first sight, it appears that computing RSE(𝒫) in concrete problems is difficult. Indeed, the set of ill-conditioned distributions ℰ may be quite exotic, and computing RSE(𝒫) amounts to estimating the Wasserstein-2 distance W_2(𝒫, ℰ). In contrast, computing the regularity modulus REG(𝒫) should be relatively straightforward. The key observation now is that the two quantities, RSE(𝒫) and REG(𝒫), are closely related since ℰ is the set of minimizers of the function 𝒥(μ) := 1/REG(μ). Thus it would be ideal if there were a quantitative relationship of the form: (W_2(μ, ℰ))^ℓ_1 ≲ 𝒥(μ) ≲ (W_2(μ, ℰ))^ℓ_2 for all μ ∈ 𝒫_2. The upper bound should be elementary to establish because it amounts to upper bounding the growth of the functional 𝒥(μ). The lower bound is more substantial because it requires lower-bounding the growth of 𝒥(μ) by a nonnegative function of the distance. Such lower estimates are called error bounds in nonlinear analysis and can be checked by various “slope”-based conditions. See for example the authoritative monographs on the subject, <cit.> and <cit.>.
Indeed, Demmel's original work <cit.> relies on verifying an error bound property as well, albeit in the much simpler Euclidean setting. The following theorem provides a sufficient condition (<ref>) ensuring the relationship (<ref>). We state the theorem loosely by compressing all multiplicative numerical constants via the symbol ≲. More precise and sharper guarantees appear in Appendix <ref>. [In the theorem statement, the symbol DF(x): ℝ^d → 𝐒^k denotes the differential of F, while the symbol DF(x)^∗: 𝐒^k → ℝ^d is the adjoint linear map of DF(x).]

(Infinitesimal characterization of RSE) Consider the problem <ref> of supervised learning and let β̄ be its minimizer. Set 𝒟 = 𝒫_2 and suppose that the Hessian ∇^2_ℬ f(β̄) corresponding to a measure μ ∈ 𝒫_2 can be written as ∇^2_ℬ f(β̄) = 𝔼_μ F(x) for some C^1-smooth map F: ℝ^d → 𝐒^k_+ satisfying ‖DF(x)‖_op ≲ 1 + ‖x‖_2 for all x ∈ ℝ^d. Suppose that there exist q_1, q_2 ∈ [0, 1) such that for all measures ν ∈ 𝒫_2 ∖ ℰ, the estimate λ_min(𝔼_ν F(x))^q_1 ≲ √(𝔼_ν ‖DF(x)^∗[uu^⊤]‖^2_2) ≲ λ_min(𝔼_ν F(x))^q_2 holds for some eigenvector u ∈ 𝕊^d−1 of the matrix 𝔼_ν F(x) corresponding to its minimal eigenvalue. Then for every μ ∈ 𝒫_2 the inequality holds: REG(μ)^q_2−1 ≲ RSE(μ) ≲ REG(μ)^q_1−1.

The expression (<ref>) becomes particularly enlightening when F(x) decomposes as F(x) = g(x)g(x)^⊤ for some C^1-smooth map g: ℝ^d → ℝ^k. This situation is typical for regression problems (see Section <ref>). A simple computation then shows that the sufficient condition (<ref>) reduces to (𝔼_ν ⟨u, g(x)⟩^2)^q_1 ≲ √(𝔼_ν ⟨u, g(x)⟩^2 ‖∇g(x)u‖^2_2) ≲ (𝔼_ν ⟨u, g(x)⟩^2)^q_2. Observe that all three terms would match exactly with q_1 = q_2 = 1/2 were it not for the term ‖∇g(x)u‖^2_2 that reweighs the middle integral. It is this reweighing that may impact the values of q_1 and q_2. The salient feature of Theorem <ref> is that it completely circumvents the need for explicitly estimating the distance to the exceptional set ℰ. One could apply this theorem to a number of examples that will appear in the rest of the paper. That being said, in all the upcoming examples we will be able to compute the distance to ℰ explicitly, thereby obtaining sharper estimates than would follow from Theorem <ref>. Nonetheless, we believe that Theorem <ref> is interesting in its own right and may be useful for analyzing RSE in more complex situations.

§ PRINCIPAL COMPONENT ANALYSIS

Principal component analysis (PCA) is a common technique for dimension reduction. The goal of PCA is to find a low-dimensional subspace that captures the majority of the variance of the distribution. In this section, we compute the radius of statistical efficiency for PCA. Setting the stage, let x be a random vector in ℝ^d drawn from a zero-mean distribution 𝒫 ∈ 𝒫_2^∘(ℝ^d). A unit vector v for which the random variable ⟨v, x⟩ has maximal variance is called the first principal component of 𝒫. Thus, the first principal component is the maximizer of the problem max_{v∈𝕊^d−1} 1/2 𝔼_{x∼𝒫} ⟨v, x⟩^2. Equivalently, the first principal component is the eigenvector corresponding to the maximal eigenvalue of the covariance matrix Σ_𝒫 = 𝔼_𝒫 xx^⊤. Intuitively, the problem (<ref>) is more challenging when the gap between the top two eigenvalues of Σ_𝒫 is small. This is the content of the following lemma, whose proof appears in Appendix <ref>. The set of ill-conditioned distributions for (<ref>) is given by ℰ = {μ ∈ 𝒫_2^∘ : λ_1(Σ_μ) = λ_2(Σ_μ)}. Moreover, for any μ ∈ 𝒫_2^∘ ∖ ℰ, the equality REG(μ)^−1 = λ_1(Σ_μ) − λ_2(Σ_μ) holds. Therefore, estimating RSE amounts to computing the Wasserstein-2 distance of a base measure to the set ℰ.
The end result is the following theorem; we defer its proof to Appendix <ref>. (RSE for top principal component) Consider the problem (<ref>) and define the covariance Σ_𝒫 := 𝔼_𝒫[xx^⊤]. Then, equality holds: RSE(𝒫) = 1/√2 ( √(λ_1(Σ_𝒫)) − √(λ_2(Σ_𝒫)) ). In particular, we have RSE(𝒫) · REG(𝒫) = 1/( √2 (√(λ_1(Σ_𝒫)) + √(λ_2(Σ_𝒫))) ). Thus, treating λ_1(Σ_𝒫) in (<ref>) as being of constant order, we see that the hardness of the problem is inversely proportional to the distance to the nearest ill-posed problem, REG(𝒫) ∝ RSE(𝒫)^−1. More generally still, we may be interested in finding a q-dimensional subspace 𝒱 ⊂ ℝ^d that captures most of the variance. Analytically, this amounts to solving the problem max_{R ∈ Gr(q, d)} f(R) = 𝔼_{x∼𝒫} ‖Rx‖^2_2 , where the Grassmannian manifold Gr(q, d) consists of all orthogonal projections R ∈ 𝐒^d onto q-dimensional subspaces of ℝ^d. The column space of the optimal matrix R is called the top q principal subspace. Equivalently, the top q principal subspace is the span of the eigenspaces of Σ_𝒫 corresponding to its top q eigenvalues. The following lemma is a direct extension of Lemma <ref>; see Appendix <ref> for a proof. The set of ill-conditioned distributions for (<ref>) is given by ℰ = {μ ∈ 𝒫_2^∘ : λ_q(Σ_μ) = λ_q+1(Σ_μ)}. Moreover, for any μ ∈ 𝒫_2^∘ ∖ ℰ, the equality REG(μ)^−1 = λ_q(Σ_μ) − λ_q+1(Σ_μ) holds. Thus, estimating RSE amounts to computing the Wasserstein-2 distance of a base measure to the set ℰ. The end result is the following theorem; the proof appears in Appendix <ref>. (RSE for PCA) Consider the problem (<ref>) and define the covariance Σ := 𝔼_𝒫[xx^⊤]. Then, the equality holds: RSE(𝒫) = 1/√2 ( √(λ_q(Σ)) − √(λ_q+1(Σ)) ). In particular, we have RSE(𝒫) · REG(𝒫) = 1/( √2 (√(λ_q(Σ)) + √(λ_q+1(Σ))) ). Thus, similarly to the case q = 1, treating λ_q(Σ) in (<ref>) as being of constant order, the hardness of the problem is inversely proportional to the distance to ill-posed problems, REG(𝒫) ∝ RSE(𝒫)^−1. The proofs of Theorems <ref> and <ref> rely on estimating the distance to the exceptional sets ℰ. Notice that these sets are defined purely in terms of the spectrum of the second-moment matrix. Although such “spectral” sets in 𝒫_2 are quite complicated, their distance can be readily computed. Geometric properties of such sets are explored in Appendix <ref> and may be of independent interest.

§ GENERALIZED LINEAR MODELS AND (QUASI) MAXIMUM LIKELIHOOD ESTIMATION

In this section, we compute the RSE for a large class of supervised learning problems arising from (quasi) maximum likelihood estimation (QMLE). The goal is to estimate a parameter β^⋆ ∈ ℬ, where the constraint set ℬ is a C^2-smooth embedded submanifold of ℝ^d. Setting the stage, suppose that we have an L^2 random vector x (the predictor) and an L^2 random variable y (the response) satisfying the GLM conditions 𝔼[y | x] = h′(⟨x, β^⋆⟩) and Var[y | x] = σ^2 · h″(⟨x, β^⋆⟩) for some known C^2-smooth convex function h: ℝ → ℝ with h″ > 0 and parameter σ^2 > 0. The function h is called the cumulant function of the model (<ref>), and σ^2 the dispersion parameter. Here and from now on, we make the blanket assumption that 𝒟 ⊂ 𝒫_2 is the space of probability measures for which sufficient regularity conditions are met to take expectations and to interchange differentiation and expectation as necessary. More precisely, we assume that the function ϕ: ℬ → ℝ given by ϕ(β) = 𝔼[h(⟨x, β⟩)] is well defined and has a C^2-smooth local extension to a neighborhood of β^⋆ in ℝ^d with ∇ϕ(β^⋆) = 𝔼[h′(⟨x, β^⋆⟩) x] and ∇^2 ϕ(β^⋆) = 𝔼[h″(⟨x, β^⋆⟩) xx^⊤]. Following the seminal work of McCullagh <cit.>, we consider the QMLE problem min_{β ∈ ℬ} f(β) := 𝔼[h(⟨x, β⟩) − y ⟨x, β⟩].
The function f given in (<ref>) is called the negative log quasi-likelihood of the GLM (<ref>). In general f is not the negative of the log-likelihood function, yet it shares many of its properties and hence the name. The motivation for this loss function comes from the canonical example of (<ref>), where the conditional density of y given x admits an exponential-family formulation. In this case, standard maximum likelihood estimation of β^⋆ coincides with (<ref>). As an illustration, Table <ref> lists some common examples of QMLE. Henceforth, we let 𝒯 := T_ℬ(β^⋆) denote the tangent space of ℬ at β^⋆. We begin with the following lemma that characterizes the set of ill-conditioned problem instances. The set of ill-conditioned distributions for (<ref>) is given by ℰ = {μ ∈ 𝒟 : 𝒯 ∩ Null(Σ_μ) ≠ {0}}. Moreover, for any μ ∈ 𝒟 ∖ ℰ for which there exist constants a, b > 0 satisfying a ≤ h″(⟨x, β^⋆⟩) ≤ b for μ-almost every x, we have a ≤ REG(μ) · λ_min(Σ_μ|_𝒯) ≤ b. The proof of this lemma is deferred to Appendix <ref>. The RSE for the problem follows quickly by computing the distance to the set ℰ in Lemma <ref>; see Appendix <ref> for a proof. (RSE for QMLE) Consider a QMLE problem (<ref>) and let 𝒫_x ∈ 𝒟 be the distribution of x with covariance Σ := 𝔼[xx^⊤]. Then, the equality holds: RSE(𝒫) = √(λ_min(Σ|_𝒯)). In particular, if for some a, b > 0 the inequality a ≤ h″(⟨x, β^⋆⟩) ≤ b holds for 𝒫_x-almost every x, then we have a ≤ (RSE(𝒫))^2 · REG(𝒫) ≤ b. Thus we see that under mild conditions, the hardness of the problem is inversely proportional to the square of the distance to the nearest ill-posed problem, REG(𝒫) ∝ RSE(𝒫)^−2. Note that this scaling is different from the one exhibited by PCA in the previous section, REG(𝒫) ∝ RSE(𝒫)^−1. Aside from the examples in Table <ref>, an interesting problem instance occurs in sparse recovery. Namely, let ℬ be the submanifold of ℝ^d comprised of k-sparse vectors, i.e., those which have precisely k nonzero components. Then the tangent space of ℬ at β^⋆ is the k-dimensional subspace of ℝ^d in which β^⋆ is supported: 𝒯 = span{e_i : ⟨e_i, β^⋆⟩ ≠ 0}, where {e_1, …, e_d} denotes the standard basis of ℝ^d. Any regression problem from Table <ref> constrained to ℬ is ill-conditioned precisely when supp(𝒫_x) ⊂ v^⊥ for some v ∈ 𝒯. The right side of (<ref>) is then the square root of the minimum eigenvalue of the submatrix of Σ whose columns and rows are indexed by the nonzero coordinates of β^⋆.
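Continuing the sparse recovery example (an added numerical sketch; the covariance and the support of β^⋆ are synthetic choices), the RSE reduces to the square root of the smallest eigenvalue of the principal submatrix of Σ indexed by the support:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6
beta_star = np.array([1.5, 0.0, -2.0, 0.0, 0.7, 0.0])   # 3-sparse parameter
support = np.flatnonzero(beta_star)

# synthetic feature covariance (positive definite with high probability)
M = rng.standard_normal((d, d))
Sigma = M @ M.T / d

# restriction of Sigma to the tangent space = principal submatrix on the support
Sigma_T = Sigma[np.ix_(support, support)]
rse = np.sqrt(np.linalg.eigvalsh(Sigma_T)[0])
print(f"RSE for the sparse QMLE instance: {rse:.4f}")

# equivalent computation via the compression P Sigma P; the spectrum consists of
# d - k zeros plus the k eigenvalues of the submatrix
P = np.zeros((d, d)); P[support, support] = 1.0
eigs = np.linalg.eigvalsh(P @ Sigma @ P)
print(f"check via compression           : {np.sqrt(eigs[d - len(support)]):.4f}")
```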
Alternatively, one may parameterize rank one matrices as M = ββ^⊤ and then minimize the mean square error over the factors: min_{β ∈ ℝ^d} f(β) := (1/8)·𝔼_{x,y∼μ}(⟨x, β⟩² − y)². From the viewpoint of RSE there is no significant distinction between these two formulations. The proof of the next lemma appears in Appendix <ref>. The set of ill-conditioned distributions for both (<ref>) and (<ref>) is given by ℰ = ⋃_{v∈𝕊^{d−1}} {μ ∈ 𝒫_4(ℝ^d) : supp(μ_x) ⊂ β_⋆^⊥ ∪ v^⊥}. Fix any measure μ ∈ 𝒫_4 ∖ ℰ and define the matrix Σ̂_μ = 𝔼_μ[⟨x, β_⋆⟩² xx^⊤]. Then, the estimates hold: α ≤ REG(μ)·λ_min(Σ̂_μ) ≤ β, where (α, β) = (2‖β_⋆‖², 4‖β_⋆‖²) for problem (<ref>) and (α, β) = (1, 1) for problem (<ref>). It remains to estimate the expression for the distance to the exceptional set ℰ. The following result shows that minimizing the expected squared distance to {β_⋆}^⊥ ∪ {u}^⊥ over u ∈ 𝕊^{d−1} yields the squared W_2 distance to ℰ; its proof appears in Appendix <ref>. [label=pse_generalphase](RSE for phase retrieval) Consider the problems (<ref>) and (<ref>) and define Σ := 𝔼_{μ_x}[xx^⊤]. Then, equality holds: RSE(μ) = min_{v ∈ 𝕊^{d−1}} √(𝔼_{x∼μ_x}[⟨x, β_⋆/‖β_⋆‖⟩² ∧ ⟨x, v⟩²]). Thus, using the reasoning from Lemma <ref> and Theorem <ref> one easily derives the following two estimates for formulation (<ref>): RSE(μ) = min_{v ∈ 𝕊^{d−1}} √(𝔼_{x∼μ_x}[⟨x, β_⋆/‖β_⋆‖⟩² ∧ ⟨x, v⟩²]) and REG(μ)^{-1} = min_{v ∈ 𝕊^{d−1}} 𝔼_{x∼μ_x}[⟨x, β_⋆⟩²·⟨x, v⟩²]. A moment of thought leads one to realize that for reasonable distributions, the first quantity should scale as √(λ_min(Σ)), while the scaling of the second is at least λ_min(Σ). We now verify that this is indeed the case when the base distribution is Gaussian, x ∼ N(0, Σ), for some covariance matrix Σ ≽ 0. To this end, define the two functions h_Σ(u, v) := 𝔼_{x∼𝖭(0,Σ)}[⟨x, u⟩² ∧ ⟨x, v⟩²] and g_Σ(u, v) := 𝔼_{x∼𝖭(0,Σ)}[⟨x, u⟩²·⟨x, v⟩²], where u, v ∈ 𝕊^{d−1} vary over the unit sphere. We defer the proof of the next result to Appendix <ref>. For any u ∈ 𝕊^{d−1}, the following estimates hold: (1 − 2/π)·λ_min(Σ) ≤ min_{v∈𝕊^{d−1}} h_Σ(u, v) ≤ λ_min(Σ) and λ_min(Σ)·⟨Σu, u⟩ ≤ min_{v∈𝕊^{d−1}} g_Σ(u, v) ≤ 3λ_min(Σ)·⟨Σu, u⟩. Combining Lemma <ref>, Theorem <ref>, and Theorem <ref> directly yields the following estimate on RSE with Gaussian initial data. (RSE for phase retrieval with Gaussian data) Consider problems (<ref>) and (<ref>) with Gaussian data μ_x ∼ N(0, Σ). Then, the estimates hold: √((1 − 2/π)·λ_min(Σ)) ≤ RSE(μ) ≤ √(λ_min(Σ)). In particular, the following estimates hold: α·(1 − 2/π)/(3⟨Σβ_⋆, β_⋆⟩) ≤ RSE(μ)²·REG(μ) ≤ β·1/⟨Σβ_⋆, β_⋆⟩, where (α, β) = (2‖β_⋆‖², 4‖β_⋆‖²) for problem (<ref>) and (α, β) = (1, 1) for problem (<ref>). Interestingly, we see a different scaling between RSE(μ) and REG(μ), depending on whether β_⋆ aligns with the bottom eigenspace of Σ. In the regime ⟨Σβ_⋆, β_⋆⟩ ≫ λ_min(Σ), we observe the scaling REG(μ) ∝ RSE(μ)^{-2}, while in the regime ⟨Σβ_⋆, β_⋆⟩ ≈ λ_min(Σ), the scaling is much worse: REG(μ) ∝ RSE(μ)^{-4}. Thus in the latter regime, the distance to the nearest ill-posed problem has a much stronger effect on the hardness of the problem. §.§ Bilinear sensing The problem of bilinear sensing is an asymmetric analogue of phase retrieval; that is, the ground truth matrix M_⋆ = β_{1⋆}β_{2⋆}^⊤ and the measurement data X = x_1x_2^⊤ are rank one d_1 × d_2 matrices, where the factors x_1 ∼ μ_{x_1} and x_2 ∼ μ_{x_2} are independent. The standard way to write this problem as stochastic optimization is to minimize the mean square error over rank one rectangular matrices: min_{M ∈ ℝ^{d_1×d_2} : rank(M) = 1} f(M) := (1/2)·𝔼_μ(x_1^⊤Mx_2 − y)², where μ denotes the joint distribution over (x_1, x_2, y). Throughout, we fix as the set of allowable data distributions all product measures 𝒟 = 𝒫_4(ℝ^{d_1}) × 𝒫_4(ℝ^{d_2}).
We disregard the factorized formulation with M = β_1β_2^⊤ because it results in a continuum of minimizers that are not tilt-stable. This technical difficulty could be circumvented by introducing an additional constraint, such as ‖β_1‖ = 1; however, we do not pursue this approach, to simplify the exposition. The following lemma characterizes the set of ill-conditioned problems; see Appendix <ref> for a proof. For any PSD matrix Σ, the symbol κ(Σ) = λ_max(Σ)/λ_min(Σ) denotes its condition number. The set of ill-conditioned distributions for (<ref>) is given by ℰ = {μ×ν ∈ 𝒟 : either 𝔼_μ[xx^⊤] or 𝔼_ν[xx^⊤] is singular}. Moreover, for any μ×ν ∈ 𝒟 ∖ ℰ, with Σ_1 = 𝔼_μ[x_1x_1^⊤] and Σ_2 = 𝔼_ν[x_2x_2^⊤], we have 2·min{γ_2·λ_min(Σ_1), γ_1·λ_min(Σ_2)}/(κ(Σ_1)κ(Σ_2) + 1) ≤ REG(μ×ν)^{-1} ≤ min{γ_2·λ_min(Σ_1), γ_1·λ_min(Σ_2)}, where γ_i = ⟨Σ_i β_{i⋆}/‖β_{i⋆}‖, β_{i⋆}/‖β_{i⋆}‖⟩ for i = 1, 2. Thus an application of Theorem <ref> immediately yields the following expression for RSE. [label=thm:RSE_bilin](RSE for bilinear sensing) Consider the problem (<ref>) with Σ_1 := 𝔼_{μ_{x_1}}[x_1x_1^⊤] and Σ_2 := 𝔼_{μ_{x_2}}[x_2x_2^⊤]. Then it holds: RSE(μ) = min{√(λ_min(Σ_1)), √(λ_min(Σ_2))}. In particular, the following estimate holds: min{λ_min(Σ_1), λ_min(Σ_2)}/min{γ_2·λ_min(Σ_1), γ_1·λ_min(Σ_2)} ≤ RSE(μ)²·REG(μ) ≤ C·min{λ_min(Σ_1), λ_min(Σ_2)}/min{γ_2·λ_min(Σ_1), γ_1·λ_min(Σ_2)}, where C = (κ(Σ_1)κ(Σ_2) + 1)/2 and γ_i = ⟨Σ_i β_{i⋆}/‖β_{i⋆}‖, β_{i⋆}/‖β_{i⋆}‖⟩ for i = 1, 2. Similar to phase retrieval, the scaling between RSE and REG for bilinear sensing depends on the simultaneous alignment between β_{i⋆} and the least eigenvector of Σ_i for i = 1, 2. In particular, when κ(Σ_1)κ(Σ_2) ≈ 1, the upper and lower bounds match up to a constant, and if, further, λ_min(Σ_1) ≈ λ_min(Σ_2), there are two regimes: (1) when min{γ_1, γ_2} ≫ λ_min(Σ_1), then REG(μ) ∝ RSE(μ)^{-2}, and (2) when min{γ_1, γ_2} ≈ λ_min(Σ_1), then REG(μ) ∝ RSE(μ)^{-4}. §.§ Matrix completion The problem of matrix completion corresponds to (<ref>), where the ground truth matrix has low rank and the data matrices X ∼ μ_x are drawn from some discrete distribution on matrices of the form X = e_ie_j^⊤. We will focus on the simplified setting where M_⋆ is rank one and positive semidefinite. The standard way to write this problem as stochastic optimization is min_{M ∈ ℳ} f(M) := (1/2)·𝔼_μ[(⟨X, M⟩ − y)²], where ℳ = {M ≽ 0 : rank(M) = 1} is the set of rank one PSD matrices. In this section, we compute the RSE of the problem in terms of the graph induced by the support of the distribution μ_x. We begin with a few observations. First, it is straightforward to verify ∇f(M_⋆) = 0 and ∇²f(M_⋆)[Δ, Δ] = 𝔼_μ⟨X, Δ⟩² for all symmetric Δ ∈ 𝕊^d. In particular, the optimal Lagrange multipliers for M_⋆ are zero. Forming the factorization M_⋆ = β_⋆β_⋆^⊤, the tangent space to ℳ at M_⋆ can be written as 𝒯 = {β_⋆v^⊤ + vβ_⋆^⊤ : v ∈ ℝ^d}. Moreover, an elementary computation shows that ‖Δ‖_F²/(‖β_⋆‖²‖v‖²) ∈ [2, 4]. In particular, Δ is zero if and only if v is zero. Consequently, the set of ill-conditioned distributions takes the form: ℰ = ⋃_{v ∈ ℝ^d ∖ {0}} {μ ∈ 𝒫_2(ℝ^{d×d}) : supp(μ) ⊂ (β_⋆v^⊤ + vβ_⋆^⊤)^⊥}. Indeed, estimating REG(μ) at any μ ∈ 𝒫_2(ℝ^{d×d}) ∖ ℰ is straightforward, and is the content of the following lemma; see Appendix <ref> for a proof. Consider any measure μ ∈ 𝒫_2(ℝ^{d×d}) ∖ ℰ and define the matrices Φ_{β_⋆} = (I ⊗ β_⋆) + (β_⋆ ⊗ I) and Σ_μ = 𝔼_μ[vec(X)·vec(X)^⊤]. Then the estimate holds: 2‖β_⋆‖² ≤ REG(μ)·λ_min(Φ_{β_⋆}^⊤ Σ_μ Φ_{β_⋆}) ≤ 4‖β_⋆‖². Observe now that most measures μ ∈ 𝒫_2(ℝ^{d×d}) do not correspond to a matrix completion problem, since they do not even need to be discrete. With this in mind, we now focus on the setting where the set of admissible distributions encodes only matrix completion problems.
To this end, for a matrix completion problem, X is equal to some matrix X = e_ℓe_k^⊤, whose distribution is induced by a random pair (ℓ, k) in [d] × [d] and can be represented by a matrix of probabilities P = (p_ij), where p_ij = ℙ{(ℓ, k) = (i, j)} = ℙ{X_ij = 1} = 𝔼[X_ij]. Since M_⋆ is symmetric, we assume without loss of generality that P is symmetric as well. We denote the set of all such symmetric distributions on [d] × [d] by 𝒬 = {(p_ij) ∈ 𝕊^d : ∑_{ij} p_ij = 1 and p_ij ≥ 0 for all i, j ∈ [d]}. For each Q ∈ 𝒬, we let μ_Q ∈ 𝒫_2(ℝ^{d×d}) be the distribution over canonical matrices e_ie_j^⊤ where (i, j) ∼ Q.[With a slight abuse of notation we use Q to denote both the matrix and the distribution over indices.] Thus, the set of ill-conditioned distributions encoding matrix completion problems is ℰ^mc := {μ_Q : Q ∈ 𝒬} ∩ ℰ, where ℰ is defined in (<ref>). We are now ready to study the Wasserstein distance between μ and ℰ^mc. We will see that this distance relies on the combinatorial structure of the observations. Consider the undirected graph G = (V, E) with vertices V = [d] and edges E = {(i, j) : p_ij > 0}. Thus, E corresponds to the tuples of indices that are “observed” in the problem. Let G^* = (V^*, E^*) be the induced graph given by V^* = supp(β_⋆), meaning that E^* consists of all the edges in E between elements of V^*. Further, define V^0 = {i ∉ V^* : for all j ∈ V^*, (i, j) ∉ E}. Thus, V^0 corresponds to nodes i with β_i = 0 that are not connected to any j for which β_j ≠ 0. We will see shortly that the following assumption characterizes the set of well-posed instances of matrix completion. Consider a graph G = ([d], E) with G^* and V^0 defined in (<ref>) and (<ref>), respectively. Suppose the following two conditions hold. * (Non-bipartite) G^* has no connected components that are bipartite. * (No isolated zeros) The set V^0 is empty. Given a set of edges A ⊂ [d] × [d], we let G_A be the graph induced by A and define Ω_{β_⋆} = {A ⊂ [d] × [d] : G_A does not satisfy Assumption <ref>}. We are now ready to state the main result of this section; see Appendix <ref> for a proof. [label=thm:rse_MC](RSE for matrix completion) Let P = (p_ij) and M_⋆ = β_⋆β_⋆^⊤ be the data of the matrix completion problem (<ref>), and let μ be the distribution of X induced by P. Then, μ is well-posed, i.e., μ ∉ ℰ^mc, if, and only if, the graph G induced by the problem satisfies Assumption <ref>. Additionally, the identity holds: RSE(μ) = min_{A ∈ Ω_{β_⋆}, A ⊂ supp(P)} ∑_{ij ∈ supp(P) ∖ A} p_ij. Moreover, computing RSE(μ) is NP-hard in general. Thus the theorem shows that one can compute RSE(μ) by enumerating over all exceptional edge sets A ⊂ supp(P), meaning that G^*_A either has a connected bipartite component or the corresponding set V^0 is nonempty. Then, the set A for which the mass ∑_{ij ∈ supp(P) ∖ A} p_ij is smallest yields RSE(μ). Interestingly, computing RSE(μ) is NP-hard in general, as we show by a reduction from MAXCUT. § CONCLUSION In this work, we introduced a new measure of robustness—radius of statistical efficiency (RSE)—for problems of statistical inference and estimation. We computed RSE for a number of test-bed problems, including principal component analysis, generalized linear models, phase retrieval, bilinear sensing, and matrix completion. In all cases, we verified a precise reciprocal relationship between RSE and the intrinsic complexity/sensitivity of the problem instance, thereby paralleling the classical Eckart–Young theorem and its numerous extensions in numerical analysis and optimization.
More generally, we obtained sufficient conditions for such a relationship to hold that depend only on local information (gradients, Hessians), rather than an explicit description of the set of ill-conditioned distributions. We believe that this work provides an intriguing new perspective on the interplay between problem difficulty, solution sensitivity, and robustness in statistical inference and learning. §.§ Acknowledgments We thank John Duchi, Jorge Garza-Vargas, Zaid Harchaoui, and Eitan Levin for insightful conversations during the development of this work. § GEOMETRY OF THE WASSERSTEIN SPACE AND DISTANCE ESTIMATION In this section, we introduce the necessary background on Wasserstein geometry and prove a number of results that may be of independent interest. In the following section, we will use many of these results to prove the estimates on RSE announced in the paper. We follow standard notation of optimal transport, as set out for example in the monographs of Villani <cit.> and Santambrogio <cit.>. Let (𝒳, d) be a separable complete metric space equipped with its Borel σ-algebra ℬ_𝒳. The primary example for us will be ℝ^d equipped with the ℓ_p norm. The distance of a point x ∈ 𝒳 to a set 𝒞 ⊂ 𝒳 will be denoted by dist(x, 𝒞) = inf_{x'∈𝒞} d(x, x'). The set of Borel probability measures on 𝒳 is denoted by 𝒫(𝒳), and will be abbreviated as 𝒫 if the space is clear from context. The support of a measure μ ∈ 𝒫, written as supp(μ), is the smallest closed set C ⊂ 𝒳 such that the complement 𝒳 ∖ C is of zero μ-measure. For any measurable map T: (𝒳, ℬ_𝒳) → (𝒴, ℬ_𝒴), the pushforward measure T_#μ is defined to be (T_#μ)(B) = μ(T^{-1}(B)) for all B ∈ ℬ_𝒴. The support of a random variable on 𝒳 is the support of its distribution. For any p ≥ 1, the symbol 𝒫_p denotes the set of all distributions μ on 𝒳 with finite p-th moment, meaning 𝔼_{x∼μ} d(x, x_0)^p < ∞ for some (and hence any) x_0 ∈ 𝒳. The Wasserstein-p distance between two measures μ, ν ∈ 𝒫_p is defined by: W_p(μ, ν) = min_{π∈Π(μ,ν)} (𝔼_{(x,y)∼π} d(x, y)^p)^{1/p}. Here, the set Π(μ, ν) consists of couplings between μ and ν, i.e., distributions in 𝒫_p(𝒳 × 𝒳) having μ and ν as their first and second marginals. An important fact is that the pair (𝒫_p, W_p) is a separable complete metric space in its own right and is called the Wasserstein-p space on 𝒳. We will need a few basic estimates on the W_p distance. First, consider any measures μ, ν ∈ 𝒫_p(𝒳) and a measurable map T: 𝒳 → 𝒳 satisfying ν = T_#μ. Then the law of the random variable (x, T(x)) is a coupling between μ and ν and therefore the estimate holds: W_p^p(μ, ν) ≤ 𝔼_{x∼μ} d(x, T(x))^p. Another useful observation is that for any measures μ, ν ∈ 𝒫_p such that the support of ν is contained in a set C ⊂ 𝒳, the estimate holds: W_p^p(μ, ν) ≥ 𝔼_{x∼μ} dist^p(x, C). Consequently, equality holds in (<ref>) when T is a projection, the content of the following lemma. Consider a measure μ ∈ 𝒫_p and a set 𝒞 ⊂ 𝒳. Suppose that the metric projection onto 𝒞 admits a measurable selection s: 𝒳 → 𝒞. Then equality holds: W_p^p(μ, s_#μ) = 𝔼_{X∼μ} dist(X, 𝒞)^p. We first show that ν := s_#μ lies in 𝒫_p. Indeed, for any x_0 ∈ 𝒞 and x ∈ 𝒳 we have d(s(x), x_0) ≤ d(s(x), x) + d(x, x_0) ≤ 2·d(x, x_0), and therefore ν lies in 𝒫_p. Next, taking into account that the support of ν is contained in 𝒞, combining (<ref>) and (<ref>) completes the proof. For the rest of the section, we will focus exclusively on the setting where 𝒳 is the Euclidean space ℝ^d equipped with the inner product ⟨·, ·⟩. The ℓ_p norm in ℝ^d will be denoted by ‖·‖_p. Finally, we denote the second moment matrix of any measure μ ∈ 𝒫_2 by the symbol Σ_μ := 𝔼_{x∼μ} xx^⊤.
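As a quick numerical illustration of the projection lemma above (a sketch of ours, not part of the formal development), projecting a point cloud onto a subspace realizes the W_2 distance, and the mean squared residual is exactly the second-moment mass carried by the orthogonal complement:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 4
x = rng.normal(size=(n, d)) * np.array([2.0, 1.0, 0.5, 0.1])  # anisotropic cloud

# Closed set C = span(e1, e2); s(x) = metric projection of x onto C.
P = np.diag([1.0, 1.0, 0.0, 0.0])
s_x = x @ P

# By the lemma, W_2^2(mu, s_# mu) = E dist(x, C)^2 for the empirical measure mu.
mean_sq_dist = np.mean(np.sum((x - s_x) ** 2, axis=1))

# The same number, read off the empirical second moment matrix on C^perp.
Sigma = x.T @ x / n
print(mean_sq_dist, Sigma[2, 2] + Sigma[3, 3])  # identical by construction
```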
Given a set of measures ⊂_p, the distance function W_2(·, 𝒱) is defined in the usual way W_2(μ, 𝒱)=inf_ν∈𝒱  W_2(μ,ν). Although estimating the distance function is difficult in general, we will focus on well-structured sets 𝒱 for which W_2(μ, 𝒱) can be readily computed. The following two sections study, respectively, measures constrained either by the location of its support or by the spectrum of its covariance. §.§ Sets of measures constrained by their support Consider a linear subspace ⊆ and a mean zero measure μ∈_p. Recall that the compression Σ_μ|_ of Σ_μ to is the positive semidefinite quadratic form on given by: ⟨Σ_μ v, v ⟩ = _x∼μ[⟨ x, v ⟩^2] for all v∈. We will need the following lemma that provides a convenient interpretation of the trace of Σ_μ|_. Consider a measure μ∈_2 and let be a proper subspace of ^d. Then, the following equalities hold: Σ_μ|_ = _∼μ(,^⊥)^2 = W_2^2(μ, (_^⊥)_#μ). We successively compute Σ_μ|_ = _x∼μ(P_ xx^⊤ P_) = _x∼μP_x^2 = _x∼μ^2(x,^⊥) = W_2^2(μ, (P_^⊥)_#μ), where the last equality follows from Lemma <ref>. In light of (<ref>), the matrix Σ_μ|_ is singular if and only if the inclusion (μ)⊂ v^⊥ holds for some v∈∩𝕊^d-1. Define the set of measures for which Σ_ν|_ is indeed singular: _ = {ν∈𝒫_2 |(ν)⊂ v^⊥ for some  v∈∩𝕊^d-1}. The following theorem, the main result of the section, shows that the W_2 distance to _ is simply the minimal eigenvalue of Σ_μ|_. [label=prop:Sdist](Distance to _) Consider a measure μ∈_2 and a proper linear subspace of ^d. Then, equality holds: W_2^2(μ, _) = λ_min(Σ_μ|_), where _ is defined in (<ref>). Moreover the distance of μ to _ is attained by the measure (_v^⊥)_#μ, where v∈ is an eigenvector of Σ_μ|_ corresponding to its minimal eigenvalue. For any vector v∈∩𝕊^d-1, applying Lemma <ref> with span(v) in place of yields: ⟨Σ_μv, v ⟩= Σ_μ|_ span(v) = W_2^2(μ, (_^⊥)_#μ). Let us now decompose _ into a union of simpler sets _=⋃_v∈∩𝕊^d-1 L_v where L_v:={ν∈_2: (ν)⊂ v^⊥}. We now estimate W_2(μ,L_v)≤ W_2(μ, (_^⊥)_#μ)=_∼μ(x,v^⊥)^2≤ W_2(μ,L_v). where the first inequality holds trivially, the equality follows from Lemma <ref>, and the last inequality follows from (<ref>) with C=v^⊥. Thus equality holds throughout. Using (<ref>), we then conclude W_2(μ, _)=inf_v∈∩𝕊^d-1 W_2(μ,L_v)=inf_v∈∩𝕊^d-1⟨Σ_μv, v ⟩= λ_min(Σ_μ|_), as claimed. It also follows immediately that for any minimal eigenvector of Σ_μ|_, the pushforward measure ν= (_^⊥)_#μ attains the minimal W_2 distance of μ to _. §.§ Spectral sets and functions of measures In this section, we investigate a special class of functions on _2—called spectral—that depend on the measure only through the eigenvalues of its second moment matrix. This function class has close analogues in existing literature in matrix analysis and eigenvalue optimization. We postpone a detailed discussion on the related literature until the end of the section. The following is the key definition. A function F_2→∪{+∞} is called spectral if for any μ,ν∈_2 with the same second moment matrix λ(Σ_μ)=λ(Σ_ν) equality F(μ)=F(ν) holds. A good example to keep in mind is the Schatten norm F(μ)=λ(Σ_μ)_q for any q∈ (0,∞]. Notice that in this example F factors as a composition of the eigenvalue map λ(·) and the permutation-invariant function f(v)=v_q on ^d. Evidently, all spectral functions arise in this way. A function f^d_+→∪{+∞} is called symmetric if equality f(s(x))=f(x) holds for all x∈^d_+ and all permutations of coordinates s(·). 
An elementary observation is that a function F_2→∪{+∞} is spectral if and only if there exists a symmetric function f:^d_+→∪{+∞} satisfying F(μ)=f(λ(Σ_μ)) ∀μ∈_2. Concretely, the symmetric function f can be obtained from F by restricting to Gaussian measures with diagonal covariance f(v)=F(N(0,(v))). The definition of spectral and symmetric functions easily extends to sets through indicator functions. Namely, a set ⊂_2 is spectral if the indicator function δ_ is spectral, while a set G⊂^d_+ is symmetric if the indicator δ_G is symmetric. An interesting example of a spectral set is given by _q:={μ∈_2: λ_q(Σ_μ)= λ_q+1(Σ_μ)}. Although complicated, we may identify it with the symmetric set G_q={v∈^d_+: v^(q)=v^(q+1)}, where v^(i) is the i'th largest coordinate of v. Indeed, equality ={μ∈_2: λ(Σ_μ)∈ G} holds. In this section, we will show that one can express the W_2-distance function to purely in terms of the ℓ_2-distance function to the much simpler set G. Indeed, we will prove the following theorem, which specialized to example (<ref>) yields the expression W_2(μ,_q)=_2(√(λ(Σ_μ)),G_q)=1√(2)(√(λ_q)-√(λ_q+1)). [label=thm:distance_est_spectral](Distance to spectral sets in 𝒫_2) Let G⊂^d_+ be a symmetric set. Define now the set of measures ={ν∈_2: λ(Σ_ν)∈ G}. Then for any μ∈𝒫_2, equality holds: W_2(μ,)=min_v∈ G √(λ(Σ_μ))-√(v)_2. Indeed, we will prove a more general statement that applies to functions, with the distance replaced by the so-called Moreau envelope. We need some further notation to proceed. Let (,) be a metric space and consider a function f→∪{+∞}. Then for any parameter ρ>0, the Moreau envelope and the proximal map of f <cit.>, respectively, are defined as: f_ρ(y) :=inf_y'∈ f(y')+12ρ^2(y,y'), _ρ f(y) :=_y'∈ W f(y')+12ρ^2(y,y'). In particular, if f is an indicator function of a set Q, then f_ρ reduces to the squared distance function to Q, while _ρ f(w) is the nearest point projection. We will be interested in three metric spaces and it is important to keep the metric in mind in all results that follow. * (_2,W_2) The space 𝒫_2(^d) equipped with the Wasserstein-2 distance W_2(·, ·). * (^d_+,W_2) The cone of PSD matrices ^d_+ equipped with the Bures-Wasserstein distance W^2_2(A,B)= A+ B-2(A^1/2BA^1/2). * (^d_+, W_2) The cone of nonnegative vectors ^d_+ equipped with the Hellinger distance W_2(x,y)=√(x)-√(y)_2, where the square root is applied elementwise. Notice that we are abusing notation by using the same symbol W_2 to denote the metric in all three spaces. The reason we are justified in doing so is that the three metric spaces are related by isometric embedding. Namely, the Wasserstein-2 distance between two Gaussian distributions μ= N(0,Σ_μ) and ν= N(0,Σ_ν) coincides with the Bures-Wasserstein distance between their covariance matrices: W_2(μ,ν)=W_2(Σ_μ, Σ_ν). Similarly, the Bures-Wasserstein metric restricted to diagonal PSD matrices is the Hellinger distance: W_2((x), (y))=W_2(x,y). The following is the main result of this section. [label=thm:distance_est_spectral_moreau](Diagonal reduction) Consider a symmetric function f^d_+→∪{+∞} and define the spectral function F_2→∪{+∞} by setting F(μ)=f(λ(Σ_μ)). Then equality holds: F_ρ(μ)=f_ρ(λ(Σ_μ)) ∀μ∈_2(^d). Theorem <ref> follows immediately by applying Theorem <ref> to indicator functions δ_ and δ_G. The rest of the section is devoted to proving Theorem <ref>. §.§.§ Proof of Theorem <ref> We begin with some notation. The symbol O(d) will denote the set of orthogonal d× d matrices. 
The singular values for any matrix A∈^m× n (with m≤ n) in nonincreasing order we written as σ_1(A)≥σ_2(A)…≥σ_m(A). We say that two matrices A and B admit a simultaneous ordered singular-value decomposition (SVD) if there exist matrices U∈ O(m),  V∈ O(n) satisfying U^⊤ A V=(σ(A)) and U^⊤ B V=(σ(B)). The following trace inequality, essentially due to <cit.>, will play a central role in the section. The theorem as stated, along with a proof, may be found in <cit.>. Any two matrices A,B∈^m× n satisfy ⟨σ(A),σ(B)⟩≥⟨ A,B⟩. Moreover, equality holds if and only if A and B admit a simultaneous ordered SVD. The following lemma shows that the Burer-Wasserstein distance can be written in terms of a Procrustes problem <cit.>. For any two matrices A,B∈^d_+ equality holds: W_2(A,B)=min_U∈ O(d)A^1/2-B^1/2U_F We will also need the following variational form of the Burer-Wasserstein distance; see for example <cit.>. For any two matrices A,B∈^d_+ equality holds: W_2^2(A,B)=min_x,y: [xx^⊤]=A, [yy^⊤]=Bx-y^2_2 With these results in place, we are ready to start proving Theorem <ref>. To this end, we will first establish the theorem in the Gaussian setting and then deduce the general case by a reduction. As the first step, we will estimate the W_2-distance of a matrix to an orbit of B under conjugation: 𝒪(B):={VBV^⊤: V∈ O(d)}. For any two matrices A,B ∈^d_+ it holds: W_2(A,𝒪(B))=√(λ(A))-√(λ(B))_2. Moreover, the set of nearest points of 𝒪(B) to A is given by {U(λ(B))U^⊤: A=U(λ(A))U^⊤,  U∈ O(d)}. To see the inequality ≤, consider an eigenvalue decomposition A=U(λ(A))U^⊤ for some U∈ O(d). Then we have W_2(A,𝒪(B))≤ W_2(U(λ(A))U^⊤,U(λ(B))U^⊤)= W_2(λ(A),λ(B)), as claimed. Next, we show the reverse inequality ≥. To this end, set A̅:=A^1/2 and B̅:=B^1/2. Then for any V∈ O(d), we successively compute W_2(A,VBV^⊤) = inf_U∈ O(d)A̅-(VB̅V^⊤) U_F^2 =A̅^2+B̅^2-2sup_U∈ O(d)⟨A̅,VB̅ (V^⊤ U)⟩ =A̅^2+B̅^2-2sup_Z∈ O(d)⟨A̅,VB̅ Z^⊤⟩ ≥A̅^2+B̅^2-2 ⟨σ(A̅),σ(B̅)⟩ =λ(A̅)^2+λ(B̅)^2-2 ⟨λ(A̅),λ(B̅)⟩ =λ(A̅)-λ(B̅)^2_2 =√(λ(A))-√(λ(B))^2_2, where (<ref>) follows from Lemma <ref>, the estimate (<ref>) follows from expanding the Frobenius norm, (<ref>) uses the variable substitution Z=V^⊤ U, the estimate (<ref>) follows from von Neumann's trace inequality (Lemma <ref>), and (<ref>) uses the fact that eigenvalues and singular values coincide for PSD matrices. Taking the infimum over V∈ O(d) shows the claimed inequality ≥ in (<ref>). Next, the fact that any matrix in (<ref>) is a nearest point of 𝒪(B) to A follows directly from the expression (<ref>). To see the converse, observe that VB V∈ O(B) is the closest point to A if and only if the chain of inequalities (<ref>)-(<ref>) holds as equalities. Since the only inequality appears in (<ref>), applying Lemma <ref> we see that equality holds if and only if there exist matrices M_1,M_2, Z∈ O(d) such that A̅=M_1(λ(A̅))M_2^⊤ and VB̅ Z^⊤= M_1(λ(B̅))M_2^⊤. In particular, multiplying each equation by its transpose yields the expressions A=M_1(λ( A))M_1^⊤ and V BV^⊤ =M_1(λ(B)) M_1^⊤, thereby concluding the proof. We are now ready to complete the proof of Theorem <ref> in the Gaussian setting. In the proof, we will use the basic fact that for any vectors v,w∈^d, the inequality holds: v^↑-w^↑_2≤v-w_2, where v^↑ and w^↑ are the vectors obtained by permuting the coordinates of v and w to be nonincreasing. Consider a symmetric function f^d_+→∪{+∞} and define the function on PSD matrices F^d_+→∪{+∞} by setting F(A)=f(λ(A)). Then, equality holds: F_ρ(A)=f_ρ(λ(A)) ∀ A ∈^d_+. 
Moreover, the following expression holds: _ρ F(A)={U(v)U^⊤: v∈_ρ f(λ(A)), A=U(λ(A))U^⊤,  U∈ O(d)}. For any matrix A∈^d_+, we successively compute F_ρ(A) =inf_B≽ 0  F(A)+12ρ W^2_2(A,B) =inf_v≥ 0inf_B∈𝒪((v))  f(λ(B))+12ρ W^2_2(A,B) =inf_v≥ 0  f(v)+12ρinf_B∈𝒪((v)) W^2_2(A,B) =inf_v≥ 0  f(v)+12ρ W^2_2(A,𝒪((v))) =inf_v≥ 0  f(v)+12ρ√(λ(A))-√(v^↑)^2_2 =inf_v≥ 0  f(v)+12ρ W^2_2(λ(A),v) =f_ρ(λ(A)), where (<ref>) follows from Lemma <ref> and (<ref>) follows from (<ref>). Next, observe from the chain of equalities that B=U(v)U^⊤ lies in _ρ F(A) for some U∈ O(d) if, and only if, v lies in _ρ f(λ(A)) and equality d_2(A,B)=d_2(A,𝒪((v))) holds. Appealing to Lemma <ref>, this equality holds if and only if we may write A=U(λ(A))U^⊤. Thus the proof is complete. We now move on to establishing Theorem <ref> in full generality, i.e. outside the Gaussian setting. We begin by extending Lemma <ref>. Fix a matrix B∈^d_+ and define the set of measures ℳ:={ν∈_2: Σ_ν∈𝒪(B)}. Then any zero-mean measure μ∈_2 satisfies: W_2^2(μ, ℳ)=W_2^2(Σ_μ, 𝒪(B)). We suppose first that Σ_μ is positive definite. Observe now W_2(μ, ℳ) =inf_π∈Π(μ,ν), ν∈ℳ_πx-y^2_2 ≥inf_[xx^⊤]=Σ_μ,  [yy^⊤]∈𝒪(B)x-y^2_2=d_2(Σ_μ,𝒪(B)), where the inequality follows from Lemma <ref>. To see the reverse inequality, consider an eigenvalue decomposition Σ_μ=V(λ(Σ_μ)) V^⊤ for some V∈ O(d). Define now the matrix B̅:=V Diag(λ(B))V^⊤ and set T:=B̅^1/2Σ_μ^-1/2. Then, clearly the measure ν:=T_#μ satisfies _z∼ν[zz^⊤]=T _μ [xx^⊤] T^⊤=T Σ_μ T^⊤=B̅ and therefore T_#μ lies in . Thus, we conclude W_2^2(μ,ℳ)≤ W_2^2(μ,ν) =_μx-Tx^2_2 =_μx^2-2_μ(Txx^⊤)+_μ(T^⊤ T xx^⊤) =(Σ_μ)-2(B̅^1/2Σ_μ^1/2)+(B̅) =∑_i=1^d (√(λ_i(Σ_μ))-√(λ_i(B)))^2, Using Lemma <ref> yields the claimed expression (<ref>). Finally, if Σ_μ is not positive definite, then μ is almost surely supported on some subspace ℒ. We now define the perturbed distribution μ_t=μ× g_t where g_t is a zero-mean Gaussian distribution supported on ℒ^⊥ with covariance t· I. Then, by Lemma <ref>, W_2(μ_t, μ) = W_2(μ_t, (_ℒ)_#μ_t) → 0, which in turn implies _μ_txx^⊤→_μxx^⊤. From (<ref>) we have W_2^2(μ_t, ℳ)=W_2^2(_μ_txx^⊤, 𝒪(B)). Letting t go to zero, we deduce the desired equality W_2^2(μ, ℳ)=W_2^2(Σ_μ, 𝒪(B)). The proof of Theorem <ref> now proceeds in exactly the same way as that of Theorem <ref>, with Lemma <ref> being used instead of Lemma <ref>. The argument is essentially the same as in Theorem <ref>. We detail it here for completeness. Define the matrix A:=Σ_μ. We successively compute F_ρ(μ) =inf_ν∈_2  F(ν)+12ρ W^2_2(μ,ν) =inf_u≥ 0inf_ν: Σ_ν∈𝒪((u))  f(u)+12ρ W^2_2(μ,ν) =inf_u≥ 0  f(u)+12ρinf_ν: Σ_ν∈𝒪((u)) W^2_2(μ,ν) =inf_u≥ 0  f(u)+12ρ W_2^2(A,𝒪((u))) =inf_v≥ 0  f(v)+12ρ√(λ(A))-√(u^↑)^2_2 =inf_u≥ 0  f(u)+12ρ W_2^2(λ(A),u) =f_ρ(λ(A)), where (<ref>) follows from Lemma <ref>, the estimate (<ref>) follows from Lemma <ref>, and (<ref>) follows from (<ref>). This completes the proof. Connection to existing literature. The results presented in this section have close analogues in the existing literature in matrix analysis and optimization. Namely a function F^d→∪{+∞} is called orthogonally invariant (or spectral) if the equality holds: F(UXU^⊤)=F(X) ∀ X∈^d, U∈ O(d). Evidently such functions are fully described by their restriction to diagonal matrices. More precisely, a function F is orthogonally invariant if and only if there exists a symmetric function f^d→∪{+∞} satisfying F(X)=f(λ(X)). 
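As a small concrete instance of this correspondence (our sketch; the spectral ℓ1 example and the helper names are our own, chosen because the vector prox here is the classical soft-thresholding map), the proximal map of a spectral function can be evaluated by applying the symmetric function's prox to the eigenvalues:

```python
import numpy as np

def soft_threshold(v, rho):
    # Prox of the symmetric function f(v) = ||v||_1 with parameter rho.
    return np.sign(v) * np.maximum(np.abs(v) - rho, 0.0)

def prox_spectral_l1(X, rho):
    # Prox of F(X) = f(lambda(X)) = sum_i |lambda_i(X)|: act on the eigenvalues.
    lam, U = np.linalg.eigh(X)
    return U @ np.diag(soft_threshold(lam, rho)) @ U.T

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
X = (A + A.T) / 2
rho = 0.5
Y = prox_spectral_l1(X, rho)

def moreau_obj(Z):
    # Moreau objective F(Z) + ||X - Z||_F^2 / (2 rho).
    return np.abs(np.linalg.eigvalsh(Z)).sum() + np.sum((X - Z) ** 2) / (2 * rho)

# Y is the global minimizer, so no perturbation of it should do better.
trials = [moreau_obj(Y + 0.1 * (B + B.T) / 2) for B in rng.normal(size=(200, 4, 4))]
print(moreau_obj(Y) <= min(trials))  # True
```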
A pervasive theme in the study of such functions is that various variational properties of the permutation-invariant function f are inherited by the induced spectral function F=f∘λ; see e.g. <cit.>. For example, f convex if and only if F is convex <cit.>, f is C^p-smooth if and only of F is C^p-smooth <cit.>, and so forth. A useful result in this area is the expression for the Moreau envelope obtained in <cit.> F_ρ(X)=f_ρ(λ(X)) ∀ X∈^d. For example, as explained in <cit.>, it readily yields expressions relating generalized derivatives of f and F. Crucially, in (<ref>) the Moreau envelope F_ρ is computed with respect to the Frobenius norm on ^d and the Moreau envelope f_ρ is computed with respect to the ℓ_2 norm on ^d. Thus the results presented in the section extend this circle of ideas to functions defined on the Wasserstein space. § PROOFS FROM SECTION <REF> §.§ Proof of Proposition <ref> Suppose otherwise that there exists a sequence ν_i∈' with W_2(ν_i,μ)≤ r satisfying REG(ν_i)→∞. Then from (<ref>) we deduce W_2(ν_i,)= RSE(ν_i)→ 0. Subsequently, using the triangle inequality yields RSE(μ)=W_2(μ,)≤ W_2(ν_i,μ)+W_2(ν_i,). Letting i tend to infinity, we deduce W_2(μ,)≤ r, which is a contradiction. Thus no such sequence ν_i exists and (<ref>) holds for some M>0. Suppose now that for some q>0, the inequality c≥ REG(ν)· RSE(ν)^q holds. Then, similarly as above, the triangle inequality for any ν∈'∖ yields RSE(μ)=W_2(μ,)≤ W_2(ν,μ)+W_2(ν,)≤ r+ RSE(ν)≤ r+(c/ REG(ν))^1/q. Rearranging yields the estimate REG(ν)≤ c( RSE(μ)-r)^-1/q, thereby completing the proof. §.§ Proof of Theorem <ref> In this section, we prove Theorem <ref>, or rather a stronger version thereof. To this end, we fix a C^1-smooth map F^d→^k_+ satisfying that it is integrable with respect to any measure μ∈_2. We define the function 𝒥_2→ by setting 𝒥(μ)=λ_min (_μF(x)). The differential of F will be denoted by DF(x)^d→^k, while the symbol DF(x)^*^k→^d will denote the adjoint linear map of DF(x). We further assume that there exists a constant L >0 such that DF(x)_ op≤ L(1+x) for all x. In order to simplify notation, for any matrix A∈^k, we let E_k(A) denote the set of all unit eigenvectors of A corresponding to the minimal eigenvalue λ_min(A). We will also use the elementary fact that λ_min is a concave function on ^k and its supdifferential at any matrix A is the set ∂λ_min(A)={uu^⊤: u∈ E_k(A)}. See for example <cit.>. Abusing notation, we will set E_k(μ):=E_k(_μ F(x)) for any measure μ∈_2. The proof of Theorem <ref> will be subdivided into two parts, corresponding to the two inequalities in (<ref>). We begin by establishing the first inequality REG(μ)^q_2-1≲ RSE(μ). The proof amounts to simply applying the fundamental theorem of calculus to the function 𝒥 along a geodesic μ_t joining a measure μ to its nearest point in . This is the content of the following theorem. Fix a measure μ∈_2, constants c,ε>0, and a power q∈ [0,1). Suppose that for all measures ν satisfying W_2(ν,[𝒥=0])≤ W_2(μ,[𝒥=0])+ε, the estimate holds: min_u∈ E_k(ν) _νDF(x)^*[uu^⊤]^2_2≤ c·𝒥(ν)^2q. Then, the inequality holds: W_2(μ, [𝒥= 0])≥1(1-q)√(c)·𝒥(μ)^1-q. Fix a measure ν∈ [=0] satisfying W_2(μ,ν)≤ W_2(μ,[=0])+ε and let π∈Π(ν,μ) be an optimal transport plan between ν and μ. Define the functions π_t(x,y)=(1-t)x+t y. Then the curve μ_t=(π_t)_#μ is a constant speed geodesic between μ and ν <cit.>: W_2(μ_t,μ_s)=|t-s|· W_2(μ,ν) ∀ t,s∈ [0,1]. Define the curve of matrices γ(t)=_μ_t F(x). 
We would like to now compute γ̇(t) by exchanging differentiation and integration in the expression: γ̇(t)=d/dt_(x,y)∼π F((1-t)x+ty). To this end, we bound the derivative of the integrand: DF((1-t)x+ty)[y-x]_ op≤ L(1+x_2+y_2)y-x_2. Applying Hölder's inequality, we see that the right side is π-integrable. Therefore, exchanging integration and differentiation in (<ref>) yields the expression γ̇(t)=_(x,y)∼π DF((1-t)x+ty)[y-x]. It is straightforward to see that γ is absolutely continuous and therefore using the subdifferential chain rule for concave functions <cit.>, we deduce that for almost every t∈ (0,1) we have d/dt𝒥(μ_t)=d/dt(λ_min∘γ)(t)=⟨ U_t, γ̇(t)⟩ ∀ U_t∈∂λ_min(γ(t)). In particular, for each such t we may choose U_t satisfying the running assumption _μ_tDF(x)^*[U_t]^2_2≤ c·𝒥(μ_t)^2q. Continuing with (<ref>), we successively compute d/dt𝒥(μ_t) =_(x,y)∼π⟨ DF((1-t)x+ty)^*[U_t],y-x⟩ ≤_(x,y)∼πDF((1-t)x+ty)^*[U_t]_2·y-x_2 ≤√(_(x,y)∼πDF((1-t)x+ty)^*[U_t]^2_2)·√(_(x,y)∼πy-x^2) ≤√(c)· W_2(μ,ν)·𝒥(μ_t)^q, where (<ref>) follows from Hölder's inequality. Raising 𝒥(μ_t) to power 1-q, we deduce d/dt𝒥(μ_t)^1-q≤√(c)(1-q)· W_2(μ,ν) for a.e. t∈ (0,1). Integrating both sides from t=0 to t=1, we conclude 𝒥(μ)^1-q-𝒥(ν)^1-q≤√(c)(1-q)· W_2(μ,ν). Taking into account the equality 𝒥(ν)=0 and the estimate W_2(μ,ν)≤ W_2(μ,[𝒥=0])+ε, we may now let ε tend to zero 0 thereby completing the proof. Next, we pass to the reverse inequality RSE(μ)≲ REG(μ)^q_1-1, which is a more substantive conclusion. The main tool we will use is the characterization of an “error bound property” using the slope. In what follows, for any real number r, the symbol r_+=max{0,r} denotes its positive part. Consider a function f→∪{+∞} defined on a metric space (,). The slope of f at any point x with f(x) finite is defined by |∇ f|(x)=lim sup_x'→ x(f(x)-f(x'))_+/(x,x'). Importantly, if a slope is large on a neighborhood, then the function must decrease significantly. This is the content of the following theorem; see <cit.>) or <cit.>. Consider a lower semicontinuous function f→∪{+∞} on a complete metric space (,). Fix a point x with f(x) finite, and suppose that there are constants α<f(x) and r, κ>0 so that the implication holds: α<f(u)≤ f(x) and (u,x)≤ r ⟹ |∇ f|(u)≥κ. If in addition the inequality f(x)-α< κ r is valid, then the estimate holds: (x,[f≤α])≤κ^-1(f(x)-α). We will apply this theorem to the function (μ)^1-q. The key step therefore is to compute the slope of . This is the content of the following lemma. Suppose that F satisfies DF(x)_ op≤ L(1+x_2) for all x∈^d, where L is some constant. Then, for any measure μ∈_2, the estimate holds: |∇𝒥|(μ)≥sup_u∈ E_k(μ)√(_μDF(x)^*[uu^⊤]^2_2). We begin by writing 𝒥(μ)=(λ_min∘ G)(μ), where we define the map G(μ):=_μ F(x). Next, fix a measure μ∈_2 and a matrix U∈∂λ_min(G(μ)), and define the transport map T(x)= x-DF(x)^*[U]. Clearly, we may assume that DF(x)^*[U] is not μ-almost surely zero, since otherwise the theorem holds trivially for U=uu^⊤. Observe that I-T is square μ-integrable since _μx-T(x)_2^2=_μDF(x)^*[U]^2_2≤ L^2·U_F^2·_μ (1+x_2)^2<∞. Define now the curve γ[0,1)→_2 by setting γ(t)=(I+t(T-I))_#μ. Note from (<ref>), we have W^2_2(γ(t),γ(0))≤ t^2_μx-T(x)^2_2. Next, from concavity of λ_min we deduce 𝒥(γ(t))-𝒥(γ(0))≤⟨ U,(G∘γ)(t)-(G∘γ)(0) ⟩. We would like to compute d/dt⟨ U,G∘γ(t)⟩ by exchanging integration/differentiation in the expression: d/dt⟨ U,G∘γ(t)⟩=d/dt_μ⟨ U, F(x+t(T(x)-x))⟩. 
To this end, we bound the derivative of the integrand uniformly in t: |⟨ U, DF(x+t(T(x)-x))[T(x)-x]⟩| ≤DF(x+t(T(x)-x))[T(x)-x]_2 ≤ L(1+x+t(T(x)-x)_2·T(x)-x_2) ≤ L(1+x+tT(x)-x_2)T(x)-x_2 ≤ LT(x)-x_2+L/2x_2^2+L+2/2x-T(x)^2_2. Clearly, the right-side is μ-integrable and therefore by the dominated convergence theorem, we may exchange integration and differentiation in (<ref>) yielding: d/dt⟨ U,G∘γ(t)⟩|_t=0=_μ⟨ DF(x)^*[U],T(x)-x⟩= -_μT(x)-x^2_2. In particular, we deduce (𝒥∘γ)(t)<(𝒥∘γ)(0) for all small t>0. Therefore, dividing (<ref>) by W_2(γ(t),γ(0)) and taking the limit as t→ 0 yields |∇𝒥|(μ) ≥lim_t→ 0𝒥(γ(0))-𝒥(γ(t))/W_2(γ(0),γ(t)) ≥lim_t→ 0𝒥(γ(0))-𝒥(γ(t))/ t√(_μx-T(x)^2_2) = ⟨ V,lim_t→ 0(G∘γ)(0)-(G∘γ)(t)/t√(_μx-T(x)^2_2)⟩ =√(_μT(x)-x^2_2)=√(_μDF(x)^*[U]^2_2), where the estimate (<ref>) follows from (<ref>) and the estimate (<ref>) follows from (<ref>). Finally, combining the decrease principle (Theorem <ref>) and the estimate on the slope of (Lemma <ref>) we arrive at the main result. Suppose that F^d→^d_+ satisfies DF(x)_ op≤ L(1+x_2) for all x∈^d, where L is some constant. Fix a constant c>0, a radius r>0, and a power q∈ [0,1). Suppose that for all measures ν∈ [0<𝒥≤𝒥(μ)]∩𝔹_2(μ; r), the estimate holds: sup_u∈ E_k(ν)_νDF(x)^*[uu^⊤]^2_2≥ c·λ_min(_νF(x))^2q. Then, the inequality holds: W_2(μ, [𝒥= 0])≤1(1-q)√(c)·𝒥(μ)^1-q, as long as r is large enough so that 𝒥(μ)^1-q< (1-q)r√(c). It follows immediately from <cit.> and continuity of λ_min(·) that the function 𝒥 is continuous. Define the function 𝒢(ν):=𝒥(ν)^1-q and note that the standard chain rule implies | ∇𝒢|(ν)=(1-q)·| ∇𝒥|(ν)/𝒥(ν)^q≥ (1-q)√(c), whenever 0<𝒢(ν)≤𝒢(μ) and W_2(ν,μ)≤ r. Applying Theorem <ref> to 𝒢 with α=0 and κ=(1-q)√(c) completes the proof. Theorem <ref> follows immediately from Theorems <ref> and <ref>. § PROOFS FROM SECTION <REF> §.§ Proof of Lemma <ref> We first argue the inclusion ⊃. Observe that for any measure μ∈_2^∘ with λ_1(Σ_μ)=λ_2(Σ_μ), the set of maximizers of (<ref>) is the intersection of a sphere and the top eigenspace of Σ_μ. Since none of the maximizers are isolated, they are unstable and therefore μ lies in . To see the reverse inclusion ⊂, fix a measure μ∈_2^∘ and suppose that the top two eigenvalues λ_1 and λ_2 of Σ_μ are distinct. Then the normalized top eigenvector v of Σ_μ is the unique maximizer of (<ref>). It remains to verify that v is a tilt-stable maximizer of (<ref>). To this end, define the Lagrangian function ℒ(v,λ)=-1/2_μ⟨ x,v⟩^2+λ/2 (v^2-1). Then equalities hold: ∇_v ℒ(v,λ)=(λ I-Σ_μ)v and ∇^2_vvℒ(v,λ)=λ I-Σ_μ. In particular, the first equation shows that the optimal Lagrange multiplier λ is λ_1. Let V∈^d× (d-1) be a matrix with an eigenbasis for v^⊥ as its columns. Then an elementary computation yields min_y∈𝕊^d-2⟨∇^2_vvℒ(v,λ)(Vy),(Vy)⟩=min_y∈𝕊^d-2∑_i=2^d (λ_1-λ_i) y_i^2=λ_1-λ_2>0, and therefore v is a tilt-stable maximizer of (<ref>). Moreover, the claimed equality REG (μ)^-1=λ_1(Σ_μ)-λ_2(Σ_μ) follows directly from (<ref>), thereby completing the proof. §.§ Proof of Theorem <ref> This follows directly by applying Theorem <ref> with = from Lemma <ref> and G={v∈^d_+: v^(1)=v^(2)}, where v^(i) denotes the i'th largest coordinate value of v. §.§ Proof of Lemma <ref> Henceforth, fix a measure μ∈_2^∘ and define the shorthand λ:=λ(Σ_μ). Let's dispense first with the simple direction ⊃. To this end, suppose that equality λ_q=λ_q+1 holds. 
Then we may form two sets of orthonormal bases U:={u_1,…,u_q} and U'={u_1,…, u_q-1,u_q'} with ⟨ u_q, u_q'⟩ =0, and which are contained in the span of the eigenspaces corresponding to the top q eigenvalues. We may further interpolate between the two bases with U_t={u_1,…,tu_q+(1-t)u_q'} for t∈ (0,1). The orthogonal projections onto the span of U_t furnish a path of optimal solutions, which are therefore not tilt-stable. Thus μ lies in ℰ, as claimed. We now establish the reverse inclusion ⊂. Suppose therefore that λ_q and λ_q+1 are distinct. We begin by conveniently parameterizing the Grassmannian manifold Gr(q, d) as follows. Define the matrix A:=[ I_q 0; 0 0 ]. Then using <cit.> we may write Gr(q, d) as the orbit of A under conjugation by orthogonal matrices: Gr(q, d)= {UAU^⊤: U∈ O(d)}. Fix a skew symmetric matrix W:=[ W_1 W_2; -W_2^⊤ W_4 ] and define the curve γ→ Gr(q, d) by γ(t)=exp(-tW)· A·exp(tW). Differentiating the curve yields the expression γ̇(0)=AW-WA= [ 0 W_2; W_2^⊤ 0 ]. Moreover, <cit.> shows that varying W among all skew-symmetric matrices yields the entire tangent space T_ Gr(q, d)(A)={[ 0 B; B^⊤ 0 ]: B∈^q× (d-q)}. Now without loss of generality, we may assume that Σ_μ is diagonal, that is Σ_μ= Diag(λ). Then clearly, R=A is the unique maximizer of the problem (<ref>). We now perform the second order expansion (f∘γ)(t)=⟨γ(t), Σ_μ⟩=f(A)+t⟨ AW-WA,Σ_μ⟩_=0+t^2⟨1/2(AW^2+W^2A)-WAW, Σ_μ⟩+O(t^3). In particular, we deduce (f∘γ)”(0)=⟨1/2(AW^2+W^2A)-WAW, Σ_μ⟩. Taking into account the definition of A, a quick computation show 1/2(AW^2+W^2A)-WAW =[ W_1^2-W_2W_2^⊤ 1/2(W_2W_4+W_1W_2); -1/2(W_2W_4+W_1W_2)^⊤ 0 ]- [ W_1^2 W_1 W_2; -W_2^⊤ W_1 -W_2^⊤ W_2 ] =[ -W_2W_2^⊤ 1/2(W_2W_4-W_1W_2); -1/2(W_2W_4-W_1W_2)^⊤ W_2^⊤ W_2 ]. Taking the trace product with the diagonal matrix Σ_μ yields (f∘γ)”(0) =-⟨ W_2W_2^⊤, Diag(λ_1:q)⟩+⟨ W_2^⊤ W_2, Diag(λ_q+1:d)⟩ ≤ -⟨ W_2W_2^⊤,λ_q I_q⟩+⟨ W_2^⊤ W_2,λ_q+1 I_d-q⟩ =-(λ_q-λ_q+1)·W_2^2_F In particular, the covariant Hessian ∇^2_ f(A) is negative definite on the tangent space T_ Gr(q, d)(A). Therefore, A is a tilt-stable maximizer of the problem, as we had to show. Moreover, setting W_2=(w) with w_q=w_q+1=1 and w_i=0 for i∉{q,q+1}, yields equality in (<ref>). Therefore we deduce REG (μ)^-1=λ_q(Σ_μ)-λ_q+1(Σ_μ) as claimed. §.§ Proof of Theorem <ref> This follows directly by applying Theorem <ref> with = from Lemma <ref> and G={v∈^d_+: v^(q)=v^(q+1)}, where v^(i) denotes the i'th largest coordinate value of v. § PROOFS FROM SECTION <REF> §.§ Proof of Lemma <ref> For any μ∈, observe ∇ f(β) = [ ( h'(, β) - y)] = [ [ h'(, β) - y|]] = [ (h'(, β) - h'(, ))x] Therefore, equality ∇ f()=0 holds for any distribution of . Hence, is critical for the problem with zero Lagrange multipliers λ=0. Differentiating again yields the expression for the Hessian H:=∇^2 f()=σ^-2·[h”(⟨, β_⋆⟩) ^⊤]. Note that H is positive semidefinite since h”> 0. Consequently, the set of ill-conditioned distributions corresponds to those distributions on for which (H) nontrivially intersects 𝒯. Clearly, a vector v lies in (H) if and only if 0=⟨ Hv,v⟩, or equivalently 0 = [h”(, )⟨, v ⟩^2]. Taking into account the assumption h”>0, this occurs precisely when v lies in the nullspace of Σ_μ. Thus consists of all measures μ∈ satisfying 𝒯∩(Σ_μ)≠{0}. Finally, it follows directly from (<ref>) that if for some α,β>0 the inequality α≤ h”(⟨ x,β_⋆⟩)≤β holds for μ-almost every x, then λ_min(Σ_μ|_)∈λ_min(Σ_μ|_)· [α,β], thereby completing the proof. 
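For the QMLE results, the quantity λ_min(Σ_μ|_𝒯) just analyzed is straightforward to compute numerically. The following sketch of ours (the helper name rse_qmle is illustrative) evaluates √(λ_min(Σ|_𝒯)) from an orthonormal basis of 𝒯 and checks that, in the sparse-recovery example, it reduces to the minimum eigenvalue of a principal submatrix of Σ:

```python
import numpy as np

def rse_qmle(Sigma, V):
    # sqrt(lambda_min(Sigma|_T)) for a tangent space T whose orthonormal basis
    # forms the columns of V; by the RSE-for-QMLE theorem this equals RSE(mu).
    return np.sqrt(np.linalg.eigvalsh(V.T @ Sigma @ V).min())

rng = np.random.default_rng(3)
d = 6
A = rng.normal(size=(d, d))
Sigma = A @ A.T / d                      # a generic covariance matrix

support = [0, 2, 5]                      # nonzero coordinates of beta*
V = np.eye(d)[:, support]                # orthonormal basis of T = span{e_i}

sub = Sigma[np.ix_(support, support)]    # principal submatrix indexed by the support
print(rse_qmle(Sigma, V), np.sqrt(np.linalg.eigvalsh(sub).min()))  # equal values
```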
§.§ Proof of Theorem <ref> This follows directly from Lemma <ref> and Theorem <ref>. § PROOFS FROM SECTION <REF> §.§ Proof of Lemma <ref> We begin by verifying the claim for the formulation (<ref>). To this end, a quick computation shows ∇ f(β_⋆)=0 and ∇^2 f(β_⋆) = _μ[x, ^2xx^⊤]. Therefore we deduce λ_min(∇^2 f(β_⋆))=min_v∈𝕊^d-1_μ⟨ x,β_⋆⟩^2⟨ x,v⟩^2. In particular, ∇^2 f(β_⋆) if and only if the support of μ is contained in ^⊥∪ v^⊥ almost surely. Next, we verify the claim for the formulation (<ref>). To this end, let denote the set of symmetric PSD rank one matrices: ={M∈^d_+: (M)=1}. A quick computation now yields ∇ f(M)= ⟨ M-,xx^⊤⟩ and ∇^2 f()[Δ,Δ]=⟨Δ ,xx^⊤⟩^2. In particular, equality ∇ f()=0 holds and therefore the optimal Lagrange multipliers λ_⋆ are zero. Hence the Hessian of the Lagrangian ∇^2 ℒ(,λ_⋆) coincides with ∇^2 f(). Classically, if we form the factorization =^⊤, then the tangent space to at can be written as 𝒯={ v^⊤ +v ^⊤: v∈^d}. Consequently, for any Δ= v^⊤ +v ^⊤ we compute ∇^2 f()[Δ,Δ]=⟨Δ ,xx^⊤⟩^2=4⟨,x⟩^2⟨ v,x⟩^2. Note that Δ_F^2= 2(^⊤ v)^2+2^2v^2 and therefore Δ_F^2/^2v^2∈ [2,4]. Therefore Δ is nonzero as long as w is nonzero. We therefore again deduce that ∇^2 f()[Δ,Δ]=0 of some nonzero Δ∈ T_() if and only if the support of μ is contained in ^⊥∪ v^⊥ almost surely. The claimed expression (<ref>) follows immediately. §.§ Proof of Theorem <ref> Define the set _v = {β_⋆}^⊥∪{v}^⊥ for each unit vector v∈𝕊^d-1 and define the function h𝕊^d-1→[0,∞). h(v) = _x∼μ(x, _v)^2=_x∼μ[ ⟨ x, β_⋆β_⋆⟩^2 ∧⟨ x, v⟩^2 ] Fatou's lemma directly implies that h is lower semicontinuous and therefore admits a minimizer u_⋆∈_u∈𝕊^d-1 h(u). Let s^d→^d be a Borel measurable selection of the metric projection __u_⋆. Define the pushforward measure ν̅= s_#μ. Clearly ν̅ lies in and hence inf_ν∈ W_2^2(μ, ν) ≤ W_2^2(μ, ν̅) = _x∼μ(x,_u_⋆)^2 = h(u_⋆) with the first equality holding by (<ref>). On the other hand, for any ν∈ there exists w∈𝕊^d-1 such that (ν)⊂_w and hence h(u_⋆) ≤ h(w) = _x∼μ(x, _w)^2 ≤ W_2^2(μ, ν) with the last inequality holding by (<ref>). Taking the infimum over ν∈, we deduce that (<ref>) holds with equality, thereby completing the proof. §.§ Proof of Theorem <ref> We will need the following two elementary lemmas. If (y_1,y_2) is a centered Gaussian vector with y_1^2=σ_1^2, y_2^2=σ^2_2, and y_1 y_2=ρσ_1σ_2, then the equations hold: |y_1 y_2| =2/π(√(1-ρ^2)+ρarcsin(ρ))σ_1σ_2, (y_1 y_2)^2 =(1+2ρ^2)σ_1^2σ^2_2. The first equation is proved for example in <cit.>. To see the second equation, standard results show that the conditional distribution y_1| y_2 is Gaussian N(ρ(σ_1/σ_2) y_2, σ_1^2-ρ^2σ_1^2). Thus, the second moment is [y_1^2| y_2]=(1-ρ^2)σ_1^2+ρ^2σ_1^2/σ_2^2y_2^2 and therefore iterating expectations gives y_1^2 y_2^2=[[y_1^2| y_2] y_2^2]=(1+2ρ^2)σ_1^2σ^2_2, as claimed. Consider the function ψ [-1, 1] → defined by ψ(t) = √(1 - t^2) + t arcsin(t) and the function ϕ [-1, 1] → given by ϕ(t) = (π/2 - 1)t^2 + 1. Then, for any t ∈ [-1, 1] we have ψ(t) ≤ϕ(t). Taking the derivative of ψ(t) and using the fundamental theorem of calculus we get ψ(t) = 1 + ∫_0^tarcsin(s) ds= 1 + ∫_0^t∑_n = 0^∞2n ns^2n + 1/4^n (2n+1) ds = 1+ ∑_n = 0^∞2n nt^2n + 2/4^n (2n+2), where the second equality follows by taking the Taylor expansion of arcsin(s). Factorizing a t^2 from the series yields ψ(t) = 1+ t^2 ∑_n = 0^∞2n n|t|^n + 1/4^n (2n+2)≤ 1+ t^2 ∑_n = 0^∞2n n1/4^n (2n+2) = 1 + (π/2 - 1)t^2 = ϕ(t), where the inequality follows since |t| ≤ 1 and the second to last equality evaluates (<ref>) at t = 1. 
This completes the proof. With these two lemmas at hand, we start the proof of Theorem <ref>. We will first verify (<ref>). To this end, notice that we may write g_Σ(u,v)= (y_1y_2)^2, where we define the random variables y_1=⟨ x,u⟩ and y_2=⟨ x,v⟩. We compute y_1^2=⟨Σ u,u⟩, y_2^2=⟨Σ v,v⟩, and y_1 y_2=⟨Σ u,v⟩. Therefore, Lemma <ref> directly implies g_Σ(u,v)=⟨Σ u,u⟩⟨Σ v,v⟩+2⟨Σ u,v⟩^2=Σ^1/2u_2^2·Σ^1/2v_2^2+2⟨Σ^1/2u,Σ^1/2v ⟩^2 Consequently, applying the Cauchy-Schwarz inequality yields the two sided bound Σ^1/2u_2^2·Σ^1/2v_2^2≤ g_Σ(u,v)≤ 3Σ^1/2u_2^2·Σ^1/2v_2^2. Taking the infimum over v∈𝕊^d-1 completes the proof of (<ref>). Next, we verify (<ref>). Notice that the upper bound follows since min_v ∈𝕊^d-1 h_Σ (u, v)= min_v ∈𝕊^d-1[min{x, u^2, x, v^2}] ≤min_v ∈𝕊^d-1[ x, v^2] = λ_min (Σ). To prove the lower bound we will show the slightly stronger statement that for any u, v∈𝕊^d-1, (1-2π) λ_min(Σ)≤ h_Σ (u, v). Recall that min{a,b}=a+b/2-|a-b|/2 for any a,b∈. Therefore, we can write h_Σ(u,v) =_x∼ N(0,Σ)[⟨ x,u⟩^2+⟨ x,v⟩^2/2-|⟨ x,u⟩^2-⟨ x,v⟩^2|/2] =u^⊤Σ u+v^⊤Σ v/2-_x∼ N(0,Σ)|⟨ x,u⟩^2-⟨ x,v⟩^2|/2. Next, let's compute |⟨ x,u⟩^2-⟨ x,v⟩^2|=|⟨ x,u+v⟩_y_1⟨ x,u-v⟩_y_2|. Then we get σ_1^2 := y_1^2 = Σ^1/2 (u + v)^2 and σ_2^2 := y_2^2 = Σ^1/2 (u - v)^2, and y_1 y_2 =(u+v)^⊤Σ(u-v)=ρσ_1σ_2, where ρ:=⟨Σ^1/2 (u+v),Σ^1/2 (u-v)⟩/Σ^1/2(u+v)Σ^1/2(u-v). Thus, by Lemma <ref> we get |y_1 y_2| =2/π(√(1-ρ^2)+ρarcsin(ρ)_=:ψ(ρ))σ_1σ_2. Therefore, after the relabeling û = Σ^1/2u and v̂ = Σ^1/2v we deduce h_Σ(u,v)= û^2+v̂^2/2-ψ(ρ)/πû+v̂û-v̂. By Lemma <ref> we have that h_Σ(u,v) ≥û^2+v̂^2/2- (( 1/2 - 1/π) ρ^2 + 1/π)û+v̂û-v̂. In what follows we upper bound the second term on the right-hand-sight of this inequality. Without loss of generality we assume that v̂≤û. By definition ρ is equal to cosα where α is the angle between û + v̂ and û - v̂. Thus, by the cosine law we have that 2 ρû+v̂û-v̂ = 2 cos(α) û+v̂û-v̂= û+v̂^2 + û-v̂^2 - 4v̂^2 = 2(û^2 - v̂^2), where the last equality follows by the parallelogram law. Similarly, using Young's inequality û+v̂û-v̂≤û+v̂^2 + û-v̂^2/2≤û ^2+ v̂^2. Then, applying (<ref>) and (<ref>) yields (( 1/2 - 1/π) ρ^2 + 1/π)û+v̂û-v̂ ≤( 1/2 - 1/π) ρ( û^2 - v̂^2) + 1/π(û^2 + v̂^2) ≤(1/π + ρ/2 - ρ/π)û^2 + ( 1/π - ρ/2 + ρ/π) v̂^2 ≤1/2û^2 + (2/π - 1/2) v̂^2 where the last inequality follows by adding and subtracting (2/π - 1/2) to the coefficient of v̂ and noting that v̂≤û. Combining this inequality with (<ref>) yields h_Σ(u,v) ≥( 1 - 2/π) v̂^2 = ( 1 - 2/π) v^⊤Σ v ≥( 1 - 2/π)λ_min(Σ), which proves the lower bound. § PROOFS FROM SECTION <REF> §.§ Proof of Lemma <ref> Set Σ_1=_μxx^⊤ and Σ_2=_νxx^⊤. Define the manifold of rank one d_1× d_2 matrices: = {M ∈^d_1 × d_2 : (A) = 1 }. A quick computation yields the expression ∇ f(M)= ⟨ M-,x_1x^⊤_2⟩. In particular, equality ∇ f()=0 holds and therefore the optimal Lagrange multipliers λ^⋆ are zero. Hence the Hessian of the Lagrangian ∇^2 ℒ(,λ^⋆) coincides with ∇^2 f(). We now successively compute ∇^2 f()[Δ,Δ] =⟨Δ, x_1x^⊤_2⟩^2 = x^⊤_1Δ (x_2x^⊤_2)Δ^⊤ x_1 =_x_1 x^⊤_1ΔΣ_2Δ^⊤ x_1 =_x_1(ΔΣ_2Δ^⊤ x_1x^⊤_1) =(ΔΣ_2Δ^⊤Σ_1) =Σ^1/2_1ΔΣ^1/2_2_F^2 ≥λ_min(Σ_1)λ_min(Σ_2)Δ^2_F. In particular, if both Σ_1 and Σ_2 are nonsingular, then μ×ν does not lie in . We now leverage the tangent structure to get an upper bound and a better lower bound on the quadratic form ∇^2 f()[Δ,Δ]. To this end, standard results (see e.g. <cit.>) show that the tangent space to at =β_1⋆β_2⋆^⊤ is given by T_()={ aβ_1⋆β_2⋆^⊤ + uβ_2⋆^⊤ + β_1⋆v^⊤ : a∈, u∈β_1⋆^⊥, v∈β_2⋆^⊥}. Let Δ=aβ_1⋆β_2⋆^⊤ + uβ_2⋆^⊤ + β_1⋆v^⊤∈ T_(). 
Without loss of generality we might assume β_1⋆ = β_2⋆ = 1, since otherwise we can make the change of variables a'/β_1⋆β_2⋆← a, v' / β_1← v, and u'/β_2← u. Thus, Δ_F^2 = a^2 +v^2 + u^2. Observe that we may rewrite (<ref>) as ∇^2 f()[Δ,Δ] = Σ^1/2_1((aβ_1⋆ + u)β_2⋆^⊤ + β_1⋆v^⊤)Σ^1/2_2_F^2 = (Σ_2^1/2⊗Σ^1/2_1)((aβ_1⋆ + u)β_2⋆^⊤) + (Σ^1/2_2⊗Σ^1/2_1) (β_1⋆v^⊤) _F^2. To obtain a bound we leverage the following general claim. Let A ∈ S_+^d be a positive definite matrix. Let x, y ∈^d be any pair of orthogonal vectors. Then the estimate holds: Ax + Ay^2≥2/κ(A)^2 + 1(Ax^2 + Ay^2). The claim will follow from the following inequality |Ax, Ay| ≤(1 - 2/κ(A)^2 + 1 )AxAy. We will come back to the proof of (<ref>) but first we show how it implies the result (<ref>). Expanding the square yields Ax + Ay^2 = Ax^2 + 2 Ax, Ay + Ay^2 ≥Ax^2 - 2 |Ax, Ay| + Ay^2 ≥Ax^2 - 2 (1 - 2/κ(A)^2 + 1 )AxAy + Ay^2 ≥Ax^2 - (1 - 2/κ(A)^2 + 1 )( Ax^2 + Ay^2) + Ay^2 = 2/κ(A)^2 + 1(Ax^2 + Ay^2), where (<ref>) follows by (<ref>) and (<ref>) is an application of Young's inequality. So we now focus on proving (<ref>). This is equivalent to finding an upper bound for the following optimization problem max_x,y:  x ⊥ y Ax, Ay/AxAy = max_x,y; x ⊥ yA(x+y)^2 - A(x-y)^2/4AxAy where the equality follows from the parallelogram law. Using the same law yields that 1/2( A(x+y)^2 + A(x-y)^2 )= Ax^2 + Ay^2. We use this constraint to define a relaxation of the problem. We introduce two new variables z and w, which intuitively play the role of x+y and x-y, respectively. Without loss of generality we can set x = y =1 since we can divide both sides of (<ref>) by xy and therefore from orthogonality of x and y we have x-y^2=1 and x+y^2=2. Define the constraint sets = {(z,w) |z^2 = w^2 = 2} and _w,z = {(x, y) |Ax^2 + Ay^2 = (Az^2 + Aw^2)/2}. We now successively upper bound max_x,y; x ⊥ y|Ax, Ay|/AxAy ≤max_(z, w) ∈max_(x,y) ∈_w,z|Az^2 - Aw^2|/4AxAy = max_(z, w) ∈|Az^2 - Aw^2|/Az^2 + Aw^2 = λ^2_max(A) - λ^2_min(A) /λ^2_max(A) + λ^2_min(A) where (<ref>) follows since the function (a, b) ↦ ab constrained to a^2 + b^2 = c is minimized at (a, b) = (√(c/2),√(c/2)) and (<ref>) follows since (a,b) ↦|a^2 - b^2|/a^2+b^2 constrained to the interval √(2)[λ_min(A), λ_max(A)] attains a maximum at (a, b) = √(2)(λ_max(A), λ_min(A)). Reordering the terms in (<ref>) proves (<ref>). We instantiate the claim with A = Σ_2^1/2⊗Σ_1^1/2, x = ((aβ_1⋆ + u)β_2⋆^⊤) and y = (β_1⋆v^⊤). Note that x and y are orthogonal since x, y = (aβ_1⋆ + u)β_2⋆^⊤, β_1⋆v^⊤ = (β_2⋆(aβ_1⋆ + u)^⊤β_1⋆v^⊤) = ((aβ_1⋆ + u)^⊤β_1⋆v^⊤β_2⋆) = 0 where we used the cyclic invariance of the trace and the fact that v, β_2⋆ = 0. Further, κ(A)^2 = κ(Σ_1) κ(Σ_2) and thus, all together we derive ∇^2 f()[Δ,Δ] ≥2/κ(Σ_1)κ(Σ_2) + 1( β_1⋆, Σ_1β_1⋆u, Σ_2 u + β_2⋆, Σ_2β_2⋆a β_1⋆ + v, Σ_1 (aβ_1⋆ + v)) ≥2/κ(Σ_1)κ(Σ_2) + 1( β_1⋆, Σ_1β_1⋆λ_min(Σ_2)u^2 + β_2⋆, Σ_2β_2⋆λ_min(Σ_1)aβ_1⋆ + v^2) ≥2/κ(Σ_1)κ(Σ_2) + 1min{β_1⋆, Σ_1β_1⋆λ_min(Σ_2), β_2⋆, Σ_2β_2⋆λ_min(Σ_1) } where the last holds since u^2 + aβ_1⋆+v^2 = 1. This establishes the lower bound in (<ref>). Next we establish the converse. For any Δ=aβ_1⋆β_2⋆^⊤ + uβ_2⋆^⊤ + β_1⋆v^⊤, from (<ref>) we have ∇^2 f()[Δ,Δ]=[a ⟨ x_1, β_1⋆⟩⟨ x_2, β_2⋆⟩ + ⟨ x_1, u ⟩⟨ x_2, β_2⋆⟩ + ⟨ x_1, β_1⋆⟩⟨ x_2, v ⟩]^2. Note the equality Δ^2_F=a^2β_1⋆_2^2β_2⋆_2^2+β_1⋆^2v^2+β_2⋆^2u^2. Now setting v=0, equation (<ref>) becomes ∇^2 f()[Δ,Δ]=[ ⟨ x_1, aβ_1⋆+u ⟩⟨ x_2, β_2⋆⟩]^2=⟨Σ_1(aβ_1⋆+u),aβ_1⋆+u⟩⟨Σ_2β_2⋆,β_2⋆⟩. Note the equality aβ_1⋆+u^2=a^2β_1⋆^2+u^2=Δ^2/β_2⋆^2. 
Therefore we deduce min_Δ: Δ=1∇^2 f()[Δ,Δ]≤min_w: w^2=1⟨Σ_1w,w ⟩⟨Σ_2β_2⋆,β_2⋆⟩/β_2⋆^2=λ_min(Σ_1)·⟨Σ_2β_2⋆,β_2⋆⟩/β_2⋆^2. A symmetric argument shows min_Δ: Δ=1∇^2 f()[Δ,Δ]≤λ_min(Σ_2)·⟨Σ_1β_1⋆,β_1⋆⟩/β_1⋆^2. In particular, if Σ_1 or Σ_2 are singular then μ×ν lies in , as claimed. This completes the proof. §.§ Proof of Theorem <ref> This follows directly from Lemma <ref> and Theorem <ref>. § PROOFS FROM SECTION <REF> §.§ Proof of Lemma <ref> Consider any tangent vector Δ= v^⊤ +v ^⊤ with v∈^d. Then we may vectorize Δ as follows: (Δ)=( v^⊤I)+(I v ^⊤)=(I⊗)v+(⊗ I)v=Φ_βv. Therefore, we compute ∇ f(M_⋆)[Δ,Δ]=(Δ)^⊤Σ_μ(Δ) = v^⊤ (Φ_β_⋆^⊤Σ_μΦ_β_⋆) v, Minimizing the expression (<ref>) in Δ with Δ_F=1 yields the guarantee (<ref>). §.§ Proof of Theorem <ref> Characterization of well-posedness. To simplify notation, we will relabel to β. Observe for any measure (p_ij)∈, equation (<ref>) yields ∇^2 f(M_⋆)[Δ_v,Δ_v]=⟨Δ_v,X⟩^2=∑_(i,j)∈ E (v_iβ_j+v_jβ_i)^2 p_ij, where Δ_v:=β v^⊤ +v β^⊤ is any tangent vector to M_⋆ at . Recall moreover that Δ_v is nonzero if and only if v is nonzero. () We prove the contrapositive. Thus, suppose that Assumption <ref> does not hold. We will consider two cases separately. Case 1. Assume that there exists i̅∈ V^0. Thus for any edge (i̅,j)∈ E equality β_j=0 holds. Then setting v = e_i̅, we deduce ∇^2 f(M_⋆)[Δ_v,Δ_v] = ∑_(i,j)∈ E (v_iβ_j+v_jβ_i)^2 p_ij=∑_j: (i̅,j)∈ Eβ_j^2 p_i̅j=0. We conclude that μ lies in ^ mc, as claimed. Case 2. Assume that one of the components of G^* is bipartite and the set V^0 is empty. Without loss of generality, we may assume that G^* is connected, since otherwise we can restrict the following argument to any connected component. Thus, there exists a partition V^* = I ∪ J with all the edges (i,j) ∈ E^* satisfying i ∈ I and j ∈ J. Define the vector v_i = β_i if i ∈ I, - β_j if j ∈ J, 0 otherwise. Using (<ref>), we obtain ∇^2 f(M_⋆)[Δ_v,Δ_v] = ∑_(i,j) ∈ E^* (v_iβ_j + v_jβ_i)^2 p_ij =∑_(i,j) ∈ E^* (β_iβ_j - β_jβ_i)^2 p_ij = 0. where we used that v is supported on V^* and that V^0 is empty. Thus, μ lies in ^ mc, as claimed. () Assume that Assumption <ref> holds. To this end, let v ∈^d satisfy ∇^2 f(M_⋆)[Δ_v,Δ_v]=0. Our goal is to show that v is the zero vector. To this end, clearly (<ref>) implies v_iβ_j+v_jβ_i=0 ∀ (i,j)∈ E. Without loss of generality suppose that G^* is connected, otherwise, we can repeat the argument for each connected component. Since G^* is non-bipartite, it must contain an odd-size cycle i_1→ i_2→…→ i_k→ i_1. Consider the expansion 2 β_i_1 v_i_k = ( β_i_1 v_i_k + β_i_k v_i_1) - β_i_k/β_i_2( β_i_2 v_i_1 + β_i_1 v_i_2) + ∑_j = 2^k - 1 (-1)^jβ_i_1β_i_k/β_i_jβ_i_j+1( β_i_j+1 v_i_j + β_i_j v_i_j+1 ) = 0, where the last equality follows since each term in the parenthesis is zero. Since β_i_1 > 0, we deduce v_i_k = 0. Next, observe from (<ref>) that for any neighbor j of i_k, i.e., satisfying (i_k, j)∈ E^*, we have that v_j = β_jv_i_k / β_i_k = 0. Repeating the argument, we deduce v_j = 0 for all v ∈ V^*. Next, consider any vertex i ∉ V^*. Then since V^0 is empty, there exists some j∈ V^* with (i,j)∈ E. Using (<ref>) again, we conclude v_iβ_j=0. Taking into account β_j>0, we deduce v_i = 0. Thus v is identically zero. Distance formula. We have now proved that a measure μ lies in ^ mc if and only if its support (μ) lies in Ω_. Given any set of indices A ⊆ [d] × [d] define _A^d× d→^d× d to be the orthogonal projection onto the entries indexed by A. Fix now a set of entries A ⊂(P) such that A ∈Ω_M_⋆. 
Clearly, the pushforward measure (_A)_#μ lies ^mc, and we compute min_ν∈^mc W_2^2 (μ, ν) ≤ W_2^2(μ, (_A)_#μ)= ∑_ij ∈(P) ∖ A p_ij. Taking the minimum over A yields the inequality ≤ in (<ref>). Conversely, fix a measure ν∈^mc and let A be the indices that are observed with positive probability according to ν. Thus, the support of ν is contained in the subspace (_A). Then by Lemma <ref> and (<ref>), we have W_2^2(μ, ν) ≥ W_2^2(μ, (_A)_#μ)= ∑_ij ∈(P) ∖ A p_ij. Taking the minimum over all entries A ⊂(P) such that A ∈Ω_M_⋆ completes the proof of the reverse inequality ≥ in (<ref>). Hardness. We reduce from the problem. Assume that we have a polynomial time algorithm (M_⋆, (p_ij)) to compute (<ref>). Given an instance of G = (V, E), we split the graph into its connected components G_1 = (V_1, E_1), …, G_k = (V_k, E_k). For each ℓ≤ k, we define an instance of matrix completion with M_⋆ = 11^⊤∈^|G_ℓ| × |G_ℓ| — with a slight abuse of notation we use M_⋆ for all k problems — and set the distribution ^(ℓ) to p_ij^(ℓ) = 1/|E_ℓ| if i,j ∈ E_ℓ, and p_ij^(ℓ) = 0, otherwise. Since the entries of M_⋆ are strictly positive and G_ℓ is connected, the output of (M_⋆, ^(ℓ)) times |E_ℓ| is equal to the minimum number of edges one needs to remove from G_ℓ to make it bipartite. Thus, |E_ℓ|(1 - (M_⋆, ^(ℓ))) is equal to the number of edges of the largest bipartite graph one can construct via edge deletion, which is readily seen to be equal to (G_ℓ). Thus summing along connected components yields (G) = |E| - ∑_ℓ≤ k|E_ℓ| ·(M_⋆, ^(ℓ)); completing the proof.
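To close, the combinatorial RSE formula for matrix completion is easy to sanity-check by brute force on tiny instances (a sketch of ours, not from the paper). Ill-posedness of a kept edge set is tested here directly through singularity of the quadratic form v ↦ ∑_{(i,j)} p_ij (v_iβ_j + v_jβ_i)²; on a uniformly observed triangle the minimum removed mass is 1/3, matching the one edge (|E| − MAXCUT = 3 − 2) that must be deleted to reach bipartiteness:

```python
import numpy as np
from itertools import combinations

def form_matrix(edges, probs, beta):
    # Matrix of the quadratic form v -> sum_{(i,j)} p_ij (v_i beta_j + v_j beta_i)^2.
    d = len(beta)
    M = np.zeros((d, d))
    for (i, j), w in zip(edges, probs):
        a = beta[j] * np.eye(d)[i] + beta[i] * np.eye(d)[j]
        M += w * np.outer(a, a)
    return M

beta = np.array([1.0, 1.0, 1.0])        # M* = 11^T on three vertices
edges = [(0, 1), (1, 2), (0, 2)]        # a triangle, observed uniformly
probs = [1 / 3] * 3

best = np.inf
for k in range(len(edges) + 1):
    for kept in combinations(range(len(edges)), k):
        M = form_matrix([edges[i] for i in kept], [probs[i] for i in kept], beta)
        if np.linalg.eigvalsh(M).min() < 1e-12:    # the kept edge set is ill-posed
            removed = sum(probs[i] for i in range(len(edges)) if i not in kept)
            best = min(best, removed)
print(best)  # 1/3: deleting any single edge leaves a bipartite component
```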
http://arxiv.org/abs/2405.09027v1
20240515014808
Csikvári's poset and Tutte polynomial
[ "Changxin Ding" ]
math.CO
[ "math.CO" ]
Csikvári constructed a poset on trees to prove that several graph functions attain extreme values at the star and the path among the trees on a fixed number of vertices. Reiner and Smith proved that the Tutte polynomials (1,y) of cones over trees, which are the graphs obtained by attaching a cone vertex to a tree, have the described extreme behavior. They further conjectured that the result can be strengthened in terms of Csikvári's poset. We solve this conjecture affirmatively. § INTRODUCTION This paper aims to prove a conjecture of Victor Reiner and Dorian Smith. We first introduce some background. Fix a positive integer n≥ 3 and consider the set _n of all trees on n vertices. In _n, two extreme elements are the tree Path_n with exactly 2 leaves and the tree Star_n with exactly n-1 leaves. Many graph functions F(G) attain extreme values at Star_n and Path_n among trees in _n. For example, Lovász and Pelikán <cit.> proved that the star has the largest spectral radius and the path has the smallest spectral radius among all trees in _n (in short, we say that the function F(G) can be the spectral radius of G); Zhou and Gutman <cit.> proved that F(G) can be the coefficients of the characteristic polynomial of the Laplacian matrix of G; Péter Csikvári <cit.> proved that F(G) can be the number of closed walks of a fixed length l in G. For more examples, see <cit.>. Among these works, Csikvári's is of most interest to us. Csikvári's main tool is an operation on _n called the generalized tree shift, which makes _n a partially ordered set. We call it the Csikvári poset. One basic feature of the Csikvári poset is that the tree Star_n is the unique maximal element and the tree Path_n is the unique minimal element <cit.>. For the definition of the Csikvári poset, see Section <ref>. For readers' convenience, we quote the example of _7 from <cit.> here; see Figure <ref>. For the example of _6, see <cit.>. The generalized tree shift indeed generalizes many transformations for trees found in the literature; see <cit.> for a detailed discussion. Csikvári's tree shift was inspired by a graph transformation defined by Kelmans in <cit.>. In <cit.>, the tree shift is called the KC-transformation in honor of Kelmans and Csikvári. Csikvári's tree shift was used to prove that several graph functions F(G) have the property described in the first paragraph; see <cit.>. In <cit.>, Victor Reiner and Dorian Smith studied the sandpile groups of the graphs Cone(T) obtained by attaching a cone vertex to a tree T. In particular, they studied the function f(T):=_Cone(T)(1,y), where _G(x,y)∈[x,y] denotes the Tutte polynomial of a graph G. In general, the coefficients of the Tutte polynomial _G(1,y) encode some information from G, including external activities, the number of recurrent sandpile configurations (also known as critical configurations) at a given level, and the number of reduced divisors (also known as G-parking functions or superstable configurations) at a given level; see <cit.>. Via duality, _G(1,y) can be viewed as the h-vector of the cographic matroid M^*(G) <cit.>. Going back to Reiner and Smith's work, they proved the following result. <cit.> For a tree T∈_n, the inequalities f(Star_n)≤ f(T)≤ f(Path_n) hold coefficientwise. For readers' convenience, we quote the values f(T) for all the trees T∈_7 from <cit.>; see Table <ref>.
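Since f(T) = _Cone(T)(1,y) is defined by an elementary recursion, values such as those in Table <ref> can be reproduced mechanically. The following Python sketch (our own illustration; the function names and the use of sympy are ours, and the routine is exponential-time, intended only for small trees) implements the contraction-deletion relation for multigraphs at x = 1 and evaluates f on the three trees in _5.

import sympy as sp

y = sp.symbols('y')

def contract(edges, u, w):
    # Identify vertex w with u; parallel edges remain, loops may appear.
    relabel = lambda v: u if v == w else v
    return [(relabel(p), relabel(q)) for (p, q) in edges]

def connected(u, w, edges):
    # Is w reachable from u using the given edge multiset?
    reach, frontier = {u}, [u]
    while frontier:
        a = frontier.pop()
        for (p, q) in edges:
            if p == a and q not in reach:
                reach.add(q); frontier.append(q)
            if q == a and p not in reach:
                reach.add(p); frontier.append(p)
    return w in reach

def tutte_1y(edges):
    # T_G(1, y) via contraction-deletion; edges lists the edges of a multigraph.
    if not edges:
        return sp.Integer(1)
    (u, w), rest = edges[0], edges[1:]
    if u == w:                          # loop: contributes a factor y
        return sp.expand(y * tutte_1y(rest))
    if not connected(u, w, rest):       # bridge: factor x = 1, so just contract
        return tutte_1y(contract(rest, u, w))
    return sp.expand(tutte_1y(rest) + tutte_1y(contract(rest, u, w)))

def f_cone(tree_edges, vertices, apex='r'):
    # f(T) = Tutte polynomial of Cone(T) evaluated at x = 1.
    cone_edges = list(tree_edges) + [(apex, v) for v in vertices]
    return sp.Poly(tutte_1y(cone_edges), y)

V = [1, 2, 3, 4, 5]
path5   = [(1, 2), (2, 3), (3, 4), (4, 5)]
star5   = [(1, 2), (1, 3), (1, 4), (1, 5)]
spider5 = [(1, 2), (2, 3), (2, 4), (4, 5)]   # the remaining tree in _5
# Per the theorem: f(Star_5) <= f(spider5) <= f(Path_5) coefficientwise.
print(f_cone(star5, V), f_cone(spider5, V), f_cone(path5, V))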
Victor Reiner and Dorian Smith conjectured and we will prove the following. <cit.> Let T and T' be two trees. If T<T' in the Csikvári poset, then f(T') ≤ f(T) coefficientwise in [y]. Note that the conjecture implies Theorem <ref>. Since our proof does not rely on Theorem <ref> (or its proof), we obtain a new proof of Theorem <ref>. Comparing the proof in <cit.> and ours, one can notice that they deal with many inequalities while we mostly deal with equalities. Remarkably, our proof makes use of what Csikvári called “General Lemma” (<cit.>). In particular, when T' covers T in the Csikvári poset, we can factor f(T)-f(T') into three polynomials with no negative coefficients (Proposition <ref>). Thus our result can be viewed as another successful application of Csikvári's method in <cit.>. Our paper is arranged as follows. In Section <ref>, we give the necessary definitions and notations. In Section <ref>, we present the proof of Conjecture <ref>. § PRELIMINARIES §.§ Tutte polynomial and Cone of Tree By a graph, we mean a finite undirected graph possibly with parallel edges and loops. Recall that the Tutte polynomial _G(x,y) of a graph G can be defined recursively by the following contraction-deletion relation. _G(x,y) = _G∖ e(x,y)+ _G/e(x,y), if e is neither a loop nor a bridge, y ·_G ∖ e(x,y), if e is a loop, x ·_G/e (x,y), if e is a bridge, 1, if G has no edges, where G∖ e (resp. G/e) is the graph obtained from G by deleting (resp. contracting) the edge e. It follows that the coefficients of Tutte polynomials are non-negative integers. By the contraction-deletion relation, it is easy to see that the trees in _n cannot be distinguished by their Tutte polynomials. Following <cit.>, we may consider the Tutte polynomials of the cones over trees to help with this issue. Let G be a graph. * The cone over the graph G is the graph obtained from G by adding an extra vertex r and then connecting r to each vertex of G, denoted by Cone(G). * Define the function f(G):=_Cone(G)(1,y)∈[y]. Throughout our paper, for two polynomials f_1, f_2∈[y], the inequality f_1≤ f_2 means that the coefficients of f_2-f_1 are all non-negative. In Section <ref>, we have seen that the function f(G) attains extreme values at Star_n and Path_n when taking values in _n (Theorem <ref>). §.§ Csikvári poset We first introduce a useful notation, which is also adopted in <cit.>. For i=1,2, let G_i be a graph with a distinguished vertex v_i. Let G_1:G_2 be the graph obtained from G_1 and G_2 by identifying the vertices v_1 and v_2. This operation depends on the vertices we choose, but we omit this in the notation for simplicity. It should be clear from the context what v_1 and v_2 are. Sometimes we also use the same label v for the two distinguished vertices in G_1 and G_2. The operation is also known as a 1-sum of G_1 and G_2 as a special case of clique-sums. The following operation on _n plays a central role in Csikvári's theory <cit.> and our paper. Let P_k be a path in a tree T with the vertex sequence v_1,v_2,⋯, v_k(k≥ 2) such that all the interior vertices v_2, ⋯, v_k-1 have degree two in T. (When k=2, there is no interior vertex.) By removing all the edges and interior vertices in the path P_k from T, we obtain a tree H_1 with the distinguished vertex v_1 and a tree H_2 with the distinguished vertex v_k. Then the generalized tree shift (with respect to P_k) transforms the tree T into the tree T':=(H_1:H_2):P_k, where we identify the vertex v_k in H_1:H_2 with the vertex v_k in P_k. (See Figure <ref>.) 
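Computationally, the definition is straightforward: since every interior vertex of P_k has degree two in T, the subtree H_1 meets the rest of T only through the edges at v_1, so T' is obtained from T by re-hanging every neighbor of v_1 outside the path onto v_k. A minimal Python sketch of ours follows; it assumes the input path satisfies the degree condition in the definition.

def generalized_tree_shift(edges, path):
    # edges: undirected edges of the tree T; path: [v1, ..., vk], whose
    # interior vertices have degree two in T (as in the definition).
    v1, v2, vk = path[0], path[1], path[-1]
    shifted = []
    for (a, b) in edges:
        if a == v1 and b != v2:
            shifted.append((vk, b))      # re-hang a branch of H_1 onto vk
        elif b == v1 and a != v2:
            shifted.append((a, vk))
        else:
            shifted.append((a, b))
    return shifted

def leaves(edges):
    from collections import Counter
    deg = Counter()
    for a, b in edges:
        deg[a] += 1; deg[b] += 1
    return {v for v, d in deg.items() if d == 1}

# T: the path 1-2-3-4 with an extra leaf 5 at vertex 1 and leaf 6 at vertex 4.
T = [(1, 2), (2, 3), (3, 4), (1, 5), (4, 6)]
Tp = generalized_tree_shift(T, [1, 2, 3, 4])  # interior vertices 2, 3 have degree 2
print(sorted(leaves(T)), sorted(leaves(Tp)))  # {5, 6} -> {1, 5, 6}: one more leaf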
Observe that the generalized tree shift increases the number of leaves by 1 unless T and T' are isomorphic graphs (when v_1 or v_k is a leaf of T). Hence we may define the following poset. For T, T'∈_n, we denote T≤ T' if T' can be obtained from T by some generalized tree shifts. This gives a partial order on _n. We call it the Csikvári poset on _n. (See Figure <ref>.) If a tree has two vertices that are not leaves, then one can increase the number of leaves by applying a generalized tree shift to the tree. This implies that the only maximal element of the Csikvári poset is the star. It is less trivial that the only minimal element is the path. <cit.> In the Csikvári poset (_n, ≤), the tree Star_n is the unique maximal element and the tree Path_n is the unique minimal element. § PROOF OF CONJECTURE <REF> When the tree T' covers T in the Csikvári poset, we will prove the inequality f(T')≤ f(T) by factoring f(T)-f(T') into three polynomials with no negative coefficients (Proposition <ref>). To state this result, we need to define the following function. For a graph G and a vertex v of G, let e(v) be the edge of Cone(G) connecting the vertex v and the apex r of the cone. Note that e(v) is a bridge if and only if there is no edge of G incident to v. Define the function g_v(G):= T_Cone(G)\ e(v)(1,y), if there exists an edge of G incident to v, 0, otherwise. For the two trees T and T' in Definition <ref>, we have f(T)-f(T')=y· g_v_1(H_1)· g_v_k(H_2)· g_v_1(P_k). Consequently, f(T')≤ f(T). Conjecture <ref> holds. This is a direct consequence of Proposition <ref> and the definition of Csikvári poset. Before we present the technical proof of Proposition <ref>, we introduce a notation and give an example first. In this section, any path on k vertices will be denoted by P_k where the vertices are labeled by v_1,⋯, v_k in order. Sometimes a formula or a sentence will involve two different paths where we use the same label for two different vertices; e.g. the paths P_2 and P_3 both have a vertex labeled by v_1. This abuse of notation should not cause any confusion in the context. This example demonstrates Proposition <ref>. By direct computations, we have g_v_1(P_2)=_P_2(1,y)=1, g_v_2(P_3)=_4-cycle(1,y)=y+3, and g_v_1(P_3)=_3-cycle(1,y)=y+2. Then we consider the data in Figure <ref> and Table <ref>. The tree T_2 consists of a subtree H_1=P_3 with the distinguished vertex v_1, a subtree H_2=P_3 with the distinguished vertex v_2, and the path P_2 connecting the two distinguished vertices. The generalized tree shift transforms the tree T_2 into T_8. Then one can check that f(T_2)-f(T_8)=y^4+7y^3+16y^2+12y=yg_v_1(P_3)g_v_2(P_3)g_v_1(P_3). Our main tool to prove Proposition <ref> is what Csikvári called “General Lemma”. (General Lemma)[By the proof of the General Lemma in <cit.>, if we only assume that the condition (1) holds for any two trees G_1 and G_2, then the conclusion of the lemma still holds.]<cit.> Let f(G) be a graph polynomial in y. Assume there exists a graph polynomial g_v(G) in y whose inputs are a graph G and a vertex of G such that the following two conditions hold. * For any two graphs G_1 and G_2, we have f(G_1:G_2)=c_1f(G_1)f(G_2)+c_2f(G_1)g_v(G_2)+c_2f(G_2)g_v(G_1)+c_3g_v(G_1)g_v(G_2), where c_1,c_2,c_3 are rational functions of y and v is the identified vertex in G_1:G_2. * Denote q_v(G):=c_2f(G)+c_3g_v(G). We have that q_v_1(P_2) is not a zero polynomial. 
Then the conclusion is f(T)-f(T') = (g_v_1(P_3)-g_v_2(P_3))/(q_v_1(P_2))^2 · q_v_1(P_k) q_v_1(H_1) q_v_k(H_2), where the trees T and T' are as in Definition <ref>. Although in our paper the function f has been defined, the function f in the lemma could be any graph polynomial. Csikvári called it the General Lemma because it was used to prove “T<T'⇒ f(T)≤ f(T')” for several graph polynomials f in <cit.>. One difficulty of applying the General Lemma is to find the function g_v, which we have given for our function f. We will prove Proposition <ref> by checking that our functions f and g_v satisfy the conditions in the General Lemma. We first introduce an auxiliary function h_v(G):=T_Cone(G)/e(v)(1,y). By the contraction-deletion relation of Tutte polynomials, we get f(G)=g_v(G)+h_v(G). It is not hard to show that T_G_1:G_2(x,y)=T_G_1(x,y)·T_G_2(x,y). Hence we have h_v(G_1:G_2)=h_v(G_1)h_v(G_2), where v is the identified vertex in G_1:G_2. Let v be the identified vertex in G_1:G_2. Then f(G_1:G_2)=f(G_1)f(G_2)-y·g_v(G_1)g_v(G_2). Observe that if there is no edge of G_1 incident to v, then we have g_v(G_1)=0 and f(G_1:G_2)=f(G_1)f(G_2). Thus the desired equality holds. Now we use induction on the number of edges in G_1 to prove f(G_1)f(G_2)-f(G_1:G_2)=y·g_v(G_1)g_v(G_2). The base case is that G_1 has no edge, which is covered by the observation above. For the inductive step, we may assume that there exists an edge e of G_1 incident to v (otherwise the case is covered by the observation). Let e(v) be the edge of Cone(G_1) connecting the vertex v and the apex of the cone. Then by applying the contraction-deletion relation to the edge e of Cone(G_1), we get f(G_1)=f(G_1∖ e)+T_Cone(G_1)/e(1,y). By further applying the contraction-deletion relation to the edge e(v) of Cone(G_1)/e (which is one of the multiple edges produced by the contraction), we get f(G_1)=f(G_1∖ e)+f(G_1/e)+y·h_v(G_1/e). Similarly, by applying the contraction-deletion relation to the edge e in Cone(G_1)∖ e(v) and then applying Equation (<ref>), we get g_v(G_1) = f(G_1/e)+g_v(G_1∖ e) = g_v(G_1/e)+h_v(G_1/e)+g_v(G_1∖ e). The following computation finishes the proof, where IH means the induction hypothesis. f(G_1)f(G_2)-f(G_1:G_2) (<ref>)= (f(G_1∖ e)+f(G_1/e)+y·h_v(G_1/e))· f(G_2)-f(G_1∖ e:G_2)-f(G_1/e:G_2)-y·h_v(G_1/e:G_2) IH= y·g_v(G_1∖ e)g_v(G_2)+y·g_v(G_1/e)g_v(G_2)+y·h_v(G_1/e)f(G_2)-y·h_v(G_1/e:G_2) (<ref>)= y·g_v(G_1∖ e)g_v(G_2)+y·g_v(G_1/e)g_v(G_2)+y·h_v(G_1/e)f(G_2)-y·h_v(G_1/e)h_v(G_2) (<ref>)= y·g_v(G_1∖ e)g_v(G_2)+y·g_v(G_1/e)g_v(G_2)+y·h_v(G_1/e)g_v(G_2) (<ref>)= y·g_v(G_1)g_v(G_2). Consider applying the General Lemma with c_1=1, c_2=0, c_3=-y. By Lemma <ref>, the first condition in the General Lemma holds. For the second condition, we have q_v(G)=-y·g_v(G), and hence q_v_1(P_2)=-y·g_v_1(P_2)=-y≠ 0 by Example <ref>. Thus the General Lemma can be applied in our setting, and a direct computation gives the desired equality. The inequality follows from the fact that the Tutte polynomials, and hence g_v(G), do not have negative coefficients. § ACKNOWLEDGEMENTS Thanks to Victor Reiner, Dorian Smith, and Péter Csikvári for helpful discussions.
http://arxiv.org/abs/2405.10185v1
20240516153018
DiverGen: Improving Instance Segmentation by Learning Wider Data Distribution with More Diverse Generative Data
[ "Chengxiang Fan", "Muzhi Zhu", "Hao Chen", "Yang Liu", "Weijia Wu", "Huaqi Zhang", "Chunhua Shen" ]
cs.CV
[ "cs.CV" ]
Instance segmentation is data-hungry, and as model capacity increases, data scale becomes crucial for improving the accuracy. Most instance segmentation datasets today require costly manual annotation, limiting their data scale. Models trained on such data are prone to overfitting on the training set, especially for those rare categories. While recent works have delved into exploiting generative models to create synthetic datasets for data augmentation, these approaches do not efficiently harness the full potential of generative models. To address these issues, we introduce a more efficient strategy to construct generative datasets for data augmentation, termed DiverGen. Firstly, we provide an explanation of the role of generative data from the perspective of distribution discrepancy. We investigate the impact of different data on the distribution learned by the model. We argue that generative data can expand the data distribution that the model can learn, thus mitigating overfitting. Additionally, we find that the diversity of generative data is crucial for improving model performance, and we enhance it through various strategies, including category diversity, prompt diversity, and generative model diversity. With these strategies, we can scale the data to millions while maintaining the trend of model performance improvement. On the LVIS dataset, DiverGen significantly outperforms the strong model X-Paste, achieving +1.1 box AP and +1.1 mask AP across all categories, and +1.9 box AP and +2.5 mask AP for rare categories. Our code is available at https://github.com/aim-uofa/DiverGen. § INTRODUCTION Instance segmentation <cit.> is one of the challenging tasks in computer vision, requiring the prediction of masks and categories for instances in an image, which serves as the foundation for numerous visual applications. As models' learning capabilities improve, the demand for training data increases. However, current datasets for instance segmentation heavily rely on manual annotation, which is time-consuming and costly, and the dataset scale cannot meet the training needs of models. Despite the recent emergence of the automatically annotated dataset SA-1B <cit.>, it lacks category annotations, failing to meet the requirements of instance segmentation. Meanwhile, the ongoing development of generative models has largely improved the controllability and realism of generated samples. For example, recent text2image diffusion models <cit.> can generate high-quality images corresponding to input prompts. Therefore, current methods <cit.> use generative models for data augmentation by generating datasets to supplement the training of models on real datasets and improve model performance. Although current methods have proposed various strategies to enable generative data to boost model performance, there are still some limitations: 1) Existing methods have not fully exploited the potential of generative models. First, some methods <cit.> not only use generative data but also need to crawl images from the internet, which makes it significantly challenging to obtain large-scale data. Meanwhile, the content of data crawled from the internet is uncontrollable and needs extra checking. Second, existing methods do not fully use the controllability of generative models.
Current methods often adopt manually designed templates to construct prompts, limiting the potential output of generative models. 2) Existing methods <cit.> often explain the role of generative data from the perspective of class imbalance or data scarcity, without considering the discrepancy between real-world data and generative data. Moreover, these methods typically show improved model performance only in scenarios with a limited number of real samples, and the effectiveness of generative data on existing large-scale real datasets, like LVIS <cit.>, is not thoroughly investigated. In this paper, we first explore the role of generative data from the perspective of distribution discrepancy, addressing two main questions: 1) Why does generative data augmentation enhance model performance? 2) What types of generative data are beneficial for improving model performance? First, we find that there exist discrepancies between the model learned distribution of the limited real training data and the distribution of real-world data. We visualize the data and find that compared to the real-world data, generative data can expand the data distribution that the model can learn. Furthermore, we find that the role of adding generative data is to alleviate the bias of the real training data, effectively mitigating overfitting the training data. Second, we find that there are also discrepancies between the distribution of the generative data and the real-world data distribution. If these discrepancies are not handled properly, the full potential of the generative model cannot be utilized. By conducting several experiments, we find that using diverse generative data enables models to better adapt to these discrepancies, improving model performance. Based on the above analysis, we propose an efficient strategy for enhancing data diversity, namely, Generative Data Diversity Enhancement. We design various diversity enhancement strategies to increase data diversity from the perspectives of category diversity, prompt diversity, and generative model diversity. For category diversity, we observe that models trained with generative data covering all categories adapt better to distribution discrepancy than models trained with partial categories. Therefore, we introduce not only categories from LVIS <cit.> but also extra categories from ImageNet-1K <cit.> to enhance category diversity in data generation, thereby reinforcing the model's adaptability to distribution discrepancy. For prompt diversity, we find that as the scale of the generative dataset increases, manually designed prompts cannot scale up to the corresponding level, limiting the diversity of output images from the generative model. Thus, we design a set of diverse prompt generation strategies to use large language models, like ChatGPT, for prompt generation, requiring the large language models to output maximally diverse prompts under constraints. By combining manually designed prompts and ChatGPT designed prompts, we effectively enrich prompt diversity and further improve generative data diversity. For generative model diversity, we find that data from different generative models also exhibit distribution discrepancies. Exposing models to data from different generative models during training can enhance adaptability to different distributions. Therefore, we employ Stable Diffusion <cit.> and DeepFloyd-IF <cit.> to generate images for all categories separately and mix the two types of data during training to increase data diversity. 
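To make the prompt-diversity strategy concrete, the per-category prompt pool can be assembled roughly as follows. This is a minimal sketch of ours: the helper names are hypothetical, while the manual template and the "in a white background" constraint follow the paper.

# Assemble the prompt pool for one category (names of helpers are ours).
MANUAL_TEMPLATE = "a photo of a single {name}, {definition}"

def build_prompt_pool(name, definition, chatgpt_prompts=()):
    pool = [MANUAL_TEMPLATE.format(name=name, definition=definition)]
    pool.extend(chatgpt_prompts)   # diverse attribute-level descriptions from ChatGPT
    # Controllability constraint: a simple background eases later mask annotation.
    return [p.rstrip(".") + ", in a white background" for p in pool]

prompts = build_prompt_pool(
    "teapot",
    "a pot with a lid, spout, and handle, used for brewing tea",  # hypothetical definition
    chatgpt_prompts=["a glossy red ceramic teapot",
                     "a small stainless-steel teapot with a bamboo handle"],
)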
At the same time, we optimize the data generation workflow and propose a four-stage generative pipeline consisting of instance generation, instance annotation, instance filtration, and instance augmentation. In the instance generation stage, we employ our proposed Generative Data Diversity Enhancement to enhance data diversity, producing diverse raw data. In the instance annotation stage, we introduce an annotation strategy called SAM-background. This strategy uses background points as input prompts for SAM <cit.> to obtain high-quality annotations of the raw data. In the instance filtration stage, we introduce a metric called CLIP inter-similarity. Utilizing the CLIP <cit.> image encoder, we extract embeddings from generative and real data, and then compute their similarity. A lower similarity indicates lower data quality. After filtration, we obtain the final generative dataset. In the instance augmentation stage, we use the instance paste strategy <cit.> to increase model learning efficiency on generative data. Experiments demonstrate that our designed data diversity strategies can effectively improve model performance and maintain the trend of performance gains as the data scale increases to the million level, which enables large-scale generative data for data augmentation. On the LVIS dataset, DiverGen significantly outperforms the strong model X-Paste <cit.>, achieving +1.1 box AP <cit.> and +1.1 mask AP across all categories, and +1.9 box AP and +2.5 mask AP for rare categories. In summary, our main contributions are as follows: * We explain the role of generative data from the perspective of distribution discrepancy. We find that generative data can expand the data distribution that the model can learn, mitigating overfitting the training set, and that the diversity of generative data is crucial for improving model performance. * We propose the Generative Data Diversity Enhancement strategy to increase data diversity from the aspects of category diversity, prompt diversity, and generative model diversity. By enhancing data diversity, we can scale the data to millions while maintaining the trend of model performance improvement. * We optimize the data generation pipeline. We propose an annotation strategy SAM-background to obtain higher-quality annotations. We also introduce a filtration metric called CLIP inter-similarity to filter data and further improve the quality of the generative dataset. § RELATED WORK Instance segmentation. Instance segmentation is an important task in the field of computer vision and has been extensively studied. Unlike semantic segmentation, instance segmentation not only assigns a category to each pixel but also distinguishes different instances of the same category. Previously, the focus of instance segmentation research has primarily been on the design of model structures. Mask-RCNN <cit.> unifies the tasks of object detection and instance segmentation. Subsequently, Mask2Former <cit.> further unified the tasks of semantic segmentation and instance segmentation by leveraging the structure of DETR <cit.>. Orthogonal to these studies focusing on model architecture, our work primarily investigates how to better utilize generated data for this task. We focus on the challenging long-tail dataset LVIS <cit.> because it is only the long-tailed categories that face the issue of limited real data and require generative images for augmentation, making it more practically meaningful. Generative data augmentation.
The use of generative models to synthesize training data for assisting perception tasks such as classification <cit.>, detection <cit.>, segmentation <cit.>, etc. has received widespread attention from researchers. In the field of segmentation, early works <cit.> utilize generative adversarial networks (GANs) to synthesize additional training samples. With the rise of diffusion models, there have been numerous efforts <cit.> to utilize text2image diffusion models, such as Stable Diffusion <cit.>, to boost segmentation performance. <cit.> combine the Stable Diffusion model with a novel grounding module and establish an automatic pipeline for constructing a segmentation dataset. DiffuMask <cit.> exploits the potential of cross-attention maps between text and images to synthesize accurate semantic labels. More recently, FreeMask <cit.> uses a mask-to-image generation model to generate images conditioned on the provided semantic masks. However, the aforementioned works are only applicable to semantic segmentation. The most relevant work to ours is X-Paste <cit.>, which promotes instance segmentation by copy-pasting generative images and filtering them with a CLIP-based strategy <cit.>. In summary, most methods only demonstrate significant advantages when training data is extremely limited. They consider generative data as a means to compensate for data scarcity or class imbalance. However, in this work, we take a further step to examine and analyze this problem from the perspective of data distribution. We propose a pipeline that enhances diversity at multiple levels to alleviate the impact of data distribution discrepancies. This provides new insights and inspirations for further advancements in this field. § OUR PROPOSED DIVERGEN §.§ Analysis of Data Distribution Existing methods <cit.> often attribute the role of generative data to addressing class imbalance or data scarcity. In this paper, we provide an explanation for two main questions from the perspective of distribution discrepancy. Why does generative data augmentation enhance model performance? We argue that there exist discrepancies between the distribution that the model learns from the limited real training data and the distribution of real-world data. The role of adding generative data is to alleviate the bias of the real training data, effectively mitigating overfitting the training data. First, to intuitively understand the discrepancies between different data sources, we use the CLIP <cit.> image encoder to extract the embeddings of images from different data sources, and then use UMAP <cit.> to reduce dimensions for visualization. The visualization of data distributions of different sources is shown in Figure <ref>. Real-world data (LVIS <cit.> train and LVIS val) cluster near the center, while generative data (Stable Diffusion <cit.> and IF <cit.>) are more dispersed, indicating that generative data can expand the data distribution that the model can learn. Then, to characterize the distribution learned by the model, we employ the free energy formulation used by <cit.>. This formulation transforms the logits outputted by the classification head into an energy function. The formulation is shown below: F(q; h) = -τ log ∑_c = 1^n exp(h_c(q)/τ). Here, q is the feature of an instance, h_c(q) is the c-th logit outputted by the classification head h(·), n is the number of categories, and τ is the temperature parameter. We train one model using only the LVIS train set (θ_train), and another model using LVIS train with generative data (θ_gen).
Both models are evaluated on the LVIS val set, and we use instances that are successfully matched by both models to obtain energy values. Additionally, we train another model using LVIS val (θ_val), treating it as representative of the real-world data distribution. Then, we further fit Gaussian distributions to the histograms of energy values to obtain the mean μ and standard deviation σ for each model, and compute the KL divergence <cit.> between them, which for two Gaussians has the closed form D_KL(N(μ_1,σ_1^2) ‖ N(μ_2,σ_2^2)) = log(σ_2/σ_1) + (σ_1^2+(μ_1-μ_2)^2)/(2σ_2^2) - 1/2. D_KL(p_θ_train ‖ p_θ_val) is 0.063, and D_KL(p_θ_gen ‖ p_θ_val) is 0.019. The latter is lower, indicating that using generative data mitigates the bias of the limited real training data. Moreover, we also analyze the role of generative data from a metric perspective. We randomly select up to five images per category to form a minitrain set and then conduct inference using θ_train and θ_gen. Then, we define a metric, termed train-val gap (TVG), which is formulated as follows: TVG_w^k = AP_w^k,minitrain - AP_w^k,val. Here, TVG_w^k is the train-val gap of category group w on task k, AP_w^k,d is the AP <cit.> of category group w on task k obtained on dataset d, w ∈ {f, c, r}, with f, c, r standing for frequent, common, and rare <cit.>, respectively, and k ∈ {box, mask}, with box and mask referring to object detection and instance segmentation. The train-val gap serves as a measure of the disparity in the model's performance between the training and validation sets. A larger gap indicates a higher degree of overfitting the training set. The results, as presented in Table <ref>, show that the metrics for the rare categories consistently surpass those of the frequent and common ones. This observation suggests that the model tends to overfit more on the rare categories that have fewer examples. With the augmentation of generative data, all TVG values of θ_gen are lower than those of θ_train, showing that adding generative data can effectively alleviate overfitting the training data. What types of generative data are beneficial for improving model performance? We argue that there are also discrepancies between the distribution of the generative data and the real-world data distribution. If these discrepancies are not properly addressed, the full potential of the generative model cannot be attained. We divide the generative data into `frequent', `common', and `rare' <cit.> groups, and train three models using each group of data as the instance paste source. The inference results are shown in Table <ref>. We find that the metrics on the corresponding category subset are lowest when training with only one group of data. We consider model performance to be primarily influenced by the quality and diversity of data. Given that the quality of generative data is relatively consistent, we contend that insufficient diversity in the data can mislead the distribution that the model can learn, whereas the model obtains a more comprehensive understanding from a diverse set of data. Therefore, we believe that using diverse generative data enables models to better adapt to these discrepancies, improving model performance. §.§ Generative Data Diversity Enhancement Through the analysis above, we find that the diversity of generative data is crucial for improving model performance. Therefore, we design a series of strategies to enhance data diversity at three levels: category diversity, prompt diversity, and generative model diversity, which help the model better adapt to the distribution discrepancy between generative data and real data. Category diversity.
The above experiments show that including data from partial categories results in lower performance than incorporating data from all categories. We believe that, akin to human learning, the model can learn features beneficial to the current category from some other categories. Therefore, we consider increasing the diversity of data by adding extra categories. First, we select extra categories besides LVIS from the ImageNet-1K <cit.> categories based on WordNet <cit.> similarity. Then, the generative data from LVIS and extra categories are mixed for training, requiring the model to learn to distinguish all categories. Finally, we truncate the parameters in the classification head corresponding to the extra categories during inference, ensuring that the inferred category range remains within LVIS. Prompt diversity. The output images of a text2image generative model typically rely on the input prompts. Existing methods <cit.> usually generate prompts by manually designing templates, such as "a photo of a single {category_name}." When the data scale is small, designing prompts manually is convenient and fast. However, when generating a large scale of data, it is challenging to scale the number of manually designed prompts correspondingly. Intuitively, it is essential to diversify the prompts to enhance data diversity. To easily generate a large number of prompts, we choose a large language model, such as ChatGPT, to enhance prompt diversity. We have three requirements for the large language model: 1) each prompt should be as different as possible; 2) each prompt should ensure that there is only one object in the image; 3) prompts should describe different attributes of the category. For example, if the category is food, prompts should cover attributes like color, brand, size, freshness, packaging type, packaging color, etc. Limited by the inference cost of ChatGPT, we use the manually designed prompts as the base and only use ChatGPT to enhance the prompt diversity for a subset of categories. Moreover, we also leverage the controllability of the generative model, adding the constraint "in a white background" after each prompt to make the background of output images simple and clear, which reduces the difficulty of mask annotation. Generative model diversity. The quality and style of output images vary across generative models, and the data distribution learned solely from one generative model's data is limited. Therefore, we introduce multiple generative models to enhance the diversity of data, allowing the model to learn from wider data distributions. We select two commonly used generative models, Stable Diffusion <cit.> (SD) and DeepFloyd-IF <cit.> (IF). We use Stable Diffusion V1.5, generating images with a resolution of 512 × 512, and use images output from Stage II of IF with a resolution of 256 × 256. For each category in LVIS, we generate 1k images with each of the two models separately. Examples from different generative models are shown in Figure <ref>. §.§ Generative Pipeline The generative pipeline of DiverGen is built upon X-Paste <cit.>. It can be divided into four stages: instance generation, instance annotation, instance filtration and instance augmentation. The overview of DiverGen is illustrated in Figure <ref>. Instance generation. Instance generation is a crucial stage for enhancing data diversity. In this stage, we employ our proposed Generative Data Diversity Enhancement (GDDE), as mentioned in Sec <ref>.
In category diversity enhancement, we utilize the category information from the LVIS <cit.> categories and the extra categories selected from ImageNet-1K <cit.>. In prompt diversity enhancement, we utilize manually designed prompts and ChatGPT designed prompts to enhance prompt diversity. In model diversity enhancement, we employ two generative models, SD and IF. Instance annotation. We employ SAM <cit.> as our annotation model. SAM is a class-agnostic promptable segmenter that outputs corresponding masks based on input prompts, such as points, boxes, etc. In instance generation, leveraging the controllability of the generative model, the generative images have two characteristics: 1) each image predominantly contains only one foreground object; 2) the background of the images is relatively simple. Therefore, we introduce a SAM-background (SAM-bg) annotation strategy. SAM-bg takes the four corner points of an image as input prompts for SAM to obtain the background mask, then inverts the background mask as the mask of the foreground object. Due to the conditional constraints during the instance generation stage, this strategy is simple but effective in producing high-quality masks. Instance filtration. In the instance filtration stage, X-Paste utilizes the CLIP score (similarity between images and text) as the metric for image filtering. However, we observe that the CLIP score is ineffective in filtering low-quality images. In contrast to the similarity between images and text, we think the similarity between images can better filter out low-quality images. Therefore, we propose a new metric called CLIP inter-similarity. We use the image encoder of CLIP <cit.> to extract image embeddings for objects in the training set and for generative images, then calculate the similarity between them. If the similarity is too low, it indicates a significant disparity between the generative and real images, suggesting that the image is probably of poor quality and needs to be filtered. Instance augmentation. We use the augmentation strategy proposed by X-Paste <cit.>, but we do not use data retrieved from the network or instances in the LVIS <cit.> training set as the paste data source; we only use the generative data as the paste data source. § EXPERIMENTS §.§ Settings Datasets. We choose LVIS <cit.> for our experiments. LVIS is a large-scale instance segmentation dataset, containing 164k images with approximately two million high-quality annotations of instance segmentation and object detection. The LVIS dataset uses images from the COCO 2017 <cit.> dataset, but redefines the train/val/test splits, with around 100k images in the training set and around 20k images in the validation set. The annotations in LVIS cover 1,203 categories, with a typical long-tailed distribution of categories, so LVIS further divides the categories into frequent, common, and rare based on the frequency of each category in the dataset. We use the official LVIS training split and the validation split. Evaluation metrics. The evaluation metrics are LVIS box average precision (AP^box) and mask average precision (AP^mask). We also provide the average precision of rare categories (AP_r^box and AP_r^mask). The maximum number of detections per image is 300. Implementation details. We use CenterNet2 <cit.> as the baseline and Swin-L <cit.> as the backbone. In the training process, we initialize the parameters with the pre-trained Swin-L weights provided by <cit.>. The training size is 896 and the batch size is 16.
The maximum number of training iterations is 180,000, with an initial learning rate of 0.0001. We use the instance paste strategy provided by <cit.>. §.§ Main Results Data diversity is more important than quantity. To investigate the impact of different scales of generative data, we use generative data of varying scales as paste data sources. We construct three datasets using only DeepFloyd-IF <cit.> with manually designed prompts, all containing the original LVIS 1,203 categories, but with per-category quantities of 0.25k, 0.5k, and 1k, resulting in total dataset scales of 300k, 600k, and 1,200k. As shown in Table <ref>, we find that using generative data improves model performance compared to the baseline. However, as the dataset scale increases, the model performance initially improves but then declines. The model performance using 1,200k data is lower than that using 600k data. Due to the limited number of manually designed prompts, the generative model produces similar data, as shown in Figure <ref>. Consequently, the model cannot gain benefits from more data. However, when using our proposed Generative Data Diversity Enhancement (GDDE), due to the increased data diversity, the model trained with 1,200k images achieves better results than the one using 600k images, with an improvement of 1.21 box AP and 1.04 mask AP. Moreover, at the same data scale of 600k, using GDDE increases box AP by 0.55 and mask AP by 0.64 compared to not using it. The results demonstrate that data diversity is more important than quantity. When the scale of data is small, increasing the quantity of data can improve model performance, which we consider an indirect way of increasing data diversity. However, this simplistic approach of solely increasing quantity to increase diversity has an upper limit. When it reaches this limit, explicit data diversity enhancement strategies become necessary to maintain the trend of model performance improvement. Comparison with previous methods. We compare DiverGen with previous data-augmentation-related methods in Table <ref>. Compared to the baseline CenterNet2 <cit.>, our method significantly improves performance, increasing box AP by +3.7 and mask AP by +3.2. For rare categories, our method surpasses the baseline by +8.7 box AP and +9.0 mask AP. Compared to the previous strong model X-Paste <cit.>, we outperform it by +1.1 box AP and +1.1 mask AP across all categories, and by +1.9 box AP and +2.5 mask AP on rare categories. It is worth mentioning that X-Paste utilizes both generative data and web-retrieved data as paste data sources during training, while our method exclusively uses generative data as the paste data source. We achieve this by designing diversity enhancement strategies, further unlocking the potential of generative models. §.§ Ablation Studies We analyze the effects of the proposed strategies in DiverGen through a series of ablation studies using the Swin-L <cit.> backbone. Effect of category diversity. We select 50, 250, and 566 extra categories from ImageNet-1K <cit.>, and generate 0.5k images for each category, which are added to the baseline. The baseline only uses the 1,203 categories of LVIS <cit.> to generate data. We show the results in Table <ref>. Generally, model performance first improves and then declines as the number of extra categories increases, peaking at 250 extra categories.
The trend suggests that using extra categories to enhance category diversity can improve the model's generalization capabilities, but too many extra categories may mislead the model, leading to a decrease in performance. Effect of prompt diversity. We select a subset of categories and use ChatGPT to generate 32 and 128 prompts for each category, with each prompt used to generate 8 and 2 images, respectively, ensuring that the image count for each category is 0.25k. The baseline uses only one prompt per category to generate 0.25k images. The regenerated images replace the corresponding categories in the baseline to ensure that the final data scale is consistent. The results are presented in Table <ref>. With the increase in prompt diversity, there is a continuous improvement in model performance, indicating that prompt diversity is indeed beneficial for enhancing model performance. Effect of generative model diversity. We choose two commonly used generative models, Stable Diffusion <cit.> (SD) and DeepFloyd-IF <cit.> (IF). We generate 1k images per category for each generative model, totaling 1,200k. When using a mixed dataset (SD + IF), we take 600k images from SD and 600k from IF (0.5k per category from each model) to ensure that the total dataset scale is consistent. The baseline does not use any generative data (none). As shown in Table <ref>, using data generated by either SD or IF alone can improve performance, and further mixing the generative data of both leads to significant performance gains. This demonstrates that increasing generative model diversity is beneficial for improving model performance. Effect of annotation strategy. X-Paste <cit.> uses four models (U2Net <cit.>, SelfReformer <cit.>, UFO <cit.> and CLIPseg <cit.>) to generate masks and selects the one with the highest CLIP score. We compare our proposed annotation strategy (SAM-bg) to that proposed by X-Paste (max CLIP). In Table <ref>, SAM-bg outperforms the max CLIP strategy across all metrics, indicating that our proposed strategy produces better annotations that improve model performance. As shown in Figure <ref>, SAM-bg unlocks the potential capability of SAM, obtaining precise and refined masks. Effect of CLIP inter-similarity. We compare our proposed CLIP inter-similarity to the CLIP score <cit.>. The results are shown in Table <ref>. The performance of data filtered by CLIP inter-similarity is higher than that of the CLIP score, demonstrating that CLIP inter-similarity can filter low-quality images more effectively. § CONCLUSIONS In this paper, we explain the role of generative data augmentation from the perspective of data distribution discrepancies and find that generative data can expand the data distribution that the model can learn, mitigating overfitting the training set. Furthermore, we find that the diversity of generative data is crucial for improving model performance. Therefore, we design an efficient data diversity enhancement strategy, Generative Data Diversity Enhancement. We design various diversity enhancement strategies to increase data diversity from the aspects of category diversity, prompt diversity, and generative model diversity. Finally, we optimize the data generation pipeline by designing the annotation strategy SAM-background to obtain higher-quality annotations and introducing the metric CLIP inter-similarity to filter data, which further improves the quality of the generative dataset. Through these designed strategies, our proposed method significantly outperforms existing strong models.
We hope DiverGen can provide new insights and inspirations for future research on the effectiveness and efficiency of generative data augmentation. §.§ Acknowledgments This work was in part supported by the National Key R&D Program of China (No. 2022ZD0118700). § APPENDIX § IMPLEMENTATION DETAILS §.§ Data Distribution Analysis We use the image encoder of CLIP <cit.> ViT-L/14 to extract image embeddings. For objects in the LVIS <cit.> dataset, we extract embeddings from the object regions instead of the whole images. First, we blur the regions outside the object masks using the normalized box filter, with a kernel size of (10, 10). Then, to prevent objects from being too small, we pad around the object boxes to ensure that the minimum width of the padded boxes is 80 pixels, and crop the images according to the padded boxes. Finally, the cropped images are fed into the CLIP image encoder to extract embeddings. For generative images, the whole images are fed into the CLIP image encoder to extract embeddings. At last, we use UMAP <cit.> to reduce dimensions for visualization. τ is set to 0.9 in the energy function. To investigate the potential impact of noise in the rare classes on the TVG metrics, we conduct additional experiments to demonstrate the validity of TVG. We randomly take five different models each for the LVIS and LVIS + Gen data sources, compute the mean (μ) and standard deviation (σ) of their TVG, and calculate the 3-sigma range (μ+3σ and μ-3σ), which we think represents the maximum fluctuation that potential noise could induce. As shown in Table <ref>, we find that: 1) The TVGs of LVIS all exceed the 3-sigma upper bound of LVIS + Gen, while the TVGs of LVIS + Gen are all below the 3-sigma lower bound of LVIS, and there is no overlap between the 3-sigma ranges of LVIS and LVIS + Gen; 2) For both LVIS + Gen and LVIS, there is no overlap between the 3-sigma ranges of different groups, e.g. frequent and common, common and rare. These two findings suggest that even in the presence of potential noise, the results cannot be attributed to those fluctuations. Therefore, we think our proposed TVG metrics are reasonable and can support the conclusions. §.§ Category Diversity We compute the path similarity of WordNet <cit.> synsets between the 1,000 categories in ImageNet-1K <cit.> and the 1,203 categories in LVIS <cit.>. For each of the 1,000 categories in ImageNet-1K, if the highest similarity for that category is below 0.4, we consider the category to be absent from LVIS and designate it as an extra category. Based on this method, 566 categories can serve as extra categories. The names of these 566 categories are presented in Table <ref>. §.§ Prompt Diversity Limited by the inference cost of ChatGPT, we use the manually designed prompts as the base and only use ChatGPT to enhance the prompt diversity for a subset of categories. For manually designed prompts, the template is "a photo of a single {category_name}, {category_def}, in a white background", where category_name and category_def are from the LVIS <cit.> category information. For ChatGPT designed prompts, we select a subset of categories and use ChatGPT to enhance prompt diversity for these categories. The names of the 144 categories in this subset are shown in Table <ref>. We use GPT-3.5-turbo and have three requirements for ChatGPT: 1) each prompt should be as different as possible; 2) each prompt should ensure that there is only one object in the image; 3) prompts should describe different attributes of the category.
Therefore, the input prompts to ChatGPT contain these three requirements. Examples of input prompts and the corresponding responses from ChatGPT are illustrated in Figure <ref>. To conserve output token length, there is no strict requirement for ChatGPT designed prompts to end with "in a white background"; this constraint is added when generating images. §.§ Generative Model Diversity We select two commonly used generative models, Stable Diffusion <cit.> and DeepFloyd-IF <cit.>. For Stable Diffusion, we use Stable Diffusion V1.5, with 50 inference steps and a guidance scale of 7.5. All other parameters are set to their defaults. For DeepFloyd-IF, we use the output images from stage II, with stage I using the weight IF-I-XL-v1.0 and stage II using IF-II-L-v1.0. All parameters are set to their defaults. §.§ Instance Annotation We employ SAM <cit.> ViT-H as the annotation model. We explore two annotation strategies, namely SAM-foreground and SAM-background. SAM-foreground uses points sampled from foreground objects as input prompts. Specifically, we first obtain the approximate region of the foreground object based on the cross-attention map of the generative model using a threshold. Then, we use k-means++ <cit.> clustering to transform the dense points within the foreground region into cluster centers. Next, we randomly select some points from the cluster centers as inputs to SAM. We use various metrics to evaluate the quality of the output masks and select the mask with the highest score as the final mask. However, although SAM-foreground is intuitive, it also has some limitations. Firstly, the cross-attention maps of different categories require different thresholds to obtain foreground regions, making it cumbersome to choose the optimal threshold for each category. Secondly, the number of points required for SAM to output a good mask varies across foreground objects. Complex objects need more points than simple ones, making it challenging to control the number of points. Additionally, the positions of the points significantly influence the quality of SAM's output mask. If the positions of the points are not appropriate, this strategy is prone to generating incomplete masks. Therefore, we discard SAM-foreground and propose a simpler and more effective annotation strategy, SAM-background. Because we leverage the controllability of the generative model in instance generation, the generative images have two characteristics: 1) each image predominantly contains only one foreground object; 2) the background of the images is relatively simple. SAM-background directly uses the four corner points of the image as input prompts for SAM to obtain the background mask, then inverts the background mask to obtain the mask of the foreground object. The illustrations of point selection for SAM-foreground and SAM-background are shown in Figure <ref>. By using SAM-background for annotation, more refined masks can be obtained. Examples of annotations from SAM-foreground and SAM-background are shown in Figure <ref>. To further validate the effectiveness of SAM-background, we manually annotate masks for some images as ground truth (gt). We apply both strategies to annotate these images and calculate the mIoU between the resulting masks and the ground truth. The results in Table <ref> indicate that SAM-background achieves better annotation quality. §.§ Instance Filtration We use the image encoder of CLIP <cit.> ViT-L/14 to extract image embeddings. The embedding extraction process is consistent with Sec <ref>.
Then we calculate the cosine similarity between the embeddings of objects in the LVIS training set and the embeddings of generative images. For each generative image, the final CLIP inter-similarity is the average similarity to all objects of the same category in the training set. Through experiments, we find that a filtering threshold of 0.6 gives the best performance and strikes a balance between data diversity and quality, so we set the threshold to 0.6. Furthermore, we also explore other filtration strategies. In our experiments, using purely image-trained models like DINOv2 <cit.> as the image encoder, or combining the CLIP score with CLIP inter-similarity, is not as good as using CLIP inter-similarity alone, as shown in Table <ref>. Therefore, we ultimately opt to use only CLIP inter-similarity. §.§ Instance Augmentation In instance augmentation, we use the instance paste strategy proposed by <cit.> to increase the model's learning efficiency on generative data. Each image contains up to 20 pasted instances. The parameters not specified in the paper are consistent with X-Paste <cit.>. § VISUALIZATION §.§ Prompt Diversity We find that images generated from ChatGPT designed prompts have diverse textures, styles, patterns, etc., greatly enhancing data diversity. The ChatGPT designed prompts and the corresponding generative images are shown in Figure <ref>. Compared to manually designed prompts, the diversity of images generated from ChatGPT designed prompts is significantly improved. A visual comparison between generative images from manually designed prompts and ChatGPT designed prompts is shown in Figure <ref>. §.§ Generative Model Diversity The images generated by Stable Diffusion and DeepFloyd-IF are different, even within the same category, significantly enhancing data diversity. Both Stable Diffusion and DeepFloyd-IF are capable of producing images belonging to the target categories. However, the images generated by DeepFloyd-IF appear more photorealistic and more consistent with the prompt texts. This indicates DeepFloyd-IF's superiority in image generation quality and controllability through text prompts. Examples from Stable Diffusion and DeepFloyd-IF are shown in Figure <ref> and Figure <ref>, respectively. §.§ Instance Annotation In terms of annotation quality, masks generated by max CLIP <cit.> tend to be incomplete, while our proposed SAM-bg is able to produce more refined and complete masks when processing images of multiple categories. As shown in Figure <ref>, our proposed annotation strategy outputs more precise and refined masks compared to max CLIP. §.§ Instance Augmentation The use of instance augmentation strategies helps alleviate the limitation of relatively simple scenes in generative data and improves the efficiency of model learning on the generative data. Examples of augmented data are shown in Figure <ref>.
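For reference, the filtration step described above reduces to a few lines once the CLIP embeddings are extracted. The following Python sketch (ours; the array conventions are assumptions) computes the CLIP inter-similarity of each generative image as its average cosine similarity to the same-category training-set object embeddings, and keeps the image only if the value reaches the 0.6 threshold.

import numpy as np

def clip_inter_similarity(gen_embed, real_embeds_same_cat):
    # gen_embed: (d,) CLIP image embedding of one generative image.
    # real_embeds_same_cat: (n, d) embeddings of same-category LVIS objects.
    g = gen_embed / np.linalg.norm(gen_embed)
    r = real_embeds_same_cat / np.linalg.norm(real_embeds_same_cat, axis=1, keepdims=True)
    return float((r @ g).mean())          # average cosine similarity

def filter_generative_set(gen_embeds, categories, real_embeds_by_cat, thresh=0.6):
    keep = []
    for i, (e, c) in enumerate(zip(gen_embeds, categories)):
        if clip_inter_similarity(e, real_embeds_by_cat[c]) >= thresh:
            keep.append(i)                # below-threshold images are discarded
    return keep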
Extra categories from ImageNet-1K:
tench great_white_shark tiger_shark electric_ray stingray brambling goldfinch house_finch junco indigo_bunting American_robin bulbul jay magpie chickadee American_dipper kite_(bird_of_prey) fire_salamander smooth_newt newt spotted_salamander axolotl American_bullfrog loggerhead_sea_turtle leatherback_sea_turtle banded_gecko green_iguana Carolina_anole desert_grassland_whiptail_lizard agama frilled-necked_lizard alligator_lizard Gila_monster European_green_lizard chameleon Komodo_dragon Nile_crocodile triceratops worm_snake ring-necked_snake eastern_hog-nosed_snake smooth_green_snake kingsnake garter_snake water_snake vine_snake night_snake boa_constrictor African_rock_python Indian_cobra green_mamba Saharan_horned_viper eastern_diamondback_rattlesnake sidewinder_rattlesnake trilobite harvestman scorpion tick centipede black_grouse ptarmigan ruffed_grouse prairie_grouse peafowl quail partridge sulphur-crested_cockatoo lorikeet coucal bee_eater hornbill jacamar toucan red-breasted_merganser black_swan tusker echidna platypus wallaby wombat jellyfish sea_anemone brain_coral flatworm nematode conch snail slug sea_slug chiton chambered_nautilus American_lobster crayfish hermit_crab isopod white_stork black_stork spoonbill great_egret crane_bird limpkin common_gallinule American_coot bustard ruddy_turnstone dunlin common_redshank dowitcher oystercatcher albatross grey_whale dugong sea_lion Chihuahua Japanese_Chin Maltese Pekingese Shih_Tzu King_Charles_Spaniel Papillon toy_terrier Rhodesian_Ridgeback Afghan_Hound Basset_Hound Beagle Bloodhound Bluetick_Coonhound Black_and_Tan_Coonhound Treeing_Walker_Coonhound English_foxhound Redbone_Coonhound borzoi Irish_Wolfhound Italian_Greyhound Whippet Ibizan_Hound Norwegian_Elkhound Otterhound Saluki Scottish_Deerhound Weimaraner Staffordshire_Bull_Terrier American_Staffordshire_Terrier Bedlington_Terrier Border_Terrier Kerry_Blue_Terrier Irish_Terrier Norfolk_Terrier Norwich_Terrier Yorkshire_Terrier Wire_Fox_Terrier Lakeland_Terrier Sealyham_Terrier Airedale_Terrier Cairn_Terrier Australian_Terrier Dandie_Dinmont_Terrier Boston_Terrier Miniature_Schnauzer Giant_Schnauzer Standard_Schnauzer Scottish_Terrier Tibetan_Terrier Australian_Silky_Terrier Soft-coated_Wheaten_Terrier West_Highland_White_Terrier Lhasa_Apso Flat-Coated_Retriever Curly-coated_Retriever Golden_Retriever Labrador_Retriever Chesapeake_Bay_Retriever German_Shorthaired_Pointer Vizsla English_Setter Irish_Setter Gordon_Setter Brittany_dog Clumber_Spaniel English_Springer_Spaniel Welsh_Springer_Spaniel Cocker_Spaniel Sussex_Spaniel Irish_Water_Spaniel Kuvasz Schipperke Groenendael_dog Malinois Dobermann Miniature_Pinscher Greater_Swiss_Mountain_Dog Bernese_Mountain_Dog Appenzeller_Sennenhund Entlebucher_Sennenhund Boxer Bullmastiff Tibetan_Mastiff Great_Dane St._Bernard husky Alaskan_Malamute Siberian_Husky Affenpinscher Samoyed Pomeranian Chow_Chow Keeshond brussels_griffon Pembroke_Welsh_Corgi Cardigan_Welsh_Corgi Toy_Poodle Miniature_Poodle Standard_Poodle dingo dhole African_wild_dog hyena red_fox kit_fox Arctic_fox grey_fox tabby_cat tiger_cat Persian_cat Siamese_cat Egyptian_Mau lynx leopard snow_leopard jaguar cheetah mongoose meerkat dung_beetle rhinoceros_beetle fly bee ant grasshopper cricket_insect stick_insect praying_mantis cicada leafhopper lacewing damselfly red_admiral_butterfly monarch_butterfly small_white_butterfly sea_urchin sea_cucumber hare fox_squirrel guinea_pig wild_boar warthog ox water_buffalo bison bighorn_sheep Alpine_ibex hartebeest impala_(antelope) llama weasel mink black-footed_ferret otter skunk badger armadillo three-toed_sloth orangutan chimpanzee gibbon siamang guenon patas_monkey macaque langur black-and-white_colobus proboscis_monkey marmoset white-headed_capuchin howler_monkey titi_monkey Geoffroy's_spider_monkey common_squirrel_monkey ring-tailed_lemur indri red_panda snoek_fish eel rock_beauty_fish clownfish sturgeon gar_fish lionfish academic_gown accordion aircraft_carrier altar apiary assault_rifle bakery balance_beam baluster_or_handrail barbershop barn barometer bassinet bassoon lighthouse bell_tower baby_bib boathouse bookstore breakwater breastplate butcher_shop carousel tool_kit automated_teller_machine cassette_player castle catamaran cello chain chain-link_fence chainsaw chiffonier Christmas_stocking church movie_theater cliff_dwelling cloak clogs spiral_or_coil candy_store cradle construction_crane croquet_ball cuirass dam desktop_computer disc_brake dock dome drilling_rig electric_locomotive entertainment_center face_powder fire_screen flute fountain French_horn gas_pump golf_ball gong greenhouse radiator_grille grocery_store guillotine hair_spray half-track hand-held_computer hard_disk_drive harmonica harp combine_harvester holster home_theater honeycomb hook gymnastic_horizontal_bar jigsaw_puzzle knot lens_cap library lifeboat lighter lipstick lotion loupe_magnifying_glass sawmill messenger_bag maraca marimba mask matchstick maypole maze megalith military_uniform missile mobile_home modem monastery monitor moped mortar_and_pestle mosque mosquito_net tent mousetrap moving_van muzzle metal_nail neck_brace notebook_computer obelisk oboe ocarina odometer oil_filter pipe_organ oscilloscope oxygen_mask palace pan_flute parallel_bars patio pedestal photocopier plectrum Pickelhaube picket_fence pier pirate_ship block_plane planetarium plastic_bag plate_rack plunger police_van prayer_rug prison hockey_puck punching_bag purse radio radio_telescope rain_barrel fishing_casting_reel restaurant rugby_ball safe scabbard schooner CRT_monitor seat_belt shoe_store shoji_screen_or_room_divider balaclava_ski_mask slide_rule sliding_door slot_machine snorkel keyboard_space_bar spatula motorboat spider_web spindle stage steam_locomotive through_arch_bridge steel_drum stethoscope stone_wall tram stretcher stupa submarine sundial sunglasses sunscreen suspension_bridge swing tape_player television thatched_roof threshing_machine throne tile_roof tobacco_shop toilet_seat torch totem_pole toy_store trimaran triumphal_arch trombone turnstile typewriter_keyboard vaulted_or_arched_ceiling velvet_fabric vestment viaduct sink whiskey_jug whistle window_screen window_shade airplane_wing wool split-rail_fence shipwreck sailboat yurt website crossword dust_jacket menu plate guacamole trifle baguette cabbage broccoli spaghetti_squash acorn_squash butternut_squash cardoon mushroom Granny_Smith_apple jackfruit cherimoya_(custard_apple) pomegranate hay carbonara chocolate_syrup dough meatloaf pot_pie red_wine espresso tea_cup eggnog mountain bubble cliff coral_reef geyser lakeshore promontory sandbar beach valley volcano baseball_player bridegroom scuba_diver rapeseed daisy yellow_lady's_slipper corn acorn rose_hip horse_chestnut_seed coral_fungus gyromitra stinkhorn_mushroom earth_star_fungus hen_of_the_woods_mushroom bolete corn_cob
Categories of ChatGPT designed prompts:
Bible pirate_flag bookmark bow_(weapon) bubble_gum elevator_car chocolate_mousse compass corkboard cougar cream_pitcher cylinder dollar dolphin eyepatch fruit_juice golf_club handcuff hockey_stick popsicle pan_(metal_container) pew_(church_bench) piggy_bank pistol road_map satchel sawhorse shawl sparkler_(fireworks) spider string_cheese Tabasco_sauce turtleneck_(clothing) violin waffle_iron whistle wind_chime headstall_(for_horses) fishing_rod coat_hanger clasp crab_(animal) flamingo stirrup machine_gun pin_(non_jewelry) spear drumstick cornet bottle_opener easel dumbbell garden_hose money saddle_(on_an_animal) garbage windshield_wiper needle liquor bamboo armor pretzel tongs ski_pole frog hairpin tripod flagpole hose belt_buckle streetlight coleslaw antenna hook Lego thumbtack coatrack plow_(farm_equipment) vinegar strap poker_(fire_stirring_tool) cufflink chopstick salad dragonfly musical_instrument sharpener bat_(animal) lanyard mat_(gym_equipment) gargoyle underdrawers paperback_book razorblade earring sword shovel turkey_(food) ambulance pencil weathervane trampoline applesauce jam ski tray tissue_paper lamppost clipboard router_(computer_equipment) battery lollipop crayon latch fig_(fruit) sunglasses toothpick business_card padlock asparagus shot_glass sled key bolt pipe steering_wheel deck_chair green_bean pouch telephone_pole fire_hose ladle pliers hair_curler handle screwdriver dining_table cart oar wolf envelope legume shopping_cart trench_coat
http://arxiv.org/abs/2405.09867v1
20240516074331
Haro 5-2: A New Pre-Main Sequence Quadruple Stellar System
[ "Bo Reipurth", "C. Briceno", "T. R. Geballe", "C. Baranec", "S. Mikkola", "A. M. Cody", "M. S. Connelley", "C. Flores", "B. A. Skiff", "J. D. Armstrong", "N. M. Law", "R. Riddle" ]
astro-ph.SR
[ "astro-ph.SR" ]
Bo Reipurth (0000-0001-8174-1932), Institute for Astronomy, University of Hawaii at Manoa, 640 N. Aohoku Place, HI 96720, and Planetary Science Institute, 1700 E Fort Lowell Rd, Suite 106, Tucson, AZ 85719
C. Briceño (0000-0001-7124-4094), NSF's NOIRLab/Cerro Tololo Inter-American Observatory, Casilla 603, La Serena, Chile
T. R. Geballe (0000-0003-2824-3875), Gemini Observatory, 670 North Aohoku Place, Hilo, HI 96720
C. Baranec (0000-0002-1917-9157), Institute for Astronomy, University of Hawaii at Manoa, 640 N. Aohoku Place, HI 96720
S. Mikkola (0000-0003-1448-8767), Department of Physics and Astronomy, University of Turku, Yliopistonmäki (Vesilinnantie 5), Finland
A. M. Cody (0000-0002-3656-6706), SETI Institute, 339 N Bernardo Ave, Suite 200, Mountain View, CA 94043
M. S. Connelley (0000-0002-8293-1428), Institute for Astronomy, University of Hawaii at Manoa, 640 N. Aohoku Place, HI 96720; Staff Astronomer at the Infrared Telescope Facility, which is operated by the University of Hawaii under contract NNH14CK55B with the National Aeronautics and Space Administration
C. Flores (0000-0002-8591-472X), Institute of Astronomy and Astrophysics, Academia Sinica, 11F of AS/NTU Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
B. A. Skiff (0000-0001-5306-6220), Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001
J. D. Armstrong, Institute for Astronomy, University of Hawaii at Manoa, 34 Ohia Ku St., Pukalani, HI 96768
N. M. Law (0000-0001-9380-6457), Department of Physics and Astronomy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3255
R. Riddle (0000-0002-0387-370X), Cahill Center for Astrophysics, California Institute of Technology, 1216 East California Boulevard, Pasadena, CA 91125
Corresponding author: Bo Reipurth, reipurth@hawaii.edu
We have discovered that the Hα emission line star Haro 5-2, located in the 3-6 Myr old Ori OB1b association, is a young quadruple system. The system has a 2+2 configuration with an outer separation of 2.6 arcseconds and with resolved subarcsecond inner binary components. The brightest component, Aa, dominates the A-binary; it is a weak-line T Tauri star with spectral type M2.5±1. The two stars of the B component are equally bright at J, but the Bb star is much redder. Optical spectroscopy of the combined B pair shows a rich emission line spectrum with an M3±1 spectral type. The spectrum is highly variable and switches back and forth between that of a classical and a weak-line T Tauri star. In the near-infrared, the spectrum shows Paschen β and Brackett γ in emission, indicative of active accretion. A significant mid-infrared excess reveals the presence of circumstellar or circumbinary material in the system. Most multiple systems are likely formed during the protostellar phase, involving flybys of neighboring stars followed by an in-spiraling phase driven by accretion from circumbinary material and leading to compact sub-systems. However, Haro 5-2 stands out among young 2+2 quadruples in that the two inner binaries are unusually wide relative to the separation of the A and B pair, allowing future studies of the individual components. Assuming the components are coeval, the system could potentially allow stringent tests of PMS evolutionary models. § INTRODUCTION The majority of stars are born as part of binary or multiple systems; for reviews see <cit.>, <cit.>, and <cit.>. It has even been suggested that all stars are born in binaries or multiples, and that the distribution of singles, binaries, triples, etc.
observed in the field is the result of subsequent dynamical interactions <cit.>. This is consistent with simulations <cit.> and observations, which show an excess of binaries among T Tauri stars compared to the field <cit.> and an even higher multiplicity fraction among embedded protostars <cit.>. While triple stellar systems have been studied extensively, both observationally and numerically, quadruple systems have received much less attention. It is well established that a non-hierarchical quadruple system is unstable, and will quickly either transform into a hierarchical 2+1+1 or 2+2 configuration or break up by ejecting a single or binary star. Many quadruple systems are known, most of them found in recent years through major sky surveys <cit.>. Most famous of all is ϵ Lyrae, a system of 4 similar A-type stars, which was first noticed by William Herschel[Herschel writes about ϵ Lyrae on August 29, 1778: ”A very curious double-double star. At first sight it appears double at some considerable distance, and by attending a little we see that each of the stars is a very delicate double star” - Pratt & Gledhill (1880)]. Detailed observational studies of quadruple systems have until recently been limited, reflecting the rarity of such systems. With the advent of deep all-sky surveys, the number of known quadruple systems is rapidly increasing, e.g., <cit.>, <cit.>. Although such surveys have inherent biases, they still offer new insights into the properties of quadruple systems, in particular the double eclipsing systems <cit.>. A statistical study of multiplicity in F-G stars was performed by <cit.>, who found the following percentages for singles, binaries, triples, and quadruples: 54:33:8:4. Among these quadruples, the number of 2+2 systems is higher than the number of 2+1+1 systems. <cit.> and <cit.> also found a preponderance of 2+2 systems. An understanding of this difference must be sought in the early stages of evolution of quadruple systems shortly after they are formed, and hence the study of newborn quadruple systems is important. Only a few 2+2 quadruple pre-main sequence systems are known, among others HD 98800 <cit.>, FV Tau and J4872 <cit.>, LkCa 3 <cit.>, 2M0441+2301 <cit.>, EPIC 203868608 <cit.>, and especially GG Tau <cit.>, with which Haro 5-2 has a remarkable similarity in terms of configuration. Most recently, TIC 278956474 was found with the TESS mission to consist of two low-mass short-period eclipsing binaries, with an age of 10-50 Myr <cit.>. We present here the discovery of a new 2+2 pre-main sequence system known as Haro 5-2, located at 5:35:07.5, -2:49:00 (2000) in the Ori OB1b association <cit.>. We provide detailed optical/infrared imaging and spectroscopy, and discuss the properties and possible formation scenarios of this system. § OBSERVATIONS One of us (BAS) identified Haro 5-2 as a binary and it was included as SKF2259 in The Washington Visual Double Star Catalog <cit.>. It was subsequently observed on UT 2014 November 9 using the Robo-AO autonomous laser adaptive optics system at the Palomar 1.5m telescope <cit.>. We performed the observations using a long-pass filter with a cut-on wavelength of 600 nm and with a total exposure time of 3 minutes. The Ba-Bb pair is well resolved. The Aa component has an approximate image width of 0.15″, with an unresolved elongation in the direction of Ab. The elongation was not present in the images of any other stars in the field, so we suspected this was due to an unresolved companion.
To investigate this possibility, we observed Haro 5-2 with the NIRC2 instrument behind the Keck II adaptive optics system on UT 2015 August 5. We obtained six 30s exposures with the Kp filter and then five 30s exposures with the H filter. All four components of the Haro 5-2 system are well resolved in the Kp images, while the seeing degraded rapidly during the H-band observations, making it difficult to resolve the Aa-Ab pair. More detailed observations are required to search for additional components, as found in the similar GG Tau system <cit.>. Direct images of Haro 5-2 were obtained on UT 2015 August 27 at the Gemini-North telescope with NIRI <cit.> at f/32 (0.02 arcsec pixels) and adaptive optics, see Figure <ref>. In the J- and H-bands a 3×3 mosaic was obtained with individual 3-second exposures, in total 81 sec, while in the K-band the exposures totaled 162 sec. Haro 5-2 was observed with GNIRS <cit.> at the Gemini-North telescope on UT 2015 September 15 in a seeing of 0.5″, using 0.05 arcsec pixels and the 10 l/mm grating in cross-dispersed mode. A 0.15″ wide slit with a length of 5.1″ was placed along the Aa-Ab pair at position angle 47^∘. 12 exposures each of 60 seconds were acquired while nodding ±1.5″ along the slit. The pair was not resolved. The slit was then changed to a position angle of 54.3^∘ and placed along the Ba-Bb pair. 12 exposures each of 120 seconds were acquired. The two components could be resolved at H and K, but not at J, and separate spectra were extracted in H and K. The unresolved Aa-Ab pair was reduced with the Gemini pipeline, while the Ba-Bb pair was reduced using IRAF <cit.> and Figaro <cit.>. We obtained spectroscopy of the Haro 5-2 A and B components, separated by 2.62″, with the Goodman High Throughput Spectrograph <cit.>, installed on the SOAR 4.1m telescope on Cerro Pachón, Chile. We used available time slots during the engineering nights of Oct 19 and Nov 18, 2021, Nov 9, 2022, Jan 6, 2023 and Mar 9, 2023, for a total time span of about 1.4 years. The GHTS is a highly configurable imaging spectrograph that employs all-transmissive optics and Volume Phase Holographic Gratings, resulting in high throughput for low to moderate resolution spectroscopy over the 320-850 nm wavelength range. We used the Goodman RED camera, which uses a deep-depletion CCD that provides extended sensitivity into the red end of the spectrum with minimal fringing. The spatial scale of the Goodman detector is 0.15″/pixel, and the slit was set at a position angle of 214.7^∘, with the Atmospheric Dispersion Corrector active. We configured the spectrograph for both low- and high-resolution setups. For the low-resolution setup we used the 400 l/mm grating in its 400M1 and 400M2 + GG 455 filter preset modes, with 2×2 binning. These two configurations combined span the wavelength range ∼3600 Å ≲ λ ≲ 9000 Å. We used the 1″ wide long slit, which yields a FWHM resolution of 6.7 Å (equivalent to R=830). We used exposure times of 120s for the brighter Haro 5-2A and 300s for the fainter Haro 5-2B. A total of 21 low-resolution spectra were obtained for Haro 5-2A (13 in the 400M1 setup and 8 with the 400M2 setup), while 25 spectra were obtained for Haro 5-2B (13 with the 400M1 setup and 12 with the 400M2 one), during a time window of nearly 506 days or about 1.4 years, from UT 2021-10-20 to UT 2023-03-10 (Table <ref>). For the higher resolution spectra we used the 2100 l/mm grating centered at 650 nm, with the 0.45″ wide long slit, and 1×2 binning.
This setup produces a FWHM resolution of 0.45 Å (or R=11,930), spanning 565 Å, from 6140 Å to 6705 Å. Our choice of ×2 binning in the spatial direction improved the signal-to-noise ratio while still sampling the median 0.7″ seeing at SOAR. We obtained 3×900s exposures for the A component and 3×1200s for the B component. We took a total of 6 higher resolution spectra for each of the A and B components, 3 each on Oct 21, 2021 and 3 each on Nov 19, 2021 (Table <ref>). The basic CCD reduction, up to and including the extraction of the 1-D spectra, was carried out using the Goodman Spectroscopic Pipeline <cit.>. Wavelength calibration and final combination of the spectra were done with IRAF. We did not perform flux calibration, since the main purpose of our follow-up spectroscopy was to characterize each component: determining its spectral type, measuring line equivalent widths, and determining its accretion status. We measured equivalent widths of spectral features using the splot routine in IRAF. The S/N ratio of the individual spectra was typically ∼20-25 at Hα, sufficient for measuring equivalent widths down to ∼0.05 Å at our highest spectral resolution of ∼0.45 Å FWHM. In Figures <ref> and <ref> we show the combined spectra of Haro 5-2 obtained with the low-resolution and higher-resolution setups, respectively. We measured equivalent widths in each of the individual spectra. The uncertainty values quoted in Tables <ref> and <ref> are the dispersions of the individual measurements. On the same nights that we obtained optical spectra on SOAR, we also obtained near-IR spectroscopy resolving the Haro 5-2A and B components using the TripleSpec 4.1 (TSpec) near-IR spectrograph <cit.>. The fixed-format slit is 1.1″ wide and 28″ long. The spectral resolution is R∼3500 across the six science orders. We obtained three ABBA sequences for each of the A and B components, using exposure times of 30s for both components, with Fowler Sampling 4. The telluric standard HIP 26812 was observed contiguously with Haro 5-2; a single ABBA sequence was obtained at the same airmass, with an exposure time of 5s, 2 coadds, and Fowler Sampling 1. The slit position angle was set at 214.7^∘. The data were reduced in the standard way using the customized version of the IDL Spextool package <cit.> that was modified by Dr. Katelyn Allers for use with TSpec at SOAR. Haro 5-2 was observed by the Transiting Exoplanet Survey Satellite (TESS) in its sector 6 (11 December 2018 to 7 January 2019) and sector 32 (19 November 2020 to 17 December 2020) campaigns, which lasted approximately 27 days each. Known as TIC 427380741 in the TESS Input Catalog, Haro 5-2 was monitored in full frame image ("FFI") mode. Sector 6 images were acquired at a cadence of 30 minutes, while those for sector 32 were taken every 10 minutes. The TESS mission data products for FFI targets do not include light curves, and so we downloaded stacked 15×15 pixel cut-out images in the form of target pixel files (TPFs) using the Lightkurve package <cit.>. In these images, Haro 5-2 appears clearly in the center, approximately two pixels offset from the nearest relatively bright source. To create a light curve, we placed a 3×3 pixel square aperture on the source, summing the flux at each available time. Lightkurve was used to carry out this procedure, as well as to subtract a star-free average background.
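For concreteness, the light-curve extraction just described can be reproduced with a short Lightkurve script. The sketch below is illustrative only: the TIC identifier, cutout size, 3×3 aperture, and two-component PCA detrending follow the text above and the next paragraph, but the exact aperture placement, variable names, and quality filtering are our own assumptions rather than the actual pipeline that was run.

```python
import numpy as np
import lightkurve as lk

# Download a 15x15 pixel TESScut target pixel file for Haro 5-2 (TIC 427380741).
tpf = lk.search_tesscut("TIC 427380741", sector=32).download(cutout_size=15)

# Place a 3x3 pixel square aperture on the target, which sits at the cutout center.
aper = np.zeros(tpf.shape[1:], dtype=bool)
aper[6:9, 6:9] = True

# Simple aperture photometry: sum the flux inside the aperture at each cadence.
lc = tpf.to_lightcurve(aperture_mask=aper)

# Detrend against pixels outside the aperture: build a design matrix from their
# time series, keep the top two principal components, and regress them out.
regressors = tpf.flux[:, ~aper]
dm = lk.DesignMatrix(regressors, name="background").pca(2).append_constant()
corrected_lc = lk.RegressionCorrector(lc).correct(dm)
```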
TESS light curves of different targets often exhibit common systematic trends due to scattered light, which can be removed by detrending against flux time series of pixels outside the target aperture. We employed the RegressionCorrector class to remove low-level trends in the Haro 5-2 light curves, using the top two principal component vectors. We ultimately removed points for which the original TPF image data were flagged in the TESS pipeline as being contaminated by stray light. The resulting light curves displayed a series of significant undulations on a range of timescales, similar to what was seen in the raw time series. § OBSERVATIONAL RESULTS §.§ Haro 5-2 Haro 5-2, also known as 2MASS J05350753-0248596 and ESO-Hα 1108 <cit.>, is located in the Ori OB1b association. As discussed in the following, we conclude that it is a young stellar object (YSO) based on its Hα emission, its mid-infrared excess, its irregular variability, and its location within a star forming region. In the first survey for Hα emission stars in the σ Ori region, <cit.> discovered 98 objects, including Haro 5-2, which was also detected in the Hα emission line survey by <cit.>. The σ Ori cluster has an age of 3-5 Myr and contains several hundred young low mass stars <cit.>. Haro 5-2 is located in a little-studied region southwest of the cluster of young stars surrounding the O9.5 V multiple star σ Ori. <cit.> measured the distance to σ Ori itself to be 387.5±1.3 pc, while <cit.> used Gaia DR2 for all known members of the σ Ori cluster to derive a mean distance of 391±44 pc. <cit.> stated that “the cluster seems to have two components: a dense core that extends from the centre to r∼20 arcmin and a rarified halo at larger distances.” This halo extends to ∼30 arcmin, whereas Haro 5-2 has a separation to σ Ori of almost one degree. Consequently, Haro 5-2 may alternatively be part of the loose clustering Collinder 70 surrounding ϵ Ori, also known as Alnilam <cit.>. The σ Ori and Collinder 70 populations together form the subgroup Ori OB1b defined by <cit.>. <cit.> suggest that Ori OB1b is located at a distance of 357±3 pc and has an age of 3-6 Myr. Figure <ref>a shows the location of Haro 5-2 relative to σ Ori and ϵ Ori. Haro 5-2 is not an isolated Hα emission star: <cit.> found three other Hα emitters, Haro 5-1, -3, and -4, within a few arcminutes, and within 10 arcminutes <cit.> found another five: ESO-Hα 1011, 1014, 1050, 1146, and 1364. A variable YSO, V2070 Ori, is also located close to Haro 5-2. Using Gaia DR3 parallaxes for the stars in this little group (omitting the multiple system Haro 5-2 itself and also Haro 5-3, for neither of which Gaia has a parallax) suggests a mean distance of 373±13 pc, which we adopt. Haro 5-2 is not directly associated with any cloud, but a small cloudlet, Dobashi 4834, is located about 15 arcmin to the south-west <cit.> and is part of the shell of gas and dust that has been pushed away from the central O-star σ Ori; see Figure 1 of <cit.>, which shows a WISE 3-color image mosaic of the region. Given their youth, the components of Haro 5-2 are likely to be variable, and a g-band light curve from ASAS-SN <cit.> of the integrated system light indeed shows irregular variability over a 5-year period with characteristic amplitudes of 0.1-0.2 magnitudes. TESS has observed the region including Haro 5-2 on two occasions (see Section <ref>) and the light curves are shown in Figure <ref>.
The light curves vary with peak-to-peak amplitudes of ∼7% and are largely of a stochastic type that may indicate accretion variability <cit.>. This irregular variability likely originates in the B-components, of which at least one is a classical T Tauri star (see Section <ref>). In addition, the data from sector 32 exhibit several upward excursions of a few percent amplitude that are reminiscent of distinct accretion bursts <cit.>. Since we are viewing the combined light of several stars, it is possible that multiple variability types are represented in the time series. For each light curve, we computed a Fourier transform periodogram and searched for persistent signals. We find one tentative periodicity in common between both TESS light curves, at a timescale of 1.33 days (sector 6) to 1.36 days (sector 32) and an amplitude of ∼0.7%. These detections are tentative since the light curves are dominated by higher amplitude stochastic behavior; the signal-to-noise ratio in the periodogram is 3.2 (sector 6) to 3.6 (sector 32), whereas a secure detection would require a signal-to-noise value of at least 4.0 <cit.>. If real, the possible periodic signals could be indicative of starspots on one member of Haro 5-2, probably component Aa, which is significantly brighter than the other stars. §.§ Infrared Imaging Figure <ref>b shows a multi-color JHK image of Haro 5-2, obtained with adaptive optics at the Gemini-N 8m telescope, in which Haro 5-2 is well resolved as a 2+2 quadruple system. What makes Haro 5-2 of particular interest is that the projected separations of the two close pairs, Aa-Ab and Ba-Bb, are rather large relative to the projected separation of A-B when compared to other PMS quadruple systems. If not purely a projection effect, this might be due to its youth, with the four components still interacting before settling into a long-term stable configuration in which the inner binaries have hardened. We have measured the separations of the three pairs: Aa-Ba: 2.61″ (975 AU), Aa-Ab: 0.19″ (72 AU), and Ba-Bb: 0.35″ (131 AU). Also, Andrei Tokovinin kindly observed the system with the high resolution speckle camera at the SOAR telescope <cit.>, and these values are listed in Table <ref>. The projected separation of the A and B binaries of about 1000 AU may suggest a period around 30,000 yr. The closest binary, Aa-Ab, has a period of ∼1000 yr, so its orbital motion will become measurable within a time span of a few years. Photometry of the components from the Gemini data on UT 2015 Aug 27 yields: Aa: K=11.53, H-K=0.40; Ab: K=12.34, H-K=0.30; Ba: K=12.46, H-K=0.34; Bb: K=12.54, H-K=0.92. The uncertainties in H and K are around 0.02 mag. While the three brighter components have roughly the same colors, Bb is extremely red, suggesting that this component is either highly extincted, has a strong near-infrared excess, or both. Haro 5-2 has been detected in a number of sky surveys from the ultraviolet (SkyMapper) to the mid-infrared (WISE). Figure <ref> shows the available photometry together with four WISE panels that reveal the system to be bright at infrared wavelengths. We have integrated over the energy distribution of the quadruple, and derive a luminosity of 0.95 L_⊙ between 0.35 μm and 22 μm. The peak of the energy distribution is around 1.05 μm, which corresponds to about 2760 K. According to the temperature-spectral type conversion by <cit.>, this corresponds to an M7 spectral type. As discussed below, this classification is not borne out by spectroscopy.
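The basic numbers quoted in this subsection can be checked with a few lines of arithmetic. The sketch below is illustrative only: it treats the projected separations as physical separations and, for the period estimates, as semi-major axes, and it adopts the 373 pc distance from Section 3.1 and the ∼1.5 M_⊙ total system mass derived later in Section 3.3.3.

```python
d_pc = 373.0  # adopted distance (pc)

# Angular separation (arcsec) -> projected separation (AU): s = theta * d.
for name, theta in [("Aa-Ba", 2.61), ("Aa-Ab", 0.19), ("Ba-Bb", 0.35)]:
    print(name, round(theta * d_pc), "AU")    # ~975, ~72, ~131 AU

# Kepler's third law (a in AU, M in Msun, P in yr): P = sqrt(a**3 / M).
# The ~1000 AU projected A-B separation and ~1.5 Msun give the ~30,000 yr scale.
print(round((1000.0**3 / 1.5) ** 0.5), "yr")  # ~26,000 yr

# Wien's displacement law: T = b / lambda_max with b = 2898 micron K.
print(round(2898.0 / 1.05), "K")              # ~2760 K for the 1.05 micron peak
```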
The energy distribution, although dominated by the brightest member, Aa, is not from a single object, but includes the three fainter components, thus shifting the peak in Figure <ref> to longer wavelengths. Also, Haro 5-2 shows Hα emission, a sign that accretion is occurring, indicating the presence of circumstellar material that further adds to the infrared excess. Thus, much of the light seen at longer wavelengths is not likely to come from the dominant member Aa. In fact, almost half of the observed luminosity, 0.43 L_⊙, falls within the four WISE bands. §.§ Optical Spectroscopy <cit.> discovered the T Tauri nature of Haro 5-2 on the basis of its Hα emission, and it remains a prominent Hα emission line star <cit.>. Figure <ref> shows optical low-resolution spectra of the (unresolved) A and B binaries, obtained with the GHTS instrument at SOAR (Section 2). Both the A and B binaries have the TiO absorption bands characteristic of M-type dwarfs. By comparing our spectra with spectra of already known M-type T Tauri stars (TTS) from the extensive sample of <cit.>, we assign spectral types of M2.5 for Haro 5-2A and M3 for Haro 5-2B, with an overall uncertainty of 1 subclass. As can also be seen in Figure <ref>, both the A and B binaries exhibit a clear Li I 6708Å absorption line, with equivalent widths W(Li I)∼0.3Å. Li I is a well-known indicator of youth for late-type stars <cit.>. Because lithium is depleted during the PMS stage in the deep convective interiors of K and M-type stars, a late-type star is classified as a TTS if it has Li I (6707Å) in absorption, with an equivalent width larger than that of a zero-age main sequence Pleiades star of the same spectral type <cit.>. The Na I 8183,8195Å absorption doublet is also a well-known feature useful for discriminating young, late-type PMS stars from their field dwarf counterparts <cit.>. Consistent with the presence of Li I, both Haro 5-2A and B exhibit weak Na I absorption (Figure <ref>), with values expected for TTS with ages of a few Myr <cit.>. §.§.§ Haro 5-2A The low and higher resolution spectra show that the Hα line of the brighter A-binary exhibits a small equivalent width, with an average value W(Hα)=-3.9Å, and a range -3Å ≥ W(Hα) ≥ -4Å during the ∼1.4 yr over which we obtained multi-epoch spectroscopy (Tables <ref> and <ref>). In addition to the low intensity of the emission, the Hα line profile is narrow, ≲200 km s^-1, as measured in the high-resolution spectra. These characteristics are indicative of Hα originating in the active chromosphere of a young dwarf. The low intensity of Hα places the A-binary in the weak-line T Tauri star class (see Figure <ref>). §.§.§ Haro 5-2B In contrast with its brighter sibling, Haro 5-2B shows a strong Hα emission line, which is also broad when seen in detail in the higher resolution spectra. Moreover, thanks to our many observing epochs (Tables <ref> and <ref>), we find that the emission is highly variable, ranging from W(Hα)=-38.5Å in November 2021 to a low W(Hα)=-11.6Å a year later, in November 2022. We derive an average W(Hα)=-24.6Å. The width of the Hα line ranges from ∼400 to >500 km s^-1, and the line exhibits a very strong, slightly blueshifted absorption, suggestive of a strong wind. Moreover, the entire Balmer series is found to be in emission (Figure <ref>), with -3.6Å ≥ W(Hβ) ≥ -12.3Å. The Ca H & K 3934,3968Å lines are also in emission, with equivalent widths in the range W(Ca H & K) ∼ -3 to ∼ -15Å.
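The equivalent widths quoted throughout this section were measured interactively with the splot routine in IRAF (Section 2). As a scripted alternative, the sketch below measures an equivalent width with astropy/specutils on a synthetic Hα emission line; this is our own illustrative substitution, not the tool used for the tabulated values, and the toy line parameters are arbitrary.

```python
import numpy as np
import astropy.units as u
from specutils import Spectrum1D, SpectralRegion
from specutils.analysis import equivalent_width
from specutils.fitting import fit_generic_continuum

# Synthetic spectrum around Halpha (stand-in for an extracted 1-D spectrum).
wave = np.linspace(6450, 6700, 1000) * u.AA
flux = (1.0 + 2.0 * np.exp(-0.5 * ((wave.value - 6563.0) / 2.0) ** 2)) * u.adu
spec = Spectrum1D(spectral_axis=wave, flux=flux)

# Fit and divide out a smooth continuum, then measure the equivalent width in
# a window around the line; emission yields a negative EW by convention.
continuum = fit_generic_continuum(spec)(spec.spectral_axis)
norm = spec / continuum
ew = equivalent_width(norm, regions=SpectralRegion(6553 * u.AA, 6573 * u.AA))
print(ew)   # ~ -10 Angstrom for this toy line
```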
Both He I 5876Å and He I 6678Å are also found in emission, with W(He I 5876)∼ -2.1Å and W(He I 6678)∼ -1.2Å. The Ca II near-IR triplet goes from being weakly in absorption during the October 2021 observation, when Haro 5-2B was in a seemingly more quiescent state, to being in emission, where it remained from November 2021 to January 2023. Unfortunately we could not obtain spectra with the reddest 400M2 configuration during the March 2023 observation. We determine average values of W(Ca II 8498)=-1.7Å, W(Ca II 8542)=-1.1Å, and W(Ca II 8662)=-1.0Å. The rich assortment of H, He, and Ca emission lines seen in Haro 5-2B, together with the strength of the Hα emission, the several hundred km s^-1 broad wings in its line profile, and the strong, slightly blueshifted central absorption, are all indicative of a strongly accreting classical T Tauri star (CTTS), as can be seen from the location of this object in the diagnostic diagrams of Figure <ref> and Figure <ref>. The blueshifted central absorption is seen with similar strength in both the October and November observations. We highlight the importance of obtaining multi-epoch spectra and of combining low- and higher-resolution spectra, in order to provide both the wide spectral coverage needed to derive a reliable spectral type (T_eff) and the resolving power to characterize the Hα line profile, thus allowing us to correctly classify young PMS stars such as Haro 5-2. Had we observed this object only in November, we would have misclassified Haro 5-2B as a weak-line T Tauri star (WTTS) or C/W object (Figure <ref>). It is the multi-epoch low-resolution spectroscopy, combined with the higher resolution spectra, that allows us to confirm that the B component is a CTTS, and that A is a WTTS. In particular, the higher resolution spectra show that, despite variations in both the strength of Hα and the velocity width, at both epochs Haro 5-2A remains well inside the region populated by non-accreting young PMS low-mass stars, whereas the B-binary is well within the region where strongly accreting young PMS stars are found. §.§.§ Masses The optical spectra of Haro 5-2A (M2.5) and 5-2B (M3) are dominated by the components Haro 5-2Aa and 5-2Ba, respectively. If we assume a temperature uncertainty of 0.5 sub-types, we find that T_eff(Aa) ∼ 3485±80 K and T_eff(Ba) ∼ 3410±100 K using the spectral type to temperature conversion of <cit.>. For an assumed age of 3 Myr, the magnetic models of <cit.> suggest that Aa has a mass of 0.54±0.08 M_⊙ and Ba has a mass of 0.46±0.1 M_⊙. However, <cit.> and <cit.> have shown that magnetic models overpredict masses for young stars with M_*<0.4 M_⊙. Non-magnetic Feiden models give M_Aa = 0.35±0.05 M_⊙ and M_Ba = 0.31±0.08 M_⊙. Given the uncertainties in both spectral types and in models, we end up with the rather loose estimates of M_Aa ∼ 0.45±0.15 M_⊙ and M_Ba ∼ 0.35±0.15 M_⊙. The Ab and Bb components are likely to have somewhat lower masses. A rough estimate of the system mass is therefore ∼1.5 M_⊙. §.§ Infrared Spectroscopy The near-infrared spectra of the Haro 5-2A and 5-2B (unresolved) binaries obtained with the TripleSpec 4.1 near-IR spectrograph at the SOAR telescope are shown in Figure <ref>. Both objects display metal absorption lines, CO bands, and a triangular H-band continuum due to water absorption, all consistent with their late optical spectral types. The spectrum of Haro 5-2A (upper panel) shows no emission features, consistent with its classification as a non-accreting WTTS, as shown in Figures <ref> and <ref>.
In contrast, the spectrum of Haro 5-2B (lower panel) displays strong Paschen β and Brackett γ emission lines, indicative of active accretion. The near-infrared spectra, obtained on the same nights as the optical spectra, are thus consistent with the optical classifications. Line variability is always an issue when observing young stars, and we note that a near-infrared spectrum of the (blended) A-binary obtained with GNIRS on 2015 Sept 15 showed a broad Paschen β line in emission, and very weak emission at Brackett γ, indicating that active accretion was taking place at the time. On the same night, the B-binary was also observed with GNIRS using adaptive optics, and the two components Ba and Bb were resolved. The H-band spectra of the two B-components are shown in Figure <ref>. The spectra are almost identical, but the Bb component shows much more veiling as well as a redder continuum. This indicates that the deep red color of the Bb component seen in Figure <ref> (bottom) is not due to a much later spectral type than Ba, but rather to a strong near-infrared excess. § DISCUSSION §.§ Formation of Quadruple Systems As pointed out by <cit.>, multiple stars are a natural consequence of the collapse of a rotating cloud core, and many if not all stars may originate in such systems. Observationally it is well established that in most star forming regions there is an excess of young binaries and multiples relative to the field population <cit.>. It follows that, to arrive at the numbers seen in the field, there must be considerable dynamical evolution early in stellar evolution, driven by N-body interactions in combination with the presence of gas in the system. The 2+1+1 quadruples are easily formed from small non-hierarchical systems with 4 or more components through dynamical interactions, which lead to subsequent ejections of members into distant bound orbits or into escapes <cit.>. The more common 2+2 systems, on the other hand, have been explained in a variety of ways. <cit.> suggested that successive fragmentations of a cloud core with transfer of spin angular momentum at each stage would lead to wide binary systems in which each component is a close binary. Such a cascade would naturally lead to 2+2 quadruples with large ratios between inner and outer orbits, although hydrodynamical simulations generally show more chaotic and sequential star formation. The inner binaries of Haro 5-2 are well above the opacity limit to fragmentation, so this scenario might describe the formation of Haro 5-2. <cit.> considered a different type of cascade in which two colliding clouds form a shock-compressed layer that fragments into filaments and cores and eventually forms stars with massive disks. Collisions of two disks around binaries can form bound quadruple systems. While turbulent core fragmentation will produce wider binaries and multiples (100 - 10,000 AU), simulations of this process fail to produce the very close binaries often found in 2+2 quadruples. To produce such close binaries, a dissipative gaseous environment is required in which orbital decay will harden a binary <cit.>. In disk fragmentation models, massive disks can become gravitationally unstable and produce one or more companions and, combined with capture, 2+2 systems can be formed <cit.>. Filament fragmentation has also been discussed as leading to bound binaries and multiples <cit.>.
<cit.> carried out N-body simulations of small-N groups (10 to 24 stars), treating the stellar components as point sources, and noted that among the final outcomes were a number of 2+2 quadruples. The dynamical evolution of such gas-free quadruple systems has subsequently been further studied by many other authors, e.g., <cit.>. To summarize, it appears that there are multiple viable pathways to the formation of 2+2 quadruple stellar systems, and the multiple systems we observe in the field may result from a mix of different formation scenarios. §.§ The Origin of the Haro 5-2 Quadruple System In the following we discuss three questions regarding the structure, stability, and formation of the Haro 5-2 system. 1) Is Haro 5-2 merely a chance alignment of two young binaries along the line-of-sight? We have photometric and spectroscopic evidence that the components of Haro 5-2 are young, so we look at the surface density of nearby young stars. In an area of roughly 25×30 arcmin centered on Haro 5-2 there is a small group of only 9 additional YSOs. This immediately indicates that the probability that Haro 5-2A and Haro 5-2B are close due to a chance alignment along the line of sight is negligible. 2) Is Haro 5-2 a temporary quadruple or can it be in a stable configuration? The most remarkable aspect of Haro 5-2 is the large separations of the two inner binaries relative to the outer binary when compared to other known young 2+2 quadruple systems.[The ratio of projected outer and inner separations is 7.4. However, the boundary between chaotic and regular orbits is dependent on many parameters, not least the eccentricity <cit.>, and a single value cannot grasp this complexity. For comparison, though, the ratio is 7.3 for HIP 28442, and since that star is a member of the old thick-disk population it has evidently been stable for a long time <cit.>.] This raises the question of whether Haro 5-2 might break apart in the future. For a definite answer one would need to know the orbits of the Haro 5-2 components, or at least the physical separations in the system, the stellar masses, and the velocities of the components. None of this is currently known. Instead we have performed some statistical estimates for a variety of system properties. For these calculations we assumed that Aa has a mass of about 0.45 M_⊙ and the three other components have masses of about 0.35 M_⊙, in total 1.5 M_⊙, which determines the gravitational potential. We further assumed that the centers of mass of the two subsystems are in the plane of the sky, since this would be the most unstable configuration, while the orbital planes of the subsystems are random. Finally, we adopted random velocities assuming virial balance. We then ran a code, described in more detail in <cit.>, which computes semi-major axes, eccentricities, and other orbital characteristics for bound systems. After 10,000 experiments, about 1/3 of the realizations remained stable 2+2 quadruples. Given that Haro 5-2 almost certainly is extended along the line-of-sight and thus more stable, we conclude that the quadruple may well be in a long-term stable configuration. 3) Could Haro 5-2 have become a quadruple at a later stage, after dispersal of most of the placental gas?
If Haro 5-2 formed as a quadruple already during the embedded collapse phase, one would expect that the resulting dynamical evolution in a gaseous environment and active accretion would lead to a decrease of the semi-major axes of the inner binaries, thus producing harder inner binaries <cit.>. Such shrinkage could have resulted in two spectroscopic binaries, as in the young 2+2 quadruples LkCa 3, EPIC 203868608, and TIC 278956474 mentioned earlier. This is not what is seen in Haro 5-2, which has well resolved inner binaries, suggesting that it might have developed its 2+2 configuration sufficiently late to have avoided early shrinking of the two inner binaries during the gas-rich embedded phase. Consequently we have explored numerically the dynamical evolution of a small cluster of single and binary stars at a later stage, after the bulk of the gas has dispersed or been accreted, to see if N-body simulations without gas could result in a quadruple like Haro 5-2, following the pioneering work of <cit.>. We have performed many thousands of new numerical simulations with 4-, 10-, 20-, and 50-body groups of single stars, with masses drawn from the IMF and with virialized velocities <cit.>. Some binary and multiple systems indeed do form, including, at least temporarily, some 2+2 quadruples (see Figure <ref>a). However, if one assumes that a substantial fraction of the bodies are binaries then, not surprisingly, 2+2 systems are formed more frequently (see Figure <ref>b). We conclude that the number of individual members of the initial multiple system is much less important than whether some of those members are binaries. Such 2+2 quadruples can form without the help of a gaseous environment when three objects, of which at least two are binaries, interact chaotically, in the process binding the two binaries together into a 2+2 quadruple system, leaving the third body (single or binary) to carry away the excess energy. It should be noted that the binaries survive these interactions only if they have binding energies that exceed the gravitational perturbations induced by flybys. Effectively the binaries have to act almost as point masses, that is, their separations must be small relative to the impact parameter. Thus, the stable 2+2 quadruples formed this way are highly hierarchical, like ϵ Lyrae, because wider subsystems would be disrupted during the chaotic formation process. It follows that to form 2+2 quadruples this way the two binaries must previously have undergone a process that makes them “hard”. This, however, would not form the more open architecture of Haro 5-2. Haro 5-2 is a member of the Ori OB1b association, and is therefore not a case of isolated star formation. Within the previously mentioned little group of young stars surrounding Haro 5-2, the two closest are the Hα emitters Haro 5-3 and Haro 5-4 (see Figure <ref>), which are both only about 3.6 arcmin away. This corresponds to about 80,000 AU or 0.4 pc in projection, which – assuming a stellar velocity dispersion of ∼1 km s^-1 – can be traversed in less than half a million years. Haro 5-4 has a faint companion to the NNW which has an infrared excess to 12 μm and possibly to 22 μm as seen in WISE images, and thus could be a young star, too. It is therefore conceivable that the Haro 5-2 system in the past could have been part of a larger group that broke up, with Haro 5-3 and 5-4 carrying away the energy that enabled the Haro 5-2A and B binaries to bind together into a 2+2 system. § CONCLUSIONS 1.
We have discovered a new young 2+2 quadruple system, Haro 5-2, in the 3-6 Myr old Ori OB1b association, which encompasses the σ Ori and the Collinder 70 clusterings. The system has an overall extent of 2.6″ and projected separations of the inner binaries of 0.19″ and 0.35″. 2. We obtained low-resolution optical spectra of both the A and B components of the Haro 5-2 quadruple system on five nights over ∼1.5 years, as well as two high-resolution spectra separated by about a month. The presence of significant TiO molecular absorption bands in the spectra, combined with indicators of stellar youth such as the Li I 6708 Å and Na I 8183,8195 Å absorption features, confirms these as low-mass, young PMS stars. The brighter A component is a non-accreting M2.5 type (T_eff∼ 3450 K) WTTS. The fainter B component is an M3 type (T_eff∼ 3400 K) accreting CTTS, but on two nights it fell in the transition region between CTTS and WTTS. Overall, the emission in both stars varies by up to a factor of ∼2 over the course of our observations. Assuming evolutionary models such as <cit.>, these spectral types correspond to masses of ∼0.35-0.45 M_⊙. The Hα profile of the accreting B component shows broad wings extending to ∼±300 km s^-1, and a strong, slightly blueshifted central absorption. This blueshifted absorption is seen in the higher resolution spectra at both epochs. The Bb component is very red, and its spectrum may be affected by accretion and by extinction from a circumstellar disk. 3. The hierarchy of Haro 5-2 is low, i.e., the two inner binaries are unusually wide relative to the separation between the two binaries, in comparison with other known PMS quadruples. We have made simulations of a variety of configurations and conclude that the system may well survive intact over long time scales. 4. Since the members of the Haro 5-2 quadruple are well resolved and presumably of the same age, this system may provide stringent constraints on PMS evolutionary models, similar to the well-studied quintuple GG Tau. Haro 5-2 also displays a significant infrared excess, and ALMA observations of its circumstellar material may offer insights into the effect of flybys on disks. Acknowledgements We thank A. Tokovinin and J. A. Caballero for valuable comments, and A. Tokovinin for providing the data in Table <ref>. Based in part on observations obtained at the International Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA), under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), Agencia Nacional de Investigación y Desarrollo (Chile), Ministerio de Ciencia, Tecnología e Innovación (Argentina), Ministério da Ciência, Tecnologia, Inovações e Comunicações (Brazil), and the Korea Astronomy and Space Science Institute (Republic of Korea). It is also based in part on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia e Inovações (MCTI/LNA) do Brasil, the US National Science Foundation's NOIRLab, the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU). IRAF was distributed by the National Optical Astronomy Observatory, which was managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
The Robo-AO system is supported by collaborating partner institutions, the California Institute of Technology and the Inter-University Centre for Astronomy and Astrophysics, by the National Science Foundation under Grant Nos. AST-0906060, AST-0960343, and AST-1207891, by a grant from the Mt. Cuba Astronomical Foundation, and by a gift from Samuel Oschin. This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by NASA's Science Mission Directorate. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France, and of NASA's Astrophysics Data System Bibliographic Services. This research made use of Astropy, a community-developed core Python package for Astronomy <cit.>. We are grateful to Katelyn Allers for her detailed and instructive YouTube videos on reducing TSpec data: https://www.youtube.com/user/katelynallers/videos. Software: Astropy, IRAF, TOPCAT (Taylor 2005), Lightkurve (Lightkurve Collaboration 2018). Facilities: PO:1.5m (Robo-AO), Keck:II (NIRC2-LGS), Gemini-North (GNIRS, NIRI), SOAR (Goodman spectrograph, TSpec spectrograph, HRCam) [Anosova(1986)]anosova1986 Anosova, J.P. 1986, Ap&SS, 124, 217 [Astropy Collaboration et al.(2013)]astropy2013 Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33. doi:10.1051/0004-6361/201322068 [Astropy Collaboration et al.(2018)]astropy2018 Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123. doi:10.3847/1538-3881/aabc4f [Astropy Collaboration et al.(2022)]astropy2022 Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167. doi:10.3847/1538-4357/ac7c74 [Baranec et al.(2014)]baranec2014 Baranec, C., Riddle, R., Law, N.M. et al. 2014, ApJ, 790, L8 [Barrado et al.(2011)]barrado2011 Barrado, D., Stelzer, B., Morales-Calderón, M., et al. 2011, A&A, 526, A21 [Bate(2019)]bate2019 Bate, M.R. 2019, MNRAS, 484, 2341 [Bate et al.(2002)]bate2002 Bate, M.R., Bonnell, I.A., Bromm, V. 2002, MNRAS, 336, 705 [Blaauw(1964)]blaauw1964 Blaauw, A. 1964, ARA&A, 2, 213 [Bodenheimer(1978)]bodenheimer1978 Bodenheimer, P. 1978, ApJ, 224, 488 [Bonnell et al.(1993)]bonnell1993 Bonnell, I. & Bastien, P. 1993, ApJ, 406, 614 [Bowler et al.(2015)]bowler2015 Bowler, B. & Hillenbrand, L. 2015, ApJ, 811:L30 [Braun et al.(2021)]braun2021 Braun, T.A.M., Yen, H.-W., Koch, P.M., et al. 2021, ApJ, 908, 46 [Breger et al.(1993)]breger1993 Breger, M., Stich, J., Garrido, R., Martin, B., Jiang, S.-Y. et al. 1993, A&A, 271, 482 [Briceño et al.(1998)]briceno1998 Briceño, C., Hartmann, L., Stauffer, J., & Martín, E. 1998, AJ, 115, 2074 [Briceño et al.(2001)]briceno2001 Briceño, C., Vivas, A. K., Calvet, N., et al. 2001, Science, 291, 93 [Briceño et al.(2019)]briceno2019 Briceño, C., Calvet, N., Hernandez, J., et al. 2019, AJ, 157, 85 [Caballero(2008a)]caballero2008a Caballero, J.A. 2008a, A&A, 478, 667 [Caballero(2008b)]caballero2008b Caballero, J.A. 2008b, MNRAS, 383, 375 [Caballero and Solano(2008)]caballero_solano2008 Caballero, J.A. & Solano, E. 2008, A&A, 485, 931 [Caballero(2018)]caballero2018 Caballero, J.A. 2018, RNAAS, 2, 25 [Chen et al.(2013)]chen2013 Chen, X., Arce, H. G., Zhang, Q. et al. 2013, ApJ, 768:A110 [Clemens et al.(2004)]clemens2004 Clemens, J. C., Crain, J. A., & Anderson, R. 2004, in Proc. SPIE, Vol. 5492, Ground-based Instrumentation for Astronomy, ed. A. F. M. Moorwood & M. Iye, 331-340 [Cody et al.(2014)]cody2014 Cody, A.M., Stauffer, J., Baglin, A., Micela, G., Rebull, L.M. et al. 2014, AJ, 147:82 [Cody et al.(2017)]cody2017 Cody, A.
M., Hillenbrand, L.A., David, T.J., Carpenter, J.M., Everett, M.E., Howell, S.B. 2017, ApJ, 836:41 [Collinder(1931)]collinder1931 Collinder, P. 1931, On structural properties of open galactic clusters and their spatial distribution, Lund: Nya Boktryckeriet [Connelley et al.(2008)]connelley2008 Connelley, M.S., Reipurth, B., Tokunaga, A.T. 2008, AJ, 135, 2526 [Correia et al.(2006)]correia2006 Correia, S., Zinnecker, H., Ratzka, Th., Sterzik, M.F. 2006, A&A, 459, 909 [Currie et al.(2014)]currie2014 Currie, M.J., Berry, D.S., Jenness, T. et al. 2014, in ASP Conf. Ser. Vol. 485, Astronomical Data Analysis Software and Systems, eds. Manset, N., Forshay, P., p.391 [Cushing et al.(2004)]cushing2004 Cushing, M.C., Vacca, W.D., Rayner, J.T. 2004, PASP, 116, 362 [Delgado-Donate et al.(2004)]delgado-donate2004 Delgado-Donate, E.J., Clarke, C.J., Bate, M.R., Hodgkin, S.T. 2004, MNRAS, 351, 617 [Di Folco et al.(2014)]difolco2014 Di Folco, E., Dutrey, A., Le Bouquin, J.-B., Lacour, S., Berger, J.-P. et al. 2014, A&A, 565, L2 [Dobashi(2011)]dobashi2011 Dobashi, K. 2011, PASJ, 63, S1 [Duchêne and Kraus(2013)]duchene_kraus2013 Duchêne, G. & Kraus, A. 2013, ARA&A, 51, 269 [Dutrey et al.(2016)]dutrey2016 Dutrey, A., Di Folco, E., Beck, T., Guilloteau, S. 2016, A&ARv, 24, 5 [Elias et al.(2006)]elias2006 Elias, J.H., Rodgers, B., Joyce, R. et al. 2006, Proc. SPIE, 6269, 14 [Fedorovich(1960)]fedorovich1960 Fedorovich, V.P. 1960, Peremennye Zvezdy, 13, 166 [Feiden(2016)]feiden2016 Feiden, G.A. 2016, A&A, 593, A99 [Fezenko et al.(2022)]fezenko2022 Fezenko, G. B., Hwang, H.-C., & Zakamska, N. L. 2022, MNRAS, 511 [Flores et al.(2022)]flores2022 Flores, C., Connelley, M.S., Reipurth, B., Duchêne, G. 2022, ApJ, 925:21 [García-López et al.(1994)]garcia1994 Garcia Lopez, R. J., Rebolo, R., & Martin, E. L. 1994, A&A, 282, 518 [Guszjenov et al.(2023)]guszjenov2023 Guszjenov, D., Raju, A.N., Offner, S.S.R., Grudic, M.Y. et al. 2023, MNRAS, 518, 4693 [Haro and Moreno(1953)]haro1953 Haro, G. & Moreno, A. 1953, Bol. Obs. Tonantz. Tacub., 1, No 7, 11 [Harrington(1974)]harrington1974 Harrington, R.S. 1974, Celestial Mechanics, 9, 465 [Herczeg and Hillenbrand(2014)]herczeg2014 Herczeg, G.J. & Hillenbrand, L. 2014, ApJ, 786:97 [Hernández et al.(2014)]hernandez2014 Hernández, J., Calvet, N., Pérez, A., et al. 2014, ApJ, 794, 36 [Hillenbrand et al.(2013)]hillenbrand2013 Hillenbrand, L. A., Hoffer, A. S., & Herczeg, G. J. 2013, AJ, 146, 85 [Hodapp et al.(2003)]hodapp2003 Hodapp, K.W., Jensen, J.B., Irwin, E.M. et al. 2003, PASP, 115, 1388 [Kochanek et al.(2017)]kochanek2017 Kochanek, C.S., Shappee, B.J., Stanek, K.Z. et al. 2017, PASP, 129, 4502 [Koenig et al.(2015)]koenig2015 Koenig, X., Hillenbrand, L.A., Padgett, D.L. et al. 2015, AJ, 150:100 [Kostov et al.(2022)]kostov2022 Kostov, V.B., Powell, B.P., Rappaport, S.A. et al. 2022, ApJS, 259:66 [Kostov et al.(2024)]kostov2024 Kostov, V.B., Powell, B.P., Rappaport, S.A. et al. 2024, MNRAS, 527, 3995 [Kounkel et al.(2018)]kounkel2018 Kounkel, M., Covey, K., Suárez, G. et al. 2018, AJ, 156:84 [Kratter and Lodato(2016)]kratter2016 Kratter, K. & Lodato, G. 2016, ARA&A, 54, 271 [Kuruwita and Haugbolle(2023)]kuruwita2023 Kuruwita, R.L. & Haugbolle, T. 2023, A&A, 674, A196 [Lada and Lada(2003)]lada_lada2003 Lada, C.J. & Lada, E. 2003, ARA&A, 41, 57 [Larson(1972)]larson1972 Larson, R.B. 1972, MNRAS, 156, 437 [Law et al.(2010)]law2010 Law, N.M., Dhital, S., Kraus, A. et al. 2010, ApJ, 720, 1727 [Lawson et al.(2009)]lawson2009 Lawson, W. A., Lyo, A.-R., & Bessell, M. S.
2009, MNRAS, 400, L29 [Lee et al.(2019)]lee2019 Lee, A.T., Offner, S.S.R., Kratter, K.M., Smullen, R.A., Li, P.S. 2019, ApJ, 887:232 [Leinert et al.(1993)]leinert1993 Leinert, C., Zinnecker, H., Weitzel, N., et al. 1993, A&A, 278, 129 [Lightkurve Collaboration(2018)]lightkurve2018 Lightkurve Collaboration, Cardoso, J.V.d.M., Hedges, C., Gully-Santiago, M., Saunders, N., Cody, A.M. et al. 2018, Lightkurve: Kepler and TESS time series analysis in Python, Astrophysics Source Code Library [Lodieu et al.(2011)]lodieu2011 Lodieu, N., Dobbie, P. D., & Hambly, N. C. 2011, A&A, 527, A24 [Luhman et al.(2003)]luhman2003 Luhman, K. L., Stauffer, J. R., Muench, A. A., et al. 2003, ApJ, 593, 1093 [Mardling and Aarseth(2001)]mardling2001 Mardling, R. A. & Aarseth, S. J. 2001, MNRAS, 321, 398 [Mason et al.(2022)]mason2022 Mason, B.D., Wycoff, G.L., Hartkopf, W.I., Douglass, G.G., Worley, C.E. 2022, VizieR Online Data Catalog: The Washington Visual Double Star Catalog [Martín et al.(1996)]martin1996 Martín, E. L., Rebolo, R., & Zapatero-Osorio, M. R. 1996, ApJ, 469, 706 [Mikkola(1983)]mikkola1983 Mikkola, S. 1983, MNRAS, 203, 1107 [Mikkola(1984a)]mikkola1984a Mikkola, S. 1984a, MNRAS, 207, 115 [Mikkola(1984b)]mikkola1984b Mikkola, S. 1984b, MNRAS, 208, 75 [Offner et al.(2022)]offner2022 Offner, S.S.R., Moe, M., Kratter, K.M. et al. 2022, in Protostars and Planets VII, Astron. Soc. Pac. Conf. Ser. 534, 275 [Pettersson et al.(2014)]pettersson2014 Pettersson, B., Armond, T., Reipurth, B. 2014, A&A, 570:A30 [Pineda et al.(2015)]pineda2015 Pineda, J.E., Offner, S.S.R., Parker, R.J., Arce, H.A., Goodman, A.A. et al. 2015, Nature, 518, 213 [Prato et al.(2001)]prato2001 Prato, L., Ghez, A.M., Piña, R.K. et al. 2001, ApJ, 549:590 [Pratt and Gledhill(1880)]pratt1880 Pratt, H. & Gledhill, J. 1880, The Observatory, 3, 637 [Raghavan et al.(2010)]raghavan2010 Raghavan, D., McAlister, H. A., Henry, T. J. et al. 2010, ApJS, 190, 1 [Reipurth and Mikkola(2015)]reipurth2015 Reipurth, B. & Mikkola, S. 2015, AJ, 149:145 [Reipurth and Zinnecker(1993)]reipurth1993 Reipurth, B. & Zinnecker, H. 1993, A&A, 278, 81 [Reipurth et al.(2014)]reipurth2014 Reipurth, B., Clarke, C.J., Boss, A.P. et al. 2014, in Protostars and Planets VI, eds. H. Beuther et al., Univ. Arizona Press, p. 267 [Reipurth et al.(2010)]reipurth2010 Reipurth, B., Mikkola, S., Connelley, M., Valtonen, M. 2010, ApJ, 725, L56 [Riddle et al.(2015)]riddle2015 Riddle, R.L., Tokovinin, A., Mason, B.D. et al. 2015, ApJ, 799:A4 [Rowden et al.(2020)]rowden2020 Rowden, P., Borkovits, T., Jenkins, J.M. et al. 2020, AJ, 160:76 [Ryu et al.(2017)]ryu2017 Ryu, T., Leigh, N.W.C., Perna, R. 2017, MNRAS, 467, 4447 [Sadavoy and Stahler(2017)]sadavoy2017 Sadavoy, S.I. & Stahler, S.W. 2017, MNRAS, 469, 3881 [Schaefer et al.(2016)]schaefer2016 Schaefer, G.H., Hummel, C.A., Gies, D.R. et al. 2016, AJ, 152, 213 [Schlawin et al.(2014)]schlawin2014 Schlawin, E., Herter, T. L., Henderson, C., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9147, Ground-based and Airborne Instrumentation for Astronomy V, ed. S. K. Ramsay, I. S. McLean, & H. Takami, 91472H [Schlieder et al.(2012)]schlieder2012 Schlieder, J. E., Lepine, S., Rice, E., et al. 2012, AJ, 143, 114 [Shappee et al.(2014)]shappee2014 Shappee, B.J., Prieto, J.L., Grupe, D. et al. 2014, ApJ, 788, 48 [Siess et al.(2000)]siess2000 Siess, L., Dufour, E., & Forestini, M. 2000, A&A, 358, 593 [Soderblom et al.(1993)]soderblom1993 Soderblom, D. R., Jones, B. F., Balachandran, S., et al.
1993, , 106, 1059 [Sterzik and Durisen(1998)]sterzik1998 Sterzik, M.F. & Durisen, R.H. 1998, A&A, 339, 95 [Suárez et al.(2017)]suarez2017 Suárez, G., Downes, J. J., Román-Zúñiga, C., et al. 2017, , 154, 14 [Taylor(2005)]taylor2005 Taylor, M. B. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 347, Astronomical Data Analysis Software and Systems XIV, ed. R. E. P. Shopbell, M. Britton, 29 [Todorov et al.(2010)]todorov2010 Todorov, K., Luhman, K.L., McLeod, K.K. 2010, ApJL, 714:L84 [Tody(1986)]tody1986 Tody, D. 1986, Proc. SPIE Conf., Ser. Vol. 627, Instrumentation in Astronomy IV, ed. Crawford, D.L., Bellingham, p. 733 [Tody(1993)]tody1993 Tody, D. 1993, in ASP Conf. Ser. Vol. 52, Astronomical Data Analysis, Software and Systems II, eds. Hanisch, R.J. et al., Astron. Sc. Pac., San Francisco, p.173 [Tokovinin(2014)]tokovinin2014 Tokovinin, A. 2014, AJ, 147:A87 [Tokovinin(2020)]tokovinin2020 Tokovinin, A. 2020, AstL, 46, 612 [Tokovinin(2021)]tokovinin2021 Tokovinin, A. 2021, Universe, 7, 352 [Tokovinin et al.(2022)]tokovinin2022 Tokovinin, A., Mason, B.D., Mendez, R.A., Costa, E. 2022, AJ, 164:58 [Torres et al.(2013)]torres2013 Torres, G., Ruiz-Rodriguez, D., Badenas, M. et al. 2013, ApJ, 773:A40 [Torres-Robledo et al.(2020)]torres-robledo2020 Torres-Robledo, S., Briceño, C., Quint, B., & Sanmartim, D. 2020, in Astronomical Society of the Pacific Conference Series, Vol. 522, Astronomical Data Analysis Software and Systems XXVII, ed. P. Ballester, J. Ibsen, M. Solar, & K. Shortridge, 533 [Vaessen and van Roestel(2024)]vaessen2024 Vaessen, T. & van Roestel, J. 2024, A&A, 682, A164 [Valtonen and Karttunen(2006)]valtonen2006 Valtonen, M. & Karttunen, H. 2006, The Three-Body Problem, Cambridge University Press [Valtonen and Mikkola(1991)]valtonen1991 Valtonen, M. & Mikkola, S. 1991, ARAA, 29, 9 [van Albada(1968a)]vanalbada1968a van Albada, T.S. 1968a, Bull. Astr. Inst. Netherlands, 19, 479 [van Albada(1968b)]vanalbada1968b van Albada, T.S. 1968b, Bull. Astr. Inst. Netherlands, 20, 57 [Wang et al.(2018)]wang2018 Wang, J., David, T.J., Hillenbrand, L. et al. 2018, ApJ, 856:A141 [Weaver and Babcock(2004)]weaver2004 Weaver, W.B. & Babcock, A. 2004, PASP, 116, 1035 [White and Basri(2003)]white2003 White, R. J., & Basri, G. 2003, , 582, 1109 [White et al.(1999)]white1999 White, R.J., Ghez, A.M., Reid, I. Neill, Schultz, G. 1999, ApJ, 520, 811 [Whitworth(2001)]whitworth2001 Whitworth, A. P. 2001, in IAU Symp. 200, The Formation of Binary Stars, eds. H. Zinnecker & R.D. Mathieu, 200, 33 [Zasche et al.(2019)]zasche2019 Zasche, P., Uhlař, R., Svoboda, P., et al. 2019, , 158, 95. [Zúñiga-Fernández et al.(2021)]zuniga-fernandez2021 Zúniga-Fernández, S., Olofsson, J., Bayo, A. et al. 2021, A&A, 655, A15 cccccl[ht!] 
SOAR Goodman GHTS Observations of Haro 5-2

Component  UT Date (yyyy-mm-dd)  Setup     Slit (")  Binning  Exposures (N × s)
A          2021-10-20            400M1     1.0       2×2      3×120
A          2021-10-20            400M2     1.0       2×2      3×120
B          2021-10-20            400M1     1.0       2×2      3×120
B          2021-10-20            400M2     1.0       2×2      3×120
B          2021-10-20            400M1     1.0       2×2      3×300
B          2021-10-20            400M2     1.0       2×2      3×300
A          2021-10-21            2100_650  0.45      1×2      3×900
B          2021-10-21            2100_650  0.45      1×2      3×1200
A          2021-11-19            400M1     1.0       2×2      3×120
A          2021-11-19            400M2     1.0       2×2      3×120
B          2021-11-19            400M1     1.0       2×2      3×300
B          2021-11-19            400M2     1.0       2×2      3×300
A          2021-11-19            2100_650  0.45      1×2      3×900
B          2021-11-19            2100_650  0.45      1×2      3×1200
A          2022-11-10            400M1     1.0       2×2      3×120
B          2022-11-10            400M1     1.0       2×2      1×300
A          2022-11-10            400M2     1.0       2×2      1×120
B          2022-11-10            400M2     1.0       2×2      1×300
A          2023-01-07            400M1     1.0       2×2      1×200
B          2023-01-07            400M1     1.0       2×2      1×400
A          2023-01-07            400M2     1.0       2×2      1×200
B          2023-01-07            400M2     1.0       2×2      2×400
A          2023-03-10            400M1     1.0       2×2      3×200
B          2023-03-10            400M1     1.0       2×2      2×600

Equivalent Widths in SOAR low-resolution Spectra of Haro 5-2 (all equivalent widths in Å)

Component  SpT   Type  UT Date     W(Hα)      W(Hβ)      W(Li I)  W(Na I)  W(CaII 8498)  W(CaII 8542)  W(CaII 8662)
A          M2.5  W     2021-10-20  -4.3±0.1   -2.7±0.1   0.4±0.1  1.8±0.1  0.2±0.1       1.2±0.1       0.9±0.1
A          M2.5  W     2021-11-19  -4.1±0.2   -2.6±0.1   0.4±0.1  1.7±0.1  0.2±0.1       1.3±0.1       1.0±0.1
A          M2.5  W     2022-11-10  -3.7±0.1   -3.0±0.1   0.2±0.1  1.2±0.1  0.3±0.1       0.5±0.1       0.6±0.1
A          M2.5  W     2023-01-07  -4.2±0.2   -3.1±0.1   0.3±0.1  1.8±0.1  0.6±0.1       1.1±0.1       0.9±0.1
A          M2.5  W     2023-03-10  -3.3±0.1   -1.8±0.1   0.2±0.1  ...      ...           ...           ...
B          M3    C     2021-10-20  -20.3±0.3  -5.8±1.3   0.4±0.1  1.8±0.1  0.1±0.1       0.3±0.2       0.4±0.1
B          M3    C     2021-11-19  -37.5±0.8  -12.0±0.5  0.3±0.1  1.7±0.1  -2.34±0.1     -1.6±0.1      -1.3±0.1
B          M3    C     2022-11-10  -11.7±0.1  -4.3±0.1   0.3±0.1  1.2±0.1  -0.6±0.1      -0.3±0.1      -0.3±0.1
B          M3    C     2023-01-07  -24.6±0.7  -5.6±0.1   0.3±0.1  1.5±0.1  -1.2±0.1      -0.9±0.1      -1.0±0.1
B          M3    C     2023-03-10  -24.6±2.0  -6.9±2.0   0.2±0.1  ...      ...           ...           ...

Notes: (1) Type: C = Classical TTS; W = Weak-line TTS. (2) For the Na I doublet we report the combined equivalent width of the 8183 Å and 8195 Å lines.

Hα Emission Line in SOAR high-resolution Spectra of Haro 5-2

Component  UT Date     W(Hα) (Å)    W10(Hα) (km s^-1)
Haro 5-2A  2021-10-21  -7.08±0.13   198.6±3.6
Haro 5-2A  2021-11-19  -3.69±0.03   165.7±2.7
Haro 5-2B  2021-10-21  -18.73±0.24  420.8±2.8
Haro 5-2B  2021-11-19  -37.73±0.62  552.2±2.9

Separation and Position Angles of Haro 5-2 Pairs

Pair   PA (deg)   Sep. (")       Proj. Sep. (AU)  ΔI (mag)
Aa,Ba  124.9±0.8  2.6130±0.0008  975              ...
Aa,Ab  54.7±0.8   0.1942±0.0008  72               1.0
Ba,Bb  233.0±1.3  0.3504±0.0013  131              0.7

Notes: (1) Observed by Andrei Tokovinin on 2021.7983 with the HRCam speckle camera at the SOAR telescope in good seeing <cit.>. (2) The data for the Ba,Bb pair are noisy. (3) Projected separations for a distance of 373 pc.
http://arxiv.org/abs/2405.09907v1
20240516085609
End-to-end Optimization of Optical Communication Systems based on Directly Modulated Lasers
[ "Sergio Hernandez F.", "Christophe Peucheret", "Francesco Da Ros", "Darko Zibar" ]
eess.SP
[ "eess.SP" ]
§ INTRODUCTION

Directly modulated lasers (DMLs) are a compelling option for short-reach intensity modulation/direct detection (IM/DD) systems thanks to their low energy consumption, small form factor and reduced cost <cit.>. The aim in such systems is to maximize the symbol rate R_s while maintaining sufficient received optical power P_rec in order to increase the net data throughput. This requires the laser to operate in the large-signal regime, where the high modulation index allows a high extinction ratio and peak-to-peak output power. However, the large-signal DML dynamics introduce significant waveform distortions, potentially resulting in nonlinear intersymbol interference (ISI) as the symbol rate is increased <cit.>. This is due to the modulation-induced changes in carrier and photon concentration within the laser active region, which cause nonlinear memory effects in the output optical field. Consequently, the DML's response time sets a limit on its modulation bandwidth, restricting the laser's throughput <cit.>. Although increasing the bias current to the laser can enhance the modulation bandwidth, it comes at the expense of increased energy consumption and a lower extinction ratio, resulting in a receiver power penalty.

Another way to partially overcome bandwidth limitations is to tune the transmitter (TX), receiver (RX), digital signal processing (DSP) and laser-driving configurations (bias and peak-to-peak current to the laser) separately. Yet, the large number of parameters to be optimized within the DSP pipeline (pulse shaping, pre-distortion, receiver-side equalization) may require a large number of simulation/system evaluations to converge towards optimal configurations, making such an approach intractable in cases where such evaluations are time-consuming.

Recent advances in DML development have focused on delivering >100 Gbps bit rates with low energy consumption (<1 pJ/bit) through the use of coupled-cavity laser structures <cit.>. The additional optical feedback enabled by such structures allows higher modulation bandwidths through the use of photon-photon resonance (PPR) and detuned loading. Impairment compensation in DML-based systems <cit.> has, however, relied mainly on separate transmitter-side pre-distortion and receiver-side optimization, omitting the potential gains of optimizing TX and RX jointly. This stems from the fact that the large-signal DML dynamics are governed by nonlinear differential equations, hindering the calculation of analytical gradients and therefore the simultaneous optimization of transmitter and receiver using standard gradient-based optimization techniques.

An accurate, differentiable model of the DML dynamics can prove useful in this scenario, allowing the propagation of gradients between TX and RX while accounting for the DML-induced signal impairments. Data-driven modeling is a viable option in this context, provided that sufficient data is available, yet the choice of model structure (Volterra filters, neural networks) can pose a challenge. In <cit.>, we conduct a comparison between model structures using the laser rate equations as the source of DML waveform data. The results show the potential of transformer-based neural networks <cit.> in the prediction of the DML dynamics.

DSP algorithms have been used extensively to compensate for transmission impairments within optical communication systems.
End-to-end (E2E) learning has attracted special interest in this context, as it enables the simultaneous optimization of the TX (constellation and pulse shaping) and RX, given that a differentiable model of the system under test is available <cit.>. Through alternative approaches such as gradient-free optimization and reinforcement learning, it is also possible to avoid modeling the communication channel and to use local gradient approximations <cit.>. The focus in E2E learning is to substitute one or several functions of the DSP pipeline at both ends of the channel by an adaptive, optimizable function <cit.>. Autoencoders (AEs) are especially popular in this context, as they allow the compression and decompression of data <cit.> in a similar fashion to how communication systems map symbols to optical waveforms and vice versa <cit.>. Approaches such as geometric constellation shaping (GCS), which have become standard in state-of-the-art long-haul coherent systems, can be implemented based on this principle <cit.>.

In this paper, we demonstrate E2E optimization of the TX and RX DSP, together with the bias and peak-to-peak modulation currents, for impairment compensation in DML systems. The proposed simulation approach is shown in <ref>. The modeling of the DML dynamics is performed using a data-driven surrogate model, with the numerical solution of the rate equations as the source of data. The waveform data generation uses 4-level pulse amplitude modulation (4PAM) symbols in a back-to-back (B2B) simulation. The proposed AE-based approach is used to optimize the DSP configurations at the TX (GCS, pulse shaping) and RX (equalization, symbol detection) simultaneously. In addition, the input current offset I_bias and the peak-to-peak modulation current I_pp are used as learnable parameters. This provides insight into the optimal compromise between the extinction ratio and the distortion of the optical waveform. In addition to the AE, three more approaches are included as benchmarks: the uncompensated system without equalization, a receiver-side linear feed-forward equalizer (FFE), and a second E2E approach with a learnable pulse shape (LPS) and RX-side Volterra nonlinear equalization (VNLE), but excluding I_bias and I_pp from the optimization. The results show the advantage of the joint optimization of bias, modulation current and DSP for DML systems, with the AE yielding performance gains over RX-only equalization and a considerable advantage over the VNLE setup in terms of symbol error rate (SER) and mutual information (MI).

This paper is structured as follows: Section <ref> describes the dynamic behaviour of DMLs and the state of the art in optical communication laser development. Section <ref> analyzes the available algorithms for the modeling of DMLs in terms of complexity, interpretability and compatibility with gradient-based optimization. The fundamentals of E2E learning and its application to DMLs in the literature are reviewed in Section <ref>. The motivation and structure of the proposed DML model and the optimization approach built around it can be found in Section <ref>. The results of the E2E-optimized B2B DML system simulation are described in Section <ref>. The conclusions are summarized in Section <ref>.
§ DML DYNAMICS

The rate equations governing the photon density S(t), carrier density N(t) and phase ϕ(t) in DMLs are given by <cit.>:

dS(t)/dt = Γ g_0 (N(t) − N_0) S(t)/(1 + ε S(t)) − S(t)/τ_p + Γ β N(t)/τ_n ,

dN(t)/dt = I(t)/(qV) − N(t)/τ_n − g_0 (N(t) − N_0) S(t)/(1 + ε S(t)) ,

dϕ(t)/dt = (α/2) [ Γ g_0 (N(t) − N_0) − 1/τ_p ] ,

where Γ is the mode confinement factor, g_0 is the gain slope constant, N_0 is the carrier density at transparency, ε is the gain compression factor, τ_p is the photon lifetime, β is the fraction of spontaneous emission coupled into the lasing mode, τ_n is the electron lifetime, I(t) is the injected current, q is the electron charge, V is the active layer volume and α is the linewidth enhancement factor. The output optical power is given by:

P(t) = S(t) V η_0 h ν / (2 Γ τ_p) ,

where η_0 is the differential quantum efficiency, h is Planck's constant, and ν is the unmodulated optical frequency. Thus, the modulated optical power P(t) is proportional to the photon density S(t), while the rate of change of the optical phase (the instantaneous angular frequency) is proportional to the carrier density over threshold.

§.§ Small-signal regime

The term N(t) − N_0 in <ref> hints at the reservoir-like behaviour of the carrier density fluctuation in the laser active region. When I reaches the laser's threshold current level I_th (optical gain overcomes cavity losses), the carrier density reaches its maximum steady-state level N_0 (the reservoir becomes full). The carrier excess generated by I − I_th "overflows" the reservoir, generating net stimulated emission of photons. The higher I − I_th is, the more photons are emitted, and through <ref> the larger P(t) will be. This also entails that the stimulated recombination of carriers increases, bringing N(t) back to its threshold level N_0 after some transient period. The interaction between carriers and photons creates a damped oscillatory behaviour in S(t) and N(t) (carrier-photon resonance) when I(t) is modulated, as the increase of one drives the decrease of the other until steady state is reached. The optical gain generated by the laser, which depends on N(t) and S(t), is given by <ref>:

g(N,S) = (g_0/(1 + ε S(t))) ln( (N(t) + N_s)/(N_0 + N_s) ) ,

where N_s is a fitting parameter used to ensure that the logarithm is finite and defined for N(t) > 0. Both g_0 and N_s are usually fitted from the measured response of the laser <cit.>. Given that the optical gain g is monotonically (although nonlinearly) related to N and S, it can be assumed that for small variations of the carrier density ΔN the optical gain variation is proportional to ΔN and ΔS. This leads to g = g_th + a ΔN − a_p ΔS, using the local slopes a = ∂g/∂N (the differential gain) and a_p = ∂g/∂S. The approximation yields accurate results as long as I_pp ≪ I_bias and I > I_th, the small-signal regime conditions. Although such conditions are usually impractical in a communication setting, the small-signal analysis provides valuable insight into the characteristics of a DML. It can be shown that, under the small-signal approximation and under sinusoidal excitation at angular frequency ω, the frequency response of the laser can be expressed as:

H(ω) = ω_R^2 / (ω_R^2 − ω^2 + jωγ) ,

where ω_R = 2π f_R, with f_R the natural resonant frequency of the system, j is the imaginary unit and γ is the laser's damping factor. Assuming operation above threshold, f_R can be approximated as <cit.>:

f_R ≈ (1/2π) √( v_g a S̄ / τ_p ) = (1/2π) √( (Γ v_g a / qV) η_i (I_bias − I_th) ) ,

where S̄ denotes the average photon density.
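As a concrete illustration of the dynamics above, the following minimal sketch integrates the photon and carrier rate equations for a current step and evaluates the small-signal f_R of <ref>. It is not the solver used in this work, and all parameter values (confinement factor, gain slope, lifetimes, active volume, group velocity, differential gain, injection efficiency) are hypothetical placeholders chosen only to be of a plausible order of magnitude.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical DFB-like parameters (illustrative orders of magnitude only).
GAMMA, G0, N0 = 0.3, 1e-12, 1.0e24      # confinement, gain slope [m^3/s], transparency [1/m^3]
EPS, TAU_P, TAU_N = 1e-23, 2e-12, 1e-9  # gain compression [m^3], photon/carrier lifetimes [s]
BETA, Q, VOL = 1e-4, 1.602e-19, 1e-16   # spont. emission fraction, charge [C], active volume [m^3]

def rate_equations(t, y, i_of_t):
    """Photon/carrier rate equations; the phase equation is omitted (back-to-back study)."""
    s, n = y
    gain = G0 * (n - N0) / (1.0 + EPS * s)
    ds = GAMMA * gain * s - s / TAU_P + GAMMA * BETA * n / TAU_N
    dn = i_of_t(t) / (Q * VOL) - n / TAU_N - gain * s
    return [ds, dn]

# Current step from 50 mA to 70 mA at t = 2 ns, integrated with a stiff solver.
drive = lambda t: 50e-3 if t < 2e-9 else 70e-3
sol = solve_ivp(rate_equations, (0.0, 10e-9), y0=[1e20, 1.5e24],
                args=(drive,), method="Radau", max_step=1e-12)

# Small-signal resonance estimate, f_R ~ (1/2pi) * sqrt(v_g * a * S_bar / tau_p).
V_G, A_DIFF = 8.5e7, 5e-20              # group velocity [m/s], differential gain [m^2]
s_bar = np.mean(sol.y[0, sol.t > 8e-9]) # photon density after the transient has decayed
f_r = np.sqrt(V_G * A_DIFF * s_bar / TAU_P) / (2 * np.pi)
print(f"estimated f_R: {f_r / 1e9:.1f} GHz")

The damped oscillations discussed next appear directly in sol.y[0] after each current transition.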
f_R is also called the relaxation-oscillation or carrier-photon resonance frequency, as it determines the frequency of the transient damped oscillations created in S(t) and N(t) by the aforementioned carrier-photon interaction under current modulation. γ can be expressed as:

γ = K f_R^2 + γ_0 ,

where K is the "K-factor" governing the laser response at high modulation frequencies and γ_0 is the damping offset. Both parameters are usually obtained by fitting the modulation response curve (<ref>). The linear relation between ω_R^2 and (I_bias − I_th) describes the impact of I_bias on the modulation response of the laser. Such a relation is shown in <ref>, where various frequency responses |H(ω)| are represented for different values of I_bias. It becomes apparent that the response shows a peak (which coincides approximately with f_R) and rapid damping for frequencies beyond it. A 3-dB bandwidth f_3dB can be defined as the frequency at which the magnitude of the modulation response decreases to half of its DC value. For small γ, the approximation f_3dB ≈ 1.55 f_R is often used. The small-signal intensity modulation (IM) of DMLs is therefore dictated by the I_bias level, with higher levels associated with a higher 3-dB bandwidth, although with diminishing returns due to the (I_bias − I_th)^(1/2) factor in <ref>.

§.§ Large-signal regime

The cost and power constraints of short-reach IM/DD systems favor amplifier-free operation. The large-signal regime describes the DML behaviour in most communication settings, where a high modulation index is desirable to overcome receiver noise while avoiding the use of amplifiers. Analytical solutions to the rate equations are unavailable in this regime, where I_pp and I_bias are comparable in magnitude. The most direct effect of a large I_pp is the introduction of signal intensity distortion due to relaxation oscillations. This effect is depicted in <ref>a, where overshoot and undershoot can be observed right after changes of the modulating current value. The larger instantaneous variation of N and S enhances the amplitude of the relaxation oscillations, introducing signal components uncorrelated with I. The instantaneous emission frequency ν can also show significant differences with respect to the small-signal regime, as shown in <ref>b, which depicts the variation of ν resulting from a current modulation. This is due to the signal chirp and relaxation oscillations resulting from the higher modulation index. Frequency chirping can critically limit the transmission distance over a dispersive optical fiber, due to the interaction between chirp and chromatic dispersion (CD), which induces pulse broadening of the envelope of the received optical signal. The effect of chromatic dispersion on a signal with nonlinear chirp is depicted in <ref>c and d, where the eye diagram of the simulated optical signal is shown before and after transmission over 2 km of standard single-mode fiber (SSMF). The detrimental impact of pulse broadening is twofold: it decreases the peak optical power of the pulses while introducing ISI between neighboring symbols. It therefore entails a substantial worsening of the effective SNR on top of the linear fiber-induced attenuation. The instantaneous chirp expression <cit.> is obtained from the derivative of <ref>:

Δν = (α/4π) [ (1/P(t)) dP(t)/dt + κ P(t) ] ,

and the factor κ is defined by:

κ = 2Γε / (η h ν_0 V) ,

where h is Planck's constant.
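The two contributions in <ref> (identified just below as transient and adiabatic chirp) can be evaluated numerically from a sampled power waveform, as in the short sketch below; the values of α and κ, and the smoothed power step used as P(t), are illustrative assumptions rather than the parameters of an actual device.

import numpy as np

def instantaneous_chirp(p, dt, alpha=3.0, kappa=1e13):
    """Split the chirp expression into its two terms for a sampled power trace p [W].
    alpha (linewidth enhancement) and kappa [1/(W s)] are hypothetical values."""
    derivative_term = (alpha / (4 * np.pi)) * np.gradient(p, dt) / p
    power_term = (alpha / (4 * np.pi)) * kappa * p
    return derivative_term, power_term

dt = 1e-12                                                 # 1 ps sampling grid
t = np.arange(0.0, 2e-9, dt)
p = 1e-3 * (1.0 + np.tanh((t - 0.5e-9) / 50e-12)) + 1e-6   # smooth 0 -> 2 mW power step
transient, adiabatic = instantaneous_chirp(p, dt)
print("peak transient chirp [GHz]:", transient.max() / 1e9)
print("peak adiabatic chirp [GHz]:", adiabatic.max() / 1e9)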
The first term in the sum of <ref> is known as transient chirp, while the second is called adiabatic chirp. The prevalence of each chirp contribution depends on the dynamics of P, and therefore on the variation of the input current I. The derivative term of the transient chirp makes it sensitive to fast variations of the optical power. This makes it especially prevalent in high-R_s conditions, where the rapid variation of the power makes its derivative large. The use of sharp pulses, such as square pulses, induces a high instantaneous derivative and a high relaxation-oscillation amplitude, leading to additional transient chirp in the output waveform. The adiabatic chirp becomes prevalent under high I_bias and optical output powers, especially at lower R_s. Chirp in DMLs is therefore unavoidable, but the careful selection of certain configuration parameters (I_pp, I_bias, pulse shape) can help mitigate it.

The limited modulation bandwidth of DMLs is associated with several factors, including the aforementioned carrier-photon interaction. Such factors introduce linear and nonlinear memory effects that become apparent as R_s increases. Although <ref> gives meaningful insight into the characteristics of the DML, the DML response depends on the laser configuration, including I_bias, I_pp and pulse shaping, and can only be obtained through numerical simulations. Additionally, the susceptibility of the system to timing and amplitude impairments varies depending on the channel characteristics. Eye diagrams (<ref>c and d) are therefore widely used in the evaluation of experimental DML-based systems in the large-signal regime, as they allow the signal degradation at the receiver to be assessed qualitatively.

§.§ Coupled-cavity effects

Through the use of additional passive and active sections in the laser structure, it is possible to overcome the physical limitations that hinder the DML modulation bandwidth beyond 40 GHz <cit.>. Among the various techniques used, detuned loading and photon-photon resonance (PPR) have delivered promising results in several implementations. Detuned loading is based on the combined effect of distributed reflectors, mostly distributed Bragg reflectors (DBRs), and modulation-induced chirping in order to enhance the dynamic response of the DML <cit.>. This is achieved through the design of the gain and reflective sections such that the range of main lasing wavelengths is detuned from the main lobe of the DBR reflectivity spectrum. <ref> shows the instantaneous lasing frequencies ν_0 and ν_1 associated with two different pump current levels I_0 and I_1. Both frequencies fall in significantly different regions of the DBR reflectivity spectrum, leading to a difference in mirror loss and in the effective differential gain a_eff. The approximation of f_R in <ref> must therefore be modified to account for the detuning Δλ from the DBR central wavelength:

f_R,DL ≈ (1/2π) √( (v_g S/τ_p) Γ_z(Δλ) g_0 (1 + jα) ) ,

where Γ_z(Δλ) is the complex reactive confinement factor in the active section <cit.>. The larger the imaginary part of this factor, the larger the impact of α (proportional to chirp), enhancing f_R. When Δλ = 0, Γ_z(Δλ) becomes real, and therefore no f_R advantage is obtained from the passive section. Detuned loading is thus an effective method to exploit the FM derived from direct modulation by enhancing the IM, resulting in a higher modulation bandwidth.
This is not only desirable from the IM bandwidth perspective, but it can also mitigate chirp, leading to reduced distortion after fiber transmission <cit.>. PPR bandwidth enhancement is based on the interaction between longitudinal modes in the laser cavity. When such modes are close in wavelength, they can still be coupled <cit.>, leading to a potentially faster modulation response compared to the single-mode configuration. This is usually done through amplification of a secondary side mode using a coupled reflective cavity, as in the case of detuned loading. Both approaches are mutually compatible, resulting in further enhancements of the modulation bandwidth. In the case of PPR, a key design goal is to tune the grating phase to match the round-trip phases of the main mode and the side mode, forcing constructive interference between them. In this fashion, Γ also shows a time dependence, due to the time-varying interaction between the modes in the cavity. The combined gain of the two modes makes it possible to force a second resonance peak, of higher frequency than the carrier-photon resonance, to appear in the modulation response of the DML.

§.§ High-speed DMLs

The modulation performance of DMLs, based on both in-plane laser <cit.> and vertical-cavity surface-emitting laser (VCSEL) <cit.> structures, has been extensively developed through cavity design. It must be noted that the commercial interest of DMLs often resides in their energy efficiency and cost, and considerations like reliability, output power and thermal performance play a role in their development. Their comparison must therefore go beyond modulation bandwidth, and a more holistic assessment must be made <cit.>. Within the cavity design space, several structures and substrates have been employed in the development of high-bandwidth (f_3dB) DMLs. A >50 GHz bandwidth was achieved in <cit.> using push-pull modulation, where the current is injected at two locations within the cavity. The driving of the laser is designed such that an increase in one of the currents leads to a decrease in the other, and vice versa. Through the combined use of an MQW active region and a single DBR section enhancing detuned loading and PPR, an uncooled >100 GHz bandwidth was obtained in <cit.>, maintaining >70 GHz at 85°C. The combination of detuned loading and PPR has been used in several works, obtaining f_3dB between 40 and 65 GHz <cit.>. As a general overview, several configurations have achieved bandwidths over 50 GHz, but 100 GHz has been reached using more complex and costly fabrication processes <cit.>.

§ SYSTEM MODELING

Modeling a dynamic system is a fundamental first step towards its optimization. In the case of DMLs, several alternatives are available depending on the computational resources, the time constraints and the desired accuracy and exhaustiveness of the modeling. In this section we introduce some of the most common approaches, sorting them from the more physics-intensive ones to the purely data-driven.

§.§ Parameter extraction and ODE solvers

Given that the previously introduced rate equations (<ref>) yield accurate predictions of the DML response, in line with experimental verification <cit.>, they may seem the most direct way of estimating the system response.
However, they entail three main challenges:

* Most of their parameters are not easily measurable
* They require the use of numerical ordinary differential equation (ODE) solvers in the large-signal regime
* Analytical gradients are unavailable due to the computational structure of ODE solvers

The problem of parameter extraction has been treated extensively in the literature <cit.>. Iterative techniques allow all the relevant parameters to be obtained from experimentally acquired data, such as the lasing spectrum, the modulation response (<ref>), the static light-current (L-I) curve (<ref>) or the spontaneous emission spectrum of the laser. Through the use of machine learning, it is possible to automate this process, simply by providing the algorithm with the necessary figures of merit and avoiding the manual calculation of parameters <cit.>. Yet, quantities like N_0 are not directly measurable and must be estimated through related parameters, leading to potential inaccuracy.

Even when all the necessary parameters are available, the use of ODE solvers may prove impractical in some scenarios. The large values of S and N can lead to instability in the numerical calculation, causing divergence and failure of the method. This makes convergence analysis a must when using ODE solvers, and re-initialization of the calculation may be needed <cit.>. The sequential nature of the widely used Euler-based methods <cit.> makes parallelization challenging, leading to computational bottlenecks depending on the analyzed system. The largest advantage of such methods is their configurable precision, but the computational toll associated with high precision must also be considered. Lastly, ODE solvers make use of local gradient approximations in order to solve differential equations. This conflicts with the automatic differentiation concept that many machine learning and optimization frameworks rely on, where systems are usually built from functions with known analytical derivatives <cit.>, making gradient calculations faster. This drawback makes an automatically differentiable model of DMLs desirable, as it may allow the simultaneous optimization of large parameter spaces in a relatively reduced computation time. The next subsections describe some of the available alternatives for the modeling of DMLs.

§.§ Circuit-equivalent models

Circuit-equivalent models <cit.> represent a less complex alternative to rate equation solvers, while sharing the interpretability of physics-based models. They make use of electrical components (amplifiers, capacitors, resistors, etc.) to model the dynamic behaviour of the laser by relating their characteristics to the laser's rate equation parameters. Thus, they are able to reproduce the behaviour of a wide variety of lasers, as long as the corresponding parameters can be measured. The use of current and voltage signals as the sole dynamic variables makes them an ideal candidate for circuit simulators, thus facilitating the task of building a numerical simulation. Some works have included the Langevin phase and intensity noise sources in the simulation <cit.>, thus introducing significant acceleration with respect to noisy rate equations, where stochastic local gradients can worsen the performance of adaptive-step solvers. On the other hand, circuit-based models share some of the pitfalls of ODE solvers. Firstly, although circuit simulators are optimized for the task, exhaustive models can be relatively complex, introducing a significant computational burden to the simulation <cit.>.
This becomes especially true in optimization scenarios, where circuit-specialized software cannot be utilized and the numerous differential relations between variables must be obtained numerically, i.e., with ODE solvers. In conclusion, although they can be useful in the design and simulation of DMLs, circuit-equivalent models are so far impractical as part of a gradient-based optimization pipeline, and automatically differentiable alternatives should be developed for this purpose <cit.>.

§.§ Interpretable data-driven modeling

The mathematical modeling of dynamic systems (based on underlying physics, data, or a mixture of both) has been studied extensively in the control theory community, in what is usually called system identification <cit.>. A fundamental advantage of data-driven modeling is its compatibility with numerical optimization techniques: such models are mostly based on continuous, easily differentiable functions that provide numerical stability in the search for optimal configurations. Some of the most popular approaches for discrete linear systems are variations of autoregressive models, such as autoregressive integrated moving average (ARIMA) models <cit.>. These models combine three techniques: the evaluation of past outputs to predict future ones (AR), the discrete differencing of the time series in order to force its stationarity (I), and a moving average (MA) of the past prediction errors. However, due to the nonlinear nature of the DML behaviour, many classical system identification models cannot be applied. Additionally, ARIMA models are based on endogenous time series, i.e., the sequence they make predictions on is determined solely by its past values. In order to model nonlinear dynamics, higher-order temporal dependencies and/or nonlinear activations are often used. Nonlinear autoregressive exogenous (NARX) models <cit.> are a common choice in this case, as they combine linear operations with a static nonlinearity that allows a larger function space to be modeled. Defining an input u(n), an estimated output ŷ(n), an input feature vector x(n), a learnable parameter space θ and a static nonlinear function h, a generic NARX model expression can be written as:

x(n) = [ŷ(n−1), ..., ŷ(n−N_y), u(n), ..., u(n−N_u)] ,

ŷ(n, θ) = h(x(n), θ) ,

where N_u and N_y are the input and output memory lengths, respectively. This general definition includes a large variety of functions, with different nonlinearities, recursivity schemes and complexity levels. Volterra filters are often used in communication-oriented DSP to model and/or compensate for the dynamics of nonlinear systems <cit.>. They are among the simplest NARX models, as they do not employ recursion, and nonlinearity is achieved through the multiplication of past input samples with each other. They are based on the Volterra series <cit.>, a variation of the Taylor series where, instead of evaluating the analyzed function around a single point, it is evaluated over an infinite range of past samples. Given the previous definitions of ŷ(n) and u(n), the discrete-time Volterra series can be evaluated as:

ŷ(n) = h_0 + ∑_k_1=0^∞ h_1(k_1) u(n−k_1) + ∑_k_1=0^∞ ∑_k_2=0^∞ h_2(k_1, k_2) u(n−k_1) u(n−k_2) + ...

where h_0, h_1, ..., h_n are the Volterra kernels and k_1, k_2, ..., k_n are the delays associated with each kernel order 1, 2, ..., n. As in the case of the Taylor series, increasing the order of the series delivers potentially higher accuracy in the representation, at the cost of increasing complexity <cit.>; a truncated filter of this kind is sketched below.
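The following minimal sketch implements such a truncated filter (second order, finite memory) and estimates its kernels by least squares on a toy nonlinear system; the memory length and the synthetic system are placeholder assumptions, and least squares is only one of several kernel estimation techniques.

import numpy as np

def volterra_features(u, mem):
    """Build the regressors of a 2nd-order Volterra filter with `mem` taps."""
    lags = np.stack([np.roll(u, k) for k in range(mem)], axis=1)  # column k holds u(n-k)
    lags[:mem, :] = 0.0                      # discard the wrap-around samples
    k1, k2 = np.triu_indices(mem)            # k1 <= k2: symmetric 2nd-order kernel
    second = lags[:, k1] * lags[:, k2]       # products u(n-k1) * u(n-k2)
    return np.hstack([np.ones((len(u), 1)), lags, second])

rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 4000)
y = 0.8 * u - 0.3 * np.roll(u, 1) + 0.2 * u * np.roll(u, 1)   # toy nonlinear system

X = volterra_features(u, mem=4)
kernels, *_ = np.linalg.lstsq(X, y, rcond=None)               # least-squares kernel estimate
print("NMSE:", np.mean((X @ kernels - y) ** 2) / np.var(y))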
This trade-off forces real implementations to truncate the Volterra series in both kernel order and delay, leading to Volterra filters. It must be noted that a Volterra filter of order 1 corresponds to the linear convolution operator, and the Volterra kernels can therefore be interpreted as higher-order impulse responses of the analyzed system. <ref> shows a comparison in terms of interpretability and potential complexity between data-driven and physics-driven modeling methods. The main challenge when using Volterra filters is the calculation of the optimal kernels. Several techniques have been developed for this purpose <cit.>, but numerical multivariate optimization can also be used at the expense of a higher computation time.

Hammerstein-Wiener (H-W) models <cit.> are an alternative to NARX-based system identification, providing design flexibility while including a larger number of hyperparameters to be tuned. They are based on the combination of an input static nonlinear function h, a linear infinite impulse response (IIR) filter z(n), defined by the real or complex coefficients p_j and q_i, and an output static nonlinear function g. The mathematical expression is:

w(n) = h[u(n)] ,

z(n) = ∑_i=1^N_z q_i z(n−i) + ∑_j=0^N_w p_j w(n−j) ,

ŷ(n) = g[z(n)] ,

where N_w and N_z are the numbers of input and output delays, respectively. The choice of h, g and the delay orders N_w, N_z must be tailored to the specific system to be modeled. Although the nonlinearity choice is usually hyperparameter-optimized among a pool of common functions (sigmoidal, piecewise continuous, wavelet), the linear transfer function can be estimated through gray-box modeling (the combination of a-priori physical knowledge with data-driven approaches) or numerical optimization; stochastic methods like the expectation-maximization (EM) algorithm can also be used <cit.>. One of the main advantages of H-W models resides in their interpretability, given that the linear transfer function governing them can be analyzed with traditional spectral analysis (Laplace/Z transforms, etc.) like any other linear filter. When the physics of the nonlinear system to be modeled are unknown or too complex to be represented in such a structure, purely data-based approaches like neural networks (NNs) are a popular choice <cit.>.

§.§ Neural networks

NNs are mathematical structures in which complex calculations are performed through the combination of large numbers of simpler operations <cit.>. Their potential resides in the use of relatively simple linear functions followed by nonlinear activations. Although, as in the case of Volterra filters, they can be considered NARX models, their advantage resides in their structural flexibility, which allows them to specialize depending on the task to be performed. The concatenation of simple mathematical operations within NNs leads to large, usually non-orthogonal parameter spaces. Such spaces must be optimized to meet a certain objective, abstracted into a loss or cost function. This is usually done through gradient-based numerical optimization techniques where, instead of calculating the overall function gradient analytically, it is approximated based on reduced subsets of the training data (mini-batches). This procedure accelerates the overall gradient calculation, leading to high-performance models without explicit physical knowledge of the system under investigation.
The main inconvenience of this procedure is, however, the lack of interpretability: the concatenation of nonlinearities within NNs makes it difficult to understand the interaction between the different parameters in the network. NNs have been widely used in the modeling of optical communication systems and subsystems <cit.>. Many of the NN architectures in the field have tried to embed some temporal context into the network, as many of the systems found in communications contain memory elements. Time-delay neural networks (TDNNs) <cit.> are a special case of feedforward neural networks (FFNNs), where no recursive element is added to the network. Instead, the input of the network is built as a sliding window of past samples, thus providing the network with temporal context while maintaining relatively low complexity. One-dimensional convolutional neural networks (CNNs) <cit.> work on the same principle as TDNNs, although they sometimes inherit pooling layers from their higher-dimensional counterparts. Another possible approach to this problem is the use of recurrent neural networks (RNNs), where the network output depends not only on the input features, but also on the state tensor in each neuron. This tensor gives the network encoded temporal context on the past inputs and outputs of the model, leading to potentially better prediction of systems with temporal dependencies. Gated recurrent units (GRUs) <cit.> and long short-term memory (LSTM) networks <cit.> are two of the main architectures in this paradigm. One major limitation of such networks is the modeling of long temporal dependencies, due to the exploding and vanishing gradient problem <cit.>. The transformer architecture has been used extensively due to its success in overcoming these problems in the natural language processing community <cit.>. This has encouraged its use in the modeling of fiber-optic channels <cit.>. Our work in <cit.> demonstrated the use of a variation of the transformer architecture, the convolutional attention transformer (CAT) <cit.>, in the modeling of DMLs. Based on the structural similarities between neural ODEs and the residual connections in transformers <cit.>, we use the outputs of the DML rate equations to obtain a differentiable DML model. The model compared favourably to hyperparameter-optimized Volterra filters, TDNNs and LSTMs <cit.>.

§ END-TO-END OPTIMIZATION

The throughput limitations of DMLs have also been studied from the DSP perspective. In this scope, two different trends can be distinguished: one aims to exploit the available bandwidth to increase spectral efficiency, while the other aims to compensate for the DML-induced waveform distortion. Discrete multi-tone (DMT) <cit.>, probabilistic constellation shaping (PCS) <cit.> and GCS are the dominant technologies in the former category. DMT (the baseband, guided-system equivalent of orthogonal frequency-division multiplexing, OFDM) aims to exploit the available bandwidth through the use of multiple digital narrow-band subcarriers, instead of a single one spanning the whole available spectrum. In non-flat spectrum conditions, such as those induced by PPR in DMLs, the SNR varies across different regions of the spectrum. The use of DMT allows a higher granularity in the configuration of each sub-channel, optimizing it to the local conditions of each narrow band. PCS aims to replace the uniform probability mass function (PMF) of the transmitted symbols by a different PMF in order to optimize a certain metric (energy per bit, SNR).
GCS <cit.> follows a similar approach, but modifies the energy allocated to each symbol instead of tweaking its probability. The combination of DMT and constellation shaping is usually called entropy loading <cit.>, and it allows the symbol distribution to be adapted to the spectral channel response, saving power in the high-SNR narrow bands while maximizing it in the low-SNR bands. Many DML implementations have included a combination of these technologies to increase the throughput of DMLs <cit.>. Pulse shaping is another source of throughput optimization. Faster-than-Nyquist (FTN) signaling <cit.> aims to exploit this by introducing a controlled amount of ISI in each symbol and correcting the introduced correlation on the RX side. This allows the effective throughput to be increased, as long as the receiver is able to compensate for both the TX- and the channel-induced ISI. <cit.> combines FTN with entropy loading for an increased spectral density within the DML bandwidth.

Within distortion compensation schemes, most approaches focus on either tailoring the peak-to-peak current to invert the nonlinear DML response or using pre-distortion and equalization techniques. <cit.> provides a linearization method to correct the DML nonlinearities, while <cit.> aims to suppress relaxation oscillations using ML approaches. Even optical filters can be designed through semi-analytical approaches to find optimal cavity configurations <cit.>. The development of DML-specific compensation approaches is also extensive, using different equalizer structures such as Volterra <cit.>, TDNN <cit.>, RNN <cit.> or deep belief networks <cit.>. Despite the remarkable performance leap obtained in some of the aforementioned data-driven approaches, their optimization is based on single-sided or sequential learning: they do not optimize the TX and RX sides jointly. In sequential learning, each side of the communication link (TX and RX) is optimized while keeping the configuration of the other fixed, thus avoiding the need for gradient propagation across the transmission channel. E2E learning <cit.> aims to condense several functions within the TX (constellation shaping, pulse shaping, bit labelling) and RX (EQ, symbol decoding) into a single neural network, where the link configuration is optimized simultaneously at both ends based on a certain metric (loss function). Thus, the performance of the configuration obtained through E2E learning is limited mainly by the number of DSP functions optimized and the accuracy of the modeling of the elements between TX and RX. This can pose a considerable challenge in the case of DML-based systems, where building a differentiable model of the DML is not straightforward, as discussed in the previous section. Gradient-free approaches, based on derivative-free optimizers or reinforcement learning, have been proposed for the training of E2E systems <cit.>. Nonetheless, they introduce severe computational overhead and their gradient approximations can lead to numerical optimization issues <cit.>. Several gradient-based approaches have been proposed for DML (VCSEL) systems, implementing GCS <cit.> and pre-distortion + EQ <cit.> based on data-driven DML models. The present work aims to jointly optimize the transmitter GCS and LPS and the receiver EQ together with the driving configuration of the DML (I_bias, I_pp), thus tailoring E2E learning to the specific characteristics of DML-based systems.
This entails extending our DML modeling work in <cit.>, obtaining a model that is able to predict the dynamics of the laser regardless of its biasing. The usual approach to E2E learning in communication systems is to use autoencoders (AEs) <cit.> as substitutes for the DSP pipeline <cit.>. This is due to their functional resemblance to a communication link: they compress information subject to physical constraints (encoding) and then retrieve it with the minimum possible loss of information (decoding). The AE was created to compress and decompress information with minimal information loss, making it an ideal candidate for this task. It consists of an encoder, which performs the dimensionality reduction, and a decoder, which aims to retrieve the information compressed by the encoder (as shown in <ref>). AE-based optimization of communication systems <cit.> usually relies on an encoder that maps an alphabet of one-hot encoded symbols i_n to a vector of samples h_n representing a symbol (assuming 2 sps in <ref>) to be transmitted. The decoder then maps the received samples g_n back to scalars o_n representing output probabilities, usually after a series of hidden layers, represented by l_n in <ref>. It must be noted that the matrix multiplications in AEs introduce a significant complexity overhead to the TX and RX operation. Therefore, AEs are often used only to find optimal link configurations, which are then implemented in the form of less complex look-up tables and decision circuits. This allows the performance gains yielded by autoencoders to be leveraged while adjusting the complexity to the requirements of optical transceivers <cit.>. The standard AE <cit.> is an FFNN, and it therefore lacks any memory mechanism in its structure. This explains the use of a finite impulse response (FIR) filter on the decoder side, which allows the AE to capture and compensate for ISI and undesired memory effects. The outputs o_n are then masked with a normalized exponential function (softmax), depicted in <ref>:

σ(o_n) = e^(o_n) / ∑_j=1^N e^(o_j) ,

where N is the number of neurons in the decoder output layer. The softmax function converts the one-dimensional output tensor into an array of normalized probabilities, giving insight into the certainty of the symbol prediction. As in any supervised learning scheme, the AE needs to be trained based on a certain loss function. Given that the goal of the link optimization is to minimize the loss of information over the transmission channel, a metric that captures this information loss is desirable. This leads to the use of the categorical cross-entropy (CE) as an indirect estimator of the mutual information between transmitted and received symbols <cit.>. The expression of the CE is shown in <ref>:

J_CE(θ) = (1/N) ∑_n=1^N [ −∑_m=1^M i_m^(n) log o_m^(n)(θ) ] ,

where N represents the symbol batch size and M the modulation order. In the backpropagation stage of the training process, the gradient over the trainable parameters θ is calculated to minimize the CE. This leads to a higher mutual information between both ends of the link, due to the approximation of its lower bound in <ref>:

I(X;Y) ≥ H(X) − Ĥ(X|Y) ,

where H(X) is the entropy of the probability distribution of the transmitted symbols X and Ĥ(X|Y) is an upper bound on the conditional entropy of X given the probability distribution of the received symbols Y. Ĥ(X|Y) can be approximated by the CE between transmitted and received symbols, given that the true channel transition probability is unknown <cit.>.
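The AE structure and CE objective described above can be condensed into a short PyTorch sketch. The layer sizes, the toy differentiable channel standing in for the DML surrogate, and the 2-sps mapping are assumptions made for illustration; the point is how the softmax/cross-entropy training couples the encoder and decoder through a differentiable channel.

import torch
import torch.nn as nn

M, SPS = 4, 2                                # 4PAM alphabet size, samples per symbol

encoder = nn.Linear(M, SPS, bias=False)      # one-hot symbol -> pulse samples
decoder = nn.Sequential(                     # received samples -> symbol logits
    nn.Linear(SPS, 32), nn.LeakyReLU(),
    nn.Linear(32, 32), nn.LeakyReLU(),
    nn.Linear(32, M))                        # softmax is applied inside the CE loss

def channel(x):
    """Toy differentiable channel: a stand-in for the DML surrogate plus AWGN."""
    return torch.tanh(2.0 * x) + 0.05 * torch.randn_like(x)

params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()              # categorical CE between TX symbols and RX logits

for step in range(200):
    sym = torch.randint(0, M, (512,))        # one mini-batch of symbols
    onehot = nn.functional.one_hot(sym, M).float()
    logits = decoder(channel(encoder(onehot)))
    loss = loss_fn(logits, sym)              # gradients flow end-to-end through the channel
    opt.zero_grad(); loss.backward(); opt.step()
print("final CE:", float(loss))

In the full system the channel stub would be replaced by the trained surrogate, so that I_bias and I_pp can join the trainable parameter list.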
§ SIMULATION SETUP

The proposed DML system optimization is based on two main components: the laser surrogate model and the E2E optimization approach. The surrogate model aims solely to reproduce the laser behaviour as accurately as possible. Once the surrogate model has been trained, the E2E approach finds the optimal system configurations subject to the constraints imposed by the surrogate model dynamics and its loss function.

§.§ Surrogate model

The structure of the surrogate CAT model, depicted in <ref>, is based on three building blocks: learned positional embeddings, convolutional attention, and dense layers with ReLU activation. The positional embeddings aim to capture the position of each sample within the sequence, giving temporal context to the network. The convolutional attention block aims to discern the most relevant input samples for the calculation of each output sample through the use of matrix multiplication and learned convolutional filters. Lastly, the dense layers provide depth to the network, as they contain most of its trainable parameters. Although the model architecture, shown in <ref>, is identical to <cit.>, in this work we introduce a new data generation process. This allows the model complexity to be maintained with respect to our previous work while achieving a DML model agnostic to the waveform shape, the bias and the peak-to-peak amplitude of the input sequence.

The surrogate modeling is performed in a data-driven fashion, using the laser rate equations (<ref>) as the source of data. The dataset of input waveforms to the rate equations is obtained by generating 4PAM-modulated symbols, pulse shaped at 2 samples per symbol (sps). The rate equation parameters utilized are specified in <ref>. They correspond to a generic DFB laser with f_R = 10.6 GHz and f_3dB = 25.5 GHz at I_bias = 75 mA. Although the parameters are not chosen to match the specifications of a real device, they are of the same order of magnitude as those reported in the literature for single-mode quantum-well DFB lasers <cit.>. It must be noted that ϕ(t) is not considered, given that the simulated setup is back-to-back; therefore the values of α and κ have no impact on the behaviour of the surrogate model. The region of the modulation response beyond f_R is of special interest due to the increased data throughput and the significant waveform distortion introduced. The CAT model was therefore trained separately on three relatively high symbol rates, namely R_s ∈ {15, 20, 25} Gbaud. The generalizability in terms of waveform shaping is addressed by using two different types of pulses: square and stochastic pulses. The steep slope of the square-pulse-shaped symbols allows the transient dynamics of the DML response to be captured, while the stochastic pulses are used to prevent overfitting. The stochastic pulses are implemented through a 2-tap finite impulse response (FIR) filter, where the filter coefficients are drawn from a uniform distribution in the interval [-0.5, 0.5]. The pulses are normalized in amplitude to avoid distorting the 4PAM symbols. In order to maintain a high modeling accuracy regardless of the combination of I_bias and I_pp utilized, both quantities are randomized for the generation of the input waveform dataset. This is achieved by drawing I_bias from a uniform distribution in the interval [50, 100] mA.
I_pp is randomized in a similar fashion, but constrained to the range [0, 80] mA in order to maintain a driving current significantly higher than the threshold current I_th = 4.06 mA for all I_bias and I_pp combinations. This allows the E2E learning approaches to be optimized over a wide variety of input currents, enabling the use of I_bias and I_pp as trainable parameters. Once the driving waveforms are generated, the sequences are oversampled to 32 sps and low-pass filtered (LPF) to constrain their bandwidth to 0.9 R_s. This is done to ensure the accuracy and convergence of the numerical solution to the rate equations, even though the surrogate model will operate on a 2-sps basis. The target (ground-truth) output sequences for the CAT surrogate models are obtained by inputting the previously described driving waveforms to a numerical solver of the laser rate equations at a certain (randomized) I_bias. The output photon density sequences from the rate equations are then downsampled and converted to optical power waveforms through <ref>. The loss function of the model is defined by comparing the sequences from the rate equation solver to the predictions of the surrogate model using the normalized root mean squared error (NRMSE). The use of the NRMSE provides better loss interpretability, as the applied normalization makes the loss relative to the amplitude of each generated sequence. Once the surrogate model is trained, its weights are fixed and it is used as part of the E2E optimization approach.

§.§ E2E approach

The E2E optimization approach aims to find the optimal combination of TX, RX and laser-driving parameters in order to minimize the probability of decision errors after detection. The approach should therefore reach optimal performance regardless of the targeted laser cavity structure, as long as the surrogate DML model is able to reproduce the laser behavior accurately. Thus, our approach paves the way for the E2E optimization of optical communication systems based on external modulators, such as electro-absorption modulators (EAMs), as long as data availability allows for the training of a sufficiently accurate surrogate model. The investigated back-to-back IM/DD system is represented in <ref>. The modulation is based on equiprobable 4PAM symbols, upsampled to 2 sps. The TX establishes the GCS constellation intensity levels and the 2-tap pulse shaping in order to generate the modulation current sequences. The hardware limitations of the digital-to-analog and analog-to-digital converters (DAC/ADC) are modeled as FIR LPFs with a bandwidth B_LPF = 0.9 R_s. The impact of other impairments associated with the DAC and ADC is not considered in this paper; however, the AE is trained to take into account their limited time resolution and bandwidth. After filtering, the sequences are constrained to the range [-0.5, 0.5] and then amplified to [-40, 40] mA, for a maximum peak-to-peak current I_pp of 80 mA, in order to match the surrogate data generation. I_bias is again constrained to the range [50, 100] mA. During training, the emulation of the DML response is carried out by the surrogate model, as its architecture allows for automatic differentiation. The testing is, however, conducted on the laser rate equations, using the same parameters as for the data generation. This allows more reliable performance metrics to be obtained, as the surrogate could otherwise distort them due to modeling inaccuracies.
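Before moving to the receiver side, the TX chain just described (symbol mapping, 2-tap pulse shaping, DAC low-pass filtering and scaling onto the allowed current range) can be sketched as follows; the filter design and the numerical details are illustrative assumptions rather than the exact implementation.

import numpy as np
from scipy.signal import firwin, lfilter

RS, SPS = 25e9, 2                            # symbol rate [baud] and samples per symbol
FS = RS * SPS                                # simulation sampling rate [Hz]

def tx_drive_current(levels, pulse, i_bias=70e-3, i_pp=80e-3, n_taps=33):
    """Map normalized amplitude levels to a band-limited driving current [A]."""
    up = np.zeros(len(levels) * SPS)
    up[::SPS] = levels                       # upsample to 2 sps
    shaped = lfilter(pulse, [1.0], up)       # 2-tap pulse shaping
    lpf = firwin(n_taps, 0.9 * RS, fs=FS)    # DAC bandwidth model: FIR LPF at 0.9 Rs
    filt = lfilter(lpf, [1.0], shaped)
    filt = np.clip(filt / (2.0 * np.abs(filt).max()), -0.5, 0.5)  # constrain to [-0.5, 0.5]
    return i_bias + i_pp * filt              # peak-to-peak scaling plus bias offset

rng = np.random.default_rng(1)
pam4 = rng.choice([-3.0, -1.0, 1.0, 3.0], size=1024) / 3.0
current = tx_drive_current(pam4, pulse=np.array([0.8, 0.2]))
print("drive current range [mA]:", current.min() * 1e3, current.max() * 1e3)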
After generating the output sequences from the DML models, additive white Gaussian noise (AWGN) samples are added to the signal after square-law detection. The noise variance is fixed to yield a 22 dB electrical signal-to-noise ratio (SNR) at the highest average P_rec (corresponding to I_pp = 80 mA). The value of I_bias does not impact P_rec, as only the peak-to-peak power is considered in its calculation. Lastly, the adaptive RX DSP performs equalization at 2 sps, downsampling and symbol decision. The SER and MI performance metrics are obtained by comparing the estimated symbol probabilities at the RX with the originally sent sequence at the TX, using maximum likelihood detection (MLD) or softmax activation depending on the approach.

The compensation of the system is based on four different approaches: the baseline uncompensated system (BL), an RX-side FIR FFE, a second-order VNLE with a transmitter-side LPS, and an AE. Table <ref> lists the specific parameters of each approach. The FFE is meant to emulate the conventional approach to DML-based system optimization, while the VNLE and AE give insight into the performance advantage of E2E models. It must be noted that only the AE handles I_pp and I_bias as trainable parameters, while all the other models are kept at a fixed bias and swept through various I_pp (and therefore P_rec) values. Fig. <ref> shows the encoder and decoder AE architecture, from input symbol i_n to output symbol likelihoods o_n. The encoder (TX side) is based on a single linear layer mapping the input one-hot encoded vectors into 2-sample pulses. The obtained pulses are then serialized into 1024-sample time sequences to be used as the driving current to the DML. The I_bias level is added to the signal after filtering and amplification, as explained previously. On the decoder (RX) side, the received sequences are first FIR filtered in order to provide a memory mechanism to the AE. After deserialization, three leaky-ReLU-activated feedforward layers with softmax activation at the output convert the filtered samples into symbol probabilities for decision. The AE loss is then calculated using the cross-entropy (CE) between the originally transmitted symbols and their assigned symbol probabilities at the receiver.

§ RESULTS

§.§ Surrogate model

The surrogate model training comprises 2^23 samples in sequences of 1024 samples, while the testing dataset includes 2^17 samples. The randomization of the pulse shape and laser driving configuration is performed on a per-sequence basis. The training and testing NRMSE scores obtained by the CAT model are depicted in <ref>. For the three analyzed R_s, the testing NRMSE loss is slightly lower than its training counterpart, giving no sign of overfitting. Another interesting trend is the slight increase in NRMSE as R_s increases. This trend might be related to the predominance of nonlinear effects at higher R_s, leading to potentially more complex DML dynamics. In any case, the three models perform well below the 1% mark.

§.§ E2E approach

The symbol dataset for the E2E approaches comprises 2^20 symbols, with an 80/20 partition between training and validation, respectively. The dataset is split into mini-batches of 512 symbols to exploit parallelization and reduce the training time. The I_pp of all the models except the AE is swept in the range [8, 80] mA with a step of 8 mA between consecutive levels.
§ RESULTS

§.§ Surrogate model

The surrogate model training set comprises 2^23 samples in sequences of 1024 samples, while the testing data set includes 2^17 samples. The randomization of pulse shape and laser driving configurations is performed on a per-sequence basis. The training and testing NRMSE scores obtained by the CAT model are depicted in <ref>. Across the three analyzed R_s, the testing NRMSE loss is slightly lower than its training counterpart, giving no sign of overfitting. Another interesting trend is the slight increase in NRMSE as R_s increases. This trend might be related to the predominance of the nonlinear effects at higher R_s, leading to potentially more complex DML dynamics. In any case, the three models perform well below the 1% mark.

§.§ E2E approach

The symbol dataset of the E2E approaches comprises 2^20 symbols, with an 80/20 partition between training and validation, respectively. The dataset is split into mini-batches of 512 symbols to exploit parallelization and reduce the training time. The I_pp of all the models except the AE is swept in the range [8, 80] mA with a step of 8 mA between consecutive levels.

Even though all the approaches are subject to the same I_pp constraints, the E2E approaches are able to exploit the DML transient response through LPS (and GCS in the case of the AE), yielding a higher P_rec. The AE is able to optimize its I_pp dynamically within the constrained range, and therefore only one AE power level was analyzed. The training is iterated over three different symbol rates: 15, 20 and 25 Gbaud, matching those used in the surrogate training. The MI and SER results tested on the laser rate equations are shown as a function of P_rec in Fig. <ref>. Based on the figure, the AE delivers the best SER and MI performance overall, hinting that the optimization of the I_bias level could have a higher impact on performance than the equalization in certain cases. This is more accentuated at the lower R_s than at the higher ones, where the waveform distortion worsens the SNR, giving the equalization a higher relative impact. Another interesting metric is the optimal I_bias obtained by the AE, resulting in {62.31, 69.31, 70.30} mA at {15, 20, 25} Gbaud, respectively. This trend provides interesting insight into the relation between R_s and the optimal I_bias conditions.

For the R_s analyzed, the E2E approaches show a clear performance advantage over the BL and RX-only optimization. This is expected, given that the lack of nonlinearity in the latter makes the complete compensation of the DML-induced distortion infeasible at high symbol rates. The better performance of the E2E approaches serves as a further validation of the surrogate, as a poor model of the DML could lead to low performance when tested on the rate equations.

§ CONCLUSION

We propose a novel end-to-end, directly modulated laser optimization approach using a differentiable data-driven model as laser surrogate, allowing the propagation of gradients between transmitter and receiver. The surrogate is built based on the laser rate equations using various bias and peak-to-peak current values, in order to make it robust to such values. We compare different system architectures with conventional receiver-side optimization, varying the received optical power and the symbol rate of the simulation. The proposed autoencoder approach, including bias and peak-to-peak current optimization, shows a significant performance gain compared to its receiver-side counterpart, showcasing the potential of end-to-end approaches in the optimization of directly modulated laser systems.

§ FUNDING

The Villum Fonden (VI-POPCOM VIL54486) and Villum YIP OPTIC-AI (no. VIL29334) projects are acknowledged.
Probing Heavy Neutrino Magnetic Moments at the LHC using Long-Lived Particle Searches

Rebeca Beltrán, Patrick D. Bolton, Frank F. Deppisch, Chandan Hati, Martin Hirsch
===============================================================

§ INTRODUCTION

Neutrino oscillations currently provide the only concrete laboratory-based evidence of physics beyond the Standard Model (SM). In the absence of direct signs of physics beyond the SM from resonance searches at the Large Hadron Collider (LHC), it is therefore interesting to consider the possibility that the existence of new physics (NP) could manifest itself as non-trivial neutrino properties. One such intriguing attribute is the neutrino magnetic dipole moment. Minimally extending the SM to include non-zero active neutrino masses, the induced neutrino magnetic moments are tiny, μ_ν≃ 3 × 10^-20 μ_B (m_ν/0.1 eV), where m_ν is the neutrino mass scale and μ_B is the commonly employed unit of the Bohr magneton <cit.>. Thus, a positive experimental signal of neutrino magnetic moments could provide a smoking gun for physics beyond the SM. Sizeable active-to-active neutrino magnetic moments with naturally small neutrino masses have been a lively avenue of research in recent times <cit.>. With the addition of right-handed (RH) or sterile neutrinos to the SM field content, often invoked as a means to dynamically generate the light neutrino masses, the prospect of active-to-sterile neutrino transition magnetic moments has also been of much interest in the literature <cit.>.

A wide variety of flavour-dependent signatures of transition magnetic moments have been examined, such as the Primakoff upscattering of an active neutrino to a heavy state (with its subsequent decay to a light neutrino and a photon) in coherent elastic neutrino-nucleus scattering (CEνNS) experiments <cit.>, NOMAD <cit.>, CHARM-II <cit.>, DONUT <cit.>, LSND <cit.>, Borexino <cit.>, XENON1T <cit.>, IceCube <cit.>, MiniBooNE <cit.>, DUNE <cit.>, Super-Kamiokande <cit.>, and FASERν <cit.>; the production of heavy states via meson decays, leading to invisible final states <cit.> and displaced vertex signatures <cit.>; monophoton plus missing energy at LEP, LHC, and future colliders <cit.>; modifications to Big Bang nucleosynthesis (BBN) affecting the abundance of ^4He <cit.>; and excess energy loss in SN1987A and low-energy supernovae <cit.>. If active-to-sterile neutrino transition magnetic moments are discovered in the future, further observations may be used to discriminate between the Dirac and Majorana nature of the heavy states. For example, the photon emitted in the decay of a heavy state produced via upscattering in a CEνNS experiment exhibits different kinematic distributions in the two scenarios <cit.>.

From the model-building perspective, large active-to-active and active-to-sterile neutrino transition magnetic moments are intimately connected with the radiative corrections to neutrino masses, requiring non-trivial symmetries or cancellations to realise a consistent ultraviolet (UV) completion <cit.>. On the other hand, sterile-to-sterile neutrino transition magnetic moments remain relatively unconstrained for sterile neutrinos with masses above the heavy meson masses. The transition magnetic moments between GeV-scale sterile neutrinos have been extensively studied in the context of long-lived particle (LLP) searches at AL3X, ANUBIS, CODEX-b, DUNE, FACET, FASER/FASERν, MAPP, MATHUSLA, and SHiP <cit.>.
For sterile neutrinos heavier than the energies accessible in meson decays, LLP searches at the LHC can provide an alternative window to sterile-to-sterile neutrino magnetic moments, as recently explored in <cit.>. The leading constraint in this mass regime (up to the Z mass) comes from the monophoton plus missing energy signature at LEP <cit.>.

In this work, we start with the model-independent effective field theory (EFT) approach, examining the operators relevant for GeV- to TeV-scale neutrino magnetic moments within the frameworks of the N_RSMEFT (SM effective field theory extended with RH neutrinos) and its low-energy counterpart, the N_RLEFT (low-energy effective field theory extended with RH neutrinos). To complement this, we consider a UV-complete model example that generates these operators at the one-loop level. Our model is similar to the one proposed in <cit.>, but we generalise it further by removing the Z_2 symmetry imposed in the original construction. The broadening of the model parameter space leads to the interesting scenario where large active-to-sterile magnetic moments are also generated at one loop, without relying on the highly suppressed active-sterile neutrino mixing. This opens up a wider range of phenomenology for LLP searches at the LHC. We also consider constraints from other probes on the UV model, such as electroweak precision observables and charged lepton flavour violation (cLFV). We focus on heavy sterile neutrino masses ranging from a GeV up to several hundred GeV, exploring the constraints on the transition magnetic moments from LLP searches at the LHC using non-pointing photons. Our results for the simplified model example can be straightforwardly generalised to many extensions of the SM involving sterile neutrinos, an extended scalar sector, and vector-like fermions <cit.>. We find that, to probe the heavy neutrino dipole moments with a non-pointing photon final state (without any prompt charged particles) at the LHC, the transverse impact parameter can be used to select events with a displaced vertex. Our numerical simulations show that, depending on the dominant production and decay mechanisms for the sterile neutrinos, searches for LLPs with non-pointing photons will be sensitive to unexplored parts of the parameter space of sterile-to-sterile and active-to-sterile neutrino transition dipole moments for sterile neutrino masses ranging from tens of GeV to well beyond the Z mass.

The plan for the rest of the paper is as follows. In Section <ref>, we present an EFT description of the operators relevant to the sterile neutrino magnetic moment in the context of different EFTs. We then present a simple UV-complete model example for realising the sterile-to-sterile and active-to-sterile transition magnetic dipole moments in Section <ref>. In this section, we also present the derivation of these magnetic moments and their correlation with neutrino masses. In Section <ref>, we review the various phenomenological constraints on the model from direct collider searches and searches for cLFV processes. In Section <ref>, we present different interesting final-state signatures for probing the sterile-to-sterile and active-to-sterile transition magnetic dipole moments using LLP searches with non-pointing photons. We further discuss the implementation and simulation details. In Section <ref>, we present the numerical results in terms of the EFT coefficients and the parameters of the explicit UV model. Finally, we conclude in Section <ref>.
§ STERILE NEUTRINO TRANSITION MAGNETIC MOMENTS In the presence of RH or sterile neutrinos N_R with masses below the electroweak (EW) scale, transition magnetic moments are described by the effective Lagrangian ℒ_N_RLEFT ⊃ d_ NNγ^' ij 𝒪_ NNγ^ij + d_ν Nγ^'α i 𝒪_ν Nγ^α i + h.c. , with the dimension-five (d = 5) operators[While we omit it here, it is common to include a factor of 1/2 in the definition of 𝒪_ NNγ so that the normalisation is analogous to the Majorana and Dirac mass terms, as seen in Section <ref>.] 𝒪_ NNγ^ij = (N̅_Ri^cσ_μνN_Rj)F^μν , 𝒪_ν Nγ^α i =(ν̅_Lασ_μνN_Ri)F^μν , where σ_μν = i/2[γ_μ,γ_ν], F_μν is the photon field strength and N_R^c = 𝒞N̅_R^T, with 𝒞 the charge conjugation matrix. Here, we use α∈{1,2,3} and i,j ∈{1,2,…} to denote the generation of SM lepton and sterile neutrino, respectively. The operators 𝒪_ NNγ^ij and 𝒪_ν Nγ^α i correspond to sterile-to-sterile and active-to-sterile neutrino transition magnetic moments, respectively. The former is antisymmetric in the indices i,j and vanishes for a single sterile state. We note that these effective interactions are only applicable well below the EW symmetry breaking scale and are often referred to as N_RLEFT operators. We stress that the prime superscript is used to distinguish the N_RLEFT Wilson coefficients from the basis of operators obtained after rotating N_RSMEFT operators at the EW symmetry breaking scale, but not integrating out degrees of freedom such as W and Z, which will be defined shortly. The N_RLEFT operators in Eq. (<ref>) arise from operators above the EW scale in the N_RSMEFT, in particular, the operators (up to d = 6) ℒ_N_RSMEFT ⊃ C^(5)ij_ NNB 𝒪_ NNB^(5)ij + C_ NB^(6)α i 𝒪_ NB^(6)α i+ C_ NW^(6)α i 𝒪_ NW^(6)α i + h.c. , with 𝒪_ NNB^(5)ij = (N̅_Ri^cσ_μνN_Rj)B^μν , 𝒪_ NB^(6)α i = (L̅_ασ_μνN_Ri)H̃B^μν , 𝒪_ NW^(6)α i = (L̅_ασ_μνN_Ri)τ^IH̃W^Iμν , where L = (ν_L ℓ_L)^T and H = (φ^+, φ^0)^T are the SM lepton and Higgs doublets, B_μν and W^I_μν are the field strengths of U(1)_Y and SU(2)_L, respectively, and τ^I = σ_I/2 are the generators of SU(2)_L, with σ_I being the Pauli matrices. In Eq. (<ref>), we additionally have H̃ = iσ_2 H^*. The operator 𝒪_ NNγ^ij arises from the N_RSMEFT operator at d = 5, while the operator 𝒪_ν Nγ^α i arises from those at d = 6 after expanding around the Higgs vacuum expectation value as H = (φ^+, (v + h + i φ_Z)/√(2))^T and transforming to the weak-rotated basis of gauge fields. At the next lowest dimension in the (N_R)SMEFT are the d = 7 operators <cit.> 𝒪_ LHB^(7)αβ = (L̅_αH̃)σ_μν(H̃^T L_β^c)B^μν , 𝒪_ LHW^(7)αβ = (L̅_αH̃)σ_μν(H̃^T τ^I L_β^c)W^Iμν , 𝒪_ NHB^(7)ij = (N̅_Ri^cσ_μνN_Rj)(H^† H)B^μν , 𝒪_ NHW^(7)ij =(N̅_Ri^cσ_μνN_Rj)(H^†τ^I H)W^Iμν , which generate active-to-active neutrino magnetic moments 𝒪_ννγ^αβ and further modify 𝒪_ NNγ^ij. In the context of high-energy collider processes, it is convenient to define a basis of rotated N_RSMEFT operators at the EW symmetry breaking scale. In this basis, there are two additional d = 5 operators that involve the Z boson field strength, 𝒪_ NNZ^ij=(N̅_Ri^cσ_μνN_Rj)Z^μν , 𝒪_ν NZ^α i=(ν̅_Lασ_μνN_Ri)Z^μν . These operators are not present in the usual N_RLEFT because Z is integrated out <cit.>; however, they are convenient to use near the Z pole, where they describe the production and subsequent decay of the sterile neutrinos via a dipole-like coupling to the Z boson. 
We will denote the weak-rotated interactions near the Z pole as ℒ^Z-pole_N_RLEFT ⊃ d_ NNγ^ij 𝒪_ NNγ^ij + d_ν Nγ^α i 𝒪_ν Nγ^α i + d_ NNZ^ij 𝒪_ NNZ^ij + d_ν NZ^α i 𝒪_ν NZ^α i + h.c. . Now, considering the N_RSMEFT operators up to d = 6, the dipole moments in Eq. (<ref>) are related to the N_RSMEFT Wilson coefficients in the unbroken phase as

d_ NNγ^ij = c_w C_ NNB^(5)ij , d_ NNZ^ij = - s_w C_ NNB^(5)ij ,
d_ν N γ^α i = v/√(2)(c_w C_ NB^(6)α i + s_w/2 C_ NW^(6)α i) , d_ν N Z^α i = v/√(2)(- s_w C_ NB^(6)α i + c_w/2 C_ NW^(6)α i) ,

where s_w = sinθ_w and c_w = cosθ_w, with θ_w the weak mixing angle, and v is the Higgs vacuum expectation value. From Eq. (<ref>), we observe the ratio d_ NNZ^ij/d_ NNγ^ij = - t_w between the sterile-to-sterile couplings, with t_w = s_w/c_w. In general, a UV-complete model will also predict some relation between the Wilson coefficients C_ NB^(6)α i and C_ NW^(6)α i; if we take C_ NW^(6)α i/C_ NB^(6)α i = a (g/g') = a/t_w, where a is an arbitrary parameter and g and g' are the SU(2)_L and U(1)_Y couplings, respectively, we obtain the following ratio between the active-to-sterile couplings d_ν N Z^α i/d_ν N γ^α i = (a - 2 t_w^2)/((2 + a)t_w) . There are therefore two flat directions in the parameter space of couplings: d_ν N γ^α i = 0 for a = -2 and d_ν NZ^α i = 0 for a = 2t_w^2 ≈ 0.6.

In addition to the operators in Eq. (<ref>), there is also the d = 5 charged-current dipole operator, ℒ^Z-pole_N_RLEFT ⊃ d_ℓ NW^α i(ℓ̅_Lασ_μν N_Ri)W^μν + h.c. , where W^μν is the field strength of the W boson. The associated dipole moment is related to the N_RSMEFT Wilson coefficient C_ NW^(6) in the unbroken phase as d_ℓ N W^α i = v/2 C_ NW^(6)α i ⇒ d_ℓ N W^α i/d_ν N γ^α i = √(2)a/((2 + a)s_w) . Clearly, we have d_ℓ N W^α i = 0 for a = 0. This operator is important for the single production of a heavy N with a charged lepton or in the decay of N to a charged lepton plus di-jet final state.

We note that, from a phenomenological point of view, the free parameters of the model can be taken to be {m_N, C_ NNB^(5)ij, C_ NB^(6)α i, C_ NW^(6)α i} or {m_N, d_ N N γ^ij, d_ν N γ^α i, a}, with general flavour structure. In any UV-complete realisation of these couplings, however, the flavour structure will be fixed. We take both approaches in this paper. First, we consider model-independent constraints on the couplings from LLP searches at the HL-LHC. We then express d_ N N γ^ij and d_ν N γ^α i in terms of the parameters of a specific UV-complete scenario, which fixes both the flavour structure and the parameter a. We will review this UV scenario in the next section.

§ A UV MODEL FOR STERILE NEUTRINO MAGNETIC MOMENTS

In this work, we consider a simple UV extension of the SM with RH gauge-singlet fermions N_R. To generate magnetic moments for these states, we add the same minimal ingredients as <cit.>: a single vector-like lepton E and a scalar ϕ.[Comparing to other notation used in the literature, E ⇔ X_N^c <cit.> and ϕ⇔φ <cit.>, h <cit.> and 𝒮_1^* <cit.>.] Both are singlets under SU(3)_c and SU(2)_L and have the hypercharges[We employ the convention D_μ = ∂_μ + igτ^I W^I_μ + i g' Y B_μ for the covariant derivative, such that SM fields have the hypercharges Y(L) = -1/2, Y(ℓ_R) = -1, Y(H) = 1/2.] Y(E) = Y(ϕ) = -1. However, we do not impose an additional Z_2 symmetry under which the new fields transform as E → -E and ϕ→ -ϕ <cit.>.
Without this extra discrete symmetry, the following renormalisable terms can be written, which are not present in the SM, ℒ ⊃E̅(iD - m_E)E + (D_μϕ)^* (D^μϕ) - m_ϕ^2 |ϕ|^2 - λ_ϕϕ |ϕ|^4 - λ_ϕ H |ϕ|^2(H^† H) - [N̅_R h E_L ϕ^* + N̅_R^c h' E_R ϕ^*    + L̅Y_E H E_R + E̅_L m_e Eℓ_R + L̅fL̃ϕ + N̅_R^cf'ℓ_R ϕ^* + h.c.] , where L̃ = iσ_2 L^c. The RH neutrinos N_R can also have the Yukawa-type and Majorana mass terms -L̅Y_νH̃ N_R -1/2N̅^c_R M_R N_R + h.c., which will be discussed in Section <ref>. Note that, because of the removal of the Z_2 symmetry, we obtain four additional terms in the last line of Eq. (<ref>) with respect to <cit.>. The field content of the model is also summarised in Tab. <ref>, where we show the representations of the fields under the SM gauge group and the renormalisable couplings which are present when one, two or all of the fields are considered. With check marks (crosses), we indicate which couplings are allowed (forbidden) when the Z_2 symmetry, under which E and ϕ are odd, is enforced. We also show the (N_R)SMEFT operators which are generated after integrating out E and ϕ. Among these are the operators 𝒪_ NNB^(5), 𝒪_ NB^(6) and 𝒪_ NW^(6) which induce sterile-to-sterile and active-to-sterile neutrino magnetic moments, but also operators which are relevant for other low-energy probes, such as cLFV processes. After EW symmetry breaking, mixing is induced between the SM charged leptons ℓ and the vector-like lepton E. In the broken phase, the Lagrangian contains the following extended 4× 4 mass matrix ℒ⊃ - [ ℓ̅_L E̅_L ]ℳ_E [ ℓ_R; E_R ] + h.c. ; ℳ_E = [ vY_e/√(2) vY_E/√(2); m_e E m_E ] , where the upper-left entry arises from the SM lepton Yukawa term -L̅ Y_e Hℓ_R + h.c.. Without loss of generality, the weak eigenbasis RH fields ℓ_R and E_R, which have identical gauge interactions, can be chosen such that all of the entries of m_e E, which is a 3× 1 matrix, are zero <cit.>. This is equivalent to rotating away non-zero m_e E via redefinitions of Y_e, Y_E and m_E. Now, the mass matrix ℳ_E can be diagonalised with the following rotation of fields, [ ℓ_Lα; E_L ] = [ V_αβ^L V_α E^L; V_E β^L V_EE^L ] P_L [ ℓ_β'; E' ] , [ ℓ_Rα; E_R ] = [ V_αβ^R V_α E^R; V_E β^R V_EE^R ] P_R [ ℓ_β'; E' ] , such that V^L†ℳ_E V^R = ℳ_E^diag = diag(m_e, m_μ, m_τ, m_E'). Here, ℓ_β' and E' represent the physical mass eigenstate fields, with the former corresponding to the known charged leptons ℓ_β' ∈{e, μ, τ} and the latter a single heavy charged lepton. With abuse of notation, we relabel ℓ_α' →ℓ_α and E' → E in the following. In the limit Y_e ∼ Y_E ≪√(2)m_E/v, it becomes possible to block-diagonalise ℳ_E with the approximate seesaw-like mixing V^L_α E = - V_E α^L*≈v Y_E^α/√(2)m_E , V^R_α E = - V_E α^R*≈v^2 [Y_e]_αγ^*Y_E^γ/2 m_E^2 , where we see that the mixing between the RH fields is further suppressed by v/m_E with respect to the mixing between LH fields. Given that the LH mixing is less suppressed, we can also consider the following non-unitarity in the V^L_α i entry, V^L_αβ≈δ_αβ - v^2 Y_E^α Y_E^β */4 m_E^2 . Similarly, non-unitarity is seen in the V^L_EE entry, but is irrelevant phenomenologically. Expanding the covariant derivatives in the SM Lagrangian and Eq. (<ref>), and transforming to the weak-rotated gauge fields, the gauge interactions of ν_L, ℓ_L/R, E_L/R and ϕ in the broken phase are found to be ℒ ⊃ - e Q[ℓ̅γ_μℓ + E̅γ_μ E + iϕ^*↔∂_μϕ]A^μ -[g/√(2)ℓ̅_Lγ_μν_L W^μ + h.c.]   
- g/c_w[g_L^νν̅_Lγ_μν_L + g_L^ℓℓ̅_Lγ_μℓ_L + g_R^ℓ(E̅_Lγ_μ E_L + [ ℓ̅_R E̅_R ]γ_μ[ ℓ_R; E_R ] + iϕ^*↔∂_μϕ)]Z^μ ,

where, as usual, the electric charge is Q = τ^3 + Y and the LH and RH Z couplings are g_L^f = τ^3 - Q s_w^2 and g_R^f = - Q s_w^2, respectively. Additionally, the charged lepton and vector-like lepton fields must be rotated to the mass basis according to Eq. (<ref>); the structure of Eq. (<ref>) results in the photon couplings and Z couplings to ℓ_R and E_R being diagonal in the mass basis, as insertions of the LH and RH mixing cancel due to unitarity. However, the LH mixing V^L remains in the charged-current term and the neutral-current terms involving ℓ_L and E_L. In the seesaw limit, the Lagrangian then becomes

ℒ ⊃ -[g/√(2)ℓ̅_α(δ_αβ - v^2 Y_E^α Y_E^β*/4m_E^2)γ_μ P_L ν_β W^μ + h.c.] - g/c_wℓ̅_α(g_L^ℓδ_αβ + v^2 Y_E^α Y_E^β*/4m_E^2)γ_μ P_L ℓ_β Z^μ .

The former induces lepton flavour universality (LFU) violating charged-current processes. The latter describes off-diagonal Z couplings or flavour-changing neutral currents (FCNCs), which are subject to a plethora of stringent bounds from EW precision observables and cLFV. We note that Eq. (<ref>) is equivalent to integrating out E at low energies, which gives non-zero tree-level matching conditions for the coefficients of the d = 6 SMEFT operators 𝒪_Hl^(1) = (H^† i ↔D_μ H)(L̅γ^μ L) and 𝒪_Hl^(3) = (H^† i ↔D^I_μ H)(L̅τ^Iγ^μ L), i.e., C_Hl^(1)αβ = C_Hl^(3)αβ = - Y_E^α Y_E^β */4m_E^2 , which has been used previously in the literature <cit.>. Constraints on the Yukawa coupling Y_E from these probes are detailed in Section <ref>.

Next, the Lagrangian in Eq. (<ref>) results in the Higgs interactions ℒ⊃ - 1/√(2) ℓ̅_L [ Y_e Y_E ][ ℓ_R; E_R ]h + h.c. . This term can also be rotated to the mass basis, where the RH mixing V^R is no longer cancelled due to unitarity. The resulting couplings of the Higgs to charged leptons are ℒ⊃ - 1/√(2)ℓ̅_α(δ_αγ - 3v^2 Y_E^α Y_E^γ*/4m_E^2)[Y_e]_γβ P_R ℓ_β h + h.c. , which is nothing other than the tree-level matching of the UV theory onto the d = 6 SMEFT operator 𝒪_eH = (H^† H)L̅Hℓ_R, with the coefficient C_eH^αβ = Y_E^α Y_E^γ*[Y_e]_γβ/2m_E^2 . Eq. (<ref>) modifies SM Higgs decays and induces cLFV Higgs decays via the off-diagonal couplings.

Moving now to the scalar sector of the UV theory, the term -L̅fL̃ϕ in Eq. (<ref>) can be written explicitly as ℒ⊃ - [ ν̅_L ℓ̅_L ]f [ ℓ_L^c; - ν_L^c ]ϕ + h.c. = -2ν̅_L f ℓ_L^c ϕ + h.c. , where, in the second equality, we have used that f is antisymmetric in flavour space, f_αβ = - f_βα. This interaction also induces LFU violating and cLFV processes at tree-level and one-loop. If the scalar ϕ is heavy and integrated out at low energies, Eq. (<ref>) results in the tree-level matching to the SMEFT operator 𝒪_ll = (L̅γ_μ L)(L̅γ^μ L), with the coefficient <cit.> C_ll^αβγδ = f_αγf_δβ^*/m_ϕ^2 . Including also the term N̅_R^c f' ℓ_R ϕ^*, integrating out ϕ also gives tree-level matching to the N_RSMEFT operators 𝒪_lNle = (L̅ N_R)ϵ(L̅ℓ_R) and 𝒪_eN = (ℓ̅_Rγ_μℓ_R)(N̅_Rγ^μ N_R) <cit.>, with C_lNle^α i βγ = 2f_αβf_iγ'/m_ϕ^2 , C_eN^αβ ij = f_iα^' *f_jβ'/2m_ϕ^2 . This exhausts the tree-level phenomenology of the UV model at low energies, which would have been absent if the Z_2 symmetry had been imposed. At the one-loop level, many more operators are induced in the (N_R)SMEFT, with partial matching results, predominantly in the SMEFT, available in the literature <cit.>.
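As a quick numerical sanity check of the seesaw-like mixing derived above, one can diagonalise the 4×4 charged-lepton mass matrix directly; the benchmark values of m_E and Y_E in this sketch are illustrative assumptions:

```python
import numpy as np

v, mE = 246.0, 2000.0                         # GeV; m_E is an assumed benchmark
Ye = np.diag([2.9e-6, 6.1e-4, 1.0e-2])        # approximate SM charged-lepton Yukawas
YE = np.array([0.0, 0.05, 0.10])              # assumed couplings to the vector-like lepton

# 4x4 mass matrix in the basis (l_L, E_L) x (l_R, E_R), with m_eE rotated to zero
ME = np.block([[v / np.sqrt(2) * Ye, (v / np.sqrt(2) * YE)[:, None]],
               [np.zeros((1, 3)),    np.array([[mE]])]])

# M_E = V^L diag(m) V^R^dagger, i.e. a singular value decomposition
VL, masses, VRh = np.linalg.svd(ME)           # singular values in descending order
print("heavy mass:", masses[0])               # ~ m_E
print("numerical |V^L_{alpha E}|   :", np.abs(VL[:3, 0]))
print("seesaw v Y_E / (sqrt(2) m_E):", v * YE / (np.sqrt(2) * mE))
```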
The focus of the remainder of this section is the one-loop matching of the operators 𝒪_ NNB^(5), 𝒪_ NB^(6) and 𝒪_ NW^(6), which induce LLP signatures at the LHC. First, in Section <ref>, we review the generation of neutrino masses. §.§ Neutrino Masses For sterile-to-sterile neutrino magnetic moments to be present, at least two RH neutrinos N_R are required in the model. As mentioned below Eq. (<ref>), it is possible to write the renormalisable terms -L̅Y_νH̃N_R - 1/2N̅_R^c M_R N_R + h.c. in the Lagrangian. Below the EW scale, these terms lead to mixing between the active neutrinos ν_L and N_R. The Lagrangian contains ℒ⊃ - 1/2[ ν̅_L N̅_R^c ]ℳ_ν[ ν^c_L; N_R ] + h.c. ; ℳ_ν = [ 0 vY_ν/√(2); vY_ν^T/√(2) M_R ] , where the extended neutrino mass matrix ℳ_ν is symmetric. Without loss of generality, M_R can be made to be diagonal via a rotation among the RH neutrinos, such that M_R = diag(m_N_1, m_N_2,…) and the resulting massive states are Majorana fermions, with N_i = N_i^c. However, we will keep M_R general in the following. The mass matrix can be diagonalised with the following unitary rotation <cit.> [ ν_Lα^c; N_Rj ] = [ U_α i^* V_α N_i^*; Ṽ_ji Ũ_jN_i ] P_R [ ν_i'; N_i' ] , such that V^T ℳ_ν V = ℳ_ν^diag = diag(m_1,m_2,m_3,m_N_1',…). Here, ν_i' = ν_i^' c and N'_i = N_i^' c are the physical Majorana mass eigenstate fields in the broken phase, with the former corresponding to the observed light neutrinos and the latter to heavy sterile neutrinos. Similar to the charged leptons, we will relabel ν_i' →ν_i and N_i' → N_i. In the limit Y_ν≪√(2)M_R/v, ℳ_ν can be diagonalised with V_α N_i ≈v/√(2)(Y_ν M_R^-1)_α jŨ_jN_i^* , Ṽ_ji≈ -v/√(2)(M_R^-1 Y_ν^T )_j α U_α i^* , to obtain the sub-blocks [M_ν]_αβ = U_α iU_β im_i ≈ - v^2/2(Y_ν M_R^-1Y_ν^T)_αβ , [M_N]_ij = Ũ^*_iN_iŨ^*_jN_im_N_i≈ [M_R]_ij , where U and Ũ diagonalise M_ν and M_N, respectively. Note that Ũ_j N_i = δ_ij if we had taken M_R to be diagonal. Combining Eqs. (<ref>) and (<ref>), it is possible to find the well-known relation for the active-sterile neutrino mixing <cit.> V_α N_i = i U_α jℛ_ji√(m_j/m_N_i) , where ℛ is an arbitrary orthogonal matrix, i.e. ℛ_jiℛ_ki = δ_jk. In this work, we consider heavy sterile states N_i within the kinematic reach of the HL-LHC, i.e., m_N_i from 1 GeV up to 1 TeV. For this range of masses, there are two possible ways to produce the observed light neutrino masses via the tree-level relations Eq. (<ref>). The first scenario is that the heavy states N_i are dominantly Majorana (or, in other words, the masses m_N_i are well separated). Then, the light neutrino masses are given by [M_ν]_αβ = -v^2/2 [Y_ν]_α i[Y_ν]_β i/m_N_i, which is the standard Type-I seesaw relation <cit.>. For m_N∼ 100 GeV, neutrino masses m_ν∼ 0.1 eV then require Yukawa couplings of size [Y_ν]_α i∼ 10^-6. The active-sterile mixing is governed by Eq. (<ref>) with |ℛ_ji|≤ 1, or V_ℓ N∼√(m_ν/m_N). However, for heavy sterile states in the mass range relevant for the HL-LHC, this active-sterile mixing V_ℓ N is too suppressed to give observable effects unless some additional NP is present. The second scenario, which can be more relevant for LLP searches, is when pairs of heavy sterile states N_i form pseudo-Dirac states with small mass splittings. In this limit, cancellations in the matrix product Y_ν M_R^-1 Y_ν^T ensure small light neutrino masses instead of suppressed Yukawa couplings. Now that Y_ν can in principle be large, so can the active-sterile mixing in Eq. (<ref>); equivalently, this corresponds to the limit |ℛ_ji| ≫ 1 in Eq. (<ref>). 
This interesting scenario is minimally obtained in the inverse seesaw mechanism <cit.>, which does not require any additional Higgs doublets or symmetries. To demonstrate this mechanism, we can consider the simplified scenario where two heavy sterile states, N_R1≡ N_R and N_R2≡ S_L^c, only interact with one generation of active neutrino. The relevant mass terms can be written as ℒ⊃ - m_ν Nν̅_L N_R - m_ν Sν̅_L S_L^c - 1/2(μ_N N̅_R^c N_R + μ_SS̅_L S_L^c) - m_N_DS̅_L N_R + h.c. , where (m_ν N, m_ν S) ≡ v Y_ν/√(2). As such, the extended neutrino mass matrix (assuming one generation of active neutrino) in the basis (ν_L^c, N_R, S^c_L) takes the form

ℳ_ν = [ 0 m_ν N m_ν S; m_ν N μ_N m_N_D; m_ν S m_N_D μ_S ] ,

where m_ν S can be rotated away without any loss of generality <cit.> and the lepton number violating masses μ_N,S≪ m_N_D are naturally small in the sense of 't Hooft <cit.>, i.e. total lepton number symmetry is restored in the limit μ_N,S→ 0. The symmetric mass matrix ℳ_ν can be diagonalised as before; the inclusion of N_R and S_L^c leads to two Majorana eigenstates N_1 and N_2 with a small mass splitting compared to the Dirac mass m_N_D. The unitary rotation matrix V can be solved perturbatively by using the Hermitian combination ℳ_ν^†ℳ_ν (or ℳ_νℳ_ν^†), V^†ℳ^†_νℳ_ν V = (V^T ℳ_ν V )^†( V^T ℳ_ν V ) = diag(m_ν^2,m_N_1^2,m_N_2^2) , where the V diagonalising ℳ_νℳ_ν^† is the same as that in Eq. (<ref>).

The most straightforward case is obtained in the limit where μ_N,μ_S≪ m_ν N≪ m_N_D and, without loss of generality, m_ν S is set to zero via a rotation among the sterile states. In this limit, the mass eigenvalues can be solved perturbatively to obtain

m_ν ≃ |m_ν N|^2 |μ_S|/(|m_ν N|^2+|m_N_D|^2) ,
m_N_1^2 ≃ |m_ν N|^2 + |m_N_D|^2 - |μ_S^* m_N_D^2 + μ_N(|m_ν N|^2 + |m_N_D|^2)|/√(|m_ν N|^2+|m_N_D|^2) ,
m_N_2^2 ≃ |m_ν N|^2 + |m_N_D|^2 + |μ_S^* m_N_D^2 + μ_N(|m_ν N|^2 + |m_N_D|^2)|/√(|m_ν N|^2+|m_N_D|^2) .

Another interesting case occurs when μ_N ∼μ_S and m_ν N∼ m_ν S, requiring some additional symmetry preventing the freedom to rotate away m_ν S. The mass matrix can be exactly solved in this scenario, leading to the masses

m_ν = 1/2 (m_N_D+μ_S)[ 1-√(1+8 m_ν N^2/(m_N_D+μ_S)^2)] ≃ -2 m_ν N^2/(m_N_D+μ_S) ,
m_N_1 = 1/2 (m_N_D+μ_S)[1+√(1+8 m_ν N^2/(m_N_D+μ_S)^2)] ≃ (m_N_D + μ_S) - 2 m_ν N^2/(m_N_D+μ_S) ,
m_N_2 = m_N_1 - μ_S ,

where the approximate relations are valid in the limit m_ν N≪ m_N_D. This case corresponds to the scenario where N_2 is decoupled, which can be protected in the presence of an explicit symmetry, and only N_1 mixes with the active state. We note that any new interactions involving N_1 and N_2, e.g., a transition magnetic dipole moment, are in general not diagonal in this basis and may lead to corrections to the mass matrix ℳ_ν. However, to restrict ourselves to these two limiting cases of the inverse seesaw mechanism, we will assume any such correction to be small.
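A short numerical cross-check of the first limiting case is straightforward; the benchmark numbers below are illustrative assumptions:

```python
import numpy as np

# assumed benchmark values (GeV)
m_nuN, m_ND, mu_N, mu_S = 1.0, 100.0, 0.0, 1e-3

# extended mass matrix in the basis (nu_L^c, N_R, S_L^c), with m_nuS rotated away
M = np.array([[0.0,   m_nuN, 0.0 ],
              [m_nuN, mu_N,  m_ND],
              [0.0,   m_ND,  mu_S]])

masses = np.sort(np.abs(np.linalg.eigvalsh(M)))  # physical masses up to Majorana phases
print("numerical masses       :", masses)
print("m_nu approximation     :", m_nuN**2 * mu_S / (m_nuN**2 + m_ND**2))
print("pseudo-Dirac pair split:", masses[2] - masses[1])   # ~ O(mu_S)
```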
In the presence of the fields E and ϕ and their interactions in Eq. (<ref>), it is possible to draw the one-loop diagrams in Fig. <ref> which renormalise the Yukawa coupling Y_ν and the RH neutrino mass M_R. The threshold corrections from these diagrams are

[Y_ν]_α i = [Y_ν^(0)]_α i - 1/8π^2[f_αβY_E^β*h_i'(1 + r log r/(1-r) + logμ^2/m_E^2) + f_αβ[Y_e]_βρ^*f_iρ'(1 + logμ^2/m_ϕ^2) - 1/2Y_E^α Y_E^β*[Y_ν^(0)]_β i(1 - μ_H^(0)2/m_E^2)(1 + logμ^2/m_E^2)] ,
[M_R]_ij = [M_R^(0)]_ij - (h_i'h_j^* + h_i^*h_j')m_E/16π^2(1 + r log r/(1-r) + logμ^2/m_E^2) ,

where r = m_ϕ^2/m_E^2 and μ_H is the usual parameter in the Higgs potential. While it is always possible to absorb the finite corrections to Y_ν and M_R in their renormalised values, these expressions are nevertheless useful to verify whether fine-tuning between the tree-level and one-loop contributions is required to obtain the desired heavy masses m_N_i and transition magnetic moments, calculated in the following sections.

§.§ Sterile-to-Sterile Neutrino Magnetic Moments

In our model, the d = 5 N_RSMEFT dipole operator 𝒪_ NNB^(5) is generated at one-loop, as shown by the diagrams in Fig. <ref>, which then induces the sterile-to-sterile transition magnetic moments 𝒪_ NNγ and 𝒪_ NNZ in the broken phase. We perform the matching of the UV model to the effective operator coefficient C_ NNB^(5) using the diagrammatic approach, which involves computing the one-light-particle-irreducible (1LPI) amplitudes in the effective and UV theories and requiring that they coincide at the matching scale. All one-loop amplitudes are computed using dimensional regularisation with d = 4-2ϵ space-time dimensions, introducing the renormalisation scale μ. We then subtract divergent pieces in the MS scheme.

The amplitude ⟨N_i N_j B⟩ in the N_RSMEFT via the operator 𝒪_NNB^(5)ij is i ℳ_ij^EFT = 4 u̅(p_N_i) σ_μν p_B^ν[C_ NNB^(5)ij P_R - C_ NNB^(5)ij*P_L] u(p_N_j) ϵ^μ*(p_B) , where p_B = p_N_j - p_N_i. The amplitude ⟨N_i N_j B⟩ in the UV model is instead given by i ℳ_ij^UV = g' Y u̅(p_N_i)[(h_i' P_R + h_i P_L) Γ_μ (h_j^* P_R + h_j^' * P_L) - (h_i^* P_R + h_i^' * P_L) Γ_μ (h_j^' P_R + h_j P_L) ]u(p_N_j)ϵ^μ*(p_B) , where Y(E) = Y(ϕ) = -1 and Γ_μ denotes the loop integral

Γ_μ = μ^2ϵ∫d^dk/(2π)^d [(p̸_N_i - k̸ + m_E)γ_μ (p̸_N_j - k̸ + m_E)/([k^2 - m_ϕ^2][(p_N_i-k)^2 - m_E^2][(p_N_j -k)^2 - m_E^2]) - (k̸ + m_E)(p_N_i + p_N_j - 2k)_μ/([k^2 - m_E^2][(p_N_i -k)^2 - m_ϕ^2][(p_N_j-k)^2 - m_ϕ^2])] ,

where k is the loop momentum. The first and second terms inside the square brackets of Eq. (<ref>) originate from the coupling of B_μ to E and ϕ, respectively. We now expand the loop integral in powers of the external momenta and, assuming m_N_i≪ m_E, m_ϕ, obtain the amplitude i ℳ_ij^UV = g'/16π^2 m_E f(r) u̅(p_N_i) σ_μν p_B^ν[(h'_i h^*_j - h^*_i h'_j)P_R - (h^' *_i h_j - h_i h^' *_j)P_L] u(p_N_j)ϵ^μ*(p_B) , where we define the loop function f(r) = 1/(1-r) + r log r/(1-r)^2 , again with r = m_ϕ^2/m_E^2. For m_E = m_ϕ, the limit of this function is f(r)|_r→ 1 = 1/2. The amplitudes in Eqs. (<ref>) and (<ref>) can now be equated to find C_ NNB^(5)ij = g'(h_i'h_j^* - h_i^* h_j')/(64π^2 m_E) f(r) , which is in agreement with the result of <cit.>. We see that the flavour structure is determined entirely by the combination h_i'h_j^* - h_i^* h_j', with the diagonal elements vanishing, as expected.

In the broken phase of the SM, but at energies where the Z boson is not yet integrated out, C_ NNB^(5) results in the γ and Z couplings (d_NNγ and d_NNZ) according to Eq. (<ref>). For m_E = m_ϕ, Eqs. (<ref>) and (<ref>) give dipole couplings of size d_ NNγ^ij = c_w C_ NNB^(5)ij≈ 2.4 × 10^-6 GeV^-1((h_i'h_j^* - h_i^* h_j')/10)(1 TeV/m_E) . Thus, even with values of the couplings at the perturbative limit, h, h' ≲√(4π), values of the dipole coupling can only be obtained up to d_ NNγ^ij∼ 10^-4 GeV^-1 with m_E and m_ϕ just above the current lower limits from collider searches, m_E, m_ϕ≳ 200 GeV (see Section <ref> for further details). Examining Eq. (<ref>), we observe that the one-loop correction to [M_R]_ij depends on the same couplings entering C_ NNB^(5)ij in Eq. (<ref>). Therefore, to obtain large sterile-to-sterile neutrino magnetic moments for sterile states in the 10 GeV to 1 TeV range, some fine-tuning between the tree-level and one-loop contributions to [M_R]_ij may be required.
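As a numerical check, the loop function and the quoted estimate of d_NNγ can be reproduced in a few lines; the electroweak inputs below (α_em ≈ 1/137, s_w^2 ≈ 0.231) are assumed values:

```python
import numpy as np

def f(r):
    """Loop function f(r) = 1/(1-r) + r*log(r)/(1-r)^2, with f(r) -> 1/2 as r -> 1."""
    if abs(r - 1.0) < 1e-8:
        return 0.5
    return 1.0 / (1.0 - r) + r * np.log(r) / (1.0 - r) ** 2

# reproduce the quoted estimate of d_NNgamma for m_E = m_phi = 1 TeV
alpha_em = 1.0 / 137.0                        # assumed low-scale input
sw2 = 0.231                                   # assumed sin^2(theta_w)
cw = np.sqrt(1.0 - sw2)
gp = np.sqrt(4.0 * np.pi * alpha_em) / cw     # g' = e / c_w
hh, mE = 10.0, 1000.0                         # (h'_i h_j^* - h_i^* h'_j) and m_E (GeV)
C5 = gp * hh / (64.0 * np.pi**2 * mE) * f(1.0)
print(f"d_NNgamma = {cw * C5:.2e} GeV^-1")    # ~ 2.4e-6 GeV^-1, as quoted above
```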
§.§ Active-to-Sterile Neutrino Magnetic Moments

The d = 6 operators 𝒪_ NB^(6) and 𝒪_ NW^(6), which induce the active-to-sterile neutrino transition magnetic moments 𝒪_ν Nγ and 𝒪_ν NZ, as well as the charged-current dipole 𝒪_ℓ NW, are also generated in the UV model at one-loop, as depicted in Fig. <ref>. These diagrams do not require mixing between the active and sterile neutrinos and thus avoid mixing suppression. However, they do rely on the Yukawa coupling Y_E of the lepton doublet L with the vector-like lepton E_R and the Higgs doublet H. In the following, we use the same procedure as in the previous section to find the matching between C_ NB^(6) and C_ NW^(6) and the parameters of the UV model.

In the N_RSMEFT, the amplitude ⟨ν_α N_i B h⟩ is given by iℳ_α i^EFT = √(2) C_ NB^(6)α i u̅(p_ν_α) σ_μν p_B^ν P_R u(p_N_i) ϵ^μ*(p_B) , where p_B = p_N_i - p_ν_α. In the UV model, the amplitude ⟨ν_α N_i B h⟩ is determined in part by the three one-loop diagrams shown in Fig. <ref>. It is also possible to draw one-loop diagrams analogous to those in Fig. <ref>, but with E_R^c→ℓ^c_Rρ, h_i' → f_iρ' and Y_E^β*→ [Y_e]_βρ^*, and another with an internal Higgs line proportional to Y_E^α Y_E^β*[Y_ν]_β i. Including all of these contributions, the UV amplitude ⟨ν_α N_i B h⟩ can be calculated and compared with the EFT amplitude, as enacted in the previous section. This gives the matching condition C_ NB^(6)α i = g'/64π^2m_E^2[3 f_αβY_E^β * h_i' f(r) - f_αβ[Y_e]_βρ^* f_iρ'/r(5/2 + 3logμ^2/m_ϕ^2) + Y_E^αY_E^β*[Y_ν]_β i] , where the flavour indices β and ρ are summed over and the loop function f(r) is the same as in Eq. (<ref>), with r = m_ϕ^2/m_E^2.

As seen in the diagram to the right of Fig. <ref>, the intermediate charged lepton can also couple to W_μ^3, which generates the operator 𝒪_NW^(6). The amplitude for ⟨ν_α N_i W^3 h⟩ from this operator is iℳ_α i^EFT = C_ NW^(6)α i/√(2) u̅(p_ν_α) σ_μν p_W^ν P_R u(p_N_i) ϵ^μ*(p_W) , for p_W = p_N_i - p_ν_α. The UV amplitude ⟨ν_α N_i W^3 h⟩ can be calculated from the diagrams entering ⟨ν_α N_i B h⟩ where it is possible to replace B→ W^3. The resulting matching condition is C_ NW^(6)α i = g/32π^2 m_E^2[f_αβY_E^β * h_i' f(r) - f_αβ[Y_e]_βρ^* f_iρ'/r(3/2 + logμ^2/m_ϕ^2) + Y_E^αY_E^β*[Y_ν]_β i] .

From Eqs. (<ref>) and (<ref>) we see that, in principle, there are multiple contributions to these operators. To simplify this dependence on the UV model parameters, we note that we are only interested in the scenario where the coefficient C_ NNB^(5)ij is sizeable, requiring large h'. Then, if we assume that the first terms in Eqs. (<ref>) and (<ref>) dominate over the others, we obtain the particularly simple relation between the coefficients C_ NW^(6)α i = 2/(3t_w) C_ NB^(6)α i≈ 1.22 C_ NB^(6)α i . Thus, as discussed in Section <ref>, the UV model sets the parameter a = 2/3. Now, using Eqs. (<ref>) and Eq. (<ref>), we find the corresponding active-to-sterile dipole couplings in the broken phase to be d_ν Nγ^α i = 4 v c_w/(3√(2)) C_ NB^(6)α i≈ 1.7 × 10^-9 GeV^-1(f_αβY_E^β*h_i'/10^-2)(1 TeV/m_E)^2 . Using Eqs. (<ref>) and a = 2/3, we also find the ratio d_ν N Z^α i/d_ν N γ^α i = (1-3t_w^2)/(4 t_w) ≈ 4.5× 10^-2. The active-to-sterile dipole coupling with Z is therefore relatively suppressed with respect to d_ν N γ^α i in this scenario.
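The size of these couplings, together with the flat directions discussed earlier, can again be checked numerically; a short sketch under the same assumed electroweak inputs:

```python
import numpy as np

sw2, v = 0.231, 246.0                         # assumed sin^2(theta_w) and vev (GeV)
sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
tw = sw / cw
gp = np.sqrt(4.0 * np.pi / 137.0) / cw        # g' = e / c_w

# dominant first term of C_NB^(6) for f Y_E^* h' = 1e-2, m_E = m_phi = 1 TeV (f(1) = 1/2)
C6B = gp / (64.0 * np.pi**2 * 1000.0**2) * 3.0 * 1e-2 * 0.5
print(f"d_nuNgamma = {4.0 * v * cw / (3.0 * np.sqrt(2.0)) * C6B:.2e} GeV^-1")  # ~1.7e-9

# ratio d_nuNZ/d_nuNgamma = (a - 2 t_w^2)/((2 + a) t_w); vanishes at a = 2 t_w^2,
# while at a = -2 the photon coupling itself vanishes instead
for a in (2.0 * tw**2, 2.0 / 3.0):
    print(f"a = {a:.3f}: d_nuNZ/d_nuNgamma = {(a - 2.0 * tw**2) / ((2.0 + a) * tw):.3f}")
```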
Finally, we note that the flavour structure of the couplings d^α i_ν N γ and d^α i_ν N Z exhibits an interesting dependence on the parameters of the UV model. In the case of flavour universal couplings, at least one of the entries of d^α i_ν N γ and d^α i_ν N Z for α∈{e,μ,τ} must vanish, with the other two being equal and opposite in sign; for the choice Y_E^e = Y_E^μ = Y_E^τ and f_eμ = f_eτ = f_μτ, we obtain d^μ i_ν N γ = d^μ i_ν N Z = 0, for example. In the scenario where only the μ-τ couplings are non-zero, for example Y_E^μ = Y_E^τ and f_μτ, we would instead have d^e i_ν N γ = d^e i_ν N Z = 0, d^μ i_ν N γ = - d^τ i_ν N γ and d^μ i_ν N Z = - d^τ i_ν N Z. Finally, if only couplings involving τ are present (e.g., Y_E^τ, f_eτ = f_μτ), we would obtain d^τ i_ν N γ = d^τ i_ν N Z = 0, d^e i_ν N γ = d^μ i_ν N γ and d^e i_ν N Z = d^μ i_ν N Z. These particular limits can be readily translated to other flavour combinations. §.§ Active-to-Active Neutrino Magnetic Moments Here, we finally comment that active-to-active neutrino magnetic moments are naturally suppressed in the UV model. With the fields and couplings available in Eq. (<ref>) and assuming the RH neutrinos satisfy m_N_i≪ m_E, m_ϕ, the lowest-order 1LPI amplitudes that contribute to the d = 7 SMEFT operators C_ LHB^(7)αβ and C_ LHW^(7)αβ are at two loops, as shown in Fig. <ref>. However, in the broken phase, and at energies where the RH neutrinos may also be integrated out, active-to-active neutrino magnetic moments are induced via the active-sterile mixing V_ℓ N, i.e., d_ννγ^αβ = V_α N_i^* V_β N_j^* d^ij_ N N γ + [V_α N_i^* d_ν Nγ^β i - V_β N_i^* d_ν Nγ^α i] ≈ 10^-16 GeV^-1[(V_α N_i^*V_β N_j^*/10^-10)(d_ NNγ^ij/10^-6 GeV^-1) + (V_α N_i^*/10^-5)(d_ν Nγ^β i/10^-11 GeV^-1)] , where in the second line we have assumed that α≠β and d^α i_ν N γ = 0. For active-sterile mixing of the size predicted by the Type-I seesaw mechanism, V_ℓ N∼√(m_ν/m_N)∼ 10^-6, this contribution is highly suppressed both by the mixing and the one-loop suppressed d_ N N γ^α i and d_ν N γ^α i. Thus, even with the larger mixing V_ℓ N in the inverse seesaw scenario, we can still expect the active-to-active neutrino magnetic moments to safely satisfy the bounds from TEXONO <cit.>, GEMMA <cit.>, LSND <cit.> and Borexino <cit.>, d_ννγ^αβ≲ 2× 10^-8 GeV^-1. § PHENOMENOLOGICAL CONSTRAINTS ON THE UV MODEL In this section, we review the constraints on the UV-complete extension of the SM outlined in Section <ref>. We begin by examining the direct production of E and ϕ in collider experiments. We then consider rare low-energy processes which are not present or highly-suppressed in the SM, but could be enhanced by non-diagonal flavour couplings after integrating out E and ϕ. §.§ Constraints from Direct Collider Searches A recasting of direct searches for selectrons and smuons at the LHC provides useful constraints on the singly-charged scalar ϕ in our UV scenario. The dominant limit comes from the Drell-Yan pair production process pp →γ/Z →ϕ^+ ϕ^- followed by the decay ϕ^±→ℓ^±ν. In <cit.>, the most recent ATLAS search <cit.> for oppositely-charged electron and muon pairs with 139 fb^-1 of collected data was recast for the singly-charged scalar scenario. The ATLAS search places a lower bound on the slepton masses of approximately 450 GeV for a 100% branching ratio of the slepton in the specific channel. The major difference between this analysis and the pair production of ϕ^+ϕ^- is the cross-section in the two scenarios. 
In <cit.>, a simple scaling factor was used to map the leading-order Madgraph-simulated production cross-section for pp →ϕ^+ ϕ^- onto the production cross-section given by ATLAS for the RH slepton pair. Allowing for further uncertainties, a conservative lower limit of m_ϕ≳ 200 GeV was found. In addition, monophoton searches for dark matter at LEP <cit.> can also be recast to obtain the limit m_ϕ/|f_eμ| ≳ 350 GeV <cit.>. In general, SU(2)_L singlet vector-like leptons E are subject to much weaker bounds compared to their SU(2)_L doublet counterparts, due to their relatively smaller production cross-sections. Initially, searches at LEP <cit.> were able to exclude the masses m_E < 101.2 GeV. More recently, the ATLAS collaboration has performed a search for heavy charged leptons decaying to a Z boson and an electron or muon, excluding the mass range 129–176 GeV (114–168 GeV) for mixing with only electrons (muons), except for the interval 144–163 GeV (153–160 GeV) <cit.>. In <cit.>, the prospects of probing singlet vector-like leptons with multi-lepton searches at the LHC were explored, with some reach for exclusion at the HL-LHC. In our specific model, the pair-produced vector-like leptons can also decay to a RH neutrino and the singly-charged scalar, E^±→ N_i ϕ^±. The subsequent decay ϕ^±→ℓ^±ν therefore leads to the signature of two oppositely charged leptons plus missing energy. This is a signature similar to that from ϕ^+ϕ^- pair production and is expected to yield a similar constraint, m_E ≳ 200 GeV. Finally, EW precision data from LEP, complemented by the measurement of the Higgs mass, have been used to perform a global fit for the SM plus heavy NP effects, parametrised by d = 6 SMEFT operators <cit.>. From this fit, flavour-dependent upper bounds can be placed on the mixing between the charged leptons and E, which alternatively can be written as the lower limits m_E/|Y_E^e| > 8.3 TeV, m_E/|Y_E^μ| > 5.8 TeV and m_E/|Y_E^τ| > 5.3 TeV (95% C.L.). The singly-charged scalar is similarly constrained as m_ϕ/|f_eμ|> 12.5 TeV (95% C.L.). §.§ Constraints from Charged Lepton Flavour Violation As mentioned in Section <ref>, charged lepton flavour violating (cLFV), lepton flavour universality (LFU) violating observables and precision tests of observables such as (g-2)_μ can be used to constrain the parameter space of the UV model <cit.>. In the presence of the Yukawa couplings Y_E, mixing is induced between the SM charged leptons and the vector-like lepton E, modifying the SM charged-current, neutral-current and Higgs interactions at low energies, as seen in Eqs. (<ref>) and (<ref>).[The Yukawa coupling Y_ν, which induces the active-sterile mixing V_ℓ N, also modifies the charged- and neutral-current interactions and therefore contributes to cLFV processes <cit.>. For simplicity, we assume that Y_ν≪ 1 for the purposes of deriving bounds on the UV model.] Equivalently, from the EFT perspective, the operators 𝒪_Hl^(1), 𝒪_Hl^(3) and 𝒪_eH are induced at tree-level after integrating out E. Likewise, the operator 𝒪_ll is generated after integrating out ϕ. SM processes such as ℓ_α→ℓ_βνν̅, π→ℓ_αν, τ→πν and B→ D^(*)ℓ_αν are modified in the presence of these operators, while cLFV processes such as ℓ_α→ℓ_βγ, ℓ_α→ℓ_βℓℓ̅, μ - e conversion in nuclei, Z→ℓ_α^+ℓ_β^- and h→ℓ_α^+ℓ_β^- are induced. The ℓ_α→ℓ_βγ process additionally receives contributions directly from the dipole operators 𝒪_eB and 𝒪_eW, both of which are generated after integrating out E and ϕ at one-loop. 
In the following, we assume that E and ϕ are integrated out at a scale well above the low energies of the considered processes. Thus, one should consider the running of the operators from the high scale down to the EW scale, match to SU(3)_c × U(1)_Y invariant operators, and then run down to the low scale. However, we neglect the sub-leading effects of running as we only aim to obtain bounds on the model for comparison with those from LLP searches in Section <ref>. For the bounds from LFU violation, we consider here as an example only the purely leptonic LFU ratios, which measure deviations from the SM predictions for ℓ_α→ℓ_βνν̅. In the UV model, we obtain the ratio <cit.> Γ(ℓ_α→ℓ_βνν̅)/Γ(ℓ_α→ℓ_βνν̅)|_SM = 1 + v^2 [|Y_E^α|^2 + |Y_E^β|^2/2m_E^2 + 2|f_αβ|^2/m_ϕ^2] , where we only take into account the interference between the SM and the NP contributions. The first and second terms in the square brackets of Eq. (<ref>) arise from the exchange of W and ϕ, respectively, as seen in the left- and right-most diagrams of Fig. <ref>. The Z and h exchange diagrams do not contribute at this level, because there are no FCNCs at tree-level in the SM. This ratio can be used to determine the ratio of couplings |g_α/g_β|, obtained as |g_α/g_β| ≈ 1 + v^2 [|Y_E^α|^2 - |Y_E^β|^2/4m_E^2 + |f_αρ|^2 - |f_βρ|^2/m_ϕ^2] . From Eq. (<ref>), it is clear that LFU violation vanishes for flavour universal couplings, e.g., the choice Y_E^e = Y_E^μ = Y_E^τ and f_eμ = f_eτ = f_μτ. Current experimental values of these coupling ratios are |g_τ/g_μ| = 1.0009(14), |g_τ/g_e| = 1.0027(14), |g_μ/g_e| = 1.0019(14) <cit.>. From the first of these results, taking only Y_E^τ to be non-zero gives m_E/|Y_E^τ| > 4.1 TeV. The tree-level Z and Higgs exchange diagrams in Fig. <ref> induce the cLFV processes ℓ_α→ℓ_βℓℓ̅, which are subject to stringent constraints from SINDRUM <cit.> and Belle <cit.>. The branching ratio for this general process, neglecting final-state lepton masses, is <cit.> ℬ(ℓ_α→ℓ_βℓ_γℓ̅_δ) ≈m_α^5/768π^3Γ_α(1+δ_βγ)|Y_E^α *Y_E^β|^2/m_E^4[(1+δ_βγ)(g_L^ℓ)^2 + (g_R^ℓ)^2] . We assume that the dominant contribution to Eq. (<ref>) comes from the Z exchange diagrams, or correspondingly the operators 𝒪_Hl^(1) and 𝒪_Hl^(3). The contribution of the Higgs exchange diagram, or the operator 𝒪_eH, can be safely neglected as it is further suppressed by the lepton masses, as seen in Eqs. (<ref>) and (<ref>). In Eq. (<ref>), we also neglect the contributions from E and ϕ at one-loop via penguin and box diagrams, or equivalently via the operators 𝒪_eB, 𝒪_eW and 𝒪_ll <cit.>. The one-loop contribution of ϕ will only be relevant if the Yukawa couplings inducing ℓ_α→ℓ_βℓℓ̅ at tree-level satisfy Y_E^α, Y_E^β≪ f_αβ. If we assume that the tree-level contribution dominates, the branching ratio in Eq. (<ref>) and the corresponding experimental upper limits can be used to place lower bounds on the combination m_E/|Y_E^α*Y_E^β|^1/2 for α≠β. The SINDRUM experiment provides the upper limit ℬ(μ^+→ e^+ e^+ e^-) < 1 × 10^-12 (90% C.L.), giving m_E/|Y_E^μ*Y_E^e|^1/2 > 120 TeV. Likewise, the upper limits ℬ(τ^- → e^-e^-e^+) < 2.7× 10^-8 and ℬ(τ^- →μ^-μ^-μ^+) < 2.1× 10^-8 from Belle translate to m_E/|Y_E^τ*Y_E^e|^1/2 > 5.9 TeV and m_E/|Y_E^τ*Y_E^μ|^1/2 > 6.3 TeV, respectively. The equivalent constraints from ℬ(τ^-→ e^-μ^-μ^+) and ℬ(τ^- →μ^-e^-e^+) are comparable. We note that the cLFV processes ℓ_α→ℓ_β q q̅, i.e., hadronic τ decays τ→ℓ_β X, where X is a light pseudoscalar, scalar or vector meson, are also generated. 
Constraints on such decay modes from Belle and BaBar are comparable to those on τ→ℓ_βℓℓ̅ <cit.>.

In the presence of the flavour-changing Z couplings at tree-level, the exotic process of muon conversion to electrons in nuclei can also occur. The rate for this process, divided by the total capture rate Γ_capt, is <cit.> CR(μ→ e) = m_μ^5/Γ_capt |Y_E^μ*Y_E^e|^2/m_E^4 |(g_L^u + g_R^u)(2V^(p)+V^(n))+(g_L^d + g_R^d)(V^(p)+2V^(n))|^2 , where V^(p) and V^(n) are nucleus-dependent overlap integrals over the proton and neutron densities and the muon and electron wavefunctions <cit.>. In Eq. (<ref>), we again assume that the dominant contribution arises at tree-level via the operators 𝒪_Hl^(1) and 𝒪_Hl^(3), neglecting the contributions of E and ϕ at one-loop via penguin diagrams. The SINDRUM experiment has placed an upper bound on the μ-e conversion rate on ^197_79Au of CR(μ→ e) < 7× 10^-13 (90% C.L.) <cit.>, which results in the constraint m_E/|Y_E^μ*Y_E^e|^1/2 > 290 TeV.

Next, the radiative cLFV process ℓ_α→ℓ_βγ is generated at one-loop, as depicted in Fig. <ref>. The corresponding branching ratio is given by <cit.> ℬ(ℓ_α→ℓ_βγ) ≈ m_α^5/(512π^3Γ_α) α/(2π)[|Y_E^α *Y_E^β/4m_E^2(1 + 8/3 g_L^ℓ) + f_ρα^*f_ρβ/3m_ϕ^2|^2 + |f_iα'f_iβ^' */12 m_ϕ^2|^2] , where the indices ρ and i are summed over. The term in Eq. (<ref>) proportional to Y_E^α*Y_E^β originates from the operators 𝒪_Hl^(1), 𝒪_Hl^(3), 𝒪_eB and 𝒪_eW, generated after integrating out E, while the terms proportional to f_ρα^* f_ρβ and f_iα' f_iβ^' * arise from the contributions of ϕ to 𝒪_eB and 𝒪_eW. Contributions of the operator 𝒪_eH via two-loop Barr-Zee type diagrams are neglected, as C_eH is suppressed by the lepton masses. If we assume that only the Y_E^α are non-zero, the current upper limit from the MEG experiment <cit.>, ℬ(μ→ eγ) < 4.2 × 10^-13 (90% C.L.), corresponds to the constraint m_E/|Y_E^μ*Y_E^e|^1/2 > 75 TeV. For τ decays, the BaBar experiment gives the upper limits ℬ(τ→ eγ) < 3.3 × 10^-8 and ℬ(τ→μγ) < 4.4 × 10^-8 (90% C.L.) <cit.>, enforcing m_E/|Y_E^τ*Y_E^e|^1/2 > 2.9 TeV and m_E/|Y_E^τ*Y_E^μ|^1/2 > 2.7 TeV, respectively.

Finally, limits can be placed from the cLFV decays Z→ℓ_α^+ℓ_β^- and h→ℓ_α^+ℓ_β^-. With non-zero values of Y_E^α, the branching ratios of these are <cit.>

ℬ(Z→ℓ_α^+ℓ_β^-) = m_Z/(6πΓ_Z) m_Z^2/v^2[((g_L^ℓ)^2 + (g_R^ℓ)^2 + g_L^ℓ v^2|Y_E^α|^2/2m_E^2)δ_αβ + v^4 |Y_E^α * Y_E^β|^2/16m_E^4] ,
ℬ(h→ℓ_α^+ℓ_β^-) = m_h/(8πΓ_h) m_α^2/v^2[(1 - 3 v^2|Y_E^α|^2/2m_E^2)δ_αβ + 9 v^4 |Y_E^α * Y_E^β|^2/16m_E^4] .

We note that ℬ(Z→ℓ_α^±ℓ_β^∓) and ℬ(h→ℓ_α^±ℓ_β^∓) are found by adding the branching ratios with α↔β to those above. Upper bounds on these decays have been placed by OPAL <cit.>, ATLAS <cit.> and CMS <cit.>. Taking the most stringent, ℬ(Z→ e^±μ^∓) < 4.2× 10^-7 at 90% C.L. <cit.>, gives m_E/|Y_E^μ*Y_E^e|^1/2 > 4.1 TeV.

To conclude this section, we observe that ℓ_α→ℓ_βℓℓ̅ and μ-e conversion in nuclei provide the most stringent constraints on the UV model, with the latter probing values of m_E up to 290 TeV for 𝒪(1) Yukawa couplings Y_E. The coupling Y_E^α enters the active-to-sterile dipole couplings, as seen in Eqs. (<ref>) and (<ref>), and therefore these constraints must be taken into account when considering non-zero d_ν Nγ^α i. However, we note that these constraints can easily be evaded if the couplings of E and ϕ are not flavour universal. For example, one can consider only non-zero couplings in the μ-τ (e-τ) sector; for Y_E^e = 0 (Y_E^μ = 0), the strong bounds from μ→ 3e and μ-e conversion are no longer applicable.
The only bounds that then apply are the much weaker limits from LFU violation, τ→μγ (τ→ eγ), and τ→ 3μ (τ→ 3e). One can also consider the case where only one Yukawa coupling is non-zero, e.g. Y_E^τ. Then the strongest bounds come from the one-loop contributions of ϕ to μ→ eγ, μ→ 3e and μ-e conversion.

§ LONG-LIVED PARTICLE SEARCHES AT THE LHC USING NON-POINTING PHOTONS

In this section, we discuss the potential of searches for non-pointing photons at the LHC to constrain sterile neutrino magnetic moments. To keep our analysis as general as possible, here we will follow the EFT notation and conventions introduced in Section <ref>.

§.§ Sterile Neutrino Production and Decay Mechanisms

In what follows, we first discuss the sterile neutrino production and decay processes triggered by the EFT Wilson coefficients C_NNB^(5)ij and C_NX^(6)α i, where X=B,W.[Here, we assume C_NW^(6)α i = 2/(3 t_w) C_NB^(6)α i, as discussed at the end of Section <ref>.] We also comment on the contributions from the active-sterile neutrino mixing. Hereafter we simplify the notation by omitting flavour indices. First, we take C_NNB^(5)12≡ C_NNB^(5); since 𝒪_NNB^(5) is an antisymmetric operator, we are implicitly assuming C_NNB^(5)21 = - C_NNB^(5)12. Second, we also omit the lepton flavour α in C_NX^(6)α i (C_NX^(6)α i≡ C_N_i X^(6)), as processes which involve active neutrinos do not reveal which neutrino flavour is participating. These simplifications will also apply to the coefficients in the broken phase.

The coupling C_NNB^(5) contributes to pair production of the sterile neutrinos via pp→γ/Z → N_1 N_2, while C_N_iX^(6) leads to the production of a single N_i through pp→γ/Z → N_i ν and pp → W^±→ N_i ℓ^±. Fig. <ref> shows some example cross-sections for fixed values of C_NNB^(5) (left) and C_N_iX^(6) (right). For m_N_1+m_N_2 < m_Z, pair production through C_NNB^(5) is dominated by the Z exchange diagram, whereas for larger masses photon exchange dominates. Single sterile neutrino production through C_N_iX^(6), on the other hand, is dominated by the charged current. The neutral channel is subdominant compared to the charged one in this case because of the suppression between d_ν N_i γ and d_ν N_i Z discussed below Eq. (<ref>). Note that sterile neutrino production at the LHC through mixing (non-zero V_ℓ N_i) proceeds through charged-current events with the same dependence on the mass m_N_i as for C_N_iX^(6). We therefore do not show production via mixing separately in Fig. <ref> (right).

For sterile neutrino decays, the interaction C_NNB^(5) induces the two-body decay N_2 → N_1 γ and, if N_2 is heavy enough, N_2 → N_1 Z. Note that the lightest state N_1 cannot decay through C_NNB^(5). The relevant partial decay widths of N_2, in terms of the broken phase parameters defined in (<ref>), are given by Γ(N_2 → N_1 γ) = 2 |d_ NNγ|^2/π m_N_2^3 ( 1 - m_N_1^2/m_N_2^2)^3 = 2 |d_ NNγ|^2/π m_N_2^3 ( 2 - δ)^3 δ^3 , with δ≡ 1- m_N_1/m_N_2 denoting the mass splitting between the two sterile states, and Γ(N_2 → N_1 Z) = |d_ NNZ|^2/π m_N_2^3 f_Z (m_N_1/m_N_2, m_Z/m_N_2) λ^1/2(1, m_N_1^2/m_N_2^2,m_Z^2/m_N_2^2) , with f_Z(x,y) = ((1-x)^2-y^2)(2(1+ x)^2 +y^2 ) , λ(x,y,z) = (x-y-z)^2 - 4yz . We recall that we use the notation d_ NNγ^12≡ d_ NNγ in this section, and because of the antisymmetric nature of the coupling, there is an additional factor of 4 in the decay width of N_2 → N_1 γ compared to that of N_i →νγ (see Eqs. (<ref>) and (<ref>)).
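For orientation, these widths translate into proper decay lengths as in the following sketch; the benchmark values of d_NNγ and m_N_2 are illustrative assumptions, and d_NNZ = -t_w d_NNγ follows from the relation in Section <ref>:

```python
import numpy as np

HBARC = 1.973e-16                       # GeV * m

def lam(x, y, z):
    return (x - y - z)**2 - 4 * y * z

def f_Z(x, y):
    return ((1 - x)**2 - y**2) * (2 * (1 + x)**2 + y**2)

def width_N2(d_gamma, d_Z, mN2, delta, mZ=91.19):
    """Gamma(N2 -> N1 gamma) + Gamma(N2 -> N1 Z) from the expressions above."""
    x = 1.0 - delta                     # m_N1 / m_N2
    G = 2 * abs(d_gamma)**2 / np.pi * mN2**3 * ((2 - delta) * delta)**3
    if mN2 * delta > mZ:                # Z channel open only if m_N2 - m_N1 > m_Z
        G += abs(d_Z)**2 / np.pi * mN2**3 * f_Z(x, mZ / mN2) \
             * np.sqrt(lam(1.0, x**2, (mZ / mN2)**2))
    return G

for mN2, delta in ((20.0, 0.01), (20.0, 0.5), (300.0, 0.5)):
    G = width_N2(1e-6, -0.548e-6, mN2, delta)    # d_NNZ = -t_w * d_NNgamma
    print(f"m_N2 = {mN2:5.0f} GeV, delta = {delta:4.2f}: c*tau = {HBARC / G:.2e} m")
```

The output illustrates the point made in scenario B1 below: a small mass splitting δ is what renders N_2 long-lived.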
The couplings C_N_iB^(6) and C_N_iW^(6) generate the decays N_i →νγ, N_i →ν Z and N_i →ℓ^± W^∓. While the γ final state is kinematically allowed for practically all values of m_N_i, Z and W final states contribute only for m_N_i larger than the gauge boson masses.[For m_N_i below the weak gauge boson masses, three-body decays through off-shell Z/W occur, but we have checked that these are always subdominant in comparison to the γ channel. Hence these partial widths are neglected in our numerical study.] The relevant partial decay widths, written in terms of the broken phase parameters defined in Eqs. (<ref>) and (<ref>), are given by Γ(N_i →νγ) = |d_ν N_i γ|^2/2π m_N_i^3, Γ(N_i →ν Z) = |d_ν N_i Z|^2/8 π m_N_i (2m_N_i^2 + m_Z^2) λ(1,0,m_Z^2/m_N_i^2), Γ(N_i →ℓ^- W^+) = |d_ℓ N_i W|^2/16π m_N_i (2m_N_i^2 + m_W^2) λ(1,0,m_W^2/m_N_i^2), where, for simplicity, we have neglected the charged lepton masses. In the computation of the total width of a Majorana N_i, the latter two partial decay widths are multiplied by 2 to account for the Majorana nature of ν and the charge-conjugated channel ℓ^+ W^-. Sterile neutrinos will also decay via mixing. These decays have been thoroughly studied in the literature. For our numerical analysis, we use the decay width formulas of <cit.>.

In Fig. <ref> we show the previous partial decay widths as a function of the sterile neutrino masses for various coefficient values. Here, C_N_iB^(6) and C_N_iW^(6) are fixed as C_N_iB^(6) = 5 × 10^-12 GeV^-2 and C_N_iW^(6) = 2/(3t_w) C_N_iB^(6)≈ 6.22 × 10^-12 GeV^-2. These control the decay channels to νγ, ν Z, e^- W^+. For comparison, we also show the neutrino mixing partial decay width for two different values of |V_ℓ N_i|^2. In the left (right) plot, we fix |V_ℓ N_i|^2 = 10^-9 (10^-14), motivated by an inverse seesaw (or naive Type-I seesaw) estimation. Additionally, we show the partial decay widths of N_2 → N_1 γ and N_2 → N_1 Z controlled by C_NNB^(5) and δ. In the left plot, these parameters have been fixed to C_NNB^(5) = 10^-6 GeV^-1 and δ = 0.01, and in the right plot, to C_NNB^(5) = 10^-8 GeV^-1 and δ = 0.5. Notice that only the right plot shows a curve for N_2 → N_1 Z, as a large mass splitting is needed for this channel.

From Figs. <ref> and <ref> we can compare the relative sizes of C_NNB^(5), C_N_iX^(6) and V_ℓ N_i, determining the dominant sterile neutrino production and decay mechanisms. There are nine distinct possibilities, which we define as shown in Fig. <ref>. The scenarios in the same row have the same dominant production mechanism, whereas the scenarios in the same column share the same dominant decay mode for the sterile neutrino. In practice, not all combinations will lead to non-pointing photons, and we present the features of all these scenarios below. Scenarios B1-B3 assume that production is controlled by C_NNB^(5), thus N_2 and N_1 are pair produced. Then:

* B1: N_2 will decay to N_1γ, whereas N_1 escapes undetected. The decay length of N_2 is controlled by C_NNB^(5) and the mass difference δ = 1 - m_N_1 /m_N_2. N_2 is long-lived if δ≪ 1. Too small a δ, however, leads to photons too soft to be tagged. The signal consists of one non-pointing photon plus missing energy.

* B2: N_1 will decay to νγ and, for small enough C_N_1X^(6), this decay is long-lived. N_2 will decay to N_1γ either promptly or, if δ≪ 1, with a finite decay length. The signature of this scenario contains up to three photons, either two or three of which are non-pointing.
There is also missing energy in the event, from ν and the potential undetected photons. * B3: The mixing parameter V_ℓ N_1 controls the decay of N_1. Again, N_2 will decay to N_1γ promptly (or delayed if δ≪ 1). The potential final state signal is a prompt (or non-pointing) photon plus one or two displaced vertices (DV) with charged tracks from the N_1 decays. Unless two displaced vertices and the photon are all found, the event again contains missing energy. In scenarios B4-B6, production is controlled by C_N_iX^(6): * B4: Comparing the production cross-sections in Fig. <ref> to the decay widths in Eqs. (<ref>) and (<ref>) it quickly becomes clear that this scenario cannot be realised, since a C_NNB^(5) large enough to dominate the decay width over C_N_iX^(6) would also dominate the cross-section. * B5: In this scenario C_N_iX^(6) is assumed to be also dominant in the decay of N_i. Comparing the width (<ref>) to the production cross-section in Fig. <ref> (right), it is clear that the decays N_i→νγ are too fast to lead to non-pointing photons since there is no kinematic suppression (δ≃ 1) if C_N_iX^(6) is large enough to give a sizeable cross-section. The signal in this case is a prompt lepton accompanied by a prompt photon. * B6: V_ℓ N_i is assumed to dominate the decay. Given the discussion above for (B5), it is clear that decays will be prompt in this case. There are no photons in the final state, thus no difference in signal from the minimal mixing case. Finally, for B7-B9 we assume production is dominated by mixing, V_ℓ N_i. * B7: The conditions defining this scenario can be fulfilled only in a narrow range of parameters. The cross-section from the dipole and the one from mixing are roughly of the same order for some particular ratio of C_NNB^(5) to V_ℓ N_i. For this ratio, the width from C_NNB^(5) will be smaller than the one from mixing for m_N_2>15 GeV, for δ=0.01. For a smaller ratio (and thus a more dominant production via mixing), the partial width through the dipole can dominate only for smaller values of m_N_2. In this narrow parameter range some events with a non-pointing photon would be generated; however, they would always come together with a certain number of displaced-vertex events. * B8: The decay caused by C_N_iX^(6) can easily dominate over the decay via mixing for m_N_i<m_Z, without C_N_iX^(6) dominating the cross-section. Since a minimum size of V_ℓ N_i is needed to give a sizeable cross-section, however, decay lengths will be shorter than in the pure mixing scenario. The signal is a prompt lepton plus a photon (prompt or non-pointing, depending on C_N_iX^(6)). * B9: This scenario is phenomenologically the same as the minimal mixing scenario. It has been discussed extensively in the literature. Given the above discussion, the most interesting scenarios are B1-B3. We will therefore study these three scenarios in detail below.

§.§ Simulation Details

In order to determine the sensitivity reach of ATLAS for probing sterile neutrino magnetic moments, we performed a numerical study of the chosen representative benchmarks using Monte-Carlo (MC) techniques. We first implemented our model described by Eq. (<ref>) (only up to d = 6) plus the interactions induced by active-sterile neutrino mixing in <cit.>. The generated UFO files <cit.> were embedded in <cit.>, where we generated the collision process pp → N_1 N_2 at √(s) = 14 TeV, induced by C_NNB^(5) in the selected benchmarks.
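The long-lived character of each benchmark follows directly from the widths of the previous subsection. A small Python sketch (ours; the parameter values below are illustrative assumptions) converts a total width into a proper decay length:

import math

HBARC_GEV_M = 1.97327e-16  # hbar * c in GeV * m

def width_Ni_nu_gamma(d_nuN_gamma, m_N):
    # Gamma(N_i -> nu gamma) = |d|^2 m_N^3 / (2 pi), in GeV
    return abs(d_nuN_gamma)**2 * m_N**3 / (2.0 * math.pi)

def ctau_metres(total_width_gev):
    # Proper decay length c*tau = hbar*c / Gamma
    return HBARC_GEV_M / total_width_gev

# Illustrative point: d_nuNgamma = 1e-11 GeV^-1, m_N1 = 100 GeV
print(ctau_metres(width_Ni_nu_gamma(1.0e-11, 100.0)))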
We generated 10^5 events at each point of the grid covering the parameter plane (m_LLP, c_decay), where the LLP is the sterile neutrino providing the displaced signature at the detector and c_decay denotes the interaction dominating the LLP decay width. These two parameters define the two-dimensional space we scan over in our simulation for each benchmark scenario, as shown in the second column of Tab. <ref>. In the third column, we display the remaining parameters of the model, which are fixed to some numerical value. In B1, we keep the mass splitting δ fixed such that m_N_1 varies in the scan along with m_N_2, whereas in B2 and B3 we fix m_N_2 and C_NNB^(5), since they do not affect the long-lived nature of N_1. The sterile neutrino decays are treated separately in <cit.>. To ensure numerical stability for small decay widths, the N_i was forced to decay into the final state of interest (shown in the last column of Tab. <ref>) in all simulated events. The resulting LHE event files were then processed by <cit.> to estimate the ATLAS efficiency in detecting the characteristic LLP signature of each benchmark: non-pointing photons in B1 and B2, and displaced vertices in B3. Specific details of each search strategy are outlined below. Non-pointing photons are emitted in the decay of the long-lived sterile neutrinos. These decays occur at a secondary vertex, displaced from the collision point or primary vertex (PV), causing the photon direction to point away from the PV. The ATLAS (and CMS) Electromagnetic Calorimeter (ECal) can detect energetic photons and reconstruct their trajectories precisely. The photon displacement is quantified using the impact parameter (IP), defined as the minimum distance of the photon trajectory to the PV. The IP can be decomposed into its transverse and longitudinal components, d_XY and d_Z, which can be obtained with

d_XY = x_LLP p_Y/p_T - y_LLP p_X/p_T ,
d_Z = ( z_LLP - (r⃗·p⃗) p_Z/|p⃗|^2 ) / ( 1 - p_Z^2/|p⃗|^2 ) ,

where r⃗={x_LLP, y_LLP, z_LLP} is the LLP decay position and p⃗ is the photon momentum. In our simulation, we have access to the true decay positions of the sterile neutrinos as well as to the kinematic properties of the outgoing particles. Therefore, we can use the previous equations to compute d_XY or d_Z for each photon emitted in the decay of N_i. In practice, measuring d_Z requires knowing the exact location of the PV along the beamline (z-axis). However, this can be challenging in the high-luminosity conditions at the LHC, where multiple collisions occur per bunch crossing, leading to the production of several PVs. Moreover, in our specific case (scenarios B1-B3), sterile neutrinos are produced at the collision point without any accompanying charged particle. This limitation extends, in general, to scenarios where only neutral particles are produced at the origin and decay far from the PV. Therefore, we can only use the variable d_XY, which can be measured with respect to the beamline. It is worth mentioning recent work in this direction. ATLAS, for instance, presented results from a search using non-pointing and delayed photons in <cit.>, and in <cit.> the authors have recast it for constraining Higgs decays into sterile neutrinos in the context of dimension-5 operators. However, the search relies on d_Z and the time delay t_γ, both of which require knowing the PV location. We cannot apply this search to our selected benchmarks and accordingly propose an alternative search strategy.
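In an event loop these two quantities reduce to a few lines of code. The following Python sketch is one possible implementation (ours; it assumes the PV at the origin, as in the expressions above):

import math

def d_xy(x_llp, y_llp, px, py):
    # Transverse impact parameter of a photon emitted at (x_llp, y_llp)
    # with transverse momentum (px, py), measured w.r.t. the beamline
    pt = math.hypot(px, py)
    return x_llp * py / pt - y_llp * px / pt

def d_z(r_llp, p):
    # Longitudinal impact parameter; r_llp = (x, y, z) is the LLP decay
    # position and p = (px, py, pz) the photon momentum; PV assumed at origin
    rdotp = sum(ri * pi for ri, pi in zip(r_llp, p))
    p2 = sum(pi * pi for pi in p)
    return (r_llp[2] - rdotp * p[2] / p2) / (1.0 - p[2]**2 / p2)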
Our approach draws inspiration from the CMS search in <cit.>, which looked for non-pointing photons using a trigger on d_XY instead. In Tab. <ref>, we summarise the cuts for the non-pointing photon search that we apply in scenarios B1 and B2. The event selection starts by triggering on photons satisfying p_T^γ>10 GeV and |η^γ|<2.47. Subsequently, we require the LLP to decay before it reaches the outer layer of the ECal. This requirement imposes the following cuts on the transverse plane and longitudinal axis: r_DV<1450 mm and |z_DV|<3450 mm, respectively. Finally, the energetic photons passing the initial selection criteria and originating from one of the DVs are required to have |d_XY^γ|> 6 mm. This last cut is expected to reduce the SM background significantly <cit.>. On the other hand, in scenario B3, the LLP candidate decays via the active-sterile neutrino mixing. The dominant decay modes contain charged particles that leave tracks originating from a displaced vertex (DV). ATLAS and CMS have looked for long-lived sterile neutrinos using a DV search strategy and have placed bounds on the neutrino mixing parameter <cit.>. Here, we follow <cit.> and use their DV search, which targets a heavy N decaying into ejj. We summarise the main selection cuts of this search in the lower part of Tab. <ref>. The event selection starts by identifying electrons with p_T^e > 120 GeV, |η^e|<2.47. Then we select events with a DV lying inside the ATLAS inner tracker detector, which imposes a cut on the transverse and longitudinal positions: 4 mm < r_DV<300 mm, |z_DV|<300 mm. To reconstruct the DV, at least four displaced charged-particle tracks are needed. We require them to have a transverse impact parameter of |d_0|>2 mm.[The approximate transverse impact parameter in this case is defined as d_0=r ·Δϕ, where r is the transverse distance of the track from the interaction point, and Δϕ is the azimuthal angle between the track and the direction of N_1.] Additionally, one of the displaced tracks must correspond to an energetic electron passing the initial trigger. A final cut on the invariant mass of the DV (m_DV>5 GeV) is applied to remove the SM background from B-mesons. Further details can be found in Section III of <cit.>. The total number of signal events for the three selected benchmarks is obtained with the following expressions:

N_sig.^B1 = σ·ℒ·ℬ(N_2 → N_1 γ) ·ϵ_sel^B1,
N_sig.^B2 = σ·ℒ·ℬ(N_2 → N_1 γ) · 2 ·ℬ(N_1 →νγ) ·ϵ_sel^B2,
N_sig.^B3 = σ·ℒ·ℬ(N_2 → N_1 γ) · 2 ·ℬ(N_1 → e j j) ·ϵ_sel^B3,

where σ is the cross-section of the process pp→ N_1 N_2, which depends on the sterile neutrino masses and the interaction C_NNB^(5). ℒ corresponds to the total integrated luminosity during the high-luminosity phase, 3 ab^-1. The branching ratio into the appropriate final state is also a function of the masses and of the coupling responsible for the decay in each scenario (see the second column in Tab. <ref>). The factor ϵ_sel denotes the efficiency of event selection in ATLAS after applying the cuts in Tab. <ref>. Notice that there is an additional factor of two in the formulas for B2 and B3, since we only require one displaced photon/vertex and there are two N_1 in each event. Under the assumption of zero background, we derive the 95% C.L. sensitivity prospects of ATLAS by requiring 3 signal events. Exclusion limits can be placed in the corresponding parameter space if no signal events are found at the end of the experiment.
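The event-count formulas translate directly into code. A minimal Python sketch (ours; the numerical inputs below are placeholders, not results of the analysis):

def n_signal(xsec_pb, lumi_fb, branching, efficiency, n_llp=1):
    # N = sigma * L * BR * eff, times n_llp when any of n_llp identical
    # LLPs in the event can provide the displaced signature
    return xsec_pb * (lumi_fb * 1.0e3) * branching * efficiency * n_llp

# Scenario B2 sketch: sigma = 1e-3 pb, L = 3 ab^-1 = 3000 fb^-1,
# BR(N2 -> N1 gamma) * BR(N1 -> nu gamma) = 1, eff = 0.5, two N1 per event
n_b2 = n_signal(1.0e-3, 3000.0, 1.0, 0.5, n_llp=2)
print(n_b2, n_b2 >= 3.0)  # zero-background 95% C.L. criterion: 3 events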
However, in case of a discovery, a larger number of events would be necessary to identify the scenario. For instance, looking for a second photon in the non-pointing photon search could distinguish whether a potential signal stems from B1 or B2. In scenario B3, on top of the displaced vertex search, an additional trigger on a prompt photon associated with the event containing the DV would distinguish this scenario from other models, such as the minimal scenario.

§ RESULTS AND DISCUSSION

Following the methodology described above, we obtain the experimental sensitivity of ATLAS for probing sterile neutrinos interacting via dipole interactions. We work at the level of N_RSMEFT operators in the unbroken phase and then make the conversion to the broken phase parameters using Eqs. (<ref>) and (<ref>). In addition, we consider the active-sterile neutrino mixing as an independent parameter and, for simplicity, focus on the mixing of the light sterile neutrino with the electron sector only, i.e. V_ℓ N_1 = V_eN_1. The scenarios under study are B1-B3 in Fig. <ref>. In these scenarios, sterile neutrinos are pair-produced in proton-proton collisions through the operator 𝒪_NNB^(5). The difference stems from the interaction dominating the long-lived sterile neutrino decays: C_NNB^(5), C_N_1X^(6) (X=B, W) or V_eN_1. Fig. <ref> shows the sensitivity reach of the non-pointing photon search applied to scenario B1, in which N_2 is the LLP decaying within the detector. Contours corresponding to 3 (30) events are shown as solid (dashed) lines in the m_N_2 vs d_NNγ plane, for three mass splittings: δ = 0.05, δ = 0.01 and δ = 0.005. The sensitivity in mass goes beyond m_N_2≃ 5 GeV, m_N_2≃ 14 GeV and m_N_2≃ 16 GeV, respectively. Dipole couplings as small as d_NNγ≃ 5 × 10^-7 (8× 10^-7) GeV^-1 can be probed if δ=0.05 (0.01), improving the current limits from LEP <cit.> (shown as the grey area) by more than one order of magnitude in the corresponding mass ranges. For δ= 0.005, the sensitivity reaches d_NNγ≃ 2 × 10^-6 GeV^-1. The shape of the curve can be understood in the following way. The lower part of the curve is the long-lived limit: for a given mass and δ, a smaller coupling results in N_2 being too long-lived, escaping the detector without decaying. The upper part represents the opposite situation. A larger coupling value would make the N_2 decay too promptly, and the emitted photon would point back to the primary vertex, losing the LLP signature. Moreover, since the production cross-section is proportional to |d_NNγ|^2, we quickly lose events as the dipole coefficient becomes smaller. However, we determined that the results are highly sensitive to the p_T^γ cut in this scenario. The smaller the mass splitting, the less energetic the final photons are, and for δ≲ 0.005 they become too soft to be detected in the ECal. This effect can already be observed in the top left corner of the contour for δ=0.005, where the search is no longer sensitive to low sterile neutrino masses. Fig. <ref> shows the results for scenario B2 in the m_N_1 vs. d_ν N_1 γ plane. In this case, production and decay of the LLP candidate, N_1, are decoupled since they depend on different interactions. We fix C_NNB^(5) = 10^-5 GeV^-1, corresponding to d_NNγ≃ 8.8 × 10^-6 GeV^-1, to ensure it dominates sterile neutrino production. Two mass values of N_2 are considered: m_N_2=700 GeV and m_N_2=80 GeV. We recall that the scan parameters are m_N_1 and C_N_1B^(6), which we then convert to d_ν N_1 γ using Eq.
(<ref>).[The contributions of d_ν N_1 Z and d_ℓ N_1 W to the total decay width of N_1 are also taken into account, since we are considering the EFT operators in the unbroken phase.] The solid (dashed) lines correspond to 3 (30) events. Equivalently, these dashed lines correspond to 3 events for C_NNB^(5)≈ 3 × 10^-6 GeV^-1. The shape of the curves is similar to that of Fig. <ref>, but the probed parameter space is much larger, precisely because the production and decay mechanisms of N_1 are decoupled. The sensitivity reach in mass is limited by the kinematic threshold m_N_1≲ m_N_2, resulting in the vertical lines at m_N_1=700 GeV and m_N_1 = 80 GeV. As concerns active-to-sterile magnetic moments, values as small as d_ν N_1 γ≃ 5 × 10^-13 (2×10^-12) GeV^-1 can be probed for m_N_2=700 (80) GeV. In contrast to the previous scenario, the non-pointing photons are considerably more energetic since they arise in the decay N_1 →νγ. Consequently, almost all simulated photons pass the corresponding cuts of the non-pointing photon search, giving a large event selection efficiency. Even with a stronger p_T^γ cut, the area enclosed by the sensitivity curves comprises unexplored parameter space. Indeed, current exclusion bounds are much weaker and do not appear in the figure. For scenario B3, the results of the displaced vertex search are shown in Fig. <ref>. We display the 3 (solid) and 10 (dashed) event contours in the m_N_1 vs. |V_e N_1|^2 plane. Analogously to B2, production and decay of N_1 are decoupled. We fix the production coupling (C_NNB^(5)=10^-5 GeV^-1) and choose two values for m_N_2. For m_N_2 = 600 GeV, the sensitivity extends across 10 orders of magnitude in |V_eN_1|^2, reaching down to 10^-17, and saturates the kinematic threshold of m_N_1≃ 600 GeV. The slope of the curve changes around m_N_1≈ m_W because the N_1 can decay to on-shell weak bosons via the two-body decays N_1→ W^± e^∓ and N_1→ Z ν. This increase in the decay width is compensated by smaller values of the mixing parameter. For the chosen parameter values, 30 events are not reached, and in the 10-event contour we already observe two isolated regions in the parameter space. Conversely, for m_N_2=80 GeV, 30 signal events can be obtained, although we do not show the corresponding curve in the plot. The probed region, in this case, is one order of magnitude larger in |V_eN_1|^2 than for m_N_2=600 GeV, reaching values as small as |V_eN_1|≃ 10^-13 at m_N_1≃ 80 GeV. Present exclusion bounds on the mixing parameter |V_eN|^2 are weaker in this sterile neutrino mass range and hence are not shown. However, we display the sensitivity prospects of displaced vertex searches by ATLAS and CMS <cit.>, represented with the grey and black lines. Their results are derived in the context of the minimal scenario (only neutrino mixing) and, to be precise, these searches cannot be directly recast into our parameter space as they require prompt leptons at the origin. Nevertheless, we show them for illustrative purposes, revealing that much smaller squared mixing values can be explored in the presence of sizeable sterile neutrino dipole moments, even accessing the Type-I seesaw band. The light grey region in the plot is obtained assuming the naive Type-I seesaw relation for active neutrino masses of 0.05 and 0.001 eV.
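For reference, the naive Type-I seesaw band quoted here follows from |V|^2 ≈ m_ν/m_N; a one-line Python sketch (ours, purely illustrative):

def seesaw_mixing_sq(m_nu_eV, m_N_GeV):
    # Naive Type-I seesaw estimate: |V|^2 ~ m_nu / m_N (converted to GeV)
    return (m_nu_eV * 1.0e-9) / m_N_GeV

# Band for m_N = 80 GeV, between m_nu = 0.001 eV and 0.05 eV
print(seesaw_mixing_sq(0.001, 80.0), seesaw_mixing_sq(0.05, 80.0))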
Finally, we can map the values of the dipole couplings to which experiments are sensitive onto the parameters of the UV-complete model, namely the masses of the singly-charged scalar and vector-like lepton (m_ϕ, m_E) and the Yukawa-type couplings in Eq. (<ref>) (f, f', Y_E, h, h'). The sterile-to-sterile and active-to-sterile dipoles are related to these parameters via d_NNγ^ij = e/128 π^2 m_E(h_i' h_j^∗ - h_i^∗ h_j') , d_ν N γ^α i = e v/32√(2)π^2m_E^2 f_αβ Y_E^β∗ h_i' , where we have assumed that m_E=m_ϕ, hence replacing f(r)|_r→ 1 = 1/2. Since there are products of two or even three couplings entering these equations, we find it convenient to plot the generated magnetic moments as functions of the specific coupling combinations and the vector-like lepton mass. In the top panel of Fig. <ref>, we show fixed values of d_NNγ (d_NNγ^12) ranging from 10^-5 to 10^-9 GeV^-1 in the corresponding plane. Additionally, the coloured areas correspond to the probed regions in Fig. <ref>, marginalising the dependence on m_N_2. Direct collider searches exclude the low mass region of m_E (see discussion in Section <ref>). The bottom panels in Fig. <ref> show active-to-sterile magnetic moments ranging from 10^-7 to 10^-13 GeV^-1. Here, the pink and blue areas cover the probed regions illustrated in Fig. <ref> for the 30 event contours assuming C_ NNB^(5)= 10^-5 GeV^-1 (which correspond to 3 event contours assuming C_ NNB^(5)=3× 10^-6 GeV^-1), again marginalising the dependence on m_N_1. In addition to the collider constraints, we show bounds from cLFV processes (see discussion in Section <ref>) for two different flavour scenarios. The left plot assumes μ-τ couplings for the vector-like lepton (Y_E^μ=Y_E^τ≫ Y_E^e≈ 0 and f_μτ≠ 0), while the right plot assumes couplings only to τ (Y_E^τ≫ Y_E^e,μ≈ 0 and f_eτ = f_μτ≠ 0). As we have fixed C_NNB^(5), there is an upper limit on m_E above which the couplings of the sterile-to-sterile magnetic moment enter the non-perturbative regime. Masses above the grey dashed line lead to h,h' >√(4π). In Fig. <ref>, we display in three different panels the sensitivity projections from Figs. <ref> and <ref>. These are shown in the planes m_E vs. m_N_2 for B1 (top panel) and m_E vs. m_N_1 for B2 (bottom panels). We overlay current constraints from direct collider searches and cLFV processes as before. In the bottom panels, we also show the non-perturbative regime h,h'>√(4π). Additionally, the region corresponding to m_E < m_N_2 = 700 GeV compromises the EFT validity. As can be seen from the top plot of Fig. <ref>, for scenario B1 (where the decay of sterile neutrinos is dominated by the sterile-to-sterile transition magnetic moment) LLP searches can be sensitive to vector-like lepton masses as high as several TeV for sterile neutrino masses less than 10 GeV. However, the cLFV processes do not place additional constraints on this scenario because the couplings entering the cLFV rates are independent of the sterile-to-sterile transition magnetic moment. In contrast, for scenario B2 (where the decay of sterile neutrinos is dominated by the active-to-sterile transition magnetic moment), the complementarity between LLP searches at the LHC and the constraints from cLFV becomes evident, as can be seen from the bottom two plots of Figs. <ref> and <ref>. 
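The two matching relations above are easy to evaluate; the Python sketch below (ours; real couplings, α ≈ 1/137 and v = 246 GeV are assumptions for illustration) implements them under m_E = m_ϕ:

import math

E_CHARGE = math.sqrt(4.0 * math.pi / 137.036)  # e = sqrt(4 pi alpha)
V_EW = 246.0                                   # electroweak vev in GeV (assumed)

def d_nn_gamma(m_E, h1p, h2, h1, h2p):
    # d_NNgamma^{12} = e/(128 pi^2 m_E) (h'_1 h_2 - h_1 h'_2), real couplings
    return E_CHARGE / (128.0 * math.pi**2 * m_E) * (h1p * h2 - h1 * h2p)

def d_nuN_gamma(m_E, f_ab, Y_E, h_ip):
    # d_nuNgamma^{alpha i} = e v / (32 sqrt(2) pi^2 m_E^2) * f_{alpha beta} Y_E^beta h'_i
    return E_CHARGE * V_EW / (32.0 * math.sqrt(2.0) * math.pi**2 * m_E**2) * f_ab * Y_E * h_ip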
In this scenario, for a lepton flavour universal coupling of the vector-like lepton (Y_E^e=Y_E^μ=Y_E^τ), the stringent constraints from the current measurements of cLFV processes involving the first two generations of the charged leptons (combined with the perturbativity of Yukawa couplings) exclude the whole model parameter space. On the other hand, for the lepton flavour non-universal case, we have viable parameter space to be explored by LLP searches with non-pointing photons. If the vector-like lepton couples mainly to μ and τ with similar strength (with negligible coupling to the first-generation charged lepton, Y_E^μ=Y_E^τ≈ f_μτ≫ Y_E^e≈ 0), then the viable parameter space of the model in the vector-like lepton mass spans the range (0.30-2.3) TeV, with the lower limit coming from the current constraint from τ→μγ and the current measurement of LFU in tau decays. The upper limit corresponds to the perturbativity of the Yukawa couplings. If the vector-like lepton couples only to τ (with negligible coupling to the first two generations of charged leptons, Y_E^τ≈ f_eτ = f_μτ≫ Y_E^e,μ≈ 0), then the viable parameter space of the model in the vector-like lepton mass spans the range (0.8-2.3) TeV, with the lower limit coming from the current constraints from the cLFV process μ→ eγ and the upper limit from the perturbativity of the Yukawa couplings.

§ CONCLUSIONS

In this work, we have examined long-lived particle (LLP) searches at the LHC, using non-pointing photons to investigate sterile-to-sterile and active-to-sterile transition magnetic dipole moments. We considered two Majorana sterile neutrinos with masses ranging from a few GeV to several hundred GeV with sizeable sterile-to-sterile and active-to-sterile transition magnetic moments, in addition to the usual active-sterile neutrino mixing. We first discussed the operators relevant to neutrino magnetic moments within an EFT framework; specifically, N_RSMEFT and its low-energy counterpart, N_RLEFT. Subsequently, we considered as an example a simplified UV-complete model, illustrating the emergence of substantial transition magnetic moments between the heavy sterile neutrinos at the loop level. We discussed relevant constraints on this model from direct collider searches and charged lepton flavour violating processes. For the phenomenological analysis of sterile neutrino magnetic moments, we began by taking the EFT approach. We considered the impact of two independent N_RSMEFT operators, 𝒪_NNB^(5) and 𝒪_NB^(6), together with the active-sterile neutrino mixing. We identified nine possible scenarios based on the dominant production and decay modes at the LHC. Of these nine possibilities, we explored the three physically realisable cases in detail. In these, the sterile-to-sterile magnetic moment dominates the production cross-section of sterile neutrinos at the LHC. Since there are no charged tracks from the primary vertex for the scenarios of interest, we have put forward a search strategy employing the transverse impact parameter of non-pointing photons, which does not rely on the location of the primary vertex. We presented detailed sensitivities for the three physically realisable scenarios at the high-luminosity LHC.
Our numerical simulations indicate that for sterile neutrinos decaying primarily to photons, either via sterile-to-sterile (scenario B1) or active-to-sterile magnetic dipole moments (scenario B2), searches for LLPs with non-pointing photons can probe unexplored regions of the parameter space for sterile neutrino masses ranging from several GeV to several hundred GeV. The high-luminosity LHC will improve constraints on the sterile-to-sterile neutrino transition magnetic moment from LEP searches by more than an order of magnitude for the sub-ten-GeV sterile neutrino mass regime. On the other hand, for scenario B2, where much more energetic photons are produced, the active-to-sterile neutrino transition dipole moment can be probed to much lower values with respect to existing constraints. In this case, an unprecedented reach for sterile neutrino masses up to hundreds of GeV is expected. For the scenario where the lighter sterile neutrino decays dominantly through mixing, our results reveal that LLP searches using displaced vertices could probe active-sterile neutrino mixing values much lower than those in the minimal scenario (where mixing controls both the production cross-sections and sterile neutrino lifetimes). Applying the results from the model-independent EFT approach to our realistic simplified model example, we found that the synergy between LLP searches using non-pointing photons at the LHC and the constraints from cLFV processes can potentially probe and distinguish different flavour structures of new physics couplings. For the scenario where the decay of sterile neutrinos is dominated by the sterile-to-sterile transition magnetic moment, LLP searches will be sensitive to vector-like lepton masses as high as several TeV. However, such a scenario remains largely unconstrained from the cLFV processes owing to the freedom of choice for some of the couplings entering the cLFV rates, which are independent of the sterile-to-sterile transition magnetic moment. In contrast, for the scenario where the decay of sterile neutrinos is dominated by the active-to-sterile transition magnetic moment, the complementarity of LLP searches at the LHC and the constraints from cLFV can play a pivotal role in understanding the flavour structure of the model parameter space. For a lepton flavour universal coupling of the vector-like lepton, the stringent constraints from the current measurements of cLFV processes involving the first two generations of the charged leptons already exclude the whole viable model parameter space. On the other hand, if the vector-like lepton couples mainly to μ and/or τ, then future cLFV measurements will also provide excellent synergy with the LLP searches at the LHC using non-pointing photons.

Acknowledgements

R. B. is supported by the grants ACIF/2021/052 and CIBEFP/2022/62 (Generalitat Valenciana). P. D. B. is supported by the Slovenian Research Agency under the research core funding No. P1-0035 and in part by the research grants N1-0253 and J1-4389. P. D. B. also acknowledges support from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860881-HIDDeN. F. F. D. acknowledges support from the UK Science and Technology Facilities Council (STFC) via the Consolidated Grants ST/P00072X/1 and ST/T000880/1. C. H. is supported by the Generalitat Valenciana under Plan Gen-T via CDEIGENT grant No. CIDEIG/2022/16. R. B. also acknowledges partial support from CDEIGENT grant No. CIDEIG/2022/16. R. B., C. H., and M. H.
acknowledge partial support from the Spanish grants PID2020-113775GBI00 (AEI/10.13039/501100011033) and Prometeo CIPROM/2021/054 (Generalitat Valenciana).
Splittings of toric ideals of graphs

Anargyros Katsabekis (corresponding author), Department of Mathematics, University of Ioannina, Ioannina 45110, Greece, katsampekis@uoi.gr
Apostolos Thoma, Department of Mathematics, University of Ioannina, Ioannina 45110, Greece, athoma@uoi.gr

Mathematics Subject Classification: 13F65, 14M25, 05C25, 05E40.

Abstract. Let G be a simple graph on the vertex set {v_1,…,v_n}. An algebraic object attached to G is the toric ideal I_G. We say that I_G is splittable if there exist subgraphs G_1 and G_2 of G such that I_G=I_G_1+I_G_2, where neither I_G_1 nor I_G_2 is equal to I_G. We show that I_G is splittable if and only if it is edge splittable. We also prove that the toric ideal of a complete bipartite graph is not splittable. In contrast, we show that the toric ideal of a complete graph K_n is always splittable when n ≥ 4. Additionally, we show that the toric ideal of K_n has a minimal splitting if and only if 4 ≤ n ≤ 5. Finally, we prove that any minimal splitting of I_G is also a reduced splitting.

§ INTRODUCTION

Let A={a_1,…,a_m} be a subset of ℕ^n such that A does not contain the zero vector. Consider the polynomial ring K[x_1,…,x_m] over any field K. We grade K[x_1,…,x_m] by the semigroup ℕA={l_1 a_1+⋯+l_m a_m | l_1,…,l_m∈ℕ} setting deg_A(x_i)=a_i for i=1,…,m. The A-degree of a monomial x^u=x_1^u_1⋯ x_m^u_m is defined by deg_A(x^u)=u_1 a_1+⋯+u_m a_m∈ℕA. The A-fiber of a vector b∈ℕ^n is the set of all monomials in K[x_1,…,x_m] with A-degree equal to b. The toric ideal I_A is the prime ideal generated by all the binomials x^u-x^v such that deg_A(x^u)=deg_A(x^v). A recent direction in the theory of toric ideals is to decide when I_A is splittable, see <cit.>. The toric ideal I_A is splittable if it has a toric splitting, i.e. if there exist toric ideals I_A_1, I_A_2 such that I_A=I_A_1+I_A_2 and I_A_i≠I_A, for all 1 ≤ i ≤ 2. To every simple graph G one can associate the toric ideal I_G. In <cit.> G. Favacchio, J. Hofscheier, G. Keiper, and A. Van Tuyl consider the aforementioned problem for the toric ideal I_G, namely they study when I_G has a toric splitting of the form I_G=I_G_1+I_G_2, where I_G_i, 1 ≤ i ≤ 2, is the toric ideal of G_i. More precisely, given a graph G and an even cycle C, they consider the graph H which is formed by identifying any edge of G with an edge of C. They show <cit.> that I_H=I_G+I_C is a splitting of I_H. They also prove <cit.> that if G_1 and G_2 form a splitting of G along an edge e and at least one of G_1 or G_2 is bipartite, then I_G=I_G_1+I_G_2 is a splitting of I_G. Moreover, when I_G has such a splitting, they show <cit.> how the graded Betti numbers of I_G are related to those of I_G_1 and I_G_2. P. Gimenez and H. Srinivasan showed <cit.> that if G_1 and G_2 form a splitting of G along an edge e, then I_G splits into I_G=I_G_1+I_G_2 if and only if at least one of G_1 or G_2 is bipartite. This paper aims to answer <cit.>, namely: for what graphs G can we find graphs G_1 and G_2 so that their respective toric ideals satisfy I_G=I_G_1+I_G_2? More generally, can we classify when I_G is a splittable toric ideal in terms of G? We give a complete answer to the latter question. Our approach is based on the graphs G∖e and G_S^e introduced in section 2, where e is an edge of G and S is a minimal system of binomial generators of I_G.
We show that I_G is splittable if and only if there is an edge e of G and a minimal generating set of binomials S of I_G such that I_G=I_G_S^e+I_G∖e is a splitting, see Theorem <ref>. As an application of our results, we prove that the toric ideal of a complete bipartite graph is not splittable (see Corollary <ref>) and that the toric ideal of the suspension of a cycle of length n ≥ 3 is splittable if and only if either n=4 or n is odd, see Theorem <ref>. In section 3 we study the case that G coincides with the complete graph K_n on n vertices. We show that I_K_n is splittable if and only if n ≥ 4, see Theorem <ref>. Moreover, we introduce minimal splittings and show (Theorem <ref>) that I_K_n does not have a minimal splitting for n ≥ 6. In section 4 we define reduced splittings of toric ideals and prove that every minimal splitting of I_G is also reduced, see Theorem <ref>.

§ EDGE SPLITTINGS

In this section, we first collect important notations and definitions used in the paper. For unexplained terminology in graph theory, we refer to <cit.>. Let G be a finite, connected and undirected graph having no loops and no multiple edges on the vertex set V(G)={v_1,…,v_n}, and let E(G)={e_1,…,e_m} be the set of edges of G. Two edges of G are called adjacent if they share a common vertex. To each edge e={v_i,v_j}∈ E(G), we associate the vector a_e∈{0,1}^n defined as follows: the ith entry is 1, the jth entry is 1, and the remaining entries are zero. By I_G we denote the toric ideal I_A_G in K[e_1,…,e_m], where A_G={a_e | e ∈ E(G)}⊂ℕ^n. A walk of length q of G connecting v_1∈ V(G) with v_q+1 is a finite sequence of the form w=({v_1,v_2}, {v_2,v_3},…,{v_q-1,v_q}, {v_q,v_q+1}) with each {v_i,v_i+1}∈ E(G), 1 ≤ i ≤ q. An even (respectively, odd) walk is a walk of even (respectively, odd) length. The walk w is called closed if v_q+1=v_1. A cycle is a closed walk w=({v_1,v_2}, {v_2,v_3},…,{v_q-1,v_q}, {v_q,v_1}) with q ≥ 3 and v_i≠ v_j, for every 1 ≤ i<j ≤ q. A cut vertex is a vertex of G whose removal increases the number of connected components of the remaining subgraph. A graph is called biconnected if it is connected and does not contain a cut vertex. A block is a maximal biconnected subgraph of G. Given an even closed walk w = (e_i_1, e_i_2, …, e_i_2q) of G, we write B_w for the binomial B_w=∏_k=1^q e_i_2k-1-∏_k=1^q e_i_2k∈ I_G. By <cit.> the ideal I_G is generated by all the binomials B_w, where w is an even closed walk of G. We say that w is a primitive walk if the corresponding binomial B_w is primitive. Recall that given a set of vectors A ⊂ℕ^n, the binomial x^u-x^v in I_A is called primitive if there exists no other binomial x^w-x^z∈ I_A such that x^w divides x^u and x^z divides x^v. Every minimal binomial generator of I_A is primitive, see <cit.>. Every even primitive walk w = (e_i_1, e_i_2, …, e_i_2q) partitions the set of edges of w in the two sets E_1={e_i_j | j odd} and E_2={e_i_j | j even}. The edges of E_1 are called odd edges of w and those of E_2 even. A sink of a block B is a common vertex of two odd or two even edges of the primitive walk w which belongs to B. The primitive walk w is called strongly primitive if it has no two sinks with distance one in any cyclic block. Let w = (e_1, e_2, …, e_2q) be an even primitive walk and f={v_i,v_j} be a chord of w with i<j, namely an edge f={v_i,v_j} of G joining two vertices of w such that f ∉ E(w). Then f breaks w in two walks: w_1=(e_1,…,e_i-1,f, e_j,…,e_2q) and w_2=(e_i,…,e_j-1,f).
The chord f is called a bridge of w if there exist two different blocks B_i, B_j of w such that v_i∈ B_i and v_j∈ B_j. The chord f is called even (respectively odd) if it is not a bridge and breaks w in two even walks (respectively odd). Let f={v_i,v_j} be an odd chord of w with i<j and f'={v_k,v_l} be another odd chord of w with k<l. We say that the odd chords f and f' cross effectively in w if k-i is odd and either i<k<j<l or k<i<l<j. We call an F_4 of the walk w a cycle (e,f,e',f') of length four which consists of two edges e, e' of the walk w both odd or both even, and two odd chords f and f' which cross effectively in w. We say that the odd chords f, f' cross strongly effectively in w if they cross effectively and they do not form an F_4 in w. A binomial B_w∈ I_G is called minimal if it belongs to a minimal system of binomial generators of I_G. Since I_G is homogeneous, the graded version of Nakayama's Lemma implies that every minimal system of generators of I_G has the same cardinality. The next theorem provides a characterization of the minimal binomials of I_G. (<cit.>) Let w be an even closed walk of G. Then B_w is a minimal binomial if and only if (1) w is strongly primitive, (2) all the chords of w are odd and there are not two of them which cross strongly effectively and (3) no odd chord crosses an F_4 of the walk w. A binomial B_w∈ I_G is called indispensable if every system of binomial generators of I_G contains B_w or -B_w. The next theorem provides a characterization of the indispensable binomials of I_G. (<cit.>) Let w be an even closed walk of G. Then B_w is an indispensable binomial if and only if w is a strongly primitive walk, all the chords of w are odd and there are not two of them that cross effectively. It follows from Theorems <ref>, <ref> that a minimal binomial B_w can fail to be indispensable only due to the existence of F_4's in the walk w. Moreover, a minimal binomial B_w of I_G is indispensable if and only if the walk w does not have any F_4. Note that there may exist a subgraph H of the graph G such that I_H=I_G. This can happen when there are edges in G that are not used in any walk w such that B_w is a minimal binomial of I_G. Given an edge e of G, we denote by G∖e the graph with the same vertex set as G and whose edge set consists of all edges of G except e. For any set F ⊂ E(G) of edges of G, we use G∖F to denote the subgraph of G containing the same vertices as G but with all the edges of F removed. Let S={B_w_1, B_w_2, …, B_w_r} be a minimal generating set of I_G. Given an edge e of G, we define G_S^e to be the subgraph of G on the vertex set V(G_S^e)=⋃_{1 ≤ i ≤ r, e∈ E(w_i)} V(w_i) with edges E(G_S^e)=⋃_{1 ≤ i ≤ r, e∈ E(w_i)} E(w_i). Thus to form the graph G_S^e one needs first to find all binomials B_w_i∈ S, 1 ≤ i ≤ r, such that e is an edge of the walk w_i. Then we take all vertices and edges of such walks. We use the symbol G_S^e to emphasize that the graph depends not only on the edge e but also on the minimal system of binomial generators S, see Example <ref>. For a toric ideal I_G with a unique minimal system of binomial generators S, we will simply write G^e instead of G_S^e. Recall that the complete graph K_n is the graph with n vertices in which each vertex is connected to every other vertex. Let G=K_4 be the complete graph on the vertex set {v_1,…,v_4}. Let w_1=(ϵ_12, ϵ_23, ϵ_34, ϵ_14), w_2=(ϵ_12, ϵ_24, ϵ_34, ϵ_13) and w_3=(ϵ_23, ϵ_13, ϵ_14, ϵ_24), where ϵ_ij={v_i,v_j} for 1 ≤ i<j ≤ 4.
Then S={B_w_1=ϵ_12ϵ_34-ϵ_23ϵ_14, B_w_2=ϵ_12ϵ_34-ϵ_24ϵ_13} and T={B_w_1, B_w_3=ϵ_23ϵ_14-ϵ_13ϵ_24} are minimal generating sets of I_G. Let e=ϵ_23; then G_S^e is the cycle given by the walk w_1, while G_T^e is the whole graph G. Let G be a graph and e be an edge of G. Let S={B_w_1, B_w_2, …, B_w_r} be a minimal generating set of I_G. Then I_G=I_G_S^e+I_G∖e. Proof. We have that G∖e⊂ G and G_S^e⊂ G, so I_G∖e⊂ I_G and I_G_S^e⊂ I_G. Thus I_G_S^e+I_G∖e⊂ I_G. Let B_w_i, 1 ≤ i ≤ r, be a minimal binomial in S. Then there are two cases: * e∈ E(w_i). Then B_w_i belongs to the ideal I_G_S^e. * e ∉E(w_i). Then B_w_i belongs to the ideal I_G∖e. Consequently I_G⊂ I_G_S^e+I_G∖e. □ We return to Example <ref>. We have that I_G_S^e=<B_w_1> and I_G_T^e=I_G. Also G∖e is the graph with edges ϵ_12, ϵ_34, ϵ_14, ϵ_13, ϵ_24, thus I_G∖e=<B_w_2>. Then I_G=I_G_S^e+I_G∖e is a splitting of I_G, while I_G=I_G_T^e+I_G∖e is not a splitting of I_G since I_G_T^e=I_G. A splitting I_G=I_G_1+I_G_2 of I_G is called an edge splitting if there exist an edge e of G and a minimal generating set S of I_G such that G_1=G_S^e and G_2=G∖e or G_1=G∖e and G_2=G_S^e. The toric ideal I_G is called edge splittable if there exists an edge splitting of I_G. It is not necessary for all splittings (if any) of the toric ideal of a graph to be edge splittings. Consider the complete graph K_5 on the vertex set {v_1,…,v_5}. Let ϵ_ij={v_i,v_j} for 1 ≤ i<j ≤ 5. Consider the subgraphs G_1=K_5∖{ϵ_12, ϵ_34} and G_2=K_5∖{ϵ_14, ϵ_23} of K_5. We have that S={ϵ_13ϵ_24-ϵ_14ϵ_23, ϵ_14ϵ_25-ϵ_15ϵ_24, ϵ_23ϵ_45-ϵ_24ϵ_35, ϵ_13ϵ_25-ϵ_15ϵ_23, ϵ_13ϵ_45-ϵ_14ϵ_35} is a generating set of I_G_1 and T={ϵ_12ϵ_34-ϵ_13ϵ_24, ϵ_24ϵ_35-ϵ_25ϵ_34, ϵ_13ϵ_45-ϵ_15ϵ_34, ϵ_12ϵ_45-ϵ_15ϵ_24, ϵ_12ϵ_35-ϵ_13ϵ_25} is a generating set of I_G_2. Also, S ∪ T is a generating set of I_K_5 and I_G_i≠ I_K_5 for all 1 ≤ i ≤ 2, so I_K_5=I_G_1+I_G_2 is a splitting of I_K_5 which is not an edge splitting. The next two theorems provide a necessary and sufficient condition for a toric ideal I_G to be splittable in terms of the graph G. The ideal I_G is edge splittable if and only if there is a minimal system of binomial generators S={B_w_1,…,B_w_r} of I_G with r ≥ 2 and an edge e∈ E(w_i), for some 1≤ i≤ r, such that I_G_S^e≠I_G. Proof. (⟹) Suppose that the ideal I_G is edge splittable. Then there exist a minimal system of binomial generators S={B_w_1,…,B_w_r} of I_G with r ≥ 2 and an edge e of G such that I_G=I_G_S^e+I_G∖e is a splitting. Thus I_G∖e≠I_G, so there is B_w_i∈ I_G, 1 ≤ i ≤ r, such that B_w_i∉ I_G∖e and therefore e∈ E(w_i). (⟸) Suppose that there is a minimal system of binomial generators S={B_w_1,…,B_w_r} of I_G with r ≥ 2 and an edge e∈ E(w_i), for some 1≤ i≤ r, such that I_G_S^e≠I_G. Then from Theorem <ref> we have that I_G=I_G_S^e+I_G∖e. Since e∈ E(w_i), we have that B_w_i∉I_G∖e and therefore I_G∖e≠I_G. From the hypothesis, it holds that I_G_S^e≠I_G. Consequently, I_G is edge splittable. □ Let P_n be the graph with 2n+1 vertices: the n vertices of a regular n-gon, the n midpoints of the edges of the polygon and the center of the polygon. The graph P_n has 3n edges: the n radii of the inscribed circle in the polygon corresponding to the midpoints of the edges of the polygon and the 2n segments joining the vertices of the polygon to the two adjacent midpoints. Consider the graph G_n which is the Cartesian product of the graph P_n with a single edge. Thus G_n has 4n+2 vertices and 8n+1 edges. In Figure <ref> we plot the graph G_8 with 34 vertices and 65 edges.
Since G_n is bipartite, we have, from <cit.>, that I_G_n has a unique minimal system of generators consisting of all binomials B_w, where w is an even cycle with no chords. The graph G_n has exactly 5n+2 cycles without a chord and all of them are of length 4, except two of length 2n. Note that G_8 has exactly 42 cycles with no chords, see Figure <ref>. Let e be the edge of G_n joining the two centers of the polygons. We plot in Figure <ref> the graph G_8∖e and in Figure <ref> the graph G_8^e. Note that the toric ideal of the graph G_8∖e has 62 minimal binomials, while the toric ideal of G_8^e has 8 minimal binomials. In general, I_G_n∖e has 4n+\binom{n}{2}+2 minimal binomials, while the ideal I_G_n^e has n minimal binomials. Notice that the \binom{n}{2} minimal binomials of I_G_n∖e correspond to cycles in G_n∖e of length 6 that in the graph G_n had e as an even chord. It follows from Theorem <ref> that I_G_n=I_G_n^e+I_G_n∖e is an edge splitting. Actually, for each edge b of G_n, there is a splitting I_G_n=I_G_n^b+I_G_n∖b of I_G_n. Next we state and prove the main result of this article, namely that if the toric ideal of a graph has a splitting then it also has an edge splitting. The toric ideal I_G is splittable if and only if it is edge splittable. Proof. (⟸) If I_G is edge splittable, then I_G=I_G_S^e+I_G∖e is a splitting of I_G, and therefore it is splittable. (⟹) Suppose that I_G is splittable and let I_G=I_G_1+I_G_2 be a splitting of I_G. Notice that I_G_1⫋ I_G and I_G_2⫋ I_G. Let {f_1,…,f_s} be a binomial generating set of I_G_1 and {g_1,…,g_t} be a binomial generating set of I_G_2. Then {f_1,…,f_s, g_1,…,g_t} is a generating set of I_G, therefore it contains a minimal binomial generating set S={B_w_1, …,B_w_r} of I_G, since toric ideals of graphs are homogeneous. Note that r ≥ 2, since I_G_1 and I_G_2 are nonzero ideals. But S ⊂{f_1,…,f_s, g_1,…,g_t}, so each B_w_i, 1 ≤ i ≤ r, belongs to at least one of I_G_1, I_G_2. If for every 1 ≤ i ≤ r it holds that B_w_i∈ I_G_2, then I_G_2=I_G, a contradiction. Thus there exists 1 ≤ i ≤ r such that B_w_i∈ I_G_1 and B_w_i∉ I_G_2, so there is e ∈ E(w_i) such that e ∉ E(G_2). For every even closed walk w_j such that e∈ E(w_j) we have that B_w_j∈ I_G_1, thus I_G_S^e⊂ I_G_1 and therefore I_G_S^e≠ I_G since I_G_1⫋ I_G. By Theorem <ref> the toric ideal I_G is edge splittable. □ Consider the graph G on the vertex set {v_1,…,v_10} with edges e_1={v_1,v_2}, e_2={v_2,v_3}, e_3={v_3,v_4}, e_4={v_4,v_5}, e_5={v_5,v_6}, e_6={v_6,v_7}, e_7={v_7,v_8}, e_8={v_8,v_9}, e_9={v_9,v_10}, e_10={v_1,v_10}, e_11={v_1,v_5}, e_12={v_2,v_6} and e_13={v_6,v_8}. Then S={e_1e_5-e_11e_12, e_1e_9e_13-e_8e_10e_12, e_5e_8e_10-e_9e_11e_13, e_2e_4e_6e_13-e_3e_5e_7e_12, e_2e_4e_6e_8e_10-e_3e_7e_9e_11e_12} is a minimal generating set of I_G. For the edge f=e_1 of G we have that I_G∖f=<e_5e_8e_10-e_9e_11e_13, e_2e_4e_6e_13-e_3e_5e_7e_12, e_2e_4e_6e_8e_10-e_3e_7e_9e_11e_12>. Moreover E(G_S^f)={e_1, e_5, e_8, e_9, e_10, e_11, e_12, e_13} and also I_G_S^f=<e_1e_5-e_11e_12, e_1e_9e_13-e_8e_10e_12, e_5e_8e_10-e_9e_11e_13>. Thus I_G ≠ I_G_S^f and therefore I_G=I_G_S^f+I_G∖f is a splitting of I_G by Theorem <ref>. In <cit.> P. Gimenez and H. Srinivasan provide an example of a graph G obtained by gluing two bow ties G_1 and G_2 along an edge. Since neither G_1 nor G_2 is bipartite, I_G_1+I_G_2 is not a splitting of I_G by <cit.>. The ideal I_G has a unique minimal system of generators S consisting of five binomials and I_G_S^e=I_G, for every edge e of G.
By Theorem <ref>, the ideal I_G is not edge splittable, and therefore it is not splittable by Theorem <ref>. Thus I_G does not have a splitting of the form I_H_1+I_H_2 for any subgraphs H_1, H_2 of G. It is worth mentioning that if a graph G is a gluing of two arbitrary disjoint connected graphs G_1 and G_2, bipartite or not, along an edge, then from <cit.> there exists a 3-uniform hypergraph H such that I_H=I_G_1+I_G_2. Theorems <ref> and <ref> are easier to apply when I_G has a unique minimal system of binomial generators. In particular, this is true for the toric ideal of a bipartite graph, since it is minimally generated by all binomials of the form B_w, where w is an even cycle with no chords, see <cit.>. A graph G is called a complete bipartite graph if its vertex set can be partitioned into two subsets V_1 and V_2 such that each edge of G connects a vertex of V_1 to a vertex of V_2. It is denoted by K_m,n, where m and n are the numbers of vertices in V_1 and V_2 respectively. The next corollary shows that toric ideals of complete bipartite graphs do not admit a splitting. The toric ideal of K_m,n is not splittable. Proof. Let V_1={x_1,…,x_m}, V_2={y_1,…,y_n} be the bipartition of the complete bipartite graph K_m,n and E(K_m,n)={b_ij | 1 ≤ i ≤ m, 1≤ j ≤ n}, where b_ij={x_i,y_j}. Then I_K_m,n is minimally generated by the 2 × 2 minors of the matrix M=(b_ij), see <cit.>. Thus I_K_m,n is minimally generated by the set S of all binomials b_ijb_kl-b_ilb_kj which are in the form B_w, where w is a cycle in K_m,n of length 4. Since K_m,n is bipartite, the set S is the unique minimal system of binomial generators of I_K_m,n. Notice that if m=1 or n=1, then I_K_m,n={0}. Moreover if m=2 and n=2, then I_K_m,n is minimally generated by b_11b_22-b_12b_21 and therefore it is not splittable. Assume that m ≥ 2, n ≥ 2 and I_K_m,n≠ I_K_2,2. Let e=b_ij be any edge of K_m,n. We claim that K_m,n^e=K_m,n, which implies the equality I_K_m,n^e=I_K_m,n. By definition K_m,n^e is a subgraph of K_m,n. It suffices to show that all edges of K_m,n belong also to the graph K_m,n^e. Let ξ=b_kl∈ E(K_m,n). There are four cases: * k=i and l=j. Then ξ=e, which belongs to K_m,n^e. * k=i and l≠j. Let 1 ≤ i' ≤ m with i'≠ i. Consider the cycle w=(e, ξ, b_i'l, b_i'j); then B_w∈ S and therefore ξ belongs to K_m,n^e. * k≠i and l=j. Let 1 ≤ j' ≤ n with j'≠ j. Consider the cycle w=(e, ξ, b_kj', b_ij'); then B_w∈ S and therefore ξ belongs to K_m,n^e. * k≠i and l≠j. Consider the cycle w=(e, b_il, ξ, b_kj); then B_w∈ S and therefore ξ belongs to K_m,n^e. We conclude that in all cases ξ belongs to K_m,n^e, so K_m,n^e=K_m,n. From Theorem <ref> it follows that I_K_m,n is not edge splittable, and therefore I_K_m,n is not splittable by Theorem <ref>. □ The suspension Ĝ of a graph G is the graph obtained from G by adding a new vertex adjacent to all vertices of G. Given a cycle C_n of length n ≥ 3, the next theorem determines when the toric ideal I_Ĉ_n is splittable. Let C_n be a cycle of length n ≥ 3 and Ĉ_n be the suspension of C_n. * Suppose that n is even. Then I_Ĉ_n is splittable if and only if n=4. * If n is odd, then I_Ĉ_n is splittable. Proof. Let C_n=({v_1,v_2}, {v_2,v_3},…, {v_n-1,v_n}, {v_1,v_n}) and G=Ĉ_n be the suspension of C_n obtained by adding a new vertex v_n+1 such that {v_i,v_n+1} is an edge of G, for every 1 ≤ i ≤ n. (1) Suppose that n ≥ 4 is even. From <cit.> there is a bipartite graph H such that I_G=I_H, thus I_G has a unique minimal system of generators. We distinguish the following cases. (a) n=4.
Let C_4=(ϵ_12={v_1,v_2}, ϵ_23={v_2,v_3}, ϵ_34={v_3,v_4}, ϵ_14={v_1,v_4}) and ϵ_i5={v_i,v_5} for 1 ≤ i ≤ 4. Then S={ϵ_12ϵ_45-ϵ_14ϵ_25, ϵ_12ϵ_35-ϵ_23ϵ_15, ϵ_12ϵ_34-ϵ_23ϵ_14, ϵ_23ϵ_45-ϵ_34ϵ_25, ϵ_15ϵ_34-ϵ_14ϵ_35} is a minimal generating set of I_G. Let e=ϵ_15; then E(G^e)={ϵ_12, ϵ_14, ϵ_15, ϵ_23, ϵ_34, ϵ_35}. Notice that ϵ_12ϵ_45-ϵ_14ϵ_25∈ I_G and ϵ_12ϵ_45-ϵ_14ϵ_25∉ I_G^e, so I_G^e≠ I_G. Also e is an edge of the cycle w=(ϵ_12, ϵ_23, ϵ_35, ϵ_15). Thus I_G=I_G^e+I_G∖e is a splitting of I_G by Theorem <ref>. (b) n>4. Let e be an edge of G; then there are two cases: (i) e is an edge of the cycle C_n; for the sake of simplicity let e=ϵ_12. We will show that G=G^e. Since C_n is a cycle with no chords, B_C_n is a minimal binomial of I_G by Theorem <ref>, and therefore E(C_n) ⊂ E(G^e). Consider now the odd cycles w_1=(e,{v_2,v_n+1},{v_n+1,v_1}) and w_2=({v_i,v_i+1},{v_i+1,v_n+1}, {v_n+1,v_i}), where 4 ≤ i ≤ n-2, which share only one vertex, namely v_n+1. The even closed walk w=(w_1,w_2) has no chords and no bridges, and therefore B_w is a minimal binomial of I_G by Theorem <ref>. Thus {v_i,v_n+1}∈ E(G^e) for i=1,2,4,5,…,n-1. Consider now the cycle γ=({v_n+1,v_1}, e, {v_2,v_3}, {v_3,v_n+1}) and notice that γ has exactly one odd chord, namely {v_2,v_n+1}, so from Theorem <ref> B_γ is a minimal binomial of I_G. Thus {v_3,v_n+1}∈ E(G^e). Consider the cycle δ=({v_n+1,v_2}, e, {v_1,v_n}, {v_n,v_n+1}) and notice that δ has exactly one odd chord, namely {v_1,v_n+1}. By Theorem <ref> B_δ is a minimal binomial of I_G. Thus {v_n,v_n+1}∈ E(G^e) and therefore E(G)=E(G^e). Consequently G=G^e, so from Theorems <ref>, <ref> the ideal I_G is not splittable. (ii) e={v_i,v_n+1} with 1 ≤ i ≤ n. For the sake of simplicity we let i=1, namely e={v_1,v_n+1}. Consider the odd cycles w_1=(e, {v_2,v_n+1},{v_1,v_2}) and w_2=({v_i,v_i+1},{v_i+1,v_n+1}, {v_n+1,v_i}), where 4 ≤ i ≤ n-2, which share only one vertex, namely v_n+1. The even closed walk w=(w_1, w_2) has no chords and no bridges, so B_w is a minimal binomial of I_G by Theorem <ref>. Thus {v_i,v_n+1}∈ E(G^e) for i=2,4,5,…,n-1. Moreover {v_i,v_i+1}∈ E(G^e) for i=1,4,5,…,n-2. Consider now the cycle γ=(e, {v_1,v_2}, {v_2,v_3}, {v_3,v_n+1}) and notice that it has exactly one odd chord, namely {v_2,v_n+1}. By Theorem <ref> B_γ is a minimal binomial of I_G. Thus {v_2,v_3}∈ E(G^e) and also {v_3,v_n+1}∈ E(G^e). Consider the cycle δ=({v_2,v_n+1}, e, {v_1,v_n}, {v_n,v_n+1}) and notice that δ has exactly one odd chord, namely {v_1,v_2}. By Theorem <ref> B_δ is a minimal binomial of I_G. Thus {v_1,v_n}∈ E(G^e) and also {v_n,v_n+1}∈ E(G^e). Furthermore μ=({v_n-1,v_n}, {v_n,v_1}, e, {v_n-1,v_n+1}) is an even cycle with exactly one odd chord, namely {v_n,v_n+1}. By Theorem <ref> B_μ is a minimal binomial of I_G, and therefore {v_n-1, v_n} is an edge of G^e. Consider the odd cycles ζ_1=(e, {v_1,v_n},{v_n,v_n+1}), ζ_2=({v_3,v_4}, {v_4,v_n+1},{v_3,v_n+1}) and let ζ=(ζ_1,ζ_2). Then B_ζ is a minimal binomial of I_G by Theorem <ref>, since ζ does not have any chords or bridges, and therefore {v_3,v_4}∈ E(G^e). Thus E(G)=E(G^e), so G=G^e and therefore I_G is not splittable by Theorems <ref>, <ref>. (2) Suppose that n is odd. Then any odd cycle of G either coincides with C_n or contains the vertex v_n+1 together with at least two vertices of C_n. Thus G has no two vertex-disjoint odd cycles. For n=3 we have that Ĉ_3 is the complete graph on the vertex set {v_1,…,v_4}, so I_Ĉ_3 is splittable by Example <ref>. Suppose that n ≥ 5. Consider a minimal binomial B_w∈ I_G.
For the walk w there are two cases: * w is a cycle of length 4 with exactly one odd chord. Then B_w is an indispensable binomial of I_G by Theorem <ref>. * w has no chords or bridges and it is of the form w=(w_1,w_2), where w_1, w_2 are odd cycles of length 3 intersecting in exactly one vertex, namely v_n+1. By Theorem <ref> the binomial B_w is indispensable. Thus I_G has a unique minimal system of generators. Let e={v_1, v_2}; then there is no cycle of length 4 in G containing the edges e and ϵ_34={v_3,v_4}. Consider the odd cycles γ_1=(e,{v_2,v_n+1},{v_1,v_n+1}) and γ_2=(ϵ_34,{v_4,v_n+1},{v_3,v_n+1}) intersecting in v_n+1. Let γ=(γ_1,γ_2); then B_γ is not a minimal binomial of I_G since there is a bridge {v_2,v_3}. Thus the edge ϵ_34 does not belong to E(G^e). Consider the even cycle ζ=({v_2,v_3},{v_3,v_4}, {v_4,v_n+1}, {v_2,v_n+1}); then B_ζ is a minimal binomial of I_G and also B_ζ∉ I_G^e. Thus I_G≠ I_G^e and therefore I_G is splittable by Theorem <ref>. □

§ THE COMPLETE GRAPH AND MINIMAL SPLITTINGS

In this section, we study the special case of toric ideals of complete graphs. In contrast to toric ideals of (complete) bipartite graphs, which always have a unique minimal set of binomial generators, toric ideals of complete graphs have a huge number of different minimal systems of binomial generators. Let n ≥ 4 be an integer and K_n be the complete graph on the vertex set {v_1,…,v_n} with edges {ϵ_ij | 1 ≤ i<j ≤ n}, where ϵ_ij={v_i, v_j}. By <cit.> the set T={ϵ_ijϵ_kl-ϵ_ilϵ_jk, ϵ_ikϵ_jl-ϵ_ilϵ_jk | 1≤ i<j<k<l ≤ n} is a minimal generating set of I_K_n. Let {e_1,…,e_n} be the canonical basis of ℝ^n. Since T is a minimal generating set of I_K_n, the only A_K_n-fibers contributing to minimal generators are those consisting of all monomials with A_K_n-degree e_i+e_j+e_k+e_l, where 1≤ i<j<k<l ≤ n. There are \binom{n}{4} such fibers and each one consists of three monomials, namely ϵ_ijϵ_kl, ϵ_ilϵ_jk and ϵ_ikϵ_jl, which have no common factor other than 1. Therefore to generate the ideal I_K_n we need to take any two of the three binomials ϵ_ijϵ_kl-ϵ_ilϵ_jk, ϵ_ikϵ_jl-ϵ_ilϵ_jk, ϵ_ijϵ_kl-ϵ_ikϵ_jl, for every 1≤ i<j<k<l ≤ n, see <cit.> for more details. The monomials ϵ_ijϵ_kl, ϵ_ilϵ_jk and ϵ_ikϵ_jl, where 1 ≤ i<j<k<l ≤ n, are indispensable monomials, namely each one is a monomial term of at least one binomial in every minimal system of binomial generators of I_K_n. Thus every minimal system of binomial generators of I_K_n consists of 2\binom{n}{4} binomials. By <cit.> the ideal I_K_n has 3^{\binom{n}{4}} different minimal systems of binomial generators, which is a huge number even for small n. The toric ideal of K_n is splittable if and only if n≥ 4. Proof. For n∈{1, 2, 3} we have that I_K_n={0}, so I_K_n is not splittable. Suppose that n≥ 4. From the analysis above we deduce that S={ϵ_ijϵ_kl-ϵ_ikϵ_jl, ϵ_ilϵ_jk-ϵ_ikϵ_jl | 1≤ i<j<k<l ≤ n} is a minimal generating set for I_K_n. The set S has a nice geometric interpretation, see <cit.>. Let G=K_n and e=ϵ_12. First we show that the graph G_S^e contains all edges of G, except perhaps the edges ϵ_1n and ϵ_23. Let ϵ_ij be an edge of G where {i,j}∩{1,2}=∅ and 3≤ i<j≤ n. Since {v_1,v_2,v_i,v_j} is a set of vertices of K_n, the binomials ϵ_12ϵ_ij-ϵ_1iϵ_2j and ϵ_2iϵ_1j-ϵ_1iϵ_2j belong to S, thus ϵ_ij is an edge of G_S^e. From the above binomials we also deduce that ϵ_1i, for 3≤ i ≤ n-1, and ϵ_2j, for 4≤ j≤ n, are edges of G_S^e. With the possible exception of the edges ϵ_1n and ϵ_23, the graph G_S^e contains all edges of G.
Suppose that ϵ_1n is an edge of G_S^e; then there exists a binomial B ∈ S in four variables which contains the variables ϵ_12 and ϵ_1n. Then B=ϵ_12ϵ_in-ϵ_1nϵ_2i, where 3 ≤ i ≤ n-1. But {v_1,v_2,v_i,v_n} is a set of vertices of K_n which contributes the binomials ϵ_12ϵ_in-ϵ_1iϵ_2n and ϵ_1nϵ_2i-ϵ_1iϵ_2n to S. Thus B does not belong to S, a contradiction. Consequently ϵ_1n is not an edge of G_S^e. Similar arguments show that ϵ_23 is not an edge of G_S^e. Since ϵ_12ϵ_3n-ϵ_1nϵ_23∈ I_G and ϵ_12ϵ_3n-ϵ_1nϵ_23∉I_G_S^e, we get I_G_S^e≠I_G and therefore I_G=I_G_S^e+I_G∖e is a splitting of I_G by Theorem <ref>. □ Let I_G=I_G_1+I_G_2 be a splitting of I_G, S={f_1,…, f_r} be a minimal system of binomial generators of I_G_1 and T={g_1,…, g_t} be a minimal system of binomial generators of I_G_2. We say that the splitting I_G=I_G_1+I_G_2 is a minimal splitting of I_G if {f_1,…, f_r, g_1,…, g_t} is a minimal system of generators of I_G. (1) The property of being a minimal splitting does not depend on the minimal systems of generators chosen in the definition. Suppose that {f'_1,…, f'_r} is a minimal system of binomial generators of I_G_1 and {g'_1,…, g'_t} is a minimal system of binomial generators of I_G_2. But I_G=I_G_1+I_G_2, so {f'_1,…, f'_r, g'_1,…, g'_t} is a generating set of I_G consisting of r+t elements and therefore it is a minimal system of generators. (2) Let I_G=I_G_1+I_G_2 be a minimal splitting of I_G. For any f_i ∈ S, 1 ≤ i ≤ r, we have that neither f_i nor -f_i belongs to T. For any g_j ∈ T, 1 ≤ j ≤ t, we have that neither g_j nor -g_j belongs to S. (3) All the splittings which appeared in <cit.> are minimal splittings. (4) The splitting of I_G_n in Example <ref> is not minimal, since the ideal I_G_n∖e has a minimal system of generators with 4n+\binom{n}{2}+2 binomials, while the ideal I_G_n^e has a minimal system of generators with n binomials and the ideal I_G_n is generated minimally by 5n+2 binomials. Let K_4 be the complete graph on the vertex set {v_1,…,v_4}. Let G_1=K_4∖{ϵ_12, ϵ_34} and G_2=K_4∖{ϵ_14, ϵ_23} be subgraphs of K_4; then I_K_4=I_G_1+I_G_2 is a minimal splitting of I_K_4, since I_G_1=<ϵ_13ϵ_24-ϵ_14ϵ_23>, I_G_2=<ϵ_13ϵ_24-ϵ_12ϵ_34> and I_K_4=<ϵ_13ϵ_24-ϵ_14ϵ_23, ϵ_13ϵ_24-ϵ_12ϵ_34>. We will show that the toric ideal of K_n has no minimal splitting for n ≥ 6. Let n ≥ 4 be an integer and w=(a,b,c,d) be an even cycle of K_n. Then I_K_n=I_K_n∖{a,c}+I_K_n∖{b,d} is a splitting of I_K_n. Proof. Since K_n is the complete graph, the cycle w has two chords, namely e and f. Then ac-ef ∈ I_K_n does not belong to I_K_n∖{a,c}, thus I_K_n∖{a,c}≠I_K_n. Also I_K_n∖{b,d}≠I_K_n since ef-bd ∈ I_K_n does not belong to I_K_n∖{b,d}. It remains to show that I_K_n⊂ I_K_n∖{a,c}+I_K_n∖{b,d}. Let B_γ be a binomial belonging to the set S defined in the proof of Theorem <ref>. Clearly if B_γ belongs to I_K_n∖{a,c} or I_K_n∖{b,d}, then B_γ belongs to I_K_n∖{a,c}+I_K_n∖{b,d}. Suppose that B_γ does not belong to either I_K_n∖{a,c} or I_K_n∖{b,d}. Then γ contains at least one edge from the set {a,c}, say a, and at least one edge from the set {b,d}, say b. Let γ=(a,b,c',d') and e', f' be the chords of the walk γ. Notice that ac'-e'f' belongs to I_K_n∖{b,d} and bd'-e'f' belongs to I_K_n∖{a,c}. Thus B_γ=ac'-bd'=(ac'-e'f')-(bd'-e'f')∈ I_K_n∖{a,c}+I_K_n∖{b,d}. So I_K_n=I_K_n∖{a,c}+I_K_n∖{b,d} is a splitting. □ Let I_A=I_A_1+I_A_2 be a splitting of I_A. If there exists a set A'_1 such that I_A_1⊂ I_A'_1⫋ I_A, then I_A=I_A'_1+I_A_2 is also a splitting.
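The K_4 example above can be checked by machine. A minimal SymPy sketch (assuming SymPy is available; the variable names and the check are ours, not part of the original argument) computes I_K_4 by eliminating the vertex variables from <ϵ_ij - x_i x_j> and verifies that the two binomials generating I_G_1 and I_G_2 already generate it:

from sympy import symbols, groebner

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
e12, e13, e14, e23, e24, e34 = symbols('e12 e13 e14 e23 e24 e34')

# I_{K_4} is the kernel of e_ij -> x_i x_j: eliminate the x's with lex order
gens = [e12 - x1*x2, e13 - x1*x3, e14 - x1*x4,
        e23 - x2*x3, e24 - x2*x4, e34 - x3*x4]
G = groebner(gens, x1, x2, x3, x4, e12, e13, e14, e23, e24, e34, order='lex')
toric = [g for g in G.exprs if not g.has(x1, x2, x3, x4)]

# Generators of I_{G_1} and I_{G_2} for G_1 = K_4 minus {e12, e34},
# G_2 = K_4 minus {e14, e23}
b1 = e13*e24 - e14*e23   # generates I_{G_1}
b2 = e13*e24 - e12*e34   # generates I_{G_2}
H = groebner([b1, b2], e12, e13, e14, e23, e24, e34, order='lex')
print(all(H.contains(t) for t in toric))  # expect True: I_{K_4} = <b1> + <b2>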
By Proposition <ref>, I_K_n=I_K_n∖{a,c}+I_K_n∖{b,d} is a splitting of I_K_n, and therefore I_K_n=I_K_n∖ a+I_K_n∖ b, I_K_n=I_K_n∖{a,c}+I_K_n∖ b and I_K_n=I_K_n∖ a+I_K_n∖{b,d} are also splittings of I_K_n. Let n ≥ 4 be an integer and I_K_n=I_G_1+I_G_2 be a splitting of I_K_n. If e is an edge of K_n which does not belong to G_1 and f is an edge of K_n which does not belong to G_2, then the edges e, f are adjacent in K_n. Moreover, if h is another edge of K_n which does not belong to G_2, then the edges f, h are not adjacent in K_n. Proof. Suppose that the edges e, f are not adjacent. Let e={v_i,v_j}, f={v_k,v_l} and e ∩ f=∅. Let {g_1,…,g_r} be a system of binomial generators of I_G_1 and {h_1,…,h_s} be a system of binomial generators of I_G_2; then {g_1,…,g_r,h_1,…,h_s} is a generating set of I_K_n. Therefore we can find a minimal system of generators V of I_K_n formed by binomials belonging to either I_G_1 or I_G_2. From the introduction of section <ref> the monomial ef=ϵ_ijϵ_kl is indispensable of I_K_n, so it is a monomial term in a binomial B_w of I_G_1 or I_G_2. Since e is not an edge of G_1, we have that B_w∉ I_G_1. But f is not an edge of G_2, so B_w∉ I_G_2. Thus B_w does not belong to I_G_1 or I_G_2, a contradiction. Consequently, the edges e, f are adjacent. Let e=ϵ_ij and f=ϵ_jk. Suppose that h is another edge of K_n which does not belong to G_2. Since e does not belong to G_1, we have, from the first part of the proposition, that the edges e, h are adjacent. We claim that f and h are not adjacent. Suppose that f, h are adjacent. Since h is adjacent to both e and f, there are two cases for the edge h, namely either h=ϵ_ik or h=ϵ_jl for an index l different from i,j,k. * h=ϵ_ik. Let l be an index different from i,j,k. In a previous step we found that there is a minimal system of generators V formed by binomials belonging to either I_G_1 or I_G_2. Moreover, V must contain exactly two of the following three binomials ϵ_ijϵ_kl-ϵ_ilϵ_jk=eϵ_kl-ϵ_ilf, ϵ_ikϵ_jl-ϵ_ilϵ_jk=hϵ_jl-ϵ_ilf, ϵ_ijϵ_kl-ϵ_ikϵ_jl=eϵ_kl-hϵ_jl. But none of them belongs to I_G_2, since f,h ∉ E(G_2), while eϵ_kl-ϵ_ilf and eϵ_kl-hϵ_jl do not belong to I_G_1, since e ∉ E(G_1). A contradiction. * h=ϵ_jl. We argue similarly as above. The set V must contain two of the following three binomials eϵ_kl-ϵ_ilf, ϵ_ikh-ϵ_ilf, eϵ_kl-ϵ_ikh. But none of them belongs to I_G_2, since f,h ∉ E(G_2), while eϵ_kl-ϵ_ilf and eϵ_kl-ϵ_ikh do not belong to I_G_1, since e ∉ E(G_1). A contradiction. Consequently, the edges f, h are not adjacent. □ From Proposition <ref> we deduce that each of the graphs G_1, G_2 contains all edges of K_n except at most two. Suppose not, and let e=ϵ_ij∉ E(G_1), {f_1,f_2,f_3}⊂ E(K_n) such that f_l ∉ E(G_2), for every 1 ≤ l ≤ 3. By Proposition <ref>, each of the edges f_1, f_2, f_3 is adjacent to e=ϵ_ij, so each of f_1, f_2, f_3 contains v_i or v_j. Thus either v_i or v_j is a vertex of at least two of the edges f_1, f_2, f_3, and therefore two of the edges f_1, f_2, f_3 are adjacent, a contradiction to the second part of Proposition <ref>. Let n ≥ 4 be an integer and I_K_n=I_G_1+I_G_2 be a splitting of I_K_n. Then there exists a cycle w=(a,b,c,d) in K_n such that * G_1=K_n∖ a and G_2=K_n∖ b or * G_1=K_n∖{a,c} and G_2=K_n∖ b or * G_1=K_n∖ a and G_2=K_n∖{b,d} or * G_1=K_n∖{a,c} and G_2=K_n∖{b,d}. Proof. Since I_G_1⫋ I_K_n, there exists an edge a of K_n such that a is not an edge of G_1. Thus I_G_1⊂ I_K_n∖ a.
Since I_G_2⫋ I_K_n, there exists an edge b of K_n such that b is not an edge of G_2. Thus I_G_2⊂ I_K_n∖ b. By Proposition <ref>, the edges a,b are adjacent. We distinguish the following cases: * G_1=K_n∖ a and G_2=K_n∖ b. Since a,b are adjacent edges in the complete graph K_n with n ≥ 4 vertices, there exists a cycle w of length 4 in K_n with two consecutive edges a, b. * G_1 ⫋ K_n∖ a and G_2 = K_n∖ b. In this case, there exists an edge c of K_n∖ a such that c is not an edge of G_1. By Proposition <ref>, the edges a,c do not share a common vertex and they are adjacent to b. Furthermore, G_1 contains all edges of K_n except at most two. Thus G_1=K_n∖{a,c} and there exists a cycle w of length 4 in K_n with three consecutive edges a, b, c. * G_1 = K_n∖ a and G_2 ⫋ K_n∖ b. In this case, there exists an edge d of K_n∖ b such that d is not an edge of G_2. By Proposition <ref>, the edges b,d do not share a common vertex and they are adjacent to a. Moreover, G_2 contains all edges of K_n except at most two. Thus G_2=K_n∖{b,d} and there exists a cycle w of length 4 in K_n with three consecutive edges d, a, b. * G_1 ⫋ K_n∖ a and G_2 ⫋ K_n∖ b. Then there exists an edge c of K_n∖ a and an edge d of K_n∖ b such that c is not an edge of G_1 and d is not an edge of G_2. By Proposition <ref>, the edges a,c do not share a common vertex and they are adjacent to both b, d. Additionally, G_1 contains all edges of K_n except at most two. By the same proposition, the edges b,d do not share a common vertex and they are adjacent to both a, c. Furthermore, G_2 contains all edges of K_n except at most two. Thus w=(a,b,c,d) is a cycle in K_n, G_1=K_n∖{a,c} and G_2=K_n∖{b,d}. □ Let n ≥ 4 be an integer. Then I_K_n has a minimal splitting if and only if 4≤ n ≤ 5. Proof. Suppose first that n=4. Then I_K_4=I_K_4∖{ϵ_12, ϵ_34}+I_K_4∖{ϵ_14, ϵ_23} is a minimal splitting of I_K_4 by Example <ref>. Suppose now that n=5 and let {v_1,…,v_5} be the vertex set of K_5. Let G_1=K_5∖{{v_1,v_2}, {v_3,v_4}} and G_2=K_5∖{{v_1,v_4}, {v_2,v_3}} be subgraphs of K_5. Then S={ϵ_13ϵ_24-ϵ_14ϵ_23, ϵ_14ϵ_25-ϵ_15ϵ_24, ϵ_23ϵ_45-ϵ_24ϵ_35, ϵ_13ϵ_25-ϵ_15ϵ_23, ϵ_13ϵ_45-ϵ_14ϵ_35} is a minimal generating set of I_G_1 and T={ϵ_12ϵ_34-ϵ_13ϵ_24, ϵ_24ϵ_35-ϵ_25ϵ_34, ϵ_13ϵ_45-ϵ_15ϵ_34, ϵ_12ϵ_45-ϵ_15ϵ_24, ϵ_12ϵ_35-ϵ_13ϵ_25} is a minimal generating set of I_G_2. Also, for any binomial B ∈ S we have that neither B nor -B belongs to T, while for any binomial B' ∈ T we have that neither B' nor -B' belongs to S. Moreover, S ∪ T is a minimal generating set of I_K_5, and therefore I_K_5=I_G_1+I_G_2 is a minimal splitting of I_K_5. Finally, assume that n≥ 6 and let I_K_n=I_G_1+I_G_2 be a minimal splitting of I_K_n. Then there exists a cycle w=(a,b,c,d) of K_n such that G_1 and G_2 are of one of the four types of Proposition <ref>. Without loss of generality we can assume that a={v_1,v_2}, b={v_2,v_3}, c={v_3,v_4} and d={v_1,v_4}. Since n≥ 6, the graph K_n has at least two more vertices, say v_5 and v_6. Notice that the complete subgraph of K_n on the vertex set {v_1, v_3, v_5, v_6} is also a subgraph of both G_1 and G_2. Let S be a minimal system of binomial generators of I_G_1 and T be a minimal system of binomial generators of I_G_2. Both S and T must contain exactly two of the binomials ϵ_13ϵ_56-ϵ_15ϵ_36, ϵ_13ϵ_56-ϵ_16ϵ_35, ϵ_15ϵ_36-ϵ_16ϵ_35. Thus S and T have at least one minimal generator in common, which contradicts the fact that I_K_n=I_G_1+I_G_2 is a minimal splitting of I_K_n. Consequently, for n ≥ 6 the ideal I_K_n has no minimal splitting.
□ § REDUCED SPLITTINGS In this section, we introduce reduced splittings of toric ideals and show that every minimal splitting of the toric ideal of a graph is also a reduced splitting. We say that the splitting I_A=I_A_1+I_A_2 of I_A is reduced if for any toric ideals I_B_1⊂ I_A_1 and I_B_2⊂ I_A_2 with I_A=I_B_1+I_B_2 it holds that I_B_1=I_A_1 and I_B_2=I_A_2. A basic step towards determining all splittings of the toric ideal of a graph is to find its reduced splittings. All other splittings are found from a reduced splitting I_G=I_G_1+I_G_2 by adding edges to one of G_1, G_2 or both to get graphs G_1', G_2', as long as I_G=I_G_1'+I_G_2' is a splitting. The reduced splittings of I_K_n are those of the last type in Proposition <ref>, namely I_K_n=I_G_1+I_G_2 where G_1=K_n∖{a,c} and G_2=K_n∖{b,d}. The other two types in Proposition <ref> can be obtained from the reduced splittings by adding edges. If I_G is splittable, then it has at least one reduced splitting. Proof. Any splitting I_G=I_G_1'+I_G_2' of I_G is either reduced or there exists a splitting I_G=I_G_1+I_G_2 such that I_G_1⊂ I_G_1' and I_G_2⊂ I_G_2', where G_1 is a proper subgraph of G_1' or/and G_2 is a proper subgraph of G_2'. In the latter case, G_1 has fewer edges than G_1' or/and G_2 has fewer edges than G_2'. This procedure cannot be repeated indefinitely, since the number of edges of G is finite. □ To understand the structure of reduced splittings one has to first generalize the notion of edge splitting by replacing the edge with a set of edges. Let S={B_w_1, B_w_2, …, B_w_r} be a minimal system of binomial generators of I_G. Given a set F⊂ E(G), we define G_S^F=⋃_e∈ F G_S^e and get I_G=I_G_S^F+I_G∖ F, using arguments similar to those in the proof of Theorem <ref>. Of particular interest is the case that F is the set of all edges of G having a common vertex v. Given a vertex v of G, we let G-v be the subgraph of G obtained by deleting the vertex v. We denote by G_S^v the subgraph of G with edges E(G_S^v)=⋃_1 ≤ i ≤ r and v ∈ V(w_i) E(w_i). It holds that I_G=I_G_S^v+I_G-v. The next theorem asserts that the reduced splittings of I_G are always of the form I_G=I_G_S^F+I_G∖ F, for suitable sets S and F. Let I_G=I_G_1+I_G_2 be a reduced splitting of I_G. Then there exist a set F⊂ E(G) and a minimal system of binomial generators S of I_G such that I_G_1=I_G_S^F and I_G_2=I_G∖ F. Proof. Let I_G=I_G_1+I_G_2 be a reduced splitting of I_G and set F=G∖ G_2. Let {f_1,…,f_s} be a system of binomial generators of I_G_1, {g_1,…,g_t} be a system of binomial generators of I_G_2 and S={B_w_1, …,B_w_r}⊂{f_1,…,f_s,g_1,…,g_t} be a minimal system of binomial generators of I_G as in the proof of Theorem <ref>. Then G∖ F=G_2, so I_G∖ F=I_G_2, and I_G_S^F⊂ I_G_1 since I_G_S^e⊂ I_G_1 for each e∈ F from the proof of Theorem <ref>. But I_G=I_G_1+I_G_2 is a reduced splitting of I_G and also I_G=I_G_S^F+I_G_2, since I_G=I_G_S^F+I_G∖ F and I_G∖ F=I_G_2, with I_G_S^F⊂ I_G_1; therefore I_G_S^F=I_G_1. □ A reduced splitting I_G=I_G_1+I_G_2 can also be written in the form I_G=I_G∖ F+I_G_S^F where F=G∖ G_1, I_G_1=I_G∖ F and I_G_2=I_G_S^F. Let G be the bipartite graph consisting of four 4-cycles w_1, w_2, w_3, w_4 in a row, i.e., for i<j it holds that E(w_i)∩ E(w_j)=∅ except if j=i+1, in which case they have one edge in common. The ideal I_G has a unique minimal system of binomial generators consisting of the binomials B_w_1, B_w_2, B_w_3, B_w_4. Then there are 19 different splittings.
More precisely, there are four minimal and reduced splittings of the form I_G=<B_w_i>+<B_w_j, B_w_k, B_w_l>, where {i,j,k,l}={1,2,3,4}. Also, there are three minimal and reduced splittings of the form I_G=<B_w_i, B_w_j>+<B_w_k, B_w_l>. Finally, there are twelve non-minimal and non-reduced splittings of the form I_G=<B_w_i, B_w_j>+<B_w_j, B_w_k, B_w_l>. The next theorem asserts that minimal splittings are always reduced. Every minimal splitting of I_G is also a reduced splitting. Proof. Let I_G=I_G_1+I_G_2 be a minimal splitting which is not reduced. Then there exist I_G_1'⊂ I_G_1 and I_G_2'⊂ I_G_2 such that I_G=I_G_1'+I_G_2', where I_G_1' is a proper subset of I_G_1 or/and I_G_2' is a proper subset of I_G_2. Suppose, for instance, that I_G_1' is a proper subset of I_G_1. Let {B_w_1,…, B_w_s} and {B_w_s+1, …, B_w_l} be minimal systems of binomial generators of the ideals I_G_1 and I_G_2, respectively. Since I_G=I_G_1+I_G_2 is a minimal splitting, the ideal I_G is minimally generated by the set {B_w_1,…, B_w_s, B_w_s+1, …, B_w_l}. Let {f_1, …, f_t} be a system of binomial generators of I_G_1'; then, from the equality I_G=I_G_1'+I_G_2', we have that I_G=I_G_1'+I_G_2, since G_2'⊂ G_2. Thus there exists a set {B_w_1',…, B_w_s'}⊂{f_1, …, f_t} such that {B_w_1',…, B_w_s', B_w_s+1, …, B_w_l} is a minimal system of generators of I_G, since toric ideals of graphs are homogeneous and therefore any two minimal systems of generators have the same cardinality. Then, after reordering B_w_1',…, B_w_s' if necessary, we can assume that B_w_j'=B_w_j if B_w_j is indispensable, and that the binomials B_w_j', B_w_j are F_4-equivalent if B_w_j is dispensable (namely, not indispensable), since dispensability in toric ideals of graphs is only caused by F_4's. Recall that two primitive walks γ, γ' are F_4-equivalent if either γ=γ' or there exists a series of walks γ_1=γ, γ_2,…,γ_r-1, γ_r=γ' such that γ_i and γ_i+1 differ by an F_4, where 1 ≤ i ≤ r-1, see <cit.>. Since G_1' is a proper subgraph of G_1, there exists an edge e∈ E(w_i) ⊂ E(G_1) which is not in E(G_1'), and hence e∉ E(w_i'), for at least one index 1≤ i≤ s. Then B_w_i is dispensable, thus w_i, w_i' are F_4-equivalent and e belongs to a common F_4 of both w_i, w_i'. Suppose that the edges of the F_4 belonging to w_i are e,f and to w_i' are a,b, thus F_4=(e,a,f,b). Consider the binomial ef-ab ∈ I_G; we have that all of e,f,a,b are edges of G_1, since G_1' ⊂ G_1, and therefore ef-ab∈ I_G_1. We distinguish the following cases. * ef-ab is an indispensable binomial of I_G. The set {B_w_1',…, B_w_s', B_w_s+1, …, B_w_l} is a minimal system of generators of I_G and ef-ab is indispensable of I_G, so ef-ab is one of the binomials B_w_k', 1 ≤ k ≤ s, or B_w_k, s+1≤ k≤ l. But ef-ab∉ I_G_1' since e is not an edge of G_1', thus ef-ab=B_w_k for an index s+1≤ k≤ l. Since the binomial ef-ab is indispensable of I_G, the A_G-fiber of deg_A_G(ef) has only two elements, namely ef and ab, see <cit.>. Then the A_G_1-fiber of deg_A_G_1(ef) has at most two elements; in fact, it has exactly two with no common factor other than 1 since ef-ab∈ I_G_1. Thus ef-ab is indispensable of I_G_1, and therefore ef-ab=B_w_q for an index 1≤ q ≤ s, a contradiction to the hypothesis that I_G=I_G_1+I_G_2 is a minimal splitting. * ef-ab is not an indispensable binomial of I_G. Then there is a binomial ef-cd in I_G. In this case, the A_G-fiber of deg_A_G(ef) corresponds to a subgraph of G homomorphic to K_4 and it consists of exactly three monomials, namely ef, ab, and cd.
Every minimal system of generators of I_G should contain exactly two of the binomials ef-ab, ef-cd, ab-cd. There are two subcases. (i) Both edges c, d belong to G_1. Then all binomials ef-ab, ef-cd, ab-cd belong to I_G_1, so any minimal system of binomial generators of I_G_1 should contain exactly two of them. Since I_G=I_G_1+I_G_2 is a minimal splitting, the ideal I_G_2 cannot contain any of the above three binomials. But I_G=I_G_1'+I_G_2 is also a splitting of I_G and e is not an edge of G_1', therefore only ab-cd can be an element of I_G_1'. Then {B_w_1',…, B_w_s', B_w_s+1, …, B_w_l} is a minimal generating set of I_G which contains at most one of the binomials ef-ab, ef-cd, ab-cd, a contradiction. (ii) At least one of c, d does not belong to G_1. Then the binomial ef-ab is indispensable of I_G_1, since the A_G_1-fiber of deg_A_G_1(ef) has only two elements, namely ef and ab, with no common factor other than 1. Thus the set {B_w_1,…, B_w_s} contains the binomial ef-ab. The set {B_w_1,…, B_w_s, B_w_s+1, …, B_w_l} is a minimal system of generators of I_G, so {B_w_s+1, …, B_w_l} contains exactly one of the binomials ef-cd or ab-cd and does not contain ef-ab. But I_G=I_G_1'+I_G_2 is a splitting of I_G and none of the binomials ef-ab, ef-cd, ab-cd belongs to I_G_1', since e, and at least one of c, d, do not belong to E(G_1'). Thus the set {B_w_s+1, …, B_w_l} contains exactly two of the binomials ef-ab, ef-cd, ab-cd, a contradiction. In all cases we reach a contradiction, so every minimal splitting is reduced. □ The converse of Theorem <ref> is not true. By Theorem <ref>, I_K_n does not have a minimal splitting for n ≥ 6. But from Proposition <ref> any splitting I_K_n=I_K_n∖{a,c}+I_K_n∖{b,d} is reduced. In <cit.> G. Favacchio, J. Hofscheier, G. Keiper, and A. Van Tuyl pose the following question: Suppose that there exist graphs G, G_1 and G_2 such that I_G=I_G_1+I_G_2. How do the graded Betti numbers of I_G relate to those of I_G_1 and I_G_2? This seems a very difficult question, especially given the existence of examples like the edge splitting in Example <ref> or the results about the splittings of K_n in section <ref>. Coco CoCoATeam, CoCoA: A system for doing computations in commutative algebra, available at http://cocoa.dima.unige.it. ChKTh H. Charalambous, A. Katsabekis, A. Thoma, Minimal systems of binomial generators and the indispensable complex of a toric ideal, Proc. Amer. Math. Soc. 135 (2007), 3443-3451. DST J. A. De Loera, B. Sturmfels, R. Thomas, Gröbner bases and triangulations of the second hypersimplex, Combinatorica 15 (1995), 409-424. DSS M. Drton, B. Sturmfels, S. Sullivant, Lectures on algebraic statistics, Oberwolfach Seminars, vol. 39, Birkhäuser Verlag, Basel, 2009, viii+171 pp. FHKT G. Favacchio, J. Hofscheier, G. Keiper, A. Van Tuyl, Splittings of toric ideals, J. Algebra 574 (2021), 409–433. GS P. Gimenez, H. Srinivasan, Gluing and splitting of homogeneous toric ideals, arXiv:2402.17112v1 (2024). K G. Keiper, Toric Ideals of Finite Simple Graphs, PhD thesis, McMaster University, 2022. http://hdl.handle.net/11375/27878 HO H. Ohsugi, T. Hibi, Indispensable binomials of finite graphs, J. Algebra Appl. 4 (2005), 421–434. OH1 H. Ohsugi, T. Hibi, Centrally symmetric configurations of integer matrices, Nagoya Math. J. 216 (2014), 153-170. RTT E. Reyes, C. Tatakis, A. Thoma, Minimal generators of toric ideals of graphs, Adv. Appl. Math. 48 (2012), 64-78. St B. Sturmfels, Gröbner Bases and Convex Polytopes, University Lecture Series, vol. 8, AMS, Providence, RI, 1995. Vil1 R.
H. Villarreal, Normality of subrings generated by square free monomials, J. Pure Appl. Algebra 113 (1996), 91-106. Vil R. H. Villarreal, Monomial Algebras, 2nd edn. CRC Press, Boca Raton FL, 2015.
http://arxiv.org/abs/2405.09098v1
20240515051810
Analysis of Galaxies at the Extremes: A Kinematic Analysis of the Virgo Cluster Dwarfs VCC 9 and VCC 1448 using the Keck Cosmic Web Imager
[ "Jonah S. Gannon", "Duncan A. Forbes", "Aaron J. Romanowsky", "Jean P. Brodie", "Lydia Haacke", "Anna Ferré-Mateu", "Shany Danieli", "Pieter van Dokkum", "Maria Luisa Buzzo", "Warrick J. Couch", "Zili Shen" ]
astro-ph.GA
[ "astro-ph.GA" ]
We present spatially resolved Keck Cosmic Web Imager (KCWI) stellar spectroscopy of the Virgo cluster dwarf galaxies VCC 9 and VCC 1448. These galaxies have similar stellar masses and large half-light radii but very different globular cluster (GC) system richness (∼25 vs. ∼99 GCs). Using the KCWI data, we spectroscopically confirm 10 GCs associated with VCC 1448 and one GC associated with VCC 9. We make two measurements of dynamical mass for VCC 1448 based on the stellar and GC velocities, respectively. VCC 1448's mass measurements suggest that it resides in a halo in better agreement with the expectation of the stellar mass – halo mass relationship than the expectation from its large GC counts. For VCC 9, the dynamical mass we measure agrees with the expected halo mass from both relationships. We compare VCC 1448 and VCC 9 to the GC-rich galaxy Dragonfly 44 (∼74 GCs), which is similar in size but has ∼1 dex less stellar mass than either Virgo galaxy. In dynamical mass – GC number space, Dragonfly 44 and VCC 1448 exhibit richer GC systems at a given dynamical mass than VCC 9 and other `normal' galaxies. We also place the galaxies in kinematics–ellipticity space, finding evidence of an anticorrelation between rotational support and the fraction of a galaxy's stellar mass in its GC system: i.e., VCC 9 is more rotationally supported than VCC 1448, which is more rotationally supported than Dragonfly 44. This trend may be expected if a galaxy's GC content depends on its natal gas properties at formation. galaxies: dwarf – galaxies: fundamental parameters – galaxies: clusters: Virgo – galaxies: kinematics and dynamics – galaxies: individual: VCC 1448 § INTRODUCTION Large, low surface brightness (LSB) galaxies in the dwarf-like regime have been studied for their peculiarities since at least the 1950s (see e.g., ). For example, large-sized, low-surface brightness cluster galaxies were studied by <cit.> and it was argued they must be dark matter dominated in order to survive in the cluster environment <cit.>. However, it is only recently that widespread interest in the study of these extreme galaxies has been renewed, prompted by a sub-set of LSB galaxies, the so-called “ultra-diffuse” galaxies <cit.>. It soon became apparent that many are also extreme in their globular cluster (GC) content <cit.>. These galaxies may also help us differentiate between different prescriptions of dark matter (see e.g., ), understand star formation at the extremes (see e.g., ), and understand the first epochs of galaxy and GC formation <cit.>. There have been various attempts to explain the formation of such large-sized dwarf galaxies. These attempts have focused on formation via internal (e.g., star formation feedback ; or high-halo spin ) or external (e.g., tidal heating ; tidal stripping ; or galaxy collisions/mergers ) physical processes. Combinations of both are also possible (e.g., ). While it is clear that large dwarf galaxies likely have multiple pathways to their formation (e.g., ; ; , ), understanding the individual pathways is still of interest. In particular, it has been shown that measuring the dynamics of these galaxies can be key to differentiating which formation pathways are viable (). Furthermore, the large variety of GC numbers that the galaxies host is suggestive of dual formation pathways <cit.>.
Specifically, assuming that these galaxies follow the GC-number – halo mass relation <cit.>, the wide range of observed GC-richness implies a large spread in the total dark matter halo masses in which they reside. In turn, this would imply a wide range of formation pathways. Crucially, this is only true if the galaxies follow the same GC number – halo mass relationship as normal galaxies. Building a sample of large, GC-rich dwarf galaxies with independent halo mass measurements (e.g., ) is vital to test this assumption <cit.>. Beyond independent halo mass measurements, resolved kinematics of normal galaxies have aided our understanding of their formation (c.f., the review of ). For example, the build-up of stellar mass in a galaxy is thought to leave imprints in their resolved kinematics. Galaxies built via the accretion of high angular momentum gas are expected to rotate faster than galaxies built through randomised mergers, which tend to be more dispersion-supported in their kinematics. However, some orientations of major mergers are known to lead to higher angular momentum/rotation (e.g., ). These differences have led multiple authors to classify galaxies as either “slow rotators" or “fast rotators" based on metrics relying on the ratios of rotation to dispersion support in their central kinematics (e.g., ). Historically, kinematically resolved studies of galaxies have mainly relied on the use of long-slit spectroscopy, but recent developments in integral field spectroscopy have allowed far more complete galaxy samples to be compiled. See for example, the ATLAS-3D project <cit.>, the CALIFA project <cit.>, the SAMI project <cit.>, the MANGA project <cit.> or the LEWIS project <cit.>. Here, we target two large Virgo cluster dwarf galaxies – VCC 1448 (IC 3475; R_ e≈ 3.5 kpc and M_⋆≈2.6×10^9 M_⊙) and VCC 9 (IC 3019; R_ e≈ 3 kpc and M_⋆≈2.5×10^9 M_⊙) using resolved kinematics from the Keck Cosmic Web Imager (KCWI; ), an integral field unit on the Keck II telescope. VCC 1448 is prototypical for a large-LSB dwarf, with <cit.> first cataloguing it in a collection of large, LSB objects in the Virgo cluster as “IC 3475 type objects". VCC 9 is of comparable size and stellar mass. A key point of difference between the two galaxies is their GC content, and hence expected total halo mass. VCC 1448 is particularly rich in GCs with ground-based imaging suggesting a total system of 99.3±17.6 GCs <cit.>, while VCC 9 is more representative of a galaxy with its luminosity, hosting 25.7±6.4 GCs <cit.>. Likewise, we will also contrast these galaxies with the Coma Cluster UDG Dragonfly 44. Dragonfly 44 is of similar size and GC richness (N_ GC=74±18, ) to VCC 1448, however has an order of magnitude less stellar mass (M_⋆≈3×10^8 M_⊙). Dragonfly 44 has been proposed to be prototypical for a “failed galaxy” - a galaxy that quenched catastrophically before forming the total stellar mass expected for its dark matter halo <cit.>. Currently, Dragonfly 44 is the only “failed galaxy” UDG with resolved kinematics measured. However, many other examples of UDGs with similar GC systems exist (e.g., NGC 5846-UDG1; ) making elucidating their formation vital. This paper aims to focus on the comparative kinematic differences between VCC 9 and VCC 1448. Furthermore, given the known trends of luminosity/half-light radius and GC-richness (see e.g., ), we also aim to understand the underlying galaxy structure that may induce a different GC formation efficiency at fixed size/luminosity. 
We undertake this work as part of the Analysis of Galaxies at the Extremes (AGATE) project. In this project, we seek to better understand extreme galaxies with a particular emphasis on those galaxies that are at the size or GC-richness extremes for their stellar mass. In Section <ref> we provide further details of the two targets. In Section <ref> we outline our KCWI data, describing their observation and reduction and presenting the results of our analysis. We discuss these results in the context of other Virgo cluster dwarf galaxies and contrast VCC 1448 with Dragonfly 44 in Section <ref>. Finally, we present our primary conclusions in Section <ref>. § TARGETS In this work, we study two Virgo cluster galaxies, VCC 9 and VCC 1448, and make comparisons to the Coma Cluster UDG Dragonfly 44. It is worth noting that while VCC 1448 has already had some of its GC kinematics studied by <cit.>, our work is the first time its stellar kinematics have been studied. We contextualise our choice to study these galaxies in Figures <ref> and <ref>. In Figure <ref> we plot the catalogue described in <cit.> and <cit.> in half-light radius – absolute magnitude space, with both galaxies highlighted. The region commonly assigned to the so-called UDGs <cit.> is highlighted in orange. VCC 9 and VCC 1448 are amongst the largest dwarf galaxies in the catalogue for their luminosity. Indeed, VCC 1448 has been previously identified by numerous authors as being an outlier to large sizes for its luminosity <cit.>. While neither fits the UDG definition, it is likely both fit definitions based on galaxies being outliers in size – stellar mass space, such as the “ultra-puffy galaxy” definition proposed by <cit.>. Dragonfly 44 is also extremely large for its luminosity and resides in the region corresponding to UDGs. In Figure <ref> we plot GC counts vs. absolute magnitude. We use these two parameters as observational proxies for the stellar mass – halo mass relationship by converting luminosity into stellar mass assuming a stellar mass to light ratio (M_⋆/L_V) of 2 (see e.g., ) and converting GC numbers into a halo mass (M_Halo) using the <cit.> relationship. The GC counts for VCC 9 (25.7 ± 6.4; ) are typical for a galaxy at its luminosity. However, the GC estimate for VCC 1448 (99.3±17.6; ) is elevated for its luminosity. Based on an average Milky Way GC mass of ∼2×10^5 M_⊙ (; 2010 revision), the GC system represents ∼0.76% of its total stellar mass. For VCC 9 the GC system is only 0.21% of its stellar mass. As can be seen in Figure <ref>, Dragonfly 44 has a similar size to VCC 1448 but is 1.7 magnitudes fainter (-15.7 vs. -17.4 mag.). Estimates of its GC-richness have Dragonfly 44 as extremely GC-rich for its luminosity (Figure <ref>; although see and for a discussion of Dragonfly 44's GC richness). Due to this high number of GCs and comparatively low luminosity, Dragonfly 44 and other GC-rich UDGs have been thought of as “failed galaxies” or “pure stellar haloes” <cit.>. Given the similarly large size and rich GC system, the primary difference between VCC 1448 and Dragonfly 44 is the ∼1 dex difference in stellar mass. We therefore wish to test if both reside in similar dark matter halos to probe for an evolutionary link, i.e., we wish to test if Dragonfly 44 could plausibly be a passively evolved version of VCC 1448 that began on a similar evolutionary path or, equivalently, if VCC 1448 may be what Dragonfly 44 could have evolved into without `failing'.
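The proxy conversions behind Figure <ref> are simple enough to sketch directly. In the Python snippet below, the mean GC mass of 2×10^5 M_⊙ is the value quoted above, while the ∼5×10^9 M_⊙-per-GC halo normalisation is our reading of the halo masses quoted later in the text rather than a value stated explicitly here; with these inputs the snippet reproduces the ∼0.76% and ∼0.21% GC mass fractions.

# Illustrative proxy calculations for Figure <ref>. The per-GC halo
# normalisation is an assumption (our reading of the halo masses quoted
# later in the text), not a value stated explicitly by the paper.
M_GC_MEAN = 2e5        # mean Milky Way GC mass [M_sun] (quoted above)
M_HALO_PER_GC = 5e9    # assumed GC number -> halo mass scaling [M_sun per GC]

galaxies = {
    # name: (stellar mass [M_sun], number of GCs), values from the text
    "VCC 9":        (2.5e9, 25.7),
    "VCC 1448":     (2.6e9, 99.3),
    "Dragonfly 44": (3.0e8, 74.0),
}

for name, (m_star, n_gc) in galaxies.items():
    m_gcs = n_gc * M_GC_MEAN           # total mass in the GC system
    m_halo = n_gc * M_HALO_PER_GC      # implied total halo mass
    print(f"{name:>12}: GC mass fraction = {100 * m_gcs / m_star:.2f}%, "
          f"implied M_halo = {m_halo:.2e} M_sun")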
In Table <ref> we provide a basic summary of the relevant parameters for each galaxy. For VCC 1448 the values for rows 1–7 are taken from <cit.> and <cit.>. The GC counts in row 8 are from <cit.>. Rows 9–16 are from this work. For VCC 9 the values for rows 1–4 and 6 are taken from the catalogue of <cit.> and <cit.>. Row 7, the GC counts, are from <cit.>. All other values are from either <cit.> and <cit.> or are derived here based on their work. In particular, we do not use their stated dynamical mass for VCC 9 but instead re-derive their dynamical mass in Section <ref>. For Dragonfly 44 rows 1–7 are taken from <cit.> and <cit.>. Row 8 is calculated in this work. The remaining rows are from <cit.>. § KECK COSMIC WEB IMAGER DATA The KCWI data used in this work were observed on the night 2022, March 26 (N046, PI:Romanowsky). Conditions were clear with 1.1” seeing. KCWI was configured using the medium slicer (16” × 20” field of view), BM grating and a central wavelength of 5075Å. The spectral coverage of this configuration is 4650 - 5503Å. The instrumental resolution was measured from the arc calibration files and found to be R=5100 at 5075Å (σ_ inst = 24.9 km s^-1). A total exposure time of 3600s was spent targeting a central pointing of VCC 9 (see Figure <ref>) with 3600s on a nearby sky. Note that the sky position for VCC 9 is outside of the area plotted in Figure <ref>. Three pointings were made targeting VCC 1448 along its major axis as indicated in Figure <ref>. Total exposure times were 4800s, 3600s and 4800s for pointings 1, 2 and 3 respectively. In addition to these data a further 3600s of exposure were observed at each of the 2 sky pointings as indicated in Figure <ref>. Both sky locations were chosen to include a compact source that was a candidate GC of VCC 1448. We supplement these observations with two pointings of KCWI observed on the night 2021, April 16 as part of program Y228 (PI: van Dokkum). Conditions were clear with 0.7” seeing. KCWI was configured using the large slicer (32” × 20” field of view), BH3 grating and a central wavelength of 5080Å. The spectral coverage of this configuration is 4824-5315Å. Instrumental resolution in this configuration is approximately σ_ inst = 25 km s^-1 at 5000Å. A total exposure time of 3600s was spent targeting each pointing of VCC 1448 as indicated by A and B in Figure <ref>. The primary purpose of these additional KCWI observations was to recover the recessional velocities of GC candidates around VCC 1448. All data were reduced using the standard KCWI data reduction pipeline <cit.>. Following the reduction, data cubes were trimmed to their good spatial and spectral wavelength ranges and then processed and sky subtracted as below. §.§ VCC 9 Kinematic Measurements From Galaxy Light We took the output standard star calibrated, non-sky subtracted `ocubes' of VCC 9 from the data reduction pipeline and collapsed them over both spatial directions into a single, non-sky-subtracted spectrum. At the distance of the Virgo cluster the KCWI medium slicer corresponds to a rectangular region of roughly 0.25 R_ e within VCC 9 as seen in Figure <ref>. A library of 9 sky spectra was then created using the offset sky observations around VCC 9 and VCC 1448 by collapsing the entire offset sky exposures over both spatial directions, excluding any compact sources in the data. This sky library, along with a template for galaxy emission, was input into the sky subtraction routine described in <cit.> to sky subtract our object data. 
Using this routine the data were modelled as a linear combination of our 9 sky principal components along with an old, metal poor template for galaxy emission from the library of <cit.>. After modelling, the sky portion of the model is subtracted to create a sky subtracted spectrum for each data cube. Barycentric corrections were then applied and each spectrum was median combined to produce the final science spectrum. We used the code <cit.> to measure a recessional velocity and velocity dispersion from the VCC 9 spectrum. Here we also followed the fitting method described in <cit.>. In brief, this involved fitting the sky-subtracted spectrum using the <cit.> synthetic stellar library in 241 different input parameter combinations to . This includes all combinations of 0-10th order additive and/or multiplicative polynomials along with 2 (i.e., pure Gaussian) and 4 (i.e., including h_3 and h_4) Gaussian moments. We do this to ensure that our final results are not dependent on our choice of input parameters to . We take our results as the median of the resulting parameter distributions with uncertainties as the 16th and 84th percentiles of these distributions. Using this process we measure a recessional velocity of 1690±0.4 , a velocity dispersion of 27±0.6  and a spectral signal-to-noise ratio of 101Å^-1 off of our spectrum. We note our uncertainties are drawn from the parameter distributions. There may be some concern that this method underestimates the true uncertainty in the results. In order to test this we took one of our spectra and created 100 random realisations of it based on uncertainties drawn from the residuals of its initial fit. These were then re-fitted with . The resulting parameter distributions for both recessional velocity and velocity dispersion are within the uncertainties derived by our primary fitting method. These uncertainties however do not take into account any systematic uncertainty that may be present such as possible template mismatches between our fitting library and the data and any wavelength calibration issues. We estimate possible systematic errors based on the pixel sampling of our data as 10% of the resolution of our data, i.e., 0.1c/R = 6 (see further ). Further to this we find our ability to characterise the instrumental resolution of KCWI has a ∼2% (∼0.5 ) uncertainty which may affect our calculated velocity dispersions. We incorporate these uncertainties in quadrature to the relevant measurements for the remainder of the paper. Our recessional velocity is within the uncertainties of those previously listed for VCC 9 (e.g., SDSS; ). Furthermore, this recessional velocity and velocity dispersion are in reasonable agreement with those listed from William Herschel Telescope spectroscopy of VCC 9 by <cit.>. We suggest the good agreement of our velocity dispersion to that from <cit.> allows us to adopt their rotation value for the remainder of the paper as we do not sample out to the half-light radius where we require this measurement for comparison in Section <ref>. The fitting process used to produce our value includes the higher order Gaussian moments h_3 and h_4 in half of the fits to ensure their inclusion does not drastically affect our results. These additional parameters do not significantly change the recessional velocity/velocity dispersion from those fits that do not include them. We display the resulting spectrum for VCC 9, along with an example best fitting fit, in Figure <ref> (top panel). 
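For concreteness, the multi-fit strategy described above can be sketched as follows. This is our illustrative reconstruction rather than the authors' code: `templates', `galaxy', `noise', `velscale' and `start' are assumed to have been prepared in the standard pPXF fashion (log-rebinned spectra with matched instrumental resolution), and the 11×11×2 grid below yields 242 combinations where the text quotes 241, so the exact enumeration evidently differs slightly.

import numpy as np
from ppxf.ppxf import ppxf  # Cappellari's pPXF package

def fit_grid(templates, galaxy, noise, velscale, start):
    """Fit one spectrum over a grid of pPXF input choices and summarise
    the resulting velocity and dispersion distributions."""
    sols = []
    for degree in range(11):        # additive polynomial order 0-10
        for mdegree in range(11):   # multiplicative polynomial order 0-10
            for moments in (2, 4):  # pure Gaussian, or including h3/h4
                pp = ppxf(templates, galaxy, noise, velscale, start,
                          degree=degree, mdegree=mdegree,
                          moments=moments, quiet=True)
                sols.append(pp.sol[:2])   # (V, sigma) in km/s
    sols = np.array(sols)
    med = np.median(sols, axis=0)                   # adopted values
    lo, hi = np.percentile(sols, [16, 84], axis=0)  # quoted uncertainties
    return med, lo, hi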
§.§ VCC 1448 Kinematic Profile Measurements From Galaxy Light We performed two separate extractions of spectra to create a radial profile for VCC 1448. In the first, we created a crude radial profile for VCC 1448 by spatially collapsing each of our three KCWI pointings along the spatial axes into a single spectrum. These are then sky subtracted, median combined and fitted using as described for VCC 9 in subsection <ref>. In each case, compact sources in the field of view are masked prior to spectral extraction. We display these three spectra, along with an example best fitting fit, in Figure <ref>. For our second extraction of a radial profile we desired higher spatial resolution (i.e., more radial bins). During early attempts to extract spectra in elliptical apertures on VCC 1448 we discovered the WCS coordinates available in KCWI reduced data cubes did not have the required accuracy for cube stacking. To correct for this inaccuracy we took the brightest point source in each data cube, fitted a Gaussian profile, measured its centre and reassigned the fits header WCS to this position. We took the centre of VCC 1448 to be at RA = 188.170009 deg. and Dec = +12.771085 deg. (both J2000), the galaxy's position angle to be 88 degrees and its ellipticity ϵ=0.19 <cit.>. We then used the galaxy centre, position angle and axis ratio to place elliptical apertures on the KCWI data using the routine <cit.>. The initial aperture had a semi-major radius of 3” with the 4 subsequent annuli stepping out by 3” in radius in each aperture. Following these initial 5 apertures, 8 more were added that each stepped out in 5” in radius each aperture. Apertures were limited to one side of the minor axis of the galaxy to ensure that they did not sample both the positive and negative side of the galaxy's rotation. Our apertures for extraction can be seen on sky in Figure <ref>. At our assumed 16.5 Mpc distance for Virgo our full profile reaches out to 4.4 kpc which corresponds to ∼ 1.25 R_ e. For each radial bin, spectra are extracted then sky subtracted and fitted using , as described for VCC 9 in subsection <ref>. Compact sources are masked in each aperture before the extraction of a spectrum. The final recessional velocity, velocity dispersion and signal-to-noise for each radial bin are visible in Figure <ref>. Measured values of recessional velocity and velocity dispersion for our first, crude extraction are in good agreement with those measured in similar radial regions of our second extraction. For the remainder of this paper, we will use velocity dispersions and recessional velocities from the second extraction. We note that while this extraction steps out along one side of the galaxy, we also sample out to small radii along the major axis in the other direction. We have also derived the results for the first 4 apertures available to step out in this direction. They are within the uncertainties of the results displayed in Figure <ref> for both velocity dispersion and rotation. However, for these data, the rotation was blue-shifted with respect to the systemic velocity of the galaxy along the line of sight. We measure the barycentric corrected recessional velocity of VCC 1448 to be 2280±6  (i.e., the value from the central bin of our second radial extraction). This value is within 2-σ uncertainties of the systemic velocity of the GC system reported by <cit.>. Our spectroscopic recessional velocity amends the reporting of VCC 1448 being at 2572±15   by <cit.>. 
We note the <cit.> measurement has been repeated (with small alterations) in various catalogues listing VCC 1448, however no other unique recessional velocity measurement exists for the galaxy that we are aware of. Given the high S/N of our various extracted spectra, their derived redshift seems unlikely. We suggest the cause of this difference is that their listed recessional velocity is from the HI thought to be associated with the galaxy <cit.>. It is instead likely this gas is no longer associated with VCC 1448, although it is possible it is in the process of being ram pressure stripped <cit.>. We use our recessional velocity when calculating projected rotation in the galaxy subject to additional corrections below. §.§ Azimuthal Correction We note that in coadding our spaxels we will be adding spaxels that do not align with the semi-major axis of VCC 1448 along which rotation is usually quoted. To correct for this issue we derive a correction for each annulus using a formula derived from equation 3 of <cit.>. This formula requires the systemic velocity of the galaxy (V_ Sys), the observed recessional velocity (V_ Obs), the position angle of the observed spaxel (PA_i), the position angle of the kinematic axis (PA_ kin = phot), the kinematic axis ratio (q_ kin=phot) and the position angle of the upper limit of the wedge of annulus observed (θ_ max). For both the kinematic position angle and axis ratio we assume it to be equivalent to the photometric axis ratios. The equation to calculate the rotational velocity (V_ Rot) then takes the form: V_ Rot = (V_ Obs - V_ Sys) ×∫_PA_ kin = phot^θ_ max 2 ×√(1+ ( tan( θ - PA_ kin = phot)/q_ kin=phot)^2) dθ where we calculate the maximum position angle in our wedge to evaluate it over as: θ_ max = PA_ kin = phot + arctan(0.64 kpc/(R_ min+ R_ max/2) kpc) which is derived by the geometry of our KCWI pointings at the distance of VCC 1448. In practice equation <ref> has little effect on our measured rotation at large radii, with most of the correction being evident in the central 4-5 apertures (∼ 0.33 R_ e). §.§ Inclination corrections In addition to our azimuthal correction, it is worth noting that VCC 1448 is not obviously edge-on. As such, any rotation measured is likely only a lower limit to the true rotation in the galaxy. Under the assumption of a thin disk, a lower limit on the inclination angle (0 degrees = face-on; 90 degrees = edge-on) may be calculated from the photometric ellipticity using i = cos^-1(1-ϵ). For VCC 1448 this equates to i=36 degrees. Using the standard sini inclination correction this implies that an assumption of viewing VCC 1448 edge on (i.e., V_ Measured = V_ Intrinsic) may underestimate the true rotation of the galaxy by a factor of up to 1.7×. Therefore, we will also include the results of the intrinsic rotation of VCC 1448 if we are observing it at inclination angles of 75, 60, 45 and 36 degrees (i.e., the lower limit) when assuming VCC 1448 is viewed edge-on throughout important sections of the remainder of the paper. In practice, none of these inclination corrections are large enough to alter the conclusions that will be drawn. After the inclusion of the inclination and azimuthal corrections, there is a flattening of the rotation curve in VCC 1448 at about 0.5 R_ e (see Fig. <ref>). However, the maximum rotational velocity in a galaxy is not usually reached until beyond 1 R_ e <cit.>. It is therefore possible that we have not yet reached the maximum rotational velocity of VCC 1448. 
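The thin-disk inclination limit and the corresponding sin i correction quoted above amount to the following short calculation, with the grid of trial inclinations mirroring the values used in the remainder of the paper:

import numpy as np

# Thin-disk lower limit on the inclination from the photometric
# ellipticity (i = 90 deg is edge-on), and the implied correction
# factor 1/sin(i) on any measured rotation.
eps = 0.19                                 # ellipticity of VCC 1448
i_min = np.degrees(np.arccos(1.0 - eps))   # ~36 deg for eps = 0.19
for i in (90.0, 75.0, 60.0, 45.0, i_min):
    factor = 1.0 / np.sin(np.radians(i))   # V_intrinsic = V_obs * factor
    print(f"i = {i:5.1f} deg -> intrinsic rotation higher by x{factor:.2f}")

At the lower limit i ≈ 36 degrees this recovers the factor of up to 1.7× quoted above.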
For the remainder of the paper, we therefore compare rotation as measured within the half-light radius of each galaxy rather than the maximum rotation of each galaxy. §.§ Dynamical Masses Now that we have a velocity dispersion and rotational velocity we use them to calculate a mass using the formula of <cit.>. This formula uses the 2-D circularised half-light radius R_ e, Circ and the line of sight velocity dispersion within that radius (σ) and takes the form: M(<4/3 R_ e, Circ) = 930 ( σ^2/ km s^-1) R_ e, Circ/ pc We note however that the formula of <cit.> was designed for pressure-supported systems and that both VCC 1448 and VCC 9 have significant rotation support within the half-light radius. To account for this rotation we instead use the light-weighted average V_ rms = √(⟨ V ⟩^2_ Rot + ⟨σ⟩^2) within 1 R_ e as the “σ" in Equation <ref> <cit.>. Using this method we derive a dynamical mass of 1.96±0.68×10^9 M_⊙ within 4.67 kpc for VCC 1448. It is worth noting that this total dynamical mass is only slightly greater than the expected stellar mass within the half-light radius (∼ 1.3× 10^9 M_⊙) and is within 1-σ uncertainties of it. Using the same process for VCC 9 we derive a dynamical mass of 2.69±2.29×10^9 M_⊙ within 3.96 kpc. Finally, our rotation and velocity dispersion allows us to investigate the stellar motions in the galaxy. We determine the flux-weighted average rotational velocity over velocity dispersion (⟨ V ⟩ / ⟨σ⟩) based on the commonly used formula of <cit.>: ⟨ V ⟩/⟨σ⟩ = √(⟨ V^2⟩/⟨σ^2⟩) = √(∑_n=1^N F_n V^2_n/∑_n=1^N F_nσ^2_n) with F_n, σ_n and V_n as the measured flux, velocity dispersion and rotational velocity in each bin respectively. We apply this formula to the central bins of our profile (i.e., those within 1 R_ e) where we calculate the flux values for each bin based on the measured Sérsic profile of the galaxy. Our final ⟨ V ⟩/⟨σ⟩ value for VCC 1448 is 0.45. We report this value along with the flux-weighted average velocity dispersion (⟨σ_ e⟩) and the flux-weighted average rotational velocity corrected to the major axis (⟨ V_ rot, e⟩) in Table <ref>. §.§ GC Candidate Recessional Velocities In addition to the BM, Medium, 5075Å central wavelength data (hereafter BM/M) we use for kinematics we also have access to BH3, Large, 5080Å central wavelength KCWI data (hereafter BH3/L) in two pointings as described in Section <ref>. The positioning of these two pointings is visible in Figure <ref>. We reduced and stacked these data in the same manner as the BM/Medium data but here we performed an additional flat fielding correction prior to stacking as described in <cit.> to correct for a low-level gradient observed in the images. For the BM/Medium data we do not perform this correction as it is not possible to disentangle a gradient caused by the galaxy in the background from this flat fielding error. A peculiarity of KCWI is its rectangular spaxels. In the case of this Large slicer data, the sampling is such that the spatial resolution is pixel-limited along the long axis of the slicer while being seeing-limited on the short axis. After rebinning and stacking, this causes point sources such as GCs to appear oblong in their shape. 
We extracted spectra for all visually identified compact sources in the stacked data cubes in two ways: 1) by placing 1” diameter apertures on each source and subtracting off a nearby region of the data cube as the background (sky + galaxy); 2) by placing a 1” diameter aperture on each source and subtracting off an annulus with inner diameter 1.5” and outer diameter 3” as background (sky + galaxy). In the case of the BM/M data these apertures were circular. In the case of the BH3/L data these apertures were oblong along the long axis of the slicer to account for the different spatial resolution in that direction. In total, we extracted spectra for 35 objects from both VCC 1448 and VCC 9, 11 of which were duplicated between the two KCWI configurations. In general, the first extraction, which uses a nearby offset sky region for background subtraction, proved better able to recover a recessional velocity than using an annular sky background. For the remainder of this paper we report values from this extraction. We fitted all of these spectra using 2 Gaussian moments (i.e., no h_3 or h_4) along with 6 additive and multiplicative polynomials. We ensure our final reported redshifts are robust to the initial redshift estimate by running with 51 different input recessional velocities linearly distributed from 200 above to 200 below the recessional velocity of the respective galaxy host. Redshifts were only reported when there was a clear independence of recovered GC velocity on the initial guess. 10 GCs satisfied this criterion for VCC 1448, while only one satisfied it for VCC 9. It is possible the confirmed GC for VCC 9 is the nucleus of the galaxy given its central location. One object was identified as a foreground star. For the 11 objects that were duplicated in both KCWI configurations, 3 had recessional velocities in agreement between the configurations and 2 had recessional velocities that could only be deemed reliable in the BM/Medium grating. For all other GC candidates remaining in either KCWI configuration, we were unable to measure a recessional velocity due to low S/N. We present the GC position, magnitude, projected distance to the VCC 1448/VCC 9 centre and recessional velocity in each configuration in Table <ref>. The recessional velocities have been barycentric corrected <cit.>. Magnitudes are quoted from the automatic Hubble Legacy Archive catalogues generated for the Hubble Space Telescope (HST) imaging from <cit.> and <cit.>. In Figure <ref> we display the position of our GCs around VCC 1448 and the foreground star in the sky, with colour coding corresponding to their absolute velocities around the recessional velocity of VCC 1448. The majority of our GCs reside at a higher recessional velocity than the galaxy. There is no obvious sign of rotation in the GC system. In Figure <ref> we then place our GCs in the phase space centred on VCC 1448. We include GC positions from the galaxy centre, but note that our rotation curve for the galaxy lies along the semi-major axis. Formally this does not take into account the ellipticity of VCC 1448, where projected GC distances away from the semi-major axis are “further out” in the galaxy (i.e., a larger fraction of the galaxy radius along that axis) for a similar observed distance. We do not apply these corrections so as to include and be comparable to the data plotted in <cit.> figure 4. Two of our GCs are in common. For our GC1 <cit.> measured a recessional velocity of 2324±3 and for our GC3 they measured 2279±8 .
Both values are slightly lower than our measurements but within 2-σ joint uncertainties. Again, we can see that the GCs, measured both in this work and in <cit.>, have higher recessional velocities than the galaxy on average. We note that this is at least partially due to the sampling of our KCWI data being asymmetric along only one side of the galaxy. We feed our 10 GC recessional velocities into the GC velocity dispersion fitting code outlined in Haacke et al. (in prep). Following appendix A in <cit.>, the code uses the Python package <cit.> to fit the mean velocity of the GC system and the velocity dispersion for the GCs using Markov Chain Monte Carlo exploration of the standard Gaussian maximum likelihood equation, as used in e.g., <cit.> or <cit.>. Individual GC velocities are weighted by their uncertainties when included in this fitting process. A flat prior is used in the range 0-100 km s^-1. Final values are reported from the median of the posterior distribution with the 16th and 84th percentiles as the uncertainties. From this analysis, and using only our 10 GC measurements, we report a GC-system velocity dispersion of 27^+9_-6 km s^-1, which is in good agreement with our stellar velocity dispersion of 24.0±1.6. Interestingly, our fitting also revealed a mean GC velocity of 2304^+9_-9 km s^-1, ∼20 km s^-1 higher than the value we measure for the galaxy. We note that our GC velocity dispersion is well below the 48^+16_-11 km s^-1 reported by <cit.> based on their sample of 9 GCs. However, our average GC-system velocity is in good agreement with their mean velocity, 2310^+16_-17 km s^-1. Using their data (E. Toloba, priv. comm.) our dispersion fitting code was able to produce results in agreement with their published values. We therefore combine our data with those from <cit.>, using our recessional velocities for GCs that appear in both studies. For the combined sample we derive an average GC-system velocity of 2308^+9_-9 km s^-1 with a dispersion of 38^+9_-6 km s^-1. As expected, this value lies between those recovered from either sample used independently. In order to ensure our measured velocity dispersions were not being overly driven by an outlier GC, we iteratively re-fitted the velocity dispersion for these samples, each time excluding one GC measurement. For the GC sample from this work, the maximum difference in the velocity dispersion from the fiducial value was ∼4 . Likewise, for the combined GC sample of our work and <cit.>, the maximum difference in the velocity dispersion was also ∼4 . In both cases, the exclusion of the GC that drove the maximum difference resulted in lower velocity dispersions. As both differences are well within the uncertainties of our fiducial values, we are satisfied that there does not exist an outlier GC overly affecting our results. We summarise these values for VCC 1448 in Table <ref>. In general, the mean of the GC system is expected to be representative of that of the galaxy (see , fig. 1). Here, however, we measure a mean velocity for the GC system that is larger than the one we measure for the galaxy. Fixing the GC mean to be that of the galaxy would cause a larger velocity dispersion to be measured from our data. However, this offset is at least partially due to our uneven sampling of the Gaussian distribution of GC velocities as the result of the uneven spatial coverage of our KCWI pointings. This reinforces the benefit of our study also targeting VCC 1448's stellar body to obtain an independent measurement of its recessional velocity.
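A minimal sketch of the Gaussian maximum-likelihood dispersion fit described above is given below. The GC velocities and uncertainties are placeholders rather than the measured values in Table <ref>, and the walker configuration is our choice rather than that of the Haacke et al. (in prep) code; the likelihood adds the measurement error of each GC in quadrature to the intrinsic dispersion, with a flat 0-100 km s^-1 prior.

import numpy as np
import emcee

def log_prob(theta, v, dv):
    """Gaussian likelihood for GC line-of-sight velocities, with the
    measurement error of each GC added in quadrature to the intrinsic
    dispersion; flat prior on the dispersion in 0-100 km/s."""
    mu, sigma = theta
    if not (0.0 < sigma < 100.0):
        return -np.inf
    var = sigma**2 + dv**2
    return -0.5 * np.sum((v - mu)**2 / var + np.log(2.0 * np.pi * var))

# Placeholder velocities/uncertainties [km/s] -- NOT the Table <ref> values.
v = np.array([2304., 2281., 2335., 2296., 2312., 2270., 2327., 2299., 2341., 2288.])
dv = np.full_like(v, 8.0)

ndim, nwalkers = 2, 32
p0 = np.column_stack([np.random.normal(v.mean(), 5.0, nwalkers),
                      np.random.uniform(5.0, 60.0, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(v, dv))
sampler.run_mcmc(p0, 3000)
chain = sampler.get_chain(discard=500, flat=True)
mu_fit, sigma_fit = np.median(chain, axis=0)        # posterior medians
mu_lo, sigma_lo = np.percentile(chain, 16, axis=0)  # 16th percentiles
mu_hi, sigma_hi = np.percentile(chain, 84, axis=0)  # 84th percentiles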
However, there also exists a difference between the GC velocity dispersion measured from our sample and that of <cit.>. <cit.>'s GC sample comes from GCs that are on average further from the centre of VCC 1448 than our data (see further Figure <ref>). If VCC 1448 exhibits a strongly rising velocity dispersion profile in its GC system, such a difference between the velocity dispersions measured from our GCs and those of <cit.> may be expected. However, a rising velocity dispersion profile is not seen in the stellar profile, which is approximately flat with radius (see further Figure <ref>). Finally, we use the combined GC projected positions and recessional velocities to estimate a mass for the galaxy using a Tracer Mass Estimator <cit.>. Here we calculate a mass within the galactocentric radius enclosing the outermost GC (r_out=6.48 kpc), by inputting the velocities of the tracers along the line of sight (v_los) and their radii (r). When calculating the tracer velocities we subtract off VCC 1448's systemic velocity as listed in Table <ref>. We also need to make assumptions on the log-slope of the gravitational potential (α), the orbital (an)isotropy (β) and the power-law slope of the GC density profile (γ). The tracer mass estimator then takes the form: M(< r_out) = (C/G) ⟨ v_los^2 r^α ⟩ with C = ((α + γ + 1 - 2β)/I_α,β) r_out^(1-α) and I_α,β = (π^(1/2) Γ(α/2+1)/(4 Γ(α/2 + 5/2))) (α + 3 - β(α+2)), where Γ refers to the gamma function. Here we assume α=-0.7, β=0 and γ=1.25. This is equivalent to assuming a combined NFW <cit.> and stellar mass potential along with completely isotropic orbits. Our GC density slope γ=1.25 assumption is based on reported GC density profiles of GC-rich Virgo dwarfs of similar magnitude in <cit.>. Based on these assumptions, and the combined 17 GC sample from our study with <cit.>, we measure an enclosed mass of 8.55^+1.07_-0.48×10^9 M_⊙ within 6.48 kpc for VCC 1448. Enclosed masses based on just the GCs in this work along with just the <cit.> GC sample are listed in Table <ref>. Uncertainties for our enclosed mass were estimated via the 16th and 84th percentiles of 1000 mass estimations in which each GC recessional velocity had been perturbed by a random number drawn from a Gaussian of standard deviation equivalent to the quoted uncertainty. We note, however, that our mass uncertainty does not include the systematic error caused by our assumptions on α, β and γ. A ±0.3 change in α changes the measured enclosed mass by at most 12%. A ±0.1 change in β will only change our measured mass by ∼±6%. A ±0.25 change in our assumed γ will change our measured mass by ∼±15%. § DISCUSSION §.§ Mass Measurement Comparison to Mass Profiles We are now equipped with a single mass measurement for VCC 9 and two mass measurements for VCC 1448 at differing radii. In Figure <ref> we compare these mass measurements to halo profiles based on total halo masses from the GC number and stellar mass – halo mass relationships (see below). We plot our enclosed dynamical mass measurements using both the <cit.> formula (for both VCC 9 and VCC 1448) and the <cit.> tracer mass estimation formula (for VCC 1448). All mass measurements have the stellar mass within their radius subtracted using <cit.> to leave solely the dark matter mass.
Finally, we include for comparison the region of halo profiles expected with total mass: 1) based on the expectation from the large GC count of each galaxy and the GC number – halo mass relationship (VCC 9 M_Halo=1.29×10^11 M_⊙; VCC 1448 M_Halo=4.97×10^11 M_⊙; ); and 2) based on the expectation given VCC 9's/VCC 1448's stellar mass and the stellar mass – halo mass relationship (VCC 9 M_Halo=2.47×10^11 M_⊙; VCC 1448 M_Halo=2.51×10^11 M_⊙; ). The halo mass profiles are plotted assuming the limiting cases of a cuspy NFW <cit.> profile and a <cit.> cored profile with core radii set to be 2.75× R_e. These correspond to minimal and maximal core formation in the halo, and thus to maximal and minimal masses, respectively, within the central regions of the dark matter halo. In each case, we take the halo concentration from <cit.>. For VCC 9, we find our mass measurement is in good agreement with the expected mass for dark matter halos of total mass predicted by either the stellar mass – halo mass or GC number – halo mass relationship. For VCC 1448, our crude mass profile based on two mass measurements indicates that it likely does not reside in the massive dark matter halo expected given its reported rich GC system. Rather, our mass measurements are in better agreement with the expected dark matter halo mass based on the stellar mass – halo mass relationship than with that based on the GC number – halo mass relationship. We note Figure <ref> does not include the scatter in the <cit.> relationship. The addition of this ∼0.3 dex scatter would place our mass measurements within 1-σ uncertainties. However, there is still some tension with our stellar dynamical mass measurement for VCC 1448. In order for this measurement to fit with the massive dark matter halo expected given either relationship, our choice of a normal concentration parameter (c=8.93) for VCC 1448 must be incorrect. Recent findings by <cit.> showed that higher-than-usual concentration parameters are needed to explain their studied Virgo dwarf galaxies' dynamical masses. However, VCC 1448 would require the opposite. Namely, here a lower concentration parameter would be required to reproduce our measured dynamical masses. We take this as unlikely given the cored halo mass profiles are not in good agreement with our data. Strictly speaking, although we generate <cit.> mass profiles using a standard halo concentration, the mass profile modifications to create the core lower the concentration of the resulting mass profile. In this way, our choice to plot cored halos has a similar effect to lowering the concentration of the dark matter halo. A further lowering of the concentration of VCC 1448's dark matter halo to fit the expected total halo mass of the GC number – halo mass relationship is therefore unlikely. It is worth noting that while many suggest that VCC 1448 has smooth isophotes (see e.g., ), some have suggested the presence of a central bar and/or knots and/or shells in its isophotes (see e.g., ). These may indicate a recent interaction/merger in the galaxy's formation history; however, we note many of the knots appear to be candidate clusters in higher resolution imaging <cit.>. Alternatively, it has been suggested that VCC 1448 may currently be in the transition from a late-type to an early-type dwarf galaxy driven by ram pressure stripping in the Virgo Cluster (see e.g., ).
If either is the case, VCC 1448's dark matter halo may not currently be in equilibrium, and simplified comparisons such as the one we make in Figure <ref> may not be appropriate. Performing more extensive Jeans model-based fitting to our data for VCC 1448 to better explore its dark matter halo will be the subject of future work. §.§ Dynamical Masses versus Globular Cluster System Richness It has been observed that the GC-richness of a galaxy correlates with its dynamical mass within the half-light radius over many orders of magnitude for normal galaxies <cit.>. Given the differing GC counts of VCC 9 and VCC 1448, and thus differing expected halo masses, there is an expectation that VCC 1448 should be more massive, possibly following this trend. To investigate this we have produced Figure <ref>. Here we plot the number of globular clusters (N_GC) vs. the dynamical mass within the half-light radius (M_dyn) for VCC 9, VCC 1448 and Dragonfly 44. Similar to <cit.>, we include the <cit.> galaxy sample along with a line of best fit to galaxies in the sample with log(M_dyn / M_⊙)<11. While VCC 9 follows the sample of normal galaxies from <cit.>, VCC 1448 and Dragonfly 44 both exhibit greater GC richness than other galaxies of their dynamical mass. When considering the positioning of VCC 1448 in this space, inclination effects may cause us to underestimate the intrinsic rotation of the galaxy, and thus underestimate the dynamical mass. Formally, our choice to plot the measured dynamical mass of VCC 1448 in Figure <ref> is equivalent to assuming the galaxy is viewed edge-on, which may not be accurate given its low ellipticity (ϵ=0.19). A galaxy with higher intrinsic ellipticity viewed more face-on could also reproduce our rotation measurement while having a higher intrinsic rotation <cit.>, increasing the dynamical mass. In practice, this effect can only lead to a modest increase in dynamical mass, which is insufficient to place VCC 1448 on the relation for normal galaxies we show in Figure <ref>. It is also worth considering the type of galaxies that both Dragonfly 44 and VCC 1448 are being compared to in Figure <ref>. Because Dragonfly 44 and VCC 1448 have elevated GC richness for dwarfs of their stellar mass, the galaxies of similar GC richness in the panel tend to be of much higher stellar mass, and this stellar mass is included in the dynamical mass measurement of each galaxy. If this stellar mass component is removed from the dynamical mass, so that the remaining mass is predominantly dark matter, better agreement is expected between Dragonfly 44/VCC 1448 and the normal galaxies. In this way, the GC number – dynamical mass relationship may be seen as an extension of the GC number – halo mass relationship. A separate effect that also acts to place Dragonfly 44 and VCC 1448 on the relationship is that of the dark matter halo profile. Both VCC 1448 and Dragonfly 44 exist in a stellar mass regime where core formation is expected to be maximally efficient <cit.>. In contrast, the more massive galaxies of similar GC richness are more likely to have dark matter cusps, which will increase their central dynamical masses relative to a cored dark matter halo of similar mass. Alternatively, it may be tempting to ascribe any differences between VCC 1448 and other galaxies to an over-estimation of VCC 1448's GC numbers by <cit.>, as these were measured with ground-based imaging; it is conceivable that this ground-based GC count is a severe overestimate.
However, included in <cit.> was the galaxy NGVSUDG-04/VLSB-B, for which a total GC system of 13 ± 6 GCs was measured. <cit.> have now spectroscopically confirmed 14 GCs to be associated with this galaxy, giving some credence to the reliability of the <cit.> methodology. Further to this, we note that the lower limit for VCC 1448's GC content comes from the HST/WFPC2 imaging of <cit.>. Here the estimate of 34 GCs was determined from imaging corrected for luminosity incompleteness, but not for radial incompleteness <cit.>. Specifically, the <cit.> HST imaging only extends to ∼1 R_e and “excludes much of the halo light and any halo clusters that may exist”. The addition of a radial correction to their total number can easily double the total number of GCs inferred for the galaxy, since the GC half-number radius is frequently greater than the host galaxy's half-light radius (; Janssens et al. Submitted). At the very least, any moderate radial correction of GC numbers to the <cit.> value will leave VCC 1448 with high GC numbers for the dynamical mass we measure, so that the conclusions we draw from Figure <ref> are unaffected. Such a correction is also unable to reduce the predicted total halo mass for VCC 1448 given the <cit.> relationship sufficiently to alter the conclusions we draw from Figure <ref>. Finally, it is worth considering VCC 1448 in comparison to Dragonfly 44 in Figure <ref>. Many of our conclusions that hold for VCC 1448 also hold for Dragonfly 44. Namely, it also has higher GC counts for its dynamical mass than many previously studied galaxies <cit.>. A partial explanation for this is Dragonfly 44 having a cored halo mass profile <cit.>, which would lead to a lower V_rms at fixed halo mass (GC number). Indeed, we note that the large scatter of the plotted sample in Figure <ref> is likely caused by a combination of the scatter in the GC number – halo mass relationship and a scatter in the intrinsic velocity curves of these dark matter halos at fixed halo mass (GC number). Dragonfly 44 and VCC 1448 may thus be some of the most divergent dwarfs in what is referred to by many as a diversity of rotation curves problem (c.f., ) in the dwarf galaxy stellar mass regime. Similar conclusions have been reached for a larger sample of large half-light radius, GC-rich dwarfs in <cit.>. §.§ Kinematics vs. Ellipticity An interesting difference between VCC 9, Dragonfly 44 and VCC 1448 is the level of rotation in their kinematics. We investigate this further in Figure <ref>. Here we plot galaxies in the commonly used ⟨ V ⟩/⟨σ⟩ (left axis) and λ_R (right axis) vs. ellipticity parameter spaces to further investigate their level of ordered vs. random motions (see e.g., ). In Figure <ref> we include an isotropic line in ⟨ V ⟩/⟨σ⟩ – ellipticity space. This line is based on the tensor virial theorem and assumes a global anisotropy parameter (commonly δ) of 0 (c.f. sec. 4.8.3 of or ); it is derived for edge-on systems. As the smaller connected stars, we show the intrinsic values for VCC 1448 that would reproduce the measurements in this work if the galaxy were viewed at viewing angles of 75, 60, 45, and 36 degrees. Any inclination effect in which a galaxy is moved from a face-on view to a more edge-on view will tend to move galaxies to the right in this parameter space <cit.>. This is also the direction of increasing global anisotropy. As such, any galaxy that is incorrectly assumed to be edge-on will also systematically appear to reside in regions of lower anisotropy.
There exists the possibility that our assumption of viewing VCC 1448 edge-on will cause us to underestimate the global anisotropy of the galaxy. Additionally, in Figure <ref> we include the commonly used parameter λ_R that quantifies the galaxy's kinematic support. We convert our ⟨ V ⟩/⟨σ⟩ to λ_R using the empirical relations derived in <cit.> and listed by <cit.>. We also include the divide between the regions prescribed for fast and slow central rotators as proposed by <cit.> and updated in <cit.>. Both VCC 9 and VCC 1448 are fast rotators centrally, while Dragonfly 44 is a slow rotator. VCC 1448 lies extremely close to the isotropic line; hence, there is some indication that the flattening of VCC 1448's morphology can be fully explained by its rotation alone. This contrasts with the (majority of) larger elliptical galaxies that lie to the right of this line, where the commonly assumed explanation is that there exists some level of anisotropy in the stellar orbits within the galaxy, caused by its formation via (dry) mergers <cit.>. However, this may become more complex when the exact details of the merger (see e.g., ) are considered. VCC 1448 may therefore be viewed, at least partially, face-on. This would allow a level of anisotropy in its kinematics which may reflect a formation via galaxy mergers. As previously discussed, the conclusion that VCC 1448 may not be edge-on does not affect the conclusions we draw for VCC 1448 in Figure <ref>. While of slightly lower stellar mass than the lowest mass bin in <cit.>, VCC 1448 has a log(V/σ)≈-0.4, which makes it extremely unlikely that they would classify it as a dynamically cold disk. The implication is that VCC 1448 cannot be easily explained as a rotationally supported, dynamically cold disk, nor as a dispersion-dominated slow rotator; it is instead classified as an “intermediate” system. As suggested by <cit.>, this may imply that VCC 1448 either did not have the opportunity to accrete extra high-angular-momentum gas from its surroundings or that it has an overly active merger history. While there is some evidence that the galaxy may not have had the opportunity to accrete high-angular-momentum gas given its location in a cluster environment, here we prefer the merger explanation. VCC 1448 exhibits isophotal twists along with a lower dynamical mass than expected from estimations of its halo mass given either relationship, with a possible cause being a lack of equilibrium in the galaxy. Both lines of evidence align better with the merger hypothesis than with the high-angular-momentum gas accretion hypothesis. However, mergers and other tidal effects are not the only explanation for VCC 1448's kinematics. The simulations of <cit.> suggest a broad distribution in stellar kinematics for such large dwarf galaxies. Furthermore, the observational work of <cit.> has shown that dwarfs with kinematics similar to VCC 1448 may form in isolated void environments, where tidal effects and mergers play a much lesser role. Galaxies that form earlier and quench earlier likely have lower levels of V/σ, reflecting their fast formation and inability to readily acquire angular momentum from gas. As discussed in <cit.> and <cit.>, Dragonfly 44 is likely a “failed galaxy”, one that quenched early and has experienced little growth in stellar mass since then. We see some possible evidence of this in that Dragonfly 44 has a much lower V/σ than VCC 1448, which has a relatively younger age of ∼7 Gyr.
It is worth noting that these formation scenarios do not need to be considered for VCC 9. Here <cit.> would classify it as a “fast rotator”, suggesting it has likely formed via the accretion of gas similar to the more commonly studied higher mass galaxies. Indeed, studies such as <cit.> suggest both of our Virgo cluster galaxies may exhibit a high level of rotation for a cluster dwarf (see their figs. 9/10). If both galaxies were relatively late to fall into the Virgo cluster (as will be seen later in Section <ref>), some of their large size may be related to their not yet having experienced the transformation process that many classical dEs have undergone in the cluster environment <cit.>. This would also imply that they were not pre-processed in a large group prior to infall. Additionally, based on the kinematic evidence, we suggest it is unlikely that an evolutionary link exists between Dragonfly 44 and VCC 1448. In short, it is unclear how the passive evolution of VCC 1448 in the cluster environment would have enough impact on its kinematics to turn it into a slow rotator like Dragonfly 44. Conversely, our conclusion that VCC 1448 may have experienced past mergers makes it unlikely that it can represent a different evolutionary pathway for a “failed galaxy” such as Dragonfly 44. Finally, the simulations of <cit.> suggest that dwarf galaxies in the mass regime of those studied here may be biased to be more GC-rich based on their initial gas properties. Higher-pressure gas lends itself to more efficient cluster formation (i.e., higher GC richness) and tends to create galaxies that are more pressure-supported. As such, there may be an expectation that GC-rich galaxies should be preferentially pressure-supported and GC-poor galaxies preferentially rotationally supported. VCC 9, VCC 1448 and Dragonfly 44 follow this trend: as their relative GC-richness (i.e., M_GC/M_⋆) increases (see Table <ref>), they exhibit lower levels of V/σ. We note, however, that our sample is relatively small, with only 3 galaxies, of which 1 is at a slightly different stellar mass. A future study of a larger sample looking to probe these trends is vital to confirm this hypothesis. §.§ Globular Cluster System Biasing In Figure <ref> we explore the idea of `biasing', whereby at fixed stellar mass, galaxies that fall in earlier have formed faster and thus have a higher fraction of their stellar mass bound in a richer GC system. Commonly this is observed as both the alpha enhancement and the cluster-centric distance/phase space position of galaxies correlating with their specific frequency (see e.g., ). Figure <ref> displays a phase space diagram for the cluster environment. It is colour-coded based on the regions prescribed by the simulations of <cit.>. Darker shaded areas correspond to regions where galaxies predominantly fall in at earlier times than those in lighter regions. In addition to plotting the three target galaxies of our study, we include the Virgo cluster dwarf galaxy sample of <cit.>, which is comprised primarily of more compact, classical dEs in a similar stellar mass range to our targets. In the <cit.> data, a clear trend was found between GC-richness and cluster-centric distance. As has been noted for other GC-rich UDGs, including Dragonfly 44, it is not clear that UDGs follow this trend <cit.>.
For example, the recessional velocity of Dragonfly 44 has led some authors to argue it may be part of a low-mass galaxy group only currently being accreted onto the Coma Cluster <cit.>, which fits with its phase space positioning in Figure <ref>. VCC 1448 resides in the late infall region of Figure <ref>, suggesting it did not fall into the galaxy cluster early. We have further evidence for this from VCC 1448's stellar populations, which suggest a mass-weighted age of ∼ 7 Gyr (Ferre-Mateu et al. in prep). A full stellar population analysis of VCC 1448's radial profile will be the subject of future work. VCC 1448 also resides in a region of phase space that is outside of the sample of <cit.> that was used to argue for this `biasing' of dwarf galaxies in clusters. Similar conclusions have been reached by <cit.>, who found that multiple large, GC-rich Virgo dwarfs are known to exist in this region of large velocities relative to the cluster (see their fig. 10). In Figure <ref> VCC 9 resides in a region of mixed infall times, making its infall time ambiguous. We note that it does reside near the region traced by the <cit.> Virgo dwarfs. In comparison to these Virgo dEs, a relatively low GC specific frequency would be predicted, which is in agreement with our observations. We conclude that while many dwarf galaxies may experience an elevated GC-richness (i.e., a higher than normal M_GC/M_⋆) due to their fast formation and early accretion into the cluster, this is not true for either Dragonfly 44 or VCC 1448. They likely formed their high M_GC/M_⋆ external to the Coma and Virgo clusters, respectively. As such, the special star formation environment common in a high redshift proto-cluster that leads to an elevated M_GC/M_⋆ may also be possible in field environments. Alternatively, a separate formation pathway to elevate the relative stellar mass in the GC system may be required for the galaxies forming in the field. § CONCLUSIONS In this work, we have presented new Keck/KCWI spatially resolved data for the galaxy VCC 1448 along with new Keck/KCWI integrated data for the galaxy VCC 9. VCC 1448 is similar in size and magnitude to VCC 9 but hosts a far richer GC system. Our main conclusions based on these data are as follows: * We revise the recessional velocity of VCC 1448 to 2280±6 km s^-1. We suggest the previous value, which is based on an HI detection, may be inaccurate due to ram pressure stripping of gas in the cluster environment. * We present a radial kinematic profile for VCC 1448 to just beyond the half-light radius. VCC 1448 has an approximately flat velocity dispersion profile; however, it exhibits a rising rotational component. * We confirm that 10 compact sources within our field of view are GCs associated with VCC 1448, 1 is a GC associated with VCC 9, and 1 is likely a foreground star. For VCC 1448, we note that the portion of the GC system that we have sampled has noticeably higher recessional velocity than the galaxy's main stellar body. This is at least partially (if not fully) due to the bias in spatial sampling of our data to an area of the galaxy with rising velocity. * We derive a dynamical mass for VCC 9 from its stellar velocity dispersion and rotation of 2.69±2.29×10^9 M_⊙ within 3.96 kpc. It is in agreement with halo profiles of the total mass expected from both the GC number – halo mass relationship and the stellar mass – halo mass relationship.
* Based on our velocity measurements we can produce two independent mass estimates for VCC 1448 at different radii, building a crude mass profile. One mass estimate comes from the stellar velocity dispersion and rotation: 1.96±0.68×10^9 M_⊙ within 4.67 kpc. The other comes from the velocities of the GC system: 8.55^{+1.73}_{-0.77}×10^9 M_⊙ within 6.48 kpc. Our mass profile is in better agreement with the halo profiles of total mass based on the stellar mass – halo mass relationship than the GC number – halo mass relationship. Further to this, we made comparisons to the galaxy Dragonfly 44, which is similar in both size and GC-richness to VCC 1448 but has a lower stellar mass than either VCC 9 or VCC 1448. Based on a comparison of the three galaxies we conclude: * When considering the position of the three galaxies in N_GC – M_dyn space, VCC 1448 and Dragonfly 44 exhibit rich GC numbers for their dynamical mass. VCC 9, however, follows the normal galaxy sample. If each galaxy's total halo mass estimate from its GC counts is correct, this suggests a large diversity in dwarf galaxy rotation curves in this stellar mass regime (i.e., log(M_⋆ / M_⊙)≈ 8-9). * The position of VCC 1448 in ⟨ V ⟩/⟨σ⟩ – ellipticity space may suggest that we are not viewing the galaxy edge-on. While this does not change our previous conclusions, it suggests a level of anisotropy that may support the hypothesis of a formation via mergers. A formation via mergers is further supported by the isophotal twists found in other studies. * VCC 9 exhibits more rotational support than VCC 1448, which in turn exhibits more rotational support than Dragonfly 44. This, combined with their different GC systems, suggests multiple formation pathways are needed to create a large dwarf galaxy at fixed luminosity. It also supports predictions from simulations in which GC formation efficiency is affected by the gas properties during initial star formation. * Dragonfly 44's position in stellar mass – halo mass space appears to require early quenching and passive evolution. It is not clear how the passive evolution of VCC 1448 could alter its kinematics into those seen in Dragonfly 44. An evolutionary link between Dragonfly 44 and VCC 1448 therefore appears unlikely. § ACKNOWLEDGEMENTS We thank the anonymous referee for their careful, constructive consideration of our manuscript. We thank E. Toloba and T. Lisker for providing data to aid comparisons to their work. JSG thanks C. Johnson for sharing their wisdom throughout the creation of this work; she will be deeply missed. Some of this work was completed while JSG was an ECR visiting fellow at the Instituto de Astrofísica de Canarias; he is grateful for their support. This research was supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. DF, WC and JB thank ARC DP220101863 for financial support. AJR was supported by the National Science Foundation grant AST-2308390. This work was supported by a NASA Keck PI Data Award, administered by the NASA Exoplanet Science Institute. AFM has received support from RYC2021-031099-I and PID2021-123313NA-I00 of MICIN/AEI/10.13039/501100011033/FEDER,UE, NextGenerationEU/PRT. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration.
The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. § DATA AVAILABILITY The KCWI data presented are available via the Keck Observatory Archive (KOA) 18 months after observations are taken. § EXAMPLE GC SPECTRA This appendix provides Figure <ref> with 3 example spectra from our GC fitting.
http://arxiv.org/abs/2405.09440v1
20240515153233
Geminate Exciton Fusion Fluorescence as a Probe of Triplet Exciton Transport after Singlet Fission
[ "Eric A. Wolf", "Ivan Biaggio" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall" ]
Department of Physics, Lehigh University, Bethlehem, PA 18015, USA The geminate annihilation of two triplet excitons created by singlet exciton fission is affected by the dimensionality of transport as determined by typically anisotropic triplet exciton mobilities in organic molecular crystals. We analyze this process using a random-walk model where the time-dynamics of the geminate annihilation probability is determined by the average exciton hopping times along the crystallographic directions. The model is then applied to the geminate fluorescence dynamics in rubrene, where the main channel for triplet-triplet annihilation is via triplet fusion and subsequent photon emission, and we identify the transitions between transport in one, two, and three dimensions. Geminate Exciton Fusion Fluorescence as a Probe of Triplet Exciton Transport after Singlet Fission Ivan Biaggio May 20, 2024 ======================================================================================================= Triplet exciton pairs generated from singlet exciton fission in molecular crystals can separate by independent diffusion of the two triplet excitons, and the probability of their re-encounter will determine the rate of geminate annihilation at any given time. In materials where the triplet exciton energy is close to half the singlet exciton energy, geminate annihilation can result in triplet exciton fusion back into a singlet exciton and subsequent photon emission. The existence of fluorescence arising from this effect was recognized early on in anthracene crystals <cit.>. Under pulsed illumination, triplet exciton fusion leads to a “delayed fluorescence” that contrasts with the “prompt fluorescence” resulting from the radiative decay of the initial singlet exciton population <cit.>. This delayed fluorescence has been investigated on several time scales in various materials <cit.>, and the time-evolution of the triplet exciton density has also been observed using other techniques like photo-induced absorption <cit.>. Recent interest in the topic has grown because of the relevance of multi-exciton generation processes for photovoltaics <cit.>. In this work, we use photons produced through the fusion of geminate triplet excitons as a probe of triplet exciton transport over multiple time decades, and we show that the time-dependence of geminate fusion fluorescence can provide information on the dimensionality of triplet exciton diffusion in the crystal lattice. While spin is also important when the triplet pair has been generated in a spin-coherent state <cit.>, in the following we focus on variations in geminate fusion probability over multiple time-decades that are determined by the probability of re-encounter in the geminate triplet-pair. We consider fluorescence dynamics measurements performed in a low excitation density limit where the average distance between initially photoexcited excitons is such that only geminate triplet excitons can interact with each other. These experiments measure the time-dependence of the re-encounter probability, instead of just the triplet density probed in absorption measurements. They also make it possible to study transport over multiple time-decades and detect events that have an exceedingly low probability of occurring but provide important information on exciton transport, typically late re-encounters between triplets that have become widely separated by diffusion.
The rarity of such events can be compensated by appropriately long accumulation times and the sensitivity of single-photon detection. Previous studies done in this regime were confined to amorphous materials or thin films <cit.>, where the discussion of the dimensionality of diffusion is complicated either by intrinsic disorder or by the possible presence of nanocrystals <cit.>. In anisotropic molecular crystals, the re-encounter probability in the triplet exciton pair must behave differently on different time scales, depending on the dimensionality of transport <cit.>. Immediately after a triplet pair is created by fission, the two excitons will initially hop in the highest-mobility direction with the smallest hopping time, diffusing in only 1 dimension (1D) before transitioning to 2 and 3 dimensions (2D and 3D) at later times. These transitions in dimensionality change how the re-encounter probability evolves and lead to observable transitions in its dynamics. We model the effect of the dimensionality of triplet exciton diffusion on geminate-annihilation dynamics using an unbiased random walk for each exciton in the triplet pair <cit.>. Seeking the simplest possible approach, we allow each exciton to independently hop between next-neighbors in a primitive cubic lattice, with different average hopping times in different directions. We deliberately ignore the possibility of an initially spatially correlated triplet pair <cit.> in order to focus on the essential features of later-time effects. By using the distance between the two triplet excitons as the relevant coordinate, a random walk of two excitons can be reduced to a single-particle random walk that starts at the origin, with an average hopping time that is half that of each individual exciton. Here we modify this well-known system <cit.> by introducing a finite average probability γ (0≤γ≤ 1) that a re-encounter event leads to annihilation. In this case, the probability that a re-encounter happens at time t is determined by p_0^i(t) = exp(-t/τ_i) I_0(t/τ_i), p_0(t) = p_0^1(t) p_0^2(t) p_0^3(t), p_γ(t) = p_γ(n = t/τ), p_γ(n) = f(n) + (1-γ) ∑_{i=1}^{n-1} f(i) p_γ(n-i). Here, I_0 is the modified Bessel function of the first kind of order zero <cit.>, and p_{γ=0}^i(t) is the continuous-time probability corresponding to an ensemble average of all possible one-dimensional random walks <cit.>. The p_0^i(t) are the independent probabilities that a random walk along the coordinate characterized by the hopping time τ_i is back at the origin at time t, and p_0(t) is the combined probability that the random walk is found at the origin along each coordinate simultaneously. Finally, the re-encounter probability for finite γ, p_γ(t), is obtained by discretizing time with the average hopping time τ^{-1} = τ_1^{-1} + τ_2^{-1} + τ_3^{-1} (we discuss alternative choices below) and using the recursive relationship (<ref>) between the discrete-time probability p_γ(n) that a re-encounter happens after exactly n time-steps in the presence of annihilation, and the probability f(n) = p_1(n) that the first re-encounter happens after n steps. This expression is well-known in the γ→ 0 limit <cit.>. It is useful to note that p_0^i(t) ∝ t^{-1/2} in the limit t ≫ τ_i <cit.>. It follows that for γ=0 and a d-dimensional random walk, p_0(t) ∝ t^{-d/2} for large t, a stronger dependence on the dimensionality than for γ=1, where p_1(t) = f(t) ∝ t^{-3/2} for d=1,3 and p_1(t) = f(t) ∝ t^{-1} ln(t)^{-2} for d=2 (see Refs. Pfluegl98, Redner01). The importance of Eq.
(<ref>) here is that it can be used to analytically calculate p_γ(t) from the p_0(t) in Eq. <ref>. This is done in two steps, by first solving for f(n)=p_1(n) from p_0(n), and then solving for p_γ(n) from f(n). This calculation can in general be done as described above by discretizing time using the average hopping time τ, but we note that practical calculations can also be done by using a longer time-step Δ t in place of τ while at the same time using a renormalized γΔ t / τ≪ 1 in place of γ. We show in Fig. <ref> that there is an exact correspondence between this analytical method and a Monte Carlo simulation of the random walk of the two excitons. The Monte-Carlo simulations in Fig. <ref> were done starting with two triplet excitons adjacent to each other, implementing their two independent random walks, each with average hopping times 2 τ_i, and allowing for an average annihilation probability γ upon a re-encounter. We repeated the simulation about one million times, storing the time when each annihilation event occurred into logarithmically distributed time-bins to obtain the Monte Carlo data in the figure. In this work we choose to focus on the case where triplet-triplet annihilation is relevant, but where the way in which triplet excitons can separate still leads to a meaningful singlet exciton fission probability. Hence, we do not consider the case of exciton diffusion remaining indefinitely confined to one dimension. In such a 1D random walk with n steps, the average number of re-encounters is ∼√(n), and geminate annihilation is almost certain after n ∼ 1/γ^2, also leading to a deviation from the initial t^-1/2 dependence, towards a steeper t^-3/2 power-law. This situation does not occur in our analysis because we allow for a finite probability for the exciton to move out of an initial 1D behavior. Fig. <ref> highlights several features of the transition between the expected power law behaviors for 1D and 2D transport as the slower hopping time τ_2 comes into play, determining an average number of steps during the initial 1D random walk of n=τ_2/τ_1. In the small γ limit (here we always have γ < 1/√(n)), the asymptotic behaviors t^-0.5 and t^-1 for 1D and 2D diffusion become clearly visible for t ≲τ_2/100 and t ≳ 10 τ_2, far enough away from the transition. This can be understood from the functional form of p_0^i(x) = exp(-x)I_0(x) as defined in Eq. <ref>: between x ≈ 0.25 and x=10 it overshoots the extrapolation from its x →∞ limit of x^-0.5, with a tangent on the log-log plot that reaches a power law with an exponent up to n=-0.6. A consequence of this is the fact that the 1D→2D transition is always characterized by p_γ(t) transiently becoming slightly larger than the power-law extrapolation from the 2D transport region, leading to an inflection point that becomes more prominent as γ increases (see the dashed line extrapolation of the γ=0.018 data in Fig. <ref>). This feature is associated with the constant probability per unit time of 1/τ_2 for an exciton to “hop out” of the 1D random walk, and it is important because it could lead to the appearance of an exponential decay when geminate fluorescence is measured on a limited time scale of the order of t ∼τ_2 <cit.>. Fig. <ref> also shows the time dependence of the re-encounter probability when a 3D transport regime can be reached. There are clearly identifiable transitions as the dimensionality of the triplet exciton transport changes with time from 1D, to 2D, to 3D. 
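To make the two-step procedure concrete, the following minimal sketch (illustrative only, not the authors' code) computes f(n) from p_0(n) via the renewal relation and then p_γ(n) from f(n); the hopping times are placeholders, and the O(N²) convolution loops are adequate for the modest step counts used here.

```python
import numpy as np
from scipy.special import i0e  # i0e(x) = exp(-x) * I0(x), numerically stable

def p0(n, taus, tau):
    """p_0 at step n: product of 1D return factors exp(-t/tau_i) I0(t/tau_i), t = n*tau."""
    t = n * tau
    out = np.ones_like(t, dtype=float)
    for ti in taus:
        out *= i0e(t / ti)
    return out

def first_return(p0_vals):
    """Invert p0(n) = f(n) + sum_{i=1}^{n-1} f(i) p0(n-i) for the first-return f(n)."""
    f = np.zeros_like(p0_vals)
    for n in range(1, len(f)):
        f[n] = p0_vals[n] - sum(f[i] * p0_vals[n - i] for i in range(1, n))
    return f

def p_gamma(f, gamma):
    """Re-encounter probability with annihilation probability gamma per encounter."""
    p = np.zeros_like(f)
    for n in range(1, len(p)):
        p[n] = f[n] + (1.0 - gamma) * sum(f[i] * p[n - i] for i in range(1, n))
    return p

# Placeholder hopping times tau_1 << tau_2 << tau_3 of the relative coordinate
tau1, tau2, tau3 = 1.0, 1.0e3, 1.0e6
tau = 1.0 / (1.0 / tau1 + 1.0 / tau2 + 1.0 / tau3)
n = np.arange(0, 3000, dtype=float)
f = first_return(p0(n, (tau1, tau2, tau3), tau))
pg = p_gamma(f, gamma=0.018)
```

Binning n·τ logarithmically and plotting p_γ then reproduces the qualitative 1D→2D→3D transitions discussed above; as noted in the text, a longer time step Δt with the renormalised annihilation probability γΔt/τ can be used to reach later times cheaply.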
Each dimensionality is characterized by equally clear power-law regimes, with exponents slightly larger than predicted in the t →∞ limit of each transport regime. This effect is stronger for the intermediate 2D transport regime (where in our example we get a power-law exponent of n=-1.13 for γ=0), due both to the proximity of the transition to different dimensionalities of transport and to a larger annihilation probability. From the re-encounter probability p_γ(t) one obtains a time-dependent geminate annihilation probability density γ p_γ(t)/τ. The γ-dependent integral of this quantity then gives the overall probability that geminate annihilation occurs before a given time, shown in the inset of Fig. <ref> for the case when transport dimensionality changes from 1D to 2D. This overall geminate annihilation probability is an important parameter for applications where exciton fission yield must be optimized. Since the average number of re-encounters in a 1D random walk of n steps is ∼√(n), much larger than for 2D random walks, the overall probability of geminate annihilation is ∼γ√(τ_2/τ_1) before the 1D→2D transition in the small γ limit. This is equal to 0.1 for γ=0.3 × 10^-2 and the τ_2/τ_1 = 1000 used in Fig. <ref>, in good agreement with the time-dependent geminate annihilation probability curves in the inset of the same figure. We note that if the 1D regime persists long enough, then even a relatively small value of the average annihilation probability per re-encounter, γ, will lead to a significant reduction in fission efficiency – this is shown in the inset of Fig. <ref>, where fission efficiency decreases below 50% for an annihilation probability per re-encounter as small as 2%. It is likely that some measure of 1D transport will always be necessary to facilitate initial separation of the triplet excitons, but one must be aware that it is also important to minimize γ√(τ_2/τ_1) in order to optimize fission efficiency. We experimentally investigated geminate fluorescence dynamics in rubrene single crystals, which have a large singlet fission efficiency, a triplet-triplet annihilation dominated by fusion and photon emission, large triplet lifetimes, and anisotropic triplet transport <cit.>. We used single crystals grown from 99% pure ACROS Organics rubrene powder via physical vapor transport in argon gas. We label the crystallographic axes of these crystals as a, b, and c following the convention of Ref. Irkhin12. We investigated crystals as large as a few mm along the a and b axes, with thicknesses of the order of 100 μm along the c axis. The samples were exposed to 150 fs pulses of 513 nm, b-polarized light from a Light Conversion PHAROS laser operating at 5 kHz; this gave a time-interval between successive pulses twice as long as the triplet exciton lifetime <cit.> in rubrene, ensuring a sufficient decay of the triplet population between illumination pulses. The beam diameter in the sample was a few mm, delivering excitation densities below ∼ 10^19 m^-3, which guaranteed a fluence-independent fluorescence dynamics up to ∼ 5 μs after photoexcitation. The resulting geminate fluorescence dynamics was measured by time-correlated single photon counting <cit.>, with after-pulsing effects avoided by detecting only a maximum of one photon per cycle. Typical integration times were of the order of 2-10 hours. Fig. <ref> presents the geminate fluorescence dynamics we obtained in several rubrene single crystals. 
The data was very well reproducible from sample to sample, except for the time window below 50 ns, where we observed variations in the data that correlated with the presence of an anomalous fluorescence spectrum characterized by a 650 nm emission band <cit.>. In this context, we note that our apparatus also captures the picosecond-scale “prompt” fluorescence that is due to the radiative decay of the initially photoexcited singlet states. Even though only a few percent at most of the photoexcited singlet states decay radiatively instead of undergoing fission <cit.>, and the corresponding signal is broadened by our limited response time, this transient signal is large enough to produce an initial decay of about 1 order of magnitude in signal intensity. This analysis is also supported by the data due only to the long-wavelength fluorescence above 650 nm, which does not show the strong enhancement of the signal that is otherwise seen below 1 ns. In fact, the long-wavelength fluorescence is exclusively due to interaction of the triplet excitons created by fission with extraneous defects <cit.>, without any contribution from the initially photoexcited singlet states. We also note that the initial dynamics of this triplet-only fluorescence could in principle be analyzed in terms of a 1D random walk and the expected power-law dependence, corresponding to a straight line with a slope of -0.5 in the log-log plot of Fig. <ref>. We cannot do so quantitatively here, however, because our limited time-resolution makes any such slope in the first decade of Fig. 2 too difficult to interpret. In any case, we have also shown that limiting the detection to fluorescence wavelengths below 600 nm essentially eliminated any sample-dependent effects on the fluorescence dynamics, even from crystals with strong abnormal fluorescence. In the following we concentrate on the fluorescence dynamics that does not show any sample-to-sample variation, and in particular on the data between ∼ 100 ns and ∼ 5 μs after excitation. We reach this intrinsic limit by using photoexcitation densities of the order of 2×10^19 m^-3. At higher excitation densities, one observes deviations in the fluorescence dynamics that are consistent with the onset of non-geminate annihilation. In Fig. <ref> we show the deviation obtained for an excitation density of 12×10^19 m^-3, with its typical exponential decay that corresponds to half the triplet exciton lifetime <cit.>. The most important feature of the experimental results in Fig. <ref> is that the data in the time window from 30 ns to 5 μs (highlighted by full data points) clearly shows the expected transition between power-law regimes that is predicted by Fig. <ref>. Power-law regimes at the beginning and end of this time window have exponents of -1.18 ± 0.02 and -1.66 ± 0.03, respectively. In addition to this, the whole dynamics in this time window, and for multiple samples, can be extremely well fitted by a slight modification of Eq. (<ref>): PL(t) ∝ t^{-1+p} e^{-t/τ} I_0(t/τ). This corresponds to Equation <ref> in the t ≫ τ_1, τ_2 limit, multiplied by a factor t^p which accounts for a possible systematic increase in the power-law exponent similar to that observed in Fig. <ref>. The inset in the top-right corner of Fig. <ref> shows a simultaneous fit of the data from three different samples using Eq. (<ref>), which delivers p=-0.07 and τ_3 = 1.3 μs.
The latter corresponds to an average hopping time of a triplet exciton along the least probable hopping direction (likely the crystallographic c-axis in rubrene <cit.>) of 2 τ_3 = 2.6 μs. Individual fits to the dynamics in multiple samples show minimal variations of p in the range 0.05-0.15 and τ_3 in the range 1 -3 μs. In addition to this experimental determination of the slowest hopping time τ_3, we can also say something about the two faster hopping times. The τ_3 that we found experimentally is six orders of magnitude greater than the average hopping time along the high-mobility b-direction, which can be estimated to be τ_1 ∼ 3 ps from the diffusion length of 4 μm <cit.> and the triplet lifetime of 100 μs <cit.>. Then, the fact that the power-law corresponding to 2D transport is already well-established 20 ns after the beginning of the random walk, coupled with an earlier observation of a quasi-exponential decay in the fluorescence intensity with an apparent time-constant of 4 ns <cit.>, indicates that the average hopping time τ_2 that determines the transition from 1D to 2D transport is very likely of the order of a few nanoseconds. From this we see that geminate fluorescence dynamics is sensitive to the reencounter probability of triplet excitons. This will make it necessary to consider the reencounter probability in any long-timescale investigation of delayed fluorescence in rubrene or analogous singlet fission materials. But most importantly, geminate fusion dynamics can deliver valuable information on triplet exciton transport and the crystal lattice where it happens, all thanks to the ability to detect single photons that originate from annihilation events.<cit.> In future work, this can be exploited to understand more complicated systems, like those in which excitons interact with defects <cit.>. In conclusion, we have shown that the photons emitted during geminate fusion can be effectively used as a probe of triplet exciton transport. We have also demonstrated how data obtained in this way can be analyzed via a simple random-walk model that can be used to extract information on average hopping times and the dimensionality of transport. While we demonstrated these ideas in rubrene, the sensitivity of single-photon counting means that the model and approach are applicable to any singlet-fission material in which geminate triplet-triplet annihilation is accompanied by some probability of photon emission. This opens the door to the general use of geminate-fusion fluorescence as an investigative tool to, e.g., disentangle the effect of variations in the fusion probability γ and of variations in the triplet transport that determines the probability of re-encounter p_γ(t) in temperature-dependent or magnetic-field-dependent studies. Note added in proof: after this work was submitted, we became aware of the related work in Ref. Seki21, which uses a completely different approach to also discuss the influence of the dimensionality of triplet transport on the geminate fluorescence. Acknowledgement: This research was partially supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award # DE-SC0020981.
http://arxiv.org/abs/2405.10076v1
20240516131631
Travelling Waves and Exponential Nonlinearities in the Zeldovich-Frank-Kamenetskii Equation
[ "Samuel Jelbart", "Kristian Uldall Kristiansen", "Peter Szmolyan" ]
math.DS
[ "math.DS", "34E13, 34E15, 34E10, 35K57, 80A25" ]
Travelling Waves and Exponential Nonlinearities in the Zeldovich-Frank-Kamenetskii Equation Samuel Jelbart, Kristian Uldall Kristiansen, Peter Szmolyan May 20, 2024 ================================================================================================================================================================ We prove the existence of a family of travelling wave solutions in a variant of the Zeldovich-Frank-Kamenetskii (ZFK) equation, a reaction-diffusion equation which models the propagation of planar laminar premixed flames in combustion theory. Our results are valid in an asymptotic regime which corresponds to a reaction with high activation energy, and provide a rigorous and geometrically informative counterpart to formal asymptotic results that have been obtained for similar problems using high activation energy asymptotics. We also go beyond the existing results by (i) proving smoothness of the minimum wave speed function c(ϵ), where 0<ϵ≪ 1 is the small parameter, and (ii) providing an asymptotic series for a flat slow manifold which plays a role in the construction of travelling wave solutions for non-minimal wave speeds c > c(ϵ). The analysis is complicated by the presence of an exponential nonlinearity which leads to two different scaling regimes as ϵ→ 0, which we refer to herein as the convective-diffusive and reactive-diffusive zones. The main idea of the proof is to use the geometric blow-up method to identify and characterise a (c,ϵ)-family of heteroclinic orbits which traverse both of these regimes, and correspond to travelling waves in the original ZFK equation. More generally, our analysis contributes to a growing number of studies which demonstrate the utility of geometric blow-up approaches to the study of dynamical systems with singular exponential nonlinearities. Keywords: geometric singular perturbation theory, geometric blow-up, travelling waves, combustion theory, exponential nonlinearity MSC2020: 34E13, 34E15, 34E10, 35K57, 80A25 § INTRODUCTION Exponential nonlinearities appear in differential equation models across the natural sciences and beyond, ranging from models of earthquake faulting <cit.> to electrical oscillators <cit.> and plastic deformation in metals <cit.>. It is also common for modelers to introduce exponential nonlinearities for mathematical reasons, for example in the regularisation of non-smooth systems using a tanh function <cit.>, the study of gene regulatory networks which converge to a non-smooth approximating system at an exponential rate <cit.>, or more recently, via the introduction of singular exponential coordinates in order to study polynomial systems using techniques based on tropical geometry <cit.>. All of the problems cited above share a common mathematical feature, namely, that the exponential nonlinearities lead to singular perturbation problems which have distinct dynamics in different regions of phase space.
The basic reason for this can be seen by considering the nonlinearity e^{x/ϵ}, which is exponentially small, 𝒪(1), or exponentially large as ϵ→ 0 when x < 0, x = 𝒪(ϵ), or x > 0, respectively. Thus, if x is a state variable in a system of differential equations, then one expects to obtain distinct dynamics for x<0, x = 0 and x > 0 as ϵ→ 0. Regions of phase space where the limit is not defined due to unbounded growth, in this case when x > 0, can often be controlled after a suitable `normalisation' (positive transformation of time). For example, the authors in <cit.> applied a normalisation which amounted to a division of the governing equations by 1 + e^{x/ϵ} in order to control the (otherwise unbounded) growth of the nonlinearity e^{x/ϵ}. Since e^{x/ϵ}/(1 + e^{x/ϵ}) → 0 for x<0 and → 1 for x>0 as ϵ→ 0, the normalised system is piecewise-smooth in the limit ϵ→ 0. From here, the loss of smoothness can be `resolved' via a geometric blow-up procedure which involves the insertion of an inner/switching layer which describes the dynamics for x = 𝒪(ϵ), i.e. `in between' the distinct regimes x < 0 and x > 0, using techniques that have in recent years been developed and applied in a large number of works, e.g. in <cit.>. In this work we show that similar techniques can also be used to describe travelling fronts in a reaction-diffusion equation known as the Zeldovich-Frank-Kamenetskii (ZFK) equation, which arises in combustion theory as a model for the propagation of planar laminar premixed flames <cit.>. We focus on one of the simplest variants of the ZFK equation, taken from <cit.> (see Section <ref> for details), but we also expect our methods to apply to more complicated models for planar flame propagation away from so-called flammability limits which – like the simple ZFK equation considered herein – can be described by a (system of) reaction-diffusion equation(s) for which the reaction rate is determined by an Arrhenius law with high activation energy (this is similar to the electrical oscillator models from <cit.> considered in <cit.>). The small parameter is ϵ := β^{-1}, where β is the so-called Zeldovich number, a dimensionless quantity related to the activation energy of the Arrhenius reaction, and β→∞ (ϵ→ 0) is the high activation energy limit. This leads to an exponential nonlinearity which, despite the relative abundance of formal asymptotic studies based on high activation energy asymptotics (HAEA) (here we refer again to <cit.>), has often been viewed as an obstacle to rigorous geometric analysis using modern mathematical approaches like geometric singular perturbation theory (GSPT) <cit.>. We note that the problem considered herein, which is representative of problems with exponential nonlinearities quite generally, is smooth for each ϵ > 0, but non-smooth in the limit ϵ→ 0. This distinguishes it from the problems considered in e.g. <cit.>, where travelling waves that are identified using GSPT and geometric blow-up feature a non-smooth cutoff which renders the equations non-smooth for ϵ > 0. The travelling wave problem can be formulated as a boundary value problem in a planar system of singularly perturbed ODEs. There are two important scaling regimes: a convective-diffusive zone where the reaction term is exponentially small, and a reactive-diffusive zone where the convective term can be neglected <cit.>. We use the geometric blow-up method to glue these two zones together, and subsequently identify a heteroclinic orbit which corresponds to a travelling wave in the original ZFK equation.
Specifically, we show that for each ϵ > 0 sufficiently small there exists a (c,ϵ)-family of travelling waves, where c > 0 is the wave speed, and we prove the existence of a minimal wave speed c = c(ϵ). These results can be seen as a rigorous and geometrically informative counterpart to existing results that have been derived for similar problems using formal methods based on HAEA. In addition to providing a rigorous justification of existing results, we (i) prove the smoothness of the minimum wave speed function c(ϵ), and provide the asymptotics of c(ϵ) up to and including 𝒪(ϵ). This calculation shows that c(ϵ)>1 for all 0<ϵ≪ 1. Moreover, we (ii) identify and provide an asymptotic expansion for a slow manifold which characterises part of the wave profile for non-minimal wave speeds c > c(ϵ) in the convective-diffusive zone. We use a normal form from <cit.> in order to derive the smoothness results in (i), and we use an approach inspired by <cit.> to derive the asymptotics in (ii). The approach used in (ii) is of independent interest, insofar as we expect that it can be applied in order to determine the asymptotics of flat slow manifolds quite generally. The manuscript is structured as follows: In Section <ref> we introduce the ZFK equation and derive the singularly perturbed system of ODEs that we use in order to study the travelling wave problem. Section <ref> is devoted to the separate geometric analyses in the convective-diffusive and reactive-diffusive zones. We state the main result in Section <ref>, which is proven using geometric blow-up and normal forms in Section <ref>. We conclude in Section <ref> with a summary and outlook. § THE ZFK EQUATION Our starting point is the scalar ZFK equation θ_t = θ_xx + ω(θ, β), where t ≥ 0, x ∈ ℝ and θ∈ [0,1] is the so-called reduced temperature, a non-dimensional variable which measures the ratio of burnt to unburnt gas in a simple reaction involving two species (corresponding to burnt and unburnt premix). In particular, θ = 0 corresponds to completely unburnt premix/fuel, and θ = 1 corresponds to a completely burnt state. The function ω(θ,β) models the reaction term, which depends upon the Zeldovich number β > 0. Equation (<ref>) can be obtained via a reduction from a two-component system of reaction-diffusion equations which models the propagation of planar flames if the diffusivities of the two reactants are equal, i.e. when the so-called Lewis number L ≡ 1. We refer to <cit.> and the many references therein for details on the physical interpretation and derivation of the model. We consider the case for which the reaction term is given by ω(θ, β) = λθ(1 - θ)e^{-β(1-θ)}, where λ > 0 is a parameter. The asymptotic regime 0<β≪ 1 is known as the KPP regime, since equation (<ref>) reduces to the KPP equation when β→ 0. We shall be interested in the opposite regime β≫ 1, known as the ZFK regime, and the corresponding high activation energy limit β→∞. The parameter λ > 0 can be scaled out using the parabolic rescaling (x,t) ↦ (x/√(λ), t/λ); however, it is customary to fix a particular choice for λ which ensures that the minimum wave speed derived in later sections is of order unity. We follow this convention here, and set λ = β^2/2, so that ω(θ, β) = (β^2/2) θ(1 - θ) e^{-β(1-θ)}, without loss of generality. This particular choice of ω and λ was used in <cit.> in order to study the so-called KPP-ZFK transition from <cit.>. We shall write ϵ := 1/β, so that ω(θ, ϵ^{-1}) = (ϵ^{-2}/2) θ(1 - θ) e^{-ϵ^{-1}(1-θ)}, and consider the asymptotic behaviour as ϵ→ 0 (which corresponds to β→∞).
We restrict attention to the physically meaningful domain θ∈ [0,1], but do note that ω is unbounded for θ>1 as ϵ→ 0. The graph of θ↦ω(θ,ϵ^{-1}), θ∈ [0,1], for five different values of ϵ is shown in Fig. <ref>. Notice that ω(1,ϵ^{-1})=0 for all ϵ>0 and that ω is exponentially small with respect to ϵ→ 0 for fixed θ∈ [0,1). Nevertheless, simple calculations show that ω is unbounded on θ∈ [0,1] as ϵ→ 0, due to a local maximum at θ=θ_c(ϵ) = 1+𝒪(ϵ) with ω(θ_c(ϵ),ϵ^{-1})=𝒪(ϵ^{-1}). As we shall see, this lack of uniformity with respect to ϵ→ 0 leads to distinct limiting problems in two distinct regimes. In combustion theory, one typically encounters the form ω(θ, β) = λ(1 - θ)e^{-β(1-θ)} instead of (<ref>), see e.g. <cit.> and <cit.>. This form can be derived from the chemistry; however, the travelling wave problem is then ill-posed because ω(0, β) = λe^{-β} > 0 for all β > 0, which means that the boundary condition at θ = 0 (see equation (<ref>) below) cannot be satisfied. This is known as the cold-boundary difficulty <cit.>. In order to circumvent it, one can either (i) adjust the reaction term so that ω(0,β) = 0, as in (<ref>), or (ii) introduce a cutoff by imposing that ω(θ, β) ≡ 0 for all θ < θ_c, where θ_c ∈ (0,1) is close to zero. The former approach is typically preferred in analytical studies such as ours, while the latter tends to be preferable for applications and simulations. Analytically, we expect that the ZFK equation (<ref>) with (<ref>) and a cutoff can be treated using a combination of our approach with the geometric methods developed for problems with cutoffs in <cit.>. §.§ Travelling waves We study the existence of travelling wave solutions for 0 < ϵ≪ 1 within the physically meaningful regime θ∈ [0,1]. Rewriting equation (<ref>) in the travelling wave frame z = x + ct, where c ∈ ℝ is the (presently unknown) wave speed, we obtain the following second order ODE for travelling waves θ(x,t)=θ̃(z): c dθ̃/dz = d^2θ̃/dz^2 + (1/2)ϵ^{-2}θ̃(1 - θ̃)e^{-ϵ^{-1}(1-θ̃)}. We drop the tildes henceforth. As in <cit.>, the boundary conditions are lim_{z→-∞}θ(z) = 0, lim_{z→∞}θ(z) = 1. We now set η := dθ/dz and rewrite equation (<ref>) as the first order system θ̇ = η, η̇ = cη - (1/2)ϵ^{-2}θ(1 - θ)e^{-ϵ^{-1}(1-θ)}, where the overdot denotes differentiation with respect to z. There are two equilibria: p_- : (0,0), p_+ : (1,0), which exist for all c ∈ ℝ and ϵ > 0. Direct calculations show that p_- is a proper unstable node for all c^2 > 2ϵ^{-2}e^{-ϵ^{-1}}, whereas p_+ is a saddle. The strong and weak eigenvalues at the node p_-, which we denote here by λ_s and λ_w respectively, satisfy λ_s := (c/2)(1 + √(1 - 2/(c^2ϵ^2e^{ϵ^{-1}}))) > λ_w := (c/2)(1 - √(1 - 2/(c^2ϵ^2e^{ϵ^{-1}}))), and the corresponding strong and weak eigenvectors (1,v_s)^⊤ and (1,v_w)^⊤ converge to (1,c)^⊤ and (1,0)^⊤ respectively as ϵ→ 0. On the other hand, p_+ lies right at the interface Σ = {(1, η) : η∈ℝ} between θ<1, where the reaction term is exponentially small, and θ>1, where the right-hand side of (<ref>) is undefined in the limit ϵ→ 0. Our aim is to identify and characterise (generally (c,ϵ)-dependent) heteroclinic connections between p_- and p_+ as ϵ→ 0, which correspond to travelling wave solutions in the ZFK equation (<ref>). Note that while heteroclinic orbits that connect to p_- tangentially to the weak eigendirection are robust, heteroclinic orbits which connect to p_- tangentially to the strong eigendirection are not. This follows from the uniqueness of the strong unstable manifold, which will be clarified in Lemma <ref> below.
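As a purely illustrative aside (not part of the paper's analysis), heteroclinic connections of the system above can be traced numerically by shooting backwards along the stable manifold of the saddle p_+; the values of ϵ and c below are arbitrary choices for demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(z, y, c, eps):
    theta, eta = y
    reaction = 0.5 * eps**-2 * theta * (1.0 - theta) * np.exp(-(1.0 - theta) / eps)
    return [eta, c * eta - reaction]

eps, c = 0.2, 1.2  # illustrative parameter values (assumed, not from the paper)

# Stable eigenvalue of p_+ = (1, 0); the Jacobian there is [[0, 1], [eps^-2/2, c]]
lam = 0.5 * (c - np.sqrt(c**2 + 2.0 * eps**-2))
delta = 1e-8
y0 = [1.0 - delta, -delta * lam]  # step into theta < 1 along the eigenvector (1, lam)

# Integrate backwards in z; for c at or above the minimum wave speed the
# orbit should approach the node p_- = (0, 0), tracing the wave profile
sol = solve_ivp(rhs, (0.0, -80.0), y0, args=(c, eps), rtol=1e-10, atol=1e-12)
print(f"backward endpoint: theta = {sol.y[0, -1]:.3e}, eta = {sol.y[1, -1]:.3e}")
```

For c below the minimum wave speed the backward orbit instead leaves the physical domain, consistent with the singular analysis below, where connections with c<1 enter the nonphysical domain θ<0.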
In the following, we also refer to the heteroclinic connections themselves as `weak' or `strong' in order to distinguish these two cases. System (<ref>) has a symmetry (η, z, c) ↔ (-η, -z, -c). We henceforth restrict attention to positive wave speeds c > 0, keeping in mind that corresponding statements can be derived for negative wave speeds using (<ref>). § GEOMETRIC SINGULAR PERTURBATION ANALYSIS We turn now to the geometry and dynamics in the singular limit ϵ→ 0. As identified above, there are two important regimes to consider: * A convective-diffusive zone defined by 1 - θ = 𝒪(1), where the reaction term in (<ref>) is small; * A reactive-diffusive zone defined by 1 - θ = 𝒪(ϵ), where the advection term cη is small relative to the reaction and diffusion terms due to the spatial scale. We consider the geometry and dynamics in each zone separately. §.§ Equations in the convective-diffusive zone Taking ϵ→ 0 in system (<ref>) with θ∈ [0,1) leads to the limiting problem in the convective-diffusive zone: θ̇ = η, η̇ = cη, which does not depend on ω. System (<ref>) has a 1-dimensional critical manifold S := {(θ, 0) : θ∈ [0,1)}, and the Jacobian along S has a single non-trivial eigenvalue λ = c > 0. Thus, S is normally hyperbolic and repelling. Note that p_- ∈ S, but p_+ ∉ S because θ<1 in the convective-diffusive zone. The fast fibers are contained within lines of the form η(θ) = cθ + η(0), see Fig. <ref>, and the strong unstable manifold of p_- is described by the following. Lemma. Fix any κ∈ (0,1) and c>0. Then there exists an ϵ_0 = ϵ_0(κ,c) > 0 such that for all ϵ∈ [0, ϵ_0) the strong unstable manifold of p_- is contained in a C^∞-smooth graph defined by η = cθ + R(θ,ϵ), over θ∈ [0,1-κ], where R is infinitely flat with respect to ϵ→ 0^+, i.e. ∂^n/∂ϵ^n R(θ,0) = 0 for all n∈ℕ, such that R(θ,ϵ) = 𝒪(ϵ^N) for all N∈ℕ. Proof. This follows directly from the Stable Manifold Theorem when ϵ∈ (0,ϵ_0), i.e. when p_- is a proper node <cit.>. However, it remains to prove the result for ϵ = 0. Notice that Fenichel theory yields a strong unstable manifold given by a C^N-smooth graph of the form η = cθ + 𝒪(ϵ^N) for any N ∈ℕ; however, it does not yield the C^∞ property. In order to prove the C^∞ property we apply a polar blow-up of p_- : (θ,η)=(0,0). This turns the strong unstable manifold into the unstable manifold of a hyperbolic saddle, and the result then follows from <cit.>. In the present case, we perform the directional blow-up (r_1,θ_1) ↦ (θ, η) = (r_1θ_1, r_1). Plugging this into (<ref>) gives ṙ_1 = r_1(c - (1/2)ϵ^{-2}θ_1(1 - r_1θ_1)e^{-ϵ^{-1}(1 - r_1θ_1)}), θ̇_1 = 1 - θ_1(c - (1/2)ϵ^{-2}θ_1(1 - r_1θ_1)e^{-ϵ^{-1}(1 - r_1θ_1)}). Here (r_1,θ_1)=(0,c^{-1}) is a hyperbolic saddle for ϵ=0 with the stable manifold contained within the invariant set r_1=0. In fact, the system is C^∞-smooth in a neighborhood of (r_1,θ_1)=(0,c^{-1}), also with respect to ϵ≥ 0. Consequently, there is a hyperbolic saddle for any 0≤ϵ≪ 1 with an unstable manifold that is C^∞, also with respect to c>0 and ϵ≥ 0. This invariant manifold corresponds to the strong unstable manifold upon blowing down. The expansion (<ref>) is a consequence of (<ref>) being flat with respect to ϵ→ 0. In order to justify the orientation of the `reduced flow' indicated by single arrows on S in Fig. <ref>, note that Fenichel theory implies the existence of a locally invariant slow manifold of the form S_ϵ = {(θ, h(θ,ϵ)) : θ∈ [0, 1-κ]}, for an arbitrarily small but fixed constant κ∈ (0,1). The function h is C^N-smooth for ϵ∈ [0,ϵ_0(N)).
Determining the slow flow on S_ is non-trivial, however, because the perturbation in system (<ref>) is flat as → 0. The slow manifold S_ has the following asymptotic expansion η = h(θ, ) = ∑_k=1^∞1/2^k F_k(θ,ϵ) ϵ^-3k+1^-kϵ^-1(1-θ) =1/2cθ(1-θ) ϵ^-2^-ϵ^-1(1-θ)+𝒪(ϵ^-5^-2ϵ^-1(1-θ)), with F_1(θ,ϵ) = 1/cθ(1-θ) and each F_k, k≥ 2, smooth satisfying the recursion cF_k = ∑_j=1^k-1(∂/∂θF_j(θ,ϵ)ϵ+jF_j(θ,ϵ) ) F_k-j(θ,ϵ), k≥ 2. As already noted, the slow manifold is flat with respect to ϵ→ 0 and therefore we cannot determine the flow on the slow manifold in the usual way. Inspired by <cit.>, we proceed instead by looking for a way to rewrite (<ref>) as an extended system in which the flat term 1/2ϵ^-2θ(1-θ) ^-ϵ^-1(1-θ) (or some scalings thereof) defines a new variable. In the present case, it turns out to be useful to introduce η̃ and ζ as follows: η̃=ϵ^-2η, ζ=1/2ϵ^-4^-ϵ^-1(1-θ). In this way, we can eliminate the flat terms by working in the extended (θ,η̃,ζ)-space where the equations are given by θ̇ = ϵ^2 η̃, η̇̃̇ =cη̃-θ(1-θ) ζ, ζ̇ = ϵη̃ζ. Notice that the set 𝒬 := { (θ,η̃,ζ) : ζ = 1/2ϵ^-4^-ϵ^-1(1-θ)}, defines an invariant set for all ϵ≥ 0, upon which (<ref>) reduces to (<ref>). System (<ref>) has a critical manifold defined by η̃= 1/cθ(1-θ)ζ, which is normally hyperbolic and repelling. Fenichel theory implies the existence of a slow manifold for system (<ref>) of the form η̃= 1/cθ(1-θ)ζ+𝒪(ϵ) on compact subsets (θ∈ [0, 1 - κ] as in (<ref>)), for all 0<ϵ≪ 1. At the same time, η̃=ζ=0 is also invariant for (<ref>), and the linearization about any point in this set gives a single nonzero eigenvalue c for all ϵ≥ 0. The center space at a point (θ_0,0,0) is spanned by the vectors (1,0,0)^T, (0,-θ_0(1-θ_0),c)^ T. We therefore have a center manifold that is a graph η̃=G(θ,ζ;ϵ) over (θ,ζ). In particular, G can be expanded as a formal series in ζ having (θ,ϵ)-dependent coefficients: η̃= ∑_k=1^∞ G_k(θ,ϵ) ζ^k. We find that each G_k, k≥ 2, is uniquely determined by the recursion relation cG_k = ϵ∑_j=1^k-1(∂/∂θG_j(θ,ϵ)ϵ+jG_j(θ,ϵ) ) G_k-j(θ,ϵ), k≥ 2. We have G_1(θ,ϵ)=1/cθ(1-θ), G_k(θ,0)≡ 0 for all k≥ 2, and the center manifold is therefore a slow manifold (i.e. an invariant set that is 𝒪(ϵ)-close to the critical manifold). Making the ansatz G_k=F_k ϵ^k-1, k≥ 1, we find that ϵ^k-1 cancels on both sides of the recursion relation and obtain the following equation for F_k=F_k(θ,ϵ): cF_k = ∑_j=1^k-1(∂/∂θF_j(θ,ϵ)ϵ+jF_j(θ,ϵ) ) F_k-j(θ,ϵ), k≥ 2. This leads to the following expansion of the slow manifold of (<ref>): η̃=G(θ,ζ;ϵ)=∑_k=1^∞ F_k(θ;ϵ)ϵ^k-1ζ^k. We then take the intersection of (<ref>) with the invariant set Q, recall (<ref>), and project the result back onto the (θ,η)-plane using (<ref>). This leads to a slow manifold of (<ref>) of the form η=ϵ^2 G(θ,1/2ϵ^-4^-ϵ^-1(1-θ);ϵ), which together with the expansion in (<ref>) completes the result. Restricting system (<ref>) to S_, we find that θ̇|_S_ = 1/2 c^-1^-2θ (1 - θ) ^- ^-1 (1 - θ) + 𝒪( ^-5^- 2 ^-1θ (1 - θ)) as → 0. This shows that θ is increasing between 0 and 1-κ∈ (0,1) for all ϵ∈ (0,ϵ_0(κ)) (i.e. between p_- and p_+ as → 0), as indicated in Fig. <ref>. The critical manifold S can be extended up to θ = 1, i.e. up to p_+, but Fenichel theory cannot be applied near p_+ because the right-hand side of system (<ref>) is not a C^1-perturbation on θ∈ [0,1] (only on a compact subset, see Fig. <ref>). This is also evident from (<ref>), which shows that the validity of the series expansions for S_ breaks down near θ = 1. 
A more ad-hoc way of arriving at the leading order asymptotics in the second line of (<ref>) is to substitute the ansatz h(θ, ) = 12 c^-2θ (1 - θ) ^^-1 (1 - θ) + h̃(θ, ) into (<ref>), and then use formal matching of terms to arrive at h̃(θ, ) = 𝒪(^-5^-2 ^-1 (1 - θ)). We present a more detailed statement in Proposition <ref> in order to demonstrate the utility of a more general approach which is (i) of independent interest as a dynamical systems approach to the derivation of asymptotic series for flat slow manifolds, and (ii) applicable in situations where the ansatz (<ref>) is not `obvious'. §.§ Equations in the reactive-diffusive zone In order to understand the dynamics in the reactive-diffusive zone, i.e. close to Σ, we introduce the following coordinate translation and rescaling: θ = 1 + θ_2. Inserting this into system (<ref>) leads to θ_2' = η , η' = 1/2θ_2 ^θ_2 + ( c η + 1/2θ_2^2 ^θ_2) , where the dash now denotes differentiation with respect to z_2 = z /, and the subscript `2' is chosen in order for consistency with the usual notation for dynamics in the `rescaling chart' in geometric blow-up analyses (our primary tool in Section <ref>). System (<ref>) is a regular perturbation problem, and the limiting system θ_2' = η , η' = 1/2θ_2 ^θ_2 , is Hamiltonian with solutions contained in level sets H(θ_2, η) = const., where H(θ_2, η) = η^2/2 - θ_2 - 1/2^θ_2 . System (<ref>) has a hyperbolic saddle at p_+ : (0,0), with eigenvalues λ_± = ± 1 / √(2). Note that we permit a slight abuse of notation here; earlier we defined p_+ in (<ref>) in (θ, η)-coordinates. Using H(0,0) = 1/2 we find that the the global stable and unstable manifolds of p_+ are given by w_0^s/u(p_+) = { (θ_2, h^s/u(θ_2) : θ_2 ∈}, where h^s(θ_2) = -sign(θ_2) √(1 + (θ_2 - 1) ^θ_2) , h^u(θ_2) = sign(θ_2) √(1 + (θ_2 - 1) ^θ_2) . Notice that h^s(θ_2) → 1 as θ_2 → -∞. This suggests that we look for solutions in the convective-diffusive zone which connect to the point q := (0,1) ∈Σ. The geometry and dynamics described above are sketched in Fig. <ref>. §.§ Singular heteroclinic orbits Using the above, we may construct an entire family of singular heteroclinic orbits. We define Γ(c) := Γ^0(c) ∪Γ^1(c) ∪Γ^2 , c ∈ [1, ∞) , where Γ^0(c) := { (θ, 0) : θ∈[ 0, 1 - c^-1] } , Γ^1(c) := { (θ, c θ + 1 - c) : θ∈[1 - c^-1, 1 ) } , Γ^2 := { (0, η) : η∈ [0, 1] } . Notice that we only consider singular orbits Γ(c) with c ≥ 1, since connections for c<1 enter the nonphysical domain θ<0. In this sense, there is a minimum wave speed (c=1) for =0. The singular orbit Γ(1), corresponding to the minimum wave speed, is distinguished by the fact that Γ^0(1) = p_- and corresponds to a strong connection; recall the discussion prior to Remark <ref>. In this case, there are only two components: Γ(1) = Γ^1(1) ∩Γ^2 . We sketch singular orbits with c = 1 and c > 1 in Fig. <ref>. The connections Γ(c) with c>1 correspond to weak connections, with the additional segment Γ^0(c) corresponding to a slow segment along the critical manifold S. § THE MAIN RESULT We are now in a position to state the main result, namely, the existence of a family of heteroclinic solutions for all > 0 sufficiently small in system (<ref>) which converge to the (non-smooth) singular orbits Γ(c) as → 0. Consider the ZFK system (<ref>). Fix κ>0 small enough and σ>0 large enough. 
Then there exists an _0 = _0(σ) > 0 and a C^∞-smooth function c : [0,_0) →, satisfying c(0) = 1 , c'(0) = ∫_-∞^0 (1-h^s(x)) x-1 ≈ 0.34405>0, recall the definition of h^s in (<ref>), such that the following assertions hold true for each ϵ∈ (0,_0): (i) There are no heteroclinic connections between p_- and p_+ within θ∈[0,1] for c<c(). (ii) There is a strong heteroclinic connection when c = c(), whose corresponding orbit converges to Γ(1) in Hausdorff distance as → 0. (iii) There is a weak heteroclinic connection for each c ∈ (c(), σ], and for each fixed c∈ (1+κ, σ], the corresponding orbit converges to Γ(c) in Hausdorff distance as → 0. The heteroclinic connection described by Theorem <ref> corresponds to a travelling wave solution of the ZFK equation (<ref>). In essence, Theorem <ref> provides a rigorous justification of formal asymptotic results that have been derived for similar problems using HAEA, most notably the existence of travelling wave solutions for each fixed c greater than a minimum wave speed satisfying c = c() ∼ 1; we refer again to <cit.> and the references therein. Assertion (i) shows that there is minimum wave speed for c = c(), and Assertions (ii) and (iii) describe the asymptotic properties of the wave profile as → 0. These two assertions describe an important qualitative difference between the cases c = c() and c > c(), namely, that they correspond to profiles with two vs. three distinct components as → 0 respectively; see again Fig. <ref>. The asymptotics of the `extra' component close to Γ^0(c) when c > c() are described by the slow manifold asymptotics in Proposition <ref>. To the best of our knowledge, the smoothness of the minimum wave speed function c() has also not been proven. Finally, we emphasize that c(ϵ)>1 ∀ 0<ϵ≪ 1, cf. (<ref>). In Figs. <ref> and <ref> we illustrate the results of numerical computations. The results are in agreement with Theorem <ref>. In particular, in Fig. <ref> we show the stable manifold W^s(p_+) (magenta) and the strong unstable manifold W^uu(p_-) (green) for =0.01 and two different c-values: c=1.5 in (a) and c=c(ϵ) (using the linear approximation c(ϵ) ≈ 1+0.34405×ϵ provided by (<ref>)) in (b). The results were obtained using the linear approximations offered at p_± and Matlab's ODE45 with low tolerances (10^-12). In (a), with W^s(p_+) lying below W^uu(p_-), we have a weak connection. The resulting profile is shown in Fig. <ref> (dashed line). Due to exponentially slow flow along S, recall Proposition <ref>, the decay θ(z)→ 0 for z→ -∞ is slow and it is not visible in Fig. <ref>(a). Fig. <ref>(b) shows that the linear approximation offered by (<ref>) for c() is accurate for this value of . In fact, W^uu(p_-) and W^s(p_+) is inseparable on the scale shown (being of the order ∼ 10^-4). The resulting profile is shown in Fig. <ref> (full line). See figure captions for further details. The travelling wave corresponding to the minimum wave speed c = c() has received the most attention in the HAEA literature <cit.>, primarily because it is expected to be stable, see e.g. <cit.> for recent work on the so-called marginal stability conjecture. We do not address questions of stability or selection in this work. § PROOF AND GEOMETRIC BLOW-UP As usual, we begin by considering the extended system θ̇ = η, η̇ = c η - 1/2^-2θ (1 - θ) ^- ^-1 (1 - θ) , = 0 , which is obtained from system (<ref>) by appending the trivial equation = 0. 
The extended system (<ref>) is not defined at the interface Σ = { (1, η, 0) : η∈}, or more generally on the set defined by θ > 1, = 0. Note that we permit a slight abuse of notation by using the same notation for Σ as in (<ref>) (the two sets are naturally identified). In order to obtain a well-defined system on θ > 1, = 0, we follow <cit.> and “normalise the equations" through division of the right-hand side by ϵ^-2(1+1/2^ϵ^-1(θ-1)). This corresponds to a reparametrization of time for >0; we denote the new time by s. In this way, we obtain an equivalent system for >0, having a well-defined piecewise smooth limit → 0, with Σ being the discontinuity set/switching manifold. In particular, θ/ s =0, η/ s =θ(θ-1), θ>1, for → 0. In line with <cit.>, we then gain smoothness along Σ by applying the cylindrical blow-up transformation: r ≥ 0 , (θ̅, ) ∈ S^1∩{≥ 0} ↦ θ = 1 + r θ̅, = r , which fixes η. Notice specifically that ^-ϵ^-1(1-θ) = ^-^-1θ̅ under (<ref>), which defines a smooth function on (θ̅, )∈ S^1 with ϵ̅≥ 0 and θ̅≤ 0. To analyse the normalised equations under the blowup transformation (<ref>), we work in local coordinate charts defined by K_1 : θ̅= - 1 and K_2 : = 1. Local coordinates in charts are related to the original coordinates (θ, η, ) by K_1 : (θ, η, ) = (1 - r_1, η, r_1 _1) , K_2 : (θ, η, ) = (1 + θ_2, η, ) , which we insert into (<ref>) (and apply appropriate desingularization to ensure that the system is well-defined and smooth). The local change of coordinates formulae are given by κ_12 : r_1 = - θ_2 , _1 = - 1 / θ_2 , θ_2 < 0 , κ_21 : θ_2 = - 1 / _1 , = r_1 _1 , _1 > 0 . The coordinates in chart K_2 are just the coordinates of the inner equations used in Section <ref>, except that we now view this system in the extended (θ_2, η, )-space. (In this way, (<ref>) also compactifies (<ref>).) Recall that (<ref>) is a smooth regular perturbation problem on compact subsets. Thus, it remains to consider the (smooth) equations in chart K_1, where the matching between the inner and outer system occurs. §.§ Singular geometry and dynamics in K_1 The equations in chart K_1 are given by r_1' = - r_1 η , η' = c r_1 η - 1/2_1^-2 (1 - r_1) ^- _1^-1 , _1' = _1 η , after a transformation of time which amounts to multiplication of the right-hand side by r_1. We permit a slight abuse of notation by using a dash to indicate differentiation with respect to the new independent variable (as opposed to the usage in system (<ref>)). Notice that system (<ref>) is smooth on r_1≥ 0, ϵ_1≥ 0, as desired, and that the planes defined by r_1 = 0 and _1 = 0, as well as their intersection; the line r_1 = _1 = 0, are invariant. In particular, the set L := { (0, η, 0) : η∈} defines a line of resonant saddles for system (<ref>), with eigenvalues -η, 0 and η (except for a non-hyperbolic point at (0,0,0), which does not play a role in the analysis). The limiting dynamics from the convective-diffusive zone appear within {_1 = 0 }, and are governed by r_1' = - r_1 η, η' = c r_1 η. Indeed, this system is equivalent to (<ref>) on r_1>0 upon setting r_1=1-θ. We therefore also re-discover the repelling critical manifold S as S_1 = { (r_1, 0, 0) : r_1 ≥ 0 } . Considered within {_1 = 0}, the Jacobian evaluated along S_1 has eigenvalues 0 and c r_1 ≥ 0. Trajectories are contained in straight lines of the form η(r_1) = - c r_1 + η(0). On the other hand, the limiting dynamics in the reactive-diffusive zone appear within {r_1 = 0}. We are particularly interested in the extension of w_0^s(p_+) into chart K_1. 
Using (<ref>) and (<ref>), we obtain w_0,1^s(p_+) := κ_21( w_0^s(p_+) ×{0}) = { (0, h_1^s(_1), _1) : _1 ≥ 0 } , where h_1^s(_1): = √( 1 - _1^-1 (1 + _1) ^- _1^-1), the subscript “1" indicates that we are viewing the object in chart K_1, and the solution within w_0,1^s(p_+) is backward asymptotic to the point q : (0,1,0) ∈ L. The singular geometry and dynamics is summarised in Fig. <ref>. The preceding analysis allows for the construction of an improved family of singular heteroclinics Γ(c) = Γ^0(c) ∪Γ^1(c) ∪Γ^2 , considered now in the blown-up space. The definitions for Γ^0(c) and Γ^1(c) remain unchanged (after a natural embedding into the extended (θ, η, )-space or representation in chart K_1 coordinates), but Γ^2 is replaced by Γ^2 = q ∪(w_0^s(p_+)×{0}). Improved singular heteroclinic orbits for c = 1 and c > 1 are shown in Fig. <ref>. §.§ Perturbation It remains to describe the perturbation of the improved singular heteroclinics in (<ref>). The main task is to understand the local passage near q ∈ L. To this end, we divide the right-hand side of system (<ref>) by η (which is positive close to q) and consider the flow induced by the system r_1' = - r_1 , η' = c r_1 - 1/2η^-1_1^-2 (1 - r_1) ^- _1^-1 , _1' = _1 . System (<ref>) can be brought into a kind of local normal form by straightening the fibers of the stable and unstable manifolds with base points along L, which are contained in the invariant planes {r_1 = 0} and {_1 = 0} respectively. This can be achieved with an (r_1,ϵ_1)-fibered diffeomorphism defined by (r_1,y_1,ϵ_1)↦η=-cr_1 +f_1(y_1,ϵ_1), where f_1(y_1,ϵ_1) = y_1 √(1- y_1^-2(_1^-1 + 1)^-ϵ_1^-1) and the inverse is given by y_1 = √((η + c r_1)^2 + (_1^-1 + 1 ) ^-ϵ_1^-1) . Indeed, in these coordinates the sets defined by (r_1,y_1,0), r_1≥ 0 and (0,y_1,ϵ_1), ϵ_1≥ 0 with y_1>0 fixed are the stable and unstable manifolds of the point (0,y_1,0); see Fig. <ref>. Here we have used the conservation of H(-_1^-1,η)=const. within r_1=0 to write the fibers of the unstable manifold with base point (r_1,y_1,ϵ_1)=(0,y_1,0) as r_1=0, H(-ϵ_1^-1, η)=y_1^2/2. Solving the last expression for η gives η=f_1(y_1,ϵ_1). Notice that f_1(1,ϵ_1) = h^s(-ϵ_1^-1), since η = h^s(θ_2) corresponds to the level set H(η,θ_2)=1/2 (i.e. y_1=1). This leads to the following equations: ṙ_1 = -r_1, ẏ_1 = r_1 F_1(r_1,y_1,ϵ_1,c), ϵ̇_1 =ϵ_1, where F_1(r_1,y_1,ϵ_1,c):=1/2 y_1^-1ϵ_1^-2^-ϵ_1^-1f_1(y_1,ϵ_1)-c/f_1(y_1,ϵ_1)-cr_1 . Notice that F_1 is infinitely flat in ϵ_1→ 0, i.e.  ∂^n /∂ϵ_1^nF_1(r_1,y_1,0,c)=0, for all n∈ℕ. We shall adopt the following notation: for any N∈ℕ∪{∞}, we let j_N G(·) denote the Nth-order Taylor-jet of a C^N-smooth function x↦ G(x) defined in a neighborhood of x=0. Then we can write (<ref>) as j_∞ F_1(r_1,y_1,·,c) = 0. System (<ref>) is similar (but not identical) to the normal form in <cit.> (here F_1, called k in <cit.>, only depends upon r_1 through r_1ϵ_1=ϵ). Let π : Δ_1^in→Δ_1^out denote the transition map induced by the flow of system (<ref>), where Δ^in_1 = { (ρ, y_1, _1) : y_1 ∈ I , _1 ∈ [0, δ] } , Δ^out_1 = { (r_1, y_1, δ) : r_1 ∈ [0,ρ] , y_1 ∈Ĩ} , with I := [1-β, 1+β] ⊂ (0,∞) and Ĩ := [1 - β̃, 1 + β̃] for some fixed β∈ (0,1), β̃> β, and small but fixed ρ, δ > 0. Due to resonances along L, one would in general expect logarithmic terms in the expansion of π (see e.g. <cit.>). However, all resonant terms are absent in the present case due to the flatness (<ref>) with respect to ϵ_1. Indeed, we have the following: Fix any N∈ℕ and a compact interval J⊂ (0,∞). 
Then for ρ>0, δ>0 sufficiently small, there exists a C^N-smooth function Y : I × [0,δ) × J →ℝ such that the map π : Δ_1^in→Δ_1^out is well-defined and given by π : (ρ, y_1, _1) ↦( ρ/δ_1, Y(y_1, _1, c ; ρ, δ), δ) . The first component of the right-hand side in (<ref>) follows directly from = _1 r_1 = const. It therefore remains to show that the function Y is well-defined and C^N-smooth. In order to do so, we show that system (<ref>) can be brought into the local normal form in <cit.>, which, based on additional arguments in <cit.>, is sufficient to prove C^N-smoothness of π. Since ρ, δ>0 are fixed as parameters, we shall suppress the dependency of Y on these quantities. We also suppress the dependency on c; the smoothness with respect to this parameter should be obvious from the proof. First, we claim that for any N∈ℕ there is an (r_1,ϵ_1)-fibered C^∞-diffeomorphism y_1 = y_N+α_N(r_1,y_N,ϵ_1), with j_∞α_N(r_1,y_N,·)=0, such that ṙ_1 = -r_1, ẏ_N = r_1^N F_N(r_1,y_N,ϵ_1), ϵ̇_1 =ϵ_1, with j_∞ F_N(r_1,y_N,·)=0. Following <cit.>, we proceed by induction. We have already shown the base case N = 1 in the derivation of system (<ref>) using (<ref>). We therefore proceed directly to the induction step and suppose that the statement is true for N=n. We write y_n+1 = y_n + r_1^n β_n(y_n,ϵ_1), with the (as yet undetermined) β_n satisfying j_∞β_n(y_n, ·) = 0. Writing F_n(r_1,y_n,ϵ_1) = F_n,0(y_n,ϵ_1) +r_1 F_n,1(r_1,y_n,ϵ_1) , we obtain ẏ_n+1 = r_1^n {F_n,0(y_n+1,ϵ_1) -n β_n(y_n,ϵ_1) + ϵ_1 ∂/∂ϵ_1β_n(y_n,ϵ_1)}+𝒪(r_1^n+1). The 𝒪(r_1^n) terms vanish if we impose the following solvability condition on β_n: ϵ_1∂/∂ϵ_1β_n(y_n,ϵ_1) =n β_n(y_n,ϵ_1)-F_n,0(y_n,ϵ_1) , j_∞β_n(y_n, ·) = 0, which may be solved to give β_n(y_n,_1) = - _1 ∫_0^_1 s^-2 F_n,0(y_n,s) s. Now fix N+3∈ℕ and use = r_1 _1 to rewrite system (<ref>) as ṙ_1 =-r_1, ẏ_N+3 = ϵ^N+3 R(r_1, y_N+3, ϵ_1), ϵ̇_1 =ϵ_1, where R:=ϵ_1^-(N+3) F_N+3. Notice that this is smooth since j_∞ F_N+3(r_1,y_n,·)=0. System (<ref>) is – up to a change of notation – precisely the local normal form appearing in <cit.> (case a = 0). C^N-smoothness of the function Y then follows from the arguments in <cit.>. Using Proposition <ref> and a number of elements of the proof thereof, we can derive the leading order asymptotics for Y(y_1, _1,c). The function Y has the following asymptotic expansion in _1: Y(y_1, _1, c) = y_1 + _1 ρ∫_0^δ s^-2 F_1(0,y_1,s,c) s + 𝒪(ϵ_1^2) , where the function F_1 is given in (<ref>). From (<ref>) and the proof of Proposition <ref> we have the following relations: y_N+3^out = y_N+3^in + 𝒪(^N), y_1 = y_N+3 + α_N+3(r_1, y_N+3, _1), y_N+3 = y_1 + α̃_1(r_1, y_1, _1) , where α̃_1(r_1, y_1, _1) is C^N-smooth and 𝒪(_1). Using these relations to evaluate y_1^out = Y(y_1, _1, c), we obtain Y(y_1, _1, c) = y_1 + α̂_1(ρ, y_1, _1) , where the function α̂_1(ρ, y_1, _1) is C^N-smooth and 𝒪(_1) as _1 → 0. In order to determine α̂_1(ρ, y_1, _1) to leading order in _1 → 0, we use the transformation y_2 = y_1 + α_1(r_1, y_1, _1) = y_1 + r_1 β_1(y_1, _1) and the fact that y_2^out = y_2^in + α̂_2(ρ, y_2^in, _1^in) , where the function α̂_2(ρ, y_2^in, _1^in) is C^N-1-smooth and 𝒪((_1^in)^2) as _1^in→ 0 (this follows from relations analogous to those which led to (<ref>)). We obtain Y(y_1, _1, c) = y_2^out - ρ/δ_1 β_1(Y(y_1, _1, c), δ) = y_2^in - ρ/δ_1 β_1(y_1, δ) + 𝒪(_1^2) = y_1 - ρ/δ_1 β_1(y_1, δ) + 𝒪(_1^2) . Substituting the expression for β_1(y_1, δ) in (<ref>) leads to (<ref>), as required. 
Taken together, Proposition <ref> and Lemma <ref> imply that the transition map π : Δ_1^in→Δ_1^out is C^N-smooth of the form (<ref>), with Y(y_1,_1,c) given by (<ref>). We now use Lemma <ref> in order to derive a bifurcation equation on the section Δ_1^out. More precisely, we introduce a distance function on Δ_1^out via 𝒟(c,) := Y_u(c,) - Y_s(c,) . Here Y_u(c,) denotes the y_1-coordinate of the intersection w_,1^u(p_-) ∩Δ_1^out, where w_,1^u(p_-) denotes the (unique) strong unstable manifold of the point p_- in K_1 coordinates, and Y_s(c,) denotes the y_1-coordinate of the intersection w_,1^s(p_+) ∩Δ_1^out, where w_,1^s(p_+) denotes the (unique) stable manifold of the saddle p_+, which is a perturbation of the limiting stable manifold in (<ref>). Notice that w_0,1^u(p_-) = Γ^1(1), w_0,1^s(p_+) = Γ^2 ∖ p_+, and that zeros of (<ref>) correspond to heteroclinic solutions of system (<ref>). For each N ∈ℕ there exists an _0 = _0(N) > 0 such that Y_u is C^N-smooth on (c,) ∈ (1-β,1+β) × [0,_0) and satisfies Y_u(c,) = c + ∫_0^δ s^-2 F_1(0,c,s,c) s + 𝒪(^2) as → 0. Fixing c ∈ (1 - β, 1 + β) ensures that w_0^u(p_-), as an orbit of (<ref>), intersects Σ in a point (1,c). Regular perturbation theory on compact subsets of θ<1 implies that the fast fiber corresponding to this connection lifts to a C^∞-smooth curve of the form γ^in(c,) := { (ρ, c + 𝒪(^N), ρ^-1 ) : ∈ [0,_0] }⊂Δ_1^in, when viewed in the (r_1, y_1, _1)-coordinates, where _0 := δρ, see also Lemma <ref>. Note that γ^in(c,) ⊂Δ_1^in as long as we choose δ and ρ sufficiently small. Thus we can use Proposition <ref> and Lemma <ref> in order to extend w_^u(p_-) to Δ_1^out. Expanding in and using (<ref>) together with the fact that = _1 ρ for points on Δ_1^in, we obtain Y_u(c,) = Y (1 + 𝒪(^N), ρ^-1, c ) = c + ∫_0^δ s^-2 F_1(0,c,s,c) s + 𝒪(^2) as → 0, as required. We now derive an implicit equation for Y_s(c,). For each N ∈ℕ there exists an _0 = _0(N) > 0 such that Y_s(c,) is C^N-smooth on c∈ (1-β,1+β) × [0,_0) and satisfies Y_s(c,)^2 - 1/2 - ϵ[ cδ^-1 f(Y_s(c,),δ) - ∫_δ^∞( cs^-2 f_1(1,s) +s^-4/2^-s^-1) s] + 𝒪(ϵ^2) = 0 as → 0. We begin in K_2 coordinates (η,θ_2,), and look for an expression for the intersection w_^s(P_s) ∩κ_12(Δ_1^out), where P_s := { (0,0, ) : ∈ [0,_0] } denotes the line of equilibria emanating from p_+. We use the fact that trajectories are contained in constant level sets of the Hamiltonian function H(θ_2, η) defined in (<ref>) when = 0, and consider now the perturbed dynamics for ∈ [0, _0]. We have /θ_2 H= ϵ(c η +θ_2/2^θ_2), using H=H(θ_2, η(θ_2)). Along the stable manifold η(θ_2;c,ϵ) = h^s(θ_2)+𝒪(ϵ), where h^s(θ_2) is given by (<ref>), regular perturbation theory implies /θ_2 H=ϵ(c h^s(θ_2) +θ_2/2^θ_2)+𝒪(ϵ^2) , since we are working on compact subsets. Integrating this expression from θ_2=0 (where the saddle is) to θ_2 = -1 / δ (where κ_12(Δ_1^out) is), we obtain H(-1/δ, η^out)-1/2 = ϵ∫_0^-1/δ( ch^s(θ_2) +θ_2^2/2^θ_2) θ_2+𝒪(ϵ^2) , where η^out denotes the η-coordinate of the intersection w_^s(P_s) ∩κ_12(Δ_1^out) to be solved for. Using (<ref>), we have H(-1/δ, η^out) =(y_1^out)^2/2-∂/∂η H (-1/δ, f_1(y_1^out))cr_1^out+𝒪((r_1^out)^2) =(y_1^out)^2/2-cr_1^outf_1(y_1^out,δ)+𝒪((r_1^out)^2) where y_1^out = Y_s(c,) is given in terms of η^out by the right-hand side of (<ref>) evaluated at (r_1^out, η^out, δ), where r_1^out = δ^-1ϵ. Using (<ref>) and (<ref>), we obtain the equation (y_1^out)^2/2 -1/2 - ϵ[ cδ^-1 f(y_1^out,δ) - ∫_δ^∞( cs^-2 f_1(1,s) +s^-4/2^-s^-1) s+𝒪(ϵ)] = 0 , as required. 
In order to identify the zeros of 𝒟(c,), it suffices to consider solutions to the bifurcation equation B(c, ) := B(Y_u(c,), c, ) = 0 , where the function B(X,c,) is defined using the implicit equation (<ref>), i.e.  B(X, c, ) := X^2 - 1/2 - ϵ[cδ^-1 f(X,δ) - ∫_δ^∞( cs^-2 f_1(1,s) +s^-4/2^-s^-1) s] + 𝒪(ϵ^2) . The following result establishes a solution to (<ref>). For each N ∈ℕ there exists an _0 = _0(N) > 0 and a C^N-smooth function c : [0,_0) → [1, ∞) such that B(c(), ) = 0 for all ∈ [0,_0). In particular, c(0) = 1 , c'(0) = ∫_-∞^0 (1-h^s(x)) x-1 ≈ 0.34405>0 . Using (<ref>) and (<ref>), we obtain B(1,0) = 0 and B_c'(1,0) = 1 ≠ 0. Thus, the existence of the C^N-smooth function ↦c() satisfying c(0) = 1 follows from the C^N-smoothness of B and the implicit function theorem. It remains to calculate c'(0). Since c'(0) = - B'_(1,0) / B'_c(1,0) = -B'_(1,0) by implicit differentiation, it suffices to determine B'_(1,0). From (<ref>), using the expression for Y_u(c,) in (<ref>), we obtain B_'(1, 0) = ∫_0^δ s^-2 F_1(0,1,s,1) s - δ^-1 f(1,δ) + ∫_δ^∞( s^-2 f_1(1,s) +s^-4/2^-s^-1) s , where we also used the fact that c(0) = 1. In order to simplify the right-hand side, we use the definitions of f_1 and F_1 given in (<ref>) and (<ref>), together with ∫_0^∞s^-4/2^-s^-1 s = 1. We obtain B_'(1, 0) = 1 + [ - δ^-1 f_1(1,δ) + 1/2∫_0^δs^-4^-s^-1/f_1(1,s) s + ∫_δ^∞ s^-2 f_1(1,s) s ] . Differentiating with respect to δ shows that (<ref>) is independent of δ. Therefore, B_'(1, 0) = lim_δ→ 0B_'(1, 0) = 1 + lim_δ→ 0 (∫_δ^∞ s^-2 f_1(1,s) s - δ^-1 f_1(1,δ)) = 1 + ∫_0^∞ s^-2(f_1(1,s)-f_1(1,0)) s = 1 + ∫_-∞^0 (h^s(θ_2)-1) θ_2, as required. §.§ Completing the proof of Theorem <ref> Lemma <ref> shows the existence of a C^N-smooth minimal wave speed function c() satisfying (<ref>) and Assertion (ii) of Theorem <ref>. To see that c is C^∞, we first notice that c is defined on a domain [0,ϵ_0) with ϵ_0>0 small enough (we can apply Lemma <ref> with N=1). Clearly c is C^∞ for any ϵ∈ (0,ϵ_0), since the original ZFK system (<ref>) is C^∞. We therefore only have to show that c is C^∞ at ϵ=0. For this purpose, we use the uniqueness of c(ϵ) for ϵ∈ (0,ϵ_0) and Lemma <ref> to conclude that c is C^N on [0,ϵ_0(N)) for some 0<ϵ_0(N)≪ 1 for any N∈ℕ. The claim now follows. Next, we notice that all heteroclinic connections with cc(ϵ) are weak, due to the uniqueness of the strong heteroclinic connection. In particular, for c<c(ϵ) the heteroclinic connections enter θ<0. This shows Assertion (i) of Theorem <ref>. To prove Assertion (iii), we just have to show that the connections limit to Γ(c) in Hausdorff distance. For this purpose, we first notice that for c>c(ϵ) the stable manifold w_ϵ^s(p_+) lies below the strong unstable manifold w_ϵ^u(p_-). Moreover, w_0^s(p_+) and Γ^1(c) (as a trajectory of (<ref>)) have q : (r_1,η,ϵ_1)=(0,1,0) as their α-limit resp. ω-limit sets in the K_1-chart, as shown in Figure <ref>. It then follows from Proposition <ref> that the intersection of w_ϵ^s(p_+) with Δ^in={(θ,η):θ=1-ρ} is 𝒪(ϵ)-close to Γ^2(c)∩Δ^in. Here we abuse notation slightly by using Δ^in for the blown-down version of (<ref>). The result then follows from Fenichel theory and Proposition <ref>. § SUMMARY AND OUTLOOK A primary aim of this work was to demonstrate the suitability of GSPT and geometric blow-up for the study of dynamical systems with singular exponential nonlinearities. 
We would like to emphasise and reiterate the following methodological point, which is supported by our analysis of the ZFK equation (<ref>) as well as the analysis of electrical oscillator models in <cit.>: Smooth dynamical systems with singular exponential nonlinearities can often be formulated as singular perturbation problems with a piecewise-smooth singular limit, after which the geometry and dynamics can be analysed using GSPT and geometric blow-up. This means that different limiting problems are to be expected on different regions of the phase space which are separated by switching manifolds, along which the system loses smoothness. Because the system is smooth for > 0, one expects to be able to find a smooth connection between these regions by inserting an inner `rescaling regime', where another limiting problem arises, after geometric blow-up; we refer again to <cit.>. In the case of system (<ref>), the singular exponential nonlinearity in the reaction term ω(θ, ^-1) leads to a non-smooth singular limit in a `normalised system' which is obtained from (<ref>) by dividing the the right-hand side by the positive expression in (<ref>), since ω(θ, ^-1)/^-2( 1 + 1/2^^-1 (1 - θ)) = 0, θ < 1 , θ (1 - θ) , θ > 1 , as → 0. This leads to different limiting problems on three distinct regimes: θ < 1, θ = 1 + 𝒪() and θ > 1. In order to identify and characterise travelling wave solutions to the ZFK equation (<ref>), it sufficed to consider the dynamics in two regimes only, namely, in the convective-diffusive zone wih θ < 1 bounded away from Σ = {θ = 1 }, and a reactive-diffusive zone with θ = 1 + 𝒪(). After extending the phase space and applying the blow-up transformation (<ref>), we were able to smoothly connect the analyses in each of these zones, which we performed separately in Section <ref>. The connection problem in chart K_1 within the blow-up is smooth, even though the singular limit in the original system (<ref>) is not, showing that the blow-up has `resolved' the loss of smoothness. Our main result is Theorem <ref>, which describes a (c,)-family of heteroclinic orbits in system (<ref>) which correspond to travelling waves the original ZFK equation (<ref>). In essence, Theorem <ref> is the rigorous and geometrically informative counterpart of existing results on travelling waves in (a number of closely related variants of) the ZFK equation, which have been obtained formally using HAEA in e.g. <cit.>. We also moved beyond the existing formal results by proving that the minimal wave speed function c() is C^∞ for all ∈ [0,_0) (Proposition <ref> and the normal form in <cit.> was crucial for this), and providing an asymptotic expansion for the slow manifold S_ (see again Proposition <ref>) which plays an important role in the construction of the non-minimal waves for c > c(). The latter feature is not likely to be of significance for the ZFK problem in particular, given that one expects the system to select the minimal wave speed c() (recall Remark <ref>). Nevertheless, the proof of Proposition <ref> is of independent interest because it provides a rigorous and dynamical systems based approach to the determination of asymptotic expansions for flat slow manifolds. This work paves the way for more complicated analyses, particularly because of the emphasis on the development of transferable methods for study of dynamical systems with exponential nonlinearities in general. We also left questions of stabilitiy and selection completely open, see e.g. Remark <ref>. 
With regard to problems in combustion theory, a natural continuation of this work would be to consider the case of planar flames with non-unity Lewis number, in which case one considers two species reaction-diffusion equations with different diffusivity constants as in <cit.>. These and other related problems remain for future work. TODO § ??NUMERICS?? § SYSTEMATIC APPROACH Oultine the mathematical/systematic formulation for general interest The first step is to normalise system (<ref>) so that the limit as → 0 is defined on for all θ≥ 0, and in particular over a neighbourhood of θ = 1. To this end, we define ϕ_θ(s) : = 1/1+1/2θ s ^ s . ϕ_θ is a (non-monotone) sigmoidal function satisfying lim_s→ -∞ϕ_θ=1, lim_s→ +∞ϕ_θ = 0, for each θ∈ (0,2). (In particular, ϕ_θ has a single extremum (maximum) at s=-1 with value 1/(1 - 1/2θ^-1) for θ∈ (0,2).) Note that the auxiliary functions (0,∞)∋ s↦ϕ_θ(-s^-1), (0,∞)∋ s ↦ϕ_θ(s^-1), due to the exponential decay, each have smooth extensions to s=0, θ∈ (0,2). We then normalise system (<ref>) by applying a positive transformation of time which amounts to multiplication of the right-hand side by ϵϕ_θ(^-1(θ-1)) , thereby obtaining a new system in the form [ θ'; η' ] = ϕ_θ( (θ - 1) ^-1) V_-(θ, η, ) + ( 1 - ϕ_θ( (θ - 1) ^-1) ) V_+(θ, η, ) , where the vector fields V_-(θ,η,ϵ) := ϵη[ 1; c ], V_+(θ,η,ϵ) = [ 0; 1 ] , depend upon ϵ in a regular, smooth way, and we permit a slight abuse of notation by allowing the dash to denote differentiation with respect to the new time. System (<ref>), which is equivalent to system (<ref>) for all >0 due to the multiplication by the positive quantity (<ref>), is an example of a smooth vector-field approaching a piecewise smooth one: V_-(θ,η,0)=(0,0)^T for θ<1 and V_+(θ,η,0)=(0,1)^T for θ>1 in the limit → 0, and the set Σ is the associated discontinuity set (also known as the switching manifold, see <cit.>). Notice that the rescaled vector-field ϵ^-1 V_-(θ,η,ϵ) corresponds to (<ref>) studied in the outer regime. The normalised ZFK system (<ref>) has four distinct time-scales as → 0, which correspond to (i) exponentially slow flow along S, (ii) `slow-intermediate' layer flow in {θ∈ [0,1) }∖ S, (iii) `fast-intermediate' flow in the inner regime 1 - θ = O(), and (iv) exponentially fast flow in {θ > 1}. Systems of the form (<ref>) have received significant interest over the last decade <cit.>. We now describe the general process of <cit.> for gaining smoothness of (<ref>). We begin by considering the extended system [ θ'; η' ] = [ ϕ_θ( (θ - 1) ^-1) V_-(θ, η, ) + ( 1 - ϕ_θ( (θ - 1) ^-1) ) V_+(θ, η, ) ] , ' = 0 , which is obtained from system (<ref>) by an additional multiplication of the right hand side (differentiation with respect to the new time is again denoted with a dash) and appending the trivial equation ' = 0. Let W denote the associated vector-field. The set =0 is a set of singularities of W, but the set Σ = { (1, η, 0) : η∈} , for which we use the same notation as in (<ref>) (the two sets are naturally identified), is extra degenerate due to the lack of smoothness there. In order to resolve the loss of smoothness along Σ, we follow <cit.> and apply the blow-up transformation Φ defined by r ≥ 0 , (θ̅, ) ∈ S^1 ↦ θ = 1 + r θ̅, = r . In this way, the pull-back vector-field W:=Φ^*(W) has as a common factor; more importantly, see <cit.>, the desingularized vector-field W:=^-1W, is well-defined and smooth (since the auxiliary functions (<ref>) have smooth extensions to s=0). 
To study W, we work in local coordinate charts defined by K_1 : θ̅= - 1 and K_2 : = 1. Local coordinates in charts are related to the original coordinates (θ, η, ) by K_1 : (θ, η, ) = (1 - r_1, η, r_1 _1) , K_2 : (θ, η, ) = (1 + θ_2, η, ) , and the local change of coordinates formulae are given by κ_12 : r_1 = - θ_2 , _1 = - 1 / θ_2 , θ_2 < 0 , κ_21 : θ_2 = - 1 / _1 , = r_1 _1 , _1 > 0 . Notice that the coordinates in chart K_2 are precisely coordinates of the inner equations used in Section <ref>, except that we now view this system in the extended space. The equations (,'=0) are therefore a local version of W (up to a regular reparametrization of time). Thus, it remains to consider the equations in chart K_1, where the matching occurs. siam
http://arxiv.org/abs/2405.08902v1
20240514183253
Minimization of Euclidean energy of $j-$degree mappings between annuli
[ "David Kalaj" ]
math.CV
[ "math.CV" ]
Minimization of Euclidean energy of j-degree mappings]Minimization of Euclidean energy of j-degree mappings between annuli D. Kalaj]David Kalaj University of Montenegro, Faculty of natural sciences and mathematics, Podgorica, Cetinjski put b.b. 81000 Podgorica, Montenegro davidk@ucg.ac.me [2010]Primary 35J60; Secondary 30C70 [ [ May 20, 2024 ================ Let 𝔸 and be circular annuli in the complex plane and consider the Dirichlet energy integral of j-degree mappings between and . Then we minimize this energy integral. The minimizer is a j-degree harmonic mapping between annuli and provided it exits. If such a harmonic mapping does not exist, then the minimizer is still a j-degree mapping which is harmonic in '⊂ and it is a squeezing mapping in its complementary annulus ”=∖. Such a result is an extension of the certain result of Astala, Iwaniec and Martin <cit.>. § INTRODUCTION In this paper, we continue to study the minimization problem of the Dirichlet energy of Sobolev mappings belonging to the W^1,2 class, between domains in ℝ^2. Such research is related to the principle of non-interpenetration of matter in the mathematical theory of Nonlinear Elasticity (NE) see for example <cit.>. We will minimize the energy of j-degree mappings between certain doubly connected domains. §.§ Dirichlet integral Let D and Ω be two bounded planar domains in ℝ^2, and let h:Donto⟶Ω be a mapping that belongs to the Sobolev class W^1,2. The Dirichlet integral (also called conformal energy) of h is given as follows: ℰ[h]=∫_D|Dh(z)|^2dxdy, where z=(x, y)∈ D and |Dh(z)|^2 =|f_z|^2+|f_z̅|^2=1/2(|f_x|^2+|f_y|^2). Let j be a positive integer and assume that ℋ_j(D,Ω) is the class of j-degree smooth mappings between D and Ω mapping the inner/outer boundary onto inner/outer boundary. Recall that we say that a mapping f has degree j in a regular point y∈Ω if j=(f,Ω,y)=∑_x∈ f^-1(y)sign( (Df(x))). Here regular means that (Df(x))≠ 0 for x∈ f^-1(y). For non-regular points, the degree is defined throughout a sequence of regular points y_n converging to y. This sequence exists according to the Sard theorem. Then we say that the mapping has a j-degree if g has degree j in every point from the image domain. The notation of degree can be also extended to Sobolev mappings of W^1,2 class, or even to the continuous mappings. If D and Ω are two simply connected domains in ℝ^2 different from ℝ^2 and let 𝔻 be the unit disk. Then the Riemann mapping theorem asserts that there exist conformal bijections g:D onto⟶𝔻 and k: 𝔻onto⟶Ω. Then the mapping f(z) = k(g^j(z)) is j-defree conformal mapping between D and Ω. Moreover, this map is a minimizer of the Dirichlet integral throughout the j-degree mappings. Indeed, for every h we have ℰ[h]=∫_D|Dh(z)|^2dxdy≥∫_D(Dh(z))dxdy=j(Ω). The equality holds if and only if h is j-degree conformal (because in this case, one has |Dh|^2=1/2(|h_x|^2+|h_y|^2)=(Dh(z))). If D and Ω are doubly connected domains, then the situation is very different. Namely, if the doubly connected domains are not conformally equivalent, then such a conformal minimizer does not exist. However, in this case, the harmonic diffeomorphisms can be an essential replacement for the conformal diffeomorphisms. The existence problem has been studied in a large number of papers. See for example the references <cit.>, which deal with the existence of minimisers for two-dimensional domains and surfaces and with their properties. The authors of <cit.> considered a similar problem for the so-called σ_2 energy between annuli in ℝ^4. 
The problems of this kind are related to the so called Nitsche conjecture for harmonic mappings. Let us briefly explain what is its formulation. §.§ Nitsche conjecture Let :={z:r<|z|<R} and ={w:r_∗<|w|<R_∗} be two annuli. In 1962 J.C.C. Nitsche announced, in a short article  <cit.>, that the existence of a harmonic homeomorphism h onto⟶, whether or not it comes from a minimal graph, yields a lower bound on Mod () in terms of Mod (). He conjectured that the necessary and sufficient condition for such a mapping to exist is the following inequality, now known as the Nitsche bound R_∗/r_∗≥1/2(R/r+ r/R). Various lower bounds for R_∗/r_∗ have been obtained by Lyzzaik <cit.>, Weitsman <cit.>, Kalaj <cit.>. Finally, this conjecture was solved by Kovalev, Onninen, and Iwaniec <cit.> who give an affirmative answer to it. §.§ Motivations An indirect evidence of Nitsche conjecture has been given previously by Astala, Iwaniec and Martin in <cit.>. Namely they showed that the minimizer of Dirichlet energy of W^1, 2- of homeomorphisms exists and present itself a W^1, 2- of homeomorphism precisely when the Nitsche bound (<ref>) is satisfied. On the other hand side, assume that reference annulus is substantially fatter than _*, precisely R/r>R_∗/r_∗+√(R^2_∗/r^2_∗-1), R_∗/r_∗<1/2(R/r+r/R). Let ρ be determined by the so-called critical Nitsche equation R_∗/r_∗=1/2(R/ρ+ρ/R). Then in <cit.> it is showed that the minimizer within all weak W^1, 2-limits of homeomorphisms: h:onto⟶^∗ takes the form: h(z)={[ r_∗z/|z|, r<|z|<ρ -not harmonic, squeezing map; r_∗(z/2ρ+ρ/2z̅), ρ<|z|<R -critical harmonic Nitsche map. ]. Note that (Dh(z))=|h_z|^2-|h_z̅|^2= r_∗^2/4|z|^2-r_∗^2/4|z|^2≡ 0 for r≤|z|≤ρ. This minimizer is unique up to the rotation of annuli as it is shown in <cit.>. §.§ Statement of the problem and the formulation of the main result Let and be two doubly connected domains in ℝ^2. This paper aims to minimize the Dirichlet energy integral ℰ[h] for mappings h belonging to the following class: ℋ_j(,):= { W^1,2(, ), }. It should be noted that for every doubly connected domain there is a conformal diffeomorphism ζ: ↦(r,R). In this case, the conformal modulus of is defined by Mod()=logR/r. Without losing of generality we assume that =(1,r) and =A(1,R), where r,R>1, because the minimizations of Dirichlet energy of mappings g and their transformations h(z) = α g(β z) is equivalent. The following theorem is the main result of this paper. There exists a j-degree radial harmonic mapping between and if and only if the condition (<ref>) is satisfied. In this case, the harmonic mapping g^∘: → is given by (<ref>). Moreover the minimum of the Dirichlet energy [f]=∫_ |Df|^2 dxdy, for f∈ℋ_j(,) is attained for the j-degree radial harmonic mapping g^∘ provided that (<ref>) is satisfied. If (<ref>) is not satisfied, then the minimum of (<ref>) is attained for a j-degree mapping g^♢ (see (<ref>) below) which is harmonic in a sub-annulus of but it is an "squeezing" in rest of . The minimum on both cases is unique up to a rotation. Theorem <ref> is a generalization of the similar result obtained by Astala, Iwaniec, and Martin in <cit.> for smooth homeomorphismsm. It must be emphasized that Theorem <ref> however cannot be deduced from such a result, because j-degree mappings cannot be decomposed as the composition of a homeomorphism and the power function z^j as one can guess. It seems that such a result is new even for degree one mappings, which are not necessarily homeomorphisms. 
The proof is an adaptations of the corresponding proofs in <cit.> by employing the free Lagrangian for j-degree mappings mapping the inner/outer boundary to inner/outer boundary. § RADIAL HARMONIC MAPPINGS AND RADIAL MINIMIZERS Set f(z) = e^j τ G(ρ), and call this mapping j-degree radial mapping where z=t e^τ and solve the Laplace equation Δ f=0. Then we get the second order differential equation -j^2 G(t)+tG'(t)+t^2G”(t)=0. By taking the substitution t= e^x, H(x)=G(e^x), we obtain H'(x) = e^x G'(e^x)=t G'(t) and H”(x) =H'(x) + e^2x G”(e^x)=F'(x)+t^2 G”(t). Thus (<ref>) is reduced to -j^2 H(x) +H”(x)=0. Now take s=H(x), H'(x)=h(s). Then we obtain H”(x)= h'(s) h(s). And (<ref>) is reduced to the equation 2j^2 s=(h^2(s))'. Thus h(s) = ±√(j^2 s^2+c). After returning back the variable x=log t and the function G, we obtain the equation ± G'(t)1/√(j^2G(t)^2+c_1) +1/t=0, where plus sign is in the case when G is decreasing and minus sign is for the case when G is increasing. Then the general solution of (<ref>) is given by G(t)=a cosh (j^2 log(t))+b sinh (j^2 log (t)). Assuming that it maps (1,r) onto (1,R), mapping the inner boundary onto the inner boundary and the outer boundary onto the outer boundary we get g^∘(z) = (r^j R-r^2 j/1-r^2 j) z̅^-j+(1-r^j R) z^j/1-r^2 j and the given function (<ref>) maps (1,r) onto (1,R) if and only if it is satisfied the Nitsche type bound R≥1/2 r^-j(1+r^2 j). Then the previous inequality can be written as r≤ r_∘ :=(R+√(-1+R^2))^1/j. Indeed, g^∘(ρ) is increasing if and only if r^j (R-r^j)+ρ^2 j(r^j R-1)≥ 0, ρ∈[1,r], and this inequality is equivalent with (<ref>). Now if r>r_∘, then there is an annulus (ρ, r_∘) having the conformal modulus equal to the modulus of (1,r). Indeed we have ρ = r_∘/r. Let g^♢(z) = {[ g^∘(z), if 1≤ |z|< r_∘;; z^j/|z|^j, if ρ< |z|≤ 1. ]. This is a j-degree mapping. Namely for every 1<|w|<R, there exist exact j points from 1<|z|<r_∘ so that g^∘(z)=w. Moreover the sign((Dg^∘(z)))=1. We can also define the degree for points |w|=1 by using a sequence of points 1<|w_n|<R converging to w. See Figure 1 for a 2-degree mapping g^♢: (1/2,2)→(1,17/8). §.§ The energy of the stationary mapping Elementary calculations lead to |Dg^∘|^2=j^2 G^2/t^2+Ġ^2. Thus it follows from (<ref>) that |Dg^∘|^2=2j^2G^2+c_1/t^2, where c_1=4j^2 (1+R^2- R (r^j+r^-j))/ (r^j-r^-j)^2. Observe that c_1=0 if and only if R=r^j or R=r^-j. In this case f(z) =z^± j is a conformal mapping between (r) and (R). Ġ^2= (j^2)G^2+c_1/t^2, Let F=G^-1 then F'(s)/F(s)=1/√(j^2 s^2+c_1). The energy of g^∘ is now as follows ℰ[g^∘]=∫_|Dg^∘|^2=4π j^2 ∫_1^rG^2(t)/tdt+2π c_1log r. Now we calculate ℰ[g^♢]. In this case c_1=-j^2. We have ℰ[g^♢] =∫_(1,r)|Dg^∘|^2+∫_(ρ ,1)|Dg^♢|^2 =4π j^2 ∫_1^rG^2(t)/tdt+2π -j^2log r+ 2π j^2∫_ρ^1 1/t dt =4π j^2 ∫_1^rG^2(t)/tdt-2π j^2logρ r. § FREE LAGRANGIANS Assume that ℋ_j(,) is a class of j-degree mappings, mapping onto and mapping the inner/outer boundary to the inner/outer boundary. Motivated by the paper of Iwaniec and Onninen <cit.> we consider the following free Lagrangians. a) A function in t= |z|; L(z, h, Dh) dx∧ dy = M(t) dx∧ dy Thus, for all h ∈ℋ_j(𝔸, 𝔹) we have ∫_ L(z, h, Dh) dx∧ dy = ∫_ M(|z|) dx∧ dy=2π∫_r^R t M(t) dt. b) Pullback of a form in via a given mapping h ∈ℋ_j(, ); L(z, h, Dh) dx∧ dy = N(|h|) J (z, h) dx∧ dy , where N ∈ L^ 1(r,R) Thus, for all h ∈ℋ_j(𝔸, 𝔹) we have ∫_ L(z, h, Dh) dx∧ dy =j∫_ N(|w|) du∧ dv = 2π j ∫_r_∗^R_∗ sN(s ) ds . c) A radial free Lagrangian L(z, h, Dh) dx∧ dy = A (|h|)|h|_N/ |z| dx∧ dy where A ∈ L^ 1( r_∗, R_∗). 
Thus, for all h ∈ℋ_j(𝔸, ) we have ∫_ L(z, h, Dh) dx∧ dy = 2π∫_r^R A(|h|)∂ |h|/∂ρ dρ = 2π∫_r_∗^R_∗ A(s)ds d) An angular free Lagrangian L(z, h, Dh)dx∧ dy = B (|z|)[h_T/h]dx∧ dy , where B ∈ L^1(r,R). Thus, for all h ∈ℋ_j(𝔸, 𝔹) we have ∫_ L(z, h, Dh) dx∧ dy =∫_r^R B(t)/t∫_|z|=t(∂Arg h/∂τ dτ) dt =2π j∫_r^R B(t) dt The idea of using these free Lagrangians is to establish a general sub-gradient type inequality for the integrand with two independent parameters: t ∈ (r,R) and s∈ (r^∗,R^∗), for the functions: g as follows: |g_T|^2+ |g_N|^2≥ X(s)|g|_N/t + Y(t) |g|/s g_T/g + Z(s)[ Dg] + W(t), where the coefficients X and Z are functions in the interval (r^∗,R^∗), while Y and W are functions in the interval (r,R). We will choose t = |z| and s = |g(z)| to obtain the corresponding free Lagrangians on the right-hand side of the above inequality. Suppose g∈ H_j(, ) and consider the following formula for Dirichlet energy ℰ[g]=∫_( |g_N|^2+|g_T|^2) dxdy. Here for z=ρ e^τ, g_N(z) =∂ g(z)/∂ρ and g_T =1/ρ∂ g(z)/∂τ. Then the Jacobian of a mapping g can be written as [Dg(z)] = (g_Tg_N). § THE PROOF OF THEOREM <REF> Since we already proved the existence of radial minimizers, it remains to prove inequality and equality statement. The proof consists of three parts. §.§ Under Nitsche bound Let us consider three subcases c_1=0, i.e. the case R=r^j, c_1>0 and c_1<0. In what follows, we let g(z)=(g_1(z), g_2(z)) to be the j-degree mapping between annuli and mapping inner/outer boundary onto the inner/outed bounrary. (i) Conformal case, c_1=0, i.e. R=r^j. From the following trivial inequality |g_N|^2+|g_T|^2≥(Dg(z)), it follows that ℰ[g]≥∫_(Dg(z)) dxdy. Now because g is a j-degree mapping we obtain the inequality ℰ[g]≥ j | |=∫_(Dg^∘(z))(z)dxdy. Namely in this case g^∘(z) = z^j and so (Dg^∘(z))(z)=j^2 |z|^2j-2. Thus ∫_(Dg^∘(z))(z)dxdy=2π j^2 ∫_1^r ρ^2j-1dρ=π j (r^2j-1)= j||. (ii) Elastic case, c_1>0, i.e. R>r^j. For every α, β∈ℝ, the following trivial inequality (α |g_N|-β|g_T|)^2≥ 0 is equivalent to |g_N|^2+|g_T|^2≥ (1- α^2)|g_N|^2 +(1- β^2) |g_T|^2 + 2 αβ |g_T| |g_N|. If we take β=1, then it reduces to the following simple inequality |g_N|^2+|g_T|^2≥ (1- α^2)|g_N|^2 +2 α |g_T| |g_N|, where the equality holds if and only if α|g_N|=|g_T|. Furthermore, the following inequality holds: (1- α^2)|g_N|^2 +2 α |g_T| |g_N|≥ 2( 1- α^2)γ|g_N|+ 2 α(Dg(z)) -(1 - α^2) γ^2, where γ is an arbitrary constant. It is easy to see that the equality is attained in (<ref>), if γ=|g_N|. Let t=|z|, s=|g(z)|. Suppose α=js/√(c_1+j^2s^2), α_∗=jG(t)/√(c_1+j^2G^2(t)), and A=Ġ(t)√( 1-α^2_∗/1-α^2 ), where G is a solution to the differential equation (<ref>) Moreover, in view of (<ref>) |Dg|^2 ≥ 2[( 1-α^2 )A ]|g_N|+2 jsF'(s)/F(s)[Dg] -(1- α^2 )A^2 ≥ 2[( 1-α^2 )A ]|g|_N+2jsF'(s)/F(s)[Dg] -(1- α^2 )A^2 . Furthermore, in view of (<ref>), γ(s) def= 2t( 1- α^2)A =2t √((1- α^2_∗)(1- α^2))Ġ(t) =2c_1tĠ(t)/√(j^2G^2(t)+c_1)·1/√(j^2s^2+c_1) =2c_11/√(j^2s^2+c_1), and δ(t) def=-(1- α^2)A^2 =-Ġ^2(t)(1- α^2_∗) =-c_1Ġ^2(t)/√(c_1+j^2G(t)^2) =-c_1/t^2. So for t=|z| and s=|g(z)|, by using the relations (<ref>), (<ref>), and (<ref>), we obtain ℰ[g] =∫_ ( |g_N|^2+|g_T|^2)dx∧ dy ≥∫_{2[( 1-α^2)A ]|g|_N+2jsF'(s)/F(s)[ Dg] -(1- α^2)A^2} dx∧ dy =∫_γ(s) |g|_N/t dx∧ dy+2j∫_|g(z)|F'(|g(z)|)/F(|g(z)|)[ Dg] dx∧ dy+∫_δ(t)dx∧ dy. By using the formulas (<ref>), (<ref>), (<ref>), and (<ref>) we obtain ℰ[g]≥ 2π∫_1^R γ(s) ds+4π j^2 ∫_1^R s^2F'(s)/F(s)ds+2π∫_1^r tδ(t)dt. Then ℰ[g]≥ 4π c_1∫_1^R 1/√(j^2s^2+c_1) ds+4π j^2 ∫_1^R s^2F'(s)/F(s)ds-2π c_1 ∫_1^r1/tdt. 
According to ∫_1^G(t)1/√(j^2s^2+c_1) ds=log t, we see that ℰ[g] ≥ 4π c_1log r-2π c_1 log r+4π j^2∫_1^R s^2F'(s)/F(s)ds =2π c_1log r+4π j^2∫_1^R s^2F'(s)/F(s)ds. Changing the variable t=F(s) in the last integral, we get ∫_1^R s^2F'(s)/F(s)ds=∫_1^r G^2(t)dt/t. In view of (<ref>), we see from (<ref>) that ℰ[g]≥ 2π c_1log r+4π j^2∫_1^r G^2(t)dt/t=ℰ[g^∘]. This finishes the proof of this case up to the equality statement. (iii) Non-elastic case, -j^2≤ c_1< 0, i.e. R<r^j Suppose g is a smooth j-degree mapping between two annuli and . Let z∈, g(z)∈ and let t=|z|, s=|g(z)|. If we put α=1 in (<ref>), then we have the following inequality |g_N|^2+|g_T|^2≥ (1- β^2)|g_T|^2 +2 β |g_T| |g_N|, where the equality holds if and only if β|g_T|=|g_N|. Furthermore, [Dg]=(g_Tg_N)=|g_T g_N| if and only if g_Tg_N∈ℝ. We continue with the following inequality |g_N|^2+|g_T|^2≥ℛ:= 2( 1- β^2)B |g_T|+ 2 β(D g)-( 1- β^2) B^2, where B is a constant. Since [g_T/g]≤[|g_T|/|g|], we infer that |g_T|≥ |g|[g_T/g]. The above equality is attained if and only if g_Tg∈ℝ. Now ℛ≥ [2( 1- β^2)B]|g|[g_T/g]+ 2β[ Dg] -(1 - β^2) B^2. The above equality is attained if and only if the conditions (<ref>), (<ref>), (<ref>) and the following equality B=|g_T| are satisfied. Moreover, these conditions imply that g is radial. Now, we choose β=√(j^2s^2+c_1)/js and B=j s/t. Then, for t=|z| and s=|g(z)|, we get -(1 - β^2) B^2 =μ(t)def=c_1/t^2. and 2s( 1- β^2)B=ν(t)def= -2c_1/jt. By using the relations (<ref>), (<ref>), and (<ref>) we obtain ℰ[g] =∫_ ( |g_N|^2+|g_T|^2)dx∧ dy ≥∫_[[2(1 - β^2)B ]|g|[g_T/g] + 2 √(j^2s^2+c_1)/js[ Dg] -(1 - β^2) B^2] dx∧ dy =∫_ν(t) [g_T/g] dx∧ dy +2∫_√(j^2s^2+c_1)/js[ Dg] dx∧ dy+∫_μ(t)dx∧ dy. By using the formulas (<ref>), (<ref>), (<ref>), and (<ref>) we obtain ℰ[g]≥ 2π j∫_1^r ν(t) dt+4π∫_1^R √(j^2 s^2+c_1 )ds+2π∫_1^r tμ(t)dt. Now we use the equality √(j^2s^2+c_1 )=j^2s^2+c_1/√(j^2s^2+c_1) to deduce that ∫_1^R√(j^2s^2+c_1 )ds=I_1+I_2=c_1∫ _1^R1/√(j^2s^2+c_1)ds+∫_1^Rj^2s^21/√(j^2s^2+c_1)ds. By using (<ref>) we get I_1=c_1log r. By using change of variables t=F(s), in view of F(τ)=rexp[∫_1^τ1/√(s^2+c_1)ds], we get I_2=j^2∫_1^Rs^2F'(s)/F(s)ds=j^2∫_1^rG^2(t)/tdt. Moreover, we have 2π j∫_1^r ν(t) dt=-4π c_1log r. We also have 2π∫_1^r tμ(t)dt= 2π c_1 log r. Summing all those formulas and again from (<ref>), we see that ℰ[g]≥ 2π c_1log r+4π j^2∫_1^r G^2(t)dt/t=ℰ[g^∘]. §.§ Below Nitsche bound In this case we assume that =(ρ,1]∪[1,r). We repeat the previous argument (the case (iii)). In this case c_1=-j^2 and we have ℰ[g]≥ 2π j∫_ρ^r ν(t) dt+4π∫_1^R √(j^2 s^2-j^2 )ds+2π∫_ρ^r tμ(t)dt. Further as before we obtain 4π∫_1^R√(j^2s^2-j^2 )ds= 4π j^2∫_1 ^r G^2(t)dt/t-4π j^2 log r, and in view of (<ref>) and (<ref>) ℰ[g] ≥ 4π j^2logr/ρ+4π j^2∫_1 ^r G^2(t)dt/t-4π j^2 log r-2π j^2 logρ/r =4π j^2 ∫_1^rG^2(t)/tdt-2π j^2log r-2π j^2logρ =ℰ[g^♢]. §.§ The equality part To finish the proof of the theorem, we need to consider the equality case. Consider only the non-elastic case, and notice that the proof of the remaining case is similar. Assume that the equality is attained in our theorem for a certain mapping g. Let z=t e^τ, and write g(t,τ)=g(z). Assume that g(t,τ)= u(t,τ) e^ v(t,τ) for some real functions u and v, with u(1,τ)=1, u(r,τ)=R, u(t,τ+2π)=u(t,τ), v(t,τ+2π)=v(t,τ) for every t,τ∈ [1,r]×ℝ. Then from (<ref>) and (<ref>), we see that ∂_t g(t,τ)/g(t,τ)=∂_t v(t,τ)+∂_t u(t,τ)/u(t,τ)∈ℝ. Since g is j-degree mapping it follows that v(t,τ)=ϕ(τ) for a certain j-degree mapping ϕ of the unit circle onto itself. 
Now from (<ref>), we have ( u(t,τ) ϕ'(τ)+∂_τ u(t,τ))/t∈ℝ. Thus u(t,τ)= G(t). Now from (<ref>) we see that βG(t) ϕ'(τ)/r=G'(t), where, as in (<ref>), β = √(j^2s^2+c_1)/js=√(G(t)^2+c_1)/j G(t). This implies that ϕ'(τ)=const=j, and thus ϕ(τ) =j τ +c. Now from (<ref>) and (<ref>) we see that G satisfies the equation (<ref>), which means that g is a j-degree harmonic mapping. 10 astala2010 K. Astala, T. Iwaniec, and G. Martin. Deformations of annuli with smallest mean distortion. Arch. Ration. Mech. Anal., 195(3):899–921, 2010. li L. Chen and G. Wang. σ _2-diffeomorphisms between 4-dimensional annuli. Calc. Var. Partial Differ. Equ., 55(3):20, 2016. Id/No 49. inventiones T. Iwaniec, N.-T. Koh, L. V. Kovalev, and J. Onninen. Existence of energy-minimal diffeomorphisms between doubly connected domains. Invent. Math., 186(3):667–707, 2011. solut T. Iwaniec, L. V. Kovalev, and J. Onninen. The Nitsche conjecture. J. Am. Math. Soc., 24(2):345–373, 2011. arma2009 T. Iwaniec and J. Onninen. Hyperelastic deformations of smallest total energy. Arch. Ration. Mech. Anal., 194(3):927–986, 2009. annalen2010 T. Iwaniec and J. Onninen. Neohookean deformations of annuli, existence, uniqueness and radial symmetry. Math. Ann., 348(1):35–55, 2010. iwon T. Iwaniec and J. Onninen. n-harmonic mappings between annuli: the art of integrating free Lagrangians, volume 1023 of Mem. Am. Math. Soc. Providence, RI: American Mathematical Society (AMS), 2012. Ka D. Kalaj. On the Nitsche conjecture for harmonic mappings in ℝ^2 and ℝ^3. Isr. J. Math., 150:241–251, 2005. calculus D. Kalaj. Energy-minimal diffeomorphisms between doubly connected Riemann surfaces. Calc. Var. Partial Differ. Equ., 51(1-2):465–494, 2014. klondon D. Kalaj. Deformations of annuli on Riemann surfaces and the generalization of Nitsche conjecture. J. Lond. Math. Soc., II. Ser., 93(3):683–702, 2016. ka2019 D. Kalaj. Hyperelastic deformations and total combined energy of mappings between annuli. J. Differ. Equations, 268(10):6103–6136, 2020. koh1 N.-T. Koh. Hereditary circularity for energy minimal diffeomorphisms. Conform. Geom. Dyn., 21:369–377, 2017. Lyz A. Lyzzaik. The modulus of the image annuli under univalent harmonic mappings and a conjecture of Nitsche. J. Lond. Math. Soc., II. Ser., 64(2):369–384, 2001. NCONJ J. C. C. Nitsche. On the module of doubly-connected regions under harmonic mappings. Am. Math. Mon., 69:781–782, 1962. sverak V. Šverák. Regularity properties of deformations with finite energy. Arch. Ration. Mech. Anal., 100(2):105–127, 1988. weit A. Weitsman. Univalent harmonic mappings of annuli and a conjecture of J. C. C. Nitsche. Isr. J. Math., 124:327–331, 2001.
http://arxiv.org/abs/2405.10286v1
20240516174654
FFF: Fixing Flawed Foundations in contrastive pre-training results in very strong Vision-Language models
[ "Adrian Bulat", "Yassine Ouali", "Georgios Tzimiropoulos" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Exotic compact objects and light bosonic fields Thomas P. Sotiriou May 20, 2024 =============================================== Despite noise and caption quality having been acknowledged as important factors impacting vision-language contrastive pre-training, in this paper, we show that the full potential of improving the training process by addressing such issues is yet to be realized. Specifically, we firstly study and analyze two issues affecting training: incorrect assignment of negative pairs, and low caption quality and diversity. Then, we devise effective solutions for addressing both problems, which essentially require training with multiple true positive pairs. Finally, we propose training with sigmoid loss to address such a requirement. We show very large gains over the current state-of-the-art for both image recognition (∼ +6% on average over 11 datasets) and image retrieval (∼ +19% on Flickr30k and ∼ +15% on MSCOCO). § INTRODUCTION Large-scale contrastive image-text pre-training has emerged as the prevalent method for vision-language representation learning <cit.>. The majority of datasets employed for pre-training are web-collected <cit.>. They offer a varied data distribution and are sufficiently large to effectively train high-performing vision-language models. However, since the raw captions for each image are typically extracted from associated tags or descriptions, they often exhibit low quality, being noisy and suboptimal for training purposes <cit.>. Although some attempts to fix such issues have been already described, to some extent, in literature (e.g. ALIP <cit.>, BLIP <cit.>), in this work, we show that the full potential of improving the quality of the training process is far from being fully realized. Specifically, by studying and addressing specific issues related to noise and low data quality, in this work, we show that our improved vision-language training pipeline can achieve massive gains over the current state-of-the-art methods for both image recognition (∼ +6% on average over 11 datasets) and image retrieval (∼ +19% on Flickr30k <cit.> and ∼ +15% on MSCOCO <cit.>). The first issue we study is related to noise impacting contrastive learning: near-duplicate samples which are incorrectly treated as negative pairs. Even within a batch, it is not uncommon to find images and/or captions that are semantically similar or even identical. Since standard contrastive learning assumes one positive pair, this significantly hinders the training process and the quality of the trained models. The second issue we study is related to low caption quality and diversity. Captions can be short and lacking detail, noisy, or even entirely irrelevant to the image. Moreover, since the mapping process between image and text is one-to-many, more than one caption is needed to provide an approximate description of the image. To fix issue one, we propose an algorithm that mines new positive pairs based on image-text, image-image, and text-text similarities, aiming to decrease the number of false negatives in the training data arising due to semantically similar images and/or captions. We fix issue two by firstly generating pseudo-captions for each training image using a state-of-the-art image captioning technique <cit.> that will act as new true positives for a given image. Then, we propose batch text augmentation for training with multiple pseudo-captions (five captions per image selected via beam search) within the same batch to effectively increase caption diversity. 
Importantly, after applying the proposed solutions, we end up with a variable number of positive pairs per image: newly mined positive pairs and multiple pseudo-captions per image. This implies that we need to train our model with a loss function that accommodates multiple positives and is robust to potential errors in the mining process. Unfortunately, neither contrastive loss <cit.> nor supervised contrastive loss <cit.> can be directly applied for this case. To this end, we propose to use the sigmoid loss <cit.> which allows the number of positives to vary dynamically per sample and per batch at no extra cost and is also robust to noise. Overall, we make the following contributions: * We study and provide in-depth analyses of two important issues related to the vision-language training process/data: false negative pairs due to semantic near-duplicates, and low caption quality and diversity (Sec. <ref>). * We provide two simple algorithms for addressing the aforementioned issues: The first one uses text-image, image-image, and text-text similarities for eliminating incorrectly assigned negatives and mining new true positives. The second uses the proposed batch text augmentation for training with multiple pseudo-captions per image within the same batch. Both solutions induce multiple new positives for each training image. To address this, we propose to use sigmoid loss for training the model. See Sec. <ref>. * We show very large gains over the current state-of-the-art for both image recognition (∼ +6% on average over 11 datasets) and image retrieval (∼ +19% on Flickr30k and ∼ +15% on MSCOCO) (Sec. <ref>). We further ablate the impact of many important components of our method in Sec. <ref>. § FLAWS OF WEB-COLLECTED DATASETS & POTENTIAL SOLUTIONS Several observations drawn by analyzing the flaws of a web-collected dataset (the CC3M dataset), motivating the proposed approach, are provided below: Original captions are noisy and repetitive: For example, as illustrated in <ref> for the CC3M dataset, original (raw) captions contain a high number of generic captions that frequently reoccur across the dataset (<ref> (c)), and are often semantically similar (<ref> (a)). Moreover, many raw captions may be unrelated to their associated images and their visual content, as indicated by low CLIP scores (<ref> (b)). Re-captioning enhances quality and diversity: A potential solution to this issue is the use of state-of-the-art image captioning models (BLIP2 <cit.>, OFA <cit.>) to generate synthetic pseudo-captions, which can enhance the quality and descriptiveness of the captions. When comparing raw and pseudo-captions, it is evident that the latter are more diverse and semantically relevant to their associated images, as shown in <ref>. Multiple pseudo-captions should reduce noise: State-of-the-art image captioning models, despite being capable of generating fluent and diverse captions, are often trained and bootstrapped from the same web-collected data used in training vision-language models. Consequently, as shown in <ref>, in some instances, the generated pseudo-captions can be ambiguous and contain hallucinations, errors, and stylistic biases similar to those found in the raw captions. As a result, relying on a single pseudo-caption per image can still introduce a high degree of noise and can hinder the training of an effective vision-language model. 
A potential solution to this issue is the use of multiple pseudo-captions or multiple positives per image in the hope that even if individual captions are incorrect, their ensemble is of higher quality and better reflects the content of the associated image. To probe for the possible positive effect of using multiple synthetic captions, in <ref> (a), we show the intra-set cosine similarities of 5 pseudo-captions generated using beam search and, respectively, in <ref> (b), the average image-text CLIP score between these synthetic captions and their associated images - contrasted with the score corresponding to a single caption. We observe that: 1) a simple method such as beam search can generate diverse synthetic captions, and, more crucially, 2) using multiple positives per image results in an improved ensemble that better describes the image and helps alleviate the problem of false positives due to incorrect individual instances. Mining of new positives: As shown in <ref> (c), even for a relatively small batch of 1k image-caption pairs, it is common to find captions more similar to the image than the ground-truth caption (higher ranks), and, as displayed in <ref>, such high-ranking captions often contain true positives, which are captions that can be considered ground-truth descriptions for the associated image. A potential solution to this is the use of online mining of new positives based on image and text feature cosine similarities. However, as shown in <ref>, text-image pairs with high cosine similarity can still be false positives. To reduce them, we propose to mine the positives based on image-text, image-image, and text-text similarities, aiming to decrease the number of false negatives in the training data arising due to semantically similar images and/or captions. § RELATED WORK Contrastive pretraining under noisy web-collected data: Current publicly available vision-language datasets are mined from the internet automatically <cit.> with only basic automatic filtering applied, which results in imperfect annotations and duplicate or near-duplicate pairs. A series of papers <cit.> attempt to alleviate the noise present in annotations by switching from hard to soft labels, akin to knowledge distillation (KD), using various combinations of contrastive loss (i.e. InfoNCE) and KL divergence. The work in <cit.> constructs the soft labels using an online entropic optimal transport algorithm implemented via the Sinkhorn-Knopp algorithm. The probabilities for each image add up to 1, with 0.5 on the diagonal and the rest distributed. This assumes that, within the batch, there are always some images that are somewhat similar. In our case, we use hard labels, with multiple positives, performing reassignments only when the samples are sufficiently close, instead of forcing a distribution in all cases. Furthermore, we do not require running an optimal transport method, nor rely on a contrastive loss. The work of <cit.> progressively self-distills soft image-text alignments to more efficiently learn robust representations from noisy data. At every iteration, a subset of labels are "soft" while the rest are kept hard. Similarly, the work of <cit.> relaxes the strict one-to-one constraint, transitioning to a soft cross-modal alignment by introducing a softened target, which is generated from the fine-grained intra-modal self-similarity. 
Additionally, they disentangle the negatives in the distribution to further boost the relation alignment, resulting in a combination of InfoNCE loss performed with hard labels and KL divergence. However, they do not perform batched text augmentations with multiple positives, as in our work, and still use a contrastive loss combined with KL, operating on soft scores. The works of <cit.> study the effect of removing false negatives in the context of unimodal pure vision models, not considering the case of multi-modal learning. The work of <cit.> flags a (very small) number of potential negatives using the aggregated score obtained from multiple support views per image,  <cit.> uses a clustering-based approach while  <cit.> is based on ranked positives, requiring a known class hierarchy (i.e. a fully supervised case) or known changes/relations (i.e. videos). The works of <cit.> derive from the Supervised Contrastive Loss, while <cit.> from InfoNCE. In contrast, our work operates on image-text data, takes into account multi-modal interactions (I2T, T2T, T2I), does not use additional support views, known hierarchies, etc., and is easily scalable. Following a different direction, BLIP <cit.> and its follow-up version <cit.> use a bootstrapping approach in which the noisy captions are filtered out using the initial model, which is then retrained on the new data. This interplay is performed offline and requires training a multitask model. The work of <cit.> presents a small-scale study showing that random sampling of pseudo-captions improves CLIP, concluding however that scaling up the number of image-caption pairs appears to be more effective. Finally, very recently, ALIP <cit.> adds a synthetic pseudo-caption and a consistency gating mechanism that weights the influence of the samples and image-text pairs on the contrastive loss. Different from the aforementioned methods, we propose to fix incorrectly assigned negatives and mine for new true positives using text-image, image-image, and text-text similarities. Moreover, to increase caption quality and diversity, we further propose training with multiple pseudo-captions per image within the same batch. As our methods require training with multiple positives per image, we further propose to use the sigmoid loss <cit.> for training the model. § METHOD This section describes the proposed method, whose aim is to improve vision-language training by denoising and improving the quality of the training process/data. Specifically, Sec. <ref> addresses the problem of false negative pairs inherent to the noisy nature of large-scale image-text datasets by re-assigning them as true positives[It is possible that such cases can occur in clean datasets too, as multiple captions can describe an image and vice versa, multiple images can be described by one caption.]. Sec. <ref> proposes text batch augmentation for training the model with multiple positive pairs. The effect of Secs. <ref> and <ref> is that, for each training image, a variable number of positive-negative pairs is formed (Sec. <ref>). Sec. <ref> proposes a natural way to train the model in this case by using the recently proposed sigmoid loss for vision-language pre-training. §.§ Fixing incorrect negatives Let D be a dataset consisting of image-text pairs, with B a batch of randomly selected samples (x_i, t_i), i = 1, 2, …, N. In addition to the ground-truth positive pairs (x_i, t_i), we seek to identify and correct wrongly co-occurring negative pairs (x_i,t_j) on-the-fly. 
To achieve this, let us first define the image-text, image-image, and text-text cosine similarity matrices S_it = X_f · T_f^T, S_ii=X_f · X_f^T and S_tt=T_f · T_f^T, where S_it, S_ii, S_tt∈ℝ^N × N and X_f∈ℝ^N × d and T_f∈ℝ^N × d represent the image and text features, respectively. Given the similarity score matrices, we define the assignment matrix M∈{0,1}^N × N as follows: M = (S_it > p_1) ∨ (S_ii > p_2) ∨ [(S_tt > p_3) ∧ (S_it > p'_1)], where ∨ is the logical OR and ∧ the logical AND operator, and p_1, p'_1, p_2 and p_3 are the thresholds above which a sample is marked as positive, with p'_1 < p_1. Note that we filter the positives found with text-text matching using image-text similarities (using threshold p'_1), as we observed a high proportion of false positives within text-text matching, due to the fact that repeated samples often correlate with poor overall image description fidelity. The choice of p_1, p'_1, p_2 and p_3 is empirical and generally depends on the characteristics of the model. We ablate the dependency of the method on the threshold values in Sec. <ref> where we show little sensitivity. Note that M re-assigns a variable number of positives to each image. Fig. <ref> depicts the construction process of M at a high level. In order to calculate the cosine similarity matrices S_it, S_ii and S_tt required for the construction of M, we use a pre-trained model. This is akin to a form of auto-labeling/auto-filtering, where the pretrained model provides a signal for re-assessing the labeling of the samples. Although one could opt to use an EMA teacher-student approach, we found this simple approach to work sufficiently well. Moreover, some possible errors in M can be handled by the robust sigmoid loss used for training (see Eq. <ref>). 
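For concreteness, the mask construction can be sketched in a few lines of PyTorch. This is a minimal sketch assuming ℓ2-normalized features and one caption per image; the function and variable names are illustrative, not part of the released code:

```python
import torch

def build_assignment_matrix(img_feats, txt_feats, p1=0.27, p1p=0.24, p2=0.92, p3=0.99):
    # img_feats, txt_feats: (N, d) tensors, assumed L2-normalized.
    S_it = img_feats @ txt_feats.T   # image-text cosine similarities
    S_ii = img_feats @ img_feats.T   # image-image
    S_tt = txt_feats @ txt_feats.T   # text-text
    # A pair is marked positive if any criterion fires; text-text hits are
    # additionally filtered by the looser image-text threshold p1p < p1.
    M = (S_it > p1) | (S_ii > p2) | ((S_tt > p3) & (S_it > p1p))
    M.fill_diagonal_(True)           # ground-truth pairs are always positive
    return M                         # boolean mask with a variable number of positives per row
```

The default threshold values above are those used in the experiments (Sec. <ref>).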
For the image-image case, and as N_img < N_txt, to make the image-image similarity matrix S_ii have the same dimensions as S_it, i.e. S_ii∈ℝ^N_img× N_txt, we replicate the scores k times. In other words, a given image x_i will share the score with each group of captions belonging to image x_j, ∀ i,j ∈{1,…,N_img}. For the text-text case, the similarity matrix is now of size S_tt∈ℝ^N_txt× N_txt. Analogously, to make S_tt have the same dimensions as S_it, we take the average score between each caption of image x_i and all k captions of image x_j. Overall, we end up with similarity matrices of the same dimensions S_it, S_ii, S_tt∈ℝ^N_img× N_txt and hence the assignment matrix M can be again constructed by applying Eq. <ref>. The overall process is depicted in Fig. <ref>. §.§ Loss function The symmetrical contrastive loss (i.e. text → image and image → text) used in CLIP <cit.> supports only one positive pair per sample (see Fig. <ref>), being in discordance with the requirement of training with a variable number of positive pairs per image set by the proposed methods in Secs. <ref> and <ref>. A solution to this problem could be given by the Supervised Contrastive Loss <cit.>, originally introduced to enable multi-view training of supervised image recognition. However, this loss is prone to noise <cit.>, with the harder positive pairs dominating the signal and hindering, in part, the effect from the rest of the positive samples. This is especially problematic in the context of web-collected datasets, which are notoriously noisy. Finally, it is memory intensive and computationally demanding. In practice, we observe a 1.9× slowdown for a batch size of 8,096 samples. A natural alternative is the BCE loss, shown to outperform cross-entropy for image classification <cit.>, and also shown to be a viable alternative for image-text representation learning <cit.>. Such a formulation is particularly advantageous for the proposed approach, as the BCE loss natively supports an arbitrary number of positives per sample per batch, with the ground truth being provided simply as a binary mask. Moreover, the loss is more robust to noise in general, and hence to false negatives and positives <cit.>. Finally, the initial negative bias prevents the model from being forced to learn incorrect assignments early on. Hence, we propose to use the following loss: ℓ_mp = -1/N_txt∑_i=1^N_img∑_j=1^N_txtlog1/1 + exp ( m_ij(-s_ij/τ + β)), where m_ij is the i,j element of M mapped to ±1 (-1 for negative and 1 for positive pairs), and respectively, s_ij the i,j element of the similarity matrix S_it. As the negative pairs considerably outnumber the positive ones, to ensure that we start from a low initial loss (making the same observation as in <cit.>), we add a learnable scalar β, set initially to a negative value. However, as the number of positive pairs is dynamic and is typically tied to both the specifics of the dataset and the threshold used to define a positive sample, different from <cit.>, we propose to estimate β at the beginning of the training process. Specifically, given the randomly initialized model, we sample b batches out of the training set, and then compute and store the cosine similarities. Then, given the scores and the corresponding labels, we search for β such that the initial loss is minimized (everything else is kept frozen). The value of β can be found either by gradient descent or alternatively, by performing a grid search. 
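Concretely, the loss can be written as a short sketch on top of the mask M from above. The identity -log(1/(1+exp(-z))) = softplus(-z) gives a numerically stable form; τ and β are assumed to be learnable scalars handled by the optimizer, with β initialized by the grid search described above:

```python
import torch
import torch.nn.functional as F

def multi_positive_sigmoid_loss(S_it, M, tau, beta):
    # S_it: (N_img, N_txt) cosine similarities; M: boolean assignment mask.
    m = torch.where(M, 1.0, -1.0)        # map {0, 1} -> {-1, +1}
    z = m * (S_it / tau - beta)          # beta biases the initial decision boundary
    # loss = (1/N_txt) * sum_ij softplus(-z_ij), matching Eq. (ref) above
    return F.softplus(-z).sum() / S_it.shape[1]
```

Since the mask enters only element-wise, the number of positives can vary freely per row and per batch at no extra cost.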
§ RESULTS Pretraining Datasets: To allow for fair comparisons with prior work, we pre-train our approach on YFCC15M-v2 <cit.>, a subset of YFCC100M <cit.> containing approximately 15M image-text pairs. To cover different dataset sizes, we also conduct experiments on CC3M <cit.> and CC12M <cit.>, and in the supplementary material, on the Open30M and Open70M datasets, further showcasing our method's scalability with respect to the dataset size. Implementation details: Architecturally, we use the same model topology and setting as in CLIP <cit.>, specifically, AdamW <cit.> with a learning rate of 1e-3 and a weight decay of 0.1, except for CC3M where we set the weight decay to 0.5, as in prior work <cit.>. In terms of augmentations, we follow <cit.>, randomly resizing and cropping the image to 224×224px, applying random flipping, random Gaussian blur (between 0.1 and 2.0) and color jittering (0.4, 0.4, 0.4, 0.1). For text, the data is truncated to 77 tokens. Note that the branch used to construct the assignment matrix M uses no augmentations (i.e. resize to 256×256px, followed by center crop, resulting in a 224× 224px image). The thresholds were set to p_1 = 0.27, p_2=0.92, p_3=0.99, p'_1=0.24. Unless otherwise specified, the models are trained for 32 epochs with a batch size of 8,096 on 8 NVIDIA A100 GPUs. All of our models and training code are implemented using PyTorch <cit.>. §.§ Comparison with state-of-the-art Following recent work on vision-language pretraining <cit.>, we compare our method with state-of-the-art approaches for zero-shot classification and zero-shot retrieval. See supplementary material for linear probe evaluation. Zero-shot classification: For zero-shot classification evaluation, for the main setting, we select the common subset of datasets that facilitates a direct comparison with prior state-of-the-art. In particular, we evaluate our approach on the following datasets: CIFAR-10 <cit.>, CIFAR-100 <cit.>, Food101 <cit.>, Pets <cit.>, Flowers <cit.>, SUN397 <cit.>, Stanford Cars <cit.>, DTD <cit.>, Caltech101 <cit.>, FGVC-Aircraft <cit.> and ImageNet <cit.>. The evaluation is performed using the same prompt templates and class names as in prior work <cit.>. As the results from Tab. <ref> show, our approach outperforms all prior methods, improving by 6.2% in absolute terms on top of the previous best result of HiDeCLIP <cit.> (which benefits from a better architecture) when aggregated across 11 datasets. Notably, we set a new state-of-the-art result on ImageNet, too (51.1%). Finally, we significantly improve upon ALIP <cit.>, which also makes use of synthetic captions, outperforming it by 9.1%. For completeness, we also adhere to the protocol of pretraining a ResNet-50 on CC3M, and respectively, CC12M and then evaluating it for zero-shot classification on ImageNet. As the results from Tab. <ref> show, the same conclusions hold. Our method outperforms the previous best result by 3.1% on CC3M (30.3% vs 33.4%) and 3.0% on CC12M (44.4% vs 47.4%). See supplementary material for results on Open30M and Open70M. Zero-shot retrieval: Consistent with prior work, we evaluate our approach for zero-shot retrieval on Flickr-30k <cit.> and MS-COCO <cit.> reporting results in terms of R@{1,5,10} for both text and image retrieval. The results are summarized in Tab. <ref>. 
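The two image branches described above can be sketched with torchvision as follows. This is a sketch only: the blur kernel size and the probability of applying the blur are not specified in the text and are assumptions here.

```python
from torchvision import transforms

# Training branch (augmentation values quoted above).
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.GaussianBlur(23, sigma=(0.1, 2.0))], p=0.5),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.ToTensor(),
])

# Branch used to construct the assignment matrix M: deterministic, no augmentation.
mask_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
```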
As can be observed, our approach offers significant gains across all metrics and datasets used, improving on top of the prior state-of-the-art ALIP <cit.> by 14.8% and 18.7% in terms of R@1 on Flickr30k for text, and respectively, image retrieval. Similarly, we outperform the previous best result by 14.9% and 15.0% in terms of R@1 on MSCOCO for text and image retrieval. This highlights that our approach results in representations that can capture subtle and fine-grained details. § ABLATION STUDIES For our ablation studies, the results reported are produced using a ViT-B/16 model pretrained on the CC3M dataset. Effect of fixing incorrect negatives: Herein, we analyze the effectiveness of the proposed algorithm of Sec. <ref>. By analyzing the result from Tab. <ref>, we can observe consistent gains for all 3 cases of interest: a) when using the web-collected captions (+2.7% gain), b) when using one pseudo-caption (+3.5% improvement) and c) when using all available pseudo-captions at once (+1.8%). Overall, compared to the baseline accuracy of 18.6%, our approach improves by +14.3% (top-1 accuracy of 32.9%). The results show that our approach provides gains across all options considered. Effect of different components in Eq. <ref>: In Eq. <ref>, the constructed assignment matrix M is computed from three feature similarity matrices S_it, S_ii and S_tt. Herein, we evaluate the impact of each of these components. As the results from Tab. <ref> show, viewed independently, S_it is the most impactful, as it has a dual effect, both in terms of filtering incorrect pairs and of adjusting for semantically similar samples. Moreover, the results hold for both ground truth captions and pseudo-captions. Effect of batch text augmentation: Herein, we assess the impact of training with multiple pseudo-captions within the same batch, as described in Sec. <ref>. Tab. <ref> shows accuracy vs number of pseudo-captions used during training. As we can observe, increasing the number of captions increases the accuracy of the model, in line with expectations. As an additional baseline, we compare against a model trained by randomly sampling 1 out of 5 captions (as opposed to using them jointly as proposed in our work) on CC3M and YFCC-15M. On CC3M the performance drops by 1.5%, from 32.9% to 31.4%, while on YFCC-v2 from 51.1% to 44.1%. This further highlights the importance of the proposed batch text augmentation. Effect of image captioner: We also compare the effect of using two different state-of-the-art image captioners, OFA <cit.> and BLIP-2 <cit.>. As the results from Tab. <ref> show, both captioners lead to identical performance. Comparison with the supervised contrastive loss: To further validate the loss choice, we compare against a model trained with the supervised contrastive loss <cit.>. For a fair comparison, both models were trained using the same settings on CC3M. When evaluated for zero-shot classification on ImageNet, the supervised contrastive model achieved only 19.0% accuracy vs 21.3% achieved by our model. Note that similar results are obtained using an InfoNCE-based loss. This result empirically solidifies the arguments made in Sec. <ref>. § CONCLUSIONS In this work, we propose a new approach to vision-language pretraining based on multi-positive sample pairing that fixes incorrect negatives and addresses low caption quality. The latter is tackled by a newly introduced batch text augmentation strategy, in which multiple new positive pairs are concomitantly added via synthetic recaptioning. 
Departing from the typical contrastive loss, to enable efficient training under an arbitrary number of positives per sample, we propose to train the model with a sigmoid loss. In the process, we highlight the crucial role of noise and caption quality in vision-language pre-training, offering an in-depth analysis. All in all, we show large improvements over the current state-of-the-art method for both zero-shot image recognition (∼ +6% on average over 11 datasets) and retrieval (∼ +19% on Flickr30k and ∼ +15% on MSCOCO). § ADDITIONAL COMPARISONS WITH STATE-OF-THE-ART §.§ Zero-shot recognition on Open30M and Open70M datasets To further showcase the scalability of our approach, we follow <cit.>, pretraining our method on a combination of 4 publicly available datasets, dubbed Open30M (see Tab. <ref> for composition). The pretraining hyperparameters remain the same as for YFCC. Once trained, we evaluate it in a zero-shot manner on the same suite of 11 datasets. As the results from Tab. <ref> show, our approach outperforms all prior methods, improving upon the prior best result of <cit.> by +4.7% aggregated over 11 datasets, including by +3.1% on ImageNet. Finally, we extend the Open30M images dataset by adding RedCaps <cit.>, OpenImages-8M <cit.> and YFCC-v1, creating Open70M. As the results from Tabs. <ref> and <ref> show, our approach scales well, with consistent gains for both zero-shot retrieval and classification. §.§ Linear probe In addition to zero-shot evaluation, we also present linear probe results in <ref> for models pre-trained on YFCC15M and in <ref> for models pre-trained on Open30M. Similar to the zero-shot experiments, we use the repository[<https://github.com/LAION-AI/CLIP_benchmark>] to run these experiments. For each dataset, we cache the features of the training and test sets, and then use the training set's features and its ground-truth labels to train a linear layer on top. The linear layer is trained for 20 epochs using the standard cross-entropy loss and AdamW optimizer with a learning rate of 0.1, no weight decay, and a cosine learning rate scheduler. The trained linear layer is then used over the cached test features to obtain the accuracy. Similar to the zero-shot experiments, our approach outperforms previous methods by large margins, e.g., +7.0% with YFCC15M pretraining (<ref>) and +6.2% with Open30M pretraining over 11 image classification datasets. § ADDITIONAL ABLATION STUDIES Sensitivity to the threshold value: The selection of threshold values is intuitive, and the model is generally forgiving within a certain plateau of values. For S_tt and S_ii, they are simply set to high values to target nearly identical samples. For S_it, we start from the mean score of the positive pairs, which is 0.29, and explore a few adjacent values, noting that all values located in the same vicinity perform well as shown in <ref>. § ZERO-SHOT CLASSIFICATION PROMPTS For zero-shot recognition, we align with prior work <cit.>, using the same list of prompts. The full list is defined in Tab. <ref>. § ZERO-SHOT RETRIEVAL EVALUATION CONSIDERATIONS As the synthetic captions are generated by models pretrained on external data, a reasonable question to ask is whether there is potential data leakage. For the Flickr30k dataset, no such issues are present, as BLIP2 did not use any data from the training set of Flickr30k during any of its training phases. 
For MSCOCO, we note that only 100k out of 120M samples used for training BLIP2 were images from the COCO training set, hence the impact is likely minimal, if any. We note here that the current state-of-the-art method, ALIP, is subject to the same potential issue, as they also make use of synthetic captions produced by a model that was pre-trained on MSCOCO data (i.e. OFA).
http://arxiv.org/abs/2405.09151v1
20240515073115
Thermodynamics and kinetics of state switching for the asymptotically flat black hole in a cavity
[ "Ran Li", "Jin Wang" ]
gr-qc
[ "gr-qc", "cond-mat.stat-mech", "hep-th" ]
§ INTRODUCTION Since the discovery of Hawking radiation <cit.>, the thermodynamics of black holes has gained significant attention. It is generally believed that understanding the thermal features of black holes can help us construct a complete theory of quantum gravity. Phase transitions are an important topic in the field of black hole thermodynamics. Great efforts have been made to understand the thermodynamics of black hole phase transitions. In this respect, two important examples are the Hawking-Page phase transition <cit.> and the Van der Waals type phase transition of the charged AdS black holes <cit.>. Particularly in recent years, there has been considerable interest in the criticality and the phase transitions of asymptotically AdS black holes in the extended phase space <cit.>, which is achieved by treating the cosmological constant as the thermodynamic pressure <cit.>. It is noteworthy that these examples primarily focus on asymptotically AdS black holes. For asymptotically flat black holes, York <cit.> realized that Schwarzschild black holes placed inside a spherical cavity at finite radius can constitute a statistical ensemble, with the cavity playing the role of the thermal reservoir. In this approach, the system has two stationary points that correspond to two branches of Schwarzschild black holes <cit.>. It is shown that the phase transition of Schwarzschild black holes in a cavity is similar to the Hawking-Page phase transition for the Schwarzschild AdS black holes <cit.>. Using York's approach, the Reissner-Nordström black hole enclosed by a cavity in the grand canonical ensemble, where the temperature and the electric potential are specified at the boundary, was further discussed in <cit.>. The phase transition and critical behavior of the charged black holes in a cavity were later studied in <cit.>, where it is found that there is a first-order phase transition that terminates at a critical point. These behaviors are closely analogous to the phase behaviors of the charged AdS black holes <cit.>. Note that York's approach has also been generalized to study the thermodynamics and the phase transitions of asymptotically AdS and dS black holes <cit.>, as well as the black brane configurations in string theory <cit.>, black holes in Gauss-Bonnet gravity <cit.>, hairy black holes in a cavity <cit.>, Born-Infeld-de Sitter black holes <cit.>, black holes in Teitelboim-Jackiw gravity <cit.>, and black holes in higher dimensions <cit.>. Within York's approach, the gravitational action is calculated by only imposing the Hamiltonian constraint condition on the spacetime geometry, which gives rise to the generalized free energy <cit.>. In this regard, not only the on-shell black holes but also the off-shell black holes are taken into account to formulate the thermodynamic ensemble. The off-shell black holes are the spacetime configurations that can be reached via thermal fluctuations, and the state switching process can occur by passing through such off-shell configurations. Therefore, York's approach to the thermodynamics of black holes also provides a natural way to study the black hole state switching process. 
In this paper, we will study the kinetics of state switching for the asymptotically flat black holes enclosed by a cavity in terms of the free energy landscape framework <cit.>. We give a rigorous derivation of the generalized free energy for the black hole with both electric charge and magnetic charge from the gravitational action by using York's approach, where the temperature on the cavity and the charges in the cavity are kept as fixed parameters. It is shown that the gravitational action is evaluated only by imposing the Gauss-law constraint and the gravitational Hamiltonian constraint. The full Einstein equations are not required to be satisfied, which admits the existence of the off-shell spacetime configurations. By quantifying the corresponding free energy landscape, we study the phase transitions and phase structures for the black holes in a cavity, which reveals a Hawking-Page type transition for the uncharged black hole and a Van der Waals type transition for the charged black hole. We further assume that the dynamics of black hole state switching is determined by the Langevin equation, where the gradient force and the stochastic force originate from the free energy landscape and the thermal noise, respectively <cit.>. For calculating the kinetic times that characterize the black hole state switching process, we derive a recurrence relation for the n-th moment of the first passage time distribution function. This enables analytical expressions for the kinetic times, characterized by the mean first passage time (MFPT) and its fluctuation. The numerical results for the kinetic times as functions of the ensemble temperature are calculated from the analytical expressions. Our analysis illustrates that the kinetics of black hole state switching is determined by the ensemble temperature and the barrier height on the free energy landscape. This paper is arranged as follows. In Sec.<ref>, we briefly review the thermodynamics of the asymptotically flat black hole with both the electric charge and the magnetic charge. In Sec.<ref>, we obtain the generalized free energy of the black hole-cavity system by evaluating the Euclidean gravitational action. In Sec.<ref>, we discuss the phase transitions and the phase diagrams for the uncharged black holes as well as the charged black holes. In Sec.<ref>, we first discuss how to describe the dynamical process of the black hole state switching, then derive the analytical expressions of the mean first passage time and its fluctuation, and finally present the numerical results for the kinetic times. The conclusion and discussion are presented in the last section. § ASYMPTOTICALLY FLAT CHARGED BLACK HOLE In this section, we briefly discuss the geometry and the thermodynamics of the asymptotically flat black holes with the electric and the magnetic charges. This type of black hole is usually called the dyonic black hole, which was first proposed by Carter in <cit.> (this paper was reprinted in <cit.>). These black holes are solutions to the Einstein field equations of general relativity coupled with the electromagnetic field when the existence of magnetic monopoles is taken into account. In the framework of string theory and supergravity, higher-dimensional dyonic black holes have been extensively investigated <cit.>. In the present work, we are interested in the four-dimensional dyonic black hole, whose metric is given by ds^2=-f(r)dt^2+1/f(r)dr^2+r^2(dθ^2+sin^2θ dϕ^2) . The blackening factor f(r) is given by f(r)=1-2M/r+(Q^2+P^2)/r^2 . 
Compared to the Reissner-Nordström black hole, the spatial component of the gauge potential is turned on to take the magnetic charge into account. The electromagnetic gauge potential is given by A=-Q/r dt-P cosθ dϕ , with Q and P being the electric and the magnetic charges respectively. The event horizon r_+ is determined by the largest root of the equation f(r)=0, which gives the analytical expression r_+=M+√(M^2-Q^2-P^2) . In terms of the horizon radius r_+, the mass, the Hawking temperature and the entropy of the dyonic black hole are given by M = (r_+/2)(1+(Q^2+P^2)/r_+^2) , T_H = (r_+^2-(Q^2+P^2))/(4π r_+^3) , S = π r_+^2 . It is also easy to check that these quantities satisfy the thermodynamic first law dM=T_H dS+Φ_Q dQ+Φ_P dP , with Φ_Q=Q/r_+ and Φ_P=P/r_+ being the electric and magnetic potentials at the horizon. Furthermore, they satisfy the Smarr relation M=2T_H S+Φ_Q Q+Φ_P P . These equations summarize the basic thermodynamic properties of the dyonic black holes. In the next section, we will employ York's approach <cit.> to study the thermodynamics of the dyonic black hole enclosed by a cavity. § GENERALIZED FREE ENERGY FROM GRAVITATIONAL PATH INTEGRAL In this section, we derive the generalized free energy for the dyonic black holes enclosed in a cavity by using the Euclidean gravitational path integral method <cit.>. We start with the general form of the static spherically symmetric black hole metric in Euclidean signature <cit.> ds^2=b^2dτ^2+a^-2dr^2+r^2(dθ^2+sin^2θ dϕ^2) , where τ=it is the Euclidean time to preserve the positivity of the metric, and b and a are functions of the radial coordinate r. The Euclidean time τ is assumed to have the period 2π. The event horizon is given by r=r_+. The spacetime manifold ℳ described by the metric (<ref>) is bounded by a boundary ∂ℳ, which is represented by a cavity located at r=r_B. Therefore, the radial coordinate r is bounded by r_+≤ r ≤ r_B. In the present work, we will consider the canonical ensemble, where the temperature of the cavity is fixed. In addition, the electric and the magnetic charges enclosed in the cavity are also kept fixed. In this sense, the cavity plays the role of the thermal bath. According to the argument made in <cit.>, the inverse temperature of the cavity β is given by the proper length of the time coordinate at the boundary r=r_B, β=∫_0^2π b(r_B) dτ=2π b(r_B) , which is just the inverse temperature of the event horizon redshifted to the cavity. The "center" or inner boundary of the geometry is at the event horizon r=r_+ of the Euclidean black hole. Since there is a horizon, we must have b(r_+)=0 . The inner boundary of the Euclidean geometry is also required to be regular. Thus, each τ-r plane should be a two-dimensional disk in the product manifold. By calculating the Euler characteristic number of the τ-r sector of the metric Eq.(<ref>) in Appendix <ref>, we obtain the regularity condition (ab')|_{r=r_+}=1 . In addition, to distinguish the "hot flat space" with the topology S^1× R^3 from the black hole topology D^2× S^2, we calculate the Euler characteristic number of the four-dimensional manifold with the boundary in Eq.(<ref>) in Appendix <ref>, which gives us another boundary condition at r=r_+, namely a(r_+)=0 . These boundary conditions are adequate to determine the generalized free energy of the dyonic black hole in a cavity while admitting fluctuations beyond the stationary black hole state. 
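As a quick consistency check of the thermodynamic relations reviewed above, the first law and the Smarr relation can be verified numerically. The following minimal Python sketch (function names are illustrative; units with G=c=1) does so:

```python
import math

def dyonic_thermo(r_plus, Q, P):
    """Mass, Hawking temperature, entropy and potentials of the dyonic black hole."""
    q2 = Q**2 + P**2
    M = 0.5 * r_plus * (1.0 + q2 / r_plus**2)              # mass
    T_H = (r_plus**2 - q2) / (4.0 * math.pi * r_plus**3)   # Hawking temperature
    S = math.pi * r_plus**2                                # Bekenstein-Hawking entropy
    return M, T_H, S, Q / r_plus, P / r_plus               # (..., Phi_Q, Phi_P)

# The Smarr relation M = 2 T_H S + Phi_Q Q + Phi_P P holds identically:
M, T_H, S, Phi_Q, Phi_P = dyonic_thermo(1.0, 0.3, 0.2)
assert abs(M - (2.0 * T_H * S + Phi_Q * 0.3 + Phi_P * 0.2)) < 1e-12
```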
With the boundary ∂ℳ fixed, the appropriate Euclidean action for the Einstein-Maxwell theory in the canonical ensemble is given by <cit.> I_E=I_bulk+I_surf , with I_bulk = -1/16π∫_ℳd^4x√(g)( R-F_μνF^μν) , I_surf = -1/8π∫_∂ℳ d^3 x√(h)(K-K_0+2n_μ F^μν A_ν) . Here, h_μν=g_μν-n_μn_ν is the induced metric on the boundary ∂ℳ, where n is an outward pointing unit normal vector to ∂ℳ. In the Gibbons-Hawking boundary term I_surf, the trace of the extrinsic curvature is defined by K=∇^μn_μ. The Gibbons-Hawking surface term ensures a well-posed variational problem, while the surface term for the electromagnetic field preserves the fixed-charge boundary condition at the boundary. We first evaluate the gravitational part of the action. The results for the scalar curvature and the extrinsic curvature are given by R = -2[((r a^2)'-1)/r^2+(a/(r^2 b))(r^2 a b')'] , K = a(2/r+b'/b) , K_0 = 2/r , where the prime represents the derivative with respect to the radial coordinate r. Performing the integration, we can get the gravitational part of the action as -1/16π∫_ℳd^4x√(g) R -1/8π∫_∂ℳ d^3 x√(h)(K-K_0) = π∫_r_+^r_B dr (b/a)((ra^2)'-1 )+ 2π r_B b(r_B) -2π r_B a(r_B) b(r_B)-π r_+^2 . For the electromagnetic part, we consider the Maxwell equation ∇_μ F^μν=0 , with the gauge potential ansatz A=A_τ(r) dτ+ A_ϕ(θ) dϕ . Here, A_τ(r) represents the electric part generated by the electric charge, while A_ϕ(θ) represents the magnetic part generated by the magnetic charge. The Maxwell equation results in two Gauss constraints, (r^2 (a/b) A_τ' )'=0 , ∂_θ(1/sinθ∂_θ A_ϕ)=0 . The first equation can be integrated once to give r^2 (a/b) A_τ'=-iQ , where Q is the electric charge enclosed in the cavity. The second equation gives the solution to the magnetic part of the gauge potential as A_ϕ=P(1-cosθ) , where P is the magnetic charge in the cavity and the constant term is introduced to preserve regularity at θ=0. Evaluating the electromagnetic part, we get the action as 1/16π∫_ℳd^4x√(g) F_μνF^μν-1/4π∫_∂ℳ d^3 x√(h) n_μ F^μν A_ν = π∫_r_+^r_Bdr (b/a)(P^2/r^2-Q^2/r^2)+2π i Q A_τ(r_B) = π∫_r_+^r_Bdr (b/a)(P^2/r^2+Q^2/r^2) , where we have used the fact that A_τ(r_B)=-i∫_r_+^r_B dr (b/a)(Q/r^2) . Here, A_τ(r_+) is set to zero in order to preserve the regularity of the norm of the gauge potential at the horizon. Next, we consider the gravitational Hamiltonian constraint, i.e. the τ-τ component of the Einstein field equation, G^τ_τ-8π T^τ_τ=0 , which gives us the equation (ra^2)'-1+(Q^2+P^2)/r^2=0 . The solution to this equation is given by a^2(r)=1-C/r+(Q^2+P^2)/r^2 , where C is the integration constant. Imposing the boundary condition at the horizon given in Eq.(<ref>), we have a^2(r)=(1-r_+/r)(1-(Q^2+P^2)/(r_+ r)) . By substituting Eq.(<ref>) into Eq.(<ref>), one can see that the first term in the gravitational action exactly cancels the electromagnetic action, which gives the final result I_E = 2π r_B b(r_B) -2π r_B a(r_B) b(r_B)-π r_+^2 = β r_B(1-√((1-r_+/r_B)(1-(Q^2+P^2)/(r_+ r_B))))-π r_+^2 . This in turn gives us the generalized free energy of the black hole-cavity system as F=I_E/β=r_B(1-√((1-r_+/r_B)(1-(Q^2+P^2)/(r_+ r_B))))-π T r_+^2 , where T=1/β is the ensemble temperature on the cavity. This expression for the generalized free energy resembles the form E-TS, where the quasilocal energy plays the role of the energy. When P=0, this result recovers that for the charged Reissner-Nordström black hole in a cavity in the canonical ensemble <cit.>. 
In deriving this result, we have used the Gauss-law constraints (<ref>) for the electromagnetic field and the Hamiltonian constraint (<ref>) for the gravitational field. However, the full Einstein equations are not required to be satisfied. The metric function a(r) and the ϕ component of the gauge potential A_ϕ are specified by Eq.(<ref>) and Eq.(<ref>) respectively, while the metric function b(r) and the τ component of the gauge potential A_τ are not known. Therefore, the metric function b(r) is an arbitrary function of the radial coordinate r, which satisfies the fixed boundary conditions at r=r_+ and r=r_B. In particular, near r=r_+, we have the asymptotic expansion for b(r) as b(r)=(2r_+^2/√(r_+^2-(Q^2+P^2)))(1-r_+/r)^{1/2}+λ(1-r_+/r)^{3/2}+⋯ , which is not consistent with the dyonic black hole metric given by Eqs.(<ref>) and (<ref>) due to the presence of the higher-order terms. This inconsistency can be regarded as a kind of quantum fluctuation near the stationary dyonic black hole solution. In this sense, the generalized free energy given by Eq.(<ref>) is considered to describe the fluctuating black hole as well. § THERMODYNAMICS OF PHASE TRANSITIONS FOR THE ASYMPTOTICALLY FLAT BLACK HOLES IN CAVITY §.§ Charged dyonic black holes The solutions for a dyonic black hole in a cavity as a reservoir are obtained by finding the stationary points of the generalized free energy. It is convenient to introduce dimensionless variables by using the radius of the cavity as the characteristic length, ℱ=F/r_B , x=r_+/r_B , q=Q/r_B , p=P/r_B , 𝒯=4π r_B T . Then the rescaled generalized free energy is given by ℱ=1-√((1-x)(1-(q^2+p^2)/x))-(1/4)𝒯 x^2 . In this expression, the electric charge q, the magnetic charge p and the cavity temperature 𝒯 are fixed by the canonical ensemble. The only free parameter is the horizon radius x, which is bounded by [q^2+p^2,1]. Extremizing the generalized free energy with respect to x gives us 𝒯=(1-(q^2+p^2)/x^2)/(x√((1-x)(1-(q^2+p^2)/x))) . This equation gives the equilibrium state condition for the dyonic black hole with the cavity <cit.>. By solving this equation for the fixed parameters q, p and 𝒯 of the canonical ensemble, the equilibrium dyonic black hole is determined by its horizon radius x. We plot the temperature 𝒯 as a function of the black hole radius x in Figure <ref>. In general, it is a non-monotonic function. The intersections of the 𝒯(x) curve with a line of constant 𝒯 give the stationary points of the generalized free energy function. There is a temperature range [𝒯_min,𝒯_max] in which there are three stationary points for the system. For the case plotted in Figure <ref>, we have 𝒯_min=2.46 and 𝒯_max=2.74. In the following, we will show that this temperature range shrinks to zero when the electric charge q is increased. Now, by keeping the magnetic charge p as a fixed parameter, we want to show that there is a line of first order phase transitions in this region that terminates at a critical point in the (𝒯,q) plane. We will show that this critical point is the location of a second order phase transition. In Figure <ref>, we have plotted the free energy landscapes at different temperatures. It shows that when 𝒯<𝒯_min or 𝒯>𝒯_max, the landscape has a single-well shape, which means there is only one stationary point on it. When 𝒯_min<𝒯<𝒯_max, the landscape has a double-well shape, which means there are three stationary points on it. 
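The rescaled landscape and its stationary points are easy to explore numerically. The following is a minimal sketch; the parameter values q=0.15 and p=0.1 match those used in the kinetics section below, and all names are illustrative:

```python
import numpy as np

def free_energy(x, T, q=0.15, p=0.1):
    """Rescaled generalized free energy F(x) at fixed ensemble temperature T."""
    c = q**2 + p**2
    return 1.0 - np.sqrt((1.0 - x) * (1.0 - c / x)) - 0.25 * T * x**2

def stationary_T(x, q=0.15, p=0.1):
    """T(x) from dF/dx = 0; intersections with T = const are the stationary points."""
    c = q**2 + p**2
    return (1.0 - c / x**2) / (x * np.sqrt((1.0 - x) * (1.0 - c / x)))

# Count the stationary points at T = 2.6 via sign changes of stationary_T(x) - T:
x = np.linspace(0.05, 0.999, 20000)   # order parameter inside (q^2 + p^2, 1)
crossings = np.nonzero(np.diff(np.sign(stationary_T(x) - 2.6)))[0]
print(len(crossings))                 # 3 crossings: the double-well regime
```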
In addition, as the temperature increases, the small black hole state that is globally stable at low temperature becomes only locally stable at high temperature. This implies that there exists a first order phase transition between the small black hole and the large black hole. In general, the temperature range [𝒯_min,𝒯_max] closes up as the charge q increases. This implies a second order phase transition at the end point. The simplest way to show this is to take the first order and the second order derivatives of the generalized free energy ℱ with respect to the black hole radius x. Recall that at the critical point, they are all equal to zero. Combining them to eliminate the temperature 𝒯, it can be shown that the resulting equation is given by 3 x^4 -2 (p^2+q^2+1) x^3 -6 (p^2+q^2) x^2 +6 (p^2+q^2) (p^2+q^2+1) x-5 (p^2+q^2)^2=0 . The condition that this equation has two equal roots is that the discriminant of the above algebraic equation is zero, which is given by 6912 (p^2+q^2-1)^4 (p^2+q^2)^3 (p^4+2 p^2 (q^2-9)+q^4-18 q^2+1)=0 . The solution to the above equation gives the critical value of the electric charge, which is given by q_c=√(9-4 √(5)-p^2) . It is clear that this critical value depends on the magnetic charge p. Substituting this critical value into Eq.(<ref>), one can get the critical value for the black hole radius as x_c=5-2√(5). The corresponding critical temperature is 𝒯_c=(2/25)(5+2 √(5))^{3/2}. The phase diagram in the 𝒯-q parameter space is presented in Figure <ref>. The first order phase transition is determined by the condition of equal free energies of the two basins on the free energy landscape. By numerically solving this condition, one can get the red line on the phase diagram, which is also the coexistence curve of the small and the large black holes. In the light yellow region, the small black hole is thermodynamically stable, while in the light pink region, the large black hole is stable. In the region between the blue and the green lines, the free energy landscape has a double-well shape, while outside of this region, there is only a single well. Finally, we point out that, in contrast with the Van der Waals type phase transition for the charged AdS black holes, the first order phase transition here happens when the temperature is greater than the critical temperature 𝒯_c. §.§ Uncharged black holes For the uncharged case, we take P=Q=0. The corresponding generalized free energy is then given by ℱ=1-√(1-x)-(1/4)𝒯 x^2 . The horizon radius x is in the range [0,1]. The free energy landscapes for the uncharged black hole at different ensemble temperatures are plotted in Figure <ref>. Above the temperature 𝒯_min=2.598, the free energy landscape has two branches of uncharged stationary black hole solutions, one of which has a larger horizon radius and the other a smaller radius. Besides the black hole solutions, the system also admits the thermal vacuum solution without an event horizon, i.e. the x=0 solution denoted by the origin. At the temperature 𝒯_c=3.375, the thermal vacuum solution has equal free energy with the large black hole solution, which signals a phase transition between them. The behavior of the free energy landscapes with changing ensemble temperature resembles that of the Hawking-Page phase transition discussed in <cit.>. 
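A short numerical check of these critical values is immediate from the stationarity relation derived above (a sketch; variable names illustrative):

```python
import math

p = 0.1
q_c = math.sqrt(9.0 - 4.0 * math.sqrt(5.0) - p**2)      # critical electric charge
x_c = 5.0 - 2.0 * math.sqrt(5.0)                        # critical horizon radius
T_c = (2.0 / 25.0) * (5.0 + 2.0 * math.sqrt(5.0))**1.5  # critical temperature

# At criticality q_c^2 + p^2 = 9 - 4*sqrt(5), so T(x_c) from the stationarity
# condition must reproduce T_c:
c = q_c**2 + p**2
T_at_xc = (1.0 - c / x_c**2) / (x_c * math.sqrt((1.0 - x_c) * (1.0 - c / x_c)))
assert abs(T_at_xc - T_c) < 1e-12   # agreement up to floating-point roundoff
print(q_c, x_c, T_c)                # ~0.2138, ~0.5279, ~2.3322
```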
From Figure <ref>, one can see that when 𝒯<𝒯_min, the free energy landscape has only one global minimum at the origin and the system is stable in the thermal Minkowski spacetime enclosed by a cavity. When 𝒯_min<𝒯<𝒯_c, the thermal vacuum is still globally stable although new black hole phases emerge. Above the phase transition temperature 𝒯>𝒯_c, the large black hole becomes globally stable. At the phase transition temperature, the thermal vacuum and the large black hole can coexist with the same free energies. § KINETICS OF THE STATE SWITCHING FOR THE ASYMPTOTICALLY FLAT BLACK HOLES IN CAVITY §.§ Kinetic equation and mean first passage time We now discuss the effective theory of the kinetics of the state switching for the asymptotically flat black holes in a cavity in terms of stochastic dynamics. The underlying assumption is that, due to the thermal fluctuations from the reservoir, a locally stable black hole located in one basin on the free energy landscape can switch its state to another locally stable state located in another basin. It is also assumed that the state switching process can be described, at a coarse-grained level, by a stochastic process of the black hole order parameter, where the generalized free energy plays the role of the thermal potential <cit.>. In analogy to the motion of a Brownian particle, the kinetics of the state switching for the asymptotically flat black hole in a cavity can be described by the Langevin equation that governs the stochastic evolution of the order parameter x: ẍ+ζẋ +ℱ'(x) -η̃(t)=0 , where the dot denotes the derivative with respect to the time t. The effective friction coefficient ζ is introduced to describe the interaction between the black hole and its reservoir. The stochastic noise η̃(t) is Gaussian white noise. In addition, we also assume that the kinetics of the state switching for the black hole is Markovian <cit.>. In the overdamped regime, the Langevin equation can be simplified as ẋ= -(1/ζ)ℱ'(x) +η(t) , where η(t)=η̃(t)/ζ is introduced for convenience. The noise is assumed to be Gaussian white noise with zero mean, satisfying the fluctuation-dissipation relation ⟨η(t)η(t') ⟩=2Dδ(t-t') , where D=𝒯/ζ is the diffusion coefficient. Equation (<ref>) is just the overdamped Langevin equation that describes the stochastic motion of a Brownian particle in an external potential. We now assume that there is a large number of black hole configurations in the thermodynamic ensemble. The probability distribution of the states (the on-shell solutions as well as the off-shell solutions on the free energy landscape) is denoted by ρ(x,t). Then the time evolution of the probability distribution is described by the Fokker-Planck equation, which is given by ∂ρ(x,t)/∂ t=D ∂/∂ x{ e^-β̃ℱ(x)∂/∂ x[e^β̃ℱ(x)ρ(x,t)] }=𝒟ρ(x,t) , where the inverse temperature is defined as β̃=1/𝒯 and 𝒟 is the Fokker-Planck operator. Without loss of generality, we take ζ=1 in the following. As shown in the last section, we know that the free energy landscape as a function of the order parameter x exhibits a double-well shape when the temperature lies in the range [𝒯_min, 𝒯_max]. Due to the thermal fluctuations, one locally stable black hole state in a potential well can make a transition to another locally stable state by going through the potential barrier (refer to Figure <ref> for an illustration of a typical free energy landscape for such a state switching process). 
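The overdamped dynamics is straightforward to simulate with an Euler-Maruyama discretization. The following sketch assumes ζ=1 and, for simplicity, handles the edges of the allowed range of x with a crude clipping rule standing in for reflecting boundaries (all names illustrative):

```python
import numpy as np

def simulate_overdamped(F_prime, x0, T, x_lo, x_hi, dt=1e-5, n_steps=10**6, seed=0):
    """Euler-Maruyama integration of dx = -F'(x) dt + sqrt(2 T dt) * xi (zeta = 1)."""
    rng = np.random.default_rng(seed)
    x = x0
    traj = np.empty(n_steps)
    sigma = np.sqrt(2.0 * T * dt)        # noise amplitude, D = T/zeta = T
    for i in range(n_steps):
        x = x - F_prime(x) * dt + sigma * rng.standard_normal()
        x = min(max(x, x_lo), x_hi)      # crude reflecting boundaries
        traj[i] = x
    return traj
```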
This state switching process can be properly characterized by the MFPT, which gives the average time scale for this stochastic event to take place for the first time <cit.>. We now study the kinetics of the black hole state switching by computing the MFPT. As an example, we consider the state switching process from the small black hole to the large black hole. Imagine that there is a cloud of initial black hole states located in the left well of the landscape. A black hole state will be removed from the system if it crosses the intermediate black hole state at the landscape barrier for the first time. This can be done by imposing an absorbing boundary condition at x_m. Then the solution to the Fokker-Planck equation in the range x_min≤ x≤ x_m, with x_min=q^2+p^2 being the lower bound of the order parameter, can be formally given by ρ(x,t)=e^t𝒟δ(x-x_0) , where the δ-function gives the initial condition with x_min≤ x_0≤ x_m. Due to the absorbing boundary condition, this probability is not conserved but decays to zero at very late times. Define Σ(x_0,t) to be the probability that the initial state has not made a first passage by time t. Then it is given by Σ(x_0,t)=∫_x_min^x_mρ(x,t) dx= ∫_x_min^x_m e^t𝒟δ(x-x_0) dx . This also vanishes at very late times. Note that the first passage time is a random variable. Its distribution function is given by P_F(t)=-dΣ(x_0,t)/dt . The n-th moments of the FPT distribution function can be calculated from the following relation ⟨ t^n ⟩=∫_0^∞ t^n P_F(t) dt=-∫_0^∞ t^n dΣ(x_0,t)/dt dt=n∫_0^∞ t^n-1Σ(x_0,t) dt , where n≥ 1. By using the formal solution of the Fokker-Planck equation, it can be further expressed as ⟨ t^n ⟩ = n∫_0^∞ dt  t^n-1∫_x_min^x_m e^t𝒟δ(x-x_0) dx = n∫_0^∞ dt  t^n-1∫_x_min^x_mδ(x-x_0) (e^t𝒟^† 1) dx = n∫_0^∞ dt  t^n-1(e^t𝒟^† 1) . In deriving this result, we have used the adjoint operator 𝒟^†=D e^β̃ℱ(x)∂/∂ x{ e^-β̃ℱ(x)∂/∂ x} , which is defined by ∫ dx ϕ(x)𝒟ψ(x)=∫ dx ψ(x)𝒟^†ϕ(x) . Note that in Eq.(<ref>), the exponential of the adjoint operator acts on the constant function 1. Because we have performed the integration over x with the delta function, the result shows that ⟨ t^n ⟩ is inherently a function of x_0. Then we can drop the subscript "0" and treat ⟨ t^n ⟩ as a function of x. Now, applying the adjoint operator to Eq.(<ref>), one can get the following recurrence relation: 𝒟^†⟨ t^n ⟩ = -1 for n=1, and 𝒟^†⟨ t^n ⟩ = -n ⟨ t^{n-1}⟩ for n≥ 2. When n=1, integrating the adjoint equation 𝒟^†⟨ t ⟩=-1 and imposing the reflecting boundary condition at x_min, one can get the analytical expression of the MFPT for the state switching from the small black hole to the large black hole as ⟨ t ⟩ = (1/D)∫_x_s^x_mdx ∫_x_min^xdx' e^β̃(ℱ(x)-ℱ(x')) . Similarly, the analytical expression of the MFPT for the state switching from the large black hole to the small black hole is given as ⟨ t ⟩ = (1/D)∫_x_m^x_ldx ∫_x^1dx' e^β̃( ℱ(x)-ℱ(x')) . Note that in deriving this expression, we have also imposed the reflecting boundary condition at the upper bound of the order parameter x=1. We can also derive the analytical expression of ⟨ t^2 ⟩ from the recurrence relation. For the state switching from the small black hole to the large black hole, it is given by ⟨ t^2 ⟩ = (2/D^2)∫_x_s^x_mdx ∫_x_min^xdx' ∫_x'^x_mdx”∫_x_min^x”dx”' e^β̃[ℱ(x)-ℱ(x')+ℱ(x”)-ℱ(x”')] . For the inverse process, it is given by ⟨ t^2 ⟩ = (2/D^2)∫_x_m^x_ldx ∫_x^1dx' ∫_x_m^x'dx”∫_x”^1dx”' e^β̃[ℱ(x)-ℱ(x')+ℱ(x”)-ℱ(x”')] . 
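The double integral for the MFPT is easy to evaluate numerically. A minimal sketch using SciPy follows (names illustrative; D = 𝒯 since ζ = 1):

```python
import numpy as np
from scipy.integrate import dblquad

def mfpt_small_to_large(F, T, x_min, x_s, x_m):
    """<t> = (1/D) * int_{x_s}^{x_m} dx int_{x_min}^{x} dx' exp[(F(x)-F(x'))/T]."""
    D = T  # zeta = 1, so D = T
    integrand = lambda xp, x: np.exp((F(x) - F(xp)) / T)
    # dblquad: the inner variable xp runs from x_min to x for each outer x in [x_s, x_m]
    val, err = dblquad(integrand, x_s, x_m, lambda x: x_min, lambda x: x)
    return val / D
```

The MFPT for the inverse process and the second moments follow the same pattern with the integration limits adjusted as in the expressions above.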
These analytical expressions are also valid for the uncharged black hole case, in which x_min should be replaced by 0 and the integration domain should be correspondingly modified to match the state switching process from the thermal vacuum to the large black hole and its inverse process. §.§ Charged dyonic black holes We now present the numerical results of the kinetic times for the charged dyonic black holes. In the following, we set P=0.1 and Q=0.15. We are mainly interested in the dependences of the mean first passage time ⟨ t ⟩ and its relative fluctuation on the ensemble temperature. The relative fluctuation of the first passage time is defined as (⟨ t^2⟩-⟨ t⟩^2 )/⟨ t⟩^2, which reflects the relative deviation of the first passage time from its average value. The temperature range is taken to be [𝒯_min, 𝒯_max], in which the free energy landscape has a double-well shape. Note that the black hole state switching is essentially described by the uphill process of a particle on the free energy landscape. Therefore, the barrier height on the free energy landscape is closely related to the kinetics of the black hole state switching <cit.>. In Figure <ref>, we present the barrier heights ℱ(x_m)-ℱ(x_s) and ℱ(x_m)-ℱ(x_l) as functions of the ensemble temperature 𝒯. It shows that as the temperature increases, the barrier height ℱ(x_m)-ℱ(x_s) is monotonically decreasing while the barrier height ℱ(x_m)-ℱ(x_l) is monotonically increasing. In fact, the kinetics of the black hole state switching characterized by the mean first passage time is positively correlated with the barrier height. The behaviors of the barrier heights indicate that the mean first passage time for the state switching process from the small dyonic black hole to the large black hole is a monotonically decreasing function of the ensemble temperature, while the mean first passage time for the inverse process is a monotonically increasing function of the ensemble temperature. This conclusion can be explicitly observed from the numerical results presented in Figure <ref>, which shows the dependences of the mean first passage times on the ensemble temperature. There is another factor that can influence the kinetics of the black hole state switching process. The black hole state switching is described by the overdamped Langevin equation, which is essentially a diffusion process caused by the thermal fluctuations. These thermal fluctuations become stronger at higher temperatures. However, the behavior of the mean first passage time for the process from the large black hole to the small black hole indicates that the free energy barrier is the dominant factor that impacts the kinetic time of the state switching process, rather than the thermal fluctuations or the ensemble temperature. In Figure <ref>, we show the positive correlation between the mean first passage time and the barrier height explicitly. It can be concluded that the free energy well depths determine the thermodynamic stability, while the free energy landscape topography in terms of the barrier heights quantifies the mean first passage time. We also present the numerical results of the relative fluctuations of the first passage times for the two switching processes. They show completely opposite behaviors compared with the mean first passage times. For the small black hole to large black hole process, the relative fluctuation becomes larger at higher temperature, while for the inverse process, it is monotonically decreasing with the ensemble temperature. 
These results indicate that for the former process, the higher ensemble temperature and the smaller barrier height lead to a larger fluctuation, while for the latter process, the barrier height is the dominant factor. §.§ Uncharged black holes We now consider the kinetic times of state switching for the uncharged black holes. The temperature range is taken to guarantee that there are three emergent spacetime states on the free energy landscape, as shown in Figure <ref>. In Figure <ref>, we plot the barrier heights ℱ(x_s) and ℱ(x_s)-ℱ(x_l) as functions of the ensemble temperature. Note that in the present case, the small black hole is located at the potential barrier on the free energy landscape. Since the free energy of the thermal Minkowski spacetime is zero, the barrier height for the state switching process from the thermal Minkowski spacetime to the large black hole is given by the free energy of the small black hole. The dependences of the barrier heights on the ensemble temperature are similar to those for the charged dyonic black holes discussed previously. In Figures <ref> and <ref>, we present the numerical results for the kinetic times of the state switching processes between the thermal Minkowski spacetime and the uncharged large black hole. As discussed previously, although two factors can influence the kinetics of black hole state switching, the dominant factor for the mean first passage time is the corresponding barrier height. The plots in Figures <ref> and <ref> show a positive correlation between the mean first passage time and the barrier height. The relative fluctuations for the uncharged black holes, plotted in Figure <ref>, exhibit different temperature dependences from those of the charged dyonic black holes. For the state switching process from the thermal Minkowski spacetime to the large black hole, the relative fluctuation is a monotonically decreasing function of the ensemble temperature, although its variation range is very narrow. This observation indicates that in this process the relative fluctuation is dominated mainly by the barrier height on the landscape, and the influence of the ensemble temperature can be neglected. For the inverse process, the result presented in the right panel of Figure <ref> shows that the relative fluctuation is a monotonically decreasing function of the ensemble temperature and approaches a constant value at high temperature. Therefore, we can conclude that the two aforementioned factors, the ensemble temperature and the barrier height on the landscape, have no effect on the relative fluctuation of the state switching process from the large black hole to the thermal Minkowski spacetime when the temperature is very high. § CONCLUSION AND DISCUSSION In summary, we have employed the free energy landscape formalism to study the thermodynamics and the kinetics of state switching for the asymptotically flat black hole enclosed by a cavity. The generalized free energy for the black hole enclosed by a cavity, with both electric and magnetic charge, in the canonical ensemble is derived using York's approach, where the temperature on the cavity and the charges inside the cavity are kept as fixed parameters. In this approach, the Euclidean manifold is regular but still admits an arbitrary ensemble temperature.
It is shown that York's approach to black hole thermodynamics provides a natural way to study the kinetics of state switching for asymptotically flat black holes, since off-shell black holes are allowed to exist in the thermodynamic ensemble. Compared with the approach used in our previous work <cit.> on the generalized free energies for asymptotically AdS black holes, the introduction of an arbitrary ensemble temperature leads to a conical singularity at the event horizon of the Euclidean manifold. In that approach, the off-shell black holes refer to the Euclidean geometries with the conical singularity at the horizon. Although the specific metric used ensures that the Einstein equations are satisfied everywhere except at the conical singularity, the singularity itself represents a "delta source" of matter. One might intuitively speculate that such a geometry would not serve as a saddle point of the action functional. However, the argument presented in <cit.> refutes this intuition. It is noted that fixing the horizon radius when evaluating the action functional introduces a Lagrange multiplier and an associated energy density along the r = r_+ surface in the phase space. Consequently, the solution to the classical equations of motion derived from this action functional will inherently feature a conical singularity at the horizon. Therefore, as long as the constraint of a constant horizon radius is maintained, the Euclidean geometry with a conical singularity is indeed a stationary point of the action functional. Within this setup, the corresponding gravitational action for the asymptotically flat charged black hole is calculated in Appendix <ref>. This calculation demonstrates that the two approaches yield the same result once the redshift of the ensemble temperature is taken into account in the conical singularity approach. However, there is a fundamental conceptual difference between the two methods. In our approach <cit.>, the off-shell black holes on the free energy landscape are described by Euclidean geometries with a conical singularity. In contrast, in York's approach <cit.>, the off-shell black holes are geometries that do not satisfy the full Einstein equations but still meet certain gravitational or electromagnetic constraints. Therefore, the two approaches produce the same form of the generalized free energy landscape with completely different interpretations. The free energy landscape illustrates the potential pathways for gravitational phase transitions, with the off-shell black holes representing fluctuating configurations generated by thermal noise. The difference in the physical interpretation of the off-shell black holes can therefore be viewed as representing two distinct pathways through which the state switching processes occur. Based on the free energy landscape, we also discussed the phase transition thermodynamics of the charged dyonic black hole and the uncharged black hole, respectively. It is shown that the stability of the black hole can be quantified by the topography of the free energy landscape. We also obtained the phase diagrams for the black holes in a cavity, which reveal a Hawking-Page type transition for the uncharged black hole and a Van der Waals type transition for the charged black hole. Finally, we employed the stochastic Langevin equation and the corresponding Fokker-Planck equation to study the kinetics of the black hole state switching.
The first passage problem for the black hole state switching was addressed analytically, where a recurrence relation for the n-th moment of the first passage time distribution function was obtained. This enables analytical expressions for the kinetic times, characterized by the mean first passage time and its relative fluctuation. Our numerical analysis illustrates that the kinetics of black hole state switching is determined by the ensemble temperature and the barrier height on the free energy landscape. § EULER CHARACTERISTIC NUMBER FOR 2-DIMENSIONAL DISK In this appendix, we discuss how to obtain the regularity condition Eq.(<ref>) for the 2-dimensional disk by calculating its Euler characteristic number. The τ-r sector of the spacetime manifold ℳ is a 2-dimensional disk D, described by the metric ds_D^2=b^2 dτ^2+a^-2dr^2 . However, we should impose the regularity condition to guarantee that it is indeed a disk without a singularity. The disk is bounded by r=r_B, where the cavity is located. In two dimensions, the Euler characteristic number is given by χ=1/4π∫_D d^2x √(g_D) R_D+1/2π∫_∂ D ds k , where R_D is the scalar curvature of the disk D, k is the geodesic curvature of the boundary ∂ D and ds is the proper distance along the boundary. It is obvious that ds=b dτ. The geodesic curvature k is defined as k=t^α n_β∇_α t^β , where t^α and n^α are the tangent and normal vectors of the boundary. For our case, they are given by t^α=(1/b,0) , n_α=(0,-1/a) . It is then easy to obtain the geodesic curvature of the boundary as k=(ab'/b)|_r=r_B . Assuming that the disk is regular at the horizon r=r_+, one finds that the scalar curvature is given by R=-2(a/b)(ab')' . Then, the Euler number is given by χ = -∫_r_+^r_B dr (ab')'+(ab')|_r=r_B = (ab')|_r=r_+ . Since the Euler characteristic number of a two-dimensional disk is 1, we obtain the regularity condition at the horizon: (ab')|_r=r_+=1 . This is just the inner boundary condition given in Eq.(<ref>). § EULER CHARACTERISTIC NUMBER FOR 4-D MANIFOLD WITH BOUNDARY In this appendix, we briefly present the result for the Euler number of the four-dimensional manifold ℳ with boundary, which in turn gives the boundary condition Eq.(<ref>). For a 4-dimensional Riemannian manifold with boundary, the Gauss-Bonnet-Chern formula for the Euler characteristic number is <cit.> χ=∫_ℳΩ +∫_∂ℳ n^*Π . The bulk term is given by ∫_ℳΩ = 1/32π^2∫_ℳ d^4 x √(g)(R_μνλρR^μνλρ-4R_μνR^μν+R^2) , i.e., the so-called Gauss-Bonnet term. For the four-dimensional metric given by Eq.(<ref>), the Gauss-Bonnet term takes the compact form R_μνλρR^μνλρ-4R_μνR^μν+R^2 =8a/(r^2 b) (ab'(-1+a^2))' . Then, performing the integration, the bulk term is ∫_ℳΩ = 2 ∫_r_+^r_B dr (ab'(-1+a^2))' = 2(1-a^2)|_r=r_+-2ab'(1-a^2)|_r=r_B , where the regularity condition given in Eq.(<ref>) was used in the last step. The boundary term is given by ∫_∂ℳ n^*Π = 1/4π^2∫_∂ℳ d^3 x√(h)(R_ijklK^ikn^jn^l-R_ijK^ij-K R_ijn^in^j +1/2 KR+1/3K^3-KTr(K^2)+2/3Tr(K^3)) , where K_ij and n^i are the extrinsic curvature and the normal vector of the boundary ∂ℳ. It should be noted that the final result is independent of the orientation of the normal vector n^i. One can check that the integrand in the above equation equals (1-a^2)ab'/(r^2 b). The boundary term can then be calculated as ∫_∂ℳ n^*Π=2ab'(1-a^2)|_r=r_B , which exactly cancels the last term in the bulk integral.
Therefore, the Euler characteristic number for the 4-dimensional manifold with boundary is given by χ= 2(1-a^2)|_r=r_+ . For our case, the 4-dimensional manifold is just the product manifold of a 2-dimensional disk and a 2-dimensional sphere. Using the formula χ(ℳ)=χ(D)×χ(S^2)=2, one obtains the condition on the metric function a: a(r_+)=0 . This is just the inner boundary condition given in Eq.(<ref>). § GRAVITATIONAL ACTION FOR THE EUCLIDEAN BLACK HOLE WITH CONICAL SINGULARITY In this appendix, we evaluate the gravitational action (<ref>) for the Euclidean geometry of the charged dyonic black hole in the canonical ensemble. Introducing the Euclidean time τ=it and imposing the arbitrary period β̃ in τ, the Euclidean geometry of the dyonic metric (<ref>) has a conical singularity at the event horizon. Noting that the energy-momentum tensor of the electromagnetic field in four dimensions is traceless, the Ricci scalar vanishes for the charged Euclidean geometry away from the conical singularity. The bulk action comes from the contributions of the conical singularity <cit.> and of the electromagnetic field, and is given by I_bulk = -A/4(1-β̃/β_H)+β̃/2(P^2-Q^2)(1/r_+-1/r_b) = -A/4+β̃(M/2-Q^2/r_+)-β̃/2(P^2-Q^2)/r_b , where β_H=1/T_H is the inverse Hawking temperature and A=4π r_+^2 is the horizon area. Here, we have used the fact that in Euclidean signature F^2=2(P^2-Q^2 )/r^4. In the last step, we have also used the Smarr relation (<ref>) to simplify the expression. The surface terms can be evaluated as I_surf=-β̃ r_b (1-√(f(r_b))) +3/2β̃ M-3/2β̃Q^2/r_b-1/2β̃P^2/r_b+β̃Q^2/r_+ , where a gauge transformation is introduced to preserve the regularity of the potential <cit.>. The total Euclidean action is then given by I_E = -A/4+2β̃ M -β̃ r_b (1-√(f(r_b)))-β̃(Q^2+P^2)/r_b = β̃r_b √(f(r_b))(1-√(f(r_b)))-π r_+^2 . If the redshifted inverse temperature is introduced as β=β̃√(f(r_b)) , the result (<ref>) is shown to be equivalent to that in Eq.(<ref>). Therefore, York's approach and the conical singularity approach produce the same result.
http://arxiv.org/abs/2405.08792v1
20240514174107
Towards Enhanced RAC Accessibility: Leveraging Datasets and LLMs
[ "Edison Jair Bejarano Sepulveda", "Nicolai Potes Hector", "Santiago Pineda Montoya", "Felipe Ivan Rodriguez", "Jaime Enrique Orduy", "Alec Rosales Cabezas", "Danny Traslaviña Navarrete", "Sergio Madrid Farfan" ]
cs.LG
[ "cs.LG", "cs.AI" ]
This paper explores the potential of large language models (LLMs) to make the Aeronautical Regulations of Colombia (RAC) more accessible. Given the complexity and extensive technicality of the RAC, this study introduces a novel approach to simplifying these regulations for broader understanding. By developing the first-ever RAC database, which contains 24,478 expertly labeled question-and-answer pairs, and fine-tuning LLMs specifically for RAC applications, the paper outlines the methodology for dataset assembly, expert-led annotation, and model training. Utilizing the Gemma1.1 2b model along with advanced techniques like Unsloth for efficient VRAM usage and flash attention mechanisms, the research aims to expedite training processes. This initiative establishes a foundation to enhance the comprehensibility and accessibility of the RAC, potentially benefiting novices and reducing dependence on expert consultations for navigating the aviation industry's regulatory landscape. The model is available at https://huggingface.co/somosnlp/gemma-1.1-2b-it_ColombiaRAC_FullyCurated_format_chatML_V1 and the dataset at https://huggingface.co/datasets/somosnlp/ColombiaRAC_FullyCurated. § INTRODUCTION The Colombian aviation industry operates under the Aeronautical Regulations of Colombia (RAC) <cit.>, a comprehensive legal framework that comprises approximately 50 detailed regulations and manuals. It is worth noting that the RAC is currently undergoing harmonization with the Latin American Aeronautical Regulations (LAR). The technical complexity and voluminous nature of these documents pose significant challenges to accessibility. However, the advent of LLMs promises to revolutionize this scenario by simplifying complex texts, making regulatory information more understandable and accessible to a broader audience. As highlighted by Yang et al. <cit.>, LLMs such as ChatGPT have demonstrated significant potential in various practical applications, suggesting their utility in interpreting and simplifying legal and regulatory texts. By translating legal terminology into plain language, LLMs can play a crucial role in demystifying aviation regulations, thereby improving understanding and compliance within the industry. § OBJECTIVES The overarching goal is to utilize LLMs to make the Aeronautical Regulations of Colombia (RAC) more accessible and understandable for both aviation professionals and the general public. The approach is multifaceted: first, we aim to develop a comprehensive dataset from the RAC's initial five documents, laying the groundwork for LLM training. Second, this project seeks collaboration with industry and academic experts to annotate and refine this dataset, ensuring its relevance and accuracy. Third, we train LLMs with the curated dataset, and lastly, we evaluate their performance in simplifying the RAC's content based on feedback from aeronautical experts, thereby enhancing regulatory compliance and understanding. § BACKGROUND Innovative technological solutions are crucial for navigating the complexities of the Aeronautical Regulations of Colombia (RAC). While access to the RAC was initially enhanced with decision trees, the aviation sector's push for safety and efficiency demands more sophisticated approaches, given its high reliability standards <cit.>.
Early AI uses in aviation, such as expert systems for decision support, highlight AI's impact on safety and operational efficiency <cit.>. The emergence of LLMs, including Seq2Seq and Transformer architectures, marks a significant advancement, offering detailed conversational AI support <cit.>. A UAEAC (Unidad Administrativa Especial de Aeronáutica Civil) project demonstrated AI's potential for real-time RAC inquiries, showcasing AI's role in enhancing regulatory consultations <cit.>. This move towards LLMs for regulatory compliance and decision support illustrates a shift to intelligent solutions, with LLMs providing the flexibility and depth to address the regulatory domain's complexities effectively. § METHODOLOGY This research systematically applies LLMs to enhance the accessibility of the Aeronautical Regulations of Colombia (RAC). Our methodology includes dataset generation, expert-driven labeling, and iterative LLM fine-tuning, ensuring a comprehensive, data-driven approach. §.§ Dataset Generation from RAC Documents The dataset was crafted from the RAC using an automated process, as depicted in Figure <ref>. This process began with converting PDFs to text, followed by processing through a GPT API or a similar model. The system handled two pages at a time and iteratively compiled the extracted data. The resulting dataset, comprising questions, answers, and relevant RAC references, was thus assembled for further analysis. §.§ Labeling Process for the RAC Dataset The RAC dataset underwent refinement using the Argilla framework within the Hugging Face environment, as illustrated in Figure <ref>. This tool facilitated structured annotation tasks, leveraging the expertise of aeronautical engineering specialists from Fundación Universitaria Los Libertadores. They assessed each sample for quality, retaining those ranked above 3 and removing lower-ranked ones, thus ensuring the dataset's integrity. The concluding phase of the process involved removing the discarded samples. Consequently, a high-quality dataset was consolidated and made available through Hugging Face. §.§ Fine-tuning models In this study, we fine-tuned the pre-trained Gemma model for a specific NLP task by employing a Parameter-Efficient Fine-Tuning (PEFT) strategy integrated with Low-Rank Adaptation (LoRA). Initially, the somosnlp/ColombiaRAC_FullyCurated dataset was tokenized and divided into training and testing subsets. The model was then configured to adapt efficiently to the task by selectively modifying a minimal subset of its parameters, utilizing PEFT and LoRA techniques. Key hyperparameters for the training included a learning rate of 5e-5 and a batch size of 3, optimized using the AdamW optimizer with a weight decay of 0.001. The training, managed by the SFTTrainer, emphasized gradient accumulation and learning rate scheduling to enhance model performance. After training, the fine-tuned model was merged with the LoRA weights for deployment. This process demonstrates an efficient approach to customizing large-scale language models for specific tasks, balancing computational efficiency and model effectiveness. PyTorch and Hugging Face transformers were utilized for implementation due to their robustness and support for complex NLP tasks; a minimal sketch of this setup is given below.
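The following is a minimal sketch of such a PEFT/LoRA fine-tuning run. The learning rate, batch size, and weight decay follow the values stated above; the LoRA rank, target modules, sequence length, and the dataset text field are illustrative assumptions, and the exact argument names of SFTTrainer vary across trl versions.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model
from trl import SFTTrainer

base = "google/gemma-1.1-2b-it"
data = load_dataset("somosnlp/ColombiaRAC_FullyCurated", split="train")

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# PEFT with LoRA: train only low-rank adapters on a few projection matrices.
# Rank, alpha, and target modules below are assumptions, not from the paper.
peft_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, peft_cfg)

args = TrainingArguments(
    output_dir="gemma-rac",
    learning_rate=5e-5,                 # from the text
    per_device_train_batch_size=3,      # from the text
    weight_decay=0.001,                 # from the text (AdamW optimizer)
    gradient_accumulation_steps=4,      # assumption
    lr_scheduler_type="cosine",         # assumption
    num_train_epochs=1,
)

trainer = SFTTrainer(model=model, args=args, train_dataset=data,
                     tokenizer=tokenizer,
                     dataset_text_field="text",   # assumed field name
                     max_seq_length=1024)
trainer.train()
model.save_pretrained("gemma-rac-lora")  # adapter weights, merged later for deployment
```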
§ RESULTS §.§ Quantitative results The quantitative results (see Table <ref>) highlight the effectiveness of these efforts. Earlier versions (V5 to V3) had low loss but lacked answer quality, while subsequent iterations improved response accuracy, training efficiency, and environmental sustainability. V8 was the most optimized model, enhancing RAC interpretability with advanced LLMs, lower training loss, and reduced environmental impact. §.§ Qualitative results Table <ref> shows the model's strong performance, with average scores of 7 across 276 tests. However, RAC 3's low scores (mean 3.464, median 1) indicate areas needing improvement, while high ratings in RACs 1 and 5 suggest strengths. These results confirm the model's potential for accuracy and generalization, though RAC 3 requires adjustments. § CONCLUSIONS The overarching goal was to enhance accessibility to the RAC through the utilization of LLMs. This objective was pursued through a multi-faceted approach: first, by developing a comprehensive dataset from the initial RAC documents; second, by collaborating with industry and academic experts to refine the dataset; third, by training LLMs with this curated dataset; and finally, by evaluating their performance. This initiative aims to improve regulatory compliance and understanding among aviation professionals and the general public.
http://arxiv.org/abs/2405.09339v1
20240515134229
Optimal information acquisition for eliminating estimation risk
[ "Zongxia Liang", "Qi Ye" ]
q-fin.MF
[ "q-fin.MF" ]
Optimal information acquisition for eliminating estimation risk

Zongxia Liang^a, Qi Ye^b

Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China

a email: liangzongxia@tsinghua.edu.cn; b Corresponding author, email: yeq19@mails.tsinghua.edu.cn

Funding: This work was funded by the National Natural Science Foundation of China (Grant No. 12271290)

This paper diverges from previous literature by considering the utility maximization problem in the context of investors having the freedom to actively acquire additional information to mitigate estimation risk. We derive closed-form value functions for CARA and CRRA utility functions and establish a criterion for valuing extra information through certainty equivalence, while also formulating its associated acquisition cost. By strategically employing variational methods, we explore the optimal acquisition of information, taking into account the trade-off between its value and cost. Our findings indicate that information acquired earlier is worth more for eliminating estimation risk and achieving higher utility. Furthermore, we observe that investors with lower risk aversion are more inclined to pursue information acquisition.

Keywords: Bayesian learning, estimation risk, filtering, information acquisition, the value of information, variational method

JEL classification: G11, C11, C61

§ INTRODUCTION In the financial market, investors are often assumed to possess complete information, allowing them to develop strategies that maximize their utility. In reality, however, complete information, especially regarding asset returns, is often unavailable, leading to estimation risk (<cit.>). <cit.> and <cit.> consider optimal consumption in an incomplete market setting; in <cit.>, <cit.> and <cit.> the optimal terminal wealth is derived and the optimal strategy determined for linear Gaussian dynamics of the returns. <cit.> solves the problem when the return is a fixed random variable with known distribution. <cit.> consider the optimization of the asymptotic growth rate over an infinite time horizon. <cit.> considers the case where the return follows a continuous-time Markov chain. To adapt to changing circumstances, investors continuously update their beliefs about asset returns over time as new asset prices arrive, employing Bayesian learning (<cit.>). However, the estimation of asset returns is not updated solely based on asset prices. Investors have access to various additional sources of information, such as corporate earnings reports, macroeconomic indicators, political news or expert opinion (<cit.>). With the introduction of this extra information, it becomes intriguing to explore its impact on investment decisions and objective utility. However, most papers assume the information is given, and it usually has the same correlation structure as the unknown part (<cit.>). Here, inspired by <cit.>, we assume that investors can actively and dynamically acquire information in the market. This paper also draws on the classical paper <cit.>, where information is used to determine the expectation of the ultimate payoffs and the objective is to maximize the expected payoffs rather than the utility; we nevertheless retain the setting that any information can be acquired over time to help eliminate the estimation risk arising from parameter uncertainty.
In this paper, we adopt the framework in <cit.> and equip it with extra information. We demonstrate that the part of the extra information correlated with the unobserved Brownian motion W (the diffusion term of the risky asset's log price) can be considered a small component of W. Importantly, this part is observed and can be utilized in return estimation. Consequently, the presence of extra information reduces estimation risk at a faster pace, leading to improved objective utility. Thus, this paper not only addresses the maximization of expected utility with any given extra information but also investigates the nature of the extra information itself, ultimately determining optimal information acquisition strategies. We show that extra information with higher correlation at every time instant achieves higher utility. However, comparing the utilities attained with different extra information still requires solving the value function. To facilitate this analysis, we introduce the concept of an "informative clock", the ratio of the risky asset's squared volatility to the conditional variance of the drift estimate. The informative clock quantifies the total volume of information and provides an intuitive index to assess the quality of the extra information. This perspective captures the precision of the estimation and allows for a concise presentation of the value function. It provides a more intuitive understanding of how extra information influences investment decisions and yields fruitful insights from various perspectives, such as attention, inattention, and information quality (<cit.> and <cit.>). We propose a criterion to quantify the value of extra information. Inspired by <cit.> and <cit.>, we adopt the concept of certainty equivalence: the value of information is the additional endowment that, in the absence of extra information, achieves the same utility as the original endowment with the extra information. Our calculations demonstrate that the value of extra information is largely independent of market conditions, including interest rates, volatility rates, and return estimates. Instead, it primarily depends on risk aversion and the informative clock. We show that, in the case of Constant Absolute Risk Aversion (CARA), the value of extra information remains constant across different initial wealth levels, providing a fixed value regardless of the endowment. Conversely, in the case of Constant Relative Risk Aversion (CRRA), investors focus on multiples of wealth, and the value of information contributes a fixed proportional increase to the endowment. However, information acquisition and processing can incur significant costs in terms of time, effort, or expenses. To address this, we propose a penalty function for information acquisition based on the informative clock. Finally, we assume that any information characterized by an informative clock can be obtained in the market, so that investors can strategically determine the optimal effort to devote to acquiring information for their investment decisions. We employ the variational method to solve the functional maximization associated with the informative clock and derive its necessary condition. Our findings indicate that investors pay greater attention to information acquisition in the early stages than in later stages, and that investors with lower risk aversion tend to pay greater attention. In summary, this paper makes three primary contributions.
First, we establish a framework based on the concept of the "informative clock," enabling a concise solution of the value function. Second, we employ the concept of certainty equivalence to determine the value of information. Lastly, we solve for the optimal information acquisition strategy. The remainder of this paper is structured as follows: Section 2 describes the market model and formulates the admissible space of investment strategies. Section 3 derives the value function and the corresponding strategy for given extra information using the informative clock, and presents the core analysis of the impact of extra information on investment decisions by evaluating its value and cost. Section 4 complements the discussion with the sources of information and the explosion of the informative clock. Lastly, the conclusion is drawn in the final section, with technical proofs and calculations provided in the appendix. § MARKET MODEL AND PROBLEM FORMULATION Suppose that (Ω,𝔽,ℙ) is a probability space equipped with the filtration 𝔽= {ℱ_t: t≥ 0} satisfying the usual conditions. We consider a financial market with one risk-free asset and one risky asset. The price of the risk-free asset S_0={S_0(t), t≥ 0} is given by dS_0(t) = rS_0(t)dt, S_0(0)=1, and the price of the risky asset S={S(t), t≥ 0} follows the stochastic differential equation (abbr. SDE): dS(t) = S(t) [μ dt+σ dW_t ], S(0)=1, where W={ W_t: t≥ 0} is a Brownian motion on (Ω,𝔽,ℙ), and r and σ are nonnegative constants. The drift term μ is an ℱ_0-measurable Gaussian random variable satisfying μ∼𝒩(μ_0,σ_0^2), and μ and W are independent. The investor allocates the amount π_t of her wealth to the risky asset at time t. Then the investor's total wealth, as the self-financing process X={ X_t, t≥ 0}, follows the SDE: dX_t=[rX_t+π_t(μ-r)]dt+π_t σ dW_t, t≥ 0, X_0=x_0. Traditionally, the investor formulates her investment strategy by observing the price of the risky asset, i.e., the strategy π={π_t: t≥ 0} must be adapted only to the filtration 𝔽^S={ℱ^S_t: t≥ 0}, where ℱ^S_t=σ{ S_u: 0≤ u≤ t }, which is strictly smaller than ℱ_t, t≥ 0. To capture the cross-sectional features of the unobserved process W, we introduce an extra process m={m_t: t≥ 0} into our model. It must satisfy several properties: it is observable, and it is both a Markov process and a martingale with respect to the background information, which is independent of the random variable μ. Most importantly, the process is correlated with the Brownian motion W: d⟨ m, W⟩_t/√(d⟨ m ⟩_t d⟨ W ⟩_t)=ρ(t), where ρ(t)∈ (-1,1) is a deterministic function of time t. It is easily seen that ρ(t) is the correlation coefficient of the two increments dW_t and dm_t over the infinitesimal interval from t to t+dt. For (<ref>) to be well defined, we set ρ(t)=0 when d⟨ m ⟩_t/dt = 0. Now the investor no longer needs to base the strategy solely on the observation of S: the process m is likely to provide additional information for a more accurate estimation of μ, the source of the ambiguity. Mathematically speaking, we enlarge the space of investment strategies so that π may be adapted to the filtration 𝔽^S ∨𝔽^m rather than only to the filtration 𝔽^S. We denote the space of admissible strategies by 𝒜_m:={π | ∫_0^T π_t^2 dt<∞, π is adapted to 𝔽^S ∨𝔽^m}, where T>0 denotes the terminal investment time. Given the extra information, previous works aim to maximize the expected utility of the terminal wealth at time T over 𝒜_m: sup_π∈𝒜_m EU(X_T).
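The model above is simple to simulate. The following Euler–Maruyama sketch, with purely illustrative parameter values and a constant correlation ρ (both assumptions), generates one joint path of the price S and the extra process m; the same setting is reused in the numerical sketches later on.

```python
import numpy as np

rng = np.random.default_rng(0)
r, sigma = 0.02, 0.2            # illustrative market parameters
mu0, sigma0 = 0.06, 0.04        # prior: mu ~ N(mu0, sigma0^2)
T, N = 2.0, 2000
dt = T / N
rho = 0.5                       # constant correlation of dm with dW (assumption)

mu = rng.normal(mu0, sigma0)    # hidden drift, drawn once at time 0
S = np.empty(N + 1); S[0] = 1.0
m = np.zeros(N + 1)             # extra observable process (a Brownian motion)
for k in range(N):
    dm = np.sqrt(dt) * rng.standard_normal()
    dW = rho * dm + np.sqrt(1.0 - rho**2) * np.sqrt(dt) * rng.standard_normal()
    S[k + 1] = S[k] * (1.0 + mu * dt + sigma * dW)   # Euler step for dS
    m[k + 1] = m[k] + dm
```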
Our primary objective is to determine the optimal information to acquire in the market. To achieve this, we first need to establish what we mean by "optimal". We must define a criterion to value the extra information, denoted Value(m), and formulate a cost function, denoted Cost(m), to quantify the expenses associated with acquiring this extra information. Consequently, our objective is to solve for the optimal information acquisition strategy that maximizes the net value, defined as Net(m) = Value(m) - Cost(m). However, determining the criterion for Value(m) is not a straightforward task. One possible approach is to use equation (<ref>) itself to represent Value(m). However, this approach has limitations. For instance, it may lead to negative values of Value(m), which is counterintuitive. Additionally, comparing the value of information for investors with different risk aversions becomes challenging, as different utility functions cannot be compared directly. Despite these challenges, we still need to utilize the insights from equation (<ref>) to establish a criterion for Value(m), as a larger value in equation (<ref>) indicates greater value of the information. The specific criterion will be provided in a later section. § EXTRA INFORMATION'S VALUE, COST AND ITS OPTIMAL ACQUISITION §.§ The effect of the correlation coefficient The part of the extra information correlated with W can be understood as an observed component of the unobserved W. A natural idea is that a higher correlation coefficient means more valuable information about W. It can then be conjectured that if two different sources of extra information have the same correlation coefficient with W, the two corresponding optimal investments achieve the same expected utility of terminal wealth. Similarly, if one source of extra information has a larger correlation coefficient than the other, then its corresponding expected utility is no smaller than the other's. We now formalize these two conjectures in the following Propositions <ref> and <ref>. Suppose two sources of extra information m^1 and m^2 share the same correlation coefficient, i.e., ρ^1(t)=ρ^2(t) for all t, where ρ^1(t):=d⟨ m^1, W⟩_t/√(d⟨ m^1 ⟩_t d⟨ W ⟩_t) and ρ^2(t):=d⟨ m^2, W⟩_t/√(d⟨ m^2 ⟩_t d⟨ W ⟩_t). Denote V_m^1:=sup_π∈𝒜_m^1 EU(X_T) and V_m^2:=sup_π∈𝒜_m^2 EU(X_T); similarly, the superscripts 1 and 2 indicate the corresponding quantities in the cases with information m^1 and m^2 (see (<ref>)). Then V_m^1=V_m^2. See Appendix A. Proposition <ref> tells us that it is the correlation with W, not the exact form of the extra information, that affects the objective function; this suggests that the exact form of the extra information can be ignored. Here we may assume d⟨ m ⟩_t=dt. The reason is that every process in the class {(H· m)_t=∫_0^t H(s)dm_s : H is a deterministic function of time t with H(t)≠ 0 almost everywhere} carries the same information as m, i.e., 𝔽^m=𝔽^(H· m). Moreover, this transformation keeps (<ref>) valid. We can then treat m as a Brownian motion, and (<ref>) reduces to the simplified form d⟨ m, W⟩_t=ρ(t)dt. In addition, we can assume ρ(t)≥ 0, replacing m by the new process m'={m'_t:=∫_0^t [1_{ρ(s)≥ 0}-1_{ρ(s)< 0} ] dm_s} if necessary to ensure nonnegativity; this convention is applied in the remainder of the paper. Suppose two sources of extra information m^1 and m^2 have correlation coefficients satisfying ρ^1(t) ≥ρ^2(t) for all t. Then V_m^1≥ V_m^2. See Appendix B.
Proposition <ref> implies that a higher correlation coefficient makes the extra information more valuable. Beyond that, we put forward the idea of information dilution: consider m^3 formed as a mixture of the original information m^1 and a noise W^noise, where W^noise is of no use at all. Because m^2 contributes the same estimate of μ as m^3, it contributes less than m^1 does, and likewise for the investment strategy. The method used to prove Proposition <ref> is consistent with the convention that ρ(t)=0 if d ⟨ m ⟩_t/dt= 0, which is explained as follows. When this equality holds, dm_t=0 at that instant, which implies that no valuable information is available at that instant. If we define the process m': m'_t=∫_0^t [1_{d ⟨ m ⟩_t/dt≠ 0}(s) dm_s+1_{d ⟨ m ⟩_t/dt= 0}(s) dW^noise_s ], then, since having no information and observing pure noise are equally useless for the strategy, m and m' have the same effect on the estimation and the investment. Extra information can bring a substantial advantage to the strategy, as the preceding results reveal. However, we still need to measure the value of the extra information objectively by a criterion, rather than merely comparing two different sources of extra information. Therefore, it is necessary to solve Problem (<ref>) in preparation. §.§ Optimal investment Problem (<ref>) with given extra information The solution of a control problem with ambiguity is usually based on Bayesian learning: the posterior probability distribution evolves over time according to the data available up to each instant. Filtering serves as an instrumental tool to transform parameter estimation into the evolution of a stochastic differential equation, which assists further analysis (<cit.>). The most widespread filter is the Kalman-Bucy filter; however, it cannot be used directly here. With the extra process m, the estimate of μ over time cannot be given directly by the two equations for the conditional mean and variance, since two sources of information must be aggregated. It is necessary to find the sufficient statistic generated by S and m to fully estimate μ. To ensure that the derivation below goes through, we need the following assumption on the extra observable process m. ∫_0^T 1/(1-ρ^2(s)) ds<∞. In a later subsection, we will illustrate the meaning of Assumption <ref> and discuss what happens in its absence. We now present the conditional expectation E[μ|ℱ_t^S∨ℱ_t^m]. E[μ|ℱ_t^S∨ℱ_t^m]=1/2σ^2+[y_0+∫_0^t q(s)^2 (dY_s-σρ(s)dm_s)]/[t_0+∫_0^t q(s)^2 ds], where t_0:=σ^2/σ_0^2, y_0:=(μ_0-1/2σ^2)t_0, q(s):=1/√(1-ρ(s)^2), and Y is defined by (<ref>) in Appendix C. See Appendix C. We find from the form of (<ref>) in Appendix C that the conditional distribution of μ given ℱ_t^S∨ℱ_t^m is still Gaussian, i.e., μ | ℱ_t^S∨ℱ_t^m ∼𝒩 (1/2σ^2+[y_0+∫_0^t q(s)^2 (dY_s-σρ(s)dm_s)]/[t_0+∫_0^t q(s)^2 ds], σ^2/[t_0+∫_0^t q(s)^2 ds]). To solve Problem (<ref>), based on Theorem <ref>, we introduce a new process Z={Z_t, t≥ 0} and the innovation process W̅={W̅_t, t≥ 0} as follows: Z_t :=1/2σ^2+[y_0+∫_0^t q(s)^2 (dY_s-σρ(s)dm_s)]/[t_0+∫_0^t q(s)^2 ds], W̅_t :=∫_0^t σ^-1q(s) {dY_s-σρ(s)dm_s-[E[μ|ℱ_s^S∨ℱ_s^m]-1/2σ^2]ds}. Then Z is adapted to the filtration {ℱ_t^S∨ℱ_t^m: t≥ 0} and is a sufficient statistic for estimating μ in this filtration, and W̅ is a Brownian motion adapted to the filtration {ℱ_t^S∨ℱ_t^m: t≥ 0}. Moreover, one can verify that W̅ is independent of the Brownian motion m.
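Theorem <ref> can also be checked numerically. The sketch below discretizes the integrals in the posterior-mean formula along one simulated path, again with a constant correlation ρ and illustrative parameters; the running denominator t_0+∫_0^t q(s)^2 ds is exactly the informative clock introduced later, and the posterior standard deviation is σ/√(τ(t)).

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, mu0, sigma0 = 0.2, 0.06, 0.04
T, N, rho = 2.0, 4000, 0.6           # illustrative values
dt = T / N
q2 = 1.0 / (1.0 - rho**2)            # q(s)^2, constant here

mu = rng.normal(mu0, sigma0)         # hidden drift
t0 = sigma**2 / sigma0**2
y0 = (mu0 - 0.5 * sigma**2) * t0

num, tau = 0.0, t0                   # running numerator and informative clock
for _ in range(N):
    dm = np.sqrt(dt) * rng.standard_normal()
    dW = rho * dm + np.sqrt(1.0 - rho**2) * np.sqrt(dt) * rng.standard_normal()
    dY = (mu - 0.5 * sigma**2) * dt + sigma * dW    # dY = d log S
    num += q2 * (dY - sigma * rho * dm)
    tau += q2 * dt

est = 0.5 * sigma**2 + (y0 + num) / tau             # E[mu | F_T^S v F_T^m]
print(f"true mu       = {mu:.4f}")
print(f"estimate      = {est:.4f}")
print(f"posterior std = {sigma / np.sqrt(tau):.4f}")  # = sqrt(sigma^2 / tau(T))
```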
Based on the definition of the Brownian motion W̅ and the explicit form of E[μ|ℱ_t^S∨ℱ_t^m], we obtain that the processes X, Y and Z satisfy the following SDEs: dX_t =[rX_t+π_t(Z_t-r)]dt+π_t σρ(t)dm_t+π_t σ q(t)^-1 dW̅_t, dY_t =(Z_t- 1/2σ^2)dt+σρ(t)dm_t+σ q(t)^-1 dW̅_t, dZ_t =σ q(t) dW̅_t/[t_0+∫_0^t q(s)^2 ds]. Using the Markov property of the processes X and Z, and the fact that Z is a sufficient statistic for μ, we establish the following expected utility maximization problem given extra information: V(t,x,z)=sup_π∈𝒜_m E[U(X_T) | X_t=x, Z_t=z]. It is easy to see that Problem (<ref>) is equivalent to Problem (<ref>) with the initial state (0,x_0,μ_0). If we instead defined V(t,x,y,m):=sup_π∈𝒜_m E[U(X_T) | X_t=x, Y_t=y, m_t=m], the state values of Y_t and m_t alone could not deliver the estimate of μ, whereas the entire paths of Y and m can. Dynamic programming would then no longer hold, since the state fails to estimate μ at each instant; moreover, the value function V(t,x,y,m) would not be well defined. That is why we must construct the process Z to overcome this obstacle. To solve Problem (<ref>), based on the definition of V(t,x,z) and the dynamic programming principle, we start by analyzing the Hamilton-Jacobi-Bellman (HJB) equation satisfied by the value function V(t,x,z): sup_πℒ^π V(t,x,z)=0, V(T,x,z)=U(x), where ℒ^π V(t,x,z) :=V_t+[rx+π(z-r)]V_x+1/2σ^2 π^2 V_xx +σ^2 π V_xz/[t_0+∫_0^t q(s)^2 ds] +1/2q(t)^2σ^2 V_zz/[t_0+∫_0^t q(s)^2 ds]^2. We define the functional τ(·):=t_0+∫_0^· q(s)^2 ds. The interpretation of the functional (<ref>) will be given later. By using τ(·), the infinitesimal generator can be rewritten in the following concise form: ℒ^π V(t,x,z) :=V_t+[rx+π(z-r)]V_x+1/2σ^2 π^2 V_xx +σ^2 π1/τ(t)V_xz+1/2σ^2(-1/τ)'(t) V_zz. Now we derive the closed-form solutions of Problem (<ref>) given extra information with CARA and CRRA utility functions, which is the foundation of the further discussion of the extra information and the optimal information acquisition in Section 4. (1) If U(x)=-1/βe^-β x (CARA case), then we have V(t,x,z)=U(e^r(T-t)x+ψ(t,z)), where ψ(t,z)=1/2a(t)(z-r)^2+c(t), a(t)=(1/βσ^2)·τ(t)(T-t)/[τ(t)+(T-t)], c(t)=1/2β∫_t^Tτ'(s)(T-s)/{τ(s)[τ(s)+(T-s)]}ds, and the optimal investment strategy is π^*=e^-r(T-t)(1/β)·τ(t)/[τ(t)+T-t]·(Z_t-r)/σ^2. (2) If U(x)=x^1-γ/(1-γ) (CRRA case), where γ satisfies the condition γ>1, or 0<γ<1 with t_0/T>(1-γ)/γ, then we have V(t,x,z)=U(e^r(T-t)x)exp{γψ(t,z)}, where ψ(t,z)=1/2a(t)(z-r)^2+c(t), a(t)=(1/γσ^2)·τ(t)·[(1-γ)/γ](T-t)/{τ(t)-[(1-γ)/γ](T-t)}, c(t)=1/2γ∫_t^T τ'(s)[(1-γ)/γ](T-s)/{τ(s)[τ(s)-((1-γ)/γ)(T-s)]}ds, and the optimal investment strategy is π^*=τ(t)/[γτ(t)-(1-γ)(T-t)]·(Z_t-r)X_t/σ^2. (3) If U(x)=x^1-γ/(1-γ) (CRRA case) and γ satisfies the condition t_0/T≤(1-γ)/γ, then the value function is infinite, i.e., the problem becomes meaningless. This ill-posedness is brought about not by the extra information but by the Bayesian problem itself. (4) If U(x)=ln(x), the case with γ approaching 1, then V(t,x,z)=U(e^r(T-t)x)+ψ(t,z), where ψ(t,z)=1/2a(t)(z-r)^2+c(t), a(t)=(T-t)/σ^2, c(t)=1/2∫_t^T τ'(s)(T-s)/τ^2(s)ds, and the optimal investment strategy is π^*=(Z_t-r)X_t/σ^2. See Appendix D. In particular, if no extra information is involved, Problem (<ref>) reduces to the most widely studied ambiguity case, corresponding to the value function with τ(·)=t_0+·. If no ambiguity is considered, the value function is obtained in the limiting sense that t_0= ∞, τ(t)=∞ and Z_t=μ; then Problem (<ref>) reduces to the classical problem.
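As a small illustration of Theorem <ref>(1), the sketch below evaluates the CARA optimal allocation for a linear informative clock τ(t)=t_0+kt (constant correlation, so k=q^2 is constant); all parameter values are illustrative assumptions. The factor τ(t)/[τ(t)+T-t] is the ambiguity shrinkage discussed in the remark that follows.

```python
import numpy as np

def pi_star_cara(t, z, k, beta=1e-3, r=0.02, sigma=0.2, T=2.0, t0=4.0):
    """CARA optimal allocation of Theorem 2(1), assuming the linear
    informative clock tau(t) = t0 + k*t with constant k = q^2 >= 1."""
    tau = t0 + k * t
    shrink = tau / (tau + (T - t))       # ambiguity shrinkage factor < 1
    return np.exp(-r * (T - t)) * shrink * (z - r) / (beta * sigma**2)

# A faster informative clock (larger k) pushes the shrinkage toward 1,
# i.e. toward the classical full-information demand (z - r)/(beta*sigma^2).
for k in (1.0, 2.0, 5.0, 50.0):
    print(f"k = {k:5.1f}:  pi* at t=0 is {pi_star_cara(0.0, z=0.06, k=k):,.1f}")
```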
Moreover, noticing that τ(t)/[τ(t)+T-t]<1, we find that in the CARA case, under ambiguity, the investment strategy is more conservative than in the classical case, and less conservative if more information is involved. Similarly, in the CRRA case this depends on the value of the relative risk aversion γ. When γ>1, the investment strategy is likewise conservative, and less conservative if more information is involved. When T/(t_0+T)<γ<1, the investment strategy is aggressive, and less aggressive if more information is involved. Finally, when γ=1, the investment strategy remains the same as in the classical case. §.§ Advantageous perspective: informative clock Now we discuss the conditional variance var(t):=σ^2/[t_0+∫_0^t q(s)^2 ds] at time t, which is strictly decreasing over time, and introduce the concept of the informative clock τ(t):=σ^2/var(t)=t_0+∫_0^t q(s)^2 ds, an index measuring the volume of information, where q(s):=1/√(1-ρ(s)^2) is defined in Theorem <ref>. To understand this concept, consider an analogy. Suppose we want to estimate the mean of a normal distribution whose variance σ^2 is known, given a sample X=(X_1,⋯,X_N). Then the conditional distribution of μ given X is 𝒩(X̅,σ^2/N), denoting this fact by μ | X ∼𝒩(X̅,σ^2/N). It can be seen that N=σ^2/var(μ|X). The informative clock in this article plays the same role as the sample size in this example: it is an index measuring the volume of information, just as the sample size is. The natural case is ρ(t)≡ 0, in which m can be treated as pure noise. The estimate of μ over time is then based only on the information in the price of the risky asset itself. Correspondingly, q(s)=1 and τ(t)=t_0+t: the informative clock keeps pace with real time, with a fixed offset t_0. In fact, t_0 is the informative clock at real time t=0, which is the volume of information hidden in the prior distribution. Imagine a stretch of history of duration t_0: using the information over this duration, μ would be estimated with variance σ_0^2. Reasoning backwards, this is why t_0 is defined as t_0:=σ^2/σ_0^2, to match this duration of history. Now suppose the extra information enters our system. It is natural to expect that it brings a larger volume of information. This is evident since q(t)≥ 1 and τ(t)=t_0+∫_0^t q(s)^2 ds, where a larger correlation coefficient makes the informative clock run faster. The extra information serves as an acceleration of the informative clock: it reduces the variance of the estimate of μ faster than real time does, i.e., the extra information eliminates estimation risk at a faster pace. The informative clock is a more useful perspective than the correlation coefficient for analyzing the value of information. If one of two sources of information has an information advantage over the other, i.e., a larger informative clock, the investor can make better decisions and obtain higher utility. Moreover, Proposition <ref> is a stronger conclusion than Proposition <ref>, as it covers more cases. If two sources of extra information m^1 and m^2 have informative clocks satisfying τ^1(t) ≥τ^2(t) for all t≥ 0, then V_m^1≥ V_m^2. See Appendix E. From the procedure of comparing c, we can broaden our view: τ^1(t)≥τ^2(t) means that at any real time, the first information source has a larger informative clock.
However, taking the perspective (τ^1)^-1(u) ≤ (τ^2)^-1(u), it also means that at any given value of the informative clock, the first information source has more remaining time to invest. We thus see that the informative clock contributes to the value function in two ways: through the overall larger informative clock, as the first term in (<ref>) shows, and through the additional time available at any given informative clock value, as the second term shows. Another useful effect of the informative clock is that the HJB equation can be reduced to a representation involving only the informative clock, as in (<ref>). In the infinitesimal generator, the term V_t+[rx+π(z-r)]V_x+1/2σ^2 π^2 V_xx is the most classical HJB operator of the investment problem in which z is the known drift rate of the risky asset. The term σ^2 π1/τ(t)V_xz is then understood as the adjustment of the investment due to the ambiguity of the drift. The last term 1/2σ^2(-1/τ)'(t) V_zz depicts the evolution of the drift estimate over time. The HJB equation is intuitive from this perspective. Because of the close connection between ρ and τ, namely τ(t):=σ^2/var(t)=t_0+∫_0^t q(s)^2 ds with q(s):=1/√(1-ρ(s)^2), and for convenience and vividness of expression, we use the functional τ to refer to information carrying this volume of information; with it, all quantities can be represented in concise form. We assume a continuum of correlations ρ(t)∈ [0,1), i.e., any value in this interval can be obtained from the market. Thus any continuous τ whose derivative is at least one can be obtained in the market. Finally, Proposition <ref> has told us that the utility depends not on the actual form of the source of information but on the informative clock it induces for estimating μ. Thus we can ignore the form of the extra information and focus on the problem solely through the perspective of the functional τ. §.§ Value and cost of extra information Now we solve Problem (<ref>), equipped with the perspective of the informative clock. To measure the value of the extra information objectively by a criterion, we introduce the concept of certainty equivalence of utility. As posed above, we define V_τ(t,x,z) as the value function in the case where the extra information induces the informative clock τ. In the case without any extra information, the functional has the structure τ(·)=t_0+·; we call this the natural case and denote its functional by 0. V_0 is the value function achieved from the price alone. We then define the value functional Value(·) by V_τ(0,x_0,μ_0)=V_0(0,x_0+Value(τ),μ_0). It means that an initial value x_0 together with extra information of informative clock τ achieves the same utility as an initial value x_0+Value(τ) with no extra information. Here V_0 serves as our objective criterion. In the case of the CARA utility function, comparing the two value functions through the relationship V_τ(0,x_0,μ_0) =V_0(0,x_0+Value(τ),μ_0), we obtain Value(τ)=∫_0^T (1/2β)(T-t)τ'(t)/{τ(t)[τ(t)+T-t]}dt-C_1, where C_1:=∫_0^T (1/2β)(T-t)/[(t+t_0)(T+t_0)]dt is a constant. This is exactly the difference of c(0) between the two value functions with the corresponding informative clocks, and it is positive, as Proposition <ref> shows (see (<ref>)). We see that it has nothing to do with the endowment x_0. This follows from the property of CARA utility: the investor cares about absolute wealth in the utility function, so the extra information brings a fixed value.
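Before turning to the CRRA case, the CARA value above is easy to evaluate numerically. The sketch below assumes the linear informative clock τ(t)=t_0+kt with parameters matching the figures discussed later; by construction the value vanishes at k=1, the natural case.

```python
import numpy as np
from scipy.integrate import quad

beta, t0, T = 1e-3, 4.0, 2.0              # illustrative parameters

def value_cara(k):
    """Value(tau) for the linear informative clock tau(t) = t0 + k*t."""
    tau = lambda t: t0 + k * t            # tau'(t) = k
    gain, _ = quad(lambda t: (T - t) * k /
                   (2.0 * beta * tau(t) * (tau(t) + T - t)), 0.0, T)
    c1, _ = quad(lambda t: (T - t) /
                 (2.0 * beta * (t + t0) * (T + t0)), 0.0, T)
    return gain - c1                      # difference of c(0) terms

for k in (1.0, 1.5, 2.0, 4.0):
    print(f"k = {k}:  Value = {value_cara(k):.2f}")   # k = 1 gives ~0
```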
In the case of the CRRA utility function, comparing the two value functions through the relation V_τ(0,x_0,μ_0)=V_0(0,x_0+Value(τ),μ_0), we obtain Value(τ) =[exp{(1/[2(1-γ)])∫_0^T τ'(t)[(1-γ)/γ](T-t)/{τ(t)[τ(t)-((1-γ)/γ)(T-t)]}dt}/C_2-1]x_0, where C_2:=exp{(1/[2(1-γ)])∫_0^T [(1-γ)/γ](T-t)/{(t+t_0)[(t+t_0)-((1-γ)/γ)(T-t)]}dt} is a constant (the case γ=1 still holds in the sense of a limit). In contrast to CARA utility, under CRRA utility the investor cares about multiples of wealth and seeks to multiply the initial value. From Eq.(<ref>) and the proof of Proposition <ref>, the extra information brings a multiplicative increase of the endowment, and the multiple is a function of the quotient of c(0) under the two informative clocks, which coincides with our intuition. Whether under CARA or CRRA utility, we find that the value of the extra information is independent of market conditions: it has nothing to do with the interest rate, the risky asset's volatility, or even the estimate of the return. It depends only on the informative clock, which represents the precision of the estimation, and on the investor's risk preference. Two figures show the value of information in the CARA and CRRA cases, with parameters t_0=4, T=2, β=0.001, x_0=1000 and, most importantly, τ(t)=t_0+k t, where the constant k represents the derivative of the informative clock; for convenience, only the linear informative clock is used here. Clearly, the information brings a large value when the investor is less risk averse, in both the CARA and CRRA cases; in the CARA case, the value is inversely proportional to the absolute risk aversion. Another point to note is that the marginal volume of information contributes less value when information is already plentiful; moreover, the value has a bound, which will be given in Subsection <ref>. Conversely, gleaning a huge volume of information is costly, so the investor's behavior must be restricted: collecting unlimited information cannot be allowed for free. We define the penalty functional Cost(·) as follows: Cost(τ)=∫_0^T cost(τ'(t))dt, where the function cost: [1,∞)→R^+ is increasing and convex. As we know, τ'(t)=q(t)^2=1/(1-ρ(t)^2)≥ 1, and when τ'(t)=1 we have ρ(t)=0, so no useful extra information is involved. Thus, we set the boundary condition cost(1)=0. The monotonicity and convexity originate from the fact that the effort to glean the marginal information (τ'(t)) is positive and increases with the volume of information. §.§ The most worthwhile information to acquire The value and cost of information are now well defined; see (<ref>) and (<ref>). In this subsection, the central task is to balance the value and the cost to determine the most worthwhile information to acquire, i.e., to solve the following functional maximization problem: sup_τ Net(τ), Net(τ):= Value(τ)-Cost(τ), where the informative clock τ is differentiable and its derivative is at least one everywhere. Now we take cost(x)=λ (x-1)^2 as an instance, where λ>0 characterizes the relative ability of acquiring information, and solve Problem (<ref>) as follows. In the case of the CARA utility function, the optimal informative clock satisfies the condition τ”(t)+(1/4βλ)·1/[τ(t)+T-t]^2=0. In the case of the CRRA utility function, the optimal informative clock satisfies the condition τ”(t)+(y/4γλ)·1/[τ(t)-((1-γ)/γ)(T-t)]^2=0, where y is a positive number. See Appendix F.
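The necessary condition in the CARA case is a two-point problem: τ(0)=t_0 is given, and the free right endpoint suggests (as an assumption here, since the marginal cost 2λ(τ'-1) must vanish where the marginal value does) the transversality condition τ'(T)=1. A shooting sketch with the illustrative parameters of the figures:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

beta, lam, t0, T = 1e-3, 1.0, 4.0, 2.0   # illustrative parameters

def rhs(t, u):
    tau, dtau = u
    # tau'' = -1/(4*beta*lam*(tau + T - t)^2), from the necessary condition
    return [dtau, -1.0 / (4.0 * beta * lam * (tau + T - t)**2)]

def shoot(s0):
    sol = solve_ivp(rhs, (0.0, T), [t0, s0], rtol=1e-9)
    return sol.y[1, -1] - 1.0            # want tau'(T) = 1 (assumed transversality)

s0 = brentq(shoot, 5.0, 20.0)            # bracket chosen for these parameters
sol = solve_ivp(rhs, (0.0, T), [t0, s0], rtol=1e-9,
                t_eval=np.linspace(0.0, T, 9))
print("tau'(0) =", round(s0, 3))
print("tau(t)  =", np.round(sol.y[0], 3))   # increasing and concave
```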
In fact, using the same method as in Theorem <ref>, we can solve Problem (<ref>) for other penalty functions; Theorem <ref> merely provides a template for finding the extreme points of Problem (<ref>), and the quadratic form is the most widespread specification of the penalty function. The remaining step in finding the optimal informative clock is to search over all functionals satisfying the necessary condition, with different initial conditions τ'(0) and values of y. This step is routine and we omit it here. Based on Eq.(<ref>) and Eq.(<ref>), we have τ”(t)+g(t)=0 with g(t)>0 (and this holds for any penalty functional). It follows that τ”(t)<0; that is, the investor devotes less effort to gleaning information as time goes on. In fact, this conclusion can also be derived from Proposition <ref>. Suppose an informative clock attaining the maximal net value satisfies τ'(t_1)<τ'(t_2) for some t_1<t_2. Rearranging τ' in decreasing order as τ̂' and defining the new clock τ̂(t)=t_0+∫_0^t τ̂'(s)ds yields τ̂(t) ≥τ(t) for all t. Then V^τ̂> V^τ and Value(τ̂)> Value(τ), while the costs are the same. We thus obtain a new informative clock with a larger net value, a contradiction. Hence the information must be gleaned as early as possible. Two figures show the optimal informative clock in the CARA and CRRA cases, with the same parameters t_0 = 4, T = 2, x_0 = 1000, λ = 1. As expected, the optimal informative clock is increasing and concave. Furthermore, comparing the optimal informative clocks of investors with different risk aversions, in either the CARA or the CRRA case, we see that investors with lower risk aversion are more inclined to glean information. In other words, gleaning information benefits investors with lower risk aversion more. § COMPLEMENTS ON THE INFORMATION ITSELF §.§ Sources of information We now discuss the extra information itself. In the financial market, a wide range of information can be gathered and utilized. One commonly considered type of information is systemic risk, which we assume is generated by the process W^systemic; for instance, we can take m equal to W^systemic. However, it is important to note that any observed risk can be represented by the process m. Whenever a risky asset exists in the market, various risk factors contribute to the price fluctuations and can be observed. Furthermore, the phenomenon of co-movement in asset prices offers a perspective for identifying the process m. One potential concern is the calculation of ρ(t), as W itself is not directly observable. However, since we have the observation Y (defined in (<ref>)), and ρ(t)=d⟨ m, Y⟩_t/√(d⟨ m ⟩_t d⟨ Y ⟩_t), we can calculate the correlation coefficient without direct observation of W. This equation provides a feasible way to compute the correlation coefficient, which allows us to test different sources of information. We can therefore assess the correlation between Y and different sources of information and identify the ones with the highest correlation. Furthermore, we can even combine multiple sources of information through signal engineering techniques to achieve a higher correlation with Y. §.§ The explosion of the informative clock Assumption <ref> (see (<ref>)) can be rewritten as τ(T)<∞, i.e., the informative clock remains finite from 0 to T. It is natural to ask what happens when the informative clock becomes infinite. In fact, at that point, the ambiguity hidden in μ is totally erased.
The conditional variance of μ becomes zero and the distribution of μ collapses to a point mass. This phenomenon has been considered in the setting of an 'insider' investor, who can observe both the drift and the driving Brownian motion W (<cit.>). The informative clock becoming infinite means that the investor becomes an insider who knows everything. It is natural to raise some questions about the informative clock evolving from finite to infinite. The main one is how this happens. In fact, if we extend the range of ρ to ρ(t)∈ [0,1], it happens when ρ(t) approaches or equals one (for example, m=W) for long enough, which contributes an infinite value to the informative clock. Another question concerns the singularity of the informative clock as it tends to infinity in finite time. There is no cause for worry: in the HJB equation, τ appears only in the form 1/τ, which can vary continuously from 1/τ(0) to 0, both ends being finite. Thus a surge of infinite information causes no difficulty as long as the continuity of 1/τ is ensured. Moreover, all the deductions in Section 3 still hold in the limiting sense, as the informative clock always appears in the denominator, which ensures well-posedness. Above all, everything remains valid if we simply set (1/τ)(t)=0 when τ(t)=∞. Ultimately, we consider τ(0^+)=∞ (which can be understood as the limit of τ(t)=t_0+nt as n→∞); that is, the investor is assumed to be an 'insider' from the beginning (time 0^+). This corresponds to the value V(0^+,x_0,Z_0^+), which is the classical case with μ=Z_0^+, where μ has the prior distribution 𝒩(μ_0,σ^2_0). Then ∫_R V(0^+,x_0,μ_0+u) 𝒩_σ_0^2(u)du is the objective value we want. In fact, this value can be obtained directly from the value function if we allow generalized-function calculations in c(0) in the HJB equation. Taking CARA as an example, c(0) =1/2β∫_0^Tτ'(s)(T-s)/{τ(s)[τ(s)+(T-s)]}ds =1/2β∫_0^0^+[τ'(s)/τ(s)-(τ(s)+T)'/(τ(s)+T)]ds =1/2β[ln(τ(s))-ln(τ(s)+T)]|_0^0^+ =1/2βln((t_0+T)/t_0). Similarly, we obtain the value c(0)=1/2γln(t_0/[t_0-((1-γ)/γ)T]) in the CRRA case if γ≠ 1, and c(0)=(1/2)(T/t_0) in the CRRA case when γ=1. Regarding Proposition <ref> and the discussion of the information's value, it is clear that the value of any extra information has a bound; the bound is finite and can be approached if enough cost is paid. § CONCLUSIONS In this paper, we show that, with extra information, the estimate of the risky asset's return is more precise and, in addition, the utility can be improved. We first put forward the new concept of the "informative clock" to describe the precision of the estimation and use it to represent all quantities in concise form. The value and cost are well defined for any extra information. The trade-off between them tells us that extra information should be gleaned as early as possible, and we find that investors with lower risk aversion are more inclined to glean information. Finally, allowing the informative clock to be infinite causes no difficulty and yields the bound on the value of the extra information. § APPENDIX A. PROOF OF PROPOSITION <REF> Without loss of generality, m^1 and m^2 are regarded as Brownian motions (see the Remark). Since the two processes m^1 and m^2 share the same correlation coefficient with Y, by Ito's formula the characteristic functions of the two pairs (Y, m^1) and (Y, m^2) are equal; as such, the two pairs have the same probability law.
§ CONCLUSIONS In this paper we show that, with the extra information, the estimation of the risky asset's drift is more precise and, in addition, the utility is improved. We first put forward the new concept of an “informative clock” to describe the precision of the estimation and use it to express everything in a concise form. The value and the cost are well defined for any extra information. The trade-off analysis tells us that it is better to glean the extra information early, and we find that investors with less risk aversion are more inclined to glean information. Finally, allowing the informative clock to be infinite causes no difficulty and yields the bound on the extra information's value. § APPENDIX A. PROOF OF PROPOSITION <REF> Without loss of generality, m^1 and m^2 are regarded as Brownian motions (see the Remark). Since the two processes m^1 and m^2 share the same correlation coefficient with Y, an application of the Itô formula shows that the characteristic functions of the two pairs (Y, m^1) and (Y, m^2) are equal; as such, the pairs have the same probability law. In the case containing m^1, since the optimal π^1 is adapted to the filtration ℱ_t^S∨ℱ_t^m^1, there exists a functional f such that π^1_t=f(Y_u,m^1_u,0≤ u≤ t). We choose π^2_t=f(Y_u,m^2_u,0≤ u≤ t), which is adapted to the filtration ℱ_t^S∨ℱ_t^m^2. Then the process families (Y, m^1, π^1, X^1) and (Y, m^2, π^2, X^2) have the same probability law. Consequently, X_T^1 and X_T^2 also have the same probability law, and their expected utilities are equal. Since π^2 is an admissible strategy for the problem with m^2 and attains the same expected utility as the optimal π^1, we obtain V_m^2≥ V_m^1. Exchanging the roles of m^1 and m^2 gives V_m^1≥ V_m^2. Thus, V_m^1= V_m^2. § APPENDIX B. PROOF OF PROPOSITION <REF> Before proving the second claim, we prepare the following lemma. Let (Ω,ℱ,𝒫) be a probability space with 𝒢_1 and 𝒢_2 sub-σ-algebras of ℱ. For any random variable X∈ L^1( Ω,ℱ,𝒫 ), if X and 𝒢_1 are independent of 𝒢_2, then we have E[X|𝒢_1∨𝒢_2]=E[X|𝒢_1]. Since both sides of (<ref>) are 𝒢_1∨𝒢_2-measurable, and the nonempty collection 𝔉_0≜{ A_1∩ A_2 | A_1∈𝒢_1,A_2∈𝒢_2} of subsets of Ω is a π-system generating 𝒢_1∨𝒢_2, Eq.(<ref>) is equivalent to ∫_A_1∩ A_2 E[X|𝒢_1∨𝒢_2]dP=∫_A_1∩ A_2 E[X|𝒢_1]dP for all A_1∩ A_2∈𝔉_0. Since E[X1_A_1|𝒢_1] and 1_ A_2 are independent, and X1_A_1 and 1_ A_2 are independent, we have RHS =∫_Ω 1_{A_1∩ A_2} E[X|𝒢_1]dP=∫_Ω1_A_11_ A_2 E[X|𝒢_1]dP =∫_Ω E[X1_A_1|𝒢_1] 1_ A_2dP=E[ E[X1_A_1|𝒢_1]] E[1_ A_2] =E[ X1_A_1] E[1_ A_2]=∫_Ω X1_A_11_ A_2 dP =∫_Ω1_A_1∩ A_2 E[X|𝒢_1∨𝒢_2]dP=∫_A_1∩ A_2 E[X|𝒢_1∨𝒢_2]dP. We can artificially construct a pure-noise Brownian motion W^noise, assumed to live on the probability space (Ω,𝔽,ℙ) and to be adapted to the filtration 𝔽. If it does not, there is an extension (Ω,𝔽,ℙ) of (Ω,𝔽,ℙ), defined as the Cartesian product of the original space and the probability space of W^noise. Moreover, extending the space changes neither the investment problem nor the objective function, because of the restriction on π. Thus the probability space can be taken to be as rich as we need. Similarly, m^1 and m^2 are still regarded as Brownian motions. Next we claim that the observed noise contributes nothing to our strategy for any given extra information, that is, sup_π∈𝒜_m EU(X_T)=sup_π∈𝒜_m,W^noise EU(X_T), where 𝒜_m,W^noise:={π | ∫_0^T π_t^2 dt<∞, π adapted to 𝔽^S ∨𝔽^m∨𝔽^W^noise}. Let us revisit the estimation of μ. Indeed, taking X=μ, 𝒢_1= ℱ_t^S∨ℱ_t^m and 𝒢_2= ℱ_t^W^noise in Eq.(<ref>), we have E[μ|ℱ_t^S∨ℱ_t^m∨ℱ_t^W^noise]=E[μ|ℱ_t^S∨ℱ_t^m]. Eq.(<ref>) tells us that the noise gives no information about μ whatsoever. Using the same method, we can construct the innovation process W̅ and the sufficient process Z, which are identical with or without W^noise. Thus, (<ref>) holds. Taking m=m^1, we obtain sup_π∈𝒜_m^1 EU(X_T)=sup_π∈𝒜_m^1,W^noise EU(X_T). Define the process m^3:={m^3_t, t≥ 0}: m^3_t=∫_0^t {ρ^2(s)/ρ^1(s)dm^1_s+√(1-[ρ^2(s)/ρ^1(s)]^2) dW^noise_s}, where ρ^2(s)/ρ^1(s):=0 if ρ^1(s)=ρ^2(s)=0. Then m^3 is a Brownian motion with the property that ρ^3(t):=d⟨ m^3,W ⟩_t/dt=ρ^2(t). Using Proposition <ref>, we have sup_π∈𝒜_m^2 EU(X_T)=sup_π∈𝒜_m^3 EU(X_T). Since 𝒜_m^3⊂𝒜_m^1,W^noise, by the definition of the supremum we obtain sup_π∈𝒜_m^3 EU(X_T)≤sup_π∈𝒜_m^1,W^noise EU(X_T). Thus sup_π∈𝒜_m^1 EU(X_T) ≥sup_π∈𝒜_m^2 EU(X_T), that is, V_m^1≥ V_m^2.
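The construction of m^3 can be illustrated with a quick Monte Carlo sanity check (a sketch only; the constant correlations ρ^1=0.8 and ρ^2=0.5 are chosen purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    N, dt = 200_000, 1e-5                   # time grid on [0, 2]
    rho1, rho2 = 0.8, 0.5                   # assumed constant correlation levels
    dW  = rng.normal(0.0, np.sqrt(dt), N)
    dB  = rng.normal(0.0, np.sqrt(dt), N)   # independent driver completing m^1
    dWn = rng.normal(0.0, np.sqrt(dt), N)   # the pure-noise Brownian motion W^noise
    dm1 = rho1 * dW + np.sqrt(1.0 - rho1**2) * dB
    r = rho2 / rho1
    dm3 = r * dm1 + np.sqrt(1.0 - r**2) * dWn
    print(np.sum(dm3 * dW) / (N * dt))      # ~ 0.5: quadratic covariation with W at rate rho2
    print(np.sum(dm3 * dm3) / (N * dt))     # ~ 1: m^3 is a standard Brownian motion

The first estimate is close to ρ^2 and the second to 1, as the proof requires.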
§ APPENDIX C. PROOF OF THEOREM <REF> dY_t=(μ-1/2σ^2)dt+σ dW_t, t≥ 0, Y_0=0. Define n={n_t:=∫_0^t q(s)[dW_s-ρ(s)dm_s], 0≤ t≤ T}. Assumption <ref> guarantees that the process n is well defined. Furthermore, n is a Brownian motion orthogonal to the process m, and we have the following orthogonal decomposition of W: dW_t=ρ(t) dm_t+q(t)^-1 dn_t. Using SDE (<ref>), we obtain [dY_t-σρ(t)dm_t+1/2σ^2dt]=μ dt+σ q(t)^-1 dn_t. Define the posterior distribution: p(u,t)du:=P(μ∈ du | ℱ_t^S∨ℱ_t^m), ∀ u∈R. Using the same detailed calculation as in <cit.>, we have p(u,t)du = p(u,0)duexp{-1/2u^2∫_0^t σ^-2q(s)^2 ds +u ∫_0^t σ^-2q(s)^2 [dY_s-σρ(s)dm_s+1/2σ^2ds]} / ∫_R p(u,0)duexp{-1/2u^2∫_0^t σ^-2q(s)^2 ds +u ∫_0^t σ^-2q(s)^2 [dY_s-σρ(s)dm_s+1/2σ^2ds]}. The prior distribution of μ yields p(u,0)du=1/√(2πσ_0^2)e^-(u-μ_0)^2/2 σ_0^2. Substituting this into (<ref>), we obtain p(u,t)du =1/√(2πσ^2/(t_0+∫_0^t q(s)^2 ds))×exp{-[u-(y_0+∫_0^t q(s)^2 (dY_s-σρ(s)dm_s+1/2σ^2ds))/(t_0+∫_0^t q(s)^2 ds)]^2/(2 σ^2/(t_0+∫_0^t q(s)^2 ds))}. Therefore E[μ|ℱ_t^S∨ℱ_t^m]=∫_R u p(u,t)du =(y_0+∫_0^t q(s)^2 [dY_s-σρ(s)dm_s+1/2σ^2 ds])/(t_0+∫_0^t q(s)^2 ds). § APPENDIX D. PROOF OF THEOREM <REF> (1) CARA case: utility function U(x)=-1/βe^-β x. We use the ansatz V(t,x,z)=U(e^r(T-t)x+ψ(t,z)). Then the HJB equation corresponding to V(t,x,z) becomes the following PDE for ψ: 0= -β V [ ψ_t+π^*(z-r)e^r(T-t)-β1/2σ^2 π^*2e^2r(T-t) -βσ^2 π^* 1/τ(t)e^r(T-t)ψ_z -β1/2σ^2 (-1/τ)'(t)ψ_z^2+1/2σ^2 (-1/τ)'(t)ψ_zz], where π^*=e^-r(T-t)(z-r)-βσ^2 1/τ(t)ψ_z/βσ^2. Substituting π^* into (<ref>) yields ψ_t+β[ 1/β(z-r)-σ^2 1/τ(t)ψ_z]^2/2σ^2+ 1/2σ^2 (-1/τ)'(t)[-βψ^2_z+ψ_zz] =0. We guess that ψ has the form ψ(t,z)=1/2a(t)(z-r)^2+b(t)(z-r)+c(t). Substituting this form of ψ into Eq.(<ref>), we have 1/2a'(t)+β[ 1/β-σ^2 1/τ(t)a(t) ]^2/2σ^2 +1/2σ^2 (-1/τ)'(t)(-β)a(t)^2=0, a(T)=0, b'(t)+(⋯)b(t)= 0, b(T)=0, c'(t)+(⋯)b(t)+1/2σ^2 (-1/τ)'(t)(a(t))=0, c(T)=0. It follows that ψ(T,z)=0 and b(t)=0 for t∈ [0,T]. To obtain a, we define m(t)= 1/β-σ^2 1/τ(t)a(t). Using the ODE for a(t), τ(t)m'(t)+β[τ'(t)-1]m(t)^2-τ'(t)m(t)=0, which is a Riccati equation. Setting d(t)=1/m(t) transforms the Riccati equation into the linear ODE d'(t)=-τ'(t)/τ(t)d(t)+β[τ'(t)-1]/τ(t). Thus d(t) =β+β(T-t)1/τ(t), m(t) =1/β+β(T-t)1/τ(t), a(t) =τ(t)/σ^2[1/β-1/β+β(T-t)1/τ(t)] =1/βσ^2τ(t)(T-t)/τ(t)+(T-t), c(t) =∫_t^T 1/2σ^2 (-1/τ)'(s)(a(s))ds =1/2β∫_t^Tτ'(s)(T-s)/τ(s)[τ(s)+(T-s)]ds. (2) Since the CRRA case is intricate, we split the proof into three categories: the first is U(x)=x^1-γ/1-γ under the necessary condition t_0/T>1-γ/γ, the second has t_0/T≤1-γ/γ, and the last is U(x)=ln(x). (i) If U(x)=x^1-γ/1-γ with t_0/T>1-γ/γ, we use the ansatz V(t,x,z)=U(e^r(T-t)x)exp{γψ(t,z)}. The HJB equation corresponding to V(t,x,z) is equivalent to the following: 0= γ V [ ψ_t+1-γ/γπ^*/x (z-r)-1/2σ^2 (π^*/x)^2 (1-γ) +σ^2 π^*/x1-γ/τ(t)ψ_z+1/2σ^2 (-1/τ)'(t)[γψ^2_z+ψ_zz] ], where π^*=xσ^2 1/τ(t)ψ_z+1/γ(z-r)/σ^2. Then ψ_t+(1-γ)[ σ^2 1/τ(t)ψ_z + 1/γ(z-r)]^2/2σ^2 +1/2σ^2 (-1/τ)'(t)[γψ^2_z+ψ_zz] =0. We guess ψ(t,z)=1/2a(t)(z-r)^2+b(t)(z-r)+c(t). Similarly to the CARA case, 1/2a'(t)+(1-γ)[ σ^2 1/τ(t)a(t)+1/γ]^2/2σ^2 +1/2σ^2 (-1/τ)'(t)γ a(t)^2=0, a(T)=0, b(t)≡ 0, b(T)=0, c'(t) +1/2σ^2 (-1/τ)'(t)(a(t))=0, c(T)=0. As for the equation for a, we first make the substitution m(t)= σ^2 1/τ(t)a(t)+1/γ and obtain the following Riccati equation for m: τ(t)m'(t)+[ (1-γ)+γτ'(t)] m(t)^2-τ'(t)m(t)=0. With the substitution d(t)=1/m(t), the Riccati equation takes the solvable form d'(t)=-τ'(t)/τ(t)d(t)+(1-γ)+γτ'(t)/τ(t). Solving the last ODE, d(t)=γ-(1-γ)(T-t)1/τ(t), m(t)=1/γ-(1-γ)(T-t)1/τ(t). Then a(t) =τ(t)/σ^2[ 1/γ-(1-γ)(T-t)1/τ(t)-1/γ] =1/γσ^2τ(t)1-γ/γ(T-t)/τ(t)-1-γ/γ(T-t), c(t) =∫_t^T 1/2σ^2 (-1/τ)'(s)(a(s))ds =1/2γ∫_t^T τ'(s)1-γ/γ(T-s)/τ(s)[τ(s)-1-γ/γ(T-s)]ds.
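As a sanity check on the two Riccati solutions above, one can verify symbolically that the claimed functions d(t) solve the corresponding linear ODEs. A minimal sympy sketch (not part of the proof):

    import sympy as sp

    t, T, beta, gamma = sp.symbols('t T beta gamma', positive=True)
    tau = sp.Function('tau')(t)
    taup = sp.diff(tau, t)

    # CARA: d(t) = beta + beta*(T-t)/tau solves d' = -(tau'/tau)*d + beta*(tau'-1)/tau
    d1 = beta + beta * (T - t) / tau
    print(sp.simplify(sp.diff(d1, t) + taup / tau * d1 - beta * (taup - 1) / tau))   # 0

    # CRRA: d(t) = gamma - (1-gamma)*(T-t)/tau solves d' = -(tau'/tau)*d + ((1-gamma)+gamma*tau')/tau
    d2 = gamma - (1 - gamma) * (T - t) / tau
    print(sp.simplify(sp.diff(d2, t) + taup / tau * d2 - ((1 - gamma) + gamma * taup) / tau))  # 0

Both residuals simplify to zero, confirming the displayed solutions.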
Thus the closed form of the value function is obtained in the first situation. Moreover, if γ>1, the value function is always bounded, while if 0<γ<1, the necessary condition t_0/T>1-γ/γ ensures the boundedness of the value function. (ii) If U(x)=x^1-γ/1-γ with t_0/T≤1-γ/γ, the value function is infinite and the above ansatz method does not work. Here we take the strategy π_t=k X_t and show that as k→∞ the expected utility tends to infinity. Indeed, if π_t=k X_t, the terminal state is X_T=e^rTx_0exp{[k(μ-r)-1/2σ^2k^2]T+σ k W_T}. Since μ and W_T are independent, we have E{U(X^k_T)/U(e^rTx_0)} = E{exp[(1-γ)(k(μ-r)-1/2σ^2k^2)T+(1-γ) σ k W_T]} = exp{(1-γ)k(μ_0-r)T-1/2(1-γ)σ^2k^2T + 1/2(1-γ)^2 σ^2 k^2 T+ 1/2(1-γ)^2 k^2 σ^2 T^2 1/t_0} = exp{(1-γ)k(μ_0-r)T+1/2(1-γ)^2 σ^2 k^2 T[T/t_0-γ/1-γ] }. It follows that EU(X^k_T)→∞ as k→∞. Thus, the original problem is meaningless whether or not the extra information is included. This phenomenon occurs when the investor has low relative risk aversion and the ambiguity hidden in the drift term is so large that extreme drift scenarios, far beyond normal market conditions, carry enough probability weight to make the expected utility infinite. Moreover, if T/t_0-γ/1-γ< 0, then sup_k E{U(X^k_T)} =U(e^rTx_0)exp{1/2T/γ/1-γ-T/t_0(μ_0-r)^2/σ^2} =U(e^rTx_0) exp{γ·1/2a(0)(μ_0-r)^2}. From this we see that a(t) is an important function weighing the informative clock against the time remaining to maturity. Indeed, a(0) is the same for any extra information, which will be useful in the later discussion of the value of information. From this perspective, the calculation of a(t) is not merely a trick. (iii) If U(x)=ln(x), we use the ansatz V(t,x,z)=U(e^r(T-t)x)+ψ(t,z). Similarly to (i), we have ψ_t+(z-r)(π^*/x)-1/2σ^2 (π^*/x)^2 +1/2(-1/τ)'ψ_zz=0, π^*=xz-r/σ^2. And ψ(t,z)=1/2a(t)(z-r)^2+c(t), a(t)=1/σ^2(T-t), c(t)=∫_t^T 1/2σ^2 (-1/τ)'(s)(a(s))ds=1/2∫_t^T τ'(s)(T-s)/τ^2(s)ds. § APPENDIX E. PROOF OF PROPOSITION <REF> To compare the values V_m^1 and V_m^2, we only need to compare V^1(0,x_0,μ_0) and V^2(0,x_0,μ_0). In fact, a(0) is determined by τ(0) and T alone and does not depend on the extra information, in both the CARA and the CRRA case (see the proof of Theorem <ref>(3)). The comparison of the value functions therefore reduces to comparing c(0). We compare c(0) for different informative clocks via some transformations. In the CARA case, for example, based on Theorem <ref>, we have c(0) =1/2β∫_0^Tτ'(s)(T-s)/τ(s)[τ(s)+(T-s)]ds =1/2β∫_0^T[1/τ(s)-1/τ(s)+T-s]τ'(s)ds =1/2β∫_0^τ(T)[1/u-1/u+T-τ^-1(u)]du, where the last equality follows from the change of variables u=τ(s). Then c^1(0)-c^2(0) =1/2β∫_τ^2(T)^τ^1(T)[1/u-1/u+T-(τ^1)^-1(u)]du +1/2β∫_0^τ^2(T)[1/u+T-(τ^2)^-1(u)-1/u+T-(τ^1)^-1(u)]du ≥ 0+0=0, where c^1(0) and c^2(0) belong to the value functions V^1 and V^2 respectively. The second term is nonnegative because (τ^1)^-1(u) ≤ (τ^2)^-1(u), where τ^-1 is the inverse function of τ. Then V^1(0,x_0,μ_0) =U(e^rTx_0+1/2a(0)(μ_0-r)^2+c^1(0)) ≥ U(e^rTx_0+1/2a(0)(μ_0-r)^2+c^2(0)) =V^2(0,x_0,μ_0). This is what we desired. The conclusion in the CRRA case can be proven in the same way. When γ≠ 1, c(0) =1/2γ∫_0^T τ'(s)1-γ/γ(T-s)/τ(s)[τ(s)-1-γ/γ(T-s)]ds =1/2γ∫_0^T[1/τ(s)-1-γ/γ(T-s)-1/τ(s)]τ'(s)ds =1/2γ∫_0^τ(T)[1/u-1-γ/γ[T-τ^-1(u)]-1/u]du. Similarly, if γ<1, we have c^1(0)≥ c^2(0), while if γ>1, we have c^1(0)≤ c^2(0).
This does not affect the comparison of the value functions once we note that the utility function U(x) is positive for γ <1 and negative for γ >1. Hence, we obtain V^1(0,x_0,μ_0) =U(e^rTx_0)exp{γ·[1/2a(0)(μ_0-r)^2+c^1(0)]} ≥ U(e^rTx_0)exp{γ·[1/2a(0)(μ_0-r)^2+c^2(0)]} =V^2(0,x_0,μ_0). Finally, when γ=1, c(0) =1/2∫_0^T τ'(s)(T-s)/τ^2(s)ds =1/2∫_0^τ(T)T-τ^-1(u)/u^2du. It is easy to see that c^1(0)≥ c^2(0), and hence V^1(0,x_0,μ_0) =rT +ln(x_0)+1/2a(0)(μ_0-r)^2+c^1(0) ≥ rT +ln(x_0)+1/2a(0)(μ_0-r)^2+c^2(0) =V^2(0,x_0,μ_0). This ends the proof. § APPENDIX F. PROOF OF THEOREM <REF> In the case of the CARA utility function, if we define 𝔏(t,τ,τ'):=1/2β(T-t)τ'(t)/τ(t)[τ(t)+T-t]- λ [τ'(t)-1]^2, then Problem (<ref>) is equivalent to solving sup_τ∫_0^T 𝔏(t,τ(t),τ'(t))dt. Indeed, by the calculus of variations, the optimal functional has to satisfy the necessary condition given by the Euler-Lagrange equation: d/dt∂/∂τ'𝔏(t,τ(t),τ'(t))=∂/∂τ𝔏(t,τ(t),τ'(t)). Using the partial-fraction identity (T-t)/[τ(τ+T-t)]=1/τ-1/(τ+T-t), this simplifies to τ”(t)+1/4βλ1/[τ(t)+T-t]^2=0. In the case of the CRRA utility function with γ≠ 1, matters become a little more complex: the Euler-Lagrange method is invalidated by the nonlinear part of the integration. We use a duality method to make the integration linear. Noticing that x_0/C_2exp(x) has the duality function y-yln(C_2 y/x_0), we have x_0/C_2exp(x)=sup_y>0{y-yln(C_2 y/x_0)+xy}. By using Eq.(<ref>) and Eq.(<ref>), the objective function Net(τ) is Net(τ) =sup_y>0{y-yln(C_2 y/x_0)-x_0 +y [∫_0^T 1/2(1-γ)τ'(t)1-γ/γ(T-t)/τ(t)[τ(t)-1-γ/γ(T-t)]dt] -∫_0^T λ [τ'(t)-1]^2 dt}. If we define 𝔏^y(t,τ,τ') :=y/2(1-γ)τ'(t)1-γ/γ(T-t)/τ(t)[τ(t)-1-γ/γ(T-t)]- λ [τ'(t)-1]^2, then Net(τ)=sup_y>0{y-yln(C_2 y/x_0)-x_0+ ∫_0^T 𝔏^y(t,τ,τ') dt}. As such, sup_τ Net(τ) =sup_τsup_y>0{y-yln(C_2 y/x_0)-x_0+ ∫_0^T 𝔏^y(t,τ,τ') dt} = sup_y>0sup_τ{y-yln(C_2 y/x_0)-x_0+ ∫_0^T 𝔏^y(t,τ,τ') dt}. We then fix y>0 and solve sup_τ{y-yln(C_2 y/x_0)-x_0+ ∫_0^T 𝔏^y(t,τ,τ') dt} via the Euler-Lagrange equation d/dt∂/∂τ'𝔏^y(t,τ(t),τ'(t))=∂/∂τ𝔏^y(t,τ(t),τ'(t)). It follows that τ”(t)+y/4γλ1/[τ(t)-1-γ/γ(T-t)]^2=0. Moreover, for U(x)=ln(x) the calculation is similar and the outcome coincides with (<ref>) upon setting γ=1, so we need not distinguish carefully whether γ equals one. [Andrei and Hasler(2015)]andrei2015investor Andrei, D., & Hasler, M. 2015. Investor attention and stock market volatility. The Review of Financial Studies, 28(1), 33–72. [Banerjee and Breon-Drish(2020)]banerjee2020strategic Banerjee, S., & Breon-Drish, B. 2020. Strategic Trading and Unobservable Information Acquisition. Journal of Financial Economics, 138(2), 458–482. [Cabrales et al.(2013)]cabrales2013entropy Cabrales, A., Gossner, O., & Serrano, R. 2013. Entropy and the value of information for investors. American Economic Review, 103(1), 360–377. [Frey et al.(2012)]frey2012portfolio Frey, R., Gabih, A., & Wunderlich, R. 2012. Portfolio optimization under partial information with expert opinions. International Journal of Theoretical and Applied Finance, 15(01), 1250009. [Gennotte(1986)]gennotte1986optimal Gennotte, G. 1986. Optimal portfolio choice under incomplete information. The Journal of Finance, 41(3), 733–746. [Huang and Liu(2007)]huang2007rational Huang, L., & Liu, H. 2007. Rational inattention and portfolio selection. The Journal of Finance, 62(4), 1999–2040. [Karatzas and Zhao(2001)]karatzas2001bayesian Karatzas, I., & Zhao, X. 2001. Bayesian adaptive portfolio optimization.
Option pricing, interest rates and risk management, pp.632-669. [Karatzas and Xue(1991)]karatzas1991note Karatzas, I., & Xue, X. 1991. A Note On Utility Maximization Under Partial Observations. Mathematical Finance, 1(2), 57–70. [Kadan and Manela(2019)]kadan2019estimating Kadan, O., & Manela, A. 2019. Estimating the Value of Information. The Review of Financial Studies, 32(3), 951–991. [Kuwana(1995)]kuwana1995certainty Kuwana, Y. 1995. Certainty Equivalence and Logarithmic Utilities in Consumption/Investment Problems. Mathematical Finance, 5(4), 297–309. [Kumar et al.(2008)]kumar2008estimation Kumar, P., Sorescu, S. M., Boehme, R. D., & Danielsen, B. R. 2008. Estimation Risk, Information, and the Conditional CAPM: Theory and Evidence. The Review of Financial Studies, 21(3), 1037–1075. [Kyle(1985)]kyle1985continuous Kyle, A.S., 1985. Continuous auctions and insider trading. Econometrica: Journal of the Econometric Society, pp.1315-1335. [Lakner(1995)]lakner1995utility Lakner, P. 1995. Utility Maximization with Partial Information. Stochastic Processes and their Applications, 56(2), 247–273. [Lakner(1998)]lakner1998optimal Lakner, P. 1998. Optimal Trading Strategy for an Investor: The Case of Partial Information. Stochastic Processes and their Applications, 76(1), 77–97. [Liptser and Shiriaev(1977)]liptser1977statistics Liptser, R. S., & Shiriaev, A. N. 1977. Statistics of Random Processes: General Theory. Springer. [Runggaldier and Zaccaria(2000)]runggaldier2000stochastic Runggaldier, W. J., & Zaccaria, A. 2000. A Stochastic Control Approach to Risk Management under Restricted Information. Mathematical Finance, 10(2), 277–288. [Sass and Haussmann(2004)]sass2004optimizing Sass, J. and Haussmann, U.G., 2004. Optimizing the terminal wealth under partial information: The drift process as a continuous time Markov chain. Finance and Stochastics, 8, pp.553-577. [Veronesi(2000)]veronesi2000does Veronesi, P. 2000. How Does Information Quality Affect Stock Returns? The Journal of Finance, 55(2), 807–837. [Wonham(1964)]wonham1964some Wonham, W. M. 1964. Some Applications of Stochastic Differential Equations to Optimal Nonlinear Filtering. Journal of the Society for Industrial and Applied Mathematics, Series A: Control, 2(3), 347–369. [Xiong and Yan (2010)]xiong2010hetero Xiong, W. and Yan, H., 2010. Heterogeneous expectations and bond markets. The Review of Financial Studies, 23(4), pp.1433-1466. [Zhao(1999)]zhao1999bayesian Zhao, X. 1999. Bayesian Adaptive Portfolio Optimization. Columbia University. [Zohar(2001)]zohar2001generalized Zohar, G. 2001. A generalized Cameron-Martin formula with applications to partially observed dynamic portfolio optimization. Mathematical Finance, 11(4), 475–494
http://arxiv.org/abs/2405.09851v1
20240516070044
Region of Interest Detection in Melanocytic Skin Tumor Whole Slide Images -- Nevus & Melanoma
[ "Yi Cui", "Yao Li", "Jayson R. Miedema", "Sharon N. Edmiston", "Sherif Farag", "J. S. Marron", "Nancy E. Thomas" ]
eess.IV
[ "eess.IV", "cs.CV", "q-bio.QM" ]
Cui et al. Department of Economics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA yicui@unc.edu Department of Statistics & Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA {yaoli,marron}@email.unc.edu School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27516, USA {jayson_miedema,sherif_farag,nancy_thomas}@med.unc.edu Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA edmiston@ad.unc.edu Region of Interest Detection in Melanocytic Skin Tumor Whole Slide Images - Nevus & Melanoma Yi Cui1 Yao Li2 Jayson R. Miedema3 Sharon N. Edmiston4 Sherif Farag3 J.S. Marron2,3 Nancy E. Thomas3 May 20, 2024 ======================================================================================================== Automated region of interest detection in histopathological image analysis is a challenging and important topic with tremendous potential impact on clinical practice. The deep-learning methods used in computational pathology may help us to reduce costs and increase the speed and accuracy of cancer diagnosis. We started with the UNC Melanocytic Tumor Dataset cohort, which contains 160 hematoxylin and eosin whole-slide images of primary melanomas (86) and nevi (74). We randomly assigned 80% (134) as a training set and built an in-house deep-learning method to allow for classification, at the slide level, of nevi and melanomas. The proposed method performed well on the remaining 20% (26) test dataset: the accuracy of the slide classification task was 92.3%, and the model also performed well in predicting the regions of interest annotated by the pathologists, showing excellent performance on melanocytic skin tumors. Although we tested the experiments on a skin tumor dataset, our work could also be extended to other medical image detection problems to benefit the clinical evaluation and diagnosis of different tumors. § INTRODUCTION The American Cancer Society predicted that in 2022 an estimated 99,780 cases of invasive and 97,920 cases of in-situ melanoma would be newly diagnosed and 7,650 deaths would occur in the US <cit.>. The state-of-the-art histopathologic diagnosis of a melanocytic tumor is based on a pathologist’s visual assessment of its hematoxylin and eosin (H&E)-stained tissue sections. However, multiple studies have suggested high levels of diagnostic discordance among pathologists in interpreting melanocytic tumors <cit.>. Correct diagnosis of primary melanoma is key to prompt surgical excision to prevent metastases and to identifying patients with primary melanoma who are eligible for systemic adjuvant therapies that can improve survival. Conversely, overdiagnosis can lead to unnecessary procedures and treatment with toxic adjuvant therapies. We are applying deep-learning methods in computational pathology to determine whether we can increase diagnostic accuracy, along with increasing speed and decreasing cost. Here, we examine methods for improving Region of Interest (ROI) detection in melanocytic skin tumor Whole Slide Images (WSIs), which is an important step toward the computational pathology of melanocytic tumors. Traditionally, expert pathologists visually identify and annotate the regions potentially related to melanoma and nevus, and then examine them closely to classify the tumor type. However, this process is time-consuming and the accuracy is not satisfactory <cit.>.
One potential solution may be the combination of high-quality histopathological images and AI technology. Histopathological images have long been utilized in treatment decisions and prognostics for cancer. For example, histopathological images are used to score tumor grade in breast cancer to predict outcomes, or to perform histologic pattern classification in lung adenocarcinoma, which is critical for determining tumor grade and treatment for patients <cit.>. AI technology, such as deep learning-based predictors trained on annotated and non-annotated data, could be an efficient technology to improve early detection <cit.>, help pathologists diagnose tumors, and inform treatment decisions to potentially improve overall survival rates. Recently, with the advancement of machine learning, especially deep learning, many researchers have developed various frameworks and Convolutional Neural Network (CNN) architectures, like ZefNet <cit.>, Visual Geometry Group (VGG) <cit.>, ResNet <cit.>, DenseNet <cit.>, etc., to solve biomedical image computing and classification problems in the fields of computer vision and pathology <cit.>. The idea of transfer learning from these frameworks is to use a network that has been trained on unrelated categories on a huge dataset, like ImageNet, and then transfer its knowledge to the small dataset at hand. Besides these transfer-learning-based methods, there also exist methods that do not use a pre-trained model; these models are trained from scratch, with all CNN parameters updated using only the training dataset. In other words, deep learning largely expands the set of methods available for prediction and classification problems in pathology, with applications including tumor classification <cit.>, cancer analysis and prediction <cit.>, cancer treatment prediction <cit.> and so on. Therefore, in the field of computational pathology, more and more researchers apply these deep learning methods to medical images that contain rich information and features. Some papers <cit.> use WSIs and show high performance and accuracy of their models on certain types of cancers, such as breast and uterine cancers. Recently, there has been some literature <cit.> analyzing skin cancer based on histopathological images. However, previous literature does not include ROI detection for melanocytic skin tumors and achieves only limited accuracy in classification and identification among various tumors. Benefiting from AI technology and histopathological images, we developed a deep neural network-based ROI detection method that can precisely detect the ROI in melanocytic skin tumor WSIs and, at the same time, classify the slides accurately. In Fig. <ref>, the slides have ROIs indicated by black dots. Our goal was to automatically find this region without the use of the black dots. The performance of our model can be seen as the green boundary in the right panel. Large images were broken into small patches, and abundant features were extracted from these patches <cit.>. In addition, we leveraged the partial information from annotations, in the spirit of semi-supervised learning, to enhance our detection method. This method improved classification accuracy compared to previous approaches for certain kinds of tumors. We also demonstrated our algorithm’s accuracy and robustness by shrinking the training data to various subsets of the original training samples. Fig. <ref> illustrates the overview of our method.
§ MATERIALS AND METHODS §.§ Data Melanocytic Tumor Dataset cohort. The melanocytic tumor dataset contained 86 melanoma (skin cancer) and 74 nevus (benign mole) WSIs. Besides slide-level labels, there were annotations made by pathologists on these slides. A slide might contain multiple slices of the same tissue, and pathologists annotated ROIs on some slices for diagnostic purposes, but not on others. We used the Aperio ScanScope Console to scan the tissue samples with 20× magnification. Training Set. We randomly selected 80% (134 WSIs[It contained 71 melanoma (skin cancer) and 63 nevus (benign moles) WSIs.]) of the data as our training set (Fig. 2a). For the training set, the slide-level labels (melanoma vs. nevus) are available, but the true annotations of the ROIs are not. While a portion of the ROIs in the slides were annotated, it should be noted that not all ROIs received annotation. This makes it challenging to use these annotations to evaluate the performance of the model on the ROI detection task. However, we can still leverage these partial annotations to train a deep-learning model that can perform slide classification and ROI detection. Testing Set. We took the other 20% (26 WSIs) as our testing set (Fig. 2a). For the evaluation of our method and other baseline models, these 26 WSIs were manually annotated by our pathologists. Our model was trained on slides (from the training set) without ground-truth annotations, using only partial information on the WSIs. We used the Aperio ImageScope Console to mark tumor boundaries as annotations and exported these annotations in the Extensible Markup Language (XML) format, which includes the annotated regions with their corresponding coordinates. We used these coordinates to separate the annotated regions of each slide from the rest of the image, labeled as melanoma or nevus. §.§ Data Preparation Data pre-processing: color normalization. To minimize potential side effects of color variation, we preprocessed all WSIs using previously published color normalization methods <cit.>. Scans performed in different labs, or even in the same lab at different times, can differ in quality and color. The model may pick up these undesirable variations, which would influence the feature extraction and the subsequent classification and ROI detection. Thus, we applied color normalization to the WSIs to ensure that slides processed under different circumstances lie in a common, normalized color space, which enhances the robustness of model training and quantitative analysis (Fig. 2b). Data pre-processing: data augmentation. First, tissue detection was performed for the patches extracted from the WSIs. If tissue was detected, the patch was kept and color normalization was applied. Data augmentation was then done by random crop, random horizontal flip and normalization of the patches (Fig. 2b), with edge features preserved accurately. Patch extraction. Image slides were tiled into non-overlapping patches of 256 × 256 pixels at 20× magnification. Given a WSI, patches were extracted based on the slide-level label and annotations (Fig. 2c). If the slide-level label was nevus, all patches inside the annotated regions were labeled as nevus. If the slide-level label was melanoma, all patches inside the annotated regions were labeled as melanoma. Besides patches from annotated regions, some patches outside those regions were also extracted and labeled as other.
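A tiling step of this kind might look as follows. This is a minimal Python sketch, not the authors' released code (see their GitHub repository for that); the white-background threshold of 220 and the minimum tissue fraction are assumed values standing in for the unspecified tissue-detection rule:

    import numpy as np
    import openslide

    PATCH = 256   # patch side in pixels, matching the 20x tiling described above

    def iter_patches(slide_path, min_tissue=0.2):
        # Yield (x, y, rgb_array) for every non-background 256x256 tile.
        slide = openslide.OpenSlide(slide_path)
        width, height = slide.dimensions
        for y in range(0, height - PATCH + 1, PATCH):
            for x in range(0, width - PATCH + 1, PATCH):
                tile = slide.read_region((x, y), 0, (PATCH, PATCH)).convert('RGB')
                arr = np.asarray(tile)
                # crude tissue detection: keep tiles that are not mostly white
                if (arr.mean(axis=2) < 220).mean() > min_tissue:
                    yield x, y, arr

Each yielded patch can then be matched against the XML coordinates to receive its melanoma/nevus/other label.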
However, since not all ROIs were annotated by pathologists, there could be melanoma and nevus patches outside the annotated regions. To avoid labeling those patches as other, we manually extracted patches of the other class from regions. §.§ Model training and assessment Training patch classifier. A three-class patch classification model (PCLA-3C) was trained on the labeled patches with VGG16 <cit.> as the base architecture (Fig. 2d). The model was trained using this CNN architecture by backpropagation, with the last layer modified to fit our three classes. For each patch, the classifier returns three scores, corresponding to the three categories (melanoma, nevus and other). Slide classification and ROI detection. In the testing stage, all patches from a WSI are first fed into the trained patch classifier. Ignoring patches predicted as other, slide-level prediction is done by majority vote over the patches predicted as melanoma and nevus: if the number of patches labeled as melanoma exceeds the number of patches labeled as nevus in a WSI, we classify it as melanoma, and vice versa (Fig. 2e). For a WSI classified as melanoma, all patches from the slide are ranked by their predicted melanoma scores; otherwise, all patches are ranked by their predicted nevus scores (Fig. 2f). Model assessment. To evaluate the performance of ROI detection, the annotated ratio was measured to calculate the Intersection over Union (IoU) for each slide. Given a slide, the annotated ratio β was calculated as the number of patches in the annotated region divided by the number of patches extracted from the slide: β = A_p/C_p, where A_p is the number of patches in A (the annotated region) and C_p is the number of patches in C (the WSI). Then, the top nβ patches based on predicted scores were classified as ROI, where n is the total number of patches from the slide. For example, if β=0.2 for a slide in the testing set, 20% of the regions in the slide are ROIs, and the model predicts the top 20% of patches (based on the predicted scores) as patches in the ROIs. The performance was measured by the IoU between the annotated region and the predicted ROI region. Since the framework is patch-based, the IoU was calculated as the number of patches in the intersection region (the region in both the annotated and predicted regions) divided by the number of patches in the union of the annotated and predicted ROI regions: IoU = (A∩ B)_p/(A∪ B)_p, where (A∩ B)_p is the number of patches in the region A∩ B and (A∪ B)_p is the number of patches in the region A∪ B; here A is the annotated region and B is the predicted/highlighted region. Visualization. The detection method provides three types of visualization maps: boundary, overlap and heatmap (examples are in Fig. <ref>). The three visualization maps are generated from the predicted scores calculated in the ROI detection step (Fig. 2g). The overlap map highlights the top-ranked patches in a WSI and masks other areas with a transparent blue color (Fig. 3a, 3d); the percentage of highlighted patches equals β (the annotated ratio), so the highlighted region is also the predicted ROI. The boundary map shows the boundary of the largest ROI cluster based on the highlighted patches, where the highlighted patches are clustered by the OPTICS algorithm <cit.> (Fig. 3b, 3e). The last one is a heatmap, where red covers regions with high predicted scores and blue covers regions with low predicted scores (Fig. 3c, 3f).
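To make the assessment step concrete, the ROI selection and patch-based IoU described above can be written in a few lines. This is an illustrative sketch (the function name and inputs are hypothetical, not taken from the released code):

    import numpy as np

    def roi_and_iou(scores, annotated):
        # scores: 1-d array of predicted melanoma (or nevus) scores, one per patch
        # annotated: boolean array, True for patches inside the pathologist annotation
        n = len(scores)
        beta = annotated.sum() / n                 # annotated ratio  beta = A_p / C_p
        k = int(round(beta * n))                   # flag the top n*beta patches as ROI
        predicted = np.zeros(n, dtype=bool)
        predicted[np.argsort(scores)[::-1][:k]] = True
        inter = (predicted & annotated).sum()      # (A ∩ B)_p
        union = (predicted | annotated).sum()      # (A ∪ B)_p
        return predicted, inter / union            # predicted ROI mask and patch-based IoU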
§ RESULTS §.§ Method Comparison Two methods were tested on the melanocytic skin tumor dataset for ROI detection and slide classification: 1) CLAM (clustering-constrained attention multiple instance learning) <cit.>, and 2) PCLA-3C (the proposed patch-based classification model). The 160 WSIs from the UNC Melanocytic Tumor Dataset cohort were randomly split into training and testing sets, with 134 for training and 26 for testing. Both methods were trained on the training set, and the performance on both training and testing sets was evaluated. Visualization results and code can be found on GitHub[https://github.com/cyMichael/ROI_Detectionhttps://github.com/cyMichael/ROI_Detection]. Computational configuration. All analyses were performed in Python. Images were analyzed and processed using OpenSlide. All computational tasks were run on the UNC Longleaf Cluster with Linux (tested on Ubuntu 18.04) and NVIDIA GPUs (tested on an NVIDIA GeForce RTX 3090 on local workstations). NVIDIA's instructions were followed to set up and configure CUDA (tested on CUDA 11.3), and the torch version should be greater than or equal to 1.7.1. §.§ Model Validation and Robustness We trained the model on different proportions of the training dataset, with the results evaluated on the testing set (26 WSIs); see Table <ref>. There was high agreement between the ROI predictions of PCLA-3C and the true ones, showing the accuracy of our automatic ROI detection. Using the training data, our method achieved an accuracy of 92.3% in slide-level classification and an IoU of 38.2% in the ROI detection task on the testing set. Our method thus outperformed CLAM, which achieved an accuracy of 69.2% in slide-level classification and an IoU of 11.2% in the ROI detection task. We also analyzed the robustness results in the supplementary information: the accuracy was 0.7866 (95% CI, 0.761-0.813) at the patch level and 0.885 (95% CI, 0.857-0.914) at the slide level when using 80% (107 WSIs) of the original training set. Our true testing data were kept unchanged, since these data included true annotations, whereas the training data did not. For PCLA-3C, the improvements in patch classification accuracy, slide classification accuracy and IoU show the importance of annotations in the training of deep-learning classifiers for prediction. They also show that patch classification results can be used to predict the slide-level label accurately. This is important, as the accurate tumor type is the clinical biomarker guiding future treatment. In summary, our deep-learning-based framework outperformed the state-of-the-art ROI detection method <cit.>, leading to better model visualization and interpretation, which is crucial in medical imaging and related treatment recommendations. §.§ Misclassified Slides Discussion The proposed method PCLA-3C misclassified only two slides in the testing set. The two WSIs are both labeled as nevus but were misclassified as melanoma by the model (see the two slides and the corresponding visualization results in Fig. <ref> and Fig. <ref>). The slide in Fig. <ref> is not a typical nevus: it has the features of a pigmented spindle cell nevus, which is one of the diagnostic challenges among melanocytic skin tumors. The slide in Fig. <ref>, however, is a routine type of nevus. The reason that PCLA-3C misclassified this slide could be the difference in color: in general, the ROIs in melanoma cases were dark, while those in nevus cases were light.
As shown in Fig. 7b, there were some dark areas outside the annotated ROIs, which contributed to the misclassification of slides and the incorrect detection of ROIs. § DISCUSSION In this work, we presented deep-learning-based classifiers for predicting the correct tumor types with and without annotations. Using high-quality WSIs from the UNC Melanocytic Tumor Dataset cohort annotated by our pathologists, we systematically selected the proper cases for training and testing. The heatmap, boundary and overlay figures produced by PCLA-3C showed considerable agreement with the annotations made by our pathologist group. Also, as shown in Table <ref> and Table <ref>, the test results showed that PCLA-3C achieves higher accuracy than CLAM at the patch, slide and ROI levels while using only a limited number of WSIs as the training set. Some recent studies have also examined tumors using deep-learning architectures in the medical imaging field. Most of the literature has studied the effects of CNN-based methods on different cancers, such as breast cancer and skin cancer, and achieved high accuracy on the classification task. Khalid et al. <cit.> utilized deep learning and transfer learning to classify skin cancers. Several works <cit.> address the classification problem in breast cancer with deep learning methods. Furthermore, Farahmand et al. <cit.> not only classified WSIs accurately but also focused on ROI detection tasks and achieved good results. In Lu et al. <cit.>, CLAM was used for the detection of renal cell carcinoma and lung cancer. CLAM performs slide classification and ROI detection without requiring pixel- or patch-level labels. However, when applied to the melanocytic skin tumor dataset, its ROI detection is not satisfactory. Lerousseau et al. <cit.> introduced a weakly supervised framework (WMIL) for WSI segmentation that relies on slide-level labels. Pseudo-labels for patches were generated during training based on predicted scores. Their framework was evaluated on multi-location and multi-centric public data, which demonstrates a potentially promising approach for further study of WSIs. Here we reported a novel method that performs automated ROI detection on primary skin cancer WSIs, improving on the performance of the state-of-the-art method by a large margin. In most places, diagnostic pathologists manually scan all the slides to analyze the tumor types; thus, it is convenient and cheap to apply a deep-learning method to these existing WSIs. The high accuracy of our deep-learning-based method represents substantial progress toward digital assistance in diagnosis. The key strength of our model is that it overcomes the lack of ground-truth labels for the detection task. The performance of previous methods was not satisfactory on melanocytic WSIs. One reason is that melanocytic tumors are difficult to diagnose and detect; the literature reports 25–26% discordance between individual pathologists in classifying a benign nevus versus malignant melanoma <cit.>. With only slide-level labels, it is hard to train a reliable method. The success of our method shows that combining partial information from annotations with patch-level information can greatly enhance the analysis of melanocytic skin tumors. The weakness of our model is that it does not classify all WSIs accurately: our slide classification accuracy is 92.3%, so the model (PCLA-3C) cannot be relied on completely.
Two WSIs (true label: nevus) in the testing set were misclassified as melanoma. Although our method does not match the gold standard, our results can assist pathologists in efficiently classifying WSIs and finding the ROIs. In summary, the deep-learning architecture that we developed and utilized in this study provides a highly accurate and robust approach to detecting skin tumors and predicting the exact tumor type. Given that examining patients’ WSIs takes a great deal of time with conventional methods, our efficient AI method could help medical staff save time and improve the efficiency and accuracy of diagnosis, benefiting patients in the future. We expect that our approach will generalize to other cancer types, not restricted to skin cancer or breast cancer <cit.>, and to vision-related treatment outcome predictions. The deep-learning-based framework could also be widely applied to identification and prediction tasks in diagnostics. In the future, we plan to extract more detailed information from high-quality WSIs and improve our model to achieve higher accuracy in detection and prediction. Future work will also include further improvements in ROI detection performance by incorporating extra information into the model, such as gene expression and clinical data. §.§.§ We would like to thank the School of Medicine of the University of North Carolina at Chapel Hill for providing study materials. All authors performed final approval of the paper, are accountable for all aspects of the work, confirm that we had full access to all the data in the study, and accept responsibility for the decision to submit for publication. This study was supported by the National Cancer Institute at the National Institutes of Health P01CA206980, R01CA112243 grants and the UNC Health Foundation. §.§.§ Approval for the study was granted by the Institutional Review Board (IRB) of UNC Chapel Hill. The tissues and data for the Melanocytic Tumor Dataset cohort were retrieved under permission from IRB # 22-0611. The IRB determined the research met the criteria for waiver of informed consent for research [45 CFR 46.116(d)] and waiver of HIPAA authorization [45 CFR 164.512(i)(2)(ii)]. The study complies with all regulations. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. All data generated or analyzed during this study are included in this paper <cit.>.
http://arxiv.org/abs/2405.09434v1
20240515152538
Chiral extensions of regular toroids
[ "Antonio Montero", "Micael Toledo" ]
math.CO
[ "math.CO", "52B15, 52C22, 05E18" ]
antonio.montero@fmf.uni-lj.si [a]Faculty of Mathematics and Physics, University of Ljubljana, SI-1000 Ljubljana, Slovenia [b]Institute of Mathematics, Physics and Mechanics, Jadranska 19, SI-1000 Ljubljana, Slovenia micaelalexitoledo@gmail.com Abstract polytopes are combinatorial objects that generalise geometric objects such as convex polytopes, maps on surfaces and tilings of space. Chiral polytopes are those abstract polytopes that admit full combinatorial rotational symmetry but do not admit reflections. In this paper we build chiral polytopes whose facets (maximal faces) are isomorphic to a prescribed regular cubic tessellation of the n-dimensional torus (n ≥ 2). As a consequence, we prove that for every d ≥ 3 there exist infinitely many chiral d-polytopes. Chiral extensions of regular toroids Micael Toledo May 20, 2024 ==================================== § INTRODUCTION An abstract polytope is a partially ordered set that generalises the incidence structure of geometric convex polyhedra to higher dimensions. The rank of is the combinatorial equivalent to the geometric notion of dimension. Abstract polytopes inherit a natural recursive structure from their convex and geometric analogues: just as a cube can be thought of as a family of six squares (objects of dimension 2) glued together along their edges (of dimension 1), an abstract polytope of rank n (called an n-polytope) can be thought of as a family of (n-1)-polytopes glued along their faces of rank n-2. Those (n-1)-polytopes are the facets of , and whenever all the facets are isomorphic to a fixed polytope we say that is an extension of . The problem of determining whether or not a fixed polytope admits an extension has been part of the theory's development since its beginning. In fact in <cit.> Danzer and Schulte attack this problem for regular polytopes. They prove that every non-degenerate regular polytope admits an extension, and this extension is finite if and only if is finite. In <cit.>, Danzer proves that every non-degenerate polytope , regardless of its symmetry properties, admits an extension that is finite (resp. regular) if and only if is finite (resp. regular). In <cit.>, Schulte builds a universal regular extension for every regular polytope . This extension is universal in the sense that any other regular extension of is a quotient of . In <cit.> Pellicer develops several constructions that have as a consequence that every regular polytope admits a regular extension with prescribed conditions on its local combinatorics. In particular, this proves that every regular polytope admits an infinite number of non-isomorphic regular extensions. Chiral abstract polytopes are those that admit full symmetry by (combinatorial) rotations but do not admit reflections. They were introduced by Schulte and Weiss in <cit.> as a generalisation of Coxeter's twisted honeycombs in <cit.>. The notion of a chiral polytope arises naturally from a geometric idea and examples of chiral 3-polytopes are abundant. Many of them have been part of the literature in the context of maps on surfaces. In <cit.>, it is proved that there are infinitely many chiral maps on the torus. On the other hand, it is known that there are no chiral maps on orientable surfaces of genus 2, 3, 4, 5 and 6 (see <cit.>, for example). The smallest non-toroidal chiral map was constructed by Wilson in <cit.>. The results obtained by Sherk in <cit.> imply that infinitely many orientable surfaces admit chiral maps.
Finding examples of chiral polytopes of higher ranks has proved to be a rather difficult problem. Some examples of chiral 4-polytopes have been built from hyperbolic tilings in <cit.>. In <cit.> Schulte and Weiss developed a technique to build chiral extensions of polytopes, which introduced the first examples of chiral 5-polytopes. However, this technique cannot be applied twice directly, so it cannot be used to build 6-polytopes. Moreover, the construction introduced by Schulte and Weiss gives a locally infinite polytope. The first examples of finite chiral 5-polytopes were constructed by Conder, Hubard and Pisanski in <cit.> with the use of computational tools. In <cit.> Breda, Jones and Schulte develop a technique to build new finite chiral n-polytopes from known finite chiral n-polytopes. This technique allowed the construction of concrete examples of chiral polytopes of ranks 3, 4 and 5. It was not until 2010 that Pellicer proved in <cit.> the existence of chiral polytopes of all ranks higher than 3. His construction is based on finding a chiral extension of a particular regular polytope and can be applied recursively to the minimal regular cover of the resulting chiral extension. Unfortunately, because of the nature of this construction, the size of the polytopes obtained grows so fast that they quickly become conceptually intractable, to the point where determining many of their basic structural properties, such as the kind of facets they have, becomes practically impossible at high ranks. One of the limitations of any construction of chiral extensions of polytopes is that, unlike the constructions for regular extensions, they cannot be applied recursively. If is a chiral polytope, its facets can be either chiral or regular, but the (n-2)-faces of must be regular (see <cit.>). This implies that if is a chiral extension of , then is either regular or chiral with regular facets. If is chiral with regular facets, a universal chiral extension of <cit.> exists. In <cit.>, Cunningham and Pellicer proved that any finite chiral polytope with regular facets admits a finite chiral extension. There are examples of orientably regular polytopes that do not admit a chiral extension (see <cit.>), but these examples are, in some sense, degenerate. In <cit.> the authors build chiral 4-polytopes with symmetric and alternating groups as automorphism groups. In <cit.> Conder, Hubard and O'Reilly-Regueiro extend the techniques in <cit.> to build chiral polytopes of rank higher than 4 whose automorphism group is alternating or symmetric. These polytopes arise naturally as chiral extensions of the simplex. To the best of our knowledge, this is the only known construction of chiral extensions of regular polytopes. An (n+1)-toroid is a quotient of a regular tiling of the Euclidean space by a lattice group. Apart from a few exceptions, every (n+1)-toroid possesses the natural structure of an abstract polytope. Moreover, regular toroids are well understood: all of them are quotients of a regular tessellation of ; in particular, if n∉{2,4}, all the regular (n+1)-toroids are a quotient of a cubic tessellation. The family of cubic regular toroids is arguably the most natural infinite family of regular polytopes of any given rank. In this paper, we attack the problem of building chiral extensions of cubic regular toroids. More precisely, we prove the following result (see Theorem <ref> for a more detailed version). Let n ≥ 2.
For all but finitely many cubic regular (n+1)-toroids there exists a chiral (n+2)-polytope whose facets are isomorphic to . § PRELIMINARIES §.§ Regular abstract polytopes. Abstract polytopes are structures derived from combinatorial properties of geometric polytopes. They generalise convex polytopes, tilings of the Euclidean spaces, maps on surfaces, among others. Formally, an abstract polytope of rank n or an n-polytope, for short, is a partially ordered set (, ≤) (we usually omit the order symbol) that satisfies the properties in Items <ref> to <ref> below. These properties are precisely P1, P2, P3' and P4 in <cit.>, where they are described in detail. * has a minimum element F_-1 and a maximum element F_n. * Every flag (maximal chain) of contains exactly n+2 elements, including F_-1 and F_n. * is strongly flag connected. * satisfies the diamond condition. The elements of are called faces. We say that two faces F and G are incident if F ≤ G or G≤ F. The condition in Item <ref> allows us to define a rank function : →{ -1, …, n } by (F)=i, where i+2 is the length of a maximal chain of whose largest face is F. In particular (F_-1) = -1 and (F_n)=n. We usually call vertices, edges and facets the elements of rank 0, 1 and n-1 respectively. In general, a face of rank i is called an i-face. The diamond condition implies that given i ∈{ 0, …, n-1 }, for every flag Φ of there exists a unique flag Φ^i such that Φ and Φ^i differ exactly in the face of rank i. In this situation we say that Φ and Φ^i are adjacent or i-adjacent, if we need to emphasise i. If i_1, i_2, …, i_k∈{ 0, …, n-1 }, we define recursively Φ^i_1, …, i_k = (Φ^i_1, …, i_k-1)^i_k. This notion of adjacency turns the set () of flags of into a properly n-edge-coloured graph, called the flag graph of . For every i ∈{0,…,n-1}, let r_i be the permutation of the flags of mapping every flag Φ to its i-adjacent flag Φ^i. We call the group () := ⟨ r_0, …, r_n-1⟩ the monodromy group of . We will let () act on the left, so that r_iΦ denotes the image of Φ under r_i. Note that the action of () is transitive on the set of flags of but it may not be free. The even monodromy group of is the subgroup of () defined as ^+() = ⟨ s_1, …, s_n-1⟩ where s_i=r_i-1r_i for all i ∈{1, …, n-1}. Note that ^+() is a subgroup of index at most 2 in (). This group will play an important role throughout this paper. If F and G are two faces of a polytope such that F ≤ G, the section G/F is the restriction of the order of to the set { H ∈ : F ≤ H ≤ G }. Note that if (F) = i and (G) = j, the section G/F is an abstract polytope of rank j-i-1. If F_0 is a vertex, the vertex-figure at F_0 is the section F_n/F_0. We sometimes identify each face F with the section F/F_-1. In particular, every facet F_n-1 can be identified with the section F_n-1/F_-1 of rank n-1. Given an abstract n-polytope , an (n+1)-polytope is an extension of if all the facets of are isomorphic to . For i ∈{ 1, …, n-1 }, if F is an (i-2)-face and G is an (i+1)-face with F ≤ G then the section G/F is a 2-polytope. Therefore G/F is isomorphic to a p_i-gon for some p_i∈{ 2, …, ∞}. If the number p_i does not depend on the particular choice of F and G but only on i we say that has Schläfli symbol { p_1, …, p_n-1}; in this situation sometimes we just say that is of type { p_1, …, p_n-1}. Note that if is an n-polytope of type { p_1, …, p_n-1}, then the facets of are of type { p_1, …, p_n-2}. In particular, if is an extension of , and has a well-defined Schläfli symbol, all but the last entry of this symbol are determined by .
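As a concrete illustration of flags, adjacency and the monodromy generators r_0, r_1 (not needed for the sequel), the following Python sketch using sympy encodes the flags of a p-gon as incident vertex-edge pairs; the encoding is our own illustrative choice:

    from sympy.combinatorics import Permutation, PermutationGroup

    p = 5                                     # a pentagon, for concreteness
    flags = [(v, e) for e in range(p) for v in (e, (e + 1) % p)]
    idx = {f: i for i, f in enumerate(flags)}

    def adj(i_rank):
        img = []
        for (v, e) in flags:
            if i_rank == 0:                   # change the vertex, keep the edge
                w = (e + 1) % p if v == e else e
                img.append(idx[(w, e)])
            else:                             # change the edge, keep the vertex
                f = (e - 1) % p if v == e else (e + 1) % p
                img.append(idx[(v, f)])
        return Permutation(img)

    r0, r1 = adj(0), adj(1)
    M = PermutationGroup([r0, r1])
    print(M.order())                          # 2*p = 10
    print((r0 * r1).order())                  # p = 5

The monodromy group is the dihedral group of order 2p acting regularly on the 2p flags, and r_0r_1 is the generator s_1 of the even monodromy group, of order p.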
In particular, if is an extension of , and has a well-defined Schläfli symbol, all but the last entry of this symbol are determined by . An automorphism of an abstract polytope is an order-preserving bijection γ: →. The group of automorphisms of is denoted by (). The group () acts naturally on (). We will consider this to be a right action so that (r_iΨ) γ = r_i( Ψγ ) for every flag Ψ, r_i ∈() and γ∈(). That is, we may think of () as the group of all flag permutations that commute with the action of (). As a consequence of the strong-flag-connectivity, the action of () on () is free, although it may not be transitive. Let = { F_-1, …, F_n} be a base flag of such that (F_i) = i. Let Γ≤() and for I ⊂{ 0, …, n-1 } let Γ_I denote the set-wise stabiliser of the chain { F_i : i ∉I }⊂. Note that for every pair of subsets I, J ⊂{ 0, …, n-1 } we have Γ_I∩Γ_J = Γ_I ∩ J We call this condition the intersection property for Γ. An abstract polytope is regular if the action of () on () is transitive (hence, regular). Traditionally, among abstract polytopes the regular ones have been studied most frequently. Extensive theory can be found in <cit.>. A rooted polytope is a pair (, ) where is a polytope and is a fixed base flag. If is regular, every two flags are equivalent under () and the choice of a particular base flag plays no relevant role. However, if is not regular, then the choice of the base flag is important. See <cit.> for a discussion on rooted k-orbit polytopes. If is a regular rooted polytope, then for every i ∈{ 0, …, n-1 } there exists an automorphism ρ_i such that ρ_i = ^i. We call the automorphisms ρ_0, …, ρ_n-1 the abstract reflections (with respect to the base flag ). It is easy to see that if is a regular n-polytope, then () = ⟨ρ_0, …, ρ_n-1⟩. It is important to remark that the group elements depend on . However, since () is transitive on flags, a change in the base flag corresponds to applying an inner group-automorphism to the generators of (). More precisely, let Φ and Ψ be flags of a regular n-polytope and let ρ_0, …, ρ_n-1 and ρ'_0, …, ρ'_n-1 denote the abstract reflections with respect to Φ and Ψ respectively. If γ∈() is such that Φγ = Ψ, then ρ'_i = γ^-1ρ_iγ. Note that every regular polytope has a well-defined Schläfli symbol. If is a regular n-polytope of type { p_1, …, p_n-1}, then the abstract reflections satisfy ρ_i^2 = 𝕀 for i ∈{ 0, …, n-1 } , ( ρ_iρ_j)^2 = 𝕀 if |i-j| ≥ 2, ( ρ_i-1ρ_i)^p_i = 𝕀 for i ∈{ 1, …, n-1 }. If = { F_-1, …, F_n}, the stabiliser of the chain { F_i : i ∉I } is the group ⟨ρ_i : i ∈ I ⟩. It follows that for regular polytopes the intersection property in Equation for () itself is equivalent to ⟨ρ_i : i ∈ I ⟩∩⟨ρ_j: j ∈ J ⟩ = ⟨ρ_k : k ∈ I ∩ J ⟩ for every pair of sets I, J ⊂{ 0, …, n-1 }. A string C-group is a group ⟨ρ_0, …, ρ_n-1⟩ satisfying Equations and. Clearly, the automorphism group of an abstract polytope is a string C-group. One of the most remarkable facts in the theory of highly symmetric polytopes is the correspondence between string C-groups and abstract regular polytopes. To be precise, for every string C-group Γ, there exists an abstract regular polytope = (Γ) such that () = Γ. The abovementioned correspondence has been used to build families of abstract regular polytopes with prescribed properties. For instance, some universal constructions are explored in <cit.> and <cit.>. In another direction, in <cit.> some abstract regular polytopes with prescribed (interesting) groups are investigated. 
Of particular interest for this paper is the work in <cit.>, where the problem of finding regular extensions of regular polytopes is addressed. §.§ Rotary polytopes Abstract regular polytopes are those with a maximal degree of reflectional symmetry. A slightly weaker symmetry condition than regularity for an abstract polytope is to admit all possible rotational symmetries. In a similar way as has been done for maps (see <cit.>, for example), we call these polytopes rotary polytopes. In this section, we review some of the theory of rotary polytopes. Most of this theory is developed in <cit.>. An abstract polytope is orientable if ^+() = ⟨ r_i-1r_i : 1 ≤ i ≤ n-1 ⟩ has index 2 in (); otherwise, is non-orientable. If is a rooted orientable polytope, we may bicolour the flags of by defining as white and recursively colouring any other flag Ψ black (resp. white) if and only if it is adjacent to a white (resp. black) flag. We denote the set of white flags by (). This is just another way of naming what in <cit.> are called even and odd flags. In fact, the set of white flags corresponds to the orbit of the base Φ_0 under the action of the even monodromy group ^+(). For convenience, if is non-orientable, we set () = (). In other words, every flag is white. If is an abstract polytope, the rotational group () of is the subgroup of () that permutes the set of white flags. A polytope is rotary if () acts transitively on the set of white flags. It is clear that a rotary non-orientable polytope is a regular polytope. Therefore, we restrict our discussion below to orientable polytopes. Note that the choice of the base flag of plays a stronger role now. In particular, it defines the set of white flags. In the discussion below it is assumed that is actually a rooted polytope . If is a polytope, for every i ∈{ 1, …, n-1 } the flag ^i,i-1 is a white flag. Therefore, if is a rotary polytope, there exists an automorphism σ_i such that σ_i = ^i, i-1. The automorphisms σ_1, …, σ_n-1 are called the abstract rotations with respect to . It is easy to see that () = ⟨σ_1, …, σ_n-1⟩. We emphasise that the abstract rotations depend on the choice of the base flag. If is a rotary n-polytope, then has a well-defined Schläfli symbol. If is of type { p_1, …, p_n-1} the automorphisms σ_1, …, σ_n-1 satisfy σ_i^p_i = 𝕀, and (σ_iσ_i+1⋯σ_j)^2 = 𝕀 for 1 ≤ i < j ≤ n-1. Sometimes it is useful to consider an alternative set of generators for (). For i,j ∈{0, …, n-1} with i <j we define the automorphisms τ_i,j = σ_i+1⋯σ_j. Note that this is a small change with respect to the notation of <cit.>. What they call τ_i,j for us is τ_i-1,j. Observe that τ_i-1,i = σ_i for i ∈{1, …, n-1}. It is also convenient to define τ_j,i = τ_i,j^-1 for i < j and τ_-1,j= τ_i,n = τ_i,i = 𝕀 for every i,j ∈{0, …, n-1}. In particular, we have that ⟨τ_i,j : i,j ∈{0, …, n-1}⟩ = ⟨σ_1, …, σ_n-1⟩. We also have τ_i,j= ^j,i. Moreover, if = { F_-1, …, F_n}, and I ⊂{ 0, …, n-1 }, the stabiliser of the chain { F_i : i ∉I } is the group ⟨τ_i,j : i,j ∈ I ⟩. It follows that the intersection property in Equation for () can be written as ⟨τ_i,j : i,j ∈ I ⟩∩⟨τ_i,j : i,j ∈ J ⟩ = ⟨τ_i,j : i,j ∈ I ∩ J ⟩. If is a regular polytope with automorphism group () = ⟨ρ_0, …, ρ_n-1⟩, then is rotary with () = ⟨σ_1, …, σ_n-1⟩ where σ_i = ρ_i-1ρ_i. If is also orientable, we say that is orientably regular. In this situation () is a proper subgroup of () of index 2. Furthermore, () induces two flag-orbits, namely, the white flags and the black flags.
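Continuing the tetrahedron example (the permutations are as in the previous sketch), the abstract rotations σ_i=ρ_{i-1}ρ_i satisfy the rotational relations and generate a subgroup of index 2:

    from sympy.combinatorics import Permutation, PermutationGroup

    r0 = Permutation([1, 0, 2, 3])           # the reflections of the tetrahedron, as above
    r1 = Permutation([0, 2, 1, 3])
    r2 = Permutation([0, 1, 3, 2])
    s1, s2 = r0 * r1, r1 * r2                # sigma_i = rho_{i-1} rho_i

    assert s1.order() == 3 and s2.order() == 3   # p_1 = p_2 = 3
    assert (s1 * s2).order() == 2                # (sigma_1 sigma_2)^2 = identity
    Rot = PermutationGroup([s1, s2])
    print(Rot.order())                           # 12 = |A_4|, index 2 in S_4

The rotation group is the alternating group A_4, of index 2 in the full automorphism group S_4, so the tetrahedron is orientably regular.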
If 𝒫 is rotary but not regular, then Γ⁺(𝒫) = Γ(𝒫) and this group induces precisely two orbits on flags, in such a way that adjacent flags belong to different orbits. In this case we say that 𝒫 is chiral. Chiral polytopes were introduced by Schulte and Weiss in <cit.> as a combinatorial generalisation of Coxeter's twisted honeycombs in <cit.>. If (𝒫, Φ) is a rooted chiral polytope, the enantiomorphic form of 𝒫, denoted by 𝒫̄, is the rooted polytope (𝒫, Φ^0). In the classic development of the theory of chiral polytopes, the enantiomorphic form of 𝒫 is usually thought of as the mirror image of 𝒫, thus as a polytope which is different from, but isomorphic to, 𝒫. However, when treated as rooted polytopes, it is clear that the only difference is the choice of the base flag. The underlying partially ordered set is exactly the same. For a traditional but detailed discussion about enantiomorphic forms of chiral polytopes we suggest <cit.>. The automorphism group of 𝒫̄ is generated by the automorphisms σ'_1, …, σ'_n-1 where (Φ^0)σ'_i = (Φ^0)^i,i-1. It is easy to verify that σ'_1 = σ_1^-1, σ'_2 = σ_1^2σ_2 and, for i ≥ 3, σ'_i = σ_i. If 𝒫 is orientably regular, conjugation by ρ_0 defines a group automorphism ρ: Γ⁺(𝒫) → Γ⁺(𝒫) that maps σ_i to σ'_i. A group that satisfies the relations in Equation<ref> together with the intersection property in Equation<ref> must be the rotation group of a rotary polytope. Moreover, the existence of the group-automorphism ρ: Γ⁺(𝒫) → Γ⁺(𝒫) mentioned above determines whether or not the rotary polytope is regular. More precisely, the following result holds. Let 3 ≤ n, 2 < p_1, …, p_n-1 ≤ ∞ and let Γ be a group such that Γ = ⟨σ_1, …, σ_n-1⟩. For every i,j ∈ { -1, …, n } with i ≠ j define τ_i,j = 𝕀 if i < j and i = -1 or j = n; τ_i,j = σ_i+1⋯σ_j if 0 ≤ i < j ≤ n-1; and τ_i,j = σ_i^-1⋯σ_j+1^-1 if 0 ≤ j < i ≤ n-1. Assume that Γ satisfies the relations in Equation<ref>. Assume also that Equation<ref> holds. Then * There exists a rotary polytope 𝒫 = 𝒫(Γ) such that Γ⁺(𝒫) = Γ and σ_1, …, σ_n-1 act as abstract rotations for some flag of 𝒫. * 𝒫 is of type { p_1, …, p_n-1 }. The facets and vertex-figures of 𝒫 are isomorphic to 𝒫(⟨σ_1, …, σ_n-2⟩) and 𝒫(⟨σ_2, …, σ_n-1⟩), respectively. In general, if n ≥ 4, F is a (k-2)-face and G is an incident (l+1)-face, for 1 ≤ k < l ≤ n-1, the section G/F is a rotary (l-k+2)-polytope isomorphic to 𝒫(⟨σ_k, …, σ_l⟩). * 𝒫 is orientably regular if and only if there exists an involutory group automorphism ρ: Γ → Γ such that ρ: σ_1 ↦ σ_1^-1, ρ: σ_2 ↦ σ_1^2σ_2 and ρ: σ_i ↦ σ_i for i ≥ 3. §.§ Extensions of rotary polytopes Recall that if 𝒫 is a rotary n-polytope, then its facets must be rotary (chiral or orientably regular) but its (n-2)-faces must be orientably regular (see <cit.>). It follows that if 𝒫 is a chiral extension of a polytope 𝒦, then 𝒦 is either orientably regular or chiral with regular facets. We shall carry these assumptions throughout this section. A natural approach to build extensions of rotary polytopes is to use Part<ref> of Theorem<ref>. More precisely, let 𝒦 be a rotary n-polytope with Γ⁺(𝒦) = ⟨σ_1, …, σ_n-1⟩. Assume that Γ = ⟨σ̅_1, …, σ̅_n-1, σ̅_n⟩ is a group satisfying Equation<ref> (for rank n+1) and such that the mapping σ_i ↦ σ̅_i (for 1 ≤ i ≤ n-1) is an embedding of Γ⁺(𝒦) into Γ. In order to build a polytope from Γ we need to prove that it has the intersection property (Equation<ref>). In this situation the polytope 𝒫 := 𝒫(Γ) obtained from Γ is rotary. If 𝒦 is chiral, then 𝒫 must be chiral; however, if 𝒦 is regular then 𝒫(Γ) might be orientably regular. To prove that 𝒫 is chiral we need to show that Γ does not admit a group automorphism as the one described in Part<ref> of Theorem<ref>.
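The effect of passing to the enantiomorphic generators can also be checked on a concrete faithful representation. Below (an illustration of ours; the particular permutations are one of several possible encodings) the rotation group of the cube acts on its four long diagonals, and the twisted generators are seen to satisfy the same defining relations, as expected for a regular polytope:

```python
from sympy.combinatorics import Permutation

# Rotation group of the cube acting on its 4 long diagonals (labels 0..3).
s1 = Permutation([1, 2, 3, 0])        # sigma_1: order 4
s2 = Permutation([3, 0, 2, 1])        # sigma_2: order 3
assert (s1*s2).order() == 2           # (sigma_1 sigma_2)^2 = identity

# Enantiomorphic generators sigma_1' = sigma_1^{-1}, sigma_2' = sigma_1^2 sigma_2.
t1, t2 = s1**-1, s1*s1*s2
assert t1.order() == 4 and t2.order() == 3 and (t1*t2).order() == 2
print("twisted generators satisfy the same relations")
```

For a chiral polytope no such relation-preserving twist extends to a group automorphism; this is exactly the criterion in the theorem that follows.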
In this section we develop a series of small results that will prove to be useful in Section<ref>, when we establish our main results and build chiral extensions of regular toroids. Most of these results are either straightforward or can be found in the literature, hence we omit the proofs and rather provide the appropriate references. Moreover, instead of assuming that the group Γ⁺(𝒦) is embedded into a group Γ, we shall abuse notation and simply denote by σ_1, …, σ_n the set of distinguished generators of Γ and think of Γ⁺(𝒦) as the subgroup Γ_n = ⟨σ_1, …, σ_n-1⟩. We begin with a result that guarantees that the group Γ has the appropriate relations. Let Γ = ⟨σ_1, …, σ_n⟩ be a group with the property that the subgroup Γ_n = ⟨σ_1, …, σ_n-1⟩ satisfies Equation<ref>. If the group elements σ_1, …, σ_n satisfy the equations σ_n^p_n = 𝕀 for some p_n ≥ 3, and (σ_i⋯σ_n)^2 = 𝕀 for 1 ≤ i ≤ n-1, then Γ itself satisfies Equation<ref>. There is a similar result regarding the intersection property. We give it here without proof. This is <cit.>. Let n ≥ 3 and Γ = ⟨σ_1, …, σ_n⟩ be a group which satisfies Equation<ref>. Assume that the subgroup Γ_n-1 = ⟨σ_1, …, σ_n-1⟩ has the intersection property in Equation<ref> with respect to its generators. Also, suppose that the following intersection conditions hold: ⟨σ_1, …, σ_n-1⟩ ∩ ⟨σ_j, …, σ_n⟩ = ⟨σ_j, …, σ_n-1⟩ for j ∈ { 2, …, n }. Then Γ itself has the intersection property of Equation<ref>. Sometimes it is useful to consider a different set of generators for Γ. For i ∈ { 1, …, n } define τ_i = σ_1⋯σ_i. Assume for a moment that Γ = Γ⁺(𝒫) for a rotary polytope 𝒫 and that Φ̂_0 denotes the base flag of 𝒫; then Φ̂_0τ_i = Φ̂_0^i,0. We also have σ_1 = τ_1 and, for 2 ≤ i ≤ n-1, σ_i = τ_i-1^-1τ_i. It follows that Γ_n = ⟨τ_1, …, τ_n-1⟩. The following result is essentially the same as Lemma<ref> but in terms of the generators τ_1, …, τ_n. Let Γ = ⟨σ_1, …, σ_n⟩ be a group with the property that the subgroup Γ_n = ⟨σ_1, …, σ_n-1⟩ satisfies Equation<ref>. For i ∈ { 1, …, n }, let τ_i = σ_1⋯σ_i. Then the set of relations of Equation<ref> is equivalent to the set of relations (τ_n-1^-1τ_n)^p_n = 𝕀, τ_n^2 = 𝕀, and (τ_i^-1τ_n)^2 = 𝕀 for i ∈ { 1, …, n-2 }. Finally, in order to determine whether a group Γ = ⟨τ_1, …, τ_n⟩ satisfying Corollary<ref> and Lemma<ref> is the automorphism group of a chiral polytope (and not the rotation subgroup of an orientably regular polytope), we need to show that there is no group automorphism α: Γ → Γ such as the one described in Part<ref> of Theorem<ref>. The properties of this automorphism are described, in terms of τ_1, …, τ_n, in the following result. Let Γ = ⟨σ_1, …, σ_n⟩ be a group satisfying Equation<ref>. Assume that there is a group automorphism α: Γ → Γ satisfying the conditions of Part<ref> of Theorem<ref>, that is, α: σ_1 ↦ σ_1^-1, α: σ_2 ↦ σ_1^2σ_2 while fixing σ_i for i ≥ 3. If τ_i = σ_1⋯σ_i, then α satisfies α(τ_1) = τ_1^-1 and α(τ_i) = τ_i if i ≥ 2. Conversely, a group automorphism α: Γ → Γ satisfying Equation<ref> also satisfies the conditions of Part<ref> of Theorem<ref>. § MONODROMY GROUP OF ROTARY POLYTOPES Let 𝒫 be an abstract polytope and consider its monodromy group Mon(𝒫). It is well known that if 𝒫 is a regular polytope then Mon(𝒫) ≅ Γ(𝒫), with the isomorphism mapping r_i to ρ_i (see <cit.>). If 𝒫 is a rotary polytope, then the action of Mon(𝒫) and the action of Γ⁺(𝒫) have an interesting relationship. This relationship is described in the following proposition (see <cit.>). Let 𝒫 be a rotary n-polytope with base flag Φ.
Let Γ⁺(𝒫) = ⟨σ_1, …, σ_n-1⟩ and Mon(𝒫) = ⟨ r_0, …, r_n-1 ⟩ be the rotation and monodromy groups of 𝒫, respectively, and consider the even monodromy group Mon⁺(𝒫) = ⟨ s_1, …, s_n-1 ⟩ of 𝒫, where s_i = r_i-1r_i. Then the following hold: * For every i_1, …, i_k ∈ { 1, …, n-1 }, (s_i_1⋯s_i_k)Φ = Φ(σ_i_1⋯σ_i_k). * An element s_i_1⋯s_i_k ∈ Mon⁺(𝒫) fixes the base flag Φ if and only if σ_i_1⋯σ_i_k = 𝕀. In this situation, s_i_1⋯s_i_k stabilises every white flag. * If 𝒲(𝒫) denotes the set of white flags of 𝒫, then there is an isomorphism f: Mon⁺_𝒲(𝒫) → Γ⁺(𝒫), where Mon⁺_𝒲(𝒫) = ⟨s̅_1, …, s̅_n-1⟩ and s̅_i denotes the permutation of 𝒲(𝒫) induced by s_i. This isomorphism maps s̅_i to σ_i for every i ∈ { 1, …, n-1 }. Note that, unlike the regular case, where there is an isomorphism from Mon(𝒫) to Γ(𝒫) mapping r_i to ρ_i, if 𝒫 is chiral the mapping g: Mon⁺(𝒫) → Γ⁺(𝒫) defined by g: s_i ↦ σ_i is not an isomorphism. By Part<ref> of Proposition<ref>, this mapping is a well-defined epimorphism, but in general it is not injective. In other words, there are non-trivial elements of Mon⁺(𝒫) that fix every white flag of 𝒫. The subgroup of Mon⁺(𝒫) containing all such elements (i.e. the kernel of g) is called the chirality group. For some uses and properties of the chirality group see <cit.> and <cit.>. To avoid confusion, we will try to avoid the use of Mon⁺(𝒫) and instead use the group Mon⁺_𝒲(𝒫), which is the permutation group on 𝒲(𝒫) induced by the action of Mon⁺(𝒫). Since we will only use the action of Mon⁺(𝒫) on white flags, it is safe to abuse notation and identify s_i = r_i-1r_i ∈ Mon⁺(𝒫) with s̅_i ∈ Mon⁺_𝒲(𝒫), the permutation induced by s_i on the set 𝒲(𝒫). § GEOMETRIC CUBIC (N+1)-TOROIDS Now we turn our attention to cubic toroids. Most of what is discussed in this section can be found in the literature. We shall use <cit.> as our main reference, but <cit.> are also relevant. Throughout this section, 𝒰 will denote the cubic honeycomb of the Euclidean space 𝔼^n, of type { 4, 3^n-2, 4 }, and T(𝒰) its translation group. A cubic (n+1)-toroid is the quotient 𝒰/Λ̄ of 𝒰 by a lattice group Λ̄, that is, a subgroup Λ̄ ≤ T(𝒰) generated by n linearly independent translations. If Λ̄ = ⟨ t_1, …, t_n ⟩ and v_i is the translation vector associated to t_i, then { v_i : 1 ≤ i ≤ n } is a basis for the associated lattice Λ. The lattice Λ associated with Λ̄ is the orbit of the origin o of 𝔼^n. That is, Λ = oΛ̄ = { m_1v_1 + ⋯ + m_nv_n : m_1, …, m_n ∈ ℤ }. Clearly the basis { v_1, …, v_n } determines the lattice and the lattice group itself. Geometrically, we can identify the toroid with the corresponding tessellation of the torus 𝔼^n/Λ̄. The (open) Dirichlet domain (centred at o) associated with Λ̄ is the set D(Λ̄) = { x ∈ 𝔼^n : d(o,x) < d(ot, x) for all t ∈ Λ̄∖{𝕀} }. We can geometrically realise 𝔼^n/Λ̄ by identifying the points of the closure of D(Λ̄) that are equivalent modulo Λ̄. Observe that by definition no two points in D(Λ̄) are equivalent under Λ̄, and such identifications happen in the boundary of D(Λ̄). Now we turn our attention to the symmetries of 𝒰. The group of (geometric) symmetries of 𝒰 is generated by the reflections R_i, i ∈ { 0, …, n }, in hyperplanes of 𝔼^n, and these play the role of the abstract reflections in Equation<ref>. Moreover, the symmetry group G(𝒰) coincides with Γ(𝒰), which is isomorphic to the affine string Coxeter group [4, 3^n-2, 4] (often denoted C̃_n). That is, the generating reflections satisfy the defining relations: R_i^2 = 𝕀 for i ∈ { 0, …, n }, (R_iR_j)^2 = 𝕀 if |i-j| ≥ 2, (R_0R_1)^4 = (R_n-1R_n)^4 = 𝕀, (R_iR_i+1)^3 = 𝕀 for i ∈ { 1, …, n-2 }. Up to similarity, we may assume that the vertex set of 𝒰 is the set (1/2, …, 1/2) + ℤ^n, so that the centres of the facets of 𝒰 are precisely the points of integer coordinates.
In this situation we may give explicit definitions for the reflections R_i (see <cit.>) so that the group G_o() = ⟨ R_0, …, R_n-1⟩ preserves the origin o (acting as the stabiliser of the corresponding facet). The group G^n_o()=⟨ R_1, …, R_n-1⟩ acts as the symmetric group S_n permuting the coordinate axes. On the other hand, the group H=⟨ R_0^G_o()⟩ generated by the conjugates of R_0 under G_o() is the group generated by the n reflections on the coordinate hyperplanes (isomorphic to C_2^n). In fact, it can be seen that the group G_o() is isomorphic to G^n_o() ⋉ H ≅ S_n⋉ C_2^n (with S_n acting on C_2^n by permutation of coordinates). The product T_1= R R_n, with R ∈ G_o() a suitable conjugate of R_0 such that the hyperplanes of R_n and R are parallel, is a translation in () with respect to the vector e_1. The conjugates of T_1 under G_o() are precisely the translations with respect to the vectors e_i, i ∈{ 1, …, n }. The group generated by those translations acts regularly on the facets of , which implies that () = ⟨ T_i : i ∈{ 1, …, n }⟩≅^n (with T_i the translation with respect to e_i). In fact, we have that G() ≅() ≅ G_o() ⋉() ≅ (S_n⋉ C_2^n) ⋉^n, where for t ∈^n, an element σ∈ S_n acts on t by permuting coordinates while ∈ C_2^n acts on t by multiplying by -1 the entries of t on the coordinates where is non-trivial. Now we are interested in which automorphisms of induce automorphisms of . Those automorphisms are discussed in detail in <cit.> and <cit.>. The results obtained there are summarized in the following lemma. With the notation given above, the following statements hold. * A symmetry S ∈ G() projects to an automorphism of if and only if S normalises . This is always the case if S ∈(). If S∈ G_o(), then S ∈() if and only if S =. * Since all lattices are centrally symmetric, :x ↦ -x always projects to an automorphism of . * The automorphism group () of is isomorphic to (K' ⋉() ) / ≅ K' ⋉ (() / ) where K'={S ∈ G_o() : S^-1 S = } ={S ∈ G_o() S = }. In particular ⟨⟩≤ K' ≤ G_o(). The group () has k orbits on the set of flags of if and only if the index of K' in G_o() is k. * The toroids and ' are isomorphic if and only if and ' are conjugate in G(). This in turn is true if and only if there exists S ∈ G_o() such that S = '. An immediate consequence of Lemma<ref> is that is regular if and only if is preserved by G_o(). Those lattices are classified in <cit.>. Such a lattice must be an integer multiple of one the following: * : the lattice associated with ^n with basis { e_1, …, e_n}. * : the index-2 sublattice of consisting of the points whose coordinate sum is even. The set { e_1 + e_2, e_2- e_1, …, e_n-e_n-1} is a basis for . If n=3 this lattice is often called the face-centred cubic lattice. * : the index-2^n-1 sublattice of consisting of the points whose coordinates have the same parity. The set { 2e_1, …, 2e_n-1, e_1+⋯+e_n} is basis for . If n=3 this is often called the body-centred lattice. Naturally, the corresponding lattice groups are denoted by , and . Notice that for n=2 we have =. In short, we have the following theorem. Let n ≥ 2 and let = be a regular cubic (n+1)-toroid. Then ≅/_ where = (a^k, 0^n-k) for some a ∈ and k ∈{ 1,2,n }. In every case () ≅ (S_n⋉ C_2^n) ⋉^n/_. § COMBINATORIAL TOROIDS In order to give the appropriate notation for this section we are going to study the structure of the regular toroid . We label the flags of as follows. Recall that () ≅ (S_n⋉) ⋉ ( ^n/_) for some lattice group _ (see Theorem<ref>). Let us denote by () group ^n/_ of translations of . 
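Since the labelling introduced below identifies each flag with a triple whose first two entries range over S_n ⋉ C_2^n, it may help to see this point group concretely first. The following sketch (our own encoding, with n = 3 chosen arbitrarily) realises it as signed permutations of the coordinate axes and closes under three generators:

```python
n = 3

def mul(p, q):
    # Signed permutations: p[i] = +-(j+1) means p sends coordinate i+1 to
    # coordinate j+1 with that sign; mul applies p first, then q.
    return tuple(q[abs(v) - 1] * (1 if v > 0 else -1) for v in p)

gens = [(2, 1, 3), (1, 3, 2), (-1, 2, 3)]   # two transpositions, one sign flip
group = {tuple(range(1, n + 1))}
frontier = list(group)
while frontier:
    new = {mul(g, s) for g in frontier for s in gens}
    frontier = [x for x in new if x not in group]
    group.update(frontier)
print(len(group))                            # 48 = 2^3 * 3!, the order of S_3 acting on C_2^3
```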
Let γ∈() and let σ∈ S_n, ∈ and t ∈ () such that γ = σ·· t. If Φ_0 is the base flag of , label Φ_0γ with the triple (σ, , t). In particular, Φ_0 is labelled with ((1), 1, 0) where 1 is the vector of with all its entries equal to 1. Observe that we are just identifying every flag of with an element of its automorphism group. This identification is well defined, since () acts freely and transitively on flags. The combinatorial aspects of the triplets will be insightful in how the geometry and the combinatorics connect, as we shall explain below. First observe that () acts on the facets of by translation. It follows that two flags (σ, , t) and (τ, , u) belong to the same facet of if an only if u=t. This allows us to identify the facets of with the elements of (). The first two coordinates of the label of a flag also have a combinatorial interpretation. We describe this interpretation only on the base facet F_0. Recall that F_0 is an n-cube. Label the vertices of the cube with the elements of in such a way that if k ∈{0, …, n}, each k-face F(,I) of the cube can be described by its vertex set as F(, I) ={∈ : y_j = x_j if j ∉I}, where = (x_1, …, x_n), = (y_1, …, y_n), I ⊂{1, …, n}, |I|=k. With this identification, two faces F(, I) and G(, J) with |I| ≤ |J| are incident if and only if F(,I) ⊂ G(,J). In other words, we are just identifying the n-dimensional cube with the polytope 2^ where is the (n-1)-simplex (see <cit.>). Observe that a flag of the cube containing a vertex now has the form {∅, F(, I_0), F(, I_1), … , F(, I_n)}, where |I_j|= j for j ∈{0, …, n} and I_0⊂ I_1⊂⋯⊂ I_n. Thus I_0= ∅ and I_n= { 1 …, n }. Notice that for every j ∈{1, …, n}, the set I_j I_j-1 has exactly one element i_j. The family of sets {I_j : 0 ≤ j ≤ n} defines a permutation σ∈ S_n such that σ: j ↦ i_j. Conversely, a permutation σ∈ S_n determines a family {I_j : 0 ≤ j ≤ n} of nested sets such that I_j I_j-1 = j σ. Therefore, a permutation σ and an element ∈ uniquely determine a flag of the base cube. The label associated to this flag is precisely ( σ,, 0). Informally speaking, if (σ, , 0) is the label of a flag Φ, then describes the relative position of the vertex of Φ on the cube F_0. The permutation σ defines the “direction” of the faces relative to in the following sense. To get the other vertex of the edge in Φ we have to “move” (change the sign of ) in direction 1 σ; to get the four vertices of the 2-face of Φ we have to “allow movement” in the directions 1 σ and 2 σ; etc. See Figure<ref> for an example on dimension two. We next introduce some notation. Given = (x_1, …, x_n) ∈ and a subset I={i_1,…, i_k} of {1, …, n} we denote by ^(i_1, …, i_k) the vector (x'_1, …, x'_n) ∈ such that x_i = x'_i if and only if i ∉{i_1, …, i_k}. Note that ^(i_1, …, i_k) = _I where _I is the vector in whose i^th-entry is 1 if and only if i ∉I. For a permutation σ∈ S_n and a vector = (x_1, …, x_n) ∈, we denote by σ the vector resulting after permuting the coordinates of according to σ, this is, σ = (x_1 σ^-1, x_2 σ^-1, …, x_n σ^-1 ). Similarly, if t= (t_1, …, t_n) ∈ (), then σ t denotes the vector (t_1 σ^-1, t_2 σ^-1, …, t_n σ^-1 ), and if =(y_1, …, y_n) ∈, then t =(y_1t_1, …, y_n t_n). Finally, observe that all these operations can be understood as left actions of the corresponding groups. With the notation just introduced it is easy to describe the action of () on the set of flags. 
This is given by r_i(σ, , t) = ( σ, ^(1σ), t) if i=0, ((i i+1) σ, , t) if 1 ≤ i ≤ n-1, ( σ, ^(n σ), t-x_nσe_nσ) if i=n, where = (x_1, …, x_n) and e_k denotes the vector (0^k-1, 1, 0^n-k) ∈ (). The validity of Equation can be derived from the combinatorial interpretation of the labelling discussed above. For instance, it is clear that r_0 must fix the first and third coordinates in Φ = (σ, , t) since the faces of rank i≥ 1 of Φ and r_0 Φ are the same, but the vertex of r_0 Φ is a vertex that result from moving in the direction of the base edge (1 σ). Alternatively, we can also define the (n+1)-maniplex (in the sense of <cit.>) whose flags are the set S_n × C_2^n×() and whose monodromy group is given by Equation and prove that ≅. We leave the details to the reader. Assume now that γ∈() = (S_n⋉) ⋉ () and let γ = τ·· u, τ∈ S_n, ∈, u ∈(). Let Φ = (σ, , t) be a flag. Observe that τ^-1τ∈ and that τ^-1τ is the automorphism given by the vector τ. Similarly, τ^-1 t τ is the automorphism given by the vector τ t ∈ (). For ∈, the automorphism ^-1 t is given by the vector t ∈ (). The previous discussion implies that the action of () is given by (σ, , t) τ = (στ, τ, τ t), (σ, , t) = (σ, , t), (σ, , t) u = (σ, , t + u), or equivalently (σ, , t)γ = (στ , (τ),(τ t) + u ). As in Section<ref>, we say that a flag Ψ of is white if Ψ and the base flag Φ_0 belong to the same orbit under (). If Φ is not white, then we say that Φ is a black flag. If σ∈ S_n, then (σ)∈{1,-1} and is equal to 1 if and only if σ is an even permutation. If ∈, let () = (-1)^k where k denotes the number of entries of equal to -1. Observe that a flag (σ, , t) of is white if and only if (σ)() = 1. Let us now prove some structural results related to the labelling presented above that will be of use in Section<ref>. Let X_0 be the base vertex of . Let Φ = (σ, , t) ∈() with =(x_1, …, x_n) and t= (t_1, …, t_n). Then Φ contains X_0 if and only if t_i ∈{0,-1} and t_i = -1 if and only if x_i = -1. Assume first that Φ is a flag containing the vertex X_0. Let Φ_0 = ( (1), 1,0) be the base flag of . Since Φ and Φ_0 share the vertex, then there exists w ∈⟨ r_1, …, r_n ⟩ such that Φ= w Φ_0. Let v_0, …, v_d ∈⟨ r_1, …, r_n-1⟩ be such that w= v_d r_n v_d-1⋯ v_1 r_n v_0. Observe that v_i (for 1 ≤ i ≤ d ) preserves the second and third entries of the label of every flag. Meanwhile r_n changes both the second and third entries, and it changes them on the same coordinate. Since both such entries of the label of Φ_0 are trivial, an inductive argument on d shows that t_i ∈{0, -1} and t_i = -1 if and only if x_i = -1. An immediate consequence of Lemma<ref> is the following result. Let Φ be the flag (σ, , t), with =(x_1, …, x_n) and t= (t_1, …, t_n). Let X be a vertex of and u = (u_1, …, u_n) ∈() be the (unique) translation of that maps the base vertex X_0 to X. Then Φ contains the vertex X if and only if t_i-u_i ∈{0, -1} and t_i -u_i = -1 if and only if x_i=-1. In particular, the facets of that contain the vertex X are those whose associated vector is in the set {(u_1 + ϵ_1, …, u_n + ϵ_n) : ϵ_1, …ϵ_n ∈{0,-1}}. § CHIRAL EXTENSIONS OF REGULAR TOROIDS In this section we develop the construction of (the automorphism group of) a chiral extension of a regular toroid satisfying certain properties (described below). As a consequence we prove Theorem<ref>. The construction of a chiral extension of a toroid relies on certain properties of a particular monodromy element . We describe such properties in the following paragraphs. Let ((1), 1,0) be the base flag of . 
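Before developing the construction, here is a computational sanity check (our own encoding in Python, not code from the paper) of the monodromy action just given. Flags of the toroid of type {4,4}_(a,0) are encoded as triples (σ, x, t), where σ is a tuple with σ[j-1] = jσ, x ∈ {1,-1}^n and t ∈ (ℤ_a)^n; we verify that each r_i is an involution and that (r_0r_1)^4 is the identity, as the type predicts. The choice n = 2, a = 5 is arbitrary.

```python
from itertools import product, permutations

n, a = 2, 5

def flip(x, k):                        # x^(k): change the sign of coordinate k
    return tuple(-v if j == k else v for j, v in enumerate(x, start=1))

def r(i, flag):
    sigma, x, t = flag
    if i == 0:                         # r_0: flip coordinate 1 sigma
        return (sigma, flip(x, sigma[0]), t)
    if i < n:                          # r_i: precompose sigma with (i i+1)
        s = list(sigma); s[i-1], s[i] = s[i], s[i-1]
        return (tuple(s), x, t)
    k = sigma[n-1]                     # r_n: flip coordinate n sigma and translate
    t = tuple((v - x[k-1]) % a if j == k else v
              for j, v in enumerate(t, start=1))
    return (sigma, flip(x, k), t)

flags = [(s, x, t) for s in permutations(range(1, n + 1))
                   for x in product((1, -1), repeat=n)
                   for t in product(range(a), repeat=n)]

assert all(r(i, r(i, f)) == f for f in flags for i in range(n + 1))
g = lambda f: r(0, r(1, f))            # the monodromy r_0 r_1 acting on the left
assert all(g(g(g(g(f)))) == f for f in flags)
print("monodromy relations verified on", len(flags), "flags")
```

In this encoding the base flag ((1), 1, 0) is the triple ((1, 2), (1, 1), (0, 0)).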
Let v_0 be the vector (1, 2, …, n) ∈ T(𝒯). Let z be the (unique) element of Mon⁺_𝒲(𝒯) that satisfies z((1), 1, 0) = ((1), 1, v_0). For a permutation σ ∈ S_n and an element x ∈ C_2^n, let v(σ, x) denote the vector xσv_0. This is the vector obtained from v_0 after permuting its entries according to σ and then changing signs according to x. It follows from Equation<ref> that if (σ, x, t) is a flag, then z(σ, x, t) = (σ, x, t + v(σ, x)). (For instance, if n = 2, σ is the transposition (1 2) and x = (-1, 1), then σv_0 = (2, 1), v(σ, x) = (-2, 1) and z(σ, x, t) = (σ, x, t + (-2, 1)).) For a flag Φ = (σ, x, t) of 𝒯, let Φ̂ denote the flag (σ, -x, t). In this particular case Equation<ref> takes the form zΦ̂ = (σ, -x, t - v(σ, x)). Consider also the element z̄ = r_0zr_0. Observe that z̄(σ, x, t) = (σ, x, t + v(σ, x^(1σ))). In what follows we will use the permutations z and z̄ to define a set of special facets of 𝒯. We will do so by considering the images of the base facet of 𝒯 under some power of z or z̄. Recall that 𝒯 is determined by a vector of the form (a,0,…,0), (a,a,0,…,0) or (a,a,…,a) for some integer a. For things to go smoothly, and to ensure that all such special facets are distinct from one another, we need 𝒯 to be `large enough'. More precisely, we shall require that a ≥ 6n + 1. This is a technical requirement we impose for the sake of convenience, to ensure that 𝒯 is large enough so that we do not end up `looping around' the toroid when we do not want to. Moreover, when talking about translations in T(𝒯) we can think of their translation vectors as elements of ℤ^n and not of ℤ^n/Λ, or more precisely, as their representatives in the appropriate Dirichlet domain D(Λ̄). We will henceforth always assume that the parameter a satisfies the above bound, even though this might not be strictly necessary for the construction we are about to describe to work. For a flag Ψ ∈ ℱ(𝒯), let F(Ψ) denote the facet of Ψ. If Ψ = (σ, x, t) we shall abuse notation and use F(Ψ) to denote both the facet in the poset of 𝒯 and the vector t. Let F_0 be the base facet of 𝒯. Let i,j ∈ { ±1, ±2, ±3 } and let Ψ_1 ≠ Ψ_2 be flags of 𝒯 containing the facet F of 𝒯. The elements z and z̄ satisfy: * If F = F_0, then the facets F(z^iΨ_1) and F(z^iΨ_2) are different. Similarly, the facets F(z̄^iΨ_1) and F(z̄^iΨ_2) are different. * If F is any arbitrary facet, then F(z^iΨ_1) ≠ F(z^iΨ_2). Similarly F(z̄^iΨ_1) ≠ F(z̄^iΨ_2). * The flags zΨ_1 and (zΨ_1)^0 = r_0zr_0r_0Ψ_1 = z̄(r_0Ψ_1) are 0-adjacent. In particular, F(zΨ_1) = F(z̄(r_0Ψ_1)). * The facet of z^iΨ_1 is the same as the facet of z^-iΨ̂_1. Similarly F(z̄^iΨ_1) = F(z̄^-iΨ̂_1). * If i ≠ ±j, then { F(z^iΨ_1), F(z̄^iΨ_1) } ∩ { F(z^jΨ_2), F(z̄^jΨ_2) } = ∅. * If either F(z^iΨ_1) = F(z^jΨ_2) or F(z̄^iΨ_1) = F(z̄^jΨ_2), then i = -j and Ψ_2 = Ψ̂_1. * If F(z^iΨ_1) = F(z̄^jΨ_2), then i = j and Ψ_1 = r_0Ψ_2; or i = -j and Ψ_1 = r_0Ψ̂_2. If Ψ_1 and Ψ_2 are different flags containing the base facet F_0, then Ψ_1 = (σ_1, x_1, 0) and Ψ_2 = (σ_2, x_2, 0) for some σ_1, σ_2 ∈ S_n and x_1, x_2 ∈ C_2^n with (σ_1, x_1) ≠ (σ_2, x_2). It follows that z^iΨ_1 = (σ_1, x_1, iv(σ_1,x_1)) and z^iΨ_2 = (σ_2, x_2, iv(σ_2,x_2)). From our assumptions on a, the elements iv(σ_1,x_1), iv(σ_2,x_2) ∈ T(𝒯) are different whenever (σ_1, x_1) ≠ (σ_2, x_2); therefore, the facets of z^iΨ_1 and z^iΨ_2 are different. An analogous argument for z̄ completes the proof of Item<ref>. Item<ref> follows from Item<ref> and from the fact that Γ(𝒯) acts transitively on facets. In fact, assume that G_1 and G_2 are the facets of z^iΨ_1 and z^iΨ_2, respectively. Take γ ∈ Γ(𝒯) such that Fγ = F_0. The facet of (z^iΨ_1)γ = z^i(Ψ_1γ) is G_1γ, while G_2γ is the facet of (z^iΨ_2)γ = z^i(Ψ_2γ). Both Ψ_1γ and Ψ_2γ are flags containing F_0, hence G_1γ ≠ G_2γ, which implies that G_1 ≠ G_2. Item<ref> is obvious. Item<ref> follows from Equation<ref>. Let us prove Item<ref> assuming that F = F_0.
The other cases follow in a similar fashion as Item<ref> follows from Item<ref>. Assume then that Ψ_1 = (σ_1, _1, 0) and Ψ_2=(σ_2, _2, 0). For a vector v, let m_v be the smallest number among the absolute values of the entries of v. Observe that if v = u then m_v = m_u. Moreover, we have m_v = m_v(σ, ) for all σ∈ S_n and x ∈ C_2^n. Now, ^iΨ_1 = (σ_1,_1,iv_0(σ_1, _1)) and ^jΨ_2 = (σ_2,_2,jv_0(σ_2, _2)). What is more, m_iv_0(σ_1, _1) = m_iv_0 = i and m_jv_0( σ_2,_2 ) = m_jv_0 = j. Since i ≠± j, we see that m_iv_0≠ m_jv_0 and thus F(^iΨ_1) = iv_0(σ_1, _1) ≠ jv_0(σ_2, _2) = F(^jΨ_2). We conclude that ^iΨ_1 contains a different face than ^jΨ_2. A similar argument proves that F(^iΨ_1) ≠ F(^jΨ_2). The remaining cases for Item<ref> follow from Item<ref>. Items<ref> and<ref> follow directly from Item<ref> and Item<ref>. Now we are ready to develop our main construction and as a consequence, prove Theorem<ref>. Let () denote the set of white flags of . The idea is to build a permutation group (acting on the left) on the set () ×_3 p for some p ∈. We shall prove that the group satisfies the relations required to be the rotation group of a rotary extension of ; namely those in Corollary<ref>. Then we will impose conditions on p to prove that the potential abstract polytope is actually chiral (and not regular). Finally, we shall use Lemma<ref> to prove that the group satisfies the intersection property. We shall denote the elements of () ×_3p as pairs (Φ,ℓ) (for Φ∈() and ℓ∈_3p) keeping in mind that if necessary, Φ will be represented as a triplet (σ, , t), following the labelling of () described in Section<ref>. Observe that if l_1, l_2∈ are such that l_1≡ l_23p, then l_1≡ l_23. Therefore, for ℓ∈_3p expressions such as ℓ≡ 0 3 are well defined. Let us establish some notation that we will carry out along the whole section. Assume that the base flag of is Φ_0 and F_0 is the facet of Φ_0. Let be as in Equation. For i ∈{± 1, ± 2, ± 3} let Φ_i denote the flag ^iΦ_0 and let F_i denote the facet of Φ_i. If F is a facet of we denote by (F,ℓ) the set { (Φ,ℓ) : Φ is a white flag and F ∈Φ}. We shall abuse language and call (F,ℓ) the facet of (Φ,ℓ) whenever (Φ,ℓ) ∈ (F,ℓ). Let X_0 be the base vertex of and let X_H be the translate of X_0 by the vector (0,-3,0^n-2). Define H = {(F,1) : F is incident to X_H}. Observe that, by Corollary<ref>, if (F,1) ∈ H then the vector associated with F is of the form (ϵ_1,-3 +ϵ_2, ϵ_3, …,ϵ_n) with ϵ_i ∈{0,-1}. From this, we see that if a flag Ψ is in the same facet as ^jΦ_0 or ^jΦ_0 with j ∈{0, ± 1,± 2, ± 3}, then the facet of Ψ is not in H. This information is relevant for the following definition and observations, but the set H itself will not play a role until Section<ref>, when we prove that, under certain circumstances, has the intersection property. Let F be a facet of and ℓ∈_3p. The root (Φ,ℓ) of (F,ℓ) is defined as follows: * If F = F_i for some i ∈{0,±1,±2,±3}, the root of (F,ℓ) is (Φ_i, ℓ). * If ℓ≢1 3 and F=F(^jΦ) for some white flag Φ with F_0∈Φ and j ∈{±1,±2,±3}, the root of ( F,ℓ) is (^jΦ,ℓ). * If (F,1) ∈ H, then the root of (F,1) is (Ψ,1), with Ψ any white flag incident to the vertex X_H. * In any other case the root of (F,ℓ) is (Ψ, ℓ) with Ψ = (1, 1, F). That is, Ψ is the translate of Φ_0 that contains F. In particular, if ℓ≡ 1 3 and (F,ℓ) ∉H, the root of (F,ℓ) is (Ψ, ℓ) with Ψ a translate of Φ_0. In Figure<ref> we represent the choice of the roots for dimension 2 in the torus { 4,4 }_(13,0). 
Note that in most cases the root of the pair (F,ℓ) is a translate of (Φ_0,ℓ). Strictly speaking, there could be facets (F,ℓ) with more than one root. For example, if n is odd, according to Item<ref> of Definition<ref> the facet (F_1,0) has (Φ_1,0) as its root; however, according to Item<ref>, the root of (F_1,0) should be (r_0Φ̂_1, 0) = (z̄^-1r_0Φ̂_0, 0). We let that pass for now, but in Remark<ref> it will become obvious that this potential double definition of roots is not relevant for our purposes. Now, consider a facet F of 𝒯 and an element ℓ ∈ ℤ_3p, and suppose (Φ_F,ℓ) is a root of (F,ℓ). For i ∈ {0,1}, we define ρ_i^(F,ℓ) as the automorphism of F mapping Φ_F to r_iΦ_F. As we will see shortly, this permutation does not depend on the choice of the root of (F,ℓ) (if there happens to be more than one). Further, ρ_i^(F,ℓ) permutes the flags of F and maps white flags to black flags and vice versa. For every ℓ ∈ ℤ_3p, we can define a permutation ρ^ℓ of ℱ(𝒯) by ρ^ℓΦ = Φρ_0^(F,ℓ) if (F,ℓ) ∉ H, and ρ^ℓΦ = Φρ_1^(F,ℓ) if (F,ℓ) ∈ H, where F is the facet of Φ. A fairly obvious observation is that for every ℓ, ρ^ℓ is an involution. Moreover, we have the following results. Let 0 ≤ i ≤ n-1 and ℓ ∈ ℤ_3p, Φ a flag of 𝒯 and F the facet of Φ. Since the element r_i preserves the facet of Φ and both ρ_0^(F,ℓ) and ρ_1^(F,ℓ) are automorphisms of F, then r_i(Φρ_0^(F,ℓ)) = (r_iΦ)ρ_0^(F,ℓ) and r_i(Φρ_1^(F,ℓ)) = (r_iΦ)ρ_1^(F,ℓ). It follows that ρ^ℓ and r_i commute. Let F be a facet of 𝒯 and ℓ ∈ ℤ_3p such that (F,ℓ) ∉ H. Let Φ_F be the flag containing F such that (Φ_F,ℓ) is a root of (F,ℓ). If Φ ∈ { Φ_F, Φ̂_F, r_0Φ̂_F }, then ρ^ℓΦ = r_0Φ, or equivalently, r_0(ρ^ℓΦ) = Φ. Following the notation in the remark above, all the flags Φ_F, Φ̂_F and r_0Φ̂_F define the same ρ_i^(F,ℓ), and thus ρ^ℓ is well defined regardless of the potential existence of two base flags for a facet F. We are now ready to give an explicit definition of the generators of the group 𝔾 = ⟨σ̂_1, …, σ̂_n+1⟩. For 1 ≤ i ≤ n define σ̂_i(Φ,ℓ) = (r_0r_iΦ, ℓ). The permutation σ̂_n+1 is given by σ̂_n+1(Φ,ℓ) = (ρ^ℓ r_0Φ, ℓ+1) if F_1 ∈ Φ and ℓ ≡ 0 (mod 3), or F_3 ∈ Φ and ℓ ≡ 1 (mod 3), or F_0 ∈ Φ and ℓ ≡ 2 (mod 3); σ̂_n+1(Φ,ℓ) = (ρ^ℓ r_0Φ, ℓ-1) if F_0 ∈ Φ and ℓ ≡ 0 (mod 3), or F_1 ∈ Φ and ℓ ≡ 1 (mod 3), or F_3 ∈ Φ and ℓ ≡ 2 (mod 3); and σ̂_n+1(Φ,ℓ) = (ρ^ℓ r_0Φ, ℓ) otherwise. We can rewrite Remark<ref> as follows. Let F be a facet of 𝒯 and ℓ ∈ ℤ_3p such that (F,ℓ) ∉ H; then for all Φ ∈ { Φ_F, Φ̂_F, r_0Φ̂_F } we have σ̂_n+1(Φ,ℓ) = (Φ,ℓ'), where ℓ' = ℓ+1 if F = F_1 and ℓ ≡ 0 (mod 3), or F = F_3 and ℓ ≡ 1 (mod 3), or F = F_0 and ℓ ≡ 2 (mod 3); ℓ' = ℓ-1 if F = F_0 and ℓ ≡ 0 (mod 3), or F = F_1 and ℓ ≡ 1 (mod 3), or F = F_3 and ℓ ≡ 2 (mod 3); and ℓ' = ℓ otherwise. Now, note that for a fixed ℓ ∈ ℤ_3p, the group 𝔾_n+1 := ⟨σ̂_1, …, σ̂_n⟩ acts on 𝒲(𝒯) × {ℓ} just as the group ⟨ r_0r_1, r_0r_2, …, r_0r_n ⟩ ≅ Mon⁺_𝒲(𝒯) acts on the set 𝒲(𝒯). We abuse notation by considering the elements of Mon⁺_𝒲(𝒯) as elements of 𝔾. In particular, if z denotes the element defined in Equation<ref>, then we may think that z ∈ 𝔾. Moreover, 𝔾_n+1 ≅ Mon⁺_𝒲(𝒯) ≅ Γ⁺(𝒯) (see Part<ref> of Proposition<ref>). This implies that the (potential) polytope defined by 𝔾 has facets isomorphic to 𝒯. The group elements σ̂_1, …, σ̂_n+1 will play the role of the elements τ_1, …, τ_n+1 of Corollary<ref> (note the shift of indices), meaning that in order to guarantee that the group 𝔾 is the automorphism group of a chiral extension of 𝒯, we need to prove that the generators σ̂_1, …, σ̂_n+1 satisfy the relations in Equation<ref>. With the notation given above, the group elements σ̂_1, …, σ̂_n+1 satisfy the relations of Corollary<ref>.
Translating the notation of Corollary<ref>, we only need to prove that σ̂_n+1^2 = 𝕀 and (σ̂_i^-1σ̂_n+1)^2 = 𝕀 for all 1 ≤ i ≤ n-1. First note that an immediate consequence of Remark<ref> is that ρ^ℓ r_0 is an involution. Assume that Φ is a white flag with facet F_1 and ℓ ∈ ℤ_3p is such that ℓ ≡ 0 (mod 3). Then σ̂_n+1^2(Φ,ℓ) = σ̂_n+1(ρ^ℓ r_0Φ, ℓ+1), but ρ^ℓ r_0 maps flags containing F_1 to flags containing F_1 and ℓ+1 ≡ 1 (mod 3), hence σ̂_n+1(ρ^ℓ r_0Φ, ℓ+1) = (ρ^ℓ+1 r_0 ρ^ℓ r_0Φ, ℓ). Finally observe that ρ_0^(F_1,ℓ) = ρ_0^(F_1,ℓ+1), which implies that ρ^ℓ+1 r_0 ρ^ℓ r_0Φ = Φ. With a similar argument we can prove that if Φ and ℓ are any of the other possible pairs such that the second coordinate of σ̂_n+1(Φ,ℓ) is not ℓ, then σ̂_n+1^2(Φ,ℓ) = (Φ,ℓ). If σ̂_n+1(Φ,ℓ) = (ρ^ℓ r_0Φ, ℓ) then clearly σ̂_n+1^2(Φ,ℓ) = (Φ,ℓ). To prove the second part, first note that if Φ is a flag and 1 ≤ i ≤ n-1, then Φ and r_iΦ contain the same facet. Again, we will only prove the case when F_1 ∈ Φ and ℓ ≡ 0 (mod 3); the remaining cases are analogous. We have (σ̂_i^-1σ̂_n+1)^2(Φ,ℓ) = σ̂_i^-1σ̂_n+1(r_ir_0ρ^ℓ r_0Φ, ℓ+1) = σ̂_i^-1σ̂_n+1(r_iρ^ℓΦ, ℓ+1) = σ̂_i^-1(ρ^ℓ+1 r_0 r_iρ^ℓΦ, ℓ) = (r_ir_0ρ^ℓ+1 r_0 r_iρ^ℓΦ, ℓ) = (Φ,ℓ). The last equality follows from the facts that r_0 and r_i commute with ρ^ℓ (Remark<ref>) and that ρ^ℓ and ρ^ℓ+1 induce the same permutation on the flags containing F_1. Next we will prove that, under certain conditions, there is no group automorphism as described in Corollary<ref>. More precisely: Let 𝔾 be the group generated by the permutations σ̂_1, …, σ̂_n+1 defined in Equations<ref> and<ref>, acting on the set 𝒲(𝒯) × ℤ_3p, for a prime p > 3|𝒲(𝒯)|. Let μ = σ̂_n+1z^-3σ̂_n+1z^2σ̂_n+1z and ν = r_0z^-3r_0σ̂_n+1r_0z^2r_0σ̂_n+1r_0zr_0. Then there is no group automorphism α: 𝔾 → 𝔾 such that α(μ) = ν. In particular, there is no automorphism α: 𝔾 → 𝔾 satisfying α(σ̂_1) = σ̂_1^-1 and α(σ̂_i) = σ̂_i for all 2 ≤ i ≤ n+1. The idea behind the proof goes as follows: the hypothetical automorphism α acts on ⟨σ̂_1, …, σ̂_n⟩ ≅ Mon⁺_𝒲(𝒯) as conjugation by r_0. Since α fixes σ̂_n+1, it should map μ to ν. To prove that this is impossible we shall prove that ⟨μ⟩ induces an orbit of length p (Lemma<ref>) and that the size of the orbits of ⟨ν⟩ is bounded (Lemma<ref>). By choosing p equal to a large prime we will have proved that p divides the order of μ but does not divide the order of ν, which in turn implies that α cannot map μ to ν. Let Φ_0 denote the base flag of 𝒯. Let 𝔾 = ⟨σ̂_1, …, σ̂_n+1⟩ be the permutation group on the set 𝒲(𝒯) × ℤ_3p defined by Equations<ref> and<ref>. Let μ be as in Equation<ref>. Then the orbit of (Φ_0,0) under ⟨μ⟩ has p elements. As before, for i ∈ { ±1, ±2, ±3 } let Φ_i denote the flag z^iΦ_0 and let F_i denote the facet of Φ_i. Recall that for every j ∈ ℤ_3p, σ̂_n+1(Φ_i,j) = (Φ_i,j') with j' ∈ { j-1, j, j+1 } (see Remark<ref>). Let ℓ ∈ ℤ_3p be such that ℓ ≡ 0 (mod 3). Consider then the following computation: μ(Φ_0,ℓ) = σ̂_n+1z^-3σ̂_n+1z^2σ̂_n+1z(Φ_0,ℓ) = σ̂_n+1z^-3σ̂_n+1z^2σ̂_n+1(Φ_1,ℓ) = σ̂_n+1z^-3σ̂_n+1z^2(Φ_1,ℓ+1) = σ̂_n+1z^-3σ̂_n+1(Φ_3,ℓ+1) = σ̂_n+1z^-3(Φ_3,ℓ+2) = σ̂_n+1(Φ_0,ℓ+2) = (Φ_0,ℓ+3), where in the second, fourth and sixth equalities we used that F_i ∈ Φ_i for i = 1, 3 and 0, respectively (see Figure<ref> and Equation<ref>). It follows that μ^j(Φ_0,0) = (Φ_0,3j), which implies that the length of the orbit of (Φ_0,0) under ⟨μ⟩ is p. Let us now explore the possibilities for the orbits under ⟨ν⟩. Informally, we aim to show that the orbit of a flag under ⟨ν⟩ is either very small, or is confined within three `copies' of 𝒯. This is stated formally in Lemma<ref> below, which will be proved at the end of this section.
Let (Φ,ℓ) ∈() ×_3p and let t ∈_3p such that ℓ∈{ 3t, 3t+1, 3t+2 } then one of the following hold: * The orbit ⟨⟩ (Φ,ℓ) is contained in _3t∪_3t+1∪_3t+2; or * | ⟨⟩ (Φ,ℓ) | ≤ 3. To prove Lemma<ref>, we will need a series of auxiliary results. The idea of the proof is as follows. Assume that the orbit of (Φ,ℓ) under ⟨⟩ is not contained in _3t∪_3t+1∪_3t+2 and let j be such that ^j(Φ, ℓ) = (Ψ,ℓ') ∈_3t∪_3t+1∪_3t+2 but ^j+1(Φ,ℓ) = (Ψ,ℓ') ∉_3t∪_3t+1∪_3t+2. Consider the following pairs (Ψ_1, ℓ'_1) = (Ψ,ℓ') (Ψ_2, ℓ'_2) = ^2_n+1(Ψ,ℓ') (Ψ_3, ℓ'_3) = ^-3_n+1^2_n+1(Ψ,ℓ') Since (Ψ,ℓ') ∉_3t∪_3t+1∪_3t+2, one of the pairs (Ψ_i, ℓ'_i) must be such that the second coordinate of _n+1(Ψ_i, ℓ'_i) is equal to 3t-1 or 3t+3. Note that this happens if and only if the pair (Ψ_i, ℓ'_i) satisfies the following two conditions: * ℓ'_i∈{3t,3t+2}, * F_0∈Ψ_i. The orbit of (Φ, ℓ) will depend on which of the three pairs (Ψ_i, ℓ'_i) satisfies the above conditions. In the following pages, we will show that (Φ,ℓ) has an orbit of size 2 (Lemmas<ref> to<ref> ) unless Φ satisfies very specific conditions (Lemma<ref>), in which case | ⟨⟩ (Φ, ℓ) | = 3. This will imply that | ⟨⟩ (Ψ, ℓ') | = | ⟨⟩ (Φ, ℓ) | ≤ 3, completing the proof. Let ℓ∈_3p such that ℓ≡ 0 3 and Φ∈{r_0Φ̂_̂1̂, r_0Φ̂_̂3̂}. Observe that Φ is white if and only if n is odd. In this situation, the orbits of (Φ,ℓ) under ⟨⟩ are { (r_0Φ̂_̂1̂,ℓ), (r_0Φ̂_̂1̂,ℓ-1), (r_0Φ̂_̂1̂,ℓ+1) }, { (r_0Φ̂_̂3̂,ℓ), (r_0Φ̂_̂3̂,ℓ-2), (r_0Φ̂_̂3̂,ℓ-1) }. In particular, this implies that if ℓ≡ i 3 for some i ∈{ 0,2 } and Φ∈{r_0Φ̂_̂1̂,r_0Φ̂_̂3̂}, then | ⟨⟩( Φ) | = 3. Observe that a direct consequence of Equations and is that ^j r_0Φ̂_̂î = r_0Φ̂_̂î-̂ĵ. This can be used to see that the first coordinates of (Φ,ℓ), (Φ,ℓ) and (^3 Φ,ℓ) are all fixed by _n+1, for every ℓ∈_3p (Remak<ref>), which implies that the first coordinate of μ(Φ,ℓ) is Φ. It only remains to track the second coordinate of μ(Φ,ℓ). We shall only do it for Φ = r_0Φ̂_̂1̂, the other case is analogous. The second coordinate of μ(Φ,ℓ) must be ℓ -1 because F_0∈Φ but the facets of ^3Φ and Φ are not F_3 or F_1 (note that ℓ-1 ≡ 2 3). A similar argument proves that ^2(Φ, ℓ) = (Φ, ℓ + 1) and that ^3(Φ,ℓ) = (Φ,ℓ). Let (Φ,ℓ) ≠ (r_0 Φ̂_̂1̂,ℓ) and let (Ψ,ℓ') = (Φ,ℓ). Suppose F_0 ∈Ψ and ℓ' ≡ i 3 for some integer i∈{0,2}. Then the following hold: * (Φ,ℓ) = (^-1Ψ, ℓ'), in particular ℓ' = ℓ; * if i = 2 then (Φ,ℓ) = (^-1 r_0 ^1 Φ,ℓ+1); * if i = 0 then (Φ,ℓ) = (^-1 r_0 ^1 Φ,ℓ-1); * |⟨⟩ (Φ,ℓ)| = 2. Item<ref> is obvious. Assume that i = 2 and consider the following computation: (Φ, ℓ) = _n+1^-3_n+1^2_n+1( Φ, ℓ) = _n+1^-3_n+1^2_n+1(Φ, ℓ) = _n+1^-3_n+1^2( r_0Φ, ℓ+1 ) = _n+1^-3_n+1( ^2 r_0Φ, ℓ+1 ) = _n+1^-3( ^2 r_0Φ, ℓ+1 ) = _n+1(^-1 r_0Φ, ℓ+1 ) = (^-1 r_0Φ, ℓ+1 ), where (<ref>) holds because F_0 is the facet of Φ. The facet of ^2 ( r_0Φ) cannot be F_0 or F_1 (see Item<ref> of Proposition<ref>) and ^2 ( r_0Φ,ℓ+1) is the root of its facet; hence (<ref>) holds. Finally observe that F_0∉^-1( r_0Φ), because F_0∈ r_0Φ and if F_1∈^-1( r_0Φ), then Φ must be r_0Φ̂_̂1̂, which was excluded by hypothesis. Therefore, Equation holds. This proves Item<ref>. A completely analogous computation to that in Equation can be used to prove Item<ref>. That is, that if i=0, then (Φ, ℓ) = (^-1 r_0Ψ, ℓ-1). Keep in mind that in this case we shall observe that F_3 is not the facet of ^2 r_0 Φ (to prove Equation) or of ^-1 r_0 Φ (to prove Equation), but both cases follow from Item<ref> of Proposition<ref>. 
Let Φ' = ^-1 r_0Φ and observe that if ℓ≡ 2 3, then (Φ',ℓ+1) satisfies the hypotheses of Item<ref>. Indeed, Φ' ≠ r_0Φ̂_̂1̂, otherwise r_0Φ = r_0Φ̂_̂0̂ which in turn implies that Φ = r_0Φ̂_̂1̂. Clearly ℓ+1 ≡ 0 3 and F_0∈Φ' = r_0Φ. A similar argument can be used to show that if ℓ≡ 0 3, then (Φ',ℓ-1) satisfies the hypotheses of Item<ref>. The proof is complete by observing that ^-1[ℓ+1] r_0 (^-1 r_0Φ) = Φ and ^-1[ℓ-1] r_0 (^-1 r_0Φ) = Φ, where in both cases we have used that [ℓ-1], and [ℓ+1] act the same way on the flags containing F_0. Let (Φ,ℓ) ≠ (r_0 Φ̂_̂3̂,ℓ) and let (Ψ,ℓ') = ^2_n+1 (Φ,ℓ). Suppose F_0 ∈Ψ and ℓ' ≡ i 3 for some integer i∈{0,2}. Then the following hold: * (Ψ,ℓ') = (^3Φ, ℓ); * if i = 2 then (Φ,ℓ) = (^-3 r_0 ^3 Φ,ℓ+1); * if i = 0 then (Φ,ℓ) = (^-3 r_0 ^3 Φ,ℓ-1); * |⟨⟩ (Φ,ℓ)| = 2. Let (Φ,ℓ) and (Ψ, ℓ') as described above. Since F_0∈Ψ, then clearly the facet of ^-2Ψ is not F_0, F_1 or F_3, which implies ( Φ,ℓ) = _n+1^-2 (Ψ, ℓ') = (^-2Ψ, ℓ' ), where the last equality follows from the fact that ^-2(Ψ, ℓ' ) is the root of its facet. This proves that (Φ,ℓ) = (^-2 Ψ,ℓ'), or equivalently (Ψ,ℓ) = (^3 Φ,ℓ'). Now assume that ℓ≡ 2 3. Let us now compute ( Φ, ℓ): ( Φ, ℓ) = _n+1^-3_n+1^2_n+1( Φ, ℓ) = _n+1^-3_n+1(^3Φ, ℓ) = _n+1^-3( r_0^3Φ, ℓ+1 ) = _n+1(^-3 r_0^3Φ, ℓ+1 ) = ( ^-3 r_0^3Φ, ℓ+1 ) . Equation holds because F_0∈Ψ = ^3Φ. Observe that ^-3( r_0 Φ,ℓ) is the root of its facet and clearly F_0∉^-3 r_0 ^3Φ, because F_0∈ r_0^3Φ. Moreover, F_1 cannot be the facet of ^-3 r_0 ^3Φ (Item<ref> of Proposition<ref>). Hence, (<ref>) holds and this completes the proof of Item<ref>. An analogous computation to that in Equation can be used to prove Item<ref> (keeping in mind to replace ℓ+1 by ℓ-1). In order to prove Equation in this case we need to show that F_3 is not the facet of ^-3 r_0 ^3Φ. However, according to Item<ref> of Proposition<ref> this is only possible if r_0 ^3Φ = r_0Φ̂_̂0̂, and this, in turn, implies that Φ = r_0Φ̂_̂3̂, which was excluded by hypothesis. Let Φ' = ^-3 r_0^3Φ. We will show that if ℓ≡ 2 3, then (Φ',ℓ+1) satisfies the hypotheses of of Lemma<ref> and thus, Item<ref> holds for Φ',ℓ). First, note that the same argument used in the proof of Lemma<ref> can be used to prove that Φ' ≠ r_0Φ̂_̂3̂. Further, ℓ+1 ≡ 0 3. It only remains to show that if (Ψ',ℓ') = ^2_n+1 (Φ', ℓ+1), then F_0∈Ψ' and ℓ' ≡ 0 3. Just consider (Ψ',ℓ') = ^2_n+1( ^-3 r_0^3Φ, ℓ+1 ) = ^2_n+1( ^-2 r_0^3Φ, ℓ+1 ) = ^2( ^-2 r_0^3Φ, ℓ+1 ) = ( r_0^3Φ, ℓ+1 ). Recall that F_0∈^3Φ, which implies that F_0∈ r_0^3Φ. It follows that ^-2( r_0^3Φ, ℓ+1 ) is the root of its facet and this facet cannot be F_0 or F_1 (or F_3). Therefore ^-2( r_0^3Φ, ℓ+1 ) is fixed by _n+1 which proves (<ref>). We just proved that ℓ' = ℓ +1 ≡ 0 3 and, as previously observed, F_0∈ r_0^3Φ = Ψ'. As before, a similar argument can be used to show that if ℓ≡ 0 3, then (Φ',ℓ-1) satisfies the hypotheses of Item<ref>. Finally Item<ref> follows from observing that ^-3[ℓ+1] r_0^3 (^-3 r_0^3Φ) = Φ and ^-3[ℓ-1] r_0^3 (^-3 r_0^3Φ) = Φ, where in both cases we have used that [ℓ-1], and [ℓ+1] act the same way on flags containing F_0. Let (Φ,ℓ) be a flag, let (Ψ,ℓ') = ^-3_n+1^2_n+1 (Φ,ℓ). Suppose F_0 ∈Ψ and ℓ' ≡ i 3 for some integer i∈{0,2}. Then the following hold: * (Φ,ℓ) = (Ψ, ℓ'); * if i = 2 then (Φ,ℓ) = ( r_0 Φ,ℓ+1); * if i = 0 then (Φ,ℓ) = ( r_0 Φ,ℓ-1); * |⟨⟩ (Φ,ℓ)| = 2. To prove Item<ref> consider the flag ^3(Ψ,ℓ') = _n+1^2 _n+1 (Φ,ℓ). 
Observe that ^3(Ψ,ℓ') is the root of its facet and that this facet is different from F_0 and F_3 (by Item<ref> of Proposition<ref>). Thus _n+1 acts trivially on ^3(Ψ,ℓ'). It follows that ^3(Ψ,ℓ') =_n+1^3(Ψ,ℓ') = _n+1_n+1^2 _n+1 (Φ,ℓ) = ^2 _n+1 (Φ,ℓ), thus (Ψ,ℓ') = ^-3^2_n+1 (Φ,ℓ) = ^-1_n+1 (Φ,ℓ). With a similar argument but now using that (Ψ,ℓ') is the root of its facet one can prove that (Ψ,ℓ') = _n+1 (Φ,ℓ) = (Φ,ℓ), which clearly implies that (Ψ,ℓ') = ^-1^1 (Φ,ℓ) = (Φ,ℓ). This proves Item<ref>. In particular, (Φ,ℓ) = _n+1(Φ,ℓ) and by definition _n+1(Φ,ℓ) = ( r_0 Φ,ℓ+j), where j=1 if i=2 and j=-1 if i=0. Items<ref> and<ref> follow. Now we will prove Item<ref>. As before, we will just prove the case when i=2. The case when i=0 follows from an analogous argument. Define Φ' = r_0Φ and (Ψ',ℓ') =^-3_n+1^2_n+1(Φ',ℓ+1). Notice that F_0∈Φ', which implies that (Φ', ℓ+1) and (^3Φ', ℓ+1) are the root of its respective facet and this facet is neither F_0, F_1 or F_3; therefore, they are fixed by _n+1. Consider the following computation. ( Ψ', ℓ' ) = ^-3_n+1^2_n+1(Φ', ℓ+1 ) = ^-3_n+1^2( Φ', ℓ +1 ) = ^-3_n+1(^3Φ', ℓ+1 ) = ^-3( ^3Φ', ℓ+1 ) =(Φ',ℓ+1). Now we clearly have that F_0∈Ψ' = Φ' and ℓ' = ℓ+1≡ 0 3, as desired. Finally the proof of Item<ref> is complete by observing that [ℓ+1] r_0 ( r_0Φ) = Φ and [ℓ-1] r_0 ( r_0Φ) = Φ, where as before, we used that [ℓ-1], and [ℓ+1] act the same way on flags containing F_0. Lemmas<ref> to<ref> prove that Lemma<ref> holds, which in turns proves Proposition<ref>. §.§ The intersection property Now we are going to prove that the group satisfies the intersection property. The key to this proof is to use Lemma<ref>. In order to do so, we shall consider the following group elements _1 = _1, _i = _i-1^-1_i if 2 ≤ i ≤ n+1. Now for each i, _i plays the role of σ_i in Lemma<ref>. Observe that ⟨_1, …, _n⟩ = ⟨_1, …, _n⟩≅() ≅(). Moreover, the elements _1, …, _n act on (Φ, ℓ) by _i(Φ,ℓ) = (s_iΦ, ℓ), where s_i = r_i-1r_i. Therefore, in order to use Lemma<ref> to prove the that satisfies the intersection property, we just need to show that ⟨_1, …, _n⟩∩⟨_j, …, _n+1⟩ = ⟨_j, …, _n⟩ for every j such that 2 ≤ j ≤ n+1 (note the change of rank). The strategy to follow is to consider a certain flag Φ of such that the intersection between the orbit of (Φ,1) under ⟨_1, …, _n⟩ and the orbit of (Φ,1) under ⟨_j, …, _n+1⟩ is precisely the orbit of (Φ,1) under ⟨_j, …, _n⟩. The intersection property will follow from the fact that the action of ⟨_1, …, _n⟩ on the set () ×{1} is free. The following results will focus on guaranteeing the existence of such a flag, for which we will introduce some terminology. Consider the base vertex V_0 of , and let γ_1∈() be the translation by the vector (1,0^n-1). We define the line L as the orbit of the base vertex V_0 under ⟨γ⟩. We say that Ψ= (τ, , u) is perpendicular to L if n τ = 1. In geometric terms a flag is perpendicular to L precisely when its (n-1)-face belongs to a hyperplane whose normal vector points in the same direction as L. We will now prove some technical structural results about . If Φ is a flag whose vertex is in L, then the facet of Φ is neither F_1 nor F_3, nor does it belong to the set H defined in Section<ref>. Let Φ be a flag of such that the vertex of Φ is in L and assume that Φ is perpendicular to L. Let γ_1 be the translation by the vector e_1 and let _n+1 be as in Equation. Then _n+1(Φ, 1) ∈ (Φ, 1)⟨γ_1⟩. Moreover, all the flags in ⟨_n+1⟩ (Φ, 1) are perpendicular to L and their vertex is in L. 
First observe that the facet of Φ is not F_1 or F_3 and it does not belong to H (see Remak<ref>), then _n+1(Φ, 1) = ^-1_n_n+1 (Φ,1) = ^-1_n([1]r_0Φ,1) = (r_n r_0 [1]r_0 Φ,1) = (r_n[1] Φ,1). Let Φ= (σ, , t). Then we have _n+1(Φ, 1) = (r_n[1] (σ, , t),1) = (r_n (σ, ^(1), t),1) = ((σ, , t+x_1e_1),1), where the second equality holds because the facet of Φ is not in H, while the last equality holds because nσ = 1, since Φ is perpendicular to L. Finally observe that the flag Ψ=(σ, , t+x_1e_1) also satisfies that its vertex belong to L and it is perpendicular to L, hence the result follows. Let Φ= (σ,,t) be a flag of such that the vertex of Φ belongs to L . Let j ≥ 2, v ∈⟨_j, …, _n⟩, and (Ψ, ℓ) = _n+1 v _n+1^-1(Φ, 1). Then ℓ = 1 and Ψ is the image under a translation in direction 1 of a flag in ⟨ s_j, …, s_n⟩Φ. Let v ∈⟨_j, …, _n⟩. Observe that v fixes the second coordinate (Φ, 1) and thanks to Remak<ref>, so does _n+1. This indeed proves that ℓ = 1. Moreover, we can think of v as an element of ⟨ s_j, …, s_n⟩. Observe that we can write v as v=u_rs_n u_r-1 s_n⋯ s_n u_0, with u_i∈⟨ s_j, …, s_n-1⟩ for i ∈{0, …, r}. We shall prove our result by induction over r. Assume that r=0, this implies that Ψ = r_n [1] u_0 [1] r_n Φ and recall that u_0 ∈⟨ s_j, …, s_n-1⟩ commutes with [1], thus Ψ = r_n u_0 r_n Φ and r_n u_0 r_n ∈⟨ s_j, …, s_n⟩. Before proving the general case, let us prove the following fact: if Ψ_1 is a translate in direction 1 of a flag in ⟨ s_j, …, s_n⟩Φ, then so is the flag Ψ_2 = r_n[1] s_n[1] r_nΨ_1. Indeed, take Ψ_1 = (σ, , t) as above. Notice that if n σ≠ 1, then [1] r_nΨ_1 = r_n[1] Ψ_1. In this situation r_n[1] r_n-1 r_n[1] r_nΨ_1 = r_n[1] r_n-1 r_n r_n[1] Ψ_1 = r_n r_n-1Ψ_1 = s_n^-1Ψ_1, where the second equality follows from the fact that [1] and r_n-1 commute (see Remak<ref>). Assume that n σ = 1 and consider then the following computation: r_n[1] r_n-1 r_n[1] r_n (σ, , t) = r_n r_n-1[1] r_n[1] r_n (σ, , t) = r_n r_n-1[1] r_n[1] (σ, ^(1), t-x_1e_1) = r_n r_n-1[1] r_n (σ, , t-x_1e_1) = r_n r_n-1[1] (σ, ^(1), t-x_1e_1- x_1 e_1) = r_n r_n-1 (σ, , t-x_1e_1- x_1 e_1). , where the first equality holds because [1] and r_n-1 commute (see Remak<ref>). Let us come back to the general case. Assume that the result holds for any word u'_r-1s_n u'_r-1 s_n⋯ s_n u'_0 with u_i∈⟨ s_j, …, s_n-1⟩ and consider now v as above. Define the flags Ψ_1 = r_n[1] u_0[1] r_nΦ = r_n u_0 r_n Φ and Ψ_2 = r_n[1] s_n[1] r_nΨ_1. From our previous discussion, Ψ_2 is the translate in the direction of e_1 of a flag in ⟨ s_j, …, s_n⟩Φ, in particular its vertex belongs to L. Notice that Ψ = r_n[1] v [1] r_nΦ = (r_n[1] u_r s_n ⋯ s_n u_1 [1] r_n)(r_n[1] s_n[1] r_n) (r_n[1] u_0[1] r_nΦ) = (r_n[1] u_r s_n ⋯ s_n u_1 [1] r_n)(r_n[1] s_n[1] r_nΨ_1) = (r_n[1] u_r s_n ⋯ s_n u_1 [1] r_n) Ψ_2. The result follows from our inductive assumption on u_r s_n ⋯ s_n u_1. Using the previous results it is easy to obtain a description of the orbit of certain flags under the action of ⟨_j, …, _n+1⟩ for j ≥ 2. Let Φ be a flag of such that the vertex of Φ is in L. Assume that Φ is perpendicular to L. Let j ≥ 2. Then ⟨_j, …, _n+1⟩(Φ,1) =⋃_i=0^a-1⟨_j, …, _n⟩ (Φ,1)γ_1^i where γ_1∈() is the translation by the vector e_1. In other words, the orbit of (Φ,1) under ⟨_j, …, _n+1⟩ is the union of translates in the direction of e_1 of the orbit of Φ under ⟨ s_j, … s_n⟩. Observe that Lemma<ref> proves that the right side of Equation is contained in the orbit of (Φ,1) under ⟨_j, …, _n+1⟩. 
To prove the other inclusion, it suffices to show that given a permutation w ∈⟨_j, …, _n+1⟩ there exists v ∈⟨_j, …, _n⟩ and d ∈ such that w(Φ,1) = v(Φ,1)γ_1^d. First, observe that for some integer r we can rewrite w as w = v_r _n+1 v_r-1_n+1…_n+1v_0 where v_i ∈⟨_j, … , _n⟩. We proceed by induction on r. If r=0 then w = v_0 ∈⟨_j …_n⟩, and we are done. Let us now show that the case r=1 holds. Suppose that r=1 so that w = v_1 _n+1 v_0 with v_0,v_1 ∈⟨_j , … , _n⟩. Let (Ψ,ℓ) = _n+1(Φ,1) and note that by Lemma<ref> we have Ψ = Φγ_1^ϵ (for some ϵ∈{± 1}) and Ψ is perpendicular to L. Then v_1 _n+1 v_0 (Φ, 1) = v_1 _n+1 v_0 _n+1^-1(Ψ, ℓ) = v_1v_0'(Ψ,ℓ)γ_1^d = v_1v_0'(Φ,1)γ_1^d+ϵ for some v_0' ∈⟨_j, …, _n⟩ and some integer d, where the second equality follows from Lemma<ref>. Since v_1v_0' ∈⟨_j, …, _n+1⟩, this concludes the case r=1. Now, suppose that the result holds for all N < r and consider an element w = v_r _n+1…_n+1v_1 _n+1v_0. By Equation, there exists v_0' ∈⟨_j,…, _n ⟩ and d_1 ∈ such that w(Φ,1) = v_r _n+1…_n+1v_1 _n+1v_0(Φ,1) = (v_r _n+1…_n+1)v_1 _n+1v_0(Φ,1) = (v_r _n+1…_n+1)v_1v_0'(Φ,1)γ_1^d_1 = (v_r _n+1…_n+1v_1v_0')(Φ,1)γ_1^d_1 Now let w'= (v_r s_n+1… s_n+1v_1v_0') so that Equation can be rewritten as w(Φ,1) = w'(Φ,1)γ_1^d_1. Observe that (Φ,1)γ_1^d_1 is perpendicular to L. Further, w' ∈⟨_j,…,_n+1⟩ and w' has less than r factors equal to _n+1. Thus, by inductive hypothesis we have w'(Φ,1)γ_1^d_1 = v(Φ,1)γ_1^d_1+d_2 for some v ∈⟨_j …_n⟩ and d_2 ∈. Let Φ be a flag of perpendicular to L. Assume that the vertex of Φ belongs to L. Let 2≤ j ≤ n and w ∈⟨_1, …, _n⟩∩⟨_j, …_n+1⟩. If (Φ', ℓ) = w (Φ, 1), then ℓ = 1 and the vertex of Φ' is the same as the vertex of Φ. We will slightly abuse language and say that w fixes a vertex V, if for every (Ψ,ℓ) such that Ψ has vertex V, the first coordinate of w(Ψ,ℓ) is also a flag with vertex V. First, observe that since w ∈⟨_1, …, _n⟩, w acts on the first coordinate of every pair (Ψ,1) as a permutation s ∈⟨ s_1, …, s_n⟩≤(). Furthermore, since is regular (and s is in the monodromy group), we see that if w fixes one vertex of , then it must fix all vertices. Now, w ∈⟨_j, …_n+1⟩ and clearly all s_j with 2 ≤ j ≤ n fix all vertices of . This means that if _n+1 is not a factor of w, then w fixes all vertices of . Therefore, it suffices that we show that _n+1 fixes some vertex. In other words, we will show that for some flag Ψ of , the vertex of (Ψ,ℓ) is the same as the vertex of _n+1(Ψ,ℓ). Let X_H be the vertex of defined in Definition<ref> and let Ψ_H be a flag with vertex X_H, so that the facet of Ψ_H belongs to H. Observe that, by the definition of ρ^ℓ (Equation), the vertex of ρ^1 Ψ is X_H as well. Now, consider the following computation. _n+1 (Ψ_H,1) = _n^-1_n+1(Ψ_H,1) = (r_nr_0ρ^1r_0 Ψ_H,1) = (r_nρ^1Ψ_H,1). Clearly, r_nρ^1Ψ has the same vertex as Ψ, as neither r_n nor ρ^1 move X_H. That is, the vertex of _n+1 (Ψ,1) is X_H. Therefore, w fixes all vertices of . In particular, the vertex of Φ is the same as the vertex of Φ'. Now we are ready to prove the key results that will help us to prove that the group γ =⟨_1, …, _n+1⟩ satisfies the intersection property in Equation. Let Φ be a flag of that is perpendicular to L and whose vertex belongs to L. Let 2≤ j, then (⟨_1, …, _n⟩∩⟨_j, …, _n+1⟩) (Φ,1) = ⟨_j, …, _n⟩(Φ, 1). It is only necessary to prove that (⟨_1, …, _n⟩∩⟨_j, …, _n+1⟩) (Φ,1) ⊂⟨_j, …, _n⟩(Φ, 1). 
By Lemma<ref>, (⟨σ̃_1, …, σ̃_n⟩ ∩ ⟨σ̃_j, …, σ̃_n+1⟩)(Φ,1) ⊂ ⟨σ̃_j, …, σ̃_n+1⟩(Φ,1) ⊂ ⋃_i=0^a-1⟨σ̃_j, …, σ̃_n⟩(Φ,1)γ_1^i. Now, by Lemma<ref>, if (Ψ,1) is an element of ⟨σ̃_j, …, σ̃_n+1⟩(Φ,1), then Ψ has the same vertex as Φ. This implies that ⟨σ̃_1, …, σ̃_n⟩(Φ,1) ∩ ⟨σ̃_j, …, σ̃_n+1⟩(Φ,1) ⊂ ⟨σ̃_j, …, σ̃_n⟩(Φ,1). As explained before, Lemma<ref> offers the conditions to prove the intersection property for the group ⟨σ̃_1, …, σ̃_n+1⟩. Let σ̃_1, …, σ̃_n+1 be the group elements defined in Equation<ref>. Let j ≥ 2. Then ⟨σ̃_1, …, σ̃_n⟩ ∩ ⟨σ̃_j, …, σ̃_n+1⟩ = ⟨σ̃_j, …, σ̃_n⟩. If j = n+1, then there is nothing to prove. If j < n+1, then let Φ be as in the hypothesis of Lemma<ref>. Let w ∈ ⟨σ̃_1, …, σ̃_n⟩ ∩ ⟨σ̃_j, …, σ̃_n+1⟩ and observe that (⟨σ̃_1, …, σ̃_n⟩ ∩ ⟨σ̃_j, …, σ̃_n+1⟩)(Φ,1) = ⟨σ̃_j, …, σ̃_n⟩(Φ,1) by Lemma<ref>. This implies that there exists w' ∈ ⟨σ̃_j, …, σ̃_n⟩ such that w(Φ,1) = w'(Φ,1). Observe that the action of the group ⟨σ̃_1, …, σ̃_n⟩ on the set { (Ψ,1) : Ψ is a white flag of 𝒯 } is equivalent to the action of Mon⁺_𝒲(𝒯) on the set of white flags of 𝒯. In particular, this action is free. Now, since both w and w' belong to the group ⟨σ̃_1, …, σ̃_n⟩ and w(Φ,1) = w'(Φ,1), then w = w', and we have that w ∈ ⟨σ̃_j, …, σ̃_n⟩. The other inclusion is obvious. As a consequence of Propositions<ref>, <ref> and<ref> we have the following. Let 𝒯 = { 4, 3^n-2, 4 }_q be a regular toroid, where q = (a^k, 0^n-k) with a ≥ 6n+1 and k ∈ { 1, 2, n }. Let σ̃_1, …, σ̃_n+1 be the corresponding group elements defined in Equation<ref>. Then the group 𝔾 = ⟨σ̃_1, …, σ̃_n+1⟩ is the automorphism group of a chiral (n+2)-polytope whose facets are isomorphic to 𝒯. The abstract polytopes constructed in Theorem<ref> prove Theorem<ref>. § ACKNOWLEDGEMENTS This research was partially developed when the first author was a Ph.D. student at the Centro de Ciencias Matemáticas, UNAM, Mexico. The second author gratefully acknowledges financial support from the Fédération Wallonie-Bruxelles – Actions de Recherche Concertées (ARC Advanced grant). Both authors are grateful to Daniel Pellicer for his insights and contributions at the beginning of this project.