entry_id: http://arxiv.org/abs/2407.12738v1
published: 20240717165843
title: New Laboratory Constraints on Neutrinophilic Mediators
authors: P. S. Bhupal Dev, Doojin Kim, Deepak Sathyan, Kuver Sinha, Yongchao Zhang
primary_category: hep-ph
categories: hep-ph, hep-ex
bdev@wustl.edu Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130, USA doojin.kim@tamu.edu Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA dsathyan@umd.edu Maryland Center for Fundamental Physics, Department of Physics, University of Maryland, College Park, MD 20742, USA kuver.sinha@ou.edu Department of Physics and Astronomy, University of Oklahoma, Norman, OK 73019, USA zhangyongchao@seu.edu.cn School of Physics, Southeast University, Nanjing 211189, China § ABSTRACT Neutrinophilic mediators are well-motivated messenger particles that can probe some of the least known sectors of fundamental physics involving nonstandard interactions of neutrinos with themselves and potentially with dark matter. In particular, light mediators coupling to the active neutrinos will induce new decay modes of the Standard Model mesons (e.g., π^±, K^±→ℓ^± + ν + ϕ), charged leptons (e.g., τ^±→π^± + ν + ϕ), and gauge bosons (e.g., Z →ν + ν̅+ ϕ). A common lore is that these decays suffer from infrared divergences in the limit of the vanishing mediator mass, i.e., m_ϕ→ 0. Here, we show for the first time that including the 1-loop contributions of these mediators to the standard 2-body decays (e.g., π^±, K^±→ℓ^± + ν, etc.), the infrared divergence from the 3-body decay cancels out exactly by virtue of the Kinoshita–Lee–Nauenberg theorem. Including these cancellation effects, we then update the existing laboratory constraints on neutrinophilic scalar mediators, thereby extending the limits far beyond the decaying parent particle mass and excluding a wider range of parameter space. These new “physical” limits derived here have significant implications for the future detection prospects of nonstandard neutrino (self-)interactions. New Laboratory Constraints on Neutrinophilic Mediators Yongchao Zhang July 22, 2024 ====================================================== Introduction.– Neutrinos are the least understood out of the Standard Model (SM) particles. In particular, they can have potentially large nonstandard interactions and can serve as a portal to beyond the SM (BSM) physics. While nonstandard neutrino interactions with charged SM fermions are readily probed with neutrino scattering and oscillation experiments <cit.>, neutrino self-interactions <cit.> and possible connections to dark matter <cit.> can be effectively probed by studying the interactions of neutrinophilic mediators. In fact, it is common to have (light) scalar or vector bosons mediating the self-interactions among active neutrinos in many BSM scenarios. For instance, a light leptonic scalar ϕ can couple to neutrinos in the form of ϕνν̅ or ϕνν^c depending on the lepton number carried by ϕ, which induces neutrino self-interactions and gives rise to interesting signals at both low-energy experiments <cit.> and high-energy colliders <cit.>, as well as from astrophysical <cit.> and cosmological <cit.> observables; see Ref. <cit.> for a recent review. Another example of neutrinophilic mediators is the so-called Majoron particle J with interaction structure J ν̅iγ_5 ν originating from global symmetry breaking in seesaw models <cit.>. In some scenarios, the scalar might also couple to neutrino and dark (matter) particle χ, e.g., in the form of ϕν̅χ <cit.>. 
Such couplings may contribute to neutrino self-interactions at the 1-loop level <cit.> or radiatively generate nontrivial electromagnetic properties of neutrinos <cit.>. There are also some seesaw models with a scalar coupling to the active neutrinos and heavy neutrino N via ϕN̅ν <cit.>. Such nonstandard interactions of neutrinos via neutrinophilic mediators induce new decay modes of SM particles, e.g., π^±, K^±→ℓ^± + ν / χ + ϕ (see Fig. <ref>a) and Z →ν + ν̅ / χ + ϕ <cit.>. Then the corresponding experimental data, e.g., the decay widths and the spectra of charged leptons ℓ^± from meson decays can be used to set limits on these decay rates as a function of the mediator mass m_ϕ, as done by the PIENU <cit.> and NA62 <cit.> experiments using charged pion and kaon decays, respectively. However, it is a common lore that these decay channels are potentially subject to the infrared (IR) divergences; the corresponding partial widths approach infinity as m_ϕ→ 0 (see e.g., Refs. <cit.>). This is clearly unphysical. We show that the IR divergence is removed by including the interference between the 1-loop contribution (Fig. <ref>c) and the 2-body decay (Fig. <ref>b). This is reminiscent of the standard calculations of quantum electrodynamics <cit.> (see also, e.g., Ref. <cit.>) and expected as a natural consequence of the Kinoshita–Lee–Nauenberg (KLN) theorem <cit.>. For illustration purposes, we focus on the following decay processes in this letter: exotic charged-meson decays M^±→ℓ^± + ν + ϕ with M = π,K and ℓ = e,μ, hadronic tau decays τ^±→π^± + ν + ϕ, and Z boson decays Z →ν + ν̅ + ϕ. More general cases involving dark matter χ or heavy neutrino N in the final state such as M^±→ℓ^± + χ/N + ϕ with nonzero mass m_χ / N are also of great interest, e.g., for DM phenomenology and heavy neutrino searches, and will be reported in our forthcoming work <cit.>. We point out that summing up the tree and 1-loop contributions will not only give “physical" constraints on the associated decays in the IR limit of small ϕ mass but also, in general, improve the constraints at large ϕ mass. When the mediator is heavy, the tree-level process is kinematically suppressed or forbidden and the BSM effects are dominated by the virtual mediator in the loop. This has far-reaching implications for the experimental limits on m_ϕ and its couplings. Meson decays.– Let us first consider the meson decays M^±→ℓ^± + ν + ϕ with the light scalar ϕ emitted from the neutrino line (Fig. <ref>a). We consider the generic coupling in the form of L = g_νϕν̅ν . The couplings of ϕ can be either flavor-diagonal or flavor-off-diagonal. Possible ultraviolet (UV) completions of this effective operator can be found in Refs. <cit.>. It is well known that the partial widths for the 2-body leptonic decays M^±→ℓ^± + ν are helicity-suppressed in the SM, i.e., proportional to the charged-lepton mass squared m_ℓ^2. This is not the case for the 3-body decay; see the Supplement for details. In the small m_ϕ limit, the partial width Γ ( M^±→ℓ^± + ν + ϕ) can be written as Γ≃G_F^2 m_ M^3 f_ M^2 |V|^2 g_ν^2/128π^3[ - x_ℓ M (1-x_ℓ M)^2 log x_ϕ M + C_2 (x_ℓ M) ] , where G_F is the Fermi constant, m_ M and f_ M are respectively the charged-meson mass and decay constant, V is the CKM matrix element (V_ud for pions and V_us for kaons), x_ab≡m_a^2/m_b^2, and C_2 (x_ℓ M) is a dimensionless function of the mass ratio x_ℓ M, given in Eq. (<ref>) in the Supplement. It is apparent that the first term in Eq. 
(<ref>) is IR-divergent, i.e., goes to infinity in the limit of m_ϕ→ 0, or equivalently x_ϕ M→ 0. For concreteness, we neglect the neutrino mass, which will not affect our results in the ϕ mass range of interest here. The coupling in Eq. (<ref>) also induces a self-energy correction to the neutrino line, as shown in Fig. <ref>c. The amplitude M^(0) of the tree-level SM decay M^±→ℓ^± + ν, shown in Fig. <ref>b, interferes with the amplitude M^(1) for the 1-loop diagram, when we calculate the partial width ΔΓ^ loop ( M^±→ℓ^± + ν). In particular, the interference term Re[ M^(0)∗ M^(1)] ∝ g_ν^2 , which is at the same order in g_ν as the partial width in Eq. (<ref>). The full expression for ΔΓ^ loop ( M^±→ℓ^± + ν) is given in Eq. (<ref>) of the Supplement. The IR-divergent part is +x_ℓ M (1-x_ℓ M)^2 log x_ϕ M that exactly cancels out the first term of Eq. (<ref>), as expected from KLN theorem. Taking into account the loop effects, we show in Fig. <ref> the updated limits from π^± and K^± decays in the left and right panels, respectively. We first report conservative limits, taking 90% C.L. uncertainty ranges of the partial widths based on the information in the PDG data <cit.>; more details can be found in the Supplement. The blue and red lines are for the muon and electron decay modes, respectively. In the case of ℓ = e, the logarithmic divergence is heavily suppressed by x_e M≡ m_e^2 / m_π, K^2, hence the C_2 term becomes more important in Eq. (<ref>) for m_ϕ≳ eV. In other words, the IR divergence does not dominate the decay rate of M^±→ e^± + ν + ϕ. The limits on M^±→ e^± + ν + ϕ from the M^±→ e^± + ν data (red lines) are flat even in the limit of m_ϕ→ 0, showing little differences between the tree-level and tree+loop-level limits, as expected. When m_ϕ≪ m_ M, the pion and kaon decay limits on g_ν in the electron channel are respectively 5.9 × 10^-3 and 2.2 × 10^-3. By contrast, in the muon case ℓ = μ, as m_μ is comparable to m_π, K, the divergent behavior is noticeable in the m_ϕ→ 0 limit, which is clear from the dashed blue lines in Fig. <ref> (the label “tree”). The corresponding IR-free limits with the loop corrections are presented by the solid blue lines, labeled as “tree+loop”. Notably, while the tree-level decay M^±→ℓ^± + ν + ϕ is kinematically forbidden when m_ϕ≥ m_ M - m_ℓ, the 1-loop contribution to the decay M^±→ℓ^± + ν still exists. Consequently, the solid blue lines in Fig. <ref> can extend to large m_ϕ, even beyond the parent particle mass m_ M, whereas the dashed blue lines quickly vanish as m_ϕ gets closer to m_ M - m_μ. With the loop contributions included, π^±→μ^± + ν + ϕ gives g_ν < 0.061 in the massless ϕ limit. For heavy ϕ, this decay constrains m_ϕ up to ∼ 14 GeV for g_ν <1. The IR limit for the kaon decay is relatively weaker, i.e., g_ν <0.53 is allowed, while the UV limit is ∼ 15 GeV for g_ν<1. The dip feature of the solid blue line at around 200 MeV in the right panel of Fig. <ref> is due to the substantial cancellation of the tree and 1-loop contributions. These limits can be further improved by a dedicated shape analysis of the final decay products. The PIENU experiment has provided limits on the branching ratio (BR) of π^±→ e^±/μ^± + ν + X as a function of the invisible X mass <cit.>. We reinterpret them with our updated partial width calculations in both electron and muon channels, which are shown respectively by the orange and purple lines in the left panel of Fig. <ref>. In the electron channel, we find that the limit of g_ν is improved by ∼ 13% for small m_ϕ. 
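To make the limit-setting procedure concrete, the following minimal Python sketch (not the authors' code) shows how a 90% C.L. uncertainty on a 2-body partial width would translate into a bound on g_ν using the small-m_ϕ expression quoted above; the width uncertainty dGamma90 is a hypothetical placeholder, the finite C_2 piece and the 1-loop contribution are omitted, and no number quoted in the figures is reproduced here.

```python
# Illustrative sketch (not the authors' code) of turning a partial-width
# uncertainty into a coupling limit in the small-m_phi regime.
# dGamma90 is a placeholder 90% C.L. allowed extra width, not a paper value.
import numpy as np

GF  = 1.1664e-5      # Fermi constant [GeV^-2]
mpi = 0.13957        # charged-pion mass [GeV]
fpi = 0.1302         # pion decay constant [GeV]
Vud = 0.9743
mmu = 0.10566        # muon mass [GeV]

def width_3body(gnu, mphi, C2=0.0):
    """Small-m_phi partial width Gamma(pi -> mu nu phi):
    the -x_lM (1-x_lM)^2 log x_phiM term plus a finite piece C2 (set to 0 here)."""
    xlm  = (mmu / mpi)**2
    xpm  = (mphi / mpi)**2
    pref = GF**2 * mpi**3 * fpi**2 * Vud**2 * gnu**2 / (128 * np.pi**3)
    return pref * (-xlm * (1 - xlm)**2 * np.log(xpm) + C2)

dGamma90 = 1e-19     # hypothetical 90% C.L. width uncertainty [GeV]
mphi = 1e-3          # GeV
# the width scales as gnu^2, so the bound is a simple rescaling:
gnu_limit = np.sqrt(dGamma90 / width_3body(1.0, mphi))
print(f"g_nu < {gnu_limit:.2e} at m_phi = {mphi*1e3:.0f} MeV (tree level only)")
```

The actual limits shown in the figures also include the 1-loop interference term, which removes the log x_ϕM dependence and keeps the bound finite as m_ϕ → 0.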
In contrast, the loop-included result (solid) differs from the tree-level one (dashed) in the muon channel. Qualitatively, when m_ϕ approaches the kinematic threshold, m_π - m_μ≃ 34 MeV, the tree-level contribution is highly suppressed by phase space and the loop contribution becomes the dominant BSM effect. Therefore, the “tree+loop” limit gets much stronger at m_ϕ≳ 10 MeV. For m_ϕ→ 0, the PIENU limit on g_ν_μ is 0.029, while at m_ϕ∼ 10 MeV, it is improved to 7.2× 10^-3. Similarly, the NA62 experiment has reported the shape-analysis-based limits on BR (K^±→μ^± + ν + X) with X being a scalar <cit.>. They are of the order of O (10^-6) for 10 MeV < m_X < 370 MeV, roughly three orders of magnitude stronger than the limits from the partial widths above. The resultant “tree” and “tree+loop” limits are shown respectively by the dashed and solid purple lines in the right panel of Fig. <ref>. Again, with the 1-loop contribution included, the NA62 limits get much stronger, especially when m_ϕ is close to the threshold; at m_ϕ = 370 MeV, the limit can reach down to 2.1× 10^-3. We note that due to the existence of an off-shell neutrino propagator in the 3-body decay, the energy/angular distribution of the charged lepton from meson decays might be (mildly) affected, and the corresponding limits should be interpreted accordingly; see e.g., Fig. 7 of Ref. <cit.>. Therefore, more dedicated analyses of the PIENU and NA62 data may improve to some extent the limits on m_ϕ and g_ν reported here. We will examine this aspect in future work. Speaking of the existing limits, all the gray-shaded regions in both panels of Fig. <ref> show the exclusions by current terrestrial, astrophysical, and cosmological data <cit.>, i.e., those from the cosmic microwave background <cit.>, big bang nucleosynthesis <cit.>, SN1987A <cit.>, IceCube High Energy Starting Events <cit.>, the high-energy neutrinos detected by IceCube from the blazar TXS 0506+056 <cit.>, and double-beta decays (only for ν_e) <cit.>. Other existing limits, e.g., those from stellar cooling <cit.>, are relatively weaker for the parameter space of our interest and hence not shown in Fig. <ref>. Moreover, the coupling in Eq. (<ref>) induces 1-loop couplings of ϕ to the quarks and charged leptons <cit.>, which would give additional limits from neutrino-electron and neutrino-nucleus scattering <cit.>, e.g., those from Borexino <cit.> and COHERENT <cit.>. However, they are highly suppressed by the loop factor and the heavy W and Z particles in the loop, and are therefore not shown here. Finally, as natural extensions, we have also calculated the partial widths for other cases and the corresponding meson decay limits: (i) The scalar is replaced by a pseudoscalar J with couplings to neutrinos in the form of J ν̅ iγ_5 ν; we find that the results are the same as the scalar case above. (ii) The scalar couples to the charged leptons, i.e., g_ℓϕℓ̅ℓ or g_ℓϕℓ̅ iγ_5 ℓ (with ℓ = e,μ) <cit.>, where the same cancellation happens. However, such couplings contribute to the anomalous magnetic moments of electron <cit.> and muon <cit.>, which give rise to more stringent limits than the meson decay limits under consideration <cit.>. Therefore we do not pursue this case further. (iii) The analysis above can also be applied to the charged D meson decays, i.e., D^±→ℓ^± + ν + ϕ, and also to the semileptonic B-meson decays. However, the corresponding limits are weaker than the ones shown here <cit.>. 
(iv) If the scalar ϕ is replaced by a vector boson Z', the corresponding partial width Γ ( M^±→ℓ^± + ν + Z') is dominated by the term m_ M^4/m_Z'^2 originating from the longitudinal polarization of Z'; this is much larger than the IR divergent term m_ℓ^2 log ( m_Z'^2/ m_ M^2), see e.g., Refs. <cit.>. We delve into this intriguing case in forthcoming work <cit.>. Tau decays.– One of the dominant tau-lepton decay channels in the SM is τ^±→π^± + ν, which is closely related to the charged-pion decays π^±→ℓ^± + ν. Calculations of the decay channel τ^±→π^± + ν + ϕ are similar to those for the charged-meson decays, and the details are given in the Supplement. The resultant “tree” and “tree+loop” limits on m_ϕ and g_ν estimated with the τ^±→π^±+ ν partial width measurement are presented in Fig. <ref> respectively by the dashed and solid red lines. The most important limit is from SN1987A <cit.>, shown by the gray-shaded region. Here we also consider the case of ϕ coupling to τ via g_τϕτ^+ τ^-. The corresponding limits from τ^±→π^± + ν are shown by the (dashed) blue lines. The existing limits on g_τ are much weaker, mainly from the measurement of the anomalous τ magnetic moment. The current ATLAS constraint of -0.057 < a_τ < 0.024 <cit.> leads to the exclusion bound of g_τ > 1.1 <cit.>, and is out of the presentation range in Fig. <ref>. We find that, once the 1-loop contribution is included, the allowed values of g_ν,τ are smaller than respectively 0.48 and 0.23 in the m_ϕ→ 0 limit. For g_ν,τ<1, m_ϕ is constrained up to 22 GeV and 58 GeV, respectively, which are well beyond the τ mass and the existing SN1987A limit. The pure leptonic decay channel τ^±→ℓ^± + ν_ℓ + ν_τ (with ℓ = e, μ) can also be used to set limits on exotic decays, i.e., τ^±→ℓ^± + ν_ℓ + ν_τ + ϕ with ϕ emitted from the neutrino or charged-lepton lines <cit.>. However, with respect to the 3-body decay τ^±→π^± + ν + ϕ considered above, these 4-body leptonic decays are both phase-space and BR-suppressed. Similarly, we expect that the 4-body decay channels from muon, i.e., μ^±→ e^± + ν_e + ν_μ + ϕ, will not give competitive limits. Nevertheless, we will examine these 4-body decays in future work <cit.> for completeness. Z boson decays.– The invisible Z decay data can be utilized to set limits on our neutrinophilic mediator ϕ through the Z →ν +ν̅+ ϕ decay channel. The calculational details are given in the Supplement. Just like in our previous cases, the tree-level contribution shows IR divergence which is removed by including the 1-loop contributions. But here the 1-loop corrections come from the neutrino self-energy, as well as from the Z νν̅ vertex, unlike the meson and tau cases above. While we observe this cancellation even with (almost) massless neutrinos, it is interesting to compare our findings with the results in Ref. <cit.>, where the IR divergence was regulated by the neutrino mass, which becomes relevant only in the regime m_ϕ≲ m_ν. Our result is more general in this sense. The orange lines in Fig. <ref> show the resulting limits from invisible Z data; g_ν<1.4 is constrained for m_ϕ≪ m_Z with 1-loop contributions included. Due to the cancellation of the tree and loop contributions, the “tree+loop” limit gets much weaker at around m_ϕ∼ 30 GeV (as shown by the “gap”). The current limit mainly comes from SN1987A <cit.> within the presentation range, indicated by the gray-shaded region in Fig. <ref>. 
The search prospect of ϕ at the large hadron collider (LHC) Run-3 in the W^±→μ^±+ MET channel with an integrated luminosity of 300 fb^-1 and 0.1% systematics is shown by the brown line <cit.>. It is clear from the solid orange lines in Fig. <ref> that, for the ϕ mediator, the invisible Z-decay data have excluded a sizable range of parameter space beyond the SN1987A limit, complementing the prospect at the LHC. One can also derive limits from the exotic W boson decays, i.e., W^±→ℓ^± + ν + ϕ, and the calculations are very similar to the Z boson case. However, the uncertainty Δ BR (W^±→ℓ^± + ν) ≃ 3.6 × 10^-3, is much larger than that from the invisible Z data, 7.3 × 10^-4 <cit.>. The resulting exclusions on the coupling are g_ν > 1 and are thus not shown in Fig. <ref>. The invisible Z data can also be used for other rare Z decay channels; e.g., Z →ν +ν̅+Z' with a neutrinophilic vector mediator Z'. This certainly carries nontrivial physics implications and features in mitigating the associated IR divergence, which are quite different from the case of M^±→ℓ^± + ν + Z'. We will defer the detailed discussion of the vector case for future publication <cit.>. Discussions and conclusions.– In this letter, we have studied the exotic decays of charged mesons, tau lepton, and Z gauge boson in the presence of a (light) neutrinophilic scalar ϕ. We particularly focused on the IR divergence arising in the m_ϕ→ 0 limit which is shown to be removed with the 1-loop contributions included. The methodology here can also be applied to other decay channels, e.g., π^0 →γ + γ + ϕ with ϕ coupling to photons or the channel π^±→ e^± + ν + a with pseudoscalar a coupling to the W mediator or the valence quarks of π^± <cit.>. One may also constrain the hadronic couplings of a (light) scalar from meson and tau decays, e.g., in the channel of τ^±→π^± + ν + ϕ with ϕ coupling to π^± instead of ν or τ^±. Several implications of the 1-loop corrections are worth mentioning. (i) The 1-loop contributions are important in not only removing the IR divergence but also, in general, bringing new limits in the region of parameter space that is kinematically “forbidden” to constrain at the tree level (see the solid lines in Figs. <ref> through <ref>). (ii) When conducting similar phenomenological studies, one should carefully include loop contributions to perform the theory calculations more accurately and place experimental bounds more robustly without the unphysical IR divergence. (iii) Some past experimental limits should be revisited accordingly, e.g., the PIENU and NA62 limits re-interpreted in our study. In summary, the SM should be IR-finite, as stated by the KLN theorem. This holds even in the presence of BSM couplings. We have demonstrated this general feature with a scalar ϕ interacting with the active neutrinos and τ^±. Acknowledgments.– We thank Kaladi Babu, Bhaskar Dutta, Sudip Jana, Lorenzo Ricci, and Oleksandr Tomalak for useful discussions and comments on the draft. BD is supported by the U.S. Department of Energy grant No. DE-SC 0017987. The work of DK is supported by the DOE Grant No. DE-SC0010813. DS is supported by NSF Grant No. PHY-2210361 and by the Maryland Center for Fundamental Physics. KS is supported by the U.S. Department of Energy grant DE-SC0009956. YZ is supported by the National Natural Science Foundation of China under grant No. 12175039, the 2021 Jiangsu Shuangchuang (Mass Innovation and Entrepreneurship) Talent Program No. 
JSSCBS20210144, and the “Fundamental Research Funds for the Central Universities”. BD, DK, and KS acknowledge the Center for Theoretical Underground Physics and Related Areas (CETUP* 2024) and the Institute for Underground Science at SURF for hospitality and for providing a stimulating environment, where this work was finalized. JHEP Supplemental Material § A DATA USED FOR THE LIMITS Here we collect in Table <ref> all the data used in the Letter for the meson, tau, and Z decay limits. For instance, from the π→ e ν data in the table, the 1σ range uncertainty is ΔΓ (π→ eν) = BR (π→ e ν)/τ_π^±[ Δτ_π^±/τ_π^± + Δ BR (π→ e ν)/ BR (π→ e ν)] . To apply for the limits at the 90% C.L., we multiply the 1σ uncertainties by a factor of 1.64. § B MESON DECAY M→ℓ + Ν + Φ For the meson decay M (p) →ℓ (p_ℓ) + ν (p_ν) + ϕ (p_ϕ), with ϕ coupling to neutrinos with the strength g_ν, the squared amplitude is given by ∑ | M ( M→ℓ + ν + ϕ)|^2 = 8 g_ν^2 f_ M^2 G_F^2 |V_|^2/q^4{ q^4 (p_ℓ· p_ν) + m_ℓ^2 [ 2 (q· p_ν) (q^2 + (q· p_ℓ)) - q^2 (p_ℓ· p_ν) ] } , with q being the momentum of the neutrino mediator. After a lengthy calculation, one can find the total partial width to be Γ ( M→ℓ + ν + ϕ ) = g_ν^2 G_F^2 m_ M^3 f_ M^2 |V|^2/128π^3 f_1 (x_ϕ M, x_ℓ M) , with x_ab≡ m_a^2/m_b^2, and the dimensionless function f_1 (x_1, x_2) is given by f_1 (x_1, x_2) = 1/3λ^1/2 (1,x_1,x_2)/1-x_2( 1 + 10 x_1 + x_1^2 - x_2 (9 - 6x_1 + x_1^2) + x_2^2 (18- 19x_1) - 10 x_2^3 ) + ( 2 x_1 (1+x_1) - x_2 (1-3x_1^2) - 2 x_2^2 (1-3 x_1) + x_2^3 ) arctanhλ^1/2 (1,x_1,x_2)/1-x_1+x_2 - [ x_1 (1-3x_2^2) - x_2 (1-x_2)^2 + ( 2 - x_2 + 4 x_2^2 -3 x_2^3 ) x_1^2/(1-x_2)^2] arctanh(1-x_2)λ^1/2 (1,x_1,x_2)/(1-x_2)^2 + x_1 (1+x_2) , where λ (a, b, c) ≡ a^2 + b^2 + c^2 - 2ab - 2ac - 2 bc . For sufficiently small x_1, f_1 (x_1, x_2) ≃ - x_2 (1 + 2x_2 - x_2^2) arctanh1-x_2/1+x_2 + 1/6 (1-x_2) [ 2 - 4 x_2 (4-5x_2) - 3 x_2 (1-x_2) logx_1^2 x_2/(1-x_2)^4] . Then we get the IR-divergent part, i.e., the first term in Eq. (<ref>), and the rest of f_1 (x_1,x_2) is the finite function C_2 (x_2) ≃ - x_2 (1 + 2x_2 - x_2^2) arctanh1-x_2/1+x_2 + 1/6 (1-x_2) [ 2 - 4 x_2 (4-5x_2) - 3 x_2 (1-x_2) x_2/(1-x_2)^4] . For interference term between Figs. <ref> (b) and (c), its contribution to the width of the decay M→ℓ + ν is ΔΓ^ loop ( M→ℓ + ν ) = - g_ν^2 G_F^2 m_ M m_ℓ^2 f_ M^2 |V|^2/128π^3( 1 - m_ℓ^2/m_ M^2)^2 f_1^ loop (x_ϕ M, x_ℓ M) , where we have removed the UV divergence, which can always be done by adding counterterms in a UV-complete theory, and the dimensionless function f_1^ loop (x_1, x_2) = 5/2 - logx_1 (1-x_2)^2/16π^2 , leading to the IR term proportional to x_ℓ M (1-x_ℓ M)^2 log x_ϕ M, which cancels exactly with the first term in Eq. (<ref>) from the 3-body decay M→ℓ + ν + ϕ, as expected. § C TAU DECAY Τ→Π + Ν + Φ In the SM, the matrix element for the semileptonic decay τ→π + ν is closely correlated with that for π→ℓ + ν. For the case of ϕ coupling to the neutrino, we can easily obtain the squared amplitude for the decay τ (p) →π (p_π) + ν (p_ν) + ϕ (p_ϕ): 1/2∑ | M_ν (τ→π + ν + ϕ) |^2 = 4 g_ν^2 f_π^2 G_F^2 |V_ud|^2/q^4{ q^4 (p · p_ν) + m_τ^2 [ 2 (q· p_ν) ((q· p)-q^2) - q^2 (p · p_ν) ] } , where the factor of 1/2 is for averaging over the spins of tau in the initial state. 
Then the partial width reads Γ ( τ→π + ν + ϕ ) = g_ν^2 G_F^2 m_τ^3 f_π^2 |V_ud|^2/256π^3 f_2 (x_ϕτ, x_πτ) , with the dimensionless function f_2 (x_1, x_2) = - λ^1/2 (1, x_1, x_2)/3[ (10 + x_1^2-8 x_2+x_2^2) + x_1/1-x_2 ( 19 - 6 x_2 - 10 x_2^2 ) ] - ( 1 - x_2 (2+x_2) + 2 x_1 (3+ x_2^2) + x_1^2 (3+2 x_2) ) arctanhλ^1/2 (1, x_1, x_2)/1-x_1+x_2 + [ (1-x_2)^2 + 2x_1 (3-x_2^2) + x_1^2/(1-x_2)^2( 3 - x_2 (4 - x_2 + 2x_2^2) ) ] arctanh(1-x_2)λ^1/2 (1,x_1,x_2)/(1-x_2)^2 + x_1 (1+x_2). In the limit of x_1 → 0, we have f_2 (x_1, x_2) ≃ -1/3 (1-x_2) (10-8x_2+x_2^2) - (1-x_2)^2 logx_1/(1-x_2)^2 - x_2^2 log x_2 . The loop contribution to the decay τ→π + ν is ΔΓ^ loop ( τ→π + ν ) = - g_ν^2 G_F^2 m_τ^3 f_π^2 |V_ud|^2/256π^3( 1 - m_π^2/m_τ^2)^2 f_1^ loop (x_ϕτ, x_πτ) . For the case of ϕ coupling to the tau with the strength g_τ, the amplitude square is very similar to Eq. (<ref>): 1/2∑ | M_τ (τ→π + ν + ϕ) |^2 = 4 g_ν^2 f_π^2 G_F^2 |V_ud|^2/(q^2-m_τ^2)^2{ q^4 (p · p_ν) + m_τ^2 [ 2 (q· p_ν) ((q· p)+q^2) - q^2 (p · p_ν) ] } , with q being the momentum of the tau propagator. For the partial width, one only needs to replace the coupling g_ν by g_τ in Eq. (<ref>), and the corresponding dimensionless function is f_3 (x_1, x_2) = - λ^1/2 (1,x_1,x_2)/6[ 119 - 115 x_2 + 2 x_2^2 -x_1 ( 61 - 26x_2) + 2 x_1^2 ] + √(x_1 (4-x_1)) (1-x_2) ( 2 (9-x_2) - x_1 (3+x_2) ) [ π/2 + arctanx_1 (3 - x_1 + x_2) λ^-1/2 (1,x_1,x_2)/√(x_1(4-x_1))] + ( 8 - 16 x_2 + 7x_2^2 -2 x_1 ( 18 - 12x_2 + x_2^2 ) + x_1^2 (3-2 x_2) ) arctanhλ^1/2 (1,x_1,x_2)/1-x_1+x_2 + x_2^2 (1-x_1)^2 arctanh(1-x_1)λ^1/2 (1,x_1,x_2)/(1-x_1)^2 - x_2 (1+x_1) . In the limit of x_1 → 0, f_3 (x_1, x_2) ≃ -1/6 (1-x_2) (119 -115 x_2 + 2x_2^2) - 4(1-x_2)^2 logx_1/(1-x_2)^2 - x_2^2 log x_2 . The loop contribution is the same as in Eq. (<ref>) with g_ν replaced by g_τ and multiplied by a factor of 4. § D Z BOSON DECAY Z →Ν + Ν̅+ Φ For the decay Z (p) →ν (p_ν) + ν̅(p_ν̅) + ϕ (p_ϕ), the amplitude square is 1/3∑ | M (Z →ν + ν̅ + ϕ)|^2 = 2√(2) g_ν^2 m_Z^2 G_F/3q^4[ 2 (q_1 · p_ν) (q_1 · p_ν̅) + 2 (q_2 · p_ν) (q_2 · p_ν̅) - (q_1^2+q_2^2) (p_ν· p_ν̅) + 2/m_Z^2( 2 (p · p_ν̅) (p · q_1) (q_1 · p_ν) + 2 (p · p_ν) (p · q_2) (q_2 · p_ν̅) - (q_1^2+q_2^2) (p · p_ν) (p · p_ν̅) ) ] , where the factor of 1/3 is for averaging the spins of Z boson, and q_1, 2 = p_ϕ + p_ν, ν̅ is the momentum of the (anti)neutrino propagator. Then the corresponding partial decay width reads Γ (Z →νν̅ϕ) = g_ν^2 G_F m_Z^3/96 √(2)π^3 f_4 (x_ϕ Z) , with the dimensionless function f_4 (x) = - 1/6 (1 - x) (17+8x-x^2) -(1+3x) log x . The dimensionless function for the corresponding loop contribution to Z →νν̅ is, with the prefactors the same as in Eq. (<ref>): f_2^ loop (x) = -3/2 + log x + 2(1 - log x) - 2+3x+2x log x/(1+x) - x (3+ 2 log x) log( x/1+x) + 1/1+x_2 f_1^(0,0,1,0)( 1,1,3, -1/x) + 2/x_2 f_1^(0,0,1,0)( 1,2,3, -1/x) + 1/x_2 f_1^(0,1,0,0)( 1,2,3, -1/x) + 2 ∫_0^1 db [ _2 F_1^(0,1,0,0)( 1,0,2, b/x(1-b)) - _2 F_1^(0,0,1,0)( 1,0,2, b/x(1-b)) ] , where _2 f_1 and _2 F_1 are respectively the hyper-geometric and regularized hyper-geometric functions.
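As a cross-check of the cancellation discussed in the Letter, the short numerical sketch below (not part of the paper) verifies that the log x_1 coefficient of the small-x_1 expansion of f_1 quoted in Supplement B equals -x_2(1-x_2)^2, so that adding the IR piece +x_2(1-x_2)^2 log x_1 of the 1-loop interference leaves an x_1-independent remainder in the m_ϕ → 0 limit.

```python
# Minimal numerical sketch (assuming the small-x1 expansion of f_1 quoted
# in Supplement B): extract the log(x1) slope and compare it with the
# IR-divergent coefficient -x2*(1-x2)^2 that the loop term cancels.
import numpy as np

def f1_small_x1(x1, x2):
    """Small-x1 (m_phi -> 0) expansion of f_1 as quoted in the Supplement."""
    return (-x2 * (1 + 2*x2 - x2**2) * np.arctanh((1 - x2) / (1 + x2))
            + (1 - x2) / 6 * (2 - 4*x2*(4 - 5*x2)
                              - 3*x2*(1 - x2) * np.log(x1**2 * x2 / (1 - x2)**4)))

m_mu, m_pi = 0.10566, 0.13957          # GeV, for pi -> mu nu phi
x2 = (m_mu / m_pi)**2

x1a, x1b = 1e-12, 1e-10
slope = (f1_small_x1(x1b, x2) - f1_small_x1(x1a, x2)) / np.log(x1b / x1a)
print(slope, -x2 * (1 - x2)**2)        # the two numbers agree

# Adding the loop piece +x2*(1-x2)^2*log(x1) leaves an x1-independent
# ("physical") remainder:
print(f1_small_x1(x1a, x2) + x2*(1 - x2)**2*np.log(x1a),
      f1_small_x1(x1b, x2) + x2*(1 - x2)**2*np.log(x1b))
```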
entry_id: http://arxiv.org/abs/2407.13391v1
published: 20240718105848
title: Double interdiction problem on trees on the sum of root-leaf distances by upgrading edges
authors: Xiao Li, Xiucui Guan, Junhua Jia, Panos M. Pardalos
primary_category: math.OC
categories: math.OC
Double interdiction problem on trees on the sum of root-leaf distances by upgrading edges
Xiao Li (alt.xiaoli@gmail.com)^1, Xiucui Guan (xcguan@163.com)^1, Junhua Jia (230218184@seu.edu.cn)^1, Panos M. Pardalos (pardalos@ise.ufl.edu)^2
[1] School of Mathematics, Southeast University, No. 2, Sipailou, Nanjing, 210096, Jiangsu Province, China [2] Center for Applied Optimization, University of Florida, Weil Hall, Gainesville, 32611, Florida, USA
The double interdiction problem on trees (DIT) for the sum of root-leaf distances (SRD) has significant implications in diverse areas such as transportation networks, military strategies, and counter-terrorism efforts. It aims to maximize the SRD by upgrading edge weights subject to two constraints. One gives an upper bound for the cost of upgrades under a certain norm and the other specifies a lower bound for the shortest root-leaf distance (StRD). We utilize both the weighted l_∞ norm and the Hamming distance to measure the upgrade cost and denote the corresponding (DIT) problem by (DIT_H∞) and its minimum cost problem by (MCDIT_H∞). We establish the 𝒩𝒫-hardness of problem (DIT_H∞) by building a reduction from the 0-1 knapsack problem. We solve the problem (DIT_H∞) in two scenarios based on the number N of upgraded edges. When N=1, a greedy algorithm with O(n) complexity is proposed. For the general case, an exact dynamic programming algorithm with pseudo-polynomial time complexity is proposed, which is established on a structure of left subtrees by maximizing a convex combination of the StRD and SRD. Furthermore, we confirm the 𝒩𝒫-hardness of problem (MCDIT_H∞) by reducing from the 0-1 knapsack problem. To tackle problem (MCDIT_H∞), a binary search algorithm with pseudo-polynomial time complexity is outlined, which iteratively solves problem (DIT_H∞). We culminate our study with numerical experiments, showcasing the effectiveness of the algorithms.
July 22, 2024
§ INTRODUCTION The landscape of terrorism has experienced a marked transformation with the widespread use of drones. The enduring nature of this menace has precipitated continued episodes of violence in subsequent years, compelling governments to adopt proactive strategies to forestall future calamities. A pivotal component of counterterrorism initiatives is the interception of terrorist transportation channels. Nevertheless, considering the constraints on available resources, it is essential for policymakers to prioritize and target specific transportation routes to achieve the most significant disruption. Within this paradigm, the Network Interdiction Problem (NIP), grounded in game theory, presents an insightful framework for judicious decision-making. Generally, the NIP encompasses two main players: a leader and a follower, each propelled by distinct, often opposing, agendas. The follower seeks to achieve their goals by adeptly navigating the network to ensure the efficient transit of pivotal resources, such as supply convoys, or by augmenting the amount of material conveyed through the network. Conversely, the leader's ambition is to obstruct the follower's endeavors by strategically compromising the network's integrity. Network Interdiction Problems that involve the deletion of edges (NIP-DE) are strategies aimed at impeding network performance by removing a set of K critical edges. These strategies are pivotal in diverse fields, including transportation <cit.>, counterterrorism <cit.>, and military network operations <cit.>.
Significant scholarly effort has been invested in exploring NIP-DE across a spectrum of network challenges. These encompass, but are not confined to, the StRD <cit.>, minimum spanning tree <cit.>, maximum matching <cit.>, maximum flow <cit.>, and center location problems <cit.>. Pioneering work by Corley and Sha in 1982 <cit.> introduced the notion of edge deletion in NIP to prolong the StRD within a network. Subsequent research by Bar-Noy in 1995 <cit.> established the 𝒩𝒫-hard nature of this problem for any K. Later, Khachiyan et al. (2008) <cit.> demonstrated the impossibility of achieving a 2-approximation algorithm for NIP-DE. Bazgan and colleagues, in their 2015 and 2019 studies <cit.>, proposed an O(mn) time algorithm for incrementing path lengths by 1, and further solidified the 𝒩𝒫-hard status for increments greater than or equal to 2. Despite theoretical advances, practical application of critical edge or node deletion remains challenging. To address these practical limitations, Zhang et al. (2021) <cit.> introduced an upgraded framework for NIP that focuses on edge upgrades. They explored this concept through the Shortest Path Interdiction Problem (SPIT) and its variant, the Minimum Cost SPIT (MCSPIT), on tree graphs. For SPIT, an O(n^2) primal-dual algorithm was provided under the weighted l_1 norm, with the complexity improved to O(n) for the unit l_1 norm. They extended their investigation to unit Hamming distance, designing algorithms with complexities O(N + l log l) and O(n(log n + K^3)) for K=1 and K>1, respectively <cit.>. Subsequently, Lei et al. (2023) <cit.> enhanced these to O(n) and O(nK^2) time complexities. In a recent study, Li et al. (2023) <cit.> addressed the sum of root-leaf distances (SRD) interdiction problem on trees with cardinality constraint by upgrading edges (SDIPTC), and its related minimum cost problem (MCSDIPTC). Utilizing the weighted l_∞ norm and the weighted bottleneck Hamming distance, they proposed two binary search algorithms, both with O(n log n) time complexity, for problem (SDIPTC), and two binary search algorithms within O(N n^2) and O(n log n) time for problem (MCSDIPTC), respectively. However, these previously studied problems did not restrict the shortest root-leaf distance (StRD), which makes the upgrade scheme less comprehensive and less rational. To remedy this, we introduce the double interdiction problem on trees on the sum of root-leaf distances by upgrading edges, which places restrictions on both the SRD and the StRD. Specifically, certain advanced transportation networks can be visualized as rooted trees <cit.>. In this model, the root node denotes the primary warehouse, the child nodes signify intermediary transit points, and the leaf nodes portray the ultimate users or terminals. As the leader in this scenario, we aim to proficiently impede and neutralize this network. The corresponding challenges are articulated as follows. Let T=(V,E,w,u,c) be an edge-weighted tree rooted at s, where V:={ s,v_1,v_2,…,v_n } and E:={ e_1,e_2,…,e_n } are the sets of nodes and edges, respectively. Let Y:={ t_1,t_2,…,t_m } be the set of leaf nodes and S(v):={ v' | v' is the son of v }. Let w(e) and u(e) be the original weight and the upper bound of the upgraded weight for edge e ∈ E, respectively, where u(e) ≥ w(e) ≥ 0. Let c(e)>0 and r(e) ∈ ℤ_{>0} be the unit modification costs of edge e ∈ E for the l_∞ norm and the weighted sum Hamming distance, respectively. Let P_k:=P_t_k:=P_s,t_k be the root-leaf path from the root node s to the leaf node t_k.
Let w(P_k):=∑_e∈ P_k w(e) and w(T):=∑_t_k∈ Y w(P_k) be the weight of the path P_k and the SRD under the edge weight vector w, respectively. Let In_w̅(e) represent the increment of the SRD obtained by upgrading edge e under the edge weight vector w̅. Given two values K and M, the double interdiction problem on trees on the sum of root-leaf distances by upgrading edges (DIT) aims to maximize the SRD by determining an edge weight vector w̃ such that the modification cost w̃-w under a certain norm does not exceed K and the StRD from the root s to any leaf node is not less than M. The mathematical representation of this problem can be articulated as follows. max_w̃ w̃(T):=∑_t_k ∈ Y w̃(P_k) (DIT) s.t. min_t_k ∈ Y w̃(P_k) ≥ M, w̃-w ≤ K, w(e) ≤ w̃(e) ≤ u(e), e ∈ E. Its related minimum cost problem (MCDIT), obtained by exchanging the objective function and the modification cost, can be formulated as follows. min_w̃ C(w̃):= w̃-w (MCDIT) s.t. min_t_k ∈ Y w̃(P_k) ≥ M, w̃(T) ≥ D, w(e) ≤ w̃(e) ≤ u(e), e ∈ E. In practical applications, several norms are employed to quantify the cost of modifications, notably including the l_1 norm, the l_∞ norm, the bottleneck Hamming distance, and the weighted sum Hamming distance. Each of these norms finds extensive use in various domains. For instance, the l_∞ norm is instrumental in ensuring that traffic on a single network link does not surpass its maximum capacity at any given time, thereby preventing congestion and optimizing network traffic distribution <cit.>. Similarly, the weighted sum Hamming distance is applied to manage the number of interfaces within an optical network, as discussed in <cit.>. Under certain specific conditions, both the l_∞ norm and the weighted sum Hamming distance are utilized to assess costs, particularly when modifications involve more than one resource. However, existing research has overlooked the scenario involving the simultaneous application of both the l_∞ norm and the weighted sum Hamming distance. This paper aims to address this gap by measuring the upgrade cost with both the l_∞ norm and the weighted sum Hamming distance, as described below. max_w̃ w̃(T):= ∑_t_k ∈ Y w̃(P_k) (DIT_H∞) s.t. min_t_k ∈ Y w̃(P_k) ≥ M, max_e∈ E c(e)(w̃(e)-w(e)) ≤ K, ∑_e∈ E r(e)H(w̃(e),w(e)) ≤ N, w(e) ≤ w̃(e) ≤ u(e), e ∈ E, where H(w̃(e),w(e)) = 0 if w̃(e)=w(e) and H(w̃(e),w(e)) = 1 if w̃(e) ≠ w(e) is the Hamming distance between w̃(e) and w(e), and N is a given positive value. Its related minimum cost problem, obtained by exchanging the l_∞ norm cost and the SRD objective function, can be written as min_w̃ C(w̃) := max_e∈ E c(e) (w̃(e) - w(e)) (MCDIT_H∞) s.t. min_t_k ∈ Y w̃(P_k) ≥ M, ∑_t_k ∈ Y w̃(P_k) ≥ D, ∑_e∈ E r(e)H(w̃(e), w(e)) ≤ N, w(e) ≤ w̃(e) ≤ u(e), e ∈ E. The structure of the paper is organized as follows: Section 2 establishes the 𝒩𝒫-hardness of the problem (DIT_H∞) by demonstrating a reduction from the 0-1 knapsack problem. In Section 3, we introduce a dynamic programming algorithm <ref> to solve the problem (DIT_H∞), albeit with pseudo-polynomial time complexity. Moving to Section 4, the paper delves into proving the 𝒩𝒫-hardness of the minimum cost problem (MCDIT_H∞) through a two-step process. In Section 5, we address the problem (MCDIT_H∞) by employing a binary search algorithm that iteratively calls Algorithm <ref>. Section 6 is dedicated to presenting the outcomes of computational experiments, which affirm the efficiency and accuracy of the proposed algorithms. The paper concludes in Section 7, where we summarize the key findings and outline potential directions for future research in this domain.
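For concreteness, the following small Python sketch (an illustration, not part of the paper; the toy tree and all numbers are made up) encodes a rooted tree with the per-edge data (w, u, c, r) and evaluates, for a candidate upgrade w̃, the SRD, the StRD, the weighted l_∞ cost, and the weighted sum Hamming distance appearing in the constraints of (DIT_H∞) and (MCDIT_H∞).

```python
from collections import defaultdict

# edge e_v joins parent[v] and v; node 0 is the root s
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
w = {1: 2, 2: 1, 3: 4, 4: 3, 5: 2}   # original weights w(e)
u = {1: 6, 2: 5, 3: 7, 4: 4, 5: 9}   # upper bounds u(e)
c = {1: 1, 2: 2, 3: 1, 4: 1, 5: 3}   # l_inf unit costs c(e)
r = {1: 1, 2: 1, 3: 2, 4: 1, 5: 1}   # Hamming weights r(e)

children = defaultdict(list)
for v, p in parent.items():
    children[p].append(v)
leaves = [v for v in parent if not children[v]]   # the leaf set Y

def path_weight(wt, leaf):
    """w(P_k): weight of the root-leaf path ending at `leaf`."""
    total, v = 0, leaf
    while v in parent:
        total += wt[v]
        v = parent[v]
    return total

def srd(wt):    # sum of root-leaf distances, w(T)
    return sum(path_weight(wt, t) for t in leaves)

def strd(wt):   # shortest root-leaf distance
    return min(path_weight(wt, t) for t in leaves)

def costs(wt):  # (weighted l_inf cost, weighted sum Hamming distance)
    linf = max(c[e] * (wt[e] - w[e]) for e in w)
    ham = sum(r[e] for e in w if wt[e] != w[e])
    return linf, ham

w_tilde = {**w, 3: 7, 5: 5}          # upgrade edges e_3 and e_5 (within u)
print(srd(w), srd(w_tilde), strd(w_tilde), costs(w_tilde))   # 14 20 5 (9, 3)
```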
§ THE 𝒩𝒫-HARDNESS OF PROBLEM (DIT_H∞) When the weighted l_∞ norm and the weighted sum Hamming distance are applied to the upgrade cost, the problem (DIT_H∞) is formulated in (<ref>). Note that the weighted sum Hamming distance is discrete, posing challenges in its treatment. To gain a clearer understanding of problem (DIT_H∞), we initially examine its relaxation (DIT_∞), obtained by removing the weighted sum Hamming distance constraint. Its mathematical model can be outlined as follows. max_w̃ w̃(T):= ∑_t_k ∈ Y w̃(P_k) (DIT_∞) s.t. min_t_k ∈ Y w̃(P_k) ≥ M, max_e∈ E c(e)(w̃(e)-w(e)) ≤ K, w(e) ≤ w̃(e) ≤ u(e), e ∈ E. In the problem (DIT_∞), we can maximize the weight of each edge to the greatest extent possible under the cost budget K and the upper bound u(e) as follows: w̅(e)=min{ w(e)+ K/c(e), u(e) } (e∈ E). If, in this scenario, the StRD remains less than M, i.e., min_t_k ∈ Y w̅(P_k) < M, then the problem is infeasible, as stated in the following theorem. Let w̅ be defined as shown in Equation (<ref>). If min_t_k ∈ Y w̅(P_k) < M, then the problem (DIT_∞) is infeasible. Otherwise, the solution w̅ given by Equation (<ref>) is optimal for the problem (DIT_∞) and thus can be obtained in O(n) time. When all edges have been adjusted to their maximum permissible values, the StRD has reached its highest potential length. If, in such a scenario, the StRD still does not fulfill the specified constraint, i.e., min_t_k ∈ Y w̅(P_k) < M, the problem cannot be solved. Conversely, if the StRD meets the constraint under these conditions, a viable solution exists. Moreover, since all edges are already at their upper limits, precluding any possibility of further enhancement, the maximal SRD is also achieved. To compute w̅ by Equation (<ref>), we just upgrade every edge, which takes O(n) time. To represent the SRD increment obtained by upgrading one edge, we introduce the following definition. <cit.> For any e ∈ E, define L(e):={ t_k∈ Y | e ∈ P_s,t_k} as the set of leaves t_k whose paths P_s,t_k pass through e. If t_k ∈ L(e), then t_k is controlled by the edge e. Similarly, for any v ∈ V, define L(v):={ t_k∈ Y | v ∈ P_s,t_k} as the set of leaves controlled by the node v. Using this definition, we know that the increment of the SRD can be expressed as In_w̅(e):= |L(e)|(w̅(e) -w(e)). Building upon the discussion above, we understand that under the weighted l_∞ norm, it is advantageous to upgrade an edge to its upper bound determined by Equation (<ref>). Consequently, the problem can be reformulated into a new 0-1 integer linear programming model, where a binary decision variable x(e) is assigned to each edge e ∈ E. An edge e is upgraded if x(e)=1 and left unchanged if x(e)=0. Thus, the original problem (<ref>) can be transformed into the following general form. max ∑_e ∈ E In_w̅(e)x(e) (GDIT_H∞) s.t. ∑_e ∈ P_k (w̅(e)-w(e)) x(e) ≥ M -w(P_k), ∀ k=1,…,m, ∑_e ∈ E r(e)x(e) ≤ N, x(e) ∈{ 0,1}, e ∈ E. Here, the objective function is transformed from w̃(T) in problem (<ref>) to the increment of the SRD by upgrading edges, namely w̃(T)-w(T)=∑_e ∈ E In_w̃(e)=∑_e ∈ E In_w̅(e)x(e). Meanwhile, note that min_t_k ∈ Y w̃(P_k) ≥ M in problem (<ref>) is equivalent to w̃(P_k) ≥ M for any t_k ∈ Y, so the StRD constraint can be interpreted as (<ref>). Consequently, we arrive at the following theorem. Problem (GDIT_H∞) is equivalent to the problem (DIT_H∞). For clarity, let us denote problem (GDIT_H∞) as Q_1 and problem (DIT_H∞) as Q_2. The theorem is established through the validation of the following two statements.
(i) For every feasible solution x_1 to Q_1, there exists a feasible solution w_2 to Q_2 with an equal or higher objective value. By Theorem <ref>, we can upgrade every edge e with x_1(e)=1 to the upper bound w̅(e) defined by Equation (<ref>) to obtain a new solution w_2 of Q_2 defined by w_2(e)=w̅(e) if x_1(e)=1, and w_2(e)=w(e) otherwise. Obviously, w_2 satisfies all the constraints within Q_2 and has the same SRD value. (ii) For every feasible solution w_2 to Q_2, there exists a feasible solution x_1 to Q_1 defined by x_1(e)=1 if w_2(e) ≠ w(e), and x_1(e)=0 otherwise. Then x_1 has exactly the same objective value as w_2, and (ii) is established. Using Theorem <ref>, we can show the 𝒩𝒫-hardness of problem (DIT_H∞) by the following theorem. The problem (DIT_H∞) is 𝒩𝒫-hard. To establish the 𝒩𝒫-hardness of problem (DIT_H∞), we reduce from the well-known 𝒩𝒫-hard 0-1 knapsack problem, which is defined as follows: max ∑_i=1^n p_i x_i s.t. ∑_i=1^n c_i x_i ≤ R, x_i∈{0,1}, i = 1, …, n, where each c_i is a positive integer and R is a constant. Given any instance I_1 of the 0-1 knapsack problem in (<ref>), we construct an instance of problem (GDIT_H∞) in (<ref>) as follows. Let In_w̅(e_i) := p_i, r(e_i) := c_i, N:=R and M := 0. Furthermore, x_i=1 in instance I_1 if and only if x(e_i)=1 in the constructed instance of (GDIT_H∞). This equivalence shows that problem (GDIT_H∞) is 𝒩𝒫-hard, thereby establishing the 𝒩𝒫-hardness of problem (DIT_H∞) by Theorem <ref>. § SOLVE THE PROBLEM (DIT_H∞) When applying the l_∞ norm and the weighted sum Hamming distance to the double interdiction problem, the complexity of the situation escalates significantly. This is due to the necessity of identifying crucial edges that not only comply with the StRD constraint but also maximize the SRD value. To navigate the challenges presented by the integration of the weighted sum Hamming distance, we introduce an enhanced dynamic programming approach. Our methodology begins with an exhaustive case-by-case examination. In particular, the scenario becomes markedly simpler when upgrading a single edge is permitted, serving as the initial case for our analysis. As the complexity of the scenarios increases with the general case, our focus shifts to formulating a dynamic programming objective function, denoted as h. This function employs a convex combination to simultaneously cater to the augmentation of the SRD and the requirement on the shortest root-leaf distance. Following this, we define a transition equation based on the structure of left subtrees, facilitating the execution of a dynamic programming iteration. Moreover, we implement the binary search technique iteratively to fine-tune the optimal parameter, ultimately leading to the determination of the optimal solution to the original problem. §.§ Solve problem (DIT_H∞) when upgrading one edge We denote the problem (DIT_H∞) by (DIT_H∞^1) when N=1 and r(e)=1 for all e ∈ E, which aims to modify a single critical edge to maximize the SRD while ensuring compliance with the StRD constraint. To approach this task, we initially set forth some necessary definitions. Subsequently, we employ a greedy algorithm to efficiently address the problem, achieving a solution in O(n) time. For any e^* ∈ E, we define the upgrade vector w_e^* as follows: w_e^*(e)= w̅(e) if e = e^*, and w_e^*(e)= w(e) otherwise. In order to determine our next optimization goal, we need to consider whether the constraints of the original problem are already satisfied.
If min_t_k ∈ Y w(P_k) ≥ M, let e^*:= arg max_e ∈ E In_w_e(e); then the optimal solution of the problem (DIT_H∞^1) is w_e^*. When min_t_k ∈ Y w(P_k) ≥ M, the StRD constraint is trivially satisfied. Our objective then shifts to maximizing the SRD by upgrading one edge. Naturally, the optimal solution selects the edge e^* with the greatest SRD increment. If min_t_k ∈ Y w(P_k) < M, let k^*:= arg min_t_k ∈ Y w(P_k); then there must be an optimal solution w_e^* for some e^* ∈ P_t_k^*. The upgraded edge must lie on the shortest path for the upgrade to have any impact on the constraint. Consequently, the target edge exists within the shortest path P_k^* connecting the source node s to the leaf node t_k^*. In order to further decompose the problem, we consider the following cases. Case 1. When max_e ∈ P_k^* (w̅(e) -w(e)) < M-w(P_k^*). In this case, the problem is infeasible since upgrading any single edge cannot satisfy the constraint. Case 2. When max_e ∈ P_k^* (w̅(e) -w(e)) ≥ M-w(P_k^*). Let Y_1:={t_k∈ Y | w(P_k) < M} be the set of leaves not satisfying (<ref>). Let B := { e ∈ P_k^* | (w̅(e) -w(e)) ≥ M-w(P_k^*), Y_1 ⊆ L(e) }. Case 2.1. |Y_1|=1. We need to upgrade an edge ẽ that satisfies w̅(ẽ) -w(ẽ) ≥ M-w(P_k^*) and also has the largest SRD increment. To be specific, let ẽ:= arg max_e ∈ B In_w_e(e); then w̃:=w_ẽ is the optimal solution. Case 2.2. |Y_1|> 1 and B=∅. In this case, the problem is infeasible. Modifying a single edge to extend the StRD is ineffective, as this alteration fails to impact all existing shortest paths. Case 2.3. |Y_1|> 1 and B≠∅. In this case, upgrading any edge in B satisfies all StRD constraints, and therefore we choose the edge e^*:= arg max_e ∈ B In_w_e(e) with the largest SRD increment. Then the optimal solution is identified as w̃=w_e^*. By the above analysis, we give the following theorem. The w̃ defined in Case 2.1 and Case 2.3 is an optimal solution of problem (DIT_H∞^1). For Case 2.1, suppose there exists a superior solution w' with a larger SRD, and let the upgraded edge be denoted by e'. According to Theorem <ref>, we have e' ∈ P_k^*, and by the definition of w̃, it must hold that w̅(e') -w(e') < M-w(P_k^*). This leads to a contradiction with feasibility. For Case 2.3, assume a superior solution w' exists with a larger SRD than w̃, and denote the upgraded edge by e'. Then, by feasibility, we know that e' ∈ B; by definition, w̃ has the largest SRD increment among all such choices, which is a contradiction. Therefore, we propose Algorithm <ref> to solve (DIT_H∞^1). Given that finding the maximum value in a set takes O(n) time, we can draw the following conclusion. The problem (DIT_H∞^1) can be solved by Algorithm <ref> in O(n) time. §.§ Solve the general problem (DIT_H∞) The general form of problem (DIT_H∞) escalates in complexity. When deciding whether to upgrade an edge, a delicate balance between the StRD and the SRD must be struck. We begin by laying down foundational definitions. Subsequently, we formulate an objective function that accounts for both factors, which results in a Combination Interdiction problem on Trees (CIT_H∞^λ). Constructing the state transition equation marks the completion of one iteration. Ultimately, we iterate to establish the optimal parameter and uncover the optimal solution, a process that incurs pseudo-polynomial time complexity. <cit.> Define T_v=(V_v, E_v) as the subtree rooted at v for any v ∈ V. Let S(v)={v_s_1, …, v_s_|S(v)|} be the son set, where v_s_q is the q-th son, for any non-leaf node v.
Let P(v_1,v_2) the unique path from v_1 to v_2. Define the left q-subtree of v as T_v(q)=T_v_s_q∪ P(v, v_s_q) and the p:q-subtree of v as T_v(p:q)=(⋃_i=p^q T_v(i)) ∪{v}. Specially, define T_v(p: 0)=∅. §.§.§ A dynamic programming algorithm to solve (CIT_H ∞^λ) To achieve a balance between the StRD and SRD, we introduce the Combination Interdiction problem on Trees (CIT_H ∞^λ), which employs a convex combination of these two factors and 0≤λ≤ 1 is a parameter. This problem is specifically applied to the structure of the left p:q-subtree. Upon solving (CIT_H ∞^λ), we can demonstrate the existence of an optimal parameter λ^* such that the solution to (CIT_H ∞^λ) perfectly aligns with the solution to the general problem (DIT_H∞). Suppose T_v(p:q)=(V',E'). max_ŵ { (1-λ) ∑_e ∈ E'|L(e)|ŵ(e) + λmin_t_k ∈∪_i=p^qL(v__i) ŵ(P(v,t_k))} (CIT_H∞^λ) s.t. ∑_e∈ E'r(e)H(ŵ(e),w(e))≤ N, w(e) ≤ŵ(e) ≤w̅(e), e ∈ E'. Define h(v,p:q,N,λ) the optimal value and ŵ_(v,p:q,N,λ) an optimal solution to the problem (CIT_H ∞^λ). Specifically, define h(v,p:q,N,λ)=+∞, if p>q or N<0. And define the StRD and SRD under h(v,p:q,N,λ) as SP_(v,p:q,N,λ)= min _t_k ∈∪_i=p^qL(v_s_i) ŵ(P(v,t_k)) and SRD_(v,p:q,N,λ)=∑ _e∈ E'|L(e)|ŵ(e), respectively. Through these definitions, we can perform dynamic programming methods to solve problem (CIT_H ∞^λ). We first show the state transition from tree T_v(p,p) to its subtree T_v_s_p(1:|S(v_s_p)|). For any non-leaf node v and any integer k ≥ 1, let e_s_p:= (v,v_s_p), then we have [ h(v,p:p,k,λ)= max{ h(v_s_p, 1:|S(v_s_p)|, k,λ)+λ w(e_s_p)+ (1-λ) In_w(e_s_p),; h(v_s_p, 1:|S(v_s_p)|, k-r(e_s_p),λ)+λw̅(e_s_p)+ (1-λ) In_w̅(e_s_p) } ] Suppose ŵ_p is an optimal solution corresponding to the objective value h(v,p:p,k,λ), then there are the following two cases for ŵ_p depending on whether the edge e_s_p is upgraded or not. Case 1: H(ŵ_p(e_s_p),w(e_s_p))=0, which means the edge e_s_p is not upgraded and ŵ_p(e_s_p)=w(e_s_p), then SP_(v,p:p,k,λ)=w(v_s_p)+ SP_(v_s_p,1:|S(v_s_p)|,k, λ). So the StRD increases w(e_s_p) by edge e_s_p, and SRD increases by In_ w(e_s_p), which renders the first item in (<ref>). Case 2: H(ŵ_p(e_s_p),w(e_s_p))=1. In this case, the edge e_s_p is upgraded to w̅(e_s_p). Then SP_(v,p:p,k,λ)=w̅(e_s_p)+SP_(v_s_p,1:|S(v_s_p)|,k-r(e_s_p),λ). Hence StRD increases w̅(e_s_p), SRD increases by In_w̅(e_s_p), which renders the second item in (<ref>). Next, we show that the problem (CIT_H ∞^λ) defined on T_v(p:q) can be divided into two sub-trees T_v(p:l) and T_v(l+1:q) with the following theorem. Suppose that v is a non-leaf node. For any child node index p,l,q satisfying p ≤ l< r, then the optimal value of the problem (CIT_H ∞^λ) defined on T_v(p:q) can be calculated by h(v,p:q,k,λ)=max_ k_1+k_2 ≤ k { λmin{ SP_(v,p:l,k_1,λ),SP_(v,l+1:q,k_2,λ)} + (1-λ)(SRD_(v,p:l,k_1,λ)+SRD_(v,l+1:q,k_2,λ)) }. Let E_1, E_2 and E_3=E_1 ∪ E_2 be the edge sets of T_v(p:l), T_v(l+1:q), and T_v(p:q), respectively. Suppose w^*:=ŵ_(v,p:q,k,λ) is the optimal solution, let k_1^*:=∑_ e∈ E_1,w^*(e) w(e) r(e), k_2^*:=∑_ e∈ E_2,w^*(e) w(e) r(e), then k_1^* and k_2^* are both integers and k_1^*+k_2^* ≤ k. h(v,p:q,k,λ) = min_t_k ∈∪_i=p^qL(v_s_i) λ w^*(P(v,t_k))+(1-λ) ∑_e ∈ E_3 |L(e)| w^*(e) = λmin{min _t ∈⋃_i=p^l L(v_s_i) w^*(P(v, t)), min _t ∈⋃_i=l+1^q L(v_s_i) w^*(P(v, t))} +(1-λ)∑_e ∈ E_1|L(e)|w^*(e)+(1-λ)∑_e ∈ E_2|L(e)|w^*(e) = λmin{SP_(v, p: l, k_1^*, λ), SP_(v, l+1: q, k_2^*,λ)} +(1-λ)(SRD_(v,p:l,k_1^*,λ)+SRD_(v,l+1:q,k_2^*,λ)). 
The last equation comes from the definition of problem (CIT_H∞^λ), which reveals a good property: the optimal value h(v,p:q,k,λ) on T_v(p:q) comes from two optimal solutions ŵ_(v,p:l,k_1^*,λ) and ŵ_(v,l+1:q,k_2^*,λ) on the two sub-trees T_v(p:l) and T_v(l+1:q). Therefore the theorem holds. Using the two theorems above, we are able to calculate the function h in the following two steps. h(v, 1:p, N, λ) = max_{0 ≤ N_1 ≤ N, N_1 ∈ ℤ^+} { λ min{ SP_(v,1:p-1,N_1,λ), SP_(v,p:p,N-N_1,λ) } + (1-λ)( SRD_(v,1:p-1,N_1,λ) + SRD_(v,p:p,N-N_1,λ) ) }, h(v,p:p,N-N_1,λ) = max{ h(v_s_p, 1:|S(v_s_p)|, N-N_1, λ) + λ w(e_s_p) + (1-λ) |L(e_s_p)| w(e_s_p), h(v_s_p, 1:|S(v_s_p)|, N-N_1-r(e_s_p), λ) + λ w̅(e_s_p) + (1-λ) |L(e_s_p)| w̅(e_s_p) }. Based on the discussion above, we propose Algorithm <ref> to solve the problem (CIT_H∞^λ), where we need to call the depth-first search procedure DFS(s,1:|S(s)|, N, λ) recursively <cit.>. Algorithm <ref> can solve problem (CIT_H∞^λ) in O(nN^2) time. Given that N is a predefined constant, for each node v and each integer 0 ≤ N_1 ≤ N, there is a distinct state DFS(v,1:|S(v)|,N_1,λ). Consequently, the total number of states is bounded by O(nN). For each state, the DFS function delineated in Algorithm <ref> requires O(N) time. Therefore, the overall computational complexity for problem (CIT_H∞^λ) is O(nN^2). §.§.§ A binary search algorithm to solve problem (DIT_H∞) To solve problem (DIT_H∞), we first analyse the connection between problems (CIT_H∞^λ) and (DIT_H∞), and then propose a monotonicity theorem for problem (CIT_H∞^λ), based on which we develop a binary search algorithm. Note that by setting λ=1 or 0, we are able to derive optimal solutions for two planning problems: maximizing the StRD and maximizing the SRD, subject to the upper bound and weighted sum Hamming distance constraints, respectively. Then the following theorem concerning infeasibility can be obtained. If h(s, 1:|S(s)|, N, 1) < M, then problem (DIT_H∞) is infeasible. Note that h(s, 1:|S(s)|, N, 1)=SP_(s, 1:|S(s)|, N, 1) < M means there is no set of edges capable of extending the StRD to meet or exceed M, so the problem is infeasible. Analysing problem (DIT_H∞), we need to strike a balance between the two objectives, the StRD and the SRD. Specifically, within the original constraints, we seek to maintain a lower bound for the StRD while simultaneously maximizing the SRD. Given that the objective function h comprises two components, we observe that as the parameter λ varies from 1 to 0, the contribution of the StRD to h steadily diminishes, while the contribution of the SRD progressively increases. Despite the inherent discontinuity imposed by the weighted sum Hamming distance constraint, as λ changes, the algorithm becomes biased in the selection of upgraded edges, which allows us to draw the following theorems. Let 1 ≥ λ_1 ≥ λ_2 ≥ 0; then SP_(s,1:|S(s)|,N,λ_1) ≥ SP_(s,1:|S(s)|,N,λ_2) and SRD_(s,1:|S(s)|,N,λ_1) ≤ SRD_(s,1:|S(s)|,N,λ_2). Let w_1:=ŵ_(s,1:|S(s)|,N,λ_1) and w_2:=ŵ_(s,1:|S(s)|,N,λ_2) be optimal solutions with respect to λ_1 and λ_2, respectively. Let P_1 and P_2 be the corresponding shortest paths under w_1 and w_2, respectively. Let SP_w_i=SP_(s,1:|S(s)|,N,λ_i) and SRD_w_i=SRD_(s,1:|S(s)|,N,λ_i), i=1,2, for simplicity. By Theorem <ref>, problem (CIT_H∞^λ) can be transformed into a 0-1 knapsack problem (TCIT_H∞^λ); for convenience, we divide both terms of the objective function by 1-λ. Let x(e) ∈ {0,1} indicate whether the weight of edge e is upgraded to w̅(e) (x(e)=1) or not (x(e)=0). Then we have max { ∑_e ∈ E |L(e)|(x(e)(w̅(e)-w(e))+w(e)) +
λ/1-λmin_t_k ∈ Y ∑_e ∈ P_k (x(e)(w̅(e)-w(e))+w(e)) } (TCIT_H∞^λ) s.t. ∑_e∈ Er(e)x(e)≤ N, x(e)=1,0. There are two cases to be considered. Case 1: P_1 = P_2. In this case, the increment of the objective function resulting from upgrading edge e on path P_1 is monotonically increasing as λ varies from 0 to 1 since λ/1-λ is monotonically increasing, while the increment of the objective function by upgrading other edges not on P_1 keeps still. When w_1 = w_2, the theorem holds trivially. Suppose w_1 ≠ w_2. Since w_2 is the optimal solution when λ=λ_2, in order for w_1 to surpass w_2, it must select edges that can increase higher values. Consequently, more edges on P_1 will be chosen under w_1, leading to SP_w_1 > SP_w_2. Case 2: P_1 ≠ P_2. Following the insights from Case 1, where identical shortest paths result in a non-decreasing StRD as λ increases, the scenario where P_1 ≠ P_2 implies that enhancements to P_1 have been so substantial that it is no longer the shortest path, whereas P_2 becomes so. Under these circumstances, SP_w_1 > SP_w_2 is ensured. In both cases, we get SP_w_1≥ SP_w_2, now we show SRD_w_2≥ SRD_w_1. By the optimality of w_2, we have λ_2 SRD_w_1+(1-λ_2)SP_w_1≤λ_2 SRD_w_2+(1-λ_2)SP_w_2 Rearrange the inequality (<ref>), then we have λ_2 (SRD_w_1 - SRD_w_2) ≤ (1-λ_2)(SP_w_2 - SP_w_1) Since (1-λ_2) ≥ 0 and SP_w_2≤SP_w_1, the right-hand side of (<ref>) is non-positive, thus: λ_2 (SRD_w_1 - SRD_w_2) ≤ 0, which leads to SRD_w_1≤SRD_w_2. This completes the proof. If h(s, 1:|S(s)|, N, 1) ≥ M, then the optimal solution of the problem (DIT_H∞) is given by ŵ_(s, 1:|S(s)|, N, λ^*), where λ^* is the critical value with 0≤λ^*≤ 1. When h(s, 1:|S(s)|, N, 1) ≥ M, then the problem is feasible by Theorem <ref>. Notice that the StRD is non-increasing and the SRD is non-decreasing as λ changes from 1 to 0. Besides, the StRD and SRD reaches the maximal at λ=1 and 0, respectively. Hence, a solution ŵ_(s, 1:|S(s)|, N, λ^*) emerges at a critical threshold denoted as λ^*, which not only adheres to the StRD constraint but also optimally maximizes SRD. Then w^*:=ŵ_(s, 1:|S(s)|, N, λ^*) is the optimal solution for problem (DIT_H∞). Suppose there exists a better solution w̃ than w^* with w̃(T) > w^*(T). Observed that w̃ also satisfies the feasibility of problem (CIT_H ∞^λ^*) and w^* have the largest h value under feasibility by Theorem <ref>, then λ^* min _t_k ∈ Yw̃(P_k) +(1-λ^*)w̃(T) ≤λ^* min _t_k ∈ Y w^*(P_k) +(1-λ^*) w^*(T), which leads to min _t_k ∈ Yw̃(P_k)< min _t_k ∈ Y w^*(P_k) since w̃(T) > w^*(T). Note that λ is continuous and λ^* is the threshold, there is no room for the StRD of w^* to decrease, which is a contradiction. Finally, we present Algorithm <ref> to solve (DIT_H∞), encapsulating our findings. This algorithm leverages a binary search method to determine the critical value λ^* corresponding to Theorem <ref>. Then we call Algorithm <ref> to solve problem (CIT_H ∞^λ^*), in which DFS(s, 1:|S(s)|, N, λ^*) is utilized. Let U:=min_t_k ∈ Y u(P_k)+∑_e∈ E|L(e)| u(e). When N ≥ 1, Algorithm <ref> can solve problem (DIT_H∞) within O(nN^2log_2 U) time. By Theorem <ref>, there exists a critical value λ^*∈ (lr^*,rr^*) that gives the optimal solution. To be specific, there exists a small interval [a,b] satisfying (lr^*,rr^*)⊂ [a,b] ⊂ [0,1] such that ŵ_(s,1:|S(s)|,N,λ^*)=ŵ_(s,1:|S(s)|,N,λ) holds for any λ∈ [a,b]. Therefore, suppose b-a=ϵ, for the binary search process, it takes at most O(|log_2 ϵ|) iterations and it runs O(nN^2) in each iteration by Theorem <ref>. 
Hence the time complexity is O(nN^2|log_2 ϵ|). Here we give a lower bound for ϵ. Without loss of generality, let us assume that w and w̅ are integer vectors, since rational inputs can be scaled to integers in the computation. Then, for problem (CIT_H ∞^λ) in (<ref>), h is a piecewise linear function of λ, coloured in red as shown in Fig. <ref>. Each feasible edge upgrade vector ŵ_i can be represented as a non-increasing line segment by h(λ) = (min_t_k ∈ Yŵ_i(P_k) - ∑_e ∈ E |L(e)| ŵ_i(e)) λ + ∑_e ∈ E |L(e)| ŵ_i(e). By a direct analysis, we obtain -∑_e ∈ E |L(e)| w̅(e) < a_i:=min_t_k ∈ Yŵ_i(P_k) - ∑_e ∈ E |L(e)| ŵ_i(e) < min_t_k ∈ Yw̅(P_k) and 0 ≤ b_i:=∑_e ∈ E |L(e)| ŵ_i(e) ≤∑_e ∈ E |L(e)| w̅(e). Each λ∈ [0, 1] corresponds to a problem (CIT_H ∞^λ) whose optimal solution, output by Algorithm <ref>, attains the highest value among all feasible edge weight vectors. For any two lines, we have h_i(λ) = a_i λ + b_i and h_j(λ) = a_j λ + b_j. The horizontal coordinate of their intersection point is λ_1 = (b_j - b_i)/(a_i - a_j). Similarly, let λ_2 = (b_j' - b_i')/(a_i' - a_j') represent the horizontal coordinate of another intersection point. Consequently, we get a lower bound on b-a as b - a ≥ (b_j - b_i)/(a_i - a_j) - (b_j' - b_i')/(a_i' - a_j') ≥ 1/((a_i - a_j)(a_i' - a_j')) > 1/(min_t_k ∈ Yw̅(P_k) + ∑_e ∈ E |L(e)| w̅(e))^2. We also know that min_t_k ∈ Yw̅(P_k) + ∑_e ∈ E |L(e)| w̅(e) ≤ U := min_t_k ∈ Y u(P_k) + ∑_e ∈ E |L(e)| u(e). Therefore, we establish |log_2(b - a)| < |log_2 1/U^2|, from which it follows that the algorithm terminates within O(nN^2 |log_2 1/U^2|)=O(2nN^2 log_2 U)=O(nN^2 log_2 U). This runtime is classified as pseudo-polynomial because N depends on the input provided. Specifically, when r(e) = 1 and N < n, the weighted sum Hamming distance becomes a cardinality constraint, and the algorithm runs in O(n^3 log_2 U) time, making it a polynomial-time algorithm. § THE 𝒩𝒫-HARDNESS OF (MCDIT_H∞) UNDER THE WEIGHTED L_∞ NORM Next, we consider the related minimum cost problem (MCDIT_H∞), obtained by exchanging the objective function and the l_∞ norm constraint of problem (DIT_H∞), which is formulated in (<ref>). To prove that (MCDIT_H∞) is 𝒩𝒫-hard, it suffices to show that its decision version (DMCDIT_H∞) is 𝒩𝒫-complete. Typically, proving a decision problem is 𝒩𝒫-complete involves two steps: first, demonstrating that the decision version is in 𝒩𝒫; second, showing that the decision version can be reduced from a problem already proven to be 𝒩𝒫-complete. For our purposes, we choose the decision version of the 0-1 knapsack problem as the original problem <cit.>. This leads to the following theorem. The problem (MCDIT_H∞) is 𝒩𝒫-hard. We prove the theorem in two steps. Step 1: The decision version of the (MCDIT_H∞) problem is in 𝒩𝒫. The decision version of (MCDIT_H∞) can be stated as follows: Given a maximal cost K, determine whether there exists an edge weight vector ŵ that satisfies the following constraints: min 1 s.t. ŵ(T) ≥ D, (DMCDIT_H∞) min_t_k ∈ Yŵ(P_k) ≥ M, ∑_e∈ E r(e)H(ŵ(e), w(e)) ≤ N, max_e∈ E c(e) (ŵ(e) - w(e)) ≤ K, w(e) ≤ŵ(e) ≤ u(e), e ∈ E. Note that under the maximal cost K, the weight of each edge is constrained by the upper bound in Equation (<ref>), thus the u(e) in the constraint can be replaced by w̅(e). By the definition of an 𝒩𝒫 problem, given a candidate vector ŵ, one can easily verify whether it satisfies the constraints in O(n) time, as this only requires basic vector operations. Step 2: Problem (DMCDIT_H∞) can be reduced from the 0-1 knapsack problem.
Similar to (DIT_H∞), observe that when the upgrade cost is fixed, it is always better to upgrade an edge to its upper bound w̅(e) in Equation (<ref>). Therefore, the decision version of (MCDIT_H∞) is equivalent to the following problem, where In_w̅(e) represents the SRD increment obtained by upgrading edge e to w̅(e), and x(e) indicates whether edge e is upgraded. max 1 (GMCDIT_H∞) s.t. ∑_e ∈ E r(e)x(e) ≤ N, ∑_e ∈ E In_w̅(e)x(e) ≥ D, ∑_e ∈ P_k (w̅(e)-w(e)) x(e) ≥ M -w(P_k), ∀ k=1,… , m, x(e) ∈{ 0,1}. Conversely, the decision version of the 0-1 knapsack problem can be formulated as follows: Given a constant M_1, find a feasible solution of max 1 s.t. ∑_i=1^n c_i x_i≤ R, ∑_i=1^n p_i x_i≥ M_1, x_i∈{ 0,1 }, ∀ i=1,…,n. Then, for any instance I_1 of the form (<ref>), consider a chain with root s, so that |L(e)|=1 for every e ∈ E; by setting r(e_i)=c_i, In_w̅(e_i)=p_i, D=M_1 and N=R, the resulting instance I_2 of problem (DMCDIT_H∞) is exactly instance I_1. Therefore, the decision version of (MCDIT_H∞) can be reduced from the 0-1 knapsack problem. In conclusion, the (MCDIT_H∞) problem is 𝒩𝒫-hard. § A PSEUDO-POLYNOMIAL TIME ALGORITHM TO SOLVE PROBLEM (MCDIT_H∞) For problem (MCDIT_H∞), the objective function, which represents the maximum cost under the l_∞ norm, is constrained within the interval [K^1,K^2], where K^1=0 and K^2= max_e ∈ E c(e)(u(e)-w(e)). Upon investigating the interconnection between problems (DIT_H∞) and (MCDIT_H∞), we find that the latter can be addressed by repeatedly solving (DIT_H∞) to ascertain the minimum K^* such that w̃(T) > D. Moreover, in problem (DIT_H∞), it is observed that as the value of K increases, the optimal SRD value w̃(T) also increases monotonically. To find the optimal K^* within the interval [K^1, K^2], a binary search algorithm is employed. Each iteration leverages Algorithm <ref> for the computation of w̃(T), thus iteratively pinpointing K^*. Consequently, we have the following theorem. Problem (MCDIT_H∞) can be solved by Algorithm <ref> within O(nN^2 log_2 K log_2 U) time. By Theorem <ref>, Algorithm <ref> operates in O(nN^2 log_2 U) time. Supposing K is an integer, the binary search method requires at most O(log_2 K) iterations. Consequently, Algorithm <ref> has a time complexity of O(nN^2 log_2 K log_2 U), which qualifies it as a pseudo-polynomial time algorithm depending on N. § NUMERICAL EXPERIMENTS §.§ One example to show the process of Algorithm <ref> For a better understanding of Algorithm <ref>, Example <ref> is given to show the detailed computation process. Let V:={ s,v_1,…,v_12} and E:={ e_1,…,e_12}; the corresponding i, c, w, w̅ are shown on the edges in different colors. Now we have w(T):=106 and u(T):=216. Suppose the given values are M:=30, K:=320 and N:=3, with r(e)=1, ∀ e ∈ E. When λ=1, the algorithm aims to maximize the StRD, as shown in Fig. <ref>, where all upgraded edges are on the path (s,v_4). Table <ref> shows that h takes positive values only on the shortest path. In ŵ_(s,1:3,3,1), SP_(s,1:3,3,1)=34 ≥ M=30, so the problem is feasible. Conversely, when λ=0, the algorithm maximizes the SRD, as shown in Fig. <ref>. In ŵ_(s,1:3,3,0), the edges with larger SRD increments are chosen. Table <ref> shows that the algorithm maximizes h over all feasible solutions. By adjusting λ, different weight vectors are generated. In the end, when λ=0.75, the algorithm terminates with the optimal value w̃(T)=141, as shown in Fig. <ref> and Table <ref>. §.§ Computational experiments We present the numerical experimental results for Algorithms <ref>, <ref>, <ref>, <ref> in Table <ref>.
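The core routine exercised in these experiments is the binary search over the cost bound K described in the preceding section. As a rough illustration only (not the authors' implementation), the following Python sketch assumes a hypothetical oracle solve_dit(K) that solves problem (DIT_H∞) under cost bound K and returns the optimal SRD value w̃(T); integer cost bounds and the feasibility test w̃(T) ≥ D are our assumptions.

```python
# Minimal sketch of the binary search over the cost bound K (illustrative only).
# `solve_dit` is a hypothetical oracle for problem (DIT_Hinf) under cost bound K,
# returning the optimal SRD value; it stands in for the algorithm solving (DIT_Hinf).
def binary_search_cost(K_low, K_high, D, solve_dit):
    """Return the smallest integer K in [K_low, K_high] with optimal SRD >= D, or None."""
    best_K = None
    while K_low <= K_high:
        K_mid = (K_low + K_high) // 2
        if solve_dit(K_mid) >= D:   # feasible under budget K_mid: try a smaller budget
            best_K = K_mid
            K_high = K_mid - 1
        else:                       # infeasible: the budget must grow
            K_low = K_mid + 1
    return best_K
```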
These programs were coded in MATLAB R2023a and run on a machine with an Intel(R) Core(TM) i7-10875H CPU @ 2.30 GHz running Windows 11. We tested these algorithms on six randomly generated trees with vertex numbers ranging from 10 to 500. We randomly generated the vectors u, c and w such that 0≤ w ≤ u and c > 0. For each tree, K, D and N were generated randomly based on n. In this table, the average, maximum and minimum CPU times are denoted by T_i, T_i^max and T_i^min, respectively, where i = 1, ⋯, 4 refers to Algorithms <ref>, <ref>, <ref>, <ref>, respectively. From Table <ref>, we can see that Algorithm <ref> is the most time-consuming due to the repeated calls to Algorithm <ref>, which runs in pseudo-polynomial time, and the uncertainty in its number of iterations. Overall, these algorithms are all very effective and follow their respective time complexities well. When n is small, the time differences among the three algorithms are relatively small, but as n increases, the differences between the algorithms become more pronounced. § CONCLUSION AND FUTURE RESEARCH This paper delves into the intricate dynamics of the double interdiction problem on the sum of root-leaf distance on trees, with a primary focus on maximizing the SRD through edge weight adjustments within cost limitations and minimum path length requirements. By establishing parallels with the 0-1 knapsack problem, it illustrates the 𝒩𝒫-hardness of problem (DIT_H∞). In addressing scenarios where a single upgrade is permissible, a pragmatic greedy algorithm is proposed to mitigate complexity. For situations necessitating multiple upgrades, a pseudo-polynomial time dynamic programming algorithm is advocated, striking a delicate balance between shortest path considerations and the summation of root-leaf distances. Specifically, when the weighted sum type is replaced by a cardinality constraint, the algorithm becomes a polynomial time algorithm. Moreover, the paper ventures into the realm of the related minimum cost problem (MCDIT_H∞), demonstrating its 𝒩𝒫-hardness through a reduction from the 0-1 knapsack problem. Subsequently, it outlines a binary search methodology to tackle the minimum cost predicament, culminating in a series of numerical experiments that vividly showcase the efficacy of the algorithms presented. For further research, a promising avenue lies in extending similar methodologies to interdiction problems concerning source-sink path lengths, maximum flow, and minimum spanning trees, employing diverse metrics and measurements across general graphs. Such endeavors hold the potential to deepen our understanding and broaden the applicability of interdiction strategies in various real-world contexts. Funding Funding was provided by the National Natural Science Foundation of China (grant no. 11471073). Data availability Data sharing is not applicable to this article as our datasets were generated randomly. § DECLARATIONS Competing interests The authors declare that they have no competing interests. Network flow Ahuja RK, Thomas LM, Orlin JB (1995) Network flows: theory, algorithms and applications. Prentice hall. r2_terrorist Albert R, Jeong H, Barabasi A (2000) Error and attack tolerance of complex networks, Nature, 406(6794): 378–382. r21_maximum_flow Altner DS, Ergun Z, Uhan NA (2010) The maximum flow network interdiction problem: Valid inequalities, integrality gaps and approximability, Operations Research Letters, 38:33–38.
r5_Bar–Noy Bar–Noy A, Khuller S, Schieber B (1995) The complexity of finding most vital arcs and nodes, Technical Report CS–TR–3539, Department of Computer Science, University of Maryland. Bazgan2019 Bazgan C, Fluschnik T, Nichterlein A, Niedermeier R, Stahlberg M (2019) A more fine-grained complexity analysis of finding the most vital edges for undirected shortest paths. Networks 73.1: 23-37. Bazgan2015 Bazgan C, Nichterlein A, Niedermeier R (2015) A refined complexity analysis of finding the most vital edges for undirected shortest paths: algorithms and complexity, Lecture Notes in Computer Science,9079: 47-60. r15_spanning_tree Bazgan C, Toubaline S, Vanderpooten D (2012) Efficient determination of the k most vital edges for the minimum spanning tree problem, Computers & Operations Research, 39(11): 2888–2898. r23_location Bazgan C, Toubaline S, Vanderpooten D (2013) Complexity of determining the most vital elements for the p-median and p-center location problems, Journal of Combinatorial Optimization, 25(2): 191–207. r19_maximum_matching Bazgan C, Toubaline S, Vanderpooten D (2013) Critical edges for the assignment problem: complexity and exact resolution, Operations Research Letters, 41: 685–689. r4_Corley_and_Sha Corley HW, Sha DY (1982) Most vital links and nodes in weighted networks. Operations Research Letters, 1:157–161. Intro_ALG Cormen TH, Leiserson CE, Rivest RL, et al. (2022) Introduction to algorithms. 4th edn. MIT press. r11_spanning_tree Frederickson GN, Solis-Oba R (1996) Increasing the weight of minimum spanning trees, Proceedings of the 7th ACM–SIAM Symposium on Discrete Algorithms (SODA 1996), 539–546. r12_spanning_tree Iwano K, Katoh N (1993) Efficient algorithms for finding the most vital edge of a minimum spanning tree, Information Processing Letters, 48(5), 211–213. r7_Khachi Khachiyan L, Boros E, Borys K, Elbassioni K, Gurvich V, Rudolf G, Zhao J (2008) On short paths interdiction problems: total and node-wise limited interdiction, Theory of Computing Systems, 43(2): 204–233. Combia book Korte BH, Vygen J, Korte B (2011) Combinatorial optimization[M]. Berlin: Springer. LeiLei Y, Shao H, Wu T, et al. (2023). An accelerating algorithm for maximum shortest path interdiction problem by upgrading edges on trees under unit Hamming distance. Optimization Letters 17: 453–469. Xiao Li X, Guan XC, Zhang Q, Yin XY, Pardalos PM (2023). The sum of root-leaf distance interdiction problem with cardinality constraint by upgrading edges on trees. Accepted by Journal of Combinatorial Optimization, July, 2024, arXiv preprint arXiv:2307.16392. r14_spanning_tree Liang W (2001) Finding the k most vital edges with respect to minimum spanning trees for fixed k, Discrete Applied Mathematics, 113(2–3): 319–327. r6_shortest path Nardelli E, Proietti G, Widmyer P (2001) A faster computation of the most vital edge of a shortest path between two nodes, Information Processing Letters, 79(2): 81–85. r13_spanning_tree Pettie S (2005) Sensitivity analysis of minimum spanning tree in sub-inverse- Ackermann time. In: Proceedings of 16th international symposium on algorithms and computation (ISAAC 2005), Lecture notes in computer science, 3827, 964–73. Optical networks Ramaswami R, Sivarajan K, Sasaki G (2009) Optical networks: a practical perspective[M]. Morgan Kaufmann. 
r18_maximum_matching Ries B, Bentz C, Picouleau C, Werra D, Costa M, Zenklusen R (2010) Blockers and transversals in some subclasses of bipartite graphs: when caterpillars are dancing on a grid, Discrete Mathematics, 310(1): 132–146. r16_maximum_matching Zenklusen R, Ries B, Picouleau C, Werra D, Costa M, Bentz C (2009) Blockers and transversals, Discrete Mathematics, 309(13): 4306–4314. r17_maximum_matching Zenklusen R (2010) Matching interdiction, Discrete Applied Mathematics, 158(15): 1676–1690. r20_maximum_flow Zenklusen R (2010) Network flow interdiction on planar graphs, Discrete Applied Mathematics, 158(13): 1441–1455. Zhang_SPIT Zhang Q, Guan XC, Pardalos PM (2021a) Maximum shortest path interdiction problem by upgrading edges on trees under weighted l_1 norm. Journal of Global Optimization 79(4):959–987. Zhang_MSPH Zhang Q, Guan XC, Wang H, Pardalos PM (2021b) Maximum shortest path interdiction problem by upgrading edges on trees under Hamming distance. Optimization Letters 15(8): 2661–2680.
http://arxiv.org/abs/2407.13707v1
20240718170615
Dissipation at limited resolutions: Power law and detection of hidden dissipative scales
[ "Qiwei Yu", "Pedro E. Harunari" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "physics.bio-ph" ]
These two authors contributed equally to this work. Contacts: qiweiyu@princeton.edu and pedro.harunari@uni.lu Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ 08544 These two authors contributed equally to this work. Contacts: qiweiyu@princeton.edu and pedro.harunari@uni.lu Complex Systems and Statistical Mechanics, Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg City, Luxembourg § ABSTRACT Nonequilibrium systems, in particular living organisms, are maintained by irreversible transformations of energy that drive diverse functions. Quantifying their irreversibility, as measured by energy dissipation, is essential for understanding the underlying mechanisms. However, existing techniques usually overlook experimental limitations, either by assuming full information or by employing a coarse-graining method that requires knowledge of the structure behind hidden degrees of freedom. Here, we study the inference of dissipation from finite-resolution measurements by employing a recently developed model-free estimator that considers both the sequence of coarse-grained transitions and the waiting time distributions: σ_2=σ_2^ℓ + σ_2^t. The dominant term σ_2^ℓ originates from the sequence of observed transitions; we find that it scales with resolution following a power law. Comparing the scaling exponent with a previous estimator highlights the importance of accounting for flux correlations at lower resolutions. σ_2^t comes from asymmetries in waiting time distributions, with its peak revealing characteristic scales of the underlying dissipative process. Alternatively, the characteristic scale can be detected in a crossover of the scaling of σ_2^ℓ. This provides a novel perspective for extracting otherwise hidden characteristic dissipative scales directly from dissipation measurements. We illustrate these results in biochemical models as well as complex networks. Overall, this study highlights the significance of resolution considerations in nonequilibrium systems, providing insights into the interplay between experimental resolution, entropy production, and underlying complexity. Dissipation at limited resolutions: Power law and detection of hidden dissipative scales Pedro E. Harunari July 22, 2024 ======================================================================================== § INTRODUCTION Although the pursuit of ever-increasing resolution is a primary goal of technological progress, there are many instances where coarser observations enhance features that would otherwise be missed. A technique known as image binning lumps adjacent pixels to increase signal-to-noise ratio at the expense of resolution, and it has been an important tool in the discovery of faint objects in the universe <cit.>. Regarding emergent phenomena, the microscopic interactions between individual components often do not suffice to appreciate complex large-scale phenomena, such as the chemical reactions in Turing patterns <cit.> or the rules for Conway's game of life <cit.>. In these cases, improving measurement resolution might introduce additional computational requirements without aiding the detection of patterns. In a more routine experience, squinting the eyes can reveal a figure in optical illusions such as hybrid images <cit.>, with the Monroe/Einstein image being the most famous example. 
Across resolutions, observables present different behaviors and correlations, and hidden structures can be uncovered if the right observable is measured. In this contribution, we demonstrate that for nonequilibrium systems, quantifying energy dissipation across scales reveals new insights into the underlying structures. The vast majority of biological phenomena are intrinsically nonequilibrium processes sustained by the continuous dissipation of free energy <cit.>, whose quantification is key for understanding the dynamics and energetics of biological function and physical processes. Examples of these processes are adaptation <cit.>, error correction <cit.>, and environment sensing <cit.>. Multiple temporal and spatial scales can be involved: While free energy is often harnessed on the molecular scale (e.g. from the hydrolysis of energy-rich molecules such as ATP), it can be used to drive processes at much larger scales, like pattern formation <cit.> and collective motion <cit.>. Such multiscale nonequilibrium processes can be studied in model experimental systems such as the actomyosin cortex of a starfish oocyte <cit.> and microtubule active gels <cit.>. Understanding how much free energy is dissipated on those different scales can lead to mechanistic insights into the intricacies of the underlying structure, such as the characteristic timescale of active processes <cit.>. However, this is usually difficult due to limitations on the scales and degrees of freedom that can be resolved in experiments. It is crucial to elucidate how much information can be extracted from measurements with finite resolution. From a theoretical perspective, the thermodynamics of coarse-grained systems have been studied in distinct scenarios, where the most prominent measure of dissipation is the entropy production rate (EPR). Previous studies have considered different forms of coarse-graining: Timescale separation <cit.> represents the possibility of monitoring slow degrees of freedom while fast ones go undetected; decimation <cit.> considers subsets of states and transitions as observables and can preserve the full entropy production <cit.>; milestoning has been used to map continuous dynamics onto the framework of discrete state space and ensures thermodynamic consistency <cit.>; lumping refers to merging states that cannot be resolved due to, e.g., proximity in space, and often leads to a drastic decrease in EPR at the coarse-grained level <cit.>. Forms of coarse-graining can also be inspired by basins of attraction <cit.>, first-order phase transitions <cit.>, and imperfect measurements <cit.>. Obtaining the entropy production or finding its upper/lower bounds provides key insights into the system not only because it estimates the real EPR or establishes the minimal thermodynamic cost of a process, but it also establishes bounds for efficiency <cit.>. Although much progress has been made to extract the EPR from statistics of the coarse-grained dynamics, e.g. using waiting time distributions <cit.>, it remains an open question how much can be learned about the microscopic system from coarse-grained observations. Here, we explore the scenario of measurement with limited resolutions. These are typically experiments where not all fine-grained degrees of freedom are distinguished by the measurement apparatus, resulting in unresolved trajectories. To represent this scenario, we coarse-grain the system by lumping states that are sufficiently similar, i.e., close in terms of a relevant distance measure. 
This allows us to examine the effect of varying resolution by changing the distance threshold for lumping states, which reveals the dependence of detected dissipation with resolution. Notice that decreasing resolution can always be done in the post-processing, hence the data can be treated to unveil the properties we discuss without the need for always improving resolution. Measuring EPR in coarse-grained systems is generally difficult because it depends on the statistics of current observables which, after coarse-graining, do not share simple relations with fully resolved currents <cit.>. Previously, it was shown that the apparent entropy production rate at coarse-grained scales decreases following an inverse power law, with an exponent that depends both on the topology of the state space and the correlation of the probability fluxes <cit.>. When the fluxes are negatively correlated (i.e. frequent back-and-forth transitions), the apparent EPR decreases faster than the number of coarse-grained transitions. This suggests that harnessing the information encoded by flux correlations might result in a more accurate estimation of the EPR. Indeed, recent works have made significant progress in estimating the EPR of systems with partially visible transitions through specialized estimators that take into account the correlation (through sequence and waiting time) between coarse-grained transitions <cit.>. Hence, it is natural to ask whether applying this approach to coarse-grained measurements can reveal more information. We focus on one estimator that is obtained by the sum of two contributions, one from the sequence of visible transitions and one from their waiting times; they are affected in distinct ways by changes in resolution and will prove to have different roles in the quantitative assessment of dissipation and internal scales. Importantly, the estimator strictly bounds the EPR from below. When applying the specialized estimator to limited-resolution measurements, we find that the estimator leads to more accurate apparent values of EPR and provides mechanistic insights into the dissipative scales. First, we show that the apparent EPR estimated from this approach decreases with the coarse-grained scale following a power law, with an exponent that is smaller than that of the direct coarse-graining approach. Thus, accounting for flux correlation drastically improves EPR estimation. In addition, we show that the irreversibility from the waiting time distribution follows a non-monotonic relation with the coarse-grained scale, with its peak position reflecting the dissipative scale of the system. This is similar to a non-monotonic relation reported in the actomyosin cortex <cit.>, where the peak position corresponds to the dissipative timescale, and a spatial counterpart to the detection of dissipative timescales through temporal coarse-graining <cit.>. If multiple scaling regimes are present, their crossover may be detected by a crossover in the EPR scaling. These results can be readily applied to experimental data, as illustrated in biochemical reaction systems such as Brusselator <cit.> and Schlögl models <cit.>. We also find similar relations in state networks with non-regular topologies <cit.>, which may be useful for analyzing time irreversibility in complex networks <cit.> such as neural networks present in brain dynamics <cit.>. § FORMALISM We consider a system whose dynamics is described by a continuous-time Markov chain among discrete mesoscopic states. 
These states capture configurations that are of thermodynamic relevance, and the transition rates between them include the influence of the environment. Its EPR, quantifier of the statistical asymmetry between a process and its time-reversal, is given by σ = 𝒦∑_ℓ P(ℓ) lnP( ℓ)/P(ℓ̅), where ℓ sums over all transitions with ℓ̅ being its reverse. P(ℓ) is the probability of observing transition ℓ, and 𝒦 is the dynamical activity defined as the average number of transitions per unit time. We start with a microscopic description where all the transitions are visible and the steady-state probability P(ℓ) can be obtained by solving the master equation. After coarse-graining, only a subset of transitions remain observable, leading to an observed activity 𝒦_obs<𝒦. To investigate how limited resolution affects dissipation, we adopt a coarse-graining procedure that lumps together states and transitions identified by sufficiently similar degrees of freedom. For illustrative purposes, we start with a square lattice where states are identified by two degrees of freedom, which can be, for instance, spatial positions or chemical concentrations (see Fig. <ref>A). The proposed procedure representing limited resolutions joins in a coarse-grained state (shaded boxes) “microscopic states” belonging to a neighborhood of a given size. All microscopic states within the same coarse-grained state cannot be resolved, and thus transitions between them become invisible. Furthermore, all “microscopic” transitions ℓ_α,i that stem from one coarse-grained state to another are observed as the same coarse-grained transition ℓ_α. A typical measurement yields a sequence of coarse-grained transitions and the waiting times between consecutive transitions (intertransition times): (ℓ_α, t_α; ℓ_β, t_β; …). The probability of observing transition ℓ_α and, after time t, ℓ_β, is given by the sum of the probability of its constituent microscopic transitions P(ℓ_α, ℓ_β ;t) = ∑_i,j P(ℓ_α,i, ℓ_β,j;t). Similarly, the joint probability of a sequence of transitions is given by the marginalization P(ℓ_α, ℓ_β)=∫_0^∞ P(ℓ_α, ℓ_β ;t) t = ∑_i,j P(ℓ_α,i, ℓ_β,j). While the low-resolution dynamics is typically non-Markovian, which makes it difficult to estimate the true EPR, the experimenter can still compute the apparent (or “local”) EPR  <cit.> from the statistics of the observed coarse-grained transitions. The simplest estimator accounts for the statistics of each coarse-grained transition individually: σ_1 = 𝒦_obs∑_α P(ℓ_α) lnP( ℓ_α)/P(ℓ̅_α), where ℓ_α sums over coarse-grained transitions. This quantity is readily available from empirical data since the probabilities can be estimated from the frequencies of observable transitions. It bounds the full EPR from below by only considering the absolute probabilities of each transition, disregarding hidden degrees of freedom, the presence of limited resolutions, and the fact that trajectories are non-Markovian. Higher-order statistics can be included to build better estimators by also considering correlations between distinct transitions, usually in the form of joint probabilities. Transition-based estimators that consider the statistics of pairs of transitions and the waiting time between them have recently been developed <cit.>. However, it is unclear whether they rigorously bound EPR when observables are lumped transitions consisting of many indistinguishable transitions. 
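As a concrete illustration, the plug-in estimate of σ_1 can be computed directly from a recorded sequence of visible transitions; the sketch below is ours (not from the references) and assumes the experimenter supplies the list of observed transition labels, a map from each label to its reverse, and the total observation time.

```python
# Minimal sketch (ours): plug-in estimate of sigma_1 from an observed record of
# visible transitions. `record` is the list of observed transition labels in
# temporal order, `reverse` maps a label to the label of the reversed transition,
# and `T_obs` is the total observation time.
import math
from collections import Counter

def sigma_1(record, reverse, T_obs):
    counts = Counter(record)
    n_total = sum(counts.values())
    K_obs = n_total / T_obs                      # observed dynamical activity
    probs = {l: c / n_total for l, c in counts.items()}
    s = 0.0
    for l, p in probs.items():
        p_rev = probs.get(reverse[l], 0.0)
        if p_rev > 0.0:                          # finite-data guard: skip unseen reverses
            s += p * math.log(p / p_rev)
    return K_obs * s
```

The same empirical counts extend to consecutive pairs of transitions, which is what the transition-based estimators mentioned above exploit; whether the resulting bounds remain valid for lumped transitions is precisely the open issue raised in the previous paragraph.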
To overcome this, we consider an alternative second-order estimator that strictly bounds EPR from below when applied to lumped transitions <cit.>. The estimator can be split into sequence and waiting time contributions: σ_2 ≡σ_2^ℓ + σ_2^t, where σ_2^ℓ = 𝒦_obs/2∑_α, β P(ℓ_α, ℓ_β) lnP( ℓ_α, ℓ_β )/P( ℓ̅_β , ℓ̅_α), and σ_2^t = 𝒦_obs/2∑_α, β P(ℓ_α, ℓ_β) D_KL[P(t|ℓ_α, ℓ_β)||P(t|ℓ̅_β,ℓ̅_α)], with D_KL denoting the Kullback-Leibler divergence between waiting time distributions. Importantly, σ_2 estimates the EPR by forming a lower bound to the EPR that is tighter than that of σ_1 because it retains more information on the underlying dissipative processes. Notice that this estimator is agnostic to the experimental resolution, it can be applied even when it is not known whether the experiment is able to capture all transitions. For applications, the probabilities of transitions ℓ and their waiting times can be directly extracted from experimental measurements, with ℓ representing all transitions between distinguishable states at a given resolution. Here, we illustrate the calculation in model systems, where they are evaluated using the survival operator approach <cit.> (see Appendix <ref> for details). In the following, we observe that the two components of the more specialized estimator, σ_2, play different roles. σ_2^ℓ tends to be larger which makes it more important for estimating the total EPR. It scales with the coarse-graining scale following a power law, with the value of the exponent revealing more information about internal structures. Meanwhile, peaks in σ_2^t provide information on the internal structure such as the characteristic temporal and spatial scales of dissipation. § SCALING We start with σ_2^ℓ, which is typically dominant in the total EPR estimation. To study how dissipation quantifiers change with resolution, we evaluate σ_2^ℓ and σ_1 at different levels of coarse-graining in several model systems. We start with a simple example where the state space is a square lattice. Coarse-graining is done by merging n_B-by-n_B square blocks into a single (coarse-grained) state. Intrablock transitions become hidden, and interblock transitions are lumped into n_E visible transitions. Each interblock transition ℓ_α is composed of n_B microscopic transitions ℓ_α,i which cannot be resolved in measurements. As n_B increases, the number of visible transitions decreases as n_E∝ n_B^-2, causing a scaling in the number of terms in Eqs. (<ref>) and (<ref>). The probabilities involved in these expressions scale with n_B in a nontrivial manner partly due to the correlations between fluxes. Reference <cit.> showed that σ_1 decreases following a power law in terms of the block size due to the statistical properties of the lumped fluxes represented by P(ℓ_α). A similar argument can be made to show that σ_2^ℓ also scales with the block size following a power law. Indeed, while P(ℓ_α) represents steady-state fluxes in the state space, P(ℓ_α , ℓ_β) plays the role of fluxes in the transition space, which is the space spanned by all possible transitions of the system (or equivalently, by pairs of consecutive states). As resolution decreases, lumping in the transition space is analogous to lumping in the state space, except that the dimension of the transition space is much higher. Hence, we expect σ_2^ℓ to decrease following a power law, but the scaling exponent may differ from that of σ_1 since P(ℓ_α , ℓ_β) follows a different statistical structure. As shown in Fig. <ref>B, for a system with i.i.d. 
random rates, both σ_1 (blue) and σ_2^ℓ (red) decrease following power laws with n_E, consistent with the prediction above. σ_2^ℓ in general has a smaller exponent than σ_1, which means that it is not only larger, but its relevance rapidly grows at smaller resolutions. Thus, by accounting for joint probabilities between consecutive coarse-grained transitions, σ_2^ℓ provides a more accurate estimate of the EPR, which can be orders of magnitude better than σ_1 for small resolutions (large n_B). One mechanism behind the different scalings is that the asymmetry P( ℓ_β|ℓ_α)≠ P( ℓ̅_α|ℓ̅_β) captures some of the EPR associated with transitions internal to the coarse-grained state, which is completely discarded in the direct lumping approach (σ_1). Interestingly, the scaling exponent for σ_2^ℓ is approximately 1 (black dashed line), representing a linear scaling with the number of visible transitions, and this same exponent is observed in further examples in the following sections. In other words, the EPR per coarse-grained transition remains constant across resolutions. To see whether similar scaling relations hold in real biochemical systems, we turn to the Brusselator model <cit.>, which describes a class of biochemical oscillators. Here, we study the simplified Brusselator model <cit.> to avoid singular behavior at the first coarse-graining iteration in the original Brusselator <cit.>. The model describes the dynamics of two chemical species X and Y with reactions A [k_1]k_-1 X, B [k_2]k_-2 Y, 2 X+Y [k_3]k_-3 3 X, with k_± i being kinetic constants of each reaction ± i, and A and B molecules held at constant concentrations. We assume mass-action kinetics, where the transition rates are given by the product of kinetic constants and the concentrations of the substrate; for instance, the forward rate of reaction 3 is k_3[X]^2 [Y], with concentrations [X]=N_x/V and [Y]=N_y/V. The state space is a 2D lattice spanned by the number of molecules N_x and N_y, with horizontal, vertical, and diagonal transitions corresponding to the three reactions in Eq. (<ref>). In certain parameter regimes, the system exhibits oscillation, represented by a limit cycle in the state space (Fig. <ref>A). Since the transitions within coarse-grained states are highly directional along the limit cycle, we expect σ_2^ℓ to do a much better job than σ_1 in capturing the internal EPR by accounting for irreversibility associated with the directionality of P( ℓ_β|ℓ_α). Indeed, by applying the same coarse-graining procedure as in the square lattice (Fig. <ref>B), we again find that σ_2^ℓ scales with n_E, with exponent ∼ 0.3 that is much smaller than the exponent for σ_1 (∼ 0.6). The power law persists until n_B reaches the size of the limit cycle, at which scale σ_2^t may reveal more information on the dissipation internal to the coarse-grained states (see next Section). In the systems considered so far, the scaling of σ_2^ℓ persists until coarse-graining approaches the largest scale: the system size for the square lattice and the size of the limit cycle for the Brusselator. This indicates that, provided that the coarse-graining level does not cross characteristic scales, the dissipation has the same scaling structure across distinct resolutions. However, a system can exhibit drastically different dynamics at different scales (either in state space or in physical real space), which leads to σ_2^ℓ scaling with distinct exponents for each regime. To illustrate this, we consider the two-component Schlögl model <cit.>. 
The system has two compartments with identical chemical reactions. The particles can either undergo reactions within compartments or hop between compartments. Thus, the model is analogous to the two-site active Ising model <cit.>. Let Z=X,Y be the particles in the two compartments. The reactions within each compartment are: B [k_2]k_0 Z, 2Z+A [k_3]k_1 3Z, with identical rates in the two compartments. The concentrations [A] and [B] are fixed by chemostats. The compartments exchange particles with rate γ: X [γ]γ Y. At intermediate γ, the system has four locally stable (macroscopic) states [see Fig. <ref>A for the probability distribution in the (N_x, N_y) plane] representing homogeneous/inhomogeneous high/low-density states. Each of the four states is dissipative since the inter-compartment exchange does not commute with reactions within the compartments, resulting in local vortices. The four states are also connected by large-scale dissipative flows that form global vortices. Thus, we expect σ_2^ℓ to exhibit distinct scaling exponents for local and global flows. At high resolution, coarse-graining reduces dissipation predominantly by operating on local flows, while at low resolution it does so by lumping global (large-scale) flows between the four states. Indeed, Fig. <ref>B shows two scaling regimes: at large (small) block size (n_B), σ_2^ℓ decreases much faster (slower). The two regimes cross at n_B^⋆≈ 28, which is approximately the spread of each macroscopic state (grids in Fig. <ref>A). The scaling exponent is larger at low resolutions (larger n_B) because coarse-graining only operates on the transitions between stable states while ignoring their internal structures. Thus, the crossover of scaling regimes reveals the characteristic scales of the dissipative dynamics. A similar crossover can be identified in experimental data as long as the measurement is at a resolution higher than n_B^⋆. § EXTRACTING DISSIPATIVE SCALES Another component of the transition-based estimator is σ_2^t, which captures the irreversibility associated with the asymmetric waiting time distributions between coarse-grained transitions, also called intertransition times. σ_2^t is always non-monotonic with the block size n_B: at the fine-grained level (n_B=1), the waiting time distribution is exponential and symmetric, which leads to σ_2^t=0; at large n_B, the number of coarse-grained transitions is small, which also leads to small σ_2^t (it eventually vanishes when n_B reaches the system size). Thus, σ_2^t is maximized at an intermediate scale n_B, m, where the dynamics within coarse-grained states have maximally asymmetric waiting time distributions upon time reversal. The peak block size n_B,m provides a natural and model-free measurement of the characteristic scale of the dissipative dynamics. For square lattice with i.i.d. random transition rates, σ_2^t starts at zero at the finest scale and reaches a maximum at n_B=2 before decreasing monotonically with n_B (Fig. <ref>C). Since the rates are drawn independently, this model does not exhibit long-range structures. Hence, the peak of σ_2^t falls at the first level of coarse-graining n_B = 2. As resolution decreases, the waiting time distributions become increasingly less asymmetric under time reversal, leading to the decrease in σ_2^t. Furthermore, the fraction of observable EPR approximately collapses for different system sizes due to this homogeneity of the rates. 
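For reference, the block-lumping map used in this and the preceding section is straightforward to implement; the sketch below is our own schematic (not code from the original work) and assumes microscopic states labeled by integer lattice coordinates, with a transition considered visible only when it connects two different n_B×n_B blocks.

```python
# Minimal sketch of the n_B x n_B block lumping on a square lattice: each
# microscopic state (x, y) is assigned to a coarse-grained block, and a
# microscopic transition is "visible" only if it connects different blocks.
# The coordinate labeling is our own illustrative convention.

def block_of(state, n_B):
    x, y = state
    return (x // n_B, y // n_B)

def coarse_grain(transitions, n_B):
    """Map microscopic transitions (i -> j) to visible lumped transitions."""
    visible = {}
    for (i, j) in transitions:
        bi, bj = block_of(i, n_B), block_of(j, n_B)
        if bi != bj:                                   # inter-block: observable
            visible.setdefault((bi, bj), []).append((i, j))
    return visible                                     # lumped label -> constituents
```

The lumped labels (b_i, b_j) play the role of the coarse-grained transitions ℓ_α, with their constituent microscopic transitions ℓ_α,i collected in the corresponding list.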
When a system has more intricate properties and presents an underlying structure, the peak might reflect the spatial scale of such a structure instead of being localized at the first coarse-graining step. To illustrate this, we compute σ_2^t for the simplified Brusselator model with different volumes V, which tunes both the size and the period of the limit cycle. As shown in Fig. <ref>C, σ_2^t is non-monotonic in n_B, with the peak position n_B,m (dashed lines) increasing with the volume V. In order to compare the peak position with the period of the limit cycle, we convert n_B to the characteristic timescale defined as the inverse dynamic activity, τ≡𝒦_obs^-1, which represents the average waiting time between transitions at each coarse-grained level. This converts σ_2^t to a function of τ (Fig. <ref>D), whose peak position τ^⋆ is approximately linear with the oscillation period (inset). In other words, when the resolution is tuned to get maximally asymmetric waiting time distributions, the resulting coarse-grained timescale is related to the oscillation period, which is an internal dissipative timescale. Notably, a non-monotonic relation between the EPR and a similar definition of the coarse-grained timescale has been recently reported in the actomyosin cortex <cit.>, and the peak position was used to extract the dissipative timescale. Our results provide a potential theoretical explanation for this observation and suggest its broader applicability for coarse-graining not only in time but also in state space or real space. § NETWORKS WITH NON-REGULAR TOPOLOGIES Our results on the power-law scaling of σ_2^ℓ and the non-monotonic scaling of σ_2^t are also evidenced in networks of complex topologies. For direct lumping, previous work has shown that the EPR scaling only emerges in networks with a self-similar structure, such as a scale-free network <cit.>. Here we find the scaling of σ_2^ℓ in lattice-embedded scale-free networks <cit.> with an exponent of approximately 1 (Fig. <ref>A), an improvement from the direct coarse-graining approach <cit.>. In contrast to lattice-embedded networks, where the measurement resolution can be captured by the size of blocks used in coarse-graining, many real networks have no apparent embedding, which makes it difficult to define the coarse-graining procedure. For these systems, we describe resolution by assuming that only a random subset of transitions are visible. Although these edges do not necessarily divide the network into equal-sized coarse-grained states, they provide an apparent measure of the irreversibility of the coarse-grained dynamics. We study how σ_2^ℓ and σ_2^t scale with the number of visible transitions. We first consider the Erdős-Rényi model, a canonical model for real-world networks with random topology. It is characterized by two values, the number of vertices and the probability p. A network is constructed by randomly selecting a pair of vertices and creating an edge between them with probability p. Despite the topology being random, the interplay between the two parameters leads to rich phenomena, such as the emergence of a giant connected component through a phase transition <cit.>. Here, we consider Erdős-Rényi networks with randomized transitions rates. By growing the subset of visible edges through random selection, we find an increasing relation between σ_2^ℓ and the number of visible edges. 
When averaged over the order of edge selection, σ_2^ℓ exhibits robust scaling with the number of visible edges with an exponent of approximately 1 (Fig. <ref>B). The same procedure reveals that σ_2^t varies non-monotonically (Fig. <ref>C), consistent with our intuition from the Brusselator model (Section <ref>). The peaks of σ_2^t are associated with an internal notion of dissipative scale, and it remains unclear how to determine it through the topological or dynamical properties of the system since there is no unique way of defining the complexity of a network. As a candidate, we compare the peak of σ_2^t with half the number of cycles in the network (dashed lines of Fig. <ref>C), which are the elementary units of dissipative fluxes in nonequilibrium reaction networks <cit.>. This comparison is also motivated by the fact that a visible edge per cycle causes σ_2^t to vanish and σ_2^ℓ to capture the full EPR <cit.>. We observe that the peaks are roughly related to the number of cycles, which is a factor in defining the internal dissipative scale but not the sole ingredient. A plethora of models that generate random topologies are relevant in the study of real-world networks <cit.>, and properties such as degree distribution and number of cycles can be substantially different among them. To study the effect of network topology, we perform the same analysis in other remarkable networks: Barabási-Albert, Watts-Strogatz, random-regular, and a two-dimensional grid graph with and without boundary conditions. We observe that the scaling behavior of the apparent EPR measured by σ_2^ℓ is robust with respect to topology (Fig. <ref>D), presenting a very similar exponent of ∼ 1. It is striking that these results are robust to the state-space structure as well as to the specific coarse-graining procedure, suggesting the general applicability of this approach to a broad class of systems. § DISCUSSION Our results demonstrate that both the sequence of transitions and the distribution of intertransition times can help extract the entropy production rate from measurements with limited resolutions. In all cases studied here, ranging from chemical reaction systems to networks with complex topologies, σ_2^ℓ scales with resolution following power laws. The scaling exponent is smaller than that of the less-informed estimator σ_1, often close to unity, representing a linear scaling with the number of visible transitions. It would be revealing to explore whether higher-order estimators have even smaller exponents. Although the scaling of σ_2^ℓ can be conceptually understood by generalizing the scaling argument <cit.> for σ_1 from state space to transition space, it remains unclear how the scaling exponent for σ_2^ℓ can be determined quantitatively. Since the exponent for σ_1 depends on the network structure and flux correlations, we hypothesize that the exponent for σ_2^ℓ is related to correlations in transition space, which may be determined through a renormalization group analysis <cit.>. It will be interesting to investigate whether the exponent uncovers more properties of the underlying dissipative dynamics and how it depends on physical properties of the system. Regarding the detection of internal scales, the crossover between multiple scaling regimes of σ_2^ℓ provides a way to detect characteristic length scales in a dissipative system. 
On the other hand, the intertransition-time-based estimator σ_2^t varies non-monotonically with resolution, with its peak reflecting an internal dissipative scale at which the waiting time distributions become maximally asymmetric. Our results show that the σ_2^t peak is related to the period of oscillation in the Brusselator and to the number of cycles in a complex network. However, further studies are needed to elucidate how the network topology and transition dynamics affect the crossover in σ_2^ℓ and the non-monotonicity of σ_2^t. In addition, the power laws with distinct exponents separated by a kink in σ_2^ℓ are reminiscent of the (inverse) energy cascades in turbulent flows <cit.>, where energy is injected at a given scale and separates the cascade into distinct regimes; this similarity could lead to additional insights into dissipative scales. Both σ_2^ℓ and σ_2^t can be readily computed for experimental data: while σ_2^ℓ can be estimated directly from a histogram of visible transitions, σ_2^t requires evaluating the Kullback-Leibler divergence from a finite data set of continuous random variables, for example, with the algorithm in Refs. <cit.>. In addition to estimating the irreversibility σ_2^ℓ+σ_2^t, one can also introduce more coarse-graining levels at post-processing to investigate the dependence of σ_2^ℓ and σ_2^t on resolution. Therefore, these behaviors can be used as a tool to uncover hidden properties. It may be fruitful to combine the approach with experiments in spatially extended dissipative systems such as the actomyosin cortex <cit.> and microtubule active gels <cit.> to reveal more information on the irreversibility across length scales. An alternative approach explored how measuring dissipation from stroboscopic observations, namely snapshots separated by different lag times, also uncovers internal structures <cit.>, in particular dissipative timescales. In a given model, the dissipative timescale and length scale might be connected by simple relations, but universal considerations cannot be drawn at this point and deserve further investigation. We also highlight that, depending on the system, the resolution in time and space are of different relevance. For instance, in the two-compartment Schlögl model, a lower spatial resolution might completely miss the dissipative dynamics inside each metastable state, whereas a lower time resolution might still be able to capture it provided the typical escape time from metastable states is sufficiently long. Understanding the interplay between both notions of resolution can guide optimal strategies for experimental applications. Our empirical results show that at very low resolutions, σ_2^ℓ tends to be much smaller than the true EPR due to the power-law scaling, highlighting that a route to estimate the dissipated energy should include either resolution enhancement or methods that rely on additional information beyond the pairwise statistics of transitions. It will also be interesting to investigate alternative estimators that also account for a priori knowledge on the nature of the hidden structures, such as the possible chemical reactions or the topological state-space structure, these may provide more information on the underlying dissipative dynamics. § ACKNOWLEDGMENTS We thank Junang Li for useful discussion and comments on an early version of the manuscript. The work by PH was supported by the project INTER/FNRS/20/15074473 funded by F.R.S.-FNRS (Belgium) and FNR (Luxembourg). 
This work was initiated at a Physics of Life symposium at the Center for the Physics of Biological Function (NSF PHY-1734030). * § SURVIVAL MATRIX TECHNIQUE To compute σ_2^ℓ and σ_2^t for the coarse-grained system, we use the survival matrix method <cit.> to analytically derive the joint probabilities P(ℓ_α, ℓ_β) = P(ℓ_β|ℓ_α) P(ℓ_α) and the waiting time distributions P(t|ℓ_α, ℓ_β). This amounts to solving a first-passage problem between given transitions. We consider a continuous-time Markov chain whose dynamics is described the rate matrix 𝐑 through the master equation d_t p_t = 𝐑p_t. The off-diagonal element [𝐑]_ij is the transition rate from state j to i (≠ j). The diagonal element [𝐑]_ii = -∑_j≠ i [𝐑]_ji is the escape rate of leaving state i. The goal is to compute the first-passage time distribution between a subset of transitions that are visible. To this end, we introduce the survival matrix 𝐒, which is defined by removing all visible transitions in the off-diagonal elements from the rate matrix 𝐑 while preserving diagonal elements. The survival matrix captures the internal evolution of the system between observable transitions. To be more precise, [exp (𝐒t)]_j,i is the probability of being in the microscopic state j at time t after starting at microscopic state i at time 0, without taking any visible transitions. In this work, only the transitions between coarse-grained states are visible. Here, we compute the first-passage probabilities between the microscopic transitions ℓ_α, i and ℓ_β, j that connect distinct coarse-grained states. They can be used to derive the probabilities of coarse-grained transitions through Eqs. (<ref>) and (<ref>). The visible transitions divide the system into distinct coarse-grained states, which allows applying the survival matrix approach to each coarse-grained state individually. For each coarse-grained state, the survival matrix reads [𝐒]_ij = [𝐑]_ij - δ_ij∑_k [𝐑]_kj, where i,j enumerate all microscopic states inside the coarse-grained state, while k runs over states both inside and outside the coarse-grained state. While the off-diagonal terms capture all the internal transitions, the diagonal ones include both internal and external (i.e. escaping) transitions. Let 𝗌(ℓ) and 𝗍(ℓ) be the source and target states of transition ℓ, respectively. The joint probability of a transition ℓ_β, j and intertransition time t conditioned on the previous transition ℓ_α, i is given by P( ℓ_β,j, t |ℓ_α,i) = [𝐑]_𝗍 ( ℓ_β,j ) , 𝗌 ( ℓ_β,j ) [ e^𝐒 t ]_𝗌 ( ℓ_β,j ) , 𝗍 ( ℓ_α,i ) , which can be marginalized to obtain the conditional transition probability P( ℓ_β,j|ℓ_α,i) = - [𝐑]_𝗍 ( ℓ_β,j ) , 𝗌 ( ℓ_β,j ) [ 𝐒^-1]_𝗌 ( ℓ_β,j ) , 𝗍 ( ℓ_α,i ) . The ratio gives the probability of the intertransition time between two transitions P( t |ℓ_α,i, ℓ_β,j) = P( ℓ_β,j, t |ℓ_α,i)/P( ℓ_β,j|ℓ_α,i). Lastly, it is also necessary to compute the absolute probability of a single transition P(ℓ) = [𝐑]_𝗍 ( ℓ) , 𝗌 ( ℓ ) p_𝗌 ( ℓ )/∑_ℓ' [𝐑]_𝗍 ( ℓ') , 𝗌 ( ℓ' ) p_𝗌 ( ℓ' ). In the present work, we consider that transitions between coarse-grained states ℓ_α cannot be resolved as ℓ_α,i, hence we combine the probabilities above to obtain the quantities involved in σ_2^ℓ and σ_2^t, as described by Eqs. (<ref>) and (<ref>).
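For completeness, the expressions above translate directly into a few lines of linear algebra. The sketch below is our own illustration (using NumPy/SciPy, which are not part of the original work): it assumes a rate matrix R with [R]_ij the rate from j to i and columns summing to zero, a list of the microscopic states inside one coarse-grained state, and transitions represented as (source, target) pairs.

```python
# Minimal numerical sketch of the survival-matrix expressions above (our own code).
# `R` is the full rate matrix with [R]_ij the rate j -> i and columns summing to
# zero; `block` lists the microscopic states inside one coarse-grained state.
import numpy as np
from scipy.linalg import expm

def survival_matrix(R, block):
    # Restrict R to the block: the off-diagonal part then contains only internal
    # (hidden) rates, while the preserved diagonal already carries the full escape
    # rates, so escapes through visible transitions appear as loss terms.
    return R[np.ix_(block, block)]

def prob_next_transition(R, block, ell_in, ell_out):
    """P(ell_out | ell_in): probability that ell_out is the next visible transition."""
    loc = {g: i for i, g in enumerate(block)}        # global -> local index
    S_inv = np.linalg.inv(survival_matrix(R, block))
    s_out, t_out = ell_out                           # ell_out leaves the block
    _, t_in = ell_in                                 # ell_in enters the block
    return -R[t_out, s_out] * S_inv[loc[s_out], loc[t_in]]

def waiting_time_density(R, block, ell_in, ell_out, t):
    """P(t | ell_in, ell_out): intertransition-time density between two transitions."""
    loc = {g: i for i, g in enumerate(block)}
    P_t = expm(survival_matrix(R, block) * t)
    s_out, t_out = ell_out
    _, t_in = ell_in
    joint = R[t_out, s_out] * P_t[loc[s_out], loc[t_in]]
    return joint / prob_next_transition(R, block, ell_in, ell_out)
```

In this convention the restriction of R to the block already preserves the full escape rates on the diagonal, so no explicit subtraction is needed when forming the survival matrix.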
http://arxiv.org/abs/2407.13158v1
20240718045827
HHGT: Hierarchical Heterogeneous Graph Transformer for Heterogeneous Graph Representation Learning
[ "Qiuyu Zhu", "Liang Zhang", "Qianxiong Xu", "Kaijun Liu", "Cheng Long", "Xiaoyang Wang" ]
cs.LG
[ "cs.LG", "cs.DB" ]
Nanyang Technological University Singapore qiuyu002@e.ntu.edu.sg Nanyang Technological University Singapore [2] liang012@e.ntu.edu.sg Nanyang Technological University Singapore qianxion001@e.ntu.edu.sg Nanyang Technological University Singapore kaijun001@e.ntu.edu.sg Nanyang Technological University Singapore c.long@ntu.edu.sg [2] University of New South Wales Australia xiaoyang.wang1@unsw.edu.au [2] § ABSTRACT Despite the success of Heterogeneous Graph Neural Networks (HGNNs) in modeling real-world Heterogeneous Information Networks (HINs), challenges such as expressiveness limitations and over-smoothing have prompted researchers to explore Graph Transformers (GTs) for enhanced HIN representation learning. However, research on GT in HINs remains limited, with two key shortcomings in existing work: (1) A node's neighbors at different distances in HINs convey diverse semantics; for instance, a paper's direct neighbor (a paper) in an academic graph signifies a citation relation, whereas the indirect neighbor (another paper) implies a thematic association, reflecting distinct meanings. Unfortunately, existing methods ignore such differences and uniformly treat neighbors within a given distance in a coarse manner, which results in semantic confusion. (2) Nodes in HINs have various types, each with unique semantics, e.g., papers and authors in an academic graph carry distinct meanings. Nevertheless, existing methods mix nodes of different types during neighbor aggregation, hindering the capture of proper correlations between nodes of diverse types. To bridge these gaps, we design an innovative structure named (k,t)-ring neighborhood, where nodes are initially organized by their distance, forming different non-overlapping k-ring neighborhoods for each distance. Within each k-ring structure, nodes are further categorized into different groups according to their types, thus emphasizing the heterogeneity of both distances and types in HINs naturally. Based on this structure, we propose a novel Hierarchical Heterogeneous Graph Transformer (HHGT) model, which seamlessly integrates a Type-level Transformer for aggregating nodes of different types within each k-ring neighborhood, followed by a Ring-level Transformer for aggregating different k-ring neighborhoods in a hierarchical manner. Extensive experiments are conducted on downstream tasks to verify HHGT's superiority over 14 baselines, with a notable improvement of up to 24.75% in NMI and 29.25% in ARI for node clustering task on the ACM dataset compared to the best baseline. HHGT: Hierarchical Heterogeneous Graph Transformer for Heterogeneous Graph Representation Learning Xiaoyang Wang ================================================================================================== § INTRODUCTION Heterogeneous Information Networks (HINs) <cit.>, also well-known as Heterogeneous Graphs (HGs), consist of multiple types of objects (i.e., nodes) and relations (i.e., edges). They are prevalent in real-world scenarios, ranging from citation networks <cit.>, social networks <cit.> to recommendation systems <cit.>. For example, the academic data shown in Figure <ref>(a) can be represented as an HIN, which contains three types of nodes (i.e., paper, author, subject) and three types of relations (i.e., author-write-paper, paper-belong-subject, paper-cite-paper). 
Recently, there has been a notable surge in research focusing on representation learning for HINs <cit.>, which emerges as a powerful technique for embedding nodes into low-dimensional representations while retaining both graph structures and heterogeneity. Given the success of traditional Graph Neural Networks (GNNs) <cit.> in handling homogeneous graphs (containing only one type of node and relation), researchers are increasingly turning their attention to HIN representation learning using GNNs, known as Heterogeneous Graph Neural Networks (HGNNs). HGNN-based approaches <cit.> often leverage neighbor aggregation strategies to effectively capture and propagate information across diverse types of nodes in HINs. For example, R-GCN <cit.> extends the traditional Graph Convolutional Networks (GCNs) <cit.> by incorporating relation-specific weight matrices, aiming to capture the diverse relations within an HIN. Fu et al. <cit.> propose to incorporate intermediate nodes along meta-paths, using both intra-meta-path and inter-meta-path information for higher-order semantic information aggregation. Although HGNNs have achieved success in modeling real-world HINs, the presence of challenges such as limitations in expressiveness <cit.>, over-smoothing <cit.> and over-squashing <cit.> has driven researchers to investigate Graph Transformers (GTs) <cit.> for enhanced HIN representation learning. For instance, Hu et al. <cit.> propose a heterogeneous Transformer-like attention architecture for neighbor aggregation. Mao et al. <cit.> leverage a local structure encoder and a heterogeneous relation encoder to capture structure and heterogeneity information in HINs. In general, existing GT-based methodologies for HIN representation learning, i.e., HGT-based methods, depicted in Figure <ref>(b), adhere to a typical principle: Given a target node, its k-hop neighborhood (i.e., those nodes within a reachable distance of ≤ k from the target node) is first extracted. Then, a GCN <cit.> or a Transformer <cit.> is utilized to propagate information from these nodes to the target node. Nevertheless, existing HGT-based approaches tend to mix nodes of different types and uniformly treat all nodes within the k-hop neighborhood during neighbor aggregation, leading to potential semantic confusion. In particular, (1) Limitation 1: Neighbors of a target node at different distances in HINs carry varied semantics. Using Figure <ref>(a) as an illustration, paper P_1's direct neighbor, paper P_2, indicates a citation relation. Conversely, the indirect neighbor, paper P_3, implies a thematic connection without a direct citation relation, showcasing different connotations. Regrettably, existing strategies overlook such distinctions by uniformly addressing each neighbor within distance k, i.e., packing P_1, P_2, P_3 together into a single sequence and aggregating them uniformly. This is not desirable since these nodes serve different functions. (2) Limitation 2: Neighbors of a target node with different types also carry distinct semantics. Taking Figure <ref>(a) as an instance, paper P_1's direct neighbors include paper P_2, authors A_1, A_3 and subject S_1. Here, P_2 represents a citation relation, A_1, A_3 reflect authorship relations, while S_1 signifies a topic alignment relation. While existing HGT-based methods consider node types, they typically pack P_2, A_1, A_3, S_1 together as a unified sequence.
This approach is not desirable because it mixes nodes of different types during neighbor aggregation, blurring the distinct functions of papers, authors, and subjects. To overcome these challenges, we propose the following two main designs: (1) Design 1: To distinguish a node's neighbors at varying distances, we introduce an innovative structure called the k-ring neighborhood. This structure specifically refers to nodes whose distance from the target node is exactly k, differentiating it from the commonly known k-hop neighborhood. In essence, we split the k-hop neighborhood into k+1 non-overlapping k-ring neighborhoods, where the nodes in each k-ring neighborhood share the same distance to the target node. As illustrated in Figure <ref>(c), considering k=2, for paper P_1, its neighbors within a distance of 2 can be decomposed into three distinct k-ring neighborhoods: the 0-ring neighborhood {P_1}, the 1-ring neighborhood {P_2,A_1,A_3,S_1}, and the 2-ring neighborhood {P_3,P_4}. Building upon this new structure, we extract diverse k-ring neighborhoods for each node, which can naturally discern different functions and thus preventing semantic confusion. Then, a Ring-level Transformer is designed to aggregate distinct k-ring neighborhoods separately, with aggregation based on the relevance and significance of each k-ring neighborhood to the target node. (2) Design 2: To avoid mixing nodes of different types within each k-ring structure, we further propose a novel (k,t)-ring structure by arranging nodes into different groups based on their types within each k-ring structure. Based on such neighborhood partition, a Type-level Transformer is proposed to separately aggregate neighbors of distinct types for a target node within each k-ring structure, considering the importance of each type to the target node. In Figure <ref>(a), consider the 1-ring neighborhood of node P_1 (i.e., P_2, A_1, A_3, S_1), where nodes of diverse types coexist. We partition this 1-ring neighborhood into three groups based on node types, namely, paper P_2, author A_1, A_3, and subject S_1, with each group carrying unique functions. Then, we apply a Type-level Transformer to aggregate each group separately, rather than treating them as a unified sequence as done by existing HGT-based methods. This approach enables us to mimic the diverse roles of nodes with various types. In summary, for each target node, we extract its neighbors from diverse k-ring neighborhoods, where the nodes within each ring are further grouped according to their types, forming an innovative (k,t)-ring neighborhood structure. Building upon this structure, we introduce a novel Hierarchical Heterogeneous Graph Transformer (HHGT) model. This model seamlessly integrates a Type-level Transformer for aggregating nodes of different types within each k-ring neighborhood separately, followed by a Ring-level Transformer for aggregating different k-ring neighborhoods in a hierarchical manner. The main contributions of our paper are summarized as follows: * For the first time, we design an innovative (k,t)-ring neighborhood structure for HIN representation learning, which emphasizes the heterogeneity of both distances and types in HINs naturally. 
* To the best of our knowledge, we are the first to propose a hierarchical graph transformer model for node representation learning in HINs, which seamlessly integrates a Type-level Transformer for aggregating nodes of distinct types within each k-ring structure separately, followed by hierarchical aggregation utilizing a Ring-level Transformer for different k-ring neighborhoods. * Extensive experimental results on two real-world HIN benchmark datasets demonstrate that our model significantly outperforms 14 baseline methods on two typical downstream tasks. Additionally, the ablation study validates the advantages and significance of considering the heterogeneity of both distances and types in HINs. § RELATED WORK §.§ Shallow Models for HIN Embedding In recent years, a plethora of graph embedding techniques <cit.> have emerged with the goal of mapping nodes or substructures into a low-dimensional space, preserving the connecting structures within the graph. As real-world networks typically consist of various types of nodes and relations <cit.>, research on shallow models for HIN embedding <cit.> has garnered significant attention. Shallow models for HIN embedding can be broadly classified into random walk-based methods <cit.> and first/second-order proximity-based methods <cit.>. For instance, Metapath2vec <cit.> adopts meta-path-guided random walks to acquire the semantic information between pairs of nodes. These methods leverage meta-paths or type-aware network closeness constraints to exploit network heterogeneity for HIN embedding. Despite their contributions, these shallow models lack the ability to effectively capture intricate relations and semantics within HINs, resulting in suboptimal representation learning. §.§ Deep Models for HIN Embedding As deep learning models have shown remarkable success in capturing both structural and content information within homogeneous graphs <cit.>, the research focus has extended to HGs, giving rise to deep models for HINs <cit.>. Deep models for HIN embedding are broadly categorized into two types: meta-path-based deep models <cit.> and meta-path-free deep models <cit.>. Meta-path-based deep models employ meta-paths to aggregate information from type-specific neighborhoods, offering the advantage of capturing higher-order semantic information dictated by selected meta-paths. For example, HAN <cit.> leverages a hierarchical attention mechanism, which considers both node-level attention and semantic-level attention to learn the importance of nodes and meta-paths, respectively. However, these approaches require expert knowledge for meta-path selection, and this selection has a significant impact on model performance. For meta-path-free strategies, Schlichtkrull et al. <cit.> propose to model relational data through relation-aware graph convolutional layers, enabling robust representation learning in HINs without meta-paths. Hu et al. <cit.> introduce an attention mechanism inspired by Transformers, specifically designed for neighbor aggregation. Despite eliminating handcrafted meta-paths, meta-path-free deep models exhibit two key shortcomings: (1) They mix nodes of different types during neighbor aggregation, resulting in a failure to adequately capture the correlations between nodes of different types. (2) They ignore the fact that a node’s neighbors at different distances in HINs carry distinct semantics and thus treat them uniformly during neighbor aggregation, which may lead to semantic confusion and suboptimal performance on downstream tasks.
§ PRELIMINARIES §.§ Problem Definition Definition 3.1. Heterogeneous Information Network <cit.>. A heterogeneous information network (HIN) is formally defined as 𝒢={𝒱,ℰ,𝒞,ℛ}, where 𝒱, ℰ, 𝒞, ℛ represent the set of nodes, edges, node types, and relation types, respectively. In an HIN, each node is associated with a node type in 𝒞 and each edge has its corresponding relation type in ℛ. Each node v∈𝒱 is associated with a feature vector f∈ℝ^d, where d is the feature dimension. An HIN is characterized by the condition |𝒞| + |ℛ| > 2. Definition 3.2. HIN Representation Learning Problem <cit.>. Given an HIN, we aim to learn the node embedding z_v∈ℝ^d for each node v, where d ≪ |𝒱| denotes the embedding dimension. After learning the node representation of HINs, we can use the embeddings obtained for many downstream tasks, including semi-supervised node classification, unsupervised node clustering, etc. §.§ Transformer Encoder Transformer encoder, a core component of Transformer <cit.>, consists of multiple identical layers, each containing two main sub-modules: the Multi-Head Self-Attention (MSA) module and the Feed-Forward Network (FFN) module. Both components incorporate residual connections and Layer Normalization (LN). To simplify the explanation, we just focus on the single-head self-attention module. Given an input sequence ℋ∈ℝ^n× d where n denotes the token number and d denotes the hidden dimension, MSA firstly projects it to query, key and value spaces (namely Q, K, V, respectively), which are written as: Q = ℋ W_q, K = ℋ W_k, V = ℋ W_v, where W_q∈ℝ^d× d_k, W_k∈ℝ^d× d_k, W_v∈ℝ^d× d_v are learnable matrices. After that, it calculates the attention scores by taking the dot product of Q and the transpose of K, normalized by the scaling factor √(d_k): MSA(ℋ) = Softmax(QK^T/√(d_k)) V. Then, the MSA output is passed through FFN with an LN and a residual connection to generate the output of the l-th Transformer layer as: ℋ^(l) = MSA(LN(ℋ^(l-1))) + ℋ^(l-1), ℋ^(l) = FFN(LN(ℋ^(l))) + ℋ^(l). Here, l=1,…,L represents the l-th layer of the Transformer. § METHODOLOGY In this section, we present the details of HHGT model, consisting of two important modules: Ring2Token and TRGT. The overall framework is depicted in Figure <ref>(a). Given an HIN and an integer K, for each target node, we initially utilize Ring2Token to extract multiple k-ring neighborhoods (k∈ [0,K]), spanning from 0-ring neighborhood to K-ring neighborhood, with well-organized nodes partitioned by their types within each k-ring structure, forming the (k,t)-ring neighborhood structure. After the neighborhood partition, we use the TRGT module to learn node representations via the GT layer based on these extracted (k,t)-ring neighborhoods. This involves a Type-level Transformer to aggregate nodes of different types within each k-ring neighborhood, followed by a Ring-level Transformer to aggregate different k-ring neighborhoods hierarchically. After obtaining representations for all nodes via our HHGT model, following previous work <cit.>, we apply the classification head to transform these node representations into the classification results and train HHGT using the cross-entropy loss function. The details of Ring2Token and TRGT modules are discussed as follows. §.§ Ring2Token How to effectively aggregate information from neighbors into a node is critical for designing a powerful HIN representation learning model  <cit.>. 
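As a brief aside, the single-head encoder layer summarized in Section 3.2 can be sketched as follows; this is an illustrative, hedged PyTorch rendering of the Q/K/V projections, the scaled dot-product attention, and the residual layer-normalized updates, not the authors' implementation. The hidden sizes and the two-layer FFN are assumptions; the value projection keeps width d so the residual connection type-checks, as the update equations require.

```python
# Minimal single-head Transformer encoder layer following the MSA/FFN
# equations above; hyper-parameters here are illustrative only.
import math
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, d: int, d_k: int, d_ff: int):
        super().__init__()
        self.W_q = nn.Linear(d, d_k, bias=False)   # Q = H W_q
        self.W_k = nn.Linear(d, d_k, bias=False)   # K = H W_k
        self.W_v = nn.Linear(d, d, bias=False)     # V = H W_v (width d for the residual)
        self.ffn = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(), nn.Linear(d_ff, d))
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.d_k = d_k

    def msa(self, H):                               # single-head self-attention
        Q, K, V = self.W_q(H), self.W_k(H), self.W_v(H)
        att = torch.softmax(Q @ K.transpose(-2, -1) / math.sqrt(self.d_k), dim=-1)
        return att @ V

    def forward(self, H):                           # H: (n_tokens, d)
        H = self.msa(self.ln1(H)) + H               # attention sub-layer with residual
        H = self.ffn(self.ln2(H)) + H               # feed-forward sub-layer with residual
        return H

layer = EncoderLayer(d=128, d_k=128, d_ff=256)
out = layer(torch.randn(5, 128))                    # 5 tokens of width 128
```

Both the Type-level and Ring-level Transformers described below reuse this kind of layer; the central design question is which token sequences to feed it.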
However, existing methods overlook the distinctions between neighbors at different distances and mix nodes of distinct types during neighbor aggregation. To address this limitation, we introduce Ring2Token, which considers neighbor information involving different node types at distinct distances. To grasp the distinctions between neighbors at different distances, we first design the novel k-ring neighborhood structure as follows. Definition 4.1. k-ring Neighborhood. Given a node u, let Γ_k(u)={v∈𝒱 | d(u,v)=k} denote the k-ring neighborhood of u, where d(u,v) refers to the shortest path distance between nodes u and v. The 0-ring neighborhood is the target node, i.e., Γ_0(u)={u}. Take Figure <ref>(a) as an example, where paper P_1 is the target node: paper P_6 is its 1-ring neighbor, while papers P_2, P_3, P_4, P_5 are its 2-ring neighbors. In this case, paper P_6 signifies a citation relation and papers P_2, P_3, P_4, P_5 imply thematic associations. Using the 2-hop neighborhood (including P_2, P_3, P_4, P_5, P_6) mixes nodes at different distances, thereby failing to distinguish the different functions associated with paper P_6 and papers P_2, P_3, P_4, P_5. In contrast, the k-ring separates the 2-hop neighbors into the distinct subsets {P_6} and {P_2, P_3, P_4, P_5}, which naturally discerns the different functions and thus prevents semantic confusion. Meanwhile, each node type carries specific information, embodying distinct concepts. For instance, in Figure <ref>(b), the 1-ring neighborhood of paper P_1 involves nodes of three different types: paper P_6, authors A_1, A_2, A_3, and subject S_1, where papers offer citation relations, authors contribute to the creation of the paper, while subjects provide thematic information. Therefore, it is not suitable to mix nodes of all types together during neighbor aggregation. Motivated by this, we introduce the concept of type-aware k-ring neighborhood (named (k,t)-ring neighborhood) to categorize nodes within each k-ring structure by their types. Definition 4.2. (k,t)-ring Neighborhood. Given a node u, let Γ_k,t(u)={v∈𝒱 | v∈Γ_k(u) ∧𝒞_v = t} denote the (k,t)-ring neighborhood of u, where Γ_k(u) refers to u's k-ring neighborhood and 𝒞_v denotes v's node type. Based on the concept of (k,t)-ring, we can further partition the 1-ring neighborhood of P_1 into three different subsets as Γ_1,1(P_1)={P_6}, Γ_1,2(P_1)={A_1, A_2, A_3}, Γ_1,3(P_1)={S_1}. Then, we can aggregate them separately, enabling us to mimic their distinct roles and avoid mixing different types. To sum up, given an HIN, for each node, Ring2Token extracts and partitions all its neighborhoods with a (k,t)-ring structure. Specifically, given an integer K and the number of node types T, each node u possesses a sequence of k-ring neighborhoods of length K+1. Each k-ring neighborhood can be further divided into a series of (k,t)-ring neighborhoods with a total length of T. These (k,t)-ring sets will be fed into the TRGT module for model training. §.§ TRGT Module Building upon this innovative (k,t)-ring structure, the TRGT module seamlessly integrates a Type-level Transformer for aggregating nodes of different types within each k-ring neighborhood, followed by a Ring-level Transformer for aggregating different k-ring neighborhoods in a hierarchical manner. The diagram of TRGT is shown in Figure <ref>(b) and the details are elaborated below. §.§.§ Type-level Transformer Recall that nodes in HINs come in various types, each representing a distinct concept.
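Before detailing the Type-level Transformer, here is a minimal sketch of the (k,t)-ring partition performed by Ring2Token (Definitions 4.1 and 4.2). The NetworkX-based helper and its name are illustrative assumptions rather than the authors' code; it simply groups neighbors first by exact shortest-path distance and then by node type.

```python
# Sketch: partition the K-hop neighborhood of a target node into
# (k, t)-ring groups, i.e., first by exact shortest-path distance k,
# then by node type t (Definitions 4.1 and 4.2).
from collections import defaultdict
import networkx as nx

def kt_rings(G: nx.Graph, target, K: int):
    dist = nx.single_source_shortest_path_length(G, target, cutoff=K)
    rings = defaultdict(set)                  # (k, t) -> set of nodes
    for v, k in dist.items():
        t = G.nodes[v]["ntype"]
        rings[(k, t)].add(v)
    return rings

# Example: on the toy graph sketched earlier, kt_rings(G, "P1", 2) would give
# (0, "paper") = {"P1"}, (1, "paper") = {"P2"}, (1, "author") = {"A1", "A3"},
# (1, "subject") = {"S1"}, and (2, "paper") = {"P3"}; empty (k, t) groups are
# later represented by zero-filled tokens.
```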
Nevertheless, existing HGT-based methodologies tend to mix nodes of different types by packing all node types into a single sequence and uniformly employing attention over them during neighbor aggregation, which fails to model the distinct roles of nodes with various types, as discussed before. To overcome this limitation, we design a Type-level Transformer to aggregate neighbors by explicitly considering node type difference. Particularly, given a node u and its neighbors, we first adopt the Ring2Token module to divide the neighbors of node u into several subsets, i.e., (k,t)-ring sets. Within each k-ring structure, a series of (k,t)-ring neighborhoods with a total length of T are extracted. For instance, in Figure <ref>(c), three (k,t)-ring sets are formed within P_1's 1-ring neighborhood. Here, nodes in the same (k,t)-ring are associated with the same node type and semantic function. Therefore, for each (k,t)-ring set, the features of all nodes within it are firstly aggregated using an average pooling function to create an embedding token with a specific size d, i.e., x_k,t∈ℝ^d, which explicitly summarizes the information of all nodes with type t within the k-ring. In case of an empty set, the embedding token is filled with zeros, ensuring a consistent size of d. Thus, each k-ring neighborhood can be represented as a sequence of tokens denoted as x_k = {x_k,1, …, x_k,T}∈ℝ^T× d. Then, we aggregate the representations of different subsets by adopting a Type-level Transformer encoder over the sequence x_k, as discussed in Section <ref>. Through this type-aware aggregation approach, our model can explicitly distinguish neighbors with different types and avoid potential semantic confusion. Note that for the 0-ring feature x_0, we employ a Multi-Layer Perceptron (MLP) to convert the embedding dimension from ℝ^T× d to ℝ^1× d. By stacking L Transformer blocks, we derive the final representation for each k-ring neighborhood using a read-out function, denoted as {h_0, h_1, …, h_K}, where h_0 ∈ℝ^1× d and h_k = {h_k,1, …, h_k,T}∈ℝ^T× d. Type-level Attention Mechanism. Here, we further introduce a type-level attention mechanism to serve as the read-out function within each k-ring structure. Attention function can be described as a mapping between a query and a set of key-value pairs, yielding an output. Specifically, the type-level attention function within each k-ring can be defined as follows: α_t = exp(h_0· h_k,t)/∑_i=1^Texp(h_0· h_k,i). Here, h_0∈ℝ^1× d denotes the 0-ring representation and h_k,t∈ℝ^1× d denotes the (k,t)-ring representation after Transformer encoder. α_t ∈ℝ^1 is an attention score, T denotes the number of node types and · denotes the dot product. The final representation of each k-ring neighborhood is calculated as: h_k=∑_t=1^T α_t · h_kt. Finally, Type-level Transformer outputs a sequence of k-ring representations for each node, which later are forwarded into the Ring-level Transformer for representation learning. §.§.§ Ring-level Transformer Each k-ring neighborhood contributes a new layer of information, and their combination provides diverse perspectives, essential for a comprehensive understanding of the HIN. Therefore, the effective collection of information from these k-ring neighborhoods is crucial. To tackle this challenge, we develop the Ring-level Transformer for global aggregation across different k-ring neighborhoods. 
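Before turning to the Ring-level details, a small illustration of the type-level readout just defined, i.e., the softmax over dot products with the 0-ring token followed by the weighted sum. This is a hedged PyTorch sketch; tensor shapes follow the notation above and everything else (function name, example sizes) is an assumption.

```python
# Type-level attention readout: given the 0-ring token h0 (1 x d) and the
# encoded (k, t)-ring tokens hk (T x d), compute alpha_t and the pooled h_k.
import torch

def type_level_readout(h0: torch.Tensor, hk: torch.Tensor) -> torch.Tensor:
    # h0: (1, d); hk: (T, d)
    scores = hk @ h0.squeeze(0)              # (T,) dot products h0 . h_{k,t}
    alpha = torch.softmax(scores, dim=0)     # attention over the T node types
    return (alpha.unsqueeze(1) * hk).sum(0)  # (d,) h_k = sum_t alpha_t h_{k,t}

h0, hk = torch.randn(1, 128), torch.randn(3, 128)   # e.g., T = 3 node types
h_ring = type_level_readout(h0, hk)
```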
For every node, given a sequence of k-ring tokens {h_0, h_1, …, h_K} obtained from Type-level Transformer, with each summarizing the unique information of neighbors belonging to a specific k-ring structure, Ring-level Transformer first leverages Transformer encoder to learn the node representations. After the stacking of L Transformer layers, we obtain the representation of each node, i.e., a sequence of representations {z_0, z_1, …, z_K} where z_k∈ℝ^d. Ring-level Attention Mechanism. Considering the potential diverse and unique impacts of different k-ring neighborhoods, we design the ring-level attention mechanism for information aggregation. Specifically, it calculates attention coefficients by assessing the relations between the 0-ring neighborhood (the node itself) and all other k-ring neighborhoods, which is formulated as: α_k=exp((z_0|| z_k)W^T)/∑_i=1^K exp((z_0|| z_i)W^T), where W ∈ℝ^1× 2d denotes the learnable projection, and || indicates the concatenation operator. Once the attention scores are obtained, they are employed to calculate a linear combination of the corresponding representations, which is written as: z=z_0+∑_k=1^Kα_k·z_k, where K represents the number of rings, z_0 and z_k denote the representations of 0-ring neighborhood and k-ring neighborhood, respectively. Then, we can derive the final representation of each node as z ∈ℝ^d. §.§ Objective Function After obtaining the final representations of all nodes through our HHGT model, we employ the cross-entropy loss to optimize node embeddings in HINs following existing work <cit.>. Specifically, we utilize a classification head to predict the labels of nodes. This prediction results in a predicted label matrix for nodes, denoted by Ŷ∈ℝ^n× |ℒ|, where |ℒ| denotes the number of classes. The cross-entropy loss employed is then described as follows: ℒ = - ∑_i∈ℐ∑_j∈ℒ Y_i,jlnŶ_i,j. Here, ℐ denotes the labeled node set and Y_i,j represents the true label. Using the labeled data, we can refine the model through back-propagation, iteratively adjusting parameters to learn the node representations in HINs. § EXPERIMENTS In this section, we conduct extensive experiments to answer the following research questions: * RQ1: Can HHGT outperform all baselines across various downstream tasks? * RQ2: What do the learned node embeddings represent? Can these embeddings capture the intricate structures and heterogeneity within HINs? * RQ3: How do different modules of HHGT contribute to enhancing the model performance? * RQ4: How do varying hyper-parameters impact the performance of HHGT? §.§ Experimental Settings §.§.§ Datasets. We conduct experiments on two publicly available real-world HIN benchmark datasets (i.e. ACM[<https://dl.acm.org/>] and MAG[<https://www.microsoft.com/en-us/research/project/microsoft-academic-graph/>]), which are widely employed in related works <cit.>. The main statistics of datasets are summarized in Table <ref>, and the details are shown below. * ACM is a subgraph of ACM digital library, which contains 4,025 papers (P), 7,167 authors (A) and 60 subjects (S). All objects have 128-dimensional features. There are two link types, i.e., 13,407 Paper-Author (PA) links and 4,025 Paper-Subject (PS) links between all kinds of objects. Papers (P), the target nodes, are classified into three categories: Computer Network, Data Mining and Database, according to their fields. * MAG is extracted from Microsoft Academic Graph, which contains 4,017 papers (P), 15,383 authors (A), 1,480 institutions (I) and 5,454 fields (F). 
All objects have 128-dimensional features. It also contains four relation types: 3,880 Paper-Paper (PP) links, 40,378 Paper-Field (PF) links, 26,144 Paper-Author (PA) links and 15,468 Author-Institution (AI) links. The target nodes, which are the papers (P), are divided into four groups based on their published venues: Astrophysics, IEEE Journal of Photovoltaics, Journal of Applied Meteorology and Climatology, and Low Temperature Physics. §.§.§ Baselines. To verify the effectiveness of our model, we compare our HHGT with two groups of baselines: shallow model-based methods and deep model-based methods. The former group includes PTE <cit.>, ComplEx <cit.>, HIN2Vec <cit.>, M2V <cit.>, AspEm <cit.>. The latter group can be further divided into (1) HGNN-based models including R-GCN <cit.>, HAN <cit.>, AGAT <cit.>, SHGP <cit.>; (2) GT-based models including one homogeneous GT-based method NAGphormer <cit.> and three heterogeneous GT-based methods including HGT <cit.>, FastGTN <cit.> and HINormer <cit.>. Here, SHGP, PTE, ComplEx, HIN2Vec, M2V and AspEm are unsupervised methods, while R-GCN, HAN, AGAT, GTN, HGT, FastGTN, HINormer and NAGphormer are semi-supervised methods, the same setting as our model. §.§.§ Reproducibility. For the proposed HHGT, we optimize the model with Adam. We set the dropout rate, attention dropout rate, weight decay and head number as 0.01, 0.05, 0.00 and 8 for both datasets, respectively. The learning rate is searched from 1e-4 to 1e-2. For all compared baselines, we employ their publicly released source code and adopt the hyper-parameters recommended in their papers to ensure consistency. For all methods, we set the hidden dimension as 128 for the ACM dataset and 512 for the MAG dataset for a fair comparison. To ensure reproducibility, we include our source code, datasets, as well as the instructions for the selected baselines, in an anonymous repository [<https://anonymous.4open.science/r/HHGT-D78B>]. All the experiments are conducted on a Linux (Ubuntu 18.04.6 LTS) server with one GPU (NVIDIA Tesla V100-SXM2) and two CPUs (Intel Xeon E5-2698 v4). Remarks. Following previous work <cit.>, we employ the cross-entropy loss to optimize node embeddings in HINs. Once our HHGT is trained, we can obtain all node embeddings via a feed-forward pass. Consistent with the settings of prior works <cit.>, we employ node classification and node clustering as downstream tasks to evaluate the quality of the learned node embeddings. This setting allows the learned node embeddings to serve as a universal feature representation, which enables our model to be applied to various tasks without the need for retraining. §.§ Node Classification (RQ1) Settings. The node classification task aims to assign categories to nodes within a network. Following <cit.>, we train a separate linear Support Vector Machine (LinearSVC) <cit.> with 80% of the labeled nodes and predict on the remaining 20% of the data. We repeat the process 10 times and report the average results. Micro-F1 and Macro-F1 scores are adopted to evaluate the effectiveness. Results. The overall experimental results are shown in Table <ref>. As observed, HHGT achieves the best overall performance, which indicates its superior effectiveness.
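As a side note on the protocol just described, a minimal scikit-learn sketch of this evaluation is given below. The embedding matrix and labels are assumed to be precomputed NumPy arrays, and the stratified split is an assumption not stated in the text.

```python
# Sketch of the node-classification protocol: LinearSVC on 80% of the
# labeled nodes, evaluated on the remaining 20%, averaged over 10 runs.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def evaluate_classification(Z: np.ndarray, y: np.ndarray, runs: int = 10):
    micro, macro = [], []
    for seed in range(runs):
        Z_tr, Z_te, y_tr, y_te = train_test_split(
            Z, y, test_size=0.2, random_state=seed, stratify=y)
        clf = LinearSVC().fit(Z_tr, y_tr)
        pred = clf.predict(Z_te)
        micro.append(f1_score(y_te, pred, average="micro"))
        macro.append(f1_score(y_te, pred, average="macro"))
    return float(np.mean(micro)), float(np.mean(macro))
```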
The main reasons for the observed improvement may include: (1) The Type-level Transformer optimally utilizes node type information during neighbor aggregation, enabling the effective capture of proper correlations between nodes of different types; (2) The Ring-level Transformer considers the distinctions between neighbors at different distances, facilitating powerful neighbor aggregation across various hierarchical levels. Additionally, we observe that the second-best method varies across settings and metrics, demonstrating that our model is robust across various settings and metrics. Among the baselines, we can observe that deep model-based approaches generally outperform their shallow model-based counterparts, highlighting the benefits of multi-layered feature extraction in capturing complex information within HINs. §.§ Node Clustering (RQ1) Settings. The node clustering task seeks to group nodes in a network according to common structural and attribute features. Following <cit.>, we adopt an unsupervised learning setting where the input consists solely of node embeddings, and we leverage K-Means to cluster nodes based on their representations generated by the model. The number of clusters for K-Means is set as the category number, and we utilize the same ground truth as employed in the node classification task. NMI and ARI are utilized as evaluation metrics here, following <cit.>. Due to the sensitivity of K-Means to initial centroids, we conduct the process 10 times and report the average results. Results. Table <ref> demonstrates the overall results of all approaches for the node clustering task. As we can see, our HHGT consistently surpasses all baseline methods in node clustering across various HINs, demonstrating substantial improvements in both NMI and ARI metrics. Specifically, HHGT exhibits an improvement of up to 24.75% in NMI and 29.25% in ARI over the top-performing baselines on the ACM dataset. This is because our model simultaneously incorporates the Ring-level Transformer and the Type-level Transformer, where the former aids in capturing the differences between neighbors at different distances while the latter emphasizes the importance of node types during neighbor aggregation. The hierarchical integration of these two Transformers enables the model to synthesize information at multiple levels, enhancing the diversity and richness of node representations. Among the baselines, we can observe that GT-based models generally perform much better than HGNN-based models, demonstrating the advantages of the graph transformer architecture in HIN representation learning. Furthermore, the performance of heterogeneous GT-based methods such as GTN and HGT is better than that of their homogeneous GT-based counterpart NAGphormer in many cases, verifying the importance of considering the relation heterogeneity within HINs. Our model also benefits from this aspect by designing the (k,t)-ring neighborhood structure to emphasize the inherent heterogeneity of both distance and types within HINs. §.§ Embedding Visualization (RQ2) Settings. For a more intuitive comparison, we conduct embedding visualization to represent an HIN in a low-dimensional space. The goal is to learn node embeddings using the HIN representation learning model and project them into a 2-dimensional space. We employ the t-SNE <cit.> technique for visualization, with a specific focus on paper representations on both datasets. Nodes are color-coded based on fields and published venues for ACM and MAG, respectively. Results.
The results are shown in Figure <ref> and Figure <ref>, from which we can find the following phenomena: 1) The shallow model-based methods always show mixed patterns among papers from various venues, lacking clear clustering boundaries. For example, in HIN2Vec, all papers are mixed together in both datasets. This is due to their limited modeling ability to capture intricate structural and semantic relations within HINs. 2) Among all baselines, HINormer and GTN, two HGT-based models, provide more reasonable visualization results. However, some clusters are still mixed with each other, and there is no clear margin between different classes, especially in the ACM dataset, as shown in Figure <ref> and Figure <ref>. 3) In contrast, the visualization of HHGT reveals high intra-class similarity, distinctly separating papers from different published venues with well-defined boundaries. This clustered structure signifies the tight connections and semantic similarities, showcasing the effectiveness of our proposed HHGT model. §.§ Ablation Study (RQ3) In order to understand the impact of different components within the proposed framework on the overall performance, we conduct ablation studies by removing or replacing key modeling modules of HHGT on two datasets. Specifically, we focus on three key modules: (1) the k-ring neighborhood structure and its corresponding Ring-level Transformer for distance heterogeneity modeling, (2) the (k,t)-ring neighborhood structure and its corresponding Type-level Transformer for further type heterogeneity modeling, and (3) the attention-based readout function on top of the two Transformer encoders. By removing or replacing these modules, we can obtain different variants of HHGT as follows: * w/o Ring: In this variant, we replace our k-ring structures with traditional k-hop patterns for neighbor extraction without partitioning them into different distance-based non-overlapping subsets, and the Ring-level Transformer is then utilized upon the extracted hop-based neighborhood structure. * w/o Type: Within each k-ring structure, this variant mixes all neighbors of different node types without further partitioning them into different type-based subsets, and removes the Type-level Transformer module. * w/o ATT: In this variant, we replace the attention-based readout functions defined in Equation (<ref>) and Equation (<ref>) with average pooling functions. The results for node classification and node clustering on both datasets are illustrated in Figure <ref> and Figure <ref>, respectively. Based on the results, we have the following observations: (1) The model performance significantly decreases across both datasets when the k-ring structure is replaced with the traditional k-hop pattern (i.e., HHGT vs. w/o Ring), which emphasizes the importance of the proposed k-ring structure. The reason is that the k-ring structure, integral to HHGT, excels in capturing distance heterogeneity within HINs by effectively differentiating between neighbors at varying distances. (2) HHGT consistently outperforms w/o Type in all metrics on both datasets, which demonstrates the effectiveness of our Type-level Transformer module. The results also highlight the importance of explicitly considering type heterogeneity in HIN representation learning by further partitioning the k-ring into the (k,t)-ring structure.
(3) w/o ATT shows inferior performance compared to HHGT across two downstream tasks and two datasets, indicating that the proposed attention-based readout function is beneficial for learning more general and expressive node representations. §.§ Parameter Study (RQ4) We investigate the sensitivity of HHGT with respect to four key hyper-parameters, i.e., the Ring-level Transformer layer number Lr, the Type-level Transformer layer number Lt, the embedding size d and the number of rings K. The results of node classification with different settings on both datasets are depicted in Figures <ref>-<ref>. Effect of Transformer Layer Lr and Lt. To estimate the sensitivity to the number of Transformer layers, we vary the Ring-level Transformer layer Lr and Type-level Transformer layer Lt in {1,2,3,4,5,6} while keeping the other parameters fixed. For the Ring-level Transformer layer, as shown in Figure <ref>, the best results on both datasets are achieved with Lr=4. Additionally, we observe different trends in performance with increasing Lr on both MAG and ACM datasets, which can be attributed to their unique characteristics and structural differences. For the MAG dataset, increasing Lr initially improves performance by capturing more complex patterns. However, further increases in Lr lead to a slight decline in performance, indicating that the model's capacity becomes too large, resulting in overfitting and over-smoothing. In contrast, on the ACM dataset, performance decreases initially with increasing Lr, but then improves as Lr continues to increase, suggesting that the model begins to capture more relevant patterns as its capacity grows. For the Type-level Transformer layer, as shown in Figure <ref>, we observe similar results for the same reasons. Besides, the best results on both datasets are achieved with different Lt values, since different datasets exhibit distinct characteristics. Effect of Embedding Size d. We vary d in {128, 256, 512, 1024, 2048} to validate the impact of embedding size. Figure <ref> reports the node classification results over both datasets. As observed, in most cases, model performance improves with increasing hidden dimension size, as a larger embedding size generally provides stronger representational power. However, it is interesting to discover that employing high-dimensional representations does not consistently yield optimal results. For instance, the model achieves the optimal Micro-F1 and Macro-F1 when d=128 on the ACM dataset and d=512 on the MAG dataset, respectively. This indicates that adopting a higher-dimensional representation does not guarantee the best performance across all scenarios. Effect of Ring Number K. We vary K from 1 to 10 to analyze the effect of the number of rings, and the node classification results are illustrated in Figure <ref>. As observed, the model achieves its best performance with different K on different datasets, since various HINs display distinct neighborhood configurations. Besides, as K increases, performance gradually improves across all datasets, followed by a slight decline observed with further increments. Though a greater K implies that nodes consider a broader neighborhood, a too-large K may cover the entire network, causing the node's neighborhood to include a significant amount of irrelevant information and even leading to over-fitting. § CONCLUSION In this paper, we study the HIN representation learning problem.
To address it, we introduce an innovative (k,t)-ring neighborhood structure to extract neighbors for each node, aiming to capture the differences between neighbors at distinct distances and with different types. Based on this novel structure, we propose an effective HHGT model, seamlessly integrating a Type-level Transformer for aggregating nodes of different types within each k-ring neighborhood, and a Ring-level Transformer for hierarchical aggregation across multiple k-ring neighborhoods. Experimental results on two real-world datasets demonstrate the advantages of our HHGT model across various downstream tasks.
http://arxiv.org/abs/2407.13082v1
20240718010211
An $\mathrm{NSOP}_{1}$ theory without the existence axiom
[ "Scott Mutchnik" ]
math.LO
[ "math.LO" ]
§ ABSTRACT Answering a question of Dobrowolski, Kim and Ramsey, we find an NSOP_1 theory that does not satisfy the existence axiom. An NSOP_1 theory without the existence axiom Scott Mutchnik ============================================================ § INTRODUCTION One of the core informal questions of model theory is to determine what role stability theory, famously introduced by Morley (<cit.>) and Shelah (<cit.>) to classify the number of non-isomorphic models (of a given size) of a first-order theory, can play in describing theories that are themselves unstable–and that may not even be simple. This project was initiated in large part in celebrated work of Kim (<cit.>) and Kim and Pillay (<cit.>), who showed that the (forking-)independence relation has many of the same properties in simple theories that it does in stable theories, and in fact that these properties characterize simplicity. In a key step towards the non-simple case, Kaplan and Ramsey (<cit.>) then defined Kim-independence over models, which generalizes the definition of forking-independence: Let M ⊨ T. A formula φ(x, b) Kim-divides over M if there is an M-invariant Morley sequence {b_i}_i ∈ω starting with b such that {φ(x, b_i)}_i ∈ω is inconsistent. A formula φ(x, b) Kim-forks over M if it implies a (finite) disjunction of formulas Kim-dividing over M. We write a ^K_M b, and say that a is Kim-independent from b over M, if tp(a/Mb) does not include any formulas Kim-forking over M. In, for example, <cit.>, <cit.>, <cit.>, <cit.>, it is shown that Kim-independence has many of the same properties in theories without the first strict order property–NSOP_1 theories–that forking-independence has in simple theories. It is also shown that NSOP_1 theories have a characterization in terms of properties of Kim-independence, similarly to how simplicity has a characterization in terms of properties of forking-independence. However, in contrast to the case of simple theories, this work only describes the properties of a relation a _M^K b defined when M is a model, leaving open the question of whether independence phenomena in NSOP_1 theories extend from independence over models to independence over sets. The following axiom of first-order theories, though introduced earlier under different names (see, e.g., <cit.>), was defined by Dobrowolski, Kim and Ramsey in <cit.>: A theory T satisfies the existence axiom if no type p ∈ S(A) forks over A. This is equivalent to every type p ∈ S(A) having a global extension that does not fork over A. All simple theories satisfy the existence axiom (<cit.>), while the circular ordering is an example of a dependent theory not satisfying the existence axiom (<cit.>, Example 2.11).
In <cit.> it is shown that, in NSOP_1 theories satisfying the existence axiom, it is possible to extend the definition of Kim-independence a _C^K b from the case where C is a model to the case where C is an arbitrary set, so that the relation a _C^K b will have similar properties to Kim-independence over models in general NSOP_1 theories. Specifically, because, under the existence axiom, (nonforking-)Morley sequences are defined over arbitrary sets, they can be used in place of invariant Morley sequences[It is easy to see that invariant Morley sequences are not defined over arbitrary sets in NSOP_1 theories: for example, any set A such that acl(A) ≠dcl(A).] to define Kim-dividing of a formula over a set: (<cit.>) Let T be a NSOP_1 theory, and let C ⊂𝕄 be an arbitrary set. A formula φ(x, b) Kim-divides over C if there is a (nonforking-)Morley sequence {b_i}_i ∈ω over C starting with b such that {φ(x, b_i)}_i ∈ω is inconsistent. A formula φ(x, b) Kim-forks over C if it implies a (finite) disjunction of formulas Kim-dividing over C. We write a ^K_A b, and say that a is Kim-independent from b over C, if tp(a/Ab) does not include any formulas Kim-forking over C. Dobrowolski, Kim and Ramsey (<cit.>) show that, in an NSOP_1 theory satisfying the existence axiom, the Kim-independence relation ^K as defined above over arbitrary sets satisfies symmetry and the independence theorem (for Lascar strong types), and that Kim-dividing over arbitrary sets satisfies Kim's lemma and coincides with Kim-forking. Chernikov, Kim and Ramsey, in <cit.>, show even more properties of Kim-independence over arbitrary sets in NSOP_1 theories satisfying the existence axiom, including transitivity and witnessing. (All of these properties correspond to the properties of Kim-independence over models proven in general NSOP_1 theories in <cit.>, <cit.>.) Motivated by these results, <cit.>, and later <cit.>, ask: Does every NSOP_1 theory satisfy the existence axiom? We show that the answer to this question is no: There is an NSOP_1 theory not satisfying the existence axiom. This contrasts with the work of Kim, Kim and Lee in <cit.>, where, for the definition of Kim-forking over sets given by Dobrowolski, Kim and Ramsey in <cit.> (Defintion <ref> above), it is shown that no type p ∈ S(A) Kim-forks over A (i.e. contains a formula Kim-forking over A), where A is an arbitrary set in an NSOP_1 theory. In fact, our example is the first known example of a theory without the strict order property, or NSOP theory, not satisfying the existence axiom. Prior to our results, the theory of an algebraically closed field with a generic multiplicative endomorphism, constructed by d'Elbée in <cit.>, was suggested there as a candidate for an NSOP_1 theory without the existence axiom; whether the theory constructed by d'Elbée satisfies the existence axiom was left unresolved in that article. However, as discussed in a personal communication with d'Elbée (<cit.>), it is expected that that theory, which was shown in <cit.> to be NSOP_1, actually does satisfy the existence axiom. Our construction is based on the theory of ω-stable free pseudoplanes, a classical example of a non-one-based ω-stable theory with trivial forking discussed in, say, <cit.>. The theory of ω-stable free pseudoplanes is the theory of undirected graphs of infinite minimum degree without cycles. Variations of this construction appear in e.g. <cit.>, <cit.>, <cit.>, <cit.>. 
Note also that some arguments from the below, including the strategy, in axiom schema T_4, of requiring that n connected sets of pairwise distance greater than 2^n can be colored independently, and the associated Claims <ref> and <ref>, their proofs, and their application in proving claim (***), are formally similar to those of Section 3 of Chernikov, Hrushovski, Kruckman, Krupiński, Moconja, Pillay and Ramsey (<cit.>). § THE CONSTRUCTION Let ℒ be the language with sorts P and O, symbols R_1 and R_2 for binary relations on O, and symbols ρ_1 and ρ_2 for binary relations between P and O. Call an ℒ-structure A copacetic if: (C1) For i = 1, 2, R_i(A) is a symmetric, irreflexive relation on O(A), and the two are mutually exclusive: for a_1, a_2∈ O(A), A R_1(a_1, a_2) ∧ R_2(a_1, a_2). (C2) The relation R_1(A) ∪ R_2(A) has no loops on O(A) (i.e. there are no distinct a_0… a_n-1∈ O(A), n > 2, and i_1… i_n∈{1, 2} so that, for 0 ≤ j ≤ n-1, A R_i_j(a_i, a_i+1 mod n)). (C3) For all b ∈ P(A), a ∈ O(A), exactly one of A ρ_1(b, a) and A ρ_2(b, a) hold. (C4): (a) For each b ∈ P(A), there are no distinct a_1, a_2∈ O(A), so that there there is some a_*∈ O(A) so that A R_1(a_1, a_*) ∧ R_1(a_2, a_*), and A ρ_1(b, a_1) ∧ρ_1(b, a_2). (b) For each b ∈ P(A), there are no distinct a_1, a_2, a_3∈ O(A), so that there there is some a_*∈ O(A) so that A R_2(a_1, a_*) ∧ R_2(a_2, a_*) ∧ R_2(a_3, a_*), and A ρ_2(b, a_1) ∧ρ_2(b, a_2) ∧ρ_2(b, a_3). Let A be copacetic, and let b ∈ P(A), a ∈ O(A), i ∈1, 2. Then there are at most i many a' ∈ O(A) with A R_i(a, a') and A ρ_i(b, a'). For N the number of such a' that are defined, let b^A, j_→^i(a), 1 ≤ j ≤ N, denote the N many such a' (so if i = 2 and there are two such a', make an arbitrary choice of which is b^A,1 _→^i(a) and which is b^A,2 _→^i(a), while if i = 1, then b^A,1 _→^i(a) is the sole such a', if it exists.) If B is copacetic and A ⊂ B is a substructure of B (and is therefore also copacetic), call A closed in B (denoted A ≤ B) if (i) For b ∈ P(A), a ∈ O(A), i ∈{1, 2}, 1 ≤ j ≤ i, if b^B, j_→^i(a) exists, then b^B, j_→^i(a) ∈ A. (ii) Any R_1(B) ∪ R_2(B)-path between nodes of O(A) lies in O(A): for all a_1, a_n∈ O(A), a_2, … a_n-1∈ O(B) which are distinct and distinct from a_1, a_n, and i_1 , …, i_n-1∈{1, 2}, if A R_i_j(a_i, a_i+1) for 1 ≤ j ≤ n-1, then for all 2 ≤ i ≤ n-1, a_i∈ O(A). Call a copacetic ℒ-structure A connected if R_1(A) ∪ R_2(A) forms a connected graph on O(A): for all a, a' ∈ O(A), there are a_2, … a_n-1∈ O(A), i_1 , …, i_n-1∈{1, 2} for some n, so that for a_1 = a, a_n = a', A R_i_j(a_i, a_i+1) for all 1 ≤ j ≤ n-1. So if B is copacetic and A ⊆ B, connectedness of A supplants requirement (ii) of A being closed in B. Call a subset of A a connected component of A if it is a maximal connected subset of O(A). Let O be an undirected graph without cycles and with a 2-coloring of its edges, with R_1, R_2 denoting edges of either color. Let ρ_1, ρ_2⊂ O, O = ρ_1∪ρ_2, ρ_1∩ρ_2 = ∅ be a coloring of the vertices of O so that, for i = {1, 2}, no i+1 distinct vertices of O, lying on the boundary of the same R_i-ball of radius 1 (i.e. they have a common R_i-neighbor), are both colored by ρ_i. Then we call ρ_1, ρ_2 a (C4)-coloring of O. For A copacetic, O' ⊆ O(A), and ρ_1, ρ_2⊆ O' a (C4)-coloring of O', say that b ∈ P(A) induces the (C4)-coloring ρ_1, ρ_2 on O' if for i ∈{1, 2}, ρ_i = {a ∈ O': A ρ_i(b, a)}. The following assumptions on an ℒ-structure A are expressible by a set of first-order sentences: (T_1) (Copaceticity) A is copacetic. 
(T_2) (Completeness) For b ∈ P(A), a ∈ O(A), i ∈{1, 2}, if A ρ_i(b, a), then b^A, j_→^i(a) exists for 1 ≤ j ≤ i. (T_3) (Tree extension) For C ⊆ A, and B ≥ C finite and copacetic with P(B) = P(C), there is an embedding ι: B ↪ A with ι|_C = id_C. (T_4) (Parameter introduction) For any n < ω, finite connected sets O_1, …, O_n⊆ O(A) so that there does not exist an R_1∨ R_2-path of length at most 2^n between a vertex of O_i and a vertex of O_j for any distinct i, j ≤ n (so in particular, O_i and O_j are disjoint), and (C4)-colorings ρ^i_1, ρ^i_2⊆ O_i of O_i for i ≤ n, there are infinitely many b ∈ P(A) so that, for each i ≤ n, b induces the (C4)-coloring ρ^i_1, ρ^i_2⊆ O_i on O_i. Let T^∄=T_1∪ T_2∪ T_3∪ T_4. We claim that T^∄ is consistent. This will follow by induction from the following three claims: (*) If A is copacetic, b ∈ P(A), a ∈ O(A), i ∈{1, 2}, and 1 ≤ j ≤ i, there is a copacetic ℒ-structure A' ⊇ A (i.e. containing A as a substructure) so that b^A', j_→^i(a) exists. (**) If A is copacetic, C ⊆ A, and B ≥ C is finite and copacetic with P(B) = P(C), there is a copacetic structure A' ⊇ A and an embedding ι: B ↪ A' with ι|_C = id_C. (***) If A is copacetic, for any n < ω, finite connected sets O_1, …, O_n⊆ O(A) so that there does not exist an R_1∨ R_2-path of length at most 2^n between a vertex of O_i and a vertex of O_j for any distinct i, j ≤ n, and (C4)-colorings ρ^i_1, ρ^i_2⊆ O_i of O_i for i ≤ n, there is some copacetic A' ⊃ A and p ∈ P(A'\ A) so that, for each i ≤ n, p induces the (C4)-coloring ρ^i_1, ρ^i_2 on O_i. We first show (*). Suppose b^A, j_→^i(a) does not already exist. Let A' = A ∪{*} for * a new point of sort O, and let R_i(A') = R(A) ∪{(a, *), (*, a)}, R_3-i(A')=R_3-i(A), ρ_3-i(A') = ρ_3-i(A) ∪ (P(A) \{b}) ×{*}, ρ_i(A') = ρ_i(A) ∪{(b, *)}. Then A' is copacetic; first, (C1)-(C3) are clearly satisfied. Second, no point of P(A) \{b} can witness a failure of (C4), because A is copacetic and * is not an R_3-i-neighbor of any point of O(A'). Finally, neither can b witness a failure of (C4), because * is not an R_i-3 neighbor of any point of O(A') and a is the unique R_i-neighbor of * in O(A'), that failure must by copaceticity of A be witnessed on the boundary of the R_i-ball of radius 1 centered at a. But because b^A, j_→^i(a) does not already exist, there are fewer than i many R_i-neighbors a' of a in O(A) with A ρ_i(b, a'), so there are at most i many R_i-neighbors a' of A in O(A') with A ρ_i(b, a'); thus the failure of (C4) is not in fact witnessed on this ball's boundary. By construction, we can then choose b^A', j_→^i(a)= *. We next show (**). For this we need the following claim: Let O' be undirected graph without cycles and with a 2-coloring of its edges, with R_1, R_2 denoting edges of either color. Let O ⊂ O' be a subgraph with the induced coloring, and with O ≤ O' (in the sense of (ii), so any path between two vertices of O' consisting of edges of any colors is contained in O). Then any (C4)-coloring ρ_1, ρ_2 of O extends to a (C4)-coloring ρ_1', ρ'_2 of O', which has the following additional property: if v ∈ O' \ O has an R_i-neighbor in O, then v ∈ρ'_3-i. By the assumption on paths in O' between two vertices of O, we may decompose O' \ O into a disjoint union ⊔ O^i of connected subgraphs, so that each O^i has at most one vertex v_i with any neighbors in O; v_i will in fact have only one neighbor w_i in O. For every O^i all of whose vertices have no neighbors in O, let v_i be an arbitrary vertex of O^i. 
Then inductively, we can order each O^i as a tree (i.e., a partial order with linearly ordered downsets) so that v_i is the root, any node's immediate successors are all neighbors of that node and, among any two neighbors, one must be an immediate successor of the other, and each maximal linearly ordered set is well-ordered of order type at most ω. Extend ρ_1, ρ_2 on each O_i as follows, starting from v_i and proceeding by induction. If w_i is an R_i-neighbor of v_i, color v_i by ρ'_3-i; otherwise, color v_i arbitrarily. Then for each vertex v of O^i, each immediate successor of v will be an R_i-neighbor of v for some i ∈{1, 2}; color it by ρ_3 - i. No two distinct vertices v, w with an R_i-distance of exactly 2 in O' can then be colored in ρ^i: if the R_i-path between v and w goes through O, then whichever of v and w is not in O will be colored by ρ'_3-i, and if the R_i-path between v and w stays in O^i, whichever of v and w is not the least in the path will be colored by ρ'_3-i. This shows that ρ'_1, ρ'_2 is a (C4)-coloring, and the additional property is immediate from the construction. Now, take the set A' to be the disjoint union of A and B over C; for notational simplicity we identify A, B and C respectively with their images in A'. Let the R_1, R_2-structure on O(A') be given as follows: the identification on O(A) and O(B) preserves the R_i-structure, and O(A) and O(B) are freely amalgamated over O(C) (i.e. there are no R_i-edges between O(A\ C) and O(B\ C)). This guarantees (C1), and also guarantees (C2) by condition (ii) of C ≤ B. Now let the ρ_1, ρ_2-structure on A' be given as follows: the identifications on A and B preserve the ρ_i-structure, which, by the assumption that P(B) = P(C), leaves us only by way of satisfying (C3) to define the ρ_1, ρ_2-structure on P(A\ C) × O(B \ C), and this will be the following. Let p ∈ P(A\ C); the requirement that the ρ_i-structure on A is preserved tells us the (C4)-coloring that p induces on O(C), and we extend this to a (C4)-coloring on O(B) as in Claim <ref>; here we use condition (ii) of C≤ B. We have defined the full ℒ-structure on A', so it remains to show (C4); in other words, we show that p induces a (C4)-coloring on A' in the case where p ∈ P(C) and in the case where p ∈ P(A\ C). In either case p induces a (C4)-coloring on A and on B, so the only way (C4) can fail is on the boundary of an R_i-ball of radius 1 centered at c ∈ C. But (C4) cannot fail on the boundary of this ball, because it does not fail there in A, and by condition (i) of C ≤ B in the first case, or by the additional clause of Claim <ref> in the second case, any R_i-neighbor of c in B\ C is colored by ρ_3-i in the (C4)-coloring induced by p on B, so (C4) still cannot fail on the boundary of this ball in A'. So A' is a copacetic ℒ-structure containing A, and clearly, the embedding ι as in (**) exists, so this shows (**). Finally, we show (***). For this, we need an additional combinatorial claim: Let O be be undirected graph without cycles and with a 2-coloring of its edges, with R_1, R_2 denoting edges of either color, and let O_1, …, O_n⊆ O be connected subsets so that there does not exist an R_1∨ R_2-path of length at most 2^n between a vertex of O_i and a vertex of O_j for any distinct i, j ≤ n. Let ρ^i_1, ρ^i_2⊆ O_i be a (C4)-coloring of O_i for i ≤ n. Then there is some O' ≤ O containing O_1∪…∪ O_n and (C4)-coloring ρ_1, ρ_2 of O so that, for i ≤ n, ρ_1, ρ_2 restricts to ρ^i_1, ρ^i_2 on O_i. 
Considering each connected component of O individually, we may assume O to be connected. We prove this claim by induction on n. Because we will later consider a variant of this construction where the case n=2 is the main difference, we isolate this case, which is necessary for the induction, as a subclaim: Claim <ref> is true where O is connected and n=2. Let I be the shortest path between O_1 to O_2, which will consist, ordered in the direction from O_1 to O_2, of o_0, … o_n, for o_0∈ O_1, o_n∈ O_2, o_1… o_n-1∈ O \ (O_1∪ O_2), and n ≥ 5. Because O_1 and O_2 are connected, it suffices to color O_1∪ O_2∪ I so as to extend the colorings ρ^i_1, ρ^i_2 on O_i for i = 1, 2, and to preserve the condition of being a (C4)-coloring. So color O_1 by ρ^1_1, ρ^1_2, O_2 by ρ^2_1, ρ^2_2, and color o_1… o_n-1 as follows: first, color o_1 by ρ_3-i, for i so that o_0 is an R_i-neighbor of o_1, and color o_n-1 by ρ_3-i, so that o_n-1 is an R_i-neighbor of o_n. Since we colored O_1 by ρ^1_1, ρ^1_2 and O_2 by ρ^2_1, ρ^2_2, the only vertices o of the O_i so that the condition of being a (C4)-coloring can fail in O_1∪ O_2∪ I on the boundary of the R_i-ball of radius 1 centered at o are o_0 and o_n, and we have just prevented this. It remains to color o_2 , … o_n-2; we color them all by ρ_2. The only remaining vertices o of O_1∪ O_2∪ I where the condition of being a (C4)-coloring can fail on the 1-ball centered at O are now o_1… o_n-1, but the boundaries of the 1-balls centered at those vertices in O_1∪ O_2∪ I have just two points, at least one of which must be colored by ρ_2, because n ≥ 5 and the interval {o_2 , … o_n-2} whose points we colored by ρ_2 contains at least one of the two neighbors of each o_1, …, o_n-1. So the condition of being a (C4)-coloring cannot fail there. We now consider the general case of the induction. Without loss of generality, we may assume that O_n-1 and O_n have minimum distance d > 2^n among any O_i, O_j with i, j distinct. By Lemma <ref>, we may find a (C4)-coloring (ρ')^n-1_1, (ρ')^n-1_2 of O'_n-1=: O_n-1∪ O_n∪ I extending ρ^n-1_1, ρ^n-1_2 and ρ^n_1, ρ^n_2, where I is the shortest path, which will be of length d, between O_n-1 and O_n. Clearly O'_n-1 is connected, so it suffices to show that no two of O_1, … O_n-2, O'_n-1 have distance less than d/2 > 2^n-1, because then we can apply the inductive step. Let i ≤ n-2; it suffices to show that O_i does not have distance less than d/2 from O'_n-1. Suppose otherwise; then it has distance less than d/2 from some point p of O'_n-1, but by definition of O'_n-1, p must have distance at most d/2 from either O_n-1 or O_n. So O_i must have distance less than d/2 + d/2 = d from either O_n-1 or O_n, contradicting minimality of d. By Claims <ref> and <ref>, there is a (C4)-coloring ρ_1, ρ_2 on O(A) extending the (C4)-coloring ρ^i_1, ρ^i_2 on O_i for each i ≤ n. So we can extend A to A'=A ∪ p, where p ∈ P(A'\ A), and define the relations ρ_1 and ρ_2 on O(A) ×{p} so that p induces the (C4)-coloring ρ_1, ρ_2 on O(A). This proves (***), and the consistency of T^∄. We call a copacetic ℒ-structure complete if it satisfies axiom T_2; completeness of A will always supplant condition (i) of A ≤ B. We now prove that T^∄ has the following embedding property: Let 𝕄 be a sufficiently saturated model of T^∄. Let C ≤𝕄, and let B ≥ C be a small copacetic ℒ-structure. Assume additionally that B is complete. Then there is an embedding ι: B ↪𝕄 that is the identity on C and satisfies ι(B) ≤𝕄. 
We prove the lemma in the following two cases: (1) P(B) = P(C) (2) O(B) = O(C) and |P(B \ C)|=1. These cases suffice, because the closed extension C ≤ B can be decomposed into a closed extension satisfying (1) followed by an ascending chain of (obviously closed) extensions satisfying (2), and the property ι(B) ≤𝕄 is clearly preserved under taking unions. To prove case (1), by completeness of B and saturatedness of 𝕄, it suffices to find, for arbitrarily large n < ω, an embedding ι_n: B ↪𝕄 that is the identity on C and so that points of ι_n(B) that are not connected by a path in ι_n(B) have distance at least n in 𝕄. We claim that, for every copacetic B' ≥ C, and any points b, b' ∈ O(B) so that b and b' belong to distinct connected components of B and b' belongs to a connected component of B not containing any point of C, there is some copacetic B”⊇ B' so that C ≤ B”, that consists of B' together with a path of from b to b' of length greater than n. One way to do this is to add an R_1-path of length greater than n (and greater than 2) between b and b', so that all induced (C4)-colorings color the new nodes of this path by ρ_2. By choice of induced colorings, the resulting ℒ-structure B” will satisfy (C4) and condition (i) of C ≤ B, and B” will also satisfy condition (ii) of C ≤ B by the assumption of what connected components of B” have b and b' as members. By repeatedly applying this claim to connect each connected component of B not meeting C to some fixed connected component of B meeting C (or an arbitrary fixed connected component of B, if C is empty), we then obtain a copacetic ℒ-structure B' ⊇ B with P(B') = P(C) and B' ≥ C such that, for any b, b' ∈ B that are not connected in B, either b and b' have finite distance greater than n in B', or they belong to different connected components of B' meeting C. Now suppose ι': B' ↪𝕄 is any embedding restricting to the identity on C; then for ι'(b), ι'(b') ∈ι'(B), that are not connected in ι'(B), either b and b' have finite distance greater than n in B', so ι(b) and ι(b') have distance greater than n in 𝕄 as desired, or b and b' belong to different connected components of B' meeting C, so ι(b) and ι(b') belong to different connected components of ι(B') meeting C, so are not connected in 𝕄 because C ≤𝕄; then ι_n = ι'|_B is as desired. So if we can show that, for any small, copacetic ℒ-structure B'≥ C with P(B') = P(C), there is an embedding ι : B' →𝕄 that is the identity on C (with no additional requirements on ι(B)), we will have proven case (1). Note that if B_0⊆ B', B_0∩ C ≤ B_0, so we may assume that B' is finite by saturatedness of 𝕄. Then we can just apply tree extension. To prove case (2), let ρ_1, ρ_2 be the (C4)-coloring induced by p, where {p} = P(B\ C), on C. Since B is complete and C ≤𝕄, any embedding ι: B →𝕄 with ι|_C = id will satisfy ι(B) ≤𝕄. By saturatedness of 𝕄 it suffices to find, for every finite P_0⊂ P(C) and finite C_0⊂ O(C), some p ∈𝕄\ P_0 inducing the (C4)-coloring ρ_1|_C_0, ρ_2|_C_0 on C_0. We may enlarge C_0 to a finite subset of O(C) consisting of a finite union of n connected sets that are not connected in C, so by condition (ii) of C ≤𝕄, are not connected in 𝕄; in particular, any two of them have distance greater than 2^n in 𝕄. So by parameter introduction, we may find infinitely many p ∈ P(𝕄) inducing the (C4)-coloring ρ_1|_C_0, ρ_2|_C_0 on C_0, so in particular, one not belonging to P_0. 
We now fix a sufficiently saturated 𝕄 T^∄, which we take as the ambient model; for p ∈ P(𝕄), fix the notation, p^j_→^i =:p^𝕄, j_→^i The following quantifier elimination is a corollary of the above: Let A, B ≤𝕄. Then if qftp_ℒ(A)=qftp_ℒ(B), tp(A) = tp(B). For any A ≤𝕄 and a ∈𝕄, there is some (small) B ≤𝕄 so that Aa ⊂ B, and in particular A ≤ B. Because B ≤𝕄 and 𝕄 satisfies completeness, B is complete. So the corollary follows by Lemma <ref> by a back-and-forth argument. Next, we show that T^∄ is NSOP_1. By Theorem 9.1 of <cit.>, it suffices to find an invariant ternary relation between subsets of 𝕄 over models with the following properties: (a) Strong finite character: For all a, b and M T, if a _M b, then there is some formula φ(x, b) ∈tp(a/Mb), where φ(x, y) has parameters in M, such that for every a' φ(x, b), a' _M b. (b) Existence over models: for all a and M T, a _M M. (c) Monotonicity: For all A' ⊆ A, B' ⊆ B, M T, A _M B implies A' _M B'. (d) Symmetry: For all a, b and M T, a _M b implies b _M a (and vice versa). (e) The independence theorem: for any a, a', b, c and M T, a _M b, a' _M c, b _M c and a ≡_M a' implies that there is a” with a”b ≡_M ab, a”c ≡_M a'c and a”_M bc. First, note that for any A, there is A' ⊇ A such that A' ≤𝕄 and for all A”⊇ A with A”≤𝕄, A' ⊆ A”. Denote this cl(A); clearly, this is contained in acl(A) (and it can even be checked that cl(A) = acl(A)). This allows us to define : A _M B if cl(MA) ∩cl(MB) = M, and every (R_1(𝕄) ∪ R_2(𝕄))-path between a point of O(cl(MA) \ M) and O(cl(MB) \ M) contains a point of M. We see this has strong finite character: for a _M b, a_0∈cl(aM), b_0∈cl(bM) the endpoints of the path of length n witnessing this, let φ_1(y, b) isolate b_0 over Mb, φ_2(x) say that x is not within distance n of the closest point of M to some (any) b'_0φ_1(x, b) (which would be required, should there be a (non-self-overlapping) path of length n between x and b_0 going through M), and φ_3(x, b) say that there is a path of length n from x to some point c satisfying φ_1(c, b). Then φ(x, b)=: φ_2(x) ∧φ_3(x, b) suffices. Existence over models, monotonicity, and symmetry are immediate. So it remains to show the independence theorem. By definition of , we may assume a=cl(aM) and similar for a', b, c. We first give an analysis of the structure of pairs a, b with a _M b: Let a = cl(aM), b = cl(bM), and a _M b. Then cl(ab) = a^*b^*, where for P_0= P(ab), a^* is the closure of a under the functions p^j_→^i for p ∈ P_0 (so in particular, P(a^*)=P(a)), and b^* is the closure of b under these same functions. Every point of O(a^*\ a) is connected by some (R_1∨ R_2)-path to a point of a \ M, and a ≤ a^*; similarly for b^*. The second sentence follows from the definition of the p^j_→^i, and the fact that M, a, b ≤𝕄; it remains to show that a^*b^*≤𝕄. But a^*b^* is the closure of ab under the functions p^j_→^i for p ∈ P_0, so is complete, and condition (i) is satisfied. Moreover, O(a)O(b) ≤𝕄 (i.e., condition (ii) of being closed in 𝕄 holds for ab): any path with endpoints in O(a)O(b) and intermediate points all in O(𝕄\ ab) must, because a _M b and M ⊆ a, b, have both endpoints in either of O(a) or O(b), a contradiction because O(a) and O(b) are closed in 𝕄. Because every point of O(a^*)O(b^*) is connected to ab by the second clause of this lemma, condition (ii) of a^*b^*≤𝕄 then follows from O(a)O(b) ≤𝕄. In the following, for P_0⊂ P(𝕄) and A ⊂𝕄, let cl_P_0(A) denote the closure of A under p^j_→^i for p ∈ P_0, as in Lemma <ref>. 
We want to find ã, b̃, c̃ so that ãb̃≡_M ab, ãc̃≡_M a'c, b̃c̃≡_M bc, ã_Mb̃c̃. Let b^*_c= cl_P(b)P(c)(b), c^*_b= cl_P(b)P(c)(c), b^*_a= cl_P(a)P(b)(b), c^*_a'= cl_P(a')P(c)(c), a^*_b= cl_P(a)P(b)(a), a^*_c= cl_P(a')P(c)(a'). We now build a copacetic ℒ-structure extending M as follows. Let A_0 =: ã_0b̃_0c̃_0 where R_1, R_2 is defined on O(ã_0), O(b̃_0), O(c̃_0) so that these are freely amalgamated sets isomorphic to O(a), O(b), O(c) over O(M) (so in particular, O(a) O(b ) ≡^ℒ-qftp_O(M) O(ã_0)O(b̃_0) because a _M b, and so on), and so that P(ã_0), P(b̃_0), P(c̃_0) and ρ_1, ρ_2 are defined so that ã_0b̃_0≡^ℒ-qftp_M ab, ã_0c̃_0≡^ℒ-qftp_M a'c, b̃_0c̃_0≡^ℒ-qftp_M bc; this makes sense because a ≡_M a'. Then A_0 is copacetic, because the only way axiom (C4) can fail is, without loss of generality, when there is p ∈ P(ã_0\ M), o_1∈ O(b̃_0\ M), o_2∈ O(c̃_0\ M) so that o_1, o_2 lie on the boundary of some common R_1-ball and A_0ρ_1(p, o_1) ∧ρ_1(p, o_2). Because O(b̃_0), O(c̃_0) are freely amalgamated over M, the only way the former can happen is for o_1 and o_2 to have a common R_1-neighbor m ∈ M. But then (say) o_1 = p^1_→^1(m) ∈ã_0, contradicting ã_0∩b̃_0 = M. Now extend A_0 to A_1=: ã^*_b̃ã^*_c̃b̃^*_ãb̃^*_c̃c̃^*_ãc̃^*_b̃, where (1)ã^*_b̃= ã_0∪ O(ã^*_b̃\ã_0), ã^*_c̃= ã_0∪ O(ã^*_c̃\ã_0), b̃^*_ã= b̃_0∪ O(b̃^*_ã\b̃_0), b̃^*_c̃= b̃_0∪ O(b̃^*_c̃\b̃_0), c̃^*_ã= c̃_0∪ O(c̃^*_ã\c̃_0), c̃^*_b̃= c̃_0∪ O(c̃^*_b̃\c̃_0), (2) O(ã^*_b̃\ã_0), O(ã^*_c̃\ã_0), O(b̃^*_ã\b̃_0), O(b̃^*_c̃\b̃_0), O(c̃^*_ã\c̃_0), O(c̃^*_b̃\c̃_0) are pairwise disjoint and disjoint from A_0 (3) The only R_i-edges of A_1 are those of A_0, as well as those required to make O(ã^*_b̃) ≡^ℒ-qftp_O(M) O(a^*_b), O(ã^*_c̃) ≡^ℒ-qftp_O(M) O(a'^*_c), O(b̃^*_ã) ≡^ℒ-qftp_O(M) O(b^*_a), O(b̃^*_c̃) ≡^ℒ-qftp_O(M) O(b^*_c), O(c̃^*_ã) ≡^ℒ-qftp_O(M) O(c^*_a'), O(c̃^*_b̃) ≡^ℒ-qftp_O(M) O(c^*_b). (4) Let us extend ρ_1ρ_2 where required so that ã^*_b̃b̃^*_ã≡^ℒ-qftp_M a^*_b b^*_a, ã^*_c̃c̃^*_ã≡^ℒ-qftp_M a'^*_c c^*_a', b̃^*_c̃c̃^*_b̃≡^ℒ-qftp_M b^*_c c^*_b. Because ã^*_b̃b̃^*_ã, ã^*_c̃c̃^*_ã, b̃^*_c̃c̃^*_b̃ are already known to satisfy (C4), a failure of (C4), which would then be witnessed by p ∈ P(A_1), o_1, o_2, o_3∈ O(A_1) (with the last two perhaps equal), could now happen in the following two cases (among the instances where the ρ_i are yet defined), both of which we rule out. First, one of the o_i is in a^†\A_0, where a^† is one of ã^*_b̃ã^*_c̃, b̃^*_ãb̃^*_c̃, c̃^*_ãc̃^*_b̃, and the other is in A_1\ a^†. But then, by the connectedness claim in the second clause of Lemma <ref>, (3) and the construction of O(A_0) tell us that these two o_i must have distance at least 2 apart, so a failure of (C4) cannot arise here. The other case is where o_1, o_2, o_3∈ a^† for a^† one of ã^*_b̃ã^*_c̃, b̃^*_ãb̃^*_c̃, c̃^*_ãc̃^*_b̃, and some two do not belong to the same ã^*_b̃,ã^*_c̃,b̃^*_ã,b̃^*_c̃,c̃^*_ã,c̃^*_b̃; then p ∈ a^† because this is the only way the ρ_i can be defined so far for p, o_1, o_2, o_3. Then these two o_j, say o_1 and o_2 must satisfy ρ_i(p, o_1), ρ_i(p, o_2) for some i ∈{1, 2}, and must be common R_i-neighbors of the evident ã, b̃, c̃ while lying outside of this ã, b̃, c̃. But this is impossible, by the claim a ≤ a^*, b ≤ b^* of Lemma <ref>. Therefore, we do not yet get a failure of (C4), and it remains to extend ρ_1, ρ_2 to get (C3) while maintaining (C4), which we do in the next step. 
(5) By the claim a ≤ a^*, b ≤ b^* in the second clause of Lemma <ref>, we can use Claim <ref> to extend ρ_1, ρ_2 where not yet defined, maintaining (C4) by the description of the R_1, R_2-structure in (3), and thereby producing a copacetic ℒ-structure. Observe also the following: (6) In A_0, there is no path between ã and b̃c̃ not going through M. So by the connectedness claim in Lemma <ref> applied to c^*_b, b^*_c, in A_1 there is no path between ã and c̃^*_b̃b̃^*_c̃ not going through M. (7). By construction, O(ã), O(b̃), O(c̃) are closed in O(A_1). So by the proof of the first clause of Lemma <ref>, ã^*_b̃b̃^*_ã, ã^*_c̃c̃^*_ã, b̃^*_c̃c̃^*_b̃ are each closed in A_1 (and are also each complete). Finally, we extend A_1 to a complete copacetic ℒ-structure, so that (6) and (7) still hold replacing A_1 with A_2; since ã^*_b̃b̃^*_ã, ã^*_c̃c̃^*_ã, b̃^*_c̃c̃^*_b̃ are complete, to preserve (7) we must just preserve clause (ii). We can extend A_1 to a complete copacetic ℒ-structure just by repeated applications of the proof of (ii). But notice that this proceeds just by successively adding nodes in the sort O with exactly one R_1∨ R_2-neighbor in the previously added nodes, so adds no new paths between nodes in O(A_1). So (6) and (7) are in fact preserved. Note that A ≤ B ≤ C implies A ≤ C, so by (4) and (7) (i.e., the version where A_1 is replaced with A_2), M ≤ A_2. Now use Lemma <ref> to obtain an embedding ι: A_2↪𝕄 which is the identity on M, and such that ι(A_2) ≤𝕄. Let ã=ι(ã_0), b̃=ι(b̃_0), c̃=ι(c̃_0). Then (using Corollary <ref>) by (4), (7) (again, the version where A_1 is replaced with A_2), ι(A_2) ≤ M, and the first clause of Lemma <ref>, ãb̃≡_M ab, ãc̃≡_M a'c, b̃c̃≡_M bc. Moreover, by (6) (yet again, the version where A_1 is replaced with A_2) and ι(A_2) ≤ M , ã_Mb̃c̃. So we have proven the independence theorem for , and T^∄ is NSOP_1. It remains to show that T^∄ does not satisfy the existence axiom. Let p(x) be the unique type in the sort P (in one variable) over ∅. Let o ∈ O(𝕄). Then p(x) ⊢ρ_1(x, o) ∨ρ_2(x, o), by (C3). We show that ρ_1(x, o) 2-divides over ∅. We can find an ∅-indiscernible sequence {o_i}_i < ω, o_0 = o, so that the o_i lie on the boundary of some fixed R_1-ball of radius 1. Then {ρ(x, o_i)}_i < ω is 2-inconsistent by (C4), so ρ_1(x, o_i) 2-divides over ∅. That ρ_2(x, o_i) 3-divides over ∅ will be similar. So p ∈ S(∅) forks over ∅, violating the existence axiom. This proves the main theorem of this paper, Theorem <ref>, and answers the main question, Question <ref>. It is not too hard to show that above even satisfies the following axiom: (f) Witnessing: Let a _M b, and let {b_i}_i < ω, b_0 = b, be an M-indiscernible sequence with b_i_M b_0… b_i-1 for all i < ω. Then there is a formula φ(x, b) ∈tp(a/Mb), φ(x, y) ∈ L(M), so that {φ(x, b_i)}_i < ω is inconsistent. Theorem 6.11 of <cit.> says that if an invariant ternary relation between subsets of 𝕄 over models satisfies strong finite character, existence over models, monotonicity, symmetry, the independence theorem and witnessing, then coincides with Kim-independence ^K (Definition <ref>.) So in T^∄, Kim-independence (over models) is given by . Because T^∄ does not satisfy the existence axiom, the results on Kim-independence over sets from <cit.>, <cit.>) do not apply to T^∄. But note that it makes sense to define a _C b the same way as above when (acl(C) =) cl(C) = C, and, when C is any set, define a _C b by a _cl(C) b, giving a ternary relation on sets. 
Over sets, by the same proofs as above, satisfies the analogues of strong finite character, monotonicity, symmetry, the independence theorem (where ≡_C is replaces by ≡^Lstp_C, though this is the same as ≡_cl(C), with respect to which the independence theorem holds for ), and witnessing; moreover satisfies a stronger version of existence over sets: (b') Existence and extension over sets: for any a, C and B' ⊇ B ⊇ C, a _C C, and if a _C B there is a' ≡_B a with a' _C B'. Ramsey, in a presentation at the Banff International Research Station on joint work with Itay Kaplan (<cit.>), defines the assertion that Kim-independence is defined over sets to mean that there is a ternary relation between sets satisfying the analogues over sets of strong finite character, monotonicity, symmetry, the independence theorem, and witnessing, as well as existence and extension over sets, and shows that such is uniquely determined when it exists. So by <cit.>, <cit.>, in any NSOP_1 theory satisfying the existence axiom, Kim-independence is defined over sets in the sense of <cit.>. But also, despite T^∄ not satisfying the existence axiom, Kim-independence is defined over sets in this sense in the theory T^∄, even if the results of <cit.> on Kim-independence as defined by Dobrowolski, Kim, and Ramsey (Definition <ref> above) do not apply in T^∄.[In fact, in T^∄, actually coincides with ^K as defined by <cit.> (i.e. Kim-forking independence with respect to nonforking Morley sequences, Definition <ref> above), so the conclusions of, say, Corollary 4.9 or Theorem 5.6 of <cit.> hold: ^K as defined there is symmetric, and satisfies the independence theorem, over arbitrary sets. (If a _C b and aC is algebraically closed, then tp(a/Cb) implies a finite disjunction of formulas of the form φ(x, b'), where b' ∉cl(C) is a singleton of O or P and φ(x, b') either says that x = b' or implies that there is a path between x and b' with no points in cl(C), and a formula of either kind divides over C with respect to a cl(C)-invariant Morley sequence. On the other hand, a _C b implies a _C M for M some |C|^+-saturated model containing Cb, so a ^K_C M by the independence theorem and |C|^+-saturatedness of M; see the clause ⇒^K of Theorem 9.1 of <cit.>, and the standard argument that forking-dependence on a sufficiently saturated model implies dividing-dependence on that model.) But Proposition 4.9 of <cit.> fails–it is not necessarily true that φ(x, b) forks over C with respect to nonforking Morley sequences if and only if it divides with respect to nonforking Morley sequences. For example, for o ∈ O, p ∈ P, let φ(x, op) =: x = p; then φ(x, op) does not divide with respect to a nonforking Morley sequence over ∅ (i.e. Kim-divide over ∅, as in Definition <ref>), because there are no nonforking Morley sequences over ∅ starting with op), but it implies φ̃(x, p) =: x = p, which does divide with respect to a nonforking Morley sequence over ∅, so φ(x, op) Kim-forks over ∅. Moreover, Kim's lemma, Theorem 3.5 of <cit.>, is also false in T^∄: φ(x, op) divides over ∅ with respect to all nonforking Morley sequences over ∅ starting with op, but not with respect to some nonforking Morley sequence over ∅ starting with op! By way of obtaining an NSOP_1 theory where ^K as defined over sets by <cit.> (Defintion <ref> here) does not, say, satisfy the independence theorem, we expect that, by an extremely tedious verification, T^∄ can be shown to eliminate ∃^∞. 
So by Theorem 5 of <cit.> and Theorem 4.5 of <cit.>, the generic expansion of T^∄ by functions from P to O and from O to p (i.e. the model companion of models of (the Morleyization of) T^∄ expanded by a unary function from sort P to sort O and a unary function from sort O to sort P) will exist and have NSOP_1, and no consistent formula can Kim-divide over ∅, because every nonempty parameter will have an element of O and an element of P in its definable closure, so can be shown to begin no Morley sequence over ∅ as in the original proof that T^∄ does not satisfy the existence axiom. So, using Definition <ref> to define ^K over arbitrary sets, any set will be Kim-independent over ∅ from any nonempty set. ] So the results stated in <cit.> are independent of the previous work on the existence axiom. § QUANTITATIVE RESULTS Doborowolski, Kim, and Ramsey show, in Remark 6.7 of <cit.>, that in a theory without the strict order property (i.e. an NSOP theory), the failure of the existence axiom cannot be witnessed by two formulas that 2-divide: Let T be NSOP, and p ∈ S(A). Then there are no formulas φ_1(x, b_1), φ_2(x, b_2), each of which 2-divide over A, such that p ⊢φ_1(x, b_1) ∨φ_2(x, b_2). In the previous section, we gave an example, T^∄, of an NSOP_1 theory where, for p ∈ S(∅), p ⊢φ_1(x, b) ∨φ_2(x, b), where φ_1(x, b) 2-divides over ∅ and φ_2(x, b) 3-divides over ∅. Here, we describe an example, T^∄^2,2,2, of an NSOP_1 theory where, for p ∈ S(∅), p ⊢φ_1(x, b) ∨φ_2(x, b) ∨φ_3(x, b), where for i = 1, 2, 3 each φ_i(x, b) 2-divides over ∅. This will show the optimality of Fact <ref>. Let ℒ be the language with sorts P and O, symbols R_1, R_2 and R_3 for binary relations on O, and symbols ρ_1, ρ_2, and ρ_3 for binary relations between P and O. Call an ℒ-structure A copacetic^2,2,2 if: (C1)^2,2,2 For i = 1, 2, 3, R_i(A) is a symmetric, irreflexive relation on O(A), and the three are mutually exclusive: for a_1, a_2∈ O(A), A R_i(a_1, a_2) ∧ R_j(a_1, a_2) for i ≠ j ∈{1, 2, 3} (C2)^2,2,2 The relation R_1(A) ∪ R_2(A) ∪ R_3(A) has no loops on O(A) (i.e. there are no distinct a_0… a_n-1∈ O(A), n > 2, and i_1… i_n∈{1, 2, 3} so that, for 0 ≤ j ≤ n-1, A R_i_j(a_i, a_i+1 mod n)). (C3)^2,2,2 For all b ∈ P(A), a ∈ O(A), exactly one of A ρ_1(b, a), A ρ_2(b, a), and ρ_3(b, a) hold. (C4)^2,2,2: For i ∈{1, 2, 3}, there is no b ∈ P(A) and distinct a_1, a_2 on the boundary of some fixed unit R_i-ball so that A ρ_i(b, a_1) ∧ρ_i(b, a_2). We define the closure relation ≤ analogously to the previous section, and construct a theory satisfying the analogous statement to Lemma <ref>, which will be NSOP_1 and satisfy p ⊢ρ_1(x, o) ∨ρ_2(x, o) ∨ρ_3(x, o) for any o ∈ O(𝕄) and p ∈ S(∅) the unique type (in one variable) in sort P over ∅; ρ_i(x, o) will 2-divide over ∅ for i ∈{1, 2, 3}, as desired. The entire proof is a straightforward generalization of the previous section, with a single exception: in place of Subclaim <ref>, we must prove the below subclaim. Let O be an undirected graph without cycles and with a 3-coloring of its edges, with R_1, R_2, R_3 denoting edges of each color. Let ρ_1, ρ_2, ρ_3⊂ O, O = ρ_1∪ρ_2∪ρ_3, ρ_i∩ρ_j = ∅ for i ≠ j ∈{1, 2, 3} be a coloring of the vertices of O so that, for i = {1, 2, 3}, no two distinct vertices of O, lying on the boundary of the same R_i-ball of radius 1 (i.e. they have a common R_i-neighbor), are both colored by ρ_i. Then we call ρ_1, ρ_2, ρ_3 a (C4)^2,2,2-coloring of O. Let O be a connected graph without cycles, and with a 3-coloring of its edges. 
Let O_1, O_2⊂ O be connected subgraphs so that each vertex of O_1 has distance at least 5 from each vertex of O_2. For i = 1, 2, let ρ_1^i, ρ_2^i, ρ_3^i be a (C4)^2,2,2-coloring of O_i. Then there is a (C4)^2,2,2-coloring ρ_1, ρ_2, ρ_3 of some connected set O' containing O_1 and O_2, where for i = 1, 2, ρ_1, ρ_2, ρ_3 extends ρ_1^i, ρ_2^i, ρ_3^i on O_i. As in the proof of Subclaim <ref>, let O'= O_1∪ O_2∪ I where I is the shortest path between O_1 and O_2, and let I consist, ordered in the direction from O_1 to O_2, of o_0, … o_n, for o_0∈ O_1, o_n∈ O_2, o_1… o_n-1∈ O \ (O_1∪ O_2), and n ≥ 5. Again, as in that proof, color O_i by ρ^i_1, ρ^i_2, ρ^i_3 for i ∈{1, 2}, color o_1 by ρ_i where i is such that o_1 is not an R_i-neighbor of o_0, and color o_n-1 by ρ_j where j is such that o_n-1 is not an R_j-neighbor of o_n–then as before, the condition of being a (C4)^2,2,2-coloring cannot fail at the boundary of a unit ball centered at a point of O_1 or O_2. Now let n_even, n_odd, respectively, be the least even and odd numbers less than n-1. Then, because there are three colors available, we can color o_2, …, o_2i, …, o_n_even so that each vertex in the sequence o_0, o_2, …, o_2i, …, o_n_even, o_n_even+2 is colored differently from the previous vertex in that sequence–noting that the colors of o_0 and o_n_even+2 are already decided, alternate the color of o_0 with a color distinct from that of o_0 and o_n_even+2. Similarly, we can color o_3, …, o_2i+1 , … o_n_odd so that each vertex in the sequence o_1, o_3, …, o_2i+1, …, o_n_odd, o_n_odd+2 is colored differently from the previous vertex in that sequence. Coloring the intermediate vertices o_2, … o_n-2 according to these observations, we see that the condition of being a (C4)^2,2,2-coloring cannot fail on the boundary of a unit ball centered at one of o_1, … o_n-1, because the boundary of such a ball will always be colored by two different colors. Note that a similar subclaim would fail, were we to try to use an analogous construction to obtain an NSOP_1 theory where, for p ∈ S(∅), p ⊢φ_1(x, b_1) ∨φ_2(x, b_2) for φ_i(x, b_i) 2-dividing over ∅. § OPEN QUESTIONS The theory T^∄, despite being an NSOP_1 theory that does not satisfy the existence axiom, is not countably categorical. Motivated by this, we ask: Does every countably categorical NSOP_1 (or even NSOP) theory satisfy the existence axiom? Moreover, in T^∄, Kim-independence over models is not just given by the operation acl^eq; see Remark <ref>. In Definition 6.10 of <cit.>, the definition of the property of being one-based is extended (up to elimination of hyperimaginaries) from simple theories to NSOP_1 theories: Let T be an NSOP_1 theory. Then T is one-based if A ^K_M B implies (equivalently, is equivalent to) acl^eq(AM) ∩acl^eq(BM) ⊋ M. So T^∄ is not one-based. (See Example 4.6.1 of <cit.>.) This leads us to ask: Does every one-based NSOP_1 theory satisfy the existence axiom? Recall that, as stated in Remark <ref>, Kim-independence is defined over sets in any NSOP_1 theory satisfying the existence axiom, but is also defined over sets in T^∄ despite T^∄ not satisfying the existence axiom. A final question, motivated by this remark and by the original motivation discussed in the introduction for Question <ref>, the main question of this paper, is asked by Ramsey: (<cit.>, <cit.>) Is Kim-independence defined over sets in every NSOP_1 theory? § ACKNOWLEDGEMENTS The author would like to thank James Freitag, Maryanthe Malliaris and Nicholas Ramsey for many insightful conversations. 
In particular, conversations with Nicholas Ramsey were instrumental in inspiring the discussion in Remark <ref> and Question <ref> of this paper.
http://arxiv.org/abs/2407.13660v1
20240718163824
CogniVoice: Multimodal and Multilingual Fusion Networks for Mild Cognitive Impairment Assessment from Spontaneous Speech
[ "Jiali Cheng", "Mohamed Elgaar", "Nidhi Vakil", "Hadi Amiri" ]
cs.LG
[ "cs.LG", "cs.SD", "eess.AS" ]
§ ABSTRACT Mild Cognitive Impairment (MCI) is a medical condition characterized by noticeable declines in memory and cognitive abilities, potentially affecting individuals' daily activities. In this paper, we introduce CogniVoice, a novel multilingual and multimodal framework to detect MCI and estimate Mini-Mental State Examination (MMSE) scores by analyzing speech data and its textual transcriptions. The key component of CogniVoice is an ensemble multimodal and multilingual network based on “Product of Experts” that mitigates reliance on shortcut solutions. Using a comprehensive dataset containing both English and Chinese languages from the TAUKADIAL challenge, CogniVoice outperforms the best-performing baseline model on MCI classification and MMSE regression tasks by 2.8 and 4.1 points in F1 and RMSE respectively, and can effectively reduce the performance gap across different language groups by 0.7 points in F1[Code: https://github.com/CLU-UML/CogniVoice.]. § INTRODUCTION Mild Cognitive Impairment (MCI) is a medical condition characterized by noticeable declines in memory, language skills, and logical thinking, and is often observed in the elderly population. MCI is considered an early indicator or precursor to dementia, but not all cases of MCI progress to this more severe cognitive decline. Globally, dementia affects 55 million individuals, ranks as the seventh leading cause of mortality with women being disproportionately impacted, and is a major contributor to disability and dependency among the elderly.[www.who.int/news-room/fact-sheets/detail/dementia.] Speech offers a reflection of cognitive status and has been used as a key digital biomarker for cognitive evaluation. This potential underscores the opportunity for integrating speech analysis techniques into cognitive health assessment as follows: given speech samples from elderly individuals describing a select set of pictures, the task is to automatically detect the presence of MCI in these individuals and estimate their Mini-Mental State Examination (MMSE) scores through detailed analysis of their speech; the MMSE is a brief 30-point questionnaire commonly used in clinical settings to assess cognitive function and screen for cognitive loss. Previous research proposed effective unimodal and multimodal techniques to detect cognitive impairment. In <cit.>, the authors employed feature-based and instance-based domain adaptation techniques to overcome data sparsity and improve generalizability for dementia detection. In <cit.>, several wav2vec models <cit.> were fine-tuned on various frequency bands, and eGeMAPS <cit.> acoustic features were combined with silence features for Alzheimer's disease recognition. In <cit.>, transcriptions were transformed into a co-occurrence network with words as nodes, word embeddings as features, and word adjacency in text as edge indicators to better represent the short texts produced in MCI assessment. Topological features from the resulting graph (such as PageRank and centrality) and linguistic features (such as coherence <cit.>) were then integrated to detect MCI.
In <cit.>, a contrastive learning approach was developed for detecting Alzheimer's disease on a small dataset, where negative examples were obtained by randomly removing text segments from transcripts. Other works captured language coherence <cit.>, explored fusion strategies <cit.>, mitigated the influence of the examiner using speaker-discriminative features <cit.>, used speaker recognition and features from silence segments of speech <cit.>, captured linguistic and acoustic characteristics of MCI cases using hand-crafted features derived from domain knowledge <cit.>, jointly trained on speech and text data <cit.>, and used paralinguistic features <cit.> to detect cognitive impairment. Previous works often focused on monolingual models, which demonstrate effectiveness on data from specific languages. Although there have been efforts to develop multilingual systems <cit.>, existing methods tend to overfit to spurious correlations or rely on shortcut solutions, which undermines models' ability to generalize across languages and patient groups. In this work, we develop a novel framework, called CogniVoice, to extract multimodal and multilingual features from speech inputs and their corresponding transcripts to predict MCI and cognitive test outcomes in elderly English and Chinese speakers. The key contribution of the paper is a systematic approach to ensemble multimodal and multilingual networks based on “product of experts” (Section <ref>). The approach effectively encourages learning robust multimodal and multilingual speech and text features, reduces overfitting, and mitigates reliance on shortcut solutions in the above tasks. CogniVoice outperforms the best-performing baseline on MCI classification and MMSE regression tasks by 2.8 and 4.1 points in F1 and RMSE respectively. Existing methods have a significant performance disparity between patient groups of different languages. CogniVoice can effectively reduce this performance gap by 0.7 points in F1. § COGNIVOICE §.§ Overview Given D = {x^0, x^1,…,x^N-1}, a dataset of speech samples of elderly individuals describing a select set of pictures, we aim to train a model f that correctly predicts the presence of MCI in these individuals and estimates their MMSE scores. To diagnose MCI through speech, clinicians pay close attention to several key signs including word-retrieval and repetition issues, changes in language use, difficulties with attention and focus, confusion about time and place, and mood swings. These indicators can often be extracted from speech data using acoustic and textual features. However, due to the limited training data, standard training can easily lead to overfitting, for example through learning spurious correlations between superficial features and class labels. This prioritizes some features, e.g., some of the acoustic features, while ignoring other indicators that clinicians consider, and can lead to significant performance degradation on unseen test samples. To mitigate the overfitting issue and encourage the model to focus on genuine and robust features, we propose to train the model f using Product-of-Experts (PoE) <cit.>. Our approach consists of several components depicted in Figure <ref>. Given an input speech sample, we extract features using transformers for speech and its corresponding text, as well as acoustic features obtained from DisVoice <cit.>. A standard training approach concatenates all features and optimizes the cross-entropy loss; see the multi-feature model in Figure <ref>.
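The following is a minimal PyTorch-style sketch of this fusion-plus-PoE training objective, which is made precise in the next subsection. It is not the released implementation; the feature dimensions, hidden sizes, and module names are illustrative assumptions.

```python
# Minimal sketch of multi-feature fusion with a PoE ensemble loss.
# Dimensions and module names are assumptions, not the released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiFeatureModel(nn.Module):
    """f_Multi: concatenate speech, text, and acoustic features, then an FFN."""
    def __init__(self, d_speech=512, d_text=768, d_acoustic=10, n_classes=2):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(d_speech + d_text + d_acoustic, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, f_s, f_t, f_a):
        # z_M = FFN_M([f_S(S); f_T(T); f_A(S)])
        return self.ffn(torch.cat([f_s, f_t, f_a], dim=-1))

class UniFeatureHead(nn.Module):
    """Predicts labels from a single feature set (speech, text, or acoustic)."""
    def __init__(self, d_in, n_classes=2):
        super().__init__()
        self.ffn = nn.Linear(d_in, n_classes)

    def forward(self, f):
        return self.ffn(f)

def poe_cross_entropy(z_m, uni_logits, labels):
    """Product of Experts in log space: log z_F = log z_M + sum_i log z_i."""
    log_z = F.log_softmax(z_m, dim=-1)
    for z_u in uni_logits:
        # Detach the uni-feature logits so the ensemble loss only updates the
        # multi-feature model; uni-feature heads get their own CE terms.
        log_z = log_z + F.log_softmax(z_u.detach(), dim=-1)
    # cross_entropy renormalizes the product before taking the NLL.
    return F.cross_entropy(log_z, labels)
```

At inference time, only the multi-feature model's logits z_M would be used, matching the description in the next subsection.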
To model potential shortcut signals within each feature set, we propose to train with PoE, applied to the multi-feature model and several uni-feature models, which predict the labels using only one set of features each. Our approach obtains ensemble logits using the multi-feature and uni-feature models; see the element-wise product in Figure <ref>(a). In addition, Figure <ref>(b) shows how PoE can reduce the loss for samples correctly predicted using both multimodal and unimodal inputs, and increase the loss for samples that cannot be accurately predicted using one of the modalities, which allows for identifying and mitigating weaknesses in the model's predictive capabilities. Therefore, the resulting ensemble logits can account for the spurious correlations in the dataset, while also being regularized to mitigate overfitting. §.§ Mitigating Spurious Features with PoE Given a speech S and its transcribed text T, we first feed them into the multi-feature model to yield an initial prediction as follows: z_M = f_Multi(S, T). For clarity, we illustrate the PoE concept using a single uni-feature model as an example: we can model the spurious features in T by feeding the extracted text features into a separate feed-forward network (FFN) to predict class labels: z_T = 𝙵𝙵𝙽_T (f_T(T) ), which is then combined with the multi-feature model's prediction z_M using an element-wise product (in log space for efficiency and stability) to encourage f_Multi to prioritize robust features over spurious ones: log z_F = log z_M + log z_T. Compared to z_M, the resulting prediction z_F contains guidance from f_Uni, which is used to adjust the standard cross-entropy loss for training a more robust MCI classifier. To avoid confusing the multi-feature model f_Multi, the gradient from the uni-feature model is not back-propagated through f_Multi, which is standard practice in previous work on PoE <cit.>. During inference, we only use f_Multi to make predictions. Why PoE works We present two examples in Figure <ref>(b). When the uni-feature model is confident about a correct prediction, PoE can increase the confidence of the ensemble prediction, resulting in a smaller loss. This indicates that the multi-feature model has learned the sample and is therefore adjusted less. However, when the uni-feature model is confident about a wrong prediction, the ensemble prediction has a more balanced confidence toward the possible classes, resulting in a larger loss. In this case, the input sample will have a larger contribution to the update of the model parameters. We also provide two interpretations of PoE: (a) PoE regularizes the model's prediction when the model relies on spurious correlations rather than learning causal relations; this regularization prevents the model from overfitting; and (b) PoE dynamically adjusts the weight of training samples, which can be viewed as a dynamic curriculum that adaptively re-weights the contribution of each training example at every training iteration. §.§ Multilingual and Multimodal Features We extract the following multilingual features of different modalities from a speech sample S and its transcribed text T to facilitate MCI prediction. Transformer-based speech features: speech samples contain rich signals and indicators for MCI. We employ a Whisper encoder <cit.> f_S to extract features from each speech sample S, denoted as f_S(S). Transformer-based text features: other rich features related to MCI may exist in the transcripts.
These include lexical repetitiveness and topical coherence, which may not be directly captured by the speech processing model. To capture such textual features, we transcribe the speech S into text T and employ a language-specific BERT <cit.> encoder f_T to extract features from T, denoted as f_T(T). Acoustic features: we use the following acoustic features in our model: static features for the entire utterance and dynamic features obtained frame-by-frame, from DisVoice. Phonation features related to tone, pitch, loudness, and quality to capture jitter and shimmer. Phonological features compute the posterior probabilities of phonological classes from audio files for several groups of phonemes, considering the mode and manner of articulation. Articulation features compute features related to sounds and their production from continuous speech. Prosody features compute features from continuous speech focusing on duration, energy, and fundamental frequency. Representation learning features are computed using convolutional and recurrent auto-encoders. These features are obtained by training the model to minimize the MSE loss between the decoded and input spectrograms, and extracting features from the last layer of the encoder. Each type of feature is represented by a vector. We compute the average for each feature, resulting in a fixed-size vector f_A(S),[We use a size of 10 in our experiments.] where f_A is the DisVoice feature extractor. Feature fusion: given all the available features f_S(S), f_T(T), and f_A(S), we feed them into an FFN to combine them and compute interactions between different types of features, extracting useful signals for downstream MCI prediction. Overall, the multi-feature model computes z_M = f_Multi(S, T) = 𝙵𝙵𝙽_M ([f_S(S); f_T(T); f_A(S)] ), where z_M ∈ℝ^2 denotes the logits and [;] denotes vector concatenation. Final model: we extend the PoE to all three types of features, including z_S, z_T, and z_A. The resulting PoE is obtained as follows: log z_F = log z_M + log z_S + log z_T + log z_A, after which z_F is used to compute the cross-entropy loss. § EXPERIMENTAL SETUP §.§ Dataset We use the MCI dataset from the TAUKADIAL Challenge 2024 <cit.>. The dataset contains speech data from 129 participants, of whom 62 (48.1%) are English speakers and 67 (51.9%) are Chinese speakers. Each participant is asked to describe three pictures, leading to 387 data points. The age and gender of the participants are also provided. Each description of an image is considered a data point and is labeled for the presence of MCI in an individual, in conjunction with a Mini-Mental State Examination (MMSE) score, which indicates the severity of the MCI. Table <ref> shows the statistics of the dataset. §.§ Baseline and settings We compare our method to the following baselines: * Whisper <cit.>: An encoder-decoder transformer model trained on speech transcription and translation using spectrogram input. We fine-tune the encoder for classification and regression. * Wav2Vec 2.0 <cit.>: A speech representation extraction model, trained using a self-supervised masked-token-prediction objective. * Audio Spectrogram Transformer (AST) <cit.>: A speech encoder using patched spectrogram input. * XLSR-53 <cit.>: A Wav2Vec 2.0 model that learns a shared space for quantized speech units of 53 languages. * XLS-R <cit.>: A Wav2Vec 2.0 model trained with 436k hours of speech in 128 languages. We train all methods for 10 epochs with a learning rate of 1e-5 and an L2 regularizer λ=0.01 on an A100 GPU.
§.§ Evaluation We use F1 and Unweighted Average Recall (UAR) scores to evaluate the classification performance. For the regression task, we use Rooted Mean Squared Error (RMSE) and R2. We also report the performance of different subgroups, including male/female and English/Chinese patients. Due to the small size of the dataset, we adopt stratified k-fold (k=10) cross-validation and compute the average validation score over the k folds for comparison. §.§ Main Results On the MCI prediction task, CogniVoice achieves an F1 score of 84.1, outperforming Whisper-Tiny, AST, XLSR-53, and XLS-R by 2.8, 13.1, 21.2, and 8.6 absolute points, respectively. On the MMSE regression task, CogniVoice achieves an RMSE of 2.34, outperforming Whisper-Tiny, AST, XLSR-53, and XLS-R by 6.96, 1.08, 1.38, and 2.87 absolute points, respectively. CogniVoice reduces prediction disparity across patient groups: As shown in Table <ref>, overall, CogniVoice performs better than the other models across all groups. All the models have higher F1 scores on English than on Chinese. Moreover, all the models perform better for male speakers than for female speakers. Comparing Whisper-Tiny against the XLS-R (0.3B) model, increasing the size of the model does not lead to better MCI classification. Similarly, as shown in Table <ref>, CogniVoice performs better than all other models across all groups on the regression task, where we observe similar patterns in terms of RMSE. Effect of PoE: Results in Table <ref> show that PoE can increase the overall F1 score from 81.7 to 84.1 (+2.4), and UAR from 73.6 to 75.1 (+1.5). Meanwhile, PoE can also increase the worst-case F1 score across subgroups. With PoE, the worst-case F1 scores are 82.3 and 81.7 for the gender and language subgroups, respectively, higher than 80.9 and 79.5 when PoE is not incorporated. Nevertheless, the worst-case UAR degrades when PoE is incorporated. In addition, PoE can reduce the performance disparity across different languages, where the gaps drop from 1.1 to 0.4 and from 19.8 to 19.1 for F1 score and UAR, respectively. Across gender subgroups, however, PoE may cause a higher performance gap than non-PoE. All features contribute: Figure <ref> shows that removing speech features from the model (W/o S) degrades the F1 score by 1.7 points. When text features are removed (W/o T), the F1 score drops from 84.1 to 80.4. Interestingly, the F1 score on English-speaking patients drops by only 0.3 points, a significantly smaller decline than for Chinese-speaking patients, who show a drop of 6.5 points. On the other hand, the English-speaking patients have an increase in UAR of 3.2 points, while the overall UAR and all other groups exhibit degradation. Removing DisVoice features (W/o A) decreases the F1 score by 4.6 points on male patients while increasing it by 0.6 points on female patients. The F1 score increases by 1.2 points on English-speaking patients, while it drops by a large margin of 10.9 points on Chinese-speaking patients, indicating the critical contribution of the language-specific text encoder. These results highlight the contribution of the collected multimodal and multilingual features. § CONCLUSION We proposed a novel model to extract multimodal and multilingual features from speech and transcribed text to predict MCI and the MMSE regression score. Our model uses an ensemble approach based on Product of Experts to effectively learn robust speech and text features and shows reduced prediction disparity across patient groups.
http://arxiv.org/abs/2407.12405v2
20240717083214
Fisheye-Calib-Adapter: An Easy Tool for Fisheye Camera Model Conversion
[ "Sangjun Lee" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.RO" ]
§ ABSTRACT The increasing necessity for fisheye cameras in fields such as robotics and autonomous driving has led to the proposal of various fisheye camera models. While the evolution of camera models has facilitated the development of diverse systems in the field, the lack of adaptation between different fisheye camera models means that recalibration is always necessary, which is cumbersome. This paper introduces a conversion tool for various previously proposed fisheye camera models. It is user-friendly, simple, yet extremely fast and accurate, offering conversion capabilities for a broader range of models compared to existing tools. We have verified that models converted using our system perform correctly in applications such as SLAM. By utilizing our system, researchers can obtain output parameters directly from input parameters without the need for an image set or any recalibration process, thus serving as a bridge across different fisheye camera models in various research fields. We provide our system as an open-source tool available at: https://github.com/eowjd0512/fisheye-calib-adapter Fisheye camera, Fisheye lens, Lens distortion, Camera calibration, Camera model conversion § INTRODUCTION Fisheye cameras are utilized in fields such as robotics and autonomous driving due to their wide Field of View (FoV), which provides more environmental information than pinhole cameras <cit.>. They are particularly used in technologies such as Visual Odometry and Simultaneous Localization And Mapping (SLAM) for estimating the motion of mobile platforms (<cit.>). In the field of computer vision, there is active research on training neural networks using fisheye images (<cit.>). Defining a fisheye camera model is crucial, as it allows various problems to be solved mathematically by utilizing a model that accurately represents the fisheye camera. Recently, several fisheye camera models have been proposed (<cit.>). Each new model's introduction typically leads to new versions of systems that utilize it. Applying datasets to these systems invariably requires the camera model coefficients, thus necessitating a calibration process. Calibration is essential to obtain a fisheye camera model. It involves acquiring the model's coefficients through the correspondence between a known object's actual size and its 2D image pixels. Calibration can be achieved using target-based methods like checkerboards or non-target-based methods leveraging landmark information such as the Manhattan assumption (<cit.>). However, there are scenarios where calibration is not feasible. For instance, if a dataset provided for research only includes coefficients for a specific fisheye camera model and does not provide a calibration dataset, it is impossible to perform calibration. This scenario often occurs when a dataset is proposed that is fixed to a specific model.
Moreover, when attempting experiments with this dataset, challenges arise due to the inability to perform comparisons with systems that have applied new camera models. To address these situations, we propose a fisheye camera model adapter that allows direct conversion among the most commonly used fisheye camera models today: UCM <cit.>, EUCM <cit.>, Double Sphere <cit.>, Kannala-Brandt <cit.>, OCamCalib <cit.>, and the Radial-Tangential distortion model <cit.> for the pinhole camera model. Utilizing projection and unprojection, it requires only the coefficients of the model to be converted and does not necessitate any images for the recalibration process. The contributions of our paper are as follows: * We provide a simple tool for direct adaptation among a variety of fisheye camera models. * We detail the projection and unprojection processes between camera models, and propose methods for optimization, including initialization techniques, cost functions, and Jacobians, making our approach applicable to various systems. * We also offer an interface that facilitates the application of different models. The paper is structured as follows: Section <ref> discusses Related Work, Section <ref> describes the Method, Section <ref> covers Experiments, and Section <ref> concludes the paper. § RELATED WORK §.§ Fisheye Camera Models 3D points are projected onto the image plane by a predefined camera model <cit.>. Conversely, 2D pixel points on the image plane can be restored into 3D rays by unprojecting them using the camera model. The fundamental projective model, known as the pinhole model, can employ the Radial-Tangential <cit.> distortion model to account for distortion. However, this model is designed for cameras with narrow fields of view, such as pinhole cameras, making it less effective for wide-angle coverage. To address this, several models have been defined for fisheye cameras, which are an extension of the pinhole camera. The Kannala-Brandt model <cit.> (widely used in OpenCV <cit.> as a fisheye camera model), also considered the equidistant distortion model for pinhole cameras <cit.>, has been proposed for wide-angle fisheye lens distortion. It focuses on a polynomial model of radial distortion, omitting the tangential distortion term. Similarly, the OCamCalib model <cit.> includes an affine transformation term to correct sensor misalignment. Models such as Radial-Tangential, Kannala-Brandt, and OCamCalib require the estimation of coefficients for polynomial terms, thus necessitating many parameters. The Unified Camera Model (UCM) <cit.> serves as a catadioptric camera model capable of modeling pinhole and fisheye cameras with a single distortion parameter using parabolic, hyperbolic, elliptic, and planar mirrors. However, to perfectly model fisheye cameras, additional distortion parameters were needed, leading to the proposal of the Enhanced Unified Camera Model (EUCM) <cit.> and the Double Sphere model <cit.>. §.§ Fisheye Camera Model Conversion We have recently discovered research related to fisheye camera model conversion <cit.>. This system proposes conversions among three models: Kannala-Brandt, UCM, and OCamCalib. However, conversions are only possible through a dependency relationship from Kannala-Brandt to UCM, and from UCM to OCamCalib. Our proposed method enables direct conversions among the Kannala-Brandt, UCM, EUCM, Double Sphere, OCamCalib, and RT models without any dependencies.
§ METHOD The proposed Fisheye Camera Model Adapter (FCA) undergoes a process as illustrated in Figure <ref>. As shown in Figure <ref>, the FCA receives an input model and exports an output model. The transformation is represented as: i_out = FCA(i_in) where i_in and i_out represent the parameters of the input and output models, respectively. The FCA module unprojects N sampled points based on the given input model and uses these derived 3D points to perform initialization and optimization for the output model. Both the initialization and optimization processes are driven by the projection function of the output model. This modeling is possible under the assumption that the input camera model and the target output camera model were used to capture images in the same environment. The recovered ray from the input model will project to the same location regardless of where along the ray the depth is placed. Furthermore, since the projected point is used for recovering in the input model, its pair information is already known. Utilizing this, arbitrary points on the ray can be matched with their projected counterparts, allowing for the estimation of the output camera model. In our proposed FCA, the camera models handled for conversion are defined as follows. We have defined the most commonly used fisheye camera models recently, which include: Kannala-Brandt, Unified Camera Model, Enhanced Unified Camera Model, Double Sphere, OCamCalib, and the Radial-Tangential distortion model for pinhole cameras. Additionally, we address cases of other variant models in the Custom model section. The subsequent sections of the Method introduce the unprojection function and the initialization and optimization methods for each camera model. Projection function Given intrinsic parameters and coefficients 𝐢, the projection function is defined as π(𝐱,𝐢): ΩΘ, where 𝐱=[x,y,z]^T ∈Ω⊂ℝ^3. Unprojection function The unprojection function converts image coordinates back to a unit-length bearing vector as π^-1(𝐮, 𝐢): Θ𝕊^2, which defines a ray onto which all points corresponding to these image coordinates are projected, where 𝐮=[u, v]^T ∈Θ⊂ℝ^2. Initialization For the parameters χ that we wish to optimize, we solve the linear equation 𝐀χ=𝐛, where 𝐀 and 𝐛 are derived from the projection function. Optimization The parameters of the output model are obtained by solving the following nonlinear least squares: 𝐢^* = argmin_i∑_n^N e(u_n, i)^T Λe(u_n, i), where the residual is 𝐞(u, 𝐢_out) = π(π^-1(𝐮, 𝐢_in),𝐢_out)-𝐮. Since the 𝐮 corresponding pairs used for computing the input and output models are identical, the information matrix Λ utilizes the identity matrix. Additionally, given 𝐮, the condition satisfying the projection and unprojection for ⟨𝐮, 𝐱̃⟩, where 𝐱̃ = π^-1(𝐮, 𝐢_in), is always provided, allowing the error term for the optimization to be defined as: 𝐞(𝐢_out)=π(𝐱̃,𝐢_out)-𝐮. §.§ Unified Camera Model Using the reformulation from <cit.>, the intrinsic parameters and distortion coefficient for the Unified Camera Model (UCM) are defined as follows: 𝐢=(f_x,f_y,c_x,c_y,α), where α∈ [0,1]. The projection and unprojection functions for UCM are defined respectively as: π(𝐱,𝐢) = [ f_xx/α d+(1-α)z; f_yy/α d+(1-α)z ] + [ c_x; c_y ], π^-1(𝐮, 𝐢) = ξ+√(1+(1-ξ^2)r_u^2)/1+r_u^2[ m_x; m_y; 1 ]- [ 0; 0; ξ ], where d = √(x^2+y^2+z^2), r_u^2 = m_x^2 + m_y^2, ξ = α/1-α, [ m_x; m_y ] = [ u-c_x/f_x(1-α); v-c_y/f_y(1-α) ]. 
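To make the conversion procedure concrete, the following is a minimal NumPy/SciPy sketch of the sample-unproject-refit idea described above, using the UCM just defined as both the input and the output model for self-containment. The released tool is a C++/Ceres implementation, and in practice the input and output models differ (KB, EUCM, DS, OCC, or RT), each supplying its own projection and unprojection; the parameter values, image size, and grid resolution below are illustrative assumptions.

```python
# Sketch of FCA-style conversion with the UCM; values are assumptions.
import numpy as np
from scipy.optimize import least_squares

def ucm_project(X, i):
    fx, fy, cx, cy, alpha = i
    x, y, z = X[:, 0], X[:, 1], X[:, 2]
    d = np.sqrt(x**2 + y**2 + z**2)
    rho = alpha * d + (1.0 - alpha) * z
    return np.stack([fx * x / rho + cx, fy * y / rho + cy], axis=-1)

def ucm_unproject(uv, i):
    fx, fy, cx, cy, alpha = i
    xi = alpha / (1.0 - alpha)
    mx = (uv[:, 0] - cx) * (1.0 - alpha) / fx
    my = (uv[:, 1] - cy) * (1.0 - alpha) / fy
    r2 = mx**2 + my**2
    s = (xi + np.sqrt(1.0 + (1.0 - xi**2) * r2)) / (1.0 + r2)
    return np.stack([s * mx, s * my, s - xi], axis=-1)  # unit bearing vectors

# 1) Sample pixels on a uniform grid and unproject them with the input model.
i_in = np.array([380.0, 380.0, 512.0, 384.0, 0.6])      # assumed input UCM
u, v = np.meshgrid(np.linspace(50, 974, 20), np.linspace(50, 718, 20))
uv = np.stack([u.ravel(), v.ravel()], axis=-1)
X = ucm_unproject(uv, i_in)                              # rays x_tilde

# 2) Fit the output model so that projecting x_tilde reproduces the pixels:
#    e(i_out) = pi(x_tilde, i_out) - u, solved by nonlinear least squares.
def residual(i_out):
    return (ucm_project(X, i_out) - uv).ravel()

i0 = np.array([380.0, 380.0, 512.0, 384.0, 0.5])         # inherit f, c; init alpha
i_out = least_squares(residual, i0,
                      bounds=([0, 0, 0, 0, 0.0], [np.inf] * 4 + [0.99])).x
```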
The conditions for projection and unprojection are: Ω = {𝐱∈ℝ^3 | z > -wd}, Θ = ℝ^2 if α≤ 0.5, {𝐮∈ℝ^2 | r_u^2 ≤(1-α)^2/2α-1} if α > 0.5. w = α/1-α, if α≤ 0.5, 1-α/α, if α > 0.5, In the initialization step for UCM, f_x, f_y, c_x and c_y are inherited from existing parameters, and only the distortion coefficient α is initialized. Thus, χ = α, and 𝐀 and 𝐛 are derived from the projection function as follows: 𝐀 = [ (d_1-z_1)𝐮_1-𝐜; ⋯; (d_n-z_n)𝐮_𝐧-𝐜 ], 𝐛 = [ 𝐟⊙𝐩_1-z_1(𝐮_1-𝐜); ⋯; 𝐟⊙𝐩_n-z_n(𝐮_n-𝐜) ], where 𝐮_i=[u_i, v_i]^T, 𝐜=[c_x, c_y]^T, 𝐟=[f_x, f_y]^T, 𝐩_i=[x_i, y_i]^T, and ⊙ is the Hadamard product. The error term for optimization is modified for easier derivation of the Jacobian for the α term: 𝐞(𝐢)=[ f_xx-(u-c_x)(α d+(1-α)z); f_yy-(v-c_y)(α d+(1-α)z) ], and the Jacobians for f_x, f_y, c_x, c_y and α can be easily obtained as: ∂𝐞/∂[f_x,f_y] = [ x 0; 0 y ], ∂𝐞/∂[c_x,c_y] = [ α d+(1-α)z 0; 0 α d+(1-α)z ], ∂𝐞/∂α = [ (z-d)(u-c_x); (z-d)(v-c_y) ]. §.§ Enhanced Unified Camera Model According to the redefinition of Enhanced Unified Camera Model (EUCM) from <cit.>, the parameters for the EUCM include an additional parameter β to those of the UCM: 𝐢=(f_x,f_y,c_x,c_y,α, β) where α∈ [0,1] , β > 0. The projection and unprojection functions for EUCM are as follows: π(𝐱,𝐢) = [ f_xx/α d+(1-α)z; f_yy/α d+(1-α)z ] + [ c_x; c_y ], π^-1(𝐮, 𝐢) = 1/√(m_x^2+m_y^2+m_z^2)[ m_x; m_y; m_z ], where r_u^2 = m_x^2 + m_y^2, d = √(β(x^2+y^2)+z^2), [ m_x; m_y; m_z ] = [ u-c_x/f_x; v-c_y/f_y; 1-βα^2r_u^2/α√(1-(2α-1)β r_u^2)+(1-α) ]. The conditions for projection and unprojection are: Ω = ℝ^3 if α≤ 0.5, {𝐱∈ℝ^3 | z ≥(α-1)(α d+(1-α)z)/2α-1} if α > 0.5, Θ = ℝ^2 if α≤ 0.5, {𝐮∈ℝ^2 | r^2 ≤1/β(2α-1)} if α > 0.5. In EUCM, β is not linearly solved, so it is set to 1 during initialization, similar to UCM, to obtain the value of α. For optimization, EUCM can use the same cost function as UCM since β only affects d (<ref>). The derivatives ∂𝐞/∂ f_x, ∂𝐞/∂ f_y, ∂𝐞/∂ c_x, ∂𝐞/∂ c_y, and ∂𝐞/∂α are the same as in UCM. Additionally, the derivative with respect to β can be obtained as follows: ∂𝐞/∂β= -[ α(x^2+y^2)(u-c_x)/2√(β(x^2+y^2)+z^2); α(x^2+y^2)(v-c_y)/2√(β(x^2+y^2)+z^2) ]. §.§ Double Sphere The parameters for the Double Sphere (DS) model include an additional parameter ξ compared to the UCM: 𝐢=(f_x,f_y,c_x,c_y, α, ξ) where α∈ [0,1]. The projection and unprojection functions for the DS model are as follows: π(𝐱,𝐢) = [ f_xx/α d_2+(1-α)(ξ d_1+z); f_yy/α d_2+(1-α)(ξ d_1+z) ] + [ c_x; c_y ], π^-1(𝐮, 𝐢) = m_zξ+√(m_z^2+(1-ξ^2)r_u^2)/m_z^2+r_u^2[ m_x; m_y; m_z ]- [ 0; 0; ξ ], where d_1 = √(x^2+y^2+z^2), d_2 = √(x^2+y^2+(ξ d_1+z)^2), r_u^2 = m_x^2 + m_y^2, [ m_x; m_y; m_z ] = [ u - c_x/f_x; v - c_y/f_y; 1-α^2r_u^2/α√(1-(2α-1)r_u^2)+1-α ]. The conditions for each function are: Ω = {𝐱∈ℝ^3 | z > -w_2d_1 }, Θ = ℝ^2 if α≤ 0.5, {𝐮∈ℝ^2 | r^2 ≤1/2α-1} if α > 0.5, where w_2 = w_1 + ξ/√(2w_1ξ + ξ^2 + 1), w_1 = α/1-α, if α≤ 0.5, 1-α/α, if α > 0.5. Like EUCM, since ξ cannot be solved linearly, it is set to 0, and initialization for α is conducted similarly to UCM. 
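As a companion to the Double Sphere equations above, the short sketch below writes out the DS projection and unprojection in the same NumPy form as the UCM sketch, so it can be dropped into the same grid-sample-and-refit loop; the (f_x, f_y, c_x, c_y, α, ξ) parameter ordering is an assumption for illustration.

```python
# Double Sphere projection/unprojection, mirroring the equations in the text.
import numpy as np

def ds_project(X, i):
    fx, fy, cx, cy, alpha, xi = i
    x, y, z = X[:, 0], X[:, 1], X[:, 2]
    d1 = np.sqrt(x**2 + y**2 + z**2)
    d2 = np.sqrt(x**2 + y**2 + (xi * d1 + z)**2)
    denom = alpha * d2 + (1.0 - alpha) * (xi * d1 + z)
    return np.stack([fx * x / denom + cx, fy * y / denom + cy], axis=-1)

def ds_unproject(uv, i):
    fx, fy, cx, cy, alpha, xi = i
    mx = (uv[:, 0] - cx) / fx
    my = (uv[:, 1] - cy) / fy
    r2 = mx**2 + my**2
    mz = (1.0 - alpha**2 * r2) / (alpha * np.sqrt(1.0 - (2.0 * alpha - 1.0) * r2)
                                  + 1.0 - alpha)
    s = (mz * xi + np.sqrt(mz**2 + (1.0 - xi**2) * r2)) / (mz**2 + r2)
    return np.stack([s * mx, s * my, s * mz - xi], axis=-1)  # unit bearings
```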
For optimization, the cost function is modified as below: 𝐞(𝐢) = [ f_xx-(u-c_x)(α d_2+(1-α)(ξ d_1+z)); f_yy-(v-c_y)(α d_2+(1-α)(ξ d_1+z)) ], and unlike EUCM, ξ affects all terms except f_x and f_y thus the Jacobians for each term are recalculated as follows: ∂𝐞/∂[f_x,f_y] = [ x 0; 0 y ], ∂𝐞/∂c_x = [ α d_2+(1-α)(ξ d_1+z); 0 ], ∂𝐞/∂c_y = [ 0; α d_2+(1-α)(ξ d_1+z) ], ∂𝐞/∂α = [ (ξ d_1+z-d_2)(u-c_x); (ξ d_1+z-d_2)(v-c_y) ], ∂𝐞/∂ξ = -[ (u-c_x)(α d_1(ξ d_1+z)/d_2+(1-α)d_1); (v-c_y)(α d_1(ξ d_1+z)/d_2+(1-α)d_1) ]. §.§ Kannala-Brandt Camera Model The intrinsic parameters and distortion coefficients for the Kannala-Brandt (KB) model are as follows: 𝐢=(f_x,f_y,c_x,c_y,k_1,k_2,k_3,k_4). The projection function for the KB model is defined as: π(𝐱,𝐢) = [ f_xd(θ)x/r; f_yd(θ)y/r ] + [ c_x; c_y ], where r = √(x^2+y^2), θ = atan2(r,z), d(θ) =θ + k_1θ^3+k_2θ^5+k_3θ^7+k_4θ^9, this projection is applicable in Ω = ℝ^3 ∖{[0,0,0]^T}. The unprojection function for the KB model is: π^-1(𝐮, 𝐢) = [ sin(θ^*) m_x/r_u; sin(θ^*) m_y/r_u; cos(θ^*) ], where θ^* = d^-1(r_u), r_u = √(m_x^2 + m_y^2), [ m_x; m_y ] = [ u-c_x/f_x; v-c_y/f_y ]. This unprojection is valid for all 2D space when d(θ) is monotonic. The angle θ^* satisfying Equation (<ref>) can be obtained using algorithms such as Newton-Raphson <cit.>. For parameter initialization in optimization, f_x, f_y, c_x, and c_y are inherited from the input model, and only the remaining distortion coefficients are initialized. Thus, χ=[k_1,k_2,k_3,k_4]^T, and 𝐀 and 𝐛 are derived from the projection function as follows: 𝐀 = 1_ 2n𝐯 ^T, 𝐛 = [ (𝐮_1 - 𝐜) ⊙ (𝐟⊙𝐩_1)^-1 r_1 - θ_1 1_2; ⋯; (𝐮_n - 𝐜) ⊙ (𝐟⊙𝐩_n)^-1 r_n - θ_n 1_2 ], where 𝐯 = [θ^3, θ^5, θ^7, θ^9] ^T. For optimization, 𝐞(𝐢) can be defined as shown in Equation (<ref>) and the Jacobian for each parameter is defined as follows: ∂𝐞/∂[f_x,f_y,c_x,c_y] = [ d(θ)x/r 0 1 0; 0 d(θ)y/r 0 1 ], ∂𝐞/∂k_1⋯4=∂𝐞/∂d(θ)∂d(θ)/∂k_1⋯4 =[ f_xx/r; f_yy/r ][ θ^3 θ^5 θ^7 θ^9 ]. §.§ OCamCalib Camera Model The parameters for the OCamCalib (OCC) model are as follows: 𝐢=(c,d,e,c_x,c_y,𝐚, 𝐤). Unlike other models, the OCC model does not use focal lengths f_x and f_y but instead employs an affine transformation matrix [c, d; e, 1] for sensor alignment. 𝐚=(a_0, a_1, a_2, a_3, a_4) are the coefficients of the polynomial function used in the unprojection function, and 𝐤=(k_0, k_1,k_2,k_3,k_4, ⋯, k_p) are the coefficients for the polynomial function used in the projection function. The projection and unprojection functions for the OCC model are defined as follows: π(𝐱,𝐢) = [ c d; e 1 ][ d(θ)x/r; d(θ)y/r ] + [ c_x; c_y ], π^-1(𝐮, 𝐢) = 1/√(m_x^2+m_y^2+m_z^2)[ m_x; m_y; m_z ], where r = √(x^2+y^2), r_u = √(m_x^2 + m_y^2), θ = atan(z/r), d(θ) = k_0+k_1θ+k_2θ^2+k_3θ^3 ⋯ k_pθ^p, [ m_x; m_y ] = [ c d; e 1 ]^-1[ u-c_x; v-c_y ] , m_z =a_0+a_1r_u+a_2r_u^2+a_3r_u^3+a_4r_u^4. As in the original paper <cit.>, initialization sets c=1, d=0 and e=0 focusing on 𝐚 because the order of the polynomial function used in the unprojection is experimentally determined to be 4, while the order of the polynomial function for projection is not specified. Therefore, 𝐀 and 𝐛 are also derived differently, using the unprojection function: χ = [a_0,a_1,a_2,a_3,a_4]^T, 𝐀 = 1_ 2n𝐨 ^T, 𝐛 = [ z_1 (𝐮_1 - 𝐜) ⊙𝐩_1^-1; ⋯; z_n (𝐮_n - 𝐜) ⊙𝐩_n^-1 ], where 𝐨=[1, r_u, r_u^2, r_u^3, r_u^4]^T. The error term for optimization, derived from the unprojection function, is: 𝐞(𝐢)=[ (u-c_x)-d(v-c_y)-m_z(c-de)x/z; e(u-c_x)+c(v-c_y)-m_z(c-de)y/z ]. 
Given that m_z is a function of (c, d, e) and goes up to the 8th degree, linear approximation for the Jacobian is challenging. Therefore, using c=1, d=0 and e=0 the cost function is redefined as: 𝐞(𝐢)=[ (u-c_x)-m̃_z x/z; (v-c_y)-m̃_z y/z ], where r̃_u = √(m̃_x^2 + m̃_y^2), m̃_z =a_0+a_1r̃_u+a_2r̃_u^2+a_3r̃_u^3+a_4r̃_u^4, [ m̃_x; m̃_y ] = [ 1 0; 0 1 ][ u-c_x; v-c_y ]. Since the parameters (c,d,e) are not directly optimized, their role in correcting sensor misalignment through affine transformation is expected to be compensated by the parameters c_x and c_y Thus, the final Jacobian for OCC's parameters is defined accordingly, ∂𝐞/∂[c_x,c_y] = -[ 1 0; 0 1 ], ∂𝐞/∂ a_0 ⋯ 4 = -[ x/z; y/z ][ 1 r̃_u r̃_u^2 r̃_u^3 r̃_u^4 ]. Subsequently, as proposed in the paper <cit.>, p is automatically estimated by solving the following linear equation to minimize the reprojection error, [ 𝐩_1/r_1; 𝐩_n/r_2; ⋮; 𝐩_n/r_n ][ 1; θ; ⋮; θ^p ]^T [ k_0; k_1; ⋮; k_p ] = [ 𝐮_1-𝐜; 𝐮_2-𝐜; ⋮; 𝐮_n-𝐜 ]. §.§ Radial-Tangential Distortion Model The intrinsic parameters and distortion coefficients for the Radial-Tangential (RT) distortion model are as follows: 𝐢=(f_x,f_y,c_x,c_y,k_1,k_2,k_3,p_1, p_2). The projection function for the RT model is: π(𝐱,𝐢) = [ f_x x”; f_y y” ] + [ c_x; c_y ], where r^2 =x'^2+y'^2, r' = 1+k_1r^2+k_2r^4+k_3r^6, [ x”; y” ] = [ r'x'+2p_1x'y'+p_2(r^2+2x'^2); r'y'+2p_2x'y'+p_1(r^2+2y'^2) ], [ x'; y' ] = [ x/z; y/z ]. The unprojection function π^-1(𝐮, 𝐢) for the RT model recovers x' and y' to satisfy given x” and y” . This restoration process is non-linear and can be computed using methods such as the Newton-Raphson. The Jacobian for x' and y' that satisfy Equation (<ref>) is as follows: 𝐉_· 1 = [ r' + 2x'(k_1+2r^2k_2+3r^4k_3)+2p_1y'+6p_2x'; 2x'y'(k_1+2r^2k_2+3r^4k_3)+2p_1x'+2p_2y' ], 𝐉_· 2 = [ 2x'y'(k_1+2r^2k_2+3r^4k_3)+2p_2y'+2p_1x'; r' + 2y'(k_1+2r^2k_2+3r^4k_3)+2p_2x'+6p_1y' ]. For optimization, parameter initialization is conducted as follows: f_x, f_y, c_x and c_y are inherited from the input model, while the remaining distortion coefficients are initialized. To simplify the model, p_1 and p_2 are initialized to zero, and the focus is on χ=[k_1,k_2,k_3]^T. 𝐀 and 𝐛 are derived from the projection function as follows: 𝐀 = 1_ 2n𝐫 ^T, 𝐛 = [ (𝐮_1 - 𝐜) ⊙ (𝐟⊙𝐩'_1)^-1 - 1_2; ⋯; (𝐮_n - 𝐜) ⊙ (𝐟⊙𝐩'_n)^-1 - 1_2 ], where 𝐫 = [r^2, r^4, r^6]^T, and 𝐩'_i=[x', y']^T. The error term for optimization, 𝐞(𝐢) can be defined as in Equation (<ref>) with the Jacobian for each parameter defined as: ∂𝐞/∂[f_x,f_y,c_x,c_y] = [ x” 0 1 0; 0 y” 0 1 ], ∂𝐞/∂[k_1, k_2, k_3] = [ x'r^2 x'r^4 x'r^6; y'r^2 y'r^4 y'r^6 ], ∂𝐞/∂[p_1, p_2] = [ 2x'y' r^2+x'^2; r^2+2y'^2 2x'y' ]. §.§ Custom Camera Model The proposed FCA module is modularized to support unprojection, projection, initialization, and optimization for various camera models. For example, the fisheye camera model provided by the WoodScape dataset <cit.> is similar to KB model but with slight differences. The implementation of d(θ) for the WoodScape dataset's model is defined as follows: d(θ) = k_1θ + k_2θ^2 + k_3 θ^3 + k_4 θ^4. Apart from this, the projection, unprojection, initialization, and optimization processes can be conducted in the same manner as with the KB model. § EXPERIMENT This system was implemented in C++ on a system equipped with an AMD Ryzen 7 5800U CPU and 16GB RAM. Optimization was performed using the Ceres Solver[http://ceres-solver.org/]. §.§ Evaluation We first acquired the sample point N for our proposed model. 
To obtain the sample points, we first compared parameter error and execution time. The samples were extracted by uniformly dividing the given image size into grid cells to ensure N samples were evenly distributed. Parameter error was calculated using the L2-norm of 𝐢^* - 𝐢̂. Experiment on Kalibr Dataset The experiments utilized the Kalibr dataset <cit.>, and since the Kalibr calibration toolkit allows direct calibration for the KB, EUCM, DS, and RT models, we used parameters obtained with the Kalibr toolkit as the ground truth 𝐢^* Here, 𝐢̂ refers to the output model converted from the input model, with input and output model pairs linked by a hyphen (i.e., EUCM-DS). Figure <ref> shows the experimental results, indicating that parameter error saturates around N=30 The speed remains within 10 ms up to N=1000 and then increases linearly. We observed more detailed results at N=500. The metrics for the ground truth output model relative to the input-output model include PSNR, SSIM <cit.>, Reprojection Error (RE), and Parameter Error (PE) calculated using the original image and the image recovered using the output model. The recovered image was obtained by unprojecting all pixels of the original image using the input model and then projecting them using the output model. Figure <ref> shows an example of a recovered image. The experimental results are summarized in Table <ref>. The conversion results of our proposed model show that RE is close to zero, and the PSNR and SSIM with the original image are excellent, along with fast estimation within 4 ms and a Parameter Error averaging 2.63, similar to the results obtained with the actual calibration toolkit. The results show that the conversions between the EUCM and KB models, as well as the DS model, are satisfactory. Particularly for the RT model, which is a distortion modeling of the pinhole model, there is a performance degradation in the conversion from fisheye camera models compared to other models. Especially in the conversion to the DS model, there tends to be a significant discrepancy in the estimation of the focal length from the actual value, resulting in a higher final parameter error compared to other conversion models. Experiment on OcamCalib Dataset For experiments with the OCC model, we utilized 190-degree large FOV images from the OCamCalib dataset[https://sites.google.com/site/scarabotix/ocamcalib-omnidirectional-camera-calibration-toolbox-for-matlab]. Ground truth for the OCC was obtained using the OCamCalib calibration toolkit. However, ground truth for other models such as KB, EUCM, and DS could not be obtained as the toolkit does not support these models. Therefore, we acquired the input model as i_in = FCA(i^*_OCC) and used this input model to again obtain î_OCC. In this experiment, the RT model was not compared due to its use of a large angle FoV. The experimental results for OCC are shown in Table <ref>. The results showed that the parameters estimated for various input models yielded an average PSNR of 34.5, SSIM of 0.84, RE of 0.33, and PE of 4.76, which are respectable outcomes. Comparison with the State-of-the-Art Method We compared the accuracy of the fisheye camera model conversion method available in libPeR <cit.> with our proposed method. For a pairwise comparison, parameters estimated up to the UCM model in the libPeR paper were used to estimate values for the OCC model and were compared with those from libPeR. 
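Because libPeR reports the UCM in its (γ, ξ) parametrization while the models above use (f, α), the comparison first requires a reparametrization. A minimal sketch using the relation quoted below, ξ = α/(1-α) and γ = f/(1-α), is given here; the numeric values are the libPeR estimates from this comparison:

```python
def ucm_gamma_xi_to_f_alpha(gamma_x, gamma_y, cx, cy, xi):
    """Convert the (gamma, xi) UCM parametrization to the (f, alpha) form:
    xi = alpha / (1 - alpha)  and  gamma = f / (1 - alpha)."""
    alpha = xi / (1.0 + xi)
    fx = gamma_x / (1.0 + xi)
    fy = gamma_y / (1.0 + xi)
    return fx, fy, cx, cy, alpha

# libPeR UCM estimate: gamma_x=259.889, gamma_y=259.335, cx=514.168, cy=382.797, xi=0.975
# -> fx ~ 131.59, fy ~ 131.31, alpha ~ 0.4937
print(ucm_gamma_xi_to_f_alpha(259.889, 259.335, 514.168, 382.797, 0.975))
```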
The ground truth for the given OCC model is a^*_0 = 131.0074, a^*_1=0, a^*_2=-0.0018, c^*_x = 516.4379, c^*_y = 383.014 , and the order of polynomial is set to 2, obtainable from the OCamCalib calibration tool. For details, refer to the relevant paper <cit.>. The values obtained for the UCM model in libPeR are γ̂_x=259.889, γ̂_y=259.335, ĉ_x=514.168, ĉ_y=382.797, ξ̂=0.975; through model reformulation ξ=α/1-α, γ_x = f_x/1-α, γ_y = f_y/1-α, please refer to <cit.>, Equation (7)), we can derive f̃_x=131.5893, f̃_y=131.3089, c̃_x=514.168, c̃_y=382.797, α̃ = 0.4937. Using these UCM parameter values, libPeR obtained â_0 = 131.46, â_1=0, â_2=-0.0018, while our model yielded ã_0 = 130.809, ã_1=0.01238, ã_2=0.00186. The comparison of the distortion coefficients with the ground truth is shown in Table <ref>. The experimental results indicate that our proposed method achieves higher accuracy in terms of RMSE. §.§ Application We validated the performance of our proposed model for converting fisheye camera models in actual applications. Fisheye ORB-SLAM <cit.> has adopted the EUCM model for fisheye cameras. Utilizing the KB model provided by the TUM Visual Inertial(VI) Dataset <cit.>, we performed fisheye ORB-SLAM using both the KB-EUCM model parameters derived from converting the KB model and the directly acquired EUCM model parameters from the calib-cam1 sequence of the VI Dataset. We calculated the Absolute Pose Error (APE) and Relative Pose Error (RPE) for the acquired keyFrame trajectories using the evo package <cit.>. For the experiments, sequences corridor4 and room2 from the VI Dataset were used, as these relatively small spaces with potential for loop closing are less likely to exhibit significant trajectory divergence due to the randomness introduced by RANSAC in fisheye ORB-SLAM. The experimental results, as shown in Table <ref>, indicate that the difference between the directly acquired EUCM model and our KB-EUCM model is approximately 3 cm in average APE and about 0.1 cm in RPE. These differences are within the error margins typically expected in the system, confirming that our model’s conversion of fisheye camera models is performed correctly. Figure <ref> shows the results of fisheye ORB SLAM performed with directly acquired EUCM parameters and with EUCM parameters obtained through KB-EUCM conversion. The result images demonstrate that normal SLAM operations are successfully carried out with the converted EUCM parameters, confirming that the trajectory and structure are well generated. §.§ Limitation Our proposed FCA module acquires parameters for the output model from the input model. This process utilizes environmental information recovered from the input model. consequently, the quality of the calibrated parameters of the input model naturally affects the results of the output model. Therefore, the quality of the output model improves as the accuracy of the input model's calibration results increases. § ACKNOWLEDGEMENT This research was supported by StradVision. We appreciate all the supports of StradVision members who provided insight and expertise. The contents are solely the responsibility of the authors. § CONCLUSION We have proposed the Fisheye-Calib-Adapter, a tool designed to facilitate the easy conversion of fisheye camera models. Our system can quickly and accurately estimate parameters for the output model based solely on the intrinsic parameters of the camera model to be converted, without the need for any image set. 
Our method supports widely used models such as UCM, EUCM, Double Sphere, Kannala-Brandt, OCamCalib, and Radial-Tangential, and provides an interface for other custom models. The converted model parameters obtained using our system can be directly applied to applications like SLAM. We believe that our module will enable researchers to bridge the gap in fisheye camera models and be used in a variety of studies, as it allows the acquisition of parameters for the desired model without the need for recalibration.
http://arxiv.org/abs/2407.13328v1
20240718092902
Unsupervised Domain Adaptive Lane Detection via Contextual Contrast and Aggregation
[ "Kunyang Zhou", "Yunjian Feng", "Jun Li" ]
cs.CV
[ "cs.CV" ]
Techical Report Shell et al.: Bare Demo of IEEEtran.cls for IEEE Journals Unsupervised Domain Adaptive Lane Detection via Contextual Contrast and Aggregation Kunyang Zhou, Yunjian Feng, and Jun Li, Senior Member, IEEE This work was supported in part by the National Key Research and Development Program of China under Grant 2021YFF0500904, and Shenzhen Fundamental Research Program under Grant JCYJ20190813152401690, and Qingdao New Qianwan Container Terminal (QQCTN). (Corresponding author: Jun Li.) K. Zhou, Y. Feng, and J. Li are with the Ministry of Education Key Laboratory of Measurement and Control of CSE, Southeast University, Nanjing 210096, China (e-mail: kunyangzhou@seu.edu.cn; fengyunjian@seu.edu.cn; j.li@seu.edu.cn). July 22, 2024 ====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT This paper focuses on two crucial issues in domain-adaptive lane detection, i.e., how to effectively learn discriminative features and transfer knowledge across domains. Existing lane detection methods usually exploit a pixel-wise cross-entropy loss to train detection models. However, the loss ignores the difference in feature representation among lanes, which leads to inefficient feature learning. On the other hand, cross-domain context dependency crucial for transferring knowledge across domains remains unexplored in existing lane detection methods. This paper proposes a method of Domain-Adaptive lane detection via Contextual Contrast and Aggregation (DACCA), consisting of two key components, i.e., cross-domain contrastive loss and domain-level feature aggregation, to realize domain-adaptive lane detection. The former can effectively differentiate feature representations among categories by taking domain-level features as positive samples. The latter fuses the domain-level and pixel-level features to strengthen cross-domain context dependency. Extensive experiments show that DACCA significantly improves the detection model’s performance and outperforms existing unsupervised domain adaptive lane detection methods on six datasets, especially achieving the best performance when transferring from CULane to Tusimple (92.10% accuracy), Tusimple to CULane (41.9% F1 score), OpenLane to CULane (43.0% F1 score), and CULane to OpenLane (27.6% F1 score). Unsupervised domain adaptation, Lane detection, Contextual contrast, Contextual aggregation. § INTRODUCTION Lane detection is crucial in autonomous driving and advanced driver assistance systems. Benefitting from developing convolutional neural networks, deep learning-based lane detection methods <cit.> demonstrate greater robustness and higher accuracy than traditional methods <cit.>. To train a robust lane detection model, a high-quality dataset is necessary. However, acquiring high-quality labeled data is laborious and costly. Simulation is a low-cost way to obtain training pictures. Nevertheless, the detection performance may be degraded after transitioning from the virtual (source domain) to the real (target domain). 
Unsupervised domain adaptation (UDA) has been proposed to solve this problem <cit.>. Recently, UDA has been successfully applied in the image segmentation task <cit.>, significantly improving the segmentation performance. However, applying existing unsupervised domain-adaptive segmentation methods to lane detection does not yield satisfactory results, even inferior to those of supervised training, as revealed in <cit.>. We consider the cross-entropy loss adopted in these methods only focuses on pulling similar features closer but ignores different features across categories, making these methods inefficient in learning discriminative features of different categories <cit.>. Contrastive learning <cit.> is expected to solve this problem by appropriately selecting positive and negative samples. However, segmentation models may generate false pseudo-labels on the input image for the unlabeled target domain, causing false assignments of positive samples. On the other hand, cross-domain context dependency is essential for adaptive learning of cross-domain context information <cit.>, which is overlooked by many existing domain adaptive lane detection methods, e.g. <cit.> and <cit.>. In MLDA <cit.>, an Adaptive Inter-domain Embedding Module (AIEM) is proposed to aggregate contextual information, but it is limited to performing on a single image and disregards useful contextual information from other images. How to effectively leverage the potential of cross-domain context dependency in domain-adaptive lane detection remains a challenging topic. This paper presents a novel Domain-Adaptive lane detection via Contextual Contrast and Aggregation (DACCA) to address the aforementioned issues. As shown in Fig.  <ref>, two positive sample memory modules (PSMMs) are adopted to save domain-level features for each lane in both source and target domains. We select two corresponding domain-level features as positive samples from both source and target PSMMs for each lane pixel in an input image. Subsequently, the selected domain-level features are aggregated with the original pixel feature to enrich the cross-domain contextual information. In addition, we pair the aggregated features with the source and target positive samples to avoid the false assignment of positive samples in the cross-domain contrastive loss. The main contributions of this paper are as follows. (1) We propose a novel cross-domain contrastive loss to learn discriminative features and a novel sampling strategy to fully utilize the potential of contrastive loss without modifying an existing contrastive loss. (2) A novel domain-level feature aggregation module combining pixel-level and domain-level features is presented to enhance cross-domain context dependency, Aggregating domain-level features, instead of feature aggregation of mini-batches or individual images, is a fresh perspective. (3) Extensive experiments show that our method can significantly improve the baseline performance on six public datasets. Remarkably, compared with existing domain adaptive lane detection methods, our approach achieves the best results when transferring from CULane to Tusimple,Tusimple to CULane, OpenLane to CULane, and CUlane to OpenLane. The rest of the paper is organized as follows. Section II reviews the related work and Section III details DACCA. Extensive experiments are conducted in Section IV. Section V concludes this paper. 
§ RELATED WORK §.§ Lane Detection Traditional lane detection mainly depends on image processing operators, e.g., Hough transforms <cit.>. Although they can quickly achieve high detection accuracy in specific scenarios, their generalization ability is too poor to apply to complex scenarios. Deep learning-based lane detection has received increasing attention, including segmentation-based methods <cit.>, anchor-based methods <cit.>, and parameter-based methods <cit.>. SCNN <cit.> is one of the typical segmentation-based methods using a message-passing module to enhance visual evidence. Unlike pixel-wise prediction in segmentation-based methods, anchor-based methods regress accurate lanes by refining predefined lane anchors. For example, using a lightweight backbone, UFLD <cit.> pioneers row anchors in real-time lane detection. Parameter-based methods treat lane detection as the parameter modeling problem and regress the parameters of the lane. PolyLaneNet <cit.> models a lane as a polynomial function and regresses the parameters of the polynomial. Although parameter-based methods have a faster inference speed than the other two methods, they struggle to achieve a higher performance. In this paper, we consider segmentation-based domain-adaptive lane detection. §.§ Unsupervised Domain Adaptation Domain adaptation has been widely studied to address the domain discrepancy in feature distribution, usually, implemented through adversarial training and self-training. Adversarial training <cit.> eliminates the differences in feature distribution between the source and target domains by adversarial approaches. Different from adversarial training, self-training <cit.> trains a model in the target domain using generated pseudo labels. On the other hand, the contrastive loss is introduced as an auxiliary loss to improve the model's robustness. CDCL <cit.> takes labels and pseudo-labels as positive samples in the source and target domain, respectively. However, the model may generate false pseudo labels in the unlabeled target domain, leading to false positive sample assignments. There exists some works <cit.> taking positive samples from the prototypes to achieve accurate positive sample assignments. CONFETI <cit.> adopts the pixel-to-prototype contrast to enhance the feature-level alignment. CONFETI only uses a prototype to save source and target domain features, but we think this way is inappropriate because the feature distribution between the two domains is different. In our work, we use two PSMMs to save features of two domains separately and take the domain-level features as positive samples. In addition, we also optimize the sample selection policy in the contrastive loss but most works ignore it. §.§ Unsupervised Domain Adaptive Lane Detection Due to the lack of a domain adaptive lane detection dataset, early studies <cit.> focus on synthetic-to-real or simulation-to-real domain adaptation. Their generalizability in real-world scenarios is not satisfactory with low-quality synthetic and simulation images.  <cit.> establishes a specific dataset for domain adaptive lane detection and directly apply a general domain adaption segmentation method to this dataset. However, it does not yield good results, since conventional domain adaptive segmentation methods generally assume the presence of salient foreground objects in the image, occupying a significant proportion of the pixels. 
On the other hand, lane lines, which occupy a relatively small proportion of the image, do not exhibit such characteristics. To solve this problem, MLDA <cit.> introduces an AIEM to enhance the feature representation of lane pixel by aggregating contextual information in a single image. Unfortunately, in this way, useful contextual information from other images may be ignored. Instead, we propose to aggregate the domain-level features with pixel-level features. §.§ Context Aggregation Performing contextual information aggregation for pixel-level features can effectively improve segmentation performance in semantic segmentation. In supervised methods, common context information aggregation modules, e.g., ASPP <cit.>, PSPNet <cit.>, OCRNet <cit.>, and MCIBI <cit.>, only aggregate features within a single domain instead of both target and source domains. In UDA, some methods try to design modules to aggregate contextual features by attention mechanisms, such as cross-domain self-attention <cit.>, and context-aware mixup <cit.>. However, all existing cross-domain feature aggregation methods only fuse a mini-batch of contextual features. In contrast to previous works, our method tries to simultaneously fuse features from the whole target and source domains to enhance the cross-domain context dependency. § METHOD As illustrated in Fig.  <ref>, the network is self-trained in our DACCA, where the student model is trained in both the labeled source domain and the unlabeled target domain with pseudo-labels generated by the teacher model. DACCA has two key components, i.e., cross-domain contrastive loss and domain-level feature aggregation. §.§ Self-Training In UDA, a segmentation-based lane detection model s_θ is trained using source images X^s={x_S^k}_k=1^N_s with labels Y^s={y_S^k}_k=1^N_s, to achieve a good performance on the unlabeled target images X^t={x_T^k}_k=1^N_t, where N_s and N_t are the number of source and target images, respectively. y_S^k is a one-hot label. Pixel-wise cross-entropy loss L_S^k is adopted to train s_θ in the source domain. L_S^k=-∑_i=1^H∑_j=1^W∑_c=1^C+1(y_S^k)_ (i,j,c )× l o g(s_θ(x_S^k)_ (i,j,c )), where C is the number of lanes and class C+1 denotes the background category. H and W are the height and width of x_S^k. However, when transferred to the target domain, s_θ trained in the source domain suffers from performance degradation due to the domain shift. In this paper, we adopt a self-training method <cit.> to address this issue. As shown in Fig.  <ref> (a), in the self-training process, we train two models, i.e., student model s_θ and teacher model t_θ to better transfer the knowledge from the source domain to the target domain. Specifically, t_θ generates the one-hot pseudo-label y_T^k on the unlabeled target image x_T^k. (y_T^k)_(i,j,c)=[c=c' ∈ c∗argmax(t_θ(x_T^k)_(i,j,c'))],i∈[0,H],j∈[0,W], where [·] denotes the Iverson bracket and c* represents the set of all categories. To ensure the quality of pseudo-labels, we filter low-quality pseudo-labels by setting the confidence threshold α_c, i.e., (y_T^k)_(i,j,c)={[ (y_T^k)_(i,j,c), if (t_θ(x_T^k)_(i,j,c))≥α_c; 0, otherwise; ].. s_θ is trained on both labeled source images and unlabeled target images with pseudo-labels. The same pixel-wise cross-entropy loss L_T^k is used as the loss function in the target domain. L_T^k=-∑_i=1^H∑_j=1^W∑_c=1^C+1(y_T^k)_(i,j,c )× l o g(s_θ(x_T^k)_ (i,j,c )). 
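A compact PyTorch sketch of this self-training recipe (teacher pseudo-labels filtered by the confidence threshold α_c, plus the two pixel-wise cross-entropy terms) is given below. It is illustrative only: the tensor shapes, the ignore-index convention, and the function names are our own choices rather than the released code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def make_pseudo_labels(teacher_logits, conf_thresh=0.3):
    """Argmax pseudo-labels from the teacher; pixels below the confidence
    threshold are marked as ignored so they contribute no gradient."""
    prob = torch.softmax(teacher_logits, dim=1)     # (B, C+1, H, W)
    conf, label = prob.max(dim=1)                   # both (B, H, W)
    label[conf < conf_thresh] = 255                 # 255 = ignored pixel
    return label

def self_training_loss(student_logits_src, labels_src,
                       student_logits_tgt, teacher_logits_tgt, conf_thresh=0.3):
    """Cross-entropy on source labels plus cross-entropy on target pseudo-labels."""
    loss_src = F.cross_entropy(student_logits_src, labels_src, ignore_index=255)
    pseudo = make_pseudo_labels(teacher_logits_tgt, conf_thresh)
    loss_tgt = F.cross_entropy(student_logits_tgt, pseudo, ignore_index=255)
    return loss_src + loss_tgt
```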
During training, no gradients are backpropagated into t_θ and the weight of t_θ is updated by s_θ through Exponentially Moving Average (EMA) at every iteration m, denoted by, t_θ^m+1=β× t_θ^m+(1-β)× s_θ^m , where the scale factor β is set to 0.9 empirically. After the training, we use the student model s_θ for inference and produce the final lane detection results. §.§ Cross-domain Contrastive Loss Since the cross-entropy loss is ineffective in learning discriminative features of different lanes, we introduce the category-wise contrastive loss <cit.> to solve this problem. The formulation of category-wise contrastive loss L_C L is written as, L_C L=-1/C × M∑_c=1^C∑_p=1^Mlog[e^<V_c p, V_c^+>/ τ/e^<V_c p, V_c^+>/ τ+∑_q=1^N e^<V_c p, V_c p q^->/ τ], where M and N represent the numbers of anchors and negative samples, respectively. V_cp is the feature representation of the p-th anchors of class c, used as a candidate for comparison. V_c^+ is the feature representation of the positive sample of class c. V_cpq^- denotes the feature representation of the q-th negative samples of the p-th anchors of class c. τ is the temperature hyper-parameter and ⟨·,·⟩ is the cosine similarity between features from two different samples. In the target domain, existing methods either focus on improving the form of contrastive loss <cit.>, introducing extra hyper-parameters, or only select V_c^+ from the current input images <cit.>. However, the false pseudo-labels generated by t_θ cause the incorrect positive samples assignment, making the contrastive loss ineffective in learning discriminate features of different categories. We develop a sample selection policy without modifying the existing contrastive loss to overcome the difficulty. Anchor Selection. We choose anchors for each lane from a mini-batch of samples. The anchors of the c-th lane, A_c can be selected according to, A_c={(i,j)|GT_(i,j)=c,s_θ(x^in)_(i,j,c)≥μ_c,i∈[0,H],j∈[0,W]}, V_c={V_(i,j)|(i,j)∈ A_c}, where GT denotes the labels in the source domain or pseudo-labels in the target domain, x^in represents an input image, and μ_c is the threshold. We set pixels whose GT are category c and whose predicted confidence are greater than μ_c as anchors to reduce the effect of hard anchors. V ∈ R^H × W × D is the pixel-wise representation and D is the feature dimension. As illustrated in Fig.  <ref> (b), we achieve V by exploiting an extra representation head U. U shares the input with the prediction head and is only used in the training process. V_c is the set of feature representation of anchors and V_cp∈ R^D is randomly selected from V_c. Positive Sample Selection. To ensure the appropriate assignment of positive samples, we establish a positive sample memory module (PSMM) for each lane in both the source and target domains to save its domain-level feature, denoted as B_so∈ R^C× D and B_ta∈ R^C× D. We initialize and update the domain-level features saved in PSMM, following MCIBI <cit.>. For the c-th lane, we take its domain-level feature as the feature representation of the positive sample. V_c^+=B_o(c), where o is the source domain (so) or the target domain (ta). Feature Initialization. The process of initializing and updating features is the same for source and target PSMM. We take the target PSMM as an example to describe this process. MCIBI <cit.> selects the feature representation of one pixel for each lane to initialize the feature in PSMM. However, this way may bring out false feature initialization due to false pseudo labels. 
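To make the loss and the sample-selection policy concrete, the following PyTorch sketch implements the per-class contrastive term and the anchor selection rule above, with the positive taken from a PSMM entry. The shapes, names, and random subsampling are our own simplifications and not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def class_contrastive_loss(anchors, positive, negatives, tau=0.07):
    """InfoNCE-style term of L_CL for one lane class.
    anchors   (M, D): pixel embeddings selected as anchors V_cp
    positive  (D,)  : domain-level feature of this class from a PSMM (V_c^+)
    negatives (N, D): embeddings of negative pixels V_cpq^-"""
    a = F.normalize(anchors, dim=1)                 # cosine similarity via unit vectors
    p = F.normalize(positive, dim=0)
    n = F.normalize(negatives, dim=1)
    pos = (a @ p) / tau                             # (M,)
    neg = (a @ n.t()) / tau                         # (M, N)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)
    return -F.log_softmax(logits, dim=1)[:, 0].mean()

def select_anchors(embeddings, labels, probs, cls, mu=0.2, num=256):
    """Anchors of class cls: pixels (pseudo-)labelled cls with confidence >= mu;
    a random subset of at most num anchors is kept."""
    mask = (labels == cls) & (probs[:, cls] >= mu)
    idx = mask.nonzero(as_tuple=True)[0]
    if idx.numel() > num:
        idx = idx[torch.randperm(idx.numel())[:num]]
    return embeddings[idx]
```

The initialization and update of the PSMM entries that supply these positives are described next.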
For the c-th lane, we initialize its feature in PSMM using the center of the features of all anchors, expressed by, B_ta(c) = 1/|V_c|∑_n_c ∈ V_c n_c, where |V_c| denotes the number of anchors and n_c is the feature representation of anchors in V_c. Feature Update. The features in target PSMM are updated through the EMA after each training iteration m, (B_ta(c))_m = t_m-1× (B_ta(c))_m-1 + (1-t_m-1) ×∂((V_c)_m-1), where t is the scale factor and ∂ is used to transform V_c to obtain the feature with the same size as (B_ta(c))_m-1. Following MCIBI, we adopt the polynomial annealing policy to schedule t, t_m = (1-m/T)^p× (t_0 - t_0/100) + t_0/100, m ∈ [0,T], where T is the total number of training iterations. We set both p and t_0 as 0.9 empirically. To implement ∂, we first compute the cosine similarity vector S_c between the feature representation of anchors in (V_c)_m-1 and (B_ta[c])_m-1, as below, S_c(i) = (V_c)_m-1(i)× (B_ta(c))_m-1/ (V_c)_m-1(i)_2 × (B_ta(c))_m-1_2, i ∈ [1,|V_c|], where we use (i) to index the element in S_c or feature representation in (V_c)_m-1. Then, we obtain the output of ∂((V_c)_m-1) by, ∂((V_c)_m-1) = ∑_i=1^|V_c|1-S_c(i)/∑_j=1^|V_c|(1-S_c(j))× (V_c)_m-1(i). For the source PSMM, features in V_c come from the source domain. Negative Sample Selection. We directly use pixels of a lane not labeled c as the negative samples in the source domain. On the other hand, in the target domain, pixels with the lowest predicted conference for category c are selected as negative samples. n e g_-loc_c={ (i, j) |c' ∈ c∗argmin(s_θ(x_T^k)_(i, j, c'))=c, i ∈[0, W], j ∈[0, H]}, n e g_c={V_(i,j)| (i,j) ∈ n e g_-loc_c}, where neg_-loc_c and n e g_c denote the location and the set of feature representation of negative samples of class c, respectively. V_cpq^- ∈ R^D is also randomly selected from n e g_c. To compare intra-domain and inter-domain features at the same time, we propose a Cross-domain Contrastive Loss (CCL), consisting of an intra-domain contrastive learning loss L_inter and an inter-domain contrastive learning loss L_intra. CCL=L_inter+L_intra, where L_inter and L_intra are the same as Eq. <ref>. CCL is applied in both source and target domains. For the source cross-domain contrastive loss (SCCL), the positive samples in L_inter are the domain-level features saved in B_ta, and the positive samples in L_intra are the domain-level features saved in B_so. The positive samples in the target cross-domain contrastive loss (TCCL) are opposite to SCCL. The overall loss of DACCA is, Loss = 1/N_s∑_k=1^N_s(λ_c× SCCL^k + L_S^k) + 1/N_t∑_k=1^N_t(λ_c× TCCL^k + L_T^k), where λ_c is the scale factor, which is set to 0.1 empirically. §.§ Domain-level Feature Aggregation Cross-domain context dependency is essential to transfer knowledge across domains. Cross-domain Contextual Feature Aggregation (CCFA) is an effective way to achieve cross-domain context dependency. Existing CCFA methods <cit.> only aggregate a mini-batch of features. We argue that aggregating features from a whole domain is more beneficial. As shown in Fig.  <ref> (b), Domain-level Feature Aggregation (DFA) aims to fuse the domain-level features into the pixel-level representation. DFA contains two key components, i.e., source and target domain-level feature assignment. The process is the same for both. We take the target domain-level feature assignment as an example to depict the process. Pixel Feature Selection. To select the corresponding domain-level feature for each lane pixel, we propose the pixel feature selection. 
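For reference, the memory update just described (the dissimilarity-weighted aggregation followed by the annealed EMA) can be sketched as follows; the code is illustrative and the names are ours.

```python
import torch
import torch.nn.functional as F

def annealed_momentum(m, T, t0=0.9, p=0.9):
    """Polynomially annealed EMA factor t_m."""
    return (1 - m / T) ** p * (t0 - t0 / 100) + t0 / 100

@torch.no_grad()
def update_psmm(memory, anchors, m, T):
    """Update one PSMM entry.
    memory  (D,)  : current domain-level feature B(c)
    anchors (M, D): anchor features V_c of this class in the current iteration"""
    sim = F.cosine_similarity(anchors, memory.unsqueeze(0), dim=1)  # S_c, (M,)
    w = (1 - sim) / (1 - sim).sum()          # dissimilar anchors get larger weights
    agg = (w.unsqueeze(1) * anchors).sum(0)  # the aggregation term
    t = annealed_momentum(m, T)
    return t * memory + (1 - t) * agg
```

With the memories maintained in this way, the pixel feature selection proceeds as follows.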
We first obtain the predicted category at location (i,j) by, P=c' ∈ c∗argmax(Softmax(Conv(E))_(i,j,c' )), i ∈[0, W], j ∈[0, H], where E ∈ R^H × W × D represents the feature map, containing the pixel-level feature representation. 1×1 convolution (termed as Conv) is adopted to change the channels of E to C+1. P ∈ R^H × W saves the predicted category at each location of E. Then, we build a feature map Z whose pixel values are zero and whose size and dimension are the same as E. We assign the pixel-wise feature to Z using the domain-level feature. Z_(i,j)=B_ta(P_(i,j)), P_(i,j)≠ C+1, i ∈[0, W], j ∈[0, H]. After the assignment, Z is a domain-level feature map. Here, the lane pixels on E predicted as the background in training are called unreliable background pixels (UBP). For example, as illustrated in Fig.  <ref>, UBP is mainly located at the edge of the lane. However, the features of UBP can not be augmented since domain-level features are only aggregated for the foreground pixels. To refine the features of UBP, we also perform further feature aggregation on UBP. Specifically, the predicted confidence of the UBP is usually low, hence we distinguish UBP from reliable background pixels by setting confidence threshold ε. The UBP is defined as, UBP={(i,j)|pred_(i,j)<ε,P_(i,j)=C+1, i ∈[0, W], j ∈[0, H]}, where pred_(i,j) is the confidence of the predicted category at location (i,j). pred_(i,j) is obtained by: pred_(i,j)=c' ∈ c∗max(Softmax(Conv(E))_(i,j,c' )). We choose the category with the lowest Euclidean distance as the pseudo category of UBP and use domain-level feature of pseudo category to instantiate UBP in Z. P_(i,j)=c' ∈ c∗argmin(dis(E^UBP_(i,j),B_ta(c'))), (i,j) ∈ UBP, Z_(i,j)=B_ta(P_(i,j)), (i,j) ∈ UBP, where E^UBP_(i,j) is the feature representation of UBP at location (i,j) in E, and dis is used to calculate the Euclidean distance between the feature representation of UBP and the domain-level feature. Thereafter, we adopt a linear layer to extract features along the channel dimension in Z to obtain the output of target domain-level feature assignment F_T. In the same process, we replace the target PSMM with the source PSMM to obtain the feature F_S. F_S, F_T, and E are concatenated along the channel dimension and fused by a 1×1 convolution to enrich the cross-domain context information of E. F_aug=Conv(φ(E,F_S,F_T)), where F_aug∈ R^H× W× D is the aggregated features and φ is the concatenate operation. § EXPERIMENTS §.§ Experimental Setting Datasets. We conduct extensive experiments to examine DACCA on six datasets for lane detection tasks, i.e., TuLane <cit.>, MoLane <cit.>, MuLane <cit.>, CULane <cit.>, Tusimple <cit.>, and OpenLane <cit.>. The source domain of the TuLane dataset uses 24,000 labeled simulated images as the training set, and the target domain images derives from the Tusimple dataset. The source domain of the MoLane dataset uses 80,000 labeled simulated images as the training set, and the target domain training set is adopted from the real scenes and contains 43,843 unlabeled images. The MuLane dataset mixes the TuLane and MuLane datasets are uniformly blended. The source domain of MuLane dataset uses 48000 labeled simulated images as the training set, and the target domain combines the Tusimple and MoLane target domains. Following <cit.>, we conduct the experiments on "CULane to Tusimple" and "Tusumple to CULane". “Tusimple to CULane” means that the source domain is Tusimple and the target domain is CULane. 
To further validate the effectiveness of our method on the domain adaptation cross difficult scenes, we carry out the experiments on "CULane to OpenLane" and "OpenLane to CULane". CULane dataset <cit.> is a large scale lane detection dataset, consisting of 88880, 9675, and 34680 frames for training set, validation set, and testing set. Tusimple <cit.> dataset is small scale dataset for lane detection. It has 3626 training images and 2782 testing images. OpenLane <cit.> is a comprehensive benchmark for 2D and 3D lane detection, which is composed of 200K frames with 14 kinds of categories, complex lane structures, and five kinds of weather. Evaluation Metrics. For TuLane, MuLane. MoLane, and Tusimple datasets. We use three official indicators to evaluate the model performance for three datasets: Accuracy, false positives (FP), and false negatives (FN). Accuracy is defined by Accuracy=p_c/p_y, where p_c denotes the number of correct predicted lane points and p_y is the number of ground truth lane points. A lane point is regarded as correct if its distance is smaller than the given threshold t_pc=20/cos (a_yl ), where a_yl represents the angle of the corresponding ground truth lane. We measure the rate of false positives with FP=l_f/l_p and the rate of false positives with FN=l_m/l_y, where l_f is the number of mispredicted lanes, l_p is the number of predicted lanes, l_m is the number of missing lanes and l_y is the number of ground truth lanes. Following <cit.>, we consider lanes as mispredicted if the Accuracy < 85%. For CULane and OpenLane, we adopt the F1 score to measure the performance, F_1=2× P r e c i s i o n× R e c a l l/Precision+Recall, where Precision=TP/TP+FP and Recall=TP/TP+FN, where TP denote the true positives. Implementation Details. We update the learning rate by the Poly policy with power factor 1-(iter/total_iter)^0.9. We select the AdamW optimizer with the initial learning rate 0.0001. We adopt the data augmentation of random rotation and flip for TuLane, and random horizontal flips and random affine transforms (translation, rotation, and scaling) for MuLane and MoLane. The training epochs on the TuLane, MuLane, MoLane, "CULane to Tusimple", "Tusimple to CULane", "CULane to OpenLane", and "OpenLane to CULane" are 30, 20, 20, 20, 12, 50, and 50 respectively. We set the threshold for filtering false pseudo labels α_c to 0.3 during domain adaptation. The threshold for selecting anchors μ_c and UBP ε are 0.2 and 0.7, respectively. The number of anchors M and negative samples N in the cross-domain contrastive loss are 256 and 50, respectively. Temperature hyper-parameter τ is set to 0.07 empirically. The feature dimension D is 128. The optimizer and update policy of the learning rate are the same as those in pretraining. All images are resized to 384×800. All experiments are conducted on a single Tesla V100 GPU with 32 GB memory. DACCA is implemented based on PPLanedet <cit.>. §.§ Ablation Study We ablate the key components of DACCA and use SCNN with ResNet50 <cit.> as the detection model. If not specified, all ablation studies are conducted on TuLane. Effectiveness of cross-domain contrastive learning (CCL). In Table <ref>, when only source domain data are used in supervised learning, SCCL prompts the accuracy from 77.42% to 79.63%. It also indicates that our SCCL works for supervised training. On the other hand, the accuracy increases by 1.01%, i.e., from 80.76% to 81.77%, if TCCL is adopted. T-SNE visualization in Fig.  
<ref> (c) shows that the model with CCL can learn more discriminative features. Effectiveness of domain-level feature aggregation (DFA). In Table <ref>, DFA can improve the detection accuracy from 81.77% to 82.43%. As for feature aggregation of UBP, the accuracy is further increased by 1.56% (83.99% vs. 82.43%). Also, we can observe a significant adaptation of the source and target domain features in Fig.  <ref> (c), which validates the effectiveness of domain-level feature aggregation. Generalizability of different methods. As shown in Table <ref>, our method can be integrated into various segmentation-based lane detection methods. In SCNN, using our method can increase the accuracy by 6.57% and decrease FP and FN by 16.02% and 14.09%, respectively. Also, in the lightweight model ERFNet, the accuracy rises by 7.17%, and FP and FN drop by 6.8% and 19.39%. Finally, in the Transformer-based method RTFormer, our method significantly improves the detection performance, in terms of accuracy, FP, and FN. Comparison with existing contrastive loss variants. In Fig.  <ref> (a), CCL is evaluated against other contrastive loss variants in UDA. In turn, we replace CCL in DACCA with CDCL, ProCA <cit.>, CONFETI <cit.>, and SePiCo <cit.>. Compared with ProCA and CONFETI, CCL increases the accuracy by 2.58% (81.77% vs. 79.19%) and 1.9% (81.77% vs. 79.87%), respectively. The reason may be that both ProCA and CONFETI ignore the differences in feature distribution between the source domain and target domain and only use a prototype to represent the features of the two domains. Moreover, CCL overwhelms SePiCo regarding accuracy. It attributes to SePiCo only taking domain-level features from the source domain as the positive samples but ignoring the samples from the target domain. Comparison with existing cross-domain context aggregation. We substitute the DFA with Cross-domain <cit.> and Self-attention module (SAM) <cit.>—the latter aggregate features in a mini-batch. The superiority of the DFA is shown in Fig.  <ref> (b). DFA performs better than Cross-domain and SAM, e.g., prompts the accuracy by 0.46% (83.51% vs. 83.05%) and 0.72% (83.51% vs. 82.79%), respectively. From the T-SNE visualization in Fig.  <ref>, we can see that DFA aligns the features of two domains better. The results demonstrate that aggregating features from the whole domain is more effective than from a mini-batch. The number of anchors M. We study the influence of the number of anchors M and the results are shown in Fig.  <ref> (c). It can be observed that the model achieves the best performance when M is 256. Besides, It causes extra computational burden when M increases. Considering accuracy and computational burden, we set M as 256. The number of negative samples N. Fig.  <ref> (d) shows the influence of the number of negative samples. When N is 50, model achieves the best performance. We can also see that as N increases, the accuracy does not always improve, indicating that excessive negative samples can degrade performance. The threshold for selecting anchors μ_c. We study the threshold for selecting anchors μ_c. As shown in Table <ref>, setting the anchor selection threshold can avoid hard anchors compared with anchor selection without the threshold (83.99% vs. 80.97%). However, when the threshold is too high, available anchors shrink, leading to performance degradation (83.99% vs. 80.80%). Hence, we set μ_c to 0.2. The threshold for selecting UBP. 
We can see that without the feature refinement of UBP, accuracy is only 83.32% in Table <ref>. When ε is 0.7, model achieves the best performance. It has little effect on model performance when ε is too low. This is attributed to the small number of UBP. When ε is too high, many background pixels are wrongly regarded as UBP, causing the negative effect. The threshold for filtering false pseudo labels. We study the threshold for filtering false pseudo labels α_c and results are shown in Table <ref>. When α_c is low, false pseudo labels have a greater impact on performance. If α_c is too high, the number of pseudo labels is too small, providing insufficient supervision signals. Therefore, we set α_c to 0.3. The way of feature fusion. We study the way of feature fusion in Table <ref>. Add denotes for element-wise adding E, F_S, and F_T. Compared with add, concatenation gains 6.07% accuracy improvements. The reason may be that Add directly changes the original pixel features but concatenation does not. Weighted add means adding E, F_S, and F_T weightedly where weights are predicted by a 1× 1 convolution. Concatenation overwhelms Weighted add regarding accuracy, FN, and FP. We adopt the concatenation as the way of feature fusion. §.§ Visualization of cross-domain features T-SNE visualization of the key components. As shown in Fig.  <ref> (a). There is a slight adaptation of cross-domain features when model is only trained in the source domain. Learned cross-domain features are aligned better using our proposed CCL in Fig.  <ref> (b). However, since CCL is a pixel-wise contrast, it can lead to the separation of the feature space due to lack of contextual information. To solve this problem, we enhance the links between cross-domain features by introducing domain-level feature aggregation (DFA). DFA incorporate cross-domain contextual information into the pixel-wise feature and effectively address the separation of the feature space in Fig.  <ref> (c). T-SNE visualization of different cross-domain context aggregation methods. Compared with Cross-domain <cit.> and Self-attention module (SAM) <cit.>, DACCA aligns source and target domain features better in Fig.  <ref>, indicating domain-level features can provide more cross-domain knowledge than features from a mini-batch. T-SNE visualization of different loss functions. As shown in Fig.  <ref>, our CCL learns more discriminative features than SePiCo <cit.>, indicating that our sample selection policy is effective. Besides, cross-entropy is inefficient in discriminating features of different categories. Our CCL can effectively compensate for the deficiency of cross-entropy loss. §.§ Comparison with state-of-the-art methods Performance on TuLane. The results on TuLane are shown in Table <ref>. When ERFNet is used as the detection model, our method performs better than other methods. For instance, our method outperforms MLDA in terms of accuracy by 2.04% (90.47% vs. 88.43%). Besides, using our CCL and DFA, the performance of MLDA gains consistent improvement. It indicates our sample selection policy is more effective than designing complicated loss functions, and DFA has a stronger domain adaptive ability than AIEM in MLDA. Regarding FN metrics, our method is 5.97% and 4.11% lower than PyCDA and Cross-domain, respectively. Significantly, when using the Transformer model RTFormer, DACCA outperforms the state-of-the-art SGPCS (92.24% vs. 91.55%) and achieves the best experimental results on TuLane in similar settings. 
Performance on OpenLane to CULane.To further validate our method's generalization ability, we carry out experiments transferring from OpenLane to CULane to demonstrate a domain adaptation between difficult real scenarios. As shown in Table <ref>, our method delivers 4.2% enhancement (43.0% vs. 38.8%) compared to the state-of-the-art MLDA. Our DACCA surpasses the existing methods in most indicators and also all these results reflect its outperformance. Performance on CULane to Tusimple. As presented in Table <ref>, our DACCA achieves the best performance on "CULane to Tusimple". For instance, DACCA increases the accuracy from 89.7% to 92.1% compared with the state-of-the-art method MLDA. It indicates our DACCA can perform well on the domain adaptation from difficult scene to simple scene. Performance on MoLane. Next, our method is tested on MoLane. By observing Table <ref>, we can conclude that DACCA is superior to existing unsupervised domain-adaptive lane detection methods. Specifically, DACCA improves the accuracy by 2.22% against SGPCS (93.50% vs. 91.28%). Moreover, using ERFNet as the detection model, DACCA improves the accuracy by 4.37% (90.52% vs. 86.15%) compared to the model using only source domain data. It is worth mentioning that if the Transformer model, RTFormer, is used as the detection model, the detection accuracy can be prompted by 6.73% (93.50% vs. 86.77%). Performance on MuLane. To further validate our method's generalization ability, we carry out experiments on MuLane. As shown in Table <ref>, when using ERFNet as the detection model, our method delivers 4.65% enhancement (87.93% vs. 83.28%) in contrast to the model using only the source domain data. Moreover, our method DACCA outperforms existing methods in accuracy, FP, and FN. Specifically, DACCA is 1.92% higher than PyCDA in accuracy (87.93% vs. 86.01%), 6.15% lower than Cross-domain in FP (25.95% vs. 32.10%), and 7.09% lower than PyCDA in FN (27.08% vs. 34.17%). All these results reflect the outperformance of our method. Performance ob Tusimple to CULane. We conduct the experiments on the domain adaptation from simple scene and difficult scene and result are shown in Table <ref>. DACCA demonstrates consistent performance advantages. Performance on CULane to OpenLane. From Table <ref>, we can see that DACCA achieves the best performance and gains 5.3% F1 score improvement. The results on domain adaptation cross difficult scenes manifest the effectiveness and generalizability of our DACCA. Qualitative evaluation. The visualization comparison results are illustrated in Figs. <ref> and  <ref>. In Fig.  <ref> (c) and Fig.  <ref> (b), our method predicts more smooth lanes than the other methods in urban scenarios. Our method can detect the complete lanes in real-world scenes, as shown in Fig.  <ref> and Fig.  <ref> (a), (c), and (d). Qualitative results demonstrate that our method can effectively transfer knowledge across domains. § CONCLUSION This paper presents a novel unsupervised domain-adaptive lane detection via contextual contrast and aggregation (DACCA), in which learning discriminative features and transferring knowledge across domains are exploited. Firstly, we create the positive sample memory module to preserve the domain-level features of the lane. Then, we propose a cross-domain contrastive loss to improve feature discrimination of different lanes by a novel sample selection strategy without modifying the form of contrastive loss. 
Finally, we propose the domain-level feature aggregation to fuse the domain-level features with the pixel-level features to enhance cross-domain context dependency. Experimental results show that our approach achieves the best performance, compared with existing methods, on TuLane, as well as when transferring from CULane to Tusimple, Tusimple to CULane, CULane to OpenLane, and OpenLane to CULane. Moreover, on the MuLane and MoLane datasets, our method outperforms existing unsupervised domain-adaptive segmentation-based lane detection methods. Furthermore, although DACCA is built upon segmentation-based lane detection, it holds considerable potential for application in other lane detection methods, e.g., keypoint-based and transformer-based ones. Our future work is to explore this aspect.

Kunyang Zhou received his B.S. degree in electrical engineering and automation from Nantong University, Nantong, China, in 2022. He is currently pursuing his M.S. degree in Control Science and Engineering from Southeast University. His current research interests include deep learning and pattern recognition.

Yunjian Feng received his M.S. degree in vehicle electronics engineering from Wuhan University of Technology, Wuhan, China, in 2020. He is currently pursuing his Ph.D. degree in control theory and control engineering from Southeast University. His current research interests include machine vision, deep learning, and autonomous driving.

Jun Li (Senior Member, IEEE) received his Ph.D. degree in control theory and control engineering from Southeast University (SEU), Nanjing, China, in 2007. From 2008 to 2010, he was a Post-Doctoral Fellow at SEU. In 2014, he was a Visiting Scholar with the New Jersey Institute of Technology, Newark, NJ, USA. He is the Director of the Robotics and Intelligent Systems Laboratory at the School of Automation, SEU. His current research interests include machine vision, logistics and construction robotics, machine learning, and operations research. Professor Li is a Fellow of the Institution of Engineering and Technology (IET) and serves as an Associate Editor for IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS.
http://arxiv.org/abs/2407.13087v1
20240718012339
Forecasting Supernova Observations with the CSST: I. Photometric Samples
[ "Chengqi Liu", "Youhua Xu", "Xianmin Meng", "Xin Zhang", "Shi-Yu Li", "Yuming Fu", "Xiaofeng Wang", "Shufei Liu", "Zun Luo", "Guanghuan Wang", "Hu Zhan" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.IM" ]
subject Article SPECIAL TOPIC: 2024 ? ? ?? 000000 XXX YYY Forecasting Supernova Observations with the CSST: I. Photometric Samples 1,2,3]Chengqi Liu 3]Youhua Xu 3]Xianmin Meng 3]Xin Zhang 4]Shi-Yu Li 5,6]Yuming Fu 7] Xiaofeng Wang 3,8]Shufei Liu 3,8]Zun Luo 3,8]Guanghuan Wang 3,2]Hu Zhanzhanhu@nao.cas.cn Chengqi Liu Chengqi Liu, Youhua Xu, Xianmin Meng, et al [1]Department of Astronomy, School of Physics, Peking University, Beijing 100871, PR China [2]Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, PR China [3]Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences, Beijing, 100101, China [4]Beijing Planetarium, Beijing Academy of Science and Technology, Beijing 100044, PR China [5]Leiden Observatory, Leiden University, P.O. Box 9513, NL-2300 RA Leiden, The Netherlands [6]Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, NL-9700 AV Groningen, The Netherlands [7]Physics Department, Tsinghua University, Beijing, 100084, China [8]School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, PR China The China Space Station Telescope (CSST, also known as Xuntian) is a serviceable two-meter-aperture wide-field telescope operating in the same orbit as the China Space Station. The CSST plans to survey a sky area of 17,500 deg^2 of the medium-to-high Galactic latitude to a depth of 25–26 AB mag in at least 6 photometric bands over 255–1000 nm. Within such a large sky area, slitless spectra will also be taken over the same wavelength range as the imaging survey. Even though the CSST survey is not dedicated to time-domain studies, it would still detect a large number of transients, such as supernovae (SNe). In this paper, we simulate photometric SN observations based on a strawman survey plan using the Sncosmo package. During its 10-year survey, the CSST is expected to observe about 5 million SNe of various types. With quality cuts, we obtain a “gold” sample that comprises roughly 7,400 SNe Ia, 2,200 SNe Ibc, and 6,500 SNe II candidates with correctly classified percentages reaching 91%, 63%, and 93% (formally defined as classification precision), respectively. The same survey can also trigger alerts for the detection of about 15,500 SNe Ia (precision 61%) and 2,100 SNe II (precision 49%) candidates at least two days before the light maxima. Moreover, the near-ultraviolet observations of the CSST will be able to catch hundreds of shock-cooling events serendipitously every year. These results demonstrate that the CSST can make a potentially significant contribution to SN studies. 95.55.Fw, 97.60.Bw, 83.85.Ns Forecasting Supernova Observations with the CSST: I. Photometric Samples [ July 22, 2024 ======================================================================== 2 § INTRODUCTION Observations of SNe play a crucial role in measuring the accelerated expansion of the Universe and studying the SNe themselves. Over the past two decades, projects such as the ESSENCE supernova Survey <cit.>, the Catalina Real-Time Transient Survey <cit.>, the Dark Energy Survey <cit.>, the Panoramic Survey Telescope and Rapid Response System <cit.>, the All-Sky Automated Survey for Supernovae <cit.>, and the Zwicky Transient Facility <cit.> acquired a large number of high-quality SNe light curves (LCs) and spectra. They have shed light on many questions in SN science and cosmology <cit.>, but there is always more to explore. 
For instance, the progenitor stars and the mechanisms of SN explosions remain unresolved issues. The need for more extensive samples of SNe to tackle these issues has been a key driver for several new survey projects such as the Wide Field Survey Telescope (WFST) in Lenghu, China <cit.> and the Vera C. Rubin Observatory (Rubin) on Cerro Pachón in Chile <cit.>. The upcoming 2-meter China Space Station Telescope <cit.> plans to start a large-scale multiband imaging and slitless spectroscopy survey around 2026. The survey, taking approximately 70% time of the 10-year mission, will cover an area of about 17,500 deg^2 in 7 photometric bands and 3 spectroscopic bands. While the CSST places an emphasis on cosmology and extragalactic science, the survey can be quite useful for time-domain studies as well because of its depth, image quality, and near-ultraviolet (NUV) capability. The single-exposure depth of the CSST wide survey will be about 25-26 mag on average, enabling discoveries of SNe at early stages or high redshifts. The size of the CSST point spread function, quantified by the radius encircling 80% energy, is specified to be no more than 0.15^''. The combination of high-quality data from space-based observations with those from ground-based telescopes will enable detailed studies on SN progenitors, characteristics of host galaxies, and statistics of SN locations within the host galaxies <cit.>. The NUV band of the CSST allows it to catch shock-cooling events, which can further deepen our understanding of the SN explosion mechanism <cit.>. The CSST is expected to detect a large number of SNe and make potentially important contributions to SN research. Previous studies <cit.> have explored SN Ia detection and cosmological constraints with a proposed 9 deg^2 CSST ultra-deep field. In this paper, we aim to make a realistic assessment of the CSST's capability to detect SNe in its wide survey and deep fields using pointings generated by simulated operations. The paper is structured as follows. Section <ref> describes the simulation of the CSST SN observations and the classification methods for the mock data. Section <ref> presents the resulting SN samples. Discussions and conclusions are given in Section <ref>. Throughout this paper, we adopt a flat ΛCDM model with parameters H_0=70  km s^-1 Mpc^-1, Ω_m=0.3, and Ω_Λ=0.7. Unless otherwise specified, the default magnitude system utilized is the AB magnitude system. § MOCK OBSERVATIONS OF SUPERNOVA §.§ Survey specifications The CSST adopts a Cook-type off-axis three-mirror anastigmat system, which achieves high image quality within a large field of view (FoV). Fig. <ref> illustrates the arrangement of filters and gratings of the CSST survey camera. The focal plane consists of 30 detectors, each with a filter or two gratings mounted atop. In this paper, we only consider the case of photometric observations. Fig. <ref> shows the system throughputs of the CSST photometric bands from 255 nm to 1000 nm. The survey comprises both a wide component and a deep component, with stacked depths of g∼ 26.3 and 27.5, respectively <cit.>. An ultra-deep field of 9^2 reaching g∼ 28 is also under consideration. The single-exposure depths of the CSST imaging survey are given in Table <ref>. Details of the wide field survey and the deep field survey are outlined below. Wide Field Survey: The wide field survey primarily covers Galactic latitudes |b| ≥ 15^∘ and ecliptic latitudes |β|≥ 15^∘, spanning an area of about 17,500 deg^2. 
Each detector covers the entire area once with a nominal exposure time of 150 seconds. As a result, each patch of the sky receives 2 observations in the u, g, r, i, and z bands and 4 observations in the NUV and y bands. This leads to a total number of 18 photometric observations in each sky patch of the wide survey. Deep Field Survey: The deep field survey covers a sky area of 400 deg^2, whose selection has not been finalized yet. In the current simulation of the CSST operations, 8 fields are selected for demonstration purposes. Each detector covers all the deep fields 4 times with a nominal exposure time of 250 seconds per visit, resulting in a total of 72 visits summed over all photometric bands. Over the 10-year survey operations, the CSST is expected to take about 650,000 exposures, including about 60,000 exposures in the deep fields. Using Healpix <cit.>, we evenly divide the entire 17,500 deg^2 survey area into about 1,500,000 small sky patches, with each patch covering an area of about 0.013 deg^2. Subsequently, we extract the observation time series for each sky patch from the pointing sequence generated by the simulation of operations. The median interval between two consecutive visits, regardless of which two bands are observed, is around six weeks in the wide survey and two weeks in the deep survey. About 28% consecutive visits of the same band are made within one day, in which case only one visit is counted in our analyses. §.§ Supernova simulations We use the Python package Sncosmo to simulate SN observations based on the CSST survey specifications. To generate realistic SN LCs, we need to take into consideration the following factors: the SN volumetric rate, SN models, extinction correction, and signal-to-noise ratio calculation. §.§.§ Supernova volumetric rate The volumetric rate of SNe is the number of SNe within a given timespan and a fixed co-moving volume, which can be described as a function of redshift z. In our simulations, the maximum redshift of the SNe is set to 1.4 based on the single-exposure detection limit of the CSST. We adopt a power-law model to describe the event rate of SN Ia <cit.> R_Ia(z) = 2.5× (1+z)^1.5 z≤ 1, 9.7× (1+z)^-0.5 1<z<3, and that of Core-collapse supernova (CCSN) <cit.> R_CC(z) = 6.8×(1+z)^3.6. The unit of the rates is 10^-5  h_70^3 Mpc^-3 yr^-1. §.§.§ Supernova models SALT2 <cit.> is an empirical model depicting the spectro-photometric evolution of SNe Ia over time. It utilizes an extensive dataset comprising templates derived from LCs and spectra of both nearby and distant SNe Ia. SALT2 provides the average spectral sequence of SNe Ia and identifies their principal variability components, including a color variation law. We adopt the SALT2-extended model <cit.> which provides a wider rest-frame wavelength coverage than the SALT2 model. The SALT2-extended model allows us to measure the distance moduli in the spectral wavelength range of 30 nm to 1800 nm, which is essential for generating LCs in all the CSST photometric bands. The model flux density of a SN Ia at a rest-frame wavelength λ can be expressed as F(p,λ)= x_0×[M_0(p,λ)+x_1M_1(p,λ)]×exp(c'CL(λ)), where p represents the rest-frame time since the date of maximum luminosity in the B band (referred to as the phase), M_0(p,λ) denotes the average spectral sequence, M_1(p,λ) represents the first-order variation, and CL(λ) represents the average color correction law. The parameters x_0,  x_1, and c' are the amplitude, stretch, and color of the LC, respectively. 
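As an illustration of how the volumetric rates above translate into raw event counts over the survey footprint (before any detectability or cadence cut), the following sketch integrates R(z)/(1+z) over the comoving volume with the adopted cosmology. It uses Astropy and is our own back-of-the-envelope estimate, not the simulation pipeline of this paper.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def rate_Ia(z):
    """SN Ia volumetric rate in Mpc^-3 yr^-1 (rest frame), h70 = 1."""
    z = np.asarray(z, dtype=float)
    return 1e-5 * np.where(z <= 1, 2.5 * (1 + z)**1.5, 9.7 * (1 + z)**-0.5)

def rate_CC(z):
    """Core-collapse SN volumetric rate in Mpc^-3 yr^-1 (rest frame)."""
    return 6.8e-5 * (1 + np.asarray(z, dtype=float))**3.6

def n_events(rate, area_deg2=17500.0, years=10.0, zmax=1.4, nz=500):
    """Number of SNe exploding in the surveyed volume during the survey;
    the 1/(1+z) factor converts the rest-frame rate to the observer frame."""
    z = np.linspace(1e-3, zmax, nz)
    dVdz = cosmo.differential_comoving_volume(z).value   # Mpc^3 per steradian
    omega = area_deg2 * (np.pi / 180.0)**2                # survey solid angle in sr
    return years * np.trapz(rate(z) / (1 + z) * dVdz * omega, z)

print(n_events(rate_Ia), n_events(rate_CC))
```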
CCSNe exhibit a more heterogeneous nature compared to SNe Ia. Unlike SNe Ia, there is currently no parameterized model available to describe the diversity observed in the LCs of CCSNe. Therefore, we use time series template models to simulate CCSN observations. The spectral flux density of the time series template model is given by F(p,λ) = A × M(p,λ), where M(p,λ) is the relative flux at a phase p and a wavelength λ. A (amplitude) is the single free parameter of the model. Two sets of CCSN template models can effectively depict the spectral evolution sequence of CCSN over time. The first set comprises 40 templates from the Supernova Photometric Classification Challenge <cit.>. The second set contains composite spectral templates compiled from the literature, which are known as the Nugent templates <cit.>. The wavelength range of Nugent templates spans from 100 nm to 1500 nm, providing sufficient coverage. We adopt the Nugent templates as the default templates to simulate CCSNe due to their compatibility with the CSST filters and their simplicity. §.§.§ Extinction law Dust in the MW and host galaxy will affect the shape of an observed SN spectrum, which in turn affects the LCs. We adopt the F99 dust extinction law with R_V=3.1 <cit.> for both the MW and the host galaxy, with the dust extinction parameters (color excesses) being mwebv and hostebv, respectively. We query the SFD dust map <cit.> for mwebv and hostebv based on the SN location with the Python package sfdmap. In our simulation, the mwebv and hostebv parameters have a mean value of 0.04 mag. §.§.§ Signal-to-noise ratio calculation The signal-to-noise ratio (SNR) is given by S/N = N_obj/√(N_obj+n_pix(N_sky+N_D+N_R^2+N_other^2)), where n_pix is the number of pixels involved in the SNR calculation, N_obj represents the number of photo-electrons collected from the SN, N_sky and N_D are the contribution per pixel from the sky background and dark current, respectively, N_R denotes the readout noise, and N_other is manually set to take into account other noises that leave a margin of 0.3 mag in the limiting magnitude. The number of photo-electrons collected from a target can be described by N_obj = ∫f_λ(m_AB)/hc/λ·τ(λ) · dλ· Area · t · 80%, where f(λ) is the spectral flux of the target, h is Planck's constant, c is the speed of light, Area is the area of the primary mirror of the telescope, τ(λ) is the system throughput in Fig. <ref>, and t is the exposure time. We assume 80% of the photons from the SN is incident on the central n_pix pixels in the calculation. With the angular size of a single pixel being 0.074^'', n_pix is then π(0.15/0.074)^2 ≈ 13. The average dark current is 0.02 e^-/s/pix, the readout noise is 5 e^-/pix, and the readout time is about 40 seconds. The sky background levels scaled from the Hubble Space Telescope observations <cit.> are 0.0026, 0.018, 0.16, 0.21, 0.21, 0.13, and 0.038 e^-/s/pix for the seven photometric bands of the CSST, respectively. §.§.§ Supernova mock observations We use the built-in models of Sncosmo to generate the SN LCs. The SALT2-extended model includes five free parameters: the time of the B-band peak magnitude (t_0), the redshift (z), the amplitude parameter (x_0), the stretch parameter (x_1), and the color parameter (c'). The peak time t_0 is randomly assigned within the 10-year survey duration, while the redshift distributions follow those outlined in Section <ref>. 
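Before turning to the light-curve parameters, the single-exposure noise budget defined above can be written compactly as the sketch below; this is our own simplified reading of the S/N formula (the source term N_obj is taken as an input rather than evaluated from the throughput integral) and not the project's exposure-time calculator.

```python
import numpy as np

# Per-band sky background rates quoted above (e-/s/pix), in band order NUV, u, g, r, i, z, y.
SKY_RATE = {'NUV': 0.0026, 'u': 0.018, 'g': 0.16, 'r': 0.21,
            'i': 0.21, 'z': 0.13, 'y': 0.038}
N_PIX = 13        # ~ pi * (0.15 / 0.074)**2 central pixels containing 80% of the flux
DARK_RATE = 0.02  # e-/s/pix
READ_NOISE = 5.0  # e-/pix per exposure

def single_exposure_snr(n_obj, band, t_exp, n_other=0.0):
    """S/N = N_obj / sqrt(N_obj + n_pix (N_sky + N_D + N_R^2 + N_other^2)).

    n_obj   : photo-electrons from the SN collected in the central N_PIX pixels
    band    : one of the seven CSST photometric bands
    t_exp   : exposure time in seconds (150 s wide survey, 250 s deep fields)
    n_other : extra term used to leave the 0.3 mag limiting-magnitude margin
    """
    n_sky = SKY_RATE[band] * t_exp
    n_dark = DARK_RATE * t_exp
    return n_obj / np.sqrt(n_obj + N_PIX * (n_sky + n_dark + READ_NOISE**2 + n_other**2))

# Example: a source delivering 500 e- in a 150 s wide-survey g-band exposure.
print(f"S/N ~ {single_exposure_snr(500.0, 'g', 150.0):.1f}")
```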
The B-band peak absolute magnitudes of SNe Ia follow a normal distribution of M_B ∼𝒩(-19.3, 0.3) according to <cit.>, which determines the parameter x_0. The other parameters are set as follows: x_1 ∼𝒩(0, 1) and c' ∼𝒩(0, 0.1) <cit.>. The LCs of SNe Ibc, SNe IIP, and SNe IIL are generated using Nugent-snIbc <cit.>, Nugent-sn2p, and Nugent-sn2l <cit.> templates, respectively. The Nugent templates are described by three free parameters: t_0, z, and amplitude. The parameters t_0 and z are set in the same way as that of SNe Ia. According to <cit.>, the average B-band peak absolute magnitudes, dispersions, and relative fractions of different CCSNe are listed in Table <ref>. The B-band peak absolute magnitudes, which determine the amplitude for SNe Ibc, SNe IIP, and SNe IIL, follow normal distributions of M_B ∼𝒩(-17.1, 0.99), M_B ∼𝒩(-16.8, 0.97), and M_B ∼𝒩(-17.98, 0.90), respectively. For each SN in each exposure, we generate its flux with scatter incorporated in the observed band based on its LC phase and the CSST survey specifications. Host galaxy contamination to the SN photometry is assumed to be sufficiently low and is therefore not included in our current simulations. A separate study will be needed to address such contamination for the CSST, likely in combination with external data. In total, about 5 million SNe of various types would be observed at least once with SNR≥ 5 and hence be cataloged as real objects. This includes about 1,700,000 SNe Ia, 590,000 SNe Ibc, and 2,000,000 SNe II in the wide field, and about 73,000 SNe Ia, 32,000 SNe Ibc, and 110,000 SNe II in the deep fields. Unfortunately, most of these SNe cannot be correctly identified as SNe with CSST data alone. §.§ Supernova classification There are mainly two types of SN photometric classification methods: machine learning (ML) and empirical approaches. ML typically requires a large number of well-observed training samples and a sufficient number of observation points to identify candidate SNe <cit.>. This requirement poses a challenge given the CSST survey characteristics. Empirical approaches can be further categorized into model-dependent <cit.> and model-independent <cit.> methods. Model-independent methods primarily rely on analytical approaches that utilize multi-band colors. However, most CSST SNe do not have simultaneous color information. We choose the model-dependent method of spectral energy distribution (SED) template fitting for CSST SNe classification, which is a physically motivated approach <cit.>. This method determines the SN subtype by minimizing the χ^2 when comparing the observed magnitudes with the synthetic magnitudes computed from filter throughput curves and library templates. The library templates used for the SED template-fitting method include type Ia (Nugent-snIa, Nugent-sn91t, Nugent-sn91bg), type Ibc (Nugent-snIbc, Nugent-hyper), and type II (Nugent-sn2p, Nugent-sn2l) from <cit.>. We utilize the built-in function mcmc_lc in Sncosmo, employing the Markov Chain Monte Carlo method, to perform SED template fitting for the selected CSST SN samples. Observations with SNR < 5 are considered too weak and are excluded from the fitting process. Each Nugent template includes five parameters to be fitted: z, t_0, amplitude, mwebv, and hostebv. The CSST will capture images of the host galaxies associated with the SNe. According to <cit.>, below redshift 1.4, the average photo-z uncertainty of the host galaxy observed by the CSST is within 0.05. 
For galaxies with lower redshifts, the uncertainty is even smaller. Additionally, the redshift of the host galaxy can also be obtained from existing catalogs. Therefore, during the fitting process, we apply a Gaussian prior distribution 𝒩(z_true, 0.05×(1+z_true)) to the z parameter. We assume that the parameter mwebv is perfectly known from the dust map, and the hostebv is fitted as a variable parameter ranging from 0 to 0.5 mag. The fitting function makes initial guesses for t_0 and amplitude based on the data, then runs a minimizer. We fit the simulated SN LCs with each of the seven templates in the library one by one. For each template, we obtain a χ^2 value by fitting the data. After fitting all seven templates, we compare the χ^2 values and select the template with the minimum χ^2 as the one that best fits the data. To evaluate the template fitting method performance, we use the confusion matrix to visualize the good and bad classification. Table <ref> shows the confusion matrix for binary classification. True positive (TP) is the number of sources predicted to be true and actual is true. False positive (FP) is the number of sources predicted to be true and actual is false. True negative (TN) is the number of sources predicted to be false and actual is false. False negative (FN) is the number of sources predicted to be false and actual is true. For ease of reading, the confusion matrix is usually normalized by dividing each entry by the true number of each SN subtype. Through the confusion matrix, the precision (also known as purity) and the recall (also known as completeness) are defined as precision = TP/TP+FP, recall = TP/TP+FN. The precision means the number of correct predictions in each class compared to the total number of predictions in that class, and the recall means the number of correct predictions in each class compared to the total number of that class. § CSST SUPERNOVA MOCK SAMPLES In this section, we make selection cuts to obtain a well-classified “gold” sample and an alert sample from the mock SNe data generated in the previous section. The results are summarized in Fig. <ref> and Table <ref>. §.§ Gold sample Following the selection criteria outlined in <cit.>, we implement a set of quality cuts, referred to as the Q1 cut, on the observed SNe before the fitting process: * At least two observations in the same band, with one being a non-detection (SNR <1) and the other having SNR >5. * At least two different bands with SNR > 5. * At least one observation with SNR > 5 before the B-band peak magnitude, and at least one observation with SNR > 5 after the B-band peak magnitude. * At least six observations with SNR > 5. As seen in Table <ref>, 27,210 SNe in the wide field and 4,546 SNe in the deep fields pass the Q1 cut. Fig. <ref> provides several examples of the SN LCs and SEDs along with mock observations in the observer's frame. Classification results of these SNe are shown in Fig. <ref>. For those relatively rare cases with a “large” number of observations, there is a good chance that several observations are made within a few days, providing little help on classification. Therefore, the precision and recall do not increase monotonically with the number of observations. We further refine the sample with a Q2 cut that removes SNe with inadequate fits. The resulting sample is referred to as the gold sample, which may be used as a candidate catalog for constraining cosmology in future work. 
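The template-fitting classification described in this section can be sketched as follows. For brevity we call sncosmo.fit_lc as a lightweight stand-in for the mcmc_lc fits actually used, replace the Gaussian redshift prior by a ±3σ bound around the host photo-z, and invent the helper names; only SNR > 5 points are assumed to be passed in.

```python
import sncosmo

# The seven library templates (sncosmo built-in source names).
TEMPLATES = {
    'Ia':  ['nugent-sn1a', 'nugent-sn91t', 'nugent-sn91bg'],
    'Ibc': ['nugent-sn1bc', 'nugent-hyper'],
    'II':  ['nugent-sn2p', 'nugent-sn2l'],
}

def classify(data, z_host, mwebv, sigma_z=0.05):
    """Return (best subtype, best template, chi2 per subtype) for one light curve.

    data  : astropy Table of SNR > 5 points ('time', 'band', 'flux', 'fluxerr', 'zp', 'zpsys')
    z_host: host-galaxy photometric redshift; mwebv is held fixed at the dust-map value.
    """
    sig = sigma_z * (1.0 + z_host)
    chi2, best_src = {}, {}
    for subtype, sources in TEMPLATES.items():
        for src in sources:
            model = sncosmo.Model(source=src,
                                  effects=[sncosmo.F99Dust(r_v=3.1), sncosmo.F99Dust(r_v=3.1)],
                                  effect_names=['host', 'mw'],
                                  effect_frames=['rest', 'obs'])
            model.set(mwebv=mwebv)
            result, _ = sncosmo.fit_lc(
                data, model, ['z', 't0', 'amplitude', 'hostebv'],
                bounds={'z': (max(z_host - 3 * sig, 0.0), z_host + 3 * sig),
                        'hostebv': (0.0, 0.5)})
            if subtype not in chi2 or result.chisq < chi2[subtype]:
                chi2[subtype], best_src[subtype] = result.chisq, src
    winner = min(chi2, key=chi2.get)
    return winner, best_src[winner], chi2
```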
The fitting program returns the corresponding minimum χ^2 values for each subtype (Ia, Ibc, and II), and we examine the relative χ^2 value of the best-matching and second-matching subtypes based on <cit.>. We denote the χ^2 value of the best-matching subtype as χ^2_min, the second-matching subtype as χ^2_sec. The Q2 cut is then described as follows: * The fit is convergent. * χ^2_min< χ^2_sec- χ^2_min. * The reduced chi-square χ^2_min/n_dof < 10. The resulting gold sample, with the wide field and deep fields combined, contains about 7,400 SNe Ia, 2,200 SNe Ibc, and 6,500 SNe II candidates with overall classification precision of 91%, 63%, and 93%, respectively. The redshift distributions of the subtypes are shown in Fig. <ref>. The recall rates for the sample's classification are provided in the normalized confusion matrices in Fig. <ref>. In the wide field, the recall rates for SN Ia, Ibc, and II are 87%, 80%, and 90% respectively, while in the deep fields, they are 88%, 94%, and 94% respectively. Fig. <ref> presents the photo-z estimation of the gold sample. We define outliers to be |δ_z|/(1+z_true) > 0.15, where δ_z = z_fit - z_true. The standard deviation of |δ_z|/(1+z_true) as σ_z is also calculated to measure the uncertainty of the photometric redshift estimation, considering the redshift evolution. In the case of SNe Ia, the photo-z uncertainty is roughly the same as the somewhat conservative prior 0.05(1+z_true) from the host galaxies. This means that improving the photo-zs of the host galaxies would directly improve those of the SNe Ia. The errors of SNe Ibc themselves are relatively small, but they are affected by contamination from other types, leading to a degradation of the sample's photo-zs. Since the Ibc subsample has a relatively low classification precision, it would not be very useful before further refinement. The SNe II subsample exhibits the best photo-z performance despite the contamination. §.§ Alert sample Early detection of pre-maximum SNe is crucial for subsequent observations and studies. These include classifying candidates based on their spectra near the peak brightness, understanding how the spectra evolve from early to late phases, and accurately estimating SN model parameters through well-observed LCs. The CSST's image quality and single-exposure depths help catch SNe during their early ascents. Alerts could be issued at the earliest opportunity, facilitating prompt follow-up observations. By requiring the SN to be observed in the same band at least twice before the B-band peak luminosity, one with SNR<1 and the other with SNR≥ 5, we obtain a pre-maximum sample of approximately 680,000 SNe: about 340,000 SNe Ia, 110,000 SNe Ibc, and 190,000 SNe II in the wide field, and about 22,000 SNe Ia, 9,100 SNe Ibc, and 17,000 SNe II in the deep fields. Fig. <ref> shows the redshift distributions of this sample. Most of these SNe would not receive enough CSST observations before the maximum to be correctly identified. One might want to set a minimum classification precision requirement to filter out an alert sample for further observations. However, in reality, the classification precision of the sample will not be known until proper follow-up observations have been conducted. Therefore, an indirect criterion is necessary. As demonstrated in Fig. <ref>, the classification accuracy improves as the number of observations increases. One can therefore use the the number of observations before the maximum as a proxy for precision. 
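For concreteness, the Q2 acceptance test and the photo-z quality metrics used above can be expressed as the minimal helpers below; the function names and the handling of the convergence flag are ours.

```python
import numpy as np

def passes_q2(chi2_by_subtype, n_dof, converged=True):
    """Q2 cut: convergent fit, chi2_min < chi2_sec - chi2_min, and chi2_min/n_dof < 10."""
    chi2_min, chi2_sec = sorted(chi2_by_subtype.values())[:2]
    return converged and chi2_min < chi2_sec - chi2_min and chi2_min / n_dof < 10.0

def photoz_metrics(z_fit, z_true):
    """Outlier fraction (|dz|/(1+z_true) > 0.15) and sigma_z, the std of |dz|/(1+z_true)."""
    frac = np.abs(np.asarray(z_fit) - np.asarray(z_true)) / (1.0 + np.asarray(z_true))
    return np.mean(frac > 0.15), np.std(frac)
```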
We consider ∼ 50% to be an acceptable precision for a meaningful alert sample. Fig. <ref> translates it into ≥ 4 and ≥ 6 observations before the maximum for SNe Ia and SNe II, respectively. These requirements and those for the pre-maximum sample are referred to as the Q3 cut collectively. The resulting alert sample, with the wide field and deep fields combined, contains about 20,000 SNe Ia (precision 61%) and 2,900 SNe II (precision 49%) candidates. SNe Ibc reach 50% precision only in the deep fields with merely 47 candidates, so we do not include them in the alert sample. Fig. <ref> shows the number of alerts that can be issued as a function of days before maximum light. The lead time of follow-up observations can vary significantly from one facility to another. For a well-coordinated full-sky follow-up program that has a lead time of just two days, roughly 15,500 SNe Ia and 2,100 SNe II candidates are within its reach. The numbers drop by 54% and 62% for SNe Ia and SNe II, respectively, if the lead time increases to seven days. § DISCUSSIONS AND CONCLUSIONS Our simulations show that the CSST wide field and deep fields can provide more than 16,000 well-classified SNe candidates of various types at z≲ 1 with classification precision above 90% for type Ia and type II. The proposed 9 deg^2 CSST ultra-deep field can contribute another ∼ 2000 well-observed SN Ia at z≲ 1.3 for cosmology <cit.>. Meanwhile, the CSST survey can trigger alerts for the detection of about 15,500 SNe Ia (precision 61%) and 2,100 SNe II (precision 49%) candidates at least two days before maximum. These samples will be of great value for SN science and cosmology. It is worth emphasizing the CSST's unique capability in the NUV. An example of important applications is searching and observing shock-cooling events. Early-time UV observations of CCSNe and the measurements of their optical LCs are crucial for comprehending the physics behind SN explosions and understanding the progenitor properties <cit.>. Only a handful of such events were detected in the past <cit.>. <cit.> proposed a method for identifying shock-cooling events: an optical survey dataset to locate the SN and a UV dataset to search for the associated shock-cooling event. UV observations can capture CCSNe explosions in the very early phase <cit.>. We adopt the method in Ref <cit.> to predict the number of shock-cooling events of SNe II to be seen by the CSST, assuming that all the events are from red supergiant (RSG) progenitors with a single set of fiducial parameters. The peak absolute magnitude of the shock-cooling model <cit.> in the NUV band is determined by the RSG parameters <cit.>: M_ peak^ NUV≈ -11.2 - 2.3log_10(R_∗/R_⊙) - 2.3log_10(E/10^51 erg), where the radius R_∗ takes the fiducial value of 500 R_⊙, and the energy E=10^51 erg normalized to ejecta mass of 10 M_⊙. Using Eq. (<ref>), we obtain the peak absolute magnitude M_ peak^ NUV≈-17 mag. Given that the CSST wide-field survey reaches 24.5 mag in the NUV band with a single exposure (Table <ref>), we expect it to be capable of detecting shock-cooling events up to z∼ 0.3. The CSST survey camera has four detectors for NUV imaging, totaling a FoV of 0.14 deg^2. With the volumetric rate of CCSNe from Eq. (<ref>), we estimate that there are about three z≤ 0.3 SNe II exploding within an area of the CSST NUV FoV every year. We adopt one day as the early detection window for shock-cooling events <cit.>. 
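These estimates are easy to reproduce. The sketch below evaluates the peak NUV magnitude for the fiducial RSG parameters, compares the apparent magnitude at z = 0.3 with the 24.5 mag single-exposure limit (K-corrections are neglected), and integrates the core-collapse rate of Eq. (<ref>) over the 0.14 deg^2 NUV field of view out to z = 0.3, which lands close to the "about three" events per year quoted above; the helper functions and the use of astropy/scipy are ours.

```python
import numpy as np
from scipy.integrate import quad
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)                 # cosmology adopted in this paper

# Peak NUV magnitude of the shock-cooling model for R_* = 500 R_sun and E = 1e51 erg.
R_star, E_51 = 500.0, 1.0
M_peak = -11.2 - 2.3 * np.log10(R_star) - 2.3 * np.log10(E_51)
m_z03 = M_peak + cosmo.distmod(0.3).value             # apparent AB mag at z = 0.3, no K-correction
print(f"M_peak(NUV) ~ {M_peak:.1f}  ->  m(z=0.3) ~ {m_z03:.1f}  (single-exposure limit 24.5)")

# Expected number of z <= 0.3 core-collapse SNe per year inside the 0.14 deg^2 NUV FoV
# (all CC subtypes are counted; SNe II dominate the core-collapse population).
omega_fov = 0.14 * (np.pi / 180.0) ** 2               # sr

def events_per_z(z):
    rate = 6.8e-5 * (1.0 + z) ** 3.6                  # Mpc^-3 yr^-1 (rest frame)
    dVdz = cosmo.differential_comoving_volume(z).value  # Mpc^3 sr^-1
    return rate / (1.0 + z) * dVdz * omega_fov        # (1+z): observer-frame rate

n_per_year, _ = quad(events_per_z, 0.0, 0.3)
print(f"CCSNe per year exploding inside the NUV FoV (z <= 0.3): {n_per_year:.1f}")
```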
The probability of catching a shock-cooling event is then 3/365, or 0.0082, per CSST pointing, regardless of whether the pointing is fixed in the sky with a cadence ≥ 1 day or tiles the sky without overlapping. The CSST survey will complete about 65,000 pointings every year. We ignore those repetitive observations within one day (∼ 28%), which are helpful in general but do not significantly increase the probability of detecting shock-cooling events, and estimate the events observed every year to be 0.0082 × 65,000 × 0.72, or 380. Therefore, the CSST can catch hundreds of shock-cooling events serendipitously every year in the NUV. This is comparable in numbers to the projection of the dedicated wide-field UV explorer ULTRASAT <cit.> but with a wider redshift span. Another unique aspect of the CSST is its slitless spectroscopy observations in the wavelength range 255-1000 nm. The slitless spectra will provide an additional dimension for in-depth studies on SNe. We will explore CSST spectroscopic SN samples in a future paper. Finally, the CSST will not be operating alone. Close collaboration with its contemporary projects such as the WFST, the Euclid mission, the Rubin Observatory, and the Roman Space Telescope will greatly enhance the science for all parties. We would like to thank Subo Dong for helpful discussion. This work was supported by the National Key R&D Program of China No. 2022YFF0503400 and 2022YFF0503401, China Manned Space Program grant No. CMS-CSST-2021-B01, CMS-CSST-2021-B04, and CMS-CSST-2021-A12, Science Program of Beijing Academy of Science and Technology (24CD014), National Natural Science Foundation of China (NSFC grants 12288102 and 12033003), and Tencent Xplorer Prize. This work made use of Astropy <cit.> and Sncosmo <cit.>. The authors declare that they have no conflict of interest. 10 2007ApJ...666..674M G. Miknaitis, G. Pignata, A. Rest, W. M. Wood-Vasey, S. Blondin, P. Challis, R. C. Smith, C. W. Stubbs, N. B. Suntzeff, R. J. Foley, et al., http://dx.doi.org/10.1086/519986 666, 674 (2007), arXiv: astro-ph/0701043. 2011arXiv1102.5004D S. G. Djorgovski, A. J. Drake, A. A. Mahabal, M. J. Graham, C. Donalek, R. Williams, E. C. Beshore, S. M. Larson, J. Prieto, M. Catelan, et al., http://dx.doi.org/10.48550/arXiv.1102.5004arXiv e-prints arXiv:1102.5004 (2011). 2016MNRAS.460.1270D Dark Energy Survey Collaboration, T. Abbott, F. B. Abdalla, J. Aleksić, S. Allam, A. Amara, D. Bacon, E. Balbinot, M. Banerji, K. Bechtol, et al., http://dx.doi.org/10.1093/mnras/stw641 460, 1270 (2016), arXiv: 1601.00329. 2016arXiv161205560C K. C. Chambers, E. A. Magnier, N. Metcalfe, H. A. Flewelling, M. E. Huber, C. Z. Waters, L. Denneau, P. W. Draper, D. Farrow, D. P. Finkbeiner, et al., http://dx.doi.org/10.48550/arXiv.1612.05560arXiv e-prints arXiv:1612.05560 (2016). 2017PASP..129j4502K C. S. Kochanek, B. J. Shappee, K. Z. Stanek, T. W. S. Holoien, T. A. Thompson, J. L. Prieto, S. Dong, J. V. Shields, D. Will, C. Britt, et al., http://dx.doi.org/10.1088/1538-3873/aa80d9 129, 104502 (2017), arXiv: 1706.07060. 2019PASP..131a8002B E. C. Bellm, S. R. Kulkarni, M. J. Graham, R. Dekany, R. M. Smith, R. Riddle, F. J. Masci, G. Helou, T. A. Prince, S. M. Adams, et al., http://dx.doi.org/10.1088/1538-3873/aaecbe 131, 018002 (2019), arXiv: 1902.01932. 2007ApJ...666..694W W. M. Wood-Vasey, G. Miknaitis, C. W. Stubbs, S. Jha, A. G. Riess, P. M. Garnavich, R. P. Kirshner, C. Aguilera, A. C. Becker, J. W. Blackman, et al., http://dx.doi.org/10.1086/518642 666, 694 (2007), arXiv: astro-ph/0701041. 
2009ApJ...696..870D A. J. Drake, S. G. Djorgovski, A. Mahabal, E. Beshore, S. Larson, M. J. Graham, R. Williams, E. Christensen, M. Catelan, A. Boattini, et al., http://dx.doi.org/10.1088/0004-637X/696/1/870 696, 870 (2009), arXiv: 0809.1394. 2016Sci...351..257D S. Dong, B. J. Shappee, J. L. Prieto, S. W. Jha, K. Z. Stanek, T. W. S. Holoien, C. S. Kochanek, T. A. Thompson, N. Morrell, I. B. Thompson, et al., http://dx.doi.org/10.1126/science.aac9613Science 351, 257 (2016), arXiv: 1507.03010. 2018ApJ...857...51J D. O. Jones, D. M. Scolnic, A. G. Riess, A. Rest, R. P. Kirshner, E. Berger, R. Kessler, Y. C. Pan, R. J. Foley, R. Chornock, et al., http://dx.doi.org/10.3847/1538-4357/aab6b1 857, 51 (2018), arXiv: 1710.00846. 2019ApJ...872L..30A T. M. C. Abbott, S. Allam, P. Andersen, C. Angus, J. Asorey, A. Avelino, S. Avila, B. A. Bassett, K. Bechtol, G. M. Bernstein, et al., http://dx.doi.org/10.3847/2041-8213/ab04fa 872, L30 (2019), arXiv: 1811.02374. 2020ApJ...904...35P D. A. Perley, C. Fremling, J. Sollerman, A. A. Miller, A. S. Dahiwale, Y. Sharma, E. C. Bellm, R. Biswas, T. G. Brink, R. J. Bruch, et al., http://dx.doi.org/10.3847/1538-4357/abbd98 904, 35 (2020), arXiv: 2009.01242. 2023SCPMA..6609512W T. Wang, G. Liu, Z. Cai, J. Geng, M. Fang, H. He, J.-a. Jiang, N. Jiang, X. Kong, B. Li, et al., http://dx.doi.org/10.1007/s11433-023-2197-5Science China Physics, Mechanics, and Astronomy 66, 109512 (2023), arXiv: 2306.07590. lsstsciencecollaboration2009lsst LSST Science Collaboration, P. A. Abell, J. Allison, S. F. Anderson, J. R. Andrew, J. R. P. Angel, L. Armus, D. Arnett, S. J. Asztalos, T. S. Axelrod, et al., LSST Science Book, version 2.0 (2009), arXiv: 0912.0201. 2019ApJ...873..111I Ž. Ivezić, S. M. Kahn, J. A. Tyson, B. Abel, E. Acosta, R. Allsman, D. Alonso, Y. AlSayyad, S. F. Anderson, J. Andrew, et al., http://dx.doi.org/10.3847/1538-4357/ab042c 873, 111 (2019), arXiv: 0805.2366. 2018RPPh...81f6901Z H. Zhan and J. A. Tyson, http://dx.doi.org/10.1088/1361-6633/aab1bdReports on Progress in Physics 81, 066901 (2018), arXiv: 1707.06948. 2011SSPMA..41.1441Z H. Zhan, http://dx.doi.org/10.1360/132011-961Scientia Sinica Physica, Mechanica & Astronomica 41, 1441 (2011). Zhan2021 H. Zhan, http://dx.doi.org/https://doi.org/10.1360/TB-2021-0016Chinese Science Bulletin 66, 1290 (2021). 2002MNRAS.336L..17F D. Farrah, W. P. S. Meikle, D. Clements, M. Rowan-Robinson, and S. Mattila, http://dx.doi.org/10.1046/j.1365-8711.2002.05948.x 336, L17 (2002), arXiv: astro-ph/0208059. 2016AJ....152..154G R. R. Gupta, S. Kuhlmann, E. Kovacs, H. Spinka, R. Kessler, D. A. Goldstein, C. Liotine, K. Pomian, C. B. D'Andrea, M. Sullivan, et al., http://dx.doi.org/10.3847/0004-6256/152/6/154 152, 154 (2016), arXiv: 1604.06138. 2017hsn..book..693V S. D. Van Dyk, in Handbook of Supernovae, (edited by A. W. Alsabti and P. Murdin), 693 (2017). 2009AJ....137.4517B P. J. Brown, S. T. Holland, S. Immler, P. Milne, P. W. A. Roming, N. Gehrels, J. Nousek, N. Panagia, M. Still, and D. Vanden Berk, http://dx.doi.org/10.1088/0004-6256/137/5/4517 137, 4517 (2009), arXiv: 0803.1265. 2010ApJ...721.1627M P. A. Milne, P. J. Brown, P. W. A. Roming, S. T. Holland, S. Immler, A. V. Filippenko, M. Ganeshalingam, W. Li, M. Stritzinger, M. M. Phillips, et al., http://dx.doi.org/10.1088/0004-637X/721/2/1627 721, 1627 (2010), arXiv: 1007.5279. 2017ApJ...838..130S N. Sapir and E. Waxman, http://dx.doi.org/10.3847/1538-4357/aa64df 838, 130 (2017), arXiv: 1607.03700. 2023SCPMA..6629511L S.-Y. Li, Y.-L. Li, T. Zhang, J. Vinkó, E. Regős, X. 
Wang, G. Xi, and H. Zhan, http://dx.doi.org/10.1007/s11433-022-2018-0Science China Physics, Mechanics, and Astronomy 66, 229511 (2023), arXiv: 2210.05450. 2024MNRAS.530.4288W M. Wang, Y. Gong, F. Deng, H. Miao, X. Chen, and H. Zhan, http://dx.doi.org/10.1093/mnras/stae1119 530, 4288 (2024), arXiv: 2401.16676. 2005ApJ...622..759G K. M. Górski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, and M. Bartelmann, http://dx.doi.org/10.1086/427976 622, 759 (2005), arXiv: astro-ph/0409513. 2008ApJ...682..262D B. Dilday, R. Kessler, J. A. Frieman, J. Holtzman, J. Marriner, G. Miknaitis, R. C. Nichol, R. Romani, M. Sako, B. Bassett, et al., http://dx.doi.org/10.1086/587733 682, 262 (2008), arXiv: 0801.3297. 2018ApJ...867...23H R. Hounsell, D. Scolnic, R. J. Foley, R. Kessler, V. Miranda, A. Avelino, R. C. Bohlin, A. V. Filippenko, J. Frieman, S. W. Jha, et al., http://dx.doi.org/10.3847/1538-4357/aac08b 867, 23 (2018), arXiv: 1702.01747. 2009A A...499..653B G. Bazin, N. Palanque-Delabrouille, J. Rich, V. Ruhlmann-Kleider, E. Aubourg, L. Le Guillou, P. Astier, C. Balland, S. Basa, R. G. Carlberg, et al., http://dx.doi.org/10.1051/0004-6361/200911847 499, 653 (2009), arXiv: 0904.1066. 2007A A...466...11G J. Guy, P. Astier, S. Baumont, D. Hardin, R. Pain, N. Regnault, S. Basa, R. G. Carlberg, A. Conley, S. Fabbro, et al., http://dx.doi.org/10.1051/0004-6361:20066930 466, 11 (2007), arXiv: astro-ph/0701828. 2010A A...523A...7G J. Guy, M. Sullivan, A. Conley, N. Regnault, P. Astier, C. Balland, S. Basa, R. G. Carlberg, D. Fouchez, D. Hardin, et al., http://dx.doi.org/10.1051/0004-6361/201014468 523, A7 (2010), arXiv: 1010.4743. 2018PASP..130k4504P J. D. R. Pierel, S. Rodney, A. Avelino, F. Bianco, A. V. Filippenko, R. J. Foley, A. Friedman, M. Hicken, R. Hounsell, S. W. Jha, et al., http://dx.doi.org/10.1088/1538-3873/aadb7a 130, 114504 (2018), arXiv: 1808.02534. 2010PASP..122.1415K R. Kessler, B. Bassett, P. Belov, V. Bhatnagar, H. Campbell, A. Conley, J. A. Frieman, A. Glazov, S. González-Gaitán, R. Hlozek, et al., http://dx.doi.org/10.1086/657607 122, 1415 (2010), arXiv: 1008.1024. 2002PASP..114..803N P. Nugent, A. Kim, and S. Perlmutter, http://dx.doi.org/10.1086/341707 114, 803 (2002), arXiv: astro-ph/0205351. 1999PASP..111...63F E. L. Fitzpatrick, http://dx.doi.org/10.1086/316293 111, 63 (1999), arXiv: astro-ph/9809387. 1998ApJ...500..525S D. J. Schlegel, D. P. Finkbeiner, and M. Davis, http://dx.doi.org/10.1086/305772 500, 525 (1998), arXiv: astro-ph/9710327. 2023acsi.book...23R J. E. Ryon and D. V. Stark, in ACS Instrument Handbook for Cycle 32 v. 23.0, volume 23, 23 (2023). 2015ApJ...813...93S L.-G. Strolger, T. Dahlen, S. A. Rodney, O. Graur, A. G. Riess, C. McCully, S. Ravindranath, B. Mobasher, and A. K. Shahady, http://dx.doi.org/10.1088/0004-637X/813/2/93 813, 93 (2015), arXiv: 1509.06574. 2014AJ....147..118R D. Richardson, I. Jenkins, Robert L., J. Wright, and L. Maddox, http://dx.doi.org/10.1088/0004-6256/147/5/118 147, 118 (2014), arXiv: 1403.5755. 2011A A...534A..43B G. Bazin, V. Ruhlmann-Kleider, N. Palanque-Delabrouille, J. Rich, E. Aubourg, P. Astier, C. Balland, S. Basa, R. G. Carlberg, A. Conley, et al., http://dx.doi.org/10.1051/0004-6361/201116898 534, A43 (2011), arXiv: 1109.0948. 2013ApJ...763...88C H. Campbell, C. B. D'Andrea, R. C. Nichol, M. Sako, M. Smith, H. Lampeitl, M. D. Olmstead, B. Bassett, R. Biswas, P. Brown, et al., http://dx.doi.org/10.1088/0004-637X/763/2/88 763, 88 (2013), arXiv: 1211.4480. 2018MNRAS.477.4142D M. Dai, S. Kuhlmann, Y. 
Wang, and E. Kovacs, http://dx.doi.org/10.1093/mnras/sty965 477, 4142 (2018), arXiv: 1701.05689. 2005ApJ...624..880L A. Levan, P. Nugent, A. Fruchter, I. Burud, D. Branch, J. Rhoads, A. Castro-Tirado, J. Gorosabel, J. M. Castro Cerón, S. E. Thorsett, et al., http://dx.doi.org/10.1086/428657 624, 880 (2005), arXiv: astro-ph/0403450. 1999ApJ...521...30G R. L. Gilliland, P. E. Nugent, and M. M. Phillips, http://dx.doi.org/10.1086/307549 521, 30 (1999), arXiv: astro-ph/9903229. 2013MNRAS.435.1047B H. Brink, J. W. Richards, D. Poznanski, J. S. Bloom, J. Rice, S. Negahban, and M. Wainwright, http://dx.doi.org/10.1093/mnras/stt1306 435, 1047 (2013), arXiv: 1209.3775. 2016MNRAS.457.3119D A. D'Isanto, S. Cavuoti, M. Brescia, C. Donalek, G. Longo, G. Riccio, and S. G. Djorgovski, http://dx.doi.org/10.1093/mnras/stw157 457, 3119 (2016), arXiv: 1601.03931. 2016JCAP...12..008M A. Möller, V. Ruhlmann-Kleider, C. Leloup, J. Neveu, N. Palanque-Delabrouille, J. Rich, R. Carlberg, C. Lidman, and C. Pritchet, http://dx.doi.org/10.1088/1475-7516/2016/12/008 2016, 008 (2016), arXiv: 1608.05423. 2017MNRAS.472.1315W D. E. Wright, C. J. Lintott, S. J. Smartt, K. W. Smith, L. Fortson, L. Trouille, C. R. Allen, M. Beck, M. C. Bouslog, A. Boyer, et al., http://dx.doi.org/10.1093/mnras/stx1812 472, 1315 (2017), arXiv: 1707.05223. 2019PASP..131k8002M D. Muthukrishna, G. Narayan, K. S. Mandel, R. Biswas, and R. Hložek, http://dx.doi.org/10.1088/1538-3873/ab1609 131, 118002 (2019), arXiv: 1904.00014. 2021arXiv211112142D D. A. Duev and S. J. van der Walt, https://ui.adsabs.harvard.edu/abs/2021arXiv211112142DarXiv e-prints arXiv:2111.12142 (2021), arXiv: 2111.12142. 2010ApJ...717...40K R. Kessler, D. Cinabro, B. Bassett, B. Dilday, J. A. Frieman, P. M. Garnavich, S. Jha, J. Marriner, R. C. Nichol, M. Sako, et al., http://dx.doi.org/10.1088/0004-637X/717/1/40 717, 40 (2010), arXiv: 1001.0738. 2010A A...514A..63P N. Palanque-Delabrouille, V. Ruhlmann-Kleider, S. Pascal, J. Rich, J. Guy, G. Bazin, P. Astier, C. Balland, S. Basa, R. G. Carlberg, et al., http://dx.doi.org/10.1051/0004-6361/200913283 514, A63 (2010), arXiv: 0911.1629. 2007MNRAS.382..377W Y. Wang, G. Narayan, and M. Wood-Vasey, http://dx.doi.org/10.1111/j.1365-2966.2007.12376.x 382, 377 (2007), arXiv: 0708.0033. 2015MNRAS.451.1955W Y. Wang, E. Gjergo, and S. Kuhlmann, http://dx.doi.org/10.1093/mnras/stv1090 451, 1955 (2015), arXiv: 1501.06839. 2008AJ....135..348S M. Sako, B. Bassett, A. Becker, D. Cinabro, F. DeJongh, D. L. Depoy, B. Dilday, M. Doi, J. A. Frieman, P. M. Garnavich, et al., http://dx.doi.org/10.1088/0004-6256/135/1/348 135, 348 (2008), arXiv: 0708.2750. Zhou_2022 X. Zhou, Y. Gong, X.-M. Meng, X. Chen, Z. Chen, W. Du, L. Fu, and Z. Luo, http://dx.doi.org/10.1088/1674-4527/ac9578Research in Astronomy and Astrophysics 22, 115017 (2022), <https://dx.doi.org/10.1088/1674-4527/ac9578>. 2012ApJ...753..152B J. P. Bernstein, R. Kessler, S. Kuhlmann, R. Biswas, E. Kovacs, G. Aldering, I. Crane, C. B. D'Andrea, D. A. Finley, J. A. Frieman, et al., http://dx.doi.org/10.1088/0004-637X/753/2/152 753, 152 (2012), arXiv: 1111.1969. 1978ApJ...223L.109K R. I. Klein and R. A. Chevalier, http://dx.doi.org/10.1086/182740 223, L109 (1978). 2011ApJ...728...63R I. Rabinak and E. Waxman, http://dx.doi.org/10.1088/0004-637X/728/1/63 728, 63 (2011), arXiv: 1002.3414. 1989ARA A..27..629A W. D. Arnett, J. N. Bahcall, R. P. Kirshner, and S. E. Woosley, http://dx.doi.org/10.1146/annurev.aa.27.090189.003213 27, 629 (1989). 2006Natur.442.1008C S. Campana, V. Mangano, A. J. 
Blustin, P. Brown, D. N. Burrows, G. Chincarini, J. R. Cummings, G. Cusumano, M. Della Valle, D. Malesani, et al., http://dx.doi.org/10.1038/nature04892 442, 1008 (2006), arXiv: astro-ph/0603279. 2008ApJ...683L.131G S. Gezari, L. Dessart, S. Basa, D. C. Martin, J. D. Neill, S. E. Woosley, D. J. Hillier, G. Bazin, K. Forster, P. G. Friedman, et al., http://dx.doi.org/10.1086/591647 683, L131 (2008), arXiv: 0804.1123. 2010ApJ...720L..77G S. Gezari, A. Rest, M. E. Huber, G. Narayan, K. Forster, J. D. Neill, D. C. Martin, S. Valenti, S. J. Smartt, R. Chornock, et al., http://dx.doi.org/10.1088/2041-8205/720/1/L77 720, L77 (2010), arXiv: 1007.4551. 2014Natur.509..471G A. Gal-Yam, I. Arcavi, E. O. Ofek, S. Ben-Ami, S. B. Cenko, M. M. Kasliwal, Y. Cao, O. Yaron, D. Tal, J. M. Silverman, et al., http://dx.doi.org/10.1038/nature13304 509, 471 (2014), arXiv: 1406.7640. 2022ApJ...924...55G A. Gagliano, L. Izzo, C. D. Kilpatrick, B. Mockler, W. V. Jacobson-Galán, G. Terreran, G. Dimitriadis, Y. Zenati, K. Auchettl, M. R. Drout, et al., http://dx.doi.org/10.3847/1538-4357/ac35ec 924, 55 (2022), arXiv: 2105.09963. 2008Sci...321..223S K. Schawinski, S. Justham, C. Wolf, P. Podsiadlowski, M. Sullivan, K. C. Steenbrugge, T. Bell, H.-J. Röser, E. S. Walker, P. Astier, et al., http://dx.doi.org/10.1126/science.1160456Science 321, 223 (2008), arXiv: 0803.3596. 2024Natur.627..754L G. Li, M. Hu, W. Li, Y. Yang, X. Wang, S. Yan, L. Hu, J. Zhang, Y. Mao, H. Riise, et al., http://dx.doi.org/10.1038/s41586-023-06843-6 627, 754 (2024), arXiv: 2311.14409. 2023SciBu..68.2548Z J. Zhang, H. Lin, X. Wang, Z. Zhao, L. Li, J. Liu, S. Yan, D. Xiang, H. Wang, and J. Bai, http://dx.doi.org/10.1016/j.scib.2023.09.015Science Bulletin 68, 2548 (2023). 2016ApJ...820...57G N. Ganot, A. Gal-Yam, E. O. Ofek, I. Sagiv, E. Waxman, O. Lapid, S. R. Kulkarni, S. Ben-Ami, M. M. Kasliwal, ULTRASAT Science Team, et al., http://dx.doi.org/10.3847/0004-637X/820/1/57 820, 57 (2016), arXiv: 1412.4063. 2014AJ....147...79S I. Sagiv, A. Gal-Yam, E. O. Ofek, E. Waxman, O. Aharonson, S. R. Kulkarni, E. Nakar, D. Maoz, B. Trakhtenbrot, E. S. Phinney, et al., http://dx.doi.org/10.1088/0004-6256/147/4/79 147, 79 (2014), arXiv: 1303.6194. 2013A A...558A..33A Astropy Collaboration, T. P. Robitaille, E. J. Tollerud, P. Greenfield, M. Droettboom, E. Bray, T. Aldcroft, M. Davis, A. Ginsburg, A. M. Price-Whelan, et al., http://dx.doi.org/10.1051/0004-6361/201322068 558, A33 (2013), arXiv: 1307.6212. 2018AJ....156..123A Astropy Collaboration, A. M. Price-Whelan, B. M. Sipőcz, H. M. Günther, P. L. Lim, S. M. Crawford, S. Conseil, D. L. Shupe, M. W. Craig, N. Dencheva, et al., http://dx.doi.org/10.3847/1538-3881/aabc4f 156, 123 (2018), arXiv: 1801.02634. 2016ascl.soft11017B K. Barbary, T. Barclay, R. Biswas, M. Craig, U. Feindt, B. Friesen, D. Goldstein, S. Jha, S. Rodney, C. Sofiatti, et al., SNCosmo: Python library for supernova cosmology, Astrophysics Source Code Library, record ascl:1611.017 (2016). 2022zndo....592747B K. Barbary, S. Bailey, G. Barentsen, T. Barclay, R. Biswas, K. Boone, M. Craig, U. Feindt, B. Friesen, D. Goldstein, et al., SNCosmo, https://ui.adsabs.harvard.edu/abs/2022zndo....592747BZenodo (2022).
http://arxiv.org/abs/2407.13270v1
20240718082511
A BCS state formulation for the fermionic Tonks-Girardeau gas
[ "Francesc Sabater", "Abel Rojo-Francàs", "Grigori E. Astrakharchik", "Bruno Juliá-Díaz" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "quant-ph" ]
Departament de Física Quàntica i Astrofísica, Facultat de Física, Universitat de Barcelona, E-08028 Barcelona, Spain Institut de Ciències del Cosmos, Universitat de Barcelona, ICCUB, Martí i Franquès 1, E-08028 Barcelona, Spain Departament de Física Quàntica i Astrofísica, Facultat de Física, Universitat de Barcelona, E-08028 Barcelona, Spain Institut de Ciències del Cosmos, Universitat de Barcelona, ICCUB, Martí i Franquès 1, E-08028 Barcelona, Spain Departament de Física, Universitat Politècnica de Catalunya, Campus Nord B4-B5, E-08034 Barcelona, Spain Departament de Física Quàntica i Astrofísica, Facultat de Física, Universitat de Barcelona, E-08028 Barcelona, Spain Institut de Ciències del Cosmos, Universitat de Barcelona, ICCUB, Martí i Franquès 1, E-08028 Barcelona, Spain § ABSTRACT We introduce an alternative expression for the ground state wave function of the fermionic Tonks-Girardeau gas. Our wave function is constructed based on the occupation numbers and natural orbitals of the one-body density matrix. We demonstrate that the newly found wave function describes the ground state of the fermionic Tonks-Girardeau gas under any external potential.By expressing the proposed wave function in the framework of second quantization, we show that the ground state of the fermionic Tonks-Girardeau gas is a number-conserving Bardeen-Cooper-Schrieffer (BCS) state. We provide explicit expressions for the corresponding coefficients that describe the fermionic Tonks-Girardeau gas as a number-conserving BCS state. Additionally, the suitable form of the proposed wave function in second quantization allows us to derive the necessary expectation values to experimentally detect pairing in the fermionic Tonks-Girardeau gas. With this, we prove and show how to detect that the fermionic Tonks-Girardeau gas not only exhibits non-trivial quantum correlations but is also a paired state. A BCS state formulation for the fermionic Tonks-Girardeau gas Bruno Juliá-Díaz July 22, 2024 ============================================================= Introduction. One-dimensional fermionic and bosonic systems are of particular interest due to their pronounced quantum effects and mathematical tractability, which aid in the development and validation of theoretical models <cit.>. Ultracold gases serve as a platform for studying one-dimensional quantum many-body systems with short-range interactions <cit.>. Such one-dimensional systems have already been experimentally realized using confinement in quasi-1D (highly elongated) harmonic traps <cit.>. At the same time, the interaction between particles can be precisely controlled and tuned <cit.>. A characteristic feature of one-dimensional systems is the mapping between fermionic systems and bosonic systems. This mapping allows to describe strongly correlated bosonic (fermionic) systems as the symmetrized (antisymmetrized) version of simpler fermionic (bosonic) systems. This fermion-boson duality can be described by the Girardeau mapping, for which the bosonic and fermionic systems share identical diagonal properties <cit.>. A prominent example is the Tonks-Girardeau gas, describing one-dimensional, strongly repulsive, hard-core bosons mapped onto a system of ideal fermions under the same external potential <cit.>. The generalization of this mapping to arbitrary interaction strengths, by Cheon and Shigehara <cit.>, allows for mapping the ground state of strongly attracting p-wave fermions to the ground state of noninteracting bosons. 
This system is referred to as the fermionic Tonks-Girardeau (FTG) gas. Its ground state wave function (WF) is ψ_F(x_1,… ,x_N)=ψ_B(x_1,… ,x_N)∏_j<k^N sgn(x_j-x_k), and is related to that of an ideal Bose gas, ψ_B(x_1,…,x_N)=∏_i=1^Nϕ(x_i), where ϕ(x) is the single-particle ground state <cit.>. In a recent work, we computed the one-body density matrix (OBDM) ρ_1(x,x')=⟨Ψ̂^†(x)Ψ̂(x')⟩ for the FTG gas and diagonalized it <cit.>. We derived analytical expressions for the eigenvalues of the OBDM, λ_k^(N), i.e., the occupation numbers. We showed that the occupation numbers are always doubly degenerate, and presented expressions for the corresponding degenerate natural orbitals, χ_k+(x) and χ_k-(x). With these expressions, we showed that the total density of the FTG gas, which is equivalent to the density of N non-interacting bosons, n(x) = N|ϕ(x)|^2, can be explained by the formation of N/2 pairs of fermions in the orbitals χ_k+(x) and χ_k-(x) with a density of 2|ϕ(x)|^2 per pair in the even case. In the odd case, we reasoned that (N-1)/2 pairs are formed, with the remaining fermion occupying the single-particle ground state <cit.>. In this Letter, we propose a novel form of expressing the ground state WF of the FTG gas, as given by Eq. (<ref>), explicitly containing the pairing observed in Ref. <cit.>. We provide a detailed proof that the proposed WF is equivalent to the pair-product one (<ref>). Remarkably, we derive the proposed ground state WF solely from information obtained from the OBDM, that is, its occupation numbers, natural orbitals, and our understanding of the pairing mechanism. This is noteworthy because in most systems recovering the WF from OBDM information alone is neither straightforward nor possible. The proposed WF is more suitable for working within a second quantization framework than the pair-product form (<ref>), simplifying the calculation of important expectation values involving creation and annihilation operators. Furthermore, we demonstrate that the novel WF is a number-conserving BCS state, implying that the FTG gas can be fully described and characterized by a BCS type of state. Lastly, we provide the expectation values of a set of measurable operators, which show that the FTG gas is a paired state within the framework for pairing derived in Ref. <cit.>. A novel expression for the FTG gas ground state. For clarity, we initially present the simpler case of N=2. From Ref. <cit.>, we know that when dealing only with two fermions, these two fermions form a pair occupying the orbitals χ_k+(x) and χ_k-(x) for a given k with probability λ_k^(2) = 8/[π(2k-1)]^2. Based on this, we postulate that the ground state of the two-particle FTG gas can be written as: ψ_T(x_1,x_2)=∑_k=1√(λ_k^(2))φ_k(x_1,x_2), where φ_k(x_1,x_2) is the Slater determinant formed by the states χ_k+(x) and χ_k-(x). We denote our proposed WF as ψ_T to distinguish it from the well-known FTG gas ground state ψ_F given by Eq. (<ref>). The proposed WF is well normalized since: ∫∫ |ψ_T(x_1,x_2)|^2 dx_1dx_2=∑_k=1λ_k^(2) = N/2 = 1. The proof that ψ_T = ψ_F is straightforward and included here. For larger numbers of particles, we provide the proof in the Supplementary Material. We start by noting that φ_k(x_1,x_2)=ϕ(x_1)ϕ(x_2)√(2)sin[(2k-1)π(y_1-y_2)], where y_i=F(x_i)=∫_-∞^x_i|ϕ(z)|^2 dz. Then, ψ_T(x_1,x_2)/ϕ(x_1)ϕ(x_2) = ∑_k=1√(λ_k^(2))√(2)sin[(2k-1)π(y_1-y_2)] = 4/π∑_k=1sin[(2k-1)π(y_1-y_2)]/(2k-1) = sgn(y_1-y_2) = sgn[F(x_1)-F(x_2)], for F(x_1)-F(x_2)∈(-1,1). 
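The last step is the Fourier series of the square wave. As a quick numerical cross-check (ours; the truncation orders are arbitrary), both the normalization of λ_k^(2) and the convergence of the partial sums to sgn(y_1-y_2) are easy to verify:

```python
import numpy as np

k = np.arange(1, 200001)
lam2 = 8.0 / (np.pi * (2 * k - 1)) ** 2              # lambda_k^(2) = 8 / [pi (2k-1)]^2
print(f"sum_k lambda_k^(2) = {lam2.sum():.6f}")      # -> 1, i.e. one pair for N = 2

def square_wave_partial_sum(y, kmax=2000):
    """(4/pi) sum_{k=1}^{kmax} sin[(2k-1) pi y] / (2k-1), which tends to sgn(y) on (-1, 1)."""
    kk = np.arange(1, kmax + 1)
    return (4.0 / np.pi) * np.sum(np.sin((2 * kk - 1) * np.pi * y) / (2 * kk - 1))

for y in (-0.7, -0.2, 0.3, 0.8):
    print(f"y = {y:+.1f}:  partial sum = {square_wave_partial_sum(y):+.3f}   sgn(y) = {np.sign(y):+.0f}")
```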
Since F(x) is a monotonically increasing function, sgn[F(x_1)-F(x_2)]=sgn(x_1-x_2). Therefore, ψ_T(x_1,x_2)=ϕ(x_1)ϕ(x_2)sgn(x_1-x_2)=ψ_F(x_1,x_2), as we wanted to prove. Remarkably, Eq. (<ref>) can be interpreted as the Slater decomposition of the FTG gas. The Slater decomposition can be understood as the analogue of the Schmidt decomposition for fermionic, thus indistinguishable, systems <cit.>. Since the Slater decomposition obtained contains more than one non-zero coefficient λ_k^(2), the FTG gas has a Slater rank greater than one. This implies that the FTG gas exhibits non-trivial quantum correlations according to Refs. <cit.>. The proposed WF for the N=3 case follows from reasoning similar to that of the N=2 case. As shown in Ref. <cit.>, only one pair is formed in the states χ_k+(x) and χ_k-(x), while the third fermion occupies the single-particle ground state. Based on this, we postulate ψ_T(x_1,x_2,x_3)=∑_k=1√(λ_k^(3))φ_0,k(x_1,x_2,x_3), where φ_0,k(x_1,x_2,x_3) is the Slater determinant formed by the states χ_k+(x), χ_k-(x), and ϕ(x). Again, the WF is properly normalized <cit.>: ∑_k=1λ_k^(3) = (N-1)/2 = 1. Note that the non-degenerate eigenvalue λ_0^(3)=1 is not included in the summation. In the Supplementary Material, we provide a proof that ψ_T(x_1,x_2,x_3) = ψ_F(x_1,x_2,x_3). The next step is to obtain the wave function for N=4 fermions. In this case, there are two different pairs at two different k's, k_1 and k_2. Based on that, we propose the WF to be written as ψ_T(x_1,…,x_4)=∑_k_1<k_2√(p_4(k_1,k_2))φ_k_1,k_2(x_1,…,x_4), where φ_k_1,k_2(x_1,…,x_4) is the Slater determinant containing the 4 states involved in the two pairs k_1 and k_2. We denote by p_4(k_1,k_2) the coefficients that represent the probability of having pairs k_1 and k_2 occupied. Initially, one might consider p_4(k_1,k_2)=λ_k_1^(4)λ_k_2^(4), but these coefficients imply improper normalization. This is because we would be treating the probabilities of filling each pair as independent when they are not. The probability of filling the second pair k_2 depends on which pair, k_1, is filled since, to fulfill the Pauli exclusion principle, we cannot fill k_1 again. Therefore, to obtain the proper coefficients p_4(k_1,k_2), we should account for the dependencies between the probabilities of filling each pair. Following the same reasoning regarding the formation of pairs, we find that the WF in the generic case of N fermions (here N is even) is ψ_T(x_1,…,x_N)=∑_k_1<…<k_N_p√(p_N(k_1,…,k_N_p))φ_k_1,…,k_N_p(x_1,…,x_N), where N_P=N/2 denotes the number of pairs and φ_k_1,… ,k_N_P(x_1,…,x_N) is the Slater determinant containing the N states χ_k_1+(x), χ_k_1-(x), …, χ_k_N_P+(x), χ_k_N_P-(x). The expression for an odd number of fermions is similar, but it includes an unpaired fermion in the single-particle ground state, ψ_T(x_1,…,x_N)=∑_k_1<…<k_N_p√(p_N(k_1,…,k_N_p))φ_0,k_1,… ,k_N_p(x_1,…,x_N), where now N_P=(N-1)/2 and φ_0,k_1,… ,k_N_P(x_1,…,x_N) is the Slater determinant containing the N states χ_k_1+(x), χ_k_1-(x), …, χ_k_N_P+(x), χ_k_N_P-(x), and importantly, ϕ(x). Inspired by the fact that the probability of filling a pair k cannot be treated independently from the other filled pairs, we postulate the following recursive formula for p_N(k_1,…,k_N_P), p_N(k_1,…,k_N_p)=λ_k_1^(N) p_N-2(k_2,…,k_N_p)/[1-∑_(l<i<…)≠ k_1p_N-2(k_1,l,i,…)]. This formula can be understood as a Bayes conditional probability, P(A,B)=P(A)P(B|A), where in our case A is the event that pair k_1 is filled. 
Here, P(B|A) is the probability of filling the N_P-1 remaining pairs conditioned on k_1 being already filled. Importantly, we can check that if p_N-2 is properly normalized (which we know for N=2 and N=3), then p_N is also properly normalized. The short demonstration of proper normalization starts with ∑_k_1<…<k_N_pp_N(k_1, …, k_N_p) = 1/N_p!∑_k_1 ≠…≠ k_N_p p_N(k_1, …, k_N_p) which equals (N_p-1)!/N_p!∑_k_1λ_k_1^(N)∑_(k_2 < … < k_N_p) ≠ k_1p_N-2(k_2, …, k_N_p)/1 - ∑_(l< …) ≠ k_1 p_N-2(k_1, l, …). The second summation ∑_(k_2 < … < k_N_P) ≠ k_1 equals unity. Finally, ∑_k_1<…<k_N_P p_N(k_1, …, k_N_P) = 1/N_P∑_k_1λ_k_1^(N) = 1, as we wanted to prove. Both Eq. (<ref>) and the proof of proper normalization are valid for the even and odd cases. From the recursive formula Eq. (<ref>), we can derive an explicit formula for p_N(k_1, …, k_N_P). First, ∑_(l<i<…)≠ k_1 p_N-2(k_1, l, i, …) = λ_k_1^(N-2), which can be easily derived using Eq. (<ref>) since the summation does not involve k_1. As a consequence, Eq. (<ref>) is simplified to p_N(k_1,…,k_N_p)=λ_k_1^(N)p_N-2(k_2,…,k_N_p)/1-λ_k_1^(N-2). Then, it follows from expressing λ_k_1^(N) as a function of λ_k_1^(N-2), using the recursive formula for the occupation numbers derived in Ref. <cit.>, that p_N(k_1, …, k_N_P) = N(N-1)/(k_1-1/2)^2π^2 p_N-2(k_2, …, k_N_P), for the even case, and p_N(k_1, …, k_N_P) = N(N-1)/k_1^2π^2 p_N-2(k_2, …, k_N_P), for the odd case. Finally, by starting from p_2(k)=λ_k^(2) and p_3(k)=λ_k^(3), we arrive at the final explicit formulas, p_N(k_1, …, k_N_P) = {[ N!/π^N∏_i=1^N_P1/(k_i-1/2)^2, if N is even; N!/π^(N-1)∏_i=1^N_P1/k_i^2, if N is odd. ]. This form of p_N fulfills normalization, as already shown. Additionally, it is invariant under the chosen order of k's. While these two properties are necessary for a suitable form of p_N, they do not guarantee that our guess is the correct one. In the Supplementary Material, we provide a detailed proof that the proposed WF ψ_T (<ref>, <ref>) along with the proposed form of p_N (<ref>) satisfies ψ_T=ψ_F. Eqs. (<ref>, <ref>) for the WF and Eq. (<ref>) for the coefficients are the main results of our work. The FTG gas as a BCS state. In this section, we demonstrate that the WF of the FTG gas in second quantization form can be expressed as a number-conserving BCS state. The paired states appearing in the BCS theory of superconductivity <cit.> can be described as a superposition of number-conserving BCS states |ψ_BCS^(N)⟩, |ψ_BCS⟩=∑_N β_N |ψ_BCS^(N)⟩. The specific form of |ψ_BCS^(N)⟩ is <cit.> |ψ_BCS^(N)⟩=C_N(∑_kα_k P_k^†)^N_P|0⟩, where C_N is a normalization constant, P_k^†=a_k^† a_-k^†, and |0⟩ is the vacuum state. In |ψ_BCS^(N)⟩, all N_P pairs are in the same two-particle state ∑_k α_k P_k^†. The specific case where k=(k,↑) and -k=(-k,↓) corresponds to the BCS theory of superconductivity. Equation (<ref>) can be alternatively written as <cit.> |ψ_BCS^(N)⟩=C_N N_p! ∑_k_1<…<k_N_pα_k_1…α_k_N_p P_k_1^†… P_k_N_p^† |0⟩. Since our new formulation of the ground state WF is a superposition of Slater determinants, it is particularly convenient to express it in second quantization. For the even case, we have, |ψ_T⟩ = ∑_k_1<…<k_N_p√(p_N(k_1,…,k_N_p)) P_k_1^†… P_k_N_p^† |0⟩, where P_k^† = a_k+^† a_k-^†. Since p_N (<ref>) factorizes in all the k_i's, we can rewrite our state as, |ψ_T⟩ = ∑_k_1<…<k_N_pα_k_1…α_k_N_p P_k_1^†… P_k_N_p^† |0⟩, with α_k_i = N!^1/N/π1/(k_i - 1/2). 
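Both properties invoked here — the normalization of p_N and the fact that it is the squared product of the coefficients α_{k_i} = (N!)^{1/N}/[π(k_i-1/2)] — can be checked numerically; a minimal sketch for the even case N = 4 is given below (the truncation K and the helper names are ours).

```python
import numpy as np
from math import factorial

N, K = 4, 200000                               # two pairs, pair index truncated at K
inv2 = 1.0 / (np.arange(1, K + 1) - 0.5) ** 2

# Normalization: sum_{k1<k2} p_4 = (4!/pi^4) * [ (sum_k 1/(k-1/2)^2)^2 - sum_k 1/(k-1/2)^4 ] / 2 -> 1.
s1, s2 = inv2.sum(), (inv2 ** 2).sum()
print(f"sum over k1<k2 of p_4 = {factorial(N) / np.pi**N * 0.5 * (s1**2 - s2):.5f}")

# BCS factorization: p_N(k_1,...,k_Np) = (alpha_{k_1} * ... * alpha_{k_Np})^2.
def p4(k1, k2):
    return factorial(N) / np.pi ** N / ((k1 - 0.5) ** 2 * (k2 - 0.5) ** 2)

def alpha(k):
    return factorial(N) ** (1.0 / N) / (np.pi * (k - 0.5))

print(f"p_4(2,5) = {p4(2, 5):.4e}   (alpha_2 alpha_5)^2 = {(alpha(2) * alpha(5)) ** 2:.4e}")
```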
Thus, we have clearly expressed the ground state WF of the even FTG gas as a number-conserving BCS state and provided the explicit expressions for the coefficients α_k_i. The normalization constant C_N is found to be C_N = 1/N_p!. To express the odd case WF in second quantization, we need to add the creation operator for a single particle in the single-particle ground state, |ψ_T⟩ = a_0^†∑_k_1<…<k_N_p√(p_N(k_1,…,k_N_p)) P_k_1^†… P_k_N_p^† |0⟩. Again, we express p_N (<ref>) as a product of α_k_i's, getting to, |ψ_T⟩ = a_0^†∑_k_1<…<k_N_pα_k_1…α_k_N_p P_k_1^†… P_k_N_p^† |0⟩, where now α_k_i = N!^1/(N-1)/π1/k_i. Thus, we have expressed the ground state WF of the odd FTG gas as a number-conserving BCS state, with the addition of a particle in the single-particle ground state. Therefore, we have seen that starting from the general form of a BCS state, we can fully characterize and describe the FTG ground state WF. The demostration is based on the novel form of the ground state WF and the fact that p_N factorizes in the k_i's. Detecting pairing in the FTG gas. The pairing phenomenon can be treated as a quantum correlation and can be formally defined within quantum information theory <cit.>. A fermionic state is said to be paired if there exists a set of operators containing at most two creation and two annihilation operators such that their expectation values cannot be reproduced by any separable state <cit.>. Following this definition, it can be formally proved that any number-conserving BCS state is a paired state <cit.>. The proof is based on the set of operators O⃗_3 = [ n_k + n_-k + n_l + n_-l; n_k n_-k + n_l n_-l; a_k^† a_-k^† a_-l a_l + h.c. ]. The expectation values of O⃗_3 obtained with a number-conserving BCS state can not be reproduced by any separable state  <cit.>. Specifically, the expectation value of H_1 = 1/2(n_k + n_-k + n_l + n_-l) - (n_k n_-k + n_l n_-l) -(a_k^† a_-k^† a_-l a_l + h.c.), is bigger or equal than zero, ⟨ H_1 ⟩≥ 0, for any separable state. For a number conserving BCS state, ⟨ H_1 ⟩ < 0 <cit.>. The set of operators O⃗_3 does not include any creation or annihilation operator acting on the ground state k=0. Therefore, the proof can be immediately extended to the even-number FTG gas and as well to the odd-number FTG gas, since the addition of a particle in the single-particle ground state does not alter the expectation values of O⃗_3. Therefore, using the newly proposed shape of the WF, we conclude that the FTG gas forms a paired state. Making use of our proposed ground state WF we are also able to compute the values of O⃗_3 for the FTG gas, ⟨O⃗_3^FTG⟩=[ 2(λ_k^(N)+λ_l^(N)); λ_k^(N)+λ_l^(N); 2∑_(k_2 < … < k_N_P) ≠ l,k√(p(l, k_2 …)p(k, k_2 …)) ]. These values imply ⟨ H_1⟩ = -2∑_(k_2 < … < k_N_P) ≠ l,k√(p(l, k_2 …)p(k, k_2 …))<0. The detailed calculations of these expectation values are provided in the Supplementary Material. The calculations are relatively straightforward due to the suitable form in second quantization of the proposed WF. The specific values of ⟨O⃗_3^FTG⟩ are of interest because they are the main quantities to measure in order to detect pairing in a fermionic system. Therefore, if ⟨O⃗_3^FTG⟩ were to be measured, one could observe the existence of pairing in the FTG gas and determine that the prepared system is indeed a FTG gas. Importantly, the expectation values of the operators in O⃗_3 can be measured with current experimental techniques, such as spatial noise correlations <cit.>. Conclusions. 
In this Letter, we present a new expression for the ground state wave function of the fermionic Tonks-Girardeau gas, as given by Eqs. (<ref>) and (<ref>) for even and odd numbers of fermions, respectively. This wave function is based exclusively on the occupation numbers and natural orbitals obtained from the one-body density matrix. This is remarkable because, generally, it is not possible to recover the wave function of a quantum many-body system solely from the one-body density matrix. The proposed ground state can be used to describe the ground state of the fermionic Tonks-Girardeau gas under any external potential, due to the universality of the coefficients p_N, which is a consequence of the universality of the occupation numbers λ_k^(N) <cit.>. By expressing the newly formulated wave function in second quantization, as given by Eqs. (<ref>) and (<ref>), we demonstrate that the proposed wave function is a specific case of a number-conserving BCS state and provide the explicit form of the α coefficients in Eqs. (<ref>) and (<ref>). Finally, in Eq. (<ref>), we explicitly derive the expectation values of the operators to be measured in order to experimentally detect pairing in the fermionic Tonks-Girardeau gas. We expect our results to have an impact on the understanding of one-dimensional quantum systems, in particular the fermionic Tonks-Girardeau gas, and to be useful for the study of quantum phenomena and quantum correlations occurring in p-wave fermions. The system under study is particularly appealing as it is not a Luttinger liquid, which instead describes compressible one-dimensional gases. Therefore, it is of great interest to understand how quantum correlations form beyond the Luttinger liquid paradigm. All previous studies of the fermionic Tonks-Girardeau gas were based on the pair-product form (<ref>) <cit.>. Since our proposed wave function is well suited for working within a second quantization framework, we anticipate that our new formulation of the ground state will lead to novel investigations of the fermionic Tonks-Girardeau gas that were not possible with the previous wave function. For instance, we anticipate that the proposed wave function can be used to predict and study possible excitations of the system. On the other hand, the proposed wave function is well suited for calculating relevant quantities in the study of entanglement between fermionic orbitals, as has already been done for other systems <cit.>. Furthermore, we expect this Letter to be of interest to experimentalists, as we present the values of a set of measurable quantities for the fermionic Tonks-Girardeau gas that can be used to detect pairing in fermionic systems. We acknowledge helpful and insightful discussions with Joan Martorell, Maciej Lewenstein, Utso Bhattacharya, Rohit Kishan Ray and Felipe Isaule that significantly contributed to the development of this Letter. This work has been funded by Grants No. PID2020-114626GB-I00 and PID2020-113565GB-C21 by MCIN/AEI/10.13039/501100011033 and the "Unit of Excellence María de Maeztu 2020-2023" award to the Institute of Cosmos Sciences, Grant CEX2019-000918-M funded by MCIN/AEI/10.13039/501100011033. We acknowledge financial support from the Generalitat de Catalunya (Grants 2021SGR01411 and 2021SGR01095). A.R.-F. acknowledges funding from MIU through Grant No. FPU20/06174. F.S. acknowledges funding from UB through Grant Master+UB 2023.2.FFIS.1. 
apsrev4-2 § SUPPLEMENTAL MATERIAL § PROOF OF Ψ_T=Ψ_F FOR THE N=3 CASE In this section we prove the equality of ψ_T=ψ_F for the N=3 case. In the N=3 case ψ_F is written as ψ_F(x_1,x_2,x_3)=ϕ(x_1)ϕ(x_2)ϕ(x_3) (x_1-x_2)(x_1-x_3)(x_2-x_3). The postulated form ψ_T is ψ_T(x_1,x_2,x_3)=∑_k=1√(λ_k^(3))φ_0,k(x_1,x_2,x_3), where φ_0,k(x_1,x_2,x_3) is the Slater determinant formed by the states χ_k+, χ_k- and ϕ. We start the demonstration by explicitly writing the form of φ_0,k(x_1,x_2,x_3), ψ_T(x_1, x_2, x_3)/ϕ(x_1)ϕ(x_2)ϕ(x_3) = 2/√(3!)∑_k=1√(λ_k^(3))(sin[2π k (F(x_1) - F(x_2))] + sin[2π k (F(x_2) - F(x_3))] + sin[2π k (F(x_3) - F(x_1))]). Applying λ_k^(3)=24/[π2k]^2, ψ_T(x_1, x_2, x_3)/ϕ(x_1)ϕ(x_2)ϕ(x_3) =4∑_k=1sin[2π k (F(x_1) - F(x_2))]/2π k + sin[2π k (F(x_2) - F(x_3))]/2π k + sin[2π k (F(x_3) - F(x_1))]/2π k. We now make use of ∑_k=1^∞sin 2π kz/π k = (z)/2 - z, for z∈(-1,1). Setting z= F(x_i)-F(x_j) and performing the summation for each term ψ_T(x_1, x_2, x_3)/ϕ(x_1)ϕ(x_2)ϕ(x_3) =[F(x_1 )-F(x_2)]+[F(x_2)-F(x_3)]+[F(x_3)-F(x_1)]. Applying that [F(x_i)-F(x_j)]=[x_i-x_j], results in ψ_T(x_1, x_2, x_3)/ϕ(x_1)ϕ(x_2)ϕ(x_3) =(x_1- x_2)+(x_2-x_3)+(x_3-x_1). It is left to prove that, for all possible values of x_1,x_2,x_3, X=Y, where X=(x_1- x_2)+(x_2-x_3)-(x_1-x_3), and Y=(x_1-x_2)(x_1-x_3)(x_2-x_3). Clearly, Y can only be either +1, -1 or 0 if two of the three positions are equivalent. If two of the three positions are equivalent, then, it is also direct to see that X=0. It can be seen that X^3=7X-6Y. If Y=1, X=1, X=2 and X=-3 are all possible solutions. However X=2 is not possible and X=-3 implies a contradiction of the type x_1<x_2<x_3 and x_3<x_1<x_2. If Y=-1, X=-1, X=-2 and X=3 are all possible solutions. X=-2 is again not possible and X=3 implies again a similar contradiction. Therefore, when Y=0, X=0 and when Y=±1, X=±1 meaning that X=Y and, therefore, ψ_T=ψ_F. Another, simpler way of showing that Y = X is to realize that the three signs appearing in X and Y are the same. Importantly, in X, (x_1 - x_3) appears with a minus sign in front. To have Y = 1, we must have all three signs being positive (+++) or two of them being negative (+–). In the (+++) case, it is clear that X = 1. In the (+–) case, X can be 1 or -3 if the positive sign is (x_1 - x_3). However, in the previous paragraph, we have already discussed that X = -3 brings a contradiction of the type x_1 < x_2 < x_3 and x_3 < x_1 < x_2. Therefore, in the (+–) case, X is also 1. To have Y = -1, we must have all three signs being negative (—), which clearly implies X = -1, or two of them being positive (-++). Again, in the (-++) case, X can be -1 or 3 if the minus sign is (x_1 - x_3). However, X = 3 implies a similar contradiction to the one in the X = -3 case. With this reasoning, we can show that X = Y without having to compute and solve an equation containing X^3 and X. § PROOF OF Ψ_T=Ψ_F FOR THE N-EVEN CASE The proof is done by induction. We have already proved that ψ^(2)_T(x_1,x_2)=ψ^(2)_F(x_1,x_2). We will assume that ψ^(N-2)_T(x_1,...,x_N-2) = ψ^(N-2)_F(x_1,...,x_N-2) and prove O≡∫ψ^(N)_T(x_1,...,x_N)ψ^(N)_F(x_1,...,x_N)dx_1...dx_N=1, which implies that ψ^(N)_T(x_1,...,x_N)=ψ^(N)_F(x_1,...,x_N). To do so we will need to derive a recursive formula for both ψ_T and ψ_F. The ψ_F recursive formula is rather direct to obtain, ψ^(N)_F(x_1, ..., x_N) = ϕ(x_N)ϕ(x_N-1) ( ∏_j=1^N-1(x_j - x_N) ) ( ∏_k=1^N-2(x_k - x_N-1) ) ×ψ^(N-2)_F(x_1, ..., x_N-2). Of course, one can choose the tag of the two added particles. 
§ PROOF OF Ψ_T=Ψ_F FOR THE N-EVEN CASE The proof is done by induction. We have already proved that ψ^(2)_T(x_1,x_2)=ψ^(2)_F(x_1,x_2). We will assume that ψ^(N-2)_T(x_1,...,x_N-2) = ψ^(N-2)_F(x_1,...,x_N-2) and prove O≡∫ψ^(N)_T(x_1,...,x_N)ψ^(N)_F(x_1,...,x_N)dx_1...dx_N=1, which implies that ψ^(N)_T(x_1,...,x_N)=ψ^(N)_F(x_1,...,x_N). To do so we will need to derive a recursive formula for both ψ_T and ψ_F. The ψ_F recursive formula is rather direct to obtain, ψ^(N)_F(x_1, ..., x_N) = ϕ(x_N)ϕ(x_N-1) ( ∏_j=1^N-1 sgn(x_j - x_N) ) ( ∏_k=1^N-2 sgn(x_k - x_N-1) ) ×ψ^(N-2)_F(x_1, ..., x_N-2). Of course, one can choose the labels of the two added particles. If the two added particles are at positions x_i, x_j with i<j, ψ^(N)_F(x_1, ..., x_N) = ϕ(x_i)ϕ(x_j) (-1)^i-1( ∏_l ≠ i^N sgn(x_i - x_l) ) (-1)^j-2( ∏_k ≠ j, i^N sgn(x_j - x_k) ) ×ψ^(N-2)_F(x_1, ..., x_i-1, x_i+1, ..., x_j-1, x_j+1, ..., x_N), where the factors (-1)^i-1 and (-1)^j-2 account for the necessary sign changes in order to always have sgn(x_l-x_k) with l<k. Next, we derive a recursive formula for the proposed ψ_T, ψ_T^(N)(x_1,...,x_N)=∑_k_1<...<k_N_p√(p_N(k_1,...,k_N_p))φ^(N)_k_1,...,k_N_p(x_1,...,x_N), where φ^(N)_k_1,...,k_N_p(x_1,...,x_N) is a Slater determinant containing the N states χ_k_1+, χ_k_1-, ..., χ_k_N_P+, χ_k_N_P- and N_P=N/2. Decomposing the Slater determinant in two-state minors we get to ψ_T^(N)(x_1, …, x_N) = ∑_k_1 <...<k_N_p√(p(k_1, …, k_N_p))√((N-2)!)/√(N!)[ [ χ_k_1+(x_1) χ_k_1-(x_1); χ_k_1+(x_2) χ_k_1-(x_2) ]φ^(N-2)_k_2 … k_N_p(x_3, … ,x_N) - [ χ_k_1+(x_1) χ_k_1-(x_1); χ_k_1+(x_3) χ_k_1-(x_3) ]φ^(N-2)_k_2 … k_N_p(x_2, x_4,…, x_N) + [ χ_k_1+(x_1) χ_k_1-(x_1); χ_k_1+(x_4) χ_k_1-(x_4) ]φ^(N-2)_k_2 … k_N_p(x_2, x_3, x_5, …, x_N) +…], where the sign preceding the minor x_i,x_j is +1 if i+j is odd and -1 if i+j is even. The √((N-2)!) comes from expressing the purely mathematical determinant of N-2 particles as a properly normalised fermionic state. The two-minor determinants can be written as [ χ_k+(x_a) χ_k-(x_a); χ_k+(x_b) χ_k-(x_b) ] = 2ϕ(x_a)ϕ(x_b) sin[(2k-1)π(y_a - y_b)], where y=F(x). Then, ψ_T^(N)(x_1,..., x_N) = ∑_k_1 < ...<k_N_p√(p(k_1, …, k_N_p))√((N-2)!)/√(N!)× [2ϕ(x_1)ϕ(x_2) sin[(2k_1-1)π(y_1-y_2)]φ^(N-2)_k_2 … k_N_p(x_3, …, x_N) -2ϕ(x_1)ϕ(x_3)sin[(2k_1-1)π(y_1-y_3)]φ^(N-2)_k_2 … k_N_p(x_2, x_4, … ,x_N) +2ϕ(x_1)ϕ(x_4)sin[(2k_1-1)π(y_1-y_4)]φ^(N-2)_k_2 … k_N_p(x_2, x_3,x_5, … ,x_N)+... ]. The summation can be simplified using that ∑_k_1 < …<k_N_p =1/N_p!∑_k_1≠…≠ k_N_P=1/N_p!∑_k_1,…,k_N_P, where the last equality is only valid for the specific function we are considering. This is because terms with repeated k's evaluate to zero since a Slater determinant with a repeated column evaluates to zero. Then, ψ_T^(N)(x_1, …, x_N) = √((N-2)!)/√(N!)1/(N/2)!∑_k_1,...,k_N_p√(p(k_1, …, k_N_p))× [2ϕ(x_1)ϕ(x_2) sin[(2k_1-1)π(y_1-y_2)]φ^(N-2)_k_2 … k_N_p(x_3, …, x_N) -2ϕ(x_1)ϕ(x_3)sin[(2k_1-1)π(y_1-y_3)]φ^(N-2)_k_2 … k_N_p(x_2, x_4, … ,x_N) +2ϕ(x_1)ϕ(x_4)sin[(2k_1-1)π(y_1-y_4)]φ^(N-2)_k_2 … k_N_p(x_2, x_3,x_5, … ,x_N)+... ]. Explicitly writing p(k_1,...,k_N_p), ψ_T^(N)(x_1,...,x_N) = √((N-2)!)/√(N!)1/(N/2)!2^N/2√(N!)/π^N/2∑_k_1,...,k_N_p1/(2k_1-1)× ... ×1/(2k_N_p-1)× [2ϕ(x_1)ϕ(x_2) sin[(2k_1-1)π(y_1-y_2)]φ^(N-2)_k_2 … k_N_p(x_3, …, x_N) -2ϕ(x_1)ϕ(x_3)sin[(2k_1-1)π(y_1-y_3)]φ^(N-2)_k_2 … k_N_p(x_2, x_4, … ,x_N) +2ϕ(x_1)ϕ(x_4)sin[(2k_1-1)π(y_1-y_4)]φ^(N-2)_k_2 … k_N_p(x_2, x_3,x_5, … ,x_N)+... ]. Since k_1 is completely factorised from the other k's we can perform the summation over k_1. To do so we apply ∑_k_1sin[(2k_1 - 1)π (y_a - y_b)]/(2k_1 - 1) = (π/4) sgn(y_a - y_b), for (y_a - y_b)∈(-1,1). Then, ψ_T^(N)(x_1,..., x_N) = √((N-2)!)/√(N!)1/(N/2)!2^N/2√(N!)/π^N/2(π/2)∑_k_2,...,k_N_p1/(2k_2-1)× ... ×1/(2k_N_p-1)× [ϕ(x_1)ϕ(x_2) sgn(y_1-y_2) φ^(N-2)_k_2 … k_N_p(x_3, …, x_N) -ϕ(x_1)ϕ(x_3) sgn(y_1-y_3) φ^(N-2)_k_2 … k_N_p(x_2, x_4, … ,x_N) +ϕ(x_1)ϕ(x_4) sgn(y_1-y_4) φ^(N-2)_k_2 … k_N_p(x_2, x_3,x_5, … ,x_N)+... ]. Regrouping prefactors and changing back the summation to k_2<...<k_N_p, ψ_T^(N)(x_1,..., x_N) = √((N-2)!)/(N/2)1/π^(N-2)/2∑_k_2<...<k_N_p1/(k_2-1/2)× ... ×1/(k_N_p-1/2)× [ϕ(x_1)ϕ(x_2) sgn(y_1-y_2) φ^(N-2)_k_2 … k_N_p(x_3, …, x_N) -ϕ(x_1)ϕ(x_3) sgn(y_1-y_3) φ^(N-2)_k_2 … k_N_p(x_2, x_4, … ,x_N) +ϕ(x_1)ϕ(x_4) sgn(y_1-y_4) φ^(N-2)_k_2 … k_N_p(x_2, x_3,x_5, … ,x_N)+... ]. Identifying p(k_2,...,k_N_p), and applying sgn(y_i-y_j)=sgn(x_i-x_j), the wave function can be written as a function of the hypothesised wave function for N-2 particles: ψ_T^(N)(x_1 … x_N) = 2/N[ϕ(x_1)ϕ(x_2) sgn(x_1-x_2)ψ^(N-2)_T(x_3, ..., x_N) -ϕ(x_1)ϕ(x_3) sgn(x_1-x_3)ψ^(N-2)_T(x_2,x_4, ..., x_N) +ϕ(x_1)ϕ(x_4) sgn(x_1-x_4)ψ^(N-2)_T(x_2,x_3,x_5,..., x_N) + ... ], where the sum goes over all the minors in the initial Slater determinant. Now that we have deduced a recursive formula for both ψ_F and ψ_T we proceed to prove that O=1 by induction. We start with O=∫ψ^(N)_T(x_1,...,x_N)ψ^(N)_F(x_1,...,x_N)dx_1...dx_N= 2/N[∫ϕ(x_1)ϕ(x_2) sgn(x_1-x_2)ψ_T^(N-2)(x_3, ..., x_N)ψ^(N)_F(x_1,...,x_N) dx_1...dx_N -∫ϕ(x_1)ϕ(x_3) sgn(x_1-x_3)ψ_T^(N-2)(x_2,x_4, ..., x_N)ψ^(N)_F(x_1,...,x_N) dx_1...dx_N + ∫ϕ(x_1)ϕ(x_4) sgn(x_1-x_4)ψ_T^(N-2)(x_2,x_3,x_5 ..., x_N)ψ^(N)_F(x_1,...,x_N) dx_1...dx_N +...]. The sign before each term x_i,x_j can be generalised as (-1)^i+j+1. With this we focus on a given term x_i,x_j, which we denote O_ij, O_ij = (-1)^i+j+1∫ϕ(x_i)ϕ(x_j) sgn(x_i-x_j) ψ_T^(N-2)(x_1, ..., x_i-1, x_i+1, ..., x_j-1, x_j+1, ..., x_N) ψ^(N)_F(x_1,...,x_N) dx_1 … dx_N. To simplify the calculus we make the following change of variables y=F(x)=∫ϕ(x)^2 dx, dy=ϕ(x)^2 dx, and apply that sgn(x_i-x_j)=sgn(y_i-y_j). Also, we define ψ̅^(N)(x_1,...,x_N)=ψ^(N)(x_1,...,x_N)/[ϕ(x_1)×...×ϕ(x_N)]. Applying both the change of notation and the change of variables we get to O_ij = (-1)^i+j+1∫_0^1 sgn(y_i-y_j)ψ̅_T^(N-2)(y_1, ..., y_i-1, y_i+1, ..., y_j-1, y_j+1, ..., y_N) ψ̅^(N)_F(y_1, ..., y_N) dy_1 … dy_N. Next, we rewrite ψ̅^(N)_F(y_1,...,y_N) using the recursive form such that the two particles added are x_i and x_j. This is ψ̅^(N)_F(y_1, ..., y_N) = (-1)^i-1( ∏_l ≠ i^N sgn(y_i - y_l) ) (-1)^j-2( ∏_k ≠ j, i^N sgn(y_j - y_k) ) ×ψ̅^(N-2)_F(y_1, ..., y_i-1, y_i+1, ..., y_j-1, y_j+1, ..., y_N). When inserting Eq. (<ref>) into Eq. (<ref>) and applying that for the N-2 case ψ_T^(N-2)=ψ_F^(N-2) (induction), ψ̅^(N-2)_F(y_1, …, y_i-1, y_i+1, …, y_j-1, y_j+1, …, y_N) ψ̅^(N-2)_T(y_1, …, y_i-1, y_i+1, …, y_j-1, y_j+1, …, y_N) = ψ̅^(N-2)_F(y_1, …, y_i-1, y_i+1, …, y_j-1, y_j+1, …, y_N)^2 = 1, where ψ̅_F^(N-2)(y_1,..,y_N)^2=1 since ψ̅_F is a product of signs, we get to O_ij=(-1)^2(i+j)-2∫_0^1 sgn(y_i-y_j)( ∏_l ≠ i^N sgn(y_i - y_l) ) ( ∏_k ≠ j, i^N sgn(y_j - y_k) ) dy_1...dy_N, where (-1)^2(i+j)-2=1 for any i,j. Then, simplifying sgn(y_i-y_j) we get to O_ij=∫_0^1 ( ∏_l ≠ i,j^N sgn(y_i - y_l) sgn(y_j - y_l) ) dy_1...dy_N, where now it is clear that the choice of i and j is arbitrary and all terms in Eq. (<ref>) are completely equivalent. It is easy to check that there are N(N-1)/2 terms since this is the number of two-state minors in the initial Slater determinant. Therefore, the initial integral is greatly simplified to O=(N-1) ∫_0^1 ( ∏_l ≠ 1,2^N sgn(y_1 - y_l) sgn(y_2 - y_l) ) dy_1...dy_N, where we have arbitrarily decided that i,j=1,2. One can see that the N-2 integrals over y_3,...,y_N are equivalent and independent among themselves, thus O= (N-1) ∫_0^1∫_0^1 (∫_0^1 sgn(y_1 - y_3) sgn(y_2 - y_3) dy_3)^N-2 dy_1dy_2 = (N-1)∫_0^1∫_0^1 (2 sgn(y_1-y_2)(y_2-y_1)+1)^N-2 dy_1dy_2=1, when N is even.
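Before turning to the odd case, we note that the closing identity above (and its odd-N analogue derived below) can be checked numerically, using ∫_0^1 sgn(y_1-y_3) sgn(y_2-y_3) dy_3 = 2 sgn(y_1-y_2)(y_2-y_1)+1 = 1-2|y_1-y_2|. The following Python sketch uses a simple Monte Carlo estimate; the sample size is an arbitrary illustrative choice.

import numpy as np

rng = np.random.default_rng(0)
M = 2_000_000
y1, y2 = rng.random(M), rng.random(M)

def inner(N):
    # [ int_0^1 sgn(y1-y3) sgn(y2-y3) dy3 ]^(N-2) = (1 - 2|y1-y2|)^(N-2),
    # averaged over uniform y1, y2 in (0,1)
    return np.mean((1.0 - 2.0 * np.abs(y1 - y2)) ** (N - 2))

for N in (4, 6, 8):      # even case: prefactor N-1
    print(N, (N - 1) * inner(N))
for N in (5, 7, 9):      # odd case: prefactor N
    print(N, N * inner(N))
# all printed values should be close to 1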
§ PROOF OF Ψ_T=Ψ_F FOR THE N-ODD CASE The proof is done again by induction. We have already proved that ψ^(3)_T(x_1,x_2,x_3)=ψ^(3)_F(x_1,x_2,x_3). We will follow the same procedure as in the even case and assume that ψ^(N-2)_T(x_1,...,x_N-2)=ψ^(N-2)_F(x_1,...,x_N-2) to prove O≡∫ψ^(N)_T(x_1,...,x_N)ψ^(N)_F(x_1,...,x_N)dx_1...dx_N=1, which implies that ψ^(N)_T(x_1,...,x_N)=ψ^(N)_F(x_1,...,x_N). The ψ_F recursive formula derived in Eq. (<ref>) for the even case is also valid in the odd case. The case of ψ_T is not that simple. Again, we start with our wave function, now for the odd case, ψ_T^(N)(x_1,...,x_N)=∑_k_1<...<k_N_p√(p(k_1,...,k_N_p))φ^(N)_0,k_1,...,k_N_p(x_1,...,x_N), where φ^(N)_0,k_1,...,k_N_p(x_1,...,x_N) is the Slater determinant containing the N states χ_k_1+, χ_k_1-, ..., χ_k_N_P+, χ_k_N_P-, and importantly, ϕ. The number of pairs in the odd case is N_P=(N-1)/2. Decomposing the Slater determinant in two-state minors we get to ψ_T^(N)(x_1, …, x_N) = ∑_k_1 <...<k_N_p√(p(k_1, …, k_N_p))√((N-2)!)/√(N!)× [ [ χ_k_1+(x_1) χ_k_1-(x_1); χ_k_1+(x_2) χ_k_1-(x_2) ]φ^(N-2)_0,k_2 … k_N_p(x_3, … ,x_N) - [ χ_k_1+(x_1) χ_k_1-(x_1); χ_k_1+(x_3) χ_k_1-(x_3) ]φ^(N-2)_0,k_2 … k_N_p(x_2, x_4,…, x_N) + [ χ_k_1+(x_1) χ_k_1-(x_1); χ_k_1+(x_4) χ_k_1-(x_4) ]φ^(N-2)_0,k_2 … k_N_p(x_2, x_3, x_5, …, x_N) +…], where the sign preceding the minor x_i,x_j is +1 if i+j is odd and -1 if i+j is even. The √((N-2)!) comes from expressing the purely mathematical determinant of N-2 particles as a properly normalised fermionic state. The two-minor determinants can now be written as [ χ_k+(x_a) χ_k-(x_a); χ_k+(x_b) χ_k-(x_b) ] = 2ϕ(x_a)ϕ(x_b) sin[2kπ(y_a - y_b)], where y=F(x). Then, ψ_T^(N)(x_1,..., x_N) = ∑_k_1 < ...<k_N_p√(p(k_1, …, k_N_p))√((N-2)!)/√(N!)× [2ϕ(x_1)ϕ(x_2) sin[2k_1π(y_1-y_2)]φ^(N-2)_0,k_2 … k_N_p(x_3, …, x_N) -2ϕ(x_1)ϕ(x_3)sin[2k_1π(y_1-y_3)]φ^(N-2)_0,k_2 … k_N_p(x_2, x_4, … ,x_N) +2ϕ(x_1)ϕ(x_4)sin[2k_1π(y_1-y_4)]φ^(N-2)_0,k_2 … k_N_p(x_2, x_3,x_5, … ,x_N)+... ]. As in the even case, we simplify the summation, ψ_T^(N)(x_1, …, x_N) = √((N-2)!)/√(N!)1/[(N-1)/2]!∑_k_1,...,k_N_p√(p(k_1, …, k_N_p))× [2ϕ(x_1)ϕ(x_2) sin[2k_1π(y_1-y_2)]φ^(N-2)_0,k_2 … k_N_p(x_3, …, x_N) -2ϕ(x_1)ϕ(x_3)sin[2k_1π(y_1-y_3)]φ^(N-2)_0,k_2 … k_N_p(x_2, x_4, … ,x_N) +2ϕ(x_1)ϕ(x_4)sin[2k_1π(y_1-y_4)]φ^(N-2)_0,k_2 … k_N_p(x_2, x_3,x_5, … ,x_N)+... ]. Explicitly writing p(k_1,...,k_N_p), ψ_T^(N)(x_1,...,x_N) = √((N-2)!)/√(N!)1/[(N-1)/2]!√(N!)/π^(N-1)/2∑_k_1,...,k_N_p1/k_1× ... ×1/k_N_p× [2ϕ(x_1)ϕ(x_2) sin[2k_1π(y_1-y_2)]φ^(N-2)_0,k_2 … k_N_p(x_3, …, x_N) -2ϕ(x_1)ϕ(x_3)sin[2k_1π(y_1-y_3)]φ^(N-2)_0,k_2 … k_N_p(x_2, x_4, … ,x_N) +2ϕ(x_1)ϕ(x_4)sin[2k_1π(y_1-y_4)]φ^(N-2)_0,k_2 … k_N_p(x_2, x_3,x_5, … ,x_N)+... ]. Since k_1 is completely factorised from the other k's we perform the summation over k_1 applying ∑_n=1^∞sin [2π n(y_a-y_b)]/π n = sgn(y_a-y_b)/2- (y_a-y_b), for y_a-y_b∈(-1,1). Then, ψ_T^(N)(x_1,..., x_N) = √((N-2)!)/√(N!)2/[(N-1)/2]!√(N!)∑_k_2,...,k_N_p1/π k_2× ... ×1/π k_N_p× [ϕ(x_1)ϕ(x_2) ( sgn(y_1-y_2)/2-(y_1-y_2)) φ^(N-2)_0,k_2 … k_N_p(x_3, …, x_N) -ϕ(x_1)ϕ(x_3)( sgn(y_1-y_3)/2-(y_1-y_3))φ^(N-2)_0,k_2 … k_N_p(x_2, x_4, … ,x_N) +ϕ(x_1)ϕ(x_4)( sgn(y_1-y_4)/2-(y_1-y_4))φ^(N-2)_0,k_2 … k_N_p(x_2, x_3,x_5, … ,x_N)+... ]. Regrouping prefactors and changing back the summation to k_2<...<k_N_p, ψ_T^(N)(x_1,..., x_N) = 2√((N-2)!)/[(N-1)/2]∑_k_2<...<k_N_p1/π k_2× ... ×1/π k_N_p× [ϕ(x_1)ϕ(x_2) ( sgn(y_1-y_2)/2-(y_1-y_2)) φ^(N-2)_0,k_2 … k_N_p(x_3, …, x_N) -ϕ(x_1)ϕ(x_3)( sgn(y_1-y_3)/2-(y_1-y_3))φ^(N-2)_0,k_2 … k_N_p(x_2, x_4, … ,x_N) +ϕ(x_1)ϕ(x_4)( sgn(y_1-y_4)/2-(y_1-y_4))φ^(N-2)_0,k_2 … k_N_p(x_2, x_3,x_5, … ,x_N)+... ].
Identifying p(k_2,...,k_N_p), ψ_T^(N) can be written as a function of ψ_T^(N-2), ψ_T^(N)(x_1 … x_N) = 4/(N-1)[ϕ(x_1)ϕ(x_2)( sgn(y_1-y_2)/2-(y_1-y_2)) ψ^(N-2)_T(x_3, ..., x_N) -ϕ(x_1)ϕ(x_3)( sgn(y_1-y_3)/2-(y_1-y_3)) ψ^(N-2)_T(x_2,x_4, ..., x_N) +ϕ(x_1)ϕ(x_4)( sgn(y_1-y_4)/2-(y_1-y_4)) ψ^(N-2)_T(x_2,x_3,x_5,..., x_N) + ... ], where the sum goes over all the minors in the initial Slater determinant. As in the even case, now that we have a recursive formula for both ψ_F and ψ_T we proceed with the actual proof of O=1. Applying the recursive formula of ψ_F to the calculation of O, O= 4/(N-1)[∫ϕ(x_1)ϕ(x_2)( sgn(y_1-y_2)/2-(y_1-y_2)) ψ_T^(N-2)(x_3, ..., x_N)ψ^(N)_F(x_1,...,x_N) dx_1...dx_N -∫ϕ(x_1)ϕ(x_3)( sgn(y_1-y_3)/2-(y_1-y_3)) ψ_T^(N-2)(x_2,x_4, ..., x_N)ψ^(N)_F(x_1,...,x_N) dx_1...dx_N + ∫ϕ(x_1)ϕ(x_4)( sgn(y_1-y_4)/2-(y_1-y_4)) ψ_T^(N-2)(x_2,x_3,x_5 ..., x_N)ψ^(N)_F(x_1,...,x_N) dx_1...dx_N +...], where the sign before each term x_i,x_j can be generalised as (-1)^i+j+1. With this we focus on a given term x_i,x_j, O_ij=(-1)^i+j+1∫ϕ(x_i)ϕ(x_j) ( sgn(y_i-y_j)/2 - (y_i-y_j) ) ψ_T^(N-2)(x_1, ..., x_i-1, x_i+1, ..., x_j-1, x_j+1, ..., x_N) ×ψ^(N)_F(x_1, ..., x_N) dx_1 … dx_N. To simplify the calculus we make the change of variables y=F(x)=∫ϕ(x)^2 dx, dy=ϕ(x)^2 dx. Also, we denote ψ̅^(N)(x_1,...,x_N)=ψ^(N)(x_1,...,x_N)/[ϕ(x_1)×...×ϕ(x_N)]. Applying both the change of notation and the change of variables we get to O_ij=(-1)^i+j+1∫_0^1 ( sgn(y_i-y_j)/2 - (y_i-y_j) ) ψ̅_T^(N-2)(y_1, ..., y_i-1, y_i+1, ..., y_j-1, y_j+1, ..., y_N) ×ψ̅^(N)_F(y_1, ..., y_N) dy_1 … dy_N. We first focus on the term including the linear factor (y_i-y_j) and forget about the overall sign. We denote by I_ij the following integral, I_ij=(-1)^i+j∫_0^1 (y_i-y_j) ψ̅_T^(N-2)(y_1, ..., y_i-1, y_i+1, ..., y_j-1, y_j+1, ..., y_N) ψ̅^(N)_F(y_1, ..., y_N) dy_1 … dy_N. The integral I_ij does not depend on the choice of i,j as long as i<j, which occurs in all terms of Eq. (<ref>). We rewrite ψ̅^(N)_F(y_1,...,y_N) using the recursive form such that the two particles added are x_i and x_j. This is ψ̅^(N)_F(y_1, ..., y_N) = (-1)^i-1( ∏_l ≠ i^N sgn(y_i - y_l) ) (-1)^j-2( ∏_k ≠ j, i^N sgn(y_j - y_k) ) ×ψ̅^(N-2)_F(y_1, ..., y_i-1, y_i+1, ..., y_j-1, y_j+1, ..., y_N). When inserting Eq. (<ref>) into Eq. (<ref>) and applying that for the N-2 case ψ_T^(N-2)=ψ_F^(N-2) (induction), ψ̅^(N-2)_F(y_1, …, y_i-1, y_i+1, …, y_j-1, y_j+1, …, y_N) ψ̅^(N-2)_T(y_1, …, y_i-1, y_i+1, …, y_j-1, y_j+1, …, y_N) = ψ̅^(N-2)_F(y_1, …, y_i-1, y_i+1, …, y_j-1, y_j+1, …, y_N)^2 = 1, where ψ̅_F^(N-2)(y_1,..,y_N)^2=1 since ψ̅_F is a product of signs, we get to I_ij=(-1)^2(i+j)-3∫_0^1 (y_i-y_j)( ∏_l ≠ i^N sgn(y_i - y_l) ) ( ∏_k ≠ j, i^N sgn(y_j - y_k) ) dy_1...dy_N, where (-1)^2(i+j)-3=-1 for any i,j. This can be rewritten as I_ij=-∫_0^1 (y_i-y_j) sgn(y_i-y_j)( ∏_l ≠ i,j^N sgn(y_i - y_l) sgn(y_j - y_l) ) dy_1...dy_N. For simplicity, we choose i,j=1,2 and realise that the N-2 integrals over y_3,...,y_N are equivalent, then I_ij= -∫_0^1∫_0^1 (y_1-y_2) sgn(y_1-y_2)( ∫_0^1 sgn(y_1 - y_3) sgn(y_2 - y_3) dy_3 )^N-2 dy_1dy_2 = -∫_0^1∫_0^1 (y_1-y_2) sgn(y_1-y_2)(2 sgn(y_1-y_2)(y_2-y_1)+1)^N-2 dy_1dy_2=0, when N is odd. After showing that I_ij=0, we are left with O_ij=(-1)^i+j+1∫_0^1 [sgn(y_i-y_j)/2] ψ̅_T^(N-2)(y_1, ..., y_i-1, y_i+1, ..., y_j-1, y_j+1, ..., y_N) ψ̅^(N)_F(y_1, ..., y_N) dy_1 … dy_N, which has a completely equivalent form to the O_ij of the even case shown in Eq. (<ref>).
Therefore, we can apply the same procedure and steps already applied in the even case to finally get to O= N ∫_0^1 ∫_0^1 ( ∫_0^1 sgn(y_1 - y_3) sgn(y_2 - y_3) dy_3)^N-2 dy_1dy_2 = N∫_0^1∫_0^1 (2 sgn(y_1-y_2)(y_2-y_1)+1)^N-2 dy_1dy_2=1, when N is odd. § CALCULATION OF ⟨O⃗_3^FTG⟩ In this section we provide the explicit calculations, for the FTG gas case, of the expectation values of O⃗_3^FTG = [ n_k+ + n_k- + n_l+ + n_l-; n_k+n_k- + n_l+ n_l-; a_k+^† a_k-^† a_l- a_l+ + h.c. ]= [ 2(λ_k^(N)+λ_l^(N)); λ_k^(N)+λ_l^(N); 2∑_(k_2 < … < k_N_P) ≠ l,k√(p(l, k_2 …)p(k, k_2 …)) ]. We start with ⟨ n_k+⟩: ⟨ n_k+⟩ =⟨ a_k+^† a_k+⟩=⟨ψ_T|∑_k_1<..<k_N_P√(p_N(k_1,...,k_N_p))a_k+^† a_k+P_k_1^† ...P_k_N_p^† |0⟩ =∑_(k_2<...<k_N_p)≠ k p(k,k_2,...,k_N_p)=λ_k^(N). Of course, the derivation of ⟨ n_k-⟩=λ_k^(N) is analogous to the latter. With this, we get to ⟨ n_k++n_k-+n_l++n_l-⟩=2(λ_k^(N)+λ_l^(N)). The next expectation value we compute is ⟨ n_k+n_k-⟩, ⟨ n_k+n_k-⟩=⟨ψ_T|∑_k_1<..<k_N_P√(p_N(k_1,...,k_N_p))a_k+^† a_k+a_k-^† a_k-P_k_1^† ...P_k_N_p^† |0⟩= ∑_(k_2<...<k_N_p)≠ k p(k,k_2,...,k_N_p)=λ_k^(N). With this, we get to ⟨ n_k+n_k-+ n_l+n_l-⟩=λ_k^(N)+λ_l^(N). Finally, the last expectation value left to be computed is ⟨ a_k+^† a_k-^† a_l- a_l+⟩: ⟨ a_k+^† a_k-^† a_l- a_l+⟩ = ⟨ψ_T | ∑_k_1 < … < k_N_P√(p(k_1, …, k_N_P)) a_k+^† a_k-^† a_l- a_l+ P_k_1^† ...P_k_N_p^† |0⟩ = ⟨ψ_T | ∑_(k_2 < … < k_N_P) ≠ l√(p(l, k_2, …,k_N_P)) P_k^† ...P_k_N_p^† |0⟩ =∑_(k_2 < … < k_N_P) ≠ l,k√(p(l, k_2 …,k_N_P)p(k, k_2, …,k_N_P)). With this, we get to ⟨ a_k+^† a_k-^† a_l- a_l+ + h.c.⟩=2∑_(k_2 < … < k_N_P) ≠ l,k√(p(l, k_2, …,k_N_P)p(k, k_2, …,k_N_P)).
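The bookkeeping behind these expectation values can be reproduced directly from the coefficients p. The Python sketch below does this for an illustrative even-N example (N=4, N_P=2), taking the weights p(k_1,k_2) ∝ 1/[(2k_1-1)^2 (2k_2-1)^2] read off from the even-N expansion above and normalizing them numerically over a truncated k-space; the truncation and the choice N=4 are arbitrary illustrative assumptions.

import numpy as np
from itertools import combinations

# Illustrative even-N example: N = 4 fermions, i.e. N_P = 2 pairs.
# Unnormalized weights prod_i 1/(2k_i-1)^2, normalized over a truncated k-space.
kmax = 400
weights = {S: np.prod([1.0 / (2 * k - 1) ** 2 for k in S])
           for S in combinations(range(1, kmax + 1), 2)}
Z = sum(weights.values())
p = {S: w / Z for S, w in weights.items()}

def lam(k):
    # lambda_k = sum of p over all index subsets containing k
    return sum(pS for S, pS in p.items() if k in S)

def pair_correlator(k, l):
    # <a_k+^dag a_k-^dag a_l- a_l+> = sum over common subsets S with k, l not in S
    return sum(np.sqrt(p[tuple(sorted((l,) + S))] * p[tuple(sorted((k,) + S))])
               for S in combinations(range(1, kmax + 1), 1)
               if k not in S and l not in S)

print("lambda_1, lambda_2:", lam(1), lam(2))
print("<n_k+ + n_k- + n_l+ + n_l-> for (k,l)=(1,2):", 2 * (lam(1) + lam(2)))
print("pairing term (with h.c.) for (k,l)=(1,2):", 2 * pair_correlator(1, 2))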
http://arxiv.org/abs/2407.12496v1
20240717112513
Towards real-world applications of levitated optomechanics
[ "Yuanbin Jin", "Kunhong Shen", "Peng Ju", "Tongcang Li" ]
physics.optics
[ "physics.optics", "physics.app-ph", "physics.ins-det" ]
Department of Physics and Astronomy, Purdue University, West Lafayette, Indiana 47907, USA. Department of Physics and Astronomy, Purdue University, West Lafayette, Indiana 47907, USA. Department of Physics and Astronomy, Purdue University, West Lafayette, Indiana 47907, USA. tcli@purdue.edu Department of Physics and Astronomy, Purdue University, West Lafayette, Indiana 47907, USA. Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907, USA. Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA. Purdue Quantum Science and Engineering Institute, Purdue University, West Lafayette, Indiana 47907, USA. § ABSTRACT Levitated optomechanics, a rapidly expanding field that employs light to monitor and manipulate the mechanical motion of levitated objects, is increasingly relevant across physics, engineering, and other fields. This technique, which involves levitating micro- and nano-scale objects in a vacuum where they exhibit high-quality motion, provides an essential platform for precision measurements. Noted for their ultra-high sensitivity, levitated particles hold potential for a wide range of real-world applications. This perspective article briefly introduces the principle of optical levitation and the dynamics of levitated particles. It then reviews the emerging applications of levitated particles in ultrasensitive force and torque measurements, acceleration and rotation sensing, electric and magnetic field detection, scanning probe microscopy, localized vacuum pressure gauging, acoustic transduction, and chemical and biological sensing. Moreover, we discuss the present challenges and explore opportunities to minimize and integrate levitation systems for broader applications. We also briefly review optomechanics with ion traps and magnetic traps which can levitate particles in high vacuum without laser heating. Towards real-world applications of levitated optomechanics Tongcang Li July 22, 2024 ========================================================== § INTRODUCTION Optical levitation of small particles in vacuum using laser radiation pressure was first experimentally demonstrated by A. Ashkin and J. M. Dziedzic in 1970's <cit.>. Decades later, Li et al. measured the instantaneous velocity of the Brownian motion of an optically levitated glass microsphere <cit.> and cooled its center-of-mass (CoM) motion to millikelvin temperatures using feedback cooling <cit.>. The core of levitation systems lies in their capability to levitate particles, typically at micro- and nano-scale, and precisely monitor and control dynamics using optical forces and other techniques. The CoM motion of a levitated particle has been cooled with various methods, i.e., force feedback cooling <cit.>, parametric feedback cooling <cit.>, and cavity cooling <cit.>. In 2020, the CoM motion temperature of a levitated particle in high vacuum was cooled to its quantum ground-state <cit.>, bringing levitated optomechanics to the quantum regime <cit.>. Due to the absence of mechanical contact with the thermal environment, levitation systems in vacuum exhibit an ultra-high mechanical quality factor, holding promise for innovative applications. In addition, the motion of levitated particles can be optically detected at the quantum limit, making them particularly suitable for sensing applications (Fig. <ref>). Levitated systems are good in measurement of weak forces and torques <cit.>. 
Recently, a force detection sensitivity of (6.3 ± 1.6) × 10^-21 N/√(Hz) <cit.> and a torque detection sensitivity of ( 4.2 ± 1.2 ) × 10^-27 N m / √(Hz) were experimentally demonstrated with optically levitated nanoparticles <cit.>. Besides applications in fundamental research, such as the exploration of non-Newtonian gravity <cit.> and probing dark matter <cit.>, levitated particles can serve as inertial sensors for detecting minute changes in acceleration and rotation <cit.>. An optical levitation system has been packed as a small-scale accelerometer and evaluated by testing it on a vehicle running on a road <cit.>, providing an example of real-world applications. Levitated systems have also been used for electric field <cit.> and magnetic field <cit.> detection, scanning probe microscopy <cit.>, localized vacuum pressure gauging <cit.>, acoustic transduction <cit.>, and chemical and biological sensing <cit.>. Various trapped particles with different characteristics provide versatile options to meet the specific objectives of individual applications. The bottom row in Fig. <ref> lists some typical levitated particles, such as silica nanospheres <cit.>, silica nanodumbbells <cit.>, silicon nanorods <cit.>, NaYF_4:Yb/Er plates <cit.>, vaterite microspheres <cit.>, and silica microspheres <cit.>, among others. More details of applications with different kinds of particles are discussed in Sec. <ref>. In addition, levitated particles with embedded electron spin qubits, such as diamond nitrogen-vacancy (NV) centers <cit.>, are well suited for creating matter-wave interferometers <cit.>, investigating strong spin-mechanical coupling <cit.>, and probing the quantum geometric phase <cit.>. There have been several review articles on levitated optomechanics <cit.>. However, these review articles <cit.> focused on the foundations of levitated optomechanics and its applications in fundamental research. In this perspective article, we focus on the practical applications of levitated optomechanics. We discuss the emerging applications of levitated particles in force and torque measurement, inertial sensing, electric and magnetic field detection, levitated scanning probe microscopy, vacuum gauging, acoustic transduction, and chemical and bioaerosol sensing. Furthermore, the integration and minimization of levitation systems become crucial for the practical implementation of these levitated sensors in real-world scenarios. We review the challenges in creating compact levitation systems, and discuss various methods of particle launching and on-chip levitation, as well as hybrid systems with ion traps and magnetic traps, to address those challenges for real-world applications of levitated optomechanics. § OPTICAL LEVITATION §.§ Optical force Fig. <ref>(a) shows a schematic diagram of the motion of an optically levitated particle. The optical force on a particle is influenced by optical scattering, a phenomenon that depends on both the particle's dimensions and the laser's wavelength. This force comprises two constituent components: the gradient force and the scattering force. The scattering force is aligned with the laser's propagation direction, while the gradient force is directed towards the focal point, collectively creating a three-dimensional trapping potential. In the Rayleigh regime, where the particle's radius R is much smaller than the laser wavelength (R ≪λ), the particle is treated as a point dipole within the Rayleigh approximation.
The gradient force generated by the focused laser on the particle can be expressed as <cit.> F_grad(x,y,z) = 2πn_mR^3/c( n^2 - 1/n^2 + 2)∇ I(x,y,z) , and the scattering force is F_scat(x,y,z) = 128π ^5n_mR^6/3cλ ^4( n^2 - 1/n^2 + 2)^2I(x,y,z) , where n_m is the refractive index of the surrounding environment, n = n_0 / n_m is the relative refractive index of the particle, c is the speed of light in vacuum. The intensity distribution of a Gaussian laser is given by I(x,y) = [ 2P/( πω _z^2)]exp[ - 2(x^2 + y^2)/ω _z^2], where P is the laser power, ω_z is the beam size along the laser propagation direction. Generally, the gradient force is significantly larger than the scattering force and gravity of the particle, rendering the latter two forces negligible in the Rayleigh regime. When the particle size is comparable to the laser wavelength (R ∼λ), the optical force on the particle can be determined using the Lorentz-Mie theory, which involves analyzing the solutions of electromagnetic fields surrounding the levitated particle. The optical force exhibits a dependence on the particle's shape and size <cit.>, distinguishing it from the behavior observed in the Rayleigh regime. Additionally, when the particle size is significantly larger than the wavelength (R ≫λ), geometric optics becomes applicable for describing the optical force. §.§ Optical levitation schemes For nano-scale dielectric particles, such as silica nanoparticles, a single-beam trap generated by a tightly focused laser proves effective in stably levitating particles with a high trapping frequency, typically ranging from tens of kilohertz to a few megahertz <cit.>. However, when particles are approximately 1 μm in size or have a high refractive index, such as silicon particles, the scattering force approaches or even exceeds the gradient force. In these cases, the particles are suitable for levitating with a dual-beam trap, which can be formed by two counter-propagating focused lasers <cit.>. The scattering forces generated by the two counter-propagating beams are canceled due to the equal amplitudes and opposite directions. Consequently, laser power can be adjusted to lower levels, where the gradient force surpasses the gravity. Alternatively, two beams with the same optical frequency result in a standing wave, which can also be used for particle levitation <cit.>. Despite the high trapping frequency along the laser propagation direction, the trapping frequency in the radial direction remains comparatively low, typically in the range of a few kilohertz <cit.>. For large particles with masses on the order of nanograms, the gradient force may be inadequate to counteract gravity. An optical-gravitational trap, wherein the laser propagates in the direction opposing gravity, provides an alternative method for particle levitation. The scattering force acting on the particles is utilized to balance the gravitational force. Due to the substantial mass of the trapped particles, the resulting trapping frequency is low, typically in the range of tens of hertz <cit.>. §.§ Motion of levitated particles Considering the six degrees of freedom of a rigid object, we take a non-spherical particle, specifically a nanodumbbell (Fig. <ref>(a)), as a representative example. The trapping laser, linearly polarized along the x direction, propagates in the z direction. The long axis of the levitated nanodumbbell tends to align with the laser polarization direction due to the interaction between the dipole moment and the electric field. 
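As a rough numerical illustration of the Rayleigh-regime force expressions given earlier in this section, the following Python sketch evaluates the peak radial gradient force, the on-axis scattering force at the focus, and the particle's weight. All parameter values (particle size, refractive index, laser power, beam waist) are illustrative assumptions rather than numbers quoted in the text.

import numpy as np

# Illustrative parameters (assumptions, not values from the text)
R   = 75e-9          # particle radius [m]
n0  = 1.45           # silica refractive index
n_m = 1.0            # surrounding medium (vacuum)
n   = n0 / n_m
lam = 1064e-9        # trapping wavelength [m]
P   = 0.2            # laser power [W]
w   = 0.8e-6         # beam waist at the focus [m]
c   = 3.0e8          # speed of light [m/s]
rho = 2200.0         # silica density [kg/m^3]
g   = 9.8

I0 = 2 * P / (np.pi * w**2)                 # peak intensity of the Gaussian profile
dI_max = (2 / w) * I0 * np.exp(-0.5)        # maximum radial |grad I| (at r = w/2)
K = (n**2 - 1) / (n**2 + 2)                 # Clausius-Mossotti factor

F_grad = 2 * np.pi * n_m * R**3 / c * K * dI_max
F_scat = 128 * np.pi**5 * n_m * R**6 / (3 * c * lam**4) * K**2 * I0
F_grav = rho * (4 / 3) * np.pi * R**3 * g

print(f"F_grad ~ {F_grad:.2e} N, F_scat ~ {F_scat:.2e} N, weight ~ {F_grav:.2e} N")

With these assumed numbers the gradient force exceeds both the scattering force and gravity, consistent with the statement above that the latter two are usually negligible for nanoparticles in the Rayleigh regime.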
The levitated particle undergoes Brownian motion in the potential well, caused by collisions with gas molecules, involving three center-of-mass (CoM) motions along the x, y and z directions, two torsional motions denoted as α (in the xy-plane) and β (in the xz-plane), as well as free rotation γ around the long axis of the particle. The equation of motion of the CoM of a levitated particle with mass m in a one-dimensional harmonic trap can be expressed as <cit.>: ẍ( t ) + γ _CoM ẋ( t ) + Ω _0^2 x( t ) = F_th( t )/m , where Ω_0 is the angular frequency of the CoM motion and γ_CoM is the damping rate. The stochastic force caused by thermal noise is given by F_th( t ) = √(2k_BTmγ_CoM)δ( t ), where ⟨δ( t )⟩ = 0 and ⟨δ( t )δ( t')⟩ = δ( t - t'). For a levitated sphere, the damping rate due to collisions with gas molecules can be written as <cit.> γ_CoM = [6πη R/m] [0.619/(0.619 + Kn)] ( 1 + c_k) , where η is the dynamic viscosity of the gas, Kn= l/R is the Knudsen number, l is the mean free path of the gas molecules, and c_k = 0.31Kn/( 0.785 + 1.152Kn + Kn^2). Because of the large mean free path in high vacuum, Kn≫ 1, and the damping rate reduces to first order to γ_CoM = 3.94πηd^2p/( k_BTρ R), where d is the mean diameter of the gas molecules and p is the gas pressure. According to this approximate expression, the damping rate is proportional to the pressure and inversely proportional to the particle's radius. The power spectral density (PSD) of the CoM motion is calculated from Eq. (<ref>) by Fourier transformation <cit.>, S( ω) = (2k_BT/m) γ _CoM/[( Ω _0^2 - ω ^2)^2 + ω ^2γ _CoM^2]. Rich information about the CoM motion, including the trapping frequency and the damping rate, can be obtained directly from PSD measurements. The scattered light from a levitated particle, containing information about its motion, interferes with the trapping laser and can be measured by photodetectors, as shown in Fig. <ref>(b). With the help of optical interference, the trapping laser amplifies the scattered light signal and improves the detection sensitivity of the CoM motion to the order of fm/√(Hz). Fig. <ref>(c) shows the PSDs of the CoM and torsional motions of a 170-nm-diameter levitated silica nanodumbbell at a pressure of 5 × 10^-4 Torr <cit.>. The left three peaks are the motion signals in the z (38 kHz), x (105 kHz), and y (120 kHz) directions, respectively. The signal peak at 400 kHz is the α torsional signal. The broad linewidth is induced by the coupling with the free rotation of the levitated particle around its long axis <cit.>. When the polarization of the trapping laser is switched to circular, it generates a torque on the levitated particle, inducing rotational motion around the α direction in vacuum. The rotation frequency of the levitated particle exhibits an inverse relationship with the pressure, as depicted in Fig. <ref>(d). The inset is the PSD of the rotational motion at a pressure of 6 × 10^-5 Torr, indicating a rotation frequency of about 6 GHz, which represents the fastest mechanical rotor demonstrated to date <cit.>. The vacuum isolates the levitated particle from physical contact with the external environment. However, under high-vacuum conditions, without air damping the oscillation amplitude grows larger and reaches the nonlinear region of the trap. Li et al. first demonstrated experimental feedback cooling of the CoM motion of a levitated particle in 2011 <cit.>. This cooling method relies on force feedback utilizing the velocity signal of the levitated particle.
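Before moving on, the damping-rate and PSD expressions above can be made concrete with a short numerical sketch; the nanosphere size, trap frequency, and gas parameters below are assumptions for illustration only.

import numpy as np

# Illustrative parameters (assumptions, not values quoted in the text)
kB, T = 1.380649e-23, 300.0
R, rho = 75e-9, 2200.0                 # silica nanosphere
m = rho * (4 / 3) * np.pi * R**3
eta, d = 1.81e-5, 0.372e-9             # air viscosity [Pa s], molecular diameter [m]
Omega0 = 2 * np.pi * 100e3             # trap frequency [rad/s]

def gamma_com(p_torr):
    # high-vacuum damping rate gamma_CoM = 3.94 pi eta d^2 p / (kB T rho R)
    p = p_torr * 133.322               # Torr -> Pa
    return 3.94 * np.pi * eta * d**2 * p / (kB * T * rho * R)

def S_x(omega, gamma):
    # thermal-noise PSD of the CoM motion from the expression above
    return (2 * kB * T / m) * gamma / ((Omega0**2 - omega**2)**2 + omega**2 * gamma**2)

for p in (1e-2, 1e-5, 1e-8):           # pressures in Torr
    g = gamma_com(p)
    print(f"p = {p:.0e} Torr: gamma/2pi = {g / 2 / np.pi:.3g} Hz, "
          f"sqrt(S_x) on resonance = {np.sqrt(S_x(Omega0, g)):.3g} m/sqrt(Hz)")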
Additionally, other methods, such as parametric feedback cooling and cavity cooling, have been reported to cool the motion of levitated particles. Remarkably, researchers have successfully cooled the CoM motion of levitated particles to the quantum ground-state <cit.>. Several typical cooling experiments are summarized in Table <ref>. §.§ Particles for optical levitation Previous experiments have successfully demonstrated the stable optical trapping of various particle types. Based on the characteristics of levitated particles, the system reveals various applications, including fundamental physics studies, material science, and biological research. Table <ref> presents a list of particles that have been optically levitated in air or vacuum conditions, along with their corresponding parameters. Among numerous kinds of particles, silica particles are preferred in the experiments due to their small optical absorption coefficients and stability in vacuum and high temperature. Silica nanoparticles have been optically levitated in ultra-high vacuum and precisely controlled in six-dimensional motions <cit.>. The systems exhibit high torque sensitivity with non-spherical particles, such as nanodumbbells <cit.>. Additionally, silica microparticles with masses on the nanogram scale have been successfully levitated using either dual-beam traps or optical-gravitational traps in vacuum, offering potential applications in inertial sensing and fundamental research <cit.>. Compared with the spherical or clustered shape of silica particles, fabricated silicon nanorods offer shape controllability, facilitating the study of the dynamics of levitated particles. The higher refractive index of silicon particles leads to significantly larger optical forces and torques under identical conditions <cit.>. Moreover, birefringent particles, such as vaterite, experience both torque and translational force in an optical trap <cit.>. NaYF:Yb/Er and YLF:Yb particles are specifically utilized to explore laser refrigeration <cit.>. In addition to dielectric particles, metal particles can also be employed in levitation systems to investigate different phenomena. Gold nanoparticles can be used to study surface plasmon resonance away from a surface to eliminate the effect of particle-surface interaction <cit.>. Notably, levitated diamonds with embedded NV centers, in contrast to classical particles, serve as highly sensitive quantum sensors, offering valuable insights into precise measurements <cit.> and fundamental researches <cit.>. Apart from silica particles, the stability of other materials is compromised in high vacuum due to the heating induced by the trapping laser, which can be solved by using ion traps or magnetic traps. The multifaceted applications of optical levitation highlight its adaptability, wherein the selection of particles is finely tuned to the particular objectives of each experiment and the distinctive properties inherent to the particles themselves. § EMERGING APPLICATIONS Levitated optomechanics, which involves the detection and control of mechanical systems through optical forces, has sparked interest for its prospective real-world applications. Thanks to their isolation from the thermal environment, levitated particles in vacuum exhibit ultra-high sensitivity, enabling precise measurements of weak fields. Herein, we present a few potential real-world applications that are relevant to our daily lives. 
§.§ Ultrasensitive force and torque detection Levitated particles in high vacuum experience minimal thermal noise and friction due to reduced collisions with gas molecules, resulting in a high quality factor (Q) and holding promise for precise force measurements <cit.>. The force sensitivity of a levitated harmonic oscillator limited by thermal noise is given by S_F^1/2 = √(4k_BTmΩ _0/Q), where T is the motion temperature and Ω _0/Q = γ_CoM is the damping rate of the CoM motion. Therefore, the minimum resolvable force during a measurement time of t is F_min = √(4k_BTmγ _CoM/t). One of the challenges in precise force measurement is the calibration. Generally, the force can be calibrated with a known electric field, as shown in Fig. <ref>(a). An optically levitated particle is positioned in an alternating current (AC) electric field, produced by a pair of electrodes. The displacement of the levitated particle under a sinusoidal electric field is x( t ) = [qE/(mω _eleZ_m)]sin( ω _elet + φ). Here, q is the charge on the particle, E and ω _ele are the amplitude and frequency of the electric field, Z_m( ω _ele) = [ ( Ω _0^2 - ω _ele^2)^2/ω _ele^2 + γ _CoM^2]^1/2 is the impedance of the oscillation at the frequency ω_ele, and φ is the phase of the oscillation relative to the electric field. The charge on the levitated particle can be conveniently measured and controlled through electric discharge <cit.> or ultraviolet (UV) light <cit.>. Consequently, the relationship between the force and the motion signal can be accurately determined. This method also enables the precise measurement of the mass of the levitated particle. Ranjit et al. have demonstrated a force sensitivity of (1.6 ± 0.37) × 10^-18 N/√(Hz) with optically levitated silica nanospheres <cit.>. The minimum measurable force is a function of measurement time, as illustrated in Fig. <ref>(b), with a value of (5.8 ± 1.3 ) × 10^-21 N achievable for measurement times exceeding 10^5 s. Recently, a minimum resolvable force of (40.80 ± 8.55) × 10^-24 N was achieved by Liang et al. (Fig. <ref>(c)) <cit.>, with a force sensitivity of (6.33 ± 1.62) × 10^-21 N/√(Hz). In addition, levitated particles can be driven to rotate <cit.> and exhibit ultra-high torque sensitivity <cit.>. In the case of an optically levitated rotor driven by a circularly polarized trapping laser, the rotational motion is determined by both the driving torque (M_opt) exerted by the trapping laser and the damping torque (M_gas) arising from the surrounding gaseous environment. The angular rotation frequency ω_r satisfies Idω _r/dt = M_opt + M_gas, where I is the moment of inertia, which is I = 0.4 m R^2 for a spherical particle with a radius of R. The laser driving torque originates from three components: absorption, birefringence, and shape asymmetry of the levitated particle <cit.>. All three components are proportional to the laser intensity. During the rotational acceleration of the levitated rotor, the damping torque (M_gas = - Iω_rγ_rot) increases simultaneously, while the driving torque remains constant. Here, γ_rot is the damping rate of the rotational motion. Finally, the sum of the driving and damping torques reaches zero, and the rotation frequency settles at ω_r = M_opt/(Iγ_rot). Fig. <ref>(d) shows a schematic of torque measurement with optically levitated particles <cit.>. An external torque generated by a 1020 nm laser is applied to the levitated particle, which can be measured by monitoring the change of the rotation frequency.
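As a rough numerical illustration of the thermal-noise-limited force sensitivity introduced above, the sketch below evaluates √(S_F) = √(4k_B T m γ_CoM) and the minimum resolvable force for several integration times, together with the calibration force produced by a known electric field acting on a few elementary charges. The particle and damping parameters are assumptions, although for these numbers the result lands near the 10^-21 N/√(Hz) scale quoted above.

import numpy as np

kB, T = 1.380649e-23, 300.0
e = 1.602176634e-19

# Illustrative silica nanosphere and residual-gas damping (assumed values)
R, rho = 75e-9, 2200.0
m = rho * (4 / 3) * np.pi * R**3
gamma = 2 * np.pi * 1e-4                    # CoM damping rate in ultra-high vacuum [rad/s]

S_F = np.sqrt(4 * kB * T * m * gamma)       # thermal force noise [N/sqrt(Hz)]
print(f"sqrt(S_F) ~ {S_F:.2e} N/sqrt(Hz)")
for t in (1.0, 1e3, 1e5):                   # integration time [s]
    print(f"t = {t:.0e} s: F_min ~ {S_F / np.sqrt(t):.2e} N")

# Calibration: a known AC field E applied to a particle carrying q = n*e (assumed values)
E, n_charges = 100.0, 5                     # V/m and number of elementary charges
print(f"calibration force qE = {n_charges * e * E:.2e} N")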
Similarly, the torque sensitivity is limited by the thermal noise induced by the residual gas molecules, given by S_M^1/2 = √(4k_BTIγ_rot). The torque sensitivity spectrum at a pressure of 1.3 × 10^-5 Torr and room temperature is shown in Fig. <ref>(e). The highest sensitivity of ( 4.2 ± 1.2 ) × 10^-27 N m / √(Hz) is experimentally demonstrated <cit.>, which greatly surpasses that of state-of-the-art nanofabricated torque sensors at millikelvin temperatures (2.9 × 10^-24 N m / √(Hz)) <cit.>. The torque measurements for various 1020 nm laser powers are presented in Fig. <ref>(f). The measured external torque is as low as 4.3 × 10^-28 N m over a measurement duration of 100 s with a modulation power of 1.1 mW. Such high torque sensitivity enables the levitated particles to offer potential applications in detecting vacuum friction and the Casimir torque <cit.>. §.§ Inertial sensing: accelerometers and gyroscopes Acceleration measurements are essential across various industries and widely used in automotive, health and fitness, aerospace and aviation, and robotics applications. Several technologies are used to measure acceleration, including piezoelectric accelerometers <cit.>, MEMS (Micro-Electro-Mechanical Systems) accelerometers <cit.>, optical accelerometers <cit.>, and so on. LIGO (Laser Interferometer Gravitational-Wave Observatory) instruments, as extremely sensitive detectors for measuring gravitational waves, achieve a sensitivity at the level of 10^-10 g/√(Hz) (where g = 9.8 m / s^2) <cit.>. However, these impressive sensitivities are realized under controlled laboratory settings and are typically limited to specific frequency ranges. The substantial size and significant cost of LIGO make it impractical for real-world applications. In contrast, levitated optomechanics, utilizing micro- or nano-particles, has demonstrated acceleration sensitivities on the order of 10^-7 g/√(Hz) <cit.>. Theoretically, Fig. <ref>(a) shows the calculated acceleration sensitivity as a function of the particle size at a pressure of 10^-10 Torr. The sensitivity can reach 3 × 10^-12 g/√(Hz) with a 100 μm particle. Compared to smaller particles, levitated massive particles demonstrate heightened sensitivity to acceleration but diminished sensitivity to force <cit.>. Given their notable sensitivity and cost-effectiveness, levitation systems hold significant potential across various applications, including the detection and monitoring of seismic activities, precise measurement of acceleration during flight, and monitoring the structural integrity of buildings, bridges, and other infrastructure. Fig. <ref>(a) shows the schematic diagram for acceleration measurements. A massive particle (∼ ng) is levitated in an optical-gravitational trap, formed by a vertically propagating 1064 nm laser focused by a low-NA lens with a long working distance. Monteiro et al. demonstrated an acceleration sensitivity of 95 ± 41 ng/√(Hz) at frequencies near 50 Hz using optically levitated silica microspheres with a diameter of 10 μm <cit.>. The corresponding force sensitivity is 0.95 ± 0.11 aN / √(Hz). Fig. <ref>(b) gives the distribution of the accelerations measured during 52 s integration segments over a total of 12 hours in the absence of an external force. Based on the Gaussian fitting, the minimum observable acceleration is 170 ± 340 [stat]± 70 [syst] pg. In addition to measuring AC accelerations and forces <cit.>, levitated optomechanics can also detect static forces, such as gravity and static electric forces, using free-falling nanoparticles <cit.>.
The force sensing scheme consists of three steps: (i) A particle is trapped in a harmonic potential with a high stiffness coefficient, sufficiently large to neglect the displacement of the levitated particle induced by the static force. (ii) The trapping potential is turned off for an interaction time. Under the influence of static force, the particle undergoes displacement, which depends on the amplitude of acceleration and time duration. (iii) Restarting the trapping potential produces an increased amplitude oscillation at the resonance frequency compared to the initial state. At high vacuum, the amplitude of displacement can be precisely measured to calculate the static force. Feedback cooling of the CoM motion can be applied to decrease the initial velocity of the levitated particle and improve measurement precision. Hebestreit et al. demonstrated a sensitivity of 10 aN for measuring static gravitational and electric forces <cit.>. The above-mentioned experiments of sensitivity measurements with levitation systems are carried out in lab condition. To facilitate their practical applications, levitation systems have been packed as a small-scale accelerometer. The schematic and optical image of an accelerometer with a size on the centimeter scale designed by Han et al. are shown in Fig. <ref>(c) and (d) <cit.>. A silica microsphere is levitated in a dual-beam optical trap in air. Fig. <ref>(e) is the measured acceleration sensitivity spectrum of the accelerometer without (black curve) and with (gray curve) feedback cooling, demonstrating a sensitivity of 25.5 ± 8.2 μ g/√(Hz) at the frequency of 55 Hz. Furthermore, the performance of the packed sensor is evaluated by testing it on a vehicle running on a road (Fig. <ref>(f)), providing an example of real-world applications. As inertial sensors, levitated optomechanics can act as gyroscopes for measuring orientation, angular velocity, and torque. Traditional gyroscopes rotate freely in all directions, which is a crucial characteristic for maintaining orientation. Beyond rotor gyroscopes that operate based on the conservation of angular momentum, various alternative types exist, such as the ring laser gyroscope or fiber optic gyroscope <cit.>, relying on the Sagnac effect, and MEMS gyroscopes <cit.>, using the Coriolis force. Gyroscopes have diverse applications, serving in navigation systems for aircraft, inertial measurement units in robotics and smartphones, stabilization systems in cameras and various other fields. MEMS gyroscopes are prevalent in portable electronic devices due to their compact size and low power consumption. However, the sensitivity of MEMS gyroscopes is relatively low. Instead, laser gyroscopes and fiber optic gyroscopes employ laser beams and fiber optics to measure orientation changes, offering high sensitivity and commonly used in aerospace and high-precision applications. Nonetheless, their substantial size and high cost restrict their application range. Levitated optomechanics is a promising alternative for gyroscopes, combining the advantages of high sensitivity, small size, and low cost. Optically levitated rotors have been reported ultra-high torque sensitivity <cit.>. Moreover, Zhang et al. proposed a scheme based on an NV center in a levitated nanodiamond to measure the angular velocity through matter-wave interferometry with an ultra-high sensitivity of 6.86 × 10^-7 rad / s / √(Hz) in an ion trap <cit.>. 
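To illustrate the size scaling discussed in this subsection, the sketch below combines the thermal force noise with the high-vacuum damping-rate expression from the previous section to estimate the thermal-noise-limited acceleration sensitivity √(S_a) = √(4k_B T γ_CoM/m) for a nanosphere and a 10-μm-diameter microsphere; the pressure and material parameters are assumed for illustration only.

import numpy as np

kB, T, g0 = 1.380649e-23, 300.0, 9.8
eta, d = 1.81e-5, 0.372e-9          # air viscosity [Pa s], molecular diameter [m]
rho = 2200.0                         # silica density [kg/m^3]
p = 1e-6 * 133.322                   # assumed pressure: 1e-6 Torr, converted to Pa

def accel_noise(R):
    # thermal-limited acceleration sensitivity sqrt(S_a) = sqrt(4 kB T gamma_CoM / m),
    # with the high-vacuum damping rate gamma_CoM = 3.94 pi eta d^2 p / (kB T rho R)
    m = rho * (4 / 3) * np.pi * R**3
    gamma = 3.94 * np.pi * eta * d**2 * p / (kB * T * rho * R)
    return np.sqrt(4 * kB * T * gamma / m)

for R in (75e-9, 5e-6):              # nanosphere vs. 10-um-diameter microsphere
    a = accel_noise(R)
    print(f"R = {R * 1e9:7.0f} nm: sqrt(S_a) ~ {a / g0:.2e} g/sqrt(Hz)")

For these assumed values the microsphere reaches roughly the 10^-7 g/√(Hz) scale quoted above, while the nanosphere is orders of magnitude worse, illustrating why massive particles are favored for acceleration sensing.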
§.§ Electric and magnetic field sensing In the presence of an external field, such as an electric or magnetic field, the levitated particle will experience a force or torque if the particle is charged or has a magnetic moment. As introduced in previous section, levitation systems exhibit an ultra-high force and torque sensitivity to external perturbations. Consequently, levitated optomechanics can serve as weak field sensors for precision measurement. Compared with traditional sensing techniques, the non-contact characteristic of levitation systems removes the need for electrodes with physical contacts, enabling measurements in a wide range of environments, including meteorology, industrial automation, biomedical engineering and telecommunications. Using levitated dielectric nanoparticles with a certain net charge, Zhu et al. demonstrated three-dimensional electric field measurement with a high sensitivity <cit.>. By scanning the nanoparticle position relative to the electric field generated by a pair of electrodes, the electric field distribution is obtained. The electric field strength with a range from 1.03 V/m to 36.2 kV/m is detected during a measurement time of 1 second by changing the voltage on the electrodes, as shown in Fig. <ref>(a). The noise equivalent electric field reaches 7.5 μ V / cm / √(Hz) at 1.4 × 10^-7 mbar (equal to 1.05× 10^-7 Torr, Fig. <ref>(b)). Recently, Fu et al. proposed a prototype that uses levitated nanoparticles to measure AC electric field and serve as low-frequency receiving antennas <cit.>. Similarly, levitated ferromagnetic particles can be used to sense magnetic fields <cit.>. A ferromagnetic particle undergoes rotation due to the induced torque arising from the interaction between its intrinsic magnetic moment and an external magnetic field. Jiang et al. designed a superconducting levitation system to trap a neodymium magnetic disk attached to a high-reflectivity mirror, forming a Fabry-Pérot cavity in combination with another mirror. This design allows for precise detection of the dynamics of the levitated magnet under the interaction with external magnetic field, demonstrating a sensitivity in magnetic field measurements of 370 pT / √(Hz) <cit.>. Such a system for magnetometry even exceeds the standard quantum limit (ħ) of magnetic field measurement by quantum magnetometers, including superconducting quantum interference devices, solid-state spins, and optical atomic magnetometers. Recently, Ahrens et al. reported a sensitivity of 20 fT / √(Hz), corresponding to 0.064 ħ, with levitated ferromagnetic particles in a superconducting trap <cit.>. These highly sensitive systems provide a powerful tool for precision measurements and testing fundamental physics. §.§ Levitated scanning probe microscopy In the realms of materials, biology and manufacturing, the detection of surface structures is crucial for analyzing sample properties, structures, and quality. Various techniques are used for the detection, including optical microscopy, SEM, and atomic force microscopy (AFM). The resolution of optical microscopy is restricted by the diffraction limit, which is determined by the wavelength of light and the NA of objective lens, resulting in a resolution capped at a few hundred nanometers with visible lights. In contrast, SEM and AFM own higher resolutions in the nanometer range but are associated with drawbacks such as larger equipment size and higher costs. 
Levitated optomechanical systems emerge as highly sensitive probes enabling the study of surface characteristics with exceptional resolution. Ju et al. <cit.> reported a neutral silica particle optically levitated near a sapphire surface featuring a nanograting structure, as shown in Figs. <ref>(a) and (b). The interference between the light reflected from the surface and the trapping beam forms a standing wave. The particle can be stably trapped in the anti-nodes of the standing wave, with the first well situated approximately 370 nm away from the surface. By systematically scanning the levitated particle across the nanograting, the trapping frequency of the CoM motion is found to vary periodically, as shown in Fig. <ref>(c). Such a scanning method proves to be an effective tool for detecting and characterizing surface structures beyond the diffraction limit. Additionally, the force sensitivity of levitated optomechanics is not affected by the surface structure or the separation between the surface and the particle. Montoya et al. demonstrated a silica nanosphere levitated by a standing wave near a conductor surface (Fig. <ref>(d)) <cit.>. The minimum observable force measured at the resonance frequency of the CoM motion at a distance of 0.411 μm is shown in Fig. <ref>(e). When the distance of the levitated particle from the surface is changed, no significant difference in force sensitivity is observed <cit.>. The three-dimensional intensity gradient of a nanophotonic cavity can also be imaged by a levitated particle, as has been achieved by Magrini et al. <cit.>. A nanoparticle is levitated at a distance of 310 nm from a nanophotonic cavity using a standing wave. Due to the coupling to the evanescent field of the cavity mode, the motion signal of the particle is affected by the phase fluctuations of the cavity mode. The optomechanical coupling rates between the particle and the nanocavity are shown in Fig. <ref>(f). Compared to AFM, the imaging resolution of a levitated particle is only limited by the measurement of the particle motion, rather than by the size of the probe, and may reach tens of nanometers. §.§ Localized vacuum pressure gauge An ultra-high vacuum environment, approximately below 10^-8 Torr, is essential for various scientific research applications, including particle accelerators <cit.>, gravitational wave detectors <cit.>, thin film growth and preparation <cit.>, electron-beam lithography <cit.>, atomic force microscopy <cit.>, and so on. Ionization gauges, including both hot and cold cathode types, represent the most sensitive methods for low-pressure measurement. These gauges collect the ionized gas molecules and measure the weak current, which is proportional to the gas pressure. However, in extremely high vacuum conditions with ultra-low gas molecule density, the production of ions is limited and depends on the types of gases, which are typically mixed and unknown. Consequently, ionization gauges face calibration challenges and large errors. In general, the pressure measurement range is limited to the order of 10^-11 Torr. Many research applications require an even higher vacuum level, which cannot be precisely measured by conventional ionization gauges. A levitation system is a potential tool for measuring ultra-low pressure with high accuracy. The principle of such a vacuum gauge is based on the rotational motion of levitated particles. The damping torque caused by the collisions of gas molecules is given by M_gas = - Iω_rγ_rot.
Specifically, for a spherical particle, the damping rate of the rotational motion is γ_rot = 10 κ p / ( πρ R v ) <cit.>, where κ≈ 1 is the accommodation factor of angular momentum transfer from gas molecules onto the particle, ρ is the particle density, p is the pressure, and v = √(8k_BT/πm_gas) is the mean speed of gas molecules, m_gas is the mass of a single gas molecule. Consequently, the damping rate of rotational motion is proportional to the gas pressure, implying that pressure can be obtained by measuring the damping rate. The damping rate of the rotational motion of a levitated particle can be detected through various methods, which is based on distinct driving diagrams. When the particle is driven by the trapping laser (Fig. <ref>(a)), the rotation frequency is dependent on the ellipticity of the laser polarization <cit.>. After abrupt alternation of the laser polarization, the evolution of the rotation frequency can be expressed as a function of time ω _r( t ) = ω _1 + ( ω _2 - ω _1)( 1 - e^ - ( t - t_1)/τ), where τ = 1 / γ_rot is the damping time of the rotational motion. Fig. <ref>(b) shows the damping time of a levitated particle measured at different pressures <cit.>. The damping time is inversely proportional to the pressure. In conclusion, the pressure can be calculated by the corresponding damping rate, p = πρ Rv γ_rot/10 κ. In high vacuum, the damping time of rotational motion is particularly long but easy to measure. The measured pressure uncertainty in previous experiments is typically within a few percent <cit.>. However, in medium vacuum, the damping time becomes much shorter, leading to a large measurement error. Instead, direct measurement of rotation frequency of the levitated particle, which is inversely proportional to the pressure, is a faster method <cit.>. According to earlier experiments, it has been demonstrated that the rotation frequency exhibits a small fluctuation of only 2.3%, far less than that of ionization gauges (20% fluctuation). To further enhance the accuracy of the vacuum gauges, cooling the CoM motion of levitated particles can be implemented, resulting in a decreased rotation frequency fluctuation of 0.17% <cit.>. In addition to being driven by a circularly polarized laser, a charged particle can also rotate due to the interaction with a spinning electric field generated by two pairs of orthogonal electrodes <cit.>. The four electrodes are applied with four sinusoidal waves with the same frequency and amplitude, but with a π / 2 phase difference between adjacent electrodes, as shown in Fig. <ref>(c) <cit.>. The rotation frequency of levitated particles equals to the driving frequency, and satisfies Idω _r/dt = M_ele + M_gas, where the driving torque (M_ele) generated by the electric field (| E|) is M_ele = | p|| E|sinφ, p is the dipole moment of the particle, and φ is the angle between the electric field and the dipole moment. Consequently, the damping rate of the rotational motion can be given by γ_rot = | p|| E| sinφ /( Iω _r). The angle φ can be determined in experiments by analyzing the time domain signals of both the rotational motion of the particle and the driving electric field. Specifically, there is a pressure limit at φ = π/2. In practical applications, the residual gas species are complicated, such as H_2O, He, N_2. The vacuum gauge is necessary to be calibrated based on the working environment. Fortunately, the rotational damping torque on levitated particles depends on the components of residual gases. Blakemore et al. 
measured the pressure limit of rotation as a function of the residual gas species, as shown in Fig. <ref>(d) <cit.>, indicating that the levitation system is even able to identify the gas species. Furthermore, the pressure can also be inferred from the CoM motion of levitated particles based on Eq. (<ref>) <cit.>. Liu et al. measured the air pressure around a levitated particle from atmospheric pressure down to 5 × 10^-6 Torr <cit.>. Dania et al. measured the damping rate of the CoM motion of a levitated particle at pressures as low as 5 × 10^-11 Torr, as shown in Fig. <ref>(e) <cit.>; the damping rate is proportional to the pressure. Theoretically, a vacuum gauge based on levitated optomechanics can be used in ultra-high vacuum conditions, even enabling the detection of the collision of a single gas molecule <cit.>. Yin et al. calculated the impact of an individual molecular collision on a ground-state-cooled particle <cit.>. The distributions of the particle mean phonon number after a collision with a single molecule, for two different molecular masses m_a = 6.63 × 10^-26 kg and m_b = 2.18 × 10^-25 kg, are shown in Fig. <ref>(f). With such a rich set of methods for measuring vacuum pressure, levitated optomechanical systems hold the potential to function as highly accurate pressure gauges across a broad pressure range in vacuum environments, offering an advantage over ionization gauges as they are not constrained by the residual gas species. §.§ High-bandwidth acoustic transducer The capability to precisely quantify the motion of levitated particles enables them to function as acoustic transducers. Recently, Hillberry et al. reported a well-performing acoustic sensor based on an optically trapped microsphere <cit.>. A silica microsphere is levitated in a dual-beam optical trap, and two commercial acoustic sensors, a pressure microphone and a Microflown, are placed close to the trapped microsphere for comparison, as depicted in Fig. <ref>(a). After unit calibration, the acoustic velocity sensitivities of the three sensors are shown in Fig. <ref>(b), indicating that the levitated microsphere exhibits an enhanced response to the acoustic velocity signal in the low-frequency domain and a moderate sensitivity at frequencies exceeding 200 kHz. The levitated microsphere also agrees well with the other two commercial sensors under the tone-burst test shown in Fig. <ref>(c). Compared with a microphone or a Microflown, a levitated microsphere can self-calibrate its thermal dynamics without an anechoic room. Since the velocity measured by the microsphere is a vector, this system could be beneficial for sound-source localization. §.§ Chemical and bioaerosol sensing In addition to applications in physics and engineering, levitated optomechanics offers potential opportunities in chemistry and biology in atmospheric environments, such as trapping and manipulating individual biomolecules, cells <cit.> and bioaerosol particles <cit.>. Levitated optomechanics holds great promise for understanding biological systems at the molecular and cellular levels, thereby facilitating biomedical research and driving innovation in areas such as healthcare and drug discovery. The detection of bioaerosol particles in air, e.g., fungi, pollen, bacteria and viruses, has numerous applications across various fields, including public health, environmental monitoring, agriculture, and others. Generally, bioaerosols can be detected by microscopy, X-ray spectrometry and Raman spectroscopy.
§.§ Chemical and bioaerosol sensing

In addition to applications in physics and engineering, levitated optomechanics offers potential opportunities in chemistry and biology under atmospheric conditions, such as trapping and manipulating individual biomolecules, cells <cit.>, and bioaerosol particles <cit.>. Levitated optomechanics holds great promise for understanding biological systems at the molecular and cellular levels, thereby facilitating biomedical research and driving innovation in areas such as healthcare and drug discovery. The detection of bioaerosol particles in air, e.g., fungi, pollen, bacteria, and viruses, has numerous applications across various fields, including public health, environmental monitoring, and agriculture. Generally, bioaerosols can be detected by microscopy, X-ray spectrometry, and Raman spectroscopy. However, these methods require depositing the sample on a substrate or using large sample quantities, which may change the particle properties due to the different environment and contaminants. It is therefore preferable to study bioaerosol properties under well-controlled conditions and on individual airborne particles, avoiding particle-surface contact. By combining optical levitation with Raman spectroscopy, Ai et al. measured the physical, chemical, and biological properties of individual bioaerosol particles under simulated atmospheric conditions <cit.>. Fig. <ref>(a) shows the schematic of a single bioaerosol particle levitated in a dual-beam trap at atmospheric pressure without photodamage. Using this technique, they measured the Raman spectra of seven different fungus samples in air, as shown in Fig. <ref>(b).

§ CHALLENGES AND OPPORTUNITIES

§.§ Particle launching

As most levitation systems only need to trap a single particle, an efficient and convenient particle-launching approach is necessary. Typically, particles are sprayed out of a liquid solution (such as water or isopropanol) with an ultrasonic nebulizer; the particles, dispersed in droplets, are subsequently guided through a tube and transported to the trapping region. This method is straightforward and applicable in various situations. However, in some special cases, such as experiments involving ion traps <cit.>, the charge-to-mass ratio of the particles may be insufficient for stable levitation, since the number of charges carried by the particles can be limited to only a few <cit.>. In hybrid traps with an optical cavity <cit.>, excess particles and solution droplets sprayed into the chamber may adhere to the surface of the optical cavity, reducing its quality factor. Additionally, this method requires opening the vacuum chamber every time particles are loaded, which is harmful to high-vacuum experiments. Recently, a sublimation-activated release (SAR) loading technique based on the sublimation of camphor was used to selectively load microparticles into a magneto-gravitational trap <cit.>. To increase the charge-to-mass ratio of the particles, electrospray can be employed. The schematic of an electrospray is shown in Fig. <ref>(a). Particles suspended in a liquid solution (usually alcohol or isopropanol) are pumped through a capillary metal tube. A DC high voltage of a few kV is applied to the metal tube, leading to increasing charge accumulation on each liquid droplet. Once the repulsive electrostatic force exceeds the surface tension, the droplet breaks up into smaller droplets, each ideally containing a single particle. Subsequently, the particles are accelerated and sprayed out by the electric field formed between the metal tube and the grounded electrode. Typically, a particle with a diameter of 1 μm prepared via electrospray carries between 1,000 and 10,000 elementary charges <cit.>, which is large enough for stable levitation in an ion trap, as illustrated by the sketch below.
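The sketch below makes this statement quantitative: for an ideal quadrupole (Paul) trap, stability can be checked through the Mathieu parameter q = 2 Q V_ac/(m r_0^2 Ω^2), which for negligible static field should stay below about 0.908. The trap voltage, drive frequency, trap radius, and charge number used here are illustrative assumptions, not parameters of the cited experiments.

```python
import numpy as np

# Rough check of whether an electrosprayed microsphere carries enough charge for
# stable confinement in a Paul trap. For an ideal quadrupole driven at angular
# frequency Omega with amplitude V_ac and characteristic radius r0, the Mathieu
# parameter is q = 2 Q V_ac / (m r0^2 Omega^2); with no DC field, stability
# requires roughly q < 0.908. All trap values below are assumed for illustration.
e = 1.602176634e-19          # elementary charge (C)
rho = 2200.0                 # silica density (kg/m^3)
d = 1.0e-6                   # particle diameter (m)
n_charges = 5000             # assumed net charge from electrospray (elementary charges)

V_ac = 1000.0                # drive amplitude (V), assumed
Omega = 2 * np.pi * 10e3     # drive angular frequency (rad/s), assumed
r0 = 1.0e-3                  # characteristic trap radius (m), assumed

m = rho * np.pi * d**3 / 6.0                   # particle mass (kg)
Q = n_charges * e                              # particle charge (C)
q = 2.0 * Q * V_ac / (m * r0**2 * Omega**2)    # Mathieu stability parameter
omega_sec = q * Omega / (2.0 * np.sqrt(2.0))   # secular frequency, valid for q << 1

print(f"charge-to-mass ratio: {Q/m:.2f} C/kg")
print(f"Mathieu q = {q:.2f} -> {'stable' if q < 0.908 else 'unstable'}")
print(f"approximate secular frequency: {omega_sec/(2*np.pi):.0f} Hz")
```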
As mentioned above, traditional particle loading with an ultrasonic nebulizer is inefficient and may contaminate the vacuum chamber, wasting a considerable amount of time to re-establish the vacuum. Direct particle loading in a vacuum environment provides a viable solution to this issue. One approach involves the use of piezoelectric transducers <cit.>, as shown in Figs. <ref>(b) and (c). Initially, particles are deposited onto a glass substrate affixed to a piezo element. Large particles, with radii on the order of micrometers, can be easily ejected from the glass substrate. However, when dealing with small nanoparticles, the primary challenge lies in overcoming the strong attractive force between the particles and the glass substrate. This obstacle can be mitigated by using a polytetrafluoroethylene (PTFE)-coated substrate, which reduces the adhesive forces (Fig. <ref>(c)) and has been demonstrated to launch a nanoparticle with a radius as small as 43 nm <cit.>. An alternative approach for direct particle loading in vacuum, named laser-induced acoustic desorption (LIAD), uses the impact of a pulsed laser to lift the particles from a substrate (Fig. <ref>(d)) <cit.>. Initially, a liquid solution containing the particles is dried on a 250 μm thick aluminum foil. A pulsed laser focused on the backside of the foil generates an acoustic shock wave that launches the particles from the front side, a process known as acoustic desorption. To enhance the loading efficiency, a four-rod Paul trap with a broad trapping range and a deep potential well is positioned close to the foil. The launched particles are first trapped in the Paul trap and can subsequently be delivered to an optical trap. This approach is particularly advantageous for loading particles in ultra-high-vacuum environments; direct loading of particles into a Paul ion trap at pressures down to 10^-7 Torr has been experimentally demonstrated <cit.>.

§.§ On-chip levitation

Levitated optomechanics is widely used for a range of applications, as discussed in the previous sections. Miniaturization and integration are crucial considerations for the practical deployment of levitated devices, and on-chip levitation has emerged as a promising platform for compact levitation systems. The high-NA lens used for laser focusing and signal collection can be replaced by a metalens, as demonstrated by Shen et al. <cit.> and shown in Fig. <ref>(a). The metalens is constructed by arranging phase-shifting elements on a surface to create a phase profile analogous to that of a traditional lens. The demonstrated metalens has a diameter of 425 μm, a focal length of 100 μm, and an NA of 0.9 for a 1064 nm laser in vacuum. With this thin and compact metalens, a silica nanoparticle could be levitated in vacuum with a trapping laser power of 200 mW. In contrast to a conventional objective lens, the metalens is better suited to operation under extreme conditions and provides greater flexibility in generating complicated trapping potentials through nanofabrication techniques <cit.>. Recently, Yu et al. <cit.> numerically demonstrated a guided-wave-driven metalens for creating optical tweezer arrays. The silicon nanopillars are arranged in a square lattice, as shown in Fig. <ref>(b). The incident light, launched into the taper in the fundamental transverse-electric mode (TE_0), couples to the scattered wave and acquires an extra phase, resulting in a multi-trap phase profile. This compact, integrated metalens design eliminates the bulky spatial light modulators required by conventional optical tweezer arrays. Furthermore, a pair of optical fibers carrying two linearly polarized counter-propagating lasers can be used for particle levitation <cit.>: the nanoparticles are confined in the standing-wave pattern formed by the interference of the two beams. Concurrently, the motion signals of the levitated particles can be detected using another pair of orthogonal fibers <cit.>, which better matches the scattering pattern of the particle. Additionally, the bottom layer of such a hybrid on-chip trap can incorporate planar electrodes for cooling the CoM motion of the levitated particles <cit.>.
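To give a feeling for the numbers involved, the sketch below estimates the trap depth and transverse trap frequency of a metalens trap of the kind quoted above, in the Rayleigh (dipole) approximation. The laser power, wavelength, and NA follow the quoted demonstration, while the particle radius and the Gaussian-waist approximation w_0 ≈ 0.61λ/NA are assumptions, so the result is only an order-of-magnitude estimate.

```python
import numpy as np

# Order-of-magnitude estimate of the optical trap depth and transverse trap
# frequency for a silica nanoparticle in a tightly focused beam, using the
# Rayleigh (dipole) approximation U0 = alpha * I0 / (2 eps0 c) with the
# Clausius-Mossotti polarizability. Particle radius and waist model are assumed.
eps0 = 8.8541878128e-12     # vacuum permittivity (F/m)
c = 2.99792458e8            # speed of light (m/s)
k_B = 1.380649e-23          # J/K
P = 0.2                     # trapping power (W)
lam = 1064e-9               # wavelength (m)
NA = 0.9                    # numerical aperture
n = 1.45                    # refractive index of silica
rho = 2200.0                # silica density (kg/m^3)
R = 75e-9                   # assumed particle radius (m)

w0 = 0.61 * lam / NA                                        # focal spot radius (m)
I0 = 2.0 * P / (np.pi * w0**2)                              # peak intensity (W/m^2)
alpha = 4 * np.pi * eps0 * R**3 * (n**2 - 1) / (n**2 + 2)   # polarizability (SI)
U0 = alpha * I0 / (2.0 * eps0 * c)                          # trap depth (J)
m = rho * 4.0 / 3.0 * np.pi * R**3                          # particle mass (kg)
k_trap = 4.0 * U0 / w0**2                                   # stiffness of Gaussian well
f_trap = np.sqrt(k_trap / m) / (2 * np.pi)                  # transverse frequency (Hz)

print(f"trap depth ~ {U0 / k_B:.1e} K (in units of k_B)")
print(f"transverse trap frequency ~ {f_trap / 1e3:.0f} kHz")
```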
§.§ Optomechanics with ion traps and magnetic traps

Optical levitation systems offer high trapping frequencies (> 100 kHz) for nanoparticles, regardless of the charge or magnetic susceptibility of the levitated objects, and the motion of optically levitated particles is easily controlled and measured. However, photon recoil heating and optical absorption by the particle limit stable levitation in high vacuum. Previous optical levitation experiments aimed at precision measurements in high vacuum have exclusively used silica particles due to their low optical absorption and high-temperature thermal stability, whereas optical levitation of other materials has been constrained to low-vacuum conditions, as detailed in Table <ref>. Moreover, as the size of the levitated particle increases to the micrometer scale, the trapping frequency rapidly drops to tens of hertz. To address these limitations, ion traps and magnetic traps provide viable alternatives, enabling particle levitation in high vacuum with negligible heating effects.

Ion traps create a three-dimensional confining potential for charged particles by employing a combination of static and AC quadrupole potentials. Various ion-trap designs have been reported, including the ring trap <cit.>, the four-rod trap <cit.>, a trap formed by a pair of tip electrodes <cit.>, and the surface trap <cit.>. Stable levitation of micro- and nanoscale particles, such as diamonds, in an ion trap has been demonstrated <cit.>. Conangla et al. used an ion trap formed by a pair of tip electrodes to levitate charged diamonds, as shown in Fig. <ref>(a). The optical readout of electron spins in a levitated diamond in high vacuum with a surface ion trap was first demonstrated by Jin et al. <cit.>. The surface ion trap is fabricated on a sapphire substrate, with its central region measuring only about 2 mm × 2 mm × 0.4 mm, as shown in Fig. <ref>(b). The internal temperature of the levitated nanodiamond remains stable at about 350 K under a pressure of 6 × 10^-6 Torr, which is still moderate enough for quantum control of the NV spins embedded in the diamond. In addition, the levitated diamonds can be driven to rotate at frequencies of up to 20 MHz, surpassing typical NV-center electron-spin dephasing rates. Delord et al. demonstrated strong spin-mechanical coupling of levitated diamonds in an ion trap <cit.>, offering the potential to use spins to create non-classical states of motion.

Magnetic levitation, including diamagnetic levitation <cit.> and diamagnetically stabilized magnet levitation <cit.>, is an alternative technique for object levitation. The method is widely used in applications including force and acceleration sensors <cit.>, fluid viscosity sensors <cit.>, and gas flowmeters <cit.>. For a diamagnetic particle levitated in an external magnetic field B, the potential energy of the particle can be expressed as U = -χ B^2 V/(2 μ_0) + mgy <cit.>, where the two terms arise from the external magnetic field and the Earth's gravity, respectively; χ, V, and m are the magnetic susceptibility, volume, and mass of the levitated particle, μ_0 is the vacuum permeability, g is the gravitational acceleration, and y is the vertical position. Stable trapping of a diamagnetic particle requires the magnetic force on the particle to balance gravity, as illustrated by the sketch below.
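The balance condition follows directly from the potential above: setting the vertical magnetic force equal to gravity gives, per unit volume, (|χ|/μ_0) B |dB/dy| = ρ g, i.e., a required field-gradient product B |dB/dy| = μ_0 ρ g/|χ|. The sketch below evaluates this for a few common diamagnetic materials using approximate textbook susceptibility and density values (assumed here, not taken from the cited works).

```python
import numpy as np

# Levitation condition implied by U = -chi*B^2*V/(2*mu0) + m*g*y: balancing the
# vertical magnetic force against gravity gives, per unit volume,
#   (|chi| / mu0) * B * |dB/dy| = rho * g,
# so the required field-gradient product is B*|dB/dy| = mu0 * rho * g / |chi|.
# Volume susceptibilities and densities are approximate textbook values.
mu0 = 4 * np.pi * 1e-7      # vacuum permeability (T m/A)
g = 9.81                    # gravitational acceleration (m/s^2)

materials = {
    # name: (volume susceptibility chi, density rho in kg/m^3)
    "water":              (-9.0e-6,  1000.0),
    "bismuth":            (-1.66e-4, 9780.0),
    "pyrolytic graphite": (-4.5e-4,  2200.0),   # susceptibility along the c-axis
}

for name, (chi, rho) in materials.items():
    BdB = mu0 * rho * g / abs(chi)   # required B * |dB/dy| in T^2/m
    print(f"{name:>18s}: B*dB/dy >= {BdB:8.0f} T^2/m")
```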
Recently, Leng et al. reported a low-drift room-temperature gravimeter based on diamagnetically stabilized magnet levitation, with an acceleration sensitivity of 15 μGal/√(Hz) and a drift of 61 μGal per day (the best among relative gravimeters) <cit.>. The levitated micro-resonator, with a proof mass of 215 mg, is housed in an aluminum frame with a magnetic shield inside a vacuum chamber, as shown in Fig. <ref>(c). Fig. <ref>(d) shows the recorded acceleration data alongside the theoretically calculated Earth tides for comparison, with a high correlation coefficient of 0.97. Earthquake events are also recorded and appear as spikes in the raw data, as shown in Fig. <ref>(e).

§ CONCLUSION

Levitated optomechanics, an area that has progressed rapidly since 2010 <cit.>, holds significant promise for diverse applications owing to its high force, acceleration, and torque sensitivities, which have been demonstrated in laboratory environments <cit.>. Levitated objects are also sensitive to external fields and their variations, rendering them attractive candidates for high-precision sensors such as accelerometers and gyroscopes. A compact accelerometer based on an optically levitated particle has been installed on a vehicle running on a road <cit.>, demonstrating its potential for real-world applications. Levitation systems are well suited to manipulation in vacuum and microgravity environments, pointing to potential instrument advancements for future space missions. They also offer the ability to conduct non-destructive testing and to characterize materials at the nanoscale. While these applications hold promise, it is necessary to acknowledge the challenges in implementing levitation systems beyond laboratory environments. Addressing technical issues such as particle loading in vacuum, system miniaturization, stability maintenance, and scalability for practical applications remains a crucial focus of this evolving field.

We thank the support from the Office of Naval Research under Grant No. N00014-18-1-2371, the National Science Foundation under Grant PHY-2110591, and the Gordon and Betty Moore Foundation.

[Ashkin and Dziedzic(1971)]Ashkin1971Levitation author author A. Ashkin and author J. M. Dziedzic, title title Optical Levitation by Radiation Pressure, https://doi.org/10.1063/1.1653919 journal journal Applied Physics Letters volume 19, pages 283 (year 1971)NoStop [Li et al.(2010)Li, Kheifets, Medellin, and Raizen]Li2010Measurement author author T. Li, author S. Kheifets, author D. Medellin, and author M. G. Raizen, title title Measurement of the instantaneous velocity of a Brownian particle, https://doi.org/10.1126/science.1189403 journal journal Science volume 328, pages 1673 (year 2010)NoStop [Li et al.(2011)Li, Kheifets, and Raizen]Li2011Millikelvin author author T. Li, author S. Kheifets, and author M. G. Raizen, title title Millikelvin cooling of an optically trapped microsphere in vacuum, https://doi.org/10.1038/nphys1952 journal journal Nat. Phys. volume 7, pages 527 (year 2011)NoStop [Bang et al.(2020)Bang, Seberson, Ju, Ahn, Xu, Gao, Robicheaux, and Li]Bang2020Five author author J. Bang, author T. Seberson, author P. Ju, author J. Ahn, author Z. Xu, author X. Gao, author F. Robicheaux, and author T.
Li, title title Five-dimensional cooling and nonlinear dynamics of an optically levitated nanodumbbell, https://doi.org/10.1103/PhysRevResearch.2.043054 journal journal Phys. Rev. Res. volume 2, pages 043054 (year 2020)NoStop [Tebbenjohanns et al.(2019)Tebbenjohanns, Frimmer, Militaru, Jain, and Novotny]Tebbenjohanns2019Cold author author F. Tebbenjohanns, author M. Frimmer, author A. Militaru, author V. Jain, and author L. Novotny, title title Cold damping of an optically levitated nanoparticle to microkelvin temperatures, https://doi.org/10.1103/PhysRevLett.122.223601 journal journal Phys. Rev. Lett. volume 122, pages 223601 (year 2019)NoStop [Conangla et al.(2019)Conangla, Ricci, Cuairan, Schell, Meyer, and Quidant]Conangla2019Optimal author author G. P. Conangla, author F. Ricci, author M. T. Cuairan, author A. W. Schell, author N. Meyer, and author R. Quidant, title title Optimal feedback cooling of a charged levitated nanoparticle with adaptive control, https://doi.org/10.1103/PhysRevLett.122.223602 journal journal Phys. Rev. Lett. volume 122, pages 223602 (year 2019)NoStop [Magrini et al.(2021)Magrini, Rosenzweig, Bach, Deutschmann-Olek, Hofer, Hong, Kiesel, Kugi, and Aspelmeyer]Magrini2021Real author author L. Magrini, author P. Rosenzweig, author C. Bach, author A. Deutschmann-Olek, author S. G. Hofer, author S. Hong, author N. Kiesel, author A. Kugi, and author M. Aspelmeyer, title title Real-time optimal quantum control of mechanical motion at room temperature, https://doi.org/10.1038/s41586-021-03602-3 journal journal Nature volume 595, pages 373 (year 2021)NoStop [Tebbenjohanns et al.(2021)Tebbenjohanns, Mattana, Rossi, Frimmer, and Novotny]Tebbenjohanns2021Quantum author author F. Tebbenjohanns, author M. L. Mattana, author M. Rossi, author M. Frimmer, and author L. Novotny, title title Quantum control of a nanoparticle optically levitated in cryogenic free space, https://doi.org/10.1038/s41586-021-03617-w journal journal Nature volume 595, pages 378 (year 2021)NoStop [Blakemore et al.(2022)Blakemore, Martin, Fieguth, Priel, Venugopalan, Kawasaki, and Gratta]Blakemore2022Librational author author C. P. Blakemore, author D. Martin, author A. Fieguth, author N. Priel, author G. Venugopalan, author A. Kawasaki, and author G. Gratta, title title Librational feedback cooling, https://doi.org/10.1103/PhysRevA.106.023503 journal journal Phys. Rev. A volume 106, pages 023503 (year 2022)NoStop [Liška et al.(2023)Liška, Zemánková, Svak, Jákl, Ježek, Bránecký, Simpson, Zemánek, and Brzobohatý]Liska2023Cold author author V. Liška, author T. Zemánková, author V. Svak, author P. Jákl, author J. Ježek, author M. Bránecký, author S. H. Simpson, author P. Zemánek, and author O. Brzobohatý, title title Cold damping of levitated optically coupled nanoparticles, https://doi.org/10.1364/OPTICA.496072 journal journal Optica volume 10, pages 1203 (year 2023)NoStop [Gieseler et al.(2012)Gieseler, Deutsch, Quidant, and Novotny]Gieseler2012Subkelvin author author J. Gieseler, author B. Deutsch, author R. Quidant, and author L. Novotny, title title Subkelvin parametric feedback cooling of a laser-trapped nanoparticle, https://doi.org/10.1103/PhysRevLett.109.103603 journal journal Phys. Rev. Lett. volume 109, pages 103603 (year 2012)NoStop [Zheng et al.(2019)Zheng, Guo, and Sun]Zheng2019Cooling author author Y. Zheng, author G.-C. Guo, and author F.-W. Sun, title title Cooling of a levitated nanoparticle with digital parametric feedback, https://doi.org/10.1063/1.5099284 journal journal Appl. Phys. 
Lett. volume 115, pages 101105 (year 2019)NoStop [Gao et al.(2024)Gao, van der Laan, Zieli ńńska, Militaru, Novotny, and Frimmer]Gao2024Feedback author author J. Gao, author F. van der Laan, author J. A. Zieli ńńska, author A. Militaru, author L. Novotny, and author M. Frimmer, title title Feedback cooling a levitated nanoparticle's libration to below 100 phonons, https://doi.org/10.1103/PhysRevResearch.6.033009 journal journal Phys. Rev. Res. volume 6, pages 033009 (year 2024)NoStop [Arita et al.(2022)Arita, Bruce, Wright, Simpson, Zemánek, and Dholakia]Arita2022All author author Y. Arita, author G. D. Bruce, author E. M. Wright, author S. H. Simpson, author P. Zemánek, and author K. Dholakia, title title All-optical sub-kelvin sympathetic cooling of a levitated microsphere in vacuum, https://doi.org/10.1364/OPTICA.466337 journal journal Optica volume 9, pages 1000 (year 2022)NoStop [Millen et al.(2015)Millen, Fonseca, Mavrogordatos, Monteiro, and Barker]Millen2015Cavity author author J. Millen, author P. Z. G. Fonseca, author T. Mavrogordatos, author T. S. Monteiro, and author P. F. Barker, title title Cavity cooling a single charged levitated nanosphere, https://doi.org/10.1103/PhysRevLett.114.123602 journal journal Phys. Rev. Lett. volume 114, pages 123602 (year 2015)NoStop [Delić et al.(2020)Delić, Reisenbauer, Dare, Grass, Vuletić, Kiesel, and Aspelmeyer]Delic2020Cooling author author U. Delić, author M. Reisenbauer, author K. Dare, author D. Grass, author V. Vuletić, author N. Kiesel, and author M. Aspelmeyer, title title Cooling of a levitated nanoparticle to the motional quantum ground state, https://doi.org/10.1126/science.aba3993 journal journal Science volume 367, pages 892 (year 2020)NoStop [Pontin et al.(2023)Pontin, Fu, Toroš, Monteiro, and Barker]Pontin2023Simultaneous author author A. Pontin, author H. Fu, author M. Toroš, author T. S. Monteiro, and author P. F. Barker, title title Simultaneous cavity cooling of all six degrees of freedom of a levitated nanoparticle, https://doi.org/10.1038/s41567-023-02006-6 journal journal Nat. Phys. volume 19, pages 1003 (year 2023)NoStop [Piotrowski et al.(2023)Piotrowski, Windey, Vijayan, de los Ríos Sommer, Meyer, Quidant, Romero-Isart, Reimann, and Novotny]Piotrowski2023Simultaneous author author J. Piotrowski, author D. Windey, author C. Vijayan, Jayadev Gonzalez-Ballestero, author A. de los Ríos Sommer, author N. Meyer, author R. Quidant, author O. Romero-Isart, author R. Reimann, and author L. Novotny, title title Simultaneous ground-state cooling of two mechanical modes of a levitated nanoparticle, https://doi.org/10.1038/s41567-023-01956-1 journal journal Nat. Phys. volume 19, pages 1009 (year 2023)NoStop [Ranjit et al.(2016)Ranjit, Cunningham, Casey, and Geraci]Ranjit2016Zeptonewton author author G. Ranjit, author M. Cunningham, author K. Casey, and author A. A. Geraci, title title Zeptonewton force sensing with nanospheres in an optical lattice, https://doi.org/10.1103/PhysRevA.93.053801 journal journal Phys. Rev. A volume 93, pages 053801 (year 2016)NoStop [Monteiro et al.(2020)Monteiro, Li, Afek, Li, Mossman, and Moore]Monteiro2020Force author author F. Monteiro, author W. Li, author G. Afek, author C.-l. Li, author M. Mossman, and author D. C. Moore, title title Force and acceleration sensing with optically levitated nanogram masses at microkelvin temperatures, https://doi.org/10.1103/PhysRevA.101.053835 journal journal Phys. Rev. 
A volume 101, pages 053835 (year 2020)NoStop [Ahn et al.(2020)Ahn, Xu, Bang, Ju, Gao, and Li]Ahn2020Ultrasensitive author author J. Ahn, author Z. Xu, author J. Bang, author P. Ju, author X. Gao, and author T. Li, title title Ultrasensitive torque detection with an optically levitated nanorotor, https://doi.org/10.1038/s41565-019-0605-9 journal journal Nat. Nanotechnol. volume 15, pages 89 (year 2020)NoStop [Ju et al.(2023)Ju, Jin, Shen, Duan, Xu, Gao, Ni, and Li]Ju2023Near author author P. Ju, author Y. Jin, author K. Shen, author Y. Duan, author Z. Xu, author X. Gao, author X. Ni, and author T. Li, title title Near-field GHz rotation and sensing with an optically levitated nanodumbbell, https://doi.org/10.1021/acs.nanolett.3c02442 journal journal Nano Lett. volume 23, pages 10157 (year 2023)NoStop [Hebestreit et al.(2018)Hebestreit, Frimmer, Reimann, and Novotny]Hebestreit2018Sensing author author E. Hebestreit, author M. Frimmer, author R. Reimann, and author L. Novotny, title title Sensing static forces with free-falling nanoparticles, https://doi.org/10.1103/PhysRevLett.121.063602 journal journal Phys. Rev. Lett. volume 121, pages 063602 (year 2018)NoStop [Timberlake et al.(2019)Timberlake, Gasbarri, Vinante, Setter, and Ulbricht]Timberlake2019Acceleration author author C. Timberlake, author G. Gasbarri, author A. Vinante, author A. Setter, and author H. Ulbricht, title title Acceleration sensing with magnetically levitated oscillators above a superconductor, https://doi.org/10.1063/1.5129145 journal journal Appl. Phys. Lett. volume 115, pages 224101 (year 2019)NoStop [Priel et al.(2022)Priel, Fieguth, Blakemore, Hough, Kawasaki, Martin, Venugopalan, and Gratta]Priel2022Dipole author author N. Priel, author A. Fieguth, author C. P. Blakemore, author E. Hough, author A. Kawasaki, author D. Martin, author G. Venugopalan, and author G. Gratta, title title Dipole moment background measurement and suppression for levitated charge sensors, https://doi.org/10.1126/sciadv.abo2361 journal journal Science Advances volume 8, pages eabo2361 (year 2022)NoStop [Zhu et al.(2023)Zhu, Fu, Gao, Li, Chen, Wang, Chen, and Hu]Zhu2023Nanoscale author author S. Zhu, author Z. Fu, author X. Gao, author C. Li, author Z. Chen, author Y. Wang, author X. Chen, and author H. Hu, title title Nanoscale electric field sensing using a levitated nano-resonator with net charge, https://doi.org/10.1364/PRJ.475793 journal journal Photon. Res. volume 11, pages 279 (year 2023)NoStop [Liang et al.(2023)Liang, Zhu, He, Chen, Wang, Li, Fu, Gao, Chen, Li, Zhu, and Hu]Liang2023Yoctonewton author author T. Liang, author S. Zhu, author P. He, author Z. Chen, author Y. Wang, author C. Li, author Z. Fu, author X. Gao, author X. Chen, author N. Li, author Q. Zhu, and author H. Hu, title title Yoctonewton force detection based on optically levitated oscillator, https://doi.org/https://doi.org/10.1016/j.fmre.2022.09.021 journal journal Fundamental Research volume 3, pages 57 (year 2023)NoStop [Geraci et al.(2010)Geraci, Papp, and Kitching]Geraci2010Short author author A. A. Geraci, author S. B. Papp, and author J. Kitching, title title Short-range force detection using optically cooled levitated microspheres, https://doi.org/10.1103/PhysRevLett.105.101101 journal journal Phys. Rev. Lett. volume 105, pages 101101 (year 2010)NoStop [Chen et al.(2022)Chen, Liu, and Zhu]Chen2022constraining author author L. Chen, author J. Liu, and author K.-D. 
Zhu, title title Constraining the axion-nucleon coupling and non-newtonian gravity with a levitated optomechanical device, https://doi.org/10.1103/PhysRevD.106.095007 journal journal Phys. Rev. D volume 106, pages 095007 (year 2022)NoStop [Carney et al.(2021)Carney, Krnjaic, Moore, Regal, Afek, Bhave, Brubaker, Corbitt, Cripe, Crisosto, Geraci, Ghosh, Harris, Hook, Kolb, Kunjummen, Lang, Li, Lin, Liu, Lykken, Magrini, Manley, Matsumoto, Monte, Monteiro, Purdy, Riedel, Singh, Singh, Sinha, Taylor, Qin, Wilson, and Zhao]Carney2021Mechanical author author D. Carney, author G. Krnjaic, author D. C. Moore, author C. A. Regal, author G. Afek, author S. Bhave, author B. Brubaker, author T. Corbitt, author J. Cripe, author N. Crisosto, author A. Geraci, author S. Ghosh, author J. G. E. Harris, author A. Hook, author E. W. Kolb, author J. Kunjummen, author R. F. Lang, author T. Li, author T. Lin, author Z. Liu, author J. Lykken, author L. Magrini, author J. Manley, author N. Matsumoto, author A. Monte, author F. Monteiro, author T. Purdy, author C. J. Riedel, author R. Singh, author S. Singh, author K. Sinha, author J. M. Taylor, author J. Qin, author D. J. Wilson, and author Y. Zhao, title title Mechanical quantum sensing in the search for dark matter, https://doi.org/10.1088/2058-9565/abcfcd journal journal Quantum Sci. Technol. volume 6, pages 024002 (year 2021)NoStop [Afek et al.(2022)Afek, Carney, and Moore]Afek2022Coherent author author G. Afek, author D. Carney, and author D. C. Moore, title title Coherent scattering of low mass dark matter from optically trapped sensors, https://doi.org/10.1103/PhysRevLett.128.101301 journal journal Phys. Rev. Lett. volume 128, pages 101301 (year 2022)NoStop [Li et al.(2023a)Li, Li, Zhang, Dong, and Hu]Li2023Collective author author Y. Li, author C. Li, author J. Zhang, author Y. Dong, and author H. Hu, title title Collective-motion-enhanced acceleration sensing via an optically levitated microsphere array, https://doi.org/10.1103/PhysRevApplied.20.024018 journal journal Phys. Rev. Appl. volume 20, pages 024018 (year 2023a)NoStop [Han et al.(2023)Han, Xiong, Chen, Huang, Su, Kuang, Tan, Xiao, and Luo]Han2023Feedback author author X. Han, author W. Xiong, author X. Chen, author Z. Huang, author C. Su, author T. Kuang, author Z. Tan, author G. Xiao, and author H. Luo, title title Feedback-control acceleration sensing: Toward a practical small-scale levitated optomechanical accelerometer, https://doi.org/10.1109/JSEN.2023.3327877 journal journal IEEE Sensors Journal volume 23, pages 30163 (year 2023)NoStop [Jiang et al.(2020)Jiang, Rudge, and Hosseini]Jiang2020Superconducting author author X. Jiang, author J. Rudge, and author M. Hosseini, title title Superconducting levitation of a mg-scale cavity mirror, https://doi.org/10.1063/5.0008116 journal journal Appl. Phys. Lett. volume 116, pages 244103 (year 2020)NoStop [Ahrens et al.(2024)Ahrens, Ji, Budker, Timberlake, Ulbricht, and Vinante]Ahrens2024Levitated author author F. Ahrens, author W. Ji, author D. Budker, author C. Timberlake, author H. Ulbricht, and author A. Vinante, @noop title Levitated ferromagnetic magnetometer with energy resolution well below ħ (year 2024), https://arxiv.org/abs/2401.03774 arXiv:2401.03774 [quant-ph] NoStop [Blakemore et al.(2020)Blakemore, Martin, Fieguth, Kawasaki, Priel, Rider, and Gratta]Blakemore2020Absolute author author C. P. Blakemore, author D. Martin, author A. Fieguth, author A. Kawasaki, author N. Priel, author A. D. Rider, and author G. 
Gratta, title title Absolute pressure and gas species identification with an optically levitated rotor, https://doi.org/10.1116/1.5139638 journal journal J. Vac. Sci. Technol. B volume 38, pages 024201 (year 2020)NoStop [Hillberry and Raizen(2024)]Hillberry2024acoustic author author L. Hillberry and author M. Raizen, title title Optically trapped microspheres are high-bandwidth acoustic transducers, https://doi.org/10.1103/PhysRevApplied.21.014031 journal journal Phys. Rev. Appl. volume 21, pages 014031 (year 2024)NoStop [Ai et al.(2022)Ai, Wang, Pan, and Videen]Ai2022Characterization author author Y. Ai, author C. Wang, author Y.-L. Pan, and author G. Videen, title title Characterization of single fungal aerosol particles in a reactive atmospheric environment using time-resolved optical trapping-raman spectroscopy (ot-rs), https://doi.org/10.1039/D2EA00030J journal journal Environ. Sci. Atmos. volume 2, pages 591 (year 2022)NoStop [Ahn et al.(2018)Ahn, Xu, Bang, Deng, Hoang, Han, Ma, and Li]Ahn2018Optically author author J. Ahn, author Z. Xu, author J. Bang, author Y.-H. Deng, author T. M. Hoang, author Q. Han, author R.-M. Ma, and author T. Li, title title Optically levitated nanodumbbell torsion balance and GHz nanomechanical rotor, https://doi.org/10.1103/PhysRevLett.121.033603 journal journal Phys. Rev. Lett. volume 121, pages 033603 (year 2018)NoStop [Kuhn et al.(2017a)Kuhn, Stickler, Kosloff, Patolsky, Hornberger, Arndt, and Millen]Kuhn2017Optically author author S. Kuhn, author B. A. Stickler, author A. Kosloff, author F. Patolsky, author K. Hornberger, author M. Arndt, and author J. Millen, title title Optically driven ultra-stable nanomechanical rotor, https://doi.org/10.1038/s41467-017-01902-9 journal journal Nat. Commun. volume 8, pages 1670 (year 2017a)NoStop [Winstone et al.(2022)Winstone, Wang, Klomp, Felsted, Laeuger, Gupta, Grass, Aggarwal, Sprague, Pauzauskie, Larson, Kalogera, and Geraci]Winstone2022Optical author author G. Winstone, author Z. Wang, author S. Klomp, author R. G. Felsted, author A. Laeuger, author C. Gupta, author D. Grass, author N. Aggarwal, author J. Sprague, author P. J. Pauzauskie, author S. L. Larson, author V. Kalogera, and author A. A. Geraci (collaboration LSD Collaboration), title title Optical trapping of high-aspect-ratio NaYF hexagonal prisms for kHz-MHz gravitational wave detectors, https://doi.org/10.1103/PhysRevLett.129.053604 journal journal Phys. Rev. Lett. volume 129, pages 053604 (year 2022)NoStop [Arita et al.(2013)Arita, Mazilu, and Dholakia]Arita2013Laser author author Y. Arita, author M. Mazilu, and author K. Dholakia, title title Laser-induced rotation and cooling of a trapped microgyroscope in vacuum, https://doi.org/10.1038/ncomms3374 journal journal Nat. Commun. volume 4, pages 2374 (year 2013)NoStop [Lewandowski et al.(2021)Lewandowski, Knowles, Etienne, and D'Urso]Lewandowski2021high author author C. W. Lewandowski, author T. D. Knowles, author Z. B. Etienne, and author B. D'Urso, title title High-sensitivity accelerometry with a feedback-cooled magnetically levitated microsphere, https://doi.org/10.1103/PhysRevApplied.15.014050 journal journal Phys. Rev. Appl. volume 15, pages 014050 (year 2021)NoStop [Neukirch et al.(2015)Neukirch, von Haartman, Rosenholm, and Nick Vamivakas]Neukirch2015Multi author author L. P. Neukirch, author E. von Haartman, author J. M. Rosenholm, and author A. 
Nick Vamivakas, title title Multi-dimensional single-spin nano-optomechanics with a levitated nanodiamond, https://doi.org/10.1038/nphoton.2015.162 journal journal Nat. Photon. volume 9, pages 653 (year 2015)NoStop [Hoang et al.(2016)Hoang, Ahn, Bang, and Li]Hoang2016Electron author author T. M. Hoang, author J. Ahn, author J. Bang, and author T. Li, title title Electron spin control of optically levitated nanodiamonds in vacuum, https://doi.org/10.1038/ncomms12250 journal journal Nat. Commun. volume 7, pages 12250 (year 2016)NoStop [Jin et al.(2024)Jin, Shen, Ju, Gao, Zu, Grine, and Li]Jin2024Quantum author author Y. Jin, author K. Shen, author P. Ju, author X. Gao, author C. Zu, author A. J. Grine, and author T. Li, title title Quantum control and berry phase of electron spins in rotating levitated diamonds in high vacuum, https://doi.org/10.1038/s41467-024-49175-3 journal journal Nature Communications volume 15, pages 5063 (year 2024)NoStop [Yin et al.(2013a)Yin, Li, Zhang, and Duan]Yin2013Large author author Z.-q. Yin, author T. Li, author X. Zhang, and author L. M. Duan, title title Large quantum superpositions of a levitated nanodiamond through spin-optomechanical coupling, https://doi.org/10.1103/PhysRevA.88.033614 journal journal Phys. Rev. A volume 88, pages 033614 (year 2013a)NoStop [Bose et al.(2017)Bose, Mazumdar, Morley, Ulbricht, Toro šš, Paternostro, Geraci, Barker, Kim, and Milburn]Bose2017Spin author author S. Bose, author A. Mazumdar, author G. W. Morley, author H. Ulbricht, author M. Toro šš, author M. Paternostro, author A. A. Geraci, author P. F. Barker, author M. S. Kim, and author G. Milburn, title title Spin entanglement witness for quantum gravity, https://doi.org/10.1103/PhysRevLett.119.240401 journal journal Phys. Rev. Lett. volume 119, pages 240401 (year 2017)NoStop [Marletto and Vedral(2017)]Marletto2017Gravitationally author author C. Marletto and author V. Vedral, title title Gravitationally induced entanglement between two massive particles is sufficient evidence of quantum effects in gravity, https://doi.org/10.1103/PhysRevLett.119.240402 journal journal Phys. Rev. Lett. volume 119, pages 240402 (year 2017)NoStop [Ledbetter et al.(2012)Ledbetter, Jensen, Fischer, Jarmola, and Budker]Ledbetter2012Gyroscopes author author M. P. Ledbetter, author K. Jensen, author R. Fischer, author A. Jarmola, and author D. Budker, title title Gyroscopes based on nitrogen-vacancy centers in diamond, https://doi.org/10.1103/PhysRevA.86.052116 journal journal Phys. Rev. A volume 86, pages 052116 (year 2012)NoStop [Zhang and Yin(2023)]Zhang2023Highly author author H. Zhang and author Z.-Q. Yin, title title Highly sensitive gyroscope based on a levitated nanodiamond, https://doi.org/10.1364/OE.482436 journal journal Opt. Express volume 31, pages 8139 (year 2023)NoStop [Ma et al.(2017)Ma, Hoang, Gong, Li, and Yin]Ma2017Proposal author author Y. Ma, author T. M. Hoang, author M. Gong, author T. Li, and author Z.-q. Yin, title title Proposal for quantum many-body simulation and torsional matter-wave interferometry with a levitated nanodiamond, https://doi.org/10.1103/PhysRevA.96.023827 journal journal Phys. Rev. A volume 96, pages 023827 (year 2017)NoStop [Rusconi et al.(2022)Rusconi, Perdriat, Hétet, Romero-Isart, and Stickler]Rusconi2022Spin author author C. C. Rusconi, author M. Perdriat, author G. Hétet, author O. Romero-Isart, and author B. A. 
Stickler, title title Spin-controlled quantum interference of levitated nanorotors, https://doi.org/10.1103/PhysRevLett.129.093605 journal journal Phys. Rev. Lett. volume 129, pages 093605 (year 2022)NoStop [Delord et al.(2020)Delord, Huillery, Nicolas, and Hétet]Delord2020Spin author author T. Delord, author P. Huillery, author L. Nicolas, and author G. Hétet, title title Spin-cooling of the motion of a trapped diamond, https://doi.org/10.1038/s41586-020-2133-z journal journal Nature volume 580, pages 56 (year 2020)NoStop [Chudo et al.(2014)Chudo, Ono, Harii, Matsuo, Ieda, Haruki, Okayasu, Maekawa, Yasuoka, and Saitoh]Chudo2014Observation author author H. Chudo, author M. Ono, author K. Harii, author M. Matsuo, author J. Ieda, author R. Haruki, author S. Okayasu, author S. Maekawa, author H. Yasuoka, and author E. Saitoh, title title Observation of Barnett fields in solids by nuclear magnetic resonance, https://doi.org/10.7567/APEX.7.063004 journal journal Applied Physics Express volume 7, pages 063004 (year 2014)NoStop [Wood et al.(2017)Wood, Lilette, Fein, Perunicic, Hollenberg, Scholten, and Martin]Wood2017Magnetic author author A. Wood, author E. Lilette, author Y. Y. Fein, author V. S. Perunicic, author L. Hollenberg, author R. E. Scholten, and author A. M. Martin, title title Magnetic pseudo-fields in a rotating electron–nuclear spin system, https://doi.org/10.1038/nphys4221 journal journal Nat. Phys. volume 13, pages 1070 (year 2017)NoStop [Yin et al.(2013b)Yin, Geraci, and Li]yin2013optomechanics author author Z.-Q. Yin, author A. A. Geraci, and author T. Li, title title Optomechanics of levitated dielectric particles, https://doi.org/10.1142/S0217979213300181 journal journal International Journal of Modern Physics B volume 27, pages 1330018 (year 2013b)NoStop [Gieseler and Millen(2018)]gieseler2018levitated author author J. Gieseler and author J. Millen, title title Levitated nanoparticles for microscopic thermodynamics—a review, @noop journal journal Entropy volume 20, pages 326 (year 2018)NoStop [Millen et al.(2020)Millen, Monteiro, Pettit, and Vamivakas]millen2020optomechanics author author J. Millen, author T. S. Monteiro, author R. Pettit, and author A. N. Vamivakas, title title Optomechanics with levitated particles, https://doi.org/10.1088/1361-6633/ab6100 journal journal Reports on Progress in Physics volume 83, pages 026401 (year 2020)NoStop [Gonzalez-Ballestero et al.(2021)Gonzalez-Ballestero, Aspelmeyer, Novotny, Quidant, and Romero-Isart]Gonzalez2021Levitodynamics author author C. Gonzalez-Ballestero, author M. Aspelmeyer, author L. Novotny, author R. Quidant, and author O. Romero-Isart, title title Levitodynamics: Levitation and control of microscopic objects in vacuum, https://doi.org/10.1126/science.abg3027 journal journal Science volume 374, pages eabg3027 (year 2021)NoStop [Moore and Geraci(2021)]moore2021searching author author D. C. Moore and author A. A. Geraci, title title Searching for new physics using optically levitated sensors, @noop journal journal Quantum Science and Technology volume 6, pages 014008 (year 2021)NoStop [Winstone et al.(2023)Winstone, Bhattacharya, Geraci, Li, Pauzauskie, and Vamivakas]winstone2023levitated author author G. Winstone, author M. Bhattacharya, author A. A. Geraci, author T. Li, author P. J. Pauzauskie, and author N. 
Vamivakas, title title Levitated optomechanics: A tutorial and perspective, @noop journal journal arXiv preprint arXiv:2307.11858 (year 2023)NoStop [Jin et al.(2021)Jin, Yan, Rahman, Li, Yu, and Zhang]Jin2021GHz author author Y. Jin, author J. Yan, author S. J. Rahman, author J. Li, author X. Yu, and author J. Zhang, title title 6 GHz hyperfast rotation of an optically levitated nanoparticle in vacuum, https://doi.org/10.1364/PRJ.422975 journal journal Photon. Res. volume 9, pages 1344 (year 2021)NoStop [Harada and Asakura(1996)]Harada1996Radiation author author Y. Harada and author T. Asakura, title title Radiation forces on a dielectric sphere in the Rayleigh scattering regime, https://doi.org/https://doi.org/10.1016/0030-4018(95)00753-9 journal journal Opt. Commun. volume 124, pages 529 (year 1996)NoStop [Asano(1979)]Asano1979Light author author S. Asano, title title Light scattering properties of spheroidal particles, https://doi.org/10.1364/AO.18.000712 journal journal Appl. Opt. volume 18, pages 712 (year 1979)NoStop [Ren et al.(1997)Ren, Gréhan, and Gouesbet]Ren1997Scattering author author K. F. Ren, author G. Gréhan, and author G. Gouesbet, title title Scattering of a gaussian beam by an infinite cylinder in the framework of generalized Lorenz-Mie theory: formulation and numerical results, https://doi.org/10.1364/JOSAA.14.003014 journal journal J. Opt. Soc. Am. A volume 14, pages 3014 (year 1997)NoStop [Tzarouchis and Sihvola(2018)]Tzarouchis2018Light author author D. Tzarouchis and author A. Sihvola, title title Light scattering by a dielectric sphere: Perspectives on the Mie resonances, https://doi.org/10.3390/app8020184 journal journal Appl. Sci. volume 8, pages 184 (year 2018)NoStop [Jin et al.(2018)Jin, Yu, and Zhang]Jin2018Optically author author Y. Jin, author X. Yu, and author J. Zhang, title title Optically levitated nanosphere with high trapping frequency, https://doi.org/10.1007/s11433-018-9230-6 journal journal Sci. China-Phy. Mech. Astron. volume 61, pages 114221 (year 2018)NoStop [Jin et al.(2019)Jin, Yu, and Zhang]Jin2019Polarization author author Y. Jin, author X. Yu, and author J. Zhang, title title Polarization-dependent center-of-mass motion of an optically levitated nanosphere, https://doi.org/10.1364/JOSAB.36.002369 journal journal J. Opt. Soc. Am. B volume 36, pages 2369 (year 2019)NoStop [Shen et al.(2021)Shen, Duan, Ju, Xu, Chen, Zhang, Ahn, Ni, and Li]Shen2021Onchip author author K. Shen, author Y. Duan, author P. Ju, author Z. Xu, author X. Chen, author L. Zhang, author J. Ahn, author X. Ni, and author T. Li, title title On-chip optical levitation with a metalens in vacuum, https://doi.org/10.1364/OPTICA.438410 journal journal Optica volume 8, pages 1359 (year 2021)NoStop [Li(2013)]Li2013Fundamental author author T. Li, https://doi.org/10.1007/978-1-4614-6031-2 title Fundamental Tests of Physics with Optically Trapped Microspheres (publisher Springer New York, year 2013)NoStop [Yu et al.(2022)Yu, Jin, Shen, Han, and Zhang]Yu2022Hermitian author author X. Yu, author Y. Jin, author H. Shen, author Z. Han, and author J. Zhang, title title Hermitian and non-Hermitian normal-mode splitting in an optically-levitated nanoparticle, https://doi.org/10.1007/s44214-022-00003-z journal journal Quantum Front. volume 6, pages 1 (year 2022)NoStop [Melo et al.(2024)Melo, T. Cuairan, Tomassi, Meyer, and Quidant]Melo2023Vacuum author author B. Melo, author M. T. Cuairan, author G. F. Tomassi, author N. Meyer, and author R. 
Quidant, title title Vacuum levitation and motion control on chip, @noop journal journal Nature Nanotechnology , pages 1 (year 2024)NoStop [Li et al.(2023b)Li, Wang, Liu, Li, Li, and Hu]Li2023Flexible author author W. Li, author X. Wang, author J. Liu, author S. Li, author N. Li, and author H. Hu, title title Flexible control of an ultrastable levitated orbital micro-gyroscope through orbital-translational coupling, https://doi.org/doi:10.1515/nanoph-2022-0625 journal journal Nanophotonics volume 12, pages 1245 (year 2023b)NoStop [Hu et al.(2023)Hu, Kingsley-Smith, Nikkhou, Sabin, Rodríguez-Fortuño, Xu, and Millen]Hu2023Structured author author Y. Hu, author J. J. Kingsley-Smith, author M. Nikkhou, author J. A. Sabin, author F. J. Rodríguez-Fortuño, author X. Xu, and author J. Millen, title title Structured transverse orbital angular momentum probed by a levitated optomechanical sensor, https://doi.org/10.1038/s41467-023-38261-7 journal journal Nat. Commun. volume 14, pages 2638 (year 2023)NoStop [Jauffred et al.(2015)Jauffred, Taheri, Schmitt, Linke, and Oddershede]Jauffred2015Optical author author L. Jauffred, author S. M.-R. Taheri, author R. Schmitt, author H. Linke, and author L. B. Oddershede, title title Optical trapping of gold nanoparticles in air, https://doi.org/10.1021/acs.nanolett.5b01562 journal journal Nano Lett. volume 15, pages 4713 (year 2015)NoStop [Brzobohatý et al.(2023)Brzobohatý, Duchaň, Jákl, Ježek, Šiler, Zemánek, and Simpson]Brzobohaty2023Synchronization author author O. Brzobohatý, author M. Duchaň, author P. Jákl, author J. Ježek, author M. Šiler, author P. Zemánek, and author S. H. Simpson, title title Synchronization of spin-driven limit cycle oscillators optically, https://doi.org/10.1038/s41467-023-41129-5 journal journal Nat. Commun. volume 14, pages 5441 (year 2023)NoStop [Zhang et al.(2023)Zhang, Guo, Yu, Xiao, Fu, Zhang, and Zheng]Zhang2023Determining author author B. Zhang, author X. Guo, author X. Yu, author Y. Xiao, author Z. Fu, author Z. Zhang, and author H. Zheng, title title Determining the internal temperature of an optically levitated nanoparticle in vacuum by doped-Er^3+-ion luminescence, https://doi.org/10.1103/PhysRevA.108.033503 journal journal Phys. Rev. A volume 108, pages 033503 (year 2023)NoStop [Rahman and Barker(2017)]Rahman2017Laser author author A. T. M. A. Rahman and author P. F. Barker, title title Laser refrigeration, alignment and rotation of levitated Yb^3+:YLF nanocrystals, https://doi.org/10.1038/s41566-017-0005-3 journal journal Nature Photonics volume 11, pages 634 (year 2017)NoStop [Arita et al.(2020)Arita, Simpson, Zemánek, and Dholakia]Arita2020Coherent author author Y. Arita, author S. H. Simpson, author P. Zemánek, and author K. Dholakia, title title Coherent oscillations of a levitated birefringent microsphere in vacuum driven by nonconservative rotation-translation coupling, https://doi.org/10.1126/sciadv.aaz9858 journal journal Sci. Adv. volume 6, pages eaaz9858 (year 2020)NoStop [Arita et al.(2023)Arita, Simpson, Bruce, Wright, Zemánek, and Dholakia]Arita2023Cooling author author Y. Arita, author S. H. Simpson, author G. D. Bruce, author E. M. Wright, author P. Zemánek, and author K. Dholakia, title title Cooling the optical-spin driven limit cycle oscillations of a levitated gyroscope, https://doi.org/10.1038/s42005-023-01336-4 journal journal Commun. Phys. volume 6, pages 238 (year 2023)NoStop [Zeng et al.(2024)Zeng, Xu, Wu, Wu, and Xiao]zeng2024optically author author K. Zeng, author X. Xu, author Y. 
Wu, author X. Wu, and author D. Xiao, title title Optically levitated micro gyroscopes with an mhz rotational vaterite rotor, @noop journal journal Microsystems & Nanoengineering volume 10, pages 78 (year 2024)NoStop [Luntz-Martin et al.(2021)Luntz-Martin, Felsted, Dadras, Pauzauskie, and Vamivakas]LuntzMartin2021Laser author author D. R. Luntz-Martin, author R. G. Felsted, author S. Dadras, author P. J. Pauzauskie, and author A. N. Vamivakas, title title Laser refrigeration of optically levitated sodium yttrium fluoride nanocrystals, https://doi.org/10.1364/OL.426334 journal journal Opt. Lett. volume 46, pages 3797 (year 2021)NoStop [Demergis and Florin(2012)]Demergis2012Ultrastrong author author V. Demergis and author E.-L. Florin, title title Ultrastrong optical binding of metallic nanoparticles, https://doi.org/10.1021/nl303035p journal journal Nano Lett. volume 12, pages 5756 (year 2012)NoStop [Scala et al.(2013)Scala, Kim, Morley, Barker, and Bose]Scala2013Matter author author M. Scala, author M. S. Kim, author G. W. Morley, author P. F. Barker, and author S. Bose, title title Matter-wave interferometry of a levitated thermal nano-oscillator induced and probed by a spin, https://doi.org/10.1103/PhysRevLett.111.180403 journal journal Phys. Rev. Lett. volume 111, pages 180403 (year 2013)NoStop [Maclaurin et al.(2012)Maclaurin, Doherty, Hollenberg, and Martin]Maclaurin2012Measurable author author D. Maclaurin, author M. W. Doherty, author L. C. L. Hollenberg, and author A. M. Martin, title title Measurable quantum geometric phase from a rotating single spin, https://doi.org/10.1103/PhysRevLett.108.240403 journal journal Phys. Rev. Lett. volume 108, pages 240403 (year 2012)NoStop [Chen et al.(2019)Chen, Li, and Yin]Chen2019Nonadiabatic author author X.-Y. Chen, author T. Li, and author Z.-Q. Yin, title title Nonadiabatic dynamics and geometric phase of an ultrafast rotating electron spin, https://doi.org/https://doi.org/10.1016/j.scib.2019.02.018 journal journal Sci. Bull. volume 64, pages 380 (year 2019)NoStop [Ranjit et al.(2015)Ranjit, Atherton, Stutz, Cunningham, and Geraci]Ranjit2015Attonewton author author G. Ranjit, author D. P. Atherton, author J. H. Stutz, author M. Cunningham, and author A. A. Geraci, title title Attonewton force detection using microspheres in a dual-beam optical trap in high vacuum, https://doi.org/10.1103/PhysRevA.91.051805 journal journal Phys. Rev. A volume 91, pages 051805 (year 2015)NoStop [Hempston et al.(2017)Hempston, Vovrosh, Toroš, Winstone, Rashid, and Ulbricht]Hempston2017Force author author D. Hempston, author J. Vovrosh, author M. Toroš, author G. Winstone, author M. Rashid, and author H. Ulbricht, title title Force sensing with an optically levitated charged nanoparticle, https://doi.org/10.1063/1.4993555 journal journal Appl. Phys. Lett. volume 111, pages 133111 (year 2017)NoStop [Frimmer et al.(2017)Frimmer, Luszcz, Ferreiro, Jain, Hebestreit, and Novotny]Frimmer2017Controlling author author M. Frimmer, author K. Luszcz, author S. Ferreiro, author V. Jain, author E. Hebestreit, and author L. Novotny, title title Controlling the net charge on a nanoparticle optically levitated in vacuum, https://doi.org/10.1103/PhysRevA.95.061801 journal journal Phys. Rev. A volume 95, pages 061801 (year 2017)NoStop [Ashkin(1980)]Ashkin1980Applications author author A. 
Ashkin, title title Applications of laser radiation pressure, https://doi.org/10.1126/science.210.4474.1081 journal journal Science volume 210, pages 1081 (year 1980)NoStop [Kim et al.(2016)Kim, Hauer, Doolin, Souris, and Davis]Kim2016Approaching author author P. H. Kim, author B. D. Hauer, author C. Doolin, author F. Souris, and author J. P. Davis, title title Approaching the standard quantum limit of mechanical torque sensing, https://doi.org/10.1038/ncomms13165 journal journal Nat. Commun. volume 7, pages 13165 (year 2016)NoStop [Kardar and Golestanian(1999)]Kardar1999The author author M. Kardar and author R. Golestanian, title title The "friction" of vacuum, and other fluctuation-induced forces, https://doi.org/10.1103/RevModPhys.71.1233 journal journal Rev. Mod. Phys. volume 71, pages 1233 (year 1999)NoStop [Zhao et al.(2012)Zhao, Manjavacas, García de Abajo, and Pendry]Zhao2012Rotational author author R. Zhao, author A. Manjavacas, author F. J. García de Abajo, and author J. B. Pendry, title title Rotational quantum friction, https://doi.org/10.1103/PhysRevLett.109.123604 journal journal Phys. Rev. Lett. volume 109, pages 123604 (year 2012)NoStop [Xu et al.(2021)Xu, Jacob, and Li]Xu2021Enhancement author author Z. Xu, author Z. Jacob, and author T. Li, title title Enhancement of rotational vacuum friction by surface photon tunneling, https://doi.org/doi:10.1515/nanoph-2020-0391 journal journal Nanophotonics volume 10, pages 537 (year 2021)NoStop [Xu and Li(2017)]Xu2017Detecting author author Z. Xu and author T. Li, title title Detecting Casimir torque with an optically levitated nanorod, https://doi.org/10.1103/PhysRevA.96.033843 journal journal Phys. Rev. A volume 96, pages 033843 (year 2017)NoStop [Manjavacas et al.(2017)Manjavacas, Rodríguez-Fortuño, García de Abajo, and Zayats]Manjavacas2017Lateral author author A. Manjavacas, author F. J. Rodríguez-Fortuño, author F. J. García de Abajo, and author A. V. Zayats, title title Lateral Casimir force on a rotating particle near a planar surface, https://doi.org/10.1103/PhysRevLett.118.133605 journal journal Phys. Rev. Lett. volume 118, pages 133605 (year 2017)NoStop [Yang and Hsu(2010)]Yang2010A author author C.-C. Yang and author Y.-L. Hsu, title title A review of accelerometry-based wearable motion detectors for physical activity monitoring, https://doi.org/10.3390/s100807772 journal journal Sensors volume 10, pages 7772 (year 2010)NoStop [D’Alessandro et al.(2019)D’Alessandro, Scudero, and Vitale]Alessandro2019A author author A. D’Alessandro, author S. Scudero, and author G. Vitale, title title A review of the capacitive MEMS for seismology, https://doi.org/10.3390/s19143093 journal journal Sensors volume 19, pages 3093 (year 2019)NoStop [Lu et al.(2021)Lu, Wang, Wang, Yao, Wang, and Huang]Lu2021Review author author Q. Lu, author Y. Wang, author X. Wang, author Y. Yao, author X. Wang, and author W. Huang, title title Review of micromachined optical accelerometers: from mg to sub-μg, https://doi.org/10.29026/oea.2021.200045 journal journal Opto‐Electron. Adv. volume 4, pages 200045 (year 2021)NoStop [Hines et al.(2023)Hines, Nelson, Zhang, Valdes, Sanjuan, and Guzman]Hines2023Compact author author A. Hines, author A. Nelson, author Y. Zhang, author G. Valdes, author J. Sanjuan, and author F. Guzman, title title Compact optomechanical accelerometers for use in gravitational wave detectors, https://doi.org/10.1063/5.0142108 journal journal Appl. Phys. Lett. 
http://arxiv.org/abs/2407.12891v1
20240717100454
Global-Local Similarity for Efficient Fine-Grained Image Recognition with Vision Transformers
[ "Edwin Arkel Rios", "Min-Chun Hu", "Bo-Cheng Lai" ]
cs.CV
[ "cs.CV", "I.2; I.4" ]
Global-Local Similarity for Efficient Fine-Grained Image Recognition with Vision Transformers Edwin Arkel Rios†, Min-Chun Hu, Bo-Cheng Lai† †National Yang Ming Chiao Tung University, Taiwan, National Tsing Hua University, Taiwan July 22, 2024 ================================================================================================================================================ § ABSTRACT Fine-grained recognition involves the classification of images into subordinate categories of a common macro-category, and it is challenging due to small inter-class differences. To overcome this, most methods perform discriminative feature selection enabled by a feature extraction backbone followed by a high-level feature refinement step. Recently, many studies have shown the potential of vision transformers as a backbone for fine-grained recognition, but their use of the attention mechanism to select discriminative tokens can be computationally expensive. In this work, we propose a novel and computationally inexpensive metric to identify discriminative regions in an image. We compute the similarity between the global representation of an image, given by the CLS token (a learnable token used by transformers for classification), and the local representations of individual patches. We select the regions with the highest similarity to obtain crops, which are forwarded through the same transformer encoder. Finally, high-level features of the original and cropped representations are further refined together in order to make more robust predictions. Through extensive experimental evaluation we demonstrate the effectiveness of our proposed method, obtaining favorable results in terms of accuracy across a variety of datasets. Furthermore, our method achieves these results at a much lower computational cost compared to the alternatives. Code and checkpoints are available at: <https://github.com/arkel23/GLSim>. § INTRODUCTION Fine-grained image recognition (FGIR) involves classifying sub-categories within a larger super-category. Examples of FGIR problems include differentiating between bird species <cit.> and anime characters <cit.>. It is a widely studied area with applications such as automatic biodiversity monitoring and intelligent retail, among others <cit.>. However, FGIR is a challenging task due to small inter-class differences and large intra-class variations. In order to tackle these challenges, most of the existing methods equip a coarse image recognition backbone encoder with modules to select discriminative regions <cit.>. Recently, with the advent of vision transformers (ViTs) <cit.>, many researchers have explored using this new architecture as a backbone for fine-grained image recognition <cit.>. Since transformer encoder blocks utilize multi-head self-attention, a natural choice was to use the attention scores to guide the discriminative region selection process, eliminating the need for an external attention module <cit.>. These regions are either cropped and re-inputted into the same encoder <cit.>, or combined using high-level feature refinement modules <cit.>, or both <cit.>, before making final predictions. However, as seen in <Ref>, several methods leveraging matrix multiplication for attention aggregation <cit.> exhibit significant computational expense, characterized by a complexity of 𝒪(N^3) with respect to the sequence length N. Notably, the computational cost associated with attention aggregation can exceed that of the backbone's forward pass, particularly as image size increases.
This computational burden presents a substantial limitation for FGIR tasks, which often benefit from the use of higher resolution images <cit.>, thereby constraining the practical applicability of these methods. To address this, we introduce a novel metric, GLS (Global-Local Similarity), to identify discriminative regions for Vision Transformers (ViTs) with a computational cost several orders of magnitude lower than that of aggregated attention. We compute the similarity between the global representation of an image, as given by the ViT's CLS token typically used for classification, and the local representations of individual patches. Regions exhibiting high similarity in the high-dimensional feature space are presumed to share underlying factors influencing the global representation, rendering them highly representative of the image. We subsequently crop the image based on the regions with the highest GLS, resize it, and re-input it into the encoder. Finally, the high-level features of the original and cropped images are refined collectively using an aggregator module, enhancing the robustness of predictions. Our contributions are summarized as follows: * We propose a novel metric for ViTs to identify discriminative regions in an image. GLS can be used as a visualization tool for interpretability of classification decisions, does not require any additional parameters, and exhibits linear complexity 𝒪(N) with respect to sequence length. Compared to commonly used matrix-multiplication-based aggregated attention, the computational cost of GLS can be from 10^3 up to 10^6 times lower, depending on the architecture and image size. * We incorporate the proposed metric into a method that selects discriminative crops from an image and aggregates high-level features from both the original and cropped image for improved fine-grained image recognition performance. * We conducted a thorough analysis of fine-grained recognition models by comparing models across 10 datasets spanning a wide spectrum of tasks. Our model achieves the highest accuracy in 8 datasets, and on average, reduces the relative classification error[Relative error: Err_rel = 100 ·[(100 - Acc) - (100 - Acc_ref)] / (100 - Acc_ref)] by 10.15% compared to the baseline ViT. These results demonstrate the potential of the proposed global-local similarity as a discriminative region selection criterion. Moreover, our model achieves these results with 9.26x less VRAM and a 2.59x higher inference throughput than the best performing model in the other 2 datasets. § RELATED WORK §.§ Fine-Grained Image Recognition To address challenges of intra-class variations in FGIR, most approaches aim to identify discriminative regions that encapsulate the subtle differences between classes. Initially, part-level bounding boxes <cit.> or segmentation masks <cit.> were employed to train localization subnetworks, but the cost of manual annotations limited their applicability to a wide variety of tasks. Therefore, researchers aimed to use weak supervision, i.e., image-level labels, to localize discriminative regions. In this category, most methods utilize either an RPN <cit.> or an attention mechanism for this goal. NTS-Net <cit.> is an example of the former that leverages classifier confidence to encourage collaboration between a Navigator and a Teacher network to propose and evaluate informative regions. RA-CNN <cit.>, WS-DAN <cit.> and CAL <cit.> are examples of the latter approach.
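The relative-error metric defined in the footnote above is straightforward to evaluate; the following short Python sketch (the accuracy values in the example are placeholders, not numbers reported in this work) makes the definition explicit:

def relative_error_change(acc, acc_ref):
    # Err_rel = 100 * [(100 - acc) - (100 - acc_ref)] / (100 - acc_ref)
    # Negative values mean the model reduces the classification error relative to the reference.
    return 100.0 * ((100.0 - acc) - (100.0 - acc_ref)) / (100.0 - acc_ref)

# Example with placeholder accuracies:
print(relative_error_change(acc=91.0, acc_ref=90.0))  # -10.0, i.e., a 10% relative error reduction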
RA-CNN employs an attention proposal subnetwork that recursively proposes finer regions, while WS-DAN and CAL use attention maps to generate augmentations of the image. While these methods achieve competitive results, they rely on external (attention) modules that increase the computational cost. Moreover, they require multiple recursive stages or multiple crops to obtain high-quality predictions, which further exacerbates the computational cost and limits their practicality. §.§ Transformers for Fine-Grained Recognition Recently, with the emergence of transformers in vision, there has been a substantial amount of research on how to effectively exploit this new architecture for FGIR <cit.>. The global receptive field of the self-attention mechanism, coupled with the inherent relative importance imparted by the attention weights, makes the ViT a natural candidate for use as a backbone for fine-grained tasks. In <Ref> we compare the cost of the discriminative feature selection modules (DFSMs) of ViTs proposed for FGIR, as we observe this to be the largest difference between these methods. All evaluated methods make use of the ViT's attention mechanism, mostly by leveraging recursive layer-wise matrix-matrix multiplication (TransFG <cit.>, RAMS-Trans <cit.>, DCAL <cit.>, TPSKG <cit.>, EV <cit.>) to calculate aggregated attention scores. RAMS-Trans and DCAL first compute the mean over different heads, while TransFG computes these scores separately for each head. AFTrans <cit.> computes head-wise aggregation of attention via element-wise multiplication, and re-weights layer contributions via its Squeeze-and-Excitation-like mechanism. FFVT <cit.> selects features on a layer-by-layer basis based on the normalized vector product of the first row and column of the attention matrix. Nevertheless, methods involving matrix-matrix multiplications incur a high computational cost that can even surpass the cost of the backbone itself, especially as the image size increases. The number of FLOPs required for multiplying two matrices 𝐌_1∈ℝ^N× N and 𝐌_2∈ℝ^N× N is N· N · (2N - 1). For TransFG, which entails H heads and requires L-2 matrix multiplications in the process of its PSM, the computational complexity is 𝒪(L· H · N^3). Those employing attention rollout <cit.>, such as RAMS-Trans, average the heads, thereby reducing the complexity by a factor of H. Although AFTrans does not perform matrix-matrix multiplication, it still recursively computes N^2 element-wise products for H heads in each layer. The computationally lightest DFSM is FFVT's MAWS since it does not involve large matrix products. However, it exhibits considerable variation in performance across tasks, as observed from the results in <Ref>, and poor interpretability, as observed from the visualization in <Ref>.
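To make the scaling argument above concrete, the following back-of-the-envelope Python sketch contrasts the FLOP estimate for per-head, layer-wise attention aggregation (the L·H·N·N·(2N-1) count discussed above) with a rough estimate for a cosine-similarity pass over N tokens of dimension D; the 4·N·D constant for the latter is our own approximation (one dot product plus two norms per token), and the exact ratios depend on implementation details:

def attention_aggregation_flops(L, H, N):
    # Recursive layer-wise products of (N x N) attention matrices, per head.
    return L * H * N * N * (2 * N - 1)

def gls_flops(N, D):
    # Cosine similarity between the CLS token and each of the N patch tokens.
    return 4 * N * D

L, H, D, P = 12, 12, 768, 16  # ViT B-16 settings
for image_size in (224, 448):
    N = (image_size // P) ** 2  # number of patch tokens
    ratio = attention_aggregation_flops(L, H, N) / gls_flops(N, D)
    print(image_size, N, round(ratio))  # roughly 10^3-10^5 for these settings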
Our approach, however, is distinct in that it is more closely related to self-similarity <cit.>, computing similarity within a single image rather than against external images. Self-similarity is commonly used as a descriptor that reveals the structural layout of an image or video by measuring similarities of a local patch within its neighborhood <cit.>. Recently, self-similarity has been harnessed as an intermediate feature transformation in deep neural networks, demonstrating its efficacy in video understanding <cit.>, image translation <cit.>, visual correspondence <cit.>, few-shot image recognition <cit.>, and image retrieval <cit.>. While self-similarity typically involves transformations based on local patch neighborhoods, our approach computes similarity between the global image representation and individual patches. As a result, our method deviates from conventional self-similarity, which focuses on local patch relations, and instead employs the similarity between the global representation and local features as a metric for discriminative feature selection. § PROPOSED METHOD: GLSIM An overview of our method is shown in <Ref>. Images are encoded using a transformer encoder. Then, to find discriminative regions, we employ the Global-Local Similarity (GLS) Module. The GLS Module computes the similarity between the global representation of the image and the local tokens, and crops the image based on the tokens with the highest similarity. The cropped image is then resized and forwarded through the encoder. Finally, an Aggregator module is employed to collectively refine high-level features from the original and cropped image, before being forwarded through a classification head to make our final predictions. §.§ Image Encoding with Vision Transformer We encode images using a ViT <cit.> encoder. Images are patchified using a convolution with large kernel size P and flattened into a 1D sequence of D channels and length N=(S_1/P)×(S_2/P), where S_1 and S_2 represent the image width and height. Inspired by BERT <cit.>, the authors concatenate a learnable CLS token at the start of the sequence. Learnable positional embeddings are added to the sequence to incorporate spatial information. This sequence is forwarded through a series of L transformer encoder blocks <cit.> which apply multi-head self-attention (MHSA) with H heads and position-wise feed-forward networks (PWFFN), before being forwarded through a Layer Normalization <cit.> layer. The output of the transformer is denoted as 𝐟∈ℝ^(N+1) × D. §.§ Discriminative Feature Selection With GLS To identify the discriminative regions in the image, we then compute the similarity between the global representation of the image, as given by the CLS token (𝐟^0), and each of the other tokens in the sequence. This similarity map, denoted as 𝐬∈ℝ^N, is calculated according to <Ref> when cosine similarity is employed as the similarity measure: 𝐬^i = sim(𝐟^0, 𝐟^i) = cos(𝐟^0, 𝐟^i), i ∈{1, 2, ..., N}, where cos(𝐟^0, 𝐟^i) = (𝐟^0 ·𝐟^i) / (‖𝐟^0 ‖‖𝐟^i ‖) = ∑_j=1^D f^0_j f^i_j / (√(∑_j=1^D (f^0_j)^2)√(∑_j=1^D (f^i_j)^2)). Then, we crop the image based on the image coordinates corresponding to a rectangle that encloses the top-O tokens with the highest similarity. Subsequently, the cropped image is resized to the original image size S_1 × S_2 before being forwarded through the encoder to obtain 𝐟_crops. While the encoder's weights are shared, a different CLS token is utilized for the crops, drawing inspiration from <cit.>.
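As a concrete illustration of the selection step just described, here is a minimal PyTorch-style sketch (not the authors' released implementation; the tensor names, the use of bilinear resizing, and the default top-O value are our own assumptions) that scores patch tokens by their cosine similarity to the CLS token and derives a single crop box from the top-O patches:

import torch
import torch.nn.functional as F

def gls_crop(images, tokens, patch=16, top_o=8):
    # images: (B, 3, S, S); tokens: (B, N+1, D) encoder output with the CLS token at index 0.
    B, _, S, _ = images.shape
    side = S // patch                                # patches per image side
    cls, patches = tokens[:, :1], tokens[:, 1:]      # (B, 1, D) and (B, N, D)
    sim = F.cosine_similarity(cls, patches, dim=-1)  # (B, N) global-local similarity
    idx = sim.topk(top_o, dim=-1).indices            # top-O most similar patches
    rows, cols = idx // side, idx % side             # 2D patch-grid coordinates
    crops = []
    for b in range(B):
        r0, r1 = int(rows[b].min()) * patch, (int(rows[b].max()) + 1) * patch
        c0, c1 = int(cols[b].min()) * patch, (int(cols[b].max()) + 1) * patch
        crop = images[b:b + 1, :, r0:r1, c0:c1]      # rectangle enclosing the top-O patches
        crops.append(F.interpolate(crop, size=(S, S), mode="bilinear", align_corners=False))
    return torch.cat(crops, dim=0), sim

In the full model, the resized crops are passed through the same (weight-shared) encoder with a separate CLS token, as described above.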
As the CLS token aggregates the discriminative details from the image through self-attention, we expect the local tokens with a high degree of similarity in the high-dimensional feature space to share the same underlying factors that drive the global representation. To verify our assumptions, we visualize various discriminative feature selection mechanisms (DFSMs) in <Ref>. We note that single-layer attention <cit.>, as depicted in the second and third columns of the figure, does not focus on the objects of interest in certain scenarios. Conversely, while aggregating attention across multiple layers through recursive matrix multiplication (fourth and fifth columns) may offer a more effective alternative, it comes at a significant computational cost, up to 13,674x and 229x higher than the forward pass of the backbone, as seen in <Ref>. On the other hand, while heatmaps of global-local similarity (last column) may not exhibit the same level of focus on specific regions as the aggregated attention maps, they are still mostly aligned with the regions of interest, and the computational cost is much lower, as seen in <Ref>. In particular, GLS requires between 10^3 and 10^7 times fewer FLOPs compared to attention rollout <cit.> and TransFG's PSM <cit.>, making it a highly efficient and effective alternative to attention. §.§ High-Level Feature Refinement In order to achieve more robust predictions, we explicitly combine high-level features from both the original and cropped images by incorporating an Aggregator module composed of a single transformer encoder block. Specifically, we concatenate the output CLS token of the original image (𝐟^0) and the cropped image (𝐟_crops^0), and forward this concatenated representation through the Aggregator module. This is shown in <Ref>: 𝐫^' = MHSA(LN([𝐟^0; 𝐟_crops^0])) + [𝐟^0; 𝐟_crops^0], 𝐫 = PWFFN(LN(𝐫^')) + 𝐫^'. Then we pass these tokens through a LayerNorm layer. We forward the first token in the sequence through a linear layer 𝐂_final∈ℝ^D × T to obtain our final classification predictions. We utilize cross-entropy as our loss function. This process is described by <Ref>: 𝐫_LN = LN(𝐫), y_final = 𝐫_LN^0𝐂_final. § DATASETS AND EXPERIMENTAL SETUP We conduct experiments on 10 datasets, whose detailed statistics are shown in the Appendix. We perform a learning rate (LR) search for each dataset-model pair with a small split from the train set. Then, as suggested by Gwilliam et al. <cit.>, we report mean accuracy and its standard deviation on the test set across different seeded runs. The best and second best results, when applicable, are highlighted by boldface and underline, respectively. We employ the ViT B-16 <cit.> pre-trained on ImageNet-21k as the backbone for our proposed model. The hyperparameter O, which determines how many tokens with the highest similarity we use for cropping, is set to 8 for most datasets, except for Dogs where it is set to 16. This value is doubled accordingly when using image size 448. We compare our proposal against several existing state-of-the-art (SotA) models, including the baseline ViT, two other ViTs specifically designed for FGIR (namely, TransFG <cit.> and FFVT <cit.>), as well as two CNN-based approaches, namely ResNet-101 <cit.> and CAL <cit.>, which employs the former as a backbone. The source code provided by the authors was incorporated into our training and evaluation codebase with minor modifications.
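To make the refinement step of the preceding subsection concrete, the following PyTorch-style sketch (our own illustrative rewrite, not the released code; the hidden dimension, head count, MLP ratio, and class count are assumed defaults) applies a single pre-norm transformer block to the two CLS tokens and classifies from the first output token:

import torch
import torch.nn as nn

class Aggregator(nn.Module):
    def __init__(self, dim=768, heads=12, num_classes=200):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm_out = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, cls_orig, cls_crop):
        # cls_orig, cls_crop: (B, D) CLS tokens of the original and cropped images.
        x = torch.stack([cls_orig, cls_crop], dim=1)       # (B, 2, D)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # MHSA with residual connection
        x = x + self.ffn(self.norm2(x))                    # PWFFN with residual connection
        return self.head(self.norm_out(x)[:, 0])           # classify from the first token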
We resize images to a square of size 1.34S_1× 1.34S_2 (e.g., 300x300 for image size 224x224), then perform a random crop during training or a center crop at inference. During training we additionally apply random horizontal flipping, random erasing <cit.>, trivial augment <cit.>, label smoothing <cit.> and stochastic depth <cit.>. We employ the SGD optimizer with a batch size of 8, weight decay of 0, and cosine annealing <cit.> with a warmup of 500 steps for learning rate scheduling. Models are trained for 50 epochs. We use PyTorch <cit.> with mixed precision and employ Weights & Biases <cit.> for logging and experiment tracking. We conduct all experiments using a single V100 GPU. § RESULTS AND DISCUSSION §.§ FGIR SotA Model Comparison §.§.§ Comparison on NABirds In the NABirds <cit.> dataset, the accuracies of most ViT-based methods are higher than those of CNN-based methods. This could be due to the usage of the self-attention operator with its global receptive field, which can allow for effective aggregation of discriminative features from diverse regions of the image. Our method outperforms the best CNN-based method by 1%. Compared to ViT-based methods, our method outperforms the baseline ViT by 3.1%, and the second best, Dual-TR <cit.>, by 0.7%. §.§.§ Comparison on iNat17 In the challenging iNat17 <cit.>, our method attains an improvement of 1.5% compared to the second best performing method, DeiT-NeT <cit.>, which employs a ViT trained with the DeiT <cit.> pretraining recipe. Among models using the ViT B-16 pretraining recipe, our model outperforms the next best, TransFG <cit.>, by 3.8% absolute accuracy, and the baseline ViT by 6.8%. This increase in relative improvement on the iNat17 dataset shows the promise of our proposed methodology for large-scale FGIR tasks. §.§.§ Comparison Using Image Size 224x224 Previous research suggested that the applicability of FGIR methods is task dependent <cit.>. Therefore, to obtain a better understanding of how our method performs across FGIR tasks, we conduct a comparison across 10 FGIR datasets using image size 224x224 in <Ref>. Our proposed method, GLSim, obtains the highest accuracy in 8 datasets (CUB, Dogs, Flowers, Food, iNat17, NABirds, Pets, VegFru) and the second best in 2 others (DAFB, Moe). On average, our method obtains the highest classification accuracy, reducing the relative classification error by 10.15% compared to the baseline ViT. These results demonstrate the robustness of GLSim across a wide variety of FGIR domains. §.§ Qualitative evaluation We evaluate the relation of the proposed global-local similarity to the quality of our crops and also compare them to crops from CAL by visualizing samples from various fine-grained datasets in <Ref>. In general, our method adeptly captures the objects of interest present within the corresponding images, thereby mitigating the impact of background noise. This, in turn, facilitates the extraction of finer details during the second forward pass. When comparing our cropping method to CAL on the DAFB dataset, we observe that our approach yields more zoomed-in crops than those produced by CAL. This tighter cropping may inadvertently exclude critical details necessary for effective discrimination in this task. Additionally, CAL's use of multiple crops could account for the observed accuracy gap between our method and CAL. §.§ Computational Cost Analysis As the accuracy of FGIR models reaches a certain threshold, the computational cost of deploying them becomes a critical factor.
We compare the trade-off between accuracy and the throughput (in images per second) at inference time when batching the images, and the associated VRAM required for computing this batched inference. We show the results for accuracy vs. inference throughput in <Ref>, as throughput is often a limiting factor when deploying models with real-time requirements. From this figure, we can observe that GLSim with image size 224 obtains a competitive accuracy that is only slightly surpassed by ViT, TransFG, FFVT and GLSim with image size 448. However, the throughput for GLSim with image size 224 is 2.70x, 8.73x, 4.00x, and 5.11x higher compared to the aforementioned models with image size 448. With regard to VRAM, we highlight the low memory requirements of GLSim with image size 224, as it is the only model to require less than 1 GB of VRAM during inference. This is because, unlike TransFG and FFVT, it does not require the storage and processing of intermediate features. GLSim requires 5.45x and 5.29x less VRAM compared to TransFG, and 2.32x and 1.65x less VRAM compared to FFVT, for image sizes 224 and 448, respectively. Furthermore, when compared to CAL, which yields the best results in DAFB and Moe, our proposed method has higher throughput, and requires substantially less VRAM during inference. Specifically, our method has 2.59x and 1.48x higher throughput, and requires 9.26x and 8.11x less VRAM, for image sizes 224 and 448, respectively. Overall, these results suggest that our proposed method achieves high accuracy while keeping computational cost low, making it a practical and efficient solution for deploying FGIR models in real-world applications. §.§ Ablation on Proposed Components We present a breakdown of the individual contributions of the proposed components of our system in <Ref>. The first row outlines the performance of the baseline ViT B-16 model, while the second row reflects the outcomes of a modified ViT with an extra encoder block that only processes the CLS token. The third row describes a scenario where we select and encode image crops but do not explicitly combine high-level features. Instead, we choose the prediction with the highest confidence between the original image and the cropped image. Finally, the fourth row represents our complete system, GLSim. The proposed Aggregator and GLS Cropping modules reduce the relative classification error by 4.41% and 4.38%, respectively. By incorporating both modules, the error is further reduced by an additional 4.12% compared to cropping without explicit high-level feature aggregation. This highlights the importance of the proposed modules to the overall effectiveness of our system. § FUTURE WORK: GLS FOR VISUALIZATION AND DOWNSTREAM TASKS The proposed GLS metric can be used as a visualization tool to highlight which local regions of the image have high similarity to the global representation from the CLS token, helping to interpret classification predictions. Furthermore, GLS shows even better discrimination performance when combined with state-of-the-art pretrained backbones such as DINOv2 <cit.>, as shown in <Ref>. This could allow deploying these models in a variety of resource-constrained downstream tasks, such as fine-grained weakly supervised localization and semantic segmentation <cit.>, as well as fine-grained object recognition based on remote sensing imagery <cit.>.
We remark that the computational cost of our proposed GLS can be 648x to 271,299x lower than that of matrix-multiplication-based attention aggregation mechanisms <cit.>, for a B-14 backbone with image sizes ranging from 224 to 1024. § CONCLUSION This paper proposes GLS, an efficient and effective alternative to attention scores in vision transformers, for the purpose of discriminative region identification to enhance fine-grained image recognition. GLS computes the similarity between the global and local representations of an image. Based on this, we propose a system, GLSim, which extracts discriminative crops and combines high-level features from both the original and cropped image using an aggregator module for robust predictions. Extensive evaluations across various datasets demonstrate the robustness of our method. § APPENDIX FOR GLOBAL-LOCAL SIMILARITY FOR EFFICIENT FINE-GRAINED IMAGE RECOGNITION WITH VISION TRANSFORMERS § EXTENDED COMPARISON OF VITS FOR FGIR We summarize the differences between various ViT models proposed for FGIR in <Ref>. The following aspects are considered: (i) Patch Overlap (PO), which indicates whether the ViT's patchifier convolution has overlapping stride; (ii) Intermediate Features (IF), which refers to whether the model uses intermediate values (features or attention scores); (iii) Discriminative Feature Selection Mechanism (DFSM), which describes how the model selects discriminative features; (iv) Crops, which indicates whether the model crops the image for a second forward pass; (v) Feature Aggregation (FA), which denotes the modules used for performing discriminative feature refinement or aggregation; and (vi) Complexity. A check mark indicates that the model uses the corresponding feature, while `-' indicates that it does not. We observe that 1) many of the proposed modifications to the original ViT are modular and can be substituted or combined, and 2) the most significant difference between these methods lies in their DFSM. For this reason, we focus on this aspect in Table <ref> in the main text, and Tables <ref> and <ref> in the Appendix. We note how the cubic complexity of matrix-multiplication-based attention aggregation mechanisms is pervasive across architectures with different patch sizes (see <Ref>) and model capacities (<Ref>). Furthermore, the percentage of computation required by these mechanisms increases as the patch size or model capacity decreases. This can hinder the applicability of attention rollout and similar mechanisms in resource-constrained environments. § PSEUDOCODE FOR OUR METHOD We include PyTorch-like pseudocode to facilitate the understanding of our method, noting the tensor dimensions at the different steps of the forward pass, in <Ref>. § EXTENDED EXPERIMENTAL SETUP Previous research suggested that the applicability of FGIR methods is task dependent <cit.>. Therefore, to obtain a better understanding of how our method performs across FGIR tasks, we conduct a comparison across 10 diverse FGIR datasets. The statistics for the evaluated datasets are shown in <Ref>. The training and test settings employed in our experiments are summarized in Table <ref>. For each model and dataset pair, we first perform a learning rate (LR) search with a training and validation set created by partitioning 80% and 20% of the total training data (train-val), respectively. However, for datasets with more than 50,000 images in the train-val set (DAFB, Food, iNat17), we set the partition ratio to 20% and 80%.
We subsequently train the models on the entire training-validation set, using three different seeds (1, 10, 100) each, except for the larger datasets mentioned earlier, where we only employ two seeds (1, 10). The models are trained for 50 epochs, except for the CAL model, which is trained for 160 epochs as per the original author's recommendation, except for DAFB, Food, and iNat17, where we train for 50 epochs. We employ the SGD optimizer with batch size of 8, weight decay of 0 (except for CAL, where it is set to 5× 10^-4), and cosine annealing <cit.> with a warmup of 500 steps (1,500 for CAL) for learning rate scheduling. Regarding the preprocessing of images, we first normalize them and then resize them to a square of size 1.34S_1× 1.34S_2. We then perform a random crop during training and a center crop during inference. In addition to the application of random horizontal flipping during training, we further incorporate a stronger set of augmentation and regularization (AugReg) techniques that have been shown to improve the generalization capabilities of models and mitigate overfitting. Our choice of AugReg is motivated by recent advances in training strategies for coarse image recognition <cit.>. Specifically, random erasing <cit.> and trivial augment <cit.> are employed for the purpose of data augmentation. For regularization, we employ label smoothing <cit.> and stochastic depth <cit.>, albeit the latter only for ViT-based models. § EXTENDED RESULTS AND DISCUSSION §.§ Extended Analysis on Computational Cost We compare the trade-off between accuracy and the throughput (in images per second) at inference time when batching the images, and the associated VRAM (in Gigabytes) required for computing this batched inference. For the batched throughput we first compare the throughput at different batch sizes, from 1 to 256 for all these model-image size pairs, and select the batch size which led to the highest throughput (and corresponding VRAM requirements). In <Ref> of the main text we show the results on accuracy vs inference throughput. We additionally show the results on <Ref> on accuracy vs inference memory requirements as the VRAM requirement is also an important decision when deploying models. Discussion is included in <Ref> of the main text. §.§ Effects of Augmentation and Regularization on FGIR While the impact of generic Augmentations and Regularizations (AugReg) on coarse image recognition has been widely studied, its influence on FGIR has received less attention. In general, newer coarse recognition backbones are trained using strong augmentations and regularization techniques, such as Random Erasing (RE) <cit.>, AutoAugment (AA) <cit.>, TrivialAugment (TA) <cit.> Label Smoothing (LS) <cit.>, and Stochastic Depth (SD) <cit.>. However, in FGIR, works typically only incorporate random cropping (RC) and random horizontal flipping (RF), with some newer works <cit.> also using LS. The reasoning behind this is that strong augmentations may obfuscate the subtle differences required to distinguish between fine-grained categories. As a result, a line of FGIR research has emerged, focused on designing data-aware AugReg techniques. One prominent work in this area is WS-DAN <cit.>, which proposes using attention maps to guide cropping and masking of a given image during training. These two techniques are employed in alternation throughout the training process to both augment the input images the network processes and regularize the network, preventing overfitting. 
Furthermore, during inference, multiple crops are obtained to maximize recognition performance. CAL <cit.> builds on WS-DAN by using counterfactual causality to measure the attention quality and guide this process. However, as seen in <Ref> in the main text, this process incurs considerable computational cost during inference. Therefore, we explore the potential of incorporating generic AugReg techniques developed for coarse image recognition into FGIR tasks. Specifically, we compare four different levels of AugReg: minimal, weak, medium, and strong. The minimal level only employs RC and RF, and is the setting most commonly used in FGIR works <cit.>. The weak level, which is used in newer works <cit.>, also applies LS. The medium level further incorporates SD with a probability of 0.1. Finally, the strong level additionally includes RE and TA. We select TA based on a qualitative evaluation indicating that it applies less severe distortions to the image than AA. The effects of AugReg choice on classification accuracy for the CUB dataset are shown in <Ref>. We observe that AugReg is indeed a crucial factor for FGIR and can have a significant impact on accuracy, often rivaling the effect of image size and the choice of model. On the CUB dataset, we observe an average absolute accuracy improvement of 1.59% and 1.43% for image sizes of 224x224 and 448x448, respectively, when using strong AugReg compared to minimal AugReg. For reference, the average absolute accuracy improvement from increasing the image size from 224 to 448 on the CUB dataset is 1.97%. Since incorporating stronger AugReg incurs a minimal increase in training cost while not increasing inference cost at all, we suggest that practitioners consider stronger generic AugReg as a cost-efficient method to improve recognition performance. §.§ Extended Ablations §.§.§ Importance of Aggregator for Crop Robustness To demonstrate the importance of the Aggregator module to our system, we conduct an ablation utilizing crops obtained via random selection. Results are shown in <Ref>. As seen in the second row, incorporating random crops without the Aggregator module results in a decrease in performance compared to the baseline, as the random crops deteriorate the quality of the input data. Through the use of our Aggregator module, the method becomes robust to incorporating random crops, as the self-attention mechanism allows for dynamic reweighting of the contribution of the crops versus the original image. Therefore, when the crops are sub-optimal, the model can effectively discard noise and contributions from the crops. On the other hand, when we guide the crop selection through the proposed GLS metric and the quality of the crop improves, the Aggregator module incorporates this information to decrease the classification error by 8.29%. These results, along with the ones presented in <Ref>, illustrate the importance of both the GLS module for discriminative feature selection and the Aggregator module for high-level feature refinement. §.§.§ Hyperparameter O We investigate the impact of the hyperparameter O, which governs how many tokens with the highest similarity are employed for crop selection, on the classification performance, in <Ref>. Smaller values lead to more aggressive cropping that may fail to crop the object of interest, while higher values may return a high percentage of background.
Despite this, the findings indicate that the proposed approach is relatively robust to variations in this selection criterion, as all settings improve upon the baseline, by at least 5.87% in the case of O=4 and by up to 8.29% in the case of O=8. §.§.§ Similarity Metric We study the effects of the similarity metric choice in <Ref>. We were inspired to employ cosine similarity due to its relation to self-attention, both involving dot products, but these results show that our method exhibits robustness to the metric selection, as all metrics reduce the ViT baseline relative classification error by at least 8.63%. §.§.§ Vision Transformer's Patch Size We evaluated the proposed system using two distinct patch sizes in <Ref>. It is worth remarking that the kernel and stride size utilized in the ViT's patchifier are critical factors that directly influence the granularity of the feature maps; finer granularity comes at the expense of increased computational overhead resulting from the greater effective sequence length. Results show that the relative classification error compared to the baseline can be reduced by up to 17.23% for the model with patch size 32. These results demonstrate the efficacy of the proposed approach at various feature map granularities. § ADDITIONAL SAMPLES ON GLS FOR VISUALIZATION In <Ref> to <Ref>, we include additional samples, across different datasets, of the proposed GLS metric used as a visualization tool in combination with the state-of-the-art pretrained backbone DINOv2 B-14 <cit.>, highlighting which local regions of the image have high similarity to the global representation from the CLS token in order to interpret classification predictions.
http://arxiv.org/abs/2407.12659v1
20240717154305
Dynamical Consequence of Shadows Cast to the Outer Protoplanetary Disks: I. Two-dimensional Simulations
[ "Zehao Su", "Xue-Ning Bai" ]
astro-ph.EP
[ "astro-ph.EP" ]
Zehao Su (ORCID: 0000-0001-5567-0309), School of Physics and Astronomy, Beijing Normal University, Beijing 100875, China; suzh22@mail.bnu.edu.cn; Institute for Frontier in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China. Xue-Ning Bai (ORCID: 0000-0001-6906-9549), Institute for Advanced Study, Tsinghua University, Beijing 100084, China; xbai@tsinghua.edu.cn; Department of Astronomy, Tsinghua University, Beijing 100084, China. § ABSTRACT There has been increasing evidence of shadows in scattered light observations of outer protoplanetary disks (PPDs), cast from the (unresolved) disk inner region, while in the meantime these disks present substructures of various kinds in the submillimeter. As stellar irradiation is the primary heating source for the outer PPDs, the presence of such shadows thus suggests inhomogeneous heating of the outer disk in azimuth, leading to a "thermal forcing" with dynamical consequences. We conduct a suite of idealized 2D disk simulations of the outer disk with an azimuthally varying cooling prescription to mimic the effect of shadows, generally assuming the shadow is static or slowly rotating. The linear response to such shadows is two-armed spirals with the same pattern speed as the shadow. Towards the nonlinear regime, we find that shadows can potentially lead to the formation of a variety of substructures including rings, spirals and crescents, depending on viscosity, cooling time, etc. We have conducted a systematic and statistical characterization of the simulation suite, and as thermal forcing from the shadow strengthens, the dominant form of shadow-induced disk substructures changes from spirals to rings, and eventually to crescents/vortices. Our results highlight the importance of properly modeling the dynamical impact of inhomogeneous stellar irradiation, while calling for more detailed modeling incorporating more realistic disk physics. § INTRODUCTION Thanks to the advent of new observational facilities and instruments to conduct spatially resolved observations of protoplanetary disks (PPDs), it has now been well established that disk substructures are ubiquitous <cit.>. At millimeter/sub-millimeter wavelengths, the Atacama Large Millimeter Array (ALMA) has revealed the richness of disk substructures, which are primarily in the form of rings and gaps, in addition to other features such as spirals and crescents at different radii <cit.>. These observations reflect the thermal emission from mm-sized dust grains around the disk midplane, which are biased tracers of the gas density profiles due to finite aerodynamic coupling between gas and dust. In the optical and near infrared (NIR), high-contrast imaging with extreme adaptive optics (e.g., VLT/SPHERE, VLT/CRIRES, GPI) reveals even richer and more complex features <cit.>. Emission in the optical/NIR mainly results from the starlight scattered by micron-sized dust (better coupled to the gas) suspended in the disk, making it a better tracer of the disk surface layers. As a result, features seen in scattered light do not necessarily have direct correspondence to substructures recognized by ALMA <cit.>. At least partly contributing to the complexity in features seen in scattered light is the presence of shadows, typically defined as low-intensity regions that are confined to specific azimuthal angles <cit.>. They must be cast from the (unresolved) disk inner region, and can be mainly classified into two types: broad extended shadows in azimuthal directions <cit.> and narrow shadow lanes spanning only a few degrees <cit.>.
Considerable effort has been devoted to understanding the origin of shadows because the morphology and temporal variation of shadows can provide indirect information about the disk's inner regions. The most common case for shadow casting is the presence of a misaligned/warped inner disk. For instance, TW Hya shows a moving shadow pattern that could suggest a precessing inner disk <cit.>, shadows in HD 143006 can be reproduced using a 30^∘ misaligned inner disk <cit.>, fast time variations of shadows in RX J1604.3-2130A may come from dust very close to the central star in an irregular misaligned inner disk <cit.>, narrow shadow lanes in SU Aur possibly suggest misalignment caused by late-time interactions with infalling materials <cit.>, shadows in HD 139614 can be explained by the combination of a misaligned inner ring and disk <cit.>, and the side-switching of the flux ratio in the brightest nebula of IRAS40302 can be reproduced by applying a tilted inner disk <cit.>. Even for disks with nearly aligned inner regions, subtle shadowing effects can still be recognized <cit.>. In addition to the misalignment of the inner disk regions, variations in the scale height of the inner disk atmosphere could also be responsible for generating shadows, such as in HD 163296 <cit.>. Most effort aiming to understand disk shadows so far has focused on modeling the (inner) disk morphology to explain the shadow features using radiative transfer calculations <cit.>. On the other hand, we note that as stellar irradiation is the primary source of heating in the bulk of the (outer) PPDs, the presence of shadows must also give rise to dynamical consequences by "thermal forcing": the disk gas experiences (quasi-)periodic cooling and heating as it enters and exits the shadow, which hardly settles to thermal equilibrium and constantly exerts modest or even strong pressure perturbations on the neighboring fluid. This effect was first explored in <cit.>, who conducted 2D hydrodynamic simulations that take into account both stellar irradiation and periodic forcing of shadows with an opening angle of 28^∘ in the context of the transition disk HD 142527. They found that azimuthal pressure gradients generated by shadows can trigger m=2 spirals, which are enhanced by self-gravity and give rise to observable quasi-steady spiral signals. In this work, motivated by the diversity of shadowing features seen in scattered light images and the case study by <cit.> for the HD 142527 disk, we aim at a systematic exploration of the dynamical consequences of shadows cast onto outer PPDs. As an initial effort, we restrict ourselves to vertically-integrated systems with 2D hydrodynamic simulations. We follow the evolution of a passive, viscous gaseous disk with a thermal relaxation prescription towards a target temperature, which is set by stellar irradiation subject to shadowing. By exploring a large suite of simulations varying the viscosity, cooling time, and shadow geometry, we find that shadows can result in the formation of a wide variety of disk substructures, and we perform a statistical analysis of all the substructures generated from our simulations. This paper is organized as follows. We detail our simulation setup in Section <ref>, followed by a description of representative features of shadow-driven disk substructures in Section <ref>. We present a statistical analysis of all our simulations and describe the substructure-forming process from linear to non-linear regimes in Section <ref>.
Finally, we discuss the caveats and conclude in Section <ref>. § NUMERICAL METHODS §.§ Simulation Setup We solve the vertically integrated viscous hydrodynamic equations using the grid-based higher-order Godunov code ATHENA++ <cit.> in cylindrical coordinates (r,φ). The governing equations in conservative form are: ∂Σ/∂ t+ ∇· (Σv)=0, ∂Σv/∂ t+ ∇· (Σvv + Pℐ + 𝒯_vis)=-Σ∇Φ, ∂ E/∂ t + ∇·[(E + P + 𝒯_vis)v]=-Σv·∇Φ + Λ, where Σ is the gas surface density in the disk, v is the gas velocity, P=Σ T is the vertically integrated pressure with T being the disk temperature, ℐ is the identity tensor, Φ is the gravitational potential written as Φ = -GM_*/r with M_* the mass of the central star, E is the total energy density, and Λ is the cooling source term. The viscous stress tensor 𝒯_vis in the momentum equation reads 𝒯_vis = -Σν( ∂ v_i/∂ x_j + ∂ v_j/∂ x_i - (2/3)(∂ v_k/∂ x_k)δ_ij), with ν being the kinematic viscosity. The total energy density E is given by the combination of kinetic energy and internal energy: E = (1/2)Σ v^2 + P/(γ-1), where γ=7/5 is the adiabatic index for molecular gas. Note that viscous heating is automatically included in the energy equation, although it is generally unimportant in the outer PPDs. The gas temperature T is associated with the isothermal sound speed as T=c_s^2, which yields the disk scale height H=c_s/Ω_K, where Ω_K=(GM_*/r^3)^1/2 is the Keplerian angular frequency. The disk aspect ratio is then given by h=H/r. With this, the viscosity follows the standard α prescription <cit.>, ν=α c_sH. It is worth noting that the viscosity varies as the disk evolves. We choose the initial density profile to be a power law: Σ_ init = Σ_0( r/r_0)^d, where Σ_0 is the density at the reference radius r_0, which we take to be the radius of the inner boundary. We specify the initial disk temperature as T_ init≡ c_s_0^2( r/r_0)^p=( h_0r_0Ω_K_0)^2( r/r_0)^p, where c_s_0, h_0, Ω_K_0 are the values of the isothermal sound speed, aspect ratio, and Keplerian angular velocity at the reference radius r_0. Radial force balance leads to the initial rotation profile v_φ(r)=[(p+d)c_s^2+GM_*/r]^1/2. With viscosity, the radial velocity is set by the accretion velocity, given by v_r(r)=-(3/2)α c_s^2/(rΩ_K). To ensure steady-state accretion in the initial equilibrium (shadow not included) with constant α, the initial temperature and density profiles should satisfy d+p=-3/2. Our simulations are scale-free, adopting GM_*=Σ_0=r_0=1 in code units, with h_0=0.1. As a result, we have Ω_K_0=1 and c_s_0=h_0=0.1. The computational domain ranges from r_ in=1 to r_ out=30 in code units, to ensure sufficient dynamical range. We employ a logarithmic grid in the radial direction and a uniform grid in the azimuthal direction with N_r× N_φ=512×512, achieving a grid resolution of 15 cells per H in r while keeping the cell aspect ratio Δ r≈ 0.5rΔϕ. §.§.§ Shadow prescription In our simulations, we assume an obscuring structure is present in the disk inner region, which is outside of our simulation domain (inside the inner boundary). As an initial study, we tentatively take this obscuring structure to be a slightly misaligned inner disk (inclination angle ∼ h). In this case, nearly half of the outer disk (in azimuth) is illuminated from one hemisphere, with the opposite side being illuminated from the other hemisphere. In the vertically-integrated sense, the two sides of the "pie chart" are heated largely equally. It is in the transition regions, where the outer disk can be largely blocked by the inner disk from both hemispheres, that the disk is most affected by the shadow.
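Although the production runs use Athena++ (a C++ code), the initial equilibrium described above is simple enough to reproduce in a short, illustrative Python sketch in code units (GM_*=Σ_0=r_0=1, h_0=0.1); it also verifies that the choice d+p=-3/2 yields a radially constant accretion rate:

import numpy as np

GM, sigma0, r0, h0, alpha = 1.0, 1.0, 1.0, 0.1, 1e-3
p, d = -1.0, -0.5                            # temperature and density slopes, with d + p = -3/2

r = np.geomspace(1.0, 30.0, 512)             # logarithmic radial grid
omega_k = np.sqrt(GM / r**3)
sigma = sigma0 * (r / r0)**d
T = (h0 * r0 * omega_k[0])**2 * (r / r0)**p  # T = c_s^2 in code units
cs = np.sqrt(T)

v_phi = np.sqrt((p + d) * cs**2 + GM / r)    # radial force balance
v_r = -1.5 * alpha * cs**2 / (r * omega_k)   # viscous accretion velocity

mdot = -2.0 * np.pi * r * sigma * v_r        # approximately constant for d + p = -3/2
print(mdot.min(), mdot.max())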
The shadow then introduces a thermal forcing to the system, causing the system's temperature to approach the target temperature T_ tar. For simplicity, we prescribe this target temperature by T_ tar (r,ϕ)= T_ init(r)( 1-ϵ e^-ϕ^2/(2σ_ϕ^2))( 1-ϵ e^-(ϕ-π)^2/(2σ_ϕ^2)), where ϵ reflects the amplitude of the shadow and σ_ϕ characterizes the azimuthal width of the shadow. Although we have argued that two-sided shadows are the most basic case, we still examine the one-sided case in <ref> for reference. Also, in most cases in this paper, except for the simulation mentioned in Section <ref>, the shadow is static, with the pattern speed (Ω_ shadow) being zero. The final temperature structure depends on the heating and cooling processes, which are often modeled using the β cooling approximation <cit.>. The cooling term is given by thermal relaxation towards the target temperature, Λ = -[Σ/(γ-1)]× (T-T_ tar)/t_ cool, where the cooling timescale is specified by the dimensionless parameter β: t_ cool = βΩ_K^-1. It describes the disk's thermodynamic timescale, which can range from ∼10^-3 (approaching the isothermal limit) to at least ∼10 (approaching the adiabatic limit) in our simulations. Figure <ref> shows the expected temperature structure for four representative shadow prescriptions, with different shadow widths (15^∘ and 45^∘) and cooling times, calculated by following fluid elements undergoing heating and cooling on circular orbits. We have fixed the shadow amplitude to ϵ=0.8. With fast cooling (β=0.001), we see that the shadow aligns with its expected position, and the temperature at its center is approximately 0.2T_0 as desired. When cooling is inefficient, the observed shadow center shifts to the leading side of its expected location, and the lowest temperature becomes well above 0.2T_0. It should be noted that our shadow and cooling prescriptions are highly simplified and are not necessarily always physical (for instance, a flat disk with p=-1 would not be irradiated). We emphasize that the goal of this work is not to precisely model any particular system, but to explore the general phenomenology in a qualitative manner. §.§.§ Boundary Conditions We use modified outflow boundary conditions, where hydrodynamic variables are copied from the last grid zone assuming Σ∝ r^d, P∝ r^d+p, v_ϕ∝ r^-1/2, with v_r unchanged, except that we set v_r=0 in case of inflow. To further dampen unphysical waves, we adopt wave-killing functions in the form described by <cit.>: dx/dt = -[(x-x_0)/τ_ damp] R(r), where x represents any fluid quantity (e.g., Σ, v). The damping timescale τ_ damp is defined as τ_ damp=ηΩ_K^-1, where η is the damping rate and is set to 1 for all of our simulations. The function R(r) is a parabolic function expressed as: R(r) = [(r-r_ damp)/L_ damp]^2, for |r-r_ damp|<L_ damp, where r_ damp is the boundary of the damping region, which we take to be 2.08 and 26.57 in the inner and outer parts of our computational domain, respectively, and L_ damp is the length of the wave-killing zone. §.§ Simulation Runs In order to comprehensively investigate the dynamical effects of shadows in PPDs, we conducted a wide parameter scan. Five main parameters are included in our simulations: the dimensionless cooling timescale β ranging from 10^-3 to 10, the viscosity coefficient α ranging from 0 to 10^-2, the shadow amplitude coefficient ϵ=0.5 and ϵ=0.8, the shadow width σ_ϕ=0.236 and σ_ϕ=0.079, and the temperature slope p=-1 (flat case) and p=-0.5 (flared case). In the viscous simulations, these translate to density gradients d=-0.5 and d=-1, respectively, to ensure steady-state accretion.
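A minimal Python sketch of the two-sided shadow target temperature and the β-cooling relaxation defined above (illustrative only; in the actual runs this enters Athena++ as a source term, and here we simply relax the temperature explicitly, which is equivalent to the internal-energy form of Λ at fixed Σ):

import numpy as np

def target_temperature(r, phi, T_init, eps=0.8, sigma_phi=0.236):
    # Two shadow lanes centered at phi = 0 and phi = pi; angular distances are wrapped
    # so that the prescription is periodic in azimuth.
    d0 = np.arctan2(np.sin(phi), np.cos(phi))
    d1 = np.arctan2(np.sin(phi - np.pi), np.cos(phi - np.pi))
    attenuation = (1.0 - eps * np.exp(-d0**2 / (2.0 * sigma_phi**2))) \
                * (1.0 - eps * np.exp(-d1**2 / (2.0 * sigma_phi**2)))
    return T_init(r) * attenuation

def beta_cool(T, T_tar, omega_k, beta, dt):
    # Relax T toward T_tar on the timescale t_cool = beta / Omega_K (simple explicit update).
    t_cool = beta / omega_k
    return T - dt * (T - T_tar) / t_cool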
In most simulations, the shadows do not rotate, and we fix the disk aspect ratio h_0=0.1 at r=1, thus h=0.1 is constant in most p=-1 (flat) cases. All of these simulations will be discussed in Section <ref>. In Sections <ref> and <ref>, we also briefly explore simulations with rotating shadows and vary h_0 from 0.05 to 0.15. To further comment on our choice of parameters, we first note that in outer disk conditions, we generally expect β≲1 <cit.>, though the finite thermal coupling between dust and gas may significantly enhance the effective β <cit.>. In inviscid simulations, we further examine the influence of the density profile (d=-0.5 and d=-1, which affects thermal forcing) since this parameter is no longer free when viscosity is included (dependent on p). Note that in the new paradigm of wind-driven accretion, the disk is more laminar and the surface density profile can be more arbitrary <cit.>. Although we do not incorporate wind-driven accretion, this exploration also serves the purpose to partly mimic “windy" disk conditions. On the choice of shadow amplitudes, note that given the T^4 dependence, the two choices correspond to the shadowed region receiving about 0.2^4≈0.002 and 0.5^4≈0.06 of the stellar irradiation compared to the non-shadowed regions. In all of our simulations, the total run time is chosen to be T=20000P_0, where P_0=2π/Ω_K_0 is the orbital period at the inner boundary. This is significantly longer than the timescales for substructure formation, which we find to be within 5000P_0 for most cases. In only a few cases (especially β∼ 10, α∼ 0), even on the timescale of 20000P_0, we cannot unambiguously identify the dominant form of disk substructure. However, we can infer their evolution trend from a statistical point of view. To facilitate comparison of the various simulations discussed in the following sections, we provide a list of all our runs and their parameters in Table <ref>. Our naming convention is structured as follows. We use “L" for runs in the linear regime (ϵ=0.001) and “NL" for runs in the nonlinear regime (ϵ=0.5, 0.8). The labels “hs", “hm" and “hl" indicate runs with h_0=0.05, h_0=0.1, and h_0=0.15, respectively, with h_0=0.1 as fiducial. To specify the dominant substructure, we use “S" for spiral-dominant, “R" for ring-dominant, and “V" for vortex-dominant. Shadow precession speeds are denoted as “NR" for nonrotating (fiducial), “FR" for fast rotating, “MR" for moderately rotating, and “SR" for slow rotating. For simulations dedicated to parameter searches discussed in Section 4, we use the label “S-h-all," as we do not discuss individual runs for these simulations. 
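For reference, the statistical grid (run S-h-all) comprises all combinations of the survey parameters listed above; it can be enumerated with a few lines (a sketch; the variable names are ours), reproducing the 160 runs summarized in Table <ref>.

```python
from itertools import product

sigma_phi = (0.236, 0.079)                   # shadow widths (45 and 15 degrees)
eps       = (0.5, 0.8)                       # shadow amplitudes
alpha     = (0.0, 1e-4, 1e-3, 1e-2)          # viscosity parameters
beta      = (1e-3, 1e-2, 1e-1, 1.0, 10.0)    # dimensionless cooling timescales
p         = (-1.0, -0.5)                     # temperature slopes (d = -1.5 - p in viscous runs)

runs = list(product(sigma_phi, eps, alpha, beta, p))
print(len(runs))                              # 2 x 2 x 4 x 5 x 2 = 160 simulations
```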
Summary of All Highlighted Simulations.^1

Run | σ_ϕ | ϵ | α | β | p | h_0 | Ω_shadow

Representative runs (Section <ref>):
NL-hm-S-NR | 0.236 | 0.8 | 10^-3 | 10 | -1.0 | 0.1 | 0
NL-hm-R-NR | 0.236 | 0.5 | 10^-4 | 1 | -1.0 | 0.1 | 0
NL-hm-V-NR | 0.236 | 0.5 | 0 | 10^-3 | -1.0 | 0.1 | 0

Statistical runs (Section <ref>):
S-h-all^2 | (0.236, 0.079) | (0.5, 0.8) | (0, 10^-4, 10^-3, 10^-2) | (10^-3, 10^-2, 10^-1, 1, 10) | (-1.0, -0.5) | 0.1 | 0

Linear run (Section <ref>):
L-hm-S-NR | 0.236 | 0.001 | 0 | 10^-3 | -1.0 | 0.1 | 0

Rotating shadow runs (Section <ref>):
L-hm-S-FR^3 | 0.236 | 0.001 | 0 | 10^-3 | -1.0 | 0.1 | Ω_0
L-hm-S-MR | 0.236 | 0.001 | 0 | 10^-3 | -1.0 | 0.1 | 0.03Ω_0
L-hm-S-SR | 0.236 | 0.001 | 0 | 10^-3 | -1.0 | 0.1 | 0.003Ω_0

Aspect ratio test runs (Section <ref>):
NL-hs-S-NR | 0.236 | 0.8 | 10^-3 | 10 | -1.0 | 0.05 | 0
NL-hs-R-NR | 0.236 | 0.5 | 10^-4 | 1 | -1.0 | 0.05 | 0
NL-hs-V-NR | 0.236 | 0.5 | 0 | 10^-3 | -1.0 | 0.05 | 0
NL-hl-S-NR | 0.236 | 0.8 | 10^-3 | 10 | -1.0 | 0.15 | 0
NL-hl-R-NR | 0.236 | 0.5 | 10^-4 | 1 | -1.0 | 0.15 | 0
NL-hl-V-NR | 0.236 | 0.5 | 0 | 10^-3 | -1.0 | 0.15 | 0

^1 Simulations mentioned in Appendix <ref> and Appendix <ref> are not included. ^2 The unified name for the parameter-survey simulations, whose parameters are all combinations of those listed in this row, totaling 160 simulations. ^3 The resolution of this run is set to N=2048. σ_ϕ: shadow width parameter; ϵ: shadow amplitude parameter; α: viscosity parameter; β: cooling rate parameter; p: temperature slope; h_0: disk aspect ratio at the inner boundary; Ω_shadow: shadow precession angular frequency; Ω_0: Keplerian angular velocity at r=1. All runs, except for run L-hm-S-FR, use a resolution of N=512.

§.§ Diagnostics of Substructures As we will demonstrate, our simulations generate a variety of substructures of all types. In this section, we provide the diagnostics we employ to identify and characterize these substructures. To minimize the influence of the boundaries and wave-killing, we restrict the analysis domain to r∈[3,21]. Vortices appear as anti-cyclonic flows with pressure maxima at their centers that can potentially be strong dust traps. They are identified as regions with negative vorticity, which is defined as ∇×δv, with δv being the difference between the current fluid velocity and the background fluid velocity. We quantify individual vortices based on their mean vorticity (normalized by the background Keplerian angular velocity), density contrast, spacing, and aspect ratio. In doing so, we first choose the vortex boundary to be where the density is 10% of the density at the vortex center after subtracting the background, while ensuring that the vorticity remains below zero. This is motivated by the analytical work of <cit.> while being robust to the influence of density waves. In our simulations, vortices are constantly generated and destroyed; only the largest vortices are chosen (these usually survive for at least 100 local orbits). We measure the density contrast by comparing the average density in the vortex with the average density at the same radius. The spacing of vortices is calculated from the radial distances between neighboring vortices, which are normalized by the local scale height at the midpoint radius between the two vortices. As vortices can be highly time variable, all quantities are calculated and averaged over several snapshots (see Section <ref>). In ring-forming disks, we measure the density contrast, width, spacing, and eccentricity of the rings. We identify the rings by first fitting the background density as a power law, and consider peaks/troughs above/below the fitted profile as ring peaks/gap centers.
The boundaries of the rings are identified as the radii at the midpoints between the peak and valley densities, with the ring width being the distance between the two boundaries for each ring. The density contrast is calculated by comparing the density between the peaks and the boundaries. The final ring width is obtained by averaging the widths of all identified rings, with each ring width normalized to the local scale height of the disk. Ring spacing is measured as the radial distance between the boundaries of two neighboring rings, normalized in a way similar to that for vortices, and averaged over several snapshots. In the above, we have treated the rings as axisymmetric by working with 1D profiles, whereas in practice we have found that the rings can be eccentric. For the identified rings, we further track the maximum density in the 2D data and measure their eccentricity by fitting an ellipse. Incomplete rings at the boundary of the analysis domain are excluded from the statistics. For spirals, we quantify their density contrast, number of spiral arms, pattern speed, and pitch angle. The density contrast is obtained by comparing the density of the spiral spine with the fitted background density at the same radii. In our simulations, we obtain the spiral phase at each radius using Fourier decomposition, and the pitch angle is obtained by fitting the phase angle with a logarithmic function φ=m(tanα_p)^-1ln r+ϕ_0, where α_p is the pitch angle and m is the number of spiral arms. The constant ϕ_0 is further employed to measure the pattern speed of the spirals. § REPRESENTATIVE RESULTS In this section, we present representative outcomes of shadow-driven substructures at fixed disk aspect ratio h before giving more comprehensive statistical results. The three representative runs, denoted "NL-hm-S-NR," "NL-hm-R-NR," and "NL-hm-V-NR," can be found in Table <ref>. We show snapshots of the major fluid quantities of interest (i.e., Σ, T, v_r, v_ϕ, ∇×v) from our simulations, and discuss the results below. §.§ Spirals Spirals typically form in disks where the shadow forcing is relatively weak, such as those characterized by slow cooling or weak shadow amplitude. We choose spirals formed in a disk with the following parameters as an example: σ_ϕ=0.236, ϵ=0.8, α=10^-3, β=10, p=-1.0 (run NL-hm-S-NR). As depicted in Figure <ref>, spirals form relatively quickly (first row in Figure <ref>), typically within approximately 20 local orbits, and once formed, they remain highly stable[The growth in density perturbations observed in the last two rows of Figure <ref> is primarily due to the combined effects of strong viscous heating and the influence of the wave damping zones.]. These spirals are clearly density waves, showing spiral patterns in all diagnostic physical quantities. The spiral patterns are stationary (i.e., the pattern speed is zero), which is related to the fact that our shadow patterns have zero angular velocity. Further discussion of the relationship between the properties of the spirals and the other two substructures will be provided in Section <ref>. In addition, by examining the second column of Figure <ref>, we see that with inefficient cooling, the overall temperature is systematically cooler than the initial temperature by ∼15%. The azimuthal temperature profile varies smoothly through the shadowed regions, with a maximum temperature variation of about 4%. §.§ Rings The conditions for ring formation generally require either slow cooling or a combination of moderate viscosity and shadow amplitude (for more detailed information, see Section <ref>).
In Figure <ref>, we adopt the parameters σ_ϕ=0.236, ϵ=0.5, α=10^-4, β=1, p=-1.0 (run NL-hm-R-NR) to illustrate the typical formation process and properties of ring structures. –Formation. The formation of rings begins with the presence of two-arm spirals following a transient period (as seen in the first to third rows of Figure <ref>). The spirals appear only marginally stable; they later break apart and reconnect to form concentric rings in surface density (as shown in the fourth and fifth rows of Figure <ref>), a process that takes a relatively long time of ∼100 local orbits. On the other hand, the spiral patterns remain in the velocity structure even after ring formation, although they are distorted (as opposed to the spirals discussed in Section <ref>) and can become distorted ring patterns in some cases. –Evolution and main properties. Once formed, the amplitudes of the rings continue to increase slowly, reaching a steady state over a few hundred local orbits, where the gas density in the rings is about 10% higher than the background. However, the density within a given ring at quasi-steady state is unevenly distributed, with the surface density near the breaking/reconnection location being smaller, which will be further discussed in Section <ref> and Appendix <ref>. The typical ring width is approximately twice the local scale height, and the spacing is regular (about 4H between the peaks of two neighbouring rings) across the disk (further discussed in Section <ref>). We find the rings to be eccentric (but centered on the star), with a measured eccentricity of e∼0.12. As can be inferred from the third and fourth columns in Figure <ref>, the ratio v_r/v_ϕ is approximately 10^-3≪ e, suggesting that these rings do not directly correspond to gas moving on eccentric orbits. Also, the eccentric rings do not precess, analogous to the spiral patterns that remain stationary, thus corroborating the fact that the rings emerge as the aftermath of the spiral patterns. With moderate cooling, the azimuthal temperature contrast reaches 15% and may cause azimuthal brightness variations in observed rings, although we caution that our thermodynamic treatment is highly simplified (further discussed in Section <ref>). §.§ Vortices and Crescents Crescents can be described as rings that exhibit an azimuthal variation in intensity <cit.>. Physically, the crescents discussed in this paper are all induced by vortices, and thus we use "vortices" and "crescents" interchangeably. Figure <ref> shows an example of the shadow-driven formation of vortices/crescents. This usually occurs with strong shadow amplitude and rapid cooling, and thus strong thermal forcing; we adopt ϵ=0.5, β=0.001 in this example (run NL-hm-V-NR). –Formation. With rapid cooling, the disk temperature almost instantly relaxes to the target temperature both within and outside of the shadow region, resulting in a 50% variation in azimuthal temperature given our setup. This leaves two symmetric low-pressure regions that form quickly at the shadow locations. In the initial stages (first and second rows in Figure <ref>), this leads to the appearance of spiral features in surface density. With strong thermal forcing constantly perturbing the disk, the system subsequently becomes more chaotic (third row in Figure <ref>), and the velocity field undergoes significant alterations. Although the physical process is not entirely clear, vortex/crescent formation ensues, as identified in the fourth row of Figure <ref>. Selected vortices and crescents are marked by white frames in Figure <ref>.
–Evolution and main properties. Shadow-driven vortices are all anti-cyclonic in nature, which can be observed either from the negative vorticity (the 3rd column of Figure <ref>) or from the change in the sign of the radial velocity across the vortex center (changing from negative to positive when viewed along the direction of rotation (counterclockwise), as shown in the 4th column of Figure <ref>). We observe that vortices start small and are continuously generated. They merge to form larger ones under the influence of differential rotation within approximately 60 local orbits, ultimately manifesting as relatively large crescent-shaped structures. In Figure <ref>, the vortices labeled 4a and 4b are undergoing a merger into a single vortex. We find that these patterns largely corotate with the gas, as expected, and their azimuthal locations are largely random, with no preference to stay in or out of the shadows. The disk gas remains turbulent and chaotic throughout the evolution due to strong perturbations from the thermal forcing. Velocity deviations from local Keplerian inside the vortex region are around 0.5c_s. Additionally, the local level of turbulence, measured in terms of root mean square (rms) velocity fluctuations averaged in azimuth, is approximately 10% of the local sound speed. The typical aspect ratio of the vortices/crescents is about 6, with a density contrast of 1.4. The normalized vorticity in this case is 0.2. Despite the modest to strong level of turbulence, the large vortices are relatively long-lived, with typical lifetimes of at least 300 local orbits. § STATISTICS OF SUBSTRUCTURES To gain deeper insights into the dynamical consequences of shadows, we conducted a comprehensive exploration of parameter space. We performed a total of 160 simulations (run S-h-all), encompassing a wide range of parameter combinations. Most results show similarities with one of the three aforementioned representative cases. We thus primarily summarize the outcomes in a statistical manner. For simulations that exhibit the formation of rings and spirals, we only measure their properties at the end of the simulations, when the system has already reached a steady state. For simulations with vortex/crescent formation, which are generically chaotic, we select four specific snapshots, denoted as P_ orb1=5000P_0, P_ orb2=10000P_0, P_ orb3=15000P_0, and P_ orb4=20000P_0. The statistical values for the vorticity, density contrast, spacing, and aspect ratio of the vortices are calculated by averaging the results at these snapshots. The simplified statistical results are presented in Figure <ref>, and more detailed ones are provided in Figures <ref> and <ref>. It is important to emphasize that panels shaded with red or blue lines correspond to runs undergoing a vortex-ring or ring-spiral transition (see the discussion in Appendix <ref>). Generally speaking, shadows are capable of generating different kinds of substructures under different parameter settings. Additionally, we find that the dominant form of shadow-driven substructure changes from spirals to rings and eventually to vortices/crescents as the cooling timescale and/or viscosity decreases. Where exactly the transition occurs depends on other parameters such as the shadow amplitude, width, and disk aspect ratio, and these will be discussed in more detail in the following subsections.
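Before presenting the statistics, we note that the vortex measurements below rely on the normalized vorticity defined in the diagnostics subsection above. A minimal post-processing sketch is given below; it is our own implementation, with the background taken as the azimuthal average of the velocity field (a simplification we assume), not the authors' analysis code.

```python
import numpy as np

def normalized_vorticity(r, phi, v_r, v_phi):
    """(curl delta v)_z / Omega_K on a polar grid of shape (N_r, N_phi),
    with delta v = v - <v>_phi.  Anti-cyclonic vortices have negative values."""
    dv_phi = v_phi - v_phi.mean(axis=1, keepdims=True)   # subtract azimuthally averaged rotation
    dv_r   = v_r - v_r.mean(axis=1, keepdims=True)
    # (curl delta v)_z = (1/r) d(r dv_phi)/dr - (1/r) d(dv_r)/dphi
    curl = (np.gradient(r[:, None] * dv_phi, r, axis=0)
            - np.gradient(dv_r, phi, axis=1)) / r[:, None]
    Omega_K = r**-1.5                                     # code units, GM_* = 1
    return curl / Omega_K[:, None]
```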
§.§ Statistics for Spirals Two-arm spirals are fundamental substructures in our simulations, dominating in disks with cooling timescales significantly longer than the dynamical timescale, high viscosity (α > 10^-3), or very weak shadow amplitude (see Section <ref>). Here, we focus on discussing their density contrast, pattern speed, and pitch angle. –Density contrast. In general, stronger thermal forcing, higher shadow amplitude, wider shadow width, etc. leads to stronger density contrast in the spirals. However, as the spiral-dominated regime generally requires weak thermal forcing, the spiral amplitudes are typically low (with upper limit only 1% higher than background density at the same radius). –Pattern speed. Spirals found in our simulations are density wave patterns with zero pattern speed, which also results in non-precessing rings. More generally, the spiral pattern speed exactly matches the shadow's pattern speed, which will be further discussed in Section <ref>. –Pitch angle. The pitch angle is solely affected by the disk aspect ratio. With weak thermal forcing, we consider the dispersion relation of spiral density waves in the linear regime under the WKB approximation <cit.> m^2(Ω_p-Ω)^2=k^2c_s^2+κ^2. Here, Ω_p represents the spiral pattern speed, k is the radial wave number, and κ≈Ω_K is the epicyclical frequency. The spiral pitch angle can be estimated by α_p=∂ r/(r∂ϕ)≈ m/(|k|r). With Ω_p=0 and m=2, we obtain α_p∼2/√(3)h=constant for p=-1 disks and α_p∼2/√(3) h_0r^0.25 for p=-0.5 disks. Taking the disk parameters used in our simulations (with h_0=0.1) and averaging over radius gives α_p=6.6^∘ for disks with p=-1 and α_p=12.1^∘ for disks with p=-0.5. These estimated values agree well with our simulation results, which we find to be 7.344_-0.535^+0.607^∘ and 13.202_-1.766^+2.16^∘ (see Figure <ref>), respectively. §.§ Statistics for Rings In our simulations, rings dominate in disks with cooling timescales comparable to the dynamical timescale (β∼ 1) when α is roughly below 10^-3. For much higher viscosity, rings dominate even when the cooling rate approaches the isothermal limit (β=10^-3). Typically, this value is α=10^-2 for disks with σ_ϕ=0.236 and α=10^-3 for disks with σ_ϕ=0.079. Overall, the parameter space for the dominance of rings is modest thermal forcing, in between the cases that form vortices/crescents (strong forcing, see next subsection) and spirals (weak forcing). In fact, we pose that rings can be viewed either as “reconnected spirals" (stated in Section <ref>), or “failed vortices", where the latter connection arises from the finding that vortex-ring transitions often involve crescents with very large aspect ratios, although the boundary between this transition is not necessarily clear-cut, and will be further discussed in Appendix <ref>. Below, we will focus on “normal" rings (not under transition), and will discuss the density contrast, ring radial width, ring spacing, eccentricity, and the parameters that have strong influence on them. –Density contrast. As shown in Figure <ref> and <ref>, gas densities are typically 1-20% higher than the background density in ring-dominant disks, and ring density contrast is enhanced by larger shadow amplitude and width. Density contrast could reach very small values, such as 0.3%, in the ring-spiral transition, and very large values, such as 50%, in the vortex-ring transition. –Width and spacing. The ring widths in our simulations are usually 2 times the local scale height, regardless of shadow parameters. 
Similarly, for almost all cases, the spacing between neighboring rings is approximately 4H, as depicted in Figure <ref>. There is very small deviations from the mean, indicating a highly uniform distribution of rings within the disk. –Eccentricity. As will be stated in Section <ref>, ring structures are generated following the “reconnection" of two-armed spirals in the early stages of disk evolution, causing the ring to become eccentric with zero pattern speed (as shadows are stationary). More flared disk morphology results in larger spiral pitch angles, making the spirals less tightly wound. As a result, the rings formed in this case tend to be more eccentric. Additionally, we find that viscosity has a strong impact on eccentricity. Typically, ring eccentricity varies from 0.1 to 0.7 as α increases from 0 to 10^-2 in our simulations (see Figure <ref> for details). The angle between the ring's major axis and the effective shadow center (e.g. ϕ=0^∘, 180^∘ when β=0.001) is typically between 80^∘ and 110^∘. §.§ Statistics for Vortices and Crescents As we mentioned in Section <ref> and better seen in Figure <ref>, vortices/crescents tend to dominate in disks characterized by fast cooling processes (β<1), low viscosity (α=0,10^-4), high shadow amplitudes (ϵ=0.8), and wide shadow widths (σ_ϕ=0.236). Such parameter settings all point to strong thermal forcing. Below, we discuss the properties of the shadow-driven vortices/crescents, focusing on density contrast, spacing and aspect ratio of vortices/crescents under the influence of these parameters. –Vorticity and density contrast. The density contrast of substructures is a crucial factor as it directly influences their detectability. From our explorations, the density of the crescents are typically 10-50% higher than the average density at same radius for all vortex-dominated disks. The density contrast is generally slightly higher for stronger shadow intensity, larger shadow width, and faster cooling, but the trend is not definitive given the chaotic nature of the system. The normalized vorticity ranges from 0.1 to 0.6 in vortex-dominated disks, with vorticity around 0.2 in most cases, potentially reaching up to 0.6 in the most extreme cases (large ϵ and σ_ϕ). No clear relationship is found between vorticity and density contrast due to the high turbulence level, which is around 0.1c_s. The velocity deviations from local Keplerian inside the vortex region ranges from 0.4 to 1.2 c_s, indicative of strong rotation in the vortices. –Spacing. The statistical results of the spacing of vortices/crescents are plotted in Figure <ref>. In all simulations, the distance between neighboring vortices/crescents is typically between 2H and 4H. The spacing is less uniform compared to rings, and is related to the fact that vortex-dominated disks are usually turbulent. Note that the small error bars in a few cases are related to very limited number of vortices/crescents (2 or 3); the lower limit point represents the case where there is only one vortex-induced crescent in the disk. Similar to the case shown in Figure <ref>, the azimuthal locations of the vortices/crescents are largely random with no direct correlation with the position of the shadows. –Aspect ratio. The aspect ratio of crescents/vortices is less affected by different parameters. Typically, in vortex-dominated disks, this value is about 6. 
However, for cases close to (for example, σ_ϕ=0.079, p=-1, ϵ=0.5, α=0, β=0.01) or undergoing (for example, σ_ϕ=0.236, p=-0.5, ϵ=0.8, α=10^-4, β=1) the vortex-ring transition in parameter space, the aspect ratio can be very large (greater than 12). More detailed results are shown in Figure <ref> in Appendix <ref>. § DISCUSSION In this paper, we have conducted simple numerical experiments to study the dynamical consequence of shadows cast from the inner disk to the outer disk as a result of thermal forcing. We have restricted ourselves to a small number of parameters, and the discussion has been largely phenomenological. In this section, while not going into full detail, we conduct additional studies to help better understand the origin and trend of shadow-driven substructures, and briefly discuss their potential implications. §.§ Linear regime Based on the analysis and discussions in the previous sections, here we provide further analysis to gain better physical insights on the shadow-driven substructure formation. As we observe that in all cases, substructure formation starts from the formation of two-armed spirals under our shadow prescriptions. This suggests that spirals are the most fundamental form of shadow-driven substructure, and it can be instructive to look into how spirals form and evolve under very weak thermal forcing to avoid nonlinear effects. We thus further conducted a series of 2D inviscid hydrodynamic simulations with varying perturbation strengths (ϵ=0.001, ϵ=0.01, ϵ=0.1) while keeping the cooling timescales consistent (β=0.001). Without viscosity, the simulations are in hydrostatic equilibrium to start with before thermal forcing is introduced. In Figure <ref>, we present the results from the ϵ=0.001 simulation (run L-hm-S-NR in Table <ref>). When the shadow is introduced, gas flows into the shadowed region in a counterclockwise manner. The gas between the shadow center (pressure minimum) and its rear edge, i.e., between 315^∘ and 360^∘ in the first row of Figure <ref>, gets accelerated, while the gas between the shadow center and its leading edge, i.e., between 0^∘ and 45^∘ in the first row of Figure <ref>, gets decelerated. This leads to gas piling up near the shadow center, while the neighboring gas is slightly rarefied, which naturally launch density waves. As the disk evolves, such density waves wind up due to differential rotation (see second row of Figure <ref>). In the meantime, the periodic forcing at the shadow location continues, keep launching new density waves, leading to interference. After a few local orbits, the system reaches a relatively steady pattern of two-arm spirals (see third and forth rows of Figure <ref>), which remain stable over long-term. The spirals share the same pattern speed of the shadows (in this case, zero), and the pitch angle also remains unchanged. We note that this is very different from planet-induced spirals in that a planet launches density waves through discrete Lindblad resonances, while as shadows are cast over a wide range of radii, each radius can excite its own density waves. In our case, the pattern speed of the shadow is zero, and the only relevant resonance condition is simply given by Ω=κ/m, where m=1, 2, ⋯. However, taking m=2, we see that with κ≈Ω for Keplerian disks, no resonance condition is satisfied. In other words, the two-armed spirals are not driven by Lindblad resonances, but are the effective eigen-state of thermally-forced oscillations. 
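As a cross-check of the statement above that the spirals behave as standard density waves, the WKB pitch-angle estimates quoted in Section 4.1 can be reproduced in a few lines. This is our own sketch; we assume the radial averaging is performed over the analysis domain r∈[3,21] used elsewhere in the paper.

```python
import numpy as np

def wkb_pitch_angle(p, h0=0.1, m=2, r_range=(3.0, 21.0), n=4096):
    """Pitch angle (deg) of stationary m-armed density waves from the WKB relation
    m^2 Omega^2 = k^2 c_s^2 + kappa^2 with kappa ~ Omega and Omega_p = 0,
    giving alpha_p = m/(|k| r) = m h / sqrt(m^2 - 1)."""
    r = np.linspace(*r_range, n)
    h = h0 * r**((p + 1.0) / 2.0)      # h = c_s/(r Omega_K) for T proportional to r^p
    return np.degrees((m / np.sqrt(m**2 - 1.0)) * h).mean()

print(wkb_pitch_angle(p=-1.0))   # ~6.6 deg (flat disk)
print(wkb_pitch_angle(p=-0.5))   # ~12.1 deg (flared disk)
```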
§.§ Towards the nonlinear regime We note that even in the linear regime, the spiral patterns are distorted by the thermal forcing. These distortions can be most easily seen from the velocity perturbations in the last three columns of Figure <ref>. They are also present in the density perturbations, where the amplitude of the spirals varies across the shadow region. The form of the distortion can depend on system parameters; for example, it is found to be different in Figure <ref>, where the cooling time is significantly longer. We speculate that such distortions are the source of instability when the thermal forcing enters the nonlinear regime. Based on our discussions in the previous sections, we summarize the formation of shadow-driven substructures in Figure <ref>. Irrespective of whether the thermal forcing is linear or nonlinear, the initial phase of the development is similar, involving the formation of two-armed spirals, as shown in (a)-(b). The spirals persist under linear and weakly nonlinear thermal forcing, as seen in the "linear branch" and "spiral branch" in (c)-(f). The properties of the spirals are similar between the linear and weakly nonlinear regimes, in terms of pitch angle and pattern speed. When the thermal forcing becomes slightly stronger, the spiral arms undergo a relatively quiescent transformation by "reconnecting" into eccentric rings (see Figure <ref>(g), (h)). The eccentricity of these rings is largely set by the pitch angles of the original two-arm spiral stage and the disk viscosity. However, when the thermal forcing becomes too strong, the spirals break in a highly chaotic manner (see Figure <ref>(i), (j)), leading to the formation of more localized vortices/crescents. §.§ Rotating shadows In this paper, we have so far only discussed the situation in which the shadow's pattern speed is zero. However, if the misaligned inner disk precesses around the central star, the shadow cast from the inner region would have a pattern speed, which then changes the resonance condition discussed in Section <ref>. To extend our study to more general conditions, we have conducted additional simulations with rotating shadows in the linear regime, with three different shadow pattern speeds, Ω_shadow=1Ω_0 (run L-hm-S-FR), 0.03Ω_0 (run L-hm-S-MR), and 0.003Ω_0 (run L-hm-S-SR). Here, Ω_0 is the Keplerian angular velocity at r=1. The detailed parameter settings can be found in Table <ref>. The density structures from these simulations in their final states are shown in Figures <ref>. We measure the pattern speed of the spirals, Ω_p, in these situations, and we confirm that in all three cases the spirals have Ω_p=Ω_shadow. Given the pattern speed, the radii of the corotation resonance (CR), inner Lindblad resonance (ILR), and outer Lindblad resonance (OLR) can be calculated from Ω_p=Ω and Ω_p=Ω±κ/m (with m=2), and are shown as yellow, green, and purple dashed lines in Figures <ref>. With the WKB dispersion relation <ref>, the permitted regions for density wave propagation lie outside the Lindblad resonances. In the fast-rotating case with Ω_p=Ω_0, density waves are permitted beyond the OLR, and the spirals are tightly wound towards outer radii with pitch angle α_p∼ h(Ω/Ω_p)∼ h(r/r_0)^-3/2. Even with a resolution of N=2048 in Figure <ref>, this is still insufficient to resolve the spirals across the entire disk, and the spirals in the outer disk are weakened by numerical dissipation. With the intermediate Ω_ shadow=0.03Ω_0, the ILR and OLR are located at r=6.5 and r=13.5, respectively.
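For reference, the resonance radii quoted above follow directly from the conditions Ω_p=Ω and Ω_p=Ω∓κ/m with κ≈Ω_K; a short sketch in code units (the function name is ours) is:

```python
def resonance_radii(Omega_p, m=2):
    """Inner Lindblad, corotation, and outer Lindblad resonance radii for pattern speed
    Omega_p in code units, with Omega_K = r**(-1.5) and kappa ~ Omega_K."""
    r_CR  = Omega_p**(-2.0 / 3.0)                    # Omega = Omega_p
    r_ILR = (m * Omega_p / (m - 1.0))**(-2.0 / 3.0)  # Omega_p = Omega - kappa/m
    r_OLR = (m * Omega_p / (m + 1.0))**(-2.0 / 3.0)  # Omega_p = Omega + kappa/m
    return r_ILR, r_CR, r_OLR

print(resonance_radii(0.03))   # ~ (6.5, 10.4, 13.6) for run L-hm-S-MR
```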
Clearly, there are well-defined spirals outside the Lindblad resonances, which break up inside the Lindblad resonances. In the slow-rotating case with Ω_ shadow=0.003Ω_0, even the ILR is beyond the computational domain, and the results are largely identical to the stationary case described in Section <ref>. Given the discussion above, we expect the results presented in this paper to largely apply to regions inside the ILR for slowly-precessing shadows. Although not the focus of this paper, it is worth noting the significance of moderately rotating shadows, where the corotation radius lies within the disk region. Our findings are morphologically similar to those of <cit.>, who demonstrated, using radiative transfer, that the morphology of shadow-driven spirals notably resembles the planetary wakes caused by embedded planets in the disc. For a better comparison with planet-induced spirals, a more detailed investigation with more realistic physics (especially dust and radiative processes) is necessary for the slow-rotating case, especially in regions between the ILR and OLR. §.§ Dependence on disk aspect ratio In the preceding discussion, we observed that shadow-driven substructures are closely tied to the thermal forcing, which is influenced not only by the cooling process but also by the disk temperature. Additionally, detailed characteristics of the substructures, such as pitch angle or eccentricity, are affected by the disk aspect ratio h. Therefore, it is natural to further investigate the influence of h_0. We conducted additional simulations with h_0 ranging from 0.03 to 0.15, focusing on h_0=0.05 and h_0=0.15. These simulations, denoted NL-hs-S-NR, NL-hs-R-NR, NL-hs-V-NR and NL-hl-S-NR, NL-hl-R-NR, NL-hl-V-NR, respectively, maintained the same parameters as the representative runs discussed in Section <ref> except for h_0 (see Table <ref>). We note that here "S", "R", and "V" do not necessarily indicate the dominant form of substructure but rather serve to remind the reader that these runs only vary h_0 compared to the representative runs. In the NL-hs run series (h_0=0.05), with a lower target temperature, we see that the NL-hs-S-NR (Figure <ref>) and NL-hs-R-NR (Figure <ref>) runs maintain spirals and rings as the primary substructure, respectively. We see that the spirals are more tightly wound and the ring spacing remains uniform, except that it is smaller. The changes are exactly in proportion to h_0, and the general properties of the rings and spirals are otherwise identical to those discussed in the NL-hm runs. For the NL-hs-V-NR run, while the vortices are clearly dominant, many of the overdensities close a full circle, and we identify this run as being in the vortex-ring transition state. In the NL-hl run series (h_0=0.15), with a higher target temperature, we see that all three NL-hl runs retain spirals, rings, and crescents/vortices as the dominant substructure, respectively. Similarly, the spirals are more open, the rings are more eccentric, and the vortices are larger and more widely spaced, as expected. Overall, we find that varying h slightly alters the boundaries where different forms of substructures dominate, while the general properties of the individual substructures largely remain consistent with what we have found in the fiducial simulations with h_0=0.1. §.§ Observational implications Given the diverse dynamical consequences of shadowing, such disks are expected to exhibit a variety of signatures that are potentially observable.
However, it should be noted that our work serves as a general exploration without detailed modeling of radiation transport, dust dynamics, or shadow precession rates <cit.>, and realistic shadow morphologies may differ from our prescription <cit.>. Additionally, there are a variety of other mechanisms that can drive substructures <cit.>, such as planet-disk interactions and icelines. Our shadowed-disk simulations implicitly assumed a smooth disk to start with, and it is conceivable that the final outcome is set by the interplay between existing substructures and shadowing. Besides such dynamical interplay, substructures themselves can self-shadow <cit.>, which can further complicate the situation. Therefore, a systematic observational comparison with specific sources is beyond the scope of this work. Below, we mainly discuss general aspects of potential observational implications. –Spirals. Spirals generated from shadows may not be easily detectable in the submm continuum or in kinematics, but may be observable in scattered light. Nearly all spiral-dominant disks correspond to weak thermal forcing, resulting in gas densities only about 0.1% higher than the background. This makes the pressure variations across the spirals too small for efficient dust trapping; only sufficiently small particles with a stopping time shorter than the spiral crossing time (typically requiring a Stokes number much less than 0.1) can potentially be trapped by the spirals <cit.>. With such weak spirals, the gas velocity shows very small deviations from Keplerian (∼0.1% v_K, as opposed to ≳0.5% v_K for typical ALMA observations <cit.>), making kinematic detection difficult. On the other hand, such spirals may be detectable in scattered light, as suggested by <cit.> for the HD 142527 disk, thanks to the azimuthal variation of the disk scale height across the spirals, though three-dimensional simulations are needed for proper characterization. –Rings. For full disks, our simulations predict the presence of multiple gas rings that are uniformly spaced and weakly eccentric. The relatively high density contrast in our simulations suggests that these rings likely concentrate dust, making them readily observable at sub-mm wavelengths. While the resulting dust rings are also likely uniformly spaced, whether they can be eccentric remains uncertain (as the eccentric gas ring is a pattern and does not reflect real motion), and this requires simulations incorporating dust dynamics. From all simulations, we find that the azimuthal temperature contrast in ring-dominant disks is typically greater than 8% and can reach up to 50% as they approach the vortex-ring transition in disks with high viscosity and rapid cooling. Such azimuthal temperature variations should result in azimuthal brightness variations in the mm continuum image, which, however, has not been revealed in real shadowed disks with rings (e.g., HD 143006). This suggests that the thermal forcing by shadows in these systems is likely not as strong as given in our prescriptions, but we caution that without detailed modeling of the shadow morphology, radiation transport, and dust dynamics, we cannot make specific predictions for individual systems. On the other hand, we comment that both the weakly eccentric ring pattern and the low-level azimuthal temperature variation, if present, may affect the interpretation of azimuthal asymmetries seen in multi-ring systems <cit.>.
Finally, we note that detection through kinematic signatures, with velocity disturbances of ∼1% of the Keplerian velocity, is possible but challenging, since they are close to ALMA's detection limits. –Crescents. Vortices generate significant velocity perturbations and are favored sites for dust trapping. Given the effective turbulent viscosity parameter α_t≳10^-2 in most vortex-dominated simulations, dust with Stokes number St > α_t∼ 10^-2 is expected to concentrate inside vortices, overcoming turbulent diffusion <cit.>, and can be readily observable at sub-millimeter wavelengths <cit.>. Previous studies have found that detecting kinematic signatures of vortices is possible but challenging <cit.>, despite the relatively large vorticity (typically around 0.2) and significant velocity deviations from local Keplerian (δv up to 1.2c_s) inside the vortex region. Sources with modest inclination are expected to favor detection, but long integration times with ALMA (more than 10h) are required to achieve the necessary signal-to-noise ratio. § SUMMARY AND FUTURE PROSPECTS In this work, we have systematically studied the dynamical consequences of thermal forcing by shadows cast onto the outer protoplanetary disk. With a large survey of parameters, we have identified diverse forms of substructures generated by shadows and studied their trends under different thermodynamic and viscosity prescriptions. Our results apply in regimes where the shadow is static or slowly rotating (prograde), so that the corotation radius lies beyond the regions of interest. The main findings of our studies are as follows. 1. Two-arm spirals with the same pattern speed as the shadow are fundamental substructures generated by weak thermal forcing (ϵ≤0.5, σ_ϕ=0.079, β>1) or high viscosity (α>10^-3). They represent the linear response to thermal forcing, and their pitch angle agrees well with that of standard density waves. Both the density contrast (0.1-1% higher than the background) and the velocity disturbance (up to 0.5% v_K) are small and scale with the strength of the thermal forcing. 2. Disks with moderate thermal forcing are dominated by ring-like substructures. In this regime (the region of parameter space between crescent/vortex- and spiral-dominant disks), the gas density contrast reaches 1-20% above the background. The rings are uniformly spaced (Δr ∼ 4H) and exhibit pattern eccentricities on the order of h/r or higher, which rotate at the same rate as the shadow. 3. Crescents/vortices dominate disks under strong thermal forcing (ϵ>0.5, β≲0.1, σ_ϕ=0.236) and low viscosity (α≤10^-4). In this case, the density contrast is typically 10-50% higher than the average density at the same radius. The vortices in our simulations exhibit relatively large vorticity (ranging from 0.1 to 0.6, typically around 0.2) and significant velocity deviations from local Keplerian inside the vortex region (ranging from 0.4 to 1.2 c_s). Due to the chaotic nature (local turbulence level of 0.1 c_s) of the vortex-dominant disks, these structures are not uniformly spaced, with Δr/H between 2 and 4. 4. Thermodynamics and viscosity significantly influence the formation of shadow-driven disk substructures. The dominant substructure transitions from spirals to rings and eventually to vortices as the cooling timescale and/or viscosity decreases. 5. Owing to the simplicity of our problem setup, it is premature to definitively assess the observability of such shadow-driven substructures.
We anticipate that the azimuthal brightness contrast in the sub-mm continuum will offer important constraints on the strength of the thermal forcing, while detecting in-plane kinematic signatures is likely challenging. Through our suite of physically motivated yet highly simplified simulations, we highlight the importance of the dynamical impact of shadows, or more generally inhomogeneous stellar irradiation, on the gas dynamics of PPDs through thermal forcing. Given the fact that shadows are often observed in scattered light images of disks, our results call for proper consideration and incorporation of such effects for adequate modeling of such systems. Our simulations can be considered a starting point for understanding the dynamical effects of shadows on PPDs, yet real systems are likely much more complex. This leaves several aspects to be considered and tested in the future. Properly characterizing disk thermodynamics is a prerequisite to accurately model the thermal forcing from shadows, which requires better modeling of the shadow geometry, together with self-consistent radiation transport. Such modeling under typical disk parameters (which are likely nearly optically thin) will likely reduce the azimuthal temperature contrast due to in-plane radiation transport. Incorporation of dust dynamics is essential to obtain the dust response to the shadow-driven substructures. Such simulations are expected to link the results with specific sources, and we are aware of efforts underway (Ziampras et al., in preparation). We have also assumed that the shadows are cast onto a full disk, whereas shadows are also observed in transition disks, and it is also pertinent to account for the interplay with other physical mechanisms that cause disk substructures, with the additional effect of self-shadowing. Finally, all existing studies of shadow-driven disk dynamics have been conducted in 2D in the disk plane, whereas the shadow-driven thermal forcing is also expected to drive oscillations in the vertical direction (Liu & Bai, in preparation). Future studies should incorporate 3D effects, which is essential to further assess the fidelity of 2D simulation results and to make more realistic observational predictions and comparisons. Acknowledgements We thank Yanqin Wu and Shangjia Zhang for useful discussions, Pinghui Huang for helpful instructions on the problem setup, and Alexandros Ziampras for constructive exchanges. This work is supported by the National Science Foundation of China under grants Nos. 12233004 and 12325304. We also acknowledge the China Center of Advanced Science and Technology for hosting the Protoplanetary Disk and Planet Formation Summer School in 2022, when this work was initiated. Numerical simulations were conducted on the Orion and Sirius clusters at the Department of Astronomy, Tsinghua University, and on TianHe-1 (A) at the National Supercomputer Center in Tianjin, China. § TRANSITION STATE The vortex-ring transition represents the parameter regime where the features of both vortices/crescents and rings can be observed in the disk. Four examples of the vortex-ring transition are illustrated in Figure <ref>. They are recognized as vortex-ring transitions for two main reasons: rings and vortices/crescents are simultaneously present in the disk (Figure <ref>), or the basic morphology appears as rings but with significant asymmetry (Figure <ref>, <ref>, <ref>). In Figures <ref> and <ref>, the left side of the vortex-ring transition cases depicts vortex-dominated disks, while the right side illustrates ring-dominated disks.
Further decreases in β or α lead to the disk being completely dominated by vortices/crescents. The ring-spiral transition represents the parameter regime where both the features of rings and spirals can be identified in the disk. Four examples of ring-spiral transitions are shown in Figure <ref>. They either exhibit regularly broken rings (Figure <ref> and <ref>) or clearly display both rings and spirals within the same disks (Figure <ref> and <ref>). These transition regions lie between ring-dominated disks and spiral-dominated disks in Figure <ref> and <ref>. The disk becomes dominated by spirals as β or α increases. From the transition states shown in Figure <ref> and <ref>, we can verify that rings exhibit characteristics of both vortices/crescents and spirals, as discussed in Section <ref>. Slightly excessive thermal forcing, relative to ring-dominant disks, can hamper reconnection (Figure <ref>) mentioned in Section <ref>, leading the disk into a vortex-ring transition state with strongly asymmetric rings (Figure <ref>) or crescents with large aspect ratios (Figure <ref>). Conversely, with weak thermal forcing, the breaking of two-armed spirals is partial (Figure <ref>), placing the disk into a ring-spiral transition state. § SIMULATION STATISTICS The detailed statistical plot of vorticity and density contrast (Figure <ref>), along with other parameters (Figure <ref>) of substructures, is presented here. These two figures share the same structures. Each of these figures is divided into two sections by a dashed line, representing shadow ranges of 45 degrees (σ_ϕ=0.236) and 15 degrees (σ_ϕ=0.079), respectively. In the left column, disks with a temperature slope of -1 are shown, while the right column represents disks with a temperature slope of -0.5. Each row, from top to bottom, corresponds to shadow amplitudes of ϵ=0.5 and ϵ=0.8. The x-axis of the subfigures represents β, while the y-axis represents α. Within each β-α section, there are three rows indicating the dominant structures in the disk: vortices/crescents, rings, and spirals, each represented by different types of squares, colored by the relevant physical properties as indicated in the color bars. The figure also includes red and blue line shaded areas, indicating disks undergoing transitions from vortex-ring and ring-spiral phases, respectively. We note that the inviscid (α=0) simulations maintain the same temperature gradient with a slope of -1 (indicating that the p value shown in the title of each subfigure only applies to viscid runs) and vary the density gradient of d=-0.5 and d=-1 in the left and right columns, respectively, which help us exclude the influence of density gradient. § ONE-SIDED SHADOW TEST In this Appendix, we briefly examine how the morphology and form of substructures can be affected by the morphology of the shadow region. As an experiment, we performed simulations with only the right side of the shadow shown in Figure <ref> present, and the target temperature is taken as T_ tar (r,ϕ)= T_ init(r)( 1-ϵ e^-ϕ^2/2σ_ϕ^2). The remaining parameters for the disk and shadow are the same as those in the representative simulations (NL-hm runs). For detailed parameter settings for the NL-hm runs, please refer to Table <ref>. It can be seen from Figure <ref> that the types of dominant substructures have not changed compare with NL-hm runs. The dominant spiral now has m=1, and the rings become asymmetric (with m=1, as opposed to eccentric with m=2), while crescents are generated as usual. 
These outcomes similarly follow the formation process described in Section <ref>. These simulations illustrate that besides a morphological change from m=2 to m=1, the general trends of shadow-driven substructures are not sensitive to shadow prescriptions.
Gas-Phase metallicity for the Seyfert galaxy NGC 7130
[ "Amirnezam Amiri", "Johan H. Knapen", "Sébastien Comerón", "Alessandro Marconi", "Bret. D. Lehmer" ]
astro-ph.GA
[ "astro-ph.GA" ]
Department of Physics, University of Arkansas, 226 Physics Building, 825 West Dickson Street, Fayetteville, AR 72701, USA Instituto de Astrofísica de Canarias E-38205, La Laguna, Tenerife, Spain Departamento de Astrofísica, Universidad de La Laguna, E-38200, La Laguna, Tenerife, Spain Dipartimento di Fisica e Astronomia, Universitá degli Studi di Firenze, Via G. Sansone 1, 50019 Sesto Fiorentino, Firenze, Italy INAF – Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125 Firenze, Italy Metallicity measurements in galaxies can give valuable clues about galaxy evolution. One of the mechanisms postulated for metallicity redistribution in galaxies is gas flows induced by Active Galactic Nuclei (AGN), but the details of this process remain elusive. We report the discovery of a positive radial gradient in the gas-phase metallicity of the narrow line region of the Seyfert 2 galaxy NGC 7130, which is not found when considering the star-forming (SF) components in the galaxy disk. To determine gas-phase metallicities for each kinematic component, we use both AGN and SF strong-line abundance relations, as well as Baldwin–Phillips–Terlevich (BPT) diagnostic diagrams. These relations involve sensitive strong emission lines, namely [O iii]λ5007, [N ii]λ6584, Hα, Hβ, [S ii]λ6716, and [S ii]λ6731, observed with the adaptive-optics-assisted mode of the Multi Unit Spectroscopic Explorer at the Very Large Telescope. The presence of a positive radial metallicity gradient only in the AGN ionized component suggests that metals may be transported from central areas to its purlieus by AGN activity. Gas-Phase metallicity for the Seyfert galaxy NGC 7130 A. Amiri1 , 2 Email: amirnezamamiri@gmail.com J. H. Knapen 2 , 3 S. Comerón 3 , 2 A. Marconi 4 , 5 B. D. Lehmer 1 Received: date / Revised version: date =========================================================================================================================================== § INTRODUCTION Metallicity is one of the most revealing physical quantities in the study of galaxy evolution <cit.>. Metals are produced in galaxies and returned to the interstellar medium (ISM) through a variety of mechanisms, such as supernova explosions <cit.>, neutron star mergers <cit.>, and the ejection of gas by asymptotic giant branch stars <cit.>. The gas-phase metallicity of galaxies is impacted by various processes in galaxy evolution, including star formation, gas accretion, gas flows, and wind-driven outflows of gas from galaxies <cit.>, and it correlates with physical properties such as the star formation rate <cit.>, stellar mass, and morphology <cit.>. In studies of galaxy formation and evolution, the distribution of metals plays an essential role <cit.>. Metallicity gradients in galaxies are most often observed to be negative (i.e., decreasing metallicity with increasing radius), but sometimes exhibit positive or flat behaviour. The presence of a negative gradient with a higher metallicity towards the nucleus indicates that star formation begins in the centre of a galaxy and expands outwards <cit.>. If galaxies evolve as a closed system and originate from the inside out, negative abundance gradients are expected <cit.>. This is common in the disks of most spiral galaxies <cit.>. Three major surveys have significantly increased the sample of metallicity gradient measurements in the local Universe, namely CALIFA <cit.>, MaNGA <cit.>, and SAMI <cit.>. 
The results of these observations reveal that metallicity gradients are primarily negative in nearby galaxies. A positive metallicity gradient, by contrast, has been observed in several other galaxies <cit.>. In a simple scenario, galaxies develop and grow in a dense environment, with gas flowing through, around, and within them <cit.>. Each of the events that make up this cycle, e.g. modified star formation, accretion, and mergers, has a unique impact on the galaxy. The central metallicity of galaxies should be diluted, and positive gradients should be observed if metal-poor gas accretion is deposited directly into their centres, resulting in a break in the gradients at small galactocentric distances <cit.>. <cit.> show that the capture of a gas-rich dwarf galaxy, which is a process that can start nuclear activity in galaxies, can result in the accretion of metal-poor gas into the nuclear region and its dilution. In a high-redshift investigation of star-forming (SF) galaxies, <cit.> observed positive metallicity gradients and proposed that the inversion of the abundance gradient might be caused by interactions with the environment. Metallicity gradients can also be flat. For example <cit.> demonstrated that almost 10% of their AGN from MANGA are consistent with a constant metallicity across the galactic disc. In addition to star formation, stellar evolution, and environmental influences, the presence of an AGN may have an impact on the evolution of the host galaxy properties <cit.>. The metal enrichment due to AGN could be due to either an in situ top-heavy initial mass function (IMF) in the accretion disk around the supermassive black hole <cit.> or dust destruction in the broad line region (BLR), which releases metals into the ISM <cit.>. In this case, following <cit.>, the AGN would promote fast star formation and ISM enrichment. Ionized gas outflows detected through optical emission lines are frequently observed in the narrow line regions (NLRs) and the extended NLRs <cit.>. NLR outflows present valuable information for studying the interaction between AGN and their host galaxies. The high resolution capabilities of modern telescopes can resolve the NLRs of nearby galaxies within just a few parsecs of the supermassive black hole (SMBH), and observe their extension over several kiloparsecs into the bulges and/or disks of the host galaxies <cit.>. The kinematics and physical conditions of the ionized gas in the NLR provide unique information regarding the properties of the outflows, including the energy associated with them <cit.>. The NLR would then be further enhanced by AGN-driven outflows of high-metallicity gas that has been shown to be ejected from the BLR on kiloparsec (kpc) scales <cit.>. In-situ star formation within AGN-driven outflows is another potential contribution to metal enrichment of the gas around the BLR <cit.>. In this paper we study, for the first time, the metallicity gradients separately in the disc, low velocity dispersion, and high velocity dispersion components of NGC 7130 by using the multi-component fits of the emission lines performed in <cit.>. The paper is organized as follows. In Section 2, we provide a brief overview of the galaxy areas used in our study. In Section 3, we outline how we classify the data into AGN and SF regions. We also elaborate on the calibration relations employed to estimate the gas-phase metallicities and their variations as a function of radial distance. 
Finally in Section 4, we discuss our findings in the conclusion, summarizing the implications and significance of our study. § OBSERVATIONAL DATA The southern galaxy NGC 7130 has a redshift of z=0.016151. With an axial ratio of 0.88, this galaxy is almost face-on <cit.>. It is a peculiar Sa galaxy <cit.> with a bar <cit.>, and shows evidence of an ionised outflow <cit.>. NGC 7130 also hosts a Seyfert 2 AGN <cit.>. Radio and optical investigations suggest that both star formation and nuclear activity contribute to ionising the gas in the nuclear region. The optical spectrum of NGC 7130 shows narrow (low velocity dispersion, σ<250 km s^-1) and broad kinematic components (high velocity dispersion, σ>250 km s^-1) that correspond to both ambient and outflowing gas in the galaxy <cit.>. These authors use optical integral field spectroscopy at high angular resolution (7.5×7.5 arcsec^2 field of view and the point source function has a full width at half maximum of about 0.18 arcsec) obtained with the adaptive-optics-supported the Multi Unit Spectroscopic Explorer (MUSE) instrument on the European Southern Observatory (ESO) Very Large Telescope (VLT). In this study, we use information from the multi-component decomposition of the circumnuclear ISM of NGC 7130 from <cit.>, which was based on the principal MUSE-wavelength emission lines. Each of the spectral lines was fitted with a superposition of up to six Gaussian components with distinct velocities and velocity dispersions. The fit was made using the re-implementation of the software <cit.> (called ) <cit.> over 2689 spectral bins with a signal-to-noise ratio of the Hα line of 100 or more generated using the Voronoi binning algorithm by <cit.>. To reduce the number of free parameters, the kinematics of different emission line components were tied to those of the Hα line. The number of components required for a given spectrum was chosen based on criteria using the χ^2 goodness-of-fit estimator <cit.>. In total, nine distinct kinematic components were characterised. Six of the nine kinematic components found by <cit.> are associated with the AGN outflow. Their nature is deduced from their large velocity with respect to the galaxy (typically 100s of km s^-1), their relatively large velocity dispersion (of 100s of km s^-1 or more), and line ratios incompatible with an ionization by stars. These components related to the AGN can be further subdivided into kinematically narrow components (typical velocity dispersion below 250 km s^-1) and kinematically broad components (typical velocity dispersion above 250 km s^-1). A further two of the nine kinematic components of the circumnuclear medium in NGC 7130 are related to the disc. The final component might be an artifact caused by the fitting procedure. Based on their observations, <cit.> produce a toy model of the circumnuclear medium of NGC 7130 (see their Fig. 14). They propose that the ISM in the disc has a low-density (n_e≈90 cm^-3), low velocity dispersion (σ<100 km s) background on top of which we observe a larger density (n_e≈500 cm^-3), slightly blueshifted (-100 km s^-1<V<0 km s^-1), and slightly higher velocity dispersion (σ<250 km s^-1) gas associated with the strongest knots of star formation and that was called the zero-velocity narrow component to distinguish it from kinematically narrow components in the outflow. The AGN outflow is postulated to have a biconical morphology with a main axis that is close to coplanar with the galaxy disc. 
Each of the cones would be made of one or more collimated narrow components, surrounded by a broad component with a larger opening angle. The blueshifted components would correspond to the approaching side of the bicone (located slightly above the plane of the galaxy), whereas the redshifted ones would correspond to the receding one (located below the mid-plane). According to the model, the redshifted side of the cone is partly obscured by extinction from the disc, and hence parts of it are not observed or are seen at a lower signal-to-noise level than their blueshifted counterparts. Hence, the properties derived for the redshifted components of the outflow are more uncertain than those derived for the blueshifted components. In this study, we consider four of the nine components from <cit.>, that is, we measure the gas-phase metallicity in the disc, blueshifted broad, blueshifted narrow, and zero-velocity narrow components. The redshifted high velocity dispersion and redshifted low velocity dispersion components are excluded because the redshifted components are obscured by the disc, which makes it harder to obtain precise gas-phase metallicity measurements. The zero-velocity high velocity dispersion component is probably a spurious artifact required to fit the spectra in some of the bins. We also do not investigate the crescent low velocity dispersion component because this region combines the interaction of the jet and the disc, and there is no reliable way to calculate the metallicity in such complex regions. § RESULTS To classify regions in NGC 7130 as AGN-ionized or H ii-ionized, we use the standard Baldwin–Phillips–Terlevich (BPT) diagrams <cit.> applied to the spectra of each of the Voronoi bins defined by <cit.>. The distribution of the bins in the [O iii]λ5007/Hβ versus [N ii]λ6584/Hα diagram, together with the boundary lines between SF and AGN defined by <cit.> and <cit.>, is shown in Fig. <ref>. The red demarcation line by <cit.> is a theoretical upper limit on the location of SF galaxies in this diagram, obtained using a combination of photoionization and stellar population synthesis models. It yields a conservative selection of AGN. <cit.> revised the boundary line on the basis of observational evidence that SFs and AGNs are distributed along two separate sequences. It yields a conservative SF selection. In order to avoid ambiguous classifications, we have adopted the more conservative selection for both SFs and AGNs, excluding from further consideration the bins located between the two lines. These could include regions with a mixture of ionizing sources <cit.>, where both AGN and stellar emission contribute to the ionization <cit.>. In Fig. <ref>, the diagnostic BPT diagram displays the distribution of AGN and SF bins for each component. We find that the disc component contains both SF- and AGN-ionized regions, whereas the other three (outflow) components are dominated by AGN photoionization. 
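For reference, the classification step can be condensed into a short routine. The sketch below (in C, purely illustrative) uses the commonly quoted functional forms of the theoretical and empirical demarcation curves, which we assume correspond to the boundaries cited above; the function name is ours.

#include <math.h>

/* Classify a Voronoi bin with the [N II]/Halpha BPT diagram.
 * Inputs are line fluxes; the demarcations are the commonly used
 * forms of the theoretical (upper) and empirical (lower) curves.
 * Returns 0 = SF, 1 = AGN, 2 = intermediate (discarded).          */
int bpt_classify(double n2_6584, double halpha,
                 double o3_5007, double hbeta)
{
    double x = log10(n2_6584 / halpha);   /* log [N II]6584/Halpha  */
    double y = log10(o3_5007 / hbeta);    /* log [O III]5007/Hbeta  */

    /* empirical boundary: conservative SF selection */
    double sf_limit  = 0.61 / (x - 0.05) + 1.30;
    /* theoretical boundary: conservative AGN selection */
    double agn_limit = 0.61 / (x - 0.47) + 1.19;

    if (x < 0.05 && y < sf_limit)
        return 0;                         /* star-forming           */
    if (x >= 0.47 || y > agn_limit)
        return 1;                         /* AGN-dominated          */
    return 2;                             /* between the two curves */
}

A bin is kept only if it falls cleanly on the SF or AGN side; the intermediate return value corresponds to the bins we exclude from further consideration.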
§.§ Gas-phase metallicity estimation The gas-phase metallicity in each bin is calculated using calibrations based on strong emission lines, i.e., adopting the so-called strong-line method. The MUSE wavelength coverage and the laser-affected wavelength range (contaminated by the laser during the adaptive-optics observations) prevent us from measuring the temperature-sensitive emission lines [O iii]λ4363 and [N ii]λ5756, and thus from estimating element abundances using the direct (T_e) method. While we are able to detect [O ii]λλ7319,7330 in the majority of bins, the [O ii]λλ3720,3730 doublet falls outside the MUSE wavelength range, making it impossible to calculate abundances based on these auroral lines. We are aware of the systematic discrepancies in gas-phase metallicities which are potentially present when using strong-line methods: differences of up to 0.6 dex for H ii regions <cit.> and up to 0.8 dex for AGN can be found when comparing metallicities obtained with strong-line calibrations from different authors, particularly in the low-metallicity regime (below 8.5, e.g., <cit.>). To estimate gas-phase metallicities we utilize the emission-line intensities [O iii]λ5007, [N ii]λ6584, Hα, Hβ, [S ii]λ6716, and [S ii]λ6730. We discard the bins where one or more of the aforementioned emission lines are not detected. We consider two different calibration relations to estimate the gas-phase metallicities for SF- and AGN-dominated regions. Hereafter, we refer to the gas-phase metallicity as Z_gas. For SF regions, we utilized the methodology developed by <cit.> to determine the gas-phase metallicity. <cit.> introduced novel empirical calibrations specifically designed for a selection of commonly employed strong-line diagnostics, with a scatter around the calibrations of up to 0.15 dex. These calibrations enable accurate calculation of the oxygen abundance in star-forming galaxies and allow us to estimate the gas-phase metallicity in SF regions. Following <cit.>, we define: N2 = [N ii]λ6584/Hβ, S2 = ([S ii]λ6716+[S ii]λ6730)/Hβ, R3 = [O iii]λ5007/Hβ, and calculate the gas-phase metallicity for a given SF region following: Z_gas = 8.424 + 0.030×log_10(1.33×R3/S2) + 0.751×log_10(1.33×N2) + (-0.349 + 0.182×log_10(1.33×R3/S2) + 0.508×log_10(1.33×N2))×log_10(S2) for log_10(N2) > -0.6, and Z_gas = 8.072 + 0.789×log_10(R3/S2) + 0.726×log_10(N2) + (1.069 - 0.170×log_10(R3/S2) + 0.022×log_10(N2))×log_10(S2) for log_10(N2) ≤ -0.6. To compute gas-phase metallicities in AGN regions, we use the relation by <cit.>. They first proposed a calibration between the metallicity Z_gas and the intensities of optical emission-line ratios of AGN, valid for gas-phase metallicities in the range 8.4 ≤ Z_gas ≤ 9.4. The Z_gas computed from these calibrations varies by ∼0.1 dex. The metallicity value should be corrected in order to take into account electron density (n_e) effects: Z_gas = Z_int - 0.1×log_10(n_e/300 cm^-3), in which: Z_int = 8.34 + 0.212×N2 - 0.012×N2^2 - 0.002×R3 + 0.007×(N2×R3) - 0.002×(N2^2×R3) + 6.52×10^-4×R3^2 + 2.27×10^-4×(N2×R3^2) + 8.87×10^-5×(N2^2×R3^2). To estimate n_e, we adopt the <cit.> measurements, which are mainly based on the [S ii]λ6716/[S ii]λ6730 flux ratio calibration from <cit.>. The distribution of Z_gas for each component, combining both AGN and SF bins, is shown in the histogram in Fig. <ref>, while Fig. <ref> shows the spatial distribution (X, Y) of the AGN and SF bins. We exclude a small number of data points (fewer than ten bins per component) with Z_gas ≤ 8.2. 
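The two calibrations, exactly as written above, can be summarized in the following sketch (C, illustrative only; the function names are ours and the line ratios follow the definitions given in the text).

#include <math.h>

/* Strong-line metallicity estimates, expressed as Z_gas, following the
 * calibrations quoted in the text.  Line ratios as defined there:
 * n2 = [N II]6584/Hbeta, s2 = ([S II]6716+6730)/Hbeta,
 * r3 = [O III]5007/Hbeta.                                            */

double zgas_sf(double n2, double s2, double r3)
{
    double ls2 = log10(s2);
    if (log10(n2) > -0.6) {
        double a = log10(1.33 * r3 / s2);
        double b = log10(1.33 * n2);
        return 8.424 + 0.030 * a + 0.751 * b
               + (-0.349 + 0.182 * a + 0.508 * b) * ls2;
    } else {
        double a = log10(r3 / s2);
        double b = log10(n2);
        return 8.072 + 0.789 * a + 0.726 * b
               + (1.069 - 0.170 * a + 0.022 * b) * ls2;
    }
}

double zgas_agn(double n2, double r3, double ne)
{
    double zint = 8.34 + 0.212 * n2 - 0.012 * n2 * n2 - 0.002 * r3
                  + 0.007 * n2 * r3 - 0.002 * n2 * n2 * r3
                  + 6.52e-4 * r3 * r3 + 2.27e-4 * n2 * r3 * r3
                  + 8.87e-5 * n2 * n2 * r3 * r3;
    /* electron-density correction, ne in cm^-3 */
    return zint - 0.1 * log10(ne / 300.0);
}

For SF bins we evaluate zgas_sf with the measured N2, S2, and R3 ratios, and for AGN bins zgas_agn with N2, R3, and the electron density of the bin.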
§.§ Radial variations of Z_gas There is an increasing amount of observational data supporting the existence of metal-rich outflows from AGN-powered winds and/or jets <cit.>. Cold gas that is highly enriched in metals has been discovered recently in various clusters and groups <cit.>. This gas is either correlated with radio jets or found preferentially along cavities. Its abundance is generally near-solar or even super-solar and, in some cases, it is even more enriched than the regions closest to the centres of galaxies <cit.>. Given the large amount of energy needed to move gas to its measured position, it is assumed that this metal-enriched gas was carried to its observed location by AGN-related processes <cit.>. We derive radial chemical abundance profiles for the different components of NGC 7130, as shown in Fig. <ref>. We employ a binned linear regression approach to fit the radially averaged Z_gas as a function of radial distance. In Table <ref>, we list the slopes and the intercepts at the centre (R=0) of the gradients for each component. For the blueshifted narrow and blueshifted broad components, we find an overall positive radial trend, which suggests a gradual increase in gas-phase metallicity as we move towards the outer regions. This finding implies that internal mechanisms within the galaxy actively transport large amounts of metals towards these outer areas. The disc component (top-left panel in Fig. <ref> and left panel in Fig. <ref>) is harder to interpret, as it shows a superposition of AGN- and SF-ionized regions. Since the axis of the AGN outflow is probably close to the plane of the disc <cit.>, it is possible that the AGN radiation is genuinely ionizing parts of the disc. However, it is also possible that the bins ionized by the AGN are an artefact caused by the difficulty of measuring the intensity of the component of the [O iii] lines corresponding to the disc <cit.>. The parts of the disc ionized by the AGN tend to show a positive radial gas-phase metallicity gradient. For the SF bins in the disc (the zero-velocity narrow component), by contrast, we find a negative radial gas-phase metallicity gradient. Our results align with the notion that galaxies underwent relatively smooth gas accretion histories, with metal-poor inflows and outflows preferentially affecting the outer regions of galaxies. This, combined with the inside-out evolution of galaxies, naturally gives rise to negative metallicity gradients <cit.>. § CONCLUSIONS We address the still unsettled issue of the radial distribution of gas-phase metallicities in AGN by exploring the different gas components identified in NGC 7130 by <cit.>. We distinguish between SF bins and bins dominated by the AGN by means of the classical BPT diagnostics. Depending on the emission lines that can be detected in AGN and SF regions, many studies have proposed different strong-line ratios to characterize the metallicities <cit.>. Depending on the location in the BPT diagram, we compute gas-phase metallicities based on either the <cit.> or the <cit.> calibration relations. We then analyse the radial gas-phase metallicity distribution for each component, separately. In the AGN-dominated components, the gas-phase metallicity mostly increases with radius, suggesting that the activity of the AGN indeed plays a significant role in shaping the radial distribution of the gas-phase metallicity. This suggests that AGN activity is responsible for actively transporting metals from the central region to its surroundings. This intriguing phenomenon has the potential to establish a notably steep relationship between the radial distance from the galactic centre and the gas-phase metallicity. We find that, for the AGN-ionized gas, Z_gas has its lowest values in the nuclear region. 
This result potentially holds key insights into environmental influences. The present AGN activity of NGC 7130 may be the result of the recent accretion of a metal-poor dwarf galaxy. Although there is no strong observational evidence of interaction between NGC 7130 and its neighbouring galaxies, the warped aspect of the outskirts of NGC 7130 <cit.> may indicate a past close encounter between them. Further signs of a possible recent interaction are the asymmetric velocity and velocity dispersion maps of the ionized gas in the galaxy <cit.>. As <cit.> have shown, the capture of a gas-rich dwarf galaxy, which is a process that can start nuclear activity in galaxies, can result in the accretion of metal-poor gas into the nuclear region and its dilution. To conclude, our findings emphasize the crucial role that AGN activity plays in shaping the metal enrichment of galaxies and provide valuable insights into the underlying processes driving the gas-phase metallicity gradients in galaxies. Our work highlights the importance of internal mechanisms in redistributing metal content throughout a galaxy, from the centre to the outskirts. A.A. thanks Kastytis Zubovas, C. Ramos Almeida, Rogério Riffel, and A. Khoram for helpful discussions. Also, A.A. acknowledges support from the ACIISI, Consejería de Economía, Conocimiento y Empleo del Gobierno de Canarias and the European Regional Development Fund (ERDF) under the grant with reference PROID2021010044. SC acknowledges funding from the State Research Agency (AEI-MCINN) of the Spanish Ministry of Science and Innovation under the grant 'Thick discs, relics of the infancy of galaxies' with reference PID2020-113213GA-I00. Co-funded by the European Union (MSCA EDUCADO, GA 101119830). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them. JHK acknowledges grant PID2022-136505NB-I00 funded by MCIN/AEI/10.13039/501100011033 and EU, ERDF.
Cabin: Confining Untrusted Programs within Confidential VMs Benshan Mei^1,2, Saisai Xia^1,2, Wenhao Wang^1,2, Dongdai Lin^1,2 ^1Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China ^2School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China July 22, 2024 ================================================================== § ABSTRACT Confidential computing safeguards sensitive computations from untrusted clouds, with Confidential Virtual Machines (CVMs) providing a secure environment for the guest OS. However, CVMs often come with large and vulnerable operating system kernels, making them susceptible to attacks exploiting kernel weaknesses. The imprecise control over read/write access in the page table has allowed attackers to exploit vulnerabilities. The lack of a security hierarchy leads to insufficient separation between untrusted applications and the guest OS, making the kernel susceptible to direct threats from untrusted programs. This study proposes Cabin, an isolated execution framework within the guest VM utilizing the latest AMD SEV-SNP technology. Cabin shields untrusted processes in the user space of a lower virtual machine privilege level (VMPL) by introducing a proxy-kernel between the confined processes and the guest OS. Furthermore, we propose execution protection mechanisms based on fine-grained control of the VMPL privilege for vulnerable programs and the proxy-kernel to minimize the attack surface. We introduce an asynchronous forwarding mechanism and anonymous memory management to reduce the performance impact. The evaluation results show that the Cabin framework incurs a modest overhead (5% on average) on the Nbench and WolfSSL benchmarks. § INTRODUCTION Privilege separation involves dividing privileges among different entities or processes within a system to limit potential damage caused by a compromised component. In traditional computing systems, privilege separation is achieved by separating the kernel code and userspace code. The kernel, trusted with access to all resources, is segregated from userspace programs, which are confined to their own address spaces. This separation is enforced by the CPU's execution mode and security checks performed by the memory management unit (MMU). However, traditional privilege separation has certain drawbacks. Firstly, the kernel-user interface, represented by system calls, can allow untrusted processes to bypass kernel protections due to the large code base of the kernel. While measures like sandboxing and system call filtering can restrict attackers' ability to abuse the interface, they also increase the kernel's attack surface since these countermeasures are often implemented as part of the kernel itself. Secondly, the MMU lacks fine-grained protection for applications. The access permissions defined in the page table entries (PTEs) can only be configured as either writable or non-writable, invariably remaining readable. This limitation hinders the efficient implementation of execute-only memory (XOM), which is known to be effective in thwarting code-reuse attacks by making it challenging for attackers to identify usable gadgets. In recent years, hardware-based trusted execution environment technologies, such as Intel SGX <cit.>, AMD SEV <cit.>, Intel TDX <cit.>, and ARM CCA <cit.>, have paved the way for the emergence of confidential computing. 
This new computing paradigm focuses on safeguarding the guest or enclave from attacks originating from potentially untrusted hosts. In the context of confidential computing, protecting the guest kernel assumes even greater significance, as it is responsible for securing users' most sensitive data. If the guest kernel is compromised, the entire CVM is at risk of compromise, potentially resulting in the leakage of any associated sensitive data. To address the concerns mentioned above, particularly the risks associated with CVMs, we introduce Cabin, a novel secure execution framework tailored to confine vulnerable processes running within a CVM. Our framework leverages hardware-based isolation mechanisms, i.e., VMPL within AMD SEV-SNP, to establish a secure environment for executing vulnerable processes. Notably, with VMPL, one can assign read, write, and execute permissions independently, allowing XOM to work efficiently. Specifically, in our framework, untrusted programs are placed at a lower VMPL, ensuring the protection of the guest OS from vulnerable or malicious applications. A trusted proxy-kernel within the lower VMPL acts as an intermediary, facilitating communication between confined processes and the trusted guest OS. To minimize the overhead of VMPL switches, we have designed an asynchronous method for handling events triggered by the application, such as system calls, page faults, interrupts, and exceptions. This approach reduces the number of required VMPL switches and improves overall efficiency. Additionally, our framework allows for flexible monitoring and tracing of processes running within the user space of the lower VMPL without requiring intervention from the guest OS. This enables the CVM owner to define custom policies for monitoring confined processes. Lastly, our framework incorporates monitoring and logging capabilities to detect any suspicious activities and provide valuable insights into potential threats. This additional layer of security enables proactive threat detection and response. We have implemented a prototype of the Cabin framework on commodity AMD SEV-SNP servers, using it to provide execution protection and syscall filtering. Through evaluations on various benchmarks, including syscall routing, page fault handling, Nbench, and WolfSSL, we observed that, although the VMPL switch is costly (in particular, syscall routing is several times slower than the baseline), Cabin introduces acceptable overhead in real-world applications – approximately 5% and 10% for the Nbench and WolfSSL benchmarks, respectively. Overall, our confined secure execution framework provides a practical solution for enhancing the security of CVMs, ensuring the protection of sensitive data from unauthorized access. Contributions The contributions of this paper are as follows. * We design and implement a secure execution framework for processes within CVMs based on fine-grained control of the VMPL privilege, protecting the guest OS from direct threats posed by vulnerable or malicious programs. * We propose VMPL-enhanced cross-layer execute-only protection for vulnerable programs and the proxy-kernel running in a lower VMPL, making it harder to find exploitable gadgets. * We introduce an asynchronous forwarding mechanism to minimize the performance impact on confined processes. Self-managed memory provided by the proxy-kernel further reduces the performance impact. * We evaluate the performance impact of the framework on the Nbench and WolfSSL benchmarks. 
The evaluation results demonstrate the modest overhead of the proposed framework. § BACKGROUND The emergence of new hardware-based privilege separation mechanisms within CVMs presents new opportunities to enhance system and application security. With advancements in research on execute-only protection, intra-process isolation, and syscall filtering, we strive to leverage these technologies to further strengthen system security. Therefore, we adhere to the traditional paradigm of software security, which emphasizes protecting the guest OS from potential threats posed by untrusted programs. §.§ SEV-SNP and VMPL It is crucial to protect the guest VM from a malicious host in confidential computing. AMD SEV (Secure Encrypted Virtualization) is the first generation of hardware-assisted virtualization technology that addresses this problem with memory encryption and isolation-enhanced security <cit.>. To defend against malicious hypervisors, SEV and SEV-ES (Encrypted State) were proposed in succession by AMD to encrypt the memory pages and the private register contents of VMs with different keys <cit.>. However, the nested paging is still under the control of the hypervisor, so the SEV VM's pages could be mapped to another VM or the hypervisor <cit.>. Although the private state and pages of a VM are encrypted under different keys, SEV/SEV-ES lack integrity protection; e.g., the hypervisor can perform memory replay attacks. In 2020, AMD introduced SEV-SNP (Secure Nested Paging), further enhancing the protection of CVMs from a malicious hypervisor <cit.>. In SEV-SNP, an encrypted physical page cannot be mapped to multiple owners by a malicious hypervisor. This mechanism is realized by the introduction of a Reverse Map Table (RMP). The RMP is a metadata table managed by the AMD Platform Security Processor (AMD PSP). It records the ownership of each system physical page and dictates read, write, and execute permissions for each VMPL. On every nested page table walk, the RMP is consulted for the permissions and ownership of the system physical memory page. A nested page fault (#NPF) is raised on illegal access to physical pages; it is captured and handled by the hypervisor. The hypervisor manages VM Save Areas (VMSAs) corresponding to the four VMPLs. The access permissions to physical memory pages are restricted by configuring the VMPL permissions of each page in the RMP. A vCPU can run in different VMPL contexts by switching the corresponding VMSAs with the help of the hypervisor. Compared to page table protection, the RMP-managed VMPL privilege is more flexible. Traditionally, we have NX, R/W, and U/S bits to denote non-executable, read-only, and user pages. However, the read and write permissions are not orthogonal in the page table. The RMP therefore separates the read and write access to guest physical pages, allowing one-way information flow between different VMPLs. Moreover, it separates the user and supervisor execution privilege for guest physical pages, preventing code regions from being executed by unauthorized supervisor or user applications running in the lower VMPL. It is complementary to the traditional SMEP (Supervisor Mode Execution Prevention) mechanism on the x86 platform, which combines the U/S and NX bits. The fine-grained privilege separation allows for strong execution protection.
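Conceptually, the RMP bookkeeping described above can be pictured as follows (a toy model in C; the actual RMP entry format is hardware-defined and managed by the platform, so the layout and names here are only illustrative).

#include <stdint.h>
#include <stdbool.h>

/* Toy model of the RMP bookkeeping: one entry per system physical
 * page, recording the owner and a per-VMPL permission mask.  Only the
 * checks performed on a nested page table walk are mirrored here.   */

enum { VMPL_COUNT = 4 };
#define PERM_READ   (1u << 0)
#define PERM_WRITE  (1u << 1)
#define PERM_UEXEC  (1u << 2)   /* user-mode execute       */
#define PERM_SEXEC  (1u << 3)   /* supervisor-mode execute */

struct rmp_entry {
    uint64_t owner_asid;          /* which guest owns the page */
    uint8_t  perm[VMPL_COUNT];    /* permission mask per VMPL  */
};

/* Returns true if the access is allowed; otherwise the hardware
 * would raise a nested page fault (#NPF) for the hypervisor.     */
static bool rmp_check(const struct rmp_entry *e, uint64_t asid,
                      int vmpl, uint8_t requested)
{
    if (e->owner_asid != asid)
        return false;             /* page not owned by this guest */
    return (e->perm[vmpl] & requested) == requested;
}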
§.§ Execute-only memory Over the last thirty years, there has been substantial advancement in software attack and defense technology. The memory safety issue has been a long-standing unsolved problem. Strategies like address space layout randomization (ASLR), stack canaries, and data execution prevention (DEP) have been used to address memory safety weaknesses. Despite these improvements, attackers persist in discovering new methods to exploit software vulnerabilities, underscoring the ongoing competition between attackers and defenders in the cyber-security realm. The absence of code confidentiality enables attackers to gain arbitrary access to a running process by analyzing the code region for exploitable gadgets residing in the vulnerable software <cit.>. Various software and hardware mitigations have been proposed to enhance code confidentiality through eXecute-Only Memory (XOM) <cit.>. XOM stands out as a straightforward and effective method that minimizes the attack surface and significantly raises the bar for attackers seeking to exploit software vulnerabilities. By restricting access to code regions at runtime, XOM offers an additional security layer that prevents unauthorized access and manipulation of critical processes. Previous research has demonstrated the effectiveness of XOM in strengthening software security <cit.>. By preventing read access to code pages, XOM makes it hard for attackers to find gadgets for subsequent attacks. Numerous sandbox frameworks based on the protection-key rights register for user pages (PKRU) have emerged recently <cit.>. However, due to the unprivileged nature of these hardware-based intra-process isolation mechanisms, they can be easily circumvented by exploiting the confused-deputy problem of virtual-memory-related syscalls <cit.>. Despite efforts to bolster the isolation between trusted and untrusted components, it is still considered to be weak in security-sensitive environments. Essentially, this mechanism offers safety rather than security. Traditional page-table-based memory protection is inadequate due to the absence of read/write access separation. The R/W bit on the x86 platform cannot be used to enable execute-only memory for vulnerable programs, allowing attackers to easily locate gadgets and compromise the software system in either the kernel or user space. Even with PKRU-based execute-only protection, where read and write permissions are separated for each memory domain, it remains coarse-grained and can be circumvented in user space <cit.>. §.§ Syscall filtering Syscall filtering plays a vital role in safeguarding the OS from vulnerable and malicious software <cit.>. Existing syscall filtering mechanisms often reside in the kernel space. Once bypassed, the entire system is in danger. PKRU-based in-process sandboxes are lightweight and efficient in ensuring software security <cit.>. However, the non-privileged hardware intra-process isolation primitives can be easily bypassed through the confused-deputy problem of syscalls <cit.>. In recent years, syscall filtering has been widely used to ensure the security of such hardware-based intra-process isolation mechanisms <cit.>. However, most of these syscall filtering mechanisms reside within the kernel space; once compromised, the entire system is in danger. The lack of layered defense poses a great threat to the kernel. We argue that the user and supervisor separation is insufficient and exploitable. To reduce the attack surface, untrusted programs should be prevented from directly interacting with the guest OS within the CVM. §.§ Threat model Our threat model aligns with that of confidential computing, where everything outside the virtual machine is considered untrusted. This includes the host OS. 
Our system relies on critical services from the guest OS, which is trusted. The proxy-kernel acts as a bridge between the confined processes and the guest OS, and it is also trusted. We assume that the applications are untrusted and may contain memory safety errors. Additionally, side channels and hardware attacks are outside the scope of our considerations. We operate under the assumption that the hardware functions as described in the official documentation. Furthermore, memory encryption and integrity protection measures are in place to provide an extra layer of security. § DESIGN We observe that the precise control over VMPL privilege on each guest physical page enables the execution of programs under a lower VMPL. However, merely possessing this control is insufficient to propose a secure isolated execution framework. The introduction of four permission bits in the VMPL mechanism addresses issues associated with traditional page table protection flags and is specifically tailored to ensuring the security of code running in the user and kernel space of the lower VMPLs. Therefore, to safeguard the guest OS, we introduce the Cabin framework, which confines untrusted programs to the user space of a lower VMPL through fine-grained VMPL privilege management. To accomplish this, the architectural design is detailed as follows. §.§ Overview Fig. <ref> presents an overview of the framework. We introduce a proxy-kernel within the lower VMPL to facilitate the scheduling of processes at lower VMPLs. The proxy-kernel directly monitors confined processes and mediates the communication between the guest OS and these processes. This mediation enables the application of flexible security policies before forwarding syscalls and exceptions to the guest OS. Consequently, it establishes a layer of defense against untrusted processes. The owner of the CVM is allowed to customize policies to monitor these processes without requiring intervention from the guest OS. This design ensures flexibility in process monitoring and tracing. §.§ System design Cabin shields untrusted programs in the user space of a lower VMPL. We must ensure a secure and reliable environment for untrusted applications running at lower VMPLs, and managing the runtime state of confined processes is crucial. To address this, we introduce a proxy-kernel to serve these confined processes. The proxy-kernel functions as an intermediary between the restricted processes and the underlying guest OS, managing syscalls and interrupts on their behalf. The system design of the framework consists of four main components: the life-cycle management of confined threads, the context switch, the syscall routing, and the exception model. Below, we elaborate on each aspect of the design. Life-cycle management The framework supports scheduling each thread independently to the user space of a lower VMPL. The life-cycle of each thread comprises three stages: creation, entry, and exit. The guest OS manages the life-cycle of the untrusted processes as illustrated in Fig. <ref>. During the initialization, the guest OS prepares the runtime environment for all lower VMPLs. Before entering the lower VMPL, the guest OS assigns a specific VMPL to each thread and synchronizes the hardware state of the thread to the corresponding VMSA. Then, by requesting the hypervisor to execute in the specified VMPL, the current CPU directly switches to the corresponding VMPL and resumes the execution. 
Initially, Cabin enters the kernel mode of the lower VMPL, performing a series of initialization tasks for syscall and interrupt handling. Then it directly switches to the user space and continues the execution of the user thread. The proxy-kernel waits for syscall and interrupt events from the user space, and forwards these events to the guest OS or handles them itself. Upon receiving a request from the lower VMPL, the guest OS decides whether it is an interrupt or a syscall event, and calls the corresponding handler. The request loop continues until the exit or exit_group syscall is received from the lower VMPL, after which the guest OS no longer schedules the thread to the lower VMPL. Finally, the guest OS releases the resources of the confined process. Context switch The guest OS manages the context switch of a confined process as usual. Compared to a normal context switch in the guest OS, the hardware state of the confined process is saved in the VMSA of the lower VMPL, which is allocated by the guest OS during initialization. Because the guest OS has direct access to the hardware state of all lower VMPLs, Cabin synchronizes this state between the VMSA and the guest-OS-managed Task Control Block (TCB) on context switches. Therefore, the guest OS simply loads and restores the hardware task state of confined processes from a different location. To optimize resource utilization, Cabin supports all lower VMPLs to minimize contention for the limited VMSAs. The confined processes are assigned to different lower VMPLs, eliminating the need to restore context when the lower VMPL is not preempted by other processes. Syscall routing The syscall routing logic is outlined in Fig. <ref>. For confined processes running in the user space of lower VMPLs, syscalls are handled by the proxy-kernel before being forwarded to the guest OS. When switching VMPLs, the syscall arguments are automatically saved in the VMSA of the lower VMPL. The guest OS can directly access this hardware state. The result is returned to the proxy-kernel by modifying the VMSA of the lower VMPL. Meanwhile, certain syscalls can be directly handled by the proxy-kernel. To this end, we simply simulate the syscall and sysret semantics with VMPL switching, allowing syscalls to be handled by the guest OS as usual. Exception model Exceptions in the lower VMPL are, in principle, forwarded to the guest OS. All necessary information is stored in the trap frame during a trap event, which is then forwarded to the guest OS. Exceptions are managed in a standard manner. After handling the trap event, the guest OS requests the hypervisor to schedule the confined process. To reduce context switches, the proxy-kernel handles certain exceptions itself. Exceptions are redirected to the guest OS as regular syscalls but are managed in a different setting. Handling exceptions involves changing the preempt mode and interrupt status of VMPL0 to ensure that the handler is invoked in a correct environment. With the above design, we enable untrusted processes to be scheduled to the user space of a lower VMPL, isolated from the guest OS with the VMPL hardware mechanism. The proxy-kernel mediates the communication between the untrusted programs and the guest OS. Unlike existing works, the guest OS in the framework manages all resources needed by the lower VMPLs. This innovative design brings numerous security opportunities, which are detailed in the following sections. 
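The routing decision made by the proxy-kernel can be summarized by the following sketch (C; the register layout, helper names, and syscall numbers are illustrative assumptions rather than Cabin's actual interfaces).

#include <stdbool.h>
#include <stdint.h>
#include <errno.h>

/* Sketch of the routing decision made by the proxy-kernel when a
 * confined process traps with a syscall.  The struct layout and the
 * helpers below are hypothetical placeholders, not Cabin's API.     */

struct trap_regs {
    uint64_t nr;        /* syscall number                    */
    uint64_t arg[6];    /* syscall arguments                 */
    int64_t  ret;       /* return value written back to user */
};

/* Placeholder for the anonymous-memory fast path in the proxy-kernel. */
static int64_t handle_in_proxy_kernel(struct trap_regs *r) { (void)r; return -ENOSYS; }
/* Placeholder for the VMPL switch / HotCalls request to the guest OS. */
static int64_t forward_to_guest_os(struct trap_regs *r)    { (void)r; return -ENOSYS; }

static bool policy_allows(uint64_t nr)
{
    static const uint64_t denied[] = { 101 /* e.g. ptrace on x86-64 */ };
    for (unsigned i = 0; i < sizeof denied / sizeof denied[0]; i++)
        if (nr == denied[i])
            return false;
    return true;
}

static bool handled_by_proxy(uint64_t nr)
{
    /* e.g. mmap(9), mprotect(10), munmap(11), mremap(25) on anon pages */
    return nr == 9 || nr == 10 || nr == 11 || nr == 25;
}

void proxy_syscall_entry(struct trap_regs *r)
{
    if (!policy_allows(r->nr)) {             /* filtered before VMPL0 is involved */
        r->ret = -EPERM;
    } else if (handled_by_proxy(r->nr)) {
        r->ret = handle_in_proxy_kernel(r);  /* no VMPL switch needed             */
    } else {
        r->ret = forward_to_guest_os(r);     /* synchronous GHCB or HotCalls path */
    }
}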
§.§ Performance optimization Asynchronous forwarding Most kernel operations execute quickly, rendering it costly to forward syscalls and exceptions synchronously via VMPL switching. To improve performance, Cabin incorporates an asynchronous forwarding mechanism into the proxy-kernel. Since there are no barriers between threads in different VMPLs, this mechanism relies on shared-memory, spinlock-based cross-thread communication. During the initialization stage, Cabin starts a service thread that waits for requests using a spinlock. After entering the lower VMPL, the proxy-kernel can use this interface to forward syscalls and interrupts to the service thread. Once the request is completed, the proxy-kernel returns the result to the confined process, which then resumes execution until the next syscall or interrupt occurs. Compared to other asynchronous forwarding mechanisms <cit.>, Cabin directly intercepts syscalls and exceptions in the proxy-kernel, requiring no modification of the confined programs. Untrusted programs are not allowed to use this mechanism directly to bypass the proxy-kernel, reducing the attack surface in user space (a minimal user-space sketch of this handshake is given at the end of this subsection). Self-managed memory To mitigate the performance impact of expensive VMPL switching, Cabin further incorporates anonymous memory management into the proxy-kernel, allowing virtual-memory-related syscalls on anonymous pages to be handled directly. The physical pages are granted by the guest OS and managed by the proxy-kernel directly. When needed, the proxy-kernel requests additional memory pages from the guest OS. These pages are allocated on demand for confined processes, with any page faults on these anonymous pages being handled by the proxy-kernel, bypassing the guest OS.
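The handshake behind the asynchronous forwarding can be illustrated with an ordinary user-space program (C; both sides are plain pthreads here, whereas in Cabin the requester would be the proxy-kernel in the lower VMPL and the worker a service thread in VMPL0).

#include <stdatomic.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal HotCalls-style channel: a shared slot plus a flag that both
 * sides poll, so no VMPL switch (here: no futex/condvar) is needed.  */
struct hotcall {
    _Atomic int pending;     /* set by requester, cleared by worker */
    _Atomic int stop;
    uint64_t nr, arg, ret;   /* request payload and result          */
};

static struct hotcall ch;

static void *service_thread(void *unused)
{
    (void)unused;
    while (!atomic_load(&ch.stop)) {
        if (!atomic_load_explicit(&ch.pending, memory_order_acquire))
            continue;                       /* spin while waiting for work  */
        ch.ret = ch.nr + ch.arg;            /* stand-in for the real request */
        atomic_store_explicit(&ch.pending, 0, memory_order_release);
    }
    return NULL;
}

static uint64_t hotcall_issue(uint64_t nr, uint64_t arg)
{
    ch.nr = nr;
    ch.arg = arg;
    atomic_store_explicit(&ch.pending, 1, memory_order_release);
    while (atomic_load_explicit(&ch.pending, memory_order_acquire))
        ;                                   /* spin until the worker is done */
    return ch.ret;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, service_thread, NULL);
    printf("result = %lu\n", (unsigned long)hotcall_issue(39, 3));
    atomic_store(&ch.stop, 1);
    pthread_join(tid, NULL);
    return 0;
}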
§ CASE STUDIES With the proxy-kernel, the Cabin framework enables a series of optimization and security mechanisms for confined processes. The case studies on the framework cover three main points: execute-only protection for untrusted processes and the proxy-kernel, syscall filtering for untrusted processes, and exception interception for flexible process monitoring and tracing. These studies showcase potential applications of the framework. §.§ Execute-only protection According to the official document <cit.>, there are four distinct permission bits for each guest physical page: read, write, user execution, and supervisor execution permissions. This approach is orthogonal and distinct from traditional page table flags, where read and write access are not independent. It adds an extra layer of protection for guest physical pages. Here we present two security enhancement mechanisms based on fine-grained management of VMPL privilege. Firstly, we propose VMPL-enhanced XOM. The guest OS revokes the read access to the code regions and then assigns execution privilege at the user or supervisor level based on security needs. By restricting the code pages to execute-only VMPL permissions, we prevent attackers from exploiting vulnerabilities in both the kernel and user space of the lower VMPL. Due to the privileged nature of the VMPL mechanism, it overcomes the shortcomings of most non-privileged hardware-based intra-process isolation mechanisms, e.g., PKRU-based XOM. Secondly, we introduce VMPL-enhanced cross-layer execute-only protection, serving as an enhanced SMEP mechanism. This is achieved through fine-grained separation of the user and supervisor execution privileges. As the VMPL further separates the execution privilege for user and kernel space, we can not only prevent the execution of untrusted user code in the kernel space, but also forbid privileged code from being executed in user space even in the absence of U/S bit protection in the page table. By utilizing the VMPL hardware mechanism, Cabin establishes a strict boundary between the kernel and user space at lower VMPLs. It enforces both intra-process isolation and cross-layer protection. This makes it more difficult for attackers to exploit vulnerabilities at the lower VMPL. Overall, we utilize the VMPL to enable the "one-way visibility" of a reference monitor, ensuring that code regions cannot be inspected or altered at the lower VMPL. §.§ Process monitoring Since the proxy-kernel mediates the communication between the guest OS and untrusted programs, it can directly handle syscalls and exceptions from user space before forwarding them to the guest OS. This mechanism can be leveraged to enhance performance or to track the execution of confined user programs. Syscall filtering Cabin introduces VMPL-enforced execute-only protection to reduce the attack surface of vulnerable programs. However, this is not sufficient against malware. Syscall filtering can therefore be leveraged as an additional layer of defense in the lower VMPL without intervention from the guest OS. Process tracing By intercepting the breakpoint exception, Cabin enables dynamic monitoring of untrusted programs without guest OS intervention, offering a flexible tracing mechanism. This allows us to utilize the hardware-breakpoint-based dynamic interception mechanism without relying on the guest OS. Similar to the kprobes mechanism in the Linux kernel, we enable automatic tracing of processes running in the lower VMPL. Additionally, dynamic instrumentation of closed-source binaries can be readily supported on Cabin. Malware analysis For malware whose source code is not available, the exception interception mechanism allows flexible security policies to be applied to each confined process without requiring intervention from the guest OS. It is especially useful in analyzing the behaviour of malware. Since the policies reside outside of the guest OS, modifying them is simple. § IMPLEMENTATION The current implementation of the framework supports Linux running on AMD SEV-SNP enabled CPUs. It is based on the latest infrastructure from AMD SEV[<https://github.com/AMDESE>]. To streamline the management of confined processes, Cabin consists of a kernel module and a proxy-kernel. The kernel module manages the life-cycle of the confined processes, while the proxy-kernel serves these processes in the lower VMPL. The kernel module comprises approximately 6600 lines of code (LoCs), the proxy-kernel has 11000 LoCs, and the musl-libc [<https://musl.libc.org>] contributes around 500 LoCs for the GHCB-protocol-based syscall forwarding mechanism. Application interface We offer two interfaces for applications that need confinement. The vmpl_init interface is used to set up the environment at the process level, while vmpl_enter_user prepares thread-level resources and enters the lower VMPL. Each thread can be scheduled independently to the lower VMPL. In addition, we provide a preload library for unmodified binary programs, so there is no need to modify or statically instrument the source code, which greatly reduces the deployment effort.
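A minimal example of how a program might opt in to confinement is sketched below (C; the prototypes of vmpl_init and vmpl_enter_user are assumptions, since only the interface names are given above).

#include <stdio.h>
#include <stdlib.h>

int vmpl_init(void);          /* assumed prototype: process-level setup */
int vmpl_enter_user(void);    /* assumed prototype: per-thread entry    */

int main(void)
{
    if (vmpl_init() != 0) {                 /* prepare GHCB, VMSA, proxy-kernel */
        fprintf(stderr, "vmpl_init failed\n");
        return EXIT_FAILURE;
    }
    if (vmpl_enter_user() != 0) {           /* switch this thread to the lower VMPL */
        fprintf(stderr, "vmpl_enter_user failed\n");
        return EXIT_FAILURE;
    }
    /* From here on, syscalls are mediated by the proxy-kernel. */
    puts("running confined");
    return EXIT_SUCCESS;
}

For unmodified binaries, the preload library would perform the equivalent steps before main(), e.g. injected via LD_PRELOAD with a hypothetically named library such as libcabin.so.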
§.§ Syscalls and interrupts handling In the Cabin framework, the proxy-kernel directly handles the syscalls and interrupts from the confined process. The forwarding mechanism follows the standard GHCB protocol <cit.>. The MSR (model-specific register) protocol serves as a bootstrapping mechanism for the GHCB protocol before GHCB registration. Once the GHCB is registered at the lower VMPL, Cabin directly shifts to the GHCB-based forwarding mechanism. To ensure the functionality and efficiency of syscall and interrupt handling, the implementation of the framework includes the following features: vDSO support, asynchronous forwarding, and transparent debugging. Syscall routing Cabin supports anonymous memory management in the proxy-kernel. Certain virtual-memory-related syscalls, such as mmap, munmap, mprotect, and mremap, can be handled by the proxy-kernel without VMPL switching. Unsupported syscalls are still forwarded to the guest OS. A simple filtering mechanism is also implemented in the forwarding logic, allowing each syscall to be intercepted independently with a given priority. Syscall security policies can also be enforced prior to entering the lower VMPL. vDSO support The vDSO (virtual Dynamic Shared Object) is a conventional mechanism that allows programs to make certain syscalls directly without transitioning to kernel mode <cit.>. It is a memory area used by the kernel to provide optimized versions of commonly used syscalls (e.g., clock_gettime). This improves performance by reducing the overhead of context switches. The vDSO is mapped into the address space of every user-space process, allowing programs to access it easily when making these syscalls. Cabin naturally supports this mechanism by allowing access to those memory pages at the lower VMPL. Asynchronous forwarding To reduce costly VMPL switching, the asynchronous forwarding mechanism is derived from SGX HotCalls <cit.>. After removing the Intel SGX-related components, it integrates seamlessly with the framework. Unlike the original version, this mechanism is integrated into the syscall and interrupt handlers of the proxy-kernel. Currently, the framework supports asynchronous forwarding for syscalls, while exceptions and interrupts still use the GHCB-protocol-based synchronous forwarding mechanism. Transparent debugging Transparent debugging is essential in the framework for confined processes. It ensures seamless debugging capabilities for the lower VMPL. The hardware state of the lower VMPL, including the debug registers, is synchronized with the guest-OS-managed TCB during context switches. The trap frame from the user space of the lower VMPLs is delivered to the guest OS to facilitate the handling of breakpoint and debug exceptions triggered at the lower VMPL, allowing transparent debugging of confined processes. §.§ Dynamic VMPL management It is crucial to adjust the VMPL permissions of each physical memory page for a confined process to run in the user space of the lower VMPL on the AMD SEV-SNP platform. This process mainly includes intercepting syscalls and exceptions, as outlined below. Syscall interposition To update VMPL permissions in time, we adjust the permissions of the relevant physical pages after each system call and page fault, so that the process can keep running at the lower VMPL. We identify several virtual-memory-related syscalls (e.g., brk, mmap) in Table <ref>. However, these syscalls do not always populate the page table due to lazy allocation. To streamline the process, we still traverse the page table and grant access to the corresponding memory area. Although this is imprecise and inefficient, the evaluation shows a modest overhead thanks to the other optimizations. 
The syscalls mentioned above typically accept a memory address and a length as arguments and can be easily monitored for VMPL management. However, certain other syscalls (e.g., read) implicitly alter the page tables by synchronizing memory contents between storage devices and memory. In these cases, the kernel informs subscribers before and after modifying the page table, and we use this notification mechanism to adjust the VMPL permissions. Page fault interception Due to the lazy-allocation and demand-paging mechanisms, we update the corresponding VMPL permissions after the guest OS successfully handles a page fault on non-present or copy-on-write (COW) pages. However, this is not sufficient to run the process at the lower VMPL: due to the prefault mechanism, the kernel pre-allocates physical pages before they are actually accessed. Therefore, we promptly adjust the VMPL permissions of prefaulted pages; otherwise, RMP permission violations may occur because these physical pages cannot be accessed at the lower VMPL. Notably, a simpler way to improve performance is to grant access rights to the entire memory to all lower VMPLs, as allowed by the firmware specification <cit.>. In this way, all guest physical pages can be accessed at the lower VMPLs. However, it is still necessary to conditionally adjust the VMPL permissions for security. It is a complex task to track all updates to the page table of a process. Our prototype focuses on demonstrating the viability and security of running untrusted programs within the user space of a lower VMPL. Therefore, we do not pursue a precise tracking mechanism in this work; however, one could be realized with further effort. § PERFORMANCE EVALUATION In this section, we evaluate the performance of the framework. The evaluation is performed in a single-threaded environment. This includes using the GHCB protocol and HotCalls to forward syscalls and page faults to the guest OS. Afterwards, we measure the performance on the Nbench and WolfSSL benchmarks. The evaluation is performed on a dual-socket 3rd Gen AMD EPYC processor (code-named Milan) with 128 logical cores and 64 GB RAM, supporting the SEV-SNP technology. The host system runs QEMU 6.1.50 on Ubuntu 22.04 (kernel version 6.5.0-rc2-snp-host), while the VM is allocated 64 vCPUs and 16 GB RAM, running Ubuntu 22.04 (kernel version 6.5.0-snp-guest). Syscall Fig. <ref> depicts the time taken to execute each syscall 10,000 times under various conditions. The GHCB-protocol-based forwarding mechanism incurs a higher time cost than the original syscall instruction. Employing the HotCalls mechanism for syscall forwarding shows a noticeable reduction in execution time compared to the GHCB protocol. However, HotCalls still lags behind the original syscall method in speed due to its asynchronous nature, resulting in varying latency across syscalls. Notably, the dynamic VMPL management mechanism introduces significant overhead on the read and mmap syscalls. Importantly, thanks to the vDSO support, there is no impact on the clock_gettime syscall.
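The per-syscall numbers are obtained with a measurement loop of the following kind (C sketch; getpid stands in for the syscalls actually measured, and the forwarding path under test — native, GHCB, or HotCalls — is selected by the execution environment rather than by this code).

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    enum { N = 10000 };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        syscall(SYS_getpid);            /* invoke the raw syscall path */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("getpid: %.1f ns per call over %d calls\n", ns / N, N);
    return 0;
}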
Page fault Table <ref> presents the duration of page fault handling in different scenarios. When assigning 10,000 private memory pages via the mmap syscall, each memory page access triggers a page fault without preloading. Remarkably, the time taken to handle page faults is considerable, almost matching the overhead of the GHCB protocol. In the lower VMPL, the page fault forwarding mechanism operates approximately three times as slowly as in the original user space. Compared to syscall forwarding, page fault forwarding has a greater performance impact because it involves synchronizing the trap frame to the guest OS. Forwarding certain page faults with HotCalls is possible, but the current implementation has not adopted a HotCalls-based forwarding mechanism. In the following, we evaluate the impact on classical performance benchmarks, showcasing the advantages of the framework. SPEC 2006 Figure <ref> shows the experimental results on SPEC 2006, including the performance loss with and without HotCalls. The performance loss is significantly reduced when HotCalls is used, indicating that HotCalls plays an important role in improving system performance. SPEC 2017 Figure <ref> shows the corresponding results on SPEC 2017; again, the performance loss is significantly reduced when HotCalls is used. MbedTLS We evaluated the performance impact of Cabin on the MbedTLS benchmark, which covers different encryption algorithms. This shows the impact of Cabin on these encryption algorithm implementations, which is crucial for evaluating its performance in real system applications. Through these tests, we can better assess the advantages and limitations of Cabin in practical applications, providing a stronger basis for system design and optimization. OSMark <cit.> Nbench <cit.> Fig. <ref> shows the evaluation of the framework on Nbench. This benchmark includes ten computation-intensive tasks. We use the proxy-kernel-provided mmap and munmap syscalls for small-scale anonymous memory requests. GHCB-512 and GHCB-1024 indicate that the proxy-kernel manages 512 and 1024 memory pages continuously without guest OS intervention. It is evident that, despite implementing the self-managed memory mechanism, there is still an overall performance overhead. This is due to the necessity of forwarding all other syscalls and interrupts. Although Cabin supports the vDSO-based clock_gettime, it is still forwarded to the guest OS in Nbench. Nevertheless, as the proxy-kernel manages more physical pages, the performance impact notably decreases across most benchmarks. Additionally, there is a substantial performance enhancement observed in FP EMULATION and ASSIGNMENT when the proxy-kernel manages more memory pages. WolfSSL <cit.> We evaluate the framework on the WolfSSL benchmark, which covers cryptographic algorithms such as encryption, decryption, digests, and signature verification. Here, the anonymous memory allocation is also handled by the proxy-kernel rather than the guest OS. As illustrated in Figure <ref>, over half of the tasks perform significantly better than the baseline, while the others show an overall performance overhead of about 1% to 10%. This indicates that, in certain cases, self-managed anonymous memory allocation can bring performance improvements. In the above evaluations, Cabin incurs significant overhead on each syscall due to costly VMPL switching. Both syscall and exception forwarding require more cycles when the process is scheduled to the lower VMPL. However, Cabin incurs modest overhead in most cases on the Nbench and WolfSSL benchmarks. 
The performance impact can be reduced significantly with the asynchronous HotCalls mechanism and the self-managed memory mechanism, highlighting the advantage of confined execution with Cabin. IOzone Filesystem Benchmark <cit.> §.§ Macro-benchmarks The macro-benchmarks measure the performance of Cabin in the confined execution of various programs. We run the benchmarks with the following settings: in the original user space (vanilla), in the user space of the lower VMPL (vmpl), and in the user space of the lower VMPL with HotCalls enabled (hotcalls). Compared to Nbench, this group of benchmarks is I/O-intensive, requiring more syscall forwarding. Lighttpd lighttpd is a classic lightweight single-threaded server framework. Figure <ref> shows the performance loss of lighttpd on the proposed framework, with and without HotCalls. From the above evaluations, Cabin incurs acceptable overhead on most benchmarks. The performance loss of lighttpd mainly manifests as an increase in request response time, especially under high load. With HotCalls enabled, the performance loss is significantly reduced, and the request response time noticeably decreases. We evaluated lighttpd version 1.4.41 using http_load <cit.>. The measurement consisted of 100 concurrent client connections fetching a total of 1 million 20 KB pages. The connections were over the local loopback to maximize available link bandwidth. Unmodified lighttpd was able to serve an average of 53,400 pages per second, with an average response latency of 1.52 milliseconds. thttpd thttpd <cit.> is another classic lightweight single-threaded server framework. We evaluated its performance using http_load <cit.>. § DISCUSSION Every security mechanism comes with a cost, and the security framework we propose is no exception. The advantages and limitations of the proposed framework are outlined in the following. §.§ Advantages Defense in depth Compared to traditional sandbox frameworks, Cabin shields untrusted processes in the lower VMPL within the same CVM, hindering the malicious exploitation of vulnerabilities with VMPL-enhanced execute-only memory and cross-layer execution prevention. By isolating processes in the user space of the lower VMPL, Cabin provides layered protection for the guest OS within the CVM. This framework allows for flexible process monitoring and tracking of untrusted legacy applications without requiring intervention from the guest OS. Compatibility One advantage of Cabin is its compatibility with other frameworks. In the Secure Virtual Machine Service Module (SVSM) <cit.>, the guest OS operates at a lower VMPL other than VMPL0. Our scheme naturally aligns with this framework. In this case, the process is scheduled to at most two VMPLs. To accommodate other frameworks like Veil <cit.>, Cabin requires at least one VMPL lower than that of the guest OS. The trusted services and enclaves are positioned at higher VMPLs, while the untrusted processes are scheduled to a lower VMPL. However, Veil positions the guest OS at the lowest VMPL, rendering it challenging to integrate with Cabin. Cabin also naturally supports PKRU-based sandbox frameworks <cit.>, which can still be used to enhance intra-process isolation for confined processes at lower VMPLs. Alternative design Compared to safeguarding the proxy-kernel with the VMPL mechanism, an alternative design for the proposed execution protection is based on the SVSM framework, which restricts read access to the code regions of the guest OS. 
However, there exist numerous code regions requiring read access and even modification rights; modifying these code regions allows the kernel to dynamically change its behavior at runtime. Consequently, it is less practical than protecting a minimal proxy-kernel in a lower VMPL. §.§ Limitations One drawback of the framework is the performance impact. The VMPL switching leads to delays in syscall and exception handling. The imprecise page tracking for dynamic VMPL management results in extra overhead. Currently, Cabin does not fully support thread migration across CPUs. Because the GHCB is not shared among CPU cores, Cabin binds each thread to one CPU, limiting task scheduling flexibility. Other constraints involve multi-threading and multi-processing. Although Cabin supports preemptive scheduling, the incomplete support for the fork and clone syscalls limits applications to single-threaded environments. Nevertheless, it is possible to schedule child threads to the user space of lower VMPLs while keeping the main thread in the original user space. Most issues can be solved with further effort, but the delays from VMPL switching remain a challenge to address efficiently. §.§ Extending to other CVM platforms Although the Cabin framework is based on the latest features of AMD SEV-SNP, it can be extended to other CVM platforms such as Intel TDX and ARM CCA. By introducing a proxy-kernel within an isolated CVM, we shield the guest OS from potential threats posed by untrusted processes. The communication between confined processes and the guest OS is managed by the proxy-kernel and the trusted hypervisor located outside the CVM. For Intel TDX, the TDX Module facilitates communication between the proxy-kernel and the guest OS across different CVMs. Meanwhile, in ARM CCA, the Realm Management Monitor (RMM) oversees the interaction between the proxy-kernel and the guest OS. In both scenarios, trusted hypervisors like the Intel TDX Module and the ARM RMM play a crucial role in establishing a secure channel between different CVMs. Recently, ARM CCA introduced support for different planes within a CVM <cit.>. Each plane is essentially a separate VM, with a shared guest physical address space. Plane 0 holds more privilege and can host a paravisor to control switches between planes and restrict other planes' memory access. Similarly, less privileged planes can be used to shield untrusted applications from the guest OS. § RELATED WORK AMD SEV-SNP and VMPL Various research efforts are underway to enhance the security of AMD SEV <cit.>. The SVSM <cit.> framework leverages VMPL0 to protect secure services from an untrusted guest OS. Hecate <cit.> uses VMPL0 as a trusted L1 hypervisor to facilitate communication between the guest OS and the untrusted hypervisor. SVSM-vTPM <cit.> is a security-enhanced vTPM based on the SVSM framework, leveraging VMPL0 to isolate the virtual TPM (vTPM) from the guest OS, ensuring the integrity of the vTPM's functions. CoCoTPM <cit.> reduces the trust needed towards the host and hypervisor by running a vTPM in an encrypted VM using AMD SEV. Honeycomb <cit.> is a secure GPU computation framework that runs a validator within VMPL0, which inspects the binary code of a GPU kernel to ensure, using static analysis, that every memory instruction in the kernel can only reach the designated virtual address space. The mushroom <cit.> framework runs integrity-protected workloads based on AMD's SEV-SNP technology, which could be the basis of a secure remote build system. 
Veil <cit.> is a service framework providing secure enclaves and services for processes and the guest OS, respectively. In general, these works follow the traditional threat model of confidential computing and do not focus on untrusted applications in the CVM, whereas the Cabin framework protects the guest OS by confining untrusted programs to the user space of lower VMPLs. Execute-only memory Execute-only memory (XOM) <cit.> is an effective method in software security. PicoXOM <cit.> is an efficient XOM mechanism based on ARM's Data Watchpoint and Tracing unit for embedded systems. Nojitsu <cit.> leverages XOM-Switch to enforce execute-only permission for static code regions in JIT. SECRET <cit.> protects COTS binaries from disclosure-guided code reuse attacks, while MonGuard <cit.> applies PKRU-based XOM protection to the multi-variant execution (MVX) monitor. IskiOS applies XOM to safeguard the code pages of a unikernel <cit.>. Cerberus <cit.> is a notable sandbox framework that protects the reference monitor with PKRU-based XOM. To the best of our knowledge, fine-grained control over VMPL permissions has not been utilized in previous studies to enhance execute-only protection for untrusted programs. Intra-process isolation The lightweight PKRU-based intra-process isolation mechanism has also been a hot research topic in recent years <cit.>. Various research efforts have been made to enhance the security of PKRU-based isolation mechanisms <cit.>. However, its unprivileged nature makes it susceptible to being bypassed in user space through side effects or confused-deputy issues arising from syscalls <cit.>. Attackers can exploit this vulnerability by constructing unsafe instruction sequences to gain unauthorized access to sensitive data and code <cit.>. Such systems require complex syscall filtering policies to prevent WRPKRU exploitation and enforce the security of their sandbox <cit.>. Syscall filtering Securely confining untrusted legacy applications has been a long-standing challenge for the past decades <cit.>. Syscall filtering plays a crucial role in traditional software system security <cit.>, including container security <cit.>. Syscall filtering is also widely applied in PKRU-based intra-process isolation mechanisms <cit.>. PHMon <cit.> and FlexFilt <cit.> introduce new hardware designs for efficient syscall filtering and process monitoring on the RISC-V platform. Nevertheless, due to limited privilege separation, these mechanisms are still confined to the conventional user/kernel separation. § CONCLUSION Cabin is an isolated execution framework that effectively shields untrusted programs from the guest OS within a CVM. By introducing a trusted proxy-kernel for untrusted applications, Cabin enables efficient and flexible process monitoring and tracing, providing a layered security defense outside of the guest OS. VMPL-enforced execute-only protection makes it harder for vulnerabilities to be exploited at the lower VMPL. With fine-grained control over VMPL execution privileges, Cabin further isolates the proxy-kernel and the confined processes, strengthening the cross-layer isolation between the user and kernel space of lower VMPLs. To reduce the performance impact, Cabin integrates an asynchronous forwarding mechanism and self-managed memory allocation in the proxy-kernel. In essence, the framework can be generalized to other commercial CVM platforms as well. The evaluation results on the Nbench and WolfSSL benchmarks demonstrate modest performance overhead for confined processes. 
Acknowledgment This work was supported by the National Natural Science Foundation of China (Grant No. 62272452). Corresponding author: Wenhao Wang (wangwenhao@iie.ac.cn).
http://arxiv.org/abs/2407.12381v1
20240717075837
Flow Matching Imitation Learning for Multi-Support Manipulation
[ "Quentin Rouxel", "Andrea Ferrari", "Serena Ivaldi", "Jean-Baptiste Mouret" ]
cs.RO
[ "cs.RO" ]
Flow Matching Imitation Learning for Multi-Support Manipulation Quentin Rouxel, Andrea Ferrari, Serena Ivaldi, Jean-Baptiste Mouret July 22, 2024 ============================================================ § ABSTRACT Humanoid robots could benefit from using their upper bodies for support contacts, enhancing their workspace, stability, and ability to perform contact-rich and pushing tasks. In this paper, we propose a unified approach that combines an optimization-based multi-contact whole-body controller with Flow Matching, a recently introduced method capable of generating multi-modal trajectory distributions for imitation learning. In simulation, we show that Flow Matching is more appropriate for robotics than Diffusion and traditional behavior cloning. On a real full-size humanoid robot (Talos), we demonstrate that our approach can learn a whole-body non-prehensile box-pushing task and that the robot can close dishwasher drawers by adding contacts with its free hand when needed for balance. We also introduce a shared autonomy mode for assisted teleoperation, providing automatic contact placement for tasks not covered in the demonstrations. Full experimental videos are available at: <https://hucebot.github.io/flow_multisupport_website/> § INTRODUCTION In spite of the many advances in whole-body control, the tasks of most current humanoid robots are implicitly split into two parts: feet for locomotion and support, and hands for manipulation and other interactions with the world. This view overlooks all the possible uses of the arms as additional support, as well as non-prehensile manipulation like pushing with the side of the forearm, sliding, and, more generally, using the body of the robot as a potential contact surface. By contrast, humans routinely lean on a table to grasp a distant object, push on a wall while pulling a heavy door, exploit handrails to increase their stability, keep a door open with their shoulder, etc. In this work, we focus on these scenarios that leverage whole-body motion and multi-contact strategies to extend the manipulation capabilities. We term them multi-support manipulation tasks, by analogy with the traditional single and double support cases for humanoids. Our objective is to design control policies for humanoid robots that can leverage contacts when needed, both for adding support and for performing non-prehensile tasks. On the one hand, model-based planners could search for support contacts, as is often done with footstep planning <cit.>, but this requires a very good understanding of the world, as many surfaces are not suitable contact surfaces (fragile surfaces like windows, slippery surfaces, ...). On the other hand, model-based approaches do not work well for pushing or sliding tasks because of the non-linear dynamics of sliding and friction <cit.>. In this work, we address these two challenges with a single, unified method: imitation learning for whole-body multi-support motions. Hence, by demonstrating when and how to establish contacts, we can leverage the human “common sense” to choose contacts, avoid explicitly modeling the environment and sliding dynamics, and achieve real-time performance. While imitation learning has been applied to many tasks, it has not yet been investigated, to the best of our knowledge, for whole-body multi-contact and contact switching scenarios. Many approaches have been proposed for imitation learning in robotics. The most traditional approach is behavior cloning, in which a neural network is learned with supervised learning to associate states to actions <cit.>. 
To exploit the structure of trajectories and control, a popular approach has been Dynamic Motion Primitives <cit.> and various extensions like Probabilistic Motion Primitives <cit.>. However, these methods tend not to scale well to high-dimensional inputs, like images, and to large datasets. In addition, they are typically unable to model multi-modal distributions of demonstrations, whereas multi-modality is critical for many humanoid tasks. For example, if a humanoid can reach two contact locations, left or right (Fig. <ref>), to add an extra support for balance, then averaging all demonstrations assuming a unimodal distribution will result in the policy averaging the left and right positions and placing the contact in between the two, causing the robot to fall. The recent successes of generative processes for images and sound, like DALL-E, have inspired a new breed of behavior cloning algorithms <cit.>. In essence, instead of generating an image conditioned by a text input, these algorithms generate trajectories conditioned by a state. The heart of these generative algorithms is a diffusion process <cit.> that learns the probability distribution of actions demonstrated by human operators and then samples new actions from this learned model. Diffusion methods were recently connected to optimal transport theory <cit.> and linked with flow-based methods <cit.> within a unified framework, where Diffusion represents the stochastic counterpart and Flow Matching the deterministic counterpart. In this work, we hypothesize that flow-based approaches, specifically Flow Matching, are best suited for robotics applications: Flow Matching offers a simpler framework than the initial diffusion approach, can yield deterministic outputs, and allows for faster inference without loss of quality compared to Diffusion. In this paper, we show that a policy trained from demonstrations can effectively provide useful assistance for multi-support manipulation tasks, especially in the automatic placement of contacts. We are interested in both autonomous task execution and assisted teleoperation/shared autonomy <cit.>, where a human operator controls one robot end-effector (e.g., the left hand) while the robot autonomously controls its entire body, notably determining support contacts (e.g., with the right hand) and regulating contact forces, to make sure that the task commanded by the human can be executed without the robot falling. In summary, the contributions of our work are three-fold: * We introduce an imitation learning formulation and architecture that enables multi-support manipulation tasks. * We showcase the Flow Matching generative method for generating whole-body movements on a full-size humanoid robot, demonstrating its advantages over Diffusion methods and its potential for robotic applications. * We demonstrate that the autonomous policy learned from demonstrations can assist the human operator in a shared autonomy mode. This assistance performs automatic contact placement and is valuable in situations where the task varies from the demonstrated scenario, making the policy unable to solve the task alone. § RELATED WORK Classical model-based approaches address multi-contact tasks hierarchically. Simplified template models or heuristics determine contact placement and sequence <cit.>. Then, trajectory planning <cit.> and control methods generate optimized whole-body motions, tracking them on the actual system while regulating interaction forces and maintaining balance. 
The control of complex robots, humanoids, and multi-limb systems <cit.>, is well understood through model-based optimization approaches and these methods have been demonstrated on both torque-controlled <cit.> and position-controlled <cit.> robots. However, optimizing contact placement and sequencing is highly challenging, involving both continuous and discrete decisions. Contact-rich tasks, such as non-prehensile <cit.> tasks, pose significant challenges due to sliding contacts, diverse valid strategies of sequences, and the requirement to consider the entire object geometry rather than predefined contact points. Instead, we adopted an imitation learning approach <cit.> which learns from human demonstrations, and specifically the Behavior Cloning (BC) method <cit.> building a policy that directly maps observations to actions in a supervised manner. Traditional BC such as DMP <cit.> or ProMPs <cit.> performs well on simple tasks within the demonstrated state-space distribution but suffers from accumulation of prediction errors, which can lead to state divergence and failure. To address this, our policy predicts trajectories of actions, aligning with recent works <cit.>, enhancing temporal coherence and mitigating error compounding. Another limitation of traditional BC is handling the variability in human demonstrations, idle actions, and different strategies used to solve the same tasks. These demonstrations form a multi-modal distribution that can be non-convex, making averaging data dangerous as it can lead to task failure. Recent approaches address this by reformulating BC's policy as a generative process. Denoising Diffusion Probabilistic Models (DDPM) <cit.> have emerged as a new class of generative models that outperform previous generative models. DDPM reverses a diffusion process that adds noise to a clean sample until it becomes Gaussian noise. By solving a Stochastic Differential Equation, it then generates a clean sample from this noise. Denoising Diffusion Implicit Models (DDIM) <cit.> instead solve the reverse process as an Ordinary Differential Equation, reducing inference steps for faster computation at the expense of quality. Originally used for image generation, recent works <cit.> have applied these techniques to reinforcement and imitation learning. They generate action trajectories that mimic human demonstrations conditioned on the task's state, effectively capturing high-dimensional probability distributions and handling non-convex, non-connected distributions with multiple modes. Flow Matching <cit.> is a novel generative method based on optimal transport theory <cit.>, sharing theoretical similarities with DDPM and DDIM. It is simpler with fewer hyperparameters and more numerically stable than DDIM. Flow Matching produces straighter paths in the transport flow, enabling faster inference due to fewer required integration steps, which is crucial for robotic applications with real-time requirements. In line with <cit.>, which demonstrated improvements of flow over diffusion in simulated robotic tasks, we investigate the application of Flow Matching in robotics and deploy it on real humanoid. § METHOD We present a learning-from-demonstration approach for humanoid robots to perform multi-support manipulation tasks, enhancing manipulation capabilities through additional contacts and whole-body motions. 
These tasks are executed either autonomously or through assistive shared autonomy, where the human operator partially commands the robot while the learned policy provides assistance and contact placement. We highlight how our method handles contact switch transitions and controls the resulting multi-contact motions on real hardware. §.§ Overall Architecture We designed our architecture (Fig. <ref>) with two hierarchical modules to enhance robustness. A model-based low-level controller addresses whole-body optimization, multi-contact force distribution, contact switching and tracking with strict feasibility constraints. A learning-based high-level controller handles Cartesian effector commands, contact locations, and sequencing. It outputs a Cartesian pose target in world frame and a contact switch command for each effector. Effectors can either be fixed in contact with the environment (enabled state), actively applying forces to balance the robot, or not in contact and free to move (disabled state). The contact switch command is a discrete signal that triggers the transition between enabled and disabled states implemented by the low-level controller. The high-level controller in Fig. <ref> operates in three different modes. The teleoperation mode is used to create a dataset recording effector commands sent to the low-level controller and poses of external markers detected by the robot's head camera. The human operator directly commands the robot to collect demonstrations, solving the task from randomized initial states or performing recovery actions from manually selected states outside nominal execution. The autonomous mode uses the policy trained by imitation of collected demonstrations to solve the task. The assistive shared autonomy mode combines human and policy commands to address out-of-distribution tasks. The operator commands one effector while the policy autonomously manages the others. The policy uses identical inputs and post-processing in both shared and full autonomous modes. However, in shared autonomy mode, the operator's commands replace the policy's output for the effector they control. Despite <cit.> showed that diffusion-based imitation learning can learn from raw images or point clouds, we opt to use fiducial markers in this work to monitor the task's exteroceptive state. This allows us to focus instead on the challenges related to contact switches and multi-contact. An RGB-D camera on the robot's head detects these markers in the color image using the AprilTags system <cit.>. The 3D positions and orientations of the markers in the camera frame are extracted from the point cloud. These coordinates are then transformed into the robot's world frame using the forward kinematic model. The poses of the markers are recorded in the dataset during human expert demonstrations and fed as input to the autonomous policy. §.§ Behavioral Cloning Policy and Contact Switch The behavioral cloning policy takes as input the current effector pose commands, contact states, and detected marker poses. It outputs a trajectory of future effector pose commands and contact switch commands for all effectors. 
Formally, the policy is defined as follows: Policy π: s_k ⟶ a_k, with s_k = [ Xeff i_k, ceff i_k, τeff i_k, ⋯, Xtag j_k, τtag j_k, ⋯ ] and a_k = [ Xeff i_k, Xeff i_k+1, ⋯, Xeff i_k+N; γeff i_k, γeff i_k+1, ⋯, γeff i_k+N; ⋮ ], where i indexes the effectors, j indexes the markers, N is the number of predicted time steps, k is the inference time step, Xeff i_k is the pose command in the world frame of effector i at time step k, ceff i ∈{0,1} is the boolean contact state command of effector i (0 for disabled, 1 for enabled), γeff i_k is the continuous contact state command (disabled or enabled) for effector i at time step k, τeff i is the (clamped) elapsed time since the last contact switch of effector i, Xtag j is the latest updated pose estimate in the world frame of marker j, and τtag j is the (clamped) elapsed time since marker j was last detected and its pose was updated. Fig. <ref> depicts the signals employed by the policy to implement contact switching commands. When adding or removing a contact, the low-level retargeting and controller necessitate a time delay to smoothly transfer the robot's weight and redistribute the contact forces. The policy uses τeff i to observe the progression of the contact transition and reproduce the waiting behaviors demonstrated by the operator upon triggering a contact switch. The policy outputs the continuous signal γeff indicating the desired state for a contact. The contact transition is activated and sent to the low-level whole-body retargeting module upon a state change of the discretized desired state ceff, defined by the following hysteresis threshold: ceff i_k = 1 if ceff i_k-1 = 0 and γeff i_k ⩾ 0.8 and τeff i_k ⩾ 20.0; ceff i_k = 0 if ceff i_k-1 = 1 and γeff i_k ⩽ 0.2 and τeff i_k ⩾ 20.0; ceff i_k = ceff i_k-1 otherwise. To avoid unbounded and out-of-distribution states, we clamp the time inputs τeff and τtag to a maximum of 20 s (see Fig. <ref>). We apply data augmentation during training by randomizing the detected marker times τtag to enhance robustness against occlusions. §.§ Trajectory Generation with Flow Matching We build the behavioral cloning policy as a generative process, which learns a probability distribution from data and samples new elements from it. The resulting policy is stochastic and samples trajectories that mimic the ones demonstrated by the human operator in the same state. Specifically, we employ the Flow Matching method <cit.>, which constructs a flow vector field that continuously transforms a source probability distribution into a destination distribution. Fig. <ref> illustrates a flow transforming a simple 1D source distribution, which can be easily sampled, into a more complex, multi-modal distribution. Flow Matching, grounded in optimal transport theory, can be seen as the deterministic counterpart to Diffusion methods <cit.>. After sampling from the source distribution, the integration of the flow produces samples from the destination distribution deterministically, contrasting with Diffusion <cit.>, which introduces noise during transport. Flow Matching typically yields straighter flows, enabling faster inference. 
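Before detailing the generative model, the contact-switch hysteresis rule above can be made concrete with a short sketch. This is a minimal illustration rather than the authors' implementation: the thresholds (0.8 and 0.2) and the minimum elapsed time come directly from the rule as stated, while the function and variable names are our own.

```python
def discretize_contact(c_prev: int, gamma: float, tau: float,
                       on_thr: float = 0.8, off_thr: float = 0.2,
                       min_elapsed: float = 20.0) -> int:
    """Hysteresis discretization of the continuous contact command.

    c_prev : previous discrete contact state (0 = disabled, 1 = enabled)
    gamma  : continuous contact command predicted by the policy
    tau    : clamped elapsed time since the last contact switch [s]
    """
    if c_prev == 0 and gamma >= on_thr and tau >= min_elapsed:
        return 1          # trigger a contact addition
    if c_prev == 1 and gamma <= off_thr and tau >= min_elapsed:
        return 0          # trigger a contact removal
    return c_prev         # otherwise keep the previous state
```

The gap between the two thresholds prevents chattering when γeff hovers around intermediate values, and the elapsed-time condition reproduces the waiting behavior required by the low-level controller to transfer the robot's weight between contact configurations.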
The training of Flow Matching is defined as follows: asrc ∼ 𝒫src, adst ∼ 𝒫dst, t ∼ 𝒰[0,1], z_t = (1-t) asrc + t adst, ℒflow = 𝔼_asrc,adst,t ‖ f(z_t, t, s) - (adst - asrc) ‖^2, where 𝒫src is the source distribution, chosen as a multivariate normal distribution 𝒫src = 𝒩(0, I), 𝒫dst is the destination distribution (here the demonstration trajectories), asrc and adst are the trajectories sampled from the source and destination distributions (see (<ref>)), t ∈ [0,1] is the scalar flow transport time uniformly sampled between 0 and 1, representing the progression of the transformation from source to destination, s is the input state associated with the command trajectory adst, z_t is the interpolated trajectory at transport time t between the source and destination trajectories, and ℒflow is the scalar training loss function. f is the flow model conditioned on the state s: Flow f: z, t, s ⟶ Δz, implemented as a neural network and trained using back-propagation to minimize the loss ℒflow. The inference procedure is illustrated in Fig. <ref>. A noisy trajectory is first sampled from the source distribution and then transformed into the destination trajectory by integrating the flow from t=0 to t=1 over several steps. Formally, the inference process is defined as follows: z_0 = asrc, z_1 = adst, z_t+Δt = z_t + Δt f(z_t, t, s) for t = 0 … 1. §.§ Trajectories Stitching and Processing Since inference is not instantaneous and the policy outputs a trajectory of future commands, online stitching (Fig. <ref>) and processing are required to ensure smoothness, safety, and robustness of the commands sent to the low-level controller. The autonomous high-level controller computes the policy's next command trajectory in parallel while continuously sending effector commands, sampled from the previous trajectory, to the SEIKO low-level controller. When a new inference starts, the policy uses the latest effector commands and marker pose estimates as the current state. Upon completion, a smooth transition to the new command trajectory is achieved through linear interpolation over a fixed time. The policy produces trajectories represented as 5 Hz time series, which are resampled at 100 Hz using linear interpolation for use by the high-level controller. A zero-phase low-pass filter (first-order exponential filter) is applied to the trajectory to remove residual noise from flow inference and interpolation. §.§ Multi-Contact SEIKO Retargeting and Controller Robots with multiple limbs in multi-contact exhibit redundancy both in kinematics and in contact force distribution. Many different whole-body postures can achieve a desired effector pose, and many contact force distributions can maintain equilibrium for a given posture. To perform multi-support manipulation on real robots, it is essential to consider kinematic and actuator torque limits as well as contact and balance constraints to prevent slipping and falling and to ensure operational safety. Contact switch transitions are discrete decisions that significantly impact the system's balance, requiring careful consideration for smoothness and safety. These transitions are not always feasible and typically take a few seconds. Smoothly removing a contact requires gradually reducing the contact force to zero by adjusting the whole-body posture and redistributing the contact forces, necessitating precise regulation of the contact forces on the actual system. 
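For readers who prefer code to equations, the training objective and the Euler integration used at inference, both defined in the Flow Matching subsection above, can be sketched as follows. This is a simplified, PyTorch-style illustration under our own naming conventions; flow_model(z, t, s) stands for the state-conditioned network f and is an assumption of this sketch rather than the authors' API.

```python
import torch

def flow_matching_loss(flow_model, a_dst, state):
    """One training step of the Flow Matching objective L_flow.

    flow_model : network f(z_t, t, s) returning a velocity with the shape of a_dst
    a_dst      : batch of demonstrated command trajectories, shape (B, ...)
    state      : conditioning state s associated with each trajectory
    """
    a_src = torch.randn_like(a_dst)                      # a_src ~ N(0, I)
    t = torch.rand(a_dst.shape[0], device=a_dst.device)  # t ~ U[0, 1]
    t_exp = t.view(-1, *([1] * (a_dst.dim() - 1)))       # broadcast t over trajectory dims
    z_t = (1.0 - t_exp) * a_src + t_exp * a_dst          # linear interpolation path
    target = a_dst - a_src                               # constant transport velocity
    return ((flow_model(z_t, t, state) - target) ** 2).mean()

@torch.no_grad()
def flow_matching_sample(flow_model, state, shape, n_steps=20, device="cpu"):
    """Generate one command trajectory by Euler integration of the learned flow."""
    z = torch.randn(shape, device=device)                # z_0 = a_src
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((shape[0],), i * dt, device=device)
        z = z + dt * flow_model(z, t, state)             # z_{t+dt} = z_t + dt * f(z_t, t, s)
    return z                                             # approximately a_dst
```

With 20 integration steps this corresponds to the "Flow 20 steps" variant evaluated below; fewer steps trade accuracy for inference speed. The following paragraphs return to the model-based SEIKO layer that executes these generated trajectories on the robot.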
Our proposed method relies on the SEIKO (Sequential Equilibrium Inverse Kinematic Optimization) Retargeting and Controller methods developed in our previous work <cit.> to address these diverse challenges. SEIKO Retargeting <cit.> uses a model-based Sequential Quadratic Programming (SQP) optimization to compute a feasible whole-body configuration (joint positions and contact forces) tracking the effector pose commands. It integrates the command filtering pipeline detailed in <cit.>. The retargeting adapts the robot's motion to enforce safety constraints in response to risky or infeasible commands from either human operator or the policy. SEIKO Controller <cit.> integrates an explicit modeling of joint flexibility and utilizes an SQP whole-body admittance formulation to regulate the contact forces on a position-controlled humanoid robot. The controller improves robustness to model errors and enable real robot experiments by regulating contact forces. To further enhance robustness against inaccuracy in contact placement, we extended SEIKO Controller with the effector admittance control scheme named “damping control” detailed in <cit.>. This scheme addresses scenarios where the learned policy activates a contact too early while still in the air, or too late after already exerting forces on the environment. The presentation, comparison, and discussion of this control scheme are provided in the supplementary material of <cit.>. § RESULTS §.§ Implementation Details The Talos robot is a humanoid robot manufactured by PAL Robotics of 1.75 m height, 99.7 kg and 32 degrees of freedom. We replaced the robot's right-hand gripper and forearm with a 3D-printed, ball-shaped hand to withstand high force contact. In our experiments, we control only 22 joints, all in position-control mode, excluding those in the neck, forearms, and wrists. We mounted an Orbbec Femto Bolt RGB-D camera on the robot's head, replacing the original camera and providing color images and point clouds. In our experiments, the robot's left and right hands are used as effectors commanded by the operator for manipulation tasks, while the feet remain fixed. Only the right hand is used for making contact with the environment on the 3d-printed ball shape. Depending on the experiment, we use between 1 and 3 external fiducial markers. The human operator teleoperates the robot with a direct line of sight, and uses separate 6-DoF input devices[3Dconnexion SpaceMouse: <https://3dconnexion.com/uk/spacemouse/>] to command the velocity of each hand, providing after integration the effector pose commands. The policy is trained in Python using the PyTorch library with GPU acceleration, whereas online inference is performed in C++ on the CPU (Intel i9-9880H 2.30 GHz). See Table <ref> for hyperparameters. The flow model is implemented as a 1D convolutional U-Net neural network with residual connections, akin to the model implemented[Diffusion Policy code: <https://github.com/real-stanford/diffusion_policy>] in <cit.>. For each effector, the predicted poses in the output trajectories (Xeff i_l)_l=k^k+N are encoded relative to the pose in the input state Xeff i_k such that all predicted positions and orientations trajectories start from zero. The effector orientations in the input state are encoded using the 6D rotation representation <cit.>, whereas the relative orientations in the predicted output trajectories are expressed as 3D axis-angle vectors. 
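As a side note on the orientation encoding mentioned above, the 6D rotation representation keeps the first two columns of the rotation matrix and recovers a valid rotation by Gram-Schmidt orthonormalization. The sketch below is a generic NumPy illustration of this encoding, not an excerpt from the authors' code.

```python
import numpy as np

def rotmat_to_6d(R: np.ndarray) -> np.ndarray:
    """Encode a 3x3 rotation matrix as its first two columns (6 values)."""
    return R[:, :2].reshape(-1, order="F")    # [r11, r21, r31, r12, r22, r32]

def rotmat_from_6d(d6: np.ndarray) -> np.ndarray:
    """Recover a rotation matrix from the 6D encoding via Gram-Schmidt."""
    a1, a2 = d6[:3], d6[3:]
    b1 = a1 / np.linalg.norm(a1)              # first orthonormal basis vector
    a2 = a2 - np.dot(b1, a2) * b1             # remove the component along b1
    b2 = a2 / np.linalg.norm(a2)              # second orthonormal basis vector
    b3 = np.cross(b1, b2)                     # third vector completes the right-handed frame
    return np.stack([b1, b2, b3], axis=1)
```

Unlike quaternions or axis-angle vectors, this encoding is continuous over the rotation group, which makes absolute orientations easier for a network to regress; the small relative rotations of the predicted trajectory remain well behaved as axis-angle vectors.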
SEIKO Retargeting and Controller are implemented[SEIKO implementation: <https://github.com/hucebot/seiko_controller_code>] in C++, using the Pinocchio rigid body library and the QP solver QuadProg, and run onboard the robot at 500 Hz. The fiducial external markers are detected in the color image using the AprilTag library at 30 Hz. §.§ Simulated Reaching and Contact Placement Task In our experiments, we compare the Flow Matching method for robotics applications with its Diffusion counterpart and a classical supervised learning baseline. We present statistical results for the following variant methods: * Demonstrations: dataset collected by the expert human operator and used to train all autonomous policies. * Flow 20 steps: Flow Matching method described in Section <ref>. The flow is integrated (see Fig. <ref>) over 20 steps. * DDPM 100 steps: vanilla DDPM method <cit.> trained with 100 denoising steps and inferred with 100 steps. * DDIM 20 steps: uses the same trained model as DDPM, but is inferred with the Diffusion implicit variant <cit.> and 20 steps, expected to be faster than DDPM at the expense of quality. * Supervised Learning: classical behavior cloning method <cit.> trained with a Mean Squared Error (MSE) loss. It has the same inputs and outputs and also predicts trajectories, but it is not a generative process and does not capture the data distribution. We evaluate the main capabilities of the policies: first, their ability to autonomously perform contact switching as described in Section <ref>; second, their ability to learn from demonstrations with a multi-modal distribution; third, their accuracy in placing contacts, which is crucial for robotics applications; and fourth, their inference time. Within the simulated task illustrated in Fig. <ref>, we teleoperated 86 demonstrations totaling 2442 s. The hand was placed randomly on either the left or right platform, regardless of the initial state, to create a bimodal distribution. An external marker is attached on top of the left platform, with the position of the right platform fixed relative to the left. We also assess how the policies generalize out-of-distribution, where initial hand positions and platform positions are uniformly sampled from a wider range that encompasses the range used for the in-distribution cases but excludes samples falling within it. Both Flow and Diffusion approaches outperform the baseline (Fig. <ref>), as supervised behavior cloning is hindered by the multi-modal nature of the distribution, causing the baseline to average out across non-convex spaces. Flow Matching also slightly outperforms Diffusion in out-of-distribution cases, with favorable inference time and accuracy, which is in line with other works published on this topic <cit.>. §.§ Simulated Non-Prehensile Manipulation Task We then evaluated our proposed method on the more challenging non-prehensile manipulation task shown in Fig. <ref>. This task aims to thoroughly test multi-support and whole-body strategies with higher multi-modality, necessitating both the addition and removal of contacts. The humanoid robot must use both hands to push a concave T-shaped 3D object on a planar table surface, maneuvering it to match a target position and orientation fixed on the table. Solving the task strongly relies on multi-support capabilities, as the robot cannot reach forward far enough to push the object from behind without using its right hand as additional support. 
The robot interacts with the box using contact-rich dynamics that heavily depend on geometries of the box and robot's effector, as well as friction and sliding properties of surfaces. The task allows for various multi-modal strategies by applying different pushing sequences on the box's sides. It requires several contact switches to push the box left and right with both hands, followed by precise final adjustments. This box-pushing task is a more challenging 3D whole-body extension of a simpler 2D top-down environment used as a benchmark in previous work <cit.>. In a real-time simulated environment, we teleoperated the robot to record 68 demonstrations totaling 6161 s. The initial position and orientation of the box were randomized on the table, while both the target pose for the object and the position of the robot's feet remained fixed. A single marker is placed and attached on top of the object, providing its pose to the policy. After training, the resulting policies were evaluated in the simulated environment for 300 s and across 100 trials each. We quantify the task performance of how the box's pose matches the target by measuring the planar overlapping surface between the manipulated T-shaped object and the fixed T-shaped target of the same size. Specifically, we define the task error metric as the normalized overlapping distance √(1-(overlapping surface)/(shape surface)), where an error of 0.0 indicates a perfect match, and an error of 1.0 indicates no overlap between the two shapes. Since policies lack stopping criteria and continuously interact with the object, we consider the lowest error achieved so far within each trial. Fig. <ref> showcases an example of autonomous execution, while Fig. <ref> presents the comparison statistical results. All compared autonomous methods fail to solve the task in some cases. The two most common failure scenarios are when the robot collides with the top of the box, which is considered a stopping criterion, or mistakenly pushes the box into a configuration where necessary adjustments are no longer reachable. The Flow and Diffusion methods, theoretically very similar, exhibit comparable behavior and performance by the end of episodes. Both methods outperform the supervised behavioral cloning baseline. Flow method tends to marginally outperform its Diffusion counterparts which is coherent with <cit.>, achieving statistically faster task completion and exhibiting a slightly less dispersed distribution. As expected, Diffusion with 100 steps performs marginally better than Diffusion 20 steps, showcasing the trade-off between inference time and performances. §.§ Hardware Experiments The attached and additional videos[Additional videos: <https://hucebot.github.io/flow_multisupport_website/>] showcase our multi-support hardware experiments on the Talos humanoid robot. §.§.§ Autonomous Mode In Distribution First, we validated our proposed contact placement and switching capabilities by having the robot push and close both upper and lower drawers of a dishwasher (Fig. <ref>, Fig. <ref>). This task is straightforward for humans but remains challenging for humanoid robots. The robot must add an extra contact on top of the dishwasher to reach down to the lower drawer (40 cm to the ground) without falling. We teleoperated the robot to collect 35 demonstrations with a total length of 1734 s. Three markers were used, one on top of the dishwasher and one on each drawer. 
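As an aside on the evaluation protocol above, the normalized overlapping distance used for the box-pushing task can be computed from the planar footprints of the object and the target. The sketch below uses the shapely library purely as one possible implementation choice; the polygon names and the pose convention are assumptions of this illustration.

```python
from shapely.geometry import Polygon
from shapely import affinity

def overlap_error(object_poly: Polygon, x: float, y: float, yaw: float,
                  target_poly: Polygon) -> float:
    """Normalized overlapping distance between the pushed object and the target.

    object_poly : planar footprint of the T-shaped object in its local frame
    (x, y, yaw) : planar pose of the object on the table (yaw in radians)
    target_poly : footprint of the fixed, same-size target in the table frame
    """
    placed = affinity.translate(
        affinity.rotate(object_poly, yaw, origin=(0.0, 0.0), use_radians=True),
        xoff=x, yoff=y)
    overlap = placed.intersection(target_poly).area
    return (1.0 - overlap / object_poly.area) ** 0.5   # 0 = perfect match, 1 = no overlap
```

The square root stretches the metric near a perfect match, so small residual misalignments remain distinguishable when comparing trials.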
As shown in the additional videos, the reactive policy learned with Flow Matching successfully solves the task autonomously and responds to disturbances that may reopen already closed drawers. The robot first closes the upper drawer in double support and then reaches to close the lower drawer, placing additional right hand support on top of the dishwasher. This experiment also validates our architectural choice, where the low-level retargeting and controller successfully execute the multi-support manipulation tasks commanded by the learned policy. Without the controller enabled, any far-reaching motion tends to cause the robot to fall due to model errors. Second, we demonstrated the box pushing task (Fig. <ref>) on the real Talos robot (Fig. <ref>, Fig. <ref>). We recorded 51 demonstrations of 5521 s with the red T-shaped box, and we used three markers on the object instead of only one to mitigate sensor noise and self-occlusion. The policy learned with Flow Matching successfully solves the red T-shape case using both hands, dynamically adding or removing right-hand contacts, and effectively responds to disturbances applied to the object (see additional videos). §.§.§ Assistive Shared Autonomy Out of Distribution Imitation learning only performs well in-distribution for the task it was trained on. We assessed this by testing the box-pushing task with a blue U-shaped box, representing an out-of-distribution scenario. As we expected, the autonomous policy trained on the red T-shape performed poorly with the blue U-shaped box, frequently failing and getting stuck while attempting to push on non-existent sides. We evaluated our assistive shared autonomy mode <cit.> (Fig. <ref>) on the real robot (see additional videos) aiming to address this known downside of imitation learning. In this mode, the human operator commands only the left hand, while the policy commands the right hand, which is responsible for adding or removing the upper body support. We solved the box pushing task in the blue U-shape out-of-distribution case using this assisted teleoperation approach. The operator makes fine adjustments with the left hand while the policy adds the right-hand contact to enable distant reach. When the object moves right and becomes unreachable with the left hand, the policy removes the right-hand contact, and pushes the object back toward the left side. In the dishwasher task, the shared autonomy mode automatically places the right-hand contact on top of the dishwasher when the operator commands the left hand to go below a certain height while attempting to reach down. § DISCUSSION AND CONCLUSION Our experiments with multi-support tasks show that Diffusion and Flow Matching both outperform the traditional behavior cloning with supervised learning approach (our baseline). We hypothesize that this advantage arises because these tasks are diverse, multi-modal, and require intricate strategies. The learned policies are robust enough to be deployed on a real, full-size humanoid robot (Talos), enabling it to autonomously perform multi-support tasks, including pushing a box and closing drawers with the help of the free hand for balance. Additionally, our findings indicate that policies learned from demonstrations can assist in automatic contact placement, even for tasks different from the demonstrations, when used in a shared autonomy assisted teleoperation approach. Like other methods based on behavior cloning, the performances depends on the quality of expert demonstrations. 
The human operator must not only demonstrate the desired behavior but also include recovery actions that enable the policy to correct deviations from the nominal path and handle potential disturbances. Recent prior work <cit.> has shown that autonomous policies can be learned directly from raw images or point clouds, eliminating the need for fiducial markers. This aligns with our work and represents a natural extension, complementing the contact switch capability we propose. Moreover, our pipeline supports both hand and foot effectors, paving the way for complex multi-support loco-manipulation tasks. Further investigations into learning contact placement from human expertise across diverse and practical scenarios are worth pursuing.
http://arxiv.org/abs/2407.13425v1
20240718115005
The Effects of Selected Object Features on a Pick-and-Place Task: a Human Multimodal Dataset
[ "Linda Lastrico", "Valerio Belcamino", "Alessandro Carfì", "Alessia Vignolo", "Alessandra Sciutti", "Fulvio Mastrogiovanni", "Francesco Rea" ]
cs.RO
[ "cs.RO" ]
Effects of Object Features on Pick-and-Place 1 The Engine Room, Department of Informatics, Bioengineering, Robotics, and Systems Engineering (DIBRIS), University of Genoa, Italy 2 Cognitive Architecture for Collaborative Technologies Unit (CONTACT), Italian Institute of Technology, Italy Alessandro Carfì, Department of Informatics, Bioengineering, Robotics, and Systems Engineering (DIBRIS), University of Genoa, Genoa, Viale Causa 13, Italy alessandro.carfi@dibris.unige.it § ABSTRACT We propose a dataset to study the influence of object-specific characteristics on human pick-and-place movements and compare the quality of the motion kinematics extracted by various sensors. This dataset is also suitable for promoting a broader discussion on general learning problems in the hand-object interaction domain, such as intention recognition or motion generation with applications in the Robotics field. The dataset consists of the recordings of 15 subjects performing 80 repetitions of a pick-and-place action under various experimental conditions, for a total of 1200 pick-and-places. The data has been collected thanks to a multimodal set-up composed of multiple cameras, observing the actions from different perspectives, a motion capture system, and a wrist-worn inertial measurement unit. All the objects manipulated in the experiments are identical in shape, size, and appearance but differ in weight and liquid filling, which influences the carefulness required for their handling. The Effects of Selected Object Features on a Pick-and-Place Task: a Human Multimodal Dataset Linda Lastrico1,2, Valerio Belcamino1, Alessandro Carfì1, Alessia Vignolo2, Alessandra Sciutti2, Fulvio Mastrogiovanni1, Francesco Rea2 July 22, 2024 =========================================================================================================================================== § INTRODUCTION The physical characteristics of objects can play a significant role in how humans handle them. It has been shown that the weight of an object directly relates to the speed adopted to lift and transport it, therefore influencing the kinematics of the movement <cit.>. Similarly, when engaging with an item difficult to transport without damage, we approach it carefully and move it with particular attention, reducing the speed and prolonging the deceleration phase <cit.>. Human-to-human communication heavily relies on implicit non-verbal signals for coordination and intention understanding. Therefore, how an object is handled conveys relevant information to the observers. When dealing with robots, humans tend to attribute human-like abilities to them. For example, we expect a robot to focus its attention on where its cameras are looking or be immediately ready to hand over an object we need. When our expectations are disappointed, the whole interaction can be compromised <cit.>. Furthermore, for robots to collaborate with humans, it is fundamental to understand what the partner is doing and when it is the right moment to act. In this context, it is necessary to study how humans communicate through implicit clues to enhance the interaction capabilities of robots <cit.>. Recently, there has been a growth of research interest in solutions to estimate the physical characteristics of objects when humans manipulate them. The main insight is that a correct estimate of object properties would allow a robot to interact with a human more appropriately, especially when physical interactions are involved, e.g., a handover. 
Researchers explored the usage of deep neural networks to estimate containers capacity, dimension, and weight while observing human manipulations with RGB-D <cit.> or even simple RGB cameras <cit.>. However, a robot should also consider if an object requires particular care for being handled to avoid changing irreparably some of its properties. The state-of-the-art describes this problem as a binary classification, whereby the robot should distinguish whether or not the object manipulation requires carefulness. In this context, proposed solutions include template matching approaches relying on Gaussian Mixture Models <cit.> or deep neural network classifiers <cit.>. The definition of carefulness is not straightforward, since many factors may solicit cautious actions: from the physical context where the action takes place, to the properties of the object involved such as its fragility, precarious balance, or content about to spill <cit.>. Given the difficulty in framing this feature, not many studies explicitly refer to it, although it is addressed, for example, in the handling of filled containers <cit.>. From the same perspective, we define carefulness as the modulation of arm motion that minimizes liquid spilling during the manipulation of containers. Therefore, in the following, we will investigate the concept of carefulness in scenarios where humans transport containers and adapt their motion to the presence or absence of liquid inside. Since most of these studies rely on data-driven approaches, datasets must ground the learning process. The literature presents different examples of datasets for the study of object manipulations <cit.>, most of the time focusing on the interaction between the humans and the objects in specific applications, e.g., activities of daily living <cit.>, kitchen-related actions <cit.>, or handovers <cit.>. A recently published dataset, the CORSMAL Container Manipulation by <cit.>, collects actions such as pouring and handover initiations with containers with various shapes, materials, and content recorded with RGB-D cameras and microphones. However, the datasets currently available are more oriented to classical object recognition problems, proposing a high variability in the shapes and sizes of the objects examined, considering fewer sensors or a limited pool of participants; moreover, no one takes into consideration nor is aimed at modeling carefulness. Therefore, the contribution of this paper is twofold. First, we introduce a novel dataset describing careful and non-careful manipulations and narrow the focus on human pick-and-place actions of transparent cups, with or without water filling and balanced weights. Limiting the variability of objects allows the effect of features on human motion to be studied in detail. Data are recorded with a synchronized multisensory setup, i.e., motion capture system (MoCap), cameras, wrist-worn inertial measurement units (IMUs) and the robot point of observation, allowing for a complementary description of the scene from different perspectives. We then present an example of usage of the dataset by training Long Short-Term Memory Neural Networks, using the different sources of information provided by the dataset, to discriminate if carefulness was adopted during the manipulation and if the cup transported was light or heavy. The novelty of the approach lies in using the modulations of human kinematics to foster the inference process and understand the objects' latent features, completely overlooking their appearance. 
Moreover, the controlled design of the setup allows for deeply studying human strategies during the pick-and-place of objects with specific properties. By identifying the general rules to act on a set of objects, independently from their use, shape, or material, they can be used to design a robot's behavior appropriately. The article is organized as follows. Section <ref> describes, in detail, the study design and the data acquisition process. In Section <ref> we present a description of the dataset and its organization. Section <ref> describes the code provided to inspect and visualize the dataset. Finally, Section <ref> presents an example of how to use the dataset. Conclusions follow. § EXPERIMENTAL SETUP This section describes the study design and its main technical characteristics. Liguria Regional Ethical Committee approved the research protocol for this study (protocol 396REG2016 of July 25th, 2019), and all participants provided written informed consent to publish the collected data. §.§ Study Design An object may require careful manipulation for different reasons. A glass full of water requires carefulness to avoid spilling, while a ceramic vase requires carefulness to avoid breaking it. Likewise, the different reasons inducing carefulness also influence its physical manifestation. The careful manipulation of a glass of water would manifest in slow motions with constant orientation. Instead, carefully manipulating a ceramic vase would maximize the distance to nearby objects to avoid collisions. Given the limitations of previous research in the field, we decided to narrow the study limiting the notion of carefulness to the one induced by the need to move a container filled with a liquid while avoiding spills. In particular, the human actions recorded in the dataset consist of reaching, transportation, and departing movements involving four possible objects. In order to allow for simple reproduction of the experiments, we chose plastic glasses, which are easy to manipulate and of everyday use. The glasses, identical in shape and material, differed in their contents. In order to induce careful behavior in the recorded actions, we filled two of the glasses with water to the brim so that they required a high level of carefulness. Two different weights are considered: light (W1: 167 grams) and heavy (W2: 667 grams). Such values were determined by the fact that we wanted the light and heavy objects to be consistently different (500 grams) while balancing the presence or the absence of water in containers with the same volume. The desired weights were obtained by adding screws and coins inside the glasses, until reaching W1 or W2, balancing the presence of water. In this way, we defined four classes of actions, depending on the properties of the manipulated object, namely light and not careful (W1-NC), light and careful (W1-C), heavy and not careful (W2-NC), heavy and careful (W2-C). The sequence of performed actions is the same for every participant. It is designed to alternate the manipulation of the four categories of objects together with the direction of the movements. At the beginning of the experiment, the volunteer sits at a table, with their hands resting on it. On the table, covered with a black cloth, there is one shelf at each end, a scale right in front of the subject, and a keyboard on the left side, see Figure <ref> and Figure <ref>. 
Each shelf has four possible positions, denoted by a letter marked on the frontal edge of the shelf, where the glasses could be positioned, i.e., two on the bottom level and two on the upper one, see Figure <ref>. The shelves measure 36×23 cm, the top level is 36 cm above the bottom one and there is a border of 6 cm, delimiting each shelf and constituting an obstacle. The two positions on each shelf level are indicatively 18 cm apart. A blue cross on the table marks the resting position that the participants' right hand should reach after each movement, see Figure <ref>. The distance between the starting position and the shelves is indicatively 50 cm. The humanoid robot iCub <cit.> is placed in front of the table and passively records the scene with its left camera. As instructed by a synthetic voice, the participants perform a series of reaching, transportation, and departing movements of the four glasses. The volunteers interacted with the items with their right hand and received instructions on the next movement to perform by pressing a key on the keyboard with their left hand. The experiment is set up as summarized in Figure <ref>: * The experiment starts with the volunteer in the resting position and with the four objects distributed on the shelves. * When a key of the keyboard is pressed, a synthetic voice indicates the position on the shelf of the object to be transported. The position is referred to using the corresponding letter. * The volunteer reaches for the glass and grasps it in the specified location (reaching phase), as in Figure <ref>. * In the transportation phase, the volunteer moves the glass from the shelf (shelf initial position) to the scale. * The volunteer releases the glass and returns with the dominant hand to the resting position. * The volunteer presses the key a second time, and the synthetic voice suggests where the glass should be placed on the other shelf. The shelf spot chosen this time is vacant. * The volunteer reaches for the scale and takes the glass, see Figure <ref>. * The volunteer moves the glass from the scale to its final location on the shelf, performing a transportation action. * The volunteer places the glass down and returns to the resting position. The order in which the volunteers performed the experiment is detailed in Table <ref>. The first sixteen trials were used as a practice before the main experiment started. In the main experiment, each volunteer performed 64 reaching movements, 64 transportation movements, and 64 departing movements to the resting position. After the 16th and the 48th trial, the objects' position is changed by an experimenter to maintain the properties of the manipulated objects and the initial position of the glasses equally balanced. To prevent fatigue and gesture automatism, at the end of each sequence of 16 pick-and-place actions, participants could rest as much as they wanted, both physically and mentally. §.§ Data Acquisition The data collection process involved 15 healthy right-handed participants (8 males, 7 females, age: 28.6± 3.9). The participants are part of our research organizations, but none of them is directly involved in this research. Each subject performed 80 trials, ensuring 300 interactions for each of the 4 objects. Such participants' numerosity is generally higher than that of other object manipulation datasets in the literature, as highlighted by the review of <cit.> and, as recent examples, the works by <cit.> and <cit.>. 
The framework adopted during the experiment was designed to ensure a high degree of automation in the acquisition phase. Most sensors were directly interfaced with the YARP middleware <cit.>, allowing timestamp synchronization, whereas the wrist-worn IMU was using a Robot Operating System (ROS)-YARP interface. The data have been segmented to separate each action presented in Figure <ref>. The MoCap and the IMU segmentation have been performed automatically, exploiting the participants' pressures on the key. Instead, the camera images were organized offline into separate folders according to the saved timestamps. For each participant, we saved a log file containing information on the experiment. These log files contain the YARP timestamp of each instruction communicated by the synthetic voice and the time for each key's pressures. §.§.§ Motion Capture System As a motion capture system (MoCap), we used the Optotrak Certus®, NDI, with active infrared markers. In total, we recorded the signal from 15 markers. As pictured in Figure <ref>, the five markers on the hand were placed, respectively, on the metacarpophalangeal joints of the index and the little finger, on the diaphysis of the third metacarpal, and on the smartwatch in correspondence to the radial and ulnar styloid. Additionally, two markers were positioned on the watch strap, one per side, to better characterize wrist movements. Even though the main focus of the recording was to acquire hand and wrist motions, we decided to position a few markers on the participants' arm and forearm. We used two rigid cardboards where we put four markers each. See Figure <ref> for reference. The frequency of the acquisition is 100 Hz. For every frame, the three-dimensional coordinates of every marker (in millimeters) were saved into the file associated with the trial. Moreover, the timestamp at the beginning and the end of each trial was saved. This was used to retrieve the timestamp corresponding to each frame by applying a linear interpolation. §.§.§ Inertial Sensors On the right wrist of the volunteers, we mounted an LG G Watch R smartwatch equipped with a 6-axis IMU. The sampling rate was 71 Hz. As for the MoCap data, a separate file was created for each trial, whenever the key on the keyboard was pressed at the end of the departing movement. The file was saved in format containing the ROS and YARP timestamps at each sample, the internal Android timestamp, and the three components of the linear acceleration in m/s2 and those of the angular velocity in rad/s. The data were published by the Android app on ROS for the smartwatch acquisition and then saved. Through the ROS-YARP interface, the corresponding YARP timestamp was sent to ROS at each key's pressure and written on the file. §.§.§ Cameras Two cameras with a resolution of 1920×1080 pixels were positioned in the room where the experiments took place. The former was placed at the back of the participant's chair and recorded the scene from an overhead viewpoint, remaining elevated by 130 cm with respect to the table. The latter was on the left side of the participants, with an oblique point of view, 65 cm higher than the table with a distance from the hand starting position of circa 140 cm. The reader is referred to the scheme in Figure <ref> for reference. The frame rate was set to 30 Hz. The last sensor used in the experiment was iCub's left camera. The robot was located opposite the table, in front of the volunteer, with a complete perspective of the table and the shelves. 
The camera's resolution was 320×240 pixels, and the frame rate was 22 Hz. Even though this resolution is particularly low, the third camera offers a complementary point of view to the other two, which is relevant for a possible deployment of the acquired data in a human-robot interaction context. However, we suggest that future researchers interested in reproducing a similar setup use a standard webcam for the frontal view in addition to the robot camera, to obtain the best quality view from all perspectives. As previously mentioned, the images acquired with the cameras were not automatically segmented during the acquisition. Segmentation occurred afterward using the YARP timestamps, which were saved for each image from each camera in a log file. Indeed, each saved camera frame was associated with a YARP timestamp, making it possible to relate the acquired images to the events triggered by the key presses. § DATA RECORDS The dataset is available on Kaggle[<www.kaggle.com/dataset/cec218d6597e7c2cac28c7d6a1e8cbd381e451a77192c16b648d2b4c5de70697>], while the software utilities can be found on GitHub[<https://github.com/lindalastrico/objectsManipulationDataset>]. While we used MATLAB as the reference software to create the utility functions, we chose to make the sensory data available in non-proprietary formats, depending on the sensor. A comprehensive table summarizing the performed movements, with details about their direction and the properties of the object involved, is available in the utilities, also in a machine-readable structure. Table <ref> summarizes the characteristics of each trial. The same sequence was performed by each participant. In the repository, the data are organized into separate folders on a sensor basis. Inertial and MoCap recordings can be found in their respective folders, while the camera recordings are divided into three folders: the low-resolution frontal images from the iCub left camera, the high-resolution lateral camera images, and the high-resolution images from behind. The main folder also includes a folder containing the experiment log files. They report the YARP timestamps corresponding to the key presses at the end of each trial, together with the information relative to the transport movement: the instruction given by the synthetic voice is reported as “sending speech: Prendi/Metti in {Position letter}”. The verbs “Prendi” and “Metti” mean, respectively, “Take from” and “Put in” in Italian, and were used to tell participants where to take the glass from or where to put it back on the shelf, using the letter corresponding to the position, as in Figure <ref>. Each of the described folders is organized into one sub-folder per participant, containing the associated files. In the case of the MoCap and inertial sensors, the files are sequentially named from trial 1 to 80. The camera folders are instead divided, for each subject, into 80 subfolders (one for each trial), e.g., the folder data/cam_1/P001/P001_Trial_001 contains the images from the lateral high-resolution camera for the first trial of the first subject. The same sub-folders also contain a data.txt file reporting the correspondence between the YARP timestamps and image names for the specific trial. The raw MoCap data are organized in files with 62 columns and as many rows as the samples in the trial. The first column contains a progressive ID, the second one the YARP timestamp for each sample. 
As previously mentioned, this timestamp was computed after the acquisition by linearly interpolating the timestamps indicating the start and the end of the trial. The remaining columns represent triplets of three-dimensional trajectories (x,y,z coordinates) followed by an additional column for each marker containing 0, if the marker was visible in that particular frame, or 1 if not. The order in which the markers appear in the files follows the numbering introduced in Figure <ref>, therefore the first triplet being the coordinates of the marker on the metacarpophalangeal joint of the index, the second triplet the marker on the joint of the little finger, and so on. Regarding the inertial sensor raw files, they are saved in format. For each sample, the available information is the ROS timestamp, the YARP timestamp, the Android one, and the three components of linear acceleration and angular velocity. §.§ Data Assessment A first evaluation on the quality of the provided data is related to their frequency. As already mentioned, the acquisition frequency depends on the specific sensor, and it is 100 Hz for the MoCap, 71 Hz for the inertial sensor, 30 Hz for the cameras, and 22 Hz for the robot camera. Figure <ref> represents the frequencies for the sensors considering the whole experiment for all the participants. Even though some outliers are present, downsampling the data to the lowest frame rate (22 Hz) allows for automatically cleaning them and keeping a frequency well above the range of human motion, that is generally below 10 Hz <cit.> and lies in the interval [0.3, 4.5] Hz for hand motion <cit.>. An in-depth kinematics analysis can be carried out to study how the properties of the different objects affect the movement of the arm and condition how the transport action is completed. Our dataset allows not only to calculate different kinematic parameters, but also to compare the information which can be retrieved from the different sensors. In Figure <ref> can be found an example, which compares kinematics parameters extracted from the synchronized MoCap and inertial sensors during the transportation of the four glasses. Figure <ref> illustrates, as a proof of concept, how the hand mean velocity during the transport phases evolves depending on the number of participants considered. As the sample becomes larger, the measure stabilizes, and from the point when the dataset has 11 subjects, the velocity is almost completely constant. This analysis hints that the sample size of 15 participants (for a total of 1200 trials) is suitable for a reliable description of the proposed pick-and-place scenario; indeed, the acquired data have an internal coherency and well represents the diversity and the natural variance of human motion in this context. In detail, in Figure <ref> are shown the median, the 25th and the 75th percentiles of the hand velocity calculated deriving the 3D position of one of the markers placed on the volunteer's hand (markers 1 to 4, see Figure <ref> for reference). According to the Kruskal-Wallis test for non-normal distributions, we found a significant difference in the velocity adopted for transporting the glasses between those filled with water (W1-C and W2-C) and those empty, with simply a weight difference (W1-NC and W2-NC). Even though a trend is visible, no significant difference was found concerning the weight. The same results emerge in Figure <ref> as well, where the wrist's mean angular velocities are recorded with the inertial sensor in the smartwatch. 
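As a minimal illustration of this assessment, the sketch below derives the hand speed from one hand marker's 3-D trajectory, downsamples it to the common 22 Hz rate, and compares per-trial mean speeds of the four cup classes with a Kruskal-Wallis test. The file parsing is omitted, and the trajectory and per-class speed values are synthetic placeholders; only the sampling rates, the marker coordinates in millimeters, and the four condition labels come from the description above.

```python
import numpy as np
from scipy.stats import kruskal

FS_MOCAP = 100.0   # MoCap sampling rate (Hz)
FS_COMMON = 22.0   # lowest sensor rate, used as the common rate (Hz)

def hand_speed(marker_xyz_mm, fs=FS_MOCAP):
    """Speed (mm/s) of one hand marker from its 3-D trajectory sampled at fs Hz."""
    vel = np.gradient(marker_xyz_mm, 1.0 / fs, axis=0)   # finite-difference velocity
    return np.linalg.norm(vel, axis=1)

def downsample(signal, fs_in=FS_MOCAP, fs_out=FS_COMMON):
    """Naive index-based resampling to the common rate."""
    n_out = int(len(signal) * fs_out / fs_in)
    idx = np.linspace(0, len(signal) - 1, n_out).astype(int)
    return signal[idx]

rng = np.random.default_rng(0)

# Synthetic marker trajectory (N x 3, in mm) standing in for one trial.
trajectory = np.cumsum(rng.normal(0.0, 1.0, (500, 3)), axis=0)
speed_22hz = downsample(hand_speed(trajectory))

# Per-trial mean hand speeds grouped by condition; numbers are synthetic,
# purely to make the statistical comparison runnable.
mean_speed_by_class = {
    "W1-C":  rng.normal(450, 40, 30),   # light cup, filled with water (careful)
    "W2-C":  rng.normal(440, 40, 30),   # heavy cup, filled with water (careful)
    "W1-NC": rng.normal(520, 40, 30),   # light cup, empty (not careful)
    "W2-NC": rng.normal(510, 40, 30),   # heavy cup, empty (not careful)
}

h_stat, p_value = kruskal(*mean_speed_by_class.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3g}")
```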
Again, a significant difference in the magnitude of the angular velocity appears between the Careful and Not Careful transport motions. Finally, Figure <ref> provides an insightful qualitative overview of the hand velocity profile acquired with the MoCap system during the reaching towards the cups (first peak) and the transport phase (second peak). To create such a global representation, the mean of the hand velocities of all the trials, separated into the four classes of motion, was computed for every time instant, together with its standard deviation. It can be noted how not only the maximum velocity decreases when transporting the full cups, but also the peak is anticipated, especially for the full, heavy container. Participants tended to be especially cautious in the final phase of the movement, gently leaning the glass so as not to spill the content. § CODE AVAILABILITY The Github repository[<https://github.com/lindalastrico/objectsManipulationDataset>] includes a number of MATLAB scripts allowing users to load and process the data. Further details on how to use the available functions are contained in each of them. * and allow users to load and save, respectively, the motion capture and the inertial data coming from the smartwatch in an easy-to-use data structure in the form of a 15×80 cell array (number of subjects × number of trials). * saves in a data structure the YARP timestamp for each one of the three cameras during the experiment. * produces a 3D plot with the visible markers positioned on the hand for specified trial and subject combo. * creates a video from the images, specifying the desired participant, trial and camera (one of the two high resolution cameras or the robot camera). It also saves the YARP timestamps corresponding to the considered frames. * renders the video previously created with together with the trajectories of the markers on the hand during the specified trial and the three components of the acceleration recorded with the smartwatch. The 3D trajectories, the acceleration and the video are reproduced simultaneously; the number of the markers to visualize can be easily modified. § EXAMPLE OF USAGE In this section, we provide insights into the dataset usage. In particular, we compare the performances of the same classifier, a Long Short Term Memory Neural Network (LSTM-NN), applied to the data sources present in the dataset, i.e., MoCap, Robot Camera, and IMU. In this example, we focus on transportation motions because of the direct influence of the objects' physical characteristics given by the contact presence. After being segmented, the transportation phase of each trial was appropriately labeled according to the features of the cup involved; thus, we trained binary classifiers to discriminate its weight or the presence of carefulness in the motion. This study's objective is to provide insight on which sensing approach better fits the problem and if fusing information from different sensing modalities could help. §.§ Data Preprocessing Before using them to train and test the LSTM-NNs, it is necessary to preprocess the data. This preprocessing involves segmentation, feature extraction, data filtering, resampling, data normalization, and padding. As shown in Figure <ref>, each trial contains the whole pick-and-place action divided into reaching, transportation, and departing. At the beginning and the end of the transportation, the subject hand stops to grasp and release the object. This characteristic can be leveraged to isolate the transportation motion. 
We did this by computing the norm of the hand velocity from the MoCap data and applying a thresholding mechanism to identify the timestamps corresponding to the start and end of the transportation. We used the resulting timestamps to segment the IMU and the robot camera data. Given the different nature of the three selected data sources, extracting consistent features is necessary to adopt the same classification pipeline. From the robot camera images, we computed the Optical Flow (OF) <cit.>, and we used it to extract the components of the motion velocity on the image plane. From these data, we then calculated the velocity norm, the angular velocity, the curvature, and the radius of curvature <cit.>. We selected these four features since, in the past, they have been successfully adopted to characterize biological motions <cit.> and discriminate between careful and non-careful transportations <cit.>. Instead, we selected six features both for the MoCap and the IMU. For the MoCap, we computed the hand triaxial linear acceleration and velocity. For the IMU, we used the raw data, i.e., linear acceleration and angular velocity. We suggest filtering the data for noise reduction as part of the data preprocessing. In this case, we applied to every temporal sequence a first-order Butterworth filter with a threshold frequency equal to the original sampling rate of the sensor (i.e., 71 Hz for the IMU, 100 Hz for the MoCap, and 22 Hz for the camera). To simplify the comparison and easily combine information from the different sources, we resampled the IMU and MoCap data to match the camera sampling frequency. This choice also finds support in previous research suggesting that 20 Hz is an ideal sampling frequency for the perception of human daily activities <cit.>. To perform the resampling, we interpolated the data and used the camera timestamps to compute the new samples. The resulting sequences are then scaled using min-max normalization to decrease the difference in scale between the features. For the IMU, we selected as maximum and minimum values the full-scale range of the sensors, i.e., ± 2g for the accelerations and ± 8.73 rad/s (± 500 deg/s) for the angular velocities. Finally, since all the trials have a different temporal length, we padded the data with zeros to be able to use batch training. In particular, we used pre-padding since it is considered more robust to the noise introduced by the zeros <cit.>. §.§ Model Training and Validation As mentioned before, our experiment aimed to compare the results of different classifiers for distinguishing, separately, whether carefulness was adopted during the transportation and whether the object involved was heavy or light. The evaluation focuses on four classifiers that differ from each other in the data source used. The study comprises one classifier for each sensor (i.e., camera, IMU, and MoCap) and a classifier using both IMU and camera data. We explore the combination of IMU and camera data to determine if an autonomous robot could leverage it for more reliable perception. We chose a simple LSTM model followed by fully connected layers for each of the four models. The networks were implemented in Python using the sequential layers provided by Keras[<https://keras.io/>].
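A minimal sketch of such a classifier is given below. It follows the layer sizes and training settings detailed in the next paragraph (a 64-unit LSTM, a 32-unit dense layer preceded and followed by dropout of 0.5, a 2-way softmax with L1-L2 kernel regularization, Adam with a learning rate of 0.0002, and early stopping on the validation loss); variable names, the input shape constants, and the commented training call are illustrative assumptions rather than the authors' exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

SEQ_LEN = 134       # maximum sequence length after resampling
N_FEATURES = 6      # MoCap or IMU features; 4 for the camera, 10 for IMU plus camera

def build_classifier(seq_len=SEQ_LEN, n_features=N_FEATURES):
    model = models.Sequential([
        layers.Input(shape=(seq_len, n_features)),
        layers.LSTM(64),
        layers.Dropout(0.5),
        layers.Dense(32, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(2, activation="softmax",
                     kernel_regularizer=regularizers.l1_l2(l1=0.001, l2=0.001)),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.0002),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Early stopping on the validation loss with a patience of 5 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)

model = build_classifier()
model.summary()
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, batch_size=16, callbacks=[early_stop])
```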
The LSTM layer has 64 hidden units and an input shape equal to [sequence_length × n_features], where the sequence length is fixed to 134 samples (maximum sequence length after resampling), and the number of features varies according to the data source (i.e., 6 for MoCap and IMU, 4 for the camera and 10 for IMU plus camera). The next layer is fully connected, with 32 neurons, and it is preceded and followed by dropout layers with a value of 0.5. The output layer is another fully connected one with two output neurons corresponding to the two classes: careful and not careful. Given the double output, we chose a softmax function for the activation and the categorical cross-entropy to evaluate the loss. An L1-L2 kernel regularization was added to the last layer to prevent the model from overfitting with L1 = 0.001 and L2 = 0.001 as parameter values. The chosen optimization algorithm was AdamOptimizer, with a learning rate of 0.0002 and batch size of 16. For each of the four models, we carried out the training and testing phases by adopting the Cross-Validation with a Leave-One-Out approach to test the ability of the model to generalize over different participants. Therefore, to split the 1200 sequences (15 participants × 80 sequences) into training, validation and test sets, we adopted the following procedure. One at a time, the data of each participant were used as a test set, and the remaining 14 were further divided, 80% for the training and 20% for the validation. The validation set has been picked randomly from the 14 volunteers. The models have been trained for 100 epochs using an early stopping on the validation loss with patience set to 5 epochs to avoid overfitting. §.§ Results The classification results of the four models are reported in Figure <ref> using boxplots showing the median, the average, and the distribution of the values for each data source. Regarding the carefulness, the overall performances of the models are comparable, i.e., 91.3% for the IMU, 91.4% for the camera, 91.6% for the MoCap, and 91.1% for IMU plus camera (see Figure <ref>). Therefore, considering this evidence, the 4 data sources are almost equivalent for classification purposes. On the other hand, if we exclude the outliers, the minimum values displayed in the chart appear to reflect some differences between the models. The lowest accuracy achieved while using camera and MoCap data is around 84%, for the IMU reaches approximately 82%, and for the combination of IMU and camera achieves 75%. This result suggests that combining IMU and camera data may not be particularly effective for estimating motion carefulness. However, since all the single classifiers reached high performances, it is possible to imagine an autonomous system combining the results of different classifiers to obtain more stable and reliable inferences. With the same approach, the accuracy of the weight classification is instead not as satisfying as the one achieved for carefulness. The mean accuracy values, as reported in Figure <ref>, are 55% for the IMU, 56% for the camera, 54% for the MoCap, and 59% for IMU plus camera. This result is particularly interesting, as the inference process for the weight is not as simple as expected. Our hypothesis is that the greatest challenge for the volunteers during the experiment was to safely handle the filled glasses and that the difference in weight between the objects did not have an impact as strong as the water filling on the kinematics. 
This observation is confirmed by Figures <ref> and <ref>, where the significant difference in the considered kinematics features emerged only between full and empty cups. Objects can have multiple concomitant features, which do not always have the same effect on kinematics as their interaction may lead to the attenuation of their effect as compared to when considered individually. Our dataset, by attentively balancing the combinations of water filling and weight along multiple directions, can be particularly useful to deepen the understanding of human motor strategies in this context. §.§ Discussion Based on the results presented in the previous section, it appears to be inconsequential which sensing modality we adopt, regardless of the object feature under study. However, we should make a few observations to provide a more comprehensive view of the matter. Firstly, we should note that, in our approach, we processed RGB images with the optical flow to extract features that describe only human motion. We proceeded in this way to isolate human kinematics, but this procedure reduces the richness of information of the sensing modality. In fact, we would expect a classifier trained on raw images to achieve better accuracies since it can leverage the visual features of objects as well. Secondly, the presented results have been achieved in a specific experimental setting designed to be as fair as possible for each sensing modality. Therefore, future studies could benefit more from specific sensing modalities depending on the constraints and characteristics of the application and human actions under consideration. For example, in a scenario where humans interact with crowded environments or use different motor actions, the frequent occlusions with other objects could impair visual sensing. On the other hand, wearable sensors such as IMUs are not affected by occlusions, but it is not always possible or convenient to sensorize humans. § CONCLUSIONS This article provides a multimodal dataset of human object pick-and-place tasks under different experimental conditions. In particular, the dataset describes the effect of object weight and associated carefulness on human motions during a pick-and-place. The dataset is collected in a controlled environment with 15 subjects performing 80 different pick-and-place actions. The dataset contains multiple camera views, IMU, and MoCap data providing the possibility of integrating and comparing the diverse sensing modalities. The data collection was prompted by the need to create reliable models of human strategies in the context of object manipulation, which could be used by robots to understand the scene and facilitate interaction. In this article, as an example of usage, we propose a simple comparison of the performances of classifiers using different sensing modalities to distinguish between careful and non-careful transportations or between heavy and light cups. The motions which constitute the dataset are also strictly controlled concerning their direction (left, right, up, down, adduction, receding) and the different phases of the actions can be easily split, distinguishing between the reach-to-grasp, the transport, and the departing motions (see schema in Figure <ref>). This makes the dataset particularly suitable for studying human intention recognition; understanding what is going to be grasped, and where, can lead to significant improvements in the HRI experience, allowing robots to anticipate our goals and adapt accordingly. 
This kind of study can also be conducted by comparing the different sensors available, so as to build a framework that can be adapted to different contexts: for instance, by relying more on inertial data in case of obstructions to the cameras. Other applications to Robotics fall under the theme of motion generation. The presented dataset, however, is not aimed at classical robot learning from human demonstration; indeed, useful sensors for such use, such as grip and force sensors, are not included. Still, the acquired human velocity profiles have been used to build on top of standard robot trajectories (pick-and-place and collaborative handovers) by adding a communicative layer to the motions. Generative Adversarial Networks were trained on human examples to produce velocity profiles falling within the desired careful/not careful attitude, and such time series were used to control the end-effector trajectories. Such an approach has been successfully deployed on humanoids and robot manipulators, producing gestures adapted to the cups' content and improving the interaction efficiency (see <cit.> for reference). This dataset represents a contribution to the study of how specific object characteristics influence human motion. In future works, it could be interesting to expand the dataset to consider different levels of carefulness and to study which factors (such as context, value, fragility, potential danger, and so on) impact human motion, going beyond the spillability of the water content. § DECLARATION OF COMPETING INTEREST The authors declare no competing interests. § ACKNOWLEDGEMENTS A.S. is supported by a Starting Grant from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, G.A. No 804388, wHiSPER. This work is partially supported by the CHIST-ERA (2014-2020) project InDex, and received funding from the Italian Ministry of Education and Research (MIUR). This research is partially supported by the Italian government under the National Recovery and Resilience Plan (NRRP), Mission 4, Component 2 Investment 1.5, funded by the European Union NextGenerationEU and awarded by the Italian Ministry of University and Research.
http://arxiv.org/abs/2407.12647v1
20240717151623
Fusion Flow-enhanced Graph Pooling Residual Networks for Unmanned Aerial Vehicles Surveillance in Day and Night Dual Visions
[ "Alam Noor", "Kai Li", "Eduardo Tovar", "Pei Zhang", "Bo Wei" ]
cs.CV
[ "cs.CV", "cs.AI" ]
kaili@ieee.org (Corresponding author) 1. CISTER Research Center, Porto, Portugal. 2. University of Michigan, Ann Arbor, Michigan, USA. 3. Newcastle University, Newcastle, UK. § ABSTRACT Recognizing unauthorized Unmanned Aerial Vehicles (UAVs) within designated no-fly zones throughout the day and night is of paramount importance, where the unauthorized UAVs pose a substantial threat to both civil and military aviation safety. However, recognizing UAVs day and night with dual-vision cameras is nontrivial, since red-green-blue (RGB) images suffer from a low detection rate under an insufficient light condition, such as on cloudy or stormy days, while black-and-white infrared (IR) images struggle to capture UAVs that overlap with the background at night. In this paper, we propose a new optical flow-assisted graph-pooling residual network (OF-GPRN), which significantly enhances the UAV detection rate in day and night dual visions. The proposed OF-GPRN develops a new optical fusion to remove superfluous backgrounds, which improves RGB/IR imaging clarity. Furthermore, OF-GPRN extends optical fusion by incorporating a graph residual split attention network and a feature pyramid, which refines the perception of UAVs, leading to a higher success rate in UAV detection. A comprehensive performance evaluation is conducted using a benchmark UAV catch dataset. The results indicate that the proposed OF-GPRN elevates the UAV mean average precision (mAP) detection rate to 87.8%, marking a 17.9% advancement compared to the residual graph neural network (ResGCN)-based approach. Keywords: Unmanned Aerial Vehicles Surveillance, Residual Convolutional Networks, Split Attention Network, Optical Flow Fusion. § INTRODUCTION Detecting illegal unmanned aerial vehicles (UAVs) within designated no-fly zones is of paramount importance, where the illegal UAVs pose a substantial threat to civil and military aviation safety due to their potential to interfere with flight paths, causing severe accidents <cit.>. The illegal UAVs also present a risk to sensitive infrastructure, such as power plants and communication networks <cit.>, where an accidental or intentional collision could result in widespread service disruptions or catastrophic failures <cit.>. As shown in Fig. <ref>, an unauthorized UAV outfitted with cameras or other surveillance devices in a no-fly zone can infringe on privacy rights and present substantial security threats to both civil and military operations. Such UAVs have the potential to obtain unauthorized imagery or data, thereby providing malicious entities with invaluable information <cit.>. Identifying UAVs through camera imagery presents a considerable challenge. This arises from UAVs' propensity to integrate inconspicuously with environmental elements, such as structures and foliage, particularly during nocturnal hours when their hues can closely resemble the backdrop. Moreover, during daylight, the variability in illumination conditions further complicates the detection process. Deep learning models, referenced in <cit.>, have been utilized for UAV detection based on their color congruence with the background. Notably, these models often demonstrate proficiency in detecting UAVs in daytime color images or in infrared (IR) images captured at night.
However, its efficacy tends to diminish when faced with homogenous backgrounds at nighttime or fluctuating illumination during daylight hours. In our antecedent research <cit.>, we explored a deep learning model, leveraging transformations and cosine annealing strategies to reduce classification and regression discrepancies for UAV detection utilizing both RGB and IR imagery. However, the detection efficacy using RGB images is compromised under less-than-ideal lighting conditions, such as during overcast or tempestuous days. On the other hand, while using IR (monochromatic) imagery, discerning UAVs that merge with backgrounds of a similar hue presents its own set of challenges. In this paper, we propose optical flow-assisted graph pooling residual networks (OF-GPRN) designed for intricate UAV detection using combined RGB and IR images. While the IR image remains unaffected by light conditions, the RGB image retains vital color data. The proposed OF-GPRN takes advantage of integrating RGB and IR images to optimize contrast, edge definition, color, and texture in each frame. This combined image also mitigates distortions arising from lighting variances, color, and background interference. Relying on this image integration, the OF-GPRN system produces a comprehensive composite frame enriched with features such as fine-texture, broad-texture, and contrast, which facilitates the extraction of UAV movement patterns. To isolate the UAV from its background, our proposed OF-GPRN system harnesses the fusion of RGB and IR images, subsequently processed through optical flow <cit.>, facilitating the segregation of the UAV from the amalgamated image. This system innovatively augments graph neural networks (GCN) by integrating graph residual split-attention networks (GRSaN) <cit.>, aiming to optimize the mAP for UAV detection <cit.>. Given the diminutive representation of the extracted UAV within the image, it poses challenges in distinguishing it from other entities, such as avian creatures or aircraft. Specifically, the OF-GPRN model refines the extracted object's contours and ascertains pixel correlations across the pre-processed RGB and IR imagery. This aids in UAV identification and augments predictive accuracy by capitalizing on the expansive feature-learning prowess offered by the feature pyramid during model calibration. The main contributions of this paper are listed as follows: * The OF-GPRN system is proposed to enable precise UAV detection in day and night dual visions that experience time-varying lighting conditions and high background similarities. The OF-GPRN system develops a fusion of RGB and IR frames, which enhances the quality of output frames by reducing noise and adjusting illumination and color. The OF-GPRN system also extends a new optical flow model to eliminate background and foreground similarity while extracting the UAV’s mobility. * The GRSaN is extended in the OF-GPRN system to stabilize the learning capability, enhance feature learning during training, and reshape the UAV. The OF-GPRN system also uses a Quickshift-based algorithm to represent the adjacency matrix from pixels to nodes. This makes for an accurate graph with clear frame-pixel information. * We conducted experiments to assess the performance of the proposed OF-GPRN system in UAV detection during both daytime and nighttime conditions. 
In comparison to preceding residual GCN (ResGCN) object detectors, our enhanced model demonstrates superior performance, achieving a commendable mAP of 87.8% on the stringent RGB-IR combined benchmark UAV catch dataset. This marks a significant improvement over the ResGCN, which obtains an mAP of 69.9%. This paper is organized as follows: Section <ref> presents the literature overview on deep-learning-based UAV detection. The proposed OF-GPRN system is presented in Section <ref>. In Section <ref>, we study the system implementation, experimental setup, as well as the performance evaluation. Section <ref> concludes the paper. The symbols used in the paper have been listed below in the Table. <ref>. § RELATED WORK The literature encompasses several UAV detection methodologies based on RGB or IR images, leveraging Convolutional Neural Networks (CNN) <cit.>. In particular, Tian et al. introduced a YOLOv5-based detection paradigm tailored for small UAVs in <cit.>. This model refines detection by designing anchor box sizes, incorporating a convolutional block attention module (CBAM), and revising the loss function. Its prowess is accentuated by its capability to detect UAVs in complex and challenging environments. Similarly, Alsoliman et al. put forth a UAV detection approach that leverages random forest classification, as articulated in <cit.>. This technique discerns patterns in video data to curtail the influx of packets emanating from UAVs. Furthermore, a distinctive method is delineated in <cit.>, introducing the notion of a pivot fingerprint model designed for pinpointing anchor packets within video streams for UAV detection. The framework of this model uses a two-tiered feature selection process, with the first phase being model-independent and the second phase being model-dependent. Lui et al. studied an HR-YOLACT algorithm, which is an amalgamation of HRNet and YOLACT techniques. This model is architected to feature a lightweight prediction head, facilitating the detection of UAVs and extracting their features through instance-based semantic segmentation <cit.>. Muhammad et al., on the other hand, delved into a transfer learning technique for the identification of UAVs, using both VGG16 and Faster-RCNN <cit.>. Meanwhile, Jihun et al. brought forward an approach in <cit.>, leveraging a Pan-Tilt-Zoom camera system for UAV detection using the Faster R-CNN Inception Resnet algorithm. Furthermore, Wei et al. tailored the YOLOv3 model combined with transfer learning to detect UAVs using an RGB camera. This integration has further potential for real-time surveillance applications, especially on platforms like the NVIDIA Jetson TX2 <cit.>. In the same way, Reddy et al. showed how YOLOv3 could be used to find UAVs, highlighting how useful it is for effective monitoring in a variety of daytime situations <cit.>. Lee et al. ventured into a machine learning-centric approach, focusing on the identification of UAVs from RGB images. Their system keeps a vigilant eye on surveillance zones, pinpointing and cataloging UAVs using an RGB camera and subsequently determining their geographic position and manufacturer model <cit.>. In another innovative approach, Wang et al. launched a semi-supervised object detection technique termed Decoupled Teacher. Built on Faster-RCNN's foundation, this method employs unlabeled data with the aim of counteracting the imbalance between foreground and background observed in RGB camera feeds <cit.>. In <cit.>, Basak et al. 
describe a YOLO-based UAV detection strategy that uses spectrogram images to classify and group spectral instances. Byunggil and Daegun, in <cit.>, showcased various micro-Doppler signatures of UAVs. These were discerned using the short-time Fourier transform along with the Wigner-Ville distribution, both of which serve as tools to aid in the identification of UAVs. Multiple trainable object detection models were put to the test, comparing them with UAV identification tasks. Complementarily, Suh et al. embarked on research tailored for UAV detection on platforms that are constrained in terms of resources, particularly focusing on video hardware. Their study in <cit.> meticulously melded algorithmic optimizations with FPGA hardware, aiming to adeptly scrutinize the intricacies of video streaming. Qi et al. put forth a technique tailored to the recognition of consumer-grade UAVs <cit.>, employing static infrared imagery. Their model uses an approach based on importance and integrates basic convolution, adaptive thresholding, linked domain filtering, and SVM-based discrimination. Sun et al., in <cit.>, introduced TIB-Net, a specialized model designed for UAV detection through an RGB camera. This model, uniquely structured with a cyclic pathway, is particularly adept at detecting small-sized UAVs. Augmenting its efficacy, a spatial attention module is integrated, working to pare down data redundancy and extraneous noise. Further research, as outlined in <cit.>, showcases efforts wherein various CNN-based strategies are deployed to identify UAVs. These detection tasks are undertaken under a myriad of lighting scenarios, harnessing the power of RGB video feeds. However, despite the extensive study into CNN algorithms tailored for both RGB and IR-based UAV detection, their potency tends to be encumbered. This restriction results primarily from the difficult challenge of backgrounds with a high degree of similarity to the UAVs. The prevailing models as depicted in the literature predominantly operate on single-stream data, either RGB or IR, as opposed to processing fused optical flow images, where the visual presentation of UAVs and other entities can exhibit substantial variation between day and night vision scenarios. Additionally, the feature quality of the resultant frames is found to be considerably influenced by varying lighting conditions, which, in turn, often exacerbates the challenges due to the heightened similarities between the UAVs and their background environments. § THE PROPOSED OF-GPRN SYSTEM In this section, we study the proposed OF-GPRN system that learns the feature layers while improving the detection accuracy, which is illustrated in Fig. <ref>. The OF-GPRN is developed to retain and recognize the original structure of the complex day and night vision data. §.§ Proposed System Model §.§.§ RGB-IR Fusion A fusion module is developed in the proposed OF-GPRN to merge two input frames from RGB and IR sources while enhancing the quality of the output frames by reducing noise and adjusting illumination and color. First, multi-level edge preservation filtering is used to separate the input frames into base layers (B_s), fine-structure (F_s), and coarse-structure (C_s). This enables the extraction of fine texture and large texture features at F_s and C_s layers respectively, while contrast and edge details are preserved in the B_s layer. 
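A rough illustration of this kind of multi-level decomposition is sketched below, using OpenCV's bilateral filter as a generic edge-preserving smoother applied at two scales. The actual filter used by OF-GPRN and its parameter values are not specified here, so the filter choice, the parameter numbers, and the synthetic input frame are placeholders.

```python
import cv2
import numpy as np

def decompose(frame_gray):
    """Split a grayscale frame into base (B_s), fine-structure (F_s) and
    coarse-structure (C_s) layers via edge-preserving smoothing at two scales.
    Filter choice and parameters are illustrative placeholders."""
    img = frame_gray.astype(np.float32)

    # Small-scale smoothing: the residual keeps the fine texture.
    smooth_small = cv2.bilateralFilter(img, 5, 25, 5)
    fine = img - smooth_small                      # F_s: fine texture

    # Larger-scale smoothing: the residual keeps the coarse structures.
    smooth_large = cv2.bilateralFilter(smooth_small, 9, 50, 15)
    coarse = smooth_small - smooth_large           # C_s: large texture
    base = smooth_large                            # B_s: contrast and strong edges

    return base, fine, coarse

# Synthetic stand-in for a grayscale RGB or IR camera frame.
frame = np.random.default_rng(0).integers(0, 256, (240, 320), dtype=np.uint8)
base, fine, coarse = decompose(frame)
print(base.shape, fine.shape, coarse.shape)
```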
To retain edge information, weighted mean curvature (W_f) is applied to each input layer, and a Gaussian filter (G_f) is utilized to remove Gaussian noise. The weighting matrix filter w(P,Q) assigns higher weights to the center pixels within a square-shaped window with parameters P and Q. A modified Laplacian operator ℒ_(x,y) is defined to obtain high-quality output frames that capture detailed features, such as edges and contours, from both RGB and IR sources <cit.>. Thus, ℒ_(x,y) is given as ℒ_(x,y) = ∑_P=-p^p ∑_Q=-q^q w(P,Q) [L_(x+P, y+Q)]^2. Furthermore, the modified Laplacian can be given by L_(x,y) = |2L''_(x,y) - L''_(x-1,y) - L''_(x+1,y)| + |2L''_(x,y) - L''_(x,y-1) - L''_(x,y+1)|, where L'' is a linear differential operator that approximates the second derivative at the F_s layer, i.e., L''_F_s_(x,y) = {δ^2 F_s/δ x^2 + δ^2 F_s/δ y^2}. Likewise, L''_C_s(x,y) at the C_s layer can also be obtained. In the proposed OF-GPRN, a pulse-coupled neural network with parameter adaptation is used to fuse the F_s and C_s layers while determining the optimal number of features in each layer <cit.>. Specifically, the network evaluates the edge features of the respective layers, with priority given to those extracted from ℒ_F_s_(x,y) if they are more prominent. If the edge features of ℒ_C_s_(x,y) are more prominent, the pixel features extracted from this layer are added to the fused frame. The fusion of the B_s layers in the RGB and IR frames combines the contrast information from the IR frame with the texture information from the RGB frame. In particular, a direct fusion of the B_s layers in the RGB and IR frames suffers from the way contrast and texture information are composed at each pixel <cit.>. Therefore, a visual saliency map, denoted by 𝒱_p, is constructed based on the intensity values of the pixels; it examines the difference between each pixel and all of its neighbors to generate a saliency value (<ref>). As a result, 𝒱_p preserves the contrast and texture features of the RGB or IR frame, improving the quality of the fusion. In (<ref>), n represents a specific pixel intensity value within the range of 0 to 255 in the frames B_s^(RGB,IR) (denoted as N), and ℐ_n is the number of pixels with an intensity similar to n, ∀ I={i|(x_n_i,y_n_i)=n, 1≤ i≤ N}, in which i is the individual pixel index. The proposed system uses a feature scaling normalization function S(.) to make sure that the frame features fall in the same range, since the different scales in (<ref>) affect many quantitative pixel features. 𝒱_p = ∑_p=x,y^x_n,y_nℐ_n |S(B_s_p^(RGB,IR)) - S(B_s_x_n,y_n^(RGB,IR))|, where x and y are spatial pixel coordinates, and x_n and y_n represent specific pixel coordinates. For the fusion of the B_s layer, we formulate (<ref>) to merge the saliency maps of the RGB and IR frames, which are represented by 𝒱_p^RGB and 𝒱_p^IR, respectively: BASE_fusion = (α + β)/2, where α = 𝒱_p^IR B_s^IR + (1 - 𝒱_p^IR) B_s^RGB and β = 𝒱_p^RGB B_s^RGB + (1 - 𝒱_p^RGB) B_s^IR. Based on the output of the parameter-adaptive pulse-coupled neural network and BASE_fusion, the fusion of the RGB and IR frames can be obtained by applying the inverse multi-level edge preservation filtering. To monitor the UAV's movement, we used the optical flow method BRAFT <cit.>, which operates on the merged frames. The optical flow provides an effective approximation of the actual physical motion projected onto the fused frame, and thus a concise representation of the parts of the frame that are in motion.
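As an illustration of this step, the sketch below computes dense optical flow between two consecutive fused frames and thresholds the flow magnitude to keep only the moving regions. OpenCV's Farnebäck flow is used purely as a readily available stand-in for the learned (RAFT-style) flow adopted in OF-GPRN, and the threshold and synthetic frames are arbitrary placeholders.

```python
import cv2
import numpy as np

def moving_regions(prev_fused, next_fused, mag_thresh=1.0):
    """Mask of moving pixels between two consecutive fused (grayscale) frames.
    Farneback flow stands in for the learned flow used in the paper."""
    # Arguments after None: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_fused, next_fused, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)          # per-pixel flow magnitude
    return (magnitude > mag_thresh).astype(np.uint8)  # 1 where motion (e.g., the UAV)

# Two synthetic frames: a small bright "UAV" blob that shifts by a few pixels.
prev_f = np.zeros((240, 320), dtype=np.uint8)
next_f = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(prev_f, (100, 120), 6, 255, -1)
cv2.circle(next_f, (106, 118), 6, 255, -1)

mask = moving_regions(prev_f, next_f)
print("moving pixels:", int(mask.sum()))
```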
Integrating spatio-temporal information helps with background elimination. Once the videos are processed, the best next move for the mobile UAV is determined, and routine execution initiates each new iteration of the model. §.§.§ Graph Residual Split Attention Network (GRSaN) The construction of graph nodes and edges for input images in the proposed OF-GPRN model depends on using region adjacency graphs. Superpixel segmentation approaches, such as SLIC, Quickshift, and Felzenszwalb, are used to precisely split the frames into regions, which then function as nodes in a system. Subsequently, each of these regions connects together on the basis of their adjacency, which leads to the formation of the edges of the graph. The decision of the superpixel segmentation technique plays a role in determining the precision of a generated graph. These nodes and edges are used as inputs for the GCNs model. GCNs are graph-based architectures based on graph nodes and edges G=(N, E). Instead of using conventional convolutional filters, GCNs use graph convolutional filters in each layer of unordered nodes N with edges E, and aside from that, GCNs are just like CNN's. Stacks of pointwise nonlinearities in GCNs serve as the building blocks of filters, while stability and permutation equivariance of GCN architectures with good performance are attributed to the graph characteristics <cit.>. UAV pixels in the frame are represented as nodes N=[n_1,....,n_k]∈ℝ, with edges E=[e_1,...,e_k] ∈ℝ defining the relationship between the i-th and j-th UAV pixels in the order pair k=(i, j). The vector H=[h_n_1, h_n_2,...,h_n_k]^T ∈ℝ concatenates the feature vector h_n∈ℝ with D-dimensional features of n nodes. Here's how the information from the input layer (G_l) which is ConvOper(G_l, W_l) and added to and changed in the output layers. G_l+out= ConvOper(G_l+out, W_l+out) +τ (G_l+G_l+1+...+G_l+out-1) W=[W_1,W_2,...,W_out] is the learnable weighted parameter of the n layers for node aggregation and updating the graph function to compile neighborhood pixel information <cit.>. Each G_l+r represents the graph residual split attention network layer of the graph residual network <cit.> as shown in Figure <ref>. The output of G_l+r is given in equation <ref>. G_out=∑_r=1^R (G^Conc_l+r)+τ (G_l), r=1,2,3,4, … R} G_l is the input of the graph residual split attention network layer, G_out is the output, and τ is the strided graph convolution or combined graph convolution with max pooling. If the dimensions of G_out and G_l equal, then τ replaced by the identity matrix (𝐈) <cit.>. The GCN main route output is scaled using τ as a linear projection with the previous input. τ is scaled with input using a strided graph convolution or a combined graph convolution linear filter with max pooling. As a result, the number of parameters for ResGCN remains constant rather than increasing, as it does for plain GCN or ResGCN without τ. A simple ResGCN without a projection matrix block can add an input channel to an GCN output; however, as the number of layers increased, performance decreased significantly due to shortcut path accumulation. Furthermore, if the input and output have the same dimension, then 𝐈 can reduce computational complexity and have the same effect. Where G^Conc_l+r represents the concatenation of the cardinality groups denoted by (k) for each set of hyperparameters (R). 
To have a better understanding of each expression, we listed a more in-depth comprehension of each term: * G^Conc_l+r: This notation refers to a specific data structure that results from concatenating multiple groups, where each group corresponds to a different choice of hyperparameters (R). * ∑_r=1^R (G^Conc_l+r): This expression represents the sum of these concatenated groups. Moreover, it is adding together the information contained in all the different groups. The result is a comprehensive dataset represented as G_1 + G_2 + … + G_k, where each G^k∈R^N× D corresponds to a specific combination of nodes and the cardinality of the set k. * G^Conc = H_G(G^1, G^2, …, G^k): We define a function H_G that takes individual groups G^1, G^2, …, G^k as inputs and concatenates their gradients (features). This step ensures that we capture information from all the vertices in each group. G^k=∑_j=1^k F((G_j)+(w^agg_j))+w^update_j where, the updated and aggregated learnable parameters are w^agg_j and w^update_j. Where F(.) is the aggregation function. w^agg_j compiles information from vertices in the same cardinal k's neighborhood, whereas w^update_j applies a non-linear function to the aggregated information to compute new vertex representations in cardinal k. The G_out is processed by global max pooling, followed by batch normalization and ReLU to stabilize the input by softmax and 1x1 convolution, and then transfer for the next layer, as shown in Fig. <ref>. §.§.§ Graph-Pooling Feature Pyramid Network Mapping The last layers of the GCN extract high-level features of the input. We selected the last 5 layers for the graph feature pyramid network to map features between GCN and the pooling feature pyramid (PFP) <cit.>. We update the last 5 layers of the GCN with a feature pyramid network <cit.>, which has the ability of the superpixel hierarchy to make recursively larger groups of pixels from the smaller features of the last layers (high-level features) and a similarity measure <cit.>. The superpixel hierarchy matches the graph layers, and when moving from one layer of the residual attention GCN last layer to the next, the number of nodes decreases by a factor of 4. Contextual and hierarchical edges are used in different ways in the layer that connects the ancestor and descendant layers, which are called superpixels. Hierarchical edges connect semantic gaps, and contextual edges spread information about the context of the different levels in each layer. The node features used are the same for both hierarchical and contextual layers, but the edges are different for both. Of the last 5 layers, the first and last two are contextual, while the middle layers are hierarchical. Moreover, the learning parameters are different for both and are not shareable. The mapping from GCN to PFP is necessary to transfer the features at multiple scales and make it possible to be in line with the PFP input. Every input from the GCN layer is stride 2, which keeps the input feature from vanishing. An upsampling factor of 2 is applied to features with a higher resolution. Each lateral link combines feature maps of the top-down pathways that are the same size in space. Convolutions are performed on the top-down route feature maps to lower the channel dimensions, and the feature maps from both pathways (input: N/4, N/16, N/64, and N/256; + output: N, N/4, N/16, N/64, and N/256) are combined using element-wise addition. Each combined map is given a 3 x 3 convolution with a factor of 2 to get the final forecast for each layer. 
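The top-down and lateral merging just described is, in essence, the standard feature-pyramid scheme; a compact Keras sketch of one possible realization is given below. The channel width, the number of levels, the dummy input resolutions, and the use of 2-D convolutions (rather than graph convolutions over superpixel nodes) are simplifying assumptions made only to keep the example short.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_pyramid(feature_maps, channels=64):
    """feature_maps: list of tensors ordered from highest to lowest resolution,
    each level 2x smaller than the previous one (e.g., N, N/4, N/16, ...)."""
    # 1x1 lateral convolutions reduce every level to a common channel width.
    laterals = [layers.Conv2D(channels, 1, padding="same")(f) for f in feature_maps]

    merged = [laterals[-1]]                            # start from the coarsest level
    for lat in reversed(laterals[:-1]):
        up = layers.UpSampling2D(size=2)(merged[-1])   # top-down upsampling by 2
        merged.append(layers.Add()([lat, up]))         # element-wise addition

    # A 3x3 convolution smooths each merged map before prediction.
    return [layers.Conv2D(channels, 3, padding="same")(m) for m in reversed(merged)]

# Example with three dummy pyramid levels (spatial sizes 64, 32, 16).
inputs = [tf.keras.Input(shape=(64, 64, 32)),
          tf.keras.Input(shape=(32, 32, 64)),
          tf.keras.Input(shape=(16, 16, 128))]
model = tf.keras.Model(inputs, build_pyramid(inputs))
model.summary()
```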
For the UAV's position, generative localization of bounding boxes makes a single box with the highest score. §.§.§ Loss Function In the case of one-stage detection, focal loss is specifically tailored to meet the needs of the user. In the suggested training model, an imbalance between UAVs in the foreground and those in the background could be fixed to put less weight on making accurate predictions. It is common practice to use cross-entropy as a loss function because of its high level of accuracy in comparing the approximation models. Cross Entropy(_p,_t)= -λ1_t log(p), if t=1 -λ1_t log(1-p), otherwise When, t∈{± 1}, p∈ [0,1] In (<ref>), t is the value of the UAV detection target, and p is a probabilistic estimate of that target value based on the probability distribution. Where λ1_t are the balanced parameters for positive and negative examples; however, it cannot discriminate between simple and challenging cases. The down weight approach requires the modulated focal loss factor (1-p_t)^γ for numerical stability. However, when training naively with (<ref>), the classifier is unable to discriminate between the more accurate candidate and the loose counterpart, resulting in an unanticipated learning scenario as shown in Fig. <ref>. Because the candidate boxes with more precise locations are suppressed with non-maximum-suppression procedures, this may have a negative impact on performance. As shown in Fig. <ref>, the consistent cross-entropy loss function is a dynamically scaled cross-entropy loss with the scaling factor determined by the overlap between the current bounding box and the target ground-truth item <cit.>. This scaling factor, intuitively, automatically downweights the contribution of loose samples during training, allowing the model to concentrate on more accurate predictions. The consistent cross-entropy loss function may help train our model to better identify which prediction is the best among numerous clustered choices. So, modulating factors are added to (<ref>) using a consistent cross-entropy loss function <cit.> to accommodate localization quality, and the more precise targets are augmented to reflect it. The (<ref>) updated form is shown in (<ref>). Cross Entropy_(p,t)= [-λ1_t + λ2 (o_k-λ1)z_k] log(p), if t=1 [-λ1_t + λ2 (o_k-λ1)z_k] log(1-p), otherwise When, t∈{± 1}, p∈ [0,1] The z_k:= 1 (if o_k > α, k=1,...,L) represents the candidate box of the targets using IoU overlap for the predicted bounding box, and o_k shows IoU overlapping of the predicted and ground truth bounding boxes. Here k is the location of the UAV in the frames. Frames with IoU overlap more than may use the modifying factor in (<ref>) for more favorable examples, which increases the modifying factor and the loss in proportion to the overlap with ground truth targets. As a result of using cross-entropy, the consistent cross-entropy loss function prioritizes cases with bigger IoU overlaps. The (<ref>) is updated in the focal loss as: Focal Loss(_p,_t)=-α_t (1-p_t)^γ log(p_t) The focal loss down-weighted tuning process is dependent on the γ, and it varies from 0 to 2 to adjust the rate of easy examples. If γ is 0, then the focal loss is equal to the cross-entropy. The UAV detection scenarios increase the γ up to 2 to obtain the best training result. Moreover, during experimentation, we systematically varied γ and observed its impact on the performance of the model. Higher γ led to better detection rates for challenging scenarios, such as low contrast or occlusion cases. 
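A plain TensorFlow version of this focal loss, with α and γ exposed as tunable arguments, is sketched below; the α value is the common default from the focal-loss literature rather than a number taken from the paper, and the IoU-dependent modulating term of the consistent cross-entropy is omitted for brevity.

```python
import tensorflow as tf

def focal_loss(alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t); gamma=0 recovers cross-entropy."""
    def loss_fn(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
        alpha_t = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
        return tf.reduce_mean(-alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t))
    return loss_fn

# Quick check: a confident, correct prediction contributes almost nothing,
# while a hard, misclassified example dominates the loss.
y_true = tf.constant([1.0, 1.0, 0.0])
y_pred = tf.constant([0.95, 0.30, 0.05])
print(float(focal_loss(gamma=2.0)(y_true, y_pred)))
print(float(focal_loss(gamma=0.0)(y_true, y_pred)))  # alpha-weighted cross-entropy
```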
However, excessively high γ values might lead to overemphasis on hard examples, potentially causing instability. § EXPERIMENTS AND PERFORMANCE ANALYSIS §.§ Experimental Training Extensive experiments on the benchmark UAV catch detection dataset were conducted to evaluate the efficacy of OF-GPRN in enhancing the learning performance for UAV detection with day and night vision cameras. The OF-GPRN model is developed in TensorFlow and trained on a workstation with a GeForce RTX 3060. The training procedure used a batch size of 4. Compared to other optimizers, the Adam optimizer is preferred because of its rapid and intuitive convergence on the best solution. The learning rate decays to 10^-4 and 10^-6, with β1 set to 0.9 and β2 set to 0.999. In addition, the model is trained for 45 hours and 200 epochs. To build the region adjacency graph and decrease the input size, the frames are converted to superpixels using the SLIC <cit.>, Quickshift <cit.>, and Felzenszwalb <cit.> algorithms for OF-GPRN training. Progressive focal loss is used to prevent the unanticipated learning scenario and to discriminate between the more accurate candidate and the loose counterpart. The comparative performance of UAV detection across various models is shown in Table <ref>. The effectiveness of detection demonstrates substantial variation among models, with the proposed model obtaining the highest performance. The assessed approaches consist of RGBIR-ResGCN, Fusion-ResGCN, OF-ResGCN, and the proposed OF-GPRN. The RGBIR-ResGCN model obtained a mAP of 35.1% but suffered from a comparatively high loss of 4.253, suggesting that its performance is unstable. Fusion-ResGCN demonstrated an enhanced mAP of 55.5%; however, this improvement comes with a loss of 2.147, primarily as a result of difficulties in maintaining background features, which reduces stability. The optical flow-based model OF-ResGCN achieved an mAP of 69.9%; a loss of 2.042, however, indicates that its performance is still unsatisfactory. In contrast, the proposed OF-GPRN brought significant improvements, with a remarkable mAP of 87.8% and a small loss of 0.026. This model showed higher efficiency than the other models and also maintained consistent accuracy in its predictions. The significant increase in mAP, along with minimal loss, indicates that OF-GPRN has the potential to be a very effective method for the given task of UAV detection. §.§ Datasets Pre-processing In this study, we employ freely accessible anti-UAV capture video datasets <cit.>. There are a total of 320 clips, 160 of which are videos shot in both standard definition (SD) and high definition (HD) (RGB and IR), with different variations such as backgrounds (cloud, building, mountain, and sea), fast movement, out-of-focus frames, and size variations from small to large. The UAVs seen in each video come in a range of sizes, from large to small. We selected UAVs with a size range of 300 mm to 1200 mm to train and validate our proposed model. Moreover, the UAVs cruise at speeds between 50 and 100 miles per hour and occasionally stop altogether. One hundred videos from each stream were used for model development. Of these, 80 videos are used to train the model, while the remaining 20 are used for testing and validation.
There are a wide variety of variables in the background that may be visible in the video clips, including day and night, lighting conditions, cloudy and clear skies, buildings, and varied degrees of occlusion, as shown in Fig. <ref>. These background variations make it difficult for the detection algorithms to identify the movable object due to low contrast, weak edge details, colour similarities, and texture information. The dataset frames are preprocessed through RGB-IR fusion and an optical flow algorithm to make the movable objects visible. §.§ Effect of Optical Fusion The conversion of RGB or IR to optical flow is seen in Fig. <ref>. When compared side by side, all of these frames have distinct levels of performance. A closer inspection reveals obvious artifacts, blurriness, and distinctions in the results of the three columns. When compared to fusion-generated frames, we observe that optical flow fails to inject as many of the bright feature characteristics from the RGB (first column) and IR (second column) frames. The fusion-generated RGB and IR (third column) frames support the optical flow to detect the moveable UAV. Without fusion, the results of optical flow show that the moveable UAV is unidentified due to its high background similarities. Therefore, the spatial properties of the source frames are enhanced in the fusion plus optical flow output frames, which are also free of artifacts, clearer, include more structural details, and have a higher overall visual quality. §.§ Effect of Region Adjacency Graphs Training the GCN model relies on accurate region adjacency graphs. We can see that the pixel segmentation before the optical, as shown in Fig. <ref> is very complicated compared to the after optical flow. The optical flow for background removal significantly reduces the segmentation of pixels for the region adjacency matrix up to 30%. Moreover, if the appropriate pixel segmentation procedure is used, the resulting region adjacency graph would be accurate. We used three different superpixel segmentation techniques (SLIC, Felzenszwalb, and Quickshift) to significantly minimize the size of the nodes and edges in the matrix to train the OF-GPRN, as shown in Fig. <ref>. Each of the algorithms identifies regions with similar visual properties. Each frame's features and their associated graph are generated by the adjacency matrix, which also provides concise information regarding the frame's pixels. From our experience training the OF-GPRN, we can say that the SLIC algorithm is both fast and space-efficient, and it is able to successfully segment in terms of color boundaries without having to remove the background. However, it recorded the pixels in the background, which makes it less accurate. Additionally, OF-GPRN training using the Felzenszwalb algorithm performs less well due to contrast-based training when it comes to loss minimization. While the Quickshift method is used for the adjacency matrix to achieve OF-GPRN-based promising results of loss 0.026, as shown in Fig. <ref>. Quickshift improves in this regard since it uses hierarchical segmentation computation to separate the image into visually distinct parts. The Quickshift algorithm is used for the proposed model due to its high performance. We achieved the optimal loss of all three algorithms with the parameters mentioned in the Table. <ref>. We use the Quickshift algorithm for exceptional performance after applying optical flow, a method that successfully removes the background. 
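For reference, the sketch below segments a frame into Quickshift superpixels with scikit-image and derives the region adjacency (node/edge) structure directly from the label map. The Quickshift parameters shown are illustrative rather than the values tuned for OF-GPRN (those are reported in Table <ref>), and the synthetic frame stands in for a fused or optical-flow frame.

```python
import numpy as np
from skimage.segmentation import quickshift

def region_adjacency(labels):
    """Set of undirected edges between superpixel labels that touch each other."""
    edges = set()
    # Horizontally and vertically adjacent pixels with different labels share an edge.
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        pairs = np.sort(np.stack([a[diff], b[diff]], axis=1), axis=1)
        edges |= set(map(tuple, pairs))
    return edges

# Synthetic RGB frame as a stand-in for a fused / optical-flow frame.
frame = np.random.default_rng(0).random((120, 160, 3))
labels = quickshift(frame, kernel_size=3, max_dist=6, ratio=0.5)

nodes = np.unique(labels)
edges = region_adjacency(labels)
print(f"{len(nodes)} superpixel nodes, {len(edges)} adjacency edges")
```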
The decision to strategically employ the Quickshift algorithm is based on its notable efficiency in comparison to SLIC and Felzenszwalb. The Quickshift method works better than SLIC and Felzenszwalb in terms of loss efficiency and accurate capture of important features in the image, as shown in Fig. <ref>. §.§ Residual Split Attention Network Effect Extensive studies, beyond the results of the Quickshift algorithm, are done to illustrate the efficacy of the Split Attention Network in enhancing the learning performance of GCN with deep architectures. In Fig. <ref>, we show how well, without overfitting, the proposed OF-GPRN (based on the Split Attention Network) performs, reaching a loss of 0.026 with a learning rate of 10^-6, and observe that it performs best compared to RGBIR-ResGCN, Fusion-ResGCN, and OF-ResGCN. The RGBIR-ResGCN has very high instability for a learning rate of 10^-4 and low learning variations during training with a learning rate of 10^-6, but it has a high loss of 4.253. Even Fusion-ResGCN has a very high loss due to background presence in the frame, which leads to a loss of 2.147. Moreover, OF-ResGCN has a lower loss than RGBIR-ResGCN and Fusion-ResGCN; however, it is still unstable, and the loss ends up at 2.042. Fig. <ref> shows the optimal training loss of 0.026 together with the validation loss of 0.331 and the testing loss of 0.35. Moreover, Fig. <ref> presents the results of the qualitative mAP evaluation with learning rates of 10^-4 and 10^-6 and makes a comparison between the OF-GPRN method and RGBIR-ResGCN, Fusion-ResGCN, and OF-ResGCN. The proposed OF-GPRN model offers several benefits when it comes to the detection of UAVs. It is clear to observe that the proposed OF-GPRN has a higher mAP with stability during training for different UAV sizes compared to RGBIR-ResGCN, Fusion-ResGCN, and OF-ResGCN. Moreover, the proposed OF-GPRN achieved a mAP of 87.8%, higher than RGBIR-ResGCN, Fusion-ResGCN, and OF-ResGCN, which only reach 35.1%, 55.5%, and 69.9%, respectively, and are unstable while the model is being trained. The experimental results in Table <ref> clearly show that the mAP of the proposed OF-GPRN is much better when it is built on RGBIR-fused optical flow data. Our proposed OF-GPRN model achieves a highly remarkable mAP of 87.8%. The OF-GPRN model outperforms previous models, such as the Hybrid-DL, which obtained a mAP of 68.1%, and the EfficientDet model, which achieved 67.3% mAP. Importantly, our model shows higher accuracy compared to the DETR model from anti-UAV <cit.>, which attained a mAP of 83.2%. The anti-UAV DETR model, EfficientDet, and Hybrid-DL with the fusion method all have different performance metrics, as shown in Table <ref>. The anti-UAV DETR model has a high precision of 87.22%, whereas EfficientDet has a lower precision of 71.11%; Hybrid-DL lies somewhere in the middle, with a precision of 73.55%. In terms of recall, the anti-UAV DETR model is at 79.61%, followed by Hybrid-DL at 67.14% and EfficientDet at 65.33%. The F1 scores are 68.10% for EfficientDet, 70.12% for Hybrid-DL, and 83.24% for the anti-UAV DETR model. Among the baselines, the anti-UAV DETR model achieves the highest mAP at 83.2%, with Hybrid-DL and EfficientDet following at 68.1% and 67.3%, respectively. In terms of computing efficiency, EfficientDet has the lowest inference time of 102 ms, followed by Hybrid-DL at 107 ms and the anti-UAV DETR model at 140 ms.
An analysis of the training performance shows that our proposed approach attains a very small training loss of 0.026. In comparison, the Hybrid-DL model has a training loss of 1.13, the EfficientDet model 1.20, and the anti-UAV DETR model 0.11. The markedly lower training loss of our model reflects its efficiency and efficacy in learning features from the data, and the results again highlight the importance of using RGB-IR-fused optical flow data for the overall performance of the proposed model compared to other methods. Moreover, the experiment examined the processing time of each component: the fusion method, the Quickshift algorithm, and the Graph Convolutional Network (GCN). The fusion step required 17 milliseconds, the Quickshift algorithm 130 milliseconds, and the GCN roughly 103 milliseconds, for a total processing time of 250 milliseconds. This timing breakdown clarifies the computational cost of each part and its contribution to the overall processing time.
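As a rough way to reproduce such a per-stage timing breakdown, one can wrap each pipeline stage in a timer as sketched below; the stage functions here are dummy placeholders that simply sleep for the reported budgets, so the printed numbers are illustrative rather than measured.

```python
import time

def profile(stage_fn, *args):
    """Run one pipeline stage and return (result, elapsed milliseconds)."""
    t0 = time.perf_counter()
    out = stage_fn(*args)
    return out, 1000.0 * (time.perf_counter() - t0)

# Dummy stand-ins that sleep for the reported per-stage budgets.
fusion     = lambda: time.sleep(0.017)   # RGB-IR fusion, ~17 ms
quickshift = lambda: time.sleep(0.130)   # superpixel segmentation, ~130 ms
gcn        = lambda: time.sleep(0.103)   # graph inference, ~103 ms

total = 0.0
for name, stage in [("fusion", fusion), ("quickshift", quickshift), ("gcn", gcn)]:
    _, ms = profile(stage)
    total += ms
    print(f"{name:>10s}: {ms:6.1f} ms")
print(f"{'total':>10s}: {total:6.1f} ms")   # ~250 ms end to end
```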
The proposed OF-GPRN relies on optical flow-assisted graph pooling and residual networks to enhance UAV detection. One limitation is its sensitivity to changing environmental conditions: abrupt rain, for example, introduces noise into the image background and makes it difficult for the optical flow to isolate the UAV. In future work, we will explore strategies to improve the algorithm's robustness under such varying conditions. Moreover, the effectiveness of the OF-GPRN system is demonstrated on a benchmark UAV catch dataset, so its performance may be affected by UAV shapes, very small sizes, and very fast motion patterns that are not comprehensively covered in the training data; handling such diverse UAV scenarios and data variations remains an open issue. In addition, because the proposed system is evaluated on a specific dataset, its generalization to new data environments may be a concern due to feature variations. The OF-GPRN system uses a multi-stage approach that includes RGB-IR fusion, optical flow processing, and the Graph-Pooling Residual Network. While these stages improve detection rates when run on high-performance ground-based static servers, the system may face computational constraints in real-time applications on onboard devices, so the trade-off between accuracy and computational efficiency is an important consideration. § CONCLUSION In this paper, we propose a new OF-GPRN system for the precise detection of UAVs in both day and night vision under time-varying lighting conditions and high background similarity. The proposed OF-GPRN system incorporates optical fusion techniques and effectively eliminates extraneous backgrounds, resulting in enhanced clarity of the RGB/IR imaging. Moreover, the GRSaN is extended within the OF-GPRN system to stabilize learning, improve feature learning during training, and refine the representation of the UAV. In addition, the OF-GPRN system builds a pixel-to-node representation through the adjacency matrix, yielding an accurate graph that summarises the pixels of the salient frame, and it learns to capture correlations among pixels in the fused images to identify the suspicious UAV. Experimental results show that our proposed OF-GPRN system achieves an impressively low loss of 0.026. Compared with previous ResGCN object detectors, which recorded an mAP of 69.9%, the OF-GPRN system delivers superior performance, reaching an mAP of 87.8% on the demanding RGB-IR-based benchmark UAV catch dataset. Future studies should focus on optimizing the model for onboard real-time platforms such as UAVs. In addition, the optical fusion stage limits the present study to static cameras; we aim to extend the model to moving platforms, such as autonomous vehicles and UAVs. Further research is also required on its application under different environmental conditions, such as rain, and to different types of UAVs, and improving accuracy and efficiency will require testing on larger and more diverse datasets. § ACKNOWLEDGMENT This work was supported by the CISTER Research Unit (UIDP/UIDB/04234/2020) and project ADANET (PTDC/EEICOM/3362/2021), financed by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology). Also, this article is a result of the project NORTE-01-0145-FEDER-000062 (RETINA), supported by Norte Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF).
http://arxiv.org/abs/2407.12611v1
20240717144125
Deep Mutual Learning among Partially Labeled Datasets for Multi-Organ Segmentation
[ "Xiaoyu Liu", "Linhao Qu", "Ziyue Xie", "Yonghong Shi", "Zhijian Song" ]
cs.CV
[ "cs.CV" ]
Deep Mutual Learning among Partially Labeled Datasets for Multi-Organ Segmentation Xiaoyu Liu, Linhao Qu, Ziyue Xie, Yonghong Shi, and Zhijian Song This work was supported by the National Natural Science Foundation of China under Grant 82072021. (Corresponding authors: Yonghong Shi and Zhijian Song) All the authors are with Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai 200032, China. They are also with Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China. (e-mail: {liuxiaoyu21, zyxie22}@m.fudan.edu.cn), {lhqu20, yonghong.shi, zjsong}@fudan.edu.cn). July 22, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT The task of labeling multiple organs for segmentation is a complex and time-consuming process, resulting in a scarcity of comprehensively labeled multi-organ datasets while the emergence of numerous partially labeled datasets. Current methods are inadequate in effectively utilizing the supervised information available from these datasets, thereby impeding the progress in improving the segmentation accuracy. This paper proposes a two-stage multi-organ segmentation method based on mutual learning, aiming to improve multi-organ segmentation performance by complementing information among partially labeled datasets. In the first stage, each partial-organ segmentation model utilizes the non-overlapping organ labels from different datasets and the distinct organ features extracted by different models, introducing additional mutual difference learning to generate higher quality pseudo labels for unlabeled organs. In the second stage, each full-organ segmentation model is supervised by fully labeled datasets with pseudo labels and leverages true labels from other datasets, while dynamically sharing accurate features across different models, introducing additional mutual similarity learning to enhance multi-organ segmentation performance. Extensive experiments were conducted on nine datasets that included the head and neck, chest, abdomen, and pelvis. The results indicate that our method has achieved SOTA performance in segmentation tasks that rely on partial labels, and the ablation studies have thoroughly confirmed the efficacy of the mutual learning mechanism. multi-organ segmentation, partially labeled, mutual learning. § INTRODUCTION Multi-organ segmentation is crucial for various clinical tasks but remains a challenging problem in medical image processing. Although deep learning has advanced multi-organ segmentation models, training them typically requires the annotation of multiple organs as a prerequisite, which is both time-consuming and labor-intensive on a single medical image, such as CT or MRI scans <cit.>. 
As a result, compared to natural image datasets, there is a limited number of public datasets available for training multi-organ segmentation model, making it difficult to meet the substantial data demands of deep learning. Additionally, the difficulty of annotating multiple organs has led many institutions to annotate only specific organs, resulting in numerous datasets with annotations for only some organs. For instance, the LITS dataset <cit.> annotates only the liver and its tumor, while the KITS dataset <cit.> focuses on kidney and kidney tumor. Leveraging these partially labeled datasets to develop models capable of segmenting multiple organs concurrently can reduce the annotation workload, improve segmentation accuracy, and meet urgent clinical needs. A straightforward strategy to use these partially labeled datasets is to train a segmentation model independently for each dataset, and then combine the outputs of these models to obtain the final multi-organ segmentation result. This approach is known as multiple networks and is shown in Fig. <ref> (a). Although simple, it has several obvious shortcomings: first, both the training and inference processes are time-consuming, while also necessitating substantial memory allocation for storing multiple models; second, during the inference stage, a challenge arises when integrating the outputs of individual models due to the voxel prediction conflicts, i.e., the prediction results for the same voxel may be not consistent from different models; and lastly, if datasets are trained independently with segmentation models, the prior information (such as size and position) contained between organs labeled in different datasets will not be effectively utilized, which makes it difficult to achieve optimal segmentation results <cit.>. The prevailing approach is to concurrently train a unified segmentation model using multiple datasets. Existing methods for training such models can be classified into three categories: Pseudo-Labeling: As shown in Fig. <ref> (b), this approach initially trains models on individual datasets to segment specific organs, and then uses these models to generate pseudo labels for corresponding organs on other datasets. These combined labels are then used to train a unified multi-organ segmentation model. Research in this area focuses on enhancing the quality of pseudo labels <cit.>. Channel Adjustment: Illustrated in Fig. <ref> (c) , this method employs a multi-channel output model. Due to the lack of labels for all channels, unlabeled channels are treated as background during loss calculation, which is called Target Adaptive Loss (TAL) <cit.>. Shi et al. <cit.> introduced additional marginal loss from other datasets to improve segmentation accuracy. Liu et al. <cit.> initially trained a model with TAL and then iteratively refined it using self-training with pseudo labels. Conditional Information Guidance: Depicted in Fig. <ref> (d), this method integrates conditional information into the segmentation model, allowing the model to produce organ-specific segmentation result based on this information during inference. This conditional information is typically embedded into the network's final layers, guiding the network's output to correspond to the given condition <cit.>. However, existing methods do not fully integrate labeled organs from each dataset, leading to incomplete supervised information and limiting segmentation accuracy. Some methods also involve complex inference processes and voxel conflicts. 
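As a concrete illustration of the channel-adjustment idea mentioned above (treating the channels of unlabeled organs as background when computing the loss), the sketch below folds the predicted probabilities of un-annotated organ channels into the background channel before a cross-entropy loss. This is a simplified PyTorch sketch of the general TAL concept rather than the exact loss of <cit.>; the tensor shapes, the label-remapping convention, and the function name are our assumptions.

```python
import torch
import torch.nn.functional as F

def target_adaptive_ce(logits, target, labeled_classes):
    """
    Cross-entropy for a partially labeled volume: probabilities of classes that are
    NOT annotated in this dataset are merged into the background channel before the
    loss is computed (a sketch of the channel-adjustment / TAL idea).
    logits: [B, C, H, W]; target: [B, H, W] with ids drawn from labeled_classes or 0.
    """
    probs = F.softmax(logits, dim=1)
    keep = sorted(labeled_classes)                       # annotated organ channels
    drop = [c for c in range(1, probs.shape[1]) if c not in keep]
    bg = probs[:, 0] + probs[:, drop].sum(dim=1)         # fold unlabeled organs into background
    merged = torch.cat([bg.unsqueeze(1), probs[:, keep]], dim=1)
    # remap target ids to their positions in the merged tensor
    remap = {0: 0, **{c: i + 1 for i, c in enumerate(keep)}}
    tgt = target.clone()
    for old, new in remap.items():
        tgt[target == old] = new
    return F.nll_loss(torch.log(merged + 1e-8), tgt)
```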
Additionally, most current methods are based on abdominal datasets, and each dataset only labeled with a single organ. Other parts of the body (e.g., head and neck) also have partially labeled datasets, which may have more than one organ labeled, and the number of organs labeled varies from dataset to dataset, which poses a challenge to the model training and its generalization. We've observed that there are interconnections among partially labeled datasets, and the models trained from each dataset are capable of learning from one another. Therefore, we introduce the concept of mutual learning for partial supervision. Mutual learning is a paradigm where multiple student networks collaborate, sharing knowledge to produce a more robust and adaptable network. <cit.>. Recent studies have shown that multiple students learning together outperform individual learning <cit.>. The application of mutual learning facilitates the exchange of the knowledge between datasets and models, thereby enhancing the segmentation accuracy through collaborative improvement. Our method consists of two stages, with each stage leveraging complementary information across datasets to enhance model performance. In the first stage, each student model is trained under supervision not only from the labels of the current dataset but also from the labels of other datasets and the features extracted by other models, as shown in Fig. <ref> (a). This stage enhances each model's ability to segment the current organ and improves the quality of pseudo labels, resulting in high-quality, fully labeled datasets with pseudo labels. In the second stage, each student model is supervised by the combined labels of the current dataset and the true labels from other datasets, while also being supervised by the correct features dynamically conveyed by other models in the latent space, as shown in Fig. <ref> (b), thus making full use of supervision information to improve the performance of multi-organ segmentation. The framework we proposed is capable of accommodating varying numbers of labeled organs across different datasets, including those contain multiple labeled organs. The main contributions are as the following three aspects: * We introduce a two-stage mutual learning approach for partially labeled multi-organ segmentation. Each stage leverages complementary information across datasets to enhance supervised information, resulting in a model capable of accurately segmenting multiple organs simultaneously. * Our mutual learning approach generalizes across various body regions, including the head and neck, chest, abdomen, and pelvis, and our method is adaptable to scenarios where multiple organs are annotated per dataset. * The feasibility and effectiveness of our mutual learning mechanism are validated through experiments on datasets of multi-body regions, showcasing improved the accuracy and robustness of the segmentation. § RELATED WORK §.§ Multi-organ segmentation Accurate segmentation of multiple organs from the head and neck, chest and abdomen has always been a matter of great concern. In recent years, many effective methods have been proposed aiming to improve the performance of multi-organ segmentation. Some of these methods are from the perspective of network architecture design, such as the transformer <cit.> and the two-stage method <cit.>. 
Some utilise multi-view information <cit.>; and some introduce effective modules, such as the attention module <cit.> and dilated convolution <cit.>, or design new loss functions <cit.>, etc. The advancement of these methods effectively improves the performance of multi-organ segmentation. However, since it is very difficult to obtain a large number of fully labeled datasets, the training of models in the above studies is mostly restricted to a limited number of public datasets for multi-organ segmentation. §.§ Partially labeled segmentation The substantial workload of annotating multiple organs in medical images (e.g., CT or MRI) has resulted in many datasets being only partially labeled, such as LITS <cit.>, KITS <cit.>, and PDDCA <cit.>. To address this, various methods have been developed to train unified multi-organ segmentation models using these partially labeled datasets. Existing methods are categorized into three primary categories: Pseudo-Labeling: These methods mostly depended on generating pseudo labels for unlabeled organs. Existing methods involve using pairs of networks for cooperative training to refine pseudo labels <cit.> or leveraging label information from other datasets to enhance the quality of pseudo labels <cit.>. However, the influence of pseudo labels and incomplete supervisory information make it difficult for multi-organ segmentation models to achieve significant further improvement. Channel Adjustment: These methods adjust the output channels of models to compute specialized TAL, by incorporating marginal losses from other datasets <cit.>. This kind of methods primarily deal with each dataset individually, and when testing, they also need to adjust the channels of the output results in order to obtain the segmentation results of the corresponding organs. Conditional Information Guidance: These methods <cit.> incorporate conditional information to guide segmentation. They sequentially infer organ-specific results, potentially leading to voxel prediction conflicts, and require extensive training and inference time. §.§ Mutual Learning Unlike previous studies, our method employs mutual learning to address the partially labeled multi-organ segmentation problem. Mutual learning involves multiple student networks that guide and learn from each other during training, leading to the development of a more robust and widely applicable network. This approach deviates from traditional distillation method that rely on a teacher-student model, instead, mutual learning is emphasized where students exclusively instruct one another. Zhang et al. <cit.> first proposed the concept of mutual learning, demonstrating that peer teaching results in better performance than isolated supervised learning. Fang et al. C. <cit.> extended this concept to medical image segmentation, employing dual segmentation models to reduce noise in imperfect annotations by providing clean training data to each other. Zhu et al. <cit.> introduced an online mutual learning strategy where a CNN and a ViT collaborate, leveraging their complementary strengths and compensating for their respective limitations. In our study, each model learns new features and patterns from the other during training. This reciprocal learning enriches the supervisory signals and enhances segmentation accuracy, thereby resulting in a more effective model for multi-organ segmentation. 
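To ground the mutual-learning idea referenced above, the sketch below shows the classic two-student formulation of deep mutual learning, in which each student adds a KL-divergence mimicry term toward its peer's predictions on top of its own supervised loss. This illustrates the general paradigm of Zhang et al. <cit.>, not the specific difference and similarity losses proposed in this paper.

```python
import torch
import torch.nn.functional as F

def mutual_learning_losses(logits_a, logits_b, target):
    """Two-student deep mutual learning: supervised CE plus KL mimicry toward the peer."""
    ce_a = F.cross_entropy(logits_a, target)
    ce_b = F.cross_entropy(logits_b, target)
    # each student mimics the other's (detached) prediction distribution
    kl_a = F.kl_div(F.log_softmax(logits_a, dim=1),
                    F.softmax(logits_b.detach(), dim=1), reduction="batchmean")
    kl_b = F.kl_div(F.log_softmax(logits_b, dim=1),
                    F.softmax(logits_a.detach(), dim=1), reduction="batchmean")
    return ce_a + kl_a, ce_b + kl_b   # optimize each student with its own total loss
```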
§ METHOD §.§ Overview Given N datasets {D_i}_i=1^N, the i-th dataset D_i = {I_i, G_i, O^i}, where I_i = {I_ij}_j=1^N_i and G_i = {G_ij}_j=1^N_i, I_ij represents the j-th image in D_i, and G_ij denotes the corresponding Ground Truth (GT). N_i indicates the number of images in the i-th dataset. O^i represents the set of annotated organs in D_i. G_i(O^i) represents the GT of organ O^i in G_i. The union of labeled organs across all datasets is O. Clearly, O^i ⊆ O. For any two datasets D_m and D_n, their annotated organs do not overlap, i.e., O^m ∩ O^n = ∅. The union of all annotated organs across datasets equals the total set of annotated organs, i.e., ⋃_i=1^NO^i = O. Our goal is to train a model F using these datasets. When given an unlabeled input image, F will get segmentation results for all target organs. Fig. <ref> illustrates the proposed method, which involves two stages. The goal of first stage is to obtain multiple Partial-Organ Segmentation (POS) models, where each POS model, denoted as P_i, acts as a student model to segment different organs. Beyond the supervised learning using its own labels, each P_i also engages in additional difference mutual learning, which includes both Prediction Difference (PD) and Feature Difference (FD). The trained models then generate pseudo labels for other datasets, resulting in combined labeled datasets. The goal of second stage is to train multiple Full-Organ Segmentation (FOS) models using the fully labeled datasets with pseudo labels. Each FOS model, denoted as F_i, acts as a student model to segment all target organs. Similarly, additional Prediction Similarity (PS) and Dynamic Feature Similarity (DFS) mutual learning are introduced during training. Detailed information are as follows. §.§ Difference Mutual Learning We trained N POS models {P_i}_i=1^N on N partially labeled datasets {D_i}_i=1^N, where each P_i model is considered a student model. In addition to employing its own labels for supervised learning, each P_i participates in addtional mutual learning processes, encompassing prediction and feature difference learning: §.§.§ Prediction Difference Learning As illustrated in Fig. <ref>, taking dataset D_i as an example, it contains I_i (Input), G_i (GT) and O^i (labeled organ set). The main segmentation loss L_i^1 for P_i is calculated between the network's predicted results pre_i_i^1 and G_i, as shown in Equation (2). However, when other datasets, such as D_j and D_k, are input into P_i, predictions for organ O^i are generated. Since the labels of organs are mutually exclusive, the predictions for organ O^i should not overlap with the GT in D_j and D_k. Hence, we propose a Prediction Difference (PD) Loss L_i^1_l, where a larger loss indicates that the predictions of P_i on other datasets do not overlap with the annotated organs, implying better segmentation performance, as shown in Equation (3). §.§.§ Feature Difference Learning Simultaneously, to further distinguish the segmentation capabilities of different models, we introduced a Feature Difference (FD) Loss. Specifically, when the image from D_i is input into different student models P_i, P_j, and P_k, the highest-level semantic features extracted by the encoder are f_i_i^1, f_j_i^1, and f_k_i^1. The greater the difference between f_i_i^1 and f_j_i^1 and f_k_i^1, the larger the difference in features extracted by different models. Therefore, based on L_i^1 and L_i^1_l, we incorporate a feature difference mutual learning loss L_i^1_f, as shown in Equation (4). 
The introduction of difference learning not only enhances the segmentation accuracy of P_i but also enables P_i to perceive the presence of unannotated organs, thereby improving the quality of the pseudo labels generated for these organs on other datasets. This scheme is extended to all training datasets, with the PD loss and the FD loss calculated between every pair of datasets. The specific loss functions are as follows:
L^1 = ∑_{i=1}^{N} ( L_i^1 - λ_l L_i^1_l - λ_f L_i^1_f )    (1)
L_i^1 = L_D_i( pre_i_i^1(O^i), G_i(O^i) )    (2)
L_i^1_l = (1/(N-1)) ∑_{j ≠ i} L_D_j( pre_i_j^1(O^i), G_j(O^j) )    (3)
L_i^1_f = (1/(N-1)) ∑_{j ≠ i} cos( f_i_i^1, f_j_i^1 )    (4)
Here, L_i^1 is the segmentation loss, L_i^1_l is the PD loss, and L_i^1_f is the FD loss; pre_i_i^1 and pre_i_j^1 denote the predictions of P_i applied to D_i and D_j, respectively, G_i and G_j are the labels of D_i and D_j, and f_i_i^1 and f_j_i^1 are the features extracted by P_i and P_j on dataset D_i. The parameters λ_l and λ_f are the hyper-parameters of the first stage.
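A compact PyTorch sketch of this first-stage objective is given below, with a soft Dice loss standing in for the segmentation loss L_D and the sign conventions following Eq. (1) as written. The λ values, tensor layout, and the nested-list bookkeeping of cross-dataset predictions and features are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, gt, eps=1e-6):
    """Soft Dice loss over a binary organ mask (larger value = less overlap)."""
    inter = (pred * gt).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def stage1_loss(preds_own, gts_own, preds_cross, gts_cross, feats, lam_l=0.1, lam_f=0.1):
    """
    Sketch of Eq. (1): preds_own[i]/gts_own[i] are P_i's prediction and GT on D_i;
    preds_cross[i][j]/gts_cross[j] are P_i's O^i-prediction on D_j and D_j's GT;
    feats[i][j] is the bottleneck feature of P_i on D_j. The λ values are placeholders.
    """
    N, total = len(preds_own), 0.0
    for i in range(N):
        seg = dice_loss(preds_own[i], gts_own[i])                          # Eq. (2)
        pd = sum(dice_loss(preds_cross[i][j], gts_cross[j])
                 for j in range(N) if j != i) / (N - 1)                    # Eq. (3)
        fd = sum(F.cosine_similarity(feats[i][i].flatten(),
                                     feats[j][i].flatten(), dim=0)
                 for j in range(N) if j != i) / (N - 1)                    # Eq. (4)
        total = total + seg - lam_l * pd - lam_f * fd                      # signs as in Eq. (1)
    return total
```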
§.§ Generating pseudo labels After the first-stage training on {D_i}_i=1^N is completed, each model generates pseudo labels on the other datasets, resulting in fully labeled datasets containing pseudo labels. If a generated pseudo label overlaps with the true labels of the current dataset, the true labels are prioritized. Once the fully annotated datasets are obtained, the second stage of training begins. §.§ Similarity Mutual Learning In the second stage, we obtain the fully annotated datasets with pseudo labels, denoted as {D̅_i}_i=1^N, where the i-th dataset is D̅_i = {I_i, G̅_i, O̅^i}. The labels G̅_i include both the true labels G_i and the pseudo labels Ĝ^i generated by the other models, i.e., G̅_i = {G_i, Ĝ^i}. The corresponding organ sets are O^i and Ô^i, i.e., O̅^i = {O^i, Ô^i}, with Ô^i = ⋃_{j ≠ i} O^j. Despite the efforts to enhance the quality of pseudo labels in the first stage, inaccurate pseudo labels still negatively impact model training. To fully exploit the supervision information provided by the label characteristics of each dataset, we introduce similarity learning among the multiple FOS models {F_i}_i=1^N. §.§.§ Prediction Similarity Learning Specifically, taking dataset D̅_i as an example, the segmentation loss L_i^2 of model F_i is computed from the network output pre_i_i^2 and the labels G̅_i. The presence of pseudo labels provides additional supervision but can also misguide the model. We note that dataset D̅_j contains the true labels for organs O^j; when F_i is applied to D̅_j, it yields predictions for organs O^j, and the loss computed between these predictions and the true labels of O^j can enhance the performance of F_i in segmenting organs O^j. This loss is termed the Prediction Similarity (PS) loss, and its main idea is to use the true labels of other datasets for supervision. §.§.§ Dynamic Feature Similarity Learning In addition to the PS loss, we introduce a Feature Similarity (FS) loss similar to that of the first stage. However, unlike the first stage, each F_i can now segment all organs, and the highest-level semantic features f_i_i^2 extracted by each student model therefore include features for all organs. Directly computing a mutual learning loss on them could introduce inaccuracies, as the features come from models trained partly on pseudo labels. Given that F_i and F_j have different segmentation capabilities for different organs, we dynamically transfer the correct features between the two student models in the latent space so that they benefit each other. However, determining the direction of knowledge transfer during training is a challenging problem. Inspired by the mutual learning between CNN and Transformer <cit.>, we manage the direction of knowledge transfer by combining prediction results with true labels, as follows. Given the features f_i_i^2 extracted by F_i from I_i and f_j_i^2 extracted by F_j from I_i, we first compute the cosine similarity S_(i,j) = cos(f_i_i^2, f_j_i^2). Then, we quantify the reliability of the knowledge of the two students using the cross-entropy loss between their predictions and the true labels. Specifically, we use M_(i,j) ∈ {0, 1} to represent the direction of feature transfer. We calculate the predictions pre_i_i^2 of F_i on I_i and pre_j_i^2 of F_j on I_i, and then compute their cross-entropy losses with the true labels G_i to obtain C_i_i and C_i_j. If C_i_i is larger than C_i_j, F_j is more accurate than F_i, so M_(i,j) = 0, meaning the feature is transferred from F_j to F_i; otherwise, M_(i,j) = 1. Through this approach, F_i and F_j exchange reliable knowledge, enabling the correct transfer of features; the resulting term is called the Dynamic Feature Similarity (DFS) loss. The similarity learning allows each model to fully utilize the supervisory information, including true labels, pseudo labels, and correct features. This method is generalized to all training datasets, with the similarity losses computed between any two datasets. The specific loss functions are as follows:
L^2 = ∑_{i=1}^{N} ( L_i^2 + β_l L_i^2_l + β_f L_i^2_f )    (5)
L_i^2 = L_D̅_i( pre_i_i^2(O̅^i), G̅_i(O̅^i) )    (6)
L_i^2_l = (1/(N-1)) ∑_{j ≠ i} L_D̅_j( pre_i_j^2(O^j), G̅_j(O^j) )    (7)
L_i^2_f = (1/(N-1)) ∑_{j ≠ i} (1 - M_(i,j)) S_(i,j)    (8)
In this context, L_i^2 is the primary segmentation loss, L_i^2_l is the Prediction Similarity loss, and L_i^2_f is the dynamic feature mutual learning loss; pre_i_i^2 denotes the prediction of F_i applied to D̅_i, and β_l and β_f are the hyper-parameters of the second stage.
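The direction-selection step of the DFS loss can be sketched as follows. This is a simplified per-image PyTorch illustration in which the cross-entropy of each student against the true labels decides the flag M_(i,j) of Eq. (8); the tensor shapes and the function name are assumptions.

```python
import torch
import torch.nn.functional as F

def dfs_term(feat_i, feat_j, pred_i, pred_j, gt):
    """
    feat_*: bottleneck features of F_i and F_j on the same image I_i;
    pred_*: [C, H, W] logits; gt: [H, W] integer labels (torch.long).
    Returns the (1 - M_(i,j)) * S_(i,j) contribution of this pair to Eq. (8).
    """
    s = F.cosine_similarity(feat_i.flatten(), feat_j.flatten(), dim=0)   # S_(i,j)
    c_i = F.cross_entropy(pred_i.unsqueeze(0), gt.unsqueeze(0))          # C_i_i: F_i vs true labels
    c_j = F.cross_entropy(pred_j.unsqueeze(0), gt.unsqueeze(0))          # C_i_j: F_j vs true labels
    m = 0.0 if c_i > c_j else 1.0                                        # M = 0: transfer F_j -> F_i
    return (1.0 - m) * s
```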
§.§ Inference Stage During the inference stage, the multi-organ segmentation model F_i that performed best in the second stage is used as the final model F for inference. § EXPERIMENTS §.§ Dataset In this experiment, we established four tasks using nine public datasets, covering the head and neck, chest, abdomen, and pelvis. The specific datasets used for each region are as follows: Head and Neck: We used the PDDCA <cit.> and StructSeg (https://structseg2019.grand-challenge.org/Dataset/) datasets. PDDCA contains 48 cases with 9 labeled organs; we selected the brainstem, left optic nerve, and right optic nerve. StructSeg includes 60 cases with 22 labeled organs; we selected the chiasm, left parotid gland, right parotid gland, and mandible. Chest: We used the SegThor <cit.> and StructSeg datasets. SegThor has 40 cases with 4 labeled organs (heart, aorta, trachea, and esophagus); we selected the heart and trachea. StructSeg includes 60 cases with 6 labeled organs (left lung, right lung, spinal cord, esophagus, heart, and trachea); we selected the left lung, right lung, and esophagus. Abdomen: We used the LITS <cit.>, KITS <cit.>, and PANCREAS <cit.> datasets. LITS contains 131 cases, KITS has 210 cases, and PANCREAS includes 82 cases. These datasets are labeled with the liver, kidney, and pancreas, respectively, together with the corresponding tumors; only the organ labels were used for training. Pelvis: We used the Word <cit.> and CT-ORG <cit.> datasets. Word contains 150 cases with 16 labeled organs; we selected the rectum, left femur, and right femur. CT-ORG includes 140 cases with 4 labeled organs (lungs, liver, kidneys, and bladder); we selected the bladder. Since the head and neck datasets include labels for all selected organs, we utilized these datasets to analyze the quality of the generated pseudo labels and for feature visualization. §.§ Experiment Setup §.§.§ Implementation Details In pre-processing, we divided all 3D CT images into 2D slices and adjusted the intensity of the CT scans to filter out irrelevant regions. The backbone of all our models is a 2D Res U-Net. We used the stochastic gradient descent (SGD) optimizer with Nesterov momentum (µ = 0.999); the initial learning rate was set to 0.001 in both the first and second stages and decayed as training proceeded. The Dice loss was used as the segmentation loss. All experiments were performed on an NVIDIA RTX 4090. §.§.§ Evaluation Metrics The Dice Similarity Coefficient (DSC) and the Average Symmetric Surface Distance (ASSD) were used to evaluate the segmentation results. DSC measures the overlap between the prediction and the GT, and ASSD evaluates the quality of the segmented boundaries by averaging the surface distances between the predicted and true boundaries. It is important to note that the validation sets are also partially labeled, so the average metrics for each organ were calculated only over the datasets in which that organ is labeled. §.§ Comparison With State-of-the-Art Methods We compared our method with several existing methods: (1) Multi-Net: separate segmentation models for each dataset; (2) TAL <cit.>: channel adjustment; (3) ME <cit.>: marginal loss with channel adjustment; (4) DoDNet <cit.>: a conditionally guided approach; (5) the CLIP-driven method <cit.>: uses an "a CT of [organ]" prompt instead of DoDNet's one-hot encoding; (6) Co-training <cit.>: a two-stage pseudo-label-based approach. To ensure fairness, we used the same backbone architecture and training strategy for all methods, as detailed in Implementation Details. Tables <ref>, <ref>, <ref> and <ref> present the segmentation results for the head and neck, chest, abdomen, and pelvis, respectively, and Figs. <ref>, <ref>, <ref> and <ref> provide visualizations of these methods. The following conclusions can be drawn from these results. Unified segmentation models generally outperform Multi-Net, as indicated by the combined DSC and ASSD metrics. Among the channel adjustment methods, ME outperforms TAL by leveraging the non-overlapping organ annotations across datasets. The conditionally guided DoDNet achieves sub-optimal results in the chest and abdomen but performs poorly in the head and neck and pelvis, particularly for the chiasm and parotid glands; it also struggles to distinguish between symmetrical structures (e.g., parotid glands and humerus), as noted in COSST <cit.>. The CLIP-driven method performs poorly across all regions, especially for less frequent organs and structures like the chiasm. The Co-training method, based on two-stage pseudo-labeling, achieves competitive results, particularly in the pelvis. Overall, our method outperforms the others across all regions, especially for small organs like the chiasm and elongated organs like the esophagus.
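For reference, the DSC values used throughout these comparisons are standard per-organ Dice overlaps; a minimal NumPy version is sketched below (ASSD additionally requires surface extraction and distance transforms, which are omitted here). The integer organ-id convention is an assumption.

```python
import numpy as np

def dice_coefficient(pred, gt, organ_id):
    """DSC of one organ label between integer-valued prediction and GT volumes."""
    p = (pred == organ_id)
    g = (gt == organ_id)
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom

# Per-organ averages are taken only over the datasets in which that organ is labeled,
# mirroring the partially labeled validation protocol described above.
```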
Visually, our method aligns more closely with GT, avoiding the segmentation errors of channel adjustment and the issues with symmetric structures in conditional guidance methods. §.§ Ablation Studies §.§.§ Effectiveness of Difference Mutual Learning The difference learning introduced in the first stage can enhance the model's ability to segment the current organ, improve the distinctiveness of different student models, and enhance the quality of the generated pseudo labels. To evaluate its effectiveness, we conducted extensive ablation studies and analyzed the following aspects: Metric. Table  <ref> compares mean DSC across models. After introducing the label-level difference mutual learning loss (PD), the DSC for all body parts were improved, especially with the DSC of chest increasing to 90.05 (an increase of 1.06). The addition of feature-level difference loss (FD) further improved the average DSC to 78.62 (head and neck), 90.07 (chest), 90.43 (abdomen), and 89.37 (pelvis), demonstrating that incorporating information from other datasets can enhance the segmentation performance of the current model. Pseudo-Label Analysis. To verify that the introduction of Prediction-level Difference (PD) and Feature-level Difference (FD) can produce higher-quality pseudo labels, we took the head and neck PDDCA and StructSeg datasets as examples. The models trained on these datasets are referred to as P_1 and P_2 (partial-organ segmentation model), respectively. P_1 generated pseudo labels for the brainstem and left and right optic nerves on the StructSeg dataset, while P_2 generated pseudo labels for the chiasm, left and right parotid glands, and the mandible on the PDDCA dataset, and then compared these with the true labels of the two datasets. The DSC between the pseudo labels and the true labels showed that, without the introduction of difference learning, the average DSC for the seven organs was 56.67 (53.95 for PDDCA, 58.70 for StructSeg). Introducing PD loss increased the average DSC to 57.64, with a particularly significant improvement in the PDDCA dataset, rising to 57.17 (an increase of 3.22). Further introducing feature difference loss (PD + FD) improved the average DSC to 58.44, with PDDCA rising to 57.87 and StructSeg to 58.88. Fig.  <ref> also shows that the pseudo labels generated with label and feature difference losses more closely resemble the true labels, especially for organs such as the brainstem and mandible, indicating that the introduced difference learning enable the model to perceive the presence of other organs, thereby generating higher quality pseudo labels on other datasets. Feature Visualization. To verify that the difference learning can make the features of different organs extracted by different models more distinguishable, we used t-SNE to visualize the high-dimensional features extracted by different models on the same dataset. As shown in Fig. <ref>, without difference learning (Fig. <ref> (a)), the features of different organs significantly overlap, which reduces the segmentation performance of the model and the quality of the pseudo labels. Introducing PD loss (Fig. <ref> (b)) provides some distinguishability among the features of different organs, but overlap still persists. After adding FD loss (Fig. 
<ref> (c)), the features of different organs are clearly separated, leading to higher precision in segmenting different organs by different models, and also improving the quality of pseudo labels generated on other datasets, significantly reducing the occurrence of overlap with labels from other organs. §.§.§ Effectiveness of Similarity Mutual Learning In the second stage, we conducted an ablation study to verify the effectiveness of the proposed similarity learning, as shown in Table  <ref>. The baseline model was trained under the supervision of combined labels, achieving an average DSC of 77.34 (head and neck), 88.72 (chest), 88.63 (abdomen), and 90.30 (pelvis). The introduction of LS improved the average DSC, especially for the abdomen, increasing by 1.37. Further addition of the FS (especially DFS) brought the average DSC to 79.29 (head and neck), 90.94 (chest), 90.74 (abdomen), and 90.99 (pelvis). These results confirm that similarity learning can fully utilize the true labels of other datasets, increase supervisory information, and enhance the performance of multi-organ segmentation. §.§.§ Effectiveness of DFS We also evaluated dynamic feature similarity mutual learning in the second stage (Table  <ref>). Comparing with static feature similarity mutual learning, our proposed DFS outperformed it across all regions: it improved mean DSC by 1.37 (head and neck), 1.1 (chest), 0.53 (abdomen), and 0.29 (pelvis). This demonstrates that DFS effectively transfers correct knowledge, enhancing model supervision and performance. § DISCUSSION In this paper, we have proposed a two-stage mutual learning approach to utilize partially labeled datasets. The mutual learning between models for segmenting different organs in the first stage not only improves each model's ability to segment the labeled organs, but also enhances its perception of unlabeled organs to generate higher quality pseudo labels. The second stage is to train models to learn from each other with fully labeled datasets containing pseudo labels. The supervised information includes the true labels of different datasets as well as the pseudo labels generated after the first stage, while the features extracted by different models can also be dynamically transferred to each other to achieve mutual enhancement between models, thus improving the performance of multi-organ segmentation models. The effectiveness of our method has been demonstrated through the experiments on diverse datasets encompassing the head and neck, chest, abdomen, and pelvis, which has consistently achieved superior performance in each of these regions, surpassing the state-of-the-art methods (see Tables  <ref>,  <ref>,  <ref> and  <ref>). Additionally, visual results show that our method's segmentation results closely matches the ground truths (see Figs. <ref>,  <ref>,  <ref> and  <ref>). According to the results obtained by different methods, incorporating organ-specific priors, as evidenced in TAL <cit.> and ME <cit.>, and employing pseudo-labelling through Co-training <cit.>, effectively enriches the supervisory signals, thereby enhancing segmentation outcomes. Conditional information-guided methods excel with specific organs but struggle with smaller structures, and they are unable to differentiate between symmetric structures, such as the left and right parotid glands and the left and right humers, as shown in the fifth column of Figs.  <ref> and  <ref>. 
Although CLIP-driven method has achieved significant success in the segmentation of abdominal organs<cit.>, it relies on large datasets for training, and most of the images pre-trained by CLIP are natural images. Therefore, further exploration is needed to adapt this method to medical imaging. In medical images, organ sizes and locations are relatively fixed, serving as valuable prior information for multi-organ segmentation tasks. Despite variations in labeled organs across different datasets, the size and location information is crucial for multi-organ segmentation. Previous methods leveraged this information to regulate predictions of unlabeled organs or used organ size and location as priors to improve performance over independent training<cit.>. However, these methods often overlook the richer feature-level information extracted by different models. The first stage of our model integrates both label and feature-level mutual difference learning, enhancing segmentation accuracy for labeled organs and improving the reliability of pseudo labels for unlabeled organs in other datasets. After generating pseudo labels in the first stage, each dataset contains true labels and pseudo labels. pseudo labels can enhance supervisory information, and previous work <cit.> has shown that training with pseudo labels can yield results comparable to or better than independent training. However, the presence of pseudo labels makes it difficult to further improve segmentation accuracy. Previous methods include co-training networks to update pseudo labels <cit.>, the introduction of organ priors <cit.>, and pseudo-label filtering mechanisms <cit.>. Our method differs by fully integrating label and feature information across datasets. At the label level, we use true labels from other datasets to assist in training; at the feature level, we introduce a dynamic feature mutual learning mechanism that allows models to exchange accurate feature information. As a result, compared to previous methods, our method supervises with more information, including true and pseudo labels as well as correct features, thus achieving superior performance. The proposed mutual learning strategy exhibits significant potential in multi-organ segmentation across diverse anatomical regions, providing novel insights for the tasks of medical image processing, such as imaging diagnosis and classification. Our method still has limitations. First, training several models concurrently is a demanding task. In the future, more straightforward and efficient training methods will be designed. Second, The concept of mutual learning has inspired us to believe that incorporating datasets from multiple anatomical regions in training can potentially improve the accuracy even further. The future work will focus on leveraging abdominal parameters to optimize the segmentation of the organs in the head and neck. Lastly, the paucity of public datasets for regions such as the head and neck restricts our method's scope, indicating a need for expanded research. § CONCLUSION In this study, we propose a two-stage multi-organ segmentation method based on the idea of mutual learning, which can maximise the use of label information in each dataset and prompt beneficial information exchange between different student models. By performing difference mutual learning as well as similarity mutual learning for multiple models in two stages respectively, the optimal model that can segment all organs at once without additional post-processing steps is obtained. 
The experimental results show that our method outperforms previous approaches on nine publicly available datasets containing the head and neck, chest, abdomen and pelvis. In addition, the ablation study also validates the effectiveness of each module proposed in the study.
http://arxiv.org/abs/2407.12120v1
20240716191020
Optimizing Design and Control of Running Robots Abstracted as Torque Driven Spring Loaded Inverted Pendulum (TD-SLIP)
[ "Reed Truax", "Feng Liu", "Souma Chowdhury", "Ryan St. Pierre" ]
cs.RO
[ "cs.RO" ]
Optimizing Design and Control of Running Robots Abstracted as Torque Driven Spring Loaded Inverted Pendulum (TD-SLIP) Reed Truax^1^† ^1 Ph.D. Student, Department of Mechanical and Aerospace Engineering University at Buffalo Buffalo, New York, 14260 reedtrua@buffalo.edu Feng Liu^1^† ^† Joint First Author University at Buffalo Buffalo, New York, 14260 fliu23@buffalo.edu Souma Chowdhury^2 ^2 Associate Professor, Mechanical and Aerospace Engineering; Adjunct Associate Professor, Computer Science and Engineering University at Buffalo Buffalo, New York, 14260 soumacho@buffalo.edu Ryan St. Pierre^3^* ^3 Assistant Professor, Mechanical and Aerospace Engineering; Adjunct Assistant Professor, Computer Science and Engineering ^* Corresponding Author, ryans@buffalo.edu This work is accepted at the IDETC/CIE2024 Forum. Copyright 2024 ASME. Personal use of this material is permitted. Permission from ASME must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works University at Buffalo Buffalo, New York, 14260 ryans@buffalo.edu Received date; accepted date =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== specialfooter § ABSTRACT Legged locomotion shows promise for running in complex, unstructured environments. Designing such legged robots requires considering heterogeneous, multi-domain constraints and variables, from mechanical hardware and geometry choices to controller profiles. However, very few formal or systematic (as opposed to ad hoc) design formulations and frameworks exist to identify feasible and robust running platforms, especially at the small (sub 500 g) scale. This critical gap in running legged robot design is addressed here by abstracting the motion of legged robots through a torque-driven spring-loaded inverted pendulum (TD-SLIP) model, and deriving constraints that result in stable cyclic forward locomotion in the presence of system noise. Synthetic noise is added to the initial state in candidate design evaluation to simulate accumulated errors in an open-loop control. The design space was defined in terms of morphological parameters, such as the leg properties and system mass, actuator selection, and an open loop voltage profile. 
These attributes were optimized with a well-known particle swarm optimization solver that can handle mixed-discrete variables. Two separate case studies minimized the difference in touchdown angle from stride to stride and the actuation energy, respectively. Both cases resulted in legged robot designs with relatively repeatable and stable dynamics, while presenting distinct geometry and controller profile choices. Co-design, legged locomotion, spring-loaded inverted pendulum, MDPSO § INTRODUCTION Legged robots use their legs and actuators to push against the ground, accelerating and decelerating their bodies to run over terrain. While utilizing wheels, instead of legs, can be easier for design and control, legged robots have the advantage of being able to traverse complex terrain and jump over obstacles, compared to a similarly sized wheeled robot. Although legs offer the ability to dynamically shift contact and traverse complex environments, the design and control of legged robots can be challenging <cit.>. These challenges compound as the scale of the system decreases, placing constraints on sensing, computation, and actuation options <cit.>. Addressing these challenges requires careful joint consideration of the morphology (geometry and component choices) and the behavior (controls) of autonomous legged systems. A critical consideration therein is the requirement to find optimized parameters to return stable and cyclic gaits <cit.>. One approach is to mimic biological runners by creating robots that are dynamically similar. For example, the relative stiffness, a non-dimensionalized stiffness term in legged locomotion models, must be designed to be between 7 and 30 <cit.> for legged runners. The relative stiffness can be used to inform design decisions by specifying relationships between the system's mass, leg stiffness, and leg length. In addition to designing the physical robot parameters, a motor must be selected that can satisfy the speed and torque requirements necessary to maintain a stable gait. Once all mechanical properties are selected, a controller must be designed to ensure stable and cyclic gaits. This controller must be designed to accommodate the stride frequency of the system, which scales in proportion to the size and spring-mass ratio of the robot <cit.>. These design and control challenges in robotics often place cyclic constraints on one another through choices in components. For example, requiring larger motors to generate larger torques requires larger batteries to supply that power. In turn, this requires more torque, and thusly a larger motor. These cyclic constraints require concurrent design frameworks that consider all component and control choices, as well as coupled constraints <cit.>. Co-design has been successfully used in a range of legged robotic applications across a number of morphologies such as: single bipedal walkers, legged hoppers, and multi-legged robots. The design variables used in these works vary, with some just considering mass distribution, leg properties, or actuator selection while others look at the entire system design to accomplish specific tasks. In the area of bipedal walkers, <cit.> examines mass distribution across a number of body morphologies in simulation, and <cit.> expands on this by using co-design to select actuators in addition to mass distribution. Fadini et al. used co-design to select actuators and gearing while minimizing the energy of a hopper <cit.>. Yesilevskiy et al. 
expanded the design parameters by considering mass distribution, actuator type, and actuation placement while minimizing the cost of transport for a hopping robot <cit.>. In <cit.>, parameters and an open loop controller for the monopod hopping robot Skippy were selected while considering motor dynamics which would maximize jump height and travel distance. Diguarti et al. used co-design to select leg mass and linkage lengths along with a controller that would achieve specific gait types in StarlETH <cit.>. Reference <cit.> had similar design parameters as in <cit.> but formulated in a general sense for all four-legged robots. Most of the above mentioned works consider legs with multiple joints. While these have the benefit of adjustable leg stiffness and allow for the center of mass of the robot to be controlled with six degrees of freedom, they add additional control and design complications that are not realizable at small scales (10-500). This observation leads to the guiding question of the work presented in this paper – i.e., how optimization based co-design can be leveraged to help provide design guidance at smaller scales. However, while optimization can uniquely help in exploring design choices and trade-offs well beyond what is conceivable even by domain experts, it gives rise to a few challenges when designing complex or novel robotic systems in abstract forms. Expressing the desired capabilities of a novel bio-inspired robotic system (such as the running robot considered here) in a quantitative form amenable to optimization is far from trivial. Firstly, this calls for an understanding of what set of constraint functions are needed to describe the feasible behavior of the system at the conceptual abstracted stage. Secondly, it calls for the imposition of bounds on components (e.g., actuators) and geometric choices that make the system practically realizable. Third, it demands the identification of control architectures (inner loop computations) and the optimization method (outer loop search) that enable a computationally efficient co-design process; not to mention, the outer loop search is likely to present a mixed-integer non-linear programming or MINLP problem (where component choices/features are discrete and geometric choices are continuous). This paper makes the following specific contributions to address the above stated challenges and present a computationally efficient framework for morphology/control concurrent design of a small-scale (sub 500 g) running robot that has a stable gait, and is energy efficient: 1) We develop a simulation of the running legged concept, by combining a torque driven spring loaded inverted pendulum model with a leg stiffness estimation model and an open-loop control system (for the abstracted DC motor) that switches between the flight and stances phases of the running motion. 2) We formulate a novel set of constraints to collectively capture the symmetry of the stance phase, repeatability of the periodic forward motion (w/o having to expensively compute many steps per candidate design), adherence to the assumed spring-loaded inverted pendulum abstraction of the system and comparability with biological running systems. 3) We adopt a mixed-discrete Particle Swarm Optimization approach <cit.>, a well-known MINLP solver, to present and analyze (observably distinct) optimized design trade-offs that respectively minimize average energy consumption and difference in the touchdown angle (expresses motion repeatability). 
§ MODEL FORMULATION The dynamics of legged locomotion can be abstracted to a minimal model of a point mass atop a spring leg. This model, the spring-loaded inverted pendulum (SLIP) model, has been useful for describing the running dynamics of organisms across a range of mass scales, from gram-scale cockroaches to humans (∼70 kg) <cit.>. Despite its simplicity, the SLIP model has been used successfully to describe the gaits of biological systems <cit.> and to design robots of varying size and morphology <cit.>. Here, we include both actuation and dissipation in the SLIP model, formulated as a torque-driven damped SLIP (TD-SLIP), similar to <cit.>. §.§ TD-SLIP Model The TD-SLIP model, like the SLIP model, is a cyclic hybrid dynamical model with two phases: a stance phase, when the leg is in contact with the ground, and a flight phase, when it is not. The events of liftoff and touchdown mark the transitions between these two phases in the hybrid dynamics. The model switches from stance to flight at liftoff, when the leg reaches its natural length and the vertical acceleration is greater than that due to gravity; the system returns to stance at touchdown, when the leg makes contact with the ground. Using the convention defined in Figure <ref>, the equations of motion for the stance phase are written in polar coordinates as:
θ̈ = -2ζ̇θ̇/ζ - g cos(θ)/ζ + τ/(mζ^2)
ζ̈ = ζθ̇^2 - g sin(θ) - (k_0/m)(ζ - l_0) - (b_l/m)ζ̇
where θ and ζ represent the leg angle and leg length. Table <ref> lists the system parameters associated with the TD-SLIP model. It should be noted that in this formulation any dissipation from contacting the ground is neglected, and a point contact is assumed. The flight phase is treated as ballistic motion, with the center of mass moving without energy loss, and can be expressed as ẍ = 0 and ÿ = -g, where x and y represent the horizontal and vertical body positions in Cartesian coordinates. In the model, the input torque is provided by a DC motor. While motors are often represented as first-order systems with a mapping between input voltage and speed or output torque, the full second-order representation is used here because it provides more fidelity in the motor model. For example, it allows us to monitor current draw demands, which would dictate battery discharge rate requirements in hardware. The equations describing the motor current i_a and rotational speed ω are:
L_a (di_a/dt) = V_a - R_a i_a - k_b ω
J ω̇ = τ - cω - τ_L = k_T i_a - cω - τ_L
DC motors often require a gearbox to provide the torques and speeds necessary for robot mechanisms, and we consider geared motors here. The relationship between the motor shaft speed ω and the rotational speed of the leg θ̇ is ω = Rθ̇, where R is the gear ratio. J must then be modified to account for the inertial effects of the motor shaft and the gearbox, J = J_m + J_GB. Combining the TD-SLIP equations with the DC motor equations requires solving for the load torque τ_L in equation (<ref>) and substituting τ_L for the applied torque τ in equation (<ref>). This gives the final equations of motion during stance with the geared DC motor:
θ̈ (1 + R^2 J/(mζ^2)) = -2ζ̇θ̇/ζ - g cos(θ)/ζ - c R^2 θ̇/(mζ^2) + k_T i_a R/(mζ^2)
ζ̈ = ζθ̇^2 - g sin(θ) - (k_0/m)(ζ - l_0) - (b_l/m)ζ̇
Equations (<ref>) and (<ref>), along with the equation for the current (<ref>), describe the dynamics of the system.
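A minimal numerical integration of these stance equations, with the DC motor state included, might look as follows. All parameter values and the initial touchdown state are illustrative placeholders rather than the optimized designs discussed later, and the liftoff event is simplified to the leg returning to its natural length.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only (not optimized values from this study)
m, g, l0, k0, bl = 0.2, 9.81, 0.08, 370.0, 0.05        # kg, m/s^2, m, N/m, N*s/m
J, R, c, kT, kb, Ra, La = 1e-6, 25.0, 1e-6, 5e-3, 5e-3, 2.0, 1e-4

def stance_dynamics(t, s, Va=3.0):
    """State s = [theta, theta_dot, zeta, zeta_dot, i_a] during stance."""
    th, thd, z, zd, ia = s
    thdd = (-2*zd*thd/z - g*np.cos(th)/z - c*R**2*thd/(m*z**2) + kT*ia*R/(m*z**2)) \
           / (1.0 + R**2*J/(m*z**2))
    zdd = z*thd**2 - g*np.sin(th) - (k0/m)*(z - l0) - (bl/m)*zd
    iad = (Va - Ra*ia - kb*R*thd) / La                  # back-EMF uses omega = R*theta_dot
    return [thd, thdd, zd, zdd, iad]

def liftoff(t, s, Va=3.0):                              # event: leg extends back to natural length
    return s[2] - l0
liftoff.terminal, liftoff.direction = True, 1

s0 = [np.deg2rad(110.0), -8.0, 0.99*l0, -0.3, 0.0]      # illustrative touchdown state
sol = solve_ivp(stance_dynamics, [0.0, 0.5], s0, events=liftoff, max_step=1e-4)
print("stance duration [s]:", sol.t[-1])
```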
§.§ Modeling the Leg as a Linear Spring While the TD-SLIP model abstracts the leg as a linear spring, translating this lumped parameter spring to a mechanical design can be challenging. For this work, we consider the same C-shaped leg with a rectangular cross-section seen in robots across scales, like RHex <cit.>, EduBot <cit.>, X-RHex <cit.>, Mini-RHex <cit.>, C-Quad <cit.>, and the robots in <cit.>. The stiffness of the C-shaped leg was calculated using Castigliano's theorem to approximate the linear elastic deflection of the leg under a load. Therefore, the stiffness of the leg can be modeled as a linear elastic spring of the form F=k_0δ, and the stiffness of the C-shaped leg is: k_0=bh^3E/6ρ^3 π where the geometric and material properties of the leg, detailed in Table <ref>, will dictate the overall stiffness of the leg. These parameters are used within the optimization framework outlined in Section <ref> to guide future hardware implementation. §.§ Voltage Control In hardware, the motors will be actuated with an open-loop time-varying voltage. This control strategy has been successfully used in robots, such as RHex, where the time-varying voltage results in the legs rotating slowly during the stance phase and faster during the flight phase <cit.>. To understand the nominal voltage profile given our model formulation, the TD-SLIP model was simulated over a stance cycle. Figure <ref> shows how the scaled center of mass trajectory evolves with time across a single stance phase. Using the rotational speed of the leg (θ̇) and the calculated hip torque at each time step, a voltage profile shown by Figure <ref> was calculated using equations (<ref>), and (<ref>). This estimated voltage represents the voltage profile required to match the system's speed and torque requirements at each time. While the voltage profile is well represented by a third-order system, it does not necessarily represent the most energy-efficient option, which is one of the objectives studied in section <ref>. To provide more flexibility to the optimization, the voltage profile was expressed as a fifth-order system. A fifth-order system allows for another voltage directional change during the stance phase, if needed, or can be collapsed to a lower order model. Therefore, the time-varying voltage will be represented through the constants a_5 to a_0 and can be expressed as: V=a_5t^5+a_4t^4 + ⋯ + a_1t+a_0 With a DC motor as the method of actuation, the robot model will continuously rotate the leg, preparing for the next stance during flight. In this formulation, the touchdown angle is able to vary from step to step rather than assuming a constant touchdown angle as was done in <cit.>. The leg position during flight is controlled through a bang-on bang-off voltage profile, whose objective is to reposition the leg such that the touchdown angle (θ_TD) is held constant step to step. The control voltage is taken to be the maximum voltage rating of the motor used in the system and is applied for a period, T_FC, the time to reposition the leg to the prescribed touchdown angle. Section <ref> outlines motor options in detail, but all options are rated to be run at 3V. Using the maximum voltage, the motor is rated for causes the leg to reposition as quickly as possible. While this is not necessarily the most energy-efficient option, it guarantees the leg will be in position before touchdown, assuming the motor and gearbox are sized correctly. 
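Both the Castigliano-based stiffness expression and the fifth-order stance voltage parametrization are straightforward to evaluate; the short sketch below does so with placeholder geometry, material, and coefficient values (in the optimization, all of these are decision variables).

```python
# Sketch: C-shaped leg stiffness k0 = b*h^3*E/(6*rho^3*pi) and the fifth-order
# stance voltage V(t). All numerical values are illustrative placeholders.
import numpy as np

def c_leg_stiffness(b, h, E, rho):
    """Linear stiffness of a C-shaped leg with rectangular cross-section b x h."""
    return b * h**3 * E / (6.0 * rho**3 * np.pi)

def stance_voltage(t, a):
    """V(t) = a5*t^5 + a4*t^4 + ... + a1*t + a0, with a = [a5, ..., a0]."""
    return np.polyval(a, t)

# Placeholder geometry/material: 10 mm wide, 2 mm thick, E = 2 GPa, 20 mm leg radius.
k0 = c_leg_stiffness(b=0.010, h=0.002, E=2e9, rho=0.020)
print(f"approximate leg stiffness: {k0:.0f} N/m")

a = [0.0, 0.0, 12.0, -9.0, 2.0, 0.5]          # placeholder polynomial coefficients
t = np.linspace(0.0, 0.08, 5)                 # sample times within one stance phase [s]
print("stance voltage samples [V]:", np.round(stance_voltage(t, a), 3))
```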
With this control scheme, the control method switches at the events of take-off and touchdown. After detecting an event, the controller will execute the appropriate control profile, which is a function of time. §.§ Simulation Framework All equations were simulated using the ode45 function in MATLAB with a variable time step, starting in the stance phase. An event function was used to switch between the differential equations of stance and flight while maintaining continuity. All source code is given in <cit.>. § OPTIMIZATION FRAMEWORK This work considers two co-design case studies using the TD-SLIP framework. First, co-design is used to design a system that maximizes the symmetry of the first stride through minimizing the change in touchdown angle between the first two cycles. For a robot operating under open-loop control, any error in the system's dynamics from the control will accumulate from cycle to cycle until the system eventually becomes unstable. This error can come from the lack of symmetry in the stance cycle and from the repositioning of the leg during flight, preventing repeatable stance cycles. An error in repositioning will cause the dynamics of a future step to vary from the previous step, causing the dynamics to deviate from their marginally stable gait. Therefore, it is essential for the robot to minimize the change in the touchdown angle cycle in the presence of uncertainty. This first case study aims to find a set of design and control parameters that allow the robot to achieve repeatable stance cycles when such accumulation errors exist. During locomotion the motor is constantly adding energy into the system to propel the body forward and to compensate for energy loss due to damping within the motor and leg. As batteries can only provide a finite amount of energy, being conservative with power usage is critical to maximizing the endurance of the robot. Energy usage is distributed across actuation, computation, and sensing. However, actuation generally consumes the highest amounts of energy and power compared to sensing and computation. This work just considers the contribution of actuation, as was done in  <cit.>. By minimizing energy consumption, the robot is capable of executing a longer mission and possesses the potential for more complex missions. Therefore, a second case study is defined to find the optimized design and control variables that maintain stable and repeatable gaits with minimal energy consumption. Finding the optimized energy consumption is also done in the presence of noise in the initial touchdown angle. §.§ Case Study 1: Optimizing Touchdown Angle Difference To represent an accumulated error in the touchdown angle, a Gaussian-distributed noise is added to the initial touchdown angle (θ_0). Thus, the initial cycle within the simulation can be viewed as a continuation of a preceding, uninterrupted sequence of cycles, representing an ongoing process rather than a beginning of a standalone new cycle. The standard deviation of the Gaussian noise, ϵ, is defined to be 1.290^∘ which will result in a ± 3^∘ noise with a 98% confidence interval. 
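The quoted standard deviation can be reproduced directly from the stated interval: a two-sided 98% interval of half-width 3^∘ for a zero-mean Gaussian corresponds to a standard deviation of about 1.290^∘, as the short check below confirms.

```python
# Sketch: noise standard deviation implied by a +/- 3 deg, 98% confidence interval.
from scipy.stats import norm

half_width_deg = 3.0
z98 = norm.ppf(0.5 + 0.98 / 2.0)   # two-sided 98% quantile of a standard normal (~2.326)
eps = half_width_deg / z98         # ~1.290 deg, matching the value quoted above
print(f"eps = {eps:.3f} deg")
```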
Therefore, the initial touchdown angle after adding the noise is defined as: θ_TD1 = θ_0 + 𝒩(0, ϵ^2) For a sequence of cycles to be repeatable and stable, the initial touchdown angle among each cycle should be similar, which means after a flight phase, the robot can return to its initial touchdown angle; this ensures that after a flight phase, the robot can return to its initial touchdown angle, allowing it to repeat the previous motion when the same control mechanism is applied. Thus, the objective of the optimization is to minimize the touchdown angle difference between the first stance phase and the second stance phase in the simulation. Here, by definition, the touchdown angle of the second stance phase is the same as the angle between the robot leg and the ground at the end of the first cycle. The formulation of the single objective optimization is defined in equation (<ref>). min_𝐗 θ_Diff(𝐗) = |θ_0 - θ_TD2(𝐗)| s. t. 𝐗∈[𝐗_L,𝐗_U] Touchdown angle constraints: [ g_1 = min (θ_TD1, θ_TD2) > 0.45, g_2 = θ_Diff < ϵ,; g_3 = max (θ_TD1, θ_TD2) < 1.48 ; ] Position constraints: [ g_4 = y_s1_M < 0.85 y_s1_S, g_5 = y_s1_M < 0.85 y_s1_E,; g_6 = x_s1_E > 0, g_7 = min (𝐲_s1) > 0; g_8 = |Δ x_f1| - 4 · l_0 < 0 , g_9 = min (𝐲_s2) > 0; ] g_10 = min(Δ x_si, i = 1, 2, ..., n-1 ) ≥1e-03, Velocity constraints: [ g_11 = ẋ_s1_E > 0, g_12 = ẏ_s1_E > 0,; g_13 = ẏ_s1_E < 5, g_14 = S_s1 < 0.3; g_15 = ẋ_s2_E > 0, g_16 = ẏ_s2_E > 0,; g_17 = ẏ_s2_E < 5, g_18 = S_s2 < 0.3; ] Rotation constraints: [ g_19 = δω_f1 > π, g_20 = δω_f1 < 2π; ] Additional constraints: [ g_21 = T_1 > 1/15, g_22 = T_1 < 2 , g_23 = N ≥ 8; ] where: 𝐗 = [Motor Label, m_add, E, ρ, b, h, b_l, ζ̇_0, θ_0, θ̇_0, a_i, i=0,1, ..., 5, t_FC] In the equation, θ_Diff is the absolute value of the designed variable initial touchdown angle, θ_0, and the second cycle's touchdown angle, θ_TD2. The upper and lower bounds of the input variables are defined by 𝐗_L and 𝐗_L respectively. All constraints were weighted equally, and the definitions of the notations in the constraints shown in Eq. (<ref>) are explained below: * Touchdown angle constraints: ϵ is the standard deviation of the Gaussian noise adding to the initial touchdown angle. g_1 and g_3 bounds the touchdown angles of the first two stance phases within the pre-defined initial condition. g_2 ensures the difference between the two touchdown angles is smaller than the standard deviation of the Gaussian noise, which instructs the robot to return to the same range of the touchdown angle as at the beginning of the simulation. * Position constraints: y_s1_S, y_s1_M and y_s1_E represents the y coordinate of the first stance phase's start point, midpoint, and endpoint respectively. 𝐱_s1, 𝐲_s1 are arrays of the x and y coordinates of the first stance phase, and 𝐲_s2 is the array of the y coordinates of the second stance phase. Δ x_f1 is the x direction displacement during the first flight phase. Δ x_si, i = 1, 2, ..., n-1 is the x direction displacement of each stance phase from the first phase to the second to last phase, where n is the total number of phases completed in the simulation. g_4 and g_5 ensure the leg compresses and decompresses during stance, resulting in SLIP dynamics, as opposed to staying rigid and resulting in a vaulting motion. g_6 prevents the robot from ending in a position backward of its starting position. g_7 and g_9 ensure the position of the robot's center of mass does not fall below the terrain for the first two stance phases. 
g_8 constrains the distance traveled during flight to mimic travel distances seen in biological runners <cit.>. g_10 was applied to ensure a minimal travel distance during stance, which prevented the system from converging towards a hopping gait without forward movement. * Velocity constraints: g_11 and g_15 ensure that the direction of travel is in the positive x direction at the end of the first two stance phases. g_12, g_13, g_16, and g_17 specify the minimum and maximum bounds on the y velocity at the end of the stance phase. This velocity must be nonzero to ensure a flight phase, and the upper bound prevents excessive and unrealistic jump heights. Legged locomotion is a periodic dynamical system. To achieve a stable gait, the initial conditions of the first step must match those of the subsequent steps. This results in a symmetric position and velocity vector during stance. Constraints g_14 and g_18 apply upper bounds on the symmetry of the velocity vectors for the first two stance phases where S is defined in (<ref>). Velocity was chosen for the constraint as positional or trajectory symmetry does not guarantee the symmetry in velocity magnitude. However, a symmetry in velocity magnitude can imply a symmetry in position or trajectory in SLIP. S=(1-ẋ_si_E/ẋ_si_S)^2+(1-ẏ_si_E/ẏ_si_S)^2 Equation (<ref>) specifies the change in normalized x and y velocities from the start to the end of the stance. The smaller the value of S, the more symmetric the gait is. * Rotation constraints: g_19 and g_20 limit the robot's rotation in the flight phase within 180^∘ to 360^∘. This ensures the leg rotates during the flight phase to reposition for the following phase but does not over rotate. * Additional constraints: T_1 is the period for the first cycle. g_21 and g_22 bound T_1 in the range [1/15,2] seconds, which matches stride periods seen in biological runners, and would be seen in robots with similar dynamics <cit.>. The minimum number of cycles N was constrained by g_23 to be greater than 8. As constraints were only applied to the first two cycles, this ensures the system is able to complete additional steps. §.§ Case Study 2: Optimizing Energy Consumption Minimizing energy usage due to actuation can be calculated by integrating the power consumed by the motor over one stance and flight cycle which is described by equation (<ref>). F=∫_0^t V_a(t) · i_a(t) dt The objective function in this problem is defined as equation <ref> min_𝐗 f(𝐗) = F(𝐗) Constraints between the two case studies are the same except for g_3, g_8, and g_24, which are shown in the following equations: g_2 = |θ_0 - θ_TD2(𝐗)| < 0.859 g_8 = 5 · (|Δ x_f1| - 4 · l_0) < 0 g_24 = -0.001 - min (𝐕_s1·𝐈_s1) < 0 The new g_3 constraint is defined based on the result of optimizing the touchdown angle difference, which will be discussed in the next section. The value of 0.859^∘ is approximately two times the converged touchdown angle difference from case study 1. Therefore, this second case study is encouraged to achieve similar stable results as the previous optimization. The new g_8 was weighted by a factor of 5 as it was the most difficult constraint to satisfy. This weighting factor helps MDPSO prioritize the constraint. g_24 was added to prevent the system from stalling but was not needed when optimizing the touchdown angle. A slightly negative value was used to help with numerical instability. §.§ Optimization Parameter Bounds The optimization framework for component choice and design requires setting feasible physical bounds. 
These bounds are detailed in this section. * Motor Label Equations (<ref>) and (<ref>) show the parameters that affect motor performance, which cannot be independently varied without designing new motors. As such, the motor input is a discretized list where each value corresponds to a different motor and gearbox combination. For this work the 3 brushed 6, 8, and 10 motors from Maxon (maxongroup.com) were considered. The 3 option was chosen as they can be powered from a single-cell lithium polymer battery, which typically operates at 3.7. When the three motor sizes are paired with their corresponding gearbox options, the optimizer is presented with 18 options to choose from. * 𝐦_𝐚𝐝𝐝 is the additional mass added to the system on top of the minimum required mass. Here the minimum mass is estimated as two motors (m_motor), a microcontroller development board estimated at 5 (m_mcu), and a battery mass (m_B) of 3 which represents a 3.7, 100 lithium polymer battery. This minimum mass is multiplied by two to estimate the supporting structure mass, m_min = 2 (m_B + m_mcu + 2 m_motor). * E The bounds of the elastic modulus correspond to castable polymers (10) to brass (130). A material will have to be chosen based on the optimized value since a discrete option is not simulated. * ρ,b,h correspond to the leg properties. A lower bound of 0.5 was chosen since this corresponds to 26 gauge sheet metal or a few layers on most 3D printers. In hardware, geometry will need to be refined to accommodate both modulus mismatch and manufacturing limitations. * 𝐛_𝐥 It is difficult to design specific damping without extensive testing of leg materials and geometries. Damping is included in the model since damping improved stability in legged locomotion <cit.>, though damping would have to be experimentally characterized. * ζ̇_̇0̇,θ_0,θ̇_̇0̇ correspond to the systems initial conditions. ζ̇_0 must be negative for the leg to compress. Bounds on θ_0 were chosen based on observed values in biology and robotics, which tend to fall within the range of 47^∘ to 82^∘ <cit.>. To get more flexibility in the search space, the lower bound on θ_0 is set to 25^∘. Similarly, bounds on θ̇ were chosen based on biological data and speeds of similarly sized robots <cit.>. For example, cockroaches (2.6) and horses (680) have gait frequencies 15Hz and and 2Hz, respectively <cit.>. As the leg in the proposed design will complete one rotation per stride, this correlates to a rotational speed of 2-15 RPS. These bounds on θ and θ̇ impact motor choice as they affect the torques and speeds the motor must supply to the system. * 𝐓_𝐅𝐂 represents the time period the control voltage will be applied during the stance phase. Using the bounds placed on θ̇_̇0̇ above, the system will operate in the range of 2-15 Hz, which correlates to a flight cycle time of less than 0.5s. * 𝐚_5 - 𝐚_0 The bounds on these polynomial coefficients were chosen heuristically to be as generous as possible without imposing an unnecessary constraint. These bounds do not limit the maximum voltage, which is handled by constraint g_2. § OPTIMIZED DESIGNS: RESULTS & DISCUSSION The MDPSO optimization process was performed on an AMD Ryzen 9 5950X 16-Core Processor CPU with 64 GB RAM Windows workstation. The population size of MDPSO was set to 128, and the optimization ran with 32 parallel workers in MATLAB. Optimized design and control parameters are listed in Table <ref> for both case studies, and Figure <ref> shows a rendering of the optimum designs resulting from the two cases. 
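Each MDPSO candidate is scored by running the hybrid TD-SLIP simulation once and post-processing the resulting trajectory into the objective value and a net constraint violation. The sketch below illustrates only this scoring step; the trajectory arrays are synthetic stand-ins for the simulator output, and the constraint values are assumed to already be expressed in g_i ≤ 0 form.

```python
# Sketch: scoring one candidate design from a simulated stance/flight cycle.
# Arrays below are synthetic placeholders for the TD-SLIP simulator output.
import numpy as np
from scipy.integrate import trapezoid

def score_candidate(t, V, I, theta_td, g_values):
    energy = trapezoid(V * I, t)                    # F = integral of V(t)*I(t) dt  [J]
    theta_diff = abs(theta_td[1] - theta_td[0])     # |theta_0 - theta_TD2|  [deg]
    violation = np.sum(np.maximum(g_values, 0.0))   # aggregated violation of g_i <= 0
    return energy, theta_diff, violation

t = np.linspace(0.0, 0.3, 200)                      # one cycle [s]
V = 2.0 + 0.5 * np.sin(20.0 * t)                    # applied voltage [V]
I = 0.2 + 0.05 * np.cos(20.0 * t)                   # armature current [A]
theta_td = np.array([70.0, 70.4])                   # touchdown angles of cycles 1 and 2 [deg]
g_values = np.array([-0.10, -0.02, 0.0])            # all constraints satisfied in this example
print(score_candidate(t, V, I, theta_td, g_values))
```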
The following termination criteria were used in the optimizations: 1) the minimum objective is infeasible and the net constraint violation does not decrease for 15 consecutive iterations; 2) the minimum feasible objective does not decrease for 5 consecutive iterations. The case of minimizing touchdown angle difference finished in 0.93 hours with 47 iterations, and the optimization of minimizing energy cost finished in 1.58 hours with 95 iterations. The convergence histories of the two case studies are shown in Figure <ref> and Figure <ref>. Case Study 2 took longer to converge most likely since finding feasible solutions took relatively greater number of iterations, as seen from Figure <ref>. The convergence history plots show that the objectives of the two case studies converged to 0.446^∘ and 9.81mJ respectively, but due to the Gaussian noise added to the first touchdown angle, the converged objectives still need to be validated. To validate the converged objectives of the two optimizations and to quantify the robustness to uncertainty, designs from both case studies were simulated 100 times with noise on the optimized touchdown angle. In these evaluations, the noise was uniquely instantiated for each sampling, following a normal distribution denoted by 𝒩(0, ϵ^2). Both case studies were validated with the same set of noise, and the touchdown angle difference, θ_Diff, was 1.053^∘, which is within ϵ of 1.2896^∘ that defines the Gaussian noise during optimization. The result shows that case study 2 can achieve the same θ_Diff, due to the constraint g_2, which was defined based on the converged θ_Diff of the first case study. The value 1.053 is slightly greater than the threshold set in the constraint due to the noise, but it shows using the result of the first case study to define the constraint can encourage the optimizer to achieve stable gaits. This addition of noise in both case studies verifies that the optimization approach here could result in robots that are robust to noise in hardware realization or during operation. The energy cost of case study 2 is much smaller than that of case study 1, which shows that given the same motor selection, the energy cost can be reduced with tuning of the physical design parameters and control voltage profile. In addition to evaluating the standard deviation of the touchdown angle difference, the number of gait cycles was quantified during this validation step. In case study one, the average cycle completed before the system was no longer stable was 5, and the maximum number of cycles completed in a single evaluation was 12 cycles, failing during the last flight phase. In Case Study 2, the average cycles completed in each evaluation was 6.5, which is a step and a half greater than the first case study, and the maximum cycles completed in a single evaluation was 20. Despite the higher variance in touchdown angle difference, the optimized design, which minimized energy, results in more stable dynamics, completing more gait cycles in comparison to the optimized design in case study one. Figure <ref> plots the trajectories of the optimized designs during this validation step. The trajectories show similar dynamics, largely as a result of the similar system parameters found in each optimization case study. For example, both used motor option 15, which corresponds to Maxon's 10mm diameter motor PN 118383 with a gear reduction of 16. 
While the overall system sizes, i.e., masses and lengths, are different, resulting in calculated leg stiffnesses of 1769 N/m and 3382 N/m, respectively, their relative stiffnesses are similar. The corresponding relative stiffnesses (k_rel = k_0l_0/mg) were found to be 12 and 16, implying similar dynamics between the two systems, and falling within the stable regimes reported in <cit.>. The open-loop control voltages are shown in Figure <ref>, corresponding to the trajectories shown in Figure <ref>. As the control is an open-loop profile repeating for each stance and flight cycle, the control was only plotted through the second stance phase. While the control profile was specified as a fifth-degree polynomial, the optimization returned a piece-wise linear profile, which is easier to implement in hardware. This profile is similar to the Buehler clock utilized in the RHex robot <cit.>, but with an added delay during flight. This may therefore be a robust open-loop strategy that can be used across a diversity of robots. § CONCLUSIONS In this paper, we present an efficient computational framework to explore optimized designs of legged robots with the torque-driven spring-loaded inverted pendulum (TD-SLIP) abstraction operating on flat terrain. Two case studies were performed, the respective objectives of which involved optimizing the touchdown angle difference and the energy consumption between the first two cycles. Hardware and physical parameters such as motor selection, leg geometry, and representative mass were concurrently considered alongside control parameters, such as voltage profiles and touchdown angles, in the optimized design process. The optimized designs obtained in this work were observed to adhere to the current understanding of the dynamics of legged locomotion (albeit subject to the assumed fidelity of the modeling process); the optimized designs presented relative stiffnesses in the range of biological and robotic runners <cit.>, providing initial evidence for the suitability of the set of constraints formulated to drive the design process while preserving a degree of realism. These designs were observed to be robust against a 3^∘ noise in the first touchdown angle with a 98% confidence interval. When tested over 100 validation runs, these open-loop designs completed over five (flight/stance) cycles on average. There remains scope to further improve the repeatability of the TD-SLIP motion; potential future extensions in this regard include closed-loop control and efficient uncertainty propagation techniques to impose reliability constraints. In addition, higher-fidelity analysis or physical testing of the leg's structural dynamics and terrain interaction in the future could provide further insights into both the effectiveness of the proposed design framework and the achievable capabilities for such small-scale running robots. § ACKNOWLEDGMENTS This work was supported by the startup funds provided by the Department of Mechanical and Aerospace Engineering and the School of Engineering and Applied Sciences at the University at Buffalo, and the National Science Foundation (NSF) award CMMI 2048020.
http://arxiv.org/abs/2407.13257v1
20240718080931
Predictive control for nonlinear stochastic systems: Closed-loop guarantees with unbounded noise
[ "Johannes Köhler", "Melanie N. Zeilinger" ]
eess.SY
[ "eess.SY", "cs.SY", "math.OC" ]
Predictive control for nonlinear stochastic systems: Closed-loop guarantees with unbounded noise Johannes Köhler, Melanie N. Zeilinger Institute for Dynamic Systems and Control, ETH Zürich, Zürich CH-8092, Switzerland (email:{jkoehle|mzeilinger}@ethz.ch). Johannes Köhler was supported by the Swiss National Science Foundation under NCCR Automation (grant agreement 51NF40 180545). July 22, 2024 ================================================================================================================================================================================================================================================================================================ § ABSTRACT We present a stochastic predictive control framework for nonlinear systems subject to unbounded process noise with closed-loop guarantees. First, we provide a conceptual shrinking-horizon framework that utilizes general probabilistic reachable sets and minimizes the expected cost. Then, we provide a tractable receding-horizon formulation that uses a nominal state and a simple constraint tightening. Both formulations ensure recursive feasibility, satisfaction of chance constraints, and bounds on the expected cost for the resulting closed-loop system. We provide a constructive design for probabilistic reachable sets of nonlinear systems using stochastic contraction metrics. We demonstrate the practicality of the proposed method through a simulation of a chain of mass-spring-dampers with nonlinear Coulomb friction. Overall, this paper provides a framework for computationally tractable stochastic predictive control approaches with closed-loop guarantees for nonlinear systems with unbounded noise. predictive control, chance constraints, nonlinear systems, stochastic systems, constrained control § INTRODUCTION Model predictive control (MPC) is an optimization-based method that yields high-performance control for general nonlinear systems and ensures satisfaction of safety-critical constraints <cit.>. Robust MPC methods predict over-approximations of the robust reachable set to ensure constraint satisfaction for any admissible bounded noise <cit.>. However, such worst-case bounds can be overly conservative or simply inadequate if no bound on the noise is available. Stochastic MPC (SMPC) formulations avoid these problems by leveraging distributional information about the noise and allowing for a user-chosen probability of constraint violation <cit.>. In this paper, we address the design of SMPC schemes that yield suitable closed-loop guarantees for nonlinear systems subject to unbounded noise. §.§ Related work The design of SMPC schemes faces two key challenges: * Reformulate chance constraints as tractable deterministic conditions; * Derive a tractable finite-horizon problem that yields system-theoretic properties for the closed-loop system. Probabilistic reachable sets for nonlinear systems: The reformulation of chance constraints can be equivalently posed as the computation of probabilistic reachable sets (PRS), i.e., sets which contain uncertain future states at least with a specified probability <cit.>. For linear stochastic systems, such PRS can be efficiently computed using analytical bounds <cit.> or offline sampling <cit.>. For nonlinear stochastic systems, there exist many approaches to approximate PRS, such as (generalized) polynomial chaos <cit.>, sampling <cit.>, or linearization <cit.>, see also the overviews <cit.>.
These methods typically trade off computational complexity with approximation errors, and thus probabilistic containment is not guaranteed. In the case of bounded noise, there exist many methods to compute valid over-approximations of the robust reachable set <cit.>, such as contraction metrics <cit.>. Similarly, valid PRS can be effectively computed for nonlinear stochastic systems if robust bounds on the noise are available <cit.>. The computation of valid PRS for nonlinear systems subject to unbounded stochastic noise remains a challenging problem. Closed-loop properties in SMPC: A key challenge in the design of SMPC schemes is that constraint violations are explicitly permitted with some non-zero probability. Hence, naïve implementations may lose feasibility and thus all closed-loop properties during online operation <cit.>. Two conceptual SMPC frameworks to address this problem are robust techniques and feasibility-preserving algorithms (cf. <cit.>). Robust techniques impose more restrictive constraints to robustly ensure recursive feasibility for the worst-case noise realization. Corresponding linear and nonlinear SMPC schemes can be found in <cit.> and <cit.>, respectively. Key limitations include the requirement of a known worst-case bound on the noise and conservatism (cf. comparison in <cit.>). In contrast, feasibility-preserving algorithms specify the initial condition in the SMPC formulation such that closed-loop properties are preserved independent of the realized noise <cit.>. One particularly promising approach is indirect-feedback SMPC <cit.>, which has been the basis for many recent extensions and developments <cit.>. This approach leverages linear system dynamics and a nominal state initialization to define stochastic error dynamics that evolve completely independently of the variables optimized by the SMPC. As a result, chance constraints on the closed-loop system can be efficiently formulated as tightened deterministic constraints on the nominal state predicted by the SMPC. However, these developments in <cit.> strongly rely on the independence between the error and the nominal trajectory computed by the SMPC, which prohibits application to nonlinear systems. In <cit.>, a feasibility-preserving SMPC for nonlinear systems is proposed that uses online adjustments of the probability level in the constraints to obtain closed-loop guarantees. However, application requires online evaluation of probability levels through sampling, which results in a significant increase in computational complexity and limits application to short finite-horizon problems. Overall, the design of tractable nonlinear SMPC schemes with closed-loop guarantees remains largely unsolved <cit.>. §.§ Contribution We provide a computationally tractable framework for nonlinear SMPC that yields closed-loop guarantees for unbounded noise. This is based on three main technical contributions: * We extend the indirect-feedback SMPC framework <cit.> to nonlinear systems by removing the independence assumption and using general PRS; * We provide a tractable SMPC using a nominal system and tightened constraints; * We design PRS for nonlinear systems with unbounded noise using stochastic contraction metrics <cit.>. The resulting nonlinear SMPC scheme has a computational demand comparable to a nominal MPC scheme and ensures the following closed-loop properties: * Recursive feasibility; * Chance constraint satisfaction; * Bound on expected cost.
We demonstrate the practical applicability of these theoretical results with a numerical example involving a chain of mass-spring-dampers with nonlinear Coulomb friction. §.§.§ Outline We first present the problem setup (Sec. <ref>) and propose a conceptual framework for SMPC with a shrinking-horizon formulation (Sec. <ref>). Then, we provide a tractable formulation with nominal predictions (Sec. <ref>) and show how to compute RPS for nonlinear systems using stochastic contraction metrics (Sec. <ref>). Afterwards, we summarize the overall design and provide a discussion (Sec. <ref>). Finally, we illustrate the results with a numerical example (Sec. <ref>) and end with a conclusion (Sec. <ref>). Additional details regarding the offline design of contraction metrics, tightened constraints, and terminal cost/set can be found in Appendices <ref>–<ref>. §.§.§ Notation The set of integers in an interval [a,b] is denoted by 𝕀_[a,b]. By u(a:b)∈𝕌^b-a+1, a,b∈𝕀_≥ 0, we denote the sequence with elements u(k)∈𝕌, k∈𝕀_[a,b]. We denote the prediction for time i computed at time k by 𝐮_i|k∈𝕌 and the sequence containing elements i∈𝕀_[a,b] by 𝐮_a:b|k∈𝕌^b-a+1. Whenever clear from the context, we denote the full predicted sequence by 𝐮_·|k. By Q≻ 0 (≽ 0) we denote that a symmetric matrix Q is positive (semi)-definite and by Q^1/2 we denote the symmetric matrix square-root, i.e., Q^1/2Q^1/2=Q. We denote the Euclidean norm of a vector x by x=√(x^⊤ x) and the weighted norm w.r.t. a positive definite matrix M by x_M:=√(x^⊤ M x). For two sets 𝔸,𝔹⊆ℝ^n, the Minkowski sum is defined a 𝔸⊕𝔹={a+b| a∈𝔸, b∈𝔹} and the Pontryagin difference is 𝔸⊖𝔹={c| c+b∈𝔸∀ b∈𝔹}. The probability of an event A is denoted by A. The expectation of a function δ(w) over a random variable w is denoted by 𝔼_w[δ(w)] and the expectation conditioned on an event A is given by 𝔼_w[δ(w)| A]. By 𝒦_∞, we denote the set of continuous functions α:ℝ_≥ 0→ℝ_≥ 0 that are strictly increasing, unbounded, and satisfy α(0)=0. For a continuously differentiable function f:ℝ^a ×ℝ^b →ℝ^c, f(x,u), the partial derivative w.r.t. x evaluated at some point (x,u) = (z,v) is defined as .∂ f/∂ x|_(z,v)∈ℝ^c × a and the total derivative w.r.t. a variable w∈ℝ^d is denoted by .d f/dw|_(z,v)∈ℝ^c × d. § PROBLEM FORMULATION We consider a nonlinear stochastic system x(k+1)=f(x(k),u(k),w(k)), x(0)=x_0, with state x(k)∈ℝ^n, input u(k)∈𝕌⊆ℝ^m, process noise w(k)∈ℝ^q, discrete time k∈𝕀_≥ 0, initial condition x_0∈ℝ^n, and input constraint 𝕌. The dynamics f are known and the state x(k) can be measured. (Stochastic noise) The noise w(k) is independently distributed according to distributions 𝒬_w(k) with zero mean and a variance bound Σ_w≻ 0, i.e., 𝔼_w(k)[w(k)]=0, 𝔼_w(k)[w(k)w(k)^⊤]≼Σ_w, ∀ k∈𝕀_≥ 0. We impose chance constraints of the form x(k)∈𝕏≥ p ∀ k∈𝕀_≥ 0, where 𝕏⊆ℝ^n is some closed set and p∈(0,1) is a desired probability level. We define the finite-horizon cost 𝒥_N(x(0:N),u(0:N-1)) := ∑_k=0^N-1ℓ(x(k),u(k))+V_f(x(N)), with a user chosen stage cost ℓ:ℝ^n×𝕌→ℝ, a terminal cost V_f:ℝ^n→ℝ, and a horizon N∈𝕀_≥ 1. We assume that f,ℓ,V_f are continuous and 𝕌 is compact. Ideally, we would like to solve the following stochastic optimal control problem inf_π  lim_N̅→∞1N̅𝔼_w(0:N̅)[𝒥_N̅(x(0:N̅),u(0:N̅-1))] s.t. (<ref>), (<ref>), u(k)=π_k(w(0:k-1))∈𝕌,  k∈𝕀_≥ 0, where π are causal policies that minimize the infinite-horizon expected cost and ensure satisfaction of the chance constraints (<ref>) ∀ k∈𝕀_≥ 0. 
Problem (<ref>) is not computationally tractable for multiple reasons: (i) optimization over policies π; (ii) the infinite prediction horizon; and (iii) the chance constraints (<ref>). In this paper, we derive a computationally tractable SMPC scheme that uses a receding-horizon implementation, optimizes open-loop inputs, and uses PRS to ensure satisfaction of the chance constraints. (Probabilistic input constraints) In SMPC, input constraints u(k)∈𝕌 are often relaxed to probabilistic input constraints u(k)∈𝕌≥ p <cit.>. We consider hard input constraints due to their prevalence in practical applications and to simplify the exposition of PRS. However, the presented results can be naturally adjusted to this setup, see Section <ref> for details. § SMPC USING PRS - THE SHRINKING-HORIZON CASE In this section, we present the theoretical framework for nonlinear SMPC using general PRS. We first consider a shrinking-horizon problem with some finite horizon N∈𝕀_>0 and treat the more general receding-horizon problem in Section <ref>. We focus on how to incorporate the state measurement x(k) in the SMPC and the resulting closed-loop properties. To this end, we consider the following definition of PRS. Consider system (<ref>) in closed loop with any causal policy u(k)=π_k(w(0:k-1))∈𝕌, k∈𝕀_≥ 0. A sequence of sets ℛ_k, k∈𝕀_≥ 0 are probabilistic reachable sets (PRS) if x(k)∈ℛ_k≥ p, ∀ k∈𝕀_≥ 0. In this paper, we focus on optimizing open-loop input sequences and consider the following parametrization of PRS. [PRS] We know a sequence of closed parametrized sets ℛ_k:ℝ^n×𝕌^k→2^ℝ^n, k∈𝕀_≥ 0, such that ℛ_k(x_0,u(0:k-1)), k∈𝕀_≥ 0, are PRS (Definition <ref>). The constructive design of such PRS is studied in Section <ref>. At each time k∈𝕀_[0,N-1], the proposed shrinking-horizon SMPC considers the following optimization problem: inf_𝐮_0:N-1|k∈𝕌^N  𝔼_w(k:N-1)[𝒥_N-k(𝐱_k:N|k,𝐮_k:N-1|k)] s.t. ℛ_i(x_0,𝐮_0:i-1|k)⊆𝕏, 𝐮_0:k-1|k=u(0:k-1), 𝐱_k|k=x(k), 𝐱_i+1|k=f(𝐱_i|k,𝐮_i|k,w(i)), w(i)∼𝒬_w(i), i∈𝕀_[k,N-1], which depends on the current state x(k) and the past applied inputs u(0:k-1). We assume that a minimizing input sequence exists[ If f,ℓ,V_f are uniformly continuous, then the expected cost is a well-defined continuous function of 𝐮_·|k, see <cit.>. In this case, compact constraints 𝕌 and the closed constraints (<ref>) ensure that a minimizer exists for each x(k)∈ℝ^n, k∈𝕀_[0,N-1], assuming the problem is feasible.], which is denoted by 𝐮^⋆_·|k∈𝕌^N. In closed-loop operation, we solve Problem (<ref>) at each time k∈𝕀_[0,N-1] and apply the optimized input for this point in time, i.e., u(k)=𝐮^⋆_k|k. Similar to the indirect-feedback SMPC framework <cit.>, the state measurement x(k) only appears in the expected cost through the stochastic prediction (<ref>)–(<ref>). The chance constraints (<ref>) are enforced through the constraints (<ref>), which do not directly depend on the past random noise realizations w(0:k-1). The constraint (<ref>) ensures that the optimized inputs are consistent with the already applied inputs. The following theorem formalizes the closed-loop properties of the proposed shrinking-horizon SMPC formulation. Let Assumptions <ref>–<ref> hold and suppose Problem (<ref>) is feasible at k=0. Then, Problem (<ref>) is feasible and the chance constraints (<ref>) are satisfied for the resulting closed-loop system for all k∈𝕀_[0,N-1]. Furthermore, the closed-loop cost satisfies the following bound: 𝔼_w(0:N-1)[𝒥_N(x(0:N),u(0:N-1))] ≤ 𝔼_w(0:N-1)[𝒥_N(𝐱_0:N|0^⋆,𝐮_0:N-1|0^⋆)].
Recursive feasibility: Given that an optimal input sequence 𝐮^⋆_0:N-1|k at some time k∈𝕀_[0,N-2], 𝐮_0:N-1|k+1=𝐮^⋆_0:N-1|k is a feasible solution to Problem (<ref>) at time k+1. In particular, the constraints (<ref>) remain unaltered, the added constraint in (<ref>) remains valid with 𝐮^⋆_k|k=u(k), and the stochastic prediction 𝐱_·|k+1 (<ref>) does not affect feasibility. Closed-loop chance constraint satisfaction: First note that Problem (<ref>) yields a causal policy u(k)=π_k(w(0:k-1)), considering also (<ref>) and independently distributed noise w (Asm. <ref>). For any k∈𝕀_[0,N-1], feasibility of Problem (<ref>) ensures ℛ_k(x_0,u(0:k-1)=ℛ_k(x_0,𝐮^⋆_0:k-1|k)⊆𝕏. Assumption <ref> with the PRS definition (Def. <ref>) yields x(k)∈𝕏(<ref>)≥ x(k)∈ℛ_k(x_0,𝐮_0:k-1|k^⋆) (<ref>)= x(k)∈ℛ_k(x_0,u(0:k-1))(<ref>)≥p, i.e., the chance constraint (<ref>) holds for the closed-loop system. Expected cost: Denote 𝐱^⋆_0:k|k=x(0:k) and by 𝐱^⋆_k+1:N|k the stochastic prediction (<ref>)–(<ref>) using 𝐮^⋆_k:k+N-1|k. Consider an arbitrary time k∈𝕀_[0,N-1] and note that Problem (<ref>) minimizes the cost 𝔼_w(k:N-1)[𝒥_N(𝐱_·|k,𝐮_·|k)|w(0:k-1)], where the first k elements of the cost are constant and the conditioning on w(0:k-1) uniquely invokes x(0:k) and u(0:k-1). For any k∈𝕀_[0,N-2], it holds that 𝔼_w(k:N-1)[𝒥_N(𝐱^⋆_·|k,𝐮_·|k^⋆)| w(0:k-1)] = 𝔼_w(k)[𝔼_w(k+1:N-1)[𝒥_N(𝐱_·|k^⋆,𝐮_·|k^⋆)| w(0:k)]] ≥ 𝔼_w(k)[𝔼_w(k+1:N-1)[𝒥_N(𝐱^⋆_·|k+1,𝐮^⋆_·|k+1)| w(0:k)]], where the equality uses the law of iterated expectation. The inequality uses the fact that the optimal solution to Problem (<ref>) at time k is a feasible solution at time k+1 and the inner expectation corresponds to the objective of Problem (<ref>) at time k+1. Iteratively applying Inequality (<ref>) for k∈𝕀_[0,N-2] yields 𝔼_w(0:N-1)[𝒥_N(x(0:N),u(0:N-1))] (<ref>)= 𝔼_w(0:N-2)[𝔼_w(N-1)[𝒥_N(𝐱^⋆_·|N-1,𝐮^⋆_·|N-1)| w(0:N-2)]] ≤ …≤𝔼_w(0)[𝔼_w(1:N-1)[𝒥_N(𝐱^⋆_·|1,𝐮^⋆_·|1)| w(0)]] ≤ 𝔼_w(0:N-1)[𝒥_N(𝐱^⋆_·|0,𝐮^⋆_·|0)]. Inequality (<ref>) ensures that the (expected) closed-loop cost is no larger than the cost of the open-loop optimal input sequence, i.e., re-optimization improves performance. Furthermore, Theorem <ref> ensures that the constrained optimization problem (<ref>) is feasible and the chance constraints (<ref>) are satisfied for all k∈𝕀_[0,N-1]. Thus, this approach achieves all the desired properties, <ref>, <ref>, <ref>. However, the explicit dependence of Problem (<ref>) on all past inputs limits applicability to short horizon problems. Furthermore, Problem (<ref>) requires the evaluation of the expected cost (<ref>) over the stochastic prediction (<ref>)–(<ref>), which is computational expensive to evaluate. The next section addresses both issues by providing a tractable formulation. § TRACTABLE SMPC SCHEME In this section, we derive a tractable receding-horizon SMPC by introducing a nominal state z and over-approximating the PRS ℛ_k with a simpler parametrization. We define the nominal dynamics z(k+1)=f(z(k),u(k),0), z(0)=x_0, which correspond to (<ref>) with w(·)≡ 0. The following condition introduces an over-approximation of the PRS ℛ_k using the nominal state z and known compact sets 𝔻_k⊆ℝ^n, k∈𝕀_≥ 0. (PRS over-approximation) For any x_0∈ℝ^n, k∈𝕀_≥ 0 and any u(0:k-1)∈𝕌^k, it holds that ℛ_k(x_0,u(0:k-1))⊆{z(k)}⊕𝔻_k, with z(k) according to (<ref>) and ℛ_k from Assumption <ref>. With this more structured over-approximation, we can design a tractable finite-horizon SMPC scheme. 
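Anticipating the ellipsoidal sets constructed later via contraction metrics, the over-approximation {z(k)}⊕𝔻_k has a simple concrete form. The sketch below propagates a nominal trajectory and computes the corresponding tightened half-space constraints for a polytopic state constraint set; the dynamics, metric M, contraction rate ρ, noise bound w̄, and polytope are illustrative placeholders (the actual quantities come from the offline design described later).

```python
# Sketch: nominal prediction z(k+1) = f(z(k), u(k), 0) and tightening of a polytopic
# constraint set X = {x : A x <= b} by an ellipsoidal D_k = {d : d^T M d <= sigma_k}.
# All numerical values are illustrative placeholders.
import numpy as np

def f(x, u, w):                       # placeholder nonlinear dynamics with 2 states
    return np.array([0.7 * x[0] + 0.2 * np.sin(x[1]) + w[0],
                     0.6 * x[1] + 0.5 * u + w[1]])

M = np.diag([2.0, 1.0])               # placeholder metric defining D_k
rho, wbar, p = 0.6, 1e-3, 0.9         # placeholder contraction rate, noise bound, probability
A = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.5])              # X = {x : A x <= b}

K = 10
u_seq = 0.1 * np.ones(K)
z = np.zeros((K + 1, 2))              # nominal trajectory with z(0) = x_0 = 0
for k in range(K):
    z[k + 1] = f(z[k], u_seq[k], np.zeros(2))

Minv = np.linalg.inv(M)
sigma = (1 - rho ** np.arange(K + 1)) / (1 - rho) * wbar / (1 - p)   # scaling of D_k
# Tightened half-spaces: a_i^T z <= b_i - sqrt(sigma_k * a_i^T M^{-1} a_i)
margins = np.sqrt(np.einsum('ij,jk,ik->i', A, Minv, A))
b_tight = b[None, :] - np.sqrt(sigma)[:, None] * margins[None, :]
print("tightened bounds at k = K:", np.round(b_tight[-1], 4))
```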
At each time k∈𝕀_≥ 0, we solve the following optimization problem using the measured state x(k) and the nominal state z(k): min_𝐮_k:k+N-1|k∈𝕌^N  𝒥_N(𝐱_k:k+N|k,𝐮_k:k+N-1|k) s.t. 𝐳_k|k=z(k), 𝐳_i+1|k=f(𝐳_i|k,𝐮_i|k,0), 𝐱_k|k=x(k), 𝐱_i+1|k=f(𝐱_i|k,𝐮_i|k,0), 𝐳_i|k∈𝕏̅_i:=𝕏⊖𝔻_i, 𝐳_k+N|k∈𝕏_f, i∈𝕀_[k,k+N-1]. This problem uses a finite (receding) horizon N∈𝕀_≥ 1 and a later specified closed terminal set 𝕏_f⊆𝕏. The closed-loop system is given by applying the minimizer[ For any x(k),z(k)∈ℝ^n, a minimizer exists since the cost is a continuous function of 𝐮_k:k+N-1|k∈𝕌^N and 𝕌 is compact. ] u(k)=𝐮^⋆_k|k to the true system (<ref>) and the nominal dynamics (<ref>). Analogous to indirect-feedback SMPC <cit.>, the tightened constraints (<ref>) and the terminal constraint (<ref>) are posed on the nominal prediction 𝐳 (<ref>), which is initialized in (<ref>) independent of the new measured state x(k). The measured state x(k) is used in a separate certainty-equivalent state prediction (<ref>)–(<ref>), which determines the cost (<ref>). The terminal set 𝕏_f⊆𝕏 and terminal cost V_f need to be chosen appropriately according to the following conditions <cit.>. (Terminal set and cost) There exists an input u_f∈𝕌, such that: * (Positive invariance) f(x,u_f,0)∈𝕏_f ∀ x∈𝕏_f; * (Constraint satisfaction) 𝕏_f⊆𝕏̅_k=𝕏⊖𝔻_k, k∈𝕀_≥ 0; * (Lyapunov) V_f(f(x,u_f,0))≤ V_f(x)-ℓ(x,u_f) ∀ x∈ℝ^n. In Appendix <ref>, we show how to constructively satisfy Assumption <ref>. The restriction to a constant terminal input u_f is a consequence of the optimization over open-loop input sequences (cf., e.g., <cit.>), see also Section <ref> for relaxations. In order to derive bounds on the closed-loop cost, we additionally consider the following standard regularity conditions. (Regularity conditions) The cost is given by ℓ(x,u)=x_Q^2+u_R^2, V_f(x)=x_P^2 with Q,R,P≻ 0. The origin is an equilibrium, i.e., f(0,0,0)=0, and 0∈𝕌. The dynamics f are Lipschitz continuous. The following theorem derives the closed-loop properties of the proposed SMPC formulation. Let Assumptions <ref>–<ref> hold and suppose that Problem (<ref>) is feasible at k=0. Then, Problem (<ref>) is feasible and the chance constraints (<ref>) are satisfied for the resulting closed-loop system for all k∈𝕀_≥ 0. Furthermore, there exists a function σ∈𝒦_∞, such that the closed-loop cost satisfies the following bound: lim sup_T→∞𝔼_w(0:T-1)[1T∑_k=0^T-1ℓ(x(k),u(k))]≤σ(Σ_w). Recursive feasibility: Given an optimal input sequence 𝐮^⋆_k:k+N-1|k at some time k∈𝕀_≥ 0, denote 𝐮^⋆_k+N|k=u_f∈𝕌 and consider the candidate input sequence 𝐮_i|k+1=𝐮^⋆_i|k, i∈𝕀_[k+1,k+N]. The corresponding nominal state sequence is given by 𝐳_i|k+1=𝐳_i|k^⋆, i∈𝕀_[k+1,k+N] using u(k)=𝐮_k|k^⋆, (<ref>), (<ref>), and (<ref>). The noise w(k) only affects the state x(k+1) and hence 𝐱_·|k+1, which is not subject to any constraints. Feasibility of the candidate solution and thus recursive feasibility follows with standard nominal MPC arguments <cit.> using Assumptions <ref> <ref>–<ref>, 𝐳_k+N|k+1=𝐳_k+N|k^⋆∈𝕏_f⊆𝕏̅_k+N, and 𝐳_k+N+1|k+1=f(𝐳_k+N|k^⋆,u_f,0)∈𝕏_f. Closed-loop chance constraint satisfaction: Consider the chance constraint (<ref>) for some k∈𝕀_≥ 0 and z(k)=𝐳_k|k^⋆∈𝕏̅_k=𝕏⊖𝔻_k from Problem (<ref>). Assumption <ref> ensures ℛ_k(x_0,u(0:k-1))(<ref>)⊆{z(k)}⊕𝔻_k(<ref>),(<ref>)⊆𝕏̅_k⊕𝔻_k(<ref>)⊆𝕏. The inputs u(0:k-1) generated by the SMPC (<ref>) correspond to a causal policy and hence Assumption <ref> yields x(k)∈𝕏≥x(k)∈ℛ_k(x_0,u(0:k-1))(<ref>)≥ p, i.e., the chance constraint (<ref>) holds. 
Cost bound: This proof consistent of three steps: (i) deriving a bound on the one step decrease using the candidate solution; (ii) bounding the optimal cost of Problem (<ref>) for any x(k)∈ℝ^n; (iii) deriving the asymptotic expected cost bound (<ref>). (i): We denote the state sequence satisfying (<ref>)–(<ref>) with the candidate input 𝐮_·|k+1 by 𝐱_·|k+1 and define 𝐱_k+N+1|k^⋆=f(𝐱_k+N|k^⋆,𝐮^⋆_k+N|k,0). Lipschitz continuity (Asm. <ref>) of the dynamics (<ref>)–(<ref>) and (<ref>) implies 𝐱_k+1|k+1-𝐱_k+1|k^⋆≤ L_fw(k), 𝐱_i|k+1-𝐱_i|k^⋆≤ L_f^i-kw(k), i∈𝕀_[k+1,k+N+1], with Lipschitz constant L_f≥ 0. For any x,y∈ℝ^n, any positive semi-definite matrix S∈ℝ^n× n, and any ϵ>0: 11+ϵx+y_S^2≤x_S^2+1ϵy_S^2. Hence, for any ϵ>0, the quadratic cost (Asm. <ref>) satisfies 11+ϵℓ(𝐱_i|k+1,𝐮_i|k+1)≤ℓ(𝐱_i|k^⋆,𝐮_i|k^⋆)+1/ϵQ̅ L_f^i-kw(k)^2, 11+ϵV_f(𝐱_k+N+1|k+1)≤ V_f(𝐱_k+N+1|k^⋆)+1/ϵP̅L_f^N+1w(k)^2, where P̅≥Q̅>0 are the maximal eigenvalues of P,Q, respectively. Let us denote the optimal cost of Problem (<ref>) at time k by 𝒥_N^⋆(k)=𝒥_N(𝐱_k:k+N|k^⋆,𝐮_k:k+N-1|k^⋆). The feasible candidate solution implies 11+ϵ𝒥_N^⋆(k+1) ≤ 11+ϵ𝒥_N(𝐱_k+1:k+N+1|k+1,𝐮_k+1:k+N|k+1) ≤ ∑_i=k+1^k+Nℓ(𝐱_i|k^⋆,𝐮_i|k^⋆)+V_f(𝐱_k+N+1|k^⋆) +1ϵw(k)^2(∑_i=k+1^k+NQ̅L_f^i-k +P̅ L_f^N+1) Asm. <ref><ref>≤ ∑_i=k+1^k+N-1ℓ(𝐱_i|k^⋆,𝐮_i|k^⋆)+V_f(𝐱_k+N|k^⋆) +1ϵw(k)^2(∑_i=1^NQ̅ L_f^i +P̅L_f^N+1)_=:c_𝒥 = 𝒥_N^⋆(k)-ℓ(x(k),u(k))+c_𝒥ϵw(k)^2, with x(k)=𝐱_k|k^⋆, u(k)=𝐮_k|k^⋆. (ii): Next, we derive an upper bound on 𝒥_N^⋆(k) for any x(k)∈ℝ^n, using feasibility of Problem (<ref>). Notably, the feasibility of Problem (<ref>) does not imply a uniform bound on the state x(k), as constraints are only imposed on the nominal prediction 𝐳. This is also the main reason why existing techniques, such as <cit.>, cannot be leveraged to derive bounds on 𝒥_N^⋆(k). We define an auxiliary input sequence ũ_i|k=u_f∈𝕌, i∈𝕀_[k,k+N-1], with the corresponding state sequence 𝐱̃_i|k according to (<ref>)–(<ref>). The cost of this sequence satisfies 𝒥_N(𝐱̃_·|k,ũ_·|k)≤ V_f(𝐱̃_0|k)=x(k)_P^2 using the global properties of the terminal cost (Asm. <ref> <ref>) recursively. Although this input sequence is not feasible solution to Problem (<ref>), we can leverage it to bound the cost of the optimal solution. Using the bound u̅=max_u∈𝕌u_f-u and Lipschitz continuity, we have 𝐱^⋆_k+i|k-𝐱̃_k+i|k≤∑_j=0^i-1 L_f^i-j𝐮_k+j|k^⋆-u_f≤u̅∑_j=0^i-1 L_f^i-j. Applying (<ref>) with ϵ=1, we can bound the cost of the optimal sequence by ℓ(𝐱^⋆_i|k,𝐮_i|k)≤ 2ℓ(𝐱̃_i|k,ũ_i|k)+2Q̅u̅^2∑_j=0^i-1 L_f^2(i-j), V_f(𝐱^⋆_k+N|k)≤ 2V_f(𝐱̃_k+N|k)+2P̅u̅^2∑_j=0^N-1 L_f^2(i-j). Thus, the optimal cost satisfies 𝒥_N^⋆(k)=𝒥_N(𝐱^⋆_·|k,𝐮^⋆_·|k) ≤ 2𝒥_N(𝐱̃_·|k,ũ_·|k) +2u̅^2max{Q̅,P̅}∑_i=0^N∑_j=0^i-1 L_f^2(i-j)_=:c_𝒥,2 (<ref>)≤ 2 x(k)_P^2+c_𝒥,2Asm. <ref>≤ c_ℓℓ(x(k),u(k))+c_𝒥,2, with c_ℓ:=2P̅/Q>1, where Q>0 is the smallest eigenvalue of Q≻ 0. (iii): Inequalities (<ref>) and (<ref>) yield 𝒥_N^⋆(k+1)-𝒥_N^⋆(k) (<ref>)≤ ϵ𝒥_N^⋆(k)-(1+ϵ)ℓ(x(k),u(k))+1+ϵϵc_𝒥w(k)^2 (<ref>)≤ -ℓ(x(k),u(k))(1-ϵ c_ℓ) + ϵ c_𝒥,2 + 1+ϵϵc_𝒥w(k)^2. Consider ϵ=min{√(Σ_w),1/2c_ℓ}∈(0,1), which satisfies (1-ϵ c_ℓ)≥1/2, ϵ≤√(Σ_w), and 1+ϵϵΣ_w≤2ϵΣ_w=2max{√(Σ_w),Σ_w/2 c_ℓ}. This yields 𝔼_w(k)[𝒥_N^⋆(k+1)-𝒥_N^⋆(k)+1/2ℓ(x(k),u(k))| w(0:k-1)] (<ref>)≤ 𝔼_w(k)[ϵ c_𝒥,2 +1+ϵϵc_𝒥w(k)^2 | w(0:k-1)] Asm. <ref>≤ ϵ c_𝒥,2 +1+ϵϵc_𝒥Σ_w (<ref>)≤ c_𝒥,2√(Σ_w)+2c_𝒥max{√(Σ_w),Σ_w/2 c_ℓ} =: σ̃(Σ_w), σ̃∈𝒦_∞. Applying the law of iterated expectation for k∈𝕀_[0,T-1] yields 0 ≤𝔼_w(0:T-1)[𝒥_N^⋆(T)] ≤ 𝒥_N^⋆(0)+Tσ̃(Σ_w)-1/2𝔼_w(0:T-1)[∑_k=0^T-1ℓ(x(k),u(k))], where the first inequality used non-negativity of the cost. 
Given x(0)=z(0)∈ℝ^n, f Lipschitz continuous, 𝕌 compact, and 𝒥_N quadratic, we have 𝒥_N^⋆(0)<∞. Thus, dividing by T and taking the limit T→∞ yields (<ref>) with σ=2σ̃∈𝒦_∞. Overall, Theorem <ref> provides all the desired closed-loop guarantees: recursive feasibility <ref>, satisfaction of chance constraints <ref>, and a bound on the expected cost <ref>. The fact that the asymptotic expected cost scales with the variances mirrors existing results in SMPC <cit.>, see Section <ref> for a more detailed discussion. In the next section, we discuss how to design the PRS (Asm. <ref>–<ref>). § PROBABILISTIC REACHABLE SETS USING STOCHASTIC CONTRACTION METRICS In this section, we show how to construct a PRS satisfying Assumptions <ref>-<ref> using contraction metrics. First, we use stochastic contraction metrics to bound the expected prediction error (Sec. <ref>) and then derive a PRS (Sec. <ref>). §.§ Stochastic contraction metrics Contraction metrics utilize conditions on the Jacobian of the nonlinear dynamics f to ensure incremental system properties. Hence, we restrict our attention to continuously differentiable dynamics f that are linear in the (unbounded) noise w. The dynamics are linear in w, i.e., f(x,u,w)=f(x,u,0)+Gw with some constant matrix G∈ℝ^n× q. Furthermore, f is continuously differentiable. We denote the Jacobian of the dynamics f w.r.t. x by A(x,u):=.∂ f/∂ x|_(x,u,0). The following theorem derives bounds on the expected error using stochastic contraction metrics. Let Assumptions <ref> and <ref> hold. Consider a state-dependent matrix M:ℝ^n→ℝ^n× n, which satisfies d/dw M(x+Gw)=0 ∀ x∈ℝ^n, w∈ℝ^q. Suppose there exist positive definite matrices M̲,M̅∈ℝ^n× n, a contraction rate ρ∈[0,1), and a constant w̅≥ 0, such that the following conditions hold for all x∈ℝ^n, u∈𝕌: M̲≼ M(x)≼M̅, A(x,u)^⊤ M(f(x,u,0)) A(x,u)≼ρ M(x), Σ_wG^⊤M̅ G≤w̅. Then, there exists an incremental Lyapunov function V_δ:ℝ^n×ℝ^n→ℝ, such that for any x,z∈ℝ^n, u∈𝕌, k∈𝕀_≥ 0: x-z_M̲^2≤ V_δ(x,z)≤ x-z_M̅^2, 𝔼_w(k)[V_δ(f(x,u,w(k)),f(z,u,0))]≤ ρ V_δ(x,z)+w̅. We first define V_δ and show the bounds (<ref>) before deriving the decrease condition (<ref>). Part (i): Let us denote the set of piece-wise smooth curves γ:[0,1]→ℝ^n that satisfy γ(0)=z and γ(1)=x by Γ(z,x). We define the incremental Lyapunov function V_δ as the Riemannian energy corresponding to M(x), i.e., V_δ(x,z):=min_γ∈Γ(z,x)∫_0^1 .∂γ/∂ s|_s^⊤ M(γ(s)) .∂γ/∂ s|_s ds. A minimizer is called a geodesic γ^⋆, which exists due to the uniform lower bound (<ref>), see <cit.>. Inequalities (<ref>) follow from (<ref>) using standard arguments, see, e.g., <cit.>. Part (ii): Denote x^+=f(x,u,w(k)), z^+=f(z,u,0). We derive a bound on V_δ(x^+,z^+) using the candidate curve γ^+(s)=f(γ^⋆(s),u,γ_w(s)) with γ_w(s)=s· w(k). With some abuse of notation, we abbreviate A(s)=A(γ^⋆(s),u),  M(s)=M(γ^⋆(s)),  M^+(s)=M(γ^+(s)). We denote the derivative of the geodesic by γ^⋆_s(s)=.∂γ^⋆/∂ s|_s. The derivative of the candidate curve γ^+ satisfies γ_s^+(s):=.∂γ^+/∂ s|_s=A(s)γ^⋆_s(s)+G w(k). Note that γ^⋆∈Γ(z,x) implies that γ^+∈Γ(z^+,x^+) and hence the energy of the curve γ^+ provides an upper bound to V_δ(x^+,z^+) according to (<ref>). Thus, we have: 𝔼_w(k)[V_δ(x^+,z^+)] ≤𝔼_w(k)[∫_0^1 γ^+_s(s)^⊤ M^+(s) γ^+_s(s) ds] (<ref>)= 𝔼_w(k)[∫_0^1 A(s)γ^⋆_s(s)+G w(k)_M^+(s)^2 ds] = 𝔼_w(k)[∫_0^1 A(s)γ^⋆_s(s)_M^+(s)^2 +G w(k)_M^+(s)^2 +2(A(s)γ^⋆_s(s))^⊤ M^+(s)G w(k) ds]. Next, we bound each of the three terms individually.
Condition (<ref>) ensures that M^+(s) is independent of w(k), which will be crucial when taking the expectation later. For the first term, it holds that 𝔼_w(k)[∫_0^1A(s)γ^⋆_s(s)_M^+(s)^2 ds] (<ref>)≤ 𝔼_w(k)[∫_0^1 ργ^⋆(s)_M(s)^2ds](<ref>)=ρ V_δ(x,z). For the second term, we get 𝔼_w(k)[∫_0^1Gw(k)_M^+(s)^2 ds] (<ref>)≤𝔼_w(k)[∫_0^1Gw(k)_M̅^2 ds] = 𝔼_w(k)[∫_0^1w(k)w(k)^⊤ G^⊤M̅ Gds] = ∫_0^1𝔼_w(k)[w(k)w(k)^⊤] G^⊤M̅ Gds Asm. <ref>≤ ∫_0^1 Σ_w G^⊤M̅Gds(<ref>)≤∫_0^1 w̅ds=w̅. where the first equality used the cyclic property of the trace and the second equality used linearity of the trace operator. For the last term, we get 𝔼_w(k)[∫_0^1 2 (A(s)γ^⋆_s(s))^⊤ M^+(s) G w(k) ] ds (<ref>) = ∫_0^1 2 (A(s)γ^⋆_s(s))^⊤ M^+(s) G 𝔼_w(k)[w(k)] d s Asm. <ref>= 0, where the first equality leverages the fact that M^+(s) is independent of w(k) due to (<ref>). In combination, these three bounds yield (<ref>). Condition (<ref>) can be satisfied by choosing a suitable parametrization of M(x). This ensures that the posed conditions (<ref>)–(<ref>) are independent of w, which is crucial to enable tractable designs. In Appendix <ref>, we provide additional details on the computation of a suitable contraction metric M satisfying (<ref>) using linear matrix inequalities (LMIs). The following corollary highlights how the result simplifies in the special case of a constant contraction metric M. Let Assumptions <ref> and <ref> hold. Suppose there exists a positive definite matrix M∈ℝ^n× n, a contraction rate ρ∈[0,1), and a constant w̅≥ 0, such that the following conditions hold for all x∈ℝ^n, u∈𝕌: A(x,u)^⊤ M A(x,u)≼ ρ M(x), Σ_wG^⊤ M G≤ w̅. Then, Conditions (<ref>) hold with V_δ(x,z)=x-z_M^2. Conditions (<ref>) and (<ref>) hold trivially with M̅=M=M. Furthermore, Conditions (<ref>) are equivalent to (<ref>)–(<ref>) for M constant. Lastly, the constant metric M ensures that the geodesic (shortest path) is a straight line, i.e., γ^⋆(s)=z+s(x-z). Hence, the Riemannian energy (<ref>) reduces to the weighted norm x-z_M^2. (Related results on contraction metrics) For nonlinear dynamics with bounded model-mismatch, robust contraction metrics can be leveraged to derive robust reachable sets around nominal trajectories z <cit.>. Compared to Theorem <ref>, these conditions leverage worst-case bounds instead of the variance bound Σ_w (cf. (<ref>) vs. <cit.>). The proposed analysis uses a killing condition (<ref>) to ensure that the posed conditions are independent of the unbounded stochastic noise w(k). Similar killing conditions are leveraged in existing approaches to avoid dependence on the control input u (cf. <cit.>). The key difference in the result is the establishment of deterministic bound for bounded model-mismatch compared to the developed expected bound for unbounded stochastic process noise. Considering results for stochastic contraction metrics: For continuous-time systems and state dependent metrics M(x), bounds on the expected error are derived in <cit.> and  <cit.>. These results relax (<ref>) with bounds on the derivative of M(x). These derivations rely on the fact that the continuous-time contraction conditions are linear in w and hence cannot be generalized to the considered discrete-time setting. For discrete-time stochastic systems, <cit.> derives a valid expected bound of the form (<ref>) without requiring Condition (<ref>). However, the result can only ensure contraction of the nonlinear system if the condition number of M is sufficiently small, thus limiting practical applicability. 
Discrete-time stochastic contraction metrics are also studied in the preprint <cit.>. However, the results do not apply to state-dependent metrics M(x) due to the induced correlation with M(f(x,u,w)), which we resolved through Condition (<ref>) and Assumption <ref>. Overall, results for stochastic contraction metrics in discrete time comparable to Theorem <ref> are lacking in the literature, see also the overview paper <cit.>. §.§ Probabilistic reachable set In the following, we derive a PRS (Def. <ref>) using the expected bounds of the stochastic contraction metrics (Thm. <ref>). Due to the nonlinear dynamics, the error x(k)-z(k) does not follow any known distribution and is not even zero-mean. Hence, many standard inequalities from the literature on linear SMPC, such as the Chebyshev inequality <cit.>, cannot be applied. The following theorem provides a valid PRS by combining the stochastic contraction metrics (Thm. <ref>) with the Markov inequality. Suppose the conditions in Theorem <ref> hold. Then, Assumptions <ref>–<ref> hold with 𝔻_k={x| x_M^2≤σ_x,k}, σ_x,k:=1-ρ^k/1-ρw̅/1-p ℛ_k(x_0,u(0:k-1))=z(k)⊕𝔻_k, k∈𝕀_≥ 0, with z(k) according to (<ref>). Given Definition <ref>, we consider an arbitrary causal policy u(k)=π_k(w(0:k-1))∈𝕌, k∈𝕀_≥ 0 with the stochastic dynamics (<ref>) and the nominal dynamics (<ref>). Define δ(k):=V_δ(x(k),z(k)), k∈𝕀_≥ 0. Given the fixed initial condition x(0)=x_0, the noise w(0:k-1) uniquely determines x(0:k), z(0:k), u(0:k), δ(0:k), leveraging causality of the policies π_k and the dynamics (<ref>), (<ref>). For any k∈𝕀_≥ 0, the law of iterated expectation ensures 𝔼_w(0:k)[δ(k+1)] = 𝔼_w(0:k-1)[𝔼_w(k)[δ(k+1)| w(0:k-1)]] = 𝔼_w(0:k-1)[𝔼_w(k)[δ(k+1)| x(k),z(k)]] (<ref>)≤ 𝔼_w(0:k-1)[ρδ(k)+w̅] ≤ …≤ρ^k+1δ(0)+∑_i=0^kρ^iw̅=1-ρ^k+1/1-ρw̅, where the last equality uses the geometric series and δ(0)=0 due to the fixed initialization in (<ref>)/(<ref>). The Markov inequality <cit.> with δ(k) non-negative ensures that δ(k)≤𝔼_w(0:k-1)[δ(k)]/1-p≥ p, k∈𝕀_≥ 0. Combining (<ref>) and (<ref>) yields δ(k)≤1-ρ^k/1-ρw̅/1-p≥ p. Furthermore, the lower bound (<ref>) yields x(k)-z(k)_M^2≤1-ρ^k/1-ρw̅/1-p≥ p. Thus, ℛ_k(x,u(0:k-1)) in (<ref>) is a PRS according to Definition <ref>, i.e., Assumption <ref> holds. Assumption <ref> follows due to the structure of ℛ_k in (<ref>). Theorem <ref> provides a simple recursive formula to compute a probabilistic reachable set using a nominal simulation z(k) and a scaled ellipsoid 𝔻_k. This result is valid for inputs computed by any causal policy and thus also for the inputs generated by the SMPC scheme. We highlight that the shape of the PRS ℛ_k computed through stochastic contraction metrics is analogous to the robust reachable sets computed through robust contraction metrics in <cit.>. The main difference is that w̅ is proportional to the variance of the stochastic noise w(k), while these existing works leverage uniform norm bounds on the noise w(k). § DISCUSSION In this section, we summarize the overall design and discuss the relation to existing work, computational complexity, and closed-loop predictions. §.§.§ Overall algorithm The overall offline design and online implementation are summarized in Algorithm <ref>. The following corollary demonstrates that this constructively satisfies the posed conditions. Suppose Assumptions <ref>, <ref>, and <ref> hold, and the state constraints are given by the polytope (<ref>). Consider any feasible solution (W,X,ρ) to Problem (<ref>) in Appendix <ref> with X≤ (1-ρ)(1-p).
Then, M=W^-1, w̅=X, u_f=0, V_f=c_fx_M^2, 𝕏_f={x| x_M^2≤α_f} with c_f>0, α_f≥ 0 from Proposition <ref> and ℛ_k, 𝔻_k from Theorem <ref> satisfy Assumptions <ref>, <ref>, and <ref>. Proposition <ref> with (W,X,ρ) from Problem (<ref>) ensures that the conditions in Corollary <ref> hold and thus Assumptions <ref> and <ref> hold. Furthermore, the posed bound on X ensures that 0∈𝕏̅_k, k∈𝕀_≥ 0. Thus, the conditions in Proposition <ref> hold and the resulting terminal cost and set satisfy Assumption <ref>. §.§.§ Relation to existing SMPC approaches The online operation (Alg. <ref>) and SMPC formulation (Problem (<ref>)) mirror the indirect-feedback SMPC paradigm <cit.>: The constraints are imposed on the nominal state z(k), while the measured state x(k) is only used in the cost function. However, the technical derivation and analysis of the proposed approach differs significantly. In <cit.>, linearity of the dynamics is leveraged to define error dynamics which evolve completely independent of the SMPC, thus facilitating the construction of PRS for the error. On the other hand, the presented analysis leverages parametrized PRS that are valid for any causal policy generated by the SMPC and we showed how these PRS can be constructed using stochastic contraction metrics. This relaxation of the independence assumption expands the applicability of this paradigm to more general nonlinear stochastic optimal control problems, while still containing the approach <cit.> as a trivial special case. Considering existing nonlinear SMPC formulations with closed-loop guarantees: In contrast to <cit.>, the proposed approach does not leverage worst-case bound on the model mismatch, thus reducing conservatism and ensuring applicability to unbounded stochastic noise. Compared to <cit.>, the proposed approach avoids computationally expensive sampling-based approximations of the PRS and ensures computational tractability by adopting a receding-horizon formulation with nominal predictions. §.§.§ Expected cost bounds Theorem <ref> shows that the asymptotic expected cost is bounded by a function of the variance, which is comparable to the results in <cit.> and <cit.>. The proposed analysis utilizes the specific structure of the quadratic cost and the (global) properties of the terminal cost to derive this expected cost bound. Compared to <cit.>, this analysis does not rely on uniform bounds on the state x(k), nor does it require the optimal cost to be only a function of the state x(k). While <cit.> minimize the expected cost, the proposed formulation and analysis (Sec. <ref>) considers a certainty equivalent cost, which avoids computationally intensive sampling-based approximations. As shown in <cit.>, the benefits of minimizing the expected cost compared to a certainty equivalent cost are often negligible. §.§.§ Computational complexity The main offline design concerns the computation of a contraction metric M using LMIs. This is comparable to existing nonlinear robust MPC designs <cit.>, see Appendix <ref> for details. Considering the online computation complexity, Problem (<ref>) can be equivalently written as a nominal/standard MPC problem with state (x,z)∈ℝ^2n and input u∈ℝ^m. §.§.§ Closed-loop predictions In the following, we discuss the generalization to closed-loop predictions with relaxed probabilistic input constraints u(k)∈𝕌≥ p, k∈𝕀_≥ 0 (cf. Remark <ref>). 
Suppose we have designed a feedback u=κ(x,z,v), such that for any x,z∈ℝ^n, v∈𝕌: 𝔼_w(k)[V_δ(f(x,κ(x,z,v),w),f(z,v,0))]≤ρ V_δ(x,z)+w̅, κ(x,z,v)-v^2≤ L_κx-z_M^2, L_κ≥ 0. Such a feedback can be jointly synthesized with the contraction metric M by adapting Theorem <ref> and Proposition <ref> from Appendix <ref>, see control contraction metrics <cit.>. Similar to Theorem <ref>, for any causal policy v(k)=π_k(w(0:k-1)), the closed-loop system z(k+1)= f(z(k),v(k),0), z(0)=x(0)=x_0, x(k+1)= f(x(k),u(k),w(k)), u(k)=κ(x(k),z(k),v(k)) satisfies |x(k)-z(k)_M^2≤σ_x,k, u(k)-v(k)^2≤ L_κσ_x,k≥ p with σ_x,k from (<ref>). Hence, closed-loop chance constraints on state and input can be efficiently enforced by imposing tightened constraints on the nominal state z(k) and input v(k). Considering for simplicity 𝕏_f={0}, M∈ℝ^n× n and Assumption <ref>. Then, the expected cost bound from (<ref>) also holds by choosing a suitable terminal cost V_f=c_fx_M^2 and leveraging (<ref>). Overall, by including the feedback κ in the prediction, we can deal with unstable systems subject to unbounded stochastic noise and in general reduce the conservatism of the PRS. § NUMERICAL EXAMPLE This numerical example demonstrates the applicability of the proposed framework to nonlinear stochastic systems. The code is available online.[https://gitlab.ethz.ch/ics/SMPC-CCM] §.§ Setup We consider a chain of three mass-spring-dampers with nonlinear Coulomb friction, shown in Figure <ref>. The system has six states x(x)∈ℝ^6, is under-actuated with a scalar control input u(k)∈ℝ^1, has nonlinear dynamics, and process noise w(k)∈ℝ^3 corresponding to unpredictable wind forces. The model constants are mass of 5 [kg], spring constant of 2 [N/m], and damping constant of 1 [N s/m]. The linearized system is under-damped with the slowest mode having a time constant of 50 [s] and fast oscillations at a frequency of about 1 [Hz]. The noise w has a covariance of Σ_w=10^-3· I_3 with a later specified distribution. The model is discretized using Euler with a sampling time of dt=0.25 [s]. The actuator force is bound by 𝕌=[-10^2,10^2] [N]. The state constraints 𝕏 are a polytope and encode a maximal velocity of 2 [m/s] and a minimal distance between all masses and the wall to avoid undesirable collisions[ The equilibrium (x,u)=0 corresponds to a fixed positive length of the springs and collision avoidance requires that this length is at most compressed by 1 [m], which are linear inequality constraints on the state x.]. The chance constraints (<ref>) are enforced with a probability of p=95%. §.§.§ Offline design The offline design is performed according to Corollary <ref> with a contraction rate of ρ≈exp(-dt/50). Executing the overall offline design (Alg. <ref>) required about two seconds on a standard laptop, where the LMIs were formulated with YALMIP <cit.> and solved using SeDuMi1.3 and Matlab. §.§ Probabilistic reachable set First, we study the proposed PRS design using stochastic contraction metrics (Sec. <ref>), verifying probabilistic containment and studying conservatism. We consider a zero initial condition. Two input signals u are taken into consideration: * periodic forcing with amplitude 1 [kN] and period 12.5 [s], operating mostly in the saturated region of the Coulomb friction (cf. Fig. <ref>); * zero input, operating close to the origin with almost linear dynamics. 
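For reference, a minimal Python sketch of the discretized benchmark dynamics is given below. Only the mass, spring and damping constants, sampling time, and noise covariance are taken from the description above; the chain topology (first mass attached to a wall), the choice of actuated mass, and the Coulomb friction magnitude and smoothing are assumptions made purely for illustration.

```python
import numpy as np

# Constants quoted in the example; friction level/smoothing, topology and the
# actuated mass are assumptions for illustration only.
m, k_s, c_d, dt = 5.0, 2.0, 1.0, 0.25
F_c, eps = 2.0, 0.1                     # assumed Coulomb force [N] and smoothing
Sigma_w = 1e-3 * np.eye(3)

def f(x, u, w):
    """Euler-discretized chain of three mass-spring-dampers with smoothed
    Coulomb friction. x = [p1, p2, p3, v1, v2, v3], scalar input u (force on
    the first mass), w in R^3 are wind-force disturbances on the masses."""
    p, v = x[:3], x[3:]
    dp = np.array([p[0], p[1] - p[0], p[2] - p[1]])   # element elongations
    dv = np.array([v[0], v[1] - v[0], v[2] - v[1]])
    f_el = -k_s * dp - c_d * dv                       # element forces
    F = np.array([f_el[0] - f_el[1],                  # net force on each mass
                  f_el[1] - f_el[2],
                  f_el[2]])
    F += np.array([u, 0.0, 0.0]) + w                  # actuation + wind forces
    F -= F_c * np.tanh(v / eps)                       # smoothed Coulomb friction
    return np.concatenate([p + dt * v, v + dt * F / m])

# one simulated step under Gaussian noise with the quoted covariance
rng = np.random.default_rng(0)
x_next = f(np.zeros(6), 1.0, rng.multivariate_normal(np.zeros(3), Sigma_w))
```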
We examine five different noise distributions 𝒬_w with zero mean and covariance Σ_w: * Gaussian w∼𝒩(0,Σ_w); * discrete-distributions[ For each coordinate i=1,2,3, w_i∈{-c,0,c} with c such that 𝔼_w[ww^⊤]=Σ_w. These distribution are the (point-wise) worst-case when applying the Markov inequality to bound |w|^2 using only 𝔼[w^2].] with w(k)=0∈{99%,99.5%,99.9%,99.95%,99.99%}. Figure <ref> shows the resulting prediction error and probabilistic containment compared to the bounds derived in Theorem <ref>. For the expected squared error, we see a large disparity (factor 50) between the two different input signals, which highlights the significant effect of the nonlinearities. This also implies that naïve tuning of constraint tightening based on offline sampling (cf., e.g., <cit.>) may not generalize well for nonlinear dynamics. As expected from Theorem <ref>, the derived results provides a valid bound independent of the specific input u applied to the system or the exact distribution 𝒬_w. For the containment probability x(k)∈ℛ_k, we only focus on the more critical case of periodic forcing. Here, the exact distribution w(k)∼𝒬_w has also a significant effect. Nevertheless, in accordance with Theorem <ref>, the empirical containment is always above the set specification of p=95%, highlighting the distributional robustness of the derived result. §.§ Closed-loop simulations The following closed-loop simulations consider the proposed SMPC formulation with Problem (<ref>) and a prediction horizon of N=15. The initial condition x(0) is zero, except for the position of last mass, which is moved by 10 [m]. Statistical results are obtained by simulating the closed-loop system for 10^5 different noise realizations following the discrete distribution (iv) from Figure <ref>. Figure <ref> shows the closed-loop results. The expected cost decreases during closed-loop operation and converges to a small constant, in accordance with Theorem <ref>. The variations in the applied input highlight how the indirect-feedback paradigm generates feedback through re-optimization. Lastly, although we operate close to the constraints, constraint violations are rare and the posed chance constraints (<ref>) are met with max_k x(k)∈𝕏=99.83%> p=95%. Considering the online computational complexity: Solving the proposed nonlinear SMPC formulation (<ref>) with zero warm-start took on average 25 [ms] using IPOPT <cit.> formulated in CasADi <cit.> in Matlab on a standard laptop. This is around 37% longer than a nominal MPC (x=z), which took 18 [ms] on average. §.§.§ Summary Overall, we have applied the proposed stochastic predictive control framework to a nonlinear system subject to unbounded noise. The statistics of the prediction error depend strongly on the applied inputs and exact distribution of the noise. The derive theory provides bounds on the expected error and probabilistic containment that hold uniformly for all possible input sequences and all distributions adhering to the assumed covariance bound (Asm. <ref>). Thus, the closed-loop system satisfies the posed chance constraints (<ref>). Furthermore, the proposed formulation is recursively feasible and the expected cost decreases to a small constant. The online computational complexity of the proposed SMPC scheme is only marginally increased compared to a nominal formulation, which is crucial for implementation on embedded hardware. § CONCLUSION We presented a computationally tractable predictive control framework for nonlinear systems subject to unbounded stochastic noise. 
The resulting closed-loop system ensures recursive feasibility, chance constraint satisfaction, and an expected cost bound. The design leverages stochastic contraction metrics to design probabilistic reachable sets. We demonstrated practical applicability with a numerical example involving nonlinear dynamics and unbounded stochastic noise. Future work aims at designing less conservative probabilistic reachable sets by leveraging more distributional information. In this appendix, we provide details regarding the computation of the tightened constraints (App. <ref>); the contraction metric M (App. <ref>), and the terminal set/cost (App. <ref>). §.§ Constraint tightening The following proposition provides a simple formula for the constraint tightening in case of polyotpic state constraints of the form: 𝕏={x| h_j^⊤ x≤ 1, j∈𝕀_[1,p]}, h_j∈ℝ^n, j∈𝕀_[1,p]. Suppose the conditions from Theorem <ref> hold and the state constraint set is given by (<ref>). Then, for all k∈𝕀_≥ 0, the tightened constraints (<ref>) are given by 𝕏̅_k=𝕏⊖𝔻_k={x| h_j^⊤ x≤ 1-c_j√(σ_x,k) j∈𝕀_[1,p]}, c_j=M^-1/2h_j, j∈𝕀_[1,p]. Recall the shape of the PRS from Theorem <ref>: 𝔻_k={e| e_M^2≤σ_x,k} ={M^-1/2ẽ| ẽ≤√(σ_x,k)}. The support function yields max_e∈𝔻_k h_j^⊤ e=√(σ_x,k)M^-1/2 h_j and the Pontryagin difference is given by (<ref>). While this construction uses polytopic constraints, similar inner-approximations can be constructed for continuously differentiable constraints following the derivation in <cit.>. §.§ Design of stochastic contraction metric In the following, we discuss the computation of a contraction metric M satisfying the conditions in Theorem <ref>. We first pose an optimization problem that minimize the resulting constraint tightening and then discuss how to formulate it as a finite-dimensional semi-definite program (SDP). For ease of exposition, we focus on the constant parametrization M∈ℝ^n× n from Corollary <ref> and comment on addressing state-dependent metrics M(x) at the end. §.§.§ Minimize constraint tightening In the following, we formulate LMIs that minimize the constraint tightening resulting from the PRS in Theorem <ref>. Suppose we have polytopic state constraints of the form (<ref>), Assumption <ref> holds, and consider the following optimization problem: inf_W,X,ρ  11-ρX s.t. [ W Wh_j; (Wh_j)^⊤ 1 ]≽ 0, j∈𝕀_[1,p], [ ρ W (A(x,u)W)^⊤; A(x,u)W W ]≽ 0, [ X ( GΣ_w^1/2)^⊤; GΣ_w^1/2 W ]≽ 0, W≻ 0, ρ∈[0,1), ∀ (x,u)∈ℝ^n×𝕌. Let Assumptions <ref> and <ref> hold. Consider any feasible solution (W,X,ρ) to Problem (<ref>). Then, the conditions in Corollary <ref> hold with M=W^-1, ρ, and w̅=X. Furthermore, if the state constraint set is given by (<ref>), then for all k∈𝕀_≥ 0: 𝕏_k⊇{x| h_j^⊤ x≤ 1-√(X/(1-ρ)(1-p)), j∈𝕀_[1,p]}. Part I: Applying a Schur complement to Condition (<ref>) yields X- Σ_w^1/2G^⊤ M GΣ_w^1/2≽ 0, where we used M=W^-1≻ 0. Given that C_1≥C_2 for any matrices C_1≽ C_2≽ 0, this implies w̅=X≥Σ_w^1/2G^⊤ M GΣ_w^1/2 =Σ_wG^⊤ M G, i.e., (<ref>) holds. The Schur complement of (<ref>) yields ρ W - W A(x,u)^⊤ M A(x,u)W≽ 0. Multiplying with M=W^-1≻ 0 from left and right yields (<ref>). Thus, the conditions in Corollary <ref> hold. Part II: Applying Schur complement to (<ref>) yields 0≤ 1-(W h_j)^⊤ W^-1 W h_j=1-W^1/2 h_j^2, which implies c_j= M^-1/2h_j≤ 1 with W^1/2=M^-1/2. Hence, the constraint tightening formula (<ref>) from Proposition <ref> with M=M satisfies c_j√(σ_x,k)≤√(w̅(1-ρ)(1-p)) = √(X(1-ρ)(1-p)), yielding the inner-approximation (<ref>). 
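As a concrete illustration of the proposition above, the following Python sketch evaluates the probabilistic-reachable-set scaling σ_x,k and the tightened halfspaces using c_j = ‖M^{-1/2}h_j‖ = (h_j^⊤ M^{-1} h_j)^{1/2}; the polytope and numerical values are illustrative only.

```python
import numpy as np

def sigma_x(rho, w_bar, p, k):
    """PRS scaling sigma_{x,k} = (1 - rho^k)/(1 - rho) * w_bar/(1 - p)."""
    return (1.0 - rho**k) / (1.0 - rho) * w_bar / (1.0 - p)

def tightened_polytope(H, M, rho, w_bar, p, k):
    """Tightened constraints {x : h_j^T x <= 1 - c_j sqrt(sigma_{x,k})} with
    c_j = ||M^{-1/2} h_j|| = sqrt(h_j^T M^{-1} h_j). H stacks the rows h_j^T
    of the state constraint polytope {x : H x <= 1}."""
    M_inv = np.linalg.inv(M)
    c = np.sqrt(np.einsum('ij,jk,ik->i', H, M_inv, H))
    b = 1.0 - c * np.sqrt(sigma_x(rho, w_bar, p, k))
    if np.any(b < 0.0):
        raise ValueError("empty tightened set: noise too large for (rho, p)")
    return H, b                      # tightened set {x : H x <= b}

# illustrative numbers only: box |x_i| <= 1 in R^2
H = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
Hk, bk = tightened_polytope(H, M=np.eye(2), rho=0.9, w_bar=1e-3, p=0.95, k=20)
```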
Inequality (<ref>) shows that the objective in Problem (<ref>) bounds the constraint tightening. Thus, Problem (<ref>) yields contraction metrics that minimize the constraint tightening in the SMPC formulation (<ref>), see also <cit.> for similar procedures in robust MPC. §.§.§ Finite-dimensional convex problem Next, we discuss how to solve Problem (<ref>). For a fixed constant ρ, Problem (<ref>) is an SDP with a linear cost and LMI constraints. Hence, ρ is computed using a line-search or bi-section, as similarly suggested in <cit.>. As is standard in contraction metrics <cit.>, Inequality (<ref>) needs to be verified for all (x,u)∈ℝ^n×𝕌, which is not directly computationally tractable. Standard solutions to this problem include heuristic gridding or sums-of-squares programming <cit.>. However, both approaches are difficult to apply for the considered global discrete-time conditions. Hence, we instead use a convex embedding <cit.>. In particular, we write the Jacobian as a linear combination of basis-functions A(x,u)=∑_i=1^n_θA_θ,iθ_i(x,u), with A_θ,i∈ℝ^n× n, θ(x,u)=[θ_1(x,u),…,θ_n_θ(x,u)]^⊤∈ℝ^n_θ. Then, we determine a polytope Θ⊆ℝ^n_θ, such that θ(x,u)∈Θ ∀ (x,u)∈ℝ^n×𝕌. Thus, we can replace ∀ (x,u)∈ℝ^n×𝕌 in Inequality (<ref>) with the sufficient condition ∀θ∈Θ. Since θ appears linearly in in Problem (<ref>), it suffices to verify the LMIs on the vertices of the set Θ, thus reducing the problem to a finite-dimensional SDP. For further details, see the theoretical derivations in <cit.> and the online available code. §.§.§ State-dependent contraction metric To address state-dependent contraction metrics M(x) two main steps are needed. First, a finite parametrization is needed, typically of the form W(x)=∑_iW_θ_W,iθ_W,i(x) with basis functions θ_W,i:ℝ^n→ℝ. Given that uniform bounds (<ref>) need to hold globally for all x∈ℝ^n, the basis functions θ_W,i(x) should be globally bounded, such as sigmoid functions. Condition (<ref>) restricts the parametrization to only use component of the state x that are not in the image of G∈ℝ^n× q. To ensure satisfaction of (<ref>) for all (x,u)∈ℝ^n×𝕌 extra care is required for the term M(f(x,u,0)). A typical approach is to derive a hyperbox Ω, such that θ_W(f(x,u,0))∈{θ_W(x))}⊕Ω. Then, the convex embeddings can again be utilized to formulate the problem as a finite-dimensional SDP <cit.>. §.§ Design of terminal cost/set The following proposition shows how to design the terminal set 𝕏_f and the terminal cost V_f. Suppose that Assumptions <ref> and <ref> hold, the conditions in Corollary <ref> are satisfied, and 𝔻_k⊆𝕏, k∈𝕀_≥ 0. Then, there exist constants α_f≥ 0,c_f> 0, such that Assumption <ref> holds with V_f(x)=x_P^2, P=c_fM, 𝕏_f={x| x_M^2≤α_f}, and u_f=0. Analogous to the proof of (<ref>) in Theorem <ref>, for any x∈ℝ^n: f(x,0,0)_M^2=f(x,0,0)-f(0,0,0)_M^2 ≤ ρx-0_M^2=ρx_M^2, where we used f(0,0,0)=0 (Asm. <ref>). The positive invariance condition (Asm. <ref> <ref>) holds since 𝕏_f is a sublevel set of the Lyapunov function x_M^2. Constraint satisfaction (Asm. <ref> <ref>) holds by choosing α_f≥ 0 as the largest constant satisfying 𝕏_f⊆𝕏̅_k, k∈𝕀_≥ 0, with α_f≥ 0 since 0∈𝕏̅_k=𝕏⊖𝔻_k, k∈𝕀_≥ 0, by assumption. Note that x_Q^2≤λ_max(Q,M)x_M^2 where λ_max(Q,M)>0 is the maximal generalized eigenvalue of Q w.r.t. M. The terminal cost condition (Asm. <ref> <ref>) holds with V_f(f(x,0,0))=c_ff(x,0,0)_M^2(<ref>)≤ c_fρx_M^2 ≤ c_fx_M^2-(1-ρ)c_fλ_max(Q,M)x_Q^2 = V_f(x)-ℓ(x,0), where the last equality uses c_f:=λ_max(Q,M)/1-ρ> 0. 
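A possible convex-programming sketch of the offline metric design is given below (Python/cvxpy; the implementation described in the numerical example uses YALMIP and SeDuMi in Matlab). For a fixed contraction rate ρ, the LMIs are imposed at the vertex Jacobians of the convex embedding. The slack is taken here as a matrix X with w̅ = tr(X), which is one valid way to upper bound tr(Σ_w G^⊤ M G) and may differ in detail from the formulation of Problem (<ref>).

```python
import numpy as np
import cvxpy as cp

def design_contraction_metric(A_vertices, G, Sigma_w, H, rho, eps=1e-6):
    """LMI design of a constant metric for a fixed contraction rate rho (the
    line search over rho is done outside). A_vertices are the Jacobians at the
    vertices of the convex embedding of A(x,u); H stacks the rows h_j of the
    state polytope {x : Hx <= 1}. A matrix slack X is used and w_bar = tr(X),
    which upper bounds tr(Sigma_w G^T M G)."""
    n, q = G.shape
    W = cp.Variable((n, n), symmetric=True)
    X = cp.Variable((q, q), symmetric=True)
    Gs = G @ np.linalg.cholesky(Sigma_w)               # G Sigma_w^{1/2}
    cons = [W >> eps * np.eye(n)]
    for h in np.atleast_2d(H):                         # enforce c_j <= 1 per facet
        hc = h.reshape(-1, 1)
        cons.append(cp.bmat([[W, W @ hc],
                             [(W @ hc).T, np.eye(1)]]) >> 0)
    for Av in A_vertices:                              # contraction at vertices
        cons.append(cp.bmat([[rho * W, (Av @ W).T],
                             [Av @ W, W]]) >> 0)
    cons.append(cp.bmat([[X, Gs.T], [Gs, W]]) >> 0)    # noise / w_bar LMI
    prob = cp.Problem(cp.Minimize(cp.trace(X)), cons)
    prob.solve()
    if W.value is None:
        raise RuntimeError(f"SDP not solved: {prob.status}")
    return np.linalg.inv(W.value), float(np.trace(X.value)), rho
```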
Johannes Köhler received the Ph.D. degree from the University of Stuttgart, Germany, in 2021. He is currently a postdoctoral researcher at ETH Zürich, Switzerland. He is the recipient of the 2021 European Systems & Control PhD Thesis Award, the IEEE CSS George S. Axelby Outstanding Paper Award 2022, and the Journal of Process Control Paper Award 2023. His research interests are in the area of model predictive control and control and estimation for nonlinear uncertain systems.

Melanie N. Zeilinger is an Associate Professor at ETH Zürich, Switzerland. She received the Diploma degree in engineering cybernetics from the University of Stuttgart, Germany, in 2006, and the Ph.D. degree with honors in electrical engineering from ETH Zürich, Switzerland, in 2011. From 2011 to 2012 she was a Postdoctoral Fellow with the Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland. She was a Marie Curie Fellow and Postdoctoral Researcher with the Max Planck Institute for Intelligent Systems, Tübingen, Germany until 2015 and with the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley, CA, USA, from 2012 to 2014. From 2018 to 2019 she was a professor at the University of Freiburg, Germany. Her current research interests include safe learning-based control, as well as distributed control and optimization, with applications to robotics and human-in-the loop control.
http://arxiv.org/abs/2407.12078v1
20240716180000
Resolving the nano-Hz gravitational wave sky: the detectability of eccentric binaries with PTA experiments
[ "Riccardo J. Truant", "David Izquierdo-Villalba", "Alberto Sesana", "Golam Mohiuddin Shaifullah", "Matteo Bonetti" ]
astro-ph.GA
[ "astro-ph.GA" ]
Detectability of eccentric binaries with PTA experiments Dipartimento di Fisica “G. Occhialini”, Università degli Studi di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy INAF - Osservatorio Astronomico di Brera, via Brera 20, I-20121 Milano, Italy Pulsar Timing Array (PTA) collaborations reported evidence of a nano-Hz stochastic gravitational wave background (sGWB) compatible with an adiabatically inspiraling population of massive black hole binaries (MBHBs). Despite the large uncertainties, the relatively flat spectral slope of the recovered signal suggests a possible prominent role of MBHB dynamical coupling with the environment or/and the presence of an eccentric MBHB population. This work aims at studying the capabilities of future PTA experiments to detect single MBHBs under the realistic assumption that the sGWB is originated from an eccentric binary population coupled with its environment. To this end, we generalize the standard signal-to-noise ratio (SNR) and Fisher Information Matrix calculations used in PTA for circular MBHBs to the case of eccentric systems. We consider an ideal 10-year MeerKAT and 30-year SKA PTAs and apply our method over a wide number of simulated eccentric MBHB populations. We find that the number of resolvable MBHBs for the SKA (MeerKAT) PTA is ∼ 30 (4) at SNR > 5 (> 3), featuring an increasing trend for larger eccentricity values of the MBHB population. This is the result of eccentric MBHBs at ≲ 10^-9 Hz emitting part of their power at high harmonics, thus reaching the PTA sensitivity band. Our results also indicate that resolved MBHBs do not follow the eccentricity distribution of the underlying MBHB population, but prefer low eccentricity values (< 0.6). Finally, the recovery of binary intrinsic properties and sky-localization do not depend on the system eccentricity, while orbital parameters such as eccentricity and initial orbital phase show clear trends. Although simplified, our results show that SKA will enable the detection of tens of MBHBs, projecting us into the era of precision gravitational wave astronomy at nano-Hz frequencies. Resolving the nano-Hz gravitational wave sky: the detectability of eccentric binaries with PTA experiments Riccardo J. Truant1r.truant@campus.unimib.it David Izquierdo-Villalba1,2 Alberto Sesana1,2 Golam Mohiuddin Shaifullah1,2 Matteo Bonetti1,2,3 Received —; accepted — =============================================================================================================================================================================================== § INTRODUCTION In the last three decades, multi-wavelength observations have pointed out that massive black holes (> 10^6 M_⊙, MBHs) reside at the centre of most of the galaxies, co-evolving with them and powering quasars and active galactic nuclei <cit.>. Galaxies do not evolve in isolation and, in the context of the currently favored hierarchical clustering scenario for structure formation, they are expected to merge frequently <cit.>. Consequently, the presence of MBHs lurking in the centers of galaxies and the important role of galactic mergers suggest that massive black hole binaries (MBHBs) have formed and coalesced throughout cosmic history. The dynamical evolution of MBHBs is ruled by many different processes <cit.>. 
Following the merger of the two parent galaxies, dynamical friction, exerted by dark matter, stars, and gas, drags the two MBHs towards the nucleus of the newly formed system, reducing the initial MBH separation (∼ kpc scales) down to a few parsecs <cit.>. At these distances, a bound binary forms and dynamical friction ceases to be efficient. Interactions with single stars or torques extracted from a circumbinary gaseous disc take the main role in further evolving the MBHB separation <cit.>. These processes harden the MBHB down to sub-pc scales, where the emission of gravitational waves (GWs) drives it to final coalescence. During this last evolutionary stage, MBHBs are powerful GW sources, whose emission spans a wide range of frequencies. In particular, low-z, high-mass (> 10^7 M_⊙) inspiralling MBHBs emit GWs in the nano-Hz frequency window (10^-9 - 10^-7 Hz), probed by Pulsar Timing Array (PTA) experiments <cit.>. By monitoring an array of millisecond pulsars and measuring the changes in the time-of-arrival of their pulses, PTAs are sensitive to the incoherent superposition of all the GWs coming from the cosmic population of MBHBs <cit.>. The overall signal is thus expected to have the properties of a stochastic GW background (sGWB). The specific amplitude and spectral shape of the signal are closely related to the galaxy merger rate and the environment in which MBHBs shrink <cit.>, and it can be disentangled from other stochastic noise processes affecting PTA measurements thanks to its distinctive correlation properties <cit.>. Moreover, because of the sparseness of the most massive and nearby binaries, individual deterministic signals, usually referred to as continuous GWs (CGWs), might also be resolved <cit.>. Those would provide precious information about the most massive and nearby MBHBs in the universe and are ideal targets to extend multimessenger astronomy in the nano-Hz GW band <cit.>. For this reason, both types of signals (CGW and sGWB) are of great interest for PTA observations. There are currently several operational PTA collaborations around the world: the European Pulsar Timing Array <cit.>, the North American Nanohertz Observatory for Gravitational Waves <cit.>, the Parkes Pulsar Timing Array <cit.>, the Indian PTA <cit.>, the Chinese PTA <cit.> and the MeerKAT PTA <cit.>. The latest results published by several of those collaborations report evidence for the presence of an sGWB <cit.>, compatible with the existence of low-z MBHBs <cit.>. Quite interestingly, the amplitude of the signal is at the upper end of the predicted range of MBHB populations <cit.>, and the best fit to the logarithmic spectral slope of the signal appears to deviate from the vanilla -2/3 value, expected from a circular population of MBHBs evolving solely through GW emission. Although uncertainties are too large to draw any conclusion, these two facts might bear important implications for the underlying MBHB population. On the one hand, the high signal amplitude likely implies a large contribution from very massive binaries at the upper end of the MBH mass function. On the other hand, the tentative deviation in the spectral slope can hint at a strong coupling of the binaries with their stellar environment <cit.> or at non-negligible orbital eccentricities <cit.>. In light of the above considerations, it is therefore interesting to investigate the expected statistical properties of CGWs that might be resolved by future PTA experiments. 
In fact, although the topic has been addressed by several authors <cit.>, most of the current literature focuses on fairly idealised cases. Both <cit.> and <cit.> assume circular binaries, and a vast range of models that are not necessarily tailored to the currently detected signal. <cit.> brings eccentricity in the picture, but employs a very simplified description of the signal and investigates a scenario where the GWB is almost a factor of three smaller than what currently inferred from the data. Moreover, none of the work above touches on PTA capabilities to estimate the source parameters. Work on this subject has so far involved only circular binaries <cit.> and when eccentricity has been considered <cit.>, results have never been scaled at the overall MBHB population level. In this work, we aim to relax several of the assumptions made in previous investigations with the goal of providing an extensive assessment of future PTA experiments capabilities of resolving CGWs. To this end, we employ state-of-the-art MBHB populations including environmental coupling and eccentricity, tailored to reproduce the observed PTA signal. We also adapt the formalism of <cit.> to PTA sources to develop a fast Fisher Information Matrix algorithm for eccentric binaries, limiting the description of the signal to the Earth term only. We apply this machinery to putative 10-year MeerKAT and 30-year SKA PTAs, and we identify resolvable CGWs via iterative subtraction <cit.> over a wide range of simulated eccentric MBHB populations compatible with the latest amplitude of the sGWB, offering a realistic assessment of the future potential of PTAs. The paper is organized as follows. In Section <ref> we overview the methodology used to characterize the emission of eccentric MBHBs and the time residuals that they imprint in PTA data. In Section <ref>, we describe the computation of the signal-to-noise ratio and Fisher Information Matrix for eccentric MBHBs. In Section <ref>, we present the population of eccentric MBHBs and the PTA experiments that we use. In Section <ref>, we discuss the results, focusing on the number of resolvable sources and the effect of the eccentricity in determining their number and their parameter estimation. In Section <ref>, we discuss some of the caveats of the present implementation and finally, in Section <ref>, we summarise the main results of the paper. A Lambda Cold Dark Matter (ΛCDM) cosmology with parameters Ω_ m = 0.315, Ω_Λ = 0.685, Ω_ b = 0.045, σ_8 = 0.9 and h = H_0/100 = 67.3/100 km s^-1 Mpc^-1 is adopted throughout the paper <cit.>. § THE GRAVITATIONAL EMISSION OF ECCENTRIC SUPERMASSIVE BLACK HOLES BINARIES In this section, we outline the basic concepts used to explore the detectability of CGWs generated by eccentric MBHBs. §.§ The gravitational wave signal The GW metric perturbation h_ab(t) in the trace-less and transverse (TT) gauge can be written as a linear superposition of two polarizations (h_+ and h_×) and their base tensor (e^+_ab and e^×_ab): h_ab(t,Ω̂) = h_+(t)e^+_ab (Ω̂)+h_×(t) e^×_ab (Ω̂) where Ω̂ is the GW propagation direction. Differently from the monochromatic emission of a circular binary, the GW signal of an eccentric source is spread over a spectrum of harmonics of the orbital frequency. To model the signal of these eccentric sources, we use the GW waveform presented in <cit.>, which provides the analytic expression of the GW emission of an eccentric binary. 
Specifically, adopting the quadrupolar approximation and making use of the Fourier analysis of the Kepler problem, h_+,× can be written as: h_+(t) = ∑_n = 1^∞ -(1+cos^2ι) [a_ncos(2γ) - b_nsin(2γ)] + (1-cos^2ι)c_n, h_×(t) = ∑_n = 1^∞ 2cosι[b_n cos(2γ) + a_n sin(2γ)], where a_n = - nζω^2/3J_n-2(ne) -2eJ_n-1(ne)+(2/n) J_n(ne) + 2eJ_n+1(ne)-J_n+2(ne)cos(nl(t)), b_n = -nζω^2/3√(1-e^2)[J_n-2(ne)-2J_n(ne) + J_n+2(ne)]sin(nl(t)), c_n = 2ζω^2/3J_n(ne)cos(nl(t)). being e the eccentricity of the MBHB, n the harmonic number and J_n(x) the n- th Bessel Function of the first kind. ζ is the GW amplitude given by the combination of the redshifted chirp mass, ℳ_z, and the luminosity distance, D_L: ζ = (Gℳ_z)^5/3c^4D_L. Here, c is the light speed and G is the gravitational constant. The redshifted chirp mass is expressed as: ℳ_z = ℳ (1+z) = M q^5/3(1+q)^6/5(1+z), being ℳ the rest-frame chirp mass, M = m_1 + m_2 the total mass of the binary in the rest frame, and q = m_2/m_1 < 1 its mass ratio. With this definition of q, m_1 and m_2 are identified as the mass of the primary and secondary MBH, respectively. The variable ι is the inclination angle defined as the angle between the GW propagation direction and the binary orbital angular momentum, L̂. The quantity l(t) refers to the binary mean anomaly l(t) = l_0 + 2π∫_t_0^t f_k(t') dt', wheref_k(t') corresponds to the observed Keplerian frequency defined as f_k(t) = (1+z) f_k,r(t) with f_k,r(t) = (2π)^-1√(GM/r_bin(t)^3) the rest frame Keplerian frequency and r_bin(t) the semi-major axis of the MBHB orbit. In principle, r_bin(t) can evolve during the observation time due to the GW emission and environmental interaction. However, here we assume that the orbital frequency and eccentricity do not evolve during the observation time and, hence, f_k(t) can be treated as a constant value, i.e f_k (see discussion about this assumption in Section <ref>). The orbital angular frequency is then given by ω = 2π f_k, while γ is the angle that measures the direction of the pericenter with respect to the direction x̂, defined as x̂≡ (Ω̂ + L̂cosi)/√(1-cos^2i). §.§ The timing residuals A GW passing between the pulsar and the Earth perturbs the space-time metric, causing a modification in the arrival time of the pulse to the Earth. This induces a fractional shift in the pulsar rotational frequency, z(t,Ω̂), given by <cit.>: z(t,Ω̂) = 12p^ap^b1+Ω̂·p̂Δ h_ab, where p̂ is the unit direction vector to the pulsar and Δ h_ab=h_ab(t,x⃗_E) - h_ab(t_P,x⃗_P) is the difference in the metric perturbation computed at the moment in which the GW arrives at the solar system barycenter (t), and when it passed through the pulsar (t_P=t-L/c, where L denotes the distance to the pulsar). We define x⃗_E to coincide with the solar system barycenter, which is the origin of the adopted coordinate system, while x⃗_P = L p̂ corresponds to the pulsar sky position. In practice, what PTA experiments measure is the timing residual, which corresponds to the time integrated effect of Eq. (<ref>): s(t) = ∫_t_0^t dt' z(t') =F^+(Ω̂)[s_+(t)-s_+(t_0)] + F^×(Ω̂)[s_×(t)-s_×(t_0)]. Here, s_+,×(t) = ∫_t_0^t h_+,×(t') dt', t_0 is the time at which the observational campaign starts and t = t_0 + T_ obs is the epoch of the considered observation. The variable F^+,× denote the antenna pattern functions and encode the geometrical properties of the detector (for a PTA experiment the test masses are the Earth and the pulsar). In particular, F^+,× depends on the GW propagation direction (Ω̂) and the pulsar sky location (p̂). 
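A minimal numerical sketch of the harmonic sums above for h_+(t) and h_×(t) is given below (the antenna pattern functions F^{+,×} are specified next). Following the expressions quoted in the text, ζ is assumed to carry the (Gℳ_z)^{5/3}/(c^4 D_L) factor, the observed orbital frequency is treated as constant over the observation, and the harmonic truncation n_max is left as a free argument.

```python
import numpy as np
from scipy.special import jv

def hplus_hcross(t, zeta, f_k, e, iota, gamma, l0, n_max=50):
    """h_+(t), h_x(t) of an eccentric binary as sums over harmonics of the
    (constant) observed orbital frequency f_k; t is an array of times since
    t_0 and zeta = (G Mchirp_z)^(5/3) / (c^4 D_L)."""
    omega = 2.0 * np.pi * f_k
    l = l0 + omega * t                         # mean anomaly for constant f_k
    amp = zeta * omega**(2.0 / 3.0)
    hp = np.zeros_like(t, dtype=float)
    hc = np.zeros_like(t, dtype=float)
    for n in range(1, n_max + 1):
        Jm2, Jm1, J0 = jv(n - 2, n * e), jv(n - 1, n * e), jv(n, n * e)
        Jp1, Jp2 = jv(n + 1, n * e), jv(n + 2, n * e)
        a_n = -n * amp * (Jm2 - 2 * e * Jm1 + (2.0 / n) * J0
                          + 2 * e * Jp1 - Jp2) * np.cos(n * l)
        b_n = -n * amp * np.sqrt(1 - e**2) * (Jm2 - 2 * J0 + Jp2) * np.sin(n * l)
        c_n = 2 * amp * J0 * np.cos(n * l)
        hp += (-(1 + np.cos(iota)**2) * (a_n * np.cos(2 * gamma)
               - b_n * np.sin(2 * gamma)) + (1 - np.cos(iota)**2) * c_n)
        hc += 2 * np.cos(iota) * (b_n * np.cos(2 * gamma) + a_n * np.sin(2 * gamma))
    return hp, hc
```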
By making use of the polarization basis tensor {n̂,û,v̂} (see ), the pattern functions F^+,× can be can be written as: F^+(Ω̂) = 12[û·p̂]^2 -[v̂·p̂]^21+Ω̂·p̂, and F^×(Ω̂) = [û·p̂][v̂·p̂]1+Ω̂·p̂, where n̂ corresponds to the vector pointing to the GW source: n̂ = -Ω̂ = [cosθcosϕ , cosθsinϕ, sinθ ], and û and v̂ are defined as: û = n̂×L̂/|n̂×L̂| = [cosψsinθcosϕ -sinψcosθ, cosψsinθsinϕ + sinψcosϕ, -cosψcosθ], v̂ = û×n̂ = [ sinψsinθcosϕ + cosψsinϕ , sinψsinθsinϕ -cosψcosϕ , -sinψcosθ ]. Here θ and ϕ are the sky location of the MBHB expressed in spherical polar coordinates (θ,ϕ) = (π/2 -DEC,RA), being DEC the declination and RA the right ascension of the binary. Finally, ψ is the polarization angle, ranging between [0,π]. For simplicity, we consider only the Earth term[For further information about the impact of including the pulsar distance in the computation of the SNR, we refer to .], we ignore any time evolution of the MBHB frequency and neglect higher-order post-Newtonian effects such as the pericenter precession and orbit-spin coupling. Those are expected to play a minor role in the output of PTA GW signal <cit.>, and can be safely neglected, at least for a first order estimate. We will comment on the validity of these assumptions in Section <ref>. Under the framework outlined above, the values of the timing residuals can be written analytically as <cit.>: s_+(t) = ∑_n=1^∞ -(1+cos^2i)[ã_ncos(2γ) - b̃_nsin(2γ)] + (1-cos^2i)c̃_n, s_×(t) = ∑_n=1^∞ 2cosi [b̃_ncos(2γ)+ã_nsin(2γ)], being: ã_n = -ζω^-1/3[J_n-2(ne)-2eJ_n-1(ne) (2/n)J_n(ne) + 2eJ_n+1(ne) -J_n+2(ne) ]sin(nl(t)) b̃_n = ζω^-1/3√(1-e^2)[J_n-2(ne)-2J_n(ne)+J_n+2(ne)]cos(nl(t)) c̃_n = (2/n) ζω^-1/3J_n(ne)sin(nl(t)). § SNR AND PARAMETER ESTIMATION OF SUPERMASSIVE BLACK HOLE BINARIES IN PTA DATA In this section, we introduce the methodology used to compute the signal-to-noise ratio (hereafter, SNR) and the Fisher information matrix from the gravitational wave signal emitted by a single eccentric MBHB. §.§ Signal-to-noise ratio for eccentric binaries In general, to assess the possibility of detecting a nano-Hz CGW signal generated by an MBHB it is required to determine how its signal compares with the background noise present in the detector. This is usually done by computing the SNR. Given the deterministic nature of the CGW signal, the optimal way to compute the SNR is through matched filtering. Assuming that a CGW is present in the timing residual of a pulsar, the match filtering procedure gives the expression: (SN)^2=4 ∫_0^∞ df |s̃(f) | ^2S_k(f). Estimating the SNR therefore requires the characterization of the noise properties encoded in S_k(f), i.e the noise power spectral density (NPSD) of the k-th pulsar inside our array, besides the knowledge of the signal s̃(f), which is the Fourier transforms of s(t) given by Eq. (<ref>). As shown by Eq. (<ref>), the computation of the SNR requires the time residuals in the frequency domain, i.e. s̃(f). However, as described in Section <ref>, in PTA experiments the time residuals are framed on the time domain. Transforming these in the frequency domain can be easily addressed in the case of circular binaries given that the CGW signal is monochromatic <cit.>. However, in the generic case of an eccentric binary, the signal is spread over a spectrum of harmonics and the term |s̃(f) | ^2 contains mixed products between residuals originated at different harmonics. 
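The geometric and waveform ingredients introduced so far can be combined into the Earth-term residual as sketched below, decomposed harmonic by harmonic, which is precisely the form needed to treat each harmonic as an effectively monochromatic signal in the SNR computation. In this sketch the polarization basis {û,v̂} is built vectorially from n̂ and ψ rather than from the explicit components quoted above; the two conventions can differ at most by a redefinition of the polarization angle.

```python
import numpy as np
from scipy.special import jv

def pol_basis(n_hat, psi):
    """Orthonormal polarization basis perpendicular to the source direction
    n_hat, rotated by the polarization angle psi (assumes the source is not
    exactly at a celestial pole)."""
    zref = np.array([0.0, 0.0, 1.0])
    u0 = np.cross(n_hat, zref); u0 /= np.linalg.norm(u0)
    v0 = np.cross(u0, n_hat)
    return (np.cos(psi) * u0 + np.sin(psi) * v0,
            -np.sin(psi) * u0 + np.cos(psi) * v0)

def antenna_patterns(n_hat, psi, p_hat):
    """Earth-term antenna patterns F^+, F^x for pulsar direction p_hat."""
    u, v = pol_basis(n_hat, psi)
    denom = 1.0 + np.dot(-n_hat, p_hat)        # Omega_hat = -n_hat
    Fp = 0.5 * (np.dot(u, p_hat)**2 - np.dot(v, p_hat)**2) / denom
    Fx = np.dot(u, p_hat) * np.dot(v, p_hat) / denom
    return Fp, Fx

def residual_harmonics(t, zeta, f_k, e, iota, gamma, l0, Fp, Fx, n_max=50):
    """Earth-term residual of each harmonic, s_n(t) = F^+ [s_{+,n}(t)-s_{+,n}(t0)]
    + F^x [s_{x,n}(t)-s_{x,n}(t0)]; the total residual is the sum over n."""
    omega = 2.0 * np.pi * f_k
    amp = zeta * omega**(-1.0 / 3.0)
    out = []
    for n in range(1, n_max + 1):
        Jm2, Jm1, J0 = jv(n - 2, n * e), jv(n - 1, n * e), jv(n, n * e)
        Jp1, Jp2 = jv(n + 1, n * e), jv(n + 2, n * e)
        def sp_sc(l):
            an = -amp * (Jm2 - 2 * e * Jm1 + (2.0 / n) * J0
                         + 2 * e * Jp1 - Jp2) * np.sin(n * l)
            bn = amp * np.sqrt(1 - e**2) * (Jm2 - 2 * J0 + Jp2) * np.cos(n * l)
            cn = (2.0 / n) * amp * J0 * np.sin(n * l)
            sp = (-(1 + np.cos(iota)**2) * (an * np.cos(2 * gamma)
                  - bn * np.sin(2 * gamma)) + (1 - np.cos(iota)**2) * cn)
            sc = 2 * np.cos(iota) * (bn * np.cos(2 * gamma) + an * np.sin(2 * gamma))
            return sp, sc
        sp_t, sc_t = sp_sc(l0 + omega * t)
        sp_0, sc_0 = sp_sc(l0)
        out.append(Fp * (sp_t - sp_0) + Fx * (sc_t - sc_0))
    return out
```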
To address this scenario, we worked under the assumption that the noise is a Gaussian, zero-mean, stationary stochastic process and adopted an approach similar to the one presented in <cit.>. In brief, even though Eq. (<ref>) contains mixed products generated by different harmonics, the signal of each harmonic in the frequency domain is described by a delta function centred at its emission frequency nf_k. Consequently, the products of the residuals generated by harmonics emitting at different frequencies are orthogonal and cancel out. We can therefore treat each harmonic separately as a monochromatic signal and compute its SNR by exploiting the fact that (4/S_k(f_n))∫_0^∞|s̃_n(f)|^2 df ≃ (2/S_k(f_n))∫_0^∞ s_n(t)^2 dt. The SNR from an eccentric MBHB, for a single pulsar, is thus given by the summation in quadrature over all the harmonics: (S/N)^2 = ∑_n=1^∞ (2/S_k(f_n)) ∫_t_0^t dt' s_n^2(t'), while the total SNR in the PTA is given by the sum in quadrature of the SNRs produced in all the N_Pulsars pulsars included in the array: (S/N)^2_tot = ∑_k=1^N_Pulsars (S/N)^2_k. Finally, the computation of the SNR of Eq. (<ref>) requires a summation over all the harmonics, i.e. n ∈ [1,+∞). However, the contribution to the SNR goes to zero for n→ +∞ and the sum can be appropriately truncated. To select the harmonic of truncation, n_max, we adopted the simple criterion of n_max = 4 n_peak, being n_peak the harmonic number at which the power of the GW emission is maximized for the selected eccentricity. To compute this value we follow the numerical fit presented in <cit.>: n_peak(e) ≃ 2(1+∑_k=1^4 c_k e^k)(1-e^2)^-3/2, where c_1 = -1.01678, c_2 = 5.57372, c_3 = -4.9271, c_4 = 1.68506. We have checked how the exact value of n_max affects our results. Specifically, less than 1% relative difference is seen in the SNR when it is computed assuming n_max = 10^4 instead of n_max = 4 n_peak.
§.§ Parameter Estimation
Once the methodology to derive the SNR from an eccentric MBHB has been framed, the natural subsequent step is determining how well the system parameters can be measured. In the case of high SNR, the expected parameter uncertainties can be quickly estimated through the Fisher Information Matrix formalism. Specifically, the GW signal we are considering is characterized by 9 free parameters (see their definition in Section <ref>): λ⃗=(ζ,f_k,e,i,ψ,l_0,γ,ϕ,θ). To reconstruct the most probable source parameters, λ⃗, given a set of data, d⃗, it is possible to work within the Bayesian framework and derive the posterior probability density function p(λ⃗|d⃗): p(λ⃗|d⃗) ∝ p(λ⃗)p(d⃗|λ⃗), where p(d⃗|λ⃗) is the likelihood function and p(λ⃗) is the prior probability density of λ⃗. If we assume that near the maximum likelihood estimated value, λ̂_i, the prior probability density is flat, the posterior distribution p(λ⃗| d⃗) will be proportional to the likelihood and can be approximated as a multi-variate Gaussian distribution: p(λ⃗| d⃗) ∝ exp[-(1/2)Γ_ijΔλ_i Δλ_j], where the indexes i and j run over all the components of the source parameter vector λ⃗ (in our case from 1 to 9), and Δλ_i = λ̂_i - λ_i (Δλ_j = λ̂_j - λ_j) are the differences between the 'true' source parameters (λ⃗) and their most probable estimated values (λ̂). Finally, Γ_ij is the Fisher information matrix, and its inverse provides a lower limit to the error covariance of unbiased estimators[For further details about the Fisher Information Matrix, we refer the reader to <cit.> and <cit.>.]. 
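Before turning to the explicit Fisher-matrix expressions, the per-harmonic SNR sum and the n_max = 4 n_peak(e) truncation described above can be sketched as follows; the per-harmonic residual time series s_n(t) are assumed to be available (e.g., from the residual model of the previous section), and the pulsar NPSD S_k is left as a user-supplied callable.

```python
import numpy as np

def n_peak(e):
    """Harmonic number where the GW power peaks (numerical fit quoted above)."""
    c1, c2, c3, c4 = -1.01678, 5.57372, -4.9271, 1.68506
    return 2.0 * (1 + c1*e + c2*e**2 + c3*e**3 + c4*e**4) / (1 - e**2)**1.5

def snr2_single_pulsar(t, s_harmonics, f_k, S_k):
    """(S/N)^2 = sum_n (2 / S_k(n f_k)) * int s_n(t)^2 dt for one pulsar.
    s_harmonics[n-1] is the time series of the n-th harmonic of the induced
    residual; S_k is a callable returning the pulsar NPSD at frequency f."""
    return sum(2.0 / S_k(n * f_k) * np.trapz(s_n**2, t)
               for n, s_n in enumerate(s_harmonics, start=1))

def snr_total(snr2_per_pulsar):
    """Quadrature combination over all pulsars of the array."""
    return np.sqrt(np.sum(snr2_per_pulsar))

# truncation adopted in the text: n_max = 4 * n_peak(e), here for e = 0.7
n_max = int(np.ceil(4.0 * n_peak(0.7)))
```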
In the PTA case, the Fisher matrix is computed as: Γ_ij = 4 ∫_0^∞ df ∂_i s(f) ∂_j s(f)S_k(f), where ∂_i and ∂_j are the partial derivatives of time residual in the frequency domain, s(f), with respect to the λ_i and λ_j parameters, respectively. As for the SNR integral, the scalar product is defined in the frequency domain. However, we can apply the approximate identity given by Eq. (<ref>) to write: Γ_ij ≃ ∑_n=1^n_max2S_k(f_n)∫_t_0^t dt' ∂_i s_n(t') ∂_j s_n(t'), in which the partial derivatives are calculated numerically through: ∂_i s_n(t) = [ s_n(t,λ_i + δλ_i/2) - s_n(t,λ_i - δλ_i/2) /δλ_i], where the time step is set to be equal to δλ_i = 10^-5λ_i. We note that when calculating the SNR and the Fisher information matrix we always assume to know all the parameters that fully specify the residuals s(t). By assuming independent data streams for each pulsar in the array, the Fisher information matrix obtained from the full PTA, ( Γ_ij)_T, is simply given by the sum of the single Fisher information matrices derived for each pulsar, ( Γ_ij)_k: ( Γ_ij)_ tot = ∑_k = 1^N_ Pulsars( Γ_ij)_k. We stress that the covariance matrix is simply the inverse of the Fisher information matrix (Γ^-1), thus the elements on the diagonal represent the variances of the parameters (σ_ii^2 = Γ^-1_ii), while the off-diagonal terms correspond to the correlation coefficients between parameters (σ_ij^2 = Γ^-1_ij/√(σ_i^2σ_j^2)). §.§ Characterising the noise The next fundamental ingredient in our computation is the noise description in PTA experiments. In particular, the pulsar NPSD can be broken down in two separate temrs: S_k(f) = S_h(f) +S_p(f). The term S_h(f) describes the red noise contributed at each given frequency by the sGWB generated by the incoherent superposition of all the CGWs emitted by the cosmic population of adiabatically MBHBs <cit.>: S_h(f) = h_c^2(f)12π^2f^3, For a real PTA, the noise is estimated at each resolution frequency bin of the array. In fact if we assume an observation time T, the PTA is sensitive to an array of frequency bins Δf_i=[i/T,(i+1)/T], with i=1,...,N. If we now identify each frequency bin Δf_i with its central frequency f_i, then we can associate to each frequency resolution element the characteristic strain produced by all the MBHBs emitting in that element as: h_c^2(f_i) = ∑_j=1^N_S h_c,j^2(f)δ(Δf_i - f). where the sum is over all sources, N_S, and δ(Δf_i - f) is a generalized delta function that assumes the value 1 when f∈Δf_i, and 0 otherwise, thus selecting only MBHBs emitting within the considered bin. h_c,j^2(f) is the squared characteristic strain of the j- th source. Since we consider eccentric MBHBs, h_c,j^2(f) is the sum of the strain emitted at all the harmonics nf_k, among which one has to select only those that lie within the frequency bin Δf_i. Eq. (<ref>) thus generalizes to h_c^2(f_i) = ∑_j=1^N_S∑_n=1^n_ max h_c,n,j^2(nf_k) δ(Δf_i - nf_k). For each of the N_S binaries the value of h^2_c,n is given by: h^2_c,n = (𝒜^2 + ℬ^2)/2( Gℳ_z)^10/3 c^8 D_L^2 (2π f_k/n)^4/3g(n,e)/(n/2)^2 nf_kΔ f, where 𝒜 = 1+ cos(i)^2, ℬ =-2cos(i) and Δ f=1/T is the frequency bin width. The value of g(n,e) is computed according to: g(n,e) = n^432( B_n^2 + (1-e^2)A^2_n + 43n^2J_n(ne)^2 ), where A_n and B_n are: B_n = J_n-2(ne) - 2eJ_n-1(ne) + 2n J_n(ne) + 2eJ_n+1(ne) - J_n+2(ne), A_n = J_n-2(ne) - 2J_n(ne) + J_n+2. We stress that when evaluating the detectability of a given MBHB we will not take into account the contribution of its h_c,j^2(f) when computing value of S_h(f). 
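A sketch of the spectral bookkeeping described above is given below: g(n,e) follows the expression quoted in the text, while the per-harmonic squared characteristic strain h^2_{c,n} is left as a user-supplied callable (its prefactor is given above), so that only the harmonic-to-bin assignment and the conversion to S_h(f) are illustrated.

```python
import numpy as np
from scipy.special import jv

def g_ne(n, e):
    """Relative GW power radiated in the n-th harmonic, g(n, e)."""
    B = (jv(n - 2, n * e) - 2 * e * jv(n - 1, n * e) + (2.0 / n) * jv(n, n * e)
         + 2 * e * jv(n + 1, n * e) - jv(n + 2, n * e))
    A = jv(n - 2, n * e) - 2 * jv(n, n * e) + jv(n + 2, n * e)
    return n**4 / 32.0 * (B**2 + (1 - e**2) * A**2
                          + 4.0 / (3.0 * n**2) * jv(n, n * e)**2)

def hc2_spectrum(f_k_list, h2_cn, T_obs, n_max=100):
    """Accumulate h_c^2(f_i) on the frequency grid Delta f_i = [i/T, (i+1)/T].
    h2_cn(j, n) must return the squared characteristic strain contributed by
    the n-th harmonic of the j-th binary; only the bookkeeping is shown."""
    f_max = n_max * max(f_k_list)
    n_bins = int(np.ceil(f_max * T_obs)) + 1
    f_edges = np.arange(1, n_bins + 2) / T_obs
    hc2 = np.zeros(len(f_edges) - 1)
    for j, fk in enumerate(f_k_list):
        for n in range(1, n_max + 1):
            i = np.searchsorted(f_edges, n * fk, side='right') - 1
            if 0 <= i < hc2.size:
                hc2[i] += h2_cn(j, n)
    return f_edges, hc2

def S_h(f, hc2_at_f):
    """sGWB term of the pulsar NPSD: S_h(f) = h_c^2(f) / (12 pi^2 f^3)."""
    return hc2_at_f / (12.0 * np.pi**2 * f**3)
```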
The term S_p(f) in the NSPD encodes all sources of noise unrelated to the sGWB, which are related to the telescope sensitivity, intrinsic noise in the pulsar emission mechanism, pulse propagation effects and so on. PTA collaborations parameterize the pulsar noise as a combination of three different terms: S_p(f) = S_w + S_ DM(f) + S_ red(f). S_w accounts for processes that generate a white stochastic error in the measurement of a pulsar arrival time. These include pulse jitter, changes in the pulse profile with time, or instrumental artefacts. Such processes are uncorrelated in time and the resulting noise is modelled as S_w = 2Δ t_ cadσ^2_w, where it is commonly assumed that the pulse irregularity is a random Gaussian process described by the root mean square value σ_w. Δ t_ cad is the time elapsed between two consecutive observations of the same pulsar, i.e. the observation cadence. S_ red(f) and S_ DM(f) describe the achromatic and chromatic red noise contributions, respectively. While the former is the result of the pulsar intrinsic noise, the latter is the result of spatial variations in the interstellar electron content along the line of sight between the observer and the pulsar. These two red noises are usually modeled as a stationary stochastic process, described as a power law and fully characterized by an amplitude and a spectral index. § SUPERMASSIVE BLACK HOLE BINARY POPULATIONS AND PULSAR TIMING ARRAYS §.§ The population of binaries In this section, we briefly present the procedure used to generate the different populations of eccentric MBHBs that will be used throughout this paper. For further details, we refer the reader to <cit.> and <cit.>. To study the detectability of single MBHBs it is required to characterise their cosmological population as a whole. The sGWB spectrum generated by such a population can be calculated as the integrated emission of all the CGW signals emitted by individual binaries. Thus, the inclination and polarization average[The sky and polarization average implies that (𝒜^2 + ℬ^2) = 64/5] characteristic strain of the sGWB can be expressed as: h_c^2(f)=∫_0^∞d z ∫_0^∞d m_1 ∫_0^1  d q d^5 N/ d z  d m_1  d q  de  dln f_k, r× .32/5(Gℳ_z)^10/3/c^8 D_L^2 (1+z)^4/3 (2π f_k, r)^4/3∑_n=1^∞g(n, e)/(n / 2)^2|_f_k, r=f(1+z) / n, where d^5N/( d m_1 dq  dz  de  dt_r) is the comoving number of binaries emitting in a given logarithmic frequency interval, dln f_k, r, and primary mass, mass ratio, eccentricity and redshift in the range [m_1,m_1 + δ m_1], [q, q + δ q], [e, e + δ e] and [z, z+δ z], respectively. In particular, this quantity can be re-written as: d^5 N/ d z  d m_1  d q  de  dln f_k, r= d^3n/ d z  d m_1  d q( 1/f_k,rdf_k,r/dt_r)^-1[  d z/ d t_r d V/ d z] = d^3n/ d z  d m_1  d q ( 1/f_k,rdf_k,r/dt_r)^-1 4π c D_L^2/(1+z), where n = dN/dV, d^3n /( d z  d m_1  d q) is the differential merger rate comoving density of MBHBs and f_k,r ( d t_r /  d f_k,r) represents the binary evolution timescale, which implicitly takes into account the variation of the hardening rate with the binary eccentricity (i.e. at fixed orbital frequency, eccentric binaries evolve faster). 
Following <cit.>, the merger rate of MBHs can be expressed in terms of the galaxy merger rate, (d^3n_ G /(d z d M_* dq_*), as: d^3n/ d z  d m_1  d q =  d^3n_ G/ d z  d M_*  dq_* d M_*/ d m_1 d q_*/d q = = [ ϕ(M_*,z)/M_* ln 10ℱ(z,M_*,q_*)/τ(z,M_*,q_*)dt/dz]  d M_*/ d m_1 d q_*/d q, where ϕ(M_*,z) is the galaxy stellar mass function and ℱ(z,M_*,q_*) the differential fraction of galaxies with mass M_* at a given redshift paired with a satellite galaxy of mass in the interval [q_* M_*, (q_* + dq_*) M_*][Specifically, ℱ(z,M_*,q_*) was computed by setting ℱ(z,M_*,q_*) = -f_0(1+z)^γ/(q_*ln q_m), being f_0 and γ free parameters inferred from observational studies and q_m the minimum mass ratio selected in counting pairs.]. The value τ(z,M_*,q) is deduced from N-body simulations and corresponds to the typical merger timescale for a galaxy pair with a given mass, redshift and mass ratio. The term (d M_*/ d m_1) (dq_*/dq) associates an MBH to each galaxy in the pair by using the MBH galaxy bulge mass scaling relation: log_10(M_ BH) = α + βlog_10(M_ Bulge) + ℰ where ℰ represents an intrinsic scatter, generally around 0.3-0.5 dex <cit.>, and α and β define the zero point and logarithmic slope of the relation, respectively. To transform the total stellar mass into bulge mass, the relation M_* = f_ b M_ Bulge described in <cit.> is assumed. Finally, the hardening of the binary in Eq. (<ref>) is determined by using the stellar models of <cit.>: df_k,r/dt_r = ( df_k,r/dt_r)_* + ( df_k,r/dt_r)_GW = = 3G^4/3 (m_1 + m_2)^1/3 H ρ_i /2(2π)^2/3σ_if_k,r^1/3 + 96(G ℳ)^5/3/5c^5 (2π)^8/3 f_k,r^11/3ℱ(e), and de/dt_r = ( de/dt_r)_* + ( de/dt_r)_GW = = G^4/3(m_1 + m_2)^1/3ρ_i H K /(2π)^2/3σ_i f_k,r^-2/3 - (G ℳ)^5/3/15c^5 (2π f_k,r)^8/3𝒢(e), where ℱ(e) = 1+(73/24)e^2 + (37/96)e^4/(1-e)^7/2, 𝒢(e) = 304e + 121e^3/(1-e^2)^5/2, and σ_i and ρ_i are the velocity dispersion and stellar density at the binary influence radius. H and K represent the hardening rate and the eccentricity growth rate, calibrated against numerical three-body scattering experiments <cit.>. §.§.§ Generating MBHB populations consistent with PTA measurements As described above, the cosmological coalescence rate of MBHBs depends on different assumptions about the galaxy merger rate and correlations between MBHBs and their hosts. In particular, the library of models presented in <cit.> combines a number of prescriptions from the literature which we summarize here: * Galaxy stellar mass function. Five different observational results are taken from the literature <cit.> and matched with the local mass function <cit.>. For each of these functions, upper and lower limits were added to account for the errors given by the authors best-fit parameters. On top of this, an additional 0.1 dex systematic error was included to consider the uncertainties in the stellar masses determination. For all the mass functions, we separate between early/late-type galaxies and the analysis was restricted to z < 1.3 and M_* > 10^10 M_⊙, since we expect that these systems contribute the most to the sGWB signal <cit.> * Differential fraction of paired galaxies. The observational results of <cit.>, <cit.>, <cit.> and <cit.> were used when accounting for the evolution of the galaxy pair fraction. * Merger timescale for a galaxy pairs. We follow the fits done from the N-body and hydrodynamical simulations of <cit.> and <cit.>. * Galaxy-MBH scaling relation. The masses assigned to each merging galaxy pair were drawn from several observational relations. 
However, given the high normalization of the observed PTA signal, we only considered relations presented by <cit.> and <cit.> To save computation time, we perform an ad-hoc down-selection of the models, and limit our investigation to 108 combinations of the above prescriptions producing a distribution of sGWB amplitudes consistent with the measured PTA signal, as per Figure 2 of <cit.>. As for the environmental coupling and eccentricity evolution, we adopt the following prescriptions: * Stellar density profile. Following <cit.>, the stellar density profile is assumed to be a broken power law following an isothermal sphere outside the influence radius, r_i = 1.2 pc (M/10^6 M_⊙)^0.5, and a profile ρ = C ρ_i( r/r_i)^-1.5 at r<r_i. Here, ρ_i = σ^2 / (2π G r^2_i) and σ is determined from the <cit.> scaling relation <cit.>. C is a normalization factor of the stellar density profile and is assumed to take three different values (0.1, 1 and 10), to investigate the effect of changing the typical density of the environment. * Initial eccentricity. During the tracking of the hardening evolution, all the binaries are assumed to start with an initial eccentricity e_0 at binary formation. Throughout the paper, we consider 10 initial values of e_0 = 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9. Using the 108 population model, 10 eccentricity values and 3 environment normalizations defined above, we generate 3240 numerical distributions of MBHBs using Eq. (<ref>), and for each distribution, we perform 10 Monte Carlo sampling for a gran total of 32400 MBHB populations. Each population consists of a list of ≈10^5 binaries characterized by their chirp mass, redshift, orbital frequency and eccentricity. Due to the computational cost required to compute the fisher information matrix, the latter has been calculated only for a subsample of 3240 populations (10% of the total). §.§ The array of pulsars We explore the feasibility of detecting CGW signals using two different pulsar timing arrays: MeerKAT (N_ Pulsars = 78) and SKA (N_ Pulsars = 200). While pulsar monitoring at MeerKAT has been ongoing for 4.5 years, it will be superseded by the SKA Mid array by 2027. Given these constraints, we choose 10 years for MeerKAT while the 30-year time span of SKA follows projections commonly used in the literature. i) MeerKAT is a 64-antenna radio interferometer telescope located in South Africa. The regular monitoring of millisecond pulsar timing by MeerKAT is the basis of the MPTA. Recently, it was released the initial 2.5-years MPTA data <cit.>. While the current data includes 88 pulsars, the release only contains the 78 pulsars that have at least 30 observations over this observing span, with a typical cadence of 14 days. The upper panel of Fig. <ref> shows the position of those pulsars in the sky. Table A1 of <cit.> also reported the noise properties of each of the 78 pulsars, accounting for white-noise terms, frequency-dependent DM variations, and an achromatic red-noise process (see Section <ref>). In this work, we will use a 10-year MPTA-like system, featuring the same set of pulsars (number, sky position, and noise model) as the one presented in <cit.>. ii) Square Kilometer Array Mid telescope (SKA, ) planned to be operative in 2027, will be a large radio interferometer telescope whose sensitivity and survey speed will be an order of magnitude greater than any current radio telescope. 
For this work, we simulate a 30-year SKA PTA with 200 pulsars featuring a white noise of σ_w = 100 ns and an observing cadence of 14 days. To picture a more realistic scenario, we also add red noise to the total noise power spectral density in Eq. (<ref>), parameterized as a power law <cit.> of the form S_red(f) = (A_red^2/12π^2)(f/f_yr)^-γ_red yr^3, where A_red is the amplitude at the reference frequency f_yr = 1 yr^-1 and γ_red is the spectral index. Red noise properties are drawn to be consistent with those measured in the EPTA DR2Full using the following procedure. We fit a linear log A_red-γ relation to the measured red noises in Table 4 of <cit.>. We then assign A_red and γ parameters consistent with this relation to 30% of the pulsars in the SKA array, drawing A_red randomly from a log-uniform distribution in the range -15 < log A_red < -14, for which the corresponding γ is > 3. In this way, we mimic in the SKA array the fraction and properties of EPTA DR2Full pulsars with a robust red noise contribution. Note that while the remaining 70% of the pulsars will likely display some lower level of red noise, this is unlikely to affect the properties of the detected CGWs. In fact, for those pulsars the main stochastic red noise component is going to be the sGWB itself, which is already included in our calculation. We then employ the pulsar population synthesis code PsrPopPy[https://github.com/samb8s/PsrPopPyhttps://github.com/samb8s/PsrPopPy.] <cit.> to draw a realistic distribution of pulsars in the array. PsrPopPy generates and evolves realistic pulsar populations drawn from physically motivated models of stellar evolution and calibrated against observational constraints on pulse periods, luminosities, and spatial distributions. The final population of pulsars (10^5) is selected such that they would be observable (SNR > 9) by an SKA survey with an antenna gain of 140 K/Jy and an integration time of 35 minutes. In order to avoid a particularly lucky/unlucky pulsar sky disposition, from this distribution we select a different set of 200 pulsars for each one of the MBHB populations presented in Section <ref>. The sky distribution of the whole SKA pulsar sample is presented in Fig. <ref>. Since PsrPopPy simulates realistic distributions of pulsars generated using theoretical considerations and observational constraints, the bulk of the generated full population of pulsars lies close to the Galactic plane. However, the exceptional sensitivity of the SKA would also allow us to choose the most isotropic distribution of pulsars in the PTA, maximizing sensitivity to any GW signals searched for by the PTA. Fig. <ref> presents the sensitivity curves of our SKA PTA and MPTA computed using the Python package <cit.>. As expected, the SKA PTA features better sensitivity than the MPTA. However, at low frequencies, both of them are limited by the sGWB. To guide the reader, Fig. <ref> also displays the sensitivity curves of the SKA and MeerKAT PTAs when only white noise is considered. As shown in Fig. <ref>, for the MPTA the two sensitivity curves are almost identical since only 4 of the 78 pulsars listed in <cit.> have a reported red noise. Conversely, when achromatic red noise is included in the SKA PTA, due to the larger fraction of pulsars affected by it, the red noise slightly hinders the array's sensitivity at the lowest frequencies (< 10^-8 Hz).
§.§ Identifying individually resolvable MBHBs
To extract individually resolvable CGWs, we employ a recursive technique similar to <cit.> and <cit.>. 
We sort the MBHB population by strain amplitude according to the expression in Eq. (<ref>), but selecting only the second harmonic (n = 2). Following this ranking, we calculate the SNR of each source according to Eq. (<ref>), including in the sGWB contribution to the noise the signals produced by all the other MBHBs. Whenever one source exceeds the SNR > 5 for SKA or SNR > 3 for MPTA, the source is deemed resolved and its contribution to the sGWB is subtracted. As a consequence, the level of the noise in the pulsar array is lowered as well (see Eq. (<ref>)), making more feasible the detection of dimmer CGWs that might be otherwise unobservable. We therefore re-evaluate the detectability of all the remaining sources by making use of the new (lowered) background. This procedure is repeated until there are no resolvable sources left in the analyzed MBHB population. The above recursive procedure must be applied to several thousands of MBHB populations, each including ∼ 10^5 systems, which becomes extremely time-consuming. To boost the efficiency of our pipeline, we established a criterion that allows us to select only those sources with the largest chance of being resolvable. We established a threshold in the value of h = 2ζ G^5/3 (π f_k)^2/3 / c^4 (hereafter h^ th) below which we do not compute the SNR, deeming the source too dim to be resolved. To determine the exact value of the threshold, we have computed the number of resolvable sources (N_ res) at different h^ th cuts for 96 randomly selected MBHB catalogs at three different values of e_0. We imposed the condition SNR > 5 for CGW detection and computed the number of resolvable sources using the SKA PTA, because of its larger performance in resolving dim GW sources compared to MPTA. Since, for this analysis, we are interested in the dimmest MBHB that the PTA experiment can resolve, we conservatively consider SKA PTA which features only white noise. Fig. <ref> shows the median number of N_ res as a function of h^ th. As expected, N_ res increases towards small values of h^ th, but it saturates below a certain threshold. This behavior is seen for all e_0 used to start the MBHB evolution. Taking into account Fig. <ref>, throughout this work we will use the conservative value of h^ th = 6 × 10^-17. We stress that small fluctuations are seen in the N_ res median below our fiducial threshold. However, they are not statistically significant (± 1 source) and the selected h^ th provides a good compromise between accuracy and computational efficiency. The recursive SNR evaluation-subtraction procedure is thus performed only on the subset of binaries with h > h^ th, providing a considerable speedup of the calculation. § RESULTS In this section, we present the main results of our work. The analysis has been performed taking into account different values of e_0. This has enabled us to characterize the effect of eccentricity in determining the number of resolvable sources and the accuracy of the parameter estimation from the detected signal. To avoid confusion with the initial eccentricity used in the hardening model, e_0, throughout the whole section we will tag the eccentricity of the detected MBHB as e_ rs. §.§ Number of resolvable sources The upper panel of Fig. <ref> shows the median number of resolvable sources (N_ res) detected by the SKA and MPTA. The results have been divided according to the eccentricity at which the MBHB population was initialized (e_0). 
This classification allows us to understand the role of the eccentricity of the global MBHB population on the prospects of CGW detection. The median number of resolvable sources for 10-year MPTA is 4, independently of e_0. Conversely, 30-years SKA provides larger N_ res values (∼ 35), increasing with eccentricity. In particular, the number of detected binaries starts to increase when e_0 > 0.2. This trend can be ascribed to the appearance of resolvable high-eccentric MBHBs with observed Keplerian frequency outside of the PTA frequency range. Given their large eccentricity, these systems can push a large fraction of their GW signal inside the PTA band (more details in the description of Fig. <ref> and Fig. <ref> below). The eccentricity distribution of the detected MBHBs is presented in the lower panel of Fig. <ref>. Regardless of the adopted array, the eccentricity distribution of resolved sources peaks at lower values compared to the underlying overall MBHB population. Therefore, the eccentricity of the detected systems is not a good tracer of the eccentricity of the global MBHB population. This is because the more massive binaries, which circularize faster (see Eq. <ref>), are also the more likely to be detected. Compared to MPTA, SKA PTA can generally observe more eccentric MBHBs, which is expected due to its longer timespan. In fact, the SKA PTA sensitivity extends to lower frequencies, where MBHBs had less time to circularize due to GW emission. For completeness, Fig. <ref> depicts the number of resolvable sources for SKA PTA when only the pulsar white noise is taken into account. As shown, when the red noise is neglected the number of resolvable sources increases by ∼ 30%. Finally, Fig. <ref> shows the distribution of the SNR of resolvable sources. For clarity, we only presented the results for the SKA PTA given that MPTA features the same trends (but extended down to SNR = 3). As we can see, 90% of the detected systems present SNR < 15, but there is a large tail towards larger values. Despite being just a few, the remaining 10% of sources with SNR > 15 will be optimal targets for multimessenger astronomy, since their sky localization will be small enough to perform electromagnetic follow-ups (see Section <ref> and ). Finally, to compare the SNR distributions for models initialized with different eccentricities we compute the ratio between the number of sources at different SNR bins for e_0=0.5,0.9 by the detected population with e_0=0.0. As can be seen, no major differences in the SNR distribution are found. §.§ Properties of the resolvable sources In this section, we study the properties of the resolvable sources and explore possible dependencies with the eccentricity of the underlying MBHB population. For the sake of clarity, the analysis has been done only using three reference eccentricity models: e_0 = 0.0, 0.5 and 0.9. The left panels of Fig. <ref> present the chirp mass distribution of the MBHB population detected by SKA and MPTA experiments. As shown, both PTAs will detect MBHBs with ℳ ∼ 10^9.5 M_⊙, although MPTA will be biased towards more massive systems given its lower sensitivity. Interestingly, the detection of ℳ ≲ 10^9 M_⊙ binaries by SKA is preferred when the underlying MBHB population is initialized with low eccentricities (e_0 < 0.5). This is due to the typical eccentricity and observed Keplerian frequency of ℳ < 10^9 M_⊙ systems. These MBHBs are placed at f_k ∼ 10^-8.5 Hz independently of e_0, but their eccentricity raises when e_0 increases (e.g. 
∼ 0.4 and ∼ 0.6 for e_0 = 0.5, 0.9 models, respectively). These relatively high values of f_k and eccentricity cause these systems to emit part of their GW strain at high frequencies where the PTA sensitivity is already degrading. The net effect is the decrease of the source SNR with respect to a non-eccentric case. To illustrate this, the top panel of Fig. <ref> presents the characteristic GW strain versus observed GW frequency for three binaries with the same mass and Keplerian frequency but different eccentricities. As shown, for circular binaries all the emitted power falls in the frequency region in which the PTA has the best sensitivity. However, for the extreme case of e > 0.5 most of the power is pushed at f > 3 × 10^-8 Hz where the PTA is the less sensitive. Consequently, our analysis suggests that the detection of low-mass MBHBs (ℳ < 10^9 M_⊙) will be hindered in highly eccentric populations. The redshift distribution of the resolvable sources is presented in the middle-left panels of Fig. <ref>. The distribution peaks at z < 0.25, independently of the PTA experiment used and the eccentricity of the underlying MBHB population. The SKA resolved population has a longer tail at high redshifts, due to its better sensitivity. Moreover, there is a small trend towards higher redshifts with increasing eccentricity, more prominent in SKA than MPTA. The frequency distribution of the resolved binaries is shown in the middle-right panels of Fig. <ref>. The peak of the distribution seats around ∼10^-8.5-10^-8 Hz, being systematically higher for SKA PTA given its better sensitivity at high frequencies. As anticipated in Section <ref>, models with eccentric binaries enable the detection of MBHBs whose f_2 = 2f_k is smaller than the minimum frequency allowed by the PTA observing time. In the extreme case of e_0 = 0.9, up to half of the detected systems display this feature in the MPTA array. An illustrative example of how the strain is distributed among all the harmonics for a source with Keplerian frequency outside the PTA band can be seen in the lower panel of Fig. <ref>. Finally, the inclination distribution shown in the rightmost panels of Fig. <ref> is bimodal, preferring face-on/face-off binaries with respect to the observer (i < 50 deg and i > 125 deg). This is simply due to the angular pattern emission of GWs, which are stronger along the binary orbital angular momentum axis. §.§ SMBHB parameter estimation Here, we explore the precision to which the CGW source parameters can be determined. To this end, we make use of the procedure presented in Section <ref>. We focus on parameters of astrophysical relevance. Specifically, the GW amplitude, ζ, the observed Keplerian frequency, f_k, the orbital eccentricity, e_rs, the inclination angle, i, and the initial orbital phase, l_0. The latter parameters might help the identification of distinctive electromagnetic counterparts. In fact, accreting binaries at low inclination angles might appear as Type I AGN, displayng considerable variability in the optical/UV, while relativistic jets could be observable for nearly face-on systems <cit.>. Moreover, precise phase determination of eccentric binaries allows to clearly identify the periastron passage epochs, which can be associated to a dimming in the electromagnetic emission due to temporary mini-disc disruptions caused by the close flyby of the two MBHs <cit.>. 
Finally, we combine the two angles defining the source position in the sky to determine the 2D sky localization uncertainty as <cit.>: ΔΩ = 2π√((sinθΔθΔϕ)^2 - (sinθ σ_θϕ)^2), where σ_θ, ϕ is the correlation coefficient between θ and ϕ computed from the Fisher matrix. With this definition the probability of a GW source to be found outside a certain solid angle ΔΩ_0 is proportional to e^-ΔΩ_0/ΔΩ. Consequently, ΔΩ is an important quantity to take into account given that it provides information about the accuracy of pinpointing the GW source in the sky. Moreover, its specific value will shed light on the possibility of carrying out multimessenger studies by placing constraints on the size of the area to scan for electromagnetic follow-ups <cit.>. Results are presented in Fig. <ref> for the SKA PTA as a function of the eccentricity of the detected source, e_ rs, to determine its potential impact on the parameter estimation. MPTA parameter estimation features the same trends and is shown in Appendix <ref>. Since the estimation precision depends on the source SNR, we have performed this exploration at fixed bins of SNR: 3 < SNR < 10, 10 < SNR < 15, and SNR > 15. For each case, we show the median value on the error of the parameter recovery and the central 68% of the distribution. The recovery of the GW amplitude displays a small correlation with e_rs, slightly improving for high eccentric binaries. For instance, the median relative error for systems with SNR > 15 and e_rs < 0.1 is ∼ 30% while for high eccentric cases is reduced down to ∼ 20%. As it is the case for all parameters, the GW amplitude recovery precision scales linearly with the inverse of the SNR. Notably, while at SNR > 15 the source amplitude can be determined with a median relative error of 0.2-0.3, at SNR < 10 it is poorly constrained. Conversely, the Keplerian frequency is extremely well determined, with a relative error that is always smaller than 1%. In this case, the trend with eccentricity is reversed, with the median error increasing with e_rs. The error associated with the binary eccentricity improves for highly eccentric systems, with values as low as ∼ 1 - 5% at e_rs > 0.6. The inclination of the MBHB orbit is essentially unconstrained, especially for systems at small SNR and for small eccentricities. The initial phase of the orbit displays a clear dependence on e_ rs, being better constrained for large eccentric cases. For instance, at 10 < SNR < 15 Δ l_0 associated with e_ rs < 0.2 MBHB is ∼ 10 deg while it drops down to ∼ 1 deg for e_ rs > 0.6. This is not surprising since for an eccentric orbit GW emission is strongly localized close to the pericenter, allowing a precise measurement of the orbital phase of the system. Finally, the lower right panel of Fig. <ref> presents the sky-localization. Interestingly, it does not show any dependence with e_ rs but, as expected, it strongly improves with SNR, since the parameter has a theoretical scaling with SNR^-2. Binaries detected at 5 < SNR < 10 have a median sky-localization of ΔΩ ∼ 200 deg^2, making multimessenger follow-ups extremely challenging. On the other hand, systems with SNR > 15 feature median ΔΩ ∼ 20 deg^2. Note that the 68% confidence region extends down to ≈ 2 deg^2. Since SKA can resolve 30-40 binaries and about 10% of them will have SNR>15 (see Fig. <ref>), we can therefore expect at least one CGW with source localization at the ∼deg^2 level, which would be a perfect target for electromagnetic follow-ups. 
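For reference, the sky-localization figures quoted above follow directly from the inverse of the Fisher matrix. The sketch below shows how ΔΩ can be evaluated in practice; it is an illustrative example only, assuming the Fisher matrix over the source parameters has already been computed and that θ and ϕ occupy known indices in the parameter vector.

```python
import numpy as np

def sky_localization(fisher, i_theta, i_phi, theta):
    """2D sky-localization uncertainty from a Fisher information matrix."""
    cov = np.linalg.inv(fisher)          # parameter covariance matrix
    var_theta = cov[i_theta, i_theta]    # (Delta theta)^2
    var_phi = cov[i_phi, i_phi]          # (Delta phi)^2
    cov_tp = cov[i_theta, i_phi]         # theta-phi covariance, sigma_theta_phi
    # Delta Omega = 2*pi*sqrt[(sin(theta) Dtheta Dphi)^2 - (sin(theta) sigma_tp)^2]
    return 2.0 * np.pi * np.abs(np.sin(theta)) * np.sqrt(
        var_theta * var_phi - cov_tp**2
    )
```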
§ CAVEATS In this section, we discuss the main caveats and assumptions related to the methodology. §.§ Time evolving binaries We have assumed that the MBHB orbital frequency does not evolve during the PTA observation time. However, this simplification may not hold, especially for massive and high-frequency binaries, given their shorter GW timescales (see Eq. <ref>). To explore the fraction of MBHBs in our catalogues for which the non-evolving assumption is not fulfilled, we have computed the following quantity: 𝒟_f = (df_k/dt × T_obs)/Δf. Here df_k/dt is determined according to Eq. (<ref>), but for simplicity accounting only for the GW term, while the factor (df_k/dt) × T_obs corresponds to the variation of the observed Keplerian frequency over the PTA observation time. The division by Δf compares this variation with the frequency bin width set by the PTA observation span. The upper panel of Fig. <ref> presents the distribution of 𝒟_f for all the binaries in our catalogue whose h > h^th (see Section <ref>). As shown, the distribution peaks at low values of 𝒟_f (∼ 10^-6), implying essentially no evolution of the binary frequency. Nevertheless, some cases show 𝒟_f > 1, but they correspond to less than 0.1% of the MBHB population. The lower panel of Fig. <ref> presents the 𝒟_f distribution only for the subset of sources that are resolvable by the SKA PTA. Not surprisingly, the distribution for this sub-sample of binaries peaks at larger values. This shift is caused by the fact that individually resolvable MBHBs are intrinsically systems with large masses and high frequencies (see Fig. <ref>). Despite this, the bulk of the systems features 𝒟_f ∼ 10^-3, again consistent with non-evolving binaries. Also for this sub-sample, binaries with 𝒟_f > 1 only account for 0.1% of the resolvable sources. In light of these results, we can conclude that our assumption of non-evolving binaries can be safely adopted. §.§ Pericenter precession Another assumption that is relevant to discuss concerns the omission of pericenter precession. To quantify its impact on the SNR recovery, we adopt the same criteria presented in <cit.>. The precession of the pericenter induces an additional shift in the observed Keplerian frequency, given by f_k + γ̇/π, with γ̇ = dγ/dt = 6π f_k [2π f_k (1+z) M G]^2/3 / [(1-e^2) c^2]. This causes a bias in the recovery of the orbital frequency. However, this effect can be neglected as long as the shift caused by pericenter precession over the observed time is small compared to the frequency resolution of the detector, Δf. This is equivalent to enforcing the condition 𝒟_γ ≪ 1, where 𝒟_γ = (d^2γ/dt^2 × T_obs)/Δf, with d^2γ/dt^2 = [96 (2π)^13/3 / ((1-e^2) c^7)] [(1+z)M]^2/3 ℳ_z^5/3 G^7/3 f_k^13/3. Fig. <ref> shows the distribution of 𝒟_γ for all MBHBs with h > h^th (top panel) and for those detected by the SKA PTA (lower panel). Similar to the 𝒟_f result, the systems with 𝒟_γ > 1 represent only 0.1% of the resolved MBHBs. As a consequence, the effect of the pericenter precession can be ignored for our astrophysically motivated populations of MBHBs. §.§ Pulsar term Similarly to other works, we do not account for the pulsar term, since its inclusion in the matched-filtering methodology requires a precise estimate of the distance between the pulsar and the Earth. To date, only a very small sample of pulsars has such an accurate measurement of this quantity.
Despite this, ongoing efforts are being made to calculate the pulsar distance via the measurement of pulsar spin down and annual parallax motion <cit.>. When it comes to 2D sky localization, including the Pulsar term in the analysis does not appear to make a significant difference in the size of the localization area, at least in the case of circular, GW-driven binaries. It can, however, cause a small bias compared to the true sky location (Ferranti et al. in prep.). Therefore, while Earth term-only estimates are robust in terms of localization precision, including the pulsar term in the analysis might be required to pinpoint the correct direction in the sky. Including the pulsar term in the CGW searches could also provide key information on how the MBHB evolves during the times needed for the pulse to cover the Earth-Pulsar distance. Under the assumption of GW-driven binaries, identification of the pulsar term allows to effectively separate the system chirp mass from the distance in the signal amplitude parameter ζ (Ferranti et al. in prep for examples involving circular binaries), greatly improving 3D localization of the source in the sky. This assumption, however, is not necessarily fulfilled. At the low frequencies probed by PTAs, MBHBs can be still coupled to their environment, especially at the time of Pulsar-term production. In fact, since the typical Earth-Pulsar separation can range up to thousands of light years, the Pulsar and Earth terms of the signal could inform us about the binary properties at two different evolutionary stages. In this case, the change of parameters such as the orbital frequency and the eccentricity could help to better understand the environment in which the MBHB resided. As shown in Eq. (<ref>), the Keplerian frequency of a binary evolving only due to GW emission varies as ∝ f^11/3, while if its dynamics is ruled by stellar scattering events, it changes as ∝ f^1/3. Including environmental coupling, however, increases the number of parameters in the model, and whether GW and environmental effects can be efficiently separated in a real analysis has still to be investigated. §.§ Further complications in source detectability Throughout this work, we used a simple SNR criterion to define source detectability, regardless of the nature of the GW signal. However, the shape of the waveform can be significantly different for circular and highly eccentric binaries and while the detectability of the former has been extensively demonstrated in the literature <cit.>, much less has been done on the eccentric binary front <cit.>. This is especially true for sources with f_2<1/T, which can constitute up to 50% of the resolvable CGWs in the limit of high eccentricities for the MPTA (see Fig. <ref>). For these systems, the waveform consists of a single burst-like spike coincident with the binary periastron passage <cit.>, which is very different from a repeated sinusoidal pattern. Although analytical templates can certainly be constructed for such signals, the effectiveness of match filtering in extracting them from real data has still to be investigated. § CONCLUSIONS In this work, we studied the capability of future PTA experiments of detecting single MBHBs under the natural assumption that the sGWB is produced by an eccentric MBHB population. To this end, we have generalized the standard approach used in PTA to assess the observability of circular MBHBs, by computing the SNR and Fisher Information Matrix for eccentric systems. 
We have adopted a 10-year MPTA and a 30-year SKA PTA and applied our analysis to a large number of simulated eccentric MBHB populations, compatible with the latest measured amplitude of the sGWB. The main results can be summarized as follows: * The expected number of resolvable sources detected by a 10-year MPTA (SNR > 3) is 4^+3_-2 (68% credible interval), with no dependence on the eccentricity of the underlying MBHB population. * The extraordinary sensitivity of a 30-year SKA PTA will enable the detection of 30^+11_-10 (68% credible interval) sources with SNR > 5 for an initially circular MBHB population. This number grows to 40^+15_-15 in the case of very high MBHB initial eccentricity (e_0 = 0.9). This is mostly caused by highly eccentric binaries with Keplerian frequency ≲ 10^-9 Hz pushing part of their power into the SKA sensitivity band. * The resolved MBHBs do not follow the eccentricity distribution of the underlying MBHB population. Instead, they tend to favor lower eccentricities. This is caused by the fact that the bulk of the detected MBHB population is placed in the frequency range 10^-8.5 - 10^-8 Hz. At those frequencies, GW emission is expected to dominate and, as a consequence, partial circularization of the binary orbit has already taken place. Practically, this means that the massive and high-frequency systems most likely to be detected should display low eccentricities with respect to the bulk of the population. * The chirp mass (ℳ) of the resolvable sources is ≳ 10^8.5 M_⊙, but it depends on the specific PTA experiment. While the median value for MPTA is ∼ 10^9.5 M_⊙, that for SKA shifts down to ∼ 10^9 M_⊙. The results also show that the detection of binaries with ℳ ≲ 10^9 M_⊙ is strongly disfavored, especially when the eccentricity of the underlying MBHB population is large. * The distribution of resolvable sources peaks at z < 0.25, regardless of the PTA used, but, unsurprisingly, it is more skewed towards low z for MPTA. Their typical frequency at the second harmonic (f_2) sits at ∼ 10^-8.5 Hz for the SKA PTA and increases to ∼ 10^-8 Hz for MPTA. The eccentricity of the MBHB population shifts the f_2 median value towards low frequencies. This is caused by the fact that highly eccentric populations have a significant number of resolvable sources with f_2 < 1/T_obs. Finally, the inclination of the MBHB orbit with respect to the observer shows a bimodal distribution with maximum probability for face-on configurations. No correlation with the eccentricity of the MBHB population is seen. * The accuracy of recovering the source properties shows a mild dependence on the eccentricity of the system. Whereas the frequency, amplitude, and orbital inclination are almost independent of it, the eccentricity and initial orbital phase of the MBHB orbit show a clear trend. Specifically (and unsurprisingly), these parameters are better constrained for sources with large eccentricities. * The sky localization does not show any dependence on the MBHB eccentricity. However, it roughly follows the expected SNR^-2 trend. In particular, binaries detected with 5 < SNR < 10 feature a median ΔΩ ∼ 200 deg^2, hindering any possible multimessenger follow-up. MBHBs with SNR > 15 display a median ΔΩ ∼ 20 deg^2. We note that the scatter around these median values is up to 1 dex (68% confidence), due to the anisotropy of the pulsar distribution and intrinsic properties of the MBHB population. In the most optimistic case, we can expect the 30-yr SKA to localize a particularly loud MBHB with ∼ deg^2 accuracy.
In this work, we developed a theoretical framework to assess the detectability and parameter extraction of eccentric MBHBs from realistic populations. This allowed us to investigate the performance of future radio facilities such as MPTA and SKA. Being able to detect the presence of single MBHB sources at nHz frequencies will be fundamental in determining the astrophysical or cosmological nature of the signal recently reported by worldwide PTAs and will open the era of multimessenger astronomy with MBHBs. In future work, we plan to apply the procedure presented here to the populations of galaxies, MBHs and MBHBs generated by galaxy formation models, to explore the capabilities of associating CGW sources detected by PTAs with galaxies and AGNs. We thank the B-Massive group at Milano-Bicocca University for useful discussions and comments. R.T., D.I.V., A.S. and G.M.S acknowledge the financial support provided under the European Union’s H2020 ERC Consolidator Grant “Binary Massive Black Hole Astrophysics” (B Massive, Grant Agreement: 818691). M.B. acknowledges support provided by MUR under grant “PNRR - Missione 4 Istruzione e Ricerca - Componente 2 Dalla Ricerca all'Impresa - Investimento 1.2 Finanziamento di progetti presentati da giovani ricercatori ID:SOE_0163” and by University of Milano-Bicocca under grant “2022-NAZ-0482/B”. § ACCURACY IN THE RECOVERY OF SMBHB PARAMETERS FOR THE MPTA CASE In this section, we investigate the accuracy of recovering the binary parameters when using the 10-year MPTA. We point out that at very high eccentricities the results are noisy, given that in MPTA these types of binaries are rarer than in the SKA PTA (see the lower panel of Fig. <ref>). For this reason, the results at e_rs > 0.8 are affected by low statistics and should be taken with caution. Fig. <ref> presents the results. As shown, the errors in the parameter estimation for the 10-year MPTA generally follow the same trend as the ones for SKA. This confirms that the main driving factor in parameter estimation is the signal-to-noise ratio. Notably, the error in the frequency is worse in MPTA than in SKA at a fixed SNR. This is because the frequency resolution of the PTA is set by the observation time T_obs, which is shorter for MPTA. The second interesting result regards the better estimation of the source sky location for MPTA with respect to SKA. This is because the sky positions of the MPTA pulsars follow a more isotropic distribution; hence, the array is able to better triangulate the GW source sky position. On the contrary, for the SKA PTA we select an ultra-realistic pulsar sky distribution, and hence most of the pulsars are located inside the Galactic plane. This highlights the need to choose a distribution of pulsars in the sky that is as isotropic as possible.
http://arxiv.org/abs/2407.12402v1
20240717082855
TurkishMMLU: Measuring Massive Multitask Language Understanding in Turkish
[ "Arda Yüksel", "Abdullatif Köksal", "Lütfi Kerem Şenel", "Anna Korhonen", "Hinrich Schütze" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT Multiple choice question answering tasks evaluate the reasoning, comprehension, and mathematical abilities of Large Language Models (LLMs). While existing benchmarks employ automatic translation for multilingual evaluation, this approach is error-prone and potentially introduces culturally biased questions, especially in social sciences. We introduce the first multitask, multiple-choice Turkish QA benchmark, TurkishMMLU, to evaluate LLMs' understanding of the Turkish language. TurkishMMLU includes over 10,000 questions, covering 9 different subjects from Turkish high-school education curricula. These questions are written by curriculum experts, suitable for the high-school curricula in Turkey, covering subjects ranging from natural sciences and mathematics to more culturally representative topics such as Turkish Literature and the history of the Turkish Republic. We evaluate over 20 LLMs, including multilingual open-source (e.g., Gemma, Llama, MT5), closed-source (GPT 4o, Claude, Gemini), and Turkish-adapted (e.g., Trendyol) models. We provide an extensive evaluation, including zero-shot and few-shot evaluation of LLMs, chain-of-thought reasoning, and question difficulty analysis along with model performance. We provide an in-depth analysis of the Turkish capabilities and limitations of current LLMs to provide insights for future LLMs for the Turkish language. We publicly release our code for the dataset and evaluation: <https://github.com/ArdaYueksel/TurkishMMLU>. § INTRODUCTION Benchmarking plays an important role in understanding and measuring the capabilities of language models. Recent multitask multiple-choice question answering (QA) benchmarks like MMLU <cit.> cover a wide range of use cases for language models, making them highly popular as one of the main evaluation benchmarks in recent LLMs such as GPT 4 <cit.> and Gemini <cit.>. For the multilingual adaptation of the MMLU benchmark, recent works <cit.> have focused on automatic translations. However, automatic translations are often prone to errors and may fail to capture the linguistic and cultural nuances of the target language. Consequently, there have been manual efforts to create multitask multiple-choice benchmarks in various languages, including Arabic <cit.>, Korean <cit.>, and Chinese <cit.>. In our work, we introduce TurkishMMLU, the first multitask multiple-choice QA benchmark specifically designed for the Turkish language. Our dataset includes 10,032 multiple-choice questions, each with five options, spanning nine subjects categorized into four groups: Natural Sciences, Mathematics, Turkish Language and Literature, and Social Sciences and Humanities. These questions are sourced from a high-quality online learning platform created by the Turkish Ministry of Education, which aims to support high school students in preparing for the university entrance exam. A unique feature of TurkishMMLU is the correctness ratio, which reflects the actual performance of students on these questions, offering a more accurate measure of question difficulty.
We illustrate the distribution of subjects and an example from in Figure <ref>. After introducing this dataset for benchmarking in Turkish, we evaluate a wide range of current language models, more than 40, including multilingual autoregressive LLMs, both open models like Gemma <cit.>, Llama-3 and Aya-23 <cit.> and closed-source models such as GPT 4o, Claude and Gemini. In addition, we also cover multilingual encoder-decoder models such as MT5, MT0, Aya and Turkish-adapted LLMs such as Trendyol-LLM, a LoRA adaptation of multilingual LLMs. We also cover many different setups including zero-shot, few-shot, and chain-of-thought <cit.>. We further provide analysis of LLMs based on subjects and difficulty. Our additional analysis provides insights for the design of future LLMs for Turkish and beyond. We publicly release our code for the dataset and evaluation: <https://github.com/ArdaYueksel/TurkishMMLU>. Our contributions are as follows: * We introduce the first large-scale multitask multiple-choice benchmark for Turkish, consisting of 10,032 questions across nine subjects. * We evaluate a wide range of LLMs, varying in size from 60M to 141B, including both open and closed-source models, and provide a comprehensive leaderboard featuring over 40 models. * We conduct an in-depth analysis of LLM performance in chain-of-thought setups and based on question difficulty. § RELATED WORK LLM Benchmarking: Benchmarks are crucial for understanding the capabilities of NLP models, identifying their weaknesses and facilitating the development of more capable models. Historically, most NLP benchmarks focused on linguistic tasks <cit.> and followed the paradigm of supervised fine-tuning of a model on a training set and evaluation on an unseen test set. However, with the advent of powerful LLMs, this type of evaluation became obsolete as these models showed impressive zero-shot and few-shot learning skills, even for higher level tasks closer to real world applications. To evaluate the emerging capabilities of the LLMs, new benchmarks are proposed that focus on more advanced capabilities such as common sense reasoning <cit.>, multi-hop reasoning <cit.>, programming <cit.> and multi-turn conversations. Additionally, some studies aimed at evaluating these capabilities through extensive datasets that cover a broad range of knowledge-based topics <cit.>. One prominent example is MMLU (Massive Multitask Language Understanding) <cit.>; it covers 57 diverse fields from basic arithmetic to intricate areas like legal studies and computer science. Although many of these benchmarks have focused on English, there have been significant efforts to adapt and develop similar benchmarks for other languages <cit.>. Turkish Benchmarks: One of the initial efforts in Turkish benchmarking was THQUAD <cit.>, a variant of the SQuAD question-answering benchmark <cit.> that focuses on extracting information from historical passages and answering questions about Ottoman and Islamic history in an open-book format. MUKAYESE <cit.>, another Turkish benchmark, was created by combining multiple existing datasets for various tasks. However, most of the tasks that are included in MUKAYESE, such as NER (named entity recognition), sentence segmentation and spellchecking, do not effectively capture the knowledge and the language understanding capabilities of LLMs due to their low level nature. 
Several other studies that created multilingual benchmarks for specific tasks, such as XCOPA (Cross-lingual Choice of Plausible Alternatives) <cit.> and XNLI (Cross-lingual Natural Language Inference) <cit.>, also include Turkish among several other languages. A recent study that focuses on Turkish LLMs <cit.> created the Turkish versions of the TruthfulQA Multiple Choice (MC) <cit.> and ARC (AI2 Reasoning Challenge) <cit.> datasets to evaluate Turkish LLMs. These benchmarks are constructed by machine-translating the English versions of the corresponding datasets, which is usually followed by manual verification and editing to ensure good quality. Overall, despite some efforts to evaluate the capabilities of LLMs for Turkish, Turkish still lacks a high-quality and comprehensive evaluation resource that covers multiple domains. In this study, we address this by introducing TurkishMMLU. § DATASET TurkishMMLU is curated using resources from online learning materials for Turkish high school education. In the Turkish educational system, high school education spans four years, and students take the National University Entrance Exams after completing their studies. This exam contains multiple-choice questions covering various subjects from the curricula. To assist students in preparing for these exams, official and commercial exam preparation booklets, video guides, and online practice tests in multiple-choice question-answering format are available. The Turkish Ministry of Education (MEB) has developed an online platform called the Education Information Network (EBA), which aims to provide electronic resources such as lecture notes, videos, tests and solutions, and interactive books to facilitate the learning process for students. This platform[https://ogmmateryal.eba.gov.tr/panel/MSoruDers.aspx] contains multiple-choice questions and their solutions that form the basis of our study. Figure <ref> illustrates the EBA platform interface. Users generate tests by specifying grade level and subject, upon which the platform provides multiple 10-question tests. After test completion, users can review ground-truth answers and video solutions. Each question's difficulty is denoted by a Correctness Ratio (black boxes in Figure <ref>), calculated as the percentage of correct user responses. For each test, we extract the question text, multiple-choice options, correct answer, topic, subject, grade, and difficulty level. Table <ref> details the distribution of test questions by grade and subject in TurkishMMLU. The dataset includes nine high school subjects across four domains: Math (Mathematics); Natural Sciences (Biology, Chemistry, Physics); Language (Turkish Language and Literature); and Humanities and Social Sciences (History, Geography, Philosophy, Religion and Ethics). The test set comprises 9,807 multiple-choice questions, with an additional 225 (25 per subject) in the development set. While Philosophy is limited to grades 10 and 11, the other subjects span all four grades. Many questions include mathematical formulas/notations (in LaTeX or text) and images; however, we exclude image-based questions to focus on evaluating text models. Figure <ref> displays the distribution of Correctness Ratios. Questions are categorized as Easy (top 30%), Medium (middle 40%), or Hard (bottom 30%), with percentile thresholds at 41 and 28, respectively.
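As a concrete illustration, the difficulty labels can be assigned from the correctness ratios in a few lines; this is a sketch using the threshold values quoted above, and the exact handling of the boundary values is our assumption.

```python
def assign_difficulty(correctness_ratio):
    """Map a question's correctness ratio (percentage of users answering
    correctly) to a difficulty label, using the 30th/70th percentile
    cut-offs of 28 and 41 reported for the benchmark."""
    if correctness_ratio >= 41:      # top ~30% of questions by correctness
        return "Easy"
    if correctness_ratio >= 28:      # middle ~40%
        return "Medium"
    return "Hard"                    # bottom ~30%

print([assign_difficulty(r) for r in (55, 35, 12)])  # ['Easy', 'Medium', 'Hard']
```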
We manually selected 25 questions per subject for the development set, maintaining subject-grade distributions and mirroring the overall difficulty distribution. For few-shot examples, we focus on 5-shot experiments with 5 questions per subject due to context window constraints and compute budget limitations, each with different correct answers to avoid selection bias. For Chain-of-Thought (CoT) prompting, we manually provide step-by-step solutions for these 5 questions per subject. The large scale of our test dataset, comprising 9,807 questions, poses significant challenges. Experiments with state-of-the-art proprietary models like GPT 4 and Claude-Opus face budget constraints, while using CoT prompting with open-source models generates excessively long responses, resulting in long inference times. To address these issues while maintaining comprehensive evaluations, we create a smaller version of TurkishMMLU with 100 randomly selected questions per subject, totaling 900 questions. We uniformly sampled 25 questions per grade for each subject, except for Philosophy, which has 50 questions evenly distributed between grades 10 and 11. This sample is representative of grades and subjects, enabling in-depth model evaluation, and can be easily used in resource-constrained scenarios. We measure the correlation between this smaller subset and the full benchmark in <ref>, finding a strong correlation across 32 models. § EVALUATION RESULTS After finalizing TurkishMMLU, we now evaluate various multilingual and Turkish-adapted open- and closed-source LLMs. We cover a wide range of models, from 60M to 141B parameters, and various experimental setups. Experimental Setup Our main evaluation setup is 5-shot in-context learning evaluation, following the prior evaluation setups of recent LLMs <cit.> on English MMLU <cit.>. From the development set proposed in <ref>, we select a fixed set of questions for each subject and include 5 of them in our few-shot prompt, with the question, multiple-choice options, and the answer. We carefully design these prompts to ensure that each question has a different option (in our dataset, the five options are always A, B, C, D, E) as the answer. For evaluation, we report accuracy using the lm-evaluation-harness framework from EleutherAI <cit.>. For open-source models, we perform log-prob based evaluation; for closed-source models, we perform greedy decoding and then parse the prediction. Our second evaluation is a zero-shot evaluation to compare the few-shot and zero-shot performance of the models. Additionally, we evaluate LLMs with a 5-shot chain-of-thought (CoT) evaluation. Especially for questions requiring further reasoning and elaboration, such as mathematics, directly giving answers may be a limitation in our main evaluation. Therefore, we evaluate a wide range of models, including closed-source models, with CoT reasoning <cit.>. In this setup, we provide CoT solutions for each question in our few-shot examples for each subject and perform greedy decoding. We put the final answer option at the end of the solution in the prompts, and then parse the predicted answer from the generated solution. Since TurkishMMLU includes real-world difficulty data, we also conduct a difficulty analysis to evaluate models. This expands our evaluation setup from comparing models on different subjects to comparing them across difficulty levels. In all of our evaluations, we use the smaller subset of TurkishMMLU, because the closed-source experiments are quite expensive.[For example, a 5-shot CoT evaluation with Claude-3 Opus on the entire dataset would cost more than $750.]
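To make the log-probability scoring used for open-source models concrete, a minimal sketch is given below. It is illustrative only: model and tokenizer stand for any HuggingFace-style causal LM, and this is not the lm-evaluation-harness implementation, which also handles multi-token options and batching.

```python
import torch

def pick_option(model, tokenizer, prompt, options=("A", "B", "C", "D", "E")):
    """Return the answer option with the highest log-probability
    when appended to the few-shot prompt (single-token sketch)."""
    scores = {}
    for opt in options:
        ids = tokenizer(prompt + opt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        # logits at position -2 predict the final (option) token
        log_probs = torch.log_softmax(logits[0, -2], dim=-1)
        scores[opt] = log_probs[ids[0, -1]].item()
    return max(scores, key=scores.get)
```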
With public models, we calculate performance on both and to test our assumption that they would yield similar results. Language Models: We evaluate a diverse range of models, including Turkish-adapted, multilingual open-source and closed-source LLMs. For Turkish-adapted models, we use Trendyol-LLM 7B, a Llama-2 model further pretrained on Turkish[<https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v0.1>], available in base, chat, and chat-dpo forms on HuggingFace. We also include Kanarya <cit.>, a pretrained autoregressive 2B Turkish model. In the multilingual open-source category, we evaluate models with encoder-decoder architectures such as mT5 <cit.> (from small to xxl), mT0 <cit.> (with the same sizes as mT5), and Cohere's Aya-101 <cit.>. For autoregressive models, we include Meta's Llama-2 <cit.> (7B, 7B-Chat, 13B, 13B-Chat) and Llama-3 (8B, 8B-Instruct, 70B, and 70B-Instruct). From MistralAI, we evaluate Mistral 7B variants <cit.>, Mixtral 8x22B, and 8x7B <cit.>. We also include Cohere4AI's Command-R and Aya-23 models <cit.>, Google's Gemma <cit.> (7B and 2B with their instruction versions), and Microsoft's Phi-3-Mini <cit.>. For multilingual closed-source models, we evaluate OpenAI's GPT models (3.5, 4-Turbo, and 4o), Anthropic's Claude-3 models (Haiku, Sonnet, and Opus versions), and Google's Gemini models (pro versions 1.0 and 1.5). §.§ Few-shot Evaluation We present the 5-shot evaluation of models in Table <ref>. We show scores in four categories: Natural Sciences, Math, Turkish Language & Literature, and Social Sciences and Humanities, as well as the macro-averaged scores over nine subjects. The best-performing model is a closed-source model, GPT 4o, with 83.1% accuracy. It outperforms all other models in each category as well. The best-performing open-source model is Llama-3 70B-IT (Instruction-Tuned) with 67.3% accuracy. While it is better than many closed-source models such as Claude-3 Sonnet and Gemini 1.0-pro, it is still 15.8% worse than GPT 4o. Another interesting point is that the best encoder-decoder model, Aya-101, performs much worse than autoregressive models, achieving only 40.7% accuracy. The results suggest that mathematics is the most difficult subject for almost all models, as it is usually challenging to answer these questions correctly in a single token, given that they require multi-hop reasoning. The easiest category in is Social Sciences and Humanities. For STEM courses, models perform poorly compared to other subjects. We also observe that many closed-source models switch to COT-like problem-solving rather than providing the answer directly, even though we provided single-answer style few-shots. We parse the predicted option in those answers with manually-designed patterns and indicate these “CoT” models with the * symbol in Table <ref>. Among 7B-8B models, Llama-3 8B-IT exhibits the best performance, but Aya-23 and Gemma show comparable results. Mistral 7B-IT and Llama-2 7B lag more than 10% behind these three models. Among mT5-xxl (13B) based models, Aya-101 achieves the best performance, however, encoder-decoder based models perform worse than autoregressive models of similar sizes. We note that recent open-source models such as Llama-3, Command-R, Aya-23, and Mixtral 8x22B (all released after April 2024) outperform older closed-source models like GPT 3.5 (released in March 2022), signaling promise for open-source models. 
However, Turkish-adapted models like Trendyol-LLM, despite outperforming their base model (Llama-2 7B), are significantly behind newer variants of similar size (Llama-3 8B). We provide the results for all nine subjects and all models in the Appendix in Table <ref>. §.§ Zero-Shot Evaluation To assess the performance gain from few-shot examples, we also compare models in the zero-shot setting. Table <ref> summarizes the results for selected open-source models. We observe the most significant performance improvement from few-shot prompting for the Gemma 7B model. Llama-3 70B-IT, the best-performing model in the few-shot setting, also leads in the zero-shot setting among public models, with a minimal performance drop of just 2.7%. Interestingly, mT0-xxl performs considerably better in the zero-shot setting than in the few-shot setting, contrary to the trends of the other models. We attribute this to mT0's <cit.> primary focus on zero-shot adaptation. This finding suggests that mT0's zero-shot performance even surpasses Aya's few-shot performance. §.§ Chain-of-Thought Evaluation We evaluate 5-shot chain-of-thought (CoT) prompting in Table <ref>, showing the performance difference between non-CoT and CoT few-shot experiments. We include CoT evaluations for three reasons: (i) to evaluate the reasoning capabilities of recent LLMs, which show promising results <cit.>, (ii) some subjects like mathematics require multi-hop reasoning, and (iii) CoT also indicates the NLG performance of models in Turkish, complementing our NLU evaluation. All models performing below 60% accuracy in the non-CoT few-shot scenario, except GPT 3.5-turbo, show worse performance with CoT reasoning. This suggests these models may have limited generation and reasoning capabilities in Turkish. Across all subjects, the most significant improvement is observed in mathematics, with +25.0% for the best-performing model, GPT 4o. With this approach, GPT 4o sets the best performance on TurkishMMLU at 88.2% accuracy across all settings. We also observe improvements in Natural Sciences, though not as substantial as in Mathematics. However, for Turkish Language & Literature and Social Sciences and Humanities, we observe no consistent improvements and even performance drops across models, including strong ones. One exception to our findings is Gemini 1.5-pro. In our 5-shot non-CoT experiments, we found that Gemini 1.5-pro generates solutions for all questions in the few-shot prompt, even when provided with gold answers. This prevents us from getting predictions for test questions since it exceeds our maximum generation length (it attempts to generate solutions for 5 few-shot questions + 1 test question). This causes mispredictions in many 5-shot non-CoT cases for Gemini 1.5-pro. Therefore, the apparent large improvement (+45.1) between the non-CoT and CoT settings for Gemini 1.5-pro is misleading. In the CoT setting, we see that Gemini is the fourth-best model overall, placing it in a competitive position. §.§ Difficulty Analysis We analyze model performance across question difficulty levels using the correctness ratio in TurkishMMLU, categorizing questions as Easy, Medium, or Hard based on the 30th and 70th percentiles. Table <ref> presents these results along with point-biserial correlation coefficients (rpb), which all show statistically significant positive correlations (p < 0.001), confirming that model performance decreases as question difficulty increases.
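For completeness, the coefficient itself is straightforward to compute from per-question outcomes. The sketch below assumes model_correct is a 0/1 array marking whether the model answered each question correctly and correctness_ratio holds the corresponding student correctness ratios; the values shown are toy data.

```python
import numpy as np
from scipy.stats import pointbiserialr

model_correct = np.array([1, 0, 1, 1, 0, 1])            # toy per-question outcomes
correctness_ratio = np.array([62, 18, 45, 51, 22, 70])  # toy student correctness ratios

r_pb, p_value = pointbiserialr(model_correct, correctness_ratio)
print(f"r_pb = {r_pb:.3f}, p = {p_value:.3g}")
```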
This pattern holds across all models, from smaller ones like Trendyol-LLM 7B-C (rpb = 0.152) to state-of-the-art models like GPT 4o (rpb = 0.211), validating the difficulty categorization in TurkishMMLU. On the other hand, when we apply point-biserial correlation coefficients to the grade instead of the question difficulty, we do not observe any significant correlation (p > 0.1) for any of the models. Surprisingly, difficult questions at the lower grades seem to be as hard for models as difficult questions at the higher grades. Models generally perform well on easy questions (up to 96.1% accuracy) but struggle with hard ones (19.5% to 80.1%). We also observe that for some models, the largest differences come from the hard questions. For example, Gemini 1.5-pro is only 6% lower than GPT 4-turbo on easy and medium questions; however, the gap grows to 17% on hard questions. §.§ Small Set - All Set Correlation To reduce the inference time and cost of the experiments, many analyses in this paper are conducted on the smaller subset of TurkishMMLU. In this section, we compute 5-shot average scores for the open-source models on both the small and full sets. The correlation plot is shown in Figure <ref>. Pearson's r correlation between the two sets is 0.999, confirming that findings based on the smaller subset are likely to hold for the full benchmark as well. § CONCLUSION In this study, we introduced TurkishMMLU, the first Turkish multitask Question Answering benchmark designed for evaluating LLMs. Our dataset consists of 10,032 multiple-choice questions covering nine subjects from the Turkish high school curriculum and university entrance exams, complete with correctness ratios to indicate question difficulty. We evaluated a wide range of LLMs, including Turkish-adapted and multilingual models, in various setups such as zero-shot, few-shot, and chain-of-thought reasoning. Our results highlighted the superior performance of closed-source models like GPT 4o and Claude-3 Opus and the notable improvements in newer open-source autoregressive models like Llama-3 70B-IT. The benchmark demonstrates significant performance variation by subject and question difficulty, emphasizing the strengths and limitations of current LLMs in understanding and reasoning in Turkish. Furthermore, as LLMs mature, it will become increasingly crucial to shift the focus of the field from English to broader coverage of the languages of the world. We see TurkishMMLU as a promising contribution towards ensuring that all language communities will be equally served by NLP in the future. § LIMITATIONS While we believe TurkishMMLU will significantly contribute to Turkish NLP and the design of the next generation of multilingual LLMs, it does have some limitations. First, TurkishMMLU is focused solely on text-based assessment. Exploring multimodal questions that involve images or audio is left for future work. Second, the dataset covers high school curriculum and university entrance exam questions in a multiple-choice format. However, future efforts should aim to expand Turkish benchmarking datasets to include assessments of generative abilities and more open-ended questions. § LEADERBOARD For a comprehensive overview of model performance across all nine subjects, we provide a detailed leaderboard in this section. Table <ref> presents the 5-shot evaluation scores for 43 models, covering a wide range of LLMs. This detailed breakdown allows for a deeper analysis of model performance variations across different subjects, providing valuable insights into the strengths and weaknesses of each model.
http://arxiv.org/abs/2407.13500v1
20240718133236
FADE: A Task-Agnostic Upsampling Operator for Encoder-Decoder Architectures
[ "Hao Lu", "Wenze Liu", "Hongtao Fu", "Zhiguo Cao" ]
cs.CV
[ "cs.CV" ]
Hao Lu^1 hlu@hust.edu.cn Wenze Liu^1 wzliu@hust.edu.cn Hongtao Fu^1 htfu@hust.edu.cn Zhiguo Cao^1 zgcao@hust.edu.cn ^1 The Key Laboratory of Image Processing and Intelligent Control, Ministry of Education; School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China FADE: A Task-Agnostic Upsampling Operator for Encoder-Decoder ArchitecturesCorresponding author: Zhiguo Cao. Hao Lu^1 Wenze Liu^1 Hongtao Fu^1 Zhiguo Cao^1 Received: date / Accepted: date ======================================================================================================================== § ABSTRACT The goal of this work is to develop a task-agnostic feature upsampling operator for dense prediction where the operator is required to facilitate not only region-sensitive tasks like semantic segmentation but also detail-sensitive tasks such as image matting. Prior upsampling operators often can work well in either type of the tasks, but not both. We argue that task-agnostic upsampling should dynamically trade off between semantic preservation and detail delineation, instead of having a bias between the two properties. In this paper, we present FADE, a novel, plug-and-play, lightweight, and task-agnostic upsampling operator by fusing the assets of decoder and encoder features at three levels: i) considering both the encoder and decoder feature in upsampling kernel generation; ii) controlling the per-point contribution of the encoder/decoder feature in upsampling kernels with an efficient semi-shift convolutional operator; and iii) enabling the selective pass of encoder features with a decoder-dependent gating mechanism for compensating details. To improve the practicality of FADE, we additionally study parameter- and memory-efficient implementations of semi-shift convolution. We analyze the upsampling behavior of FADE on toy data and show through large-scale experiments that FADE is task-agnostic with consistent performance improvement on a number of dense prediction tasks with little extra cost. For the first time, we demonstrate robust feature upsampling on both region- and detail-sensitive tasks successfully. Code is made available at: <https://github.com/poppinace/fade> FADE: A Task-Agnostic Upsampling Operator for Encoder-Decoder ArchitecturesCorresponding author: Zhiguo Cao. Hao Lu^1 Wenze Liu^1 Hongtao Fu^1 Zhiguo Cao^1 Received: date / Accepted: date ======================================================================================================================== § INTRODUCTION Feature quality, being an important yet hard-to-quantify indicator, significantly influences the performance of a vision system <cit.>. This is particularly true for dense prediction tasks such as semantic segmentation <cit.> and object detection <cit.>, where the predictions highly correlate with the responses of feature maps <cit.>. Prior art has proposed various ways to enhance the feature quality by operating features, including, but not limited to, spatial pooling <cit.>, feature pyramid fusion <cit.>, attention manipulation <cit.>, context aggregation <cit.>, and feature alignment <cit.>. Yet, the most famous segmentation model <cit.> so far still struggles to generate accurate boundary predictions, which suggests feature quality remains unsatisfactory. In this work, we delve into an easily overlooked yet fundamental component that closely relates to feature quality—feature upsampling. 
Feature upsampling, which aims to recover the spatial resolution of features, is an indispensable stage in most dense prediction models <cit.> as almost all dense prediction tasks prefer high-res predictions. Since feature upsampling is often close to the prediction head, the quality of upsampled features can provide a direct implication of the prediction quality. A good upsampling operator would therefore contribute to improved feature quality and prediction. Yet, conventional upsampling operators, such as nearest neighbor (NN) or bilinear interpolation <cit.>, deconvolution <cit.>, max unpooling <cit.>, and pixel shuffle <cit.>, often have a preference of a specific task. For instance, bilinear interpolation is favored in semantic segmentation <cit.>, while pixel shuffle is preferred in image super-resolution <cit.>. A main reason is that each dense prediction task has its own focus: some tasks like semantic segmentation <cit.> and instance segmentation <cit.> are region-sensitive, while some tasks such as image super-resolution <cit.> and image matting <cit.> are detail-sensitive. If one expects an upsampling operator to generate semantically consistent features such that a region can share the same class label, it is often difficult for the same operator to recover boundary details simultaneously, and vice versa. Indeed empirical evidence shows that bilinear interpolation and max unpooling have inverse behaviors in segmentation and matting <cit.>, respectively. In an effort to evade `trials-and-errors' from choosing an upsampling operator for a certain task at hand, there has been a growing interest in developing a generic upsampling operator for dense prediction <cit.>. For example, CARAFE <cit.> shows its benefits on four dense prediction tasks, including object detection, instance segmentation, semantic segmentation, and image inpainting. IndexNet <cit.> also boosts performance on several tasks such as image matting, image denoising, depth prediction, and image reconstruction. However, a comparison between CARAFE and IndexNet <cit.> indicates that neither CARAFE nor IndexNet can defeat its opponent on both region- and detail-sensitive tasks (CARAFE outperforms IndexNet on segmentation, while IndexNet can surpass CARAFE on matting), which can also be observed from the inferred segmentation masks and alpha mattes in Fig. <ref>. This raises a fundamental research question: What makes for task-agnostic upsampling? After an apples-to-apples comparison between existing dynamic upsampling operators (Fig. <ref>), we hypothesize that it is the inappropriate and/or insufficient use of high-res encoder and low-res decoder features that leads to the task dependency of upsampling. We also believe that there should exist a unified form of upsampling operator that is truly task-agnostic. In particular, we argue that a task-agnostic upsampling operator should dynamically trade off between semantic preservation and detail delineation in a content-aware manner, instead of having a bias between the two properties. To this end, our main idea is to make the full use of encoder and decoder features in upsampling (kernels). We therefore introduce FADE, a novel, plug-and-play, lightweight, and task-agnostic upsampling operator for encoder-decoder architectures. The name also implies its working mechanism: upsampling features in a `fade-in' manner, from recovering spatial structure to delineating subtle details. 
In the context of hierarchical encoder-decoder architectures such as feature pyramid networks (FPNs) <cit.> and U-Net <cit.>, semantic information is rich in low-res decoder features, and detailed information is often abundant in high-res encoder features. To exploit both information in feature upsampling, FADE Fuses the Assets of Decoder and Encoder with three key observations and designs: i) By exploring why CARAFE works well on region-sensitive tasks but poorly on detail-sensitive tasks, and why IndexNet and A2U <cit.> behave conversely, we observe that what features (encoder or decoder) to use to generate the upsampling kernels matters. Using low-res decoder features preserves regional coherence, while using high-res encoder features helps recover details. It is thus natural to seek whether combining encoder and decoder features enjoys both merits, which underpins the core idea of FADE, as shown in Fig. <ref>. ii) To integrate high-res encoder and low-res decoder features, a subsequent obstacle is how to deal with the problem of resolution mismatch. A standard way is to implement U-Net-style fusion <cit.>, including feature interpolation, feature concatenation, and convolution. However, we show that this naive implementation can introduce artifacts into upsampling kernels. To solve this, we introduce a semi-shift convolutional operator that unifies channel compression, concatenation, and kernel generation. Particularly, it allows granular control over how each feature point contributes to upsampling kernels. iii) Inspired by the gating mechanism used in FPN-like designs <cit.>, we further refine upsampled features by enabling selective pass of high-res encoder features via a simple decoder-dependent gating unit. To improve the practicality and efficiency of FADE, we also investigate parameter-efficient and memory-efficient implementations of semi-shift convolution. Such implementations lead to a lightweight variant of FADE termed FADE-Lite. We show that, even with one forth number of parameters of FADE, FADE-Lite still preserves the task-agnostic property and behaves reasonably well across different tasks. The memory-efficient implementation also enables direct execution of cross-resolution convolution, without explicit feature interpolation for resolution matching. We conduct experiments on seven data sets covering six dense prediction tasks. We first validate our motivation and the rationale of our design via several toy-level and small-scale experiments, such as binary image segmentation on Weizmann Horse <cit.>, image reconstruction on Fashion-MNIST <cit.>, and semantic segmentation on SUN RGBD <cit.>. We then show through large-scale evaluations that FADE reveals its task-agnostic property by consistently boosting both region- and detail-sensitive tasks, for instance: i) semantic segmentation: FADE improves SegFormer-B1 <cit.> by +2.73 mask IoU and +4.85 boundary IoU on ADE20K <cit.> and steadily boosts the boundary IoU with stronger backbones, ii) image matting: FADE outperforms the previous best matting-specific upsampling operator A2U <cit.> on Adobe Composition-1K <cit.>, iii) object detection and iv) instance segmentation: FADE performs comparably against the best performing operator CARAFE over Faster R-CNN <cit.> (+1.1 AP for FADE vs. +1.2 AP for CARAFE with ResNet-50) and Mask R-CNN <cit.> (+0.4 mask AP for FADE vs. 
+0.7 mask AP for CARAFE with ResNet-50) baselines on Microsoft COCO <cit.>, and v) monocular depth estimation: FADE also surpasses the previous best upsampling operator IndexNet <cit.> over the BTS <cit.> baseline on NYU Depth V2 <cit.>. In addition, FADE retains the lightweight property by introducing only a small number of parameters and FLOPs. It also generalizes well across convolutional and transformer architectures <cit.>. Overall, our contributions include the following: * For the first time, we show that task-agnostic upsampling is made possible on both high-level region-sensitive and low-level detail-sensitive tasks; * We present FADE, one of the first task-agnostic upsampling operators, which fuses encoder and decoder features in generating upsampling kernels, uses an efficient semi-shift convolutional operator to control per-point contribution, and optionally applies a gating mechanism to compensate for details; * We provide a comprehensive benchmark of state-of-the-art upsampling operators across five mainstream dense prediction tasks, which facilitates future study. A preliminary conference version of this work appeared in <cit.>. We extend <cit.> from the following aspects: i) to highlight the task-agnostic property, we validate FADE comprehensively on more baseline models, e.g., UPerNet <cit.>, Faster RCNN <cit.>, Mask RCNN <cit.>, and BTS <cit.>, on different network scales, from SegFormer-B1 to -B5 <cit.> and from R50 to R101 <cit.>, and on three additional vision tasks including object detection, instance segmentation, and monocular depth estimation; ii) we carefully benchmark the performance of state-of-the-art dynamic upsampling operators on the evaluated tasks to provide a basis for future studies; iii) we further explore parameter-efficient and memory-efficient implementations of semi-shift convolution to enhance the practicality of FADE, which also leads to a lightweight variant called FADE-Lite; iv) by observing some unexpected phenomena in experiments, we rethink the value of the gating mechanism in FADE and provide additional analyses and insights on when to use the gating unit, particularly for instance-level tasks; v) we extend the related work by comparing feature upsampling with other closely related techniques such as feature alignment and boundary processing; vi) we also extend our discussion on the general value of feature upsampling to dense prediction.

§ LITERATURE REVIEW
We review upsampling operators in deep networks, techniques that share a similar spirit to upsampling including feature alignment and boundary processing, and typical dense prediction tasks in vision.

§.§ Feature Upsampling
Unlike joint image upsampling <cit.>, feature upsampling operators are mostly developed in the era of deep learning, to respond to the need for recovering the spatial resolution of encoder features (decoding). Conventional upsampling operators typically use fixed/hand-crafted kernels. For instance, the kernels in the widely used NN and bilinear interpolation are defined by the relative distance between pixels. Deconvolution <cit.>, a.k.a. transposed convolution, also applies a fixed kernel during inference, even though its kernel parameters are learned. Pixel Shuffle <cit.> first employs convolution to adjust feature channels and then reduces the depth dimension to increase the spatial dimension. While the main purpose of resolution increase is achieved, the operators above also introduce certain artifacts into features.
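For reference, each of these conventional operators is a one-line call in a modern deep learning framework. The snippet below is an illustrative PyTorch sketch only; the channel and kernel sizes are arbitrary choices, not tied to any particular model in this paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 64, 32, 32)  # a toy decoder feature map (B, C, H, W)

# fixed/hand-crafted kernels: NN and bilinear interpolation
up_nn = F.interpolate(x, scale_factor=2, mode='nearest')
up_bilinear = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)

# deconvolution (transposed convolution): the kernel is learned, but fixed at inference
deconv = nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1)
up_deconv = deconv(x)

# pixel shuffle: a convolution expands channels, then depth is traded for resolution
expand = nn.Conv2d(64, 64 * 4, kernel_size=3, padding=1)
up_ps = F.pixel_shuffle(expand(x), upscale_factor=2)

# max unpooling: reuses the indices recorded by the paired max-pooling layer
pool = nn.MaxPool2d(2, stride=2, return_indices=True)
y, idx = pool(torch.randn(1, 64, 64, 64))
up_unpool = F.max_unpool2d(y, idx, kernel_size=2, stride=2)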
For instance, it is well-known that, interpolation smooths boundaries, and deconvolution generates checkerboard artifacts <cit.>. Several recent work has shown that unlearned upsampling has become a bottleneck behind architectural design <cit.>, and dynamic upsampling behaviors are more expected <cit.>. Among hand-crafted operators, unpooling <cit.> perhaps is the only operator that implements dynamic upsampling, i.e., each upsampled position is data-dependent conditioned on the max operator. The importance of such a dynamic property has been exemplified by some recent dynamic kernel-based upsampling operators <cit.>, which leads to a new direction from considering generic feature upsampling across tasks and architectures. In particular, CARAFE <cit.> implements context-aware reassembly of features with decoder-dependent upsampling kernels, IndexNet <cit.> provides an indexing perspective of upsampling and executes upsampling by learning a soft index (kernel) function, and A2U <cit.> introduces affinity-aware upsampling kernels by exploiting second-order information. At the core of these operators is the data-dependent upsampling kernels whose kernel parameters are not learned but dynamically predicted by a sub-network. However, while being dynamic, CARAFE, A2U, and IndexNet still exhibit a certain degree of bias on specific tasks. In this work, we show through FADE that the devil is in the use of encoder and decoder features in generating upsampling kernels. §.§ Feature Alignment and Boundary Processing Different from dynamic upsampling that aims to enhance feature quality during resolution change, much existing work also attempts to enhance the feature quality after matching resolution. Two closely related techniques are feature alignment and boundary processing. Feature alignment explores to align multi-level feature maps by warping features with, for example, either sampling offsets <cit.> or a dense flow field <cit.>, which has been found effective in reducing semantic aliasing during cross-resolution feature fusion. Another idea is to use a gating unit to align and refine features <cit.>, which prevents encoder noise from entering decoder feature maps. FADE has also a similar design as post-processing, but is much simpler. Considering that, most fragile predictions in segmentation are along object boundaries, boundary processing techniques are developed to optimize boundary quality. In particular, PointRend <cit.> views segmentation as a rendering problem and adaptively selects points to predict crisp boundaries by an iterative subdivision algorithm. <cit.> improves boundary prediction with decoupled body and edge supervision. Boundary-preserving Mask R-CNN <cit.> presents a boundary-preserving mask head to improve mask localization accuracy. Gated-SCNN <cit.> introduces a two-stream architecture that wires shape information as a separate processing branch to process boundary-related information specifically. Compared with dynamic upsampling, feature alignment and boundary processing are typically executed after naive feature upsampling. Since feature upsampling is inevitable, it would be interesting to see whether one could enhance the feature quality during upsampling, which is exactly one of the goals of dynamic upsampling. In this work, we show that FADE is capable of mitigating semantic aliasing as feature alignment and of improving boundary predictions as boundary processing. FADE also demonstrates universality across a number of tasks more than segmentation. 
§.§ Dense Prediction Dense prediction covers a broad class of per-pixel labeling tasks, ranging from mainstream object detection <cit.>, semantic segmentation <cit.>, instance segmentation <cit.>, and depth estimation <cit.> to low-level image restoration <cit.>, image matting <cit.>, edge detection <cit.>, and optical flow estimation <cit.>, to name a few. An interesting property about dense prediction is that a task could be region-sensitive or detail-sensitive. The sensitivity is closely related to what metric is used to assess the task. In this sense, semantic/instance segmentation is region-sensitive, because the standard Mask Intersection-over-Union (IoU) metric <cit.> is mostly affected by regional mask prediction quality, instead of boundary quality. On the contrary, image matting can be considered detail-sensitive, because the error metrics <cit.> are mainly computed from trimap regions that are full of subtle details or transparency. Note that, when we emphasize region sensitivity, we do not mean that details are not important, and vice versa. In fact, the emergence of the Boundary IoU metric <cit.> implies that the limitation of a certain evaluation metric has been noticed by our community. Feature upsampling can play important roles in dense prediction, not only for generating high-resolution predictions but also for improving the quality of predictions. The goal of developing a task-agnostic and content-aware upsampling operator capable of both regional preservation and detail delineation can have a broad impact on a number of dense prediction tasks. In this work, we evaluate FADE and other upsampling operators on both types of tasks using both region-aware and detail-aware metrics. § TASK-AGNOSTIC UPSAMPLING: A TRADE-OFF BETWEEN SEMANTIC PRESERVATION AND DETAIL DELINEATION? Before we present FADE, we share some of our view points towards task-agnostic upsampling, which may be helpful to understand our designs in FADE. Encoder and decoder features play different roles in upsampling, particularly in the generation of upsampling kernels. In dense prediction models, downsampling stages are involved to reduce computational burden or to acquire a large receptive field, bringing the need of peer-to-peer upsampling stages to recover the spatial resolution, which together constitutes the basic encoder-decoder architecture. During downsampling, details of high-res features are impaired or even lost, but the resulting low-res encoder features often have good semantic meanings that can pass to decoder features. Hence, we believe an ideal upsampling operator should appropriately resolve two issues: 1) preserve the semantic information already extracted; 2) compensate as many lost details as possible without deteriorating the semantic information. NN or bilinear interpolation only meets the former. This conforms to our intuition that interpolation often smooths features. A reason is that low-res decoder features have no prior knowledge about missing details. Other operators that directly upsample decoder features, such as deconvolution and pixel shuffle, can have the same problem with poor detail compensation. Compensating details requires high-res encoder features. This is why unpooling that stores indices before downsampling has good boundary delineation <cit.>, but it hurts the semantic information due to zero-filling. Dynamic upsampling operators, including CARAFE <cit.>, IndexNet <cit.>, and A2U <cit.>, alleviate the problems above with data-dependent upsampling kernels. 
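To make the idea of data-dependent kernels concrete, the following PyTorch-style sketch shows the reassembly step that such operators share: a per-position K×K kernel, predicted by a small sub-network from some feature, is softmax-normalized and used to take a weighted sum over the corresponding low-res window for each of the scale^2 upsampled sub-positions. The function name and signature are ours for illustration, not code from any released implementation.

import torch
import torch.nn.functional as F

def kernel_reassemble(x, kernels, scale=2, K=5):
    """Reassemble low-res features x (B, C, H, W) with per-position upsampling
    kernels (B, scale^2 * K^2, H, W) into an output of shape (B, C, scale*H, scale*W)."""
    B, C, H, W = x.shape
    # normalize each K*K kernel so that its weights sum to one
    kernels = kernels.view(B, scale * scale, K * K, H, W)
    kernels = F.softmax(kernels, dim=2)
    # gather the K*K neighbourhood of every low-res position
    patches = F.unfold(x, kernel_size=K, padding=K // 2)     # (B, C*K*K, H*W)
    patches = patches.view(B, C, K * K, H, W)
    # weighted sum over the window for each of the scale^2 sub-positions
    out = torch.einsum('bckhw,bskhw->bcshw', patches, kernels)  # (B, C, scale^2, H, W)
    out = out.reshape(B, C * scale * scale, H, W)
    return F.pixel_shuffle(out, scale)                          # (B, C, scale*H, scale*W)

CARAFE, IndexNet, and A2U differ mainly in how the tensor of kernels is predicted, which is exactly the design space discussed next.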
Their upsampling modes are shown in Fig. <ref>(a)-(b). From Fig. <ref>, it can be observed that, CARAFE generates upsampling kernels conditioned on decoder features, while IndexNet <cit.> and A2U <cit.> generate kernels via encoder features. This may explain the inverse behavior between CARAFE and IndexNet/A2U on region- or detail-sensitive tasks <cit.>. Yet, we find that generating upsampling kernels using either encoder or decoder features can lead to suboptimal results, and it is critical to leverage both encoder and decoder features for task-agnostic upsampling, as implemented in FADE (Fig. <ref>(c)). How each feature point contributes to upsampling matters. After deciding what the features to use, the follow-up question is how to use the features effectively and efficiently. The main obstacle is the mismatched resolution between encoder and decoder features. Per Fig. <ref>, one may consider simple interpolation for resolution matching, but this can lead to sub-optimal upsampling. Considering the case of applying ×2 NN interpolation to decoder features, if we use 3×3 convolution to generate the upsampling kernel, the effective receptive field of the kernel can reduce to be <50%: before interpolation there are 9 valid points in a 3×3 window, but only 4 valid points are left after interpolation. Besides this, another more important issue remains. Still in the ×2 upsampling in Fig. <ref>, the four windows which control the variance of upsampling kernels w.r.t. the 2×2 neighbors of high resolution are affected by the naive interpolation. Controlling a high-res upsampling kernel map, however, is blind with the low-res decoder feature. It contributes little to the variance of the four neighbors. A more reasonable choice may be to let encoder and decoder features cooperate to control the overall upsampling kernel, but let the encoder feature alone control the variance of the four neighbors. This insight exactly motivates the design of semi-shift convolution (Section <ref>). High-res encoder features can be leveraged for further detail refinement. Besides helping structural recovery via upsampling kernels, there remains useful information in encoder features. Since encoder features only go through a few layers of a network, they preserve `fine details' of high resolution. In fact, nearly all dense prediction tasks require fine details, e.g., despite regional prediction dominates in instance segmentation, accurate boundary prediction can significantly boost performance <cit.>, not to mention the stronger request of fine details in detail-sensitive tasks. The demands of fine details in dense prediction need further exploitation of encoder features. Following existing ideas <cit.>, we explore the use of a gating mechanism by leveraging low-res decoder features to guide where the high-res encoder features can pass through. Yet, in some instance-aware tasks, we find that the gate is better left fully open (more discussion can be found in Section <ref>). § FADE: FUSING THE ASSETS OF DECODER AND ENCODER Here we elaborate our designs in FADE. We first revisit the framework of dynamic upsampling, then present from three aspects on how to fuse the assets of decoder and encoder features in upsampling, particularly discussing the principle and the efficient implementations of the semi-shift convolution. §.§ Dynamic Upsampling Revisited Here we review some basic operations in recent dynamic upsampling operators such as CARAFE <cit.>, IndexNet <cit.>, and A2U <cit.>. Fig. 
<ref> briefly summarizes their upsampling modes. They share an identical pipeline, i.e., first generating data-dependent upsampling kernels, and then reassembling the decoder features using the kernels. Typical dynamic upsampling kernels are content-aware, but channel-shared, which means each position has a unique upsampling kernel in the spatial dimension, but the same ones are shared in the channel dimension. CARAFE learns upsampling kernels directly from decoder features and then reassembles them to high resolution. Specifically, the decoder features pass through two consecutive convolutional layers to generate the upsampling kernels, of which the former is a channel compressor implemented by 1×1 convolution used to reduce the computational complexity and the latter is a content encoder with 3×3 convolution. IndexNet and A2U, however, adopt more sophisticated modules to leverage the merit of encoder features. Further details can be referred to <cit.>. FADE is designed to maintain the simplicity of dynamic upsampling. Hence, we mainly optimize the process of kernel generation with semi-shift convolution, and the channel compressor will also function as a way of pre-fusing encoder and decoder features. In addition, FADE also includes a gating mechanism for detail refinement. The overall pipeline of FADE is summarized in Fig. <ref>. In what follows, we explain our three key designs and present our efficient implementations. §.§ Generating Upsampling Kernels from Encoder and Decoder Features We first showcase a few visualizations on some small-scale or toy-level data sets to highlight the importance of both encoder and decoder features for task-agnostic upsampling. We choose semantic segmentation on SUN RGBD <cit.> as the region-sensitive task and image reconstruction on Fashion MNIST <cit.> as the detail-sensitive one. We follow the network architectures and the experimental settings in <cit.>. Since we focus on upsampling, all downsampling stages use max pooling. Specifically, to show the impact of encoder and decoder features, in the segmentation experiments, we use CARAFE as the baseline but only modify the source of features used for generating upsampling kernels. We build three baselines: 1) decoder-only, the standard implementation of CARAFE; 2) encoder-only, where the upsampling kernels are generated from encoder features; 3) encoder-decoder, where the upsampling kernels are generated from the concatenation of encoder and NN-interpolated decoder features. We report Mask IoU (mIoU) <cit.> and Boundary IoU (bIoU) <cit.> for segmentation, and Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity index (SSIM), Mean Absolute Error (MAE), and root Mean Square Error (MSE) for reconstruction. From Table <ref>, one can observe that the encoder-only baseline outperforms the decoder-only one in image reconstruction, but in semantic segmentation the trend is on the contrary. To understand why, we visualize the segmentation masks and reconstructed results in Fig. <ref>. We find that in segmentation the decoder-only model tends to produce regionally coherent masks, while the encoder-only one generates clear mask boundaries but blocky regions; in reconstruction, by contrast, the decoder-only model almost fails and can only generate low-fidelity reconstructions. It thus can be inferred that, high-res encoder features help to predict details, while low-res decoder features contribute to semantic preservation of regions. 
Indeed, by considering both encoder and decoder features, the resulting mask seems to integrate the merits of the former two, and the reconstructions are also full of details. Therefore, albeit a simple tweak, FADE significantly benefits from generating upsampling kernels with both encoder and decoder features, as illustrated in Fig. <ref>(c). §.§ Semi-shift Convolution Given encoder and decoder features, we next address how to use them to generate upsampling kernels. We investigate two implementations: the naive one presented in Fig. <ref> and our customized one – semi-shift convolution. We first illustrate the principle of semi-shift convolution and then present its efficient implementations. Finally, we compare the computational workload and memory occupation among different implementations. §.§.§ Principle of Semi-shift Convolution The key difference between naive and semi-shift convolution is how each decoder feature point spatially corresponds to each encoder feature point. The naive implementation shown in Fig. <ref> includes five operations: i) feature interpolation, ii) concatenation, iii) channel compression, iv) standard convolution for kernel generation, and v) softmax normalization. As aforementioned in Section <ref>, naive interpolation can have a few problems. To address them, we propose semi-shift convolution that simplifies the first four operations above into a unified operator, which is illustrated in Fig. <ref>. Note that the 4 convolution windows in encoder features all correspond to the same window in decoder features. This design has the following advantages: 1) the role of control in the kernel generation is made clear where the control of the variance of 2×2 neighbors is moved to encoder features completely; 2) the receptive field of decoder features is kept consistent with that of encoder features; 3) memory cost is reduced, because semi-shift convolution directly operates on low-res decoder features, without feature interpolation; 4) channel compression and kernel generation can also be merged in semi-shift convolution. Mathematically, the single window processing with naive implementation or semi-shift convolution has an identical form if ignoring the content of feature maps. For example, considering the top-left window w.r.t. the index `1' in Figures <ref> and <ref>, the (unnormalized) upsampling kernel takes the form w_m = ∑_l=1^d∑_i=1^h∑_j=1^hβ_ijlm(∑_k=1^2Cα_klx_ijk + a_l) + b_m = ∑_l=1^d∑_i=1^h∑_j=1^hβ_ijlm(∑_k=1^Cα_kl^ enx_ijk^ en + ∑_k=1^Cα_kl^ dex_ijk^ de + a_l) + b_m = ∑_l=1^d∑_i=1^h∑_j=1^hβ_ijlm∑_k=1^Cα_kl^ enx_ijk^ en               + ∑_l=1^d∑_i=1^h∑_j=1^hβ_ijlm(∑_k=1^Cα_kl^ dex_ijk^ de + a_l) + b_m , where w_m, m=1,...,K^2, is the weight of the upsampling kernel, K the upsampling kernel size, h the convolution window size, C the number of input channel dimension of encoder and decoder features, and d the number of compressed channel dimension. α_kl^ en and {α_kl^ de, a_l} are the parameters of 1×1 convolution specific to encoder and decoder features, respectively, and {β_ijlm, b_m} the parameters of 3×3 convolution. Following CARAFE, we set h=3, K=5, and d=64. §.§.§ Efficient Implementations of Semi-shift Convolution Given the formulation above, here we discuss the efficient implementations of semi-shift convolution. According to Eq. 
(<ref>), by the linearity of convolution, the two standard convolutions on 2C-channel features are equivalent to applying two distinct 1×1 convolutions to C-channel encoder and C-channel decoder features, respectively, followed by a shared 3×3 convolution and summation. Such decomposition allows us to process encoder and decoder features without matching their resolution explicitly. However, we still need to address the mismatch implicitly. There are two strategies: i) downsampling the high-res encoder output to match the low-res decoder one, or ii) upsampling the low-res decoder output to match the high-res encoder one. To process the whole feature map following the first strategy, the window can move s steps on encoder features but only ⌊ s/2 ⌋ steps on decoder features. This is why the operator is given the name `semi-shift convolution'. We split the process to 4 sub-processes; each sub-process focuses on the top-left, the top-right, the bottom-left, and the bottom-right window, respectively. Different sub-processes have similar prepossessing strategies. For example, for the top-left sub-process, we add full zero padding to the decoder feature, but only pad the top and left side of the encoder feature. Then all the top-left window correspondences can be satisfied by setting convolutional stride of 1 for the decoder feature and of 2 for the encoder feature. Finally, after a few memory operations, the four sub-outputs can be reassembled to the (unnormalized) upsampling kernel. This process is illustrated in the left of Fig. <ref>, which can be called the high-to-low (H2L) implementation. The H2L implementation above is provided in our conference version <cit.>. We later notice that the key characteristic of semi-shift convolution lies in the same decoder feature point corresponds to 4 encoder feature points, which shares the same spirit of NN interpolation. Following this interpretation, we provide a more efficient implementation with less use of memory, as shown in the right of Fig. <ref>, named the low-to-high (L2H) implementation. First, unshared 1× 1 convolutions are used to compress the encoder and decoder features, respectively. Then the shared 3×3 convolution is applied, of which the decoder feature is NN-interpolated to the size of the encoder one. Finally they are summed to obtain the (unnormalized) kernel. Both implementations can be implemented within the standard PyTorch library. In the H2L implementation, the kernel 𝒲_i of the i-th sub-process (with specific padding applied), i=1,2,3,4, takes the form 𝒲_i = conv_/2(CC(𝒳_en, θ_en), θ)+ conv_/1(CC(𝒳_de, θ_de), θ) , where conv_/s(𝒳, θ) denotes the stride-s 3×3 convolution over the feature map 𝒳, parameterized by θ. CC is the channel compressor implemented by 1×1 convolution. 𝒳_en and 𝒳_de are the encoder and the decoder feature, respectively. Note that, the parameters θ_en and θ_de in CC are different, while the parameters in conv_/1 and conv_/2 are the same θ. The four 𝒲_i's need to be aggregated and reshaped to form the full kernel 𝒲. In contrast, the L2H implementation does not require sub-process division and computes the full kernel 𝒲 directly. It can be formulated as 𝒲 = conv_/1(CC(𝒳_en, θ_en), θ)+ NN ( conv_/1(CC(𝒳_de, θ_de), θ)) , where NN is the ×2 NN interpolation operator. SemiShift-Lite and FADE-Lite. We also investigate a simplified variant of semi-shift convolution, which uses depthwise convolution to further reduce the computational complexity, named SemiShift-Lite. 
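Before turning to the lite variant, a minimal PyTorch-style sketch of the L2H form above may be helpful. The module and variable names are ours, and details such as initialization and the subsequent per-window softmax are omitted; this is a sketch of the equations, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiShiftL2H(nn.Module):
    """Sketch of the low-to-high (L2H) semi-shift convolution: two distinct 1x1
    compressors, a shared 3x3 content encoder, and NN interpolation of the
    decoder branch before summation (cf. the equation above)."""
    def __init__(self, c_en, c_de, d=64, K=5):
        super().__init__()
        self.compress_en = nn.Conv2d(c_en, d, 1, bias=False)  # theta_en (no bias in the equation)
        self.compress_de = nn.Conv2d(c_de, d, 1)               # theta_de, carries the bias a_l
        self.content = nn.Conv2d(d, K * K, 3, padding=1)       # shared theta, carries the bias b_m

    def forward(self, x_en, x_de):
        # x_en: (B, C_en, 2H, 2W) encoder feature; x_de: (B, C_de, H, W) decoder feature
        w_en = F.conv2d(self.compress_en(x_en), self.content.weight,
                        bias=None, padding=1)                   # count the shared bias only once
        w_de = self.content(self.compress_de(x_de))             # (B, K*K, H, W)
        w_de = F.interpolate(w_de, scale_factor=2, mode='nearest')
        return w_en + w_de   # unnormalized kernels (B, K*K, 2H, 2W); softmax over K*K follows

After softmax normalization over the K*K dimension, each high-res position uses its kernel to reassemble the 5×5 decoder window it corresponds to, analogous to the kernel-based reassembly sketched earlier. SemiShift-Lite, described next, keeps the same interface but uses a depthwise content encoder.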
Specifically, SemiShift-Lite sets d=K^2 and adopts 3×3 depthwise convolution to encode the local information. Its whole number of parameters is 2CK^2+9K^2. The use of SemiShift-Lite also leads to a lightweight variant of FADE, i.e., FADE-Lite. We use this variant to show that the task-agnostic property indeed comes with the careful treatment of encoder and decoder features, even with much less parameters. When C=256, d=64, and K=5, despite FADE-Lite only includes 27.6% parameters of its standard version FADE, we observe that FADE-Lite is still task-agnostic and outperforms most upsampling operators (see Section <ref> for details). §.§ Extracting Fine Details from Encoder Features Here we further introduce a gating mechanism to complement fine details from encoder features to upsampled features. We again use some experimental observations to motivate our design. We use a binary image segmentation dataset, Weizmann Horse <cit.>. The reasons for choosing this dataset are two-fold: (1) the visualization is made simple; (2) the task is simple such that the impact of feature quality can be neglected. When all baselines have nearly perfect region predictions, the difference in detail prediction can be amplified. We use SegNet pretrained on ImageNet as the baseline and alter only the upsampling operators. Results are listed in Table <ref>. An interesting phenomenon is that CARAFE works almost the same as NN interpolation and even falls behind the default unpooling and IndexNet. An explanation is that the dataset is too simple such that the region smoothing property of CARAFE is wasted, but recovering details matters. A common sense in segmentation is that, the interior of a certain class would be learned fast, while mask boundaries are difficult to predict. This can be observed from the gradient maps w.r.t. an intermediate decoder layer, as shown in Fig. <ref>. During the middle stage of training, most responses are near boundaries. Now that gradients reveal the demand of detail information, feature maps would also manifest this requisite with some distributions, e.g., in multi-class semantic segmentation a confident class prediction in a region would be a unimodal distribution along the channel dimension, and an uncertain prediction around boundaries would likely be a bimodal distribution. Hence, we assume that all decoder layers have gradient-imposed distribution priors and can be encoded to inform the requisite of detail or semantic information. In this way fine details can be chosen from encoder features without hurting the semantic property of decoder features. Hence, instead of directly skipping encoder features as in feature pyramid networks (FPNs) <cit.>, we introduce a naive gating mechanism following existing ideas <cit.> to refine upsampled features using encoder features, conditioned on decoder features. The gate is generated through a 1×1 convolution layer, a NN interpolation layer, and a sigmoid function. As shown in Fig. <ref>(c), the decoder feature first goes through the gate generator, and the generator then outputs a gate map instantiated in Fig. <ref>. Finally, the gate map G modulates the encoder feature ℱ_ encoder and the upsampled feature ℱ_ upsampled to generate the final refined feature ℱ_ refined as ℱ_ refined = ℱ_ encoder· G + ℱ_ upsampled· (1-G) . From Table <ref>, the gate works on both NN and CARAFE. We remark that our initial motivation for developing the gating mechanism comes from semantic segmentation and image matting tasks. 
In semantic segmentation, the model outputs a set of logits and uses argmax to select one channel as the predicted class. This form of prediction renders the model working in a one-class-one-value manner. To preserve this manner, we expect the gate to extract only the details it requires from the encoder (Fig. <ref>) and to influence the decoder feature as little as possible. Similarly in matting, although the number of classes can be considered infinite, the model still follows the one-class-one-value paradigm. However, in instance-sensitive tasks, such as object detection, given the one-class-one-value feature maps, one cannot distinguish instances with argmax. In addition, object detection is rather different from semantic segmentation, where high-res features are responsible for precise localization, so in <cit.> the FPN is adopted to improve Faster-RCNN <cit.>. For the reasons above, gating, as a mechanism strengthening decoder features, may not help improve localization. In this case, FADE without gating, denoted by FADE (G=1), would be a better choice. We discuss this further in the experiments on object detection (Section <ref>) and instance segmentation (Section <ref>).

§ APPLICATIONS
Here we demonstrate the applications and the task-agnostic property of FADE on various dense prediction tasks, including semantic segmentation, image matting, object detection, instance segmentation, and monocular depth estimation. In particular, we focus our experiments on segmentation to analyze the upsampling behaviors of FADE from different aspects and design ablation studies to justify our design choices in FADE.

§.§ Semantic Segmentation
Semantic segmentation is region-sensitive. To show that FADE is architecture-independent, SegFormer <cit.> and UPerNet <cit.> are chosen as transformer and convolutional baselines, respectively.

§.§.§ Data Set, Metrics, Baseline, and Protocols
We use the ADE20K dataset <cit.>. It covers 150 fine-grained semantic concepts, including 20,210 training images and 2,000 validation images. In addition to reporting the standard mask IoU (mIoU) <cit.>, we also report the boundary IoU (bIoU) <cit.> to assess the boundary quality. SegFormer-B1 <cit.> is first evaluated. We keep the default model architecture in SegFormer except for modifying the upsampling stages in the MLP head. In particular, feature maps of each scale need to be upsampled to 1/4 of the original image. Therefore, there are 3+2+1=6 upsampling stages in all. All training settings and implementation details are kept the same as in <cit.>. Since SegFormer follows a `fuse-and-concatenate' manner, where the feature maps are all upsampled to the max-resolution one, we verify two styles of upsampling strategies: direct upsampling and 2 by 2 iterative upsampling. We also test the B3, B4, and B5 versions of SegFormer to see if a similar boost can be observed on stronger backbones. In addition, considering that stronger backbones often produce better feature quality, this also allows us to see whether feature upsampling still contributes to improved feature quality on stronger backbones. For UPerNet <cit.>, we use the implementation provided by [https://github.com/open-mmlab/mmsegmentation]. We use the ResNet-50 and ResNet-101 backbones, modify the upsampling operators in the FPN, and train the model with 80 K iterations. The original skip connection is removed due to the inclusion of the gating mechanism.
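Since the gate takes over the role of the skip connection, a minimal sketch of this refinement step (the equation in the previous section) might look as follows. The class and variable names are ours, and the encoder/decoder channel counts are assumed equal, as required by the gate.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedRefinement(nn.Module):
    """Decoder-dependent gate G that blends encoder detail into the upsampled
    feature: refined = encoder * G + upsampled * (1 - G)."""
    def __init__(self, c_de):
        super().__init__()
        self.gate = nn.Conv2d(c_de, 1, kernel_size=1)  # 1x1 conv, then NN interpolation and sigmoid

    def forward(self, x_en, x_up, x_de):
        # x_de: (B, C, H, W) decoder feature; x_en, x_up: (B, C, 2H, 2W)
        g = torch.sigmoid(F.interpolate(self.gate(x_de), scale_factor=2, mode='nearest'))
        return x_en * g + x_up * (1.0 - g)   # fixing g = 1 recovers the plain skip, i.e., FADE (G=1)

Setting the gate to one, as done for the instance-level tasks later, reduces the unit to a direct addition of the encoder feature.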
Because FADE upsamples by ×2 times of the input at once, we use the aligned resizing in inference to match the resolution. Other settings are kept the same. §.§.§ Semantic Segmentation Results Quantitative results of different upsampling operators are reported in Table <ref>. FADE is the best performing operator on both mIoU and bIoU metrics. In particular, it improves over the Bilinear baseline by a large margin, with +2.73 mIoU and +4.85 bIoU. Qualitative results are shown in Figures <ref> and <ref>. FADE generates high-quality predictions both within mask regions and near mask boundaries. Stronger Backbones. We also test stronger backbones on SegFormer, including the B3, B4, and B5 versions. From Table <ref>, when stronger backbones are used, we observe both mIoU and bIoU improve (B1→B3, B3→B4, and B4→B5). However, on B3, B4, and B5, the benefits of FADE are almost invisible in terms of mIoU, which suggests improved feature quality brought by improved backbones have addressed many misclassifications that upsampling can amend, particularly for interior regions. Yet, steady boosts in bIoU (>1) can still be observed. This means improved features only address the boundary errors to a certain degree (cf. bIoU improvements in B1→B3 vs. that in B3→B4), and FADE can still improve feature quality near mask boundaries. Our evaluations connote improved feature upsampling indeed makes a difference, particularly being useful for resource-constrained applications where a model has limited capacity. Upsampling Styles. We also explore two styles of upsampling in SegFormer: direct upsampling and iterative ×2 upsampling. From Table <ref> we can see that iterative upsampling is better than the direct one in performance. Compared with CARAFE, FADE is more sensitive to the upsampling style, which implies the occurrence of features of different scales matters. Applicability to CNN Architecture. We further evaluate FADE on UPerNet. Results are shown in Table <ref>. Compared with Bilinear, FADE boosts around +1% mIoU and outperforms the strong baseline CARAFE with ResNet-50, which confirms the efficacy of FADE for the FPN architecture. On the ResNet-101 backbone, FADE also works, and we observe a even more significant improvement in bIoU, which suggests FADE is good at amending boundary errors. Visualization of Learned Upsampling. We also visualize the learning process of CARAFE and FADE with increased iterations. From Fig. <ref>, we can see that the two upsampling operators have different behaviors: FADE first learns to delineate the outlines of objects and then fills the interior regions, while CARAFE focuses on the interior initially and then spreads outside slowly. We think the reason is that the gating mechanism is relatively simple and learns fast. By the way, one can see that there are checkerboard artifacts in the visualizations of CARAFE (on the leg of the bottom left person) due to the adoption of Pixel Shuffle. Such visualizations suggest that upsampling can significantly affect the quality of features. While there is no principal rule on what could be called `good features', feature visualizations still proffer a good basis of the feature quality, and one at least can sense where is wrong when clear artifacts present in visualizations. §.§ Image Matting Our second task is image matting <cit.>. Image matting is a typical detail-sensitive task. It requires a model to estimate an accurate alpha matte that smoothly splits foreground from background. 
Since ground-truth alpha mattes can exhibit significant differences among local regions, estimations are sensitive to a specific upsampling operator used <cit.>. §.§.§ Data Set, Metrics, Baseline, and Protocols We conduct experiments on the Adobe Image Matting dataset <cit.>, whose training set has 431 unique foreground objects and ground-truth alpha mattes. Following <cit.>, instead of compositing each foreground with fixed 100 background images chosen from MS COCO <cit.>, we randomly choose background images in each iteration and generate composited images on-the-fly. The Composition-1K testing set has 50 unique foreground objects, and each is composited with 20 background images from PASCAL VOC <cit.>. We report the widely used Sum of Absolute Differences (SAD), Mean Squared Error (MSE), Gradient (Grad), and Connectivity (Conn) <cit.>. A2U Matting <cit.> is adopted as the baseline. Following <cit.>, the baseline network adopts a backbone of the first 11 layers of ResNet-34 with in-place activated batchnorm <cit.> and a decoder consisting of a few upsampling stages with shortcut connections. Readers can refer to <cit.> for the detailed architecture. We use max pooling in downsampling stages when applying FADE as the upsampling operator to train the model, and cite the results of other upsampling operators from A2U Matting <cit.>. We strictly follow the training configurations and data augmentation strategies used in <cit.>. §.§.§ Image Matting Results We compare FADE with other state-of-the-art upsampling operators. Quantitative results are also shown in Table <ref>. Akin to segmentation, FADE consistently outperforms other competitors in all metrics, with also few additional parameters. Note that IndexNet and A2U are strong baselines that are delicately designed upsampling operators for image matting. Also the worst performance of CARAFE indicates that upsampling with only decoder features is not sufficient to recover details. Compared with standard bilinear upsampling, FADE invites 16%∼32% relative improvements, which suggests a simple upsampling operator can make a difference. Our community may shift more attention to upsampling. Additionally, it is worth noting that FADE-Lite also outperforms other prior operators, and particularly, surpasses the strong baseline A2U with even less parameters. Qualitative results are shown in Figures <ref> and <ref>. FADE generates high-fidelity alpha mattes. Task-Agnostic Property. By comparing different upsampling operators across both segmentation and matting, FADE is the only operator that exhibits the task-agnostic property. A2U is the previous best operator in matting, but turns out to be the worst one in segmentation. CARAFE is the previous best operator in segmentation, but the worst one in matting. This implies that current dynamic operators still have certain weaknesses to achieve task-agnostic upsampling. In addition, FADE-Lite also exhibits the task-agnostic property (being the consistent second best in both tasks in all metrics), which suggests such a property is insensitive to the number of parameters. §.§ Object Detection The third task is object detection <cit.>. Object detection addresses where and what objects are with category-specific bounding boxes. It is a mainstream dense prediction problem. Addressing `what' is a recognition problem, while addressing `where' requires precise localization in feature pyramids. Upsampling is therefore essential to acquire high-res feature maps. 
§.§.§ Data Set, Metrics, Baseline, and Protocols We use the MS COCO dataset <cit.> and report the standard AP, AP_50, AP_75, AP_S, AP_M, and AP_L. We use Faster R-CNN as the baseline and replace the default NN interpolation with other upsampling operators. We follow the Faster R-CNN implementation provided by [https://github.com/open-mmlab/mmdetection] and only modify the upsampling stages in FPN. Note that, the original skip connection in FPN is removed due to the inclusion of the gating mechanism. All other settings remain unchanged. We evaluate on both ResNet-50 and ResNet-101 backbones. Moreover, since the FPN is used, in addition to the dynamic upsampling operators, we also compare with some feature alignment modules designed for FPN, including the FA^2M used in FSANet <cit.>, the FAM used in SFNet <cit.>, and the GD-FAM used in SFNet-Lite <cit.>. §.§.§ Object Detection Results Quantitative and qualitative results are shown in Table <ref> and Fig. <ref>, respectively. We find that, while FADE still improves detection performance, it is not at a level comparable to CARAFE. However, when setting the gate G=1 in FADE, the performance improves from 37.8 AP to 38.5 AP, approaching to CARAFE. We are interested to know why. After a careful check at the upsampled feature map (Fig. <ref>), we see that the detector favors more detailed upsampled features than blurry ones (CARAFE vs. FADE). Perhaps details in features can benefit precise localization of bounding boxes. In the use of CARAFE, high-res encoder features are directly skipped in the FPN. In contrast, FADE uses a gate to control of pass the encoder features. The resulting features of FADE show that the gate does not work as expected: the decoder features dominate in the output. Why does not the gate work? We believe this can boil down to how the detector is supervised. Since the gate predictor has few parameters, the generated gate is mostly affected by the feature map. In semantic segmentation and image matting where per-pixel ground truths are provided, the features can be updated delicately. Yet, in detection where the ground truth bounding boxes are sparse, the feature learning could be coarse, therefore affecting the prediction of the gate. Fortunately, the gating mechanism works in FADE as a post-processing step and can be disabled when unnecessary. In addition, we observe FADE (G=1) outperforms feature alignment modules, which suggests manipulating kernels seems more effective than manipulating features. A plausible explanation is that, feature alignment needs to correct additional artifacts introduced by naive feature upsampling (NN or bilinear upsampling is typically executed before feature alignment is performed). Moreover, with a stronger backbone ResNet-101, FADE can also boost the performance. This implies that, while a better backbone is often favored, there are still feature issues that cannot be addressed with increased model capacity. In this case, some improved components within the architecture such as improved upsampling may help. §.§ Instance Segmentation The forth task is instance segmentation <cit.>. Instance segmentation is an extended task of semantic segmentation. In addition to labelling object/scene categories, it needs to further discriminate instances of the same category. It can also be considered a region-sensitive task. §.§.§ Data Set, Metrics, Baseline, and Protocols Akin to object detection, we use the MS COCO dataset <cit.> for instance segmentation and report box AP, mask AP, and boundary AP. 
Following <cit.>, we select Mask R-CNN as our baseline and only replace the default NN interpolation with other upsampling operators in the FPN. Since the gate in FADE would reduce to the skip connection when G=1 according to Eq. (<ref>), the original skip connection in FPN is removed. We also follow the Mask R-CNN implementation provided by and the training setting used in <cit.>. We test on both ResNet-50 and ResNet-101 backbones. In addition, we also compare against the feature alignment modules as in detection, because Mask R-CNN uses the FPN as well. §.§.§ Instance Segmentation Results Quantitative and qualitative results are shown in Table <ref> and Fig. <ref>, respectively. We have similar observations to object detection: i) the standard implementation of FADE only shows marginal improvements; ii) FADE without gating works better than FADE and is on par with CARAFE. Compared with other tasks, all upsampling operators have limited improvements (<1) in terms of mask AP. A reason may be the limited output resolution (28×28) of the mask head. In this case, the benefits of improved boundary delineation of upsampling may not be revealed, which can also be observed from the marginal improvements on the boundary AP. Indeed the more significant relative improvements on box AP than mask AP indicate that the improved mask AP could be mostly due to the improved detection performance. Nevertheless, FADE without gating could still be a preferable choice if taking its task-agnostic property into account. With a stronger backbone ResNet-101, FADE invites an improvement of 0.6 box AP and 0.4 mask AP, which provides a similar boost as ResNet-50. Compared with feature alignment modules, dynamic upsampling operators generally work better. From the visualizations of feature maps in Fig. <ref>, one can see that, despite being empirical, the quality of the feature maps generally seems an good indicator of final performance: feature maps more resembling to the ground truth at the relatively low resolution (the second row) generally have better performance (cf. the feature maps of NN and A2U). §.§ Monocular Depth Estimation Our final task is monocular depth estimation <cit.>. This task aims to infer the depth from a single image. Compared with other tasks, depth estimation is a mixture of region- and detail-sensitive dense predictions. In a local region, depth values could remain constant (an object plane parallel to the image plane), could be gradually varied (an object plane oblique to the image plane), or could be suddenly changed (on the boundary between different depth planes). The recovery of details in depth estimation is also critical for human perception, because boundary artifacts can be easily perceived by human eyes in many depth-related applications such as 3D ken burns <cit.> and bokeh rendering <cit.>. §.§.§ Data Set, Metrics, Baseline, and Protocols We use the NYU Depth V2 <cit.> dataset and standard depth metrics used by previous work to evaluate the performance, including root mean squared error (RMS) and its log version (RMS (log)), absolute relative error (Abs Rel), squared relative error (Sq Rel), average log_10 error (log10), and the accuracy with threshold thr (δ<thr). Readers can refer to <cit.> for definitions of the metrics. We use BTS[https://github.com/cleinc/bts] as our baseline and modify all the upsampling stages except for the last one, because there is no guiding feature map at the last stage. 
We follow the default training setting provided by the authors but set the batch size as 4 in our experiments (due to limited computational budgets). §.§.§ Monocular Depth Estimation Results Quantitative and qualitative results are shown in Table <ref> and Fig. <ref>, respectively. Note that FADE requires more number of parameters in this task. The reason is that the number of channels in encoder and decoder features are different, and we need a few 1×1 convolutions to adjust the channel number for the gating mechanism. Overall, FADE reports consistently better performance in all metrics than other competitors, and FADE-Lite is also the steady second best. It is worth noting that A2U degrades the performance, which suggests only improving detail delineation is not sufficient for depth estimation. FADE, however, fuses the benefits of both detail- and region-aware upsampling capable of simultaneous detail delineation and regional preservation. We believe this is the reason why FADE behaves remarkably on this task. §.§ Ablation Study Here we conduct ablation studies to justify our three design choices. We follow the settings in segmentation and matting, because they are sufficiently representative to indicate region- and detail-sensitive tasks. In particular, we explore how performance is affected by the source of features, the way for upsampling kernel generation, and the use of the gating mechanism. We build six baselines: 1) b1: encoder-only. Only encoder features go through 1×1 convolution for channel compression (64 channels), followed by 3×3 convolution layer for kernel generation; 2) b2: decoder-only. This is the CARAFE baseline <cit.>. Only decoder features go through the 1×1 and 3×3 convolution for kernel generation, followed by Pixel Shuffle; 3) b3: encoder-decoder-naive. NN-interpolated decoder features are first concatenated with encoder features, and then the same two convolutional layers are applied; 4) b4: encoder-decoder-semi-shift. Instead of using NN interpolation and standard convolutional layers, we use semi-shift convolution to generate kernels as in FADE; 5) b5: b4 with skipping. We directly skip the encoder features as in feature pyramid networks <cit.>; 6) b6: b4 with gating. The full implementation of FADE. Results are shown in Table <ref>. By comparing b1, b2, and b3, the results confirm the importance of both encoder and decoder features for upsampling kernel generation. By comparing b3 and b4, semi-shift convolution is superior than naive implementation in the way of generating upsampling kernels. As aforementioned, the rationale behind such a superiority can boil down to the granular control on the per-point contribution in the kernel (Section <ref>). We also note that, even without gating, the performance of FADE already surpasses other upsampling operators (b4 vs. Table <ref>), which means the task-agnostic property is mainly due to the joint use of encoder and decoder features and the semi-shift convolution. In addition, skipping in these two task is clearly not the optimal way to move encoder details to decoder features, at least worse than the gating mechanism (b5 vs. b6). Hence, we think gating is generally beneficial. §.§ Limitations and Further Discussions Computational Overhead. Despite FADE outperforms CARAFE in 4 out of 6 tasks, FADE processes 5 times data more than CARAFE and thus consumes more FLOPs due to the involvement of high-res encoder features. 
Our efficient implementations do not change this fact but only help prevent extra calculations on interpolated decoder features. A thorough comparison of the computational complexity and inference time of different dynamic upsampling operators can be found in Appendix <ref>. Prerequisite of Using FADE. The use of the gating mechanism in FADE requires an equal number of channels of encoder and decoder features. Therefore, if the channel number differs, one needs to add a 1×1 convolution layer to align the channel number. However, this would introduce additional parameters, for example depth estimation with BTS. If the gate is not used, i.e., FADE (G=1), this trouble could be saved. In addition, if there is no high-res feature guidance, for instance, the last upsampling stage in BTS or in image super-resolution tasks, FADE cannot be applied as well. When to Use the Gating Mechanism. At our initial design <cit.>, we mainly consider the one-class-one-value mapping as in semantic segmentation or regressing a dense 2D map as in image matting, but do not explore instance-level tasks like object detection and instance segmentation, where the situation differs from what we initially claim. We find that the high-res encoder feature plays an important role in localization. If forcing the feature map to be alike to that in semantic segmentation, the model cannot learn instance-aware information effectively. In this case the gating mechanism can fail, and we propose to use direct addition (G=1) as a substitution. One should also be aware that, semi-shift convolution can introduce encoder noise in the generated kernel such that the precise localization of bounding box could be affected (the obviously lower AP_75 of FADE than CARAFE in object detection and instance segmentation). General Value of Upsampling to Dense Prediction. As closing remarks, here we tend to share our insights on the general value of upsampling to dense prediction. Compared with other operators or modules studied in dense prediction models, upsampling operators have received less attention. While we have conducted extensive experiments to demonstrate the effectiveness of upsampling, one may still raise the question: Is upsampling an intrinsic factor to influence the dense prediction performance? Indeed current mainstream ideas are to scale the model <cit.>, and results from Table <ref> also indicate that, under a certain evaluation metric, a strong backbone with a simple bilinear upsampling is sufficient. Yet, we remark that, if one keep pursuing the increment of a certain metric in a specific task, e.g., mIoU in semantic segmentation, some other important things would be overlooked such as the boundary quality. From also Table <ref>, we can observe that enhanced upsampling steadily boosts the bIoU metric. This is only in segmentation. From a broad view across different dense prediction tasks, the value of upsampling can even be greater, particularly for low-level tasks. For instance, it has been reported that, with learned upsampling, the Deep Image Prior model can use 95% fewer parameters to achieve superior denoising results than existing methods <cit.>. Our previous experience in matting also suggests inappropriate upsampling even cannot produce a reasonable alpha prediction <cit.>. From the perspective of architecture design, different operators or modules function differently, but their ultimate goal is alike, i.e., learning high-quality features. 
If enabling an upsampling operator that has a high probability of being used in an encoder-decoder architecture to have equivalent or even better functions implemented by other optional modules, the architecture design could be simplified. Task-agnostic upsampling at least demonstrates such a potential. Indeed upsampling matters. We believe the value of upsampling is not only about improved performance but also about the design of new, effective, efficient, and generic encoder-decoder architectures. Another closely-related question is that: Does one still need new fundamental (upsampling) operators, particularly in the era of vision foundation models <cit.> when the idea of scaling typically wins? Indeed current foundation models are made of standard operators such as convolutional layers <cit.> and self-attention blocks <cit.>. The classic U-Net architecture <cit.> is also used in StableDiffusion <cit.>. The adoption of sophisticated operators or architectures seem unnecessary if the model capacity reaches to a certain level. Yet, we note a phenomenon that the SAM model <cit.> still cannot generate accurate mask boundaries. We believe one of the reasons is that it still uses the deconvolution upsampling in the decoder, which smoothes boundaries. Hence, we think designing fundamental and task-agnostic network operators would remain to be an active research area. Here we make a tentative prediction: a real sense of the vision foundation model should be made of task-agnostic operators. We expect this work can inspire the new design of such operators. § CONCLUSION In this paper, we provide feature upsampling with three levels of meanings: i) being basic, the ability to increase spatial resolution; ii) being effective, the capability of improving performance; and iii) being task-agnostic, the generality across tasks. In particular, to achieve the third property, we propose FADE, a novel, plug-and-play, and task-agnostic upsampling operator by fully fusing the assets of encoder and decoder features. For the first time, FADE demonstrates that task-agnostic upsampling is made possible across both region- and detail-sensitive dense prediction tasks, outperforming or at least being comparable with the previous best upsampling operators. We explain the rationale of our design with step-to-step analyses and also share our view points from considering what makes for generic feature upsampling. Our core insight is that an upsampling operator should be able to dynamically trade off between detail delineation and semantic preservation in a content-aware manner. We encourage others to try this operator on many more dense prediction tasks, particularly on low-level tasks such as image restoration. So far, FADE is designed to maintain the simplicity by only implementing linear upsampling, which leaves ample room for further improvement, e.g., by exploring additional nonlinearity. Funding This work is supported by the National Natural Science Foundation of China Under Grant No. 62106080 and the Hubei Provincial Natural Science Foundation of China under Grant No. 2024AFB566. § COMPARISON OF COMPUTATIONAL COMPLEXITY A favorable upsampling operator, being part of overall network architecture, should not significantly increase the computation cost. This issue is not well addressed in IndexNet as it introduces many parameters and much computational overhead <cit.>. In this part we analyze the computational workload and memory occupation among different dynamic upsampling operators. 
We first compare the FLOPs and number of parameters in Table <ref>. FADE requires more FLOPs than CARAFE (note that FADE processes 5 times more feature data than CARAFE), but fewer parameters when the number of channels is small. For example, when C=256, d=64, K=5, and H=W=112, CARAFE and FADE cost 2.50 and 4.56 GFLOPs, respectively; the numbers of parameters are 74 K and 47 K, respectively. FADE-Lite, in the same setting, costs only 1.53 GFLOPs and 13 K parameters. In addition, we also test the inference speed by upsampling a random feature map of size 256×120×120 (a guiding map of size 256×240×240 is used if required). The inference time is shown in Table <ref>. Among the compared dynamic upsampling operators, FADE and FADE-Lite are relatively efficient given that they process five times more data than CARAFE. We also test the practical memory occupation of FADE on SegFormer-B1 <cit.>, with 6 upsampling stages. Under the default training setting, SegFormer-B1 with bilinear upsampling costs 22,157 MB of GPU memory. With the H2L implementation of FADE, it consumes 24,879 MB, i.e., 2,722 MB more than the original. The L2H implementation reduces this memory overhead by 24.2% (from 2,722 to 2,064 MB), which is within an acceptable range compared with the decoder-only upsampling operator CARAFE (664 MB), considering the five times more data being processed.
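For readers who want to reproduce this kind of timing test, a minimal PyTorch sketch is given below. It times a stand-in upsampler (plain bilinear interpolation, since the FADE implementation itself is not reproduced here) on a random 256×120×120 feature map, following the protocol described above; the warm-up and repeat counts are our own choices, not values from the paper.

```python
import time
import torch
import torch.nn.functional as F

def time_upsampler(upsample_fn, feat, warmup=10, repeats=100):
    """Average wall-clock time (ms) of one upsampling call on `feat`."""
    for _ in range(warmup):                    # warm-up to exclude one-off costs
        upsample_fn(feat)
    if feat.device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        upsample_fn(feat)
    if feat.device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats * 1e3

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(1, 256, 120, 120, device=device)   # decoder feature map
    bilinear = lambda t: F.interpolate(t, scale_factor=2, mode="bilinear",
                                       align_corners=False)
    print(f"bilinear x2: {time_upsampler(bilinear, x):.3f} ms")
```

Any learned operator (CARAFE, FADE, FADE-Lite, etc.) can be swapped in for `bilinear` to obtain comparable numbers on the same hardware.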
http://arxiv.org/abs/2407.13026v1
20240717212819
Strichartz estimates for the Schrödinger equation on compact manifolds with nonpositive sectional curvature
[ "Xiaoqi Huang", "Christopher D. Sogge" ]
math.AP
[ "math.AP", "math.CA", "math.DG", "58J50, 35P15" ]
http://arxiv.org/abs/2407.11925v1
20240716171533
Calibration and simulation of ionization signal and electronics noise in the ICARUS liquid argon time projection chamber
[ "ICARUS collaboration", "P. Abratenko", "N. Abrego-Martinez", "A. Aduszkiewicz", "F. Akbar", "L. Aliaga Soplin", "M. Artero Pons", "J. Asaadi", "W. F. Badgett", "B. Baibussinov", "B. Behera", "V. Bellini", "R. Benocci", "S. Berkman", "S. Bertolucci", "M. Betancourt", "M. Bonesini", "T. Boone", "B. Bottino", "A. Braggiotti", "D. Brailsford", "S. J. Brice", "V. Brio", "C. Brizzolari", "H. S. Budd A. Campani", "A. Campos", "D. Carber", "M. Carneiro", "I. Caro Terrazas", "H. Carranza", "F. Castillo Fernandez", "A. Castro", "S. Centro", "G. Cerati", "A. Chatterjee", "D. Cherdack", "S. Cherubini", "N. Chitirasreemadam", "M. Cicerchia", "T. Coan", "A. Cocco", "M. R. Convery", "L. Cooper-Troendle", "S. Copello", "A. A. Dange", "A. De Roeck", "L. Di Noto", "C. Di Stefano", "D. DiFerdinando", "M. Diwan", "S. Dolan", "L. Domine", "S. Donati", "F. Drielsma", "J. Dyer", "S. Dytman", "A. Falcone", "C. Farnese", "A. Fava", "A. Ferrari", "N. Gallice", "F. G. Garcia", "C. Gatto", "D. Gibin", "A. Gioiosa", "W. Gu", "A. Guglielmi", "G. Gurung", "H. Hausner", "A. Heggestuen", "B. Howard", "R. Howell", "I. Ingratta", "C. James", "W. Jang", "Y. -J. Jwa", "L. Kashur", "W. Ketchum", "J. S. Kim", "D. -H. Koh", "J. Larkin", "Y. Li", "C. Mariani", "C. M. Marshall", "S. Martynenko", "N. Mauri", "K. S. McFarland", "D. P. Méndez1 A. Menegolli", "G. Meng", "O. G. Miranda", "A. Mogan", "N. Moggi", "E. Montagna", "C. Montanari", "A. Montanari", "M. Mooney", "G. Moreno-Granados", "J. Mueller", "M. Murphy", "D. Naples", "V. C. L. Nguyen", "S. Palestini", "M. Pallavicini", "V. Paolone", "R. Papaleo", "L. Pasqualini", "L. Patrizii", "L. Paudel", "G. Petrillo", "C. Petta", "V. Pia", "F. Pietropaolo", "F. Poppi", "M. Pozzato", "G. Putnam", "X. Qian", "A. Rappoldi", "G. L. Raselli", "S. Repetto", "F. Resnati", "A. M. Ricci", "G. Riccobene", "E. Richards", "M. Rosenberg", "M. Rossella", "P. Roy", "C. Rubbia", "M. Saad", "S. Saha", "P. Sala", "S. Samanta", "P. Sapienza", "A. Scaramelli", "A. Scarpelli", "D. Schmitz", "A. Schukraft", "D. Senadheera", "S-H. Seo", "F. Sergiampietri", "G. Sirri", "J. S. Smedley", "J. Smith", "L. Stanco", "J. Stewart", "H. A. Tanaka", "F. Tapia", "M. Tenti", "K. Terao", "F. Terranova", "V. Togo", "D. Torretta", "M. Torti", "F. Tortorici", "R. Triozzi", "Y. -T. Tsai", "S. Tufanli", "T. Usher", "F. Varanini", "S. Ventura", "M. Vicenzi", "C. Vignoli", "B. Viren", "Z. Williams", "R. J. Wilson", "P. Wilson", "J. Wolfs", "T. Wongjirad", "A. Wood", "E. Worcester", "M. Worcester", "M. Wospakrik", "H. Yu", "J. Yu", "A. Zani", "J. Zennamo", "J. C. Zettlemoyer", "C. Zhang", "S. Zucchelli" ]
hep-ex
[ "hep-ex", "physics.ins-det" ]
§ INTRODUCTION Liquid argon time projection chamber (LArTPC) detectors track particle trajectories with high spatial resolution and precise calorimetry by imaging ionization electrons from charged particle tracks and showers. Ionization charge is drifted by a large electric field to multiple planes of readout wires which detect the charge as induced currents on each wire. The ICARUS LArTPC neutrino detector, after a previous run at Gran Sasso <cit.> and subsequent refurbishment, has been installed at Fermilab since 2020 <cit.>. It has been taking physics data since 2022 as part of the Short-Baseline Neutrino (SBN) Program <cit.>. ICARUS sits at the intersection of two neutrino beams; it is on-axis to the Booster Neutrino Beam (BNB) <cit.> and is 5.7^∘ off-axis to the Neutrinos at the Main Injector (NuMI) beam <cit.>. This paper addresses the calibration and simulation of electronic noise and charge signals in the ICARUS time projection chamber at Fermilab <cit.>. The calibration reported here addresses the data taken in the first two ICARUS physics data collection periods: Run 1, from June 9th to July 9th 2022, and Run 2, from December 20th 2022 to July 14th 2023. ICARUS is a 760 t liquid argon detector consisting of two LArTPC modules. Each module is a cryostat with dimensions 3.6 m × 3.9 m × 19.6 m. Both cryostats contain two TPCs divided by a central cathode plane. Each TPC has an active volume of 1.5 m × 3.16 m × 17.95 m. The TPCs are all operated at a drift electric field of about 500 V/cm. They all have three planes of charge-sensing wires: an unshielded front induction plane, a middle induction plane, and a collection plane. The wires on the front induction plane are oriented along the horizontal (beam) direction, and the wires on the middle induction and collection planes are oriented at ±60^∘ to the horizontal direction, depending on the TPC. The wires on each plane are spaced 3 mm apart and the wire planes are spaced 3 mm from each other. The front induction wire plane is split in two at the center of the TPC by a mechanical support. In the nominal configuration, the wire bias is -250 V on the front induction plane, -30 V on the middle induction plane, and 250 V on the collection plane. A diagram of the layout of the four ICARUS TPCs is shown in figure <ref>. Each TPC wire is instrumented to digitize the charge signals while maximizing the signal-to-noise ratio <cit.>. Signals are run through a signal processing chain which subtracts noise that is coherent across wires in the same readout board and deconvolves the signal to provide a Gaussian shape with further reduced noise <cit.>. These signals provide the input to reconstruction algorithms which group together hits into tracks (from muons, protons, or other charged hadrons) and electromagnetic showers (from electrons or photons) for use in analysis. The reconstruction applied here is supplied by the Pandora framework <cit.>, optimized for the ICARUS detector. The detection of charge in the ICARUS detector is not perfectly uniform in space and time. Effects such as argon impurities <cit.> and space charge effects <cit.> can perturb the amount of charge measured across the detector.
Furthermore, we have observed a non-uniform in-transparency on the middle induction plane across the ICARUS TPCs that perturbs the charge response on all three planes. We have developed a procedure to calibrate these effects so that the non-uniformity they induce can be removed from the data. This procedure relies on the copious number of cosmic ray muons available to ICARUS, which operates under only 10m.w.e. of concrete overburden. The procedures we have developed leverage many of the ideas first developed by the MicroBooNE surface LArTPC experiment <cit.>, applied to the specific conditions observed in ICARUS. The simulation of TPC signals in ICARUS is organized in the LArSoft framework <cit.>. A Geant4 simulation <cit.> tuned for use in argon propagates the trajectories of particles in the detector and computes their ionization and scintillation depositions in the active volume. The Wire-Cell package <cit.>, with these depositions as input, drifts the ionization charge to the wire planes, including parameterized effects from attenuation due to argon impurities and ionization diffusion. Wire-Cell simulates the signal of ionization electrons on each wire plane by applying a field response computation from the GARFIELD program <cit.> with the nominal ICARUS wire plane configuration as input. The field response computation is two-dimensional (2D): it includes the dependence of the induced current on both the drift time of the ionization and the pitch of the ionization in the direction perpendicular to each wire. This accounts for long-range induced currents which are important for accurately modeling the charge signal. We have measured the characteristics of the ICARUS TPC noise and signal for use in the simulation. Electronic noise observed in the detector includes sources intrinsic to each readout channel and sources which are common to readout channels sharing electronics. Updating the simulation with a data-driven electronic noise model is an important first step in carrying out tuning of the ionization signal response given that noise can lead to “smearing” in the estimated signal response shape, necessitating use of an accurate Monte Carlo simulation to account for this effect. The signal shapes we observe in the detector depart from the nominal prediction made by Wire-Cell and GARFIELD. This departure is significant, although it is not drastically different from the level of disagreement observed by prior experiments applying the same (2D) simulation <cit.>. It is critical to precisely simulate the signal shapes in LArTPCs in order to accurately characterize the performance of signal processing and its impact on physics analysis. The leading systematic uncertainty associated with detector performance in prior LArTPC experiments has been the TPC signal shape (see, e.g., Ref. <cit.>). We have developed a novel approach to tune the underlying field responses input to Wire-Cell which match the simulated signal shapes precisely to what is measured in the detector. This paper is organized as follows. In section <ref>, we describe the procedure used to remove non-uniformities in the ionization charge response of the ICARUS TPCs and demonstrate its impact on the charge resolution of the detector. In section <ref>, we detail the measurement of electronics noise in the ICARUS TPCs. In section <ref>, we describe the measurement of TPC signal shapes in ICARUS data, as well as the novel procedure we have developed to tune simulation to match the data. 
Section <ref> shows the comparison of charge resolution performance between ICARUS Monte Carlo simulation and data after the calibration techniques described here are used in improvements to the simulation. Finally, section <ref> concludes the paper. § CHARGE SCALE EQUALIZATION The goal of the charge scale equalization procedure is to make the TPC response to charge uniform in space and time. This is expressed in terms of the charge per length, or dQ/dx, of hits along particle tracks and showers. This quantity is used to compute energy loss (dE/dx) after correcting for electron-ion recombination <cit.>. As is detailed below (section <ref>), a number of effects perturb the charge response in ICARUS. To account for these effects, we have elected to equalize the charge response in three steps: an equalization in the drift direction (section <ref>), an equalization in the two wire plane directions, ŷ and ẑ (section <ref>), and a final TPC equalization (section <ref>). The performance of charge reconstruction in ICARUS after these equalization steps is shown in section <ref>. As a surface detector, ICARUS has access to a copious number of cosmic muon tracks for use in these calibrations. Most muon tracks pass through the detector as nearly minimum-ionizing particles (MIPs). We use a selection of cosmic muons to do these calibrations. The muon tracks are required to cross the cathode. For such tracks, matching the energy depositions in both TPCs on either side of the central cathode enables the identification of the arrival time (t_0) of the track. Knowledge of this time is needed to properly compute and apply the drift time correction. §.§ Effects Leading to Non-Uniformities in Charge Scale §.§.§ Argon Impurities As the ionization cloud from a particle deposition drifts to the wire plane, impurities in the argon (primarily O_2 and H_2O <cit.>) absorb electrons. The attenuation is exponential and can be described by an electron lifetime, which is the mean time an electron will survive in the argon before it is absorbed. The electron lifetime in ICARUS ranged from 3-8 ms over the dataset considered here, which corresponds to a ∼5-15% average attenuation in the charge signal across the ∼1 ms ICARUS drift time. §.§.§ Drift Field Distortions The drift electric field in ICARUS is not perfectly uniform. While it is very stable across time, a few effects perturb its value spatially across the detector. The constant rate of cosmic muons ionizing the argon produces a build-up of positive argon ions, or space-charge, that affects the electric field <cit.>. In addition, the cathode plane in ICARUS is not perfectly flat. This is an effect that was previously observed during the ICARUS run at Gran Sasso <cit.>. It is still present in the refurbished ICARUS installation at Fermilab at a much reduced magnitude. The biggest bending is in the East Cryostat, where the cathode is shifted by up to 1.5 cm. This perturbs the electric field by a few percent, especially close to the cathode. Finally, there is a failure in the field cage in TPC EE that distorts the drift electric field in that TPC. The drift field distortions can affect the charge scale in two ways. First, changes to the drift field affect the quantity of electric charge that recombines with argon ions at the point of ionization. Second, distortions to the drift field can deflect ionization tracks and therefore bias the reconstruction of the track pitch – the dx in dQ/dx. At this time, we have not specifically calibrated the impact of drift field distortions.
We have measured the broad magnitude of the distortions and found them to be small – distortions to the drift field of at most a couple percent which deflect tracks by at most a couple centimeters. The charge scale calibrations here should be understood as folding in the (small) impact of drift field distortions. §.§.§ Diffusion Diffusion in the two directions transverse to the drift field direction has been shown to impact the measured charge scale from cosmic muons <cit.>. This effect is due to a dependence of the Landau-Vavilov <cit.> distribution of energy loss from muons on the magnitude of diffusion. The shape of the Landau-Vavilov distribution depends on the length of the segment of the muon track observed by the individual readout wire. In particular, as this segment length increases, the width of the distribution narrows and the location of the peak (which is typically used as the observable of the distribution <cit.>) rises, approaching the mean energy loss. Transverse diffusion smears the energy depositions observed by each wire, and thus broadens the length of the muon track segment observed by each wire. In the presence of diffusion, this length, ℒ, is given by ℒ(t_drift, γ) = (p/cosγ) exp( -∫ (dx/p) w[σ_T(t_drift), x] log w[σ_T(t_drift), x] ), σ_T(t_drift) = √(2 D_T t_drift), w(σ_T, x) = ∫_-p/2^p/2 dx'/(σ_T √(2π)) e^{-(x-x')^2/(2σ_T^2)}, where γ is a track angle (see figure <ref>), t_drift is the drift time, p is the wire pitch (3 mm in ICARUS), σ_T is the transverse smearing width, and D_T is the transverse diffusion constant (which has been estimated to be around 5-12 cm^2/s <cit.>). In the limit of no diffusion, ℒ approaches the track pitch (p/cosγ). At the maximum ICARUS drift time (∼1 ms), the transverse smearing width is ∼1.0-1.5 mm, on the order of the wire pitch. Therefore, transverse diffusion does not affect the detector response to charge (except perhaps indirectly through any impact from the broadening of the charge signal), but rather makes the “standard-candle” used to equalize the charge scale – cosmic muons – not truly standard in the drift direction. As a result, using cosmic muon depositions to equalize the charge scale produces a biased result, since such a procedure applies a non-uniform dE/dx distribution. The impact of diffusion can be mitigated by summing together adjacent hits on a cosmic muon track into a “coarse-grained" dQ/dx <cit.>. For example, summing together 10 hits obtains a dQ/dx with an effective spacing of 10 wires, or 3 cm, much larger than the smearing width of transverse diffusion. The coarse-graining method also allows us to study the impact of diffusion in data. The drift direction profile of the “coarse-grained" dQ/dx is impacted exactly the same by detector non-uniformities (mostly argon impurities and drift field distortions) as a “wire-by-wire" dQ/dx, but the two observables have different underlying dE/dx distributions which are impacted differently by diffusion. This is demonstrated in ICARUS data in figure <ref>. Both the coarse-grained and wire-by-wire dQ/dx attenuate across the drift direction, since the biggest effect is from argon impurities. However, the two values also get closer at larger drift times. The coarse-grained measurement has a longer track sensitive length that is constant across the detector, and thus a larger dE/dx peak value. The wire-by-wire measurement has a smaller track sensitive length that increases with increasing transverse diffusion across the drift direction.
Therefore, the wire-by-wire peak dE/dx is smaller but approaches the coarse-grained dE/dx peak at large drift times. The magnitude of the effect is hard to predict because it depends on the unknown momentum of the through-going muons used in the measurement. Its direction, though, is consistent with expectation. This is the first confirmation in data of the impact of transverse diffusion on the muon charge scale, which has previously only been predicted from simulation. It validates the approach we have taken in ICARUS towards drift direction charge equalization, as detailed in section <ref>. §.§.§ Induction Wire Plane Intransparency The induction wire planes in ICARUS (primarily the middle induction plane) absorb charge in a position dependent way across the detector. This effect induces significant variations (∼ 20%) in the charge response on all three readout wire planes. On the collection plane, it directly reduces the amount of visible charge. On the induction planes, the unipolar collection signal from absorbing charge competes with the bipolar induction signal from non-absorbing charge. A GARFIELD simulation <cit.> of the nominal ICARUS wire plane configuration predicts that the induction planes absorb 7% of the charge. Thus, to explain the observed variations, the wire plane configuration of ICARUS must depart from the design specification in a position dependent way across the detector. We have checked all components of the wire bias outside of the cryostat and have found only a couple of discrete failures, which have been correlated to features in the non-uniformity but do not explain all of them. Inside the cryostat, some departure of either the wire bias or the inter-plane wire spacing from the nominal configuration must conspire to produce the spatial variations observed in ICARUS. We have investigated the possibility of changing the ICARUS detector configuration to mitigate the impact of this effect. The supplied wire bias cannot be turned any higher due to the rating of cables carrying the bias inside the cryostat. We tested operating at a reduced drift electric field of 350 V/cm (which increases the relative effect of the wire bias) in Summer 2022. We found that the increased effect of recombination and larger attenuation from argon impurities from the longer drift time reduced the signal-to-noise in ICARUS by too much to be feasible, especially on the induction planes. Although significant, the variation in the induction wire plane intransparency has been found to be very stable across time. Thus, we can calibrate out the effect using the very large sample of cosmic muons available to ICARUS. §.§.§ Channel Gain Variation Ionization signals in ICARUS on charge-sensing wires are transmitted via cables through a set of feed-through flanges. Each flange connects a group of 64 channels to a readout board, which amplifies and digitizes the signal on each channel. The process of signal transmittance to the front-end, amplification, and digitization may lead to channel-to-channel variations in gain and electronics response. To characterize this variation, a study was performed using the injection of test pulses at distinct points in the electronics chain. Methodology The ICARUS electronics chain and the test pulse injection points are shown schematically in Figure <ref>. The electronics can be pulsed with either an external or an internal signal.
The external pulse (EP) method allows for arbitrary signals to be propagated to the end of the wires through a simple connection available on the external side of the feed-through flange. The signal is transmitted internally via coaxial cables that connect through a capacitor to the wires. Only wires which terminate on the bottom-edge of the detector are instrumented in this way, meaning that most of the middle induction and collection wires and none of the front induction wires can be characterized with this method. The internal pulse (IP) method allows for a 2 square wave with configurable amplitude to be injected through a capacitor onto the pre-amplifier of either odd or even channels. This is a feature that is integrated into the readout boards and is configurable in software during data acquisition. The method allows for the simultaneous pulsing of channels across the entire detector with no hardware changes required, but allows less freedom in the test pulse signal parameters. Neither method is individually capable of providing a complete characterization of the full electronics chain due to limitations in precision, ease of configuration, channel coverage, and the portion of the electronics chain probed. However, by comparing results obtained from both methods it is possible to make quantitative conclusions. Consequently, a data-taking campaign was performed to produce a dataset for each of these two methods. Both methods of test pulse injection were performed with a 2 square wave using amplitudes large enough to span most of the range of the 12-bit ADC. The waveforms collected with each method are averaged over many readouts. The inherent noise in the system sums to zero on average, whereas the signal adds coherently. The result is a 500 average pulse waveform per channel as shown in figure <ref>. Note that although both signals have a similar amplitude, they are not identical and all subsequent comparisons are normalized to account for this. All pulses were then fit with a function resulting from a sum of several orders of Bessel functions of the first kind, f(x, a⃗) = ∑_β=0^5 a_2β+1 J_β(x - a_2β), where β is the order of the Bessel function J_β, a_2β is the corresponding shift, and a_2β+1 is the corresponding amplitude. This functional form was chosen primarily to empirically characterize the shape of the pulse; the fit parameters themselves do not have a physical interpretation. The shape is impacted both by the intrinsic electronics response of the ICARUS electronics and by the width of the injected pulse. The fit result is used to calculate the peak height, full width at half maximum (FWHM), and the integral. The continuous nature of the fit result mitigates the bias associated with the discrete nature of the signal. Results Both methods of pulse injection show variations in the resulting pulse integral that are not well correlated with each other. This is demonstrated in figure <ref> which shows the distribution of the measured pulse integral for the internal method of pulse injection and the relative bias between the two methods. Both methods of pulse injection show a higher degree of uniformity across the detector than the relative bias between the two methods, as measured by the widths of each distribution. This motivates the decision to not apply a channel-dependent gain calibration based on the observed pulse integral. Instead, this comparison can be used to bound the amount of channel-to-channel gain variation as less than the width of the relative difference between the methods, 3.9%, as shown in the right plot of figure <ref>.
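For concreteness, the empirical pulse-shape fit described above, a sum of Bessel functions of the first kind of orders 0-5 with one shift and one amplitude per order, can be prototyped as follows; the time axis, the synthetic test pulse, and the initial guesses are illustrative assumptions rather than ICARUS values.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import jv

def pulse_model(x, *a):
    """Sum of first-kind Bessel functions J_0..J_5: each order b has a shift
    a[2*b] and an amplitude a[2*b+1], following the functional form in the text."""
    out = np.zeros_like(x, dtype=float)
    for b in range(6):
        out += a[2 * b + 1] * jv(b, x - a[2 * b])
    return out

# Illustrative "averaged pulse": time axis in ticks, toy waveform (not real data).
x = np.linspace(0.0, 40.0, 500)
true_params = [5, 120, 6, 40, 7, 20, 8, 10, 9, 5, 10, 2]
y = pulse_model(x, *true_params) + np.random.normal(0.0, 1.0, x.size)

# Initial guesses matter for a 12-parameter fit; here we start near the truth.
p0 = [5, 100, 6, 30, 7, 15, 8, 8, 9, 4, 10, 1]
popt, _ = curve_fit(pulse_model, x, y, p0=p0)

fit = pulse_model(x, *popt)
peak = fit.max()
integral = np.trapz(fit, x)
above_half = np.where(fit >= 0.5 * peak)[0]
fwhm = x[above_half[-1]] - x[above_half[0]]     # full width at half maximum
print(f"peak={peak:.1f} ADC, FWHM={fwhm:.2f} ticks, integral={integral:.1f}")
```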
In addition, there is a small systematic decrease of about 5% in the pulse integral on the front induction plane as shown in Figure <ref>. This correlates with the increasing channel capacitance from the longer cable length needed to reach wires lower down in the detector. Due to the imperfections in the pulsing methods, we elect to use only muon ionization signals to calibrate non-uniformities along each wire plane. This method, described in section <ref>, is able to calibrate out all non-uniformities with 10 cm × 10 cm bins. This spatial resolution is adequate to correct for the coarse non-uniformities in the gain observed by this study. A variation in the pulse width of about 2% or less is observed across all channels in the detector using both pulses injected on the ends of the wires and internally in the front-end. This is shown for both methods in Figure <ref>. This variation appears to be driven by a trend in the pulse width as a function of the channel's position amongst the 64 channels on the readout board. The size of this variation is negligible relative to other uniformity calibrations, so no correction is applied. §.§ Drift Direction Equalization The drift direction equalization step corrects charge reconstruction for effects that vary with the ionization drift time. The primary such effect is attenuation from argon impurities. Since the electron lifetime varies across the ICARUS dataset, this equalization is done per DAQ run. One DAQ run in ICARUS lasts from a few hours to a few days, and the electron lifetime does not vary significantly over such a period. The calibration is done with depositions from anode-cathode crossing tracks. The cathode crossing identification is done by matching a pair of aligned tracks in the two TPCs on either side of the central cathode plane. A cut on the drift direction length of the track in either TPC ensures that it also crosses the anode in that TPC. As described in section <ref>, to mitigate the impact of diffusion, the charge is summed in groups of 10 wires into a “coarse-grained" dQ/dx. The coarse-grained depositions are grouped by drift time and are fit with a Landau distribution convolved with a Gaussian distribution to extract the most-probable-value (MPV) of the distribution. The MPV as a function of drift time is fit to an exponential to obtain an effective electron lifetime that parameterizes the non-uniformity. We have found that an exponential is able to model the charge non-uniformity in all runs across the ICARUS dataset. This electron lifetime should be understood to be effective because, while argon impurities are the dominant effect, the measured lifetime also includes impacts from field distortions and imperfections in signal processing. Figure <ref> shows the electron lifetime across the ICARUS dataset, as well as the corresponding mean signal attenuation in the drift direction. The electron lifetime was maintained at ∼3 ms in the West cryostat and at ∼5 ms in the East cryostat across the Run A and Run 1 datasets. During Run 2, the lifetime in the West cryostat reached 8-10 ms. There are slight differences between the East and West TPCs in both cryostats. Differences in the purity between the TPCs in each cryostat would have to be small since the same argon circulates in both TPCs in a cryostat. This effect may also be an indication of different field distortions in the TPCs which perturb the effective electron lifetime measured here.
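A minimal sketch of this per-run procedure is given below: coarse-grained dQ/dx values are binned in drift time, a central value is extracted in each bin (a truncated mean stands in here for the Landau-convolved-Gaussian MPV fit used in the paper), the profile versus drift time is fit to an exponential to obtain the effective lifetime τ, and hits are then corrected by exp(t_drift/τ). Array names, bin edges, and the truncated-mean stand-in are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import trim_mean

def expo(t, q0, tau):
    """Exponential attenuation of the charge scale with drift time (ms)."""
    return q0 * np.exp(-t / tau)

def effective_lifetime(t_drift_ms, dqdx, n_bins=20, t_max_ms=0.96):
    """Fit an effective electron lifetime (ms) from coarse-grained dQ/dx values."""
    edges = np.linspace(0.0, t_max_ms, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    central_values = np.array([
        trim_mean(dqdx[(t_drift_ms >= lo) & (t_drift_ms < hi)], 0.2)
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    popt, _ = curve_fit(expo, centers, central_values,
                        p0=(central_values[0], 5.0))
    return popt[1]  # tau in ms

def correct_dqdx(dqdx, t_drift_ms, tau_ms):
    """Undo the attenuation so the charge scale is uniform in drift time."""
    return dqdx * np.exp(t_drift_ms / tau_ms)
```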
§.§ Wire Plane Equalization The wire plane equalization step corrects charge reconstruction for detector effects that vary across the two directions in the plane of the readout wires: ŷ, the vertical direction, and ẑ, the (BNB) beamline direction (see figure <ref>). The calibration is done with coarse-grained depositions from through-going cathode-crossing tracks. Depositions are binned in terms of their ŷ-ẑ location on the wire plane in 10×10 bins. This is as small a spatial resolution as is possible given the statistics of cosmic muons (∼ 3 million) available for the calibration. As in the drift direction equalization, in each spatial bin the distribution of dQ/dx values is fit with a Landau distribution convolved with a Gaussian distribution to extract the MPV. The MPV in each spatial bin is converted into a scale factor computed to keep the mean MPV across the TPC fixed. The scale factors are therefore relative on each plane and are not computed with reference to an absolute gain (unlike the drift direction equalization). The scale factors are computed separately for both runs on each wire plane in each TPC. The spatial uniformity maps are shown in the Run 1 dataset for the front induction, middle induction, and collection planes in figures <ref>, <ref>, and <ref>, respectively. Some, but not all, of the features in the map have been traced to known faults in the detector. For example, the band of small dQ/dx around z=0 in each TPC is due to perturbations to the applied wire bias field by the presence of a mechanical bar supporting the front induction wires. The uniformity maps are positively correlated between the three planes: where the collection plane measures less charge, so do the induction planes. This correlation is caused by the induction plane intransparency. Where the induction plane is opaque, less charge reaches the collection plane to be observed by that plane. In addition, the intransparency reduces the measured charge on the induction planes, for two reasons. First, the unipolar collection pulse interferes partially destructively with the bipolar induction pulse. Second, the collection pulse causes the induction plane signal shape to depart from the deconvolution kernel (which is computed with no intransparency), reducing the efficacy of the deconvolution in forming a Gaussian pulse shape and therefore reducing the measured charge. There are a couple of discrete changes in the uniformity between Runs 1 and 2. The difference in the uniformity between Run 1 and Run 2 is shown for the collection plane in figure <ref>. These changes have been traced to a couple changes to the detector operation during the 2022 technical shutdown: two additional failures of middle induction plane wire bias voltage supplies, and a few readout board replacements on the collection plane (which have a slightly different gain). We have not observed any time dependence of the spatial uniformity within either Run 1 or Run 2. §.§ TPC Equalization As a final step, the gains in the four separate TPCs in ICARUS are equalized. This equalization is done separately for both ICARUS runs. This corrects for any broad differences in the gain between the different TPCs or runs. The charge scale for this equalization is computed using stopping cosmic muons, as opposed to throughgoing muons. This choice is made because stopping cosmic muons are used to measure the absolute gain in ICARUS in the ionization energy scale calibration. 
Equalizing the TPC gain with the same sample ensures that different TPCs are completely consistent in the gain fit. The charge scale is computed from distributions of coarse-grained dQ/dx with the drift and wire plane direction equalizations applied. The distributions are split up in terms of the stopping muon track residual range and drift time to select for a single peak dE/dx in each distribution. The residual range is binned in steps of 5 cm from 200-300 cm. The drift time is binned in steps of 100 μs from 500-900 μs. Each histogram of equalized dQ/dx is fit to a Landau distribution convolved with a Gaussian distribution to extract the MPV. The MPVs are averaged over residual range to obtain a single average for each drift time. Then, a scale factor is fit to the mean MPVs for each TPC and each run. This scale factor is normalized so that TPC EW in Run 1 has a scale of 1. The average MPVs and the scale factors are shown in figure <ref>. §.§ Equalization Results Figure <ref> plots the distribution of coarse-grained dQ/dx values from throughgoing cathode-crossing cosmic muons before and after the equalization procedure. After both corrections are applied, the Gaussian width divided by the MPV of the distribution decreases by 13% on the front induction plane, 11% on the middle induction plane, and 43% on the collection plane. The narrowing is most significant on the collection plane because the inherent broadening from readout noise is the smallest on that plane. § NOISE MEASUREMENT AND SIMULATION The characterization of noise is important for understanding its impact on ionization signals and the reconstruction of particle interactions in the detector volume. The ICARUS TPC noise is characterized principally through measurements of the absolute noise scale, frequency characteristics, and channel-to-channel correlations, as is detailed below (section <ref>). These measurements of the noise are used as input to a data-driven model (section <ref>). The performance of this noise model is presented in section <ref>, highlighting areas for future improvement. §.§ Noise Measurement The geometry of the ICARUS TPC readout is important for understanding the noise observed in the detector. Wires are connected in groups of 32 to cables, which serve to transmit the signal on the wires up to the feed-through flanges and to the readout crates. A readout crate holds nine readout boards, each having the capability to digitize signals from two cables for a total of 576 channels per readout crate. The TPC electronics are described in more detail in <cit.> and <cit.>. Data taken with the cathode voltage turned off was chosen for the measurements of noise due to the lack of signal in the waveforms from drifting ionization electrons. The channel-to-channel correlation, ρ, can be defined as ρ_ij = (w⃗_i · w⃗_j)/(σ_i σ_j), where w⃗_i is the waveform for channel i and σ_i is the root-mean-square (RMS) of its waveform. This can be calculated pairwise for each channel within a readout crate and averaged across many events. Channel-to-channel correlations between channels not in the same readout crate are not significant and are subsequently not shown here. The geometry of the readout motivates two distinct classifications for readout crates: crates which connect only to front induction wires and crates which connect to a mix of middle induction and collection wires. The additional wire and cable length for channels in the front induction plane results in higher overall noise.
The cables connecting the wires to the front-end are also in significantly closer proximity due to the path down to the wires and the presence of three sets of cables in a single feed-through flange. Figure <ref> shows the channel-to-channel correlations for channels within the same readout crate. The left plot shows only channels belonging to readout crates serving front induction wires, whereas the right plot shows only channels belonging to readout crates which serve a mix of middle induction and collection wires. The main block diagonal structure of highly-correlated channels reflects the presence of noise that is coherent across channels of the same readout board. The off-diagonal structure of anti-correlation is believed to be due to capacitive or inductive coupling between the cables of adjacent boards. This effect is observed to be stronger in front induction, which is consistent with the closer proximity and higher path overlap of cables for these wires. Though each cable represents 32 wires, the broader correlated component for channels on the same readout board introduces anti-correlations are nearly uniform across the full group of 64 channels. The presence of these significant correlations necessitates some degree of noise filtering. ICARUS employs a coherent noise removal algorithm similar to the one used by MicroBooNE <cit.>. The coherent noise removal algorithm operates on a group of channels and first defines the coherent noise component for that group using the median value of the waveforms for each time tick. This produces a waveform which is expected to represent noise fluctuations that are common to the entire group, which are visually identifiable in images of the raw waveforms. The resulting waveform is then subtracted from each waveform in the group to produce a set of corrected waveforms. In ICARUS, channels belonging to the same readout board are used to define the groups for this noise filtering as motivated by the noise correlation matrices. A downside of this algorithm is that signal from tracks which are isochronous across the group of channels may be impacted as the track itself is coherent in the same manner as the targeted noise. This is partially mitigated by the fact that no group of 64 channels contains signal from a single spatially connected region - each sub-group of 32 channels always sees a distinct region of the detector. Figure <ref> shows an event display with cosmic muon tracks before and after the coherent noise removal algorithm is applied. The coherent noise is visible as the vertical streaks running through adjacent channels. The portion of the waveform that remains after the removal of the coherent noise component is representative of the noise naturally present on the channel due to intrinsic noise sources. The separation between coherent and intrinsic components of the noise allows for a more detailed characterization of the noise. The absolute noise levels are measured by the RMS of each waveform before and after the removal of coherent noise using data taken with the cathode voltage turned off. The distributions per plane are shown in figure <ref> in both units of ADC and Equivalent Noise Charge (ENC) using the conversion factor of 550 e^-/ADC found in <cit.>. Front induction exhibits significantly higher noise due to the longer flat cables and wires. Middle induction and collection have similar noise levels owing to the fact that they have similar wire and cable lengths. 
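As an aside, the two ingredients of this subsection, the channel-to-channel correlation ρ_ij and the per-board median subtraction, are compact to express in code. The numpy sketch below illustrates them; the waveform array layout, the baseline handling, and the normalization over the number of ticks are assumptions for illustration, not the actual ICARUS data format or software.

```python
import numpy as np

def correlation_matrix(waveforms):
    """waveforms: (n_channels, n_ticks) array of signal-free waveforms.
    Returns a correlation matrix with rho_ij = <w_i w_j> / (sigma_i sigma_j),
    averaging over time ticks after removing the per-channel mean."""
    w = waveforms - waveforms.mean(axis=1, keepdims=True)
    sigma = w.std(axis=1)
    rho = (w @ w.T) / w.shape[1]
    return rho / np.outer(sigma, sigma)

def subtract_coherent(waveforms, group_size=64):
    """Median-based coherent noise removal: for each group of channels
    (one readout board here), subtract the per-tick median from every channel."""
    corrected = waveforms.astype(float)
    for start in range(0, corrected.shape[0], group_size):
        group = corrected[start:start + group_size]
        coherent = np.median(group, axis=0)   # per-tick coherent component
        group -= coherent                     # broadcast over channels in place
    return corrected
```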
The substantial variation in cable length for front induction channels drives most of the additional width of front induction noise distributions. The signal-to-noise ratio for each plane can be calculated using the mean hit amplitude from a sample of throughgoing cosmic muons and the measured value of noise after coherent noise filtering. Respectively, these are 4.7, 7.8, and 10.8 for the front induction, middle induction, and the collection planes. Over the course of Run 1 and Run 2, the detector noise has been very stable: less than a few percent on average. The frequency characteristics of each component of the noise can be measured using the discrete Fast Fourier Transform (FFT). Figure <ref> shows the FFT spectra per plane before and after the coherent noise removal. The underlying intrinsic noise populates the expected Rayleigh distribution, and is similar for all three planes. The coherent noise is present as an additional, less smooth spectra on top of the intrinsic noise. The coherent noise also exhibits two broad peaks at specific frequencies that are not yet attributed to a specific source. At the lowest frequency bins, there is a sharp increase due to low-frequency oscillations in the waveforms. These oscillations are not coherent across groups of channels as evidenced by the full noise and intrinsic noise spectra exhibiting the same low-frequency trend. §.§ Noise Simulation The noise model is implemented with an algorithm provided by the Wire-Cell toolkit <cit.>. The algorithm allows for the freedom to configure a noise component by defining groups of channels, the noise spectrum associated with each group, and whether the noise component is coherent across the group or intrinsic. The noise component is simulated for each channel by drawing the amplitudes from the associated spectrum with a randomly chosen phase, then applying the inverse FFT. Coherent noise components have the additional requirement that the phases are shared for all channels within the same coherent grouping. The total waveform for each channel is calculated as the sum of the signal waveform and all noise waveforms from all components. As discussed in the previous section, the dominant coherent noise component observed is across channels within the same readout board. After removal of this coherent component, the noise shows no significant remaining correlated components at other channel groupings. Therefore, the noise model is configured with two noise components: an intrinsic one that is uncorrelated and a coherent one that is correlated across the full group of 64 channels. In both cases, the input spectra reflect the average of the group of 64 wires. §.§ Data and Monte Carlo Simulation Comparisons After configuring the noise model, it is used to generate a sample of events for a comparison with data. The data sample used for this comparison is the same as was used to generate the input spectra. Both the Monte Carlo sample and data sample were analyzed with the exact same code, so the only differences in the results are expected to be from the noise model. The correlation matrices for front induction crates and standard crates is shown in figure <ref>. The noise model accurately reproduces the coherent component of the noise common to channels within the same readout board. The anti-correlated component between channels of adjacent readout boards is not modeled and therefore not reproduced. 
This anti-correlated component observed in data is itself coherent across the readout board and is removed by the coherent noise removal process, so this is not expected to have a noticeable impact on the overall noise levels. The RMS calculated per channel and per event can be used to characterize the performance of the noise model in modeling the absolute noise scale. Figure <ref> shows these distribution per plane before (top) and after (bottom) coherent noise removal. The full noise shows good agreement for front induction, but there is a systematic bias in middle induction and collection. Each readout board in a standard crate contains equal amounts of middle induction and collection wires, so the noise model effectively models the average of the two. This, along with the fact that middle induction exhibits slightly higher noise, results in a bias in opposite directions for both planes. After coherent noise removal, the agreement between data and Monte Carlo improves significantly, and the bias shown by middle induction and collection is reduced. It is worth noting that the waveforms after coherent noise removal are used downstream in the reconstruction, so inefficiencies in the noise model that appear in the full noise and not in the intrinsic noise are expected to have only second-order effects. The noise spectra in data and Monte Carlo can be compared to verify that the same spectra that were used as input are observed. Figure <ref> shows a comparison of the average FFT spectra per plane for data and Monte Carlo. The shapes are reproduced nearly identically, but there are some minor discrepancies in the overall magnitude. Front and middle induction are consistently slightly under-predicted by Monte Carlo, whereas collection is slightly under-predicted after coherent noise removal. These observations are consistent with the results from the RMS noise comparisons shown earlier. Further characterization of the noise model performance can be done by examining two different metrics: the event-to-event variations and the channel-to-channel variations. The event-to-event variations are calculated as the difference of the measured RMS value from the median for the corresponding channel. The width of this distribution is driven by statistical fluctuations from the combination of frequency components of random phases and by short-scale variations in the noise itself. The overall width of the data distribution can be represented as the quadrature sum of the Gaussian widths associated with the two underlying processes. Monte Carlo, which uses a static noise distribution, is only broadened by the statistical fluctuations. By comparing data and Monte Carlo, the additional Gaussian broadening necessary to match Monte Carlo to data, parameterized by σ_t, can be extracted and used to directly characterize the variations in the noise on short time scales. Figure <ref> shows these distribution and the associated results after coherent noise filtering for all three planes. Channel-to-channel variations are calculated as the difference of the measured RMS value from the median for the group of 64 channels. In addition to the statistical and temporal variations discussed above, this distribution experiences broadening from the spatial variation in noise levels across channels within the same readout board. 
As before, the quadrature difference of the widths of the data and Monte Carlo distributions, parameterized as σ_s, characterizes the inefficiency of the noise model at modeling spatial variations smaller than the 64 channel grouping used to configure each noise component. These distributions are shown in figure <ref>. The relative sizes of σ_s and σ_t suggest that spatial variations in the noise are the dominant contribution to mis-modeling of the noise. Correspondingly, the temporal variations in the noise over short time scales are negligible and do not need to be modeled. Further improvements to the noise model should target spatial variations in the noise by decreasing the group size for the intrinsic noise or by adding additional coherent components for smaller group sizes. § SIGNAL SHAPE MEASUREMENT AND SIMULATION Potential disagreement of the TPC electronics response shape or ionization electron field response shape (the convolution of which determines the shape of ionization signal on the ICARUS TPC waveforms) between data and Monte Carlo simulation can lead to associated biases in charge extraction and thus dQ/dx measurements. In order to minimize these biases, we carry out a data-driven tuning of the ionization signal shape used in ICARUS Monte Carlo simulation. Section <ref> details the methodology used for extracting an estimate of the ionization signal shape in both data and Monte Carlo simulation, both of which are used in the tuning procedure described in section <ref>. §.§ Signal Shape Measurement Reconstructed signal shapes are parameterized in terms of the angle θ_xw, the angle of the track projected into the x̂-ŵ plane with respect to the ŵ axis, where x̂ is the drift direction of the ionization electrons in the TPC and ŵ is the direction perpendicular to the wire orientation within the wire plane. This single angle controls the shape of the ionization signal response observed within the TPC waveform <cit.>; it is diagrammed in figure <ref>. We estimate the average signal waveform associated with anode-cathode-crossing cosmic muon tracks for each angle (θ_xw) bin in our angle range, carried out independently for each of the three wire planes. The requirement of angle-cathode-crossing tracks allows us to obtain signal shape measurements associated with ionization charge undergoing very little diffusion due to the ionization originating near the anode plane; as for the studies discussed in section <ref>, the track crossing the cathode allows for the position of the track in the drift direction to be known. We use an angle range of 20^∘ to 76^∘ in track angle bins of 2^∘. This is the accessible range of angles from the anode-cathode-crossing track selection, as angles lower than 20^∘ will not yield many tracks as cosmic muons are likely to range out due to energy loss in the argon over the longer path length, while angles higher than 76^∘ are more difficult to reliably line up in time across different tracks, particularly on the two induction planes where the bipolar nature of the signal shape on the waveform leads to complications. For each of the track angle bins and for each of the three TPC wire planes, we select anode-cathode-crossing track waveform data that is near the anode, chosen to be 13–16 cm away from the anode. 
This distance from the anode plane is chosen to be small enough to have minimal impact from diffusion that could broaden the signal shape on the waveform while being large enough to not lead to significant bias in estimation of the signal shape on the unshielded first induction plane. Coherent noise across common electronics channels is removed from the waveforms before the waveform data is included in the running average. We then pick the wire with peak waveform signal (negative for the two induction planes given the bipolar nature of the signals and positive for the unipolar collection plane signals) closest to the center of the above range (14.5 cm) using the known ionization electron drift velocity to convert the drift time to distance in the drift direction. Waveform data is saved within ±200 time ticks (80 μs) of the peak signal. The waveforms associated with individual tracks in this selection are then aligned by the minimization of a nonlinearity metric, η. For a given track, η is defined as η = log_10( ∑_i^β [(t_i - t_i,exp) cos(θ_xw)]^2 / N ), where i denotes the index of individual charge measurements (in ADC) on the waveform within a given time bin, spanning both the waveform of interest and the associated waveforms from the nearest ±5 wires, β represents the condition that only charge above threshold is considered, t_i is the actual drift time associated with a charge measurement, t_i,exp is the expected drift time for the ionization charge for a straight track in the corresponding track angle bin, and N is the total number of charge measurements above threshold. A threshold of 8 ADC is used for the second induction plane and collection plane, while a threshold of 12 ADC is used for the first induction plane given the higher noise levels on that plane. The entire track is shifted by up to ±5 time ticks to minimize the nonlinearity metric on a track-by-track basis. This is to ensure that the tracks are lined up with each other in time as well as possible. We do this instead of lining up by the peak waveform signal on the primary waveform of interest as this is susceptible to noise fluctuations on the waveform. Tracks with a lower η show less deviation from a hypothetical straight line associated with a given angle bin. Such deviations may be caused by delta ray production along the track which would interfere with the signal response shape measurement. Tracks with η > 1.2 are excluded from the measurement for the first induction plane and collection plane, while a tighter exclusion cut value of η > 0.9 is used for the second induction plane given that the bipolar nature of the signal waveforms leads to smaller η values on average. Next, we combine all selected waveform data for a given track angle bin and wire plane, lined up in time as described above. For each angle bin and plane we truncate the smallest and largest 10% of ADC measurements in each time bin and use the mean of the remaining ADC measurements in each time bin for the estimate of the average signal waveform. This truncation removes the impact from noise and delta rays, as well as unusually low/high signal fluctuations that may skew the results in a particular time bin.
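The averaging and selection steps just described are simple to prototype. The sketch below implements the ±10% truncated mean per time bin and an η-style track metric; array shapes, units, and the way the above-threshold mask is passed in are illustrative assumptions.

```python
import numpy as np

def truncated_mean_waveform(aligned_waveforms, frac=0.10):
    """aligned_waveforms: (n_tracks, n_ticks) ADC values already aligned in time.
    In each time bin, drop the lowest and highest `frac` of entries and average
    the rest, as done for the average signal-shape estimate."""
    sorted_adc = np.sort(aligned_waveforms, axis=0)
    n = sorted_adc.shape[0]
    n_cut = int(np.floor(frac * n))
    return sorted_adc[n_cut:n - n_cut].mean(axis=0)

def nonlinearity(t_obs, t_exp, theta_xw_rad, above_threshold):
    """Track-level nonlinearity metric: log10 of the mean squared, angle-weighted
    time residual of above-threshold charge measurements (see definition above)."""
    resid = (t_obs[above_threshold] - t_exp[above_threshold]) * np.cos(theta_xw_rad)
    return np.log10(np.mean(resid ** 2))
```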
Finally, because the waveform baseline may vary wire-to-wire in our data, we add an additional baseline correction for each average waveform using linear interpolation across the waveform sidebands (where no signal is present), taking the average of time ticks [-200, -160] for the left sideband and time ticks [160, 200] for the right sideband (measured with respect to the time tick associated with the peak signal on the average waveform). The measured signal shapes across three angle bins are compared between data and Monte Carlo simulation in figure <ref>. There is significant disagreement between the data and the nominal simulation on all three planes, especially at high track angle (θ_xw). We have identified three main sources of this disagreement: a tail in the channel electronics response not in the nominal shape, distortions in the field response indicated by the intransparency of the induction planes to charge, and differences in the signal-to-noise ratio between the simulation and the data. The first two causes are directly connected to the signal shape. The last is driven by variations in the detector response to charge (such as the varying electron lifetime measured in section <ref>, and the varying induction intransparency measured in section <ref>) that are not simulated. These variations primarily impact the front induction plane, where the noise is larger and the dependence on the exact signal-to-noise ratio is significant. In the tuning procedure we describe in the next section, we therefore elect to apply the fit only to the measurements on the middle induction and collection planes. The tuning procedure can in principle be applied to the front induction plane once the differences in the variation of signal-to-noise are addressed. §.§ Fit Procedure We have developed a novel procedure to tune simulated signal shapes to match their measurement in data. This procedure tunes the input single electron field responses generated by GARFIELD <cit.> and applied by the Wire-Cell package, which is used in signal shape simulation for ICARUS Monte Carlo simulation. We defer to the initial paper on Wire-Cell for a detailed description of how it works <cit.>. Here we include an abbreviated discussion necessary to understand the fitting procedure. Wire-Cell forms signal shapes by summing the single electron field response on each wire from ionization electrons in particle energy depositions. The single electron field response depends on the location of the electron in the direction perpendicular to the wire direction (which we call ŵ, see figure <ref>). The field response is significant even when the electron is not directly adjacent to the wire. Wire-Cell applies single electron field responses calculated every 0.3 mm for 3 cm (10 wire-spacings) on either side of each wire. Individual clusters of electrons in the simulation arrive in between the discrete locations where the field responses are calculated, so for a given deposition Wire-Cell linearly interpolates between the two on either side. The combined field response from all ionization electrons in a readout on a given wire is then convolved with the electronics response to create the signal shape for that readout. The nominal ICARUS single electron field responses are computed by a GARFIELD simulation of the nominal ICARUS wire plane configuration. The nominal electronics response is a Bessel shaping function with a width of 1.3 μs.
That the observed signal shapes depart from the nominal simulation indicates that the ICARUS detector departs from this nominal configuration in some way. First, we have observed a tail in the electronics response due to imperfect pole-zero cancellation that we measure separately and include in the tuned electronics response. This step is discussed in section <ref>. The remaining differences are harder to attribute and ultimately depend on the inaccessible state of the TPCs inside both cryostats. We have thus taken the perspective that a data-driven approach is an apt fix to these discrepancies. We fit the signal shapes in the simulation directly to the measurement. The objects in the fit are the position dependent single electron field responses and the electronics response[Imperfections in the means to directly pulse the ICARUS TPC readout electronics prevent a precise direct measurement of the electronics response. Instead, the fit described here produces an “effective" electronics response adequate to describe the final signal shape.], although we do not claim to be more accurately measuring any of these objects individually after the fit. We only attempt to model their combined impact on the signal shape. This fit is detailed in section <ref>. Monte Carlo simulation with the signal shapes tuned by this procedure demonstrates a much improved match in the signal shape between data and Monte Carlo simulation, as is shown in section <ref>. §.§.§ Electronics Response Tail Measurement We have observed a long tail, with a time constant of ∼50 μs, in the electronics response of ICARUS. The origin of the effect is imperfect pole-zero cancellation in the transfer function of the electronics. The tail is measured by averaging together waveforms from a large number of high angle muons (large θ_xw) on the collection plane. High angle muons are used because the coherent noise subtraction can depress the effect of the tail when the track is closer to perpendicular to the drift direction. An exponential (e^-t/τ) is fit to the averaged waveform values between 40-80 μs (100-200 ticks). This time range is selected to exclude the region of the waveform where its shape is impacted by the field response, which in particular creates a visible dip in the waveform after the peak that extends out to about 25 μs. The exponential fit obtains a time constant of 48.8 μs that contains 15.9% of the charge from the pulse. The fit exponential is convolved with the nominal electronics response to obtain the effective electronics response. This effective electronics response is the input to the fits to data on all three wire planes as described in section <ref>. Figure <ref> displays the data and the fit. The exponential describes the waveform shape well in the fit region. §.§.§ Signal Shape Model We have developed a model of the signal shape measurement that takes the single electron field responses and the electronics response as input and produces the expected signal shape as a function of the track angle θ_xw. The fit of the field and electronics responses is done by fitting this model to the measured signal shapes. The model first turns the set of single electron field responses (201 total, each spaced 0.3 mm apart) into an angle dependent “track field response". This is done by sub-sampling the single electron field responses. Each sample linearly interpolates the responses on either side (as is done in Wire-Cell).
The samples are shifted in time according to the chosen track angle and summed together. Given the single electron field responses s_-30(t), s_-29.7(t), …, s_30(t) (indexed by position in mm) relative to the wire at time t, the track field response f(θ_xw; t) is equal to f(θ_xw; t) = ∑_x_i [1 - (x_i - ⌊ x_i ⌋)/0.3 mm]· s_⌊ x_i ⌋(t - x_i tanθ_xw/v_D) + [1 - (⌈ x_i ⌉ - x_i)/0.3 mm]· s_⌈ x_i ⌉(t - x_i tanθ_xw/v_D) , where v_D is the drift velocity, x_i are the sampled locations, ⌊ x ⌋ is the position immediately below x of a sampled single electron field response, and ⌈ x ⌉ is the position immediately above x of a sampled single electron field response. In our implementation, we sub-sampled the single electron field responses every 0.03 mm for 2001 sub-samples. The sampled nominal field responses shifted by a few example track angles are shown in figure <ref>. The sum of these samples (i.e., the track field response f(θ_xw; t)) is shown for a few example track angles in figure <ref>. The unphysical spikes in the field responses are caused by the finite sampling spacing of the single electron field responses and are smoothed out by the electronics response, as specified below. The track field response is convolved with the electronics response e(t) to obtain the track signal shape. In addition, the measurement of the signal shape will not perfectly reproduce the signal. The alignment of signals from different muons in the averaged waveform will not be perfect due to detector noise. The resolution from this misalignment smears the shape. (Other broadening effects, such as diffusion, are insignificant.) To account for this effect, the signal shape is convolved with a “measurement kernel" m(θ_xw; t). The final measured track signal shape S(θ_xw; t) is thus equal to S(θ_xw; t) = (f(θ_xw) ⊛ e ⊛ m(θ_xw))(t) , where ⊛ denotes a convolution. The measurement kernel is determined from a fit on ICARUS simulation where the underlying field and electronics response is known. It is found to be well approximated by a Gaussian with a width σ that depends on the track angle as σ(θ_xw) = √(a^2 + b^2 tan^2θ_xw), where a and b are parameters individual to each wire plane. The measurement kernel width as determined in Monte Carlo simulation on each wire plane is shown in figure <ref> for the middle induction and collection planes, on which the fit is performed. The fit to data matches the modeled signal shape S to the measurement by adjusting the electronics response e and the single electron field responses s_i (implicit in the track field response f). In this fit, non-linear transformations parameterized by the fit are applied to the nominal field and electronics responses. The details of these transformations are in appendix <ref>. The fit is done on all measured track angles at once. §.§.§ Fits The results of the fit are shown in figures <ref> and <ref> for the middle induction plane and collection plane respectively. The fit is done in angle bins 2^∘ in width from θ_xw = 20^∘ to 76^∘. The fit improves the signal shape model at all angles on each plane. §.§ Tuned Signal Shape Results As a validation of the tuned signal shapes, we compare the signal shape measurement from data against Monte Carlo simulation generated with the tuning applied. The comparison is shown above for the nominal signal shapes in figure <ref>. Figure <ref> shows the comparison on the middle induction and collection planes with the tune applied. The modeling is improved in all angle bins on both planes. 
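To make the signal shape model described above concrete, the following minimal Python/NumPy sketch assembles a measured track signal shape S(θ_xw; t) from tabulated single electron field responses. It is an illustration only, not the ICARUS or Wire-Cell implementation; the array layout, units, and the integer-tick time shift are simplifying assumptions, and responses, electronics, v_drift, tick, a, and b are hypothetical inputs.

import numpy as np

def track_field_response(responses, theta_xw, v_drift, tick, pitch=0.3, n_sub=2001):
    # responses: (201, n_ticks) single electron field responses tabulated every
    # 0.3 mm from -30 mm to +30 mm; theta_xw in radians; v_drift in mm per
    # microsecond; tick in microseconds.
    n_pos, n_ticks = responses.shape
    positions = np.linspace(-30.0, 30.0, n_pos)
    f = np.zeros(n_ticks)
    for x in np.linspace(-30.0, 30.0, n_sub):            # sub-sample every ~0.03 mm
        lo = min(max(int(np.searchsorted(positions, x)) - 1, 0), n_pos - 2)
        w = (x - positions[lo]) / pitch                   # linear interpolation weight
        resp = (1.0 - w) * responses[lo] + w * responses[lo + 1]
        shift_ticks = int(round(x * np.tan(theta_xw) / v_drift / tick))
        f += np.roll(resp, shift_ticks)                   # crude discrete time shift
    return f

def measured_signal_shape(responses, electronics, theta_xw, v_drift, tick, a, b):
    f = np.convolve(track_field_response(responses, theta_xw, v_drift, tick),
                    electronics, mode="same")             # apply electronics response
    sigma = np.sqrt(a**2 + b**2 * np.tan(theta_xw)**2)    # measurement kernel width
    t = (np.arange(len(f)) - len(f) // 2) * tick
    kernel = np.exp(-0.5 * (t / sigma)**2)
    return np.convolve(f, kernel / kernel.sum(), mode="same")  # smear by m(theta_xw; t)

The Gaussian kernel here plays the role of the measurement kernel m(θ_xw; t) with the width σ(θ_xw) quoted above.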
§ CHARGE RESOLUTION COMPARISON To validate the equalization and simulation results of this paper, we compare the distribution of equalized dQ/dx for throughgoing cosmic muons between data and Monte Carlo simulation. This is shown in figure <ref>. The data is shown after applying the corrections discussed in section <ref>. The simulation uses the noise simulation described in section <ref>. The signal shape applies the nominal GARFIELD simulation on the front induction plane, and the tuned signal shape (as described in section <ref>) on the middle and collection planes. The Monte Carlo simulation does not include any y-z detector response variations. It is simulated with a uniform 3 ms electron lifetime, which is corrected for using the same methodology as in the data. The simulated gain was tuned on each plane so that the peaks of the distributions matched. Taken together, the final comparison shows very good agreement on all planes. There is a small residual underestimation of the charge resolution in simulation. This is observed on all three planes and is largest on the front induction plane. There are a number of possibilities that could explain this effect: variations in the effective channel gain (from, e.g., the varying electron lifetime) not included in the simulation, deficiencies in the noise model, or differences in the inherent fluctuations from recombination. The source of these residual disagreements is currently being investigated. § CONCLUSION This paper has described the procedure developed on ICARUS to equalize charge measurements in data and tune the simulated TPC noise and signal shapes to data. Charge measurements are equalized in the drift and wire plane directions. These corrections predominantly address the attenuation of charge signals due to argon impurities and a variable intransparency to charge across the induction planes in ICARUS. The noise is simulated directly from measurements of signal-less wires. The signal shape is modeled by a GARFIELD simulation of the ICARUS wire planes, with tuning done on the middle induction and collection planes to match the simulated signal shapes to distortions observed in the data. This tuning is a novel procedure we have developed for ICARUS. At this stage of calibration, the modeling of charge resolution is satisfactory on all three wire planes of the ICARUS TPC. Residual uncertainties on the detector performance arise predominantly from variations in the detector response not included in the simulation. The impact of such variations on charge calorimetry is removed by the charge equalization procedure, but this calibration cannot remove the variation in detector performance that is baked in by, e.g., the varying signal-to-noise ratio. Future work to calibrate the ICARUS TPC will address the simulation of these variations in signal-to-noise across the runtime of the experiment. When these variations are included, it may also be possible to apply the signal shape tuning procedure on the front induction plane. § ACKNOWLEDGEMENTS This document was prepared by the ICARUS Collaboration using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. This work was supported by the US Department of Energy, INFN, EU Horizon 2020 Research and Innovation Program under the Marie Sklodowska-Curie Grant Agreement No. 
734303, 822185, 858199 and 101003460, and the Horizon Europe Research and Innovation Program under the Marie Sklodowska-Curie Grant Agreement No. 101081478. Part of the work resulted from the implementation of the research Project No. 2019/33/N/ST2/02874 funded by the National Science Centre, Poland. We also acknowledge the contribution of many SBND colleagues, in particular for the development of a number of simulation, reconstruction and analysis tools which are shared within the SBN program. § FIELD AND ELECTRONICS RESPONSE TRANSFORMATIONS IN SIGNAL SHAPE FIT The single electron field response fit applies a set of non-linear transformations to the nominal Wire-Cell responses. The transformations depend on the time t and the location x along the direction perpendicular to the wire orientation (ŵ). The fit is done by splitting each field response into a left (denoted with a subscript ℓ) and right (denoted with a subscript r) side of a central time tick. The time tick is defined as the peak of the field response on the collection plane and the zero-cross point on the induction planes. Both the shape of the field response s and the time input to the field response t are transformed. All position dependence is encoded in an “offset parameter" o. The fit single electron field response s(x,t), in terms of the nominal Wire-Cell single electron field response s^0(x,t), is defined below. s(t, x) = s_ℓ(t'(t, x), x)· (t'(t, x) < 0) + s_r(t'(t, x), x) · (t'(t, x) ≥ 0) s_ℓ,r(t,x) = a_ℓ,r^0(x) s^s_ℓ,r(t,x) + a_ℓ,r^1(x) (s^s_ℓ,r(t,x))^2·sign(s^s_ℓ,r(t,x)) s^s_ℓ,r(t,x) = (s^0(t, x) + ds_ℓ,r(t, x)) ·exp[(t > t^start_ℓ,r)·e_ℓ,r|t|] a_ℓ,r^0,1(x) = a0_ℓ,r^0,1 + a1_ℓ,r^0,1· o(x) ds_ℓ(t, x) = 0 , ds_r(t, x) = c^p_r·exp[-|x|/ℓ^p_r]·(|x| ≤ 1.5 mm)/[1 + (|t|/τ^p_r)^a^p_r]·(|t| > t^p-start_r) t'(t, x) = t'_ℓ(t, x)·(t < 0) + t'_r(t, x)·(t ≥ 0) + c· o(x) t'_ℓ,r(t,x) = t·(s_ℓ,r^0(x) + s_ℓ,r^2(x)/[1 + (t/τ_ℓ,r^2(x))^2] + s_ℓ,r^4(x)/[1 + (t/τ_ℓ,r^4(x))^4]) s^0,2,4_ℓ,r(x) = s1^0,2,4_ℓ,r· o(x) + s2^0,2,4_ℓ,r· o(x)^2 τ^2,4_ℓ,r(x) = τ1^2,4_ℓ,r + τ2^2,4_ℓ,r· o(x) o(x) = 1 - e^(-|x|/1.5 mm) . The fit parameters in these equations are in bold. In these equations, there are 16 fit parameters on both sides of t = 0 (ℓ, r): t^start_ℓ,r, e_ℓ,r, a0^0_ℓ,r, a0^1_ℓ,r, a1^0_ℓ,r, a1^1_ℓ,r, s1^0_ℓ,r, s1^2_ℓ,r, s1^4_ℓ,r, s2^0_ℓ,r, s2^2_ℓ,r, s2^4_ℓ,r, τ1^2_ℓ,r, τ1^4_ℓ,r, τ2^2_ℓ,r and τ2^4_ℓ,r. There are 5 parameters only on the right side of t=0: c^p_r, ℓ^p_r, τ^p_r, a^p_r, and t^p-start_r. Finally, there is the time shift parameter c. In total, there are 38 parameters in the field response fit. The electronics response is also fit. The nominal electronics response e^0(τ; t) is a Bessel shaping function with a nominal shaping time τ = 1.3 μs. This nominal shape is convolved with the long RC-tail measured externally to the signal shape fit as described in section <ref>. To allow for further distortions, the response is convolved with an RC-RC tail function with a time constant τ_RCRC. The full fit electronics response e(t) is e(t) = (e^0(τ) ⊛ RC(A, τ_RC) ⊛ RC-RC(τ_RCRC))(t) RC(A, τ; t) = δ(t) + A e^-t/τ RC-RC(τ; t) = (t/τ - 2) e^-t/τ/τ , where ⊛ denotes a convolution and δ is the Dirac delta function. The four fit parameters, τ, τ_RC, τ_RCRC, and A, are all in bold.
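As a minimal illustration of the electronics response composition defined above, the sketch below builds e(t) from a nominal response by convolving with the RC and RC-RC kernels, and also implements the offset function o(x). It is illustrative only, not the actual fit code: e0 and the time grid t are assumed inputs, and discrete-sampling normalization factors for the delta function are glossed over.

import numpy as np

def offset(x_mm):
    return 1.0 - np.exp(-np.abs(x_mm) / 1.5)             # o(x) with its 1.5 mm scale

def rc_kernel(t, A, tau):
    k = A * np.exp(-np.clip(t, 0.0, None) / tau) * (t >= 0)
    k[np.argmin(np.abs(t))] += 1.0                       # delta(t) + A exp(-t/tau)
    return k

def rcrc_kernel(t, tau):
    return (t / tau - 2.0) * np.exp(-np.clip(t, 0.0, None) / tau) / tau * (t >= 0)

def fit_electronics_response(e0, t, A, tau_rc, tau_rcrc):
    # e0: nominal Bessel response (already convolved with the measured tail),
    # sampled on the uniform time grid t (same units as the time constants).
    e = np.convolve(e0, rc_kernel(t, A, tau_rc), mode="full")[:len(t)]
    return np.convolve(e, rcrc_kernel(t, tau_rcrc), mode="full")[:len(t)]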
http://arxiv.org/abs/2407.13212v1
20240718065309
Probe the regolith characteristics of asteroids from 9-years infrared observations of WISE/NEOWISE: A case study of the Main-Belt Object (656) Beagle
[ "Liang-Liang Yu" ]
astro-ph.EP
[ "astro-ph.EP" ]
Liang-Liang Yu yullmoon@nju.edu.cn Institute of Science and Technology for Deep Space Exploration, Nanjing University-Suzhou Campus, Suzhou, 215163, China § ABSTRACT This work presents the data processing, fitting procedure, modelling and analysis of 9 years of infrared light curves provided by the WISE/NEOWISE telescope, by which the regolith characteristics of the Main-Belt Object (656) Beagle are studied. We determine Beagle's effective diameter D_ eff=57.3^+4.5_-2.2 km, geometric albedo p_ v=0.05^+0.004_-0.007, mean roughness θ_ RMS=44±4^∘, mean grain size b=100^+350_-90 μm, mean specific heat capacity c_ p=173∼516 JKg^-1K^-1, mean thermal conductivity κ=0.7∼1.3×10^-3 Wm^-1K^-1 and mean thermal inertia Γ=14∼32 Jm^-2s^-0.5K^-1. The albedo of Beagle is a little anomalous, in that the albedos of Beagle's neighbouring asteroids are closer to that of Themis than to that of Beagle itself. The W1-band near-infrared light curves do not reveal significant heterogeneous NIR features on the surface of Beagle, which is inconsistent with the expectation for a family parent whose members have diverse NIR spectral types. These results add new clues that Beagle is probably an interloper or a sister, rather than the parent, of its neighbouring asteroids, including the first main-belt comet (MBC) 133P, and hence may lead to new scenarios for the origin of the famous MBC 133P. In addition, we find that asteroidal shape models from inversion of optical light curves are imperfect for modeling infrared light curves, and thus could mislead evaluations of both the heterogeneity of regolith reflectivity at near infrared and the thermophysical characteristics at thermal infrared. § INTRODUCTION The main-belt asteroid (656) Beagle (hereafter Beagle for short) came to the attention of astronomers when it was proposed to be the parent of the famous main-belt comet (MBC) 133P/Elst-Pizarro <cit.>. The Beagle family is proposed to be a young family with an age of <10 Myr <cit.> or ∼14 Myr <cit.>. The widely accepted scenario for the origin of 133P is then the following: 133P is a young icy fragment of Beagle, which is itself a daughter of the icy main-belt object (24) Themis, on which a water-ice absorption feature was observed in IRTF spectra <cit.>. This formation scenario was somewhat supported by the work of <cit.>, which reported Beagle to have a geometric albedo of p_ v=0.0782±0.0222, similar to the geometric albedos of its neighbouring asteroids (mean p_ v=0.0941±0.0055) and of its potential parent (24) Themis (p_ v=0.0641±0.0157). In addition, <cit.> also found that the neighbouring asteroids of Beagle exhibit diverse spectral types based on their reflectance spectra at near infrared (NIR), implying that the compositions of these asteroids differ from one another. If these neighbouring asteroids of Beagle were formed from the same impact event, then their diverse compositions may originate from the parent, or from the impactor, or be produced by the high temperatures and pressures during the impact process. According to <cit.>, if a parent body is heterogeneous in composition, then the resulting family is expected to show a variety of spectral properties within its members. On the other hand, although the original parent may not have spectral variability on its surface, the impact that produced heterogeneous members will make the surviving parent show surface heterogeneity. For example, Vesta is observed to show heterogeneous surface features due to impacts exposing heterogeneous subsurface and upper-crust materials <cit.>. 
Therefore, if Beagle is the surviving parent of these neighbouring asteroids with diverse compositions, then Beagle is expected to contain diverse compositions on its surface and show heterogeneous NIR features across its surface. However, <cit.> reported Beagle to have a very low geometric albedo p_ v=0.045±0.005, even lower than that of (24) Themis. On the other hand, the recent work of <cit.> found that 133P is more likely to have an old age >100 Myr, which is inconsistent with the young age <10 Myr of the Beagle family. These new observations and theoretical predictions thus raise the question of whether Beagle has genetic connections with 133P and (24) Themis. So in this work, we attempt to use more data and a new method to investigate whether asteroid Beagle has a geometric albedo as low as ∼0.05 and significant heterogeneous reflectivity at near infrared. This objective can be achieved by analyzing the multi-year infrared light curves of Beagle from WISE/NEOWISE with the well-tested thermophysical model — RSTPM <cit.>, as the WISE/NEOWISE observations of main-belt objects at band W1 are dominated by sunlight reflection at near infrared. In addition, the heterogeneity of regolith thermophysical characteristics can also be evaluated from the other three bands, W2, W3, and W4. § THE RADIOMETRIC MODEL §.§ Observations and data processing The Wide-field Infrared Survey Explorer (WISE) mission has mapped the entire sky in four bands around 3.4 (W1), 4.6 (W2), 12 (W3), and 22 (W4) μm with resolutions from 6.1^'' to 12^'' <cit.>. All four bands were imaged simultaneously, and the exposure times were 7.7 s in W1 and W2, and 8.8 s in W3 and W4 <cit.>. The four-band survey started on 2010 January 7, and ended on 2010 August 6 after the outer cryogen tank was exhausted, after which the W4 channel could no longer be used to obtain survey data. The W3 channel continued operation until 2010 September 29, when the inner cryogen reserve was exhausted, while the W1 and W2 channels kept working until the telescope was set into hibernation on 2011 February 1 <cit.>. The two-band survey was then resumed on 2013 December 13 (known as NEOWISE) <cit.>, and is still in service, having obtained nearly 10 years of observations. We found 9 years of observations of Beagle in the WISE archive (see the website of the NASA/IPAC Infrared Science Archive http://irsa.ipac.caltech.edu/). The datasets are summarized in Table <ref>. §.§.§ Flux color corrections All the four-band data need color corrections, especially the W1- and W2-band data, whose infrared fluxes contain not only the object's own thermal emission but also sunlight diffused from its surface; thus the color corrections should be done separately for the thermal component and the reflection component, as the color correction factors are different. Moreover, the color correction factor for the thermal component is temperature dependent, and thus takes a different value when the heliocentric distance differs between observation epochs. So we implement the color-correction procedure as follows: First, convert the database magnitude to flux without any color correction, giving the total integrated flux F(λ±Δλ)_ tot,obs for each band. 
The derived band-integrated fluxes together with the observation geometry are listed in Tables <ref> and <ref>, and each flux should have an associated uncertainty of at least ±10 percent according to <cit.>. Second, F(λ±Δλ)_ tot,obs contains both integrated thermal emission and sunlight reflection, and can be modelled as F(λ±Δλ)_ tot,model= F(λ)_ rl· f_ c,rl+ F(λ)_ th· f_ c,th, where F(λ)_ rl and F(λ)_ th represent the monochromatic sunlight reflection and thermal emission that can be calculated from reflection and thermal models; f_ c,rl and f_ c,th stand for the color correction factors of the reflection component and the thermal component, respectively; here f_ c,rl can be chosen to be the color correction factor of a G2V star, thus fixing f_ c,rl=1.0049, 1.0193, 1.0024, 1.0012 for the four bands W1, W2, W3 and W4, respectively <cit.>. Third, since f_ c,th is temperature dependent, its value for each band is obtained from an interpolation procedure according to the effective temperature T_ eff=[(1-A_ B)L_⊙/(εσπ d_⊙^2)]^1/4≈300 K/√(d_⊙) of the asteroid at the time of observation, on the basis of Table 1 of <cit.>. In Equation (<ref>), d_⊙ is the heliocentric distance in AU; L_⊙ is the solar constant, about 1361.5 Wm^-2; A_ B is the surface Bond albedo; ε is the thermal emissivity averaged over the entire emission spectrum of the surface; and σ is the Stefan-Boltzmann constant. Finally, calculate the theoretical monochromatic sunlight reflection F(λ)_ rl and thermal emission F(λ)_ th via our RSTPM code, and then obtain the theoretical total flux F(λ±Δλ)_ tot,model via Equation (<ref>), so as to compare with the observed total flux F(λ±Δλ)_ tot,obs from the first step. §.§.§ Anomalous data removal Due to noise pollution, e.g. cosmic rays and zodiacal light, some of the observed fluxes may deviate too much from the actual emission of the target. Such anomalous data need to be removed from the parameter inversion procedure, so that the derived physical characteristics of the target are closer to reality. The basic way to remove anomalous data is based on the signal-to-noise ratio (snr). For the WISE/NEOWISE observations of asteroids, we generally remove data with snr<3. For asteroids with a shape model, there is also an additional way to remove anomalous data. Since the multi-year data of WISE and NEOWISE make it possible to generate infrared light curves, data sets that deviate too much from the theoretical curves can be identified by comparing the observational and theoretical infrared light curves and treated as anomalous; removing them allows the inversion procedure to obtain better results. §.§ Thermophysical model As has been mentioned above, this work uses the thermophysical model for realistic surface layers on airless small bodies — RSTPM <cit.>. The model considers not only the real shape and rough surface, but also the real orbital cycle, rotational cycle, and even temperature-dependent thermal parameters in the simulation process, as well as the contribution of sunlight reflection in the infrared radiometric procedure. 
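As an aside, the color-correction bookkeeping described above can be summarized in a short Python sketch. It is illustrative only: fc_th_table is a hypothetical stand-in for the tabulated thermal color-correction factors (e.g. from Table 1 of Wright et al. 2010), and F_rl and F_th stand for the monochromatic reflected and thermal model fluxes computed by the radiometric model; the default Bond albedo is a placeholder.

import numpy as np

FC_RL = {"W1": 1.0049, "W2": 1.0193, "W3": 1.0024, "W4": 1.0012}  # G2V factors

def effective_temperature(d_sun_au, bond_albedo=0.02, emissivity=0.9):
    # T_eff = [(1 - A_B) L_sun / (eps sigma pi d^2)]^(1/4), roughly 300 K / sqrt(d)
    L_sun, sigma = 1361.5, 5.670e-8
    return ((1.0 - bond_albedo) * L_sun /
            (emissivity * sigma * np.pi * d_sun_au**2))**0.25

def model_band_flux(band, F_rl, F_th, d_sun_au, fc_th_table):
    # fc_th_table: {temperature in K: thermal color-correction factor} for this band
    temps = sorted(fc_th_table)
    factors = np.array([fc_th_table[k] for k in temps])
    fc_th = np.interp(effective_temperature(d_sun_au), np.array(temps), factors)
    return F_rl * FC_RL[band] + F_th * fc_th              # F(lambda +/- dlambda)_tot,model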
In comparison to previous models, the advantages of the RSTPM are: (1) a different mathematical technique is used to solve the influence of surface roughness on the energy balance equation at the surface boundary; (2) with the aim of removing the degeneracy between thermal inertia and roughness by interpreting multi-epoch thermal light curves, the variation of thermal parameters due to the temperature variation caused by the orbital and rotational cycles is taken into consideration; (3) a combined model that simultaneously computes thermal emission and sunlight reflection under the same surface topography is proposed to fit infrared data in case the data contain significant sunlight reflection. RSTPM therefore has advantages for small bodies which have regolith on the surface and a large orbital eccentricity or an obliquity close to 90 degrees. For the target asteroid Beagle, we assume it has fine regolith on its surface, as it is much larger than Eros, Ryugu and Bennu. § FITTING PROCEDURE AND RESULTS §.§ The Input and Free Parameters In order to interpret the multi-year observations, RSTPM needs several input parameters, including observation geometry, shape model, spin orientation, rotation phase ph, scattering weight-factor w_ f, geometric albedo p_ v, effective diameter D_ eff, mean grain radius b, and roughness fraction f_ r, which can be related to the root-mean-square slope as θ_ RMS=√(f_ r)×50^∘ <cit.>. The observation geometry at the time of each observation can be easily obtained according to the orbits of Beagle and WISE. The spin period and spin orientation, together with the shape model of Beagle, are known from the light-curve inversion method via the DAMIT database. The utilized shape model of Beagle is shown in Figure <ref>, where ph represents the rotational phase defined as ph=1-φ/(2π), in which φ is the local longitude of the observer in the body-fixed coordinate system. The scattering weight-factor w_ f is introduced in the combined Lambert-Lommel-Seeliger law via C_ L(ψ_i,ψ_ o,i,α,w_ f)=f(α)[w_ f+1/(ψ_i+ψ_o, i)], where C_ L represents a correction coefficient to the Lambertian reflection, ψ_i and ψ_ o,i are the cosines of the incident angle and emergence angle on facet i respectively, and α is the solar phase angle; f(α) is the phase correction function, which, according to <cit.>, is f(α)∼0.5exp(-α/0.1)-0.5α+1. The parameter w_ f represents the weight of the Lambertian term in the scattering law, and is an artificial factor introduced to interpret sunlight reflection. Its physical significance is not entirely clear, so we only require a scattering weight-factor w_ f that achieves the best fit to the observations. The effective diameter D_ eff, defined as the diameter of a sphere with the same area as that of the shape model, can be related to the geometric albedo p_v and absolute visual magnitude H_v via: D_ eff=1329× 10^(-H_v/5)/√(p_v) ( km)  . In addition, the geometric albedo p_v is related to the effective Bond albedo A_ eff,B by A_ eff,B=p_v(0.290+0.684G) , in which G is the slope parameter in the H, G magnitude system of <cit.>. For Beagle, this work uses H_v=10.8 and G=0.15 (IAU MPC). Thus the geometric albedo p_ v, roughness θ_ RMS, and mean grain radius b are the free parameters to be evaluated from the fitting procedure. §.§ Wavelength-dependent Thermal Emissivity In general, a constant thermal emissivity ϵ≈0.9 is assumed for each band in the model to calculate the thermal emission component. 
However, in the case of Beagle, we find that if a wavelength-dependent thermal emissivity is used, the model results match the observations better, as shown in Figure <ref>. After adjusting the thermal emissivity to fit the observations, we find that the best fit is obtained using ϵ≈0.72, 0.72, 0.72, 0.95 for the W1, W2, W3 and W4 bands, respectively. The thermal emissivity of band W4 is close to 1, indicating that band W4 is near the Wien peak of the thermal emission spectrum of Beagle. On the other hand, with RSTPM, the fraction of sunlight reflection in the observed flux of Beagle for each band of WISE/NEOWISE can be evaluated; the results are plotted in Figure <ref>. The W1 band is reflection dominated with a reflection ratio of ∼95%, the W2 band is thermal-emission dominated but still has a significant reflection ratio of 10∼45%, and the W3 and W4 bands are almost entirely thermal emission with a reflection ratio <0.1%. §.§ Results of Roughness, Grain Size and Albedo By adopting these band-dependent thermal emissivities, we then fit the observations by scanning the roughness θ_ RMS in the range 0∼50^∘ and the mean grain radius b in the range 1∼1000 μm. For each pair (θ_ RMS,b), a best-fit geometric albedo p_ v together with the effective diameter D_ eff is found to compute the reduced χ^2_ r. The results are presented in Figure <ref> as a contour of χ^2_ r(θ_ RMS,b). According to Figure <ref>, well-constrained 1σ-level limits are derived for the roughness θ_ RMS and mean grain radius b, giving θ_ RMS=44±2^∘ and b=100^+130_-70 μm, respectively. The 3σ-level constraint for the roughness is θ_ RMS=44±4^∘, whereas for the mean grain radius a relatively wide 3σ-level limit is obtained, b=100^+350_-90 (10∼450) μm. According to the above-derived 1σ and 3σ ranges of roughness and mean grain radius, the corresponding geometric albedo p_ v and χ^2_ r are picked out, leading to the p_ v∼χ^2_ r relation shown in Figure <ref>. In this way, we obtain the 1σ and 3σ-level limits of the geometric albedo as p_ v=0.05^+0.002_-0.003 and p_ v=0.05^+0.004_-0.007 respectively, and simultaneously the effective diameter of Beagle is obtained as D_ eff=57.3^+1.8_-1.1 km (1σ) and D_ eff=57.3^+4.5_-2.2 km (3σ). §.§ Regolith thermophysical characteristics Thermal inertia is a strong function of temperature. With the above-derived mean grain radius, we can now evaluate the change of the surface thermal inertia of Beagle due to seasonal temperature variation, according to the relationships between thermal inertia, thermal conductivity, specific heat capacity and temperature given in <cit.>. In the left panel of Figure <ref>, a map of the surface temperature of Beagle is plotted as a function of local latitude and orbital mean anomaly. Each temperature has been averaged over one rotational period. We can clearly see that the temperature at each local latitude reaches its maximum (summer) or minimum (winter) at different orbital positions as a result of the seasonal effect. The temperature at the poles can vary from ∼26 K to ∼183 K. Figure <ref> shows that, considering the seasonal temperature variation and the 3σ-level range of the mean grain radius b, the regolith thermal inertia, thermal conductivity, and specific heat capacity of Beagle are estimated to vary within 2∼45 Jm^-2s^-0.5K^-1, 0.3∼2.3×10^-3 Wm^-1K^-1, and 7∼562 JKg^-1K^-1, respectively. Although the thermal parameters are temperature dependent, the majority of relevant works ignore such temperature dependence and only estimate average values. 
For comparison with such existing results, we estimate the seasonal average thermal parameters of Beagle with the derived mean grain radius and the seasonally averaged temperature as inputs. The seasonal average temperature is a function of local latitude, T̃(θ), and can be estimated from (1-A_ eff,B)L̃_ s(θ)=εσT̃(θ)^4, where A_ eff,B is the Bond albedo, ε∼0.9 is the average thermal emissivity, and L̃_ s(θ) is the annual average incoming solar flux at each latitude. The results are presented in the right panel of Figure <ref>, giving a seasonal average temperature of Beagle of 90∼170 K, and accordingly an average specific heat capacity c_ p=173∼516 JKg^-1K^-1, an average thermal conductivity κ=0.7∼1.3×10^-3 Wm^-1K^-1, and an average thermal inertia Γ=√((1-ϕ)ρ_ g c_ pκ)=14∼32 Jm^-2s^-0.5K^-1, where the porosity ϕ is taken to be 0.5 and the regolith grain density ρ_ g to be 3110 kgm^-3 <cit.>. §.§ Infrared light curves With the above-derived physical and thermophysical parameters of Beagle, we are now able to determine the rotation phase of Beagle at the time of each WISE/NEOWISE observation by comparing the observational and theoretical light curves. To do so, the 3D shape model is used to define the local body-fixed coordinate system, where the z-axis is chosen to be the rotation axis. Moreover, if we define the view angle of one observation with respect to the body-fixed coordinate system to be (φ,θ), where φ is the local longitude and θ the local latitude, then the rotational phase ph of this observation is related to the local longitude φ via ph=1-φ/(2π), and the "zero" rotational phase is chosen to be the "Equatorial view (0^∘)" as shown in Figure <ref>. If a reference epoch is selected and the rotational phase at this epoch is assumed to be zph, then the rotational phases of all other data can be derived by taking into account the observation time and geometry. Furthermore, for a particular epoch, light curves can be derived for each band by correcting the observed fluxes at various epochs into one rotation period at this epoch, where the correction is implemented via F_i, corr=F_i(r_i, helio/r_0, helio)^2 (Δ_i, obs/Δ_0, obs)^2, in which F_i, corr is the flux after correction, F_i is the original observed flux, r_i, helio and r_0, helio are the heliocentric distances at epoch i and at the reference epoch, while Δ_i, obs and Δ_0, obs are the corresponding observation distances. Following the above method, we first select '2010-01-31 12:06' as the reference epoch for deriving the reference rotational phase zph. For the flux correction, however, in order to reduce the flux errors caused by the correction in Equation (<ref>), we select 28 separate reference epochs, so that data close to each reference epoch (within three days) are used to generate infrared light curves. In this way, we obtain 28 infrared light curves. Then, for each reference epoch, theoretical infrared light curves are simulated by RSTPM to fit the observational light curves generated above. The best-fit results are plotted in Figure <ref>. Overall, our modelled infrared light curves match all four-band observational light curves of WISE/NEOWISE well within the observational errors, indicating that our model results are reliable. 
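For concreteness, the epoch correction of Equation (<ref>) and the seasonal-average thermal inertia quoted above can be written as a minimal Python sketch; it is illustrative only, and the numerical values in the usage comment are placeholders rather than fit results.

import numpy as np

def correct_flux(F_obs, r_helio, r_helio_ref, delta_obs, delta_obs_ref):
    # F_corr = F_i (r_i,helio / r_0,helio)^2 (Delta_i,obs / Delta_0,obs)^2
    return F_obs * (r_helio / r_helio_ref)**2 * (delta_obs / delta_obs_ref)**2

def thermal_inertia(c_p, kappa, porosity=0.5, rho_grain=3110.0):
    # Gamma = sqrt((1 - phi) rho_g c_p kappa), in J m^-2 s^-0.5 K^-1
    return np.sqrt((1.0 - porosity) * rho_grain * c_p * kappa)

# e.g. thermal_inertia(c_p=300.0, kappa=1.0e-3) gives about 22 J m^-2 s^-0.5 K^-1,
# within the 14~32 range quoted above.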
To investigate whether Beagle has surface heterogeneity, the observation/model ratios are plotted as a function of rotation phase for each band in Figure <ref>, where a clear rotation-phase dependent feature is observed in the W4-, W3-, and W2-band data, with observation/model ratios significantly higher around ph=0.2 and 0.7, whereas the W1-band data do not show such a feature. Since both an irregular shape and surface heterogeneity can contribute to the rotational variation of light curves, and bands W4, W3, and W2 are thermal-emission dominated, the heterogeneous features of the thermal light curves of WISE/NEOWISE may imply that: (1) the light-curve inversion shape model is not perfect for modeling thermal emission; or (2) the regolith of Beagle has heterogeneous thermophysical characteristics along longitude. § DISCUSSION The multi-epoch infrared data of WISE/NEOWISE make it possible to probe the regolith characteristics of asteroids in detail. As an example, this work uses 9 years of infrared data from WISE/NEOWISE to study the regolith of the Main-Belt Object (656) Beagle with the well-tested thermophysical model — RSTPM <cit.>. Details of the data processing, fitting procedure, modelling and analysis of the infrared light curves are presented. The results show that Main-Belt Object Beagle has a low mean thermal inertia of Γ=14∼32 Jm^-2s^-0.5K^-1, consistent with that of most large main-belt objects <cit.>, and a geometric visual albedo as low as 0.043∼0.054, far lower than the geometric albedos of its neighbouring asteroids (mean p_ v=0.0941±0.0055) <cit.> and even lower than the geometric albedo of (24) Themis (p_ v=0.064^+0.008_-0.011) <cit.>. Although the low albedo of ∼0.05 still lies at the edge of the 1σ range of the Beagle family and the Themis family, it is a little anomalous that the albedos of Beagle's neighbouring asteroids are closer to that of Themis than to that of Beagle itself. In addition, Figure <ref> shows that Beagle exhibits significant rotation-phase dependent features at bands W4, W3 and W2, but not at band W1. According to Figure <ref>, bands W4, W3 and W2 are thermal-emission dominated, whereas band W1 is dominated by NIR reflection. The result therefore implies that the surface of Beagle does not have significantly heterogeneous NIR reflectivity. As mentioned before, <cit.> shows that the neighbouring asteroids of Beagle have diverse NIR spectral types, and Beagle would be expected to show heterogeneous NIR features across its surface if it were the parent of those neighbouring asteroids. However, our result shows that Beagle has no significant heterogeneous NIR reflectivity, indicating either that Beagle may not be the parent of its neighbouring asteroids, or that the surface heterogeneity of Beagle at NIR is eliminated for some reason, for example the use of an imperfect light-curve inversion shape model, which does not remove the degeneracy between regional NIR reflectivity and slope. On the other hand, for bands W4, W3 and W2, a comparison between Figures <ref> and <ref> shows that the higher observation/model ratios appear around ph=0.2 and 0.7, the phases at which Beagle happens to have its minimum cross-sectional area, implying that the rotation-phase dependent feature of bands W4, W3 and W2 is strongly related to the utilized shape model. 
We have also performed several tests by adjusting the rotation period within 7.033±0.001 h in the fitting procedure, but the rotation-phase dependent feature remains unchanged, indicating that the utilized shape, rather than the rotation period, is more likely to be the cause of this feature. Hence the shape model obtained from inversion of optical light curves is not perfect for infrared light curves; similar phenomena have been found by <cit.> and <cit.>. Nevertheless, the possibility of heterogeneous regolith thermophysical characteristics across the surface of Beagle remains open, because it would be reasonable to expect different physical characteristics (e.g. roughness) between the body and the smaller head, as such phenomena have been observed on other asteroids by in-situ space missions, for example (25143) Itokawa <cit.>. § CONCLUSION By analyzing 9 years of infrared light curves of Beagle from WISE/NEOWISE, we obtain the following results: 1). Beagle has an effective diameter D_ eff=57.3^+4.5_-2.2 km, geometric albedo p_ v=0.05^+0.004_-0.007, mean roughness θ_ RMS=44±4^∘, mean grain size b=100^+350_-90(10∼450) μm, mean specific heat capacity c_ p=173∼516 JKg^-1K^-1, thermal conductivity κ=0.7∼1.3×10^-3 Wm^-1K^-1 and thermal inertia Γ=14∼32 Jm^-2s^-0.5K^-1. 2). We confirm that Beagle has an anomalously low albedo, in the sense that the albedos of Beagle's neighbouring asteroids are closer to that of Themis than to that of Beagle itself. In addition, the W1-band NIR light curves of Beagle do not reveal significant heterogeneous NIR reflectivity across the surface of Beagle. This result does not support Beagle being the parent of its neighbouring asteroids, which have diverse NIR spectral types <cit.>. A possible explanation of these results is that Beagle may be an interloper or a sister, rather than the parent, of its neighbouring asteroids, including the first MBC 133P. Considering Beagle's albedo anomaly, we prefer to surmise that Beagle is an interloper in the Themis family, although it remains possible that Beagle is an anomalous member of the Themis family. Our results may thus add new clues that Beagle probably has no genetic connection with its neighbouring asteroids or even with (24) Themis, and may lead to new scenarios for the origin of the famous MBC 133P. 3). Asteroidal shape models from inversion of optical light curves can have obvious imperfections. As a result, heterogeneity of surface reflectivity at near infrared is difficult to discover when such shapes are used to model the theoretical infrared light curves. In addition, imperfect shapes can also lead to rotation-dependent features in thermal light curves (e.g. bands W2, W3, W4 of WISE and NEOWISE), which would mislead the evaluation of the heterogeneity of regolith thermophysical characteristics. Therefore, to obtain truthful information about the surface heterogeneity of asteroids, it is necessary to use input shapes that are closer to reality. § ACKNOWLEDGMENTS We would like to thank the WISE teams for providing public data. This work was supported by grants from The Science and Technology Development Fund, Macau SAR (File No. 0051/2021/A1), and Faculty Research Grants of The Macau University of Science and Technology (File). [Abe et al.2006]Abe2006 Abe, M., Takagi, Y., Kitazato, K., Abe, S., et al., 2006. Near-Infrared Spectral Results of Asteroid Itokawa from the Hayabusa Spacecraft, Science 312: 1334-1338 [Bowell et al.1989]Bowell Bowell, E., Hapke, B., Domingue, D., et al., 1989. 
Application of photometric models to asteroids. In Asteroids II, pp. 524-556 [Campins et al.2010]Campins2010 Campins, H., Hargrove, K., Pinilla-Alonso, N., et al., 2010, Nature, 464(7293): 1320-1321 [Carruba2019]Carruba2019 Carruba, V. 2019, P&SS, 166, 90 [Cutri et al.2015]Cutri2015 Cutri, R., Mainzer, A., Conrow, T., et al. 2015, Explanatory Supplement to the NEOWISE Data Release Products, http://wise2.ipac.caltech.edu/docs/release/neowise/expsup/ [Cutri et al.2012]Cutri2012 Cutri, R., Wright, E., Conrow, T., et al. 2012, Explanatory Supplement to the WISE All-Sky Data Release Products, http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/index.html [Ďurech, et al.2017]Durech2017 Ďurech, J., Delbo’, M., Carry, B., Hanuš, J., & Alí-Lagoa, V., 2017, A&A, 604, A27 [MacLennan & Emery 2021]MacLennan2021 MacLennan, E. M., & Emery, J. P., 2021, Planet. Sci. J., 2(161): 1-12 [MacLennan & Emery 2022]MacLennan2022 MacLennan, E. M., & Emery, J. P., 2022, Planet. Sci. J., 3(47): 1-23 [Fornasier et al.2016]Fornasier2016 Fornasier, S., Lantz, C., Perna, D., & Campins, H., et a., 2016, Icarus, 269: 1-14 [Hanus et al.2015]Hanus2015 Hanuš, J., Delbo’, M., Ďurech, J., & Alí-Lagoa, V., 2015, Icarus, 256, 101 [Huang et al.2022]Huang2022 Huang, D., Hanuš, J., Masiero, J. R., & Tholen, D. J., 2022, Planet. Sci. J., 3(56): 1-25 [Kaasalainen, Torppa & Muinonen2001]Kaasalainen2001b Kaasalainen, M., Torppa, J., & Muinonen, K., 2001, 153, 37 [Mainzer et al.2011]Mainzer2011 Mainzer, A., Bauer, J., Grav, T., et al. 2011, ApJ, 731, 53 [Mainzer et al.2014]Mainzer2014 Mainzer, A., Bauer, J., Cutri, R., et al., 2014, ApJ, 792, 30 [Mainzer et al.2016]Mainzer2016 Mainzer, A. K., Bauer, J. M., Cutri, R. M., et al. 2016, PDSS, EAR-A-COMPIL-5-NEOWISEDIAM-V1.0 [Michel et al.2015]Michel2015 Michel P., Richardson D. C., Durda D. D., Jutzi M., and Asphaug E., 2015, Collisional formation and modeling of asteroid families. In Asteroids IV (P. Michel et al., eds.), pp. 341–354. Univ. of Arizona, Tucson [Nesvorný et al.2008]Nesvorny2008 Nesvorný, D., Bottke, W. F., Vokrouhlický, D., et al., 2008, ApJL, 679: 143-146 [Opeil et al.2010]Opeil2010 Opeil, C. P., Consolmagno, G. J., Britt, D. T., 2010, Icarus, 208, 449-454 [Rivkin & Emery2010]Rivkin2010 Rivkin, A.S., & Emery, J. P., 2010, Nature, 464(7293): 1322-1323 [Rousseau, De Sanctis & Raponi et al2021]Rousseau2021 Rousseau, B., De Sanctis, M. C., Raponi, A., & Ciarniello, M., et al., 2021, A&A, 653, 118 [Wright et al.2010]Wright2010 Wright, E. L., Eisenhardt, P. R. M., & Mainzer, A. K., et al., 2010, AJ, 140, 1868 [Yu et al.2020]Yu2020 Yu, L. L., Hsia, C.H., & Ip, W. H., 2020, AJ, 159(66): 1-10 [Yu & Ip2021]Yu2021 Yu L. L., & Ip W. H., 2021, ApJ, 913(96): 1-22
http://arxiv.org/abs/2407.12260v1
20240717021320
HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems
[ "Sonia Castelo", "Joao Rulff", "Parikshit Solunke", "Erin McGowan", "Guande Wu", "Iran Roman", "Roque Lopez", "Bea Steers", "Qi Sun", "Juan Bello", "Bradley Feest", "Michael Middleton", "Ryan Mckendrick", "Claudio Silva" ]
cs.HC
[ "cs.HC" ]
§ INTRODUCTION The concept of an AI-assisted task guidance system, which guides a user through a task using wearable sensors to detect objects and actions, is quickly shifting from science fiction to an impending reality. The potential applications of task guidance systems include physical tasks across a wide variety of domains such as medicine, mechanics, and military endeavors. Such a system could introduce tasks to trainees starting a new role and track their performance improvement over time, both for their own benefit and for the retrospective analysis of their peers. It could also serve as a second pair of eyes for domain experts, increasing task efficiency, especially during repetitive or stressful tasks. In recent years, enormous advancements in machine perception and reasoning, along with hardware innovations, have made it possible to begin developing robust AI-assisted task guidance systems <cit.>. This is a complex undertaking, requiring several heterogeneous sensors and machine learning models to work together to perceive the physical environment and reason about object state changes relevant to a given task. These systems typically involve an augmented reality (AR) headset, which superimposes graphics onto the performer’s real-world environment and collects data relevant to their behavior (e.g. egocentric video, audio, gaze, hand interactions) <cit.>. Moreover, these data can be augmented with external sensors that gather information about human behavior, such as sensors to perform functional near-infrared spectroscopy (fNIRS), a popular technique for studying brain activity which is widely used to quantify mental workload <cit.>. These performer behavior and mental workload data enable task guidance systems to adapt task instructions based on the performer's mental state (for clarity, we refer to subjects using the AR system to perform tasks during a session as “performers” and subjects using HuBar to analyze data as “users”). The recent development of increasingly sophisticated AR headsets (e.g., Microsoft Hololens, Meta Quest, Apple Vision Pro) provides the hardware necessary for AI-assisted task guidance, and has also piqued the interest of stakeholders who could benefit from such a system. This increase in popularity has also prompted initiatives to collect task performance data from subjects with varying expertise levels. However, turning these data into useful insights requires intuitive systems which enable developers and researchers to understand human behavior at scale and under heterogeneous constraints. Previous efforts have proposed approaches to explore performer actions (e.g. position and gaze) over time using custom visualizations <cit.>. These approaches, however, lack mechanisms to understand the performer's mental state and how it correlates to their actions. Furthermore, these previous works do not explore comparison between individuals with different levels of expertise at the given task. More detailed performer modeling could make AR systems more adaptable and aid in coaching or performance report applications, especially if this performer modeling is situated in the context of data describing the surrounding environment. Challenges in modeling performer behavior To effectively model performer behavior, we must determine a method of summarizing and comparing performer behavior across sessions. 
This necessitates a meaningful way to compare multimodal time series data (e.g. gaze origin and direction, acceleration, angular velocity, fNIRS sensor readings) of different durations. This is a nontrivial task, especially since two performers may both successfully complete the same task by performing the same steps in different orders, or even by repeating some steps. Moreover, performer behavior modeling requires a robust method for visualizing any correlation between cognitive workload (e.g. from fNIRS sensor data) and the sensor data streams capturing the motion of the performer. Our Approach We propose HuBar, a visual analytics tool for summarizing and comparing task performance sessions in AR based on performer behavior and cognitive workload using fNIRS, gaze, and inertial measurement unit (IMU) data. The interface is composed of a hierarchy of four visual components that allow the user to compare recorded task guidance sessions at varying levels of detail. At the overview level, users can compare sessions based on IMU, gaze, or fNIRS data, explore aggregated metrics for performer perception, attention, and memory workload, and select sessions of interest (see Sec. <ref>). Users can then use the Event Timeline View to understand correspondences between task procedures, human errors, workload effects, and task phases for selected sessions (see Sec. <ref>). The Summary Matrix View increases the level of granularity of this analysis by showcasing how human error varies with each task procedure. Finally, the Detail View shows the video, IMU, and gaze data for selected portions of a given session (see Sec. <ref>). All views are linked and interactive. In short, our tool facilitates post-hoc analysis of task guidance in AR through visualizations that highlight similarities and differences in performer behavior between multiple task sessions, flag human errors in task performance, and display how the performer's cognitive workload level responds to events in the physical environment. Our design was inspired by requirements and intermittent feedback from developers of AR systems and experts that create and evaluate these systems in the context of the Defense Advanced Research Projects Agency’s (DARPA) Perceptually-enabled Task Guidance (PTG) program <cit.>. To summarize, our main contributions are: * An interactive visualization tool, HuBar, containing a hierarchy of visualizations that facilitate the exploration and comparison of performer behavior at varying levels of detail, specifically highlighting the correlations between cognitive workload, IMU, gaze, and actions during task performance. This interface was designed to enable the comparison of multimodal time series data corresponding to interleaved task procedures of differing durations. * We illustrate the value of HuBar through two case studies that demonstrate how domain experts leverage the tool as an after-action report and in a coaching scenario using real-world data. * We validate our design decisions through interviews with 5 domain experts with extensive experience (collectively) in human factors, fNIRS, biovisualization, neuroinformatics design, and AR. This paper is organized as follows: Sec. <ref> reviews the relevant literature on human motion analysis based on time series, measuring workload effects in AR environments, and human behavior based on fNIRS data. Sec. <ref> describes the data. Sec. <ref> specifies the requirements we aim to achieve and describes HuBar in detail, including each aspect of the visualization design. Sec. 
<ref> outlines two case studies in which HuBar proves useful to experts in our chosen domain, followed by an expert interview and discussion of the feedback we received on our system. Sec. <ref> includes a discussion of limitations of our system, potential future works, and concluding remarks. § RELATED WORK §.§ Human Behavior Analysis based on Time Series The analysis of human behavior using time series data from various sensors, including wearable and AR devices, is well-studied. Activity recognition, a core application, leverages data from IMUs found in smartphones, watches, and earbuds to estimate and predict body movements over time, illustrating the potential of wearable sensors in capturing dynamic human data <cit.>. Key to analyzing human behavior is the extraction of meaningful features from sensor data. Studies have demonstrated the use of advanced techniques, such as time series shapelets, to segment behavior activities from sensor data <cit.>. Fulcher’s work further underscores the significance of integrating multiple data streams for a holistic view of behavioral patterns <cit.>. §.§ Visualization Tools for Human Behavior Analysis based on Time Series Various visualization tools have been introduced to analyze and interpret sensor data for human behavior analysis. Chan et al.'s Motion Browser for analyzing upper limb movements <cit.> and Xu et al.'s ensemble of techniques for multimodal data analysis <cit.> represent significant advancements. These tools facilitate understanding of muscle coordination, behavior distribution, and interdependence among behavioral variables through sophisticated visual analytics. Notably, a study by Öney et al. provided insight into best practices for visualizing time series data collected by an AR headset using gaze data <cit.>. This system utilized both qualitative and quantitative analysis methods to provide insights into human attention and behavior in AR applications. Together, these works demonstrate a well-studied area of visualization and human behavioral analysis. However, one area that these visualization techniques rarely accommodate is human behavioral analysis with physiological measures. This is especially sparse in augmented reality tools, where physiological measures are often paired with AR sensor suites to monitor an individual's activity on real-world tasks. §.§ Insights into human performance with fNIRS Functional Near-Infrared Spectroscopy (fNIRS) provides physiological measurements through non-invasive tracking of brain activity by monitoring oxygenated and deoxygenated hemoglobin levels <cit.>. fNIRS is often used as a brain-computer interface (BCI) when movement and portability are paramount to the task being measured <cit.>. Notably, this is often the case in virtual reality, augmented reality, and real-world tasks. Human behavior understanding can be amplified through these concentration measurements by inferring cognitive workload states in conjunction with synchronous multimodal measurements of an individual's actions and tasks. A key application of fNIRS is assessing cognitive workload, namely employing behavioral models to infer workload capacity from structured tasks. These models facilitate understanding across both laboratory and real-world settings, predicting cognitive states from hemoglobin concentration data <cit.>. 
Research, including works by McKendrick et al., validates the cross-person and cross-task applicability of these models, demonstrating their significance in translating lab findings to practical environments <cit.>. §.§ Human Behaviour based on fNIRS Multimodal data, synchronously collected with fNIRS-based cognitive workload, enriches the analysis of human behaviour, guiding the design of more responsive and adaptive real-world systems. Mark et al. provide a comparison study which incorporates various brain-body measures to offer insights into cognitive processes over time <cit.>. Similarly, Yuksel et al. demonstrate how adjusting task difficulty based on cognitive load readings and behavioral measurements can significantly improve learning efficiency, as seen in their adaptive piano training program <cit.>. In high-stakes environments like aviation, fNIRS is often used for monitoring cognitive workload and fatigue, offering insights into pilot engagement and decision-making processes <cit.>. Various studies <cit.> highlight the role of fNIRS in evaluating pilot performance in varied scenarios, including real and simulated flights, thereby showcasing the modality's adaptability and effectiveness in critical applications. Moreover, many of these studies rely on multiple modes of synchronous sensor data in addition to fNIRS physiological measurements to understand human behavior. This prompts the need for visualization tools to assist behavioral specialists in interpreting complex interconnected time series datasets, specifically tools which link physiological measurements to specific behaviors and decisions. §.§ Measuring workload effects in AR environments The vast majority of previous studies about cognitive workload effects in AR environments focus on measuring the impact of using an AR headset on the wearer's mental workload during a task <cit.>, rather than measuring the cognitive workload of a person who happens to be performing the task in an AR environment. Caarvida <cit.> and AutoVis <cit.>, for instance, provide tools to explore automotive test data, but do not draw correlations between cognitive load and performer actions and errors. More recent AR studies involve interfaces that evolve as a function of the individual's workload state, requiring real-time and multimodal behavioral analysis <cit.>. This brings a new set of challenges to visualization tools. We need tools that can generalize to many environments, run in real time, and visualize many synchronous streams of data. Galati, Schoppa, and Lu implement a visualization tool in their AR pipeline <cit.>. This tool features an interactive exploration of user movements with respect to raw fNIRS signals, allowing experts to compare and identify areas of cognitive activity in the raw signal. However, this tool is tailored to handle data from this specific study. Furthermore, it visualizes raw fNIRS data instead of classified workload states. While this is useful for neuroscientists who want a better understanding of spatial brain data, it is not as helpful for the broader study of human behavior, particularly by AR system developers who may not have a neuroscience background. 
To the best of our knowledge, there are no such generalized tools that aid human behavior specialists in analyzing these many synchronous streams of data for augmented reality systems. Such a tool would improve both the efficiency of analysis and the conclusions that can be drawn from human behavior data. § OCARINA DATASET The Ocarina dataset, collected by NGC as part of the DARPA PTG program, consists of data from simulated UH-60V helicopter copilot sessions, totaling approximately 3TB. It encompasses data from 7 participants and 33 sessions. Each participant participated in one to eleven sessions. Every session corresponded to a specific task scenario, with each task comprising multiple non-sequential procedures that could be completed in different orders. After recording, these procedures were extracted from the mission logs and designated alphabetical names from “a” to “f”. This subset of six procedures was chosen from the nine possible ones as they accounted for 98.5% of all procedure occurrences. Each unique task scenario is identified by a distinct “trial ID” within the dataset, except for trials 2, 10, and 23, which represent the same task. Participants Data collection for the Ocarina dataset involved 7 participants. Three participants had previous piloting experience. No pilot had direct experience with the specific UH-60V cockpit. Three participants were engineers with experience developing the software for the UH-60V cockpit. Each had varying levels of experience with the system, but all were familiar with the cockpit. Two of the engineers were highly versed in the logic of the system and had directly developed several of its capabilities. The seventh participant was a computer science professor at a large North American university. In total, participants completed 47 flights. Data Collection Protocol Participants were seated in front of a physical recreation of the UH-60V cockpit, with mission computers that replicate flight systems and simulate flight routes and in-flight events. They were outfitted with recording devices, including an fNIRS neuroimaging system. Additionally, a Microsoft Hololens 2 was placed atop the fNIRS device. The Hololens collected audio, video, IMU data containing accelerometer, gyroscope, and magnetometer readings, eye tracking data consisting of 3-dimensional vectors for gaze origin positions and directions, and hand tracking data consisting of 26 joint points for each hand. Throughout the data collection process, participants performed full flights, comprising pre-flight and flight phases. To advance to the flight phase, participants needed to complete nine procedures during the pre-flight phase. The mission computer logs recorded all physical interactions the participants made with the simulator during each trial. fNIRS Workload Classification Predictions of the participants' cognitive workloads, spanning working memory, attention, and perception, are derived from the fNIRS time series measurements of hemoglobin concentration. These measurements, captured at a frequency of 10 Hz, include raw light intensities, HbO (oxyhemoglobin), and HbR (deoxyhemoglobin) concentrations. These measurements serve as inputs for dedicated classifiers; at each time step, a state is predicted for each of three workload categories: perception, attention, and memory. Each is classified as either “optimal”, “overload”, or “underload” with an associated classification confidence (we elaborate upon the interpretation of these classes in the following paragraph). 
Each workload category has its own classifier, which runs concurrently during recording sessions. The classifiers are generalized mixed effects models trained on data gathered from a previous test bed study that showed cross-task and cross-participant transfer <cit.>. The performer's classified mental state is first base-lined before recording and is not shown to the performer during collection. Working memory capacity (which we hereafter refer to as memory) is an individual's ability to retain and manipulate information during task execution. Attention pertains to the individual's capacity to concentrate on specific tasks selectively. Perception is the individual's ability to interpret stimuli, both visual and auditory. The Ocarina dataset categorizes these cognitive facets into three states: optimal, overload, and underload. An optimal state denotes a balanced cognitive load conducive to task performance. An overloaded state suggests a cognitive burden exceeding an individual's capacity, potentially impairing the incorporation of new information <cit.>. Conversely, an underloaded state indicates a cognitive engagement below the individual's capacity, which may result in diminished focus <cit.>. It is critical to monitor multiple synchronous information streams alongside cognitive state data, as the presence of an overloaded or underloaded state does not invariably correlate with decreased task performance. § METHOD §.§ Domain Requirements The design requirements of were defined during a year-long collaboration with researchers, who are coauthors of this paper, actively developing an end-to-end task guidance system to support pre-flight procedures. In addition, we conducted multiple interviews with data scientists who actively work in fNIRS data analysis and data visualization to validate our design choices. * Performer behavior overview The experts stressed the importance of having the ability to visualize all performers' behavior in a single view. This helps trainers categorize performer expertise based on their behaviors across sessions. Trainers want to identify performers, for example, who may need additional training. Additionally, the trainers would like to know specific procedures where a certain performer excels or struggles. Finally, the trainers would like to detect and investigate clusters of similar sessions or performers. We propose a combination of the Scatter Plot and Summary Matrix Views to tackle this requirement.  * Aligning and comparing multiple sessions Visualizing and comparing multiple sessions, each with multiple attributes based on time series data, can be challenging when there is no implicit sequentiality (as is the case with the Ocarina dataset). A time-aligned view of the procedures, errors, workload information, and task phase information would help trainers discern important landmarks within a particular session. Furthermore, this would enable trainers to compare landmarks and timestamps across multiple sessions and performers. To tackle this problem, we propose the Event Timeline View that combines the various streams of time series data along a common time axis, enabling seamless comparison across sessions.  * Compare fNIRS data across sessions and visualize correlations between fNIRS data and performer behavior The experts stated they were interested in visualizing and comparing fNIRS data across different performers at different levels of granularity. 
They would like to investigate fNIRS summaries for subjects across all sessions, while also being able to drill down into a single session and make comparisons between sessions. Furthermore, experts were particularly interested in understanding correlations between the mental states (overload, underload, and optimal) of performers for each workload category (attention, perception, memory) and errors made during sessions. Finally, experts would like to know the correlations between mental states and specific procedures. To this end, we propose the Workload Aggregations in the Overview, along with detailed fNIRS data for individual sessions in the Event Timeline and Summary Matrix Views. Furthermore, we display the correlations between errors and the mental states for both sessions and performers.  * Detailed visualization of performer behavior To uncover associations between performer behavior, fNIRS predictions, actions, and errors, trainers need to explore individual sessions in great detail. Trainers would greatly benefit from being able to analyze data from the IMU and gaze sensors in conjunction with the egocentric video captured by the AR headset, as it would allow them to detect patterns and establish connections between the various data streams. To meet this requirement, we propose the Detail View, which includes interactive visualizations for IMU and gaze data linked to the session video.  §.§ Visualization Design We employed the “overview-first, zoom and filter, details-on-demand” strategy <cit.> as our guiding principle while formulating the visual design, with the goal of ensuring a user-centric approach that facilitates efficient exploration and comprehension of the multimodal data associated with performer sessions. The resulting tool, , is composed of four linked interactive views: the Overview (Figure <ref>(A) and <ref>(B)), the Event Timeline View, the Summary Matrix View, and the Detail View (Figures <ref>(C), <ref>(D), and <ref>(E), respectively). §.§.§ Overview The Overview consists of two sub-views: the Scatter Plot View shown in Figure <ref>(A), which allows users to select sessions based on various features, and the Workload Aggregation View shown in Figure <ref>(B), which displays cognitive workload and session duration information based on user selection. Scatter Plot The Scatter Plot View shown in Figure <ref>(A) serves as the starting point for exploration in . The Scatter Plot View categorizes sessions to facilitate the identification and comparison of similar sessions or outliers <ref>. The user can adjust the scatter plot to represent only the performers' physical activity (IMU, gaze) or brain activity (fNIRS) by selecting the desired data stream. Below, we detail the process of transforming time series data into 2D scatter plot points. Each point in the plot represents a session, and different symbols represent either trials or subjects, depending upon user selection. The user can lasso-select these symbols, and remaining views will update accordingly. Users can also opt to display only the types of tasks which appear most frequently in the dataset (e.g. “top 10”). Brain and Physical Activity The user can toggle whether the points in the Scatter Plot View represent IMU, gaze, or fNIRS data. Toggling to IMU or gaze enables the user to select sessions based on the performer's physical activity throughout the session; IMU data represents the performer's body movement, whereas gaze data represents the displacement of their visual attention. 
To compare these time series, we transform them into 2D vector representations. We chose to do this using a shapelet-based <cit.> technique due to its ease of use and robust implementation through the library <cit.>. Although this algorithm requires some preprocessing of the data, such as normalizing time series to the same length, our system is agnostic to the technique. Other approaches (e.g. TS2Vec <cit.>) could be substituted in cases where the normalization of the time series could hide important information about the sessions. In contrast, the user may toggle the Scatter Plot View to show fNIRS time-series data, shifting the focus of the exploration to brain activity throughout the sessions. A similar process could be applied to generate the projection of points based on fNIRS data as was used for the IMU and gaze data. However, in the current implementation of , we transform the raw fNIRS signal using the workload classification models described in Section <ref> before generating the 2D vectors that are ultimately rendered in the Scatter Plot View. Workload Aggregation View We showcase the proportion of time spent in each mental state (overload, optimal, and underload) across the three workload categories (memory, attention, and perception), aggregated by the selection made in the Scatter Plot View (Figure <ref>(B)). To convey the ordinality of these mental states, we employ a sequential red color scale where light red represents underload, a medium shade indicates optimal conditions, and dark red signifies overload. Furthermore, we present the error contribution linked to each mental state across all categories for each group. This metric is crucial, as it presents the correlation between performers' errors and their respective mental states for the three categories. To display this information effectively, we opted for a plot in which positions are aligned against a common scale. This choice facilitates easy comparison and identification of data of interest while saving vertical space and reducing visual clutter in the overall system. We intentionally avoided using a bar chart-based plot to minimize potential confusion with the workload bar chart. To avoid overlapping marks when correlation scores are identical, we adapt the scale accordingly. Given a selected group of sessions g_i, we estimate the error correlation using Pearson Correlation (PC) between e, the error duration, and s, the duration spent in each of the three mental states (optimal, overload, and underload) for each workload category. The sample points for these variables were collected per procedure and measured in seconds. Lastly, we highlight the average session duration for the selected groups <ref>. §.§.§ Event Timeline View In the Event Timeline View shown in Figure <ref>(C), we coalesce data from four different data streams recorded during performer sessions into a unified, time-aligned visualization for each selected session. These sessions are organized by trial ID or subject ID, as chosen in the Scatter Plot View. Duration is represented along the x-axis, beginning at zero for each session. Task steps or procedures are visualized using horizontal bars that extend for the duration of each session. Segments within these bars are color-coded according to the ongoing procedure at the corresponding timestamp. We excluded shades of red from this color scale to prevent any conflict with the scale used for the workload variable.
Furthermore, we have an error bar that employs black segments to indicate errors at their corresponding timestamps. Next, we have the workload bar, where segments illustrate the performer's mental states (underload, optimal, overload) for the chosen workload category. In addition, the model confidence score for its predicted mental state is depicted using a line within the bar graph. Finally, we have the task phase indicator, which may be used to group task steps or procedures (e.g. in the case of the Ocarina dataset, this is where we use “PF” and “FL” to denote the pre-flight and flight stages, respectively). The rationale for aligning the various data streams along the time axis is multi-faceted. First, employing a unified time scale across all selected sessions facilitates convenient evaluation of their respective durations. Moreover, it allows users to compare the performers' mental states and the errors committed across different sessions. In addition to inter-session evaluation, the design facilitates intra-session evaluation by enabling users to promptly identify error occurrences and establish potential correlations between errors and the corresponding procedures, mental states, and flight phase <ref> <ref>. Consider the scenario where the user wants to investigate a particular session. To do this, they simply brush the Event Timeline View along the time axis. This updates the Summary Matrix View, which employs transparency and opacity to highlight the procedures involved in the brushed section. This also updates the Detail View to display egocentric video and sensor data corresponding to the brushed timestamps, enabling users to see the pilot's perspective and sensor readings for the selected period. §.§.§ Summary Matrix View The interviewed experts showed great interest in comparing errors, mental states, and the prevalence of procedures within a session as well as across sessions <ref>. However, due to the non-linear nature of the procedures performed in many tasks, it can be challenging to discern these nuances when the data is visualized sequentially. To address these challenges, we propose the Summary Matrix View (Figure <ref>(D)), which complements the Event Timeline View to give a more nuanced picture of performer data. It includes pie charts for every procedure, where chart radius corresponds to procedure prevalence. The pie charts are shaded black and gray based on the proportion of errors (represented by the black slice) within the corresponding procedure. Pie charts are employed here specifically to communicate two proportions simultaneously: (1) the proportion of a particular procedure in the duration of a session and (2) the proportion of errors within each procedure for the session. This allows the user to compare a procedure and its associated errors against other procedures in the same session (horizontally), as well as against the same procedure in other sessions (vertically) <ref>. In addition to the pie charts, we show the proportion of errors and the distribution of mental states for the chosen workload category for each session. The provided checkbox can be used to show or hide the error contribution for mental states within the selected workload category <ref>. Since this error correlation corresponds to the individual session, we used the regular PC to calculate it (similar to the Workload Aggregation View).
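To make these correlation computations concrete, the following is a minimal sketch of how the per-session error correlation, and the per-procedure partial correlation used for the tooltips described next, can be derived from per-procedure durations. The sample values, variable names, and the use of NumPy/SciPy are our own illustrative assumptions, not details of the tool's actual implementation.

```python
# Illustrative sketch only: hypothetical per-procedure durations (in seconds)
# of errors (e) and of time spent in one mental state (s) of a chosen
# workload category, e.g. "overload" within "attention".
import numpy as np
from scipy.stats import pearsonr

error_dur = np.array([12.0, 0.0, 34.5, 3.0, 0.0, 21.0])   # e, one value per procedure
state_dur = np.array([40.0, 5.0, 88.0, 10.0, 2.0, 60.0])  # s, one value per procedure

# Regular Pearson Correlation (PC) between state duration and error duration,
# as used for the per-session and per-group error contributions.
r_se, _ = pearsonr(state_dur, error_dur)

def partial_corr(s, e, p):
    """Partial correlation of s and e controlling for a procedure indicator p
    (p_i = 1 for the procedure of interest, 0 otherwise), as used in the
    per-procedure tooltips."""
    r_se, _ = pearsonr(s, e)
    r_sp, _ = pearsonr(s, p)
    r_ep, _ = pearsonr(e, p)
    return (r_se - r_sp * r_ep) / np.sqrt((1.0 - r_sp**2) * (1.0 - r_ep**2))

p = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # indicator for one hypothetical procedure
print(f"PC(s, e) = {r_se:.2f}; partial correlation = {partial_corr(state_dur, error_dur, p):.2f}")
```

In practice the durations would be accumulated from the mission logs and classifier outputs rather than hard-coded as above.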
Users can select the desired category either through the dropdown in this view, or by clicking the corresponding category label in the Workload Aggregation View. Additionally, transparency is used to fade out the non-selected categories in the Workload Aggregation View. This design decision is intended to help users retain focus by reducing visual clutter. Furthermore, the pie charts provide an on-hover tooltip that displays the correlation between errors e and the mental states s within the corresponding procedure p (p is a vector where p_i=1 if procedure i is the procedure in question and p_i=0 otherwise). We calculate these values using Partial Correlation <cit.>. Let r_se be the correlation between s and e; r_sp, the correlation between s and p; and r_ep, the correlation between e and p. The Partial Correlation is computed as: r_se,p = ( r_se - r_sp· r_ep ) / √( ( 1-r_sp^2 )· ( 1-r_ep^2 ) ). §.§.§ Detail View One of the major requirements expressed by the interviewed experts was the ability to investigate individual sessions and observe performer (e.g. pilot) actions in detail <ref>. The Detail View was designed to meet this requirement (Figure <ref>(E)). The video view plays the egocentric video from the performer's perspective corresponding to the brushed timestamps. Below this, we use line plots to visualize data from the IMU and gaze sensors, capturing the performer's body and eye motion over time. The user can switch between the variables corresponding to the IMU and gaze sensors using their respective dropdowns. Finally, the segmented bar graph depicts the mental states for the chosen workload category throughout the session. The time window brushed in the Event Timeline View is highlighted in all three visualizations within this Detail View. All three visualizations can be brushed, similar to the Event Timeline View. Moreover, the brushes are all synchronized with each other and with the video player, facilitating seamless navigation and exploration of sessions. Aligning the IMU and gaze data with the video, workload information, and procedures enables the user to identify procedures with high levels of human motion, establish associations between motion levels and mental workload as well as errors, and navigate to these regions of interest in the video by simple brushing <ref>. § EVALUATION In the aviation industry, pilots often experience mental states of overload or underload, which can have immediate consequences such as heightened stress, monotony, mental exhaustion, or fatigue. In addition to posing significant risks to flight safety, these short-term effects, if not addressed, can escalate into long-term issues such as psychosomatic or mental health disorders. In the following case studies, we describe how a pilot trainer and an AR guidance system developer can use to recognize and evaluate the factors contributing to overload or underload mental states in copilots. §.§ Case Study 1: Unraveling the Triggers of Mental Underload in Copilots To showcase how the system supports effective exploration of task sessions within real-world contexts, we present a case where a pilot trainer utilized the system. The trainer aimed to identify instances during flight procedures where a copilot might experience an underloaded mental state, discern potential causes behind such occurrences, and extract valuable insights from the data. The underload mental state is particularly concerning during a flight as it may indicate that the copilot is overly relaxed or not sufficiently focused.
Uncovering Data Quality in Flight Sessions In any study focused on unraveling cognitive processes, data quality plays a critical role. Acknowledging this, the trainer began the analysis by utilizing the Scatterplot View to visualize the collected data across multiple sessions. For this particular task, she organized the data by trial, seeking to identify sessions where the underloaded mental state predominated. Analysis of the scatterplot first enabled her to identify outliers and anomalies that could indicate data quality issues, such as sensor failures or inaccuracies in data collection (see the right side of Figure <ref>). The trainer investigated these outliers using the Event Timeline View, which provided a detailed breakdown of data acquired throughout the sessions. As shown on the left side of Figure <ref>, this examination revealed missing data points in Trials 8, 19, and 20: Trials 8 and 19 only contained fNIRS data, and lacked crucial information like procedures and errors, likely due to mission log failures. Meanwhile, though Trial 20 appeared comprehensive at first glance, it exhibited notable gaps in fNIRS data, implying potential technical glitches or inconsistencies in recording procedures. After identifying the sessions with potential issues, the trainer opted to analyze a different cluster of sessions for further examination. Understanding the Link Between Errors and Underloaded Mental States Acknowledging that errors often signify underlying issues, the trainer scrutinized the Workload Aggregation View, concentrating on sessions displaying a notable correlation between errors and underloaded mental states using the error contribution plot. As shown in Figure <ref>, only one session (Trial 13) out of 10 sessions exhibited significant correlations. This implies that sessions falling under this trial demonstrate a strong association between the underloaded mental state and errors. Based on these findings, the trainer selected Trial 13 for deeper analysis. Understanding Copilot Expertise Disparities through Motion Analysis and Error Correlation In Trial 13, the trainer notes a correlation between performer errors and the underloaded mental state, with a correlation coefficient very close to 1 (see Figure <ref>). Upon transitioning to the Event Timeline View to analyze the sessions, the trainer quickly discovers that Trial 13 comprises very short sessions, specifically a task duration of under 10 minutes per subject (as shown in Figure <ref>). Examination of the phase feature in the Event Timeline View reveals that all sessions exclusively included the preflight phase (PF), explaining their brevity. Further scrutiny reveals that the tasks in the sessions performed by Subject 293 and Subject 9636 were completed in approximately 9 minutes. However, Subject 293 predominantly maintained an optimal attention workload state and exhibited relatively few errors, while Subject 9636 encountered numerous errors and primarily operated under an underload attention state. To delve deeper into this discrepancy, the pilot trainer navigates the Event Timeline View, brushing over the entire session for Subject 293 and subsequently moves to the Detail View to assess human motion using IMU data (see the right side of Figure <ref>). Notably, Subject 293's linear acceleration plots demonstrate consistent, controlled motion, contrasting with Subject 9636's plots, which exhibit considerable variation, suggesting frequent stops and starts. 
This disparity leads to the hypothesis that human motion correlates with the copilot's expertise level. To validate this conjecture, the pilot trainer reviews the videos for each session, confirming her hypothesis. In the videos, it becomes evident that both subjects have a manual in front of them, but Subject 293 appears less reliant on it, whereas Subject 9636 frequently pauses to flip through the manual. This observation aligns with the notion that individuals less familiar with the task are prone to more errors and increased reliance on reference materials. §.§ Case Study 2: Enhancing AR Guidance Systems through User Analysis To showcase how facilitates the advancement of AR guidance system development, we present two scenarios wherein an AR guidance system developer uses the platform. Leveraging User Profiles to Optimize AR Flight Guidance Understanding end-users' characteristics is paramount for effective guidance system development. This example delves into the correlation between mental states and user characteristic profiles, emphasizing the importance of tailoring guidance measures to assist specific user groups. To achieve this, the AR guidance system developer aims to identify emerging patterns based on pilots' performance across various tasks. Unlike the previous case study, this one focuses on the overloaded mental state rather than the underloaded one. The developer first groups the data by subject, assuming that the issue stems from user profiles rather than from the tasks themselves triggering these mental states. The developer identifies Subjects 4352 and 293 as having significant error contributions to the overloaded mental state, despite both having previous piloting experience. Examining the Event Timeline View, the developer notes that Subject 293 and Subject 4352 completed five and three flights, respectively. Further investigation reveals that while Subject 293 displays higher overall percentages of the overload mental state during task performance, this condition is predominant in only one out of their five sessions, indicating variability in performance. Conversely, Subject 4352 consistently experiences overload across all sessions, despite task variations (see Figure <ref>). Furthermore, upon examining the correlations between errors and mental states in each session conducted by Subject 4352, it becomes evident that the overload mental state exhibits a strong correlation with errors, as shown in Figure <ref>. Examining their profiles further, Subject 293 emerges as a pilot with recent flight experience, having flown the most flights among their cohort, while Subject 4352 has been inactive in flying for 20 years. This underscores the need to consider user profiles in designing AR flight guidance systems, specifically different system versions or adaptive features tailored to individual user profiles. Improving Performance and Mental State in AR Flight Guidance Systems Consider an AR guidance system developer who sets out to evaluate the progression of novice engineers over multiple flight tasks, aiming to discern the factors underlying improvement and refine guidance mechanisms to minimize errors. The developer focuses on Subject 9636, a novice engineer, who performed the same flight task under normal conditions three times: Trial 2, Trial 10, and Trial 23, in sequential order. The Event Timeline View shows Subject 9636 consistently encountered challenges during the preflight phase across all trials (as shown in Figure <ref>).
However, due to the sporadic nature of errors, pinpointing the specific procedures where the copilot struggled the most proved to be challenging. Further analysis through the Summary Matrix View revealed a consistent execution of tasks by Subject 9636 across sessions, with the most time spent on Procedure C during each session. Notably, significant errors were observed in Procedures A, D, and E during the first attempt (Trial 2). For Procedure E, the tooltip visualization reveals a significant correlation (0.97) between errors and the overload mental state. Subsequent trials displayed improvement, particularly in Procedures A and E during the second attempt (Trial 10), where errors notably diminished, especially in Procedure E, dropping from over 70% to zero. However, errors emerged in Procedure F during this trial. This trend persisted in the third attempt (Trial 23), with a decline in performance in Procedure F but improvements in other procedures. Examining the Event Timeline View provided insights into the correlation between errors in Procedure F and the transition from preflight to flight phase, suggesting the necessity for additional guidance during this phase. Furthermore, analyzing the copilot's mental state through workload summaries revealed positive impacts with improved performance. Despite high levels of underload mental state during the first attempt (Trial 2), subsequent trials witnessed a decrease in underload mental state, albeit accompanied by an increase in overload mental state during the second attempt (Trial 10). By the third attempt (Trial 23), the copilot achieved minimal deviations from the optimal mental state. These findings emphasize the interplay between overcoming flight errors and improved copilot mental state. The developer acknowledges the imperative to focus efforts on enhancing guidance during the transition from preflight to flight to not only mitigate errors but also optimize the copilot's mental state. This case study underscores the iterative nature of analysis and adaptation essential in optimizing AR guidance systems for novice engineers' in-flight tasks. §.§ Expert Interview To validate our design decisions, we conducted a second round of interviews with five domain experts: three human factors and fNIRS experts (E1, E2, and E5), one biovisualization expert (E3), and one neuroinformatics algorithmic design expert (E4). All of them have experience with AR-enabled applications. Four of the experts had not previously seen the tool in action (E1, E2, E4, E5), whereas only one expert (E3) was part of the group that had previously assisted in identifying the system requirements (Section <ref>). In the experiment, experts were asked to explore a group of sessions of their choice according to their interests. The fNIRS experts were specifically asked to utilize the system to gain insights into the mental states of copilots by subject. Additionally, the fNIRS experts were queried on how this tool could be integrated into their workflows to enhance efficiency. Note that fNIRS experts needed to manually synchronize different data sources, such as video and workload, to analyze this data before they had . The design and visualization experts, on the other hand, were instructed to use the tool to explore the data with the goal of evaluating its usability. Each interview took 50 minutes, and began with an overview of the project and gathering relevant background information from the participant (5 minutes). 
Second, we presented our system, including a demonstration, to the participant, addressing any questions or concerns (20 minutes). Third, participants were given the opportunity to select sessions of their preference from the Ocarina dataset (Section <ref>) to explore within the system (15 minutes). Finally, we engaged the participants in a discussion, asking questions about their initial impressions of the tool, its functionalities and features, and its potential application to their workflows (10 minutes). The participants were given the freedom to use and explore the available sessions. However, they were also tasked with completing three specific assignments based on the selected sessions: 1) Identify sessions demonstrating a high correlation between the overload mental state and errors, 2) Identify the most prevalent procedures within and across sessions, and 3) Utilize to interpret the sessions. They were instructed to speak while using the system, following a “think aloud” protocol. While the participant performed the task, an investigator took notes related to the actions performed. After completion, the participants filled out a questionnaire to express their impressions of the usability of the system. In this section, we describe the insights gathered by the participants. §.§.§ Expert Insights Data Quality Assessment Participants also use to assess the quality of the data. In particular, E2 was very interested in ensuring that all sensor data were included before drawing any conclusions about copilot behaviors. He utilized the Scatterplot View for this purpose. He identified outliers, hypothesizing that these sessions might have issues. Subsequently, he moved to the Event Timeline View to inspect the data, confirming his hypothesis. Crucial information about procedures, errors, and flight phases was missing for the selected sessions, with only fNIRS data present. He noted that such occurrences are common due to sensor failures and expressed a desire to utilize the tool to identify such issues using the Scatterplot and Event Timeline Views. Afterward, he selected another group of sessions to continue his analysis. On the other hand, E1 was not particularly focused on identifying issues in the data. While analyzing the overload mental state of Subject 4352, she observed that in Trial 20, the copilot remained mostly under the optimal mental state, unlike other trials where the overload mental state was predominant. However, upon referring to the Event Timeline View, she noticed that more than half of the fNIRS data was missing for this trial. Consequently, she determined that this trial should not be included in the analysis. E3 followed a similar approach to E1. Procedures Analysis E1, E2, and E4 extensively used the Summary Matrix View to identify the predominant procedures within and across sessions for each subject. Their approach was straightforward and effective. In contrast, E3 attempted to extract this information using the Event Timeline View by comparing the duration of each procedure across the session. However, after a brief attempt, they switched to the Summary Matrix View and quickly identified the predominant procedures. E3 highlighted that the Summary Matrix View, with its normalized data across sessions, provided a clearer focus on procedures compared to the Event Timeline View.
E5 took a different approach, primarily using the Summary Matrix View to identify key procedures but also extensively employing the Event Timeline View to observe the frequency of these procedures throughout the session, focusing on copilot performance during each procedure. Understanding Human Behavior through Error Analysis The majority of participants began their analysis by examining the error contribution plot located in the Workload Aggregation View, organizing the data by subjects rather than trials. For instance, E1 used this view to identify subjects exhibiting a predominant overload mental state across various trials. To interpret the subjects' mental states during these trials, she navigated through the Event Timeline View and delved into the Detail View. By using the Detail View, specifically the IMU signals, she observed a significant amount of human motion at the beginning of the session, transitioning to a phase characterized by consistent and controlled motion. Upon revisiting the Event Timeline View, she noted that this pattern correlated with the occurrence of errors and the flight phase, leading her to hypothesize that the pre-flight phase might be a contributing factor to errors and subsequent overload mental states due to heightened stress levels. Managing Multimodal Data Most participants extensively explored the various modalities available in the tool. Notably, E3 and E4 emphasized the tool's capability to visualize different data sources, including events (such as procedures and errors), fNIRS, IMU, gaze, and video, all of which were seamlessly integrated and synchronized. Among these modalities, video emerged as the preferred choice for all experts, serving as a vital resource for session analysis. E5, in particular, heavily relied on video analysis. With a clear understanding of the conditions for each trial, E5 was keen on inspecting segments within the session where abnormal events, such as weather disturbances, occurred, to evaluate the copilot's performance. To identify these events, E5 also used the Event Timeline View to locate procedures throughout the session. Additionally, E5 relied on the IMU signal, particularly the accelerometer signal, to pinpoint segments in the video characterized by significant variance. These variations, evident through peaks and valleys in the accelerometer signal, helped identify critical moments for closer examination. §.§.§ Expert Feedback The participants provided highly positive feedback, demonstrating interest in utilizing for their tasks and providing suggestions for enhancing the system. Following the think-aloud experiment, they were asked for any additional comments or suggestions. Here are some of their responses: * E1 liked the design and interactive part of the tool. She highlighted the selection of colors to encode mental states: “I really liked that progression (colors), a lot of people use like green, red and yellow to represent those states. And I really prefer what you guys have done, which is like the light to the dark red. I think that makes way more sense.”. She also liked the usage of pie charts: “I really liked the use of pie charts here. I am not usually a big fan of them, but I think that that's an appropriate place for them. So I was happy to see a good pie chart.”. Regarding interactivity, she appreciated the synchronized behavior exhibited by all components: “I think it was both intuitive and user friendly.
Being able to lasso on the scatter plots makes things really, really, really easy to capture like little clusters that you're more interested in. I liked the brushing. It was responsive on both sides of the screen (components), so I don't have to go back and forth between different sections (components) in order to look at something else, or just to switch things around.” * E1 also found the Event Timeline View and Detail View useful to compare and validate hypotheses: “The bottom section, where I could see everything in comparison, side by side, the IMU data with the overload/underload state and having the video there so that you're able to validate what it is you're seeing and why, you're seeing it. I thought that was very useful.” * E2 liked the usage of scatterplot to detect outliers: “The outlier detection, or the outlier capability in the upper left was kind of really powerful. I would maybe like to see that expanded from just IMU (gaze or fNIRS) data, and maybe look at other kind of outliers, or be able to group by other kinds of data up there. So that was really useful.”. He also found the Event Timeline View very powerful: “being able to see the procedure with the error and the workload state on the timeline view in the lower left. That was, also, I think, very helpful, really powerful, to be able to see those 3 things stacked up against each other.” * E3 found the system's capability to identify correlations to be effective and useful: “I found myself working a lot with the time (timeline view), with the event sequence. Even though I know that you cannot directly compare. You know the procedures with each other because they have multiple options to do these things. It still showed me like, very well what's correlated? In which procedure didn't the error occur? And then, how was that correlated to the mental state. I think, the timeline helped a lot.” E3 also liked the usage of different modalities to interpret the data: “It was definitely cool to look into the video because you kind of wanna know what's going on. The other things are kinda abstract, and that just helps to relate a little bit to the situation. It is good to really connect what was exactly happening.” * E4 emphasized the Event Timeline View's capability to enable detailed segment inspection through brushing, facilitating deeper analysis: “I like the most, was the ability to take like a section of a trial, and then like overlay that with the raw measurements of behavior such as the IMU and other markers, and the videos really nice to also see, like a raw behavior there a little bit, I really like that.”. He also appreciated the system's full interactivity: “… each panel seems to complement each other, which is nice. I like that. I like that all the panels are tied to one another, so you can select trials in one, and then it shows it updates all the other panels and shows you nice statistics. It seems like well thought out and smooth interface.” * E5 wants to integrate in his workflow: “part of what I do is to go through these kind of videos. Having more of that data ('s features), would kind of allow me to jump to things easier … As soon as you did that (brush segments of the timeline view and synchronize this with the video), I was like, that's I wish I had that earlier.”. He also found helpful to compare different sessions: “when you're trying to make sense of the data in your analysis, you know what you might find. For example, you know there is something significantly different between two people or something. 
This tool would allow you to kind of quickly drill down into what's actually going on. Either cognitively or behaviorally. So yeah, it's helpful.” * Suggestions: E1 and E5 suggested enriching the Summary Matrix View, for example, by including the proportion of mental states within the pie chart associated with errors. E2 suggested support for real-time monitoring. E3 suggested the use of pattern detection to presort the sessions. Finally, E4 suggested displaying raw fNIRS data, such as an activation map for brain signals, along with the locations of fNIRS sensors. §.§ Usability We assessed the usability of using the System Usability Score (SUS) <cit.>, a robust tool widely recognized for evaluating system interfaces <cit.>. A mean SUS score above 80 is in the fourth quartile and is acceptable. To compute the SUS, we administered a survey at the conclusion of the second interview, prompting participants to complete the standard SUS questionnaire, grading each of the 10 statements on a scale from 1 (strongly disagree) to 5 (strongly agree). The SUS grades systems on a scale between 1 and 100, and our system obtained an average score of 87 ± 9.58. § DISCUSSION AND CONCLUSION We presented , a novel visual analytics tool tailored for summarizing and comparing task performance sessions in Augmented Reality (AR). By integrating time series data from fNIRS measurements, gaze, and IMU data with session logs and videos, enables users to explore performer behavior and cognitive workload at various levels of granularity. Through interactive visualizations reveals patterns and anomalies in task performance, such as human errors and workload fluctuations, and their correlations with task phases. These insights support post-hoc analysis, aiding developers in refining task guidance strategies and enhancing AR-based training environments. We believe integrates seamlessly into the ecosystem of AR-enabled task guidance development by enabling developers to assess the impact of different design decisions on performer cognitive workload. For example, specific 3D interfaces designed to guide users through tasks can trigger variation in performer cognitive load depending on design. ARTiST <cit.>, for instance, leverages this by proposing a text simplification approach to reduce performer cognitive load. In turn, could facilitate more detailed exploration of the actual impact of such systems on performers. This capability could help developers create more adaptable task guidance systems that customize instructions to the performer's mental state. Limitations While enables users to understand the frequency and magnitude of performer movement through IMU and gaze data, it does not include an explicit representation summarizing spatial relationships between the performer and their surrounding environment. In other words, makes it easy to tell when the performer moves, but their exact pose and, in turn, action may not always be clear. This limitation persists even when IMU and gaze time series are analyzed in conjunction with egocentric video, as many AR headsets have a limited field of view which could leave important hand movements and interactions with the environment off-camera. Furthermore, does not include a visual representation of the raw data output by fNIRS sensors, instead opting for aggregated workload classification labels at each time step. 
While this approach enhances data interpretability for a broader audience, it may occlude anomalies in sensor performance or details that could be of interest to a brain data expert. This also creates blind reliance on the workload classifiers, with limited ability to identify potential classification errors. Future work First, we plan to conduct a larger user study with participants from different backgrounds to understand how well our design can adapt to new users. To intervene promptly in response to emerging issues or fluctuations in cognitive workload, we plan to enable real-time monitoring of task performance sessions. To aid in better data quality assessment and model interpretation, we also plan to explore scalable visual metaphors for analyzing fNIRS raw time series data, which may be composed of up to several dozen streams. This raw data will enable users to understand how different brain parts respond to specific stimuli and note data quality issues. On the machine learning front, we would like to explore techniques to automate the detection of relevant patterns and anomalies within task performance data <cit.>. This may include developing algorithms to classify human errors and identify optimal task guidance strategies based on historical data. Finally, we have primarily explored the aviation domain in our use cases due to the availability of relevant data, but it is important to note that our tool is applicable across various domains, which we plan to explore in future work, as our methods work with any multimodal time series data. This work was supported by the DARPA PTG program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
http://arxiv.org/abs/2407.13494v1
20240718132408
Streaming Technologies and Serialization Protocols: Empirical Performance Analysis
[ "Samuel Jackson", "Nathan Cummings", "Saiful Khan" ]
cs.SE
[ "cs.SE", "cs.NI" ]
Streaming Technologies and Serialization Protocols: Empirical Performance Analysis Samuel Jackson 0000-0001-5301-5095, Nathan Cummings 0000-0003-4359-6337, and Saiful Khan 0000-0002-6796-5670 Samuel Jackson and Nathan Cummings are with the Computing Division, Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3EB. Saiful Khan is with the Scientific Computing Department, Science and Technology Facilities Council, Rutherford Appleton Laboratory, Didcot, OX11 0QX. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. July 22, 2024 § ABSTRACT Efficiently streaming high-volume data is essential for real-time data analytics, visualization, and AI and machine learning model training. Various streaming technologies and serialization protocols have been developed to meet different streaming needs. In combination, they perform differently across various tasks and datasets. Therefore, when developing a streaming system, it can be challenging to make an informed decision on a suitable combination, as we encountered when implementing streaming for the UKAEA's MAST data or SKA's radio astronomy data. This study addresses this gap with an empirical evaluation of widely used data streaming technologies and serialization protocols. We introduce an extensible and open-source software framework to benchmark their efficiency across various performance metrics. Our findings reveal significant performance differences and trade-offs between these technologies. These insights can help in choosing suitable streaming and serialization solutions for contemporary data challenges. We aim to provide the scientific community and industry professionals with the knowledge to optimize data streaming for better data utilization and real-time analysis. Data streaming, messaging systems, serialization protocols, web services, performance evaluation, empirical study, and applications. § INTRODUCTION With the exponential increase in data generation from large scientific experiments and the concurrent rise of data-intensive machine learning algorithms within scientific computing <cit.>, traditional methods of data transfer are becoming inadequate. This trend necessitates efficient data streaming methods that allow end-users to access subsets of data remotely. Additionally, the drive for FAIR and open data <cit.> mandates that such data are, ultimately, publicly accessible to end users over a wide-area network connection. The Mega-Ampere Spherical Tokamak (MAST) <cit.> was a spherical tokamak in operation at the UK Atomic Energy Authority (UKAEA), Culham Centre for Fusion Energy (CCFE) from 1999 to 2013; its upgraded successor, MAST-U <cit.>, began operation in 2020. These facilities generate gigabytes of data per experimental shot, accumulating substantial data daily.
The lack of public accessibility to the historical archive of data produced by the MAST experiment has limited collaborative opportunities with international and industry partners. The pressing need to facilitate real-time data analysis <cit.> and leverage recent advancements in machine learning <cit.> further emphasizes the necessity for efficient data streaming technologies. These technologies must not only handle the sheer volume of data but also integrate seamlessly with analytical tools. In this paper, we extend the work conducted in <cit.> for the SKA's radio astronomy data streaming and visualization. We explore the array of available streaming technologies. We consider the combination of two major choices of technology when implementing a streaming service: (a) the choice of a streaming system, which performs the necessary communication between two endpoints, and (b) the choice of encoding used to convert the data into transmittable formats. Our contributions are as follows: * We provide a comprehensive review of 11 streaming technologies and 11 encoding methods, categorized by their underlying principles and operational frameworks. * We introduce an extensible software framework designed to benchmark the efficiency of various combinations of streaming technology and serialization protocols, assessing them across 11 performance metrics. * By testing 132 combinations, we offer a detailed comparative analysis of their performance across six different data types. * Our findings highlight the performance differentials and trade-offs between these technologies; we also discuss the limitations of this study and potential directions for further research. Through this comprehensive study, we aim to equip the scientific community with deeper insights into choosing appropriate streaming technologies and serialization protocols that can meet the demands of modern data challenges. Section <ref> briefly reviews the related work in this area. Section <ref> provides an overview of the different serialization protocols and data streaming technologies reviewed in this study. Section <ref> outlines our experimental methodology, including the performance metrics considered, implementation details of our benchmark framework, and the choice of datasets used for evaluation. Section <ref> discusses the results of our experiments across all performance metrics and datasets. Finally, Sections <ref> and <ref> reflect on the results of our study and draw recommendations on technology choices. § RELATED WORK Khan et al. <cit.> evaluated the performance of streaming data and web-based visualization for SKA's radio astronomy data. They also conducted a limited analysis of the serialization, deserialization, and transmission latency of two protocols, ProtoBuf and JSON. Our work builds on their research by covering a more extensive range of combinations. Proos et al. <cit.> consider the performance of three different serialization formats (JSON, Flatbuffers, and Protobuf) and a mixture of three different messaging protocols (AMQP, MQTT, and CoAP). They evaluate the performance using real “CurrentStatus” messages from Scania vehicles as the JSON payload data. They monitor communication between a desktop computer and a Raspberry Pi. They consider numerous evaluation metrics such as latency, message size, and serialization/deserialization speed.
The authors of <cit.> compare 10 different serialization formats for use with different types of micro-controllers and evaluate the size of the payload from each method. They test performance with two types of messages: 1) JSON payloads obtained from “public mqtt.eclipse.org messages” and 2) object classes from smartphone-centric studies <cit.>. Fu and Zhang <cit.> present a detailed review of different messaging systems. They evaluate each method in terms of throughput and latency when sending randomly generated text payloads. They evaluate each method only on the local device to avoid bias from any network specifics. Orthogonal to our work, they are focused on evaluating the scaling of each system over a number of producers, consumers, and message queues. Churchill et al. <cit.> explored using ADIOS2 <cit.> for transferring large amounts of Tokamak diagnostic data from the K-STAR facility in Korea to the NERSC and PPPL facilities in the USA for near-real-time data analysis. We differentiate our study from these related works by 1) evaluating a wide variety of streaming technologies, both message-broker-based and RPC-based; 2) considering a large number of data serialization formats, including text, binary, and protocol-based formats; 3) evaluating the combination of these technologies and developing an extensible framework for measuring and comparing serialization and streaming technologies; and 4) evaluating the performance over 10 different metrics. We comprehensively evaluate 10 different streaming technologies with 12 different serialization methods over 8 different datasets. § BACKGROUND In this paper, we study how the choice of streaming technologies and serialization protocols critically affects data transfer speed. Specifically, we analyze the application of popular messaging technologies and serialization protocols across diverse datasets used in machine learning. Before discussing our experimental setup and results, this section provides an overview of messaging systems and serialization protocols suitable for streaming data. §.§ Serialization Protocols In this section, we provide a brief overview of three different categories of serialization protocol: text formats, binary formats, and protocol formats. §.§.§ Text Formats Extensible Markup Language (XML) <cit.> is a markup language and data format developed by the World Wide Web Consortium. It is designed to store and transmit arbitrary data in a simple, human-readable format. XML adds context to data using tags with descriptive attributes for each data item. It has been extended to various derivative formats, such as XHTML and EXI. JavaScript Object Notation (JSON) <cit.> is another human-readable data interchange format that represents data as a collection of nested key-value pairs. JSON is commonly used as the data exchange format in RESTful APIs. Due to its smaller payload size, it is often seen as a lower-overhead alternative to XML for data interchange. YAML Ain’t Markup Language (YAML) <cit.> is a simple text-based data format often used for configuration files. It is less verbose than XML and supports advanced features such as comments, extensible data types, and internal referencing. §.§.§ Binary Formats Binary JSON (BSON) <cit.> is a binary data format based on JSON, developed by MongoDB. Similar to JSON, BSON also represents data structures using key-value pairs. It was initially designed for use with the MongoDB NoSQL database but can be used independently of the system.
BSON extends the JSON format with several data types that are not present in JSON, such as a datetime format. Universal Binary JSON (UBJSON) <cit.> is another binary extension to the JSON format created by Apache. UBJSON is designed according to the original philosophy of JSON and does not include additional data types, unlike BSON. Concise Binary Object Representation (CBOR) <cit.> is also based on the JSON format. The major defining feature of CBOR is its extensibility, allowing the user to define custom tags that add context to complex data beyond the built-in primitives. MessagePack <cit.> is a binary serialization format, again based on JSON. It was designed to achieve smaller payload sizes than BSON and supports over 50 programming languages. Pickle <cit.> is a binary serialization format built into the Python programming language. It was primarily designed to offer a data interchange format for communicating between different Python instances. §.§.§ Protocol Formats Protocol Buffers (ProtoBuf) <cit.> were developed by Google as an efficient data interchange format, particularly optimized for inter-machine communication. Specifically, ProtoBuf is designed to facilitate remote procedure call (RPC) communication through gRPC <cit.>. Data structures used for communication are defined in .proto files, which are then compiled into generated code for various supported languages. During transmission, these data structures are serialized into a compact binary format that omits names, data types, and other identifiers, making it non-self-descriptive. Upon receipt, the messages are decoded using the shared protocol buffer definitions. Thrift <cit.> is another binary data format, developed by the Apache Software Foundation (Apache), that is similar in many respects to ProtoBuf. In Thrift, data structures are also defined in a separate file, and these definitions are used to generate corresponding data structures in various supported languages. Before transmission, data is serialized into a binary format. Thrift is also designed for RPC communication and includes methods for defining services that use Thrift data structures. However, Thrift has a smaller number of supported data types compared to ProtoBuf. Capn'Proto <cit.> is a protocol-based binary format that competes with ProtoBuf and Thrift. Capn'Proto differentiates itself with two main features. First, its internal data representation is identical to its encoded representation, which eliminates the need for a separate serialization step. Second, its RPC service implementation offers a unique feature called “time travel”, which enables chained RPCs to be executed as a single request. Additionally, Capn'Proto offers a byte-packing method that reduces payload size, albeit at the expense of some increase in serialization time. In our experiments, we refer to the byte-packed version of Capn'Proto as "capnp-packed" to differentiate it from the unpacked version, "capnp". Avro <cit.> is a schema-based binary serialization technology developed by Apache. Avro uses JSON to define schema data structures and namespaces. These schemas are shared between producer and consumer. One of Avro's key advantages is its dynamic schema definition, which does not require code generation, unlike competitors such as ProtoBuf. Avro messages are also self-describing, meaning they can be decoded without needing access to the original schema. We also considered the PSON format <cit.> and Zerializer <cit.>.
PSON is a binary serialization format whose current implementation is limited to C++ and lacks Python bindings, which restricts its applicability for our study. Zerializer, on the other hand, necessitates a specific hardware setup for implementation, placing it outside the scope of our study due to practical constraints. Consequently, while these formats might offer potential advantages, their limitations in terms of language support and hardware requirements precluded their inclusion in our experimental evaluation. A summary of serialization protocols can be found in Table <ref>. Text-based formats represent data using a text-based markup. While human-readable, text-based formats suffer from larger payloads and higher serialization costs due to the overhead of the markup describing the data. In contrast, binary formats serialize the data to bytes before transmission. These formats are not human-readable, but achieve a better payload size with lower serialization costs. Protocol-based formats also encode data in binary, but differ in that they rely on a predefined protocol definition shared between sender and receiver. Using a shared protocol allows more information to be omitted from the transmitted packet, yielding smaller payloads and faster serialization times. §.§ Data Streaming Technologies In this section, we discuss three different categories of data streaming technologies: message queue-based, RPC-based, and low-level. §.§.§ Message Queues ActiveMQ <cit.>, developed in Java by Apache, is a flexible messaging system designed to support various communication protocols, including AMQP, STOMP, REST, XMPP, and OpenWire. The system's architecture is based on a controller-worker model, where the controller broker is synchronized with worker brokers. The system operates in two modes: topic mode and queuing mode. In topic mode, ActiveMQ employs a publish-subscribe (pub/sub) mechanism, where messages are transient, and delivery is not guaranteed. Conversely, in queue mode, ActiveMQ utilizes a point-to-point messaging approach, storing messages on disk or in a database to ensure at-least-once delivery. For our experiments, we utilize the STOMP communication protocol. Kafka <cit.> is a distributed event processing platform written in Scala and Java, initially developed by LinkedIn and now maintained by Apache. Kafka leverages the concept of topics and partitions to achieve parallelism and reliability. Consumers can subscribe to one or more topics, with each topic divided into multiple partitions. Each partition is read by a single consumer, ensuring message order within that partition. For enhanced reliability, topics and partitions are replicated across multiple brokers within a cluster. Kafka employs a peer-to-peer (P2P) architecture to synchronize brokers, with no single broker taking precedence over other brokers. Zookeeper <cit.> manages brokers within the cluster. Kafka uses TCP for communication between message queues and supports only push-based message delivery to consumers while persisting messages to disk for durability and fault tolerance. RabbitMQ <cit.>, developed by VMWare, is a widely used messaging system known for its robust support for various messaging protocols, including AMQP, STOMP, and MQTT. Implemented in the Erlang programming language, RabbitMQ leverages Erlang's inherent support for distributed computation, eliminating the need for a separate cluster manager. A RabbitMQ cluster consists of multiple brokers, each hosting an exchange and multiple queues.
The exchange is bound to one queue per broker, with queues synchronized across brokers. One queue acts as the controller, while the others function as workers. RabbitMQ supports point-to-point communication and both push and pull consumer modes. Although message ordering is not guaranteed, RabbitMQ provides at-least-once and at-most-once delivery guarantees. RabbitMQ faces scalability issues due to the need to replicate each queue on every broker. Our experiments utilize the STOMP protocol for communication with the pika Python package. RocketMQ <cit.>, developed by Alibaba and written in Java, is a messaging system that employs a bespoke communication protocol. It defines a set of topics, each internally split into a set of queues. Each queue is hosted on a separate broker within the cluster, and queues are replicated using a controller-worker paradigm. Brokers can dynamically register with a name server, which manages the cluster and query routing. RocketMQ guarantees message ordering and supports at-least-once delivery. Consumers may receive messages from RocketMQ using either push or pull modes. Message queuing is implemented using the pub/sub paradigm, and RocketMQ scales well with a large number of topics and consumers. Pulsar <cit.>, created by Yahoo and now maintained by Apache, is implemented in Java and designed to support a large number of consumers and topics while ensuring high reliability. Pulsar's innovative architecture separates message storage from the message broker. A cluster of brokers is managed by a load balancer (Zookeeper). Similar to Kafka, each topic is split into partitions. However, instead of storing messages within partitions on the broker, Pulsar stores partition references in bookies. These bookies are coordinated by a bookkeeper, which is also load-balanced using Zookeeper. Each partition is further split into several segments and distributed across different bookies. The separation of message storage from message brokers means that if an individual broker fails, it can be replaced with another broker without loss of information. Similarly, if a bookie fails, the replica information stored in other bookies can take over, ensuring data integrity. Pulsar's architecture allows it to offer a global ordering and delivery guarantee, although this high reliability and scalability come at the cost of extra communication overhead between brokers and bookies. For a detailed overview of different message queue technologies, please refer to <cit.>. §.§.§ RPC Based gRPC <cit.>, developed by Google, is an RPC framework that utilizes ProtoBuf as its default serialization protocol. To define the available RPC calls for a client, gRPC requires a protocol definition written in ProtoBuf. While ProtoBuf is the standard, sending arbitrary bytes from other serialization protocols over gRPC is possible by defining a message type with a bytes field. The Python gRPC implementation supports synchronous and asynchronous (asyncio) communication. For all our experiments with gRPC, we use asynchronous communication. Capn'Proto <cit.> and Thrift also have their own RPC frameworks. Similar to gRPC, these frameworks define remote procedure calls within their protocol definitions, using their own syntax specification. Like gRPC, they allow the transmission of arbitrary bytes by defining a message with a bytes field. Avro provides an RPC-based communication protocol as well. Unlike other RPC-based methods, Avro does not require the RPC protocol to be explicitly defined.
This flexibility comes at the expense of stricter type validation, setting Avro apart from systems such as gRPC and Thrift. §.§.§ Low Level In addition to RPC and messaging systems, we consider two low-level communication systems: ZeroMQ and ADIOS2. Like RPC systems, they do not rely on an intermediate broker for message transmission. ZeroMQ (ZMQ) <cit.> is a brokerless communications library developed by iMatix. It is a highly flexible message framework that uses TCP sockets and supports various messaging patterns, such as push/pull, pub/sub, request/reply, and many more. Notably, ZeroMQ's zero-copy feature minimizes the copying of bytes during data transmission, making it well-suited for handling large messages. In our experiments, we implement a simple push/pull messaging pattern to avoid the additional communication overhead associated with RPC methods. The ADaptable Input Output System (ADIOS) <cit.> is a unified communications library developed as part of the U.S. Department of Energy's (DoE) Exascale Computing Project. It is designed to stream exascale data loads for interprocess and wide area network (WAN) communication. In this study, we compare the WAN capabilities of ADIOS, which uses ZeroMQ for its messaging protocol. We use ADIOS2 and its low-level Python API to facilitate communication between client and server. We do not consider other RPC systems such as Apache Flight; instead, we rely on ProtoBuf and gRPC for these communication protocols. A summary of the comparison of various data streaming technologies can be found in Table <ref>. Message queue-based technologies use message queues and a publish/subscribe model to transmit data. Producers publish messages to a topic, and multiple consumers can subscribe to these topics to read messages from the queue. These systems operate in push mode, where the system delivers messages to consumers, or in pull mode, where consumers request messages from the message queuing system. RPC-based technologies define a communication protocol shared between producers and consumers, eliminating the need for an intermediate broker. Producers respond to remote procedure calls (consumer requests) to provide data. Low-level communication protocols such as ZeroMQ and ADIOS also do not require an intermediate broker. Unlike RPC technologies, they do not wait for clients' requests to send messages, reducing communication overhead. ZeroMQ and ADIOS support zero-copy transfer of raw bytes, which is particularly beneficial for large array workloads where encoding and copying data can be costly. These technologies differ in their fault tolerance. Message queuing systems prioritize reliability by caching messages to disk to prevent load shedding during high message rates. In contrast, RPC systems keep all requests in memory, offering faster performance at the expense of lower fault tolerance. Many protocol-based serialization formats introduced earlier include RPC communication libraries that support sending arbitrary bytes. For example, ProtoBuf-encoded messages can be sent using the Avro RPC communication library. § EMPIRICAL STUDY DESIGN The objective of this empirical study is to investigate and compare various streaming technologies and serialization protocols for scientific data. We examine the interplay between serialization protocol and streaming technology by exploring different combinations of them.
We conduct experiments on all the technologies discussed in section <ref>, which includes 11 different streaming technologies and 15 different serialization protocols. We test each combination of technology across eight different payloads, resulting in 11 × 15 × 8 = 1320 different combinations. §.§ Performance Metrics We consider 11 performance metrics: seven of these metrics are associated with each serialization protocol, and the remaining four are linked to the combination of streaming technology and serialization protocol. To define the metrics, we first need to establish the different sizes of data as it passes through our pipeline. We denote the size of the data straight from the stream as S_d, the size of the data after object creation as S_o, and the size of the payload after encoding to bytes as S_p. Additionally, we define the number of samples in a dataset as N. To evaluate the performance of each serialization technology, we measure: * Object Creation Latency (L_o) – This measures the total time taken to convert the program-specific native format (e.g., a NumPy array or an Xarray dataset) into the format required for transmission. This is an important metric because some formats, such as Capn'Proto, store their data internally in a serialization-ready format. However, in reality, we often need to work with data in an analysis-ready format, such as a NumPy array or an Xarray dataset. Converting between the two models naturally incurs a penalty since it involves copying the data. * Object Creation Throughput (T_o = S_d N /∑ L_o^(i)) – Similar to serialization and deserialization throughput, this is the total data size divided by the sum of the object creation latencies over all samples sent, i.e., the rate at which a native object (e.g., a NumPy array or an Xarray dataset) can be converted to the transmission format expected by the protocol (e.g., a ProtoBuf object or a Capn'Proto object). * Compression Ratio (C = S_p/S_o× 100) – This is defined as the ratio of the size of the payload S_p after serialization to the size of the object S_o, for a given encoding, expressed as a percentage. A smaller compression ratio ultimately means less data to be transmitted over the wire, and therefore, protocols that produce a smaller payload should be more performant. * Serialization Latency (L_s) – This is the total time taken in seconds to encode the original data into the serialized format for transmission. Encoding data with any serialization protocol incurs a non-zero cost due to the need to format, copy, and compress data for transmission. A larger serialization time can potentially negate the benefits of a smaller payload size because it increases the total transmission time. * Deserialization Latency (L_d) – Similar to serialization latency, this metric measures the total time required to deserialize a payload after transmission across the wire. As with serialization time, slow deserialization can also negate the effects of a smaller payload. * Serialization Throughput (T_s = S_o/L_s) – This is the size of the object to be transmitted divided by the serialization time. This measures how many bytes per second a serialization protocol can handle, independent of the size of the data stream. * Deserialization Throughput (T_d = S_oN/∑ L_d^(i)) – This is the size of the objects received divided by the total deserialization time. This measures how many bytes per second a deserialization protocol can handle, independent of the size of the data stream. A minimal sketch of how these serialization metrics can be measured is given below.
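As a concrete illustration (not part of the benchmarking framework itself), the following Python sketch measures serialization latency, deserialization latency, payload size, compression ratio, and serialization throughput for a single sample using the standard json and pickle modules; the toy payload and the use of sys.getsizeof as a stand-in for the object size S_o are assumptions made purely for illustration.

import json
import pickle
import sys
import time

def measure(serialize, deserialize, obj):
    # Returns (L_s, L_d, S_p, C, T_s) for one sample, mirroring the definitions above.
    object_size = sys.getsizeof(obj)  # rough stand-in for S_o (shallow size only)
    t0 = time.perf_counter()
    payload = serialize(obj)          # encode the object to bytes
    ser_latency = time.perf_counter() - t0
    t1 = time.perf_counter()
    deserialize(payload)              # decode the payload back into an object
    deser_latency = time.perf_counter() - t1
    payload_size = len(payload)
    compression_ratio = payload_size / object_size * 100
    ser_throughput = object_size / ser_latency
    return ser_latency, deser_latency, payload_size, compression_ratio, ser_throughput

# A toy record standing in for one sample of a data stream.
sample = {"data": list(range(1000)), "time": [i * 0.01 for i in range(1000)]}
for name, ser, deser in [
    ("json", lambda o: json.dumps(o).encode(), json.loads),
    ("pickle", pickle.dumps, pickle.loads),
]:
    print(name, measure(ser, deser, sample))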
For streaming technologies, we consider four different performance metrics: * Transmission Latency (L_trans) – This is the time taken for a payload to be sent over the wire, excluding the time taken to encode the message. * Transmission Throughput (T_trans = S_dN/∑ L_trans^(i)) – This is similar to total throughput, but divides the data size by the time taken to send the messages over the wire, exclusive of the serialization time. * Total Latency (L_tot) – This is the total time for a payload to be transmitted from producer to consumer, inclusive of the serialization time. * Total Throughput (T_tot = S_dN/∑ L_tot^(i)) – This is the original data object size divided by the total time to send the message. Throughput measures the rate of bytes that can be communicated over the wire. Finally, we also investigate the effect of batch size on the throughput. Grouping data into batches is a common requirement during machine learning training, and we show that increasing the batch size while lowering the number of communications has a positive effect on throughput. We make a distinction between transmission time and total time (Fig. <ref>). The total time is the end-to-end time to deliver a message, including the time to serialize the message and send it over the wire. Transmission time is the time taken to transmit the payload, excluding the serialization and deserialization times. Similarly, we can calculate total and transmission throughput. §.§ Dataset In our experiments, we consider eight different payloads, ranging from simple data to common machine learning workloads, and include fusion science data. Our goal is to cover a range of scenarios. This section briefly describes the datasets used to evaluate performance with various streaming technologies and serialization protocols. * Numerical Primitives: As a baseline comparison, we use simple datasets consisting of randomly generated values of three basic primitive types. * BatchMatrix: A synthetic dataset where each message consists of a randomly generated 3D tensor with shape {32, 100, 100} to simulate sending a batched set of image samples. * Iris Data: This is the well-known Iris dataset <cit.>, which contains an array of four features and a one-dimensional target variable. * MNIST: We use the widely used MNIST machine learning image dataset <cit.> as a realistic example of streaming 2D tensor data. * Scientific Papers: The scientific papers dataset is a well-known dataset in the field of NLP and text processing <cit.>. The dataset comprises 349,128 articles of text from PubMed and arXiv publications. Each sample is represented for transmission as a collection of text fields for properties such as article, abstract, and section names. * Plasma Current Data: As a more realistic example of scientific data, we use plasma current data from the MAST tokamak <cit.>. Each set of plasma current data contains three 1D arrays: data, time, and errors. The “data” array represents the amount of current at each timestep, “time” gives the time each measurement was taken in seconds, and “errors” gives the standard deviation of the error in the measured current. §.§ Implementation and Experimental Setup We developed a framework to measure the performance of streaming and serialization technology. The architecture diagram of our framework is shown in Figure <ref>, which follows a service-oriented architecture <cit.> and is implemented in Python.
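Concretely, the four streaming metrics above can be derived from a handful of per-message timestamps. The sketch below assumes that, for every message, four timestamps are recorded (before serialization, after serialization, after transmission, and after deserialization), which matches what the framework's logger described below captures; the class and function names are illustrative and not the framework's actual API.

from dataclasses import dataclass

@dataclass
class MessageLog:
    # Per-message timestamps in seconds, plus the payload size in bytes.
    t_before_ser: float
    t_after_ser: float
    t_after_trans: float
    t_after_deser: float
    payload_size: int

def summarize(logs, data_size):
    # data_size is the size S_d of one original data object in bytes;
    # latencies are summed over all N messages as in the definitions above.
    n = len(logs)
    total_latency = sum(m.t_after_deser - m.t_before_ser for m in logs)   # sum of L_tot
    trans_latency = sum(m.t_after_trans - m.t_after_ser for m in logs)    # sum of L_trans
    return {
        "total_latency": total_latency,
        "transmission_latency": trans_latency,
        "total_throughput": data_size * n / total_latency,                # T_tot
        "transmission_throughput": data_size * n / trans_latency,         # T_trans
        "avg_payload_size": sum(m.payload_size for m in logs) / n,
    }

# Example with a single logged message.
logs = [MessageLog(0.000, 0.002, 0.010, 0.011, 512)]
print(summarize(logs, data_size=4096))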
We used the appropriate Python client library for each streaming and serialization technology. The source code can be found in our GitHub repository <cit.>. The user interacts with the framework through a command-line interface. A test runner sets up both the server-side and client-side of the streaming test. The server side requires the configuration of three components: * DataStream: Handles loading data for transmission. This can be any one of the payloads described in section <ref>. * Producer: Functions as the server side of the application. It packages data from the selected data stream and transmits it over the wire using the selected streaming technology, which may be any of the technologies described in section <ref>. * Marshaler: Handles the serialization of the data from the stream using the specified serialization protocol. This can be any of those described in section <ref>. The configuration of the client side is similar but only requires a marshaler to be configured to match the one used for the producer. It does not require knowledge of the data stream. * Consumer: Functions as the client side of the application. It receives data transmitted by the producer using the selected streaming technology, processes the incoming messages, and performs the necessary actions. Producers and consumers interact using a configured protocol. * Broker: Brokers required by the streaming protocol (e.g., for Kafka, RabbitMQ, etc.) are run externally from the test in the background. In our framework, we configure all brokers using docker-compose <cit.> to ensure that our broker configurations are reproducible for every test. * Logger: Used by the marshaler to capture performance metrics for each test in a JSON file. For each message sent, the logger captures four timestamps: 1) before serialization, 2) after serialization, 3) after transmission, and 4) after deserialization. Using these four timestamps, we can calculate the serialization, deserialization, transmission, and total time. Additionally, the logger captures the payload size of each message immediately after serialization. With this additional information, we can calculate the average payload size and throughput of the streaming service. ADIOS and ZeroMQ can directly send array data without copying the input array. However, to achieve this, the array data must be passed directly to the communication library without serialization. Therefore, we additionally consider ZeroMQ and ADIOS to have their own "native" encoding strategy for each stream, which is only used with their respective streaming protocol. This allows for a fair comparison with other technologies because sending an encoded array with ADIOS or ZeroMQ incurs an additional copy that could be circumvented by properly using their zero-copy functionality. Following the convention of previous work <cit.>, we run each streaming test locally, with the producer and consumer on the same machine to avoid network-specific issues. § RESULTS In this section, we present the results of our experiments with the combination of different streaming technologies, serialization protocols, and data streams. 1) Object Creation Latency – We use different datasets that originate in various analysis-ready formats, such as NumPy arrays or Xarray datasets. Depending on the encoding protocol, we may need to copy the data from its native format into a specific format such as Capn'Proto or Protobuf objects. This copying process adds some overhead that should be taken into consideration.
However, for encoding protocols like JSON, BSON, and Pickle that do not require format changes, we store the data in a Pydantic class. The results in Figure <ref> show that for larger array datasets like BatchMatrix, Plasma, and MNIST, encoding methods such as Protobuf, Thrift, and Capn'Proto tend to have higher object creation latency as they need to copy data into their own data types. 2) Object Creation Throughput – We consider the object creation throughput for each serialization method. The object creation time measures the time to convert data from the native data structure (such as a NumPy array) to the serialization format. Object creation time is important to consider if the format that the data will be used in differs from the format it is sent over the wire. Typically, object creation forces a copy of the data to be sent, which impacts the total throughput, especially when considering large array-like data. Figure <ref> shows the object creation throughput for each dataset and each encoding method. It is interesting to note here that protocol-based methods incur a greater penalty for object creation. This effect is more noticeable in larger datasets such as the BatchMatrix and Plasma datasets. 3) Compression Ratio – The payload size, a crucial performance metric of serialization protocols, is independent of the choice of streaming protocols. Therefore, we have calculated the average compression ratio over all runs for each serialization protocol. Figure <ref> presents the results for each protocol and data stream. Notably, Pickle, Avro, and XML consistently produce the largest payload sizes, often exceeding the original size. This is due to the inefficiency of their text-based encodings and the additional meta-data tags they add as overhead. Pickle, a binary format for storing Python objects, is particularly known for its large sizes and is not optimal for encoding data for streaming. The results show that the Capn'Proto serialization protocol outperforms the others in terms of payload size. The packed option of Capn'Proto, also known as capnp-packed, is responsible for additional size efficiency. This format is closely followed by several binary serialization formats that show similar performance. The reason behind this performance can be attributed to their ability to achieve near-identical compression, which is close to the limits of what is possible for that particular data stream. Examining the results across data streams, it can be seen that compression on the BatchMatrix dataset is fundamentally limited. This is because it is made up of randomly generated numbers, making it incompressible due to the lack of redundancy in the data. Conversely, for more realistic data such as MNIST and Plasma, a much higher compression ratio is achieved. Better compression is achieved for formats such as Capn'Proto Packed, which exploit the redundancy in the data to achieve greater compression. Text-based formats, such as YAML, JSON, XML, and Avro, achieve significantly worse compression in comparison. In fact, due to the extra markup required for these formats, they can produce a larger payload size than the original data. 4) Serialization Latency – The results for serialization time are shown in Figure <ref>. There is a clear trend across all data streams, from text-based protocols being the slowest (Avro, YAML, etc.) to binary-encoded protocol-based methods (Capn'Proto, ProtoBuf, etc.) being the fastest. Binary-encoded but protocol-free methods fall in between these two extremes.
It is interesting to note that Capn'Proto has the fastest serialization time. This is likely due to the fact that Capn'Proto stores data in a format that is ready for serialization over the wire. 5) Deserialization Latency – The results for deserialization time are shown in Figure <ref>. Again, a clear trend may be seen across all data streams from text-based protocols to binary-encoded protocol-based methods. Like serialization, Capn'Proto is generally the fastest deserialization method across all tests. As mentioned above, this is likely due to Capn'Proto storing the data in a pre-serialized form. 6) Serialization Throughput – Figure <ref> shows the average throughput for serialization of the data using different types of protocols. It is evident from the graph that serialization techniques based on protocols such as ProtoBuf, Thrift, and Capn'Proto offer the highest serialization throughput. Binary methods that are protocol-independent offer moderate throughput performance with the added advantage of greater flexibility as compared to protocol methods. Text-based methods perform the worst due to their high serialization overhead. Surprisingly, Avro also performs quite well by this metric. We believe this is because, despite being a human-readable text-based method, it is also a protocol-based method. This means that both the producer and consumer are aware of the types and data structures being transmitted over the wire, facilitating faster throughput. 7) Deserialization Throughput – Figure <ref> also shows the average throughput for deserialization of the data using different types of protocols. It is noticeable that deserialization throughput across all methods is lower than the corresponding serialization throughput, indicating that deserialization is a main bottleneck to transmission. 8) Transmission Latency – Figure <ref> shows the transmission latency for various combinations of serialization and streaming technologies. The heatmap is sorted by the average latency from lowest to highest for each streaming technology. Across all technologies, it is observed that transmission latency is largely dependent on the choice of streaming technology rather than the choice of serialization protocol. In message queuing technologies, a broker is required as an intermediary, which increases the overall latency, whereas RPC technologies have no broker and hence lower latency. Among messaging technologies, RabbitMQ performs better with larger payloads, while ActiveMQ achieves lower latency with smaller payloads but performs worst on the largest payload (e.g., BatchMatrix). Among RPC-based methods, Thrift consistently has the lowest latency except for the BatchMatrix stream, where Capn'Proto narrowly beats Thrift. With larger payloads such as the BatchMatrix and Plasma data streams, the impact of the serialization protocol becomes more noticeable. It is challenging to identify a trend between encoding protocols in terms of latency, except that it is crucial to note the inefficiency of using XML and YAML for larger payloads. For the BatchMatrix data stream, an issue arises when sending a large YAML-encoded payload through the Python API, which causes ADIOS to produce a segmentation fault. Therefore, the corresponding latency and throughput entries are NaN, shown as empty cells in Figure <ref>. 9) Transmission Throughput – By examining the throughput, we gain a better understanding of how different protocols affect transmission. Figure <ref> shows that RPC methods achieve higher transmission throughput than message streaming technologies.
When dealing with larger payloads, such as the BatchMatrix and Plasma data streams, protocol-based serialization choices such as Thrift, Capn'Proto, and Protobuf provide higher throughput than other methods. Interestingly, MessagePack also performs well with larger payloads. Similar to latency, the choice of streaming technology is more important than the encoding. However, a trend towards protocol encoding methods can be observed on some larger datasets, such as the Plasma dataset. 10) Total Latency – Figure <ref> shows the total latency for all combinations that were tested. As before, it is clear that Thrift, Capn'Proto, and ZeroMQ all perform well in these tests. ZeroMQ offers the lowest latency in the BatchMatrix test because it avoids the overhead of copying the data into a new structure, as is the case with Thrift or Protobuf. Among the broker-based methods, RabbitMQ consistently performs well. When it comes to encoding methods, protocol-based methods generally perform the best across all datasets and streaming methods. However, it is not clear which method offers the lowest latency in general. Protocol-based methods can achieve high throughput by mixing encoding protocols and RPC frameworks. For example, considering the MNIST dataset, Capn'Proto achieves the lowest latency with the Thrift protocol. There is a clear trend towards protocol encoding for complex datasets such as Iris, MNIST, and Plasma. Among streaming technologies, Thrift generally shows the best performance. 11) Total Throughput – Figure <ref> shows total throughput, which is consistent with the total latency results discussed in the previous section. Protocol-based methods achieve the highest throughput. Among all the serialization protocols, Thrift is generally the best-performing one. ZeroMQ performs well with the biggest dataset, BatchMatrix. Although the best encoding method is inconclusive, there is a trend toward protocol-based methods, which give the highest throughput. 12) Effect of Batch Size on Throughput – In machine learning applications, data is often processed in batches. Our findings underscore the potential of batching data before transmission to enhance throughput. However, it is crucial to note that the batch size can significantly impact the throughput of the method used. Figure <ref> shows the throughput of the dataset with a variable batch size. As the batch size initially increases, the throughput drops. This is due to the increased overhead of copying and serializing data for transmission. However, when the batch size is increased beyond 32 images per batch, the overall throughput begins to improve because fewer packets need to be communicated over the network. For binary and protocol encoding methods, increasing the batch size is shown to also increase the throughput. This observation is consistent with previous results; generally, protocol-based methods offer the best throughput. At larger batch sizes (>128), the throughput continues to increase because the per-message transmission cost is significantly larger than the serialization/deserialization cost, so grouping many samples into a single transmission improves throughput. § DISCUSSION We can draw several conclusions based on the experiments presented in this work. We identify the following key points from our results: §.§ Recommendations RPC systems are faster than message broker systems because they avoid the overhead of an intermediate broker.
This makes RPC systems highly efficient for high-throughput, low-latency transmission of large data, although they do not offer the same delivery guarantees as message broker systems. We found that the choice of messaging technology has a greater impact than the encoding protocol. Protocol-based encoding methods such as Capn'Proto and ProtoBuf perform best for complex data that can be compressed, while MessagePack is a competitive choice for smaller or random data. Protocol-based encoding methods offer the fastest serialization and best compression, with Thrift offering the best throughput and Capn'Proto offering the best compression. Binary encoding methods offer more flexibility at the cost of slower encoding speed. Among the binary encoding methods we tested, MessagePack generally performed the best. Considering text-based protocols, JSON offered the best performance due to its lightweight markup and smaller payload size compared to YAML or Avro. Among different messaging technologies, we generally found that Apache Thrift achieves very high throughput and low latency across various scenarios. With message broker systems, RabbitMQ generally demonstrates the best performance. Surprisingly, we did not observe much of a difference when combining different protocol-based encoding and messaging systems. We hypothesized that ProtoBuf would be most efficient when combined with gRPC, or that Capn'Proto's RPC implementation would perform best with Capn'Proto encoding. However, this appears not to be the case. Larger batch sizes facilitate higher throughput for array datasets, as shown by our throughput and batch size experiment in Figure <ref>, when using either a binary or protocol-based serialization method. For text-based encoding methods, the required markup and the lack of compression destroy any advantage of batching data for transmission. §.§ Limitations and future directions One notable limitation is that, in this study, we did not investigate scaling with multiple clients. Previous research has examined this aspect for message queuing systems <cit.>. A future study could focus on examining the reliability of various RPC technologies based on the number of consumers. § CONCLUSION In this work, we investigated 132 combinations of different encoding methods and messaging technologies. We evaluated their performance across 11 different metrics and benchmarked each combination against 6 different datasets, ranging from toy datasets to machine learning workloads to scientific data from the fusion energy domain. We found that messaging technology has the biggest impact on performance, regardless of the specific serialization method used. Protocol-based encoding methods offered the highest throughput and lowest latency, but at the expense of flexibility and robustness. Notably, we did not see much difference when combining different protocol-based encoding and messaging systems. Finally, we found that the batch size affects the data throughput for all binary and protocol-based encoding methods. § CONTRIBUTION SJ: Designed and implemented the experimental framework, shaping the research methodology and contributing to the writing and conceptualization of the paper. NC: Provided the MAST data for the study and offered expertise in the fusion domain, enhancing the scientific rigor of this empirical study and editing and refining the manuscript.
SK: Provided technical supervision and introduced the core idea, building upon SK's prior work at the University of Oxford, and contributed to the writing and editing of the paper, figures, and plots. § ACKNOWLEDGMENT We would like to thank our colleagues at UKAEA and STFC for supporting the FAIR-MAST project. Additionally, we would like to thank Stephen Dixon, Jonathan Hollocombe, Adam Parker, Lucy Kogan, and Jimmy Measures from UKAEA for assisting our understanding of the fusion data. We would also like to extend our thanks to the wider FAIR-MAST project team, which includes Shaun De Witt, James Hodson, Stanislas Pamela, and Rob Akers from UKAEA and Jeyan Thiyagalingam from STFC. We also want to extend our gratitude to the MAST Team for their efforts in collecting and curating the raw diagnostic source data during the operation of the MAST experiment.
Uncertainty Calibration with Energy Based Instance-wise Scaling in the Wild Dataset. Mijoo Kim (ORCID 0000-0002-0397-1852) and Junseok Kwon (ORCID 0000-0001-9526-7549), Chung-Ang University, Seoul, Korea ({mijoo707,jskwon}@cau.ac.kr). July 22, 2024. § ABSTRACT With the rapid advancement in the performance of deep neural networks (DNNs), there has been significant interest in deploying and incorporating artificial intelligence (AI) systems into real-world scenarios. However, many DNNs lack the ability to represent uncertainty, often exhibiting excessive confidence even when making incorrect predictions. To ensure the reliability of AI systems, particularly in safety-critical cases, DNNs should transparently reflect the uncertainty in their predictions. In this paper, we investigate robust post-hoc uncertainty calibration methods for DNNs within the context of multi-class classification tasks. While previous studies have made notable progress, they still face challenges in achieving robust calibration, particularly in scenarios involving out-of-distribution (OOD) data. We identify that previous methods lack adaptability to individual input data and struggle to accurately estimate uncertainty when processing inputs drawn from the wild dataset. To address this issue, we introduce a novel instance-wise calibration method based on an energy model. Our method incorporates energy scores instead of softmax confidence scores, allowing for adaptive consideration of DNN uncertainty for each prediction within a logit space. In experiments, we show that the proposed method consistently maintains robust performance across the spectrum, spanning from in-distribution to OOD scenarios, when compared to other state-of-the-art methods. The source code is available at https://github.com/mijoo308/Energy-Calibration. § INTRODUCTION Despite the impressive performance demonstrated by recent AI systems, their deployment should be carefully considered, particularly in safety-critical situations (e.g., autonomous driving, finance, health care, and medical diagnosis), because these systems cannot consistently ensure accurate predictions. For example, in the field of medical diagnosis, incorrect predictions have the potential to result in catastrophic outcomes. To address this concern, the system must transparently reveal the uncertainty associated with its predictions. DNNs typically rely on confidence as a way of expressing uncertainty, but they tend to assign higher confidence scores than their actual accuracy warrants <cit.>. This discrepancy stems from the inherent inability of DNNs to express appropriate uncertainty during inference. To solve this problem, an increasing body of research has focused on refining confidence representations to reflect the uncertainty, a field known as uncertainty calibration. This effort aims to adjust confidence scores to align more closely with accuracy, ultimately improving the reliability of predictions. As a result of active research in this field <cit.>, the discrepancy between confidence and accuracy has been significantly mitigated. However, these methods mainly deal with samples drawn from the same distribution on which DNNs were trained (in-distribution), often overlooking distribution shift scenarios.
Consequently, they face difficulties in achieving calibration effects when confronted with distribution shift scenarios. When considering real-world deployment, calibration methods should demonstrate robustness in handling samples from unknown distributions. In this context, this issue has been considered by Tomani et al. <cit.> within the domain of post-hoc calibration. This method exhibits relatively effective performance in out-of-distribution (OOD) scenarios. However, it suffered from miscalibration in in-distribution (ID) scenarios, exhibiting even greater miscalibration compared to the pre-calibration state. In this paper, we propose a novel energy-based calibration method that exhibits robustness across the spectrum, from ID to OOD scenarios, including various distribution shift scenarios. We address uncertainty calibration in the context of multi-class classification, particularly in a post-hoc manner. The proposed method utilizes the energy score to adeptly capture the uncertainty in DNNs. Previous studies <cit.> have shown that the energy model produces scores that are discriminative between ID and OOD samples. Beyond this work, we demonstrate that the energy function can induce distinctive scores not only between ID and OOD samples but also between correct and incorrect samples, which can be effectively utilized for uncertainty calibration. Before delving into the mathematical derivation for this in Section <ref>, we provide an intuitive overview of the proposed energy score in Fig.<ref>. The energy score exhibits a superior ability to produce distinctive scores between ID and OOD samples, as well as between correct and incorrect samples. This implies that the energy score can more accurately represent the uncertainty of DNNs compared to the confidence score. Motivated by this observation, we utilize the energy score as an uncertainty estimator for each prediction, which inspires us to propose an instance-wise robust calibration method that adjusts the calibration factor accordingly. In Section <ref>, we demonstrate that the proposed method achieves remarkably robust calibration performance across various baseline DNN models in the wild datasets. This includes scenarios involving OOD samples with semantic shift and varying degrees of covariate shift, as well as ID scenarios. To sum up, our main contributions are as follows: * We demonstrate the effectiveness of utilizing energy scores for uncertainty calibration through both mathematical derivation and empirical validation. * We introduce a novel post-hoc calibration method that utilizes the energy score to adaptively capture the uncertainty of predictions in DNNs for each individual input. * We illustrate that the proposed calibration method shows robustness in the wild datasets across a wide range of distribution shifts, such as covariate and semantic shifts, as well as in the complete ID setting. § RELATED WORK §.§ Post-hoc Calibration Confidence calibration can be divided into two categories. The first category is known as training-time calibration <cit.>, such as focal loss <cit.> and label smoothing <cit.>. These methods train DNNs to exhibit calibrated behavior during training. The second category is referred to as post-hoc calibration <cit.>. In this approach, pre-trained neural networks are utilized along with hold-out validation datasets to learn calibration mappings in a post-hoc manner. Post-hoc calibration can be further categorized into non-parametric and parametric approaches.
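To make the post-hoc setting concrete, the following minimal PyTorch sketch illustrates the simplest parametric calibration map, temperature scaling (discussed in the next paragraph): a single scalar temperature is fitted on held-out validation logits from a frozen classifier and then applied to test-time logits. The function names, the LBFGS optimizer choice, and the random stand-in data are illustrative assumptions rather than details taken from any particular implementation.

import torch

def fit_temperature(val_logits, val_labels, max_iter=200):
    # Fit a single temperature T on held-out validation data by minimizing the NLL.
    log_t = torch.zeros(1, requires_grad=True)      # optimize log T so that T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)
    nll = torch.nn.CrossEntropyLoss()
    def closure():
        optimizer.zero_grad()
        loss = nll(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss
    optimizer.step(closure)
    return log_t.exp().item()

def calibrated_confidence(test_logits, temperature):
    # Rescaled softmax confidence; the argmax (predicted label) is unchanged.
    return torch.softmax(test_logits / temperature, dim=1).max(dim=1).values

# Toy usage with random stand-in logits and labels.
val_logits, val_labels = torch.randn(1000, 10), torch.randint(0, 10, (1000,))
T = fit_temperature(val_logits, val_labels)
confidences = calibrated_confidence(torch.randn(5, 10), T)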
Non-parametric approaches include Histogram Binning (HB) <cit.> and Isotonic Regression (IR) <cit.>. HB divided predicted probabilities into multiple intervals, each associated with a representative confidence. IR utilized isotonic regression with uncalibrated confidences as the x-axis and the expected accuracy values as the y-axis. BBQ <cit.>, a non-parametric extension of HB, incorporated Bayesian model averaging to enhance calibration. On the other hand, the most common parametric calibration method is Temperature Scaling (TS) <cit.>. As the temperature increases, the distribution of logits becomes more uniform, resulting in a decrease in the confidence score associated with the predicted label. TS has a significant advantage in terms of accuracy preservation, as it maintains the originally predicted label with the highest confidence score unchanged. However, TS exhibits limited expressiveness as it relies on only a single parameter that is fixed on the validation set. To address this issue, ensemble approaches, which combine both non-parametric and parametric methods, have been proposed. Ensemble TS <cit.> introduced additional parameters to enhance expressiveness, building upon TS. IRM <cit.> leveraged the accuracy-preserving property of the parametric approach and the expressiveness of the non-parametric approach, representing a multi-class extension of IR. Parameterized TS <cit.> employed a similar strategy to address the expressiveness limitation of TS. In other approaches, Beta calibration <cit.> was extended to Dirichlet calibration <cit.>, and Spline calibration <cit.> utilized spline fitting to approximate the empirical cumulative distribution. We employ a post-hoc calibration approach; however, our method differs from this body of work. While these methods only addressed scenarios where test samples are drawn from the same distribution on which DNNs were trained, our approach can handle more diverse situations. It ensures effective calibration for ID samples while maintaining the original classifier accuracy. §.§ Beyond In-distribution Calibration Conventional research has mainly focused on investigating post-hoc uncertainty calibration methods. However, these methods often overlook scenarios involving distribution shifts. Due to their dependence on a fixed calibration map optimized for ID validation sets, they struggle to effectively handle unknown test samples. Recently, the importance of the robustness of post-hoc calibration methods across various distribution shift scenarios has been emphasized <cit.>. In <cit.>, various degrees of Gaussian perturbations were injected into the ID validation dataset. The parameters of the calibration method were then adjusted using the perturbed validation data, resulting in enhanced robustness against shifted distributions. However, this method tends to achieve notable performance only within scenarios where a certain degree of distribution shift is present. Moreover, in a complete ID scenario, it exhibits even worse calibration compared to the pre-calibration state. To solve this problem, DAC <cit.> has been proposed as a pre-processing step before employing the existing post-hoc calibration methods. Notably, it leveraged additional output information for uncertainty estimation and enhanced the calibration performance in distribution shift scenarios. Similar to these methods, we focus on a broad range of scenarios, ranging from ID to various OOD scenarios.
However, unlike these methods, our approach can facilitate robust calibration without requiring any additional DNN layer information beyond the last layer. In experiments, we demonstrate that our proposed method performs comparably to, and in some cases even surpasses, other state-of-the-art methods that utilize DAC as a preprocessing step. § PROBLEM SETUP In this section, we establish preliminaries for uncertainty calibration. We define key notations in the context of multi-class classification and present a representative calibration metric derived from the concept of perfect calibration. Additionally, we address calibration in OOD scenarios. §.§ Notation Let 𝐱∈𝒳 and y ∈𝒴={1, ..., K} be random variables that denote d-dimensional inputs and labels, respectively, in multi-class classification tasks with K classes. These random variables follow a joint distribution π(𝐱,y) = π(y | 𝐱)π(𝐱). The dataset 𝒟 = {(𝐱_n, y_n)}^N_n=1 consists of N i.i.d. samples drawn from π(𝐱,y). Let f be a pre-trained neural network and f(𝐱) = (ŷ, 𝐳) be the output of the neural network, where ŷ is a predicted label and 𝐳 is the original non-probabilistic output of the network, referred to as the logit. The logit 𝐳 is converted into a probabilistic value p̂ using the softmax function σ_SM. Thus, p̂ represents the confidence score associated with the predicted label ŷ. To summarize, the outputs of the neural classifier f, p̂ and ŷ, can be obtained as follows: p̂ = max_k σ_SM(𝐳) and ŷ = argmax_k σ_SM(𝐳) for k ∈{1, ..., K}. §.§ Calibration Metric Perfect calibration is achieved when the predicted probability (confidence) matches the actual likelihood of a correct prediction. If a neural network predicts a label as y with confidence p, the actual likelihood of the prediction should ideally be p. Thus, perfect calibration in multi-class classification can be represented as follows: ℙ(ŷ = y | p̂ = p) = p, ∀ p ∈ [0,1]. The goal of uncertainty calibration is to minimize the gap between the ground-truth likelihood and the predicted confidence by calibrating the confidence value. Using this definition of perfect calibration, the calibration error can be computed by modifying (<ref>): 𝔼_p̂[ | ℙ(ŷ = y | p̂ = p) - p | ]. Subsequently, the expected calibration error (ECE) <cit.> empirically approximates the calibration error in (<ref>) using a binning technique. By discretizing the confidence interval into M equally sized bins, {B_m}^M_m=1, the ECE calculates a weighted average of the differences between accuracy acc(·) and confidence conf(·) in each bin. With N samples, the ECE is defined as follows: ECE = ∑_m=1^M (|B_m|/N) | acc(B_m) - conf(B_m) |, where all ECE values in this paper are calculated with M=15 and multiplied by 100. In addition to ECE, there are other metrics such as Maximum Calibration Error (MCE) <cit.>, which represents the highest error among bins, Static Calibration Error (SCE) <cit.>, which evaluates calibration errors in a classwise manner, and KDE-ECE <cit.>, which utilizes Kernel Density Estimation (KDE). As ECE is the most representative metric, we primarily evaluate the proposed method using ECE, but we also employ other metrics. §.§ Calibration in OOD Scenarios In general, OOD refers to a distribution that differs from the training distribution <cit.>. In this paper, the term OOD includes two types of distribution shifts: covariate shift and semantic shift. Covariate-shifted samples are drawn from a different joint distribution π_ood^cov(𝐱,y) such that π_ood^cov≠π.
In other words, while the samples may belong to the same class, they are presented in different forms <cit.>. In the case of semantic shift, the data is drawn from π_ood^sem(𝐱,y̅), where 𝒴∩𝒴̅ = ∅, indicating that the data is from classes not present in the training set 𝒟 <cit.>. Therefore, in semantic shift scenarios, all predictions by a pre-trained classifier may be incorrect, as they may correspond to one of the K in-distribution classes in 𝒴. From the perspective of calibration, in such scenarios, the lower the confidence, the better the calibration. § PROPOSED METHOD To overcome the limitations of conventional calibration methods when distribution shifts exist, we propose a robust calibration method that exhibits calibration improvements across various OOD scenarios. Previous approaches <cit.> that initially addressed shift scenarios in post-hoc calibration could not properly handle ID inputs. In contrast, our method achieves calibration improvements in both OOD and ID scenarios by adaptively capturing the uncertainty of pre-trained neural networks for each input. To accomplish this, our method leverages the concept of the energy model, which is technically derived from energy-based OOD detection methods <cit.>. Before introducing our calibration method, we lay out the mathematical motivation behind the proposed method by establishing a correlation between our method and the energy model. The overall pipeline for understanding the propose method is illustrated in Fig.<ref>. §.§ Mathematical Motivation The energy function makes scores on ID and OOD more distinguishable than the softmax function <cit.>. As demonstrated in <cit.>, there is a connection between Gibbs distributions (Boltzmann distributions) and softmax functions: P(y|𝐱) = e^- β E(𝐱, y)/∫_y ∈𝒴 e^-β E(𝐱, y), and σ_SM(y|𝐱) = e^f_y(𝐱)/∑_i=1^K e^f_i(𝐱), where P(y|𝐱) denotes the Gibbs distribution with the energy E(𝐱,y) : ℝ^D→ℝ and σ_SM(y|𝐱) indicates the softmax function for the K-class classifier f(𝐱) : ℝ^D→ℝ^K. In (<ref>), f(𝐱) outputs a vector of length K and f_i(𝐱) denotes the i-th element of the vector. By comparing P(y|𝐱) and σ_SM(y|𝐱) in (<ref>), we can derive the energy as E(𝐱, y) = -f_y(𝐱), where the positive constant value β is set to 1. The denominator of P(y|𝐱) in (<ref>) is a partition function, transforming each energy value corresponding to y into a probability value within the range of [0,1]. In particular, Helmholtz free energy is defined as a log partition function <cit.>. Then, the free energy ℱ can be represented using the connection between the Gibbs distribution and the softmax function: ℱ(𝐱) = - log∑_i=1^K e^f_i(𝐱). A mathematical relationship between the energy and the negative log likelihood (NLL) loss has been derived in <cit.>. Based on this, it is demonstrated that the NLL loss inherently decreases the energy for ID samples, while increasing the energy for OOD samples <cit.>. From these findings, we define the NLL loss, ℒ_NLL=-logσ_SM(y|𝐱), as a combination of E(𝐱, y) in (<ref>) and -ℱ(𝐱) in (<ref>). ℒ_NLL = -loge^f_y(𝐱)/∑_i=1^K e^f_i(𝐱) = -f_y(𝐱) + log∑_i=1^K e^f_i(𝐱) = E(𝐱, y) - ℱ(𝐱), where the free energy ℱ can be interpreted as a contrastive term that aggregates the energies for all classes of i ∈{1,...,K}. From the third equation in (<ref>), we can see that the NLL loss inherently lowers the energy for the correct label y and raises the energy for the other labels. Additionally, the derivative of ℒ_NLL over the network parameter θ is calculated as follows. 
∂ℒ_NLL/∂θ = ∂ E(𝐱,y)/∂θ - ∂ℱ(𝐱)/∂θ = ∂ E(𝐱,y)/∂θ - ∑^K_i=1 ∂ E(𝐱,i)/∂θ · e^-E(𝐱,i)/∑^K_j=1 e^-E(𝐱,j) = ∂ E(𝐱,y)/∂θ - ∑^K_i=1 ∂ E(𝐱,i)/∂θ · P(i|𝐱) = ∂ E(𝐱,y)/∂θ (1-P(y|𝐱)) - ∑^K_i≠ y ∂ E(𝐱, i)/∂θ · P(i|𝐱), where the second equality holds by using the Gibbs distribution in (<ref>) (β = 1). From the last equation in (<ref>), we can see that the energy function is weighted by each probability, pushing down the energy for the correct label and pulling up the energy for the incorrect labels. Liu et al. <cit.> explained that increasing the energy for all labels except the correct one inherently boosts the energy of OOD samples. Following this, the free energy ℱ can serve as the energy score, which yields distinctive values between ID and OOD samples. This is because it is a smooth approximation of E with the dominance of the ground-truth label y over all other labels. They focused on the distinguishability of the energy score between ID and OOD samples in the OOD detection task, particularly in cases of semantic shift. To further expand this interpretation, we bring the concept into the perspective of uncertainty calibration. In our context, we focus not only on the distinguishability between ID and OOD samples but also between correct and incorrect predictions. Interpreting the final equation in (<ref>) in a simpler manner, it reveals that the energy score has the capability to differentiate between correct and incorrect samples. This implies that it can produce distinctive scores between correct and incorrect predictions not only in ID but also under covariate shifts, as 𝒴 remains consistent. Furthermore, for semantic shift cases, as demonstrated in <cit.>, all predictions are considered incorrect, indicating distinguishability since there is no overlap in labels (𝒴∩𝒴̅ = ∅). Building upon the motivation outlined so far, we introduce how this is incorporated into post-hoc uncertainty calibration in the next section. §.§ Robust Instance-wise Calibration Most existing calibration methods are limited by the assumption of the same distribution on which the classifier has been trained. As the parameters are optimized using the consistent distribution of the validation set, these methods lack the adaptiveness to effectively address distribution shift scenarios. To solve this problem, an uncertainty calibration method should possess the capability to capture the uncertainty of the neural network for each individual sample. In this context, the energy score can effectively fulfill this role within the framework of post-hoc calibration. As shown in the motivation and Fig.<ref>, it is evident that the energy score is capable of producing distinct values for both cases: ID and OOD samples, as well as correct and incorrect samples. Guided by this motivation, we adjust the scaling factor for each input sample to achieve uncertainty calibration. Our method is fundamentally built upon the temperature scaling (TS) technique introduced in <cit.> to incorporate the advantage of its accuracy-preserving property. The proposed scaling factor is defined as follows: h(T_ts, 𝐱;θ) = T_ts - λ_1θ_1 + λ_2θ_2, where the first term T_ts is fixed on the validation set and the remaining terms -λ_1θ_1 + λ_2θ_2 are adaptive for each input. Here, T_ts denotes the temperature parameter obtained by the TS technique <cit.>, which is fixed on the validation set, and θ={θ_1, θ_2} denotes trainable parameters that are optimized using the loss function described in (<ref>). The term -λ_1θ_1 is designed to lower the temperature, whereas λ_2θ_2 is included to raise the temperature.
Raising the temperature spreads the logits toward a more uniform distribution, ultimately reducing confidence in the prediction. These adaptive terms adjust the scaling factor depending on whether a given sample is likely to be correctly classified or not. We obtain λ_1 and λ_2 in (<ref>) as follows: λ_1 = P_correct(ℱ(𝐱)), and λ_2 = P_incorrect(ℱ(𝐱)), where ℱ denotes the energy score function in (<ref>), and P_correct and P_incorrect denote the probability density functions of ℱ(𝐱) fitted with Gaussian distributions for correct and incorrect samples, respectively. By leveraging the energy scores that characterize a specific classifier on a given dataset, we can establish a distribution of these energy scores and subsequently calculate the corresponding probability density function. The energy scores corresponding to correct instances are utilized to construct the distribution P_correct, whereas the energy scores associated with incorrect instances are used to form the distribution P_incorrect. We utilize an ID validation set and a semantic OOD dataset to construct the aforementioned probability density functions, in which the predictions made on semantic OOD data are categorized as incorrect instances, since all such predictions correspond to one of the K in-distribution classes. By designing the proposed scaling factor in this manner, our method gains the ability to distinguish between correct and incorrect samples in an instance-wise fashion. Ensuring the adaptability of our calibration method to each input sample is crucial for capturing the uncertainty of a particular prediction, especially in the presence of distribution shifts. This is the main reason for the robust calibration performance exhibited by our method across various test data, including ID samples and various types of distribution-shifted samples (Fig.<ref>), which is empirically demonstrated in the experiment section. Then, using our instance-wise scaling with the scaling factor in (<ref>), the calibrated probability of sample 𝐱 over K classes, p̂_θ∈ℝ^K, can be expressed as follows: p̂_θ = σ_SM(f(𝐱)/h(T_ts, 𝐱;θ)), where h(T_ts, 𝐱;θ) denotes the proposed scaling factor defined in (<ref>). To train the parameters θ={θ_1, θ_2} in (<ref>), we design a mean squared error loss function: ℒ_θ = 1/N∑_i||y^(i) - p̂_θ^(i)||_2^2, where y^(i)∈ℝ^K denotes the one-hot encoded ground-truth label for the i-th sample, p̂_θ^(i)∈ℝ^K denotes p̂_θ in (<ref>) for the i-th sample, and N is the total number of training samples. Using the trained parameters θ, the calibrated confidence for a test sample 𝐱 can be calculated in an instance-wise manner: q̂ = max(σ_SM(f(𝐱)/h(T_ts, 𝐱;θ))). Algorithm <ref> describes the entire procedure for training the calibration parameters, while Algorithm <ref> outlines the procedure for applying the instance-wise calibration method with the trained parameters. § EXPERIMENT §.§ Experimental Settings For our experiments, we trained classification DNNs including VGGNet <cit.>, ResNet <cit.>, WideResNet <cit.>, DenseNet <cit.> and SE-ResNet <cit.> on the CIFAR10/CIFAR100 datasets <cit.>. We employed pre-trained weights implemented in PyTorch for ImageNet-1k <cit.>. We utilized two types of datasets for calibration tuning: one intended for ID scenarios and the other for semantic OOD scenarios. For tuning θ, we utilized ID validation images (5,000 for CIFAR10/CIFAR100, 12,500 for ImageNet-1k) and semantic OOD samples (100/400/3,500 SVHN <cit.> or Texture <cit.> images for CIFAR10/CIFAR100/ImageNet-1k, respectively).
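As a concrete illustration of how P_correct and P_incorrect can be fitted from the ID validation set and the semantic OOD tuning samples described above, consider the following sketch; it assumes the free-energy scores and correctness flags have already been computed, uses SciPy's normal distribution for the Gaussian fit, and all names are illustrative rather than from any released code.

```python
import numpy as np
from scipy.stats import norm

def fit_energy_densities(energy_val, correct_val, energy_sem_ood):
    # energy_val: free-energy scores on the ID validation set
    # correct_val: boolean array, True where the classifier's prediction is correct
    # energy_sem_ood: free-energy scores on the semantic-OOD tuning set
    #                 (all such predictions are treated as incorrect)
    e_correct = energy_val[correct_val]
    e_incorrect = np.concatenate([energy_val[~correct_val], energy_sem_ood])

    # Gaussian fits to the two populations of energy scores.
    p_correct = norm(loc=e_correct.mean(), scale=e_correct.std())
    p_incorrect = norm(loc=e_incorrect.mean(), scale=e_incorrect.std())
    return p_correct, p_incorrect

# At calibration time: lam1 = p_correct.pdf(F(x)), lam2 = p_incorrect.pdf(F(x)).
```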
To evaluate our method, we employed 10,000/10,000/12,500 images each from CIFAR10/CIFAR100/ImageNet-1k as test ID samples. For covariate OOD test data, we utilized the corrupted datasets CIFAR10-C, CIFAR100-C, and ImageNet-C from <cit.>. These corrupted datasets contain five severity levels for 19 corruption types (e.g., blur, contrast, and frost). We utilized 10,000 images for each severity level of each corruption type, maintaining the same approach across all datasets. For the test semantic OOD scenarios, we used the Textures or SVHN dataset, whichever was not used during tuning. To demonstrate the effectiveness of our method, we compared it with five baseline post-hoc calibration methods: TS <cit.>, ETS <cit.>, IRM <cit.>, IROvA/IROvATS <cit.>, and SPLINE <cit.>. Among these methods, TS, ETS, IRM, SPLINE, and our method are considered accuracy-preserving methods. In addition, we compared our method with the state-of-the-art post-hoc calibration method DAC, proposed in <cit.>. Please note that additional experimental results based on the type of semantic OOD dataset can be found in the supplementary material. §.§ Ablation Study on Energy Score We conducted an ablation study on the energy score in (<ref>). To demonstrate the capability of our method in capturing network uncertainty, we evaluated the energy score on samples from the complete ID dataset (CIFAR10), the corrupted dataset (CIFAR10-C), and the semantic OOD dataset (SVHN). As shown in Fig.<ref>, the energy scores tend to decrease, with higher variances, as the degree of distribution shift increases. Since the decrease in energy scores tracks the drop in accuracy, this indicates that the energy scores can indeed efficiently capture the uncertainty of DNNs. §.§ Comparison with Baseline Methods We compared our method with the aforementioned baseline methods by evaluating them across various datasets and backbone architectures in terms of ECE. Because our method emphasizes robust calibration performance on diverse datasets, we comprehensively conducted experiments on a variety of distribution shift scenarios, spanning from complete in-distribution to heavily corrupted scenarios. For this purpose, we employed the corrupted datasets with severity levels ranging from 1 to 5, with complete ID test data added as severity level 0. Table <ref> shows the averaged ECE across all severity levels. Our method outperforms the other baseline methods for various backbone networks and datasets. Furthermore, as shown in Fig.<ref>, our method surpassed other approaches at most individual severity levels. It is noteworthy that our method shows consistent performance not only in scenarios involving corruptions but also in complete ID scenarios. This is in contrast to <cit.>, which exhibited even greater miscalibration than the uncalibrated model on complete in-distribution data. Fig.<ref> shows a comparison of ECE by corruption type, similar to <cit.>. Our method demonstrates robust calibration across various corruption types. Additional results on diverse calibration metrics <cit.> and transformer-based models <cit.> are available in the supplementary material. §.§ Exploring Synergies with Applicable State-of-the-art Method We analyzed the results of combining our proposed method with a compatible post-hoc calibration method. DAC <cit.>, similar to our objective, aims for robust calibration performance even in OOD scenarios.
Unlike most methods, including ours, which use only the output of the last layer of the DNN, DAC additionally utilizes the outputs of other layers. Since DAC is designed to be used alongside post-hoc calibration methods, we applied it to our proposed method. We followed DAC's layer selection method proposed by Tomani et al. <cit.>. Table <ref> shows the averaged ECE for each corrupted dataset. We compared our method with ETS+DAC and SPLINE+DAC, both of which achieved state-of-the-art results in <cit.>. While our method showed good synergy with DAC, it achieved the best performance on its own, even without DAC. Notably, our approach attains these results without needing the additional output information from each classifier layer used by DAC. §.§ Evaluating Robustness on Semantic OOD We measured the calibrated confidence scores for semantic OOD test samples, which differ from the OOD samples used to tune our calibration parameters. For this experiment, we utilized DenseNet201. Fig.<ref> demonstrates that our method produces the lowest confidence scores for OOD samples, which is desirable because all predictions for OOD samples are incorrect. Furthermore, we conducted an experiment to investigate the potential extension to OOD detection. To accomplish this, we utilized key evaluation metrics commonly employed in OOD detection, such as AUROC, AUPR-in, and AUPR-out. Our method demonstrates superior performance compared to other approaches in most cases, as observed in Table <ref>, thus affirming the potential for extending our proposed approach to OOD detection. § CONCLUSION In this paper, we addressed the limitations of existing post-hoc calibration methods on wild datasets, including in-distribution, covariate shift, and semantic shift. Conventional methods could not consider all of these scenarios in achieving robust calibration. To solve this problem, we introduced a novel instance-wise calibration method using the energy score. Our method adaptively captures uncertainty for each instance by leveraging the energy score. Through experiments conducted across various networks and datasets, we demonstrated that our method outperforms existing calibration methods in scenarios involving various types of distribution shifts, while consistently maintaining its calibration effect on the complete in-distribution dataset. As the reliability of AI in safety-critical situations becomes increasingly important, we believe that our method can contribute to the safer deployment of AI systems in real-world scenarios. By offering a promising direction with our method, we hope to inspire future research efforts toward enhancing trustworthy AI. §.§.§ Acknowledgements. This work was partly supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (NRF2020R1C1C1004907) and partly supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2022-00143911, AI Excellence Global Innovative Leader Education Program and 2021-0-01341, Artificial Intelligence Graduate School Program (Chung-Ang University)). Supplementary Material for Uncertainty Calibration with Energy Based Instance-wise Scaling in the Wild Dataset § EVALUATION ON ADDITIONAL CALIBRATION METRICS In the main text, we primarily evaluated methods using the Expected Calibration Error (ECE), which remains the most representative metric for assessing calibration.
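For reference, a minimal sketch of the standard equal-width-binned ECE (an illustrative implementation with 15 bins, not the exact evaluation code used for the tables) is:

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    # Weighted average of |accuracy - confidence| over equal-width confidence bins.
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(labels)
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.sum() == 0:
            continue
        acc = (predictions[in_bin] == labels[in_bin]).mean()
        conf = confidences[in_bin].mean()
        ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece
```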
The purpose of this section is to present results evaluated with calibration metrics other than the ECE used in the main text. To this end, we evaluated the methods using the Kernel Density Estimation based ECE (KDE-ECE) <cit.> and the Static Calibration Error (SCE) <cit.>. Both metrics are lower when calibration is better. Note that the SCE is conceptually identical to the Class-wise Calibration Error <cit.>. Additionally, since the spline method calibrates confidence for the top label only, it may incur a larger penalty when evaluated with the class-wise SCE due to this design choice. Results for KDE-ECE and SCE are shown in Tables <ref> and <ref>, respectively. Both sets of results are based on the same in-distribution (ID) and out-of-distribution (OOD) settings, and the same networks, as in the main text. In most cases, our method still shows superior performance under both KDE-ECE and SCE. § EVALUATION ON TRANSFORMER-BASED MODELS Transformer-based vision models, including <cit.> and <cit.>, are known to be relatively well-calibrated, as noted in <cit.>. Nevertheless, to verify the calibration efficacy of our method on these transformer-based models, we conducted additional experiments. Table <ref> shows the results before and after applying our method to ViT-B-32, ViT-B-16, and Swin-T, respectively. As before, these results represent the averaged ECE and KDE-ECE across all levels of corruption severity, from no corruption (ID) to the maximum corruption severity in ImageNet-C. It is evident from these findings that our method consistently achieves calibration improvements on transformer-based vision models as well. Furthermore, as detailed in Fig.<ref>, our method consistently exhibits calibration improvements at every level of corruption severity, demonstrating its robustness. § EVALUATION ON ADDITIONAL COVARIATE OOD DATASET In the main text, we utilized the ImageNet-C <cit.> dataset, which features progressively diverging stages of corruption from the in-distribution (ID), to represent covariate-shifted test sets and effectively demonstrate the characteristics of our method. In this section, we present experimental results on further covariate OOD test sets beyond ImageNet-C. We employed the covariate OOD datasets ImageNet-Renditions (R) <cit.>, ImageNet-Adversarial (A) <cit.>, and ImageNet-Sketch <cit.>. First, ImageNet-R consists of a test set of 30,000 images that encompass a variety of artistic renditions (paintings, graffiti, and embroidery), covering 200 classes from ImageNet-1k. Second, ImageNet-A contains 7,500 natural adversarial examples, real-world adversarially filtered images designed to challenge existing ImageNet-1k classifiers. Lastly, ImageNet-Sketch, similar to the aforementioned datasets, is used to evaluate generalization and robustness under distribution shifts and consists of 50,000 sketch-like images covering all ImageNet-1k classes. § ABLATION STUDY ON SEMANTIC OOD DATA This section provides detailed ablation studies related to the semantic OOD dataset, which was utilized for parameter tuning in our method. First, Section <ref> discusses the impact of the type of semantic OOD dataset utilized for tuning. Then, Section <ref> offers a comprehensive analysis of the results when semantic OOD data is not used for tuning at all.
§.§ Semantic OOD type For the ID datasets, we used CIFAR10, CIFAR100, and ImageNet-1k, while for the semantic out-of-distribution (OOD) datasets, we utilized SVHN and the Texture dataset. When tuning the calibration parameters, if SVHN was used, then the other dataset, Texture, was employed as the test set (and vice versa) to prevent any data leakage into the test set. We highlight that the semantic OOD data used for tuning was never used as the test dataset. In this section, we conducted an ablation study on the type of semantic OOD data used for parameter tuning, using DenseNet201. Table <ref> indicates that, regardless of which semantic OOD dataset is used for tuning the parameters of our method, it consistently maintains high performance without significant changes in the results. §.§ Analysis of Results Without Using Semantic OOD for Tuning Our intuition suggests that using energy to distinguish between correct and incorrect predictions should be effective with our algorithm alone. However, exposure to semantic OOD data, which is drastically different from the training distribution, likely enhances robust calibration against OOD scenarios. To investigate this further, we analyzed the results when semantic OOD data is not used for tuning at all. Under the same experimental settings as in the main text, we configured and tested our method without using semantic OOD data, adding these results for comparative analysis. First, the `Ours w/o' results in Table <ref>, obtained without semantic OOD data, are interestingly better than other state-of-the-art methods in many cases, albeit less effective than our full proposed method utilizing semantic OOD data. In more detail, as shown in Fig.<ref>, the DenseNet201 (CIFAR100) results by corruption type at the maximum corruption level, while not as good as those of our original method, still outperform other state-of-the-art methods for most corruption types. This demonstrates that the results are consistent with our intuition and the intended design of our method. Lastly, consistent results were also observed for the semantic OOD test set. As shown in Fig.<ref>, although the results without using the semantic OOD dataset for tuning are less effective than our original method, they still show the best calibration effect compared to other existing methods.
http://arxiv.org/abs/2407.12275v1
20240717024927
When can transformers compositionally generalize in-context?
[ "Seijin Kobayashi", "Simon Schug", "Yassir Akram", "Florian Redhardt", "Johannes von Oswald", "Razvan Pascanu", "Guillaume Lajoie", "João Sacramento" ]
cs.LG
[ "cs.LG", "cs.NE" ]
^*Equal contribution, alphabetical order. § ABSTRACT Many tasks can be composed from a few independent components. This gives rise to a combinatorial explosion of possible tasks, only some of which might be encountered during training. Under what circumstances can transformers compositionally generalize from a subset of tasks to all possible combinations of tasks that share similar components? Here we study a modular multitask setting that allows us to precisely control compositional structure in the data generation process. We present evidence that transformers learning in-context struggle to generalize compositionally on this task despite being in principle expressive enough to do so. Compositional generalization becomes possible only when introducing a bottleneck that enforces an explicit separation between task inference and task execution. transformer, compositional generalization, in-context learning § INTRODUCTION Many tasks are compositional, and as a result there is a combinatorial explosion of possible tasks. In this setting, when exposed to a number of tasks sharing components, it is desirable for a learning system to master operations that can be reused and leveraged to generalize to entirely new tasks. Ideally, our learning systems could discover the constituent parts underlying the compositional task structure, and naturally generalize compositionally. Prior work has explored this question <cit.>, and recent results show that gradient-based meta-learning with hypernetworks can compositionally generalize after training only on a linear subset of tasks <cit.>. Can transformers learning in-context achieve the same thing? In-context learning is powerful, with evidence pointing towards it being able to implement mesa-optimization <cit.>. In some settings, compositional generalization appears to be in reach <cit.>. However, there are also instances where, despite being able to identify latent task information, generalization fails <cit.>. To shed light on the circumstances under which transformers can learn to compositionally generalize in-context, we study the synthetic multitask setting previously introduced by <cit.>. We find that while the transformer is able to correctly infer the task latent variable and solve in-distribution tasks, it fails to appropriately generalize. Introducing a bottleneck into the architecture separating task inference and task execution helps to overcome this failure by enabling compositional generalization. We design several interventions and decoding analyses that suggest that the success of the bottleneck is due to encouraging the discovery of the modular structure of the underlying task generative model. This finding paves the way toward architectural inductive biases that may promote better in-context generalization for transformers. § GENERATING MODULAR TASKS WITH COMPOSITIONAL STRUCTURE Task generation We aim to study a multitask setting where tasks share compositional structure.
To this end, we consider an adapted version of the synthetic setting introduced by <cit.> which provides full control of the underlying compositional structure and allows us to measure the ability for compositional generalization. Specifically, we will leverage a task-shared linear hypernetwork <cit.> that defines a regression task given a low-dimensional task code z as shown in Figure <ref>A. The linear hypernetwork is parameterized by a set of modules {θ_m}_m=1^M. Given z∈ℝ^M, it produces task-specific parameters W = ∑_m=1^M z_m θ_m, which are used to parameterize a one hidden layer MLP with GELU nonlinearity and fixed readout weights with a single output unit, g:(x, W) ↦ g(x, W). This task network is used to define a regression task, producing labels y = g(x, W) given randomly drawn inputs x∼𝒰(-√(3), √(3)). As a result, each task is obtained through the additive composition of a set of M modules, where each module corresponds to a full set of task network parameters. We can use this structure to explicitly test for compositional generalization by holding out a subset of module combinations during training (see bottom of Figure <ref>A). At evaluation, performance measured on tasks seen during training is referred to as in-distribution, while performance on the held-out tasks is considered out-of-distribution (OOD). See Appendix <ref> for more details on the task generation and Appendix <ref> on how we define OOD tasks. In-context learning We present the tasks in-context to transformer-based sequence models as shown in Figure <ref>B. For each task, we sample a set of pairs {(x_i, y_i)}_i=1^N, concatenate each pair as a vector and present them as tokens to the transformer models. For the final query token, we mask the label y_N and train the model to predict it using a standard mean-squared error loss. We compare two models with each other. The first is a standard decoder-only transformer trained to directly predict y_N (for details please consider Appendix <ref>). We compare it to a transformer whose outputs are fed to a linear hypernetwork which parameterizes a single hidden layer task network that predicts the target of the query token, mirroring the generative process of the task. This is an instance of a hypertransformer <cit.>. While this model is still trained end-to-end using a mean-squared error loss on the target, the transformer can now specialize to infer a latent code and the parameters of the hypernetwork can be used to learn to execute the task given the latent code. § IN-CONTEXT COMPOSITIONAL GENERALIZATION As a consequence of the compositional structure, the number of possible tasks grows exponentially with the number of modules M. Naively learning every combination independently therefore quickly becomes unfeasible. Instead, ideally, our learning systems discover the underlying modular structure of the task while being exposed only to demonstrations of each task - not observing the ground-truth latent structure. In the following experiments, we evaluate out-of-distribution (OOD) performance on held-out module combinations on the modular task described above consisting of M=6 modules. Transformers learning in-context fail to generalize compositionally. We first consider the vanilla transformer trained to predict the query label after observing examples of each task in-context. As can be seen in Figure <ref>A, while being able to fit the in-distribution data relatively well (also compared to Figure <ref>), it fails to compositionally generalize to the held-out OOD tasks. 
Surprisingly, despite this, Figure <ref>B shows that it is possible to linearly decode the task latent code on OOD tasks from the residual stream, using a linear decoder trained solely on in-distribution tasks (cf. Appendix <ref> for more details). This suggests that the model is able to implicitly perform task inference in a way that generalizes compositionally, yet it is unable to leverage the inferred z to predict the correct label on OOD tasks. Separating task inference from task execution enables compositional generalization. Motivated by this observation, we equip the transformer with an explicit, learnable hypernetwork that takes as input the logits of the transformer, as described above. This encodes a strong architectural prior to separate task inference and task execution. Indeed, Figure <ref>A shows that this system is able to compositionally generalize to held-out tasks while maintaining the linear decodability of the task latent code from the residual stream of the transformer (Figure <ref>B). The vanilla transformer fails to capture the compositional structure of the task. To illuminate to what extent the two models discover the compositional structure underlying the task generative model, we perform two additional experiments. First, we present both models with a control task that is also generated by a single-hidden-layer task network, as used to produce the training tasks, but whose parameters are crucially not composed of the training modules and are instead randomly initialized (see Appendix <ref> for details). Figure <ref>C shows that the hypernetwork transformer completely fails to solve this task over the course of training, while the vanilla transformer displays modest performance, providing evidence that the former is strongly specialized to the particular compositional structure of the training tasks. To complement this analysis, we construct two task distributions (connected vs. disconnected) for training that have been shown by <cit.> to causally affect the ability of hypernetworks to compositionally generalize (see Appendix <ref> for details). Indeed, the hypernetwork transformer is highly sensitive to this intervention while the vanilla transformer is virtually unaffected. Taken together, this suggests that the vanilla transformer fails to emulate the compositional structure of the task generative model, which the hypernetwork transformer can capture. This is striking given that the former is in principle sufficiently expressive to implement the hypernetwork solution; see Appendix <ref> for an explicit construction. § DISCUSSION Despite the success of transformers at scale trained on simple autoregressive next-token prediction, there are still many failure cases, compositional generalization being one of them. While we find that transformers are expressive enough to in principle encode compositional structure in a multitask setting, less powerful shortcuts dominate the solutions practically found by gradient-based optimization. Encoding inductive biases into the architecture might help overcome these problems, but finding such biases that generally work well is an open challenge. Our results show that architecturally separating task inference from task execution through a bottleneck improves compositional generalization in our synthetic setting. Future work should explore to what extent similar architectural motifs allow end-to-end discovery of compositional structure from data for a general class of problems.
§ IMPLEMENTING A HYPERNETWORK IN A TRANSFORMER §.§ Linear attention block We first give a brief overview of the linear transformer architecture. Given input tokens E∈ℝ^L × d_m for a sequence of length L, a transformer block consists of a self-attention layer followed by a multi-layer perceptron (MLP). The transformation is done by first computing queries, keys and values Q, K, V = EW_q, EW_k, EW_v, with which we then update E as E ← E + QK^⊤V W_P followed by E ← E + σ(EW_1)W_2, where W_q, W_k, W_v ∈ℝ^d_m × d_k and W_P ∈ℝ^d_k × d_m as well as W_1 ∈ℝ^d_m × d_h, W_2 ∈ℝ^d_h × d_m are learnable parameter matrices and σ is a nonlinearity applied row-wise. In practice, there are H heads that perform the first attention operation in parallel, each with its own parameters W_q^(h), W_k^(h), W_v^(h), W_P^(h) for all h, resulting in the following forward function E ← E + ∑_h=1^H Q^(h)K^(h)⊤V^(h)W_P^(h). §.§ Construction We will now provide a construction of how a linear transformer can implement a fixed hypernetwork in the forward pass given any input x∈ℝ^d and latent z∈ℝ^M. Hypernetwork Let us consider the following linear hypernetwork: x,z→Aσ(W(z) x), where W(z) = ∑_m=1^M z_mθ_m, θ_m ∈ℝ^h × d for all m, and A∈ℝ^o × h. Token construction We assume there are only 2 tokens, e_1 = (x^⊤,0_M,1_h+o)^⊤ and e_2 = (0_d,z^⊤,0_h+o)^⊤, where 0_k, 1_k denote the k-dimensional row vector of zeros resp. ones. The output will be computed on the token stream of e_2. Linear attention First, the attention layer will compute the forward pass W(z) x. To do this, let us fix H=M heads, d_q=d_k=1 and d_v=h. For each head m, we can construct the value matrix such that the first token has the value vector θ_mx while the second has 0_h. By choosing the key and query matrices correctly, the attention score between the first and second token can be made to be exactly z_m. By letting the projection matrix be constant across heads, the attention operation becomes e_2 ← e_2 + ∑_m^M z_m(θ_mx)^⊤W_P; by appropriately choosing W_P, the residual stream then equals (0_d,z^⊤,W(z)x,0_o)^⊤ after the attention layer. MLP Finally, the MLP layer simply applies the correct nonlinearity σ to W(z)x and applies the readout weight A to write the result on the remaining 0_o entries of the residual stream. § ADDITIONAL RESULTS § EXPERIMENTAL DETAILS §.§ Data generation The data is generated using a teacher hypernetwork. We first initialize the teacher parameters once. Then, for each sequence, we sample a task latent variable z which induces a noiseless mapping from inputs x to a scalar target y following the equation y = a^⊤σ(W(z) x), where σ is the GELU nonlinearity. The weight W is the linear combination of the modules {θ_m}_m by z, i.e. W(z) = ∑_m θ_m z_m. In order to make sure the forward pass is well-behaved, we furthermore normalize the generated weight W by its operator norm. For all experiments, we fix the task latent variable dimension to M=6, the input dimension to d = 16, the hidden dimension of the teacher to h=16, and the output dimension to o=1. The teacher parameters {θ_m}_m and a are generated by sampling the entries i.i.d. from a centered truncated normal distribution, with standard deviation 1/√(M) and 1/√(h), respectively. We define the distribution over inputs x to be the uniform distribution with mean 0 and standard deviation 1 on ℝ^d.
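For concreteness, a minimal NumPy sketch of the teacher's label-generating forward pass could look as follows; it is an illustration under simplifying assumptions (a plain normal distribution in place of the truncated normal, an erf-based GELU), and the example latent code z is only a stand-in for the sampling procedure specified next.

```python
import numpy as np
from scipy.special import erf

M, d, h = 6, 16, 16                                      # latent, input, hidden dims (o = 1)
rng = np.random.default_rng(0)
theta = rng.normal(0, 1 / np.sqrt(M), size=(M, h, d))    # modules (truncated normal in the paper)
a = rng.normal(0, 1 / np.sqrt(h), size=h)                # fixed readout weights

def gelu(u):
    return 0.5 * u * (1.0 + erf(u / np.sqrt(2.0)))

def teacher_label(x, z):
    # W(z) = sum_m z_m theta_m, normalized by its operator (spectral) norm.
    W = np.tensordot(z, theta, axes=1)                   # shape (h, d)
    W = W / np.linalg.norm(W, ord=2)
    return a @ gelu(W @ x)                               # scalar target y

x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=d)         # zero-mean, unit-variance inputs
z = np.zeros(M); z[[0, 3]] = 0.5                         # illustrative sparse latent code
y = teacher_label(x, z)
```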
Finally, we specify the distribution over the task latent variable z. Task latent variable distribution. Here, we consider tasks where modules are sparsely and linearly combined. A task distribution is specified by a set of masks, which are binary vectors in ℝ^M. Given a mask, we sample a task z as follows. We first sample an M-dimensional random variable following the exponential distribution. Then, we zero out the entries where the mask is 0. We normalize the vector such that its sum equals 1. This procedure simulates uniform sampling from the simplex spanning the directions in which the mask is non-zero. Finally, we add the mask to the vector and rescale the outcome by 0.5. This ensures that two tasks generated by distinct masks do not have intersecting support (but intersecting span). See Algorithm <ref> for the pseudocode. The task distribution is then generated as follows: first, a mask is sampled randomly and uniformly from the prespecified set. Then, the vector z is sampled following the above procedure. Connected and disconnected task support Controlling the task distribution in this way allows us to study under what circumstances it is possible to generalize to the full support after having only observed demonstrations from a subset of tasks. More precisely, if 𝒫_z is a distribution on the latent code that does not have full support on ℝ^M, can a system trained only on tasks sampled from 𝒫_z generalize to the full space? Here, we assume that the support of 𝒫_z spans the whole space ℝ^M. We will investigate two situations: when 𝒫_z has connected support and when it has disconnected support. For a formal definition, we defer the reader to <cit.>. Intuitively, having connected support means that no subset of modules appears solely in isolation from the rest. To make a concrete example for the simple case of M=3 modules, if the support of 𝒫_z is (ℝ^2×{0}) ∪({0}^2 ×ℝ), the learner will have never seen the interaction of the first two modules with the last one, and hence the support is disconnected. <cit.> theoretically show that when the support is disconnected, there are several failure cases impeding compositional generalization. §.§ Training and Evaluation metrics In all our experiments, we train the model on tasks generated from a set of binary masks as described in Section <ref>. During training, both x and z are sampled online. For experiments investigating the effect of connected and disconnected task support (Panel D of Figure <ref>), we train the same model on the masks listed in the corresponding columns of Table <ref>, where we make sure that the same number of tasks is used during training in both settings. For all other experiments, we use the masks of Connected+ during training. OOD R^2 To evaluate the performance of the models on compositional generalization, we compute the R^2 score of the linear regression on tasks generated by 2-hot masks that were unseen during training. For example, for models trained on tasks from the masks in Connected+ (cf. Table <ref>), the OOD evaluation is done on tasks from the masks (1,0,0,1,0,0), (0,1,0,0,1,0), (0,0,1,0,0,1). Given a sequence of N-1 pairs (x_i, y_i) and a prediction ỹ for y_N, the R^2 score is defined as the MSE loss between y_N and ỹ, normalized by the MSE loss between y_N and 1/N-1∑_i^N-1 y_i. The score is averaged over 16000 OOD sequences. Probing latent In order to probe whether the transformer implicitly learned to infer the latent task variable z, we linearly probe the residual stream throughout training. Given a model, we collect a batch of 16000 sequences X sampled from the training task distribution, associated with various task latent variables Z.
We then train a linear regressor from X to Z by Ridge regression, with regularization strength 1. Then, we evaluate the R^2 score of the regressor on 16000 sequences drawn from the OOD distribution, against their respective latent code. Random unstructured control A lazy way for a model to learn the task is to ignore its compositional structure, and simply infer the target solely based on the context of the current task. If that is the case, then the model should be able to have a reasonable guess of g(x_N,ω) when the context provides demonstrations (x_i, g(x_i, ω)), with ω∉{W(z) |z∈ℝ^M}. We evaluate our models on such an unstructured control task where ω is sampled as the output of a freshly initialized hypernetwork teacher and random latent code, both of which share no other structure with the training tasks. §.§ Architecture §.§.§ Vanilla transformer The input is X = ((x_1, y_1), …, (x_N-1, y_N-1), (x_N, 0)) where x_N is the "query" input whose image we have to infer. The vanilla transformer consists of a standard decoder-only multi-layer transformer, where each block is structured as X ←((X)) + X X ←((X)) + X. is multi-head softmax attention and uses T5-style relative positional embeddings <cit.>. The feedforward layer is a two-layer MLP with GELU nonlinearity. It is applied to each token in the sequence independently. A final readout layer projects the query token to the output dimension. §.§.§ Transformer Hypernetwork The Transformer Hypernetwork is a single hidden layer MLP whose weights are generated by a transformer. More precisely: * the first layer weights are generated by a vanilla transformer. It gets as input the sequence X = ((x_1, y_1), …, (x_N, y_N), (0, 0)) The output of the last token is projected to a latent code space ℝ^m̂, followed by a readout to the dimension of the weight matrix of the first MLP layer. One can reinterpret this as the transformer generating a latent code z, and then generating the weight matrix W(z) = ∑_m=1^M̂ z_m θ_m, where the θ_m are the learned modules. * the readout weights are learnable parameters (i.e. not generated by the transformer) §.§ Hyperparameters We selected the following hyperparameter based on the mean OOD R^2 score on 3 seeds:
http://arxiv.org/abs/2407.12779v1
20240717175732
Analysis of Crab X-ray Polarization using Deeper IXPE Observations
[ "Josephine Wong", "Tsunefumi Mizuno", "Niccoló Bucciantini", "Roger W. Romani", "Yi-Jung Yang", "Kuan Liu", "Wei Deng", "Kazuho Goya", "Fei Xie", "Maura Pilia", "Philip Kaaret", "Martin C. Weisskopf", "Stefano Silvestri", "C. -Y. Ng", "Chien-Ting Chen", "Iván Agudo", "Lucio A. Antonelli", "Matteo Bachetti", "Luca Baldini", "Wayne H. Baumgartner", "Ronaldo Bellazzini", "Stefano Bianchi", "Stephen D. Bongiorno", "Raffaella Bonino", "Alessandro Brez", "Fiamma Capitanio", "Simone Castellano", "Elisabetta Cavazzuti", "Stefano Ciprini", "Enrico Costa", "Alessandra De Rosa", "Ettore Del Monte", "Laura Di Gesu", "Niccoló Di Lalla", "Alessandro Di Marco", "Immacolata Donnarumma", "Victor Doroshenko", "Michal Dovčiak", "Steven R. Ehlert", "Teruaki Enoto", "Yuri Evangelista", "Sergio Fabiani", "Riccardo Ferrazzoli", "Javier A. Garcia", "Shuichi Gunji", "Jeremy Heyl", "Wataru Iwakiri", "Svetlana G. Jorstad", "Vladimir Karas", "Fabian Kislat", "Takao Kitaguchi", "Jeffery J. Kolodziejczak", "Henric Krawczynski", "Fabio La Monaca", "Luca Latronico", "Ioannis Liodakis", "Simone Maldera", "Alberto Manfreda", "Frédéric Marin", "Andrea Marinucci", "Alan P. Marscher", "Herman L. Marshall", "Francesco Massaro", "Giorgio Matt", "Ikuyuki Mitsuishi", "Fabio Muleri", "Michela Negro", "Stephen L. O'Dell", "Nicola Omodei", "Chiara Oppedisano", "Alessandro Papitto", "George G. Pavlov", "Abel Lawrence Peirson", "Matteo Perri", "Melissa Pesce-Rollins", "Pierre-Olivier Petrucci", "Andrea Possenti", "Juri Poutanen", "Simonetta Puccetti", "Brian D. Ramsey", "John Rankin", "Ajay Ratheesh", "Oliver J. Roberts", "Carmelo Sgró", "Patrick Slane", "Paolo Soffitta", "Gloria Spandre", "Douglas A. Swartz", "Toru Tamagawa", "Fabrizio Tavecchio", "Roberto Taverna", "Yuzuru Tawara", "Allyn F. Tennant", "Nicholas E. Thomas", "Francesco Tombesi", "Alessio Trois", "Sergey Tsygankov", "Roberto Turolla", "Jacco Vink", "Kinwah Wu", "Silvia Zane" ]
astro-ph.HE
[ "astro-ph.HE" ]
Josephine Wong joswong@stanford.edu 0000-0001-6395-2066]Josephine Wong Department of Physics and Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, California 94305, USA 0000-0001-7263-0296]Tsunefumi Mizuno Hiroshima Astrophysical Science Center, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan 0000-0002-8848-1392]Niccoló Bucciantini INAF Osservatorio Astrofisico di Arcetri, Largo Enrico Fermi 5, 50125 Firenze, Italy Dipartimento di Fisica e Astronomia, Università degli Studi di Firenze, Via Sansone 1, 50019 Sesto Fiorentino (FI), Italy Istituto Nazionale di Fisica Nucleare, Sezione di Firenze, Via Sansone 1, 50019 Sesto Fiorentino (FI), Italy 0000-0001-6711-3286]Roger W. Romani Department of Physics and Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, California 94305, USA 0000-0001-9108-573X]Yi-Jung Yang Graduate Institute of Astronomy, National Central University, 300 Zhongda Road, Zhongli, Taoyuan 32001, Taiwan Laboratory for Space Research, The University of Hong Kong, Cyberport 4, Hong Kong 0009-0007-8686-9012]Kuan Liu Guangxi Key Laboratory for Relativistic Astrophysics, School of Physical Science and Technology, Guangxi University, Nanning 530004, China 0000-0002-9370-4079]Wei Deng Guangxi Key Laboratory for Relativistic Astrophysics, School of Physical Science and Technology, Guangxi University, Nanning 530004, China Hiroshima University, School of Science, 1-3-1 Kagamiyama, Higashi-Hiroshima, Japan 0000-0002-0105-5826]Fei Xie Guangxi Key Laboratory for Relativistic Astrophysics, School of Physical Science and Technology, Guangxi University, Nanning 530004, China INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0001-7397-8091]Maura Pilia INAF Osservatorio Astronomico di Cagliari, Via della Scienza 5, 09047 Selargius (CA), Italy 0000-0002-3638-0637]Philip Kaaret NASA Marshall Space Flight Center, Huntsville, AL 35812, USA 0000-0002-5270-4240]Martin C. Weisskopf NASA Marshall Space Flight Center, Huntsville, AL 35812, USA 0000-0002-8665-0105]Stefano Silvestri Istituto Nazionale di Fisica Nucleare, Sezione di Pisa 0000-0002-5847-2612]C.-Y. Ng Department of Physics, The University of Hong Kong, Pokfulam, Hong Kong 0000-0002-4945-5079]Chien-Ting Chen Science and Technology Institute, Universities Space Research Association, Huntsville, AL 35805, USA 0000-0002-3777-6182]Iván Agudo Instituto de Astrofísica de Andalucía—CSIC, Glorieta de la Astronomía s/n, 18008 Granada, Spain 0000-0002-5037-9034]Lucio A. Antonelli INAF Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monte Porzio Catone (RM), Italy Space Science Data Center, Agenzia Spaziale Italiana, Via del Politecnico snc, 00133 Roma, Italy 0000-0002-4576-9337]Matteo Bachetti INAF Osservatorio Astronomico di Cagliari, Via della Scienza 5, 09047 Selargius (CA), Italy 0000-0002-9785-7726]Luca Baldini Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy Dipartimento di Fisica, Università di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy 0000-0002-5106-0463]Wayne H. Baumgartner NASA Marshall Space Flight Center, Huntsville, AL 35812, USA 0000-0002-2469-7063]Ronaldo Bellazzini Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. 
Pontecorvo 3, 56127 Pisa, Italy 0000-0002-4622-4240]Stefano Bianchi Dipartimento di Matematica e Fisica, Università degli Studi Roma Tre, Via della Vasca Navale 84, 00146 Roma, Italy 0000-0002-0901-2097]Stephen D. Bongiorno NASA Marshall Space Flight Center, Huntsville, AL 35812, USA 0000-0002-4264-1215]Raffaella Bonino Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Via Pietro Giuria 1, 10125 Torino, Italy Dipartimento di Fisica, Università degli Studi di Torino, Via Pietro Giuria 1, 10125 Torino, Italy 0000-0002-9460-1821]Alessandro Brez Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy 0000-0002-6384-3027]Fiamma Capitanio INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0003-1111-4292]Simone Castellano Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy 0000-0001-7150-9638]Elisabetta Cavazzuti ASI - Agenzia Spaziale Italiana, Via del Politecnico snc, 00133 Roma, Italy 0000-0002-0712-2479]Stefano Ciprini Istituto Nazionale di Fisica Nucleare, Sezione di Roma "Tor Vergata", Via della Ricerca Scientifica 1, 00133 Roma, Italy Space Science Data Center, Agenzia Spaziale Italiana, Via del Politecnico snc, 00133 Roma, Italy 0000-0003-4925-8523]Enrico Costa INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0001-5668-6863]Alessandra De Rosa INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0002-3013-6334]Ettore Del Monte INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0000-0000-0000]Laura Di Gesu ASI - Agenzia Spaziale Italiana, Via del Politecnico snc, 00133 Roma, Italy 0000-0002-7574-1298]Niccoló Di Lalla Department of Physics and Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, California 94305, USA 0000-0003-0331-3259]Alessandro Di Marco INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0002-4700-4549]Immacolata Donnarumma ASI - Agenzia Spaziale Italiana, Via del Politecnico snc, 00133 Roma, Italy 0000-0001-8162-1105]Victor Doroshenko Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, 72076 Tübingen, Germany 0000-0003-0079-1239]Michal Dovčiak Astronomical Institute of the Czech Academy of Sciences, Boční II 1401/1, 14100 Praha 4, Czech Republic 0000-0003-4420-2838]Steven R. Ehlert NASA Marshall Space Flight Center, Huntsville, AL 35812, USA 0000-0003-1244-3100]Teruaki Enoto RIKEN Cluster for Pioneering Research, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan 0000-0001-6096-6710]Yuri Evangelista INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0003-1533-0283]Sergio Fabiani INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0003-1074-8605]Riccardo Ferrazzoli INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0003-3828-2448]Javier A. 
Garcia NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA 0000-0002-5881-2445]Shuichi Gunji Yamagata University,1-4-12 Kojirakawa-machi, Yamagata-shi 990-8560, Japan 0000-0001-9739-367X]Jeremy Heyl University of British Columbia, Vancouver, BC V6T 1Z4, Canada 0000-0002-0207-9010]Wataru Iwakiri International Center for Hadron Astrophysics, Chiba University, Chiba 263-8522, Japan 0000-0001-9522-5453]Svetlana G. Jorstad Institute for Astrophysical Research, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA Department of Astrophysics, St. Petersburg State University, Universitetsky pr. 28, Petrodvoretz, 198504 St. Petersburg, Russia 0000-0002-5760-0459]Vladimir Karas Astronomical Institute of the Czech Academy of Sciences, Boční II 1401/1, 14100 Praha 4, Czech Republic 0000-0001-7477-0380]Fabian Kislat Department of Physics and Astronomy and Space Science Center, University of New Hampshire, Durham, NH 03824, USA RIKEN Cluster for Pioneering Research, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan 0000-0002-0110-6136]Jeffery J. Kolodziejczak NASA Marshall Space Flight Center, Huntsville, AL 35812, USA 0000-0002-1084-6507]Henric Krawczynski Physics Department and McDonnell Center for the Space Sciences, Washington University in St. Louis, St. Louis, MO 63130, USA 0000-0001-8916-4156]Fabio La Monaca INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy Dipartimento di Fisica, Università degli Studi di Roma “Tor Vergata”, Via della Ricerca Scientifica 1, I-00133 Roma, Italy Dipartimento di Fisica, Università degli Studi di Roma “La Sapienza”, Piazzale Aldo Moro 5, I-00185 Roma, Italy 0000-0002-0984-1856]Luca Latronico Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Via Pietro Giuria 1, 10125 Torino, Italy 0000-0001-9200-4006]Ioannis Liodakis NASA Marshall Space Flight Center, Huntsville, AL 35812, USA 0000-0002-0698-4421]Simone Maldera Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Via Pietro Giuria 1, 10125 Torino, Italy 0000-0002-0998-4953]Alberto Manfreda Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, Strada Comunale Cinthia, 80126 Napoli, Italy 0000-0003-4952-0835]Frédéric Marin Université de Strasbourg, CNRS, Observatoire Astronomique de Strasbourg, UMR 7550, 67000 Strasbourg, France 0000-0002-2055-4946]Andrea Marinucci ASI - Agenzia Spaziale Italiana, Via del Politecnico snc, 00133 Roma, Italy 0000-0001-7396-3332]Alan P. Marscher Institute for Astrophysical Research, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA 0000-0002-6492-1293]Herman L. 
Marshall MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA 0000-0002-1704-9850]Francesco Massaro Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Via Pietro Giuria 1, 10125 Torino, Italy Dipartimento di Fisica, Università degli Studi di Torino, Via Pietro Giuria 1, 10125 Torino, Italy 0000-0002-2152-0916]Giorgio Matt Dipartimento di Matematica e Fisica, Università degli Studi Roma Tre, Via della Vasca Navale 84, 00146 Roma, Italy Graduate School of Science, Division of Particle and Astrophysical Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8602, Japan 0000-0003-3331-3794]Fabio Muleri INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0002-6548-5622]Michela Negro Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803 USA 0000-0002-1868-8056]Stephen L. O'Dell NASA Marshall Space Flight Center, Huntsville, AL 35812, USA 0000-0002-5448-7577]Nicola Omodei Department of Physics and Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, California 94305, USA 0000-0001-6194-4601]Chiara Oppedisano Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Via Pietro Giuria 1, 10125 Torino, Italy 0000-0001-6289-7413]Alessandro Papitto INAF Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monte Porzio Catone (RM), Italy 0000-0002-7481-5259]George G. Pavlov Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802, USA 0000-0001-6292-1911]Abel Lawrence Peirson Department of Physics and Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, California 94305, USA 0000-0000-0000-0000]Matteo Perri Space Science Data Center, Agenzia Spaziale Italiana, Via del Politecnico snc, 00133 Roma, Italy INAF Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monte Porzio Catone (RM), Italy 0000-0003-1790-8018]Melissa Pesce-Rollins Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy 0000-0001-6061-3480]Pierre-Olivier Petrucci Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France 0000-0001-5902-3731]Andrea Possenti INAF Osservatorio Astronomico di Cagliari, Via della Scienza 5, 09047 Selargius (CA), Italy 0000-0002-0983-0049]Juri Poutanen Department of Physics and Astronomy, University of Turku, FI-20014, Finland 0000-0000-0000-0000]Simonetta Puccetti Space Science Data Center, Agenzia Spaziale Italiana, Via del Politecnico snc, 00133 Roma, Italy 0000-0003-1548-1524]Brian D. Ramsey NASA Marshall Space Flight Center, Huntsville, AL 35812, USA 0000-0002-9774-0560]John Rankin INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0003-0411-4243]Ajay Ratheesh INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0002-7150-9061]Oliver J. Roberts Science and Technology Institute, Universities Space Research Association, Huntsville, AL 35805, USA 0000-0001-5676-6214]Carmelo Sgró Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. 
Pontecorvo 3, 56127 Pisa, Italy 0000-0002-6986-6756]Patrick Slane Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA 0000-0002-7781-4104]Paolo Soffitta INAF Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy 0000-0003-0802-3453]Gloria Spandre Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy 0000-0002-2954-4461]Douglas A. Swartz Science and Technology Institute, Universities Space Research Association, Huntsville, AL 35805, USA 0000-0002-8801-6263]Toru Tamagawa RIKEN Cluster for Pioneering Research, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan 0000-0003-0256-0995]Fabrizio Tavecchio INAF Osservatorio Astronomico di Brera, Via E. Bianchi 46, 23807 Merate (LC), Italy 0000-0002-1768-618X]Roberto Taverna Dipartimento di Fisica e Astronomia, Università degli Studi di Padova, Via Marzolo 8, 35131 Padova, Italy Graduate School of Science, Division of Particle and Astrophysical Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8602, Japan 0000-0002-9443-6774]Allyn F. Tennant NASA Marshall Space Flight Center, Huntsville, AL 35812, USA 0000-0003-0411-4606]Nicholas E. Thomas NASA Marshall Space Flight Center, Huntsville, AL 35812, USA 0000-0002-6562-8654]Francesco Tombesi Dipartimento di Fisica, Università degli Studi di Roma "Tor Vergata", Via della Ricerca Scientifica 1, 00133 Roma, Italy Istituto Nazionale di Fisica Nucleare, Sezione di Roma "Tor Vergata", Via della Ricerca Scientifica 1, 00133 Roma, Italy 0000-0002-3180-6002]Alessio Trois INAF Osservatorio Astronomico di Cagliari, Via della Scienza 5, 09047 Selargius (CA), Italy 0000-0002-9679-0793]Sergey Tsygankov Department of Physics and Astronomy, University of Turku, FI-20014, Finland 0000-0003-3977-8760]Roberto Turolla Dipartimento di Fisica e Astronomia, Università degli Studi di Padova, Via Marzolo 8, 35131 Padova, Italy Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK 0000-0002-4708-4219]Jacco Vink Anton Pannekoek Institute for Astronomy & GRAPPA, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands 0000-0002-7568-8765]Kinwah Wu Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK 0000-0001-5326-880X]Silvia Zane Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK § ABSTRACT We present Crab X-ray polarization measurements using IXPE data with a total exposure of 300ks, three times more than the initial 2022 discovery paper. Polarization is detected in three times more pulsar phase bins, revealing an S-shaped +40^∘ polarization angle sweep in the main pulse and >1σ departures from the OPTIMA optical polarization in both pulses, suggesting different radiation mechanisms or sites for the polarized emission at the two wavebands. Our polarization map of the inner nebula reveals a toroidal magnetic field, as seen in prior IXPE analyses. Along the southern jet, the magnetic field orientation relative to the jet axis changes from perpendicular to parallel and the polarization degree decreases by ∼6%. These observations may be explained by kink instabilities along the jet or a collision with a dense, jet-deflecting medium at the tip. 
Using spectropolarimetric analysis, we find asymmetric polarization in the four quadrants of the inner nebula, as expected for a toroidal field geometry, and a spatial correlation between polarization degree and photon index. § INTRODUCTION Pulsar wind nebulae (PWNe) are highly-energetic astrophysical sources that consist of a central spinning neutron star (pulsar) whose powerful magnetic field (B∼10^12 G) generates a wind of relativistic (γ≳ 10^5) electrons and positrons that escape along open field lines, impinging on and carrying energy into the surrounding SN ejecta or ISM <cit.>. They can be detected across the entire electromagnetic spectrum with a non-thermal spectral energy distribution (SED) that displays a synchrotron bump that extends from the radio to hard X-rays (up to MeV) and an inverse Compton bump that can reach up to TeV <cit.> or even PeV <cit.> energies. Spatially-resolved observations reveal time-varying structures such as wisps, knots, filaments, and polar jets that point to an ongoing resupply of energy whose source we now know to be the pulsar. The pulsar itself is seen as a bright point source at the center of the PWN and its radiative signature is the pulsed light curve, which exhibits a consistent double-peaked profile from radio to gamma-rays. The origin of this radiation is believed to be located close to the boundary of the light cylinder radius, the distance at which the co-rotation velocity is equal to the speed of light. Interactions between charged particles (electrons & positrons) and magnetic fields at the light cylinder generate pulsed, polarized emission. The physical mechanism behind the pulsed emission is still an open question — a variety of models exist, each with different predictions for the pulse shape and polarization. Thus, polarization measurements of the pulsar can help constrain emission models. The Crab is one of the best-studied objects in astrophysics. It is located ∼2kpc from Earth <cit.>, and with nebular luminosity L∼ 1.3× 10^38 erg s^-1 <cit.>, is bright enough to allow study of its morphology in great spatial detail with high statistical precision. At its center, the P=33.6 ms PSR J0534+2200 is surrounded by a synchrotron nebula G184.6-5.8 that radiates strongly from radio to gamma-ray energies <cit.>. The inner nebula has polar jets and an equatorial torus wrapped around the termination shock that are prominent in X-rays. It is encased in a bubble-like structure of optical filaments formed by SN ejecta carving out cavities in the ambient ISM, with “finger-like" protrusions extending inward from the filaments toward the lower-density synchrotron nebula. Diffuse radio emission exists throughout the nebula <cit.>. Polarization has been detected in both G184.6-5.8 and PSR J0534+2200. The first polarization measurements were made in 1954 at optical energies by two independent researchers <cit.>; high polarization levels confirmed the synchrotron origin of the nebular radiation, as suggested by <cit.>. This was the first identification of synchrotron radiation in any astrophysical source. Radio polarization measurements soon followed <cit.>, and later, with the advent of photon scattering polarimeters, soft X-rays <cit.>, hard X-rays <cit.>, and gamma-rays <cit.>. In each of these cases, the electric polarization angle integrated across the nebula was approximately along the torus symmetry axis (∼125^∘, East of North), implying an azimuthal magnetic field <cit.>. 
Technological advances also later enabled temporal optical polarization studies, which allowed for phase-resolved analysis and separation of the pulsar and nebula components through isolation of the off-phase emission. In one of the most detailed optical studies of the Crab pulsar, <cit.> found that the polarization angle (PA) has a rapid monotonic sweep of about +130^∘ through the main pulse (MP) and +100^∘ through the interpulse (IP) and that the polarization degree (PD) seems to increase to a maximum before each pulse, then rapidly fall to a minimum close to the peak intensity. High angular resolution nebula polarization studies have also been conducted at optical energies <cit.> and reveal high polarization levels in the inner knot and wisps (∼60% and ∼40%, respectively) with directions oriented close to the pulsar spin axis. Similar studies at higher energies (e.g. X-rays) can provide critical information about the radiation of electrons and positrons closer to their injection site. The Imaging X-Ray Polarimetry Explorer (IXPE) <cit.>, the first space observatory dedicated to measuring X-ray polarization, has enabled such studies in the soft X-rays. With a nominal 2 - 8 keV energy range, < 100 μs resolution, and < 30” HPD (half-power diameter), IXPE has detected polarization in several PWNe, and even a few phase bins near the pulse peak for the brightest pulsars <cit.>, including the Crab <cit.>. In 2022, IXPE observed the Crab for 90 ks. <cit.> detected PD = 15.4% and PA = 105^∘ in the MP after “off-pulse” (OP) subtraction (the interval between the end of the interpulse and the start of the main pulse when the pulsar flux is at minimum). They also measured PD = 24.1% and PA = 133^∘ in the OP. The phase-integrated polarization map revealed a toroidal magnetic field wrapped around the pulsar and asymmetric PD that does not align with the intensity map. Using an improved method to separate the pulsar and the nebula polarized fluxes, <cit.> measured polarization in two MP phase bins (PD = 9%, PA = 97^∘ followed by PD = 15%, PA = 103^∘) and one IP phase bin (PD = 16%, PA = 141^∘). There was a suggestion of a PA sweep through the MP, but more measurements were necessary to establish a clear pattern. They also extracted a polarization map for the pulsar-cleaned nebula, finding a toroidal magnetic field and PD asymmetries similar to those found in <cit.>. <cit.> analyzed the nebula's magnetic field structure, including a comparison with a polarization model, and deferred spectropolarimetric analysis for future studies due to uncertainties in the spectral response at the time. In 2023, IXPE observed the Crab for an additional 210 ks. In this paper, we analyze the full 300 ks IXPE dataset of the Crab, which yields a ∼1.8× boost in signal-to-noise (S/N) and reduced systematic uncertainty relative to the initial discovery paper. Section <ref> describes the IXPE observations and the Chandra image used in the data analysis. Section <ref> presents the XSPEC spectral analysis of the nebula and its sub-regions. Section <ref> summarizes the reduction process for polarization, utilizing the simultaneous fitting technique of <cit.>, and presents the nebular polarization map and the phase-varying pulsar polarization. Section <ref> discusses possible physical interpretations of the spectral and polarization measurements. 
§ OBSERVATIONS IXPE observed the Crab at three different epochs with a total ontime of about 300 ks: (1) February 21, 2022 - March 8, 2022; (2) February 22, 2023 - April 3, 2023; and (3) October 9 - 10, 2023. Among all observations, the average livetime:ontime ratio was 0.923. Associated with this campaign, the Chandra X-Ray Observatory (CXO) observed the Crab for an effective 1.33 ks exposure on March 15, 2022, one week after the first IXPE exposure. §.§ IXPE The first IXPE Crab observation (ObsID 01001099) was conducted in two segments, the first segment from February 21 - 22, 2022 with a spacecraft roll angle of 158.0^∘ (East of North) and the second segment from March 7 - 8, 2022 with a roll angle of 158.3^∘. The ontime for each segment was ∼ 43 ks and ∼ 49 ks, respectively. Because the offset between the optical axis and the spacecraft axes had not yet been measured and could not be taken into account during the target pointing, the optical axis was displaced from the target by ∼ 2.74'. We used tool to generate effective area and modulation response functions that account for this offset using the latest on-axis response files in the HEASARC CALDB database (XRT version 20231201, GPD version 20240125). The second IXPE Crab observation (ObsID 02001099) was conducted in two segments, the first segment from February 22 - 23, 2023 with a roll angle of 158.0^∘ and the second segment from April 1 - 3, 2023 with a roll angle of 158.9^∘. Each segment ontime was ∼ 74 ks. The third IXPE Crab observation (ObsID 02006001) was conducted in one segment from October 9 - 10, 2023 with a roll angle of 339.0^∘ and ∼ 60 ks ontime. For the second and third IXPE observations, we made vignetting-corrected response functions using . This was necessary to obtain a good agreement in the fitted spectral flux (see Section <ref>) between the observations with 2-3% residual differences. We obtained the Level 2 data files for these observations from the HEASARC archive[<https://heasarc.gsfc.nasa.gov/docs/ixpe/archive>]. All data files were processed with the following steps before data analysis: (1) Particle and instrumental background events were removed according to the <cit.> algorithm and by filtering for the good time intervals (GTI). The Di Marco background rejection algorithm removed less than 2% of the total events. The GTI filter, which excludes observational periods with poor aspect <cit.>, removed less than 1% of events from each detector unit (DU). (2) Barycentric correction was performed using the tool in . The JPL-DE430 solar ephemeris was utilized with the position of the source set at R.A. = 5^h 34^m 31.86^s and Decl. = 22^∘00'51.3” (J2000). (3) The WCS (world coordinate system) was bore-sighted by comparing the 125”× 125” Stokes I data map centered on the pulsar in the MP window (Δ = 0.963-0.987) with the simulated Stokes I map and adjusting each observation's R.A. and Decl. in 0.1” increments to minimize the χ^2-value (a schematic version of this grid search is sketched below). See Section <ref> for a description of the simulation procedure. By using the MP window, the alignment is keyed to the pulsar position, set at the aforementioned R.A. and Decl. This aspect correction, while small (typically ∼1^'', always <4^''), proved quite important to polarization measurements, improving the agreement of the IXPE data with our flux model by a factor of ∼ 3.5×. (4) The pulse profile was folded with the software tool. Table <ref> lists the Jodrell Bank Observatory (JBO) ephemeris used for each observation epoch. 
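To make step (3) concrete, the following is a minimal, illustrative sketch of the χ^2 grid search over small R.A./Decl. shifts. The function name, pixel scale, error model, and search range are assumptions made for illustration only and are not taken from the IXPE pipeline.

```python
import numpy as np
from scipy.ndimage import shift as image_shift

def boresight_offset(data_map, model_map, max_arcsec=4.0, step_arcsec=0.1, pix_arcsec=2.6):
    """Grid-search the small R.A./Decl. shift minimizing chi^2 between a binned
    Stokes I data map and a simulated model map (illustrative sketch only)."""
    err = np.sqrt(np.clip(data_map, 1.0, None))      # simple Poisson errors, floored at 1 count
    steps = np.arange(-max_arcsec, max_arcsec + step_arcsec, step_arcsec)
    best = (0.0, 0.0, np.inf)
    for dx in steps:                                  # trial shift along R.A. (arcsec)
        for dy in steps:                              # trial shift along Decl. (arcsec)
            shifted = image_shift(model_map, (dy / pix_arcsec, dx / pix_arcsec), order=1)
            chi2 = np.sum(((data_map - shifted) / err) ** 2)
            if chi2 < best[2]:
                best = (dx, dy, chi2)
    return best                                       # (delta R.A. ["], delta Decl. ["], chi^2)
```

In the analysis itself the adjustment is applied to each observation's pointing rather than to the model; the sketch only illustrates the χ^2 minimization over a grid of offsets.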
Figure <ref> presents the total IXPE count map of the Crab, cropped to the 3.25' × 3.25' region used for polarization analysis. §.§ Chandra CXO ObsID 23539 was obtained on March 15, 2022, one week after the conclusion of the first IXPE Crab observation. It was taken in 1:16 subframe mode (0.2s frame time) for 10 ks for a total livetime of 1.33 ks. These data were used to simulate the IXPE observation of the Crab PWN by passing the Level 2 event file through the IXPE instrument response using the tool and instantiating an object. See Section <ref> for more details about the simulation procedure. Our IXPE observations extend out ∼ 2.5 years from the CXO observation. The bright inner wisps vary on the year-timescale, but the expected shifts are too small to affect the intensity on the IXPE PSF scale. Somewhat larger shifts are associated with the southern jet, but they appear on the decade-timescale so our CXO reference should be adequate. Several artifacts needed to be removed from the Level 2 event file to produce a good quality file for use. <cit.> noted two artifacts – CCD saturation at the pulsar position and readout streaks due to out-of-time events from the pulsar and the nebula — and reported correction methods. Here, we describe an improved technique to remove the nebula readout streak that reduces artificial jumps due to sampling in discrete regions. A 495”×345” rectangle tilted along the readout direction encompassing the nebula readout streak was divided into a grid of 15”×15” pixels. For each row, we estimated the excess counts and subtracted it from each pixel in that row. By correcting in smaller regions and adjusting the number of excess counts for each row, we produce a smoother readout-corrected image with fewer trail artifacts. We also found that pileup, which occurs when two or more events land on a CCD pixel within the same readout frame, was present in our CXO observation. The tool reported pileup fractions as high as ∼ 20% in the bright Doppler-boosted NW region of the nebula. Pileup effects underestimate the local count rate and distort the spectrum. To correct for this, we re-normalized the number of counts by 1/(1-p_f)^1.5, where p_f is the reported pileup fraction for a given pixel. The 1.5 exponent heuristically addresses the effect of spectral distortions to the count rate. Note that this renormalization scheme only corrects the count rate and does not fix the spectrum. To do so, we would need to run a forward model of the pileup distortion with a template of the true spectrum of the nebula, which was not available. To minimize the effect of this spectral distortion on the polarization measurements, we have used a single 2-8 keV energy band in our analysis. See Figure <ref> for images before and after correcting for these artifacts as well as a distribution map of the pileup fraction across the nebula. § XSPEC SPECTRAL ANALYSIS We performed spectropolarimetric analysis to investigate the positional dependence of spectral and polarization properties in the nebula. We defined a small ellipse, hereafter called “Region 1," that contains the central X-ray torus with major and minor axes of 41.3” and 18.8” and a position angle of 126.3^∘ <cit.>. We defined two outer regions, Region 2 and Region 3, with the same area as Region 1 and with respective inner and outer radii of 26.6”, 58.4” and 32.6”, 71.5”. Both regions were further divided along the major and minor axes into four sub-regions, referred to as “Region 2N", “Region 3N", etc. 
See Figure <ref> for a diagram of the selected regions. We extracted the Stokes I, Q, and U spectrum of each region using the standard software . For Region 1, we used only data in the off-pulse period (Δ = 1.563-1.863) to minimize pulsar contamination. For the other regions, the spectral and polarimetric parameters change by less than 5% if we use all data so we do not apply a phase cut to simplify the analysis. For background subtraction, we extracted a spectrum from an annulus centered on the pulsar position with the inner and outer radius of 2.5' and 3.0', respectively. We found that the background has negligible effects for all regions and do not subtract it in the spectral analysis for simplicity. As first discussed by <cit.>, we also needed to address the effect of “leakage", which is the spatial spreading of polarized flux, preferentially in the direction of the polarization, due to imperfect reconstruction of the photon position in the IXPE detector. <cit.> present a correction technique using detailed 2D sky-calibrated IXPE PSFs with a code library publicly available in the Github repository [<https://github.com/jtdinsmore/leakagelib>]. Using the code and the polarization parameters obtained through simultaneous fitting (see Section <ref>), we generated 5” Stokes I, Q, and U leakage maps binned into 4 energy bands (2-2.8, 2.8-4, 4-5.6, and 5.6-8 keV). For each region, we extracted the Stokes I, Q, and U leakage spectra, normalized by the Stokes I spectrum, and fit them with a power-law model. Then we calculated the model leakage spectrum in the standard energy binning (0.04 keV) and subtracted it from the total spectrum. Using our leakage-corrected Stokes I, Q, and U spectra, we performed spectropolarimetric fits using the command in with an absorbed power-law model and energy-independent polarization ( × × in ). To mitigate the low photon statistics, we fixed the absorption density to the canonical value of 0.3 × 10^22  cm^-2 <cit.> for regions other than Region 1. We fitted the Region 1 absorption density to obtain better fit statistics, but the fitted value remained close (within 2σ) of the canonical value. Fit results are summarized in Table <ref> (with 1σ statistical errors) and Figures <ref> and <ref>. We were able to obtain agreement of the total flux within 2-3% between observations. Our photon indices are systematically larger by a few percent than those reported in the literature. For example, we find a photon index of 2.160± 0.007 for the central region but <cit.> reports values around ∼ 2.0 for the inner nebula. This difference is likely due to contributions from the softer, outer regions of the nebula <cit.> and possibly also instrument calibration. Effects on the polarization parameters and the relative value of the spectral parameters should be much smaller. § SIMULTANEOUS FIT POLARIZATION We used the simultaneous fitting technique described in <cit.> to extract the nebula and the pulsar polarization parameters. The technique requires a model for the pulsar and the nebula flux as a function of phase, energy, and spatial position. The Crab pulsar model was created by running an simulation of a periodic point source with phase-varying spectral parameters obtained from CXO HRC-LETGS data by <cit.>. The Crab nebula model was made by instantiating an object with the artifact-cleaned CXO ObsID 23539 in . 
For both models, the different efficiencies of the CXO and IXPE detectors were accounted for by passing the CXO measurements through the ratio of the IXPE : CXO effective areas and applying the IXPE point spread function. A 750 ks simulation, more than 10× longer than any one segment exposure, was generated for each segment using its specific telescope roll and instrument response function and normalized by its specific ontime duration. A small scaling factor was applied to match the simulated and the observed IXPE light curves. The pulsar scaling factor was ∼ 0.75 and the nebula scaling was ∼ 0.90. A small normalization constant, fixed at 1 for DU1 and within 3.5% for the other DUs, was applied for each detector unit to account for calibration differences. Note that the <cit.> Crab pulsar spectrum was derived using 0.3-3.8 keV CXO data within a radius of 1.63”. The spectral analysis done by <cit.> with BeppoSAX MECS used 1.6-10 keV data, closer to the IXPE nominal energy range, but with an extraction radius of 4', which has considerably more nebula contribution. They measured phase-varying photon indices about +0.2 larger than those found by <cit.>. To test the sensitivity of our method to the spectral model, we ran the same analysis using a softer pulsar spectrum with photon indices boosted by +0.2 and found that the Stokes q and u parameters differed by at most ∼ 1.5 σ_error of the original fit for the pulsar and 0.3 σ_error for the nebula. <cit.> isolates the pulsar more reliably than <cit.> due to the use of a smaller extraction region so the larger photon index measured by <cit.> should at least partially be attributed to nebula contamination. Therefore, these values represent upper limits for the uncertainty of our measurements due to the imperfect estimate of the pulsar photon index. To correct for “leakage" effects, we took an iterative approach to estimate and subtract the leakage from the data and find the leakage-corrected polarization parameters: (1) an initial fit for the polarization is performed using uncorrected IXPE data; (2) these parameters are input into to calculate Stokes I, Q, and U leakage, which we subtract from the data; (3) we fit for new polarization parameters using the leakage-corrected dataset; and (4) this process is repeated until the average fractional change of the parameters is less than 10^-5, a standard value used in (i.e. Python) software packages for relative comparisons. We found that about three iterations were required to reach convergence and that the Stokes q and u parameters changed more substantially (within about ± 0.03) for the nebula; for the pulsar, most (95%) of the bins changed by within ± 0.01. As described in <cit.>, simultaneous fitting is a binned analysis method. For our analysis, we used 20 variable-width phase bins, tailored to be narrower in the pulses to probe the rapid sweep of the pulsar polarization at these phases, and a 13× 13 15” grid of spatial pixels. See Table <ref> for exact phase bin selection in the main and inter-pulses. Since the leakage correction algorithm requires fine (max 5”) spatial bins to resolve the PSF structure, we calculated the leakage with 5” pixels and regrouped them into 15” pixels before subtracting it from the binned data. For energy binning, we used a single 2-8 keV bin because finer energy binning significantly increases the number of low-count (< 10 counts) bins. 
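The iterative leakage-correction loop described above can be summarized schematically as follows. Here `fit_polarization` and `compute_leakage` are placeholders standing in for the simultaneous-fit and leakagelib steps; they are not real library calls, and the array handling is simplified for illustration.

```python
import numpy as np

def iterative_leakage_fit(binned_stokes, fit_polarization, compute_leakage,
                          tol=1e-5, max_iter=10):
    """Alternate between fitting polarization and subtracting the predicted
    I/Q/U leakage until the mean fractional parameter change drops below tol."""
    params = fit_polarization(binned_stokes)                 # initial fit on uncorrected data
    for _ in range(max_iter):
        corrected = binned_stokes - compute_leakage(params)  # subtract predicted leakage maps
        new_params = fit_polarization(corrected)
        denom = np.where(params == 0, 1.0, params)
        change = np.mean(np.abs((new_params - params) / denom))
        params = new_params
        if change < tol:                                     # about three iterations in practice
            break
    return params
```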
Our fitting approach solves for the polarization parameters using least-squares regression, and thus, assumes Gaussian-distributed data. For N events with the ψ-distribution function described in <cit.>, Stokes Q and U are essentially Gaussian by N=10. With the aforementioned phase and spatial binning and three equally-spaced energy bins within 2-8 keV, 52% of data bins would have fewer than 10 counts, about half from the 6-8 keV energy bin. Without energy binning, only 14% of the bins fall below 10 counts. To use energy bins while minimizing low-count bins, we could use fewer phase bins or larger pixels, but this would degrade resolution of the rapid changes in polarization from the pulsar emission and/or over-smooth the PSF structure, which separates the pulsar from the nebula flux. The nebula is indeed softer with a photon index of ∼ 2 <cit.> compared to ∼ 1.4-1.7 <cit.> for the pulsar, so an energy-dependent analysis could make a slight improvement in separating the pulsed and the nebular signals. However, this would require substantial additional exposure or a full-Poisson statistics likelihood analysis (which would be much more computationally expensive than our least squares solution), and so we have elected to retain the best spatial and temporal resolution in a single energy bin. In summary, we have utilized the simultaneous fitting technique to separate the nebula and the pulsar polarization parameters. This technique requires Stokes I models, which we generated by taking the nebula spectral map and the pulsar phase-varying spectrum obtained from CXO observations and passing them through the IXPE instrument response. The model and the data were binned into 20 variable-width phase bins and 13× 13 15” pixels and an energy range of 2-8 keV. Leakage removal was performed in conjunction with the simultaneous fitting in an iterative process using the code. The fit statistic of our final iteration was χ_red^2=1.23 with χ^2=124066.0 and DOF=100708. The χ_red^2 > 1 may be due to remaining mismatches between the observation and our flux model. Our model can be further refined with improved calibration of the instrument response functions, correction of pileup spectral effects in the CXO image, energy-dependent PSFs, and joint IXPE/CXO observations to obtain the most current state of the nebula. §.§ Crab Nebula The Crab Nebula polarization map is depicted in Figure <ref>. The black lines indicate the magnetic field direction, perpendicular to the measured electric vector polarization angle (EVPA), with lengths scaled by the polarization degree. A 5σ significance cut and a 20000-count flux cut have been applied. The most polarized regions are located in the north and south edges of the torus, with the highest PD = (44 ± 1)% and PD = (47 ± 1)% in these regions, respectively. Two pixels in the jet have significant polarization: one in the body, where the magnetic field appears perpendicular to the jet, and another at the tip, where it appears parallel, with PD = (22 ± 2)% and PD = (19 ± 2)%, respectively. To investigate the jet polarization further, we performed simultaneous fitting with 25×25 5” pixels (and 16 phase bins, to minimize the number of low-count bins to ∼ 10%) to obtain a higher resolution map, divided the map into different regions along the jet, and determined the integrated polarization in each region. 
We verified that the 5” polarization map is consistent with that of the original binning, with the most polarized regions located in the same high-PD regions labeled in Figure <ref> and having similar polarization degree and polarization angle values throughout the nebula. Figure <ref> depicts the result of the jet analysis. We created four regions along the jet: two cyan regions at the base (with bracketing blue and green regions for background subtraction) and one region each for the body (white) and the tip (yellow). For the tip and body, no background subtraction was necessary since the torus does not overlap. We simply summed the pixels within each region and found that the body is polarized with PD=(27 ± 1)% and PA =(144 ± 1)^∘ and the tip is polarized with PD=(21 ± 2)% and PA=(153 ± 3)^∘. As shown in Figure <ref>, these polarization angles suggest that the magnetic field is oriented perpendicular and parallel relative to the jet axis in the body and the tip, respectively. Only 5” pixels with >3σ polarization measurements were included in the calculation. Some concern might be raised about the potential for contamination from the torus and adjacent jet regions. Indeed, using to simulate the central nebula region and the two jet regions individually, we estimate that the torus contributes approximately 23% and 7% of the total flux in the body and tip regions, respectively, and that the body contributes ∼ 13% of the total flux in the tip, and that the tip contributes ∼ 5% of the total flux in the body. This means that 28% and 20% of the flux in the body and the tip, respectively, may be attributed to these regions. In fact, the total background fraction, including contributions from other areas of the nebula, is estimated to be ∼ 60% and ∼ 50% for the body and tip, respectively. To determine how significantly the background affects our measurements, we ran a simultaneous fitting procedure where we included the effect of the PSF flux redistribution in the nebula (it is always computed for the pulsar). That is, for each 5” pixel, we modeled its contribution to each of the other pixels in the flux map. Using this model should eliminate background contributions on greater than 5” scales. For this fine pixel scale, we needed additionally to regularize the cost function (the objective function that is minimized in the least-squares analysis) with a penalty for large swings between adjacent pixels. We obtained polarization values consistent with our standard analysis, with PD=30% and PA=145^∘ in the body and PD=22% and PA=159^∘ at the tip. This suggests that our polarization measurements are not significantly biased by the background. However, given the high background percentage, the true uncertainty may be larger than the simple statistical error reported here. Higher spatial resolution would be helpful to isolate the jet polarization more confidently. We also attempted to measure the polarization in the base regions. Since the jet overlaps with the torus in these regions, we selected flanking fields to estimate the torus flux and subtracted it from the flux in the base. Our results were inconclusive, producing unphysical PD > 100% with large uncertainties. We note that, in the 5” polarization map, the polarization degree of the pixels in the base regions was slightly lower than that of the pixels in the bracketing (background) regions, which suggests that the jet may have a different polarization orientation than the background torus. 
Higher angular resolution polarization imaging would be immensely helpful in isolating the jet to test this hypothesis. §.§ Crab Pulsar The Crab pulsar X-ray polarization measurements are plotted against the optical (<cit.>, obtained via private communication) in traditional PD/PA format (Figure <ref>), useful for model comparison, and in Stokes format (Figure <ref>). In the traditional plots, polarization parameters below 3σ cannot reliably be reported with 1D error bars due to PD-PA covariance. Hence, we have restricted the phase range to the MP and the IP and marked marginally significant (and for continuity, one < 1.9σ in the IP) measurements with a different color. The polarization values and uncertainties are listed in Table <ref>. Among the 20 phase bins, we detect polarization in six phases within the MP and two phases within the IP. In the MP, the PD appears to rise from ∼ 7.5% and reach a maximum of ∼ 15% near the peak phase before falling back to its previous level. By comparison, in the optical, the PD is at a constant ∼ 12.5% at the rising edge of the pulse and falls to ∼ 2.5% right after the optical peak phase. In the X-rays, the PA has an approximately +40^∘ sweep between phases 0.958 and 1.0155. Before the peak, the X-ray PA curve nearly matches the optical PA curve, but afterwards, it appears to rise more slowly and lag behind the optical curve. In the IP, the two significant measurements have PD between 5% and 10%, bracketing one low-significance measurement near the pulse peak. By comparison, the optical PD is at a constant ∼ 10% at the rising edge of the pulse and falls through the pulse to 2.5%. The X-ray IP PA values lie near the optical values and hint at an upward sweep. Speculating that the low X-ray PD at the center of the IP could be attributed to a rapid angle sweep across this bin, we tried dividing it into two equally-spaced bins. However, the polarization still could not be significantly measured in these smaller bins. In addition, we tested for a smooth polarization sweep across this phase range, by partitioning the data in small (0.00017-width) bins, subtracting the background nebula polarization using the simultaneous-fit measurements of Section <ref>, and fitting a linear model. The polarization slope was only significant at ∼ 1.2σ. With this result, we are not able to conclude whether there is a PA sweep at the IP center. In Figure <ref>, we can see that many bins have >1σ differences between the optical and X-ray in the Stokes parameters. In the MP, the X-ray Stokes q values fall to a minimum at the center of the pulse before sweeping rapidly up while the optical counterpart monotonically increases through the pulse. Both the X-ray and optical Stokes u values dip leading up to the pulse peak, then rise back up, but the X-ray curve has a sharper dip and reaches a lower minimum. In the IP, no conclusive trends can be inferred with only two significant X-ray measurements. Notably, however, the X-ray and optical Stokes values do differ by ≳ 1σ in both IP bins. § DISCUSSION While detailed emission models are needed for a full confrontation of these data with theory, we can discuss qualitative implications of our polarization measurements for the physical conditions in the Crab PWN and pulsar. From our spectropolarimetric analysis, we find that the polarization angle deviates from that of Region 1 in diametric ways between the north/south and east/west outer regions, as expected in a toroidal geometry. 
Also, we can see a clear directional dependence of the photon index, suggesting that the energy of the emission is affected not only by synchrotron burnoff, which increases with the distance from the termination shock and was also observed by <cit.>, but also by some other effect. Although the root cause of this dependence is not clear, it is worth noting that photon index is hardest in the West, where the polarization degree is smallest, and is softest in South, where the polarization degree is largest. <cit.> also report an extended wing of hard emission towards the southwestern edge of the nebula. This observation might be consistent with turbulent (re-)acceleration, which would lead to a harder spectrum in a local region of lower polarization. Indeed, Regions 2W and 3W sit adjacent to the western bay, impingement with which could cause increased turbulence. In the nebula polarization map, the high polarization at the north and south of the torus, with PD ∼ 45-50%, and the depolarization on the northeast and southwest sides, where polarization direction changes rapidly, are consistent with the findings of <cit.>. In the jet, the polarization degree drops by ∼ 6% as one moves downstream along the jet and the angle changes relative to the jet axis from perpendicular to parallel. This observation may be explained by the growth of kink instabilities along the jet. In the 3D relativistic MHD simulations by <cit.>, the jet is subject to instabilities while freely propagating within the PWN flow. Such behavior is seen in CXO monitoring of the Vela pulsar jet <cit.>. They find that the jet deflection radius increases with the magnetization parameter σ, suggesting that magnetic instabilities are driving the deflection. They also see that the horizontally-averaged direction of the magnetic field is initially perpendicular to the flow velocity but bends and acquires a parallel component by the tip. Our jet polarization measurements are consistent with these simulated observations. Alternatively, collision against a dense medium (e.g. an optical filament) may be causing a hoop stress-confined jet (with initial field dominated by a toroidal component) to bend, compressing and amplifying the magnetic field parallel to the collision shock front. If accompanied by turbulence, this would also lower the polarization fraction, as seen here. From our pulsar measurements, we obtain a 3σ upper-limit of the bridge (Δ = 1.028-1.348) phase-averaged PD=22%. Phase-averaged total pulsar polarization is insignificant with Stokes q = -0.007±0.047 and Stokes u = -0.075±0.047. Comparing our polarization sweep with that previously measured by <cit.> using only the 2022 IXPE Crab observation, a curious difference may be noted: they measured Stokes u = -0.158 ± 0.039 at the single phase bin under the IP peak, which is >2σ higher than the IP Stokes u measurements reported here. To investigate this, we ran simultaneous fitting for each observation segment and found that Stokes u was approximately -0.15 for the first segment at the single bin under the IP peak (bin 17). Meanwhile, the other segments reported lower values within ±0.05. The large Stokes u reported in <cit.> is thus likely a statistical fluctuation and is not changed by the modeling improvements described in the present paper. We have attempted to compare our pulsar polarization measurements with a simple analytic striped wind model <cit.>. 
From our analysis, we find that the pulse morphology is most sensitive to the assumed wind velocity structure due to the relativistic beaming of emission from the current sheet. If the emission region extends more than a few light-cylinder radii (R_lc) one finds narrow MP/IP peaks with similar intensity, incompatible with observations. While emission confined within a few R_lc produces a plausible intensity profile, the polarization sweeps are quite difficult to reproduce in this simple model. Polarization is more sensitive to the assumed magnetic field geometry, and a simple split-monopole model appears to inadequately capture the observed behavior. We note that geometry-dependent reconnection could modulate the emissivity of the current sheet and turbulence growth across the emission zone may affect the polarization degree; these may be important factors to consider in future modeling. While a detailed treatment goes beyond the scope of this paper, it is important to note that, while the excellent optical polarization data of <cit.> have been available for some time, no model has been produced which can match its behavior in detail. The fact that IXPE now detects polarization sweep structure at X-ray energies but with substantial differences with the optical values provides a new handle on the problem. Since the X-rays are generated by higher energy electrons, they have different weighting along the emission surface, and possibly different propagation and self-absorption effects, so they provide a new lever to probe this long-standing astrophysical puzzle. Of course, finer and more extended phase-resolved X-ray polarimetry would enhance this probe. In particular, additional IP phase bins are really needed to compare its structure to that seen in the optical. It could also be important to get a few well-measured bins in the bridge region, as this probes emission away from the caustic-dominated peaks. Such data may be attained with very deep IXPE observations or may require a future X-ray polarization mission with better spatial resolution and larger effective area. Acknowledgements: This work was supported in part through NASA grant NNM17AA26C administered by the Marshall Space Flight Center. The Imaging X-ray Polarimetry Explorer (IXPE) is a joint US and Italian mission. The US contribution is supported by the National Aeronautics and Space Administration (NASA) and led and managed by its Marshall Space Flight Center (MSFC), with industry partner Ball Aerospace (contract NNM15AA18C). The Italian contribution is supported by the Italian Space Agency (Agenzia Spaziale Italiana, ASI) through contract ASI-OHBI-2022-13-I.0, agreements ASI-INAF-2022-19-HH.0 and ASI-INFN-2017.13-H0, and its Space Science Data Center (SSDC) with agreements ASI-INAF-2022-14-HH.0 and ASI-INFN 2021-43-HH.0, and by the Istituto Nazionale di Astrofisica (INAF) and the Istituto Nazionale di Fisica Nucleare (INFN) in Italy. This research used data products provided by the IXPE Team (MSFC, SSDC, INAF, and INFN) and distributed with additional software tools by the High-Energy Astrophysics Science Archive Research Center (HEASARC), at NASA Goddard Space Flight Center (GSFC). N.B. was supported by the INAF MiniGrant “PWNnumpol - Numerical Studies of Pulsar Wind Nebulae in The Light of IXPE." F.X. is supported by National Natural Science Foundation of China (Grant No. 12373041). T. M. was supported by JSPS KAKENHI Grant Number 23K25882. I.L. 
was supported by the NASA Postdoctoral Program at the Marshall Space Flight Center, administered by Oak Ridge Associated Universities under contract with NASA. This paper employs the Chandra dataset, obtained by the Chandra X-ray Observatory, contained in doi: 10.25574/cdc.264 [<https://doi.org/10.25574/cdc.264>]. Facilities: IXPE, CXO. Software: <cit.>, <cit.>.
http://arxiv.org/abs/2407.12568v1
20240717135149
LTRL: Boosting Long-tail Recognition via Reflective Learning
[ "Qihao Zhao", "Yalun Dai", "Shen Lin", "Wei Hu", "Fan Zhang", "Jun Liu" ]
cs.CV
[ "cs.CV" ]
Q. Zhao et al. Beijing University of Chemical Technology, China Singapore University of Technology and Design, Singapore Nanyang Technological University, Singapore Xidian University, China Lancaster University, UK LTRL: Boosting Long-tail Recognition via Reflective Learning Qihao Zhao 1,2 (equal contribution), Yalun Dai 3 (equal contribution), Shen Lin 4, Wei Hu 1, Fan Zhang 1 (corresponding author, zhangf@mail.buct.edu.cn), Jun Liu 2,5 § ABSTRACT In real-world scenarios, knowledge distributions exhibit a long tail. Humans manage to master knowledge uniformly across imbalanced distributions, a feat attributed to their diligent practices of reviewing, summarizing, and correcting errors. Motivated by this learning process, we propose a novel learning paradigm, called reflective learning, for handling long-tail recognition. Our method integrates three processes: reviewing past predictions during training, summarizing and leveraging the feature relations across classes, and correcting gradient conflicts between loss functions. These designs are lightweight enough to plug and play with existing long-tail learning methods, achieving state-of-the-art performance on popular long-tail visual benchmarks. The experimental results highlight the great potential of reflective learning in dealing with long-tail recognition. The code will be available at <https://github.com/fistyee/LTRL>. § INTRODUCTION Real-world scenarios often exhibit a long-tail distribution across semantic categories, with a small number of categories containing a large number of instances, while most categories have only a few instances <cit.>. Dealing with Long-Tail Recognition (LTR) is a challenge as it involves not only addressing multiple small-data learning problems in rare classes but also handling highly imbalanced classification across all classes. In addition, the inherent bias towards the high-frequency (head) classes may cause the low-frequency (tail) classes to be neglected, leading to inaccurate classification results. To tackle this challenge, numerous methods have investigated learning from long-tailed datasets to develop effective models, such as data re-sampling <cit.>, re-weighting <cit.>, decoupling learning <cit.>, contrastive learning <cit.>, calibration <cit.>, transfer learning <cit.>, and multi-expert ensemble learning <cit.>. Similarly, knowledge acquisition in the human classroom often exhibits a long-tail distribution, where teachers and textbooks primarily focus on important (majority-class) knowledge. However, top students can only do well in exams if they have a balanced knowledge of the subject. These students habitually review studied knowledge post-class, summarize the connections between pieces of knowledge, and correct misconceptions after reviewing and summarizing. Inspired by these effective learning strategies, collectively named Reflective Learning (RL), we ask how models can be helped to learn in a similarly reflective way to improve long-tail recognition. Review. To answer the above question, we first explore what knowledge needs to be reviewed and learned from the past. We visualize the relationship between model predictions (logits) across randomly chosen adjacent epochs in Figure <ref>. As illustrated in Figure <ref> (a), the model exhibits less overlap in the tail class compared to the head class. 
Concurrently, as shown in Figure <ref> (b), the KL divergence between predictions across adjacent epochs is larger for the tail class. These observations indicate that the uncertainty in predictions for the tail class across adjacent epochs is larger than for the head class. However, a classification model should favor functions that give consistent output for similar data points <cit.>. Therefore, we facilitate learning by promoting consistency between past and current predictions. Specifically, we employ a distillation module to enable the model to learn from past accurate predictions to achieve this goal. Summary. Humans are adept at summarizing connections and distinctions between knowledge. However, under a long-tail distribution training setting, the provided one-hot labels lack the inter-class correlation information, which is crucial. For example, as demonstrated in Figure <ref>, when the head class "Cygnus olor" shares similar features with the tail class "Pelecanus onocrotalus", one-hot labels during the supervision process strictly categorize all these features under "Cygnus olor". Given the large sample size of the head class in the long-tailed dataset, this supervision can mislead the model to misclassify "Pelecanus onocrotalus" as "Cygnus olor", exacerbating the model's recognition bias towards the head class. To address this issue, we explicitly construct a similarity matrix to model the relationships across classes and convert it into a soft label to provide supervision. (Figure: Correlation of features among different samples in long-tailed data.) Correction. In the knowledge correction part, to emulate the behavior of humans in correcting mistakes, we introduce an effective projection technique to reduce gradient conflicts after `reviewing' and `summarizing'. It promptly rectifies erroneous knowledge and prevents the propagation of incorrect gradients. In conclusion, due to the lightweight design of these modules, our approach can easily integrate with existing long-tail learning methods as a plug-and-play solution, enhancing them to achieve state-of-the-art performance. Comprehensive experiments were conducted on popular long-tailed datasets such as CIFAR100-LT, ImageNet-LT, Places-LT, and iNaturalist. The results underscore the efficacy and potential of our method in addressing the challenges faced in long-tail recognition tasks. These results also demonstrate that learning in a manner akin to top human students, as embodied in our approach, can broadly enhance the capabilities of various deep learning methods. § RELATED WORK Long-tail recognition. Long-tail recognition methods address the challenge of imbalanced data distributions through various strategies. Re-sampling techniques, such as over-sampling minority classes or under-sampling majority classes, aim to balance the data but come with drawbacks like over-fitting and loss of crucial information, respectively <cit.>. Re-weighting methods adjust class weights based on loss modification or logits adjustment <cit.>. However, these methods can potentially hurt representation learning, and it has been observed that decoupling the representation from the classifier can lead to better features <cit.>. Ensemble methods leverage multiple experts with aggregation techniques to reduce uncertainty and have proven effective for long-tailed recognition <cit.>. 
Techniques such as LFME <cit.>, which trains experts on different dataset segments and distills their knowledge into a student model, and RIDE <cit.>, which employs a distribution-aware diversity loss and a router for handling hard samples, are noteworthy. Additionally, MDCS <cit.> aggregates experts to increase the diversity of recognition. Label space adjustment methods, like label smoothing <cit.> and Mixup <cit.>, prevent models from over-fitting to head classes. Recent approaches consider category frequencies in reconstruction to achieve better results <cit.>. However, these methods do not consider inter-class similarity information, which our method explores in combination with existing long-tail methods. Knowledge distillation balances predictions between head and tail classes <cit.>. For instance, <cit.> transfers feature knowledge from head to tail classes, but does not ensure feature correctness. NCL <cit.> proposed a nested balanced online distillation method to collaboratively transfer the knowledge between any two expert models. However, previous long-tail knowledge distillation methods do not explore the knowledge in past epochs. Consistency regularization. Consistency regularization has become a crucial technique in semi-supervised learning since it was first introduced by Bachman <cit.> and later popularized by Sajjadi <cit.> and Laine <cit.>. This method utilizes unlabeled data by enforcing the model to produce consistent outputs for similar inputs. Specifically, the discrepancy between outputs from different perturbations of the same training sample is minimized as a loss during training. Various techniques can be used to create perturbed inputs <cit.>, with a common approach being the application of two different data augmentations on the same image <cit.>. Unlike these methods, our proposed KR module is tailored for long-tail learning, utilizing consistent knowledge without additional hyper-parameters. It integrates consistency mechanisms that extract and transfer richer information from the predictions of previous epochs, thereby providing enhanced supervision. § METHOD In this section, we propose a new long-tail learning paradigm, named Reflective Learning, to boost the recognition performance of existing methods. The proposed reflective learning contains three phases: knowledge review, knowledge summary, and knowledge correction. In the following sections, we introduce these phases in detail. §.§ Preliminaries Long-tailed recognition involves the learning of a well-performing classification model from a training dataset that is characterized by a long-tailed category distribution. For clear notation, we write a C-class labeled dataset as 𝔻 = {(x_i, y_i)|1 ≤ i ≤ N}, where x_i is the i-th training sample and y_i ∈ {1, ..., C} is its ground-truth label. In this context, we use n_j to represent the number of training samples for class j, while N = ∑_j=1^Cn_j denotes the total number of training samples. To simplify our discussion, we assume that the classes are arranged in decreasing order, such that if i < j, then n_i ≥ n_j. Furthermore, an imbalanced dataset is characterized by a significant disparity in the number of instances between different classes, with some classes having significantly more samples than others, i.e., n_i ≫ n_j. Consider using a Softmax classifier to model a posterior predictive distribution. 
For a given input x_i, the predictive distribution is represented as follows: p_i^k(x_i; Θ) = e^(v_i^k/τ)/∑_c e^(v_i^c /τ) , where v_i = {f(x_i; Θ), W} denotes the logits of the DNN for instance x_i, which are calculated from the feature f(x_i; Θ) and the classifier weight W, and τ > 1 is the temperature scaling parameter (a higher τ produces a softer probability distribution <cit.>). §.§ Knowledge Review In our reflective learning paradigm, the goal of knowledge review (KR) is to look back at past knowledge during training and leverage this knowledge to improve recognition performance. From the above analysis <ref>, we observe that past epochs carry different knowledge, i.e., the same model makes different predictions for different augmentations of the same sample in adjacent epochs. However, when a percept is changed slightly, a human typically still considers it to be the same object. Correspondingly, a classification model should favor functions that give consistent output for similar data points <cit.>. Consequently, to learn consistent knowledge from the predictions of previous epochs, we employ the KL divergence between the previous and current epoch's prediction distributions as the objective function to be minimized. As demonstrated in Figure <ref>, at every epoch, our KR module optimizes the current prediction to be closer to the previous prediction to distill different and richer knowledge for the current instances. We formulate the KR loss as: ℒ_KR= ∑_x_i∈𝔻 KL(p_i,t-1(x_i;Θ_t-1) || p_i,t(x_i; Θ_t)) In detail, our KR employs the KL divergence function to perform optimization following soft distillation <cit.>, which for each instance can be formulated as: KL(p_i,t-1 || p_i,t) = τ^2 ∑_k=1^C p^k_i,t-1(x_i;Θ_t-1) · log p^k_i,t-1(x_i;Θ_t-1)/ p^k_i,t(x_i; Θ_t). However, blindly transferring and distilling knowledge of past predictions does not yield satisfactory results. For example, if the model misses the ground-truth prediction for instance x_i, then this wrong knowledge is not suitable to be transferred. Therefore, to prevent our method from introducing wrong knowledge, we only transfer and distill the knowledge that is correctly classified. Although this method is a general variant of consistency learning employed in semi-supervised learning <cit.>, it experimentally proved to be very useful in our strategy. We define a correctly classified instances (CCI) set containing all correctly classified instances as: 𝔻_CCI = {x_i ∈𝔻 | argmax(p_i(x_i;Θ)) = y_i }, where y_i denotes the ground-truth label of instance x_i. With the correct predictions of the previous epoch (t-1), we re-write the KR loss with the CCI set as: ℒ_KR= 1/|𝔻^t-1_CCI| ∑_x_i∈𝔻^t-1_CCI KL(p_i,t-1(x_i;Θ_t-1) || p_i,t(x_i; Θ_t)) §.§ Knowledge summary During the knowledge review process, we designed the objective function to help the model learn consistent information about each instance from past predictions. However, inspired by the process of humans summarizing knowledge, it is also important to learn the correlations between pieces of knowledge. Correspondingly, in long-tail recognition, we find that the traditional one-hot label lacks inter-class correlation information. When the head category shares similar features with a tail category, all these features are supervised as the head category by one-hot labels during training, and the tail category will be more inclined to be judged as the head category during prediction. To this end, we reconstruct the label space by exploiting the correlations of category features in the model. 
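For concreteness, the knowledge-review loss above can be sketched in a few lines of PyTorch-style code; the tensor names, the stored-logits buffer, and the temperature value are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def kr_loss(logits_t, logits_prev, labels, tau=2.0):
    """Knowledge-review loss: KL(previous-epoch prediction || current prediction),
    averaged over instances the previous model classified correctly (the CCI set)."""
    p_prev = F.softmax(logits_prev / tau, dim=1)      # soft targets from epoch t-1
    log_p_cur = F.log_softmax(logits_t / tau, dim=1)  # current predictions (log-probs)
    cci = logits_prev.argmax(dim=1).eq(labels)        # correctly classified instances at t-1
    if cci.sum() == 0:
        return logits_t.new_zeros(())                 # nothing to distill in this batch
    kl = F.kl_div(log_p_cur[cci], p_prev[cci], reduction="batchmean")
    return tau ** 2 * kl
```

In practice, `logits_prev` would be read from a per-sample buffer populated during epoch t-1.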
Returning to the label reconstruction: the bias of the long tail stems mainly from the classifier rather than the backbone, and cosine distances lead to less biased feature boundaries <cit.>. Therefore, the features extracted by the backbone are less biased, and cosine similarity between these features is a natural choice for learning class relationships under a long-tail distribution. Further, for the c-th class, we calculate the class center f_c as the median of all features of that class, which is denoted as: f_c = Median_x_i ∈𝔻_c(f(x_i; Θ_t-1)) where Median is a function that calculates the median of the features for category c. We use the median rather than the mean to avoid outlier features produced by data augmentation. Then, we calculate the feature correlations by cosine similarity and reconstruct the label ŷ: M = f · f^T/||f|| · ||f||, ŷ = α· Y + (1-α) · M where α is a hyperparameter, M ∈ (0, 1) is the feature similarity matrix, and Y is the label matrix obtained by extending the one-hot labels y. Finally, the KS loss is denoted as: ℒ_KS = 1/|𝔻| ∑_x_i∈𝔻 CrossEntropy(p(x_i; Θ_t), ŷ_i) §.§ Knowledge correction During the training process, our proposed KR and KS modules can be easily combined with existing LTR methods. Therefore, the overall loss (ℒ_RL) for implementation consists of two parts, the existing ℒ_LTR loss for long-tailed recognition and our ℒ_KR and ℒ_KS losses for the KR and KS modules, respectively. It is expressed as: ℒ_RL = ℒ_LTR + (ℒ_KR + ℒ_KS) Typically, humans review and summarize knowledge by making corrections to what they are currently learning. Inspired by this, we would like to revise the model, from a gradient perspective, by reviewing and summarizing the knowledge currently being learned. Let α_ij denote the angle between gradients g_i and g_j; a direction conflict between the two gradients occurs when cosα_ij < 0. Using this definition, we calculate the percentage of instances where cosα_ij is negative for each pair, as shown in Figure <ref> (b) and (c). We observe that the pair (ℒ_LTR, ℒ_KR + ℒ_KS) maintains a high rate of direction conflicts during training, not only in each layer of the model (Figure <ref> (b)) but also throughout the training process (Figure <ref> (c)). To address this issue, we introduce knowledge correction (KC) to mitigate conflicts by projecting gradients when negative transfer occurs. Negative transfer between two gradients g_i and g_j is identified when cosα(g_i, g_j) < 0. Following this identification, each gradient is projected onto the plane orthogonal to the other gradient to eliminate harmful conflicts. Therefore, the formula for projecting the gradient g_LTR onto the plane orthogonal to the gradient g_KR+KS is: ĝ_LTR := g_LTR - (g_LTR · g_KR+KS)/||g_KR+KS||^2 · g_KR+KS Eventually, as shown in Figure <ref> (a), we have the following gradient update formula: g_RL = ĝ_LTR + g_KR+KS if cos(g_LTR, g_KR+KS) < 0, and g_RL = g_LTR + g_KR+KS otherwise. § EXPERIMENTS We present the experimental outcomes on five widely adopted datasets for long-tailed recognition, which include CIFAR-100/10-LT <cit.>, ImageNet-LT <cit.>, Places-LT <cit.>, and iNaturalist 2018 <cit.>. Additionally, we conduct ablation studies specifically on the CIFAR-100-LT dataset to gain more comprehensive insights into the efficacy of our method. §.§ Implementation details. Evaluation Setup. 
For classification tasks, we assess our models after training on the long-tailed dataset by evaluating their performance on a balanced test/validation dataset, where we present the Top-1 test accuracy results. Additionally, we categorize the classes into three segments and report accuracy for each: Many-shot classes with over 100 images, Medium-shot classes containing 20 to 100 images, and Few-shot classes with fewer than 20 images. Architecture and Settings. Our experimental configuration remains consistent across all baselines and our proposed method. Following established protocols in prior research <cit.>, we deploy specific backbone architectures tailored to each dataset: ResNet-32 for CIFAR100/10-LT, ResNeXt-50/ResNet-50 for ImageNet-LT, ResNet-152 for Places-LT, and ResNet-50 for iNaturalist 2018. Standard training parameters include the use of SGD with a momentum of 0.9 and an initial learning rate of 0.1, which is reduced linearly over the training period. Others. The results from the comparative methods were sourced from their respective original publications, while our findings represent the average outcomes from three separate trials. When integrating our technique with other long-tail algorithms, we employ the optimal hyper-parameters as specified in their foundational papers. Additional details on our implementation and the statistics for hyper-parameters can be found in the Appendix. §.§ Comparisons with SOTA on benchmarks. Baselines. The proposed RL method, designed to address tail class bias through consistency regularization, can be integrated with various prevalent LT algorithms. Following previous works <cit.>, we categorize LT algorithms into three types: re-balancing, augmentation, and ensemble learning methods. For re-balancing approaches, we examined two-stage re-sampling methods such as cRT and LWS <cit.>, multi-branch models with diverse sampling strategies like BBN <cit.>, and reweight loss functions including Balanced Softmax (BSCE) <cit.> and LDAM <cit.>. For augmentation approaches, we found that general data augmentation techniques like Random Augmentation (RandAug) <cit.> are more effective than specialized long-tailed transfer learning methods. For ensemble learning methods, we followed recent trends using models like NCL <cit.>, SADE <cit.>, RIDE <cit.>, and MDCS <cit.>, which have proven to be state-of-the-art in improving performance across both head and tail categories.
Method | IF=10 | IF=50 | IF=100
Softmax | 59.1 | 45.6 | 41.4
BBN | 59.8 | 49.3 | 44.7
BSCE | 61.0 | 50.9 | 46.1
RIDE | 61.8 | 51.7 | 48.0
SADE | 63.6 | 53.9 | 49.4
Softmax+RL | 59.6 | 46.2 | 41.9
BSCE+RL | 64.5 | 52.2 | 47.9
RIDE+RL | 62.4 | 53.1 | 48.8
SADE+RL | 64.5 | 55.4 | 50.7
BSCE† | 63.0 | - | 50.3
PaCo† | 64.2 | 56.0 | 52.0
SADE† | 65.3 | 57.3 | 53.2
MDCS† | - | - | 56.1
BSCE+RL† | 64.6 | - | 51.2
PaCo+RL† | 65.1 | 57.1 | 52.8
SADE+RL† | 66.8 | 59.1 | 54.7
MDCS+RL† | - | - | 57.3
Table: Comparisons on CIFAR100-LT datasets with the IF of 10, 50, and 100. † denotes models trained with RandAugment <cit.> for 400 epochs. 
Method | Many | Medium | Few | All
Softmax | 68.1 | 41.5 | 14.0 | 48.0
Decouple-LWS | 61.8 | 47.6 | 30.9 | 50.8
BSCE | 64.1 | 48.2 | 33.4 | 52.3
LADE | 64.4 | 47.7 | 34.3 | 52.3
PaCo | 63.2 | 51.6 | 39.2 | 54.4
RIDE | 68.0 | 52.9 | 35.1 | 56.3
SADE | 66.5 | 57.0 | 43.5 | 58.8
Softmax+RL | 68.6 | 42.0 | 14.7 | 48.6
BSCE+RL | 65.6 | 49.7 | 37.9 | 54.8
PaCo+RL | 64.0 | 52.5 | 42.1 | 56.4
RIDE+RL | 68.9 | 54.1 | 38.6 | 59.0
SADE+RL | 66.3 | 58.3 | 47.8 | 60.2
PaCo† | 67.5 | 56.9 | 36.7 | 58.2
SADE† | 67.3 | 60.4 | 46.4 | 61.2
MDCS† | 72.6 | 58.1 | 44.3 | 61.8
PaCo+RL† | 67.4 | 57.3 | 37.8 | 58.8
SADE+RL† | 67.9 | 61.2 | 47.8 | 62.0
MDCS+RL† | 72.7 | 59.5 | 46.0 | 62.7
Table: Comparisons on ImageNet-LT. † denotes models trained with RandAugment <cit.> for 400 epochs. Superiority on Long-tailed Benchmarks. This subsection compares RL with state-of-the-art long-tailed methods on vanilla long-tailed recognition. Table <ref>, <ref>, <ref>, and <ref> lists the Top-1 accuracy of SOTA methods on CIFAR-100-LT, ImageNet-LT, Places-LT, and iNaturalist 2018, respectively. Our approach seamlessly integrates with existing methods, yielding performance improvements across all long-tail benchmarks. Notably, when applied to the SADE method on the ImageNet-LT dataset, our approach achieves a maximum performance boost of 4.3% in few-shot. In the Appendix, RL also outperforms baselines in experiments on long-tail CIFAR-10. RL contributes to different sample size results. To explore the reasons why RL works for long-tail scenarios, we provide a more detailed and comprehensive evaluation. Specifically, we divide the classes into multiple categories based on their sample size, namely, Many (with more than 100 images), Medium (with 20 to 100 images), and Few (with less than 20 images). Softmax trains the model using cross-entropy and performs well on many-shot classes by mimicking the long-tailed training distribution. However, it fails to perform effectively on medium-shot and few-shot classes, resulting in poor overall performance. In contrast, re-balanced long-tailed methods such as Decouple and Causal strive to achieve a uniform class distribution for better average performance, but this comes at the cost of reduced performance on many-shot classes.
Method | Many | Medium | Few | All
Softmax | 46.2 | 27.5 | 12.7 | 31.4
BLS | 42.6 | 39.8 | 32.7 | 39.4
LADE | 42.6 | 39.4 | 32.3 | 39.2
RIDE | 43.1 | 41.0 | 33.0 | 40.3
SADE | 40.4 | 43.2 | 36.8 | 40.9
Softmax+RL | 46.1 | 28.0 | 15.6 | 32.8
BLS+RL | 43.0 | 40.3 | 34.8 | 41.1
LADE+RL | 42.8 | 39.7 | 35.5 | 41.8
RIDE+RL | 43.1 | 41.9 | 36.9 | 42.1
SADE+RL | 41.0 | 44.3 | 38.7 | 42.2
PaCo† | 36.1 | 47.2 | 33.9 | 41.2
PaCo+RL† | 36.4 | 47.7 | 36.6 | 42.8
Table: Comparisons on Places-LT, starting from an ImageNet pre-trained ResNet-152. † denotes models trained with RandAugment <cit.> for 400 epochs.
Method | Many | Medium | Few | All
Softmax | 74.7 | 66.3 | 60.0 | 64.7
BLS | 70.9 | 70.7 | 70.4 | 70.6
LADE† | 64.4 | 47.7 | 34.3 | 52.3
MiSLAS | 71.7 | 71.5 | 69.7 | 70.7
RIDE | 71.5 | 70.0 | 71.6 | 71.8
SADE | 74.5 | 72.5 | 73.0 | 72.9
Softmax+RL | 75.4 | 67.1 | 61.1 | 65.5
BLS+RL | 68.8 | 72.5 | 75.9 | 73.1
LADE+RL | 64.8 | 48.9 | 36.6 | 73.6
RIDE+RL | 71.4 | 70.9 | 74.8 | 73.6
SADE+RL | 74.7 | 73.1 | 77.8 | 74.2
PaCo† | 69.5 | 73.4 | 73.0 | 73.0
SADE† | 75.5 | 73.7 | 75.1 | 74.5
NCL† | 72.7 | 75.6 | 74.5 | 74.9
PaCo+RL† | 69.6 | 73.4 | 75.9 | 73.6
SADE+RL† | 75.7 | 74.1 | 77.8 | 75.3
NCL+RL† | 72.5 | 76.7 | 77.8 | 76.5
Table: Comparisons on iNaturalist 2018. † denotes models trained with RandAugment <cit.> for 400 epochs. Tables <ref>, <ref>, and <ref> demonstrate the significant enhancement in the performance of few- and medium-shot classes achieved by the proposed RL, while maintaining high accuracy for many-shot classes. Moreover, a slight improvement is also observed in the performance of many-shot classes. 
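The many/medium/few-shot breakdown reported in the tables above follows the evaluation setup described earlier (many-shot > 100, medium-shot 20-100, few-shot < 20 training images). A minimal sketch of how such a split can be computed is given below; the function name and array handling are illustrative rather than the released evaluation code.

```python
import numpy as np

def split_accuracy(preds, labels, train_class_counts):
    """Top-1 accuracy over many-/medium-/few-shot class groups."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    counts = np.asarray(train_class_counts)            # training images per class
    groups = {
        "many":   counts > 100,
        "medium": (counts >= 20) & (counts <= 100),
        "few":    counts < 20,
    }
    correct = preds == labels
    out = {"all": correct.mean()}
    for name, mask in groups.items():
        sel = mask[labels]                              # test samples whose class is in this group
        out[name] = correct[sel].mean() if sel.any() else float("nan")
    return out
```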
RL with different backbone results. Table <ref> shows that RL obtains consistent performance improvements on various backbones. Whether the backbone is CNN-based networks (ResNet, ResNext) or Transformer-based networks (Swin Tiny and Small), RL delivers consistent accuracy gains. Comparison with other regularization-based methods. Additional experiments were conducted to evaluate and integrate our method with regularization-based methods such as Mixup <cit.>, Weight Balance <cit.>, and MiSLAS <cit.>. The Mixup stands as a representative method for data augmentation regularization, enhancing model generalization by interpolating between samples. The Weight Balance directly constrains the weights from the classifier through a regularization term, addressing the imbalance by modulating the impact of more frequent classes. The MiSLAS introduces label-aware smoothing as a regularization strategy, aimed at mitigating varying degrees of over-confidence across different classes. Unlike these methods above, our method designs a regularization loss to reduce the uncertainty of the predictions during training and provide class correlation labels for boosting existing long-tailed methods. [ht][htb]0.47 captypetable Method Resnet-50 ResNeXt-50 Swin-T Swin-S Softmax 41.6 44.4 42.6 42.9 OLTR - 46.3 - - τ-norm 46.7 49.4 - - cRT 47.7 49.9 - - LWS 47.3 49.6 - - LDAM - - 50.6 49.5 RIDE 54.9 56.4 56.3 54.2 Softmax+RL 45.8 47.3 43.7 43.6 τ-norm+RL 47.3 50.5 - - cRT+RL 48.5 51.2 - - LWS+RL 48.5 50.5 - - LDAM+RL - - 52.1 50.3 RIDE+RL 56.8 58.7 59.1 55.6 Comparisons on ImageNet-LT with different backbones. [ht]0.4 captypetable Method Many Med Few All Softmax 66.1 37.3 10.6 41.4 OLTR 61.8 41.4 17.6 - τ-norm 65.7 43.6 17.3 43.2 cRT 64.0 44.8 18.1 43.3 LDAM 61.5 41.7 20.2 42.0 RIDE 69.3 49.3 26.0 48.0 SADE 60.3 50.2 33.7 49.4 Softmax+RL 66.8 37.9 11.2 41.9 LDAM+RL 62.4 42.4 28.3 49.2 RIDE+RL 69.9 50.4 28.1 49.2 SADE+RL 60.4 50.8 35.5 50.7 Comparisons on CIFAR-100-LT(IF=100) with different sample sizes. Both MiSLAS and Weight Balance, the two regularization methods designed for long-tail distribution, employ a decoupled two-stage training approach. Therefore: a) We compared these methods with a baseline decoupled training method designed for long-tail distribution <cit.>, termed as Decouple. b) For a fair comparison, we also combined the decoupled training approach with RL (Decouple + RL), to compare it against MiSLAS and Weight Balance methods. c) For the Mixup results presented in the tables, we also utilized a decoupled training implementation. Tab. <ref> above illustrates that our method outperforms other regularization-based methods under a decoupled two-stage training setting. Additionally, the integration of other regularization-based methods into our method results in further enhancements to performance. This improvement substantiates the orthogonality and potential synergistic relationship between our approach and other regularization-based methods. § COMPONENT ANALYSIS AND ABLATION STUDY The effective of temperature τ. The temperature parameter τ is introduced to soften the previous predictions, allowing the current model to learn from a smoother, more generalized distribution. By adjusting the temperature parameter during training, we can control the trade-off between accuracy and generalization to optimize the current prediction. Higher temperature values lead to better generalization but lower accuracy, while lower temperature values lead to better accuracy but less generalization. In Figure. 
<ref> (a), we show several settings of τ on CIFAR-100-LT (IF=100) and ImageNet-LT; we observe that when τ is set to 2, the models achieve the best performance. The effectiveness of our components KR, KS and KC. Our proposed method is fundamentally composed of two primary components: Knowledge Review (KR) and Knowledge Summary (KS). As shown in Tab. <ref>, the KR component is designed to enforce consistency across all categories. As a result, it notably enhances the accuracy of the tail classes, but this comes at the expense of a slight reduction in the accuracy of the head classes. In contrast, KS facilitates learning across all categories by leveraging the inherent feature correlations, compensating for the minor drawbacks introduced by KR, and ensuring an overall improved performance. [t]0.88 captypetable 3cMethod 2cImageNet-LT 2ciNaturalist 2018 KR KS KC RIDE SADE RIDE SADE - - - 56.3 58.8 71.8 72.9 - - 58.0 59.7 72.4 73.3 - - 58.4 59.3 72.7 73.6 - 58.6 60.0 72.9 73.8 59.0 60.2 73.6 74.2 Ablation study on the components of our methods. Comparisons with different component combinations. The effect of our CCI. The component CCI also plays a key role in the training process. During the learning process, CCI filters out the probability distributions of incorrect predictions from the output of the previous epoch, so that the reference distribution for the current prediction is not contaminated by erroneous information. In Figure <ref> (c), we show the top-1 test accuracy of BSCE+RL w/ our CCI and BSCE+RL w/o our CCI on CIFAR-100-LT (IF=100). The results demonstrate that our RL with CCI leads to a significant improvement. Direct matching logits. There is another approach in the KR module to regularize the consistency, namely using the Mean Squared Error (MSE) to directly match logits. The objective function is: ℒ_MSE = 1/2(v_i,t-1-v_i,t)^2 If we are in the high-temperature limit, our KR process is equivalent to minimizing Eq. <ref>, provided the logits are zero-meaned separately for each transfer case <cit.>. In Figure <ref>, we visualize the test accuracy based on BSCE with ℒ_MSE on CIFAR-100-LT (IF=100). However, we observe a rapid decline in results compared with our KR module. This is because, at lower temperatures, the KR module pays much less attention to matching logits that are much more negative than the average. This has the potential advantage that these logits are almost completely unconstrained by the cost function used to train the model, so they can be very noisy <cit.>. § CONCLUSION In this paper, we propose Reflective Learning, which is a plug-and-play method for improving long-tailed recognition. It contains three phases: Knowledge Review, reviewing past predictions during training; Knowledge Summary, summarizing and leveraging the feature relations across classes; and Knowledge Correction, correcting gradient conflicts between loss functions. Experimental results on popular benchmarks demonstrate the effectiveness of our approach, consistently outperforming state-of-the-art methods by 1% to 5%. RL seamlessly integrates with existing LTR methods and is compatible with various backbone architectures, making it a practical and versatile solution for improving LTR performance. Limitations and Future Work. For our proposed Reflective Learning, the predictions from the model at the (t-1)-th epoch are required for training at the t-th epoch. When working with large datasets, e.g., with tens of thousands of categories, this can lead to additional memory consumption.
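As a concrete illustration of the components analysed in this section (and of the per-epoch prediction cache that causes this memory overhead), the following sketch shows a temperature-softened KL consistency term with a simple correctness mask. Function and variable names, and the exact masking rule, are our own assumptions rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def reflective_consistency(logits_t, cached_logits_tm1, labels, tau=2.0):
    """KL(softened previous prediction || softened current prediction).

    logits_t:          logits from the current epoch, shape (B, C)
    cached_logits_tm1: logits cached at epoch t-1 for the same samples
    labels:            ground-truth labels, shape (B,)
    tau:               temperature used to soften both distributions
    """
    # Correction mask (assumed rule): keep only samples whose previous-epoch
    # prediction was correct, so wrong past information is filtered out.
    keep = cached_logits_tm1.argmax(dim=1).eq(labels).float()

    log_p_t = F.log_softmax(logits_t / tau, dim=1)
    q_tm1 = F.softmax(cached_logits_tm1.detach() / tau, dim=1)

    kl = F.kl_div(log_p_t, q_tm1, reduction="none").sum(dim=1)
    # tau**2 keeps gradient magnitudes comparable across temperatures.
    return (tau ** 2) * (kl * keep).sum() / keep.sum().clamp(min=1.0)

# toy usage: combine with any long-tailed classification loss
if __name__ == "__main__":
    B, C = 8, 100
    logits_t = torch.randn(B, C, requires_grad=True)
    cache = torch.randn(B, C)          # would be stored at epoch t-1
    labels = torch.randint(0, C, (B,))
    total = F.cross_entropy(logits_t, labels) \
            + reflective_consistency(logits_t, cache, labels, tau=2.0)
    total.backward()
    print(float(total))
```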
Moreover, in this paper we have focused only on the application of Reflective Learning to long-tailed recognition. The idea can also be used in other domains (such as large language models, object or action detection, and content generation), but it needs to be adapted to the characteristics of each domain with dedicated design choices; this is left for future work. § ACKNOWLEDGEMENT This work was supported by the National Natural Science Foundation of China under Grant No. 62271034.
http://arxiv.org/abs/2407.12981v1
20240717195543
Transition to turbulence in the wide-gap spherical Couette system
[ "Ankit Barik", "Santiago A. Triana", "Michael Hoff", "Johannes Wicht" ]
physics.flu-dyn
[ "physics.flu-dyn", "astro-ph.EP", "astro-ph.SR" ]
Transition to turbulence in the wide-gap spherical Couette system July 22, 2024 ======================================================================== § ABSTRACT The spherical Couette system consists of two differentially rotating concentric spheres with a fluid filled in between. We study a regime where the outer sphere is rotating rapidly enough so that the Coriolis force is important and the inner sphere is rotating either slower or in the opposite direction with respect to the outer sphere. We numerically study the sudden transition to turbulence at a critical differential rotation seen in experiments at BTU Cottbus - Senftenberg, Germany and investigate its cause. We find that the source of turbulence is the boundary layer on the inner sphere, which becomes centrifugally unstable. We show that this instability leads to generation of small scale structures which lead to turbulence in the bulk, dominated by inertial waves, a change in the force balance near the inner boundary, the formation of a mean flow through Reynolds stresses, and consequently, to an efficient angular momentum transport. We compare our findings with axisymmetric simulations and show that there are significant similarities in the nature of the flow in the turbulent regimes of full 3D and axisymmetric simulations but differences in the evolution of the instability that leads to this transition. We find that a heuristic argument based on a Reynolds number defined using the thickness of the boundary layer as a length scale helps explain the scaling law of the variation of critical differential rotation for transition to turbulence with rotation rate observed in the experiments. § INTRODUCTION The spherical Couette system consists of two concentric spheres differentially rotating about a common axis, with the space in between filled with a viscous fluid. The differential rotation is considered `positive' when the inner sphere rotates faster than the outer sphere and `negative' when it rotates slower or in the opposite direction as the outer sphere. Being the spherical analogue of the more well-known Taylor-Couette system <cit.>, it is an interesting fluid dynamical system in its own right with very different instabilities. Applications to the interiors of astrophysical bodies (e.g., planetary interiors, stellar radiative zones) seem more obvious than in the Taylor-Couette geometry. The study of the spherical Couette system goes back to the analytical asymptotic formulation of <cit.> for an infinitely fast rotating outer sphere and an infinitesimal differential rotation. He showed that most of the fluid differential rotation remains confined within the cylinder tangent to the inner sphere equator, known as the tangent cylinder (TC), while the fluid outside the TC co-rotates with the outer boundary. A complex nested shear layer at the TC, known as the Stewartson layer <cit.>, accommodates the jump in the fluid rotation rate and its derivatives. For a spherical Couette system with a wide gap, this shear layer is the source of the first flow instabilities for a rapidly rotating outer boundary. Note that when the gap becomes narrow, the flow instabilities resemble Taylor rolls similar to the Taylor-Couette system <cit.>. Instabilities of a Stewartson layer driven by differential rotation were first studied experimentally by <cit.> for a cylindrical system with a differentially rotating disk and theoretically by <cit.>.
For the case of the spherical Couette system, Stewartson layer instabilities as well as other instabilities have been extensively studied using experiments <cit.> and numerical computations <cit.>. These studies have revealed a complex zoo of instabilities and have left many open questions. Our previous study <cit.> and the present study are based on the experiments of <cit.> (hereafter H16) in a wide-gap spherical Couette set-up. Once the radius ratio of the two spheres is fixed, the system is characterized by two parameters, the Ekman number E=ν/Ω_o L^2 and the differential rotation, ΔΩ/Ω = (Ω_i - Ω_o)/Ω_o. Here, ν is the viscosity of the fluid, L is the thickness of the spherical shell, and Ω_i and Ω_o denote the rotation rates of the inner and outer sphere, respectively. H16 and B18 both focused on the case when the differential rotation was negative, i.e., when the inner sphere rotated slower than or in the opposite direction compared to the outer sphere. At intermediate or low Ekman numbers (3× 10^-6≤ E ≤ 10^-4), as the differential rotation is made progressively more negative, the flow transitions through either four or five different hydrodynamic regimes: * an axisymmetric flow described by <cit.>. * The axisymmetric flow gives rise to a linear non-axisymmetric instability of the Stewartson shear layer with a fixed azimuthal wavenumber m. * The first instability gives way to a regime with a mode with m=1. For a certain moderate to low range of Ekman numbers (3× 10^-5≤ E≤ 10^-4), these two regimes may coincide and the first non-axisymmetric instability may occur in the form of m=1. * The above regime gives way to equatorially antisymmetric (EA) wave-like `inertial modes' which have been observed in several past studies <cit.> and formed the focus of B18. * Finally, a sharp and sudden transition to bulk turbulence takes place at a critical negative differential rotation. These regimes have been observed in simulations of <cit.> and B18 and experiments of H16. More specifically, H16 observed that the transition to turbulence was characterized by a broadband temporal power spectra. Well-defined inertial mode peaks observed on top of this broadband spectra displayed an abrupt change in frequency right at the onset of turbulence. In addition, there was an increase in the spatial extent of the axisymmetric zonal flow and a decrease in the energy content of the inertial modes. They further observed a dependence of the critical differential rotation required for transition ||_c on the Ekman number as ||_c∼ E^1/5. In the present study we concentrate on this transition to turbulence, addressing the following questions: how does the flow behave during and beyond the transition and what causes its onset. There have been a few other studies on turbulence in spherical Couette flow, but not for the radius ratios and parameter ranges used in this study. <cit.> experimentally analysed a wide-gap spherical Couette system for a stationary outer sphere and postulated that the transition to turbulence seems to follow the scenario of <cit.> but with several differences like the existence of discrete peaks on top of a continuous background power spectrum. <cit.> experimentally investigated a thin-gap system (r_i/r_o = 0.9) with both spheres rotating over a wide parameter range. They noticed that the transition to turbulence involves an onset of “spatial intermittency” in the form of small-scale structures on top of large-scale flow. 
<cit.> studied two different gap widths (r_i/r_o = 0.75 and 0.67) and found that the transition to turbulence was characterised by broadband temporal power spectra with some well-defined peaks. The rest of the paper is arranged as follows. Section <ref> provides details of the formulation of the problem, a brief description of the numerical methods used for simulation and the methods used to construct spectrograms and distinguish regimes (i) through (v) mentioned above. Our results begin in section <ref> with a discussion of the temporal and spatial spectra of flow and their variations. Section <ref> discusses our results in the physical space with an analysis of the mean zonal flow, angular momentum transport and the effect of turbulence on inertial modes. Section <ref> provides insight into the transition to turbulence by investigating force balances in the system. Section <ref> investigates the instability of the boundary layer at the inner boundary and its effects and provides a heuristic explanation of the E^1/5 scaling law obtained by H16. Finally, section <ref> discusses our main conclusions and open questions. § NUMERICAL METHODS §.§ Simulation setup Let us denote the radii and dimensional rotation rates of the two coaxial spheres as r_i and Ω_i for the inner sphere, and r_o and Ω_o for the outer sphere, respectively. To simulate this system, we solve the Navier-Stokes and continuity equations in a reference frame rotating with the outer boundary. We use spherical coordinates (r,θ,ϕ) denoting radial distance, colatitude and longitude, respectively. We also use s = r sinθ to denote the cylindrical radius, perpendicular to the rotation axis. The equations are non-dimensionalised using L = r_o - r_i as the length scale and the viscous diffusion time τ = L^2/ν as the time scale, where ν is the kinematic viscosity of the fluid. This gives us ∂u/∂ t = - ∇ p - u·∇u - (2/E) ẑ×u + ∇^2 u , ∇·u = 0 . Here, u represents velocity and p represents an effective pressure that includes centrifugal forces due to the outer boundary (system) rotation. The Ekman number E = ν/(Ω_o L^2) = 1/Ω, where Ω is the non-dimensional outer boundary rotation rate. The inner sphere rotation rate (in the rotating frame) can also be similarly non-dimensionalised: ΔΩ = (Ω_i - Ω_o)L^2/ν. The system and the coordinate system are illustrated in figure <ref>. The flow problem is then characterised by three non-dimensional numbers: the Ekman number E, the differential rotation ΔΩ/Ω, and the radius ratio η = r_i/r_o, which is set to either 0.33 or 0.35. The first is the same as used in H16, while the latter is close to the ratio for Earth's core and has been used in B18 and other previous studies. No-slip boundary conditions allow for the viscous driving of the flow: u(r_o) = 0 , u(r_i) = (u_r, u_θ, u_ϕ) = (0, 0, ΔΩ r_i sinθ) . We numerically solve these equations using two independent pseudo-spectral codes: MagIC <cit.> (see <https://github.com/magic-sph/magic>) and XSHELLS <cit.> (see <https://bitbucket.org/nschaeff/xshells>). The details of the numerical methods can be found in the respective publications. Both codes use the SHTns library <cit.> for spherical harmonic transforms. As in H16 and B18, we concentrate on the case ΔΩ/Ω < 0. The evolution of the flow is studied by keeping the outer boundary rotation (or Ekman number E) constant and running a simulation at a fixed ΔΩ/Ω and letting the kinetic energy reach a statistically stationary state. This state is then used as an initial condition to start the simulation for a more negative ΔΩ/Ω.
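To make the non-dimensionalisation above concrete, the short sketch below evaluates the control parameters and the inner-boundary condition for an illustrative set of dimensional values (the numbers are placeholders, not those of the experiment).

```python
import numpy as np

# Illustrative dimensional inputs (placeholders, not the experimental values)
nu = 1.0e-6                  # kinematic viscosity of water [m^2/s]
r_i, r_o = 0.04, 0.12        # inner/outer radii [m]
Omega_o = 2 * np.pi * 0.5    # outer sphere rotation rate [rad/s]
Omega_i = -2 * np.pi * 0.5   # inner sphere rotation rate [rad/s]

L = r_o - r_i                          # length scale
E = nu / (Omega_o * L**2)              # Ekman number
dRot = (Omega_i - Omega_o) / Omega_o   # differential rotation Delta Omega / Omega
DeltaOmega = (Omega_i - Omega_o) * L**2 / nu   # non-dimensional inner rotation
eta = r_i / r_o                        # radius ratio
coriolis_coeff = 2.0 / E               # prefactor of the z-hat x u term

# No-slip boundary conditions in the frame rotating with the outer sphere:
# u(r_o) = 0 and u_phi(r_i, theta) = DeltaOmega * (r_i/L) * sin(theta)
theta = np.linspace(0.0, np.pi, 5)
u_phi_inner = DeltaOmega * (r_i / L) * np.sin(theta)

print(f"eta = {eta:.2f}, E = {E:.2e}, DeltaOmega/Omega = {dRot:.2f}")
print("u_phi at inner boundary (non-dim):", np.round(u_phi_inner, 1))
```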
The various parameters used in simulations and experiments along with the critical for transition to turbulence are listed in table <ref>, with each suite of experiments and simulations identified by a unique ID (first column). In B18 we already presented the simulation suites S1 and S3. In this study, we have run the rest of the suites, S2, S3a, S4 and S4a, with the suffix `a' representing cases where the parameters are the same as the other case but the simulation is axisymmetric. The case S2 was run to confirm that numerical calculations yield the same critical for the turbulent transition as the experimental case E1. The ranges of in the table indicate how the differential rotation was changed within a suit, each using the previous simulation as starting condition (e.g: -1.00 to -3.50 means was made more negative starting from -1). This is important since the behaviour of the system has some amount of hysteresis <cit.>. Through the rest of the paper, we will mostly focus on simulation suites S3 and S4 with some comparisons with their axisymmetric counterparts S3a and S4a, respectively and with experiments of H16 where appropriate. `Simulations' will thus refer to simulations using MagIC unless otherwise specified. Figure <ref> shows a diagram of the different regime transitions identified in simulations (filled circles) and experiments (open triangles). The suites S3 and S4 that is used throughout this paper clearly marked using squares (before transition to turbulence) and crosses (after transition). This does not show the suites S3a and S4a which would largely overlap with S3 and S4. §.§ Spectrograms and identification of inertial modes It has been shown in previous studies that inertial waves and modes are fundamental instabilities of the spherical Couette system. They obey the linear Euler equation, t = -∇ p -2Ω×. This can be written as <cit.> ^2 t^2∇^2 + 4Ω^2^2 z^2 = 0, which supports plane wave solutions (∝ e^(k·r - ω t)), called `inertial waves', in an unbounded fluid or bounded global oscillatory modes (∝ e^(mϕ - ω t)), called `inertial modes', in a bounded container <cit.>. In both cases, it can be shown that |ω|≤ 2Ω where ω is the frequency associated with a drift in azimuth ϕ. Here, k and m are the radial wavevector and the azimuthal wavenumber, respectively. For a spherical container, the solutions for inertial modes can be obtained analytically <cit.> and have the form of a spherical harmonic at the surface. Consequently, they are identified using indices (l,m) corresponding to the spherical harmonic degree and order. These, together with the drift frequency ω, uniquely determine a mode. Thus, as in our previous study <cit.>, we will denote a mode using the notation (l,m,ω/Ω). The different hydrodynamic regimes (i) through (v) mentioned in the introduction can be clearly identified using the spectrograms obtained from experimental data. The spectrograms are built by taking the single-sided FFT amplitude spectrum of the radially-averaged azimuthal velocity u_ϕ at each . The velocity measurements were performed via Particle Image Velocimetry (PIV) techniques using a laser sheet perpendicular to the axis of rotation. The method is described in further detail in <cit.> and <cit.>. 
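A minimal sketch of how such a spectrogram can be assembled from velocity time series is given below; the array names and the synthetic signal are illustrative only, and the single-sided amplitude spectrum follows the standard FFT normalisation.

```python
import numpy as np

def single_sided_spectrum(u, dt):
    """Single-sided FFT amplitude spectrum of a real time series u(t)."""
    n = len(u)
    amp = np.abs(np.fft.rfft(u - u.mean())) / n
    amp[1:] *= 2.0                      # fold in the negative frequencies
    freq = np.fft.rfftfreq(n, d=dt)     # in cycles per time unit
    return freq, amp

# Build a spectrogram: one spectrum per differential-rotation value.
dt = 1.0e-2                 # sampling interval (illustrative)
n_samples = 4096
drot_values = np.linspace(-0.5, -3.0, 26)   # scanned Delta Omega / Omega

rng = np.random.default_rng(1)
spectrogram = []
for drot in drot_values:
    t = np.arange(n_samples) * dt
    # synthetic u_phi series: a drifting mode plus noise, for illustration
    u_phi = np.cos(2 * np.pi * 0.7 * t) + 0.1 * rng.standard_normal(n_samples)
    freq, amp = single_sided_spectrum(u_phi, dt)
    spectrogram.append(amp)

spectrogram = np.array(spectrogram)     # shape: (n_drot, n_freq)
print(spectrogram.shape)
```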
Such spectrograms can also be constructed for simulations where we obtained data for the azimuthal component at eight different locations : θ = (π/4,3π/4) and ϕ = (0,π/2,π,3π/2), all on a single radial surface, r = 0.7r_o, which were stacked after correcting for their phase shift using cross-correlation of the time-series. Thereafter, we performed a Fourier transform of this stacked time series to obtain spectra at each <cit.>. The spectrograms obtained from two suites of simulations are shown in figures <ref>(a) and (b), with identified inertial modes denoted by the indices (l,m). Having access to the full three dimensional flow as well as a number of other diagnostics (kinetic energy, spatial spectra etc.) in simulations helps distinguish these different regimes much better. For example, when the first non-axisymmetric m=1 mode appears, all the equatorially symmetric m=1 spherical harmonic flow coefficients can be seen to oscillate at the same frequency. An analysis of the spectral coefficients, the frequencies in a spectrogram, combined with a visualization of the flow field is used to differentiate between the different regimes. The non-axisymmetric zonal flow fields in the three different regimes at E=10^-4 are illustrated in figure <ref>. Panel (a) shows the flow at =-1 with the m=1 Stewartson layer Instability (SI) clearly visible, panel (b) shows the flow dominated by an equatorially antisymmetric (3,2) inertial mode with some small scale features inside the tangent cylinder, while panel (c) shows the flow in the turbulent regime at =-3, with a lot of small scale features near the inner boundary and a more chaotic flow field. In the experiments of H16, the inertial modes were identified by comparing their frequencies against frequencies from theoretical works <cit.> as well as past experimental works <cit.>. Additional comparisons of morphology of modes were also made against theoretical inertial mode structures in spheres <cit.>. In case of simulations, the inertial modes can be clearly identified by a few different methods. First is by comparing the frequencies observed in the spectrograms to the oscillation frequencies of the spectral spherical harmonic coefficients. This determines the longitudinal symmetry m as well as the equatorial symmetry (l-m) of the mode. The exact mode is then be determined by comparing the frequency to the nearest analytical frequency of inertial modes in a sphere <cit.>, as well as by spectrally filtering out the structure of the mode and comparing it to the theoretical structure. § IDENTIFYING TRANSITION TO TURBULENCE In experiments as well as simulations, the temporal spectra help us determine the transition to turbulence. We examine here the spectra at individual values from the XSHELLS spectrogram presented in figure <ref>(a). We have selected three representative values, = (-0.6, -1.8, -2.7), which lie in regimes (iii), (iv) and (v), respectively, as shown in figure <ref>. At = -0.6, the spectrum consists of only discrete peaks at the drift frequency of the m=1 SI and its higher multiples. In the EA inertial mode regime, at = -1.8 (orange), there is a drastic change in the nature of the spectrum and it consists of a nearly flat background for ω/Ω≤ 2 and a sharp decay for larger Fourier frequencies. The frequencies of the m=1 SI (around ω/Ω = 0.1) and of the dominant inertial mode ( (3,2) mode, around ω/Ω = 0.7) are the most clearly visible peaks on top of the flat background. 
A flat background of energy for 0<ω<2Ω and a subsequent decay demonstrates the fact that most of the kinetic energy in the flow manifests in inertial waves and is characteristic of inertial wave turbulence <cit.>. Thus, there is some amount of inertial wave turbulence already present in the EA inertial modes regime. This can be seen in the small scales visible inside the tangent cylinder in figure <ref> (b), close to the inner boundary. However, the large scale inertial mode still carries the dominant amount of energy in this regime. What we define as the `turbulent' regime in this study is characterised by a further sudden increase in this flat background spectrum of inertial waves, as seen for = -2.7, while the decay beyond ω/Ω = 2 becomes less steep. Consequently, the peaks for the m=1 SI and the (3,2,0.666) mode, despite having similar energies as for = -1.8, are now less prominent with respect to the background. The small scale inertial wave turbulence is no longer limited to inside the tangent cylinder, but now can be seen in the bulk as well (figure <ref> (c)) and thus, the global large scale inertial mode no longer carries a huge fraction of the energy. The typical decay of the spectrum beyond 2Ω has also been observed by H16 <cit.> and in the 3-meter experiment <cit.>. The shallower decay of the spectrum beyond 2Ω in the turbulent regime shows a decrease in the influence of rotation which results in a greater content of energy for ω > 2Ω. This is consistent with the fact that smaller scales and increased flow velocities in the turbulent regime lead to a dominance of advection over the effect of the Coriolis force. The change in the force balance is further discussed in section <ref>. Pseudo-spectral codes provide direct information on different flow length scales and hence spatial spectra of kinetic energy. Figure <ref> shows the change in energy spectrum in the zonal flow with at different colatitudes, very close to the inner boundary at r/r_o = 0.354. In panel (a), we see that for all || ≥ 1.5 in the EA inertial mode regime, there already is a significant amount of energy in the smaller scales (high m) inside the tangent cylinder. In panel (b), we find that the energy in the smaller scales are high for || ≥ 2.3, indicating that the boundary layer at the inner boundary gets progressively destabilised at lower latitudes as becomes more negative. The turbulent regime sets in at = -2.3 and its significance is that the boundary layer at the equator gets destabilised. Figure <ref> shows the total kinetic energy spectra with respect to spherical harmonic order l at different radial levels from S3 simulations at E=10^-4. The onset of turbulence in the spatial spectra is characterised by an increase in energy in the small scales in general and close to the inner boundary in particular, which is the region of the highest flow speeds and thus most extreme Reynolds numbers. The system is driven by imposed differential rotation at the largest system scale. The energy then cascades to smaller scales via the different instabilities and non-linear interactions. This cascade becomes decisively more efficient in the turbulent regime. The decrease in the influence of rotation can be seen in the spatial spectrum at large spherical harmonic degree as it gets progressively closer to a classic Kolmogorov -5/3 spectrum, as shown in figure <ref> (b). 
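The approach of the spatial spectrum towards a Kolmogorov-like slope can be quantified with a simple power-law fit over an assumed inertial range, as in the sketch below (the synthetic spectrum and the chosen fitting range are illustrative).

```python
import numpy as np

def spectral_slope(ell, E_kin, ell_min=10, ell_max=100):
    """Least-squares power-law exponent of E(l) over [ell_min, ell_max]."""
    sel = (ell >= ell_min) & (ell <= ell_max) & (E_kin > 0)
    slope, _ = np.polyfit(np.log(ell[sel]), np.log(E_kin[sel]), 1)
    return slope

# illustrative spectrum with a -5/3 inertial range and an exponential tail
ell = np.arange(1, 257)
E_kin = ell**(-5.0 / 3.0) * np.exp(-ell / 200.0)

print(f"fitted slope = {spectral_slope(ell, E_kin):.2f} (Kolmogorov: -1.67)")
```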
§ FLOW ANALYSIS §.§ Mean flow and angular momentum transport The transition to turbulence is characterised by a sudden increase in the axisymmetric flow, while there is a drop in the non-axisymmetric kinetic energy (figure <ref>). The axisymmetric flow is clearly dominated by the zonal component, which is by a factor E^-1/2 larger than the meridional circulation. Both components increase upon turbulence onset. The non-axisymmetric component is dominated by the equatorially antisymmetric part owing to the presence of the EA inertial modes before the transition to turbulence. This changes upon turbulence onset when EA inertial modes lose their energy. Figure <ref>(a) and (b) illustrate that the mean zonal flow not only intensifies but also starts to spread beyond the TC. The panels show the mean zonal flow, averaged in the z- and ϕ-directions and in time, as a function of the cylindrical radial distance s and the differential rotation rate for experiments E1 (a) and simulations S4 (b). Before the transition to turbulence (vertical lines), the zonal flow roughly resembles the Proudman solution for spherical Couette flow<cit.>, staying restricted to the region inside the TC (horizontal lines). Beyond the transition, the zonal flow is significantly more vigorous and extends beyond the TC. The mean flow behaves the same way for simulations at E=10^-4. From table <ref>, we can compare the onset of turbulence for full 3D simulations S3 and S4 and their axisymmetric counterparts, S3a and S4a. For E=10^-4, turbulence sets in at a 13% lower differential rotation rate, at E=3× 10^-5 the difference has reduced to 5%. Figure <ref> compares the time and azimuthally averaged zonal flow and medirional circulation in the turbulent regime at E=10^-4 of a 3D simulation (a) and an axisymmetric one (b). Both cases show an additional pair of rolls along the inner boundary. These rolls represent an outward flow at the inner boundary which can contribute to advection towards the equator and then into the region outside the tangent cylinder. In the axisymmetric case (panel (b)), the inner roll pair is more pronounced than in the 3D case at the same parameters (panel a). In addition, an additional pair of rolls develops near the inner boundary equator and subsequently joins the set of rolls at higher latitudes to form a continuous radial jet. This jet-like feature is not seen in the 3D case. The overall structure of the zonal flows is very similar in the two cases, axisymmetric turbulence is also characterized by a spreading of zonal flows beyond the TC. One can notice that the meridional circulation in panel (a) looks equatorially asymmetric compared to panel (b). Beyond the onset of the EA inertial modes, the 3D simulations continue to possess an equatorially antisymmetric component of kinetic energy (figure <ref>), while the kinetic energy for the axisymmetric simulations continues to remain purely equatorially symmetric, even beyond the onset of turbulence. Thus, the symmetry breaking with respect to the equator seems unique to the presence of non-axisymmetric flows. The extension of the mean flow into the bulk along with the additional roll pair seems to push the Stewartson layer away from the TC. Whether we should still call this a Stewartson layer is unclear. As already discussed by <cit.>, the appearance of a new pair of rolls close to the inner boundary indicates that the Coriolis force due to the outer boundary rotation ceases to be dominant. We expect this to happen when becomes smaller than -1. 
At =-1, the inner boundary is at rest in the inertial frame. For ≤ -1 it rotates in the opposite direction to the outer boundary. When is negative enough, centrifugal forces drive an outward flow at the inner boundary that gives rise to the additional meridional roll pair. The transition from Coriolis-force dominated dynamics to inertia dominated dynamics should start close to the inner boundary where the effective rotation in the inertial frame is minimal. Turbulence creates small scale flow and transports angular momentum more efficiently from the inner boundary to the bulk of the flow outside the tangent cylinder. This increases the viscous friction at the inner boundary so that a larger torque at the inner boundary is required to maintain the flow. Figure <ref> shows the increase of the viscous torque on the inner core with for simulations at two different Ekman numbers. Before the onset of turbulence, the torque is simply proportional to || and scales like G∼ ||^α with α=1, as shown with the compensated plot. In the turbulent regime, however, the torque increases more steeply with α∼ 2. <cit.> reported approaching α=2 in the 3-metre experiment in Maryland for a non-rotating outer boundary. The torque scalings for the axisymmetric simulations show a similar behaviour but the torque becomes smaller than in the 3D simulations where non-axisymmetric instabilities provide a more efficient transport of angular momentum. In conclusion, the instability responsible for the onset of turbulence is predominantly an instability of the axisymmetric flow. The weaker non-axisymmetric flow components help stabilise the flow and yield a later onset. §.§ Inertial modes The large scale EA inertial modes that get excited in regime (iv) continue to exist after the transition to turbulence. There is, however, a jump in the inertial mode frequencies. This can clearly be seen in the `brightest' spectral lines in both panels of figure <ref>. This goes together with the sudden spreading of the background zonal flow beyond the TC causing further deformation of the inertial modes, as shown in B18. In both 3D simulation suites that we studied, the flow is dominated by the inertial mode (3,2,0.666) when turbulence sets in. After the transition, the mode loses at least half of its energy but still clearly dominates against the broadband turbulent background, as shown in figure <ref> for experiments E1 and simulations S3. The energy estimates were determined by using a frequency filter on velocity obtained in experiments. In case of simulations, the energy in the large scale spherical harmonic coefficients (order l ≤ 6) of the equatorial and azimuthal symmetry corresponding to a mode was used to estimate the energy in a mode. In both the numerical simulations S3 at E=10^-4 (MagIC) and S1 at E=1.125× 10^-4 (XSHELLS), a new m=2 mode emerges around = -2.9 with a frequency of ω/Ω≈ 0.4. The mode is visualised at = -3 in figure <ref>(a). We project snapshots of the flow velocity and its non-axisymmetric part at different times onto equatorially symmetric inertial modes of a sphere _j e^i(mϕ - ω_j t), similar to B18, = ∑ c_j _j e^i(mϕ - ω_j t) - ⟨⟩_ϕ = ∑ c_j' _j e^i(mϕ - ω_j t) The projection coefficients are normalised by [∫· dV ∫_j·_j^† dV ]^1/2 (or [∫( - ⟨⟩_ϕ)·( - ⟨⟩_ϕ) ∫_j·_j^† dV ]^1/2 in case of c_j'). The corresponding projection coefficients c and c', respectively are shown in panel (b). 
It is clear that a single inertial mode cannot be used to characterise this flow structure, with dominant contributions from all modes with m=2, l ≤ 10 that were analysed. We also could not find other modes that form triadic resonances with this mode. § FORCE BALANCE The transition to the turbulent regime for both S3 and S4 goes along with a sudden rise in the nonlinear term (·∇). As a consequence, advection rather than Coriolis becomes the dominant force. To understand the force balance at different length scales, we decompose the magnitude of each force F into spherical harmonics, F(r) = ∑_l=0^l_max∑_m=0^l F_lm(r) Y_lm(θ,ϕ) , where, Y_lm(θ,ϕ) denotes a spherical harmonic of degree l and order m. We then investigate the magnitude of forces, at different specific spherical harmonics degrees l and radius levels, similar to <cit.>, F_rms^2(l,r) = 1V∑_m=0^l r^2 |F_lm(r)|^2 , where V = 4/3 π (r_o^3 - r_i^3) is the volume of the spherical shell. Figure <ref> compares the respective spectra for two simulations at E=10^-4, one before the transition to turbulence (panels (a) and (b) ) and one after (panels (c) and (d)). This is done for two different radial levels, one near the inner boundary and one in the bulk. At large scales (low l), the leading order force balance near the inner boundary is dominated by advection and the Coriolis force while the dynamics in the bulk is determined by a geostrophic balance between the Coriolis force and the pressure gradient. This remains true for both before as well as after the transition to turbulence. At small scales (large l), after the transition to turbulence, there is a clear dominance of advection close to the inner boundary. This leads to the large scale flow in the system to be aligned with the rotation axis even in the turbulent regime, while small scale flows dominate close to the inner boundary. This can be seen in the 3D flow visualisation in figure <ref>(c) combined with the zonal flow visualised in <ref>. We investigate the effect of the turbulent small scales on angular momentum transport using the azimuthal component of the Navier-Stokes equation. Separating the flow velocity and pressure into mean and fluctuating parts and a subsequent mean in azimuth and time gives us the Reynolds averaged Navier-Stokes (RANS) equation for the mean zonal flow: -2Eϕ̂·⟨ẑ×⟩ + ϕ̂·⟨∇^2⟩ - ϕ̂·⟨∇·⟩ - ϕ̂·⟨∇·''⟩ = 0, where a bar denotes a mean in azimuth, ⟨⟩ denotes an average in time and prime denotes a non-axisymmetric part. We use about 700 snapshots of the 3D flow at = -2 in the inertial mode regime and about 1000 snapshots at = -3 in the turbulent regime at E=10^-4, to compute the terms above, corresponding to 100 rotations of the outer boundary in each case. In addition, we use a time-averaged flow file for computing the terms for an axisymmetric case at = -3, where the Reynolds stress ⟨∇·''⟩ is absent. The results are shown in figure <ref>. In all cases, as expected, viscous forces are a dominant contributor to the zonal flow acceleration near the inner boundary. In the inertial mode regime (figure <ref> (a)), there is very little zonal flow generation and hence, very little forcing outside the tangent cylinder. Here the advection force ⟨∇·⟩ balances the viscous force near the equator and the Coriolis force away from the equator. Beyond the transition to turbulence (panel (b)), Reynolds stresses near the inner boundary balance the viscous force in this region, while the advective force provides the balance away from the equator. 
Slightly away from the boundary, the advective force balances the Coriolis force. In the axisymmetric turbulent case (panel (c)), in the absence of Reynolds stresses, the advective force balances both the Coriolis force as well as the viscous drag. In the case of 3D turbulence, the small scales in the bulk of the fluid lead to Reynolds stresses that play the dominant role in forcing the zonal flow outside the TC, while in the axisymmetric case, the advection due to the strong radial jet plays the same role. The resultant efficient transport of angular momentum manifests itself in a change in torque scaling. Before the transition to turbulence, the zonal flow is restricted to the TC and its amplitude is linearly dependent on ΔΩ, just as shown by <cit.>. Thus the torque on the inner sphere, G = r_i ∫τ_rϕ dS = r_i ∫∂/∂ r(u_ϕ/r) dS, where dS = r_i sinθ dθ dϕ is the differential surface area at the inner boundary, is also proportional to ΔΩ. Beyond the transition to turbulence, the Reynolds stresses and the advective force significantly contribute to the zonal flow near the equator and become the major players in enhancing angular momentum transport. Their quadratic nature thus explains the quadratic scaling law in the turbulent regime. § INSTABILITY NEAR THE INNER BOUNDARY As noted in section <ref>, as ΔΩ/Ω becomes increasingly negative, the flow near the inner boundary first becomes unstable at high latitudes and gives rise to small scale flows. At the transition to bulk turbulence, the flow near the inner boundary at and around the equator becomes unstable. This is illustrated in figure <ref>. Panel (a) shows the radial velocity near the inner boundary before the transition to turbulence for suite S4 at ΔΩ/Ω = -1.98 and (b) shows the same after the transition to turbulence at ΔΩ/Ω = -2. The presence of small scales at all latitudes is markedly visible in panel (b). This is made even clearer if, during this transition to turbulence, we track the radial velocity near the inner boundary at all latitudes and at a single longitude with respect to time. As illustrated in figure <ref>, when the fluid near the inner boundary spins up to ΔΩ/Ω = -2, small scale turbulent features start appearing near the equator. At the same time, the total and axisymmetric kinetic energies see a marked increase, as also explained in section <ref>. In order to visualise the dynamics of these small scales, we also produced a movie using snapshots of the simulation suite S3 at E=10^-4, ΔΩ/Ω = -2.4, in the turbulent regime. The initial condition was a solution at ΔΩ/Ω = -2.25, without any boundary layer instability at the equator. Several snapshots at regular intervals were used to produce the movie, which is available as supplementary material (section <ref>). The movie illustrates how small scale structures of high angular momentum fluid emanate from the equatorial boundary layer and give rise to a mean flow. It also illustrates in an equatorial section how the zonal flow is close to being axisymmetric and large scale initially, destabilising soon after as it transitions into the turbulent regime. As discussed in section <ref>, a secondary pair of meridional circulation rolls sets in close to the inner boundary. However, the movie shows that their role is rather unimportant at this stage and the primary circulation is still responsible for most of the transport. The Rayleigh stability criterion in a rotating frame is given by <cit.>, Φ = ∂/∂ s(u_ϕ s + Ω s^2)^2 < 0 , where Ω = 1/E is the outer boundary rotation rate.
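A minimal sketch of evaluating this criterion on a profile of the zonal flow is given below; the grid, the synthetic u_ϕ profile, and the finite-difference details are illustrative assumptions, not the analysis pipeline used here.

```python
import numpy as np

def rayleigh_discriminant(s, u_phi, Omega):
    """Phi = d/ds (u_phi * s + Omega * s**2)**2 on a 1-D grid in s.

    s:      cylindrical radii (1-D, monotonically increasing)
    u_phi:  zonal velocity at those radii (same rotating frame as Omega)
    Omega:  background (outer boundary) rotation rate
    Regions with Phi < 0 are centrifugally unstable.
    """
    L2 = (u_phi * s + Omega * s**2) ** 2   # squared specific angular momentum
    return np.gradient(L2, s)

# illustrative profile: retrograde flow, strongest at the inner boundary
E = 1.0e-4
Omega = 1.0 / E                     # non-dimensional outer rotation rate
dRot = -2.3                         # Delta Omega / Omega (illustrative)
s_i = 0.5                           # non-dimensional inner cylindrical radius
s = np.linspace(s_i, 1.5, 400)
u_phi = (dRot / E) * (s_i / s) ** 2  # toy decaying retrograde zonal flow

Phi = rayleigh_discriminant(s, u_phi, Omega)
print("unstable out to s =", float(s[Phi < 0].max()))
```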
We use a couple of snapshots in time of the zonal flow at zero longitude to visualise the Rayleigh discriminant Φ near the inner boundary. We use a simulation at E=10^-4, ΔΩ/Ω = -2.3 for this purpose. Figure <ref>(a) shows Φ at the beginning of the simulation when turbulence has not yet set in, while (b) shows the case after 9.5 outer boundary rotations when the boundary layer is fully unstable at all latitudes. We can see that Φ is strongly negative close to the inner boundary right at the start of the simulation. Though the colour-bars are the same for both plots, in terms of actual values of Φ, at the beginning of the simulation (panel (a)), Φ_min = -2.02× 10^10 and Φ_max = 4.63× 10^8 while after 9.5 outer boundary rotations (panel (b)), Φ_min = -4.42× 10^9 and Φ_max = 7.28× 10^8. This implies that, in terms of extreme values of Φ, the fluid near the inner boundary is 4.6 times more unstable and only about half as stable in panel (a) compared to panel (b). The boundary between strongly negative and positive parts correlates quite well with the unstable regions in panel (b). We compute the thickness of the equatorial boundary layer at the inner boundary using a slope intersection method <cit.>. We use time-averaged profiles of ∂ u_h/∂ r, where u_h = √(u_θ^2 + u_ϕ^2) is the magnitude of the horizontal velocity. These profiles are obtained by averaging u_h in azimuth and then in co-latitude with a window of 10^∘ centred at the equator. Fitting two lines to the respective profiles, one close to the inner boundary and a second one for the bulk, we assume that the boundary layer ends where both lines intersect. This is illustrated in figure <ref>. We then explore how the equatorial boundary layer thickness δ scales with the differential rotation |ΔΩ/Ω|. The thickness is compensated by the theoretical E^2/5 scaling of the equatorial boundary layer thickness <cit.> in figure <ref>(a). We can see that the scaling works rather well, except for the axisymmetric suite S3a near the transition to turbulence. The equatorial boundary layer thickness increases very slowly with |ΔΩ/Ω| before the onset of turbulence. Close to the onset, there is an increase in the boundary layer thickness. After the transition to turbulence, the averaging in azimuth and time measures the thickness of the viscous sublayer, which is thinner than the laminar boundary layer and decreases with Reynolds number <cit.>. This is seen as a rapid decrease in δ beyond the transition and then a slow decrease with |ΔΩ/Ω|. We can define a Reynolds number based on the boundary layer thickness, Re_δ = (Ω_i - Ω_o) δ^2/ν = [(Ω_i - Ω_o)/Ω_o] [Ω_o L^2/ν] (δ/L)^2 = (ΔΩ/Ω) (1/E) (δ/L)^2 . If we assume that the boundary layer becomes turbulent once it exceeds a critical Reynolds number Re_c and use the fact that δ/L = C E^2/5, where C is a constant, we find that at criticality, C (ΔΩ/Ω)_c (1/E) E^4/5 = Re_c , ⇒ (ΔΩ/Ω)_c E^-1/5 = Re_c/C . Figure <ref>(b) shows the variation of Re_δ with |ΔΩ/Ω| for all simulation suites. We find that, except for suite S3a, the rest of them peak at Re_c = 42, 42, 45 for suites S3, S4 and S4a, respectively. This implies that the assumption of the existence of a critical Reynolds number works fairly well. Furthermore, figure <ref> shows the compensated plot of |ΔΩ/Ω|_c E^-1/5 with data from both the experiments of H16 as well as our simulations. We find that the spread in the compensated plot is small, especially noting the variation along the vertical axis.
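The heuristic argument can be summarised in a few lines of arithmetic: with δ/L ∝ E^2/5, the boundary-layer Reynolds number defined above reaches a fixed critical value when |ΔΩ/Ω| ∝ E^1/5. The sketch below illustrates this with assumed values of the prefactor C and of Re_c (they are placeholders, not fitted constants).

```python
import numpy as np

def re_delta(drot_abs, E, C=1.0):
    """Boundary-layer Reynolds number Re_delta = |dRot| * (1/E) * (delta/L)**2,
    using the laminar thickness delta/L = C * E**(2/5) (C assumed O(1))."""
    return drot_abs / E * (C * E**0.4) ** 2

def critical_drot(E, C=1.0, Re_c=42.0):
    """|Delta Omega / Omega| at which Re_delta reaches Re_c.

    Solving Re_delta = Re_c gives |dRot|_c = Re_c / C**2 * E**(1/5),
    i.e. the critical differential rotation scales as E**(1/5).
    """
    return Re_c / C**2 * E**0.2

for E in (1.0e-4, 3.0e-5, 3.0e-6):
    print(f"E = {E:.1e}  ->  |dRot|_c ~ {critical_drot(E):.2f}")

# check the scaling exponent numerically
E_vals = np.array([1.0e-4, 3.0e-5, 1.0e-5, 3.0e-6])
slope = np.polyfit(np.log(E_vals), np.log(critical_drot(E_vals)), 1)[0]
print(f"d log|dRot|_c / d log E = {slope:.2f} (expected 0.2)")
```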
The higher Ekman number simulations are slightly off a flat line, with the axisymmetric suite S3a being a complete outlier. § CONCLUSION The two sets of simulations at Ekman numbers of 10^-4 and 3× 10^-5 presented here yield similar results and reproduce the experimental observations of <cit.> in the turbulent regime. These include the generation of zonal flow in the bulk, violating the classic solution for spherical Couette flow by <cit.>, the loss of energy in inertial modes and inertial wave turbulence. Unfortunately, the experimental data is extremely limited in spatial extent, being limited to a single plane perpendicular to the rotation axis just above the inner sphere. This makes it difficult to make more quantitative comparisons with experiments beyond what we already made in <cit.> and in section <ref>. However, using simulations, we have been able to generate a more complete picture of the transition to turbulence. The cause of the onset of turbulence seems to be a centrifugal instability of the boundary layer at the equator of the inner boundary, giving rise to Taylor-Görtler vortices, similar to those observed by <cit.> and <cit.>. The hysteresis exhibited by the system <cit.> implies a subcritical transition. Beyond the regimes of axisymmetric flow and the first linear instability, as the differential rotation rate is made increasingly negative, we find that the boundary layer at the inner boundary first becomes unstable at high latitudes. This is seen in both in spectral space (section <ref>) as well as physical space (section <ref>). This instability gives place to spiral structures along the boundary layer, ejecting small scale plume-like structures. These small scale structures in a rotating environment are known to excite inertial waves <cit.>, leading eventually to inertial wave turbulence as shown in section <ref>. At a critical negative differential rotation, the boundary layer at the inner boundary becomes unstable at the equator and thus, the resultant Görtler vortices can now propagate into the bulk, outside the tangent cylinder. They further contribute to an increase in the energy carried by inertial waves as well as to an increase in energy in the small scales away from the inner boundary as evidenced by the temporal and spatial spectra (section <ref>). A significant increase in Reynolds stresses driving zonal flow ensues, which leads to the zonal flow spreading outside the tangent cylinder just as seen in experiments as well as simulations (section <ref>). This also leads to a more efficient angular momentum transport, and thus to an increase in the scaling exponent of the torque at the inner boundary from linear to quadratic. A second set of axisymmetric simulations at E=10^-4 and 3× 10^-5 show a very similar behaviour in terms of creation of large scale zonal flow, torque scalings and destabilization of the inner boundary layer near the equator. However, in this case the instability of the equatorial boundary layer at the inner boundary gives rise to an equatorial jet, which makes the subsequent evolution of the centrifugal instability markedly different than the 3D simulations. This equatorial jet also serves to transport angular momentum in the axisymmetric cases as opposed to Reynolds stresses for the full 3D simulations. The Ekman layer near the inner boundary merges with the Stewartson layer into a layer that has an extent of δ_s ×δ_z = E^2/5× E^1/5 <cit.> with s and z representing the cylindrical radius and axial direction, respectively. 
Using a heuristic critical Reynolds number argument for the destabilisation of the equatorial boundary layer at the inner boundary, we show that this scaling can help explain the experimental E^1/5 scaling for the critical differential rotation, especially at lower Ekman numbers (section <ref>). The finer details of this transition are something that can still be explored and investigated, but the centrifugal instability of the equatorial boundary layer is the clear precursor. It remains to be seen whether this scaling law extends to asymptotically low Ekman numbers. The narrowing gaps between the values of ()_c for the full 3D and the axisymmetric simulations and their similar nature of instabilities is encouraging. This could enable us to obtain an estimate of ()_c at lower Ekman numbers with cheaper axisymmetric computations. Furthermore, our previous <cit.> and current study has been limited to < 0. An in-depth study for > 0 is still lacking. In particular, it is not clear why one obtains high wavenumber spiral Stewartson layer instabilities for > 0, but low wavenumber instabilities trapped inside TC for < 0 <cit.>. More simulations and experiments are needed to establish better scaling laws pertaining to the different hydrodynamic regimes at lower Ekman numbers. This will prove helpful not only in order to extrapolate to real objects, but also to understand the dichotomies between < 0 and > 0. The theoretical foundation for spherical Couette flow is still in its infancy as compared to the more traditional Taylor-Couette system <cit.>. Our present study shows that there is a great scope for similar studies in spherical shells as well, where the presence of spherical curvature makes the problem less tractable. [Supplementary data] The movie for transition to turbulence can be found at <doi.org/10.6084/m9.figshare.9108533>. The full 4k version can be viewed as an unlisted youtube video: <https://youtu.be/6vBWwYIapC8>. [Acknowledgements]A.B would like to thank Andreas Tilgner, Jonathan Aurnou, Nathanaël Schaeffer and Paula Wulff for insightful discussions and Sabine Stanley for feedback on the manuscript draft. We gratefully acknowledge Adrian Mazilu from the Transylvania University of Brasov (Romania) for performing most of the experiments in the frame of a Traineeship ERASMUS+ program. [Funding]A.B would like to thank the IMPRS for Solar System Science for funding him during his time in Germany and Sabine Stanley for funding him subsequently. A.B would also like to thank the North-German Supercomputing Alliance (HLRN), Max Planck Computing and Data Facility (MPCDF) and the Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG) for letting him generously use their supercomputing facilities. S.A.T would like to acknowledge support from the European Research Council (ERC Advanced Grant 670874 ROTANUT) and from the infrastructure program EuHIT of the European Commission. M.H gratefully acknowledges support from the German Research Foundation (DFG Grant No. HA 2932/7-1). [Declaration of interests]The authors report no conflict of interest. [Data availability statement]Both codes used to run the simulations listed here are openly available. MagIC is available at <https://github.com/magic-sph/magic> and XSHELLS at <https://bitbucket.org/nschaeff/xshells>. The parameters used to run the codes are available in the paper and in <cit.>. [Author ORCIDs]A. Barik, https://orcid.org/0000-0001-5747-669X; S. A. Triana, https://orcid.org/0000-0002-7679-3962; J. 
Wicht, https://orcid.org/0000-0002-2440-5091 [Author contributions]Ankit Barik ran the simulations and performed the subsequent data analysis and wrote the first draft of the manuscript. Santiago Triana ran the XSHELLS simulations and generated the resultant spectrogram. Both Santiago Triana and Michael Hoff participated in running the experiments at BTU-CS, Cottbus and the experimental data analysis. Michael Hoff performed a large part of the post processing of the experimental data. Johannes Wicht supervised the project and provided key insights. All authors contributed to providing feedback and refining the manuscript into its present form.
http://arxiv.org/abs/2407.13519v1
20240718135315
GPSFormer: A Global Perception and Local Structure Fitting-based Transformer for Point Cloud Understanding
[ "Changshuo Wang", "Meiqing Wu", "Siew-Kei Lam", "Xin Ning", "Shangshu Yu", "Ruiping Wang", "Weijun Li", "Thambipillai Srikanthan" ]
cs.CV
[ "cs.CV" ]
GPSFormer C.Wang et al. Cyber Security Research Center (CYSREN), Nanyang Technological University College of Computing and Data Science, Nanyang Technological University Institute of Semiconductors, Chinese Academy of Sciences {changshuo.wang, meiqingwu, assklam, shangshu.yu, ruiping.wang, astsrikan}@ntu.edu.sg, {ningxin, wjli}@semi.ac.cn GPSFormer: A Global Perception and Local Structure Fitting-based Transformer for Point Cloud Understanding Changshuo Wang10000-0002-4056-4922 Meiqing Wu1 Siew-Kei Lam1,2^0000-0002-8346-2635 Xin Ning30000-0001-7897-1673 Shangshu Yu10000-0002-5000-0979 Ruiping Wang10000-0002-9576-8164 Weijun Li30000-0001-9668-2883 Thambipillai Srikanthan1,2 July 22, 2024 ============================================================================================================================================================================================================================================= ^ The corresponding author. § ABSTRACT Despite the significant advancements in pre-training methods for point cloud understanding, directly capturing intricate shape information from irregular point clouds without reliance on external data remains a formidable challenge. To address this problem, we propose GPSFormer, an innovative Global Perception and Local Structure Fitting-based Transformer, which learns detailed shape information from point clouds with remarkable precision. The core of GPSFormer is the Global Perception Module (GPM) and the Local Structure Fitting Convolution (LSFConv). Specifically, GPM utilizes Adaptive Deformable Graph Convolution (ADGConv) to identify short-range dependencies among similar features in the feature space and employs Multi-Head Attention (MHA) to learn long-range dependencies across all positions within the feature space, ultimately enabling flexible learning of contextual representations. Inspired by Taylor series, we design LSFConv, which learns both low-order fundamental and high-order refinement information from explicitly encoded local geometric structures. Integrating the GPM and LSFConv as fundamental components, we construct GPSFormer, a cutting-edge Transformer that effectively captures global and local structures of point clouds. Extensive experiments validate GPSFormer's effectiveness in three point cloud tasks: shape classification, part segmentation, and few-shot learning. The code of GPSFormer is available at <https://github.com/changshuowang/GPSFormer>. § INTRODUCTION In recent years, point cloud understanding <cit.> techniques have been widely applied in fields such as autonomous driving<cit.>, robotics<cit.>, and public safety<cit.>. However, due to the unordered and irregular nature of point clouds, effectively extracting inherent shape information from them remains an extremely challenging research topic. Accurately and efficiently learning shape perception from point clouds has emerged as a prominent and noteworthy problem. Early researches <cit.> converted point cloud data into multi-view<cit.> or voxel representations, utilizing traditional convolutional neural networks to learn shape information. However, this conversion process often r0.5 < g r a p h i c s > Performance comparison on the challenging ScanobjectNN dataset. We show supervised learning-based and pre-training-based methods with parameters less than 22M. The proposed supervised learning GPSFormer outperforms state-of-the-art methods, achieving an accuracy of 95.4% with a modest parameter of 2.36M. 
led to the loss of inherent geometric information and incurred high computational costs. PointNet <cit.> directly encoded each point in the point cloud independently and aggregated global features through max pooling, but this approach overlooked local structural information. To address this issue, subsequent works <cit.> proposed a series of methods based on local feature aggregation. These methods divide the point cloud into different local subsets through farthest point sampling (FPS), then learn local shape representations by constructing local aggregation operators, and finally learn from local to global shape perception by constructing a hierarchical structure. However, such methods overlook the long-range dependency relationships among points. Some researchers <cit.> have utilized the powerful long-range dependency learning capabilities of Transformer<cit.> and applied this structure to point cloud analysis. For example, Point Transformer <cit.> uses self-attention in local neighborhoods to learn long-range dependencies among points. PCT <cit.> proposes an offset attention module to learn global context representations. Nevertheless, Transformers that consider both short-range dependencies and long-range dependency relationships, as well as local structure modeling, have rarely been explored. With the rapid development of self-supervised learning and large language models<cit.>, some researchers <cit.> have proposed a series of methods based on pre-training or multimodal large language models<cit.>. Although these approaches have improved performance by utilizing external data to assist point cloud models, they have not completely solved the problem of point cloud structural representation. To overcome the limitations, we propose GPSFormer to learn rich contextual shape perception from point clouds. GPSFormer consists of two core components: a Global Perception Module (GPM) and a Local Structure Fitting Convolution (LSFConv). Within the GPM, we introduce the Adaptive Deformable Graph Convolution (ADGConv) which empowers point features to dynamically navigate the entirety of the point cloud feature space. This allows for the flexible construction of suitable local neighborhoods, facilitating the learning of strong feature representations and short-range dependencies for similar structures. Following this, the features, both pre- and post-transformation, are fed into a Residual Cross-Attention (RCA), enriching the context structural understanding. Conclusively, the model harnesses a multi-head attention (MHA) to capture the long-range dependencies inherent in point clouds. Inspired by Taylor series, we design the LSFConv which treats local structure representation as a polynomial fitting problem to precisely capture subtle changes in local geometric information. Specifically, low-order terms are employed to fit the flat parts of the local structure, typically encompassing the basic shapes and overall trends of the point cloud. High-order terms are used to fit the edges and detailed parts of the local structure, thus capturing complex variations and fine features. As shown in <ref>, GPSFormer achieves excellent performance. The main contributions of this paper are as follows: * We propose GPSFormer, a global perception and local structure fitting-based transformer, to learn rich contextual information and precise shape perception from irregular point clouds. * We introduce the novel GPM and LSFConv. 
GPM learns both short-range and long-range point dependencies, while the Taylor series-inspired LSFConv captures low and high-frequency local geometric information through polynomial fitting. * The proposed GPSFormer achieves state-of-the-art results in three point cloud tasks, notably exceeding the current best supervised learning method by 5.0% in accuracy on the challenging ScanObjectNN dataset. § RELATED WORKS §.§ Indirect-based Representation Methods. Early methods <cit.> transformed unstructured point clouds into multi-views or voxels for 3D shape learning. Multi-view-based methods <cit.> converted point clouds into 2D multi-view images, using 2D convolutional neural networks (CNNs). For instance, MVCNN <cit.> integrated information from various perspectives into global features describing 3D objects. However, these methods faced challenges in viewpoint selection and fusion, leading to inherent loss of geometric structure information. Voxel-based methods <cit.> used 3D convolution to extract shape information from voxel grids. However, voxelization incurred significant computational overhead, and the resolution of voxels resulted in the loss of 3D shape information. §.§ Direct-based Representation Methods. To address these issues, PointNet <cit.> pioneered direct deep learning for point clouds. However, it overlooked local structure information. To address this issue, PointNet++ <cit.> grouped point clouds into different local neighborhoods through Furthest Point Sampling (FPS) and performed feature aggregation within each local neighborhood. Within the "Sampling-Grouping-Aggregation" framework, existing works <cit.> have designed methods based on point-wise Multi-Layer Perceptron (MLP), convolution operations, and attention mechanisms. Some approaches <cit.> enhanced model potential by designing local feature encoding and network structures. Recent works <cit.> significantly improved point cloud understanding by designing different Transformer structures to learn long-range dependencies in point clouds. However, challenges persisted in simultaneously capturing global context and local information. In contrast to the above methods, our proposed Adaptive Deformable Graph Convolution (ADGConv) flexibly learns short-range dependencies among similar features. The introduced Local Structure Fitting Convolution (LSFConv) employs a Taylor series fitting approach to finely analyze the local structures and details of point clouds. §.§ Pre-training-based Representation Methods. Recent research <cit.> has attempted to leverage multimodal data, such as point cloud, text and images, to pre-train point cloud models for downstream tasks, significantly enhancing shape perception capabilities. For example, ULIP <cit.> used multimodal information like images, text, and 3D point clouds to learn a unified representation space for objects or scenes, improving 3D point cloud understanding. PointGPT <cit.>, an auto-regressive generation method, partitioned point clouds into blocks and used a Transformer-based decoder and dual masking strategy to learn latent representations, predicting the next point to address point cloud-related challenges. Although the use of multimodal pre-training effectively enhanced downstream point cloud understanding tasks, it has a high demand for data and a long training time. Furthermore, the upper limit of the pre-training effect is still constrained by the expressive power of the point cloud model structure. 
§ METHOD As shown in <ref>, we provide a detailed exposition of the proposed GPSFormer. This section is structured as follows: Firstly, we review point convolution ( <ref>). Secondly, we introduce the Global Perception Module (GPM) ( <ref>). Thirdly, we propose the Local Structure Fitting Convolution (LSFConv) ( <ref>). Fourthly, based on GPM and LSFConv, we introduce the GPSFormer architecture and its application details ( <ref>). §.§ Background Current methods based on local feature aggregation typically follow a structure design of "sampling-grouping-aggregation" to construct local feature extraction blocks. The sampling operation utilizes the Farthest Point Sampling (FPS) method to downsample the input point cloud to create a representative set of sampled points. The grouping operation often employs K-Nearest Neighbors (KNN) or spherical queries to build local neighborhoods for each representative sampled point. The aggregation step uses mapping functions and max pooling to obtain local shape representations. For each stage, we assume that representative sampled points can be described as {(p_i, f_i)}_i=1^M, where M represents the number of points, p_i ∈ℝ^1 × 3 and f_i ∈ℝ^1 × C represent the coordinates and features of the i-th point, respectively. C denotes the number of feature channels. The point convolution for local feature aggregation can be formalized as: f_i^'=𝒜({ℳ(p_i, p_j) ·𝒯(f_i, f_j) | p_j ∈𝒩(p_i)}), where 𝒜 is the aggregation function, usually max pooling. ℳ and 𝒯 are mapping functions, typically implemented as Multi-Layer Perceptrons (MLPs). N(p_i) represents the local neighborhood of p_i, and p_j are neighboring points of p_i. Another aggregation method is to aggregate in the feature space, commonly known as dynamic graph convolution (DGC) <cit.>. It aggregates local features in the feature space, meaning that points close in the feature space may be far apart in coordinate space. DGC can be formalized as: f_i^'=𝒜({𝒯(f_i, f_j) | f_j ∈𝒩(f_i)}). §.§ Global Perception Module Directly aggregating local features of point clouds may struggle to capture meaningful shape information. We find that modeling the global context of point features before local feature aggregation helps to obtain robust shape perception. Therefore, we have developed a Global Perception Module (GPM), which initially employs the innovative Adaptive Deformable Graph Convolution (ADGConv) to reinforce short-range dependencies among similar features within the feature space. Subsequently, it utilizes Residual Cross-Attention (RCA) and Multi-Head Attention (MHA) to capture long-range dependencies across all positions within the feature space. GPM provides guidance for subsequent local structure fitting. Firstly, dynamic graphs are typically constructed using KNN in the feature space, which makes the receptive field of the dynamic graph easily influenced by the number of neighboring points K. When K is small, the receptive field of the dynamic graph focuses on the local coordinate neighborhood. When K is large, the receptive field of the dynamic graph is distributed over some semantically unrelated points, making it challenging to learn distinguishable feature representations within similar components. To tackle this issue, we introduce the ADGConv. Initially, we define a feature offset Δ(f_i) for sampled points, allowing them to traverse the entire feature space and flexibly construct appropriate local neighborhoods. 
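For concreteness, the two baseline operators recalled above — the coordinate-space point convolution and the feature-space dynamic graph convolution that ADGConv extends with the learnable offset described next — can be sketched in a few lines of PyTorch. Tensor shapes, the brute-force kNN helper, the use of kNN grouping rather than a spherical query, and the MLP widths below are our own assumptions rather than details taken from the released implementation.

```python
# Minimal PyTorch sketch of the baseline aggregation operators: the coordinate-space
# point convolution f'_i = A({M(p_i, p_j) * T(f_i, f_j)}) and the feature-space
# dynamic graph convolution (DGC). Shapes and helpers are assumptions for illustration.
import torch
import torch.nn as nn

def knn(query, source, k):
    """Indices of the k nearest neighbours of `query` in `source` (both (B, N, C))."""
    dist = torch.cdist(query, source)                    # (B, Nq, Ns) pairwise distances
    return dist.topk(k, dim=-1, largest=False).indices   # (B, Nq, k)

def gather_neighbors(x, idx):
    """Gather neighbour features: x (B, N, C), idx (B, M, K) -> (B, M, K, C)."""
    B, M, K = idx.shape
    batch = torch.arange(B, device=x.device).view(B, 1, 1).expand(B, M, K)
    return x[batch, idx]

class PointConv(nn.Module):
    """Sampling-grouping-aggregation in coordinate space."""
    def __init__(self, c_in, c_out, k=16):
        super().__init__()
        self.k = k
        self.pos_mlp = nn.Sequential(nn.Linear(3, c_out), nn.ReLU())          # M(p_i, p_j)
        self.feat_mlp = nn.Sequential(nn.Linear(2 * c_in, c_out), nn.ReLU())  # T(f_i, f_j)

    def forward(self, p, f):                              # p: (B, N, 3), f: (B, N, C)
        idx = knn(p, p, self.k)
        p_j, f_j = gather_neighbors(p, idx), gather_neighbors(f, idx)
        pos = self.pos_mlp(p_j - p.unsqueeze(2))          # relative positions of neighbours
        feat = self.feat_mlp(torch.cat([f.unsqueeze(2).expand_as(f_j), f_j], dim=-1))
        return (pos * feat).max(dim=2).values             # A = max pooling over K neighbours

class DynamicGraphConv(nn.Module):
    """DGC: neighbours are searched in feature space instead of coordinate space."""
    def __init__(self, c_in, c_out, k=20):
        super().__init__()
        self.k = k
        self.psi = nn.Sequential(nn.Linear(2 * c_in, c_out), nn.ReLU())

    def forward(self, f):                                 # f: (B, N, C)
        idx = knn(f, f, self.k)
        f_j = gather_neighbors(f, idx)
        edge = torch.cat([f.unsqueeze(2).expand_as(f_j), f_j - f.unsqueeze(2)], dim=-1)
        return self.psi(edge).max(dim=2).values
```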
The offset Δ(f_i) is adaptively acquired for the representative sampling point f_i through a learnable feature transformation function ϕ, indicating the preference of f_i for a specific position in the feature space. The transformed feature f̂_i is obtained from f_i and Δ(f_i). We then use f̂_i as the central point, with the original features defining the sampling space for constructing local neighborhoods, to build a dynamic graph and obtain the enhanced feature f_i^a. This roaming process of ADGConv aids in learning robust feature representations among similar components. ADGConv can be formalized as follows: Δ(f_i)=ϕ(f_i), f̂_i=f_i+Δ(f_i), 𝒯(f̂_i, f_j)=ψ([f̂_i, f_j-f̂_i]), f_i^a=𝒜({𝒯(f̂_i, f_j) | f_j ∈𝒩(f̂_i)}), where ϕ and ψ denote MLPs, and [·] represents concatenation. f_i^a ∈ℝ^1 × C is the output of ADGConv. Next, the RCA fuses the displaced feature f̂_i and the ADGConv output f_i^a by cross-attention, with the formula f_i^r = f_i^a + Attn(f̂_i, f_i^a, f_i^a). Finally, the output f_i^r of RCA is fed into the MHA to further learn long-range dependencies across all positions within the feature space, enhancing the model's representation of point cloud features. MHA can be formalized as follows: f_i^g = Attn(Q_i, K, V) = Softmax(Q_i K^T/√(C_h)) V, where Q_i = Z Ŵ^Q, K = ZŴ^K, and V = ZŴ^V. Ŵ^Q ∈ℝ^C × C_h, Ŵ^K ∈ℝ^C × C_h, and Ŵ^V ∈ℝ^C × C_h represent the linear mapping matrices, and Z = {f_i^r}_i=1^M denotes the output matrix of the RCA. §.§ Local Structure Fitting Convolution §.§.§ Taylor Series. Inspired by the Taylor series (see <ref>(a)) and building upon GPM, we adopt a local fitting approach to analyze the local structure and details of the point cloud more finely. The Taylor series is given by: f(x) = f(a) + ∑_n=1^∞f^(n)(a)/n!(x - a)^n, |x - a| < ϵ, where a is a constant and ϵ bounds the neighborhood of a in which the expansion is used. To simplify the computation, we write the series as a low-frequency component plus a high-frequency component: f(x) ≈ f(a) + ∑_n=1^∞ a_n (x - a)^n, |x - a| < ϵ, where a_n = f^(n)(a)/n!. Based on <ref>, in the representation of local structures within a point cloud, the low-frequency component f(a) captures the flat parts of the local structure and the overall trends of the point cloud, while the high-frequency component ∑_n=1^∞ a_n (x - a)^n captures the edges and detailed parts of the local structure. §.§.§ Local Structure Fitting Convolution. Inspired by the Taylor series, we learn both the overall information (low-frequency information) f_i^L and the refined details (high-frequency information) f_i^H embedded in local structures. Hence, the proposed LSFConv is given by: f({f_j}_j=1^K) ≈ f_i^L + f_i^H = 𝒜({ϕ(f_j)}_j=1^K) + 𝒜({𝒯(f_i, f_j)}_j=1^K), 𝒯(f_i, f_j)=(w_j ·(f_j-f_i)/|w_j ·(f_j-f_i)|)^s ·|w_j ·(f_j-f_i)|^p, where we refer to f_i^L as Low-Order Convolution (LOConv) and f_i^H as High-Order Convolution (HOConv, see <ref>(b)). ϕ represents an MLP, and 𝒯(f_i, f_j) is a novel affine basis function. Here, |·| denotes the element-wise absolute value, s ∈{0,1}, and p is a learnable parameter. When s=1, p=1, 𝒯(f_i, f_j) degenerates into an Affine Basis Function (ABF <cit.>) (see <ref>); when s=0, p=2, 𝒯(f_i, f_j) degenerates into a Radial Basis Function (RBF <cit.>) (see <ref>): 𝒯=(w_j ·(f_j-0)/|w_j ·(f_j-0)|)^1 ·|w_j ·(f_j-0)|^1=w_j · f_j, 𝒯=(w_j ·(f_j-f_i)/|w_j ·(f_j-f_i)|)^0 ·|w_j ·(f_j-f_i)|^2. Therefore, the proposed 𝒯 exhibits powerful representation capability.
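To make the formulation above concrete, the following PyTorch-style sketch implements ADGConv and the affine basis function 𝒯 used by HOConv. The number of neighbors K, the MLP widths, and the way the field-wise weights w_j are supplied (they are produced from the explicit geometric encoding introduced in the next subsection) are our own assumptions; the released code may organize these pieces differently.

```python
# Sketch of ADGConv (feature-space roaming via a learned offset) and of the affine
# basis function T(f_i, f_j) used by HOConv. Shapes and widths are assumptions.
import torch
import torch.nn as nn

class ADGConv(nn.Module):
    """Adaptive Deformable Graph Convolution."""
    def __init__(self, c, k=20):
        super().__init__()
        self.k = k
        self.phi = nn.Linear(c, c)                                # offset Delta(f_i)
        self.psi = nn.Sequential(nn.Linear(2 * c, c), nn.ReLU())  # T(f_hat_i, f_j)

    def forward(self, f):                                         # f: (B, N, C)
        f_hat = f + self.phi(f)                                   # f_hat_i = f_i + Delta(f_i)
        idx = torch.cdist(f_hat, f).topk(self.k, dim=-1, largest=False).indices
        B, N, K = idx.shape
        b = torch.arange(B, device=f.device).view(B, 1, 1).expand(B, N, K)
        f_j = f[b, idx]                                           # neighbours from the original features
        center = f_hat.unsqueeze(2).expand_as(f_j)
        edge = torch.cat([center, f_j - center], dim=-1)          # [f_hat_i, f_j - f_hat_i]
        return self.psi(edge).max(dim=2).values                   # max pooling -> f_i^a

def affine_basis(f_i, f_j, w_j, s=1, p=1.0):
    """T(f_i, f_j) = sign(w_j*(f_j - f_i))^s * |w_j*(f_j - f_i)|^p, elementwise.
    s=1, p=1 reduces to the ABF form; s=0, p=2 reduces to an RBF-style squared response.
    In LSFConv, s is fixed and p is a learnable scalar (e.g. an nn.Parameter)."""
    u = w_j * (f_j - f_i)
    return torch.sign(u) ** s * u.abs() ** p
```

In LOConv the same max-pooling aggregation is applied to ϕ(f_j) alone, so the two branches of LSFConv differ only in whether the affine basis modulates the neighbor features. §.§.§ Explicit Structure Introduction.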
The interaction between sampled points and neighboring points in the point cloud can explicitly reflect the relevance of local point clouds. If we can utilize this prior knowledge to learn weights w_j, it significantly enhances the local structure fitting convolution's ability to perceive the shape of local point clouds. We express the interaction between sampled point p_i and neighboring point p_ij as: h(p_i, p_i j)=[p_i, p_j, p_j-p_i,p_i, p_j], where ||·|| denotes the calculation of the Euclidean distance. Therefore, w_j is defined as: w_j = ξ(h(p_i, p_j)), where ξ represents MLP. The explicit introduction of geometric information is beneficial for the local fitting convolution to learn relative spatial layout relationships between points and capture local geometric features and detailed information. §.§ GPSFormer As illustrated in <ref>, building upon the Global Perception Module (GPM) and Local Structure Fitting Convolution (LSFConv), we have designed a model based on the Transformer architecture, termed GPSFormer, for point cloud analysis. §.§.§ Point Cloud Classification. For point cloud classification tasks, we construct a stacked GPSFormer by cascading GPS blocks. In each GPS block, global perception is initially performed utilizing the GPM. Subsequently, representative sampling points are obtained through FPS. Finally, local neighborhoods are constructed around each sampling point, and local shape perception is achieved through the proposed LSFConv. We employ three stages of GPS blocks for point cloud classification, with each stage having a feature dimension set of 64, 128, 256. Concurrently, we also evaluate a compact variant, termed GPSFormer-elite, with a feature dimension output set of 32, 64, 128 for each stage. Prediction on the point cloud is carried out through max-pooling and a multi-layer perceptron (MLP). In each stage of LSFConv, a multi-scale strategy is employed for local feature extraction. Spherical queries are utilized to construct local neighborhoods with multi-scale radii 0.1, 0.2, 0.4, corresponding to neighborhood point counts 8, 16, 32. The multi-scale parameters remain consistent across stages, thereby avoiding the hassle of parameter tuning. §.§.§ Part Segmentation Task. For part segmentation, we utilize five stages of GPS blocks in the encoder phase. The method and parameters for constructing local neighborhoods in each stage are identical to those in the classification task. Additionally, a decoder with a reverse interpolation algorithm is employed to restore the point cloud's resolution. Between the encoder and decoder, a skip connection structure similar to U-Net is applied, making full use of contextual information. For more details, please refer to the supplementary material. § EXPERIMENTS In this section, we demonstrate the effectiveness of the proposed GPSFormer through extensive experiments. First, we conduct experiments on 3D shape classification, part segmentation, and few-shot classfication. Then, we conduct ablation analysis. Finally, we provide visualization to better understand the behavior of GPM and LSFConv. §.§ 3D Shape Classification §.§.§ Shape classification on ScanObjectNN. ScanObjectNN <cit.> is a real-world dataset collected for point cloud classification. The objects in this dataset include backgrounds and consider occlusion, making it more realistic and challenging compared to ModelNet40 <cit.>. 
This dataset comprises 2902 point cloud objects, divided into 15 object categories, and we conducted experiments on its most challenging perturbed variant (PB_T50_RS). [t]0.48 captypetable Classification results on the ScanObjectNN dataset. “-” denotes unknown. “*” denotes pre-training methods. ! [2pt] Method Year mAcc (%) OA (%) PointNet <cit.> 2017 63.4 68.2 PointNet++ <cit.> 2018 69.8 73.7 PointCNN <cit.> 2018 75.1 78.5 DGCNN <cit.> 2019 73.6 78.1 DRNet <cit.> 2021 78.0 80.3 GBNet <cit.> 2021 77.8 80.5 SimpleView <cit.> 2021 - 80.5 MVTN <cit.> 2021 83.1 85.5 PointMLP <cit.> 2022 84.4 85.7 RepSurf-U <cit.> 2022 83.1 86.0 PointNeXt <cit.> 2022 85.8 87.7 Point-NN <cit.> 2023 - 64.9 Point-PN <cit.> 2023 - 87.1 SPoTr <cit.> 2023 86.8 88.6 PointConT <cit.> 2023 88.5 90.3 DeLA <cit.> 2023 89.3 90.4 Point-BERT* <cit.> 2021 - 83.1 Point-MAE* <cit.> 2022 - 85.2 PointFEMAE <cit.> 2023 - 90.2 Point-RAE <cit.> 2023 - 90.3 ULIP-2 + PointNeXt* <cit.> 2023 91.2 91.5 ReCon* <cit.> 2023 - 91.26 PointGPT* <cit.> 2023 - 93.6 GPSFormer-elite w/o vot. - 92.2 92.9 GSPFormer-elite w/vot. - 92.5 93.3 GPSFormer w/o vot. - 93.5 95.0 GPSFormer w/vot. - 93.8 95.4 [2pt] [t]0.48 captypetable Classification results on ModelNet40 dataset. “-” denotes unknown. “*” denotes pre-training methods. ! [2pt] Methods Year mAcc (%) OA (%) PointNet <cit.> 2016 86.0 89.2 PointNet++ <cit.> 2017 - 90.7 PointCNN <cit.> 2018 - 92.2 DGCNN <cit.> 2018 90.2 92.9 Point Transformer <cit.> 2020 90.6 93.7 MVTN <cit.> 2020 92.2 93.8 SimpleView <cit.> 2021 93.9 91.8 PAConv <cit.> 2021 - 93.9 CurveNet <cit.> 2021 - 94.2 PointNeXt <cit.> 2022 - 94.0 PointMLP <cit.> 2022 91.4 94.5 RepSurf-U <cit.> 2022 - 94.7 Point-NN <cit.> 2023 - 81.8 Point-PN <cit.> 2023 - 93.8 PointConT <cit.> 2023 - 93.5 DeLA <cit.> 2023 92.2 94.0 Point-BERT* <cit.> 2021 - 93.8 Point-MAE* <cit.> 2022 - 94.0 PointFEMAE <cit.> 2023 - 94.5 Point-RAE <cit.> 2023 - 94.1 ULIP + PointMLP* <cit.> 2023 92.4 94.7 ReCon* <cit.> 2023 - 94.7 PointGPT* <cit.> 2023 - 94.9 GPSFormer-elite w/o vot. - 90.9 93.4 GPSFormer-elite w/vot. - 91.4 93.7 GPSFormer w/o vot. - 91.8 93.8 GPSFormer w/vot. - 92.2 94.2 [2pt] We categorize the comparison methods into pure supervised learning and pre-training approaches. <ref> demonstrates that the proposed GPSFormer outperforms all methods, achieving mAcc and OA of 93.8% and 95.4% respectively. This result is 1.8% higher than the pre-training method PointGPT <cit.>'s OA, and 4.5% and 5.0% higher than the pure supervised method DeLA <cit.>'s mAcc and OA, respectively. Even the compact GSPFormer-elite achieved OA of 93.3% with only 0.68M parameters. This outcome showcases GPSFormer's robust ability to capture long-range dependencies and local structural representations. §.§.§ Shape classification on ModelNet40. The ModelNet40 <cit.> dataset, widely regarded as the benchmark for point cloud analysis, consists of point clouds r0.5 Part segmentation results (%) on the ShapeNetPart. Mean IoU of all part categories (class mIoU) and mean IoU of all instances (instance mIoU) are reported. 63mm! 
[2pt] Methods Year class mIoU instance mIoU PointNet <cit.> 2017 80.4 83.7 PointNet++ <cit.> 2017 81.9 85.1 PointCNN <cit.> 2018 84.6 86.1 DGCNN <cit.> 2019 82.3 85.1 RS-CNN <cit.> 2019 84.0 86.2 KPConv <cit.> 2019 85.1 86.4 PointConv <cit.> 2019 82.8 85.7 Point Transformer <cit.> 2020 - 85.9 PointASNL <cit.> 2020 - 86.1 PCT <cit.> 2021 - 86.4 PAConv <cit.> 2021 84.6 86.1 AdaptConv <cit.> 2021 83.4 86.4 Point Transformer <cit.> 2021 83.7 86.6 CurveNet <cit.> 2021 - 86.8 PointMLP <cit.> 2022 84.6 86.1 PointNeXt <cit.> 2022 85.2 87.1 SPoTr <cit.> 2023 85.4 87.2 GPSFormer - 85.4 86.8 [2pt] representing composite objects. It encompasses 40 categories (such as aircraft, cars, plants, and lights), with 9843 samples used for training and the remaining 2468 for testing. As shown in <ref>, the proposed GPSFormer achieves an impressive accuracy of 94.2% on the synthetic ModelNet40 dataset, outperforming most supervised and pre-training methods. This outcome underscores the effectiveness and generalizability of GPSFormer. Currently, existing methods have essentially reached saturation on the less challenging synthetic ModelNet40 dataset, with a performance gap of less than 0.8% among these advanced approaches. However, this marginal gap belies their relatively poor performance on real-world datasets like ScanObjectNN. Consequently, the ModelNet40 dataset alone cannot serve as an accurate evaluation benchmark for model performance. §.§ Part Segmentation ShapeNetPart <cit.> is a subset of the large-scale 3D CAD template library ShapeNet, which contains 16,881 shapes of 16 common object classes (i.e., table, chair, plane, etc.). Each shape is annotated with 2-5 parts, resulting in a total of 50 part categories in the dataset. In this experiment, we used 13,807 models for training and 2,874 models for testing. As shown in <ref>, we evaluated the performance of GPSFormer for part segmentation using the mean IoU and class IoU of each instance. We randomly selected 2048 points as input and reported the results after voting for 10 times. Obviously, compared with existing methods, GPSFomer achieved competitive results, especially achieving the best performance of 85.4% in terms of class IoU. <ref> shows the results of some part segmentation, indicating that GPSFormer can effectively recognize the shape information of objects and accurately partition similar components. §.§ Few-Shot Classification The existing methods perform few-shot classification on ModelNet40 <cit.>. To better reflect the model's ability in complex environments, we provide a few-shot dataset for ScanObjectNN <cit.> following the division method of ModelNet40. According to the setting of previous works, we sampled the "n-way m-shot" setting, which randomly selects n classes from the dataset and randomly selects m samples from each class for training. During testing, 20 samples are randomly selected from the remaining samples of n classes. Therefore, we evaluated in four settings (5-way 10-shot, 5-way 20-shot, 10-way 10-shot, 10-way 20-shot), each setting conducted 10 independent experiments, and the average value of the experimental results was used as the performance indicator of the model. <ref> and <ref> provide the few-shot classification performance of GPSFormer on ModelNet40 and ScanObjectNN. It can be seen that the proposed GPSFormer can learn robust shape information in limited samples. [t]0.48 captypetable Few-shot classification results on the ModelNet40 dataset. 
Mean accuracy (%) and standard deviation are reported across 10 independent trials for each scenario. ! 2c5-way 2c10-way 2-3 5-6 10-shot 20-shot 10-shot 20-shot DGCNN <cit.> 31.6 40.8 19.9 16.9 FoldingNet <cit.> 33.4 35.8 18.6 15.4 PointNet++ <cit.> 38.5 42.4 23.0 18.8 PointNet <cit.> 52.0 57.8 46.6 35.2 3D-GAN <cit.> 55.8 65.8 40.3 48.4 PointCNN <cit.> 65.4 68.6 46.6 50.0 Point-NN <cit.> 88.8 90.9 79.9 84.9 GPSFormer 90.1 91.5 82.3 86.2 [t]0.48 captypetable Impact of each component of GPM on the ScanObjectNN. ! 3c| Settings 2* OA(%) 1 - 3 ADGConv RCA MHA 93.2 88.7 89.6 94.4 95.0 95.4 [t]0.48 captypetable Few-shot classification on ScanObjectNN. ^† denotes a model trained from scratch. ! 2c5-way 2c10-way 2-3 5-6 10-shot 20-shot 10-shot 20-shot PointNeXt^†<cit.> 55.7 53.4 39.6 42.8 PointNeXt <cit.> 72.4 72.2 68.9 69.5 GPSFormer^† 71.7 73.6 54.3 62.1 GPSFormer 89.3 87.0 86.6 87.0 [t]0.48 captypetable The Influence of HOConv's Parameters on GPSFormer. ! [1pt] parameter settings Accuracy (%) ABF <cit.> 92.8 RBF <cit.> 93.2 s=0,p learnable 94.6 s=1,p learnable 95.4 [1pt] §.§ Ablation Study §.§.§ The Effectiveness of the Global Perception Module. As shown in <ref>, it is evident that ADGConv, RCA, and MHA modules play crucial roles in point cloud context modeling. Individually, ADGConv effectively extracts local features through dynamic graph convolution, achieving 93.2% accuracy. RCA contributes to global relationships with 88.7% accuracy, while MHA captures dependencies, though less effectively than ADGConv and RCA. Combining ADGConv and RCA improves accuracy to 94.4%, highlighting their complementarity. The combination of ADGConv and MHA achieves 95.0%, validating their effectiveness. Finally, utilizing all three modules together attains the highest accuracy of 95.4%, emphasizing their synergistic roles in enhancing model performance for point cloud analysis. [t]0.48 captypetable The Influence of the neighborhood radius of ADGConv. 60mm! The number of neighbor points OA(%) 5 88.6 10 92.4 15 94.3 20 95.4 25 94.8 30 93.2 [t]0.48 captypetable Model complexity comparison on ScanObjectNN. 60mm! [2pt] Methods OA(%) parameter(M) FLOPS(G) PointNet <cit.> 68.0 3.5 0.5 PointNet++ <cit.> 77.9 1.5 1.7 DGCNN <cit.> 78.1 1.8 2.4 PointCNN <cit.> 78.5 0.6 - MVTN <cit.> 82.8 11.2 43.7 PointMLP <cit.> 85.4 12.6 31.4 PointNeXt <cit.> 87.4 1.4 3.6 GPSFormer 95.4 2.36 0.7 [2pt] §.§.§ Neighborhood size of ADGConv. Since ADGConv aggregates features within feature neighborhoods, neighboring points have strong semantic relationships in spatial positions. If the neighborhood is too small, it will cause ADGConv to become local for feature aggregation within the neighborhood; if the neighborhood is too large, it not only increases the search time of the model but also introduces some irrelevant features of points. As shown in Table <ref>, when the size of the neighborhood is 20, the model can learn good contextual information. However, when the neighborhood is too large or too small, valuable semantic information will not be learned. §.§.§ The Parameter Influence of High Order Convolution. The High-Order Convolution (HOConv) plays a critical role in shaping the network's performance, as demonstrated by the results in <ref>. The configuration with a learnable parameter p and setting s=1 emerged as the most efficacious, achieving a remarkable accuracy of 95.4%. This highlights the significance of adaptive parameter tuning in enhancing network outcomes. 
Although the baseline methodologies ABF <cit.> and RBF <cit.> recorded commendable accuracies of 92.8% and 93.2% respectively, the adaptive parameter settings clearly surpassed them. §.§ Visualization <ref> visualizes the spatial distribution of sample features on the ScanObjectNN dataset using Point-BERT <cit.> and GPSFormer. It can be seen that the proposed GPSFormer better reduces the within-class distance of objects and makes the sample distribution more compact, effectively recognizing the shape information of objects. §.§ Model Complexity <ref> provides an evaluation of the model complexity of GPSFormer on the ScanObjectNN dataset. GPSFormer achieves the best results in terms of speed and accuracy with only a slight increase in parameter count, featuring just 2.36M parameters and 0.7G FLOPs. This highlights its effectiveness and efficiency in point cloud understanding tasks. § CONCLUSION In this paper, we propose a novel Global Perception and Local Structure Fitting-based Transformer (GPSFormer) to address the challenge of effectively capturing shape information from irregular point clouds. The key contributions of GPSFormer include introducing a Global Perception Module (GPM) and a Local Structure Fitting Convolution (LSFConv). The GPM enhances the model's ability to capture global contextual information in point clouds by introducing the Adaptive Deformable Graph Convolution (ADGConv). Meanwhile, the LSFConv finely learns the local geometric structures of point clouds, acquiring both low-order fundamental information and high-order refinement details. Through extensive experiments in three point cloud understanding tasks, the proposed GPSFormer demonstrates efficient processing and analysis of point clouds without relying on external data. In the future, we plan to further explore the potential of GPSFormer in pre-training, lightweight approaches, and few-shot learning settings.
http://arxiv.org/abs/2407.13349v2
20240718094913
DCNv3: Towards Next Generation Deep Cross Network for CTR Prediction
[ "Honghao Li", "Yiwen Zhang", "Yi Zhang", "Hanwei Li", "Lei Sang" ]
cs.IR
[ "cs.IR" ]
salmon1802li@gmail.com 0009-0000-6818-7834 Anhui University Hefei Anhui Province China Corresponding author zhangyiwen@ahu.edu.cn Anhui University Hefei Anhui Province China zhangyi.ahu@gmail.com Anhui University Hefei Anhui Province China lihanwei@stu.ahu.edu.cn Anhui University Hefei Anhui Province China sanglei@ahu.edu.cn Anhui University Hefei Anhui Province China § ABSTRACT Deep & Cross Network and its derivative models have become an important paradigm in click-through rate (CTR) prediction due to their effective balance between computational cost and performance. However, these models face four major limitations: (1) while most models claim to capture high-order feature interactions, they often do so implicitly and non-interpretably through deep neural networks (DNN), which limits the trustworthiness of the model's predictions; (2) the performance of existing explicit feature interaction methods is often weaker than that of implicit DNN, undermining their necessity; (3) many models fail to adaptively filter noise while enhancing the order of feature interactions; (4) the fusion methods of most models cannot provide suitable supervision signals for their different interaction methods. To address the identified limitations, this paper proposes the next generation Deep Cross Network (DCNv3) and Shallow & Deep Cross Network (SDCNv3). These models ensure interpretability in feature interaction modeling while exponentially increasing the order of feature interactions to achieve genuine Deep Crossing rather than just Deep & Cross. Additionally, we employ a Self-Mask operation to filter noise and reduce the number of parameters in the cross network by half. In the fusion layer, we use a simple yet effective loss weight calculation method called Tri-BCE to provide appropriate supervision signals. Comprehensive experiments on six datasets demonstrate the effectiveness, efficiency, and interpretability of DCNv3 and SDCNv3. The code, running logs, and detailed hyperparameter configurations are available at: <https://anonymous.4open.science/r/DCNv3-E352>. CCS Concepts: Information systems → Recommender systems. DCNv3: Towards Next Generation Deep Cross Network for Click-Through Rate Prediction Lei Sang =================================================================================== § INTRODUCTION Click-through rate (CTR) prediction is an essential part of industrial recommender systems <cit.>. It uses user profiles, item attributes, and context to predict the probability of user-item interactions, thereby providing a better user experience and increasing the profitability of the recommender system <cit.>. Most feature interaction-based CTR prediction models follow the paradigm proposed by DCN <cit.>, which aims to construct both explicit and implicit feature interactions and fuse the predictions of different interaction information to enhance interpretability and accuracy. Despite the effectiveness of the current CTR paradigm, there are limitations to overcome: * Lack of interpretability. As shown in Figure <ref>, most models integrate deep neural networks <cit.> (DNN) to model implicit high-order feature interactions and achieve AUC performance between 81.3 and 81.5.
This demonstrates the effectiveness of implicit feature interactions. However, implicit feature interactions lack interpretability <cit.>, which significantly reduces the trustworthiness of deep CTR model predictions. * Limited necessity for explicit interactions. As observed in Figure <ref>, most models using only explicit feature interactions achieve AUC performance below 81.3, while DNN achieves an AUC of 81.4. This indicates that the performance of most explicit modeling methods is weaker than that of implicit DNNs, which undoubtedly undermines the necessity of integrating explicit and implicit feature interactions. Therefore, FinalMLP <cit.> attempts to model features implicitly in a dual manner, discarding traditional explicit interaction methods, and thereby achieving state-of-the-art performance. * Ineffective noise filtering capability. Many studies <cit.> point out that CTR models contain a significant amount of redundant feature interactions and noise, especially in higher-order feature interactions. Consequently, most CTR models <cit.> are built with only two to three network layers, abandoning the explicit capture of effective higher-order feature interaction information. Meanwhile, filtering noise for the model often incurs additional computational costs, which can lead to longer training and inference times, potentially offsetting the benefits gained from improved model accuracy. * Insufficient and undifferentiated supervision signals. Most models using both explicit and implicit feature interaction methods require a fusion layer to obtain the final prediction <cit.>. However, they only use the final prediction to compute the loss, rather than providing appropriate supervision signals for the different methods themselves. This weakens the effectiveness of the supervision signals. Additionally, some studies <cit.>, such as the CL4CTR in Figure <ref>, attempt to introduce auxiliary loss to provide extra supervision signals. However, this often introduces additional computational costs and loss balancing hyperparameters, increasing the difficulty of hyperparameter tuning. Therefore, a simple, general, and effective method for computing supervision signals is crucial. To address the aforementioned limitations, this paper proposes the next generation Deep Cross Network (DCNv3) and the Shallow & Deep Cross Network (SDCNv3), which integrates both low-order and high-order feature interactions while ensuring model interpretability by avoiding the use of DNN for implicit high-order feature interaction modeling. Specifically, we introduce a new exponentially growing Deep Crossing method to explicitly model high-order feature interactions, and we use a Self-Mask operation to filter noise and reduce the number of parameters in the Cross Network by half. In the fusion layer, we propose a simple yet effective multi-loss balancing strategy and calculation method, called Tri-BCE, to provide suitable supervision signals for different sub-networks. The core contributions of this paper are summarized as follows: * To the best of our knowledge, this is the first work to achieve surprising performance using only explicit feature interaction modeling without integrating DNN, which may contrast with the popular paradigms in the past CTR prediction literature. * We introduce a novel feature interaction modeling method, Deep Crossing, which grows exponentially with the number of layers to achieve a genuine deep cross network (DCNv3). 
This method explicitly captures feature interaction information while using a Self-Mask operation to reduce the number of parameters by half and filter noise. * We propose a model, SDCNv3, that explicitly captures both low-order and high-order feature interactions. Additionally, we introduce a simple and effective multi-loss balancing and calculation method, called Tri-BCE, to ensure that different sub-networks receive appropriate supervision signals. * Comprehensive experiments on six datasets demonstrate the effectiveness, efficiency, and interpretability of DCNv3 and SDCNv3. Based on our experimental results, our models achieve 1st rankings on multiple CTR prediction benchmarks. § RELATED WORK AND BACKGROUND §.§ CTR Prediction Effectively capturing feature interactions has always been one of the key methods for improving CTR prediction models, thus receiving extensive research attention. Traditional methods include LR <cit.>, which captures first-order feature interactions, and FM <cit.> and its derivatives <cit.>, which capture second-order feature interactions. With the rise of deep learning, several models attempt to use DNN to capture higher-order feature interactions (e.g., PNN <cit.>, Wide & Deep <cit.>, DeepFM <cit.>, DCNv1 <cit.>, DCNv2 <cit.>, and DIN <cit.>), achieving better performance. Among these, the DCN series models are widely recognized for their effective trade-off between efficiency and performance, gaining significant attention from both academia and industry <cit.>. Most subsequent deep CTR models follow the paradigm established by DCN, integrating explicit and implicit feature interactions. Explicit feature interactions are often modeled directly through hierarchical structures, such as the Cross Layer in the DCN <cit.>, the Graph Layer in FiGNN <cit.>, and the Interacting Layer in AutoInt <cit.>. These methods ensure partial interpretability while allowing the capture of higher-order feature interactions. On the other hand, implicit feature interactions use DNN to automatically learn complex, non-manually defined patterns and interactions among features, further enhancing model performance <cit.>. However, as the performance of explicit feature interactions is generally weaker than that of implicit feature interactions <cit.>, several models attempt to abandon standalone explicit interaction methods and instead integrate multiplicative operations into DNN. MaskNet <cit.> introduces multiplicative operations block by block, while GateNet <cit.>, PEPNet <cit.>, and FINAL <cit.> introduce them layer by layer to achieve higher performance. However, solely pursuing implicit modeling leads to a lack of interpretability. Meanwhile, most models lack the ability to filter noise and obtain appropriate supervisory signals. This paper aims to address these limitations through our proposed methods. §.§ PROBLEM DEFINITION §.§.§ DEFINITION 1: CTR Prediction. It is typically considered a binary classification task that utilizes user profiles <cit.>, item attributes, and context as features to predict the probability of a user clicking on an item. The composition of these three types of features is as follows: * User profiles (U): age, gender, occupation, etc. * Item attributes (I): brand, price, category, etc. * Context (C): timestamp, device, position, etc. Further, we can define a CTR sample in the tuple data format: X = {x_U, x_I, x_C}. Variable y ∈{0, 1} is an true label for user click behavior: y= 1, user has clicked item, 0, otherwise. 
A positive sample when y=1 and a negative sample when y=0. A CTR prediction model aims to predict y and rank items based on the predicted probabilities ŷ. §.§.§ DEFINITION 2: Feature Interaction. Implicit feature interaction aims to automatically learn complex non-manually defined data patterns and high-order feature interactions using DNNs. It is characterized by high efficiency, strong performance, and poor interpretability. Additionally, <cit.> points out the inefficiency of DNNs in learning multiplicative operations. Explicit feature interaction aims to model the combinations and relationships between input features directly through predefined functions, thereby improving model interpretability. A popular explicit feature interaction method is X_2 = X_1 ⊙ X_1 <cit.>. This method uses the Hadamard Product to interact with two first-order features and generate a second-order feature. § PROPOSED ARCHITECTURE §.§ Embedding & Reshape Layer The input for the CTR prediction task typically consists of x_p, x_a, and x_c, which are multi-field categorical data and are represented using one-hot encoding. Most CTR prediction models utilize an embedding layer to transform them into low-dimensional dense vectors: 𝐞_i=E_i x_i, where E_i ∈ℝ^d × s_i and s_i separately indicate the embedding matrix and the vocabulary size for the i-th field, d represents the embedding dimension. In our model, to enable the Cross & Masked Vector to share a weight matrix and separately process two different information streams, we reshape the embeddings using the operation to divide them into two different views: 𝐞_i,a, 𝐞_i,b = (𝐞_i). Further, we can get the resulting embedding after reshaping: 𝐱_1 = [𝐞_1,a, ⋯, 𝐞_f,a, 𝐞_1,b, ⋯, 𝐞_f,b] ∈ℝ^D, where f denotes the number of fields, D=∑_i=1^f d, and 𝐱_1 denotes first-order features. §.§ Shallow & Deep Cross Network v3 The architecture of SDCNv3, shown in Figure <ref>, integrates different Crossing methods and various layers to simultaneously capture both high-order and low-order explicit feature interactions. As the core structure of SDCNv3, the shallow explicit modeling of SCNv3 recursive formula is as follows: 𝐜_l = 𝐖_l 𝐱_l+𝐛_l, 𝐱_l+1 = 𝐱_1 ⊙[𝐜_l || (𝐜_l)] + 𝐱_l, where 𝐜_l ∈ℝ^D/2 represents the Cross Vector at l-th layer, denotes the Self-Mask operation, 𝐖_l ∈ℝ^D/2× D and 𝐛_l ∈ℝ^D/2 are the learnable weight matrix and bias vector, respectively, and 𝐱_l ∈ℝ^D represents the l-th order feature interaction. The deep explicit modeling of DCNv3 recursive formula is as follows: 𝐜_l = 𝐖_l𝐱_2^l-1+𝐛_l, 𝐱_2^l = 𝐱_2^l-1⊙[𝐜_l || (𝐜_l)] + 𝐱_2^l-1, where 𝐱_2^l∈ℝ^D represents the 2^l-th order feature interaction. From Figure <ref>, we can see that the original Cross Network v2 is essentially a shallow crossing method, achieving linear growth in the order of feature interactions through the stacking of layers. Experiments show that the optimal number of cross layers in DCNv2 is 2 to 3 <cit.>, with most of its performance coming from the DNN responsible for implicit interactions. In contrast, DCNv3 modifies 𝐱_1 to 𝐱_2^l-1 to achieve exponential growth in the order of feature interactions. Figure <ref> visualizes the computation process of different crossing methods. The Cross Vector 𝐜_l aims to model interactions between features at the bit-level, while the weight matrix 𝐖_l aims to compute the inherent importance of different feature fields. However, as pointed out by some works <cit.>, not all feature interactions are beneficial for the final prediction in CTR tasks. 
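Reading (𝐜_l) as the Self-Mask output Mask(𝐜_l) introduced next, the two recursions above can be summarized in a short PyTorch sketch; the feature dimension D, the stacking loops, and the exact placement of the LayerNorm used inside the mask are our assumptions about the implementation.

```python
# One cross layer shared by SCNv3 (linear order growth: x_{l+1} = x_1 * [c_l || Mask(c_l)] + x_l)
# and DCNv3 (exponential order growth: x_{2^l} = x_{2^{l-1}} * [c_l || Mask(c_l)] + x_{2^{l-1}}).
# The Self-Mask is stubbed with the LayerNorm + max(0, .) gate described below.
import torch
import torch.nn as nn

class CrossLayerV3(nn.Module):
    def __init__(self, dim):                      # dim = D, the concatenated embedding size
        super().__init__()
        self.linear = nn.Linear(dim, dim // 2)    # W_l x + b_l -> Cross Vector c_l in R^{D/2}
        self.norm = nn.LayerNorm(dim // 2)        # element-wise affine, used by the Self-Mask

    def self_mask(self, c):
        return c * torch.relu(self.norm(c))       # zeroes roughly half of the entries

    def forward(self, x_prev, x_base):
        c = self.linear(x_prev)                               # (B, D/2)
        gate = torch.cat([c, self.self_mask(c)], dim=-1)      # [c_l || Mask(c_l)] in R^D
        return x_base * gate + x_prev                         # residual cross interaction

def scn_v3(x1, layers):                           # shallow crossing: always cross with x_1
    x = x1
    for layer in layers:
        x = layer(x, x_base=x1)
    return x

def dcn_v3(x1, layers):                           # deep crossing: cross with the previous output
    x = x1
    for layer in layers:
        x = layer(x, x_base=x)
    return x

# Example: four DCNv3 layers capture up to 2^4-order interactions of a D=256 input.
# layers = nn.ModuleList([CrossLayerV3(256) for _ in range(4)]); out = dcn_v3(x1, layers)
```

The gating term [𝐜_l || Mask(𝐜_l)] is where uninformative interaction signals can be suppressed.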
Therefore, we introduce the Self-Mask operation to filter out noisy information from feature interactions in another view while maintaining the integrity of the interaction information of the original view. To avoid additional computational costs, we use <cit.> with element-wise affine to regularize it, ensuring a mask rate of around 0.5. The specific formalization of the Self-Mask operation is as follows: (𝐜_l) = 𝐜_l⊙ max(0, (𝐜_l)). Other mask mechanisms can also be used here, such as random Mask based on the Bernoulli distribution, learnable Mask based on Top-K selection, etc. To ensure our proposed model is simple and effective, we use LayerNorm to perform a straightforward and efficient normal distribution transformation on 𝐜_l, ensuring that its output contains approximately 50% zero values to filter out noise, and it reduces the number of parameters by half compared to Cross Network v2. §.§ Fusion Layer Most previous CTR models attempt to capture both implicit and explicit feature interactions, which essentially means capturing low-order and high-order feature interactions. Our SDCNv3 achieves this only through explicit modeling. On the other hand, existing works <cit.> often integrate sub-networks using either parallel or stacked structures. Considering the high parallelism of the former, we use a parallel structure to fuse information and compute the loss: ŷ_ D = σ(𝐖_ D𝐱_2^L + 𝐛_ D), ŷ_ S = σ(𝐖_ S𝐱_L+1 + 𝐛_ S), ŷ = (ŷ_ D, ŷ_ S). where 𝐖_ D and 𝐖_ S∈ℝ^1 × D represent learnable weights, 𝐛_ D and 𝐛_ S are biases, denotes the mean operation, ŷ_ D, ŷ_ S represent the prediction results of DCNv3 and SCNv3, respectively, and L denotes the last number of layers. Tri-BCE loss calculation and balancing method are shown in Figure <ref>. We use the widely adopted binary cross-entropy loss <cit.> (i.e., Logloss) as both the primary and auxiliary loss for the SDCNv3: ℒ =-1/N∑_i=1^N(y_i log(ŷ_i)+(1-y_i) log(1-ŷ_i)), ℒ_ D =-1/N∑_i=1^N(y_i log(ŷ_ D,i)+(1-y_i) log(1-ŷ_ D,i)), ℒ_ S =-1/N∑_i=1^N(y_i log(ŷ_ S,i)+(1-y_i) log(1-ŷ_ S,i)), where y denotes the true labels, N denotes the batch size, ℒ_ D and ℒ_ S represent the individual losses for the prediction results of DCNv3 and SCNv3, respectively, and ℒ represents the primary loss. To provide each sub-network with suitable supervision signals, we assign them adaptive weights, 𝐰_ D = max(0, ℒ_ D - ℒ) and 𝐰_ S = max(0, ℒ_ S - ℒ), and jointly train them to achieve Tri-BCE loss: ℒ_Tri = ℒ + 𝐰_ D·ℒ_ D + 𝐰_ S·ℒ_ S, As demonstrated by <cit.>, providing a single supervision signal to sub-networks is often suboptimal. Our proposed Tri-BCE loss helps sub-networks learn better by providing adaptive weights that change throughout the learning process. Theoretically, we can derive the gradients obtained by ŷ_ D: ∇_(ŷ^+_ D)ℒ_Tri = -1/N·∂(logŷ^+ + 𝐰_ Dlogŷ^+_ D)/∂ŷ^+_ D = -1/N(1/2ŷ^+ + 𝐰_ D/ŷ^+_ D), ∇_(ŷ^-_ D)ℒ_Tri = -1/N·∂(log (1 - ŷ^-) + 𝐰_ Dlog(1 - ŷ^-_ D))/∂ŷ^-_ D = 1/N(1/2(1-ŷ^-) + 𝐰_ D/1-ŷ^-_ D), where ∇_(ŷ^+_ D) and ∇_(ŷ^-_ D) represent the gradients received by ŷ_ D for positive and negative samples, respectively. Similarly, the gradient signals received by ŷ_ S are consistent with those of ŷ_ D, so we do not elaborate further. It can be observed that ŷ_ D and ŷ_ S both have the same gradient terms 1/2ŷ^+ and 1/2(1-ŷ^-), indicating that training both sub-networks with a single loss results in identical supervision signals, which is detrimental to the model's learning. 
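A compact sketch of the Tri-BCE computation is given below. Treating 𝐰_ D and 𝐰_ S as detached (constant) multipliers matches the gradient expressions above; whether the released implementation detaches them in this way is our assumption.

```python
# Tri-BCE: primary BCE on the fused prediction plus two auxiliary BCE terms, weighted
# by w_D = max(0, L_D - L) and w_S = max(0, L_S - L). Batch shapes are assumptions.
import torch
import torch.nn.functional as F

def tri_bce(y_pred, y_pred_d, y_pred_s, y_true):
    """y_pred = mean(y_pred_d, y_pred_s); all tensors of shape (batch,), probabilities in (0, 1),
    y_true a float tensor of 0/1 labels."""
    loss   = F.binary_cross_entropy(y_pred,   y_true)
    loss_d = F.binary_cross_entropy(y_pred_d, y_true)
    loss_s = F.binary_cross_entropy(y_pred_s, y_true)
    w_d = torch.clamp(loss_d - loss, min=0.0).detach()   # treated as constant multipliers,
    w_s = torch.clamp(loss_s - loss, min=0.0).detach()   # consistent with the gradients above
    return loss + w_d * loss_d + w_s * loss_s
```

With this weighting, an auxiliary loss only receives extra emphasis while it is still larger than the primary loss.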
However, our Tri-BCE loss additionally provides dynamically adjusted gradient terms based on 𝐰_ D and 𝐰_ S, ensuring that the sub-networks are directly influenced by the true labels y and adaptively adjust their weights according to the difference between the primary and auxiliary losses. Therefore, Tri-BCE loss provides the sub-networks with more suitable supervision signals. §.§ Complexity Analysis To further compare the time complexity of the DCN series models, we discuss and analyze the time complexity of different models. Let W_Ψ denote the predefined number of parameters in the DNN, and s denote the feature vocabulary size. The definitions of the other variables can be found in the previous sections. For clarity, we further provide a comparison of the magnitudes of different variables in Table <ref>. We can derive: * All models have the same time complexity for embedding. Therefore, we only visualize the non-embedding parameters in the experiment section. * Except for our proposed DCNv3 and SDCNv3, all other models include implicit interaction to enhance predictive performance, which incurs additional computational costs. * In terms of explicit interaction, DCNv3 only has a higher time complexity than DCNv1, and the time complexity of GDCN is four times that of DCNv3. * Since our SDCNv3 uses the Tri-BCE loss, the time complexity of loss computation for SDCNv3 is three times that of other models. However, this does not affect the model's inference speed. § EXPERIMENTS In this section, we conduct comprehensive experiments on six CTR prediction datasets to validate the effectiveness, efficiency, and interpretability of DCNv3 and SDCNv3, and address the following research questions (RQs): * RQ1 Do DCNv3 and SDCNv3 outperform other CTR models in terms of performance? Do they perform well on large-scale and highly sparse datasets? * RQ2 Are DCNv3 and SDCNv3 more efficient compared to other CTR models? * RQ3 Do SDCNv3 possess interpretability and the ability to filter noise? * RQ4 How do different configurations affect the models? §.§ Experiment Setup §.§.§ Datasets. We evaluate DCNv3 and SDCNv3 on six CTR prediction datasets: Avazu[<https://www.kaggle.com/c/avazu-ctr-prediction>] <cit.>, Criteo[<https://www.kaggle.com/c/criteo-display-ad-challenge>] <cit.>, ML-1M[<https://grouplens.org/datasets/movielens>] <cit.>, KDD12[<https://www.kaggle.com/c/kddcup2012-track2>] <cit.>, iPinYou[<https://contest.ipinyou.com/>] <cit.>, and KKBox[<https://www.kkbox.com/intl>] <cit.>. Table <ref> provides detailed information about these datasets. A more detailed description of these datasets can be found in the given references and links. §.§.§ Data Preprocessing. We follow the approach outlined in <cit.>. For the Avazu dataset, we transform the timestamp field it contains into three new feature fields: hour, weekday, and weekend. For the Criteo and KDD12 dataset, we discretize the numerical feature fields by rounding down each numeric value x to ⌊log^2(x) ⌋ if x > 2, and x = 1 otherwise. We set a threshold to replace infrequent categorical features with a default "OOV" token. We set the threshold to 10 for Criteo, KKBox, and KDD12, 2 for Avazu and iPinYou, and 1 for the small dataset ML-1M. More specific data processing procedures and results can be found in our open-source run logs[<https://anonymous.4open.science/r/DCNv3-E352/checkpoints/>] and configuration files, which we do not elaborate on here. §.§.§ Evaluation Metrics. 
To compare the performance, we utilize two commonly used metrics in CTR models: Logloss, AUC <cit.>. AUC stands for Area Under the ROC Curve, which measures the probability that a positive instance will be ranked higher than a randomly chosen negative one. Logloss is the result of the calculation of ℒ in Equation <ref>. A lower Logloss suggests a better capacity for fitting the data. It is worth noting that even a slight improvement (e.g., 0.1%) in Logloss and AUC is meaningful in the context of CTR prediction tasks <cit.>. §.§.§ Baselines. We compared DCNv3 and SDCNv3 with some state-of-the-art (SOTA) models (* denotes Integrating the original model with DNN networks): (1) Since DCNv3 is a standalone network that performs explicit feature interactions, we compare it with several models that also perform explicit feature interactions on two large-scale datasets. For example, LR (2007) <cit.> implements first-order feature interactions; FM and its derivative models FM (2010) <cit.>, FwFM (2018) <cit.>, AFM (2017) <cit.>, FmFM (2021) <cit.> implement second-order feature interactions; and CrossNetv1 (2017) <cit.>, CrossNetv2 (2021) <cit.>, CIN (2018) <cit.>, AutoInt (2019) <cit.>, AFN (2020) <cit.>, FiGNN (2019) <cit.> implement higher-order feature interactions. (2) To verify the superiority of DCNv3 and SDCNv3 over models that include implicit feature interactions, we further select several high-performance representative baselines, such as PNN (2016) <cit.>, Wide & Deep (2016) <cit.>, DeepFM (2017) <cit.>, DCNv1 (2017) <cit.>, xDeepFM (2018) <cit.>, AutoInt* (2019) <cit.>, AFN* (2020) <cit.>, DCNv2 (2021) <cit.>, EDCN (2021) <cit.>, MaskNet (2021) <cit.>, CL4CTR (2023) <cit.>, EulerNet (2023) <cit.>, FinalMLP (2023) <cit.>, and FINAL (2023) <cit.>. §.§.§ Implementation Details. We implement all models using PyTorch <cit.> and refer to existing works <cit.>. We employ the Adam optimizer <cit.> to optimize all models, with a default learning rate set to 0.001. For the sake of fair comparison, we set the embedding dimension to 128 for KKBox and 16 for the other datasets <cit.>. The batch size is set to 4,096 on the Criteo, ML-1M, and iPinYou datasets and 10,000 on the other datasets. To prevent overfitting, we employ early stopping with a patience value of 2. The hyperparameters of the baseline model are configured and fine-tuned based on the optimal values provided in <cit.> and their original paper. Further details on model hyperparameters and dataset configurations are available in our straightforward and accessible running logs[<https://anonymous.4open.science/r/DCNv3-E352/checkpoints/>] and are not reiterated here. §.§ Overall Performance (RQ1) §.§.§ Comparison with models using only explicit feature interactions. Since DCNv3 models feature interactions explicitly, we select 11 representative models for comparison, categorized into First-order, Second-order, and Higher-order classes. We bold the best performance, while underlined scores are the second best. The experimental results are shown in Table <ref>, and we can draw the following conclusions: * By comparing Table <ref> and Table <ref>, we find that most models using only explicit feature interactions often perform worse than those integrating implicit feature interactions, and even worse than a simple DNN. This undoubtedly undermines the necessity of explicit feature interactions. * Overall, capturing higher-order feature interactions often enhances model performance. 
For example, FM outperforms LR on the two large-scale datasets Avazu and Criteo, and CrossNetv2 outperforms all first-order and second-order feature interaction models except for FwFM on Avazu. This demonstrates the effectiveness of higher-order feature interactions in improving model performance. * More complex model structures do not necessarily lead to performance improvements. AFM introduces a more complex attention mechanism compared to FM, yet it does not achieve better performance, as also reported in <cit.>. However, CrossNetv2 extends the size of the weight matrix compared to CrossNetv1, resulting in a certain degree of performance enhancement. Therefore, we should carefully design the model architecture. * FiGNN achieves the best baseline performance among the explicit feature interaction models. However, our DCNv3 still achieves a Logloss decrease of 0.4% and an AUC increase of 0.54% on the Avazu dataset compared to FiGNN, and a Logloss decrease of 0.18% and an AUC increase of 0.17% on the Criteo dataset, both exceeding 0.001 in a statistically significant level. This demonstrates the superiority of DCNv3. §.§.§ Comparison with models integrating implicit feature interactions. To further comprehensively investigate the performance superiority and generalization ability of DCNv3 and SDCNv3 on various CTR datasets (e.g., large-scale sparse datasets), we select 15 representative baseline models and 6 benchmark datasets. We highlight the performance of DCNv3 and SDCNv3 in bold and underline the best baseline performance. Table <ref> presents the experimental results, from which we can make the following observations: * Overall, SDCNv3 achieves the best performance across all six datasets. SDCNv3 shows an average AUC improvement of 0.22% over the strongest baseline model and an average Logloss improvement of 0.11%, both exceeding a statistically significant performance improvement of 0.1%. This demonstrates the effectiveness of SDCNv3. SDCNv3 ranks 1st in CTR prediction on the PapersWithCode benchmarks[<https://paperswithcode.com/task/click-through-rate-prediction>] for the Criteo, KDD12, and KKBox datasets. * The FinalMLP model achieves good performance on the Avazu and Criteo datasets, surpassing most CTR models that combine explicit and implicit feature interactions. This demonstrates the effectiveness of implicit feature interactions. Consequently, most CTR models attempt to integrate DNN into explicit feature interaction models to enhance performance. However, SDCNv3 achieves state-of-the-art performance using only explicit feature interactions, indicating the effectiveness of modeling with explicit feature interactions alone. * SDCNv3 achieves performance improvements over DCNv3 across all six datasets, demonstrating the effectiveness of SCNv3 in capturing low-order feature interactions and the Tri-BCE loss. Notably, on the iPinYou dataset, we observe that all models have Logloss values around the 0.0055 level. This is due to the imbalance between positive and negative samples in the dataset <cit.>, and other works have reported similar results <cit.>. * DCNv3 outperforms all baseline models in terms of AUC, with the only exception being Logloss optimization on the KKBox dataset, which is weaker than DCNv1. This further demonstrates the effectiveness of DCNv3, as it captures high-quality feature interaction information through exponentially growing feature interactions and noise filtering mechanisms. 
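For reference, both reported metrics can be computed directly from the predicted click probabilities, for example with scikit-learn as sketched below; the authors' exact evaluation code may differ.

```python
# Computing the two reported metrics from predicted probabilities. Using scikit-learn
# here is our choice for illustration; the paper's evaluation pipeline may differ.
import numpy as np
from sklearn.metrics import roc_auc_score, log_loss

def evaluate(y_true, y_prob):
    """y_true: 0/1 labels, y_prob: predicted CTR in (0, 1); returns (AUC, Logloss)."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    return roc_auc_score(y_true, y_prob), log_loss(y_true, y_prob)
```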
§.§ In-Depth Study of DCNv3 and SDCNv3 §.§.§ Efficiency Comparison (RQ2) To verify the efficiency of DCNv3 and SDCNv3, we fix the optimal hyperparameters of the 25 baseline models and compare their parameter count (rounded to two decimal places) and runtime (averaged over five runs). The experimental results are shown in Figure <ref>. We can make the following observations: * Explicit CTR models typically use fewer parameters. For instance, LR, FM, FwFM, and AFM have nearly zero non-embedding parameters, while FmFM, CrossNet, CIN, and AutoInt all require fewer than 1M parameters. Notably, parameter count does not always correlate with time complexity. Although CIN uses only 0.57M parameters, its training time per epoch reaches a maximum of 606 seconds, making it unsuitable for practical production environments. * The LR model has the lowest runtime at 150 seconds among all models. CrossNetv1 and CrossNetv2 follow closely, requiring only a negligible increase in time while significantly improving performance. This demonstrates the efficiency of the CrossNet series of models. As a fundamental component of deep CTR models, the DNN requires only 156 seconds. Due to the parallel-friendly nature of parallel CTR architectures, some carefully designed deep CTR models, such as PNN, DCNv2, FinalMLP, and FINAL, significantly enhance prediction accuracy without a substantial increase in runtime. * Our proposed DCNv3 and SDCNv3 are the most parameter-efficient models in the DCN series, requiring only 0.78M and 1.37M parameters, respectively, to achieve SOTA performance. Meanwhile, regarding runtime, DCNv3 consistently outperforms strong baseline models such as FinalMLP, FINAL, DCNv2, and DCNv1. This demonstrates the time efficiency of DCNv3. Although SDCNv3 requires an additional 20 seconds compared to DCNv3 due to the employment of the Tri-BCE loss, it still remains comparable to DCNv2. Notably, the extra computational cost for the loss is only incurred during training and does not affect inference speed in practical applications. §.§.§ The Interpretability of SDCNv3 (RQ3) Interpretability is a crucial aspect of CTR prediction tasks <cit.>, as it helps researchers understand the reasons behind predictions and increases confidence in the results. In this section, we investigate the dynamic cross vector 𝐜_l, the masked vector Mask(𝐜_l), and the static 𝐖_l to understand the model's prediction process. The experimental and visualization results are shown in Figure <ref> (Value denotes the Frobenius norm for each feature field), from which we can make the following observations: * From Figure <ref> (a∼d), we observe that 𝐜_l and Mask(𝐜_l) change progressively with increasing layers. For example, UserID has high importance in the first layer of both SCNv3 and DCNv3, but its importance decreases as the number of layers increases. Meanwhile, Mask(𝐜_l) for UserID gradually increases its corresponding sparsity to further filter out noise information. * 𝐜_l and Mask(𝐜_l) exhibit complementary properties. When a feature field in 𝐜_l becomes important, its corresponding sparsity in Mask(𝐜_l) decreases, and vice versa (e.g., the UserID, Occupation, and Age fields). This demonstrates the effectiveness of our introduced Self-Mask operation, which further filters out noise by more aggressively assigning zero values to the representation elements of certain feature fields. * From Figure <ref> (e, f), we observe that SCNv3 and DCNv3 capture different feature interaction information at the same layer. In SCNv3, 𝐖_3 is used to compute the importance of 3-order features to generate 4-order feature interactions. 
In contrast, in DCNv3, 𝐖_3 is used to compute the importance of 2^2-order features to generate 2^3-order feature interactions. Consequently, DCNv3 shows reduced importance for UserID × Genres compared to SCNv3. This further proves the validity of SDCNv3. * Overall, we observe that the importance of higher-order feature interactions is lower than that of lower-order feature interactions, which is similarly reported in some works <cit.>. For example, in Figure <ref>, (f) has fewer bright red blocks compared to (e), and the blue blocks in (a) gradually darken as the number of layers increases; the situation is similar in (b). §.§.§ Ablation Study (RQ4) To investigate the impact of each component of SDCNv3 on its performance, we conduct experiments on several variants of SDCNv3: * DCNv3: SDCNv3 without the SCNv3. * SCNv3: SDCNv3 without the DCNv3. * w/o TB: SDCNv3 with BCE instead of the Tri-BCE loss. * w/o LN: the Self-Mask without the LayerNorm. The results of the ablation experiments are shown in Table <ref>. It is observed that both DCNv3 and SCNv3 exhibit some performance loss compared to SDCNv3, which demonstrates the necessity of capturing both high-order and low-order feature interactions. Meanwhile, the variant w/o TB also leads to a certain degree of performance decline, particularly noticeable on KKBox. The LayerNorm aims to ensure that the Self-Mask maintains a masking rate of around 0.5, so its removal also results in some performance loss. This demonstrates the necessity and effectiveness of each component within SDCNv3. §.§.§ Influence of Network Depths (RQ4) To further investigate the influence of different neural network depths on the performance of DCNv3, we conduct experiments on two large-scale CTR datasets, Criteo and KDD12. Figure <ref> shows the AUC and Logloss performance of DCNv3 on the test sets. From Figure <ref>, we observe that on the Criteo dataset, the model achieves optimal performance at a depth of 4 layers, indicating that DCNv3 captures up to 2^4-order feature interactions. On the KDD12 dataset, DCNv3 achieves optimal performance at a depth of 6 layers, meaning it captures 2^6-order feature interactions. In contrast, achieving the same order of feature interactions in the linearly growing CrossNetv2 requires 2^4 - 1 and 2^6 - 1 layers, respectively. Considering the huge computational resources this would require, it is impractical, whereas DCNv3 easily accomplishes it with its exponentially growing feature interaction mechanism. This further demonstrates the effectiveness of DCNv3. § CONCLUSION This paper introduces the next-generation deep cross networks, DCNv3 and SDCNv3. The former explicitly captures feature interactions through an exponentially growing modeling method and further filters noise signals via the Self-Mask operation, reducing the parameter count by half. The latter builds on DCNv3 by incorporating the shallow cross network, SCNv3, to capture both high-order and low-order feature interactions without relying on the less interpretable DNN. The Tri-BCE loss helps the two sub-networks in SDCNv3 obtain supervision signals better suited to each of them. Comprehensive experiments on six datasets demonstrate the effectiveness, efficiency, and interpretability of DCNv3 and SDCNv3. Additionally, our proposed models achieve 1st rankings in multiple CTR benchmarks using only explicit feature interactions, breaking the convention that traditional CTR models must integrate implicit feature interactions to improve performance. 
http://arxiv.org/abs/2407.11949v1
20240716174401
Minimally Entangled Typical Thermal States for Classical and Quantum Simulation of Gauge Theories at Finite Temperature and Density
[ "I-Chi Chen", "João C. Getelina", "Klée Pollock", "Srimoyee Sen", "Yong-Xin Yao", "Thomas Iadecola" ]
quant-ph
[ "quant-ph", "cond-mat.str-el" ]
These authors contributed equally to this work. Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA Ames National Laboratory, Ames, Iowa 50011, USA These authors contributed equally to this work. Ames National Laboratory, Ames, Iowa 50011, USA Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA ykent@iastate.edu Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA Ames National Laboratory, Ames, Iowa 50011, USA iadecola@iastate.edu Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA Ames National Laboratory, Ames, Iowa 50011, USA § ABSTRACT Simulating strongly coupled gauge theories at finite temperature and density is a longstanding challenge in nuclear and high-energy physics that also has fundamental implications for condensed matter physics. In this work, we investigate the utility of minimally entangled typical thermal state (METTS) approaches to facilitate both classical and quantum computational studies of such systems. METTS techniques combine classical random sampling with imaginary time evolution, which can be performed on either a classical or a quantum computer, to estimate thermal averages of observables. We study the simplest model of a confining gauge theory, namely ℤ_2 gauge theory coupled to spinless fermionic matter in 1+1 dimensions, which can be directly mapped to a local quantum spin chain with two- and three-body interactions. We benchmark both a classical matrix-product-state implementation of METTS and a recently proposed adaptive variational approach to METTS that is a promising candidate for implementation on near-term quantum devices, focusing on the equation of state as well as on various measures of fermion confinement. Of particular importance is the choice of basis for obtaining new METTS samples, which impacts both the classical sampling complexity (a key factor in both classical and quantum simulation applications) and complexity of circuits used in the quantum computing approach. Our work sets the stage for future studies of strongly coupled gauge theories with both classical and quantum hardware. Minimally Entangled Typical Thermal States for Classical and Quantum Simulation of Gauge Theories at Finite Temperature and Density Thomas Iadecola July 22, 2024 ==================================================================================================================================== § INTRODUCTION Gauge theories are archetypal models of strongly correlated matter that are relevant across energy scales. In nuclear and high energy physics, the phase diagram of quantum chromodynamics (QCD) at finite density and temperature is of interest to the physics of the early universe, heavy ion collisions and neutron stars <cit.>. In condensed matter physics, gauge theories play an important role in the description of topological phases of matter <cit.>, quantum criticality <cit.>, and correlated phenomena such as magnetism <cit.> and superconductivity <cit.>. These diverse models share the key feature that they are generically strongly coupled such that their properties are beyond the reach of analytical tools. 
For example, the outer and inner cores of neutron stars contain degenerate Fermi liquids of nucleons and/or quarks at densities close to few times the nuclear saturation density, where nuclear effective field theory and perturbative QCD calculations break down. Similar obstacles exist for calculations at higher temperature at finite density. Numerical tools to simulate gauge theories include Monte Carlo <cit.> and tensor network techniques <cit.>, but both classes of methods have their limitations. On one hand, Monte Carlo methods generically struggle to simulate systems at finite fermion density owing to the sign problem <cit.>. On the other hand, tensor network simulations become much more challenging above one spatial dimension due to the complexity of contraction <cit.>. Quantum simulation approaches bypass these obstacles and can potentially enable detailed studies of the phase diagram of strongly coupled gauge theories <cit.>. However, much work remains to determine efficient quantum simulation methods for complex non-Abelian gauge theories like QCD <cit.>. In the meantime, it is instructive to focus on method development for toy models that exhibit some of the same phenomenology as QCD, while still being relevant for condensed matter physics <cit.>. In this paper we consider such a model, namely a ℤ_2 lattice gauge theory coupled to spinless fermions in 1+1 dimensions <cit.>. Like QCD, this model exhibits chiral symmetry breaking, confinement and string tension <cit.>; it is also relevant for studies of nonequilibrium condensed-matter phenomena like quantum many-body scars <cit.> and Hilbert-space fragmentation <cit.>. We explore the finite-temperature and -density properties of this model, focusing on measures of confinement and on the equation of state relating internal energy and fermion density. Our study adopts the minimally entangled typical thermal states (METTS) approach <cit.>, which combines imaginary time evolution (ITE) with a statistical sampling procedure to estimate quantum statistical-mechanics averages. Although originally developed as a tensor-network method, METTS can also be recast as a quantum algorithm (QMETTS), in which a quantum computer is used to perform the ITE subroutine <cit.>. While many approaches to ITE on quantum computers are possible, our study focuses on a recently proposed adaptive <cit.> variational <cit.> approach to QMETTS (AVQMETTS) <cit.>. We perform systematic benchmarks of grand-canonical-ensemble calculations within METTS with a view towards both classical and quantum computing approaches, focusing on a matrix product state (MPS) approach for the former and on exact statevector calculations for the latter. The classical and quantum approaches have common systematics in that they rely on the same classical sampling procedure, where the choice of sampling basis has an impact on convergence. AVQMETTS additionally depends on a choice of the operator pool used to construct the variational ansatz state, which impacts both the accuracy and quantum resource cost of the simulation. In carefully examining these systematics, our study lays groundwork for future progress in both classical and quantum simulation of gauge theories. The remainder of the paper is organized as follows. In Sec. <ref>, we define the ℤ_2 gauge theory model and review how to perform grand-canonical-ensemble calculations within METTS. In Sec. 
<ref>, we benchmark classical METTS calculations of the internal energy density ϵ and fermion density n, focusing in particular on how to choose the optimal sampling basis. We then present METTS calculations of the ϵ-n equation of state, as well as two measures of confinement: Friedel oscillations of the fermion density <cit.> and string-antistring distribution functions <cit.>. In Sec. <ref>, we review the AVQMETTS method before benchmarking its performance with respect to sampling basis and operator pool. We move on to show results for the equation of state and Friedel oscillations, which we argue are a robust probe of confinement for the relatively small system sizes accessible on today's quantum computers. We end in Sec. <ref> with an outlook for future work. § ℤ_2 LATTICE GAUGE THEORY AND METTS We consider a 1+1-dimensional model of spinless and massless fermions coupled to a ℤ_2 gauge field: H=1/2∑_i=1^L-1(c^†_iσ^z_i,i+1c_i+1+H.c.)+h∑_i=0^Lσ^x_i,i+1 , where the first term is the kinetic term, and the second represents the confining (electric) field with strength h. The fermions are represented by creation/annihilation operators c^†_i/c_i on site i=1,…,L and the ℤ_2 gauge field is represented by Pauli operators σ^z_i,i+1 and σ^x_i,i+1 on the links (i,i+1). Note that the 1D lattice in Eq. (<ref>) contains L fermion sites and L+1 gauge links; the states of the gauge links (0,1) and (L,L+1) are constants of the motion because no fermion can hop across those links (we assume open boundary conditions). The Hamiltonian (<ref>) commutes with the Gauss-law operators G_i=σ_i-1,i^x(-1)^n_iσ_i,i+1^x (i=0,…,L) with n_i=c_i^†c_i the fermion number density. Each selection of ⟨ G_i⟩ =± 1 (known as a background charge configuration) corresponds to an independent symmetry sector of the Hamiltonian. In this paper we choose the uniform background charge configuration ⟨ G_i⟩=1. The model (<ref>) can be recast as a pure spin-1/2 model in terms of gauge-invariant local operators Z_i=σ_i,i+1^x (i=0,…,L) and X_i=(c^†_i-c_i)σ_i,i+1^z(c^†_i+1+c_i+1) (i=1,…,L-1) as follows <cit.>: H=1/4∑_i=1^L-1(X_i-Z_i-1X_iZ_i+1)+h∑_i=0^LZ_i. Note that the operators Z_0 and Z_L are not dynamical (i.e. the operators X_0 and X_L are undefined) and therefore serve only to label the states of the frozen gauge links (0,1) and (L,L+1). To study finite-temperature and -density properties of the model (<ref>), we compute the thermal expectation value of a generic observable 𝒪 in the grand canonical ensemble at inverse temperature β=1/T and chemical potential μ: ⟨𝒪⟩ _μ,β=1/𝒵Tr(𝒪e^-β(H-μ N)) , where N is the total fermion number operator and 𝒵 = Tr(e^-β(H-μ N)) is the grand canonical partition function. In the spin model, the fermion number operator is given by N =∑_i=1^Ln_i , n_i =I-Z_i-1Z_i/2 , which corresponds to the total number of Ising domain walls. To evaluate Eq. (<ref>), we adopt a statistical sampling procedure defined by the METTS algorithm. A METTS calculation can be viewed as a Markovian random walk consisting of multiple “thermal steps." The first thermal step starts with a random classical product state (CPS) |i⟩ and performs ITE up to an imaginary time τ=β/2 to obtain a METTS, which written as |ϕ_i(β)⟩ =P_i,β^-1/2e^- (β/2) (H-μ N)|i⟩ , where P_i,β=⟨ i|e^-β (H-μ N)|i⟩ is a normalization factor. We henceforth drop the explicit dependence on β to simplify the notation. A sample of the thermal average of an observable 𝒪 is then given by ⟨𝒪⟩ _i=⟨ϕ_i|𝒪|ϕ_i⟩. 
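As a concrete illustration of this procedure, the following exact-statevector sketch (a toy example, not the production MPS or AVQITE implementation) runs a short METTS random walk for a small chain in the spin representation introduced above, with the boundary spins Z_0 and Z_L fixed to +1 and with collapse performed in the z-basis only; the collapse step anticipates the measurement rule described in the next paragraph, and all parameter values are illustrative.

import numpy as np
from functools import reduce
from scipy.linalg import expm

rng = np.random.default_rng(0)
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def embed(P, site, n):
    # single-site operator P acting on `site` (0-indexed) of an n-spin register
    return reduce(np.kron, [P if j == site else I2 for j in range(n)])

def build_H_N(L, h, z0=1.0, zL=1.0):
    # Dynamical spins are i = 1..L-1; the boundary values Z_0 = z0, Z_L = zL are fixed.
    n = L - 1
    dim = 2 ** n
    Zs = [embed(Z, j, n) for j in range(n)]          # Z_1 .. Z_{L-1}
    Xs = [embed(X, j, n) for j in range(n)]          # X_1 .. X_{L-1}
    H = h * (z0 + zL) * np.eye(dim)                  # frozen boundary links
    for j in range(n):                               # j = 0 corresponds to site i = 1
        Zprev = z0 * np.eye(dim) if j == 0 else Zs[j - 1]
        Znext = zL * np.eye(dim) if j == n - 1 else Zs[j + 1]
        H += 0.25 * (Xs[j] - Zprev @ Xs[j] @ Znext) + h * Zs[j]
    chain = [z0 * np.eye(dim)] + Zs + [zL * np.eye(dim)]
    N = sum(0.5 * (np.eye(dim) - chain[i - 1] @ chain[i]) for i in range(1, L + 1))
    return H, N

def metts_average(L=6, h=0.1, mu=-0.4, beta=10.0, steps=200, burn_in=10):
    H, N = build_H_N(L, h)
    K = expm(-0.5 * beta * (H - mu * N))             # imaginary-time propagator
    dim = H.shape[0]
    idx = rng.integers(dim)                          # random initial z-basis CPS |i>
    energies = []
    for _ in range(steps):
        v = np.zeros(dim)
        v[idx] = 1.0
        phi = K @ v
        phi /= np.linalg.norm(phi)                   # METTS |phi_i>, norm equals sqrt(P_i)
        energies.append(phi @ H @ phi)               # sample <H>_i
        p = np.abs(phi) ** 2                         # z-basis collapse probabilities
        idx = rng.choice(dim, p=p / p.sum())         # collapse to the next CPS |i'>
    return np.mean(energies[burn_in:])

print("METTS estimate of <H>:", metts_average())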
The next thermal step is triggered by an all-qubit measurement collapse of the METTS |ϕ_i⟩ to a CPS |i'⟩ in a specific basis, which occurs with probability |⟨ i'|ϕ_i⟩|^2. Thus, one thermal step amounts to a transition between CPSs |i⟩ and |i'⟩ in a Markovian random walk. The stationary distribution of this process is simply P_i/𝒵, owing to the detailed balance condition |⟨ i'|ϕ_i⟩|^2/|⟨ i|ϕ_i'⟩|^2 = P_i'/P_i. For an ensemble obtained from S thermal steps, the thermal expectation value of an observable 𝒪 can then be estimated as: ⟨𝒪⟩ _METTS=1/S∑_i=1^S⟨𝒪⟩ _i . In practice, the METTS sampling is performed in parallel with S_w independent random walks, each of which generates S_0 METTSs. This amounts to an ensemble of size S=S_wS_0. To remove memory of the initial conditions, the first few thermal steps (typically the first ten) are excluded from the statistical analysis. Furthermore, the choice of measurement basis for METTS collapse is flexible, and previous investigations have demonstrated that alternating between x- and z-basis measurements (abbreviated as xz-basis) between adjacent thermal steps drastically reduces autocorrelations of the random walk <cit.>. Here we perform a more systematic study of the choice of collapse basis and adopt the best strategy for our specific applications. ITE is the key subroutine in the METTS sampling approach. For gapped systems in 1D, the standard classical algorithm uses tensor networks (specifically MPSs) owing to the fact that ground states of such systems have finite bipartite entanglement and therefore typically feature a system-size-independent bond dimension χ. However, the scaling for tensor network approaches becomes less favorable for critical systems or above one spatial dimension, motivating quantum computing approaches as discussed in Sec. <ref>. We therefore additionally benchmark the AVQITE algorithm <cit.> for METTS preparation, presented previously as the AVQMETTS approach <cit.>. AVQITE features automatically generated compact circuits for state propagation, amenable to near-term quantum computing. The MPS calculations discussed in this work were performed using the iTensor package <cit.>, while the AVQMETTS calculations were performed using the AVQITE code <cit.>. § CLASSICAL METTS CALCULATIONS Before discussing the AVQMETTS simulations, we perform finite-temperature classical METTS calculations for the model (<ref>) leveraging the efficiency of MPSs. The purpose is two-fold. First, as noted in Sec. <ref>, the choice of METTS sampling basis can have a strong effect on sampling efficiency—for example the xz-basis collapse can outperform z-basis collapse as observed in Refs. <cit.>. Moreover, in Ref. <cit.> it was observed in a different context that sampling infinite-temperature expectation values in the y-basis can be advantageous for models like Eq. (<ref>) whose Hamiltonians contain only Pauli-X and Z terms. Thus we aim to study more systematically the comparative performance of collapse in various bases, including the y and x-basis, as well as alternating yz basis and xz collapse bases along the lines discussed in Sec. <ref>. We will use the optimal basis choice for subsequent classical METTS calculations and contrast the strategy for AVQMETTS, where additional constraints, such as circuit complexity measured by number of two-qubit gates, have to be taken into consideration. 
Second, we aim to have numerically converged calculations of the equation of state of the model (<ref>), utilizing the efficient Trotter approach for ITE in the MPS basis. Specifically, we evaluate the energy density ϵ =⟨ H⟩_μ,β /L and particle density n =⟨ N⟩_μ,β /L as functions of temperature and chemical potential. These calculations elucidate the physics of the 1D model and provide benchmark data for the AVQMETTS simulations presented in Sec. <ref>. §.§ Optimal measurement basis for METTS collapse Fig. <ref> visualizes the estimated energy density ϵ and particle density n for L=12 with β=10 and μ=-0.4 as a function of the number of thermal steps, highlighting the dependence of convergence on several specific choices of measurement basis for METTS collapse. These data are obtained using S_w = 100 parallel random walks. Figure <ref>(a) shows that ϵ obtained from METTS calculations with four different basis choices are generally quite close to the exact reference energy density, with relative error Δ_ϵ≲ 1% as shown in Fig. <ref>(c). For larger numbers of thermal steps, Δ_ϵ is consistently smaller for calculations using the x-, y- or yz-bases than that using the xz-basis. The fluctuations of Δ_ϵ with thermal steps are consistent with the standard error of ϵ. The y-basis calculation gives the smallest standard error (≳ 0.6× 10^-3), which aligns with the minimal fluctuations of Δ_ϵ. This is consistent with the findings of Ref. <cit.> which observed that the y-basis displayed minimal shot-to-shot fluctuations when sampling energy-density correlation functions at infinite temperature. Similar observations apply to the calculations of particle density n, with slightly larger errors occuring for the xz-basis results, as shown in Fig. <ref>(b,d). In the following classical METTS simulations we choose the alternating yz-basis for state collapse, since calculations using this basis give consistently lower errors at large numbers of thermal steps. §.§ Equation of state To study the equation of state for the model (<ref>)—i.e., the functional relationship between energy density ϵ and particle density n—we perform METTS calculations of a L=60 model with chemical potential varying from μ=-1.0 to μ=1.0 with a step 0.025. The results for the free-fermion limit h=0 are plotted in Fig. <ref>(a) at three inverse temperatures β=5, 10 and 20. Without the confining field, the energy-particle density curve is symmetric around half-filling, indicative of particle-hole symmetry. Generally, the thermal energy density increases with temperature, as the number of excited states within the accessible energy window ∼ 1/β increases. As a result, ϵ grows the most at half-filling, and becomes trivially temperature-independent at zero or full filling, where the relevant subspace dimension reduces to 2 and the ground state is two-fold degenerate. For reference, the analytical calculations for the same free-fermion model are also shown in Fig. <ref>(a) with dotted lines, which agree perfectly with the METTS results and confirm the convergence of METTS calculations. In Fig. <ref>(b), we show results for the equation of state with h=0.1. The finite confining field clearly breaks the particle-hole symmetry. Compared with the free-fermion results in Fig. <ref>(a), the energy density becomes significantly lower for n < 0.5 and reaches maximal energy reduction at n = 0. In contrast, the variation of ϵ for n > 0.5 is much smaller, and remains the same at n = 1. 
This can be understood by considering the ℤ_2 symmetry breaking by the confining field, which promotes the configurations with large negative magnetizations (or equivalently, those with long anti-strings as defined in Sec. <ref>) and further lowers the energy. Since the total magnetization magnitude (length of anti-string) in each particle number sector is bounded by L-2⌊ N/2⌋, the impact of the confining field on the thermal statistics grows from large filling to small filling of the model, consistent with the curves shown in Fig. <ref>(b). Note that the standard errors of ϵ and n for these METTS calculations are below 10^-3, i.e. they are are smaller than the line width of the curves. §.§ Probes of confinement: Friedel oscillations and string length distributions
http://arxiv.org/abs/2407.12246v1
20240717012458
Dumb RIS-Assisted Random Beamforming for Energy Efficiency Enhancement of Wireless Communications
[ "Yixin Zhang", "Wenchi Cheng", "Wei Zhang" ]
cs.IT
[ "cs.IT", "math.IT" ]
Dumb RIS-Assisted Random Beamforming for Energy Efficiency Enhancement of Wireless Communications Yixin Zhang^†, Wenchi Cheng^†, and Wei Zhang^  ^†State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China ^School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, Australia E-mail: {yixinzhang@stu.xidian.edu.cn, wccheng@xidian.edu.cn, w.zhang@unsw.edu.au} ================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Energy efficiency (EE) is one of the most important metrics for the beyond fifth generation (B5G) and the future sixth generation (6G) wireless networks. Reconfigurable intelligent surface (RIS) has been widely focused on EE enhancement for wireless networks because it is low power consuming, programmable, and easy to be deployed. However, RIS is generally passive and thus difficult to obtain corresponding full channel state information (CSI) via RIS, which severely impacts the EE enhancement of RIS-assisted wireless communications. To solve this problem, in this paper we propose the new single-active-antenna combined RIS transmitter structure, which can replace traditional multiple antennas to reduce hardware cost and power consumption. Based on the single-active-antenna combined RIS structure, we develop the Dumb RIS-Assisted Random Beamforming (Darb)-based Joint RIS-Elements and Transmit-power optimizAtion (Jeta) scheme, where dumb RIS randomly changes its phase shift according to isotropic distribution only depending on the CSI feedback from users to RIS-assisted transmitter. Then, we jointly design the number of RIS elements and optimize the transmit power to maximize the EE of RIS-assisted wireless communications. Simulation results show that compared with the traditional multi-antenna system, our developed Darb-based-Jeta scheme can significantly increase the EE without the full CSI. Reconfigurable intelligent surface, energy efficiency enhancement, random beamforming. § INTRODUCTION Determined by the demand for higher rates, network capacity is required to continue to increase by 1000 times in the future wireless networks <cit.>. In the beyond fifth generation (B5G) and the sixth generation (6G) communication systems, various promising technologies such as ultra-massive multiple-input multiple-output (UM-MIMO) and terahertz (THz) communications are expected to achieve a high access rate and network capacity. However, they require a large number of radio frequency (RF) chains and result in high hardware cost and complexity <cit.>. As a result, energy consumption remains a difficult problem and cost-effective solutions are still very important for future wireless networks. Recently, reconfigurable intelligent surface (RIS) has been proposed as a potential technology to provide a new possibility for energy-efficient wireless networks. RIS is a plane composed of a large number of low-cost passive reflecting elements. Each element can independently reflect the incident signal by controlling its amplitude and phase, thereby cooperatively realizing reflecting beamforming. 
Since the RIS is passive, it does not depend on RF chains and its energy consumption is very low, enabling low-cost, low power consuming and ultra-dense deployment <cit.>. Relevant work has been carried out to study the energy efficiency (EE) optimization problem in the RIS-assisted communication systems <cit.>. A practical RIS power consumption model was developed with jointly optimizing the RIS phase shift and the downlink transmit power to maximize the EE under quality of service (QoS) constraints in the RIS-aided downlink multi-user multiple-input single-output (MISO) system <cit.>. Taking the limited backhaul capacity constraints into account, the authors proposed a joint design of transmit beamforming and reflecting coefficients at RISs to maximize the EE of cell-free networks <cit.>. The authors studied the tradeoff between EE and spectrum efficiency (SE) in RIS-aided multi-user MIMO uplink systems <cit.>. Even though various investigations on the benefits of using RIS for green communications have been carried out, they are almost to design passive reflecting beamforming under the assumption that the channel state information (CSI) is perfectly known at the RIS side. However, since the RIS is passive and with relatively low signal processing capability, the perfect CSI is very challenging to be obtained at the RIS side. In addition, because a RIS controls a significant number of elements, the number of channels to be estimated is large <cit.>. All these factors have brought the challenge to the channel estimation in the RIS-assisted system, which severely impacts the EE enhancement of RIS-assisted wireless communications. In order to solve this problem, in this paper we place a RIS near the RF signal generator to form the new single-active-antenna combined RIS transmitter structure, which can replace multiple antennas to reduce hardware cost and power consumption. Based on the single-active-antenna combined RIS structure, we develop the dumb RIS-assisted random beamforming (Darb) scheme with a threshold feedback strategy, where dumb RIS randomly changes its phase shift according to isotropic distribution. As a result, the phase shift of the RIS is randomly generated by the RIS controller so that optimal phase shift is not required to design. The RIS-assisted system only depends on the overall channel-related feedback from users to the RIS-assisted transmitter. Thus, the problem of obtaining full CSI via RIS is avoided. Then, we jointly design the number of RIS elements and the transmit power to optimize the EE of RIS-assisted wireless communication system. The optimization problem is a mixed-integer non-convex problem that is difficult to solve. As such, we propose an alternating optimization algorithm for solving it, where the number of RIS elements and the transmit power are alternately optimized in each iteration until the result converges. Simulation results show that our developed Darb-based joint RIS-elements and transmit-power optimization (Jeta) scheme can significantly increase the EE without the need to know the full CSI. The rest of this paper is organized as follows. Section <ref> introduces the new RIS-assisted transmitter structure and the signal model. Section <ref> presents the power consumption and the energy efficiency model. Section <ref> presents the Darb-based-Jeta scheme. Section <ref> provides the numerical results. Finally, we conclude this paper in Section <ref>. 
§ SYSTEM MODEL 0.27in §.§ RIS-Assisted Transmitter Structure Figure <ref> shows the new single-active-antenna RIS-assisted transmitter structure-based wireless communications system, where a RIS is placed very close to an antenna. The RF signal generator of the antenna transmits an unmodulated carrier signal to the RIS. Through the adjustment of phases on the RIS, user information can be transmitted. The distance between the RIS and the RF signal generator is very small so that the transmission is not affected by fading <cit.>. The RIS consists of L rows and C columns so that N=LC. The RIS element in the l-th row and the c-th column is denoted by E_lc and controlled by a field programmable gate array (FPGA) board to change its phase shift. In this paper, to provide multiple users with multiple data streams, we divide the RIS into different regions by row to realize the function of multiple antennas. That is, the C elements in the l-th row of the RIS are assigned to a user k, k=1,2,⋯, K, i.e., the phases of RIS elements E_l1,E_l2,⋯,E_lC are used to control the transmit signal for user k. Compared with traditional BS with multiple transmit antennas, the RIS-assisted transmitter only needs one active antenna. Therefore, the RIS-assisted transmitter can reduce hardware cost and power consumption to achieve high energy efficiency. §.§ Signal Model We consider a RIS-assisted multi-user MIMO broadcast channel, where a single-active-antenna base station (BS) and a RIS with N passive reflecting elements are used together to transmit common information to K single-antenna users randomly located on the ground. In this paper, we assume the downlink communication undergoes block fading and the channel stays constant during a time-slot t of length T corresponding to the coherence interval. In order to serve multiple users in the same time-slot, a precoding matrix W∈ℂ^L× C is applied at the RIS-assisted transmitter. The RIS element of each row can be regarded as a beam, which performs the same function as an antenna. W=[ w_1, w_2,⋯, w_L]^T, where w_l∈ℂ^1× C, l=1,2,⋯,L, denotes the phase shift of RIS elements on the l-th row. Since the signal reflected from the RIS has experienced high path loss, we only consider the first reflected signal and ignore the power of the signal reflected twice or more from the RIS <cit.>. Then, the signal received at user k, with k=1,2,⋯,K, is written as y_k =∑_l=1^L√(p_l) h_k^T w_l s_l+n_k =√(p_i) h_k^T w_i s_i+∑_l=1,l≠ i^L√(p_l) h_k^T w_l s_l+n_k, where p_l is the transmit power for beam l, h_k ∈ℂ^C× 1 is the complex Gaussian channel vector between the RIS-assisted transmitter and the k-th user, s_l is the signal corresponding to beam l and n_k∼𝒞𝒩(0,σ^2) is the additive white Gaussian noise (AWGN) at the k-th user. All users are assumed to experience independent and identically distributed (i.i.d.) Rayleigh fading h_k∼𝒞𝒩(0,β I_C), where β denotes path loss. We assume that the total transmit power P_T is equally allocated to each beam, i.e., p_l=P_T/L, l=1,2,⋯,L. We assume that the k-th user knows h_k^T w_l for l=1,2,⋯,L. There is an error-free and delay-free feedback channel that can pass part of the CSI to the RIS-transmitter. Hence, the corresponding received signal-to-interference and noise ratio (SINR) for the k-th user with the i-th beam is given as γ_k,i=| h_k^T w_i|^2/∑_l=1,l≠ i^L| h_k^T w_l|^2+Lσ^2/P_T. A max-SINR rule is used to coordinate the scheduling process. 
That is, each user feedbacks its highest SINR among all beams and the corresponding beam index to the RIS-assisted transmitter. Then, the RIS-assisted transmitter selects the user with the highest SINR for each beam to maximize the sum rate. Therefore, the sum rate of the RIS-assisted system, denoted by R, can be expressed as follows: R=L𝔼{log_2(1+max_1≤ k ≤ Kγ_k,i) }. § PROBLEM FORMULATION In this section, we analyze the total power consumption in both RIS-assisted and multi-antenna systems and compare their energy efficiencies. For the RIS-assisted system, we formulate the energy efficiency optimization problem with the joint design of the number of RIS elements and the transmit power. §.§ Total Power Consumption and Energy Efficiency Model The power consumption of RIS, denoted by P_RIS, can be given as follows: P_RIS=P_FPGA+NP_PIN, where P_FPGA and P_PIN are the power consumption for the FPGA board and the PIN diode on RIS, respectively. Since the RIS-assisted transmitter only has one active-antenna, the circuit power consumption of the RIS-assisted system can be given by P_CR=P_A+LP_U, where P_A is the circuit power consumed by the single-active-antenna, consisting of digital-to-analog converter (DAC), mixer and filters of transmitter <cit.>. While P_U is the circuit power consumed at the user side, which contains low noise amplifier, mixer, filters of the receiver and analog-to-digital converter (ADC). Thus, the total power consumption of the RIS-assisted system can be derived as follows: P_RA=P_T/η_T+P_RIS+P_CR+P_SR+∑_k=1^K P_U,k, where η_T is the power conversion efficiency of transmit power amplifier, P_SR and P_U,k are the hardware static power of the RIS-assisted transmitter and the k-th user equipment, respectively. On the other hand, the total power consumption of the multi-antenna transmitter system, denoted by P_MA, is given by P_MA=P_T/η_T+P_CA+P_SA+∑_k=1^K P_U,k, where P_CA=MP_A+MP_U is the circuit power consumption of the multi-antenna system, M is the number of antennas, and P_SA is the hardware static power of the multi-antenna transmitter. To fairly compare with RIS-assisted transmitter, we set M=L. Then, we have the energy efficiencies of RIS-assisted and multi-antenna system, denoted by EE_RA and EE_MA, respectively, as follows: { EE_RIS=R/P_RA, for RIS-assisted system; EE_MA=R/P_MA, for multi-antenna system.. §.§ Energy Efficiency Optimization Problem Formulation In order to increase the energy efficiency of the RIS-assisted wireless communication system, we propose the energy efficiency optimization problem by jointly optimizing the number of RIS elements N and the transmit power P_T. We set the maximum number of RIS elements as N_max and the maximum transmit power as P_max. The joint optimization problem, denoted by P1, is formulated as follows: P1: max_(N, P_T) L𝔼{log_2(1+max_1≤ k ≤ Kγ_k,i) }/P_T/η_T+P_RIS+P_CR+P_SR+∑_k=1^K P_U,k s.t. 1). N∈{1,2,⋯,N_max}; 2). 0<P_T≤ P_max. § DARB-BASED ENERGY EFFICIENCY OPTIMIZATION The full CSI of the RIS-assisted system is difficult to be obtained, which impacts the energy efficiency enhancement. In order to achieve high energy efficiency without full CSI, we propose the Darb-based-Jeta scheme. §.§ Random Beamforming Using Dumb RIS In order to optimize the energy efficiency of the RIS-assisted system, the phase shift needs to be optimally changed. 
However, the optimal phase shift control of RIS requires perfect CSI of all links between the RIS-assisted transmitter and the users, which is very difficult to be obtained because RIS is passive. Therefore, it is necessary to perform channel estimation and corresponding feedback mechanism at the RIS-assisted transmitter and the user side. Motivated by opportunistic beamforming <cit.>, we propose the dumb RIS-assisted random beamforming (Darb) scheme that constructs random beams without full CSI at the RIS-assisted transmitter side. We use the dumb RIS, where the word dumb means not to adjust any phase shift on the RIS but to let it change randomly. That is to say, the phase of RIS does not require any control, so there is no need to optimize it and no need to know the full CSI. According to the orthogonal random beamforming strategy for multi-user transmission <cit.>, we generate a random unitary matrix Φ = [ϕ_1, ϕ_2,⋯, ϕ_L]∈ℂ^L× L on RIS in each time-slot to transmit L signals simultaneously, where ϕ_l = [e^jθ_1l, e^jθ_2l,⋯,e^jθ_Ll]^T ∈ℂ^L× 1, l = 1,⋯, L, is orthonormal vector generated from an isotropic distribution randomly <cit.>. Then, we set the number of rows and columns in RIS to be the same value L, i.e., N=L^2. Hence, the corresponding received SINR for the k-th user with the i-th beam can be rewritten as follows: γ_k,i=| h_k^Tϕ_i|^2/∑_l=1,l≠ i^L| h_k^Tϕ_l|^2+L/ρ=z/y+Lσ^2/P_T. Since the channels of all users are assumed to be i.i.d. Rayleigh channels and Φ is a unitary matrix, the two variables z and y are independent chi-square distributed with z∼𝒳^2(2), y∼𝒳^2(2L-2), respectively. Then, the cumulative density function of the SINR, ∀ k,i, can be expressed as <cit.> F(γ) = 1-e^- L σ^2γ/P_T/(1 + γ)^L-1, γ≥ 0. According to the max-SINR rule, the selected user k_i^* for beam i is given by k_i^* = max_1≤ k ≤ K γ_k,i, i = 1,2,⋯,L. Then, the probability density function of the selected user k_i^*'s SINR is given by <cit.> f_k^*(γ) = Ke^- Lσ^2γ/P_T/(1+γ)^L [Lσ^2/P_T(1 + γ)+ L-1 ] ×[1-e^- Lσ^2γ/P_T/(1 + γ)^L-1]^K-1. The RIS-assisted transmitter forms L groups according to the beam index and the user who has the highest SINR in each group is chosen to be transmitted. Finally, the sum rate of the Darb scheme, denoted by R_Darb, can be expressed as follows: R_Darb = L∫_0^∞log_2(1+γ)f_k^*(γ) dγ. As the number of users grows to infinity, the sum rate of the Darb scheme can be reached as follows: R_Darb=^K→∞Llog(βlog K)+LlogP_T/Lσ^2. This means that when the number of users is large enough, the sum rate of the Darb scheme is the same as the sum rate be achieved by dirty paper coding <cit.>. The required feedbacks are only the highest SINRs of each user and the corresponding beam index, which means RIS-assisted transmitter does not need full CSI. The passive channel estimation via RIS is avoided, which reduces the overhead of CSI acquisition and simplifies the RIS design. When the number of users K in the system is large, the feedback overhead that linearly increases with K is also large, which is given by FO=KQ_the highest SINR+Klog_2L_corresponding beam index, where Q is the quantization bits of the highest SINR. In order to reduce large feedback overhead, we set a threshold α on each row. Each user calculates its SINR on each beam and obtains the highest value from it. Then, it compares the highest SINR with the given threshold α. If the highest SINR of the user is larger than the given threshold α, the user feedbacks the SINR and the corresponding beam index. 
Otherwise, the user does not feed back any information. Thus, the sum rate of the Darb scheme with the threshold feedback strategy can be expressed as follows: R_TFS=[1-F^K(α)]R_Darb. When the threshold α is chosen appropriately, we have lim_K→∞R_TFS/R_Darb=1. Thus, the capacity loss caused by the threshold feedback strategy is small. The feedback overhead of the Darb scheme with the threshold feedback strategy is given by FO_TFS=[1-F(α) ]FO. §.§ Darb-based Joint RIS-elements and Transmit-power Optimization Scheme Based on the Darb scheme, we can rewrite the energy efficiency of the RIS-assisted system, denoted by EE_Darb, as follows: EE_Darb=[Llog(βlog K)+Llog(P_T/(Lσ^2))]/[P_T/η_T+L^2P_PIN+LP_U+P_1], where P_1=P_FPGA+P_A+P_SR+∑_k=1^K P_U,k. By setting the maximum number of RIS rows as L_max, we can convert problem P1 into problem P2 as follows: P2: max_(L, P_T) EE_Darb s.t. 1). L∈{1,2,⋯,L_max}; 2). 0<P_T≤ P_max. However, note that problem P2 is a mixed-integer non-convex optimization problem, since Eq. (<ref>) involves an integer constraint while the two optimization variables L and P_T are coupled with each other. To decouple L and P_T, we apply the alternating optimization (AO) algorithm to divide the problem into two sub-problems and optimize them separately. Specifically, one sub-problem optimizes L with fixed P_T and the other optimizes P_T with fixed L. First, we fix the transmit power P_T and optimize the number of RIS elements L^2. To make problem P2 easier to solve, we relax the discrete variable in Eq. (<ref>) into a continuous variable. Then, the sub-problem P3 is formed as follows: P3: max_(L) EE_Darb s.t. 1≤ L≤ L_max. Define f(l)=[llog(βlog K)+llog(P_T/(lσ^2))]/[l^2P_PIN+lP_U+a], where a=P_T/η_T+P_FPGA+P_A+P_SR+∑_k=1^K P_U,k is a constant; for all l_1,l_2 ∈ [1, L_max] with ∇ f(l_1)(l_2-l_1)≥ 0, we have f(l_2) ≥ f(l_1). Therefore, Eq. (<ref>) is a strictly pseudo-concave function. A strictly pseudo-concave function is either monotonically increasing or has a unique stationary point, which is also its global maximum point <cit.>. Thus, we can optimize L by finding this unique stationary point. Then, we fix the number of RIS elements L^2 and optimize the transmit power P_T. The sub-problem P4 is formed as follows: P4: max_(P_T) EE_Darb s.t. 0 < P_T≤ P_max. Define f(p)=[Llog(p/(Lσ^2))+b]/[p/η_T+c], where b=Llog(βlog K) and c=P_CA+P_SA+∑_k=1^K P_U,k are constants. The numerator of f(p) is strictly concave and the denominator is affine. The ratio of a strictly concave function to an affine function is also a strictly pseudo-concave function, whose global maximum point is easy to find. §.§ Darb-based-Jeta Scheme and Its Convergence Based on the results of the two sub-problems, alternating optimization is adopted to solve problem P2. Specifically, in each iteration, while keeping the other variable fixed, the number of RIS elements L^2 and the transmit power P_T are alternately optimized by solving sub-problems P3 and P4. Furthermore, the solution obtained in each iteration is used as the input of the next iteration. The specific algorithm is given in Algorithm 1. Next, we prove the convergence of Algorithm 1 as follows: EE_Darb(L^(t),P_T^(t)) (a)≤ EE_Darb(L^(t+1),P_T^(t)) (b)≤ EE_Darb(L^(t+1),P_T^(t+1)), where (a) holds due to step 4 of Algorithm 1 and (b) holds due to step 5. The objective value is thus non-decreasing over the iterations, which guarantees the convergence of Algorithm 1. § PERFORMANCE EVALUATIONS In this section, we evaluate the performance of our proposed Darb-based-Jeta scheme for RIS-assisted wireless communications. 
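Before presenting the numerical results, the following Monte Carlo sketch illustrates how the Darb sum rate and the resulting energy efficiency can be estimated in simulation: the random beams are drawn as a Haar-random unitary, each beam is assigned to its max-SINR user, and the empirical sum rate is divided by the power-consumption model. All numerical values (path loss, noise power, circuit powers) are illustrative assumptions rather than the simulation settings used below.

import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(L):
    # Random unitary via QR of a complex Gaussian matrix (isotropically distributed beams)
    A = (rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))) / np.sqrt(2)
    Q, R = np.linalg.qr(A)
    d = np.diag(R)
    return Q * (d / np.abs(d))                      # fix the column phases

def darb_sum_rate(L, K, P_T, sigma2, beta, trials=2000):
    rates = []
    for _ in range(trials):
        Phi = haar_unitary(L)                       # columns phi_1 .. phi_L
        Hk = np.sqrt(beta / 2) * (rng.standard_normal((K, L))
                                  + 1j * rng.standard_normal((K, L)))   # rows h_k^T
        G = np.abs(Hk @ Phi) ** 2                   # |h_k^T phi_i|^2
        interference = G.sum(axis=1, keepdims=True) - G
        sinr = G / (interference + L * sigma2 / P_T)
        rates.append(np.log2(1.0 + sinr.max(axis=0)).sum())  # max-SINR user per beam
    return np.mean(rates)

# Illustrative (hypothetical) parameters, not the settings of the evaluation section
L, K, P_T, sigma2, beta = 8, 50, 1.0, 1e-3, 1.0
eta_T, P_PIN, P_U, P_1 = 0.8, 1e-3, 0.05, 1.0       # assumed circuit-power values in watts
R = darb_sum_rate(L, K, P_T, sigma2, beta)
EE = R / (P_T / eta_T + L**2 * P_PIN + L * P_U + P_1)
print(f"sum rate = {R:.2f} bit/s/Hz, EE = {EE:.2f} bit/s/Hz per watt")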
We place the RIS-assisted transmitter at the origin and users are randomly distributed in a square area with a side length of 60 m. We set the bandwidth B = 180 KHz, the noise variance σ^2 = -80 dBm, the path loss from RIS-assisted transmitter to user β = 10^-3.53/d^3.76, the quantization bits of the user's highest SINR Q=4 and the algorithmic convergence parameter ϵ = 0.05, respectively. The parameters of power consumption are shown in Table <ref> <cit.>. Figure <ref> depicts the energy efficiency versus the number of users K with different numbers of RIS elements and antennas. The energy efficiency increases as K increases. This is because as the number of users K in the system increases, the probability that each beam has a higher SINR among different users also increases which brings a multi-user diversity gain. For a fixed value of K, compared with the multi-antenna system without the help of RIS, the energy efficiency is larger by using the Darb-based-Jeta scheme. This means multiple RIS elements at the RIS-assisted transmitter can achieve the same function as multiple antennas and provide diversity gain for the system. Figure <ref> plots the energy efficiency versus the number of users K in RIS-assisted and multi-antenna wireless communication system. Using our proposed Darb-based-Jeta scheme, the energy efficiency of the RIS-assisted system increases by jointly optimizing the number of RIS elements and the transmit power. As a result, the maximum energy efficiency is achieved when L=18 and P_T=1.14dBW. In addition, for a fixed value of K, the energy efficiency of the RIS-assisted system is larger than the multi-antenna system when we set L=M. This is due to the reason that RIS-assisted transmitter only needs single-active-antenna, which reduces the number of DACs, mixers and filters at transmitter. Therefore, the energy consumption of the system is well reduced and the energy efficiency correspondingly increases. Figure <ref> evaluates the average rate and feedback overhead versus the number of users K under no threshold set and a threshold set with α = 0.1. It can be seen from Fig. <ref> (a) that when the numbers of users and RIS elements are both small, there is a gap between the average rate of our proposed scheme with the threshold feedback strategy (TFS) and without the TFS. However, when the number of users exceeds 20, the average rate of the above two is almost the same, which means TFS does not affect the average rate. It can be seen from Fig. <ref> (b) that feedback overhead is significantly reduced with the help of TFS, which means as long as the number of users in the system is large, the strategy can reduce feedback overhead without impacting the average rate. § CONCLUSIONS We solved the problems on how to maximize the energy efficiency of the RIS-assisted wireless communication system where the full CSI via RIS is very challenging to be obtained. First, we proposed a RIS-assisted transmitter structure with single-active-antenna. On this basis, we developed the Darb-based-Jeta scheme with a threshold feedback strategy, where RIS only needs to perform random phase shift without knowing the full CSI. We jointly optimized the number of RIS elements and the transmit power to enhance the energy efficiency of RIS-assisted wireless communications. Compared with the traditional multi-antenna system, our proposed scheme can significantly increase energy efficiency without full CSI. IEEEtran
http://arxiv.org/abs/2407.13392v1
20240718110049
Lightweight Uncertainty Quantification with Simplex Semantic Segmentation for Terrain Traversability
[ "Judith Dijk", "Gertjan Burghouts", "Kapil D. Katyal", "Bryanna Y. Yeh", "Craig T. Knuth", "Ella Fokkinga", "Tejaswi Kasarla", "Pascal Mettes" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT For navigation of robots, image segmentation is an important component in determining a terrain's traversability. For safe and efficient navigation, it is key to assess the uncertainty of the predicted segments. Current uncertainty estimation methods are limited to a specific choice of model architecture, are costly in terms of training time, require large memory for inference (ensembles), or involve complex model architectures (energy-based, hyperbolic, masking). In this paper, we propose a simple, light-weight module that can be connected to any pretrained image segmentation model, regardless of its architecture, with marginal additional computation cost because it reuses the model's backbone. Our module is based on maximum separation of the segmentation classes by respective prototype vectors. This optimizes the probability that out-of-distribution segments are projected in between the prototype vectors. The uncertainty value in the classification label is obtained from the distance to the nearest prototype. We demonstrate the effectiveness of our module for terrain segmentation. § INTRODUCTION Safe and efficient robot off-road navigation highly depends on accurate and actionable information about the environment. Semantic segmentation is a common component in determining the terrain's traversability from images (e.g., <cit.>). This segmentation provides information about the current environment, such as types of surfaces (e.g., puddles vs. dirt) or obstacles (e.g., tall grass vs. tree), that could impact robot navigation. If the segmentation model can run in (almost) real time on the robot, it can be used for path planning or replanning. Not only are the labels themselves important, but so is an estimate of their uncertainty. Such an uncertainty estimate enables online reasoning in path planning, e.g., uncertain areas can be avoided or entered more carefully. Such an approach is shown, for example, by Cai et al. <cit.>, where traversability estimates obtained from the platform's traction parameters are used for risk estimation, and by Hakobyan et al. <cit.>, who proposed risk-aware motion planning and control using conditional value-at-risk (CVaR)-constrained optimization. Standard methods for semantic segmentation focus on label accuracy and not on the accuracy of the uncertainty estimate. Segmentation models that do provide uncertainty are not optimized for robotics and embedded scenarios <cit.>, where uncertainty quantification needs to be fast and compact, without impacting the segmentation accuracy. The approach presented in this paper consists of a simple, lightweight module for uncertainty estimation in image segmentation. This module, dubbed Simplex Semantic Segmentation, can be connected to any pretrained semantic segmentation model as it is architecture agnostic. Our approach is based on the prototype approach that was developed for classification, often on imbalanced datasets with rare classes <cit.>. In this paper, we apply this prototype approach to obtain the uncertainty for semantic segmentation, where each pixel of an input image needs to be labeled. 
This paper is organized as follows: Section <ref> presents our method for the prototype module and the uncertainty estimation. In Section <ref>, the proposed methods are evaluated. In Section <ref>, the findings are summarized and discussed. § PROPOSED APPROACH §.§ Rationale To estimate the uncertainty of image segmentation, we propose a simple, light-weight module that can be connected to any pretrained segmentation model. The module only needs a feature map and is independent of the underlying architecture. As the module is merely an extra model head, it adds only a marginal computation cost. Our module builds on the prototype approach, which is commonly used to improve classification on imbalanced datasets with rare classes <cit.>. In the standard prototype approach, each segmentation class is represented by its respective prototype vector. In our proposed method, we apply this prototype approach to obtain the uncertainty of the label assigned to each pixel of a given input image. During training, these prototypes are incorporated to maximize the distance between classes. In the inference phase, samples are then classified based on their distance to the prototypes. The prototype approach maximizes the probability that pixels of unknown image segments will be projected into the void space between the class prototypes. The uncertainty is inferred from the distance to the nearest prototype vector. Figure <ref> provides an overview of our approach and how the proposed module is connected to any segmentation model. §.§ Method The starting point of our approach is a given pretrained image segmentation model h: 𝒳 → 𝒴 which classifies image pixels x_n into N respective classes y_n using a set of classification parameters W_c. We rewrite h(·; W_c) = g(f(·; W_f); W_g), where f(·; W_f) is the model's backbone, which outputs a feature map, and g(·; W_g) is the pixel classification head that transforms the feature map into class predictions. W_f and W_g are the parameters associated with the functions f and g, respectively. Our objective is to predict the uncertainty of the labels by the module U = u(f(·; W_f); W_u), with W_u the uncertainty estimation parameters. Note that this uncertainty function U can be applied to any spatial feature map f, without loss of generality, which makes our approach applicable to any segmentation model. The uncertainty function U can be further decomposed using the prototype vectors P: U = m(l(·; W_u) · P), where m is a fixed mapping and l(·; W_u) a neural network that projects pixel features onto the respective class prototype to maximize the output value for the pixel's correct segmentation class <cit.>. The prototype vectors are maximally separated on the (N-1)-dimensional hypersphere. The derivation is recursive <cit.>: P_1 =[ 1 -1 ] ∈ ℝ^1×2, P_k =[ 1 -1/k1^T; 0 √(1-1/k^2) P_k-1 ] ∈ ℝ^k×(k+1), with 0 and 1 respectively the all-zero and all-one column vectors. The columns of P_k are k + 1 equidistant vectors on the unit sphere in ℝ^k. With N classes, the prototypes are obtained by constructing P_N-1, yielding N vectors in N - 1 dimensions. The rationale is that the further a pixel's projection is from its prototype vector, the more uncertain its classification. 
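For illustration, the recursion above can be implemented in a few lines of Python; the columns of the resulting matrix are unit vectors whose pairwise cosine similarity equals -1/(N-1), i.e., a maximally separated simplex of class prototypes.

import numpy as np

def simplex_prototypes(num_classes):
    # Builds the N maximally separated prototype vectors on the unit sphere in R^(N-1)
    # via the recursion P_1 = [1, -1], P_k = [[1, -(1/k) 1^T], [0, sqrt(1 - 1/k^2) P_{k-1}]].
    P = np.array([[1.0, -1.0]])                                   # P_1, shape (1, 2)
    for k in range(2, num_classes):
        top = np.concatenate(([1.0], -np.ones(k) / k))[None, :]
        bottom = np.concatenate((np.zeros((k - 1, 1)),
                                 np.sqrt(1.0 - 1.0 / k**2) * P), axis=1)
        P = np.concatenate((top, bottom), axis=0)                 # P_k, shape (k, k+1)
    return P                                                      # columns are the prototypes

P = simplex_prototypes(6)                 # six traversability classes
print(P.shape)                            # (5, 6): N = 6 unit vectors in N - 1 = 5 dimensions
print(np.round(P.T @ P, 3))               # off-diagonal cosines all equal -1/(N-1) = -0.2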
The uncertainty is inversely proportional to the maximum of a pixel's scores across the class prototypes: m(x) = 1 - max(σ(x)), where x is the vector of the pixel's prototype scores from l(·, W_u) · P, and σ is the softmax operator. Note that this uncertainty is not calibrated yet. To predict uncertainty, the module needs to learn the classes first. Therefore, the learning objective is to align the outputs of l(f(x_i), W_u) with the class prototypes P_y_i, with x_i and y_i from the training set 𝒮, using the cosine loss: ℒ = - ∑_i=1^M l(f(𝐱_i), W_u) · P_y_i / ( ‖ l(f(𝐱_i), W_u) ‖ · ‖ P_y_i ‖ ). After learning to project a pixel's feature onto the prototype vector of its class, at inference we can tell if a test pixel is projected far away from such a vector. The further away, the more uncertain the pixel's classification. We hypothesize that this works best if the prototype vectors are maximally apart. § EXPERIMENTS §.§ Base dataset Since we target off-road applications, we use a base dataset recorded in off-road environments with a diverse range of objects and terrain types. From the publicly available datasets <cit.>, we choose Rellis3D <cit.>, which is collected in an off-road environment and contains 6,235 annotated images. The classes in this dataset are Concrete, Asphalt, Gravel, Grass, Dirt, Sand, Rock, Rock Bed, Water, Bushes, Tall Vegetation, Trees, Poles, Logs, Void, Sky, Sign. The environment is complex and differs between images. To segment the images in terms of traversability, we adopt the six classes from <cit.>: Smooth, Rough, Bumpy, Forbidden, Obstacles and Background. §.§ Model training For the experiments, we select DeepLabV3+ <cit.> for its simplicity and broad usage. DeepLabV3+ contains a decoder that refines the segmentation using atrous convolutions and spatial pyramid pooling. As a backbone, we select a ResNet50 <cit.> pretrained on ImageNet <cit.>, as this is one of the most commonly used backbones. The images are resized to 512 × 512 pixels and augmented with standard transformations: horizontal flip, shift, scale, rotate and color jitter (all at a probability of 0.5). The batch size is 4. The model is trained for 25 epochs, at a learning rate of 0.001. All weights, including the backbone, are optimized during training. The segmentation results and accuracy of the segmentation are presented in the Appendix. §.§ Uncertainty estimation For the uncertainty estimation, we start by comparing the uncertainty values measured on the test set of Rellis3D and the uncertainty values measured on images from other datasets. The uncertainty values should be higher for datasets that are different. For simplicity, we assume that all pixels in an image are certain for Rellis3D and uncertain for other datasets, because for these datasets we do not have detailed annotations of certain or uncertain image segments. This assumption is often violated, because images from the other datasets may have segments that are very similar to Rellis3D. We compare against the DeepLabV3+ baseline, which is the same model, but without our uncertainty module. Its outputs are transformed into uncertainty values by the mapping m(·) from Equation <ref>. This method is referred to as the standard method in the remainder of this paper. The uncertainty performance is measured by 1) the Receiver Operating Characteristic (ROC) curve, in which the true positive rate is plotted against the false positive rate, and 2) the Area Under the Curve (AUC). The best performance is reached in the upper left corner of the ROC curve, in which case the AUC is 1.
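A sketch of this evaluation protocol is given below; it uses scikit-learn's AUC implementation, and the function name and the dummy uncertainty maps are illustrative:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auc(unc_in_domain: np.ndarray, unc_out_of_domain: np.ndarray) -> float:
    """AUC for separating out-of-domain pixels (label 1) from in-domain
    pixels (label 0) based on their per-pixel uncertainty values."""
    scores = np.concatenate([unc_in_domain.ravel(), unc_out_of_domain.ravel()])
    labels = np.concatenate([np.zeros(unc_in_domain.size),
                             np.ones(unc_out_of_domain.size)])
    return roc_auc_score(labels, scores)

# Example with dummy uncertainty maps (H x W), one image of each kind.
rng = np.random.default_rng(0)
u_rellis = rng.beta(2, 8, size=(512, 512))   # mostly low uncertainty
u_other  = rng.beta(8, 2, size=(512, 512))   # mostly high uncertainty
print(f"AUC: {ood_auc(u_rellis, u_other):.3f}")
```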
The AUC scores for both methods on all datasets are shown in Table <ref>. The ROC curves can be seen in Figure <ref> in the Appendix. On all datasets, our method shows better uncertainty estimation performance than the standard method. On CUB-200, MS-COCO and KITTI, the performance is very similar to that of the standard method. These datasets are reasonably different from Rellis3D, but often contain vegetation, streets, humans and cars, which also appear in Rellis3D. For WiderPerson and Fukuoka, our method is favorable, which can be explained by the different viewpoint (aerial) and environment (indoor), respectively. The most prominent insight is that our method is uncertain about fog and fire, whereas the standard method is as certain as it is for Rellis3D, on which it was trained. This result demonstrates the merit of our method for uncertainty estimation in practical cases. §.§ Segment-specific uncertainty We evaluate the uncertainty estimation for the respective segment classes across images. In the SceneParse150 <cit.> dataset, most segmentation classes are semantically and visually different from the Rellis3D classes. The performance is measured by the AUC on uncertainty values per segment class in comparison to the Rellis3D uncertainty values. The segmentation classes that are most uncertain are shown in Figure <ref>. Indeed, these classes are most different from Rellis3D, hence they are expected to yield high uncertainties. In comparison to the standard method, our method yields higher uncertainty values for the segmentation classes that deviate from the training dataset. §.§ Uncertainty visualization It is also possible to show the uncertainty per pixel. An example is shown in Figure <ref>. Here, the input image shows both in-domain (vegetation, sky) and out-of-domain (fire) regions. Output1 shows the segmentation and Output2 shows the resulting uncertainty values: low (black) for the in-domain image segments and high (white) for the out-of-domain image segments. This is the desired behaviour: the robot can make navigation decisions about the image segments where it is certain, while operating in safe mode at the uncertain segments. It can be seen that higher uncertainty values are found around the edges of segments, which is expected, as these pixels might contain information from both adjacent segments, and their labeling will often be based on surrounding pixels. This can also be seen in Figure <ref> for two images from the SceneParse150 dataset, where the uncertainty found for the unknown class of the pigs is much higher with our proposed method than with the standard method. More examples are presented in Figure <ref> in the Appendix. In the bottom image, it can also be seen that the uncertainty for the field is much smaller for our method than for the standard method. This smaller uncertainty will allow for better path planning and faster navigation, by avoiding paths with high uncertainty. §.§ Inference computational costs Regarding the computational cost of inference on a test image, our method is advantageous. The most common method for uncertainty estimation is Monte-Carlo dropout <cit.>. Although very effective, it requires a number of repeated feed-forward calculations of the model with randomly sampled weight parameters, which requires substantial computation and can cause long latency. In our experiments, the model is DeepLabV3+, which has 11.9M weights. Instead of running this model multiple times, our module is low-cost and has to be run only once.
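A minimal sketch of such a single-pass head is shown below; it is written in PyTorch, the class and variable names are our own, and we interpret m as one minus the maximum softmax probability over the prototype scores:

```python
import torch
import torch.nn as nn

class SimplexUncertaintyHead(nn.Module):
    """Illustrative head: 1x1 conv l(.;W_u) from the backbone feature map to
    N-1 dimensions, a fixed projection onto the N class prototypes P, and
    m(x) = 1 - max softmax probability as per-pixel uncertainty."""
    def __init__(self, in_channels: int, prototypes: torch.Tensor):
        super().__init__()
        n_minus_1 = prototypes.shape[0]                   # e.g. 5 for 6 classes
        self.proj = nn.Conv2d(in_channels, n_minus_1, kernel_size=1)
        self.register_buffer("P", prototypes)             # fixed, not trained

    def forward(self, feat: torch.Tensor):
        # feat: (B, C, H, W), e.g. (B, 256, 512, 512)
        z = self.proj(feat)                                # (B, N-1, H, W)
        scores = torch.einsum("bkhw,kn->bnhw", z, self.P)  # prototype scores
        probs = scores.softmax(dim=1)                      # (B, N, H, W)
        labels = probs.argmax(dim=1)                       # per-pixel class
        uncertainty = 1.0 - probs.max(dim=1).values        # m(x)
        return labels, uncertainty
```

With in_channels = 256 and six classes, the 1 × 1 convolution has 256·5 + 5 = 1285 parameters, consistent with the weight count reported next.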
For six traversability classes, our module has 1285 weights, i.e., 0.01% of the total model. These weights are required to transform the model's feature map (the final upscaling layer), a tensor of 512 × 512 × 256 dimensions, into 512 × 512 × 5 dimensions. This transformation is l(·, W_u) in Equation <ref>. It is implemented as a 1 × 1 (pointwise) convolution from 256 to 5 channels. The mapping P from Equation <ref> projects the output of l(·, W_u) from 5 to 6 dimensions, which correspond to the 6 traversability classes. This projection is a fixed matrix multiplication without learnable weights, implemented as a highly efficient vectorized operation. § CONCLUSIONS AND DISCUSSION We propose an extra, lightweight module for a semantic segmentation network, which provides a high-quality segmentation label per pixel and an uncertainty estimate for these labels. The module first optimizes the segmentation by maximizing the separation of the different classes in the training phase. In the inference phase, the uncertainty of a pixel is estimated from the distance of the pixel's projection to the prototype of the class it is labeled as. We have shown that our approach performs on par with GA-Nav-r8 and slightly better than DeepLabV3+. We have shown high uncertainty values for data that differs substantially from the Rellis3D data we trained on. This indicates that the added module provides a good uncertainty estimate for pixels and segments in an image. In future work, we will evaluate the uncertainty estimation in a more quantitative way by estimating the 'ground-truth uncertainty' using Monte Carlo estimation. In addition, we will compare our approach against existing uncertainty estimation approaches. We will also focus on calibrating the uncertainty. The uncertainty visualization shows that our labeling is more uncertain on edges in the image. When using the labeling and uncertainty estimation for navigation, this means that these areas can be avoided when possible. In future work, we will implement this simplex semantic segmentation and uncertainty approach for robust long-distance navigation on a physical robot, both for providing trusted generalized features for self-supervised traversability prediction as in <cit.> and for informing risk-based planning such as the CVaR-constrained optimization mentioned earlier. This will provide information on how well this uncertainty can actually be used for navigation purposes. § APPENDIX: ILLUSTRATIONS In this appendix, we show figures illustrating the results presented earlier. In Figure <ref>, the segmentation results on test images from the Rellis3D dataset are shown. These cases are in-domain. When comparing the results to the ground-truth labels, it can be seen that they are mostly correct. The uncertainty estimation for our method and the standard method is also shown. It can be seen that larger uncertainty values are assigned to unclear boundaries between segments for the standard method than for our method. In Figure <ref>, the uncertainty estimation for our method and the standard method is shown for images with out-of-domain regions: two images from our own dataset and four images from the SceneParse150 <cit.> dataset. The uncertainty estimates obtained with our method are higher than those obtained with the standard method. In Figure <ref>, the ROC curves for six datasets are shown. These graphs show that our method achieves better uncertainty estimation performance than the standard method.
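For completeness, training the module with the cosine objective of Equation <ref> can be sketched as follows; this is a PyTorch sketch in which the variable names, the use of a mean instead of a sum, and the assumption that only the head's convolution is optimized at this stage are ours:

```python
import torch
import torch.nn.functional as F

def cosine_prototype_loss(z: torch.Tensor, targets: torch.Tensor,
                          P: torch.Tensor) -> torch.Tensor:
    """z: (B, N-1, H, W) projected pixel features, targets: (B, H, W) class ids,
    P: (N-1, N) fixed prototype matrix. Returns the negative mean cosine
    similarity between each pixel feature and its class prototype."""
    z = z.permute(0, 2, 3, 1).reshape(-1, z.shape[1])    # (B*H*W, N-1)
    protos = P.t()[targets.reshape(-1)]                  # (B*H*W, N-1)
    return -F.cosine_similarity(z, protos, dim=1).mean()

# One illustrative training step (backbone f assumed frozen here):
# feat = backbone(images)                 # (B, 256, H, W)
# z = head.proj(feat)                     # (B, 5, H, W)
# loss = cosine_prototype_loss(z, masks, head.P)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```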
§.§ Segmentation results and accuracy To evaluate the segmentation results of our proposed method, we compare them with two state-of-the-art methods: DeepLabV3+, on top of which our method is added, and GA-Nav <cit.>. We report the Intersection over Union (IoU) for each class i and the mean IoU (mIoU). Table <ref> shows that our method performs on par with GA-Nav-r8. Surprisingly, our model performs better than the DeepLabV3+ results reported in <cit.>, which uses the same architecture and setup as our model, but was trained with other settings. We hypothesize that some of our design choices are better suited for Rellis3D. One of the differences with <cit.> is that they train with a smaller batch size (2 instead of 4). Their image size is 375 × 600 pixels, which matches the aspect ratio of the images in the dataset (1600 × 1920 pixels) less closely than our image size (512 × 512 pixels). Possibly the biggest advantage is that we use more augmentations. Whereas <cit.> uses only horizontal flip and random crop, we add scale, rotation and color jitter augmentations. In particular, the scale changes in the dataset are sometimes large, which we address with the scale augmentation. Some of the images are recorded under a slight tilt, because the robot might be on a slope. This is addressed with the rotation augmentation during training. Examples of the segmentation images are shown in the Appendix in Figure <ref>.
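For reference, the training-time augmentation described above could be configured as follows; this sketch uses the albumentations library, and the specific shift/scale/rotate limits are our own illustrative choices, with only the transform types, the 512 × 512 resizing and p = 0.5 coming from the text:

```python
import albumentations as A

train_transform = A.Compose([
    A.Resize(512, 512),
    A.HorizontalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.2,
                       rotate_limit=15, p=0.5),
    A.ColorJitter(p=0.5),
])

# Applied jointly to an image and its segmentation mask:
# augmented = train_transform(image=image, mask=mask)
# image_aug, mask_aug = augmented["image"], augmented["mask"]
```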
http://arxiv.org/abs/2407.12276v1
20240717025441
VCP-CLIP: A visual context prompting model for zero-shot anomaly segmentation
[ "Zhen Qu", "Xian Tao", "Mukesh Prasad", "Fei Shen", "Zhengtao Zhang", "Xinyi Gong", "Guiguang Ding" ]
cs.CV
[ "cs.CV" ]
Visual context prompting model Z. Qu et al. CAS Engineering Laboratory for Intelligent Industrial Vision, Institute of Automation, Chinese Academy of Sciences, Beijing, China University of Chinese Academy of Sciences, Beijing, China CASI Vision Technology CO., LTD., Luoyang, China University of Technology Sydney, Sydney, Australia Hangzhou Dianzi University, Hangzhou, China Tsinghua University, Beijing, China {quzhen2022, taoxian2013, fei.shen, zhengtao.zhang}@ia.ac.cn mukesh.prasad@uts.edu.au, gongxinyi@hdu.edu.cn dinggg@tsinghua.edu.cn VCP-CLIP: A visual context prompting model for zero-shot anomaly segmentation Zhen Qu1,20009-0000-2173-612X Xian Tao1,2,3 0000-0001-5834-5181 Mukesh Prasad4 0000-0002-7745-9667 Fei Shen1,2,30000-0001-9263-4489 Zhengtao Zhang1,2,30000-0003-1659-7879 Xinyi Gong50000-0002-6515-2836 Guiguang Ding60000-0003-0137-9975 =================================================================================================================================================================================================================================================== § ABSTRACT Recently, large-scale vision-language models such as CLIP have demonstrated immense potential in zero-shot anomaly segmentation (ZSAS) task, utilizing a unified model to directly detect anomalies on any unseen product with painstakingly crafted text prompts. However, existing methods often assume that the product category to be inspected is known, thus setting product-specific text prompts, which is difficult to achieve in the data privacy scenarios. Moreover, even the same type of product exhibits significant differences due to specific components and variations in the production process, posing significant challenges to the design of text prompts. In this end, we propose a visual context prompting model (VCP-CLIP) for ZSAS task based on CLIP. The insight behind VCP-CLIP is to employ visual context prompting to activate CLIP’s anomalous semantic perception ability. In specific, we first design a Pre-VCP module to embed global visual information into the text prompt, thus eliminating the necessity for product-specific prompts. Then, we propose a novel Post-VCP module, that adjusts the text embeddings utilizing the fine-grained features of the images. In extensive experiments conducted on 10 real-world industrial anomaly segmentation datasets, VCP-CLIP achieved state-of-the-art performance in ZSAS task. The code is available at <https://github.com/xiaozhen228/VCP-CLIP>. § INTRODUCTION In the field of industrial visual inspection, zero-shot anomaly segmentation (ZSAS) endeavors to accurately localize and segment anomalous regions within novel products, without relying on any pre-customized training data. Due to its significant potential applications in scenarios with data privacy concerns or a scarcity of annotated data, ZSAS has garnered increasing attention from researchers <cit.>. Unlike traditional anomaly segmentation methods <cit.>, ZSAS requires strong generalization ability to adapt to significant variations in visual appearance, anomalous objects, and background features across different industrial inspection tasks. In recent, CLIP <cit.> has emerged as a vision-language foundation model for addressing the ZSAS task. As shown in Fig. <ref>(a), existing CLIP-based methods map images and their corresponding two-class text into a joint space and compute cosine similarity. Image regions that have high similarity with the defect-related text are considered as anomalies. 
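As a rough illustration of this two-class scoring scheme (and not of the method proposed in this paper), patch-level image embeddings in the joint space can be scored against a pair of normal/abnormal prompts as follows; the use of the OpenAI clip package, the prompt wording and the variable names are assumptions for illustration, and how the patch embeddings are obtained is precisely where the methods below differ:

```python
import torch
import clip  # OpenAI CLIP package (assumed available)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

prompts = ["a photo of a normal bottle", "a photo of a damaged bottle"]
with torch.no_grad():
    text_emb = model.encode_text(clip.tokenize(prompts).to(device)).float()
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)     # (2, D)

def anomaly_scores(patch_emb: torch.Tensor) -> torch.Tensor:
    # patch_emb: (H*W, D) patch-level embeddings already in the joint space
    patch_emb = patch_emb / patch_emb.norm(dim=-1, keepdim=True)
    sims = 100.0 * patch_emb @ text_emb.t()                    # (H*W, 2)
    return sims.softmax(dim=-1)[:, 1]                          # P(abnormal) per patch
```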
For example, WinCLIP <cit.>, AnVoL <cit.>, and APRIL-GAN <cit.> extract dense visual features by applying multi-scale windowing or patching to images and align normal and abnormal image regions separately through a two-class text prompt design. However, the existing CLIP-based methods <cit.> present significant challenges in practical applications. On the one hand, previous methods <cit.> assume that the product category (e.g., wood) of inspected images is known in advance and utilize this information to design product-specific textual prompts (e.g., a photo of a normal wood). However, the product categories are unattainable or unpredictable in data privacy scenarios, rendering these methods unusable. Furthermore, we conducted an experiment in which we replaced the product categories (names) in the text prompts with semantically similar terms in WinCLIP, such as substituting bottle with container or vessel. We observed fluctuations in segmentation performance of up to ±8% in terms of Average Precision (AP) metric. This motivates us to reconsider the importance of product names in text prompts, especially since some product names are ambiguous (e.g., pcb1, pcb2, pcb3 in the VisA <cit.> dataset). Even within the same product category, significant differences arise due to specific components and differences in the production process, such as variations in appearance color, size, and manufacturing materials, among others. Recently, AnomalyCLIP <cit.> attempted to design object-agnostic text prompts, but they replaced all product name with a uniform description "object", leading to challenges in adapting to complex industrial scenarios. On the other hand, mapping images and text separately into a joint space <cit.> without any interaction does not facilitate mutual understanding of various modalities, and easily leads to image overfitting to certain text prompts. As illustrated in Fig. <ref>(a), where the output image and text embeddings are directly aligned, this approach results in a limited grasp of diverse modalities, thereby affecting anomaly segmentation performance. To address the aforementioned problems, a straightforward and effective visual context prompting (VCP) model based on CLIP is proposed for ZSAS task. As shown in Fig. <ref>(a), we aim to perform anomaly segmentation on novel (unseen) products (such as bottle and hazelnut) after training on limited seen products (such as cashews and pcb1) in auxiliary datasets. Existing methods <cit.> rely on manually defined text prompts as shown in Fig. <ref>(b). The unified text prompts are used as the baseline as shown in Fig. <ref>(c) in this paper, where the product categories are set as continuous learnable tokens. The proposed Pre-VCP module, depicted in Fig. <ref>(d), is an upgraded version of the baseline. It incorporates global image features to more accurately encode the product category semantics in the text space. To facilitate understanding of global image features, a deep text prompting (DTP) technique is introduced to refine the text space. Compared to the baseline, Pre-VCP enables the transition from uniform prompts to image-specific prompts, significantly reducing the cost of prompt designs. To enhance the mutual understanding of features from different modalities, the Post-VCP module is further proposed, which adjusts the output text embeddings based on fine-grained visual features. This approach further strengthens CLIP's ability to accurately segment anomalous regions. 
In conclusion, we propose a visual context prompting model based on CLIP (VCP-CLIP) for the ZSAS task. As depicted in Fig. <ref>(b), we extract the global and dense image embeddings from the image encoder. The former is integrated into the input text prompts after passing through the Pre-VCP module, while the latter is utilized for fine-grained image features in anomaly segmentation. A Post-VCP module is further designed to update the text embeddings based on fine-grained visual features, effectively facilitating mutual understanding between different modalities and further enhancing the model's generalization ability to novel products. The final anomaly maps simultaneously integrate segmentation results aligned from the original text embeddings and dense image embeddings, which helps further enhance the segmentation performance. The main contributions of this work are as follows: 1. We propose a novel visual context prompting model based on CLIP, namely VCP-CLIP, to tackle the ZSAS problem. By training on a limited set of seen products, VCP-CLIP can localize anomalies in any unseen product, even when the product category is unknown. Compared to current text prompting approaches <cit.>, our approach utilizes visual context prompting to fully activate CLIP's anomalous semantic perception ability. 2. We reveal for the first time that visual context provides additional information for text prompts in the ZSAS task. Specifically, the Pre-VCP and Post-VCP modules are designed to utilize global and fine-grained image features for text prompting, respectively. In doing so, VCP-CLIP avoids extensive manually defined text prompting engineering, thus alleviating the overfitting issue arising from pre-training on specific text prompts. 3. In extensive experiments conducted on 10 real-world industrial anomaly segmentation datasets, VCP-CLIP exhibits superior zero-shot performance in segmenting anomalies on unseen products. § RELATED WORK Prompt learning. Prompt learning is initially applied in the field of NLP, aiming to utilize affordable annotated data to automatically generate prompts, thereby enhancing the capabilities of foundation models, such as CLIP <cit.>, GPT-3.5 <cit.>, and LLaMA <cit.> in downstream tasks. CoOp <cit.> first introduces prompt learning in the CLIP model, utilizing learnable prompt tokens in the textual space. VPT <cit.> and ZegCLIP <cit.> insert trainable embeddings in each layer of the image encoder, allowing refinement of the image space to better adapt to downstream semantic segmentation task. These methods aim to enable the pretrained backbone to adapt to the target domain using prompt learning. In recent works, CoCoOp <cit.> and DenseCLIP <cit.> guide the pretrained backbone to adapt to the target domain through the visual context prompting. Related to our VCP module is CoCoOp, which incorporates visual contexts into text prompts to improve the classification performance on novel categories. However, our VCP replaces product categories within the text prompts rather than the entire sentence, in contrast to CoCoOp. The proposed approach has been validated as more effective than CoCoOp in ZSAS, which does not necessitate prior knowledge of product categories. Zero-shot anomaly segmentation. With the advancements of foundation models such as CLIP <cit.> and SAM <cit.>, ZSAS has increasingly captured the attention of researchers. According to whether auxiliary data for training is required, existing methods can be broadly categorized into two groups. 
1) Training-free methods. Building upon CLIP, WinCLIP <cit.> and AnVoL <cit.> carefully craft text prompts to identify anomalies without training on auxiliary datasets. The former proposes a window-based approach, aggregating classification results from images within different scale windows using harmonic aggregation. The latter utilizes V-V attention instead of the original Q-K-V attention in the image encoder to extract fine-grained features and adaptively adjusts for each image during testing in a self-supervised manner. SAA/SAA+ <cit.> utilizes language to guide the Grounding DINO <cit.> for detection of anomalous regions and then employs SAM for finely segmenting the detection results. However, these existing methods not only require more complex prompt designs or post-processing but also introduce additional computational and storage burdens during inference. 2) Training-required methods. APRIL-GAN <cit.>, CLIP-AD <cit.>, and AnomalyCLIP <cit.> utilize seen products with annotations as auxiliary data to fine-tune CLIP for ZSAS on unseen products. These approaches employ linear layers to map patch-level image features to a joint space of text and vision, facilitating alignment between different modalities. AnomalyGPT <cit.> is another seminal work that utilizes the large language model Vicuna <cit.> to guide the model in locating anomalies. Through supervised pretraining on synthesized anomaly images, AnomalyGPT can support multi-turn dialogues and locate anomalies in unseen products. However, existing methods all overlook the role of visual context in fine-grained multimodal alignment, and they may struggle when confronted with complex industrial anomaly segmentation scenes. Recently, ClipSAM <cit.>, an integration of CLIP and SAM, has been employed for cross-modal interaction in ZSAS task. However, the two-stage prediction has increased the complexity of the model. § OUR METHOD §.§ Problem Definition Our approach follows the generalized ZSAS methods adopted in works <cit.>, which requires segmenting the anomalies in unseen products C^u after training on seen products C^s with pixel-annotations. During the training stage, the model generates pixel-wise classification results based on two categories of textual descriptions: normal and abnormal. During the testing stage, the model is expected to directly segment anomalies in unseen products. It is worth noting that C^u ∩ C^s = ∅ and the products used in the training and testing stages come from different datasets. This undoubtedly poses a significant challenge to the model's domain generalization capability. §.§ The design of baseline Existing CLIP-based improvement methods have three main drawbacks: 1) manually designing text prompts is time-consuming and labor-intensive, 2) product-specific text prompts cannot adapt to data privacy scenarios, and 3) the localization results are easily influenced by the semantics of product categories in the text prompts <cit.>. To address the aforementioned issues, we propose a baseline that incorporates two main designs: unified text prompting (UTP) and deep text prompting (DPT). As shown in Fig. <ref>, given an input image X ∈ℝ^h× w × 3 and two-class text prompts, the designed baseline (marked in red dashed) first extracts patch-level image features and text features separately. Then, the patch-level image features are mapped to a joint space, where the similarity between image features and text features is computed to generate anomaly maps. 
Finally, anomaly maps from multiple intermediate layers of the image encoder are fused after upsampling to obtain the final results. Unified text prompting (UTP). A unified template for generating normal and abnormal text prompts is designed as follows: H = [a] [photo] [of] [a] [state] [v_1] [v_2] ⋯ [v_r] where v_i, i∈{1,2,⋯ r} is a C-dimensional learnable vector embedded into the word embedding space, used to learn the unified textual context of the product categories. A pair of opposing [state] words, such as "good/damaged" and "perfect/flawed", is utilized to generate normal and abnormal text prompts, respectively. H represents the word embedding matrix corresponding to specific prompts in the textual space. In this paper, we choose a common state word pair, i.e. "good/damaged". Deep text prompting (DTP). Before statement, let us first review the inference process of the CLIP text encoder briefly. Before being fed into the text encoder, [SOS] and [EOS] are respectively added to the front and back of the text prompt, indicating the beginning and end of the sentence. Afterwards, these tokens are mapped to a discrete word embedding space, capped to a fixed length of 77 in CLIP. Let us denote the word embeddings as [s, H, e, J] ∈ℝ^77× C, where s and e are C-dimensional word embeddings corresponding to [SOS] and [EOS] tokens, respectively. J is a placeholder matrix initialized to zero to ensure a fixed length of the word embeddings. The final output text embedding at the position of the [EOS] token is aligned with the image features after passing through a linear projection layer. To better align fine-grained normal and anomalous visual semantics with text, deep text prompting is designed to further refine the textual space as shown in Fig. <ref>. In specific, continuous trainable embeddings are inserted at the beginning of text embedding in each transformer layer of the text encoder. Assuming the text encoder's (i+1)-th layer is represented as Layer_i+1^text, the inserted embeddings are P_i ∈ℝ^n × C and the output text embedding is g. The process is formulated as follows: [s_i, _ , H_i, e_i, J_i] = Layer_i^text([s_i-1, P_i-1, H_i-1, e_i-1, J_i-1]) g = TextProj (Norm(e_N_t)) where i = 1,2,⋯ N_t, s_0 = s, H_0 = H, e_0 = e. N_t is the number of text encoder layers. TextProj(·) and Norm(·) respectively denote final text projection and LayerNorm <cit.> layers. For normal and abnormal text prompts, we denote the embeddings after DTP as g_n and g_a, respectively. Since the masked self-attention is employed in the text encoder, [s_i, P_i, H_i, e_i, J_i] and [s_i, H_i,P_i, e_i, J_i] are not mathematically equivalent. We adopted the former because the model can only attend to tokens before itself, thus placing the learnable embeddings at the beginning of the sentence leads to a greater degree of refinement in the textual space. More details are shown in the Appendix B.2. How to acquire the anomaly map? For an input image X ∈ℝ^h× w × 3, patch-level visual feature map Z_s^l ∈ℝ^H× W × d_I, l=1,2,⋯, B are extracted from the image encoder layers, where H = h/patchsize, W=w/patchsize, d_I is the size of image embeddings and B is the number of extracted intermediate patch-level feature layers. Then, the feature maps are mapped to a joint space and align with text embeddings using a single linear layer by calculating the cosine similarity. 
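To make the deep text prompting concrete, the following sketch (ours) injects a fresh set of learnable prompt tokens into every layer of a stack of generic transformer layers and reads out the [EOS] position at the end; CLIP's causal attention mask, pretrained weights and exact projection dimensions are omitted or simplified for brevity:

```python
import torch
import torch.nn as nn

class DeepTextPrompting(nn.Module):
    def __init__(self, num_layers: int, dim: int, n_prompt: int = 1):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
             for _ in range(num_layers)]
        )
        # one set of learnable prompt tokens per layer (P_0, ..., P_{N_t-1})
        self.prompts = nn.ParameterList(
            [nn.Parameter(torch.randn(n_prompt, dim) * 0.02)
             for _ in range(num_layers)]
        )
        self.norm = nn.LayerNorm(dim)
        self.text_proj = nn.Linear(dim, dim, bias=False)

    def forward(self, tokens: torch.Tensor, eos_index: int) -> torch.Tensor:
        # tokens: (B, L, dim) word embeddings [s, H, e, J]
        b = tokens.shape[0]
        n = self.prompts[0].shape[0]
        for layer, prompt in zip(self.layers, self.prompts):
            sos, rest = tokens[:, :1], tokens[:, 1:]
            x = torch.cat([sos, prompt.expand(b, -1, -1), rest], dim=1)
            x = layer(x)
            # drop the transformed prompt positions; the next layer gets its own P_i
            tokens = torch.cat([x[:, :1], x[:, 1 + n:]], dim=1)
        e = tokens[:, eos_index]                  # embedding at the [EOS] token
        return self.text_proj(self.norm(e))       # output text embedding g
```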
Let us respectively denote the visual and textual features in the joint space as F_s^l ∈ℝ^HW × C and F_t = [g_n,g_a] ∈ℝ^2× C, where C is the embedding size in the joint space. The process of acquiring the anomaly map can be formulated as: M_1^l = softmax(Up(F_s^lF_t^T) / τ_1), l=1,2,⋯ B where τ_1 denotes the temperature coefficient, which is set as a learnable parameter. Up(·) is an upsampling operation with bilinear interpolation. (·) represents the L_2-normalized version along the embedding dimension. §.§ The design of VCP-CLIP The baseline has made some progress, but still faces the following three main problems: 1) The unified text prompt does not consider specific visual contexts. 2) Overfitting phenomena may occur in the unified text prompt. 3) Insufficient interaction between information from different modalities limits further improvement in segmentation performance. In this end, we further designed two novel visual context prompting modules, namely Pre-VCP and Post-VCP as shown in Fig. <ref>. In contrast to the baseline, the global features of the image are encoded into the text prompt using the Pre-VCP module. The Post-VCP module receives patch-level features from the image encoder and text features from the text encoder as inputs to generate the anomaly map. Pre-VCP module. We designed a Pre-VCP module to introduce global image features into the text prompts of the baseline. Due to the extensive alignment of image-text pairs during the pretraining process of CLIP, the embedding at the [CLS] token position of the image encoder encompasses rich global image features. We combine the global image features with learnable vectors in the baseline to facilitate the fusion with the unified category contexts. Specifically, the global image features are initially mapped to the word embedding space through a small neural network, namely Mini-Net. This can be expressed as {x_i}_i=1^r = h(x), where x_i ∈ℝ^1× C, i=1,2,⋯ r represents the mapping results, which are combined with embeddings corresponding to the product category: z(x,v) = [z_1(x_1,v_1), z_2(x_2,v_2), ⋯ , z_r(x_r,v_r)] where z_i = x_i + v_i. For the Mini-Net h(·), a parameter-efficient design utilizing only a one-dimensional convolutional layer with (r, 1× 3) kernels is employed. The final text prompt based on Pre-VCP can be expressed as follows: H_v = [a][photo][of][a][state][[z_1(x_1,v_1)][z_2(x_2,v_2)]⋯ [z_r(x_r,v_r)] For convenience in the subsequent text, we refer to the text prompt template as "a photo of a [state] [z(x,v)]". Post-VCP module. To further enable the text embedding to adapt based on fine-grained image features, we devised a Post-VCP module, as illustrated in Fig. <ref>. The text embedding F_t ∈ℝ^2× C and flattened visual embedding Z_s^l ∈ℝ^HW × d_I from each layer are projected into a latent space with C-dimension. Then the learnable queries Q_t, keys K_s^l, and values V_s^l can be obtained: Q_t = F_tW_t^q, K_s^l = Z_s^l W_s^k, V_s^l = Z_s^lW_s^v where W_t^q∈ℝ^C× C,W_s^k∈ℝ^d_I× C,W_s^v∈ℝ^d_I× C are linear projection matrices in the PreProj layer. To capture richer visual features for fine-tuning text, a multi-head structure is adopted for computing attention maps to update text features within each head using matrix multiplication: {Q_t^(m)}{K_s^l(m)}{V_s^l(m)} = Split(Q_t, K_s^l, V_s^l) A_t^l(m) = SoftMax(Q_t^(m)K_s^l(m)T), O_t^l(m) = A_t^l(m)V_s^l(m) O_t^l = Concat(O_t^l(1),O_t^l(2),⋯,O_t^l(M))W_t^o where m = 1,2,⋯, M. 
M is the number of heads, Q_t^(m)∈ℝ^2×(C/M), K_s^l(m)∈ℝ^HW×(C/M),V_s^l(m)∈ℝ^HW×(C/M) represent the features within each head after the Split(·) operation for partitioning along the embedding dimension. A_t^l(m)∈ℝ^2× HW and O_t^l(m)∈ℝ^2× (C/M) respectively refer to the attention maps and the text features updated through the image feature within each head. After concatenating all features along the embedding dimension using the Concat(·) operation, a PostProj layer with weight matrix W_t^o ∈ℝ^C× d_I is employed to obtain the final updated text embedding O_t^l ∈ℝ^2× d_I from F_t. Then, the updated anomaly map is calculated as: M_2^l = softmax(Up(Z_s^lO_t^lT) / τ_2), l=1,2,⋯ B where τ_2 is a temperature coefficient set as a learnable parameter. r0.5 < g r a p h i c s > The visualization result of the attention maps from the Post-VCP module. To visually validate the effectiveness of the Post-VCP module, we show the attention maps A_t^l(m) under different heads corresponding to normal and abnormal text embeddings. These maps reveal that abnormal text embeddings concentrate more on defective regions of the image compared to normal text embeddings. This clear differentiation stems from employing fine-grained visual contexts in the Post-VCP module to update text embeddings from F_t to O_t^l. §.§ Training and Inference Loss function. In this work, we employed focal loss <cit.> and dice loss <cit.> to supervise the learning of VCP-CLIP. The total loss function of VCP-CLIP is calculated as: L_total = ∑_lFocal(M_1^l,S) + ∑_lDice(M_1^l,S)_Baseline + ∑_lFocal(M_2^l,S) + ∑_lDice(M_2^l,S)_Additional VCP modules where the loss function consists of two components, one for the baseline and the other for additional VCP module. M_1^l and M_2^l, l=1,2,⋯ B are anomaly maps generated from the two branches mentioned above. S∈ℝ^h× w is the ground truth corresponding to the input image. Inference. The ultimate anomaly maps come from different layers of the image encoder by summation. The anomaly maps generated from the two branches are represented as M_1 and M_2. To further enhance the ZSAS capability, we introduced a weighted fusion policy to generate the final anomaly map, M_a = (1 - α)M_1 + α M_2 , where α∈ [0,1] is a fusion weight designed as a hyperparameter to balance the importance of different anomaly maps. § EXPERIMENTS §.§ Experimental Setup Datasets and metrics. To assess the performance of the model, ten real industrial anomaly segmentation datasets are used, including MVTec-AD <cit.>, VisA <cit.>, BSD <cit.>, GC <cit.>, KSDD2 <cit.>, MSD <cit.>, Road <cit.>, RSDD <cit.>, BTech <cit.>, DAGM <cit.>. Since the products in VisA do not overlap with those in other datasets, we use VisA as the training dataset for evaluation on other datasets. For VisA itself, we assess it after training on MVTec-AD. Please refer to the Appendix C for more details. To ensure a fair comparison, pixel-level AUROC (Area Under the Receiver Operating Characteristic), PRO (Per-Region Overlap), and AP (Average Precision) are employed as the evaluation metrics, following the recent works <cit.>. Implementation details. In the experiments, we adopt the CLIP model with ViT-L-14-336 pretrained by OpenAI <cit.> by default. Specifically, we set the number of layers B for extracting patch-level features to 4. Since the image encoder comprises 24 transformer layers, we evenly extract image features from layers {6, 12, 18, 24}. All images are resized to a resolution of 518×518, and then fed into the image encoder. 
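The Post-VCP update described above is essentially a multi-head cross-attention in which the two text embeddings act as queries and the patch embeddings as keys and values; a minimal PyTorch sketch (ours, following the notation above and, like the equations, without the usual 1/√(d) scaling, assuming the latent dimension is divisible by the number of heads) is:

```python
import torch
import torch.nn as nn

class PostVCP(nn.Module):
    """Update the (normal, abnormal) text embeddings F_t with patch features Z_s."""
    def __init__(self, text_dim: int, img_dim: int, latent_dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.q = nn.Linear(text_dim, latent_dim, bias=False)   # W_t^q
        self.k = nn.Linear(img_dim, latent_dim, bias=False)    # W_s^k
        self.v = nn.Linear(img_dim, latent_dim, bias=False)    # W_s^v
        self.out = nn.Linear(latent_dim, img_dim, bias=False)  # W_t^o (PostProj)

    def forward(self, F_t: torch.Tensor, Z_s: torch.Tensor) -> torch.Tensor:
        # F_t: (B, 2, text_dim), Z_s: (B, HW, img_dim)
        B, T, _ = F_t.shape
        q = self.q(F_t).view(B, T, self.heads, -1).transpose(1, 2)         # (B, M, 2, C/M)
        k = self.k(Z_s).view(B, Z_s.shape[1], self.heads, -1).transpose(1, 2)
        v = self.v(Z_s).view(B, Z_s.shape[1], self.heads, -1).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)                    # (B, M, 2, HW)
        o = (attn @ v).transpose(1, 2).reshape(B, T, -1)                    # concat heads
        return self.out(o)                                                  # O_t: (B, 2, img_dim)

# The updated text embeddings are then compared with the normalized patch
# embeddings to obtain the second anomaly map M_2, as in the equations above.
```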
The length of the learnable category vectors r and the length of the learnable text embeddings n in each text encoder layer are set to 2 and 1, respectively, by default. The number of attention heads M in the Post-VCP module is set to 8. The fusion weight α for different anomaly maps is set to 0.75 as the default value. The Adam optimizer <cit.> with an initial learning rate of 4e-5 is used, and the model is trained for continuous 10 epochs with a batch size of 32. All experiments are conducted on a single NVIDIA GeForce RTX 3090. We conducted three runs using different random seeds and then averaged the results. More details can be found in Appendix A. §.§ Comparison with the State-of-the-art Two kinds of state-of-the-art approaches are used to compare with ours: training-free approaches and training-required approaches. The training-free approaches include WinCLIP <cit.> and AnVoL <cit.>, which do not require auxiliary datasets for fine-tuning the model but necessitate more complex manual prompt designs and inference processes. The training-required approaches comprise CoCoOp <cit.>, AnomalyGPT <cit.> and APRIL-GAN <cit.>, which adhere to the protocol of training on the seen products and testing on the unseen products. Quantitative comparison. Table <ref> shows the quantitative performance comparison with other state-of-the-art methods on ZSAS. The best results are shown in bold, and the second best results are underlined. It can be observed that the proposed VCP-CLIP outperforms all other methods across all metrics, particularly in terms of AP. Due to the tiny anomaly regions on the Visa dataset, its anomaly segmentation is more challenging. However, VCP-CLIP still maintains its advantage compared to other methods. Notably, it achieves state-of-the-art results on VisA dataset, with AUROC score of 95.7%, PRO score of 90.7% and AP score of 30.1%. It is noteworthy that our baseline approach has already achieved nearly superior performance compared to existing methods such as CoCoOp, which similarly introduces global image information in the text prompts. This is because our method simultaneously adjusts text embeddings using fine-grained image features. Qualitative comparison. For a more intuitive understanding of the results, we visualized the anomaly segmentation results of our VCP-CLIP alongside another five methods: WinCLIP <cit.>, AnVoL <cit.>, CoCoOp <cit.>, AnomalyGPT <cit.>, and APRIL-GAN <cit.> on the MVTec-AD and VisA datasets in Fig. <ref>. The visualization results clearly indicate that the compared approaches have a tendency to generate incomplete or false-positive results, which can negatively impact the performance of anomaly localization. In contrast, our VCP-CLIP effectively mitigates these issues, providing a more accurate and reliable approach to ZSAS. More quantitative and qualitative comparisons are provided in the Appendix D. §.§ Unified text prompting vs. visual context prompting Same prompts during training and testing. To better validate the effectiveness of VCP-CLIP, we compared it with the proposed baseline on MVTec-AD and VisA. Fig. <ref> illustrates the AP improvement of VCP-CLIP over the baseline for each product. In specific, VCP-CLIP demonstrates varying degrees of improvement among 13 out of the 15 products and 10 out of the 12 products on the MVTec-AD and VisA datasets, respectively. 
This affirms the robust generalization capability of VCP-CLIP, which is attributed to both the global visual context in Pre-VCP and the fine-grained local visual context in Post-VCP. Different prompts during training and testing. To validate the robustness of VCP-CLIP during the test process with different text prompts, we employed text prompts different from those used during training on the MVTec-AD and VisA datasets. Specifically, during training, the default state words "good/damaged" were used. During testing, we reported the metric AP when the state words were respectively "normal/abnormal", "perfect/flawed", and "pristine/broken". As shown in Fig. <ref>, our baseline performance sharply declined on two datasets, while the performance of VCP-CLIP remained relatively stable. This indicates that after incorporating VCP, the model can adaptively adjust the output text embeddings based on input images, thereby avoiding dependence on the specific text prompts used during training. §.§ Ablation Studies Influence of different components. To assess the impact of different components on VCP, experiments were conducted on MVTec-AD. Results in Table <ref> indicate performance when using DTP, Pre-VCP or Post-VCP individually. Notably, the optimal performance for VCP is achieved when all combined. It can been seen that the performance decline is more pronounced after removing Post-VCP compared to Pre-VCP. We also attempted to remove the learnable text embeddings from each layer of the text encoder (without DTP), which resulted in a decrease of 0.3% in AUROC, 0.6% in PRO, and 1.2% in AP. This is because the original text space cannot directly comprehend the global features of images, while DTP ensures deep fine-tuning of each text encoder layer, thereby fostering mutual understanding and fusion of different modalities. Influence of ensemble of different patch-level image layers. In Table <ref>, we explore the impact of patch-level features from different image encoder layers on VCP-CLIP’s performance. The experiments were conducted on the MVTec-AD dataset. An intuitive observation is that image features from intermediate layers (i.e. the 12th and 18th layers), contribute more to the final segmentation result. Image features from lower layers (i.e., the 6th layer) are too low-level, while those from higher layers (i.e., the 24th layer) are overly abstract. Their effectiveness is not as pronounced as those from intermediate layers. However, We observed a positive correlation between incorporated layer numbers and improved segmentation results. To maintain high performance, we adopted all patch-level features from {6, 12, 18, 24} layers in VCP-CLIP. Ablation on text prompt design. As demonstrated in Table <ref>, we considered two commonly used text prompt templates and explored the impact of different prompting state words in the proposed VCP-CLIP on MVTec-AD. Specifically, we designed the following two text prompt templates: 1) this is a [state] photo of [z(x,v)]; 2) a photo of a [state] [z(x,v)]. The state words (e.g. "perfect/flawed") are respectively inserted into the template to generate normal and abnormal text prompts. It can be observed that for the same template with different state words, our VCP-CLIP model consistently maintains similar performance, validating the robustness towards the state words. 
Furthermore, the second type of template, default employed in VCP-CLIP, outperforms the first type overall, which may be attributed to the repeated usage of similar template during the pre-training process of the vanilla CLIP. Ablation on different pretrained models and resolutions. In Table <ref> and Table <ref>, we conducted a comprehensive analysis of the impact of varying input image resolution and pre-trained backbone on MVTec-AD. The former is tested using ViT-L-14-336, while the latter reports the optimal performance under different backbones pre-trained by OpenAI. The inference time was simultaneously tested for a single image (average of 200 images). We observe that a moderate increase in input image resolution contributes to more precise segmentation (higher AP). However, deviations from the original pre-training resolution (336^2 to 798^2), leading to model degradation. This outcome can be attributed to the model deviating from the original image space. The result in Table <ref> shows that our VCP-CLIP achieves the optimal segmentation performance in ViT-L-14-336. Therefore, we have chosen it as the default backbone. § CONCLUSION In this paper, we present VCP-CLIP, a novel zero-shot anomaly segmentation (ZSAS) method achieved through the integration of visual context prompting (VCP). The core methodology involves incorporating richer visual knowledge into the textual space and cross-modal interaction between textual and visual features. Specifically, a Pre-VCP and a Post-VCP module are designed to respectively introduce global and fine-grained image features into the textual space. With this design, our model can directly segment anomalies in novel products without any prior knowledge. Extensive experiments conducted on 10 real-world industrial anomaly segmentation datasets showcase VCP-CLIP’s state-of-the-art performance in ZSAS. Acknowledgments. This work was supported in part by the National Science and Technology Major Project of China under Grant 2022ZD0119402; in part by the National Natural Science Foundation of China under Grant No. 62373350 and U21A20482; in part by the Youth Innovation Promotion Association CAS (2023145) ; in part by the Beijing Municipal Natural Science Foundation, China, under Grant L243018. splncs04 Appendix This supplementary appendix contains the following five parts: 1) Detailed experimental setup and introduction of state-of-the-art methods in Section <ref>; 2) Additional experiments and further analysis in Section <ref>; 3) Introduction of the real industrial datasets in Section <ref>; 4) More detailed presentations of quantitative and qualitative results in Section <ref>; 5) Discussion of model limitations in Section <ref>. § IMPLEMENTATION DETAILS AND STATE-OF-THE-ART METHODS §.§ Implementation details Details of model configuration. In this paper, we adopt the CLIP model with ViT-L-14-336 pretrained by OpenAI <cit.>. The length of learnable category vectors (tokens) r and the length of learnable text embeddings n in each text encoder layer are set to 2 and 1, respectively. The number of image encoder layers B for extracting patch-level features is set to 4 and we evenly select the 6th, 12th, 18th, and 24th layers to acquire dense image embeddings. In our baseline, we use a single linear layer to map the dense image embeddings Z_s^l to F_s^l in the joint embedding space. In addition, the number of attention heads M in Post-VCP module is set to 8 and the fusion weight α is set to 0.75 in our VCP-CLIP. 
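Collecting the defaults listed above, the inference-time fusion of the two anomaly maps can be written compactly as follows; this is an illustrative sketch and the dictionary keys are our own naming:

```python
import torch

DEFAULTS = dict(backbone="ViT-L-14-336", image_size=518, layers=(6, 12, 18, 24),
                r=2, n=1, heads=8, alpha=0.75)

def fuse_anomaly_maps(M1: torch.Tensor, M2: torch.Tensor,
                      alpha: float = DEFAULTS["alpha"]) -> torch.Tensor:
    """M1, M2: (B, H, W) anomaly maps from the baseline branch and the VCP
    branch, each already summed over the selected image encoder layers."""
    return (1.0 - alpha) * M1 + alpha * M2
```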
All images are resized to a resolution of 518 × 518, and then fed into the image encoder. Details of training and testing. We conduct experiments on 10 publicly available real-world industrial anomaly segmentation datasets, including MVTec-AD and VisA, which are widely used in previous ZSAS tasks. Notably, the products in VisA do not overlap with those in other datasets. Therefore, to evaluate ZSAS performance on other datasets, we employ weights trained on VisA's test sets. As for VisA, we assess ZSAS performance after training on MVTec-AD. The final results for each dataset are derived from the average value of the products it contains. During training on seen products, we maintain the original CLIP parameters in a frozen state, updating only the newly introduced learnable parameters. The Adam optimizer <cit.> with an initial learning rate of 4e-5 is used and the model is trained for continuous 10 epochs with a batch size of 32. All experiments are conducted on a single NVIDIA GeForce RTX 3090, and we perform three runs using different random seeds and then average the results. §.§ State-of-the-art methods * WinCLIP <cit.> is a representative ZSAS method that ensembles a large number of manually designed text prompts to classify sub-images within each window. The classification outcomes from various scaled windows are then aggregated to derive the ultimate anomaly segmentation results. The values of AUROC and PRO on MVTec-AD and VisA are obtained from the original paper, while the results for other metrics and datasets are based on our code reproduction following the settings specified in the original paper. * AnVoL <cit.> optimizes model parameters to adapt to ZSAS tasks by utilizing a test-time adaptation technique. The values of AUROC and PRO on MVTec-AD and VisA are obtained from the original paper, while other metrics and datasets are derived from the implementation of the official code. * CoCoOp <cit.> is a method that applies CLIP to image classification tasks based on prompt learning. It uses continuously learnable vectors instead of manually designed text prompts, enhancing the model's generalization to novel classes by making the prompt conditioned on each input image. To adapt CoCoOp to ZSAS task, we improve the prompt templates used in the original paper. In specific, the original template [v_1(x)][v_2(x)]⋯[v_r(x)][class] is replaced with [v_1(x)][v_2(x)]⋯[v_r(x)][good][class] and [v_1(x)][v_2(x)]⋯[v_r(x)][damaged][class] for the generation of normal and abnormal text prompts, where v_i(x) represents the learnable word embeddings that incorporate image features x. The anomaly segmentation results are obtained in the same manner as our baseline, with all other parameters remaining unchanged from the original paper. * AnomalyGPT <cit.> integrates a large language model for anomaly segmentation and supports multi-turn dialogues with users. It employs supervised training using synthetic anomaly data to enable the model to generalize to new products. Additionally, it supports fine-tuning the model using finely annotated data to achieve better ZSAS performance. For product category descriptions in MVTec-AD and VisA, we adhere to the original settings. For other datasets, we use the following product description: "This is a photo of a [class] for anomaly detection, which should be without any damage, flaw, defect, scratch, hole, or broken part". We conducted experiments using official code and evaluated the model's ZSAS performance in the same manner as VCP-CLIP. 
* APRIL-GAN <cit.> adopts a text prompting design strategy similar to WinCLIP, and it fine-tunes the model using auxiliary datasets to adapt to the ZSAS task. We conducted experiments using official code and pretrained weights, keeping all parameters and settings consistent with the original paper. * CLIP-AD <cit.> utilizes a text prompt design similar to WinCLIP and adapts to the ZSAS task through feature surgery and fine-tuning techniques. Due to the official code not being open-sourced, it is only compared as a concurrent work with our VCP-CLIP on MVTec-AD and VisA. * ClipSAM <cit.> is a collaboration framework of CLIP and SAM, which are respectively used for rough and fine segmentation of abnormal regions. In this paper, it is compared as a concurrent work with our VCP-CLIP on MVTec-AD and VisA. * AnomalyCLIP <cit.> proposes to learn object-agnostic text prompts for ZSAS. It uses [object] to replace specific product categories [class] in the text prompts, thereby focusing the model on the abnormal regions of images. Due to the official code not being open-sourced, it is compared as a concurrent work with our VCP-CLIP on MVTec-AD and VisA. § ADDITIONAL RESULTS AND ABLATIONS §.§ Comparison with concurrent methods In addition to the state-of-the-art methods that have already been compared, we also pay attention to three other concurrent works for ZSAS, namely CLIP-AD <cit.>, ClipSAM <cit.> and AnomalyCLIP <cit.>. They all utilize auxiliary datasets to fine-tune foundation models such as CLIP and SAM, to adapt to the ZSAS task. Table <ref> presents a quantitative comparison between the other three methods and our VCP-CLIP. Given that these methods utilize the same experimental setup as ours, all experimental results are directly sourced from their respective original papers. Our method demonstrates superior performance compared to the others in terms of AUROC, PRO, and AP on the VisA dataset, showcasing remarkable results. On the MVTec-AD dataset, our VCP-CLIP also achieves comparable ZSAS performance. Despite potentially having lower AUROC and PRO scores compared to ClipSAM, VCP-CLIP exhibits a higher AP, indicating more precise segmentation results. Furthermore, ClipSAM combines two foundation models, CLIP <cit.> and SAM <cit.>, which may reduce model inference efficiency and render it less suitable for real-world industrial applications. §.§ Additional ablations In this subsection, we explore the impact of the position of learnable vectors in deep text prompting (DTP) and the hyperparameters on VCP-CLIP. Ablation on DTP. As mentioned in Section 3.2, due to the utilization of masked self-attention in the text encoder, the model does not attend to the context of future tokens in the text prompts. Consequently, the placement of learnable text embeddings at the start and end of a sentence produces different outcomes. In other words, [s_i, P_i, H_i, e_i, J_i] and [s_i, H_i, P_i, e_i, J_i] are not mathematically equivalent, where [·,·] represents the concatenation operation on the sequence length dimension. We refer to these two scenarios as DTP (Pre) and DTP (Post) and compare the performance of ZSAS as depicted in Fig. <ref>. It is evident that the performance of DTP (Post) is slightly lower than that of DTP (Pre). This is because placing learnable text embeddings at the beginning of a sentence can influence the entire sentence, thereby benefiting the textual space refinement. Ablation on hyperparameters. As shown in Fig. 
<ref>, we explore the impact of different hyperparameters on the performance of VCP-CLIP, including the length of learnable category vectors r, the length of learnable text embeddings n in each text encoder layer, the fusion weight of different anomaly maps α, and the number of attention heads M in the Post-VCP module. 1) Fig. <ref>(a) shows that the model performs best when the length of learnable category vectors r is set to 2. Using too many or too few vectors hinders the model's segmentation performance. This aligns with our expectations, as typically, two tokens are adequate for representing product categories, while an excess of tokens may introduce unnecessary semantic overlap; 2) Fig. <ref>(b) illustrates that the model achieves the highest AUROC and AP when including one learnable text embedding (n=1) in each layer of the text encoder. However, as n increases, the model's performance starts to decline. This phenomenon occurs because excessive refinement of the textual space can diminish the original CLIP's generalization capability and may lead to overfitting on the limited training data; 3) The fusion weight α determines the relative importance of the anomaly maps M_1 from the baseline and M_2 from the additional VCP module. As depicted in Fig. <ref>(c), our model achieves superior fusion results with α set to 0.75. It's important to note that a higher α value indicates a greater contribution from the anomaly map M_2 generated by the VCP module, highlighting the effectiveness of using visual context prompting; 4) Fig. <ref>(d) outlines the impact of the number of attention heads M in the Post-VCP module. To achieve optimal performance, we set M = 8 to leverage multiple attention heads for focusing on detailed image features and updating text embeddings accordingly. §.§ Additional analysis In this subsection, we first compare the efficiency of different state-of-the-art methods. Subsequently, we visualize the output text embeddings to analyze the impact of visual context prompting. Analysis of model efficiency. In addition to anomaly segmentation performance, the efficiency of the model is also a focal point of our attention. Table <ref> reports the average inference time and the GPU cost per single image during the test stage. Despite not requiring auxiliary datasets for training, WinCLIP and AnVoL exhibit lower inference speed and segmentation performance compared to our VCP-CLIP. The inference time and GPU consumption of AnomalyGPT are substantial, far exceeding those of other methods, making it difficult to apply in real-world industrial scenarios. Compared to training-required methods like CoCoOp and APRIL-GAN, although the inference time of our VCP-CLIP slightly increases, it brings more gains in ZSAS performance. Do text embeddings effectively integrate visual contexts? Assuming the model's output normal and abnormal text embeddings are denoted as g_n ∈ℝ^1× C and g_a ∈ℝ^1× C respectively, and an image embedding of a patch as x_p ∈ℝ^1× C. Subsequently, if x_p is classified as abnormal, then we have: g_nx_p^T < g_ax_p^T ⇒ (g_n - g_a)x_p^T < 0 where (·) represents L_2-normalized operation along the embedding dimension. Then g = g_n - g_a can be considered as classification weights (hyperplanes) for patch features. As shown in Fig. <ref>, we visualize the classification weights g using the output text embedding (i.e., O_t^4 in Equation 8) derived from the last Post-VCP module. 
Specifically, we first extract the output text embeddings for each input image after visual context prompting. Then, we apply the t-SNE dimensionality reduction technique to the classification weights g obtained from the text embeddings. Our observation reveals that the classification weights corresponding to images of the same product, across both the MVTec-AD and VisA datasets, are clustered together. This signifies that the output text embeddings fully integrate visual contexts, enabling the model to focus on the shared characteristics (domain information) inherent in the products. Additionally, we note that the classification weights for different images are distinctly separate, even if they belong to the same product type. This suggests that our VCP-CLIP concurrently learns the distinctions between images, allowing the text embeddings to dynamically adjust based on the input images. Therefore, compared to using unified text prompts, our VCP-CLIP showcases enhanced generalization capabilities for novel products by leveraging visual context prompting. Can VCP-CLIP generalize to other text prompt templates during testing? Despite being trained on a fixed prompt template (Prompt 1 in Table 3), VCP-CLIP still demonstrates strong ZSAS capabilities on other text prompt templates during testing. As shown in Table <ref>, we employ four different text prompts during testing, namely Prompt 2∼5, which are distinct from those used during training. To our surprise, when using new prompt templates such as Prompt 3 and Prompt 5, some metrics actually improve compared to using Prompt 1. This can be attributed to the design of visual context prompting, which dynamically updates text embeddings based on the input images. §.§ Results of zero-shot anomaly classification In Table <ref>, we compare the classification performance of VCP-CLIP with other SOTA methods, including WinCLIP <cit.>, AnomalyGPT <cit.>, and APRIL-GAN <cit.>. We present the results of taking the maximum value of the pixel-level score map as the anomaly classification score. The results indicate that our zero-shot anomaly classification performance still surpasses that of other SOTA methods. § DATASETS In this section, we provide a detailed introduction to the ten real industrial datasets used in this paper. More details are as follows: * MVTec-AD <cit.>. It contains 5354 color images with resolutions ranging from 700 to 1024, which are used for industrial anomaly detection. It consists of 15 types of products containing object and texture categories with pixel-level annotations. * VisA <cit.>. It contains 10,821 color images with resolutions ranging from 960 to 1500, used for industrial anomaly detection. It comprises 12 types of products in the object category with pixel-level annotations. * BSD <cit.>. It is a ball screw drive nut database that consists of 1104 color images with a resolution of 2592 × 1944, showing areas with and without pitting(s). The dataset contains 710 images without pitting and 394 images with pitting. In our study, we only use the defective images for evaluation. * GC <cit.>. It is a surface defect dataset collected in a real industrial setting. It contains ten types of steel surface defects and includes 3570 gray-scale images. The original annotations are in the form of bounding boxes. In the experiments, the MobileSAM <cit.> model is used for assisted annotation to obtain segmentation ground truth. * KSDD2 <cit.>. It is a dataset captured in a controlled environment, with images 230 pixels wide and 630 pixels high.
The dataset has 2085 negative and 246 positive samples in the training subset, and 894 negative and 110 positive samples in the test subset. Defects are annotated with fine-grained segmentation masks and vary in shape, size, and color, ranging from small scratches and minor spots to large surface imperfections. * MSD <cit.>. It is a mobile phone screen surface defect dataset that consists of 1200 images. The defects were created manually by the dataset authors and labeled at the pixel level with labelme. The images were collected by an industrial camera and have a resolution of 1920 × 1080. * Road <cit.>. It is a road crack dataset composed of 118 images, which generally reflect urban road surface conditions in Beijing, China. Each image has hand-labeled ground truth contours. The width of the cracks ranges from 1 to 3 mm. These images contain noise such as shadows, oil spots, and water stains. * RSDD <cit.>. This dataset contains two subsets from the China Academy of Railway Sciences, collected using a linear array camera: 1) the Type-I RSDD data set contains 67 images captured from express rails; 2) the Type-II RSDD data set contains 128 images captured from common/heavy haul rails. Note that each image from both data sets contains at least one defect. * BTech <cit.>. It is an anomaly detection dataset. It consists of three subdatasets. Among them, Product 1 has a resolution of 1600 × 1600 pixels, Product 2 has a resolution of 600 × 600 pixels, and Product 3 has a resolution of 800 × 600 pixels. Products 1, 2, and 3 have 400, 1000, and 399 training images, respectively. * DAGM <cit.>. It is a manually generated texture dataset, consisting of a total of 10 classes, with an image size of 512×512 pixels. The original annotations were weakly supervised, with the defect areas given in the form of ellipses. Therefore, in the experiments, MobileSAM <cit.> is used for fine annotation. In our experiments, the image size used for the evaluation on <cit.> was uniformly 512 × 512 pixels; all images were randomly selected and cropped from the original dataset. All eight processed industrial datasets mentioned above will be open-sourced. § DETAILED ZSAS RESULTS In this section, we provide detailed quantitative and qualitative results of our VCP-CLIP on the individual products. §.§ Detailed quantitative results in different products §.§ Detailed qualitative results in different products § LIMITATIONS We have demonstrated the superiority of the proposed VCP-CLIP in the ZSAS task. In this section, we discuss the two main limitations of our method. First, our method can locate the area of the anomaly but may result in some over-detection for minor anomalies, such as the candle in Fig. <ref> and pipe_fryum in Fig. <ref>. This means that the segmentation results are often slightly larger than the ground truth. This may be attributed to the small input resolution (336^2) and large patch size (14^2) used in the pretrained backbone (ViT-L-14-336). Second, our method cannot accurately localize certain abnormal regions that must rely on normal images for identification, such as pcb4 in Fig. <ref>. This is because, in the ZSAS task setting, VCP-CLIP directly performs anomaly segmentation on novel products without introducing any prior information from normal images. In the future, we plan to further explore the utilization of few-shot techniques to tackle this issue, leveraging the groundwork laid by VCP-CLIP.
http://arxiv.org/abs/2407.13481v1
20240718130030
Attention Overflow: Language Model Input Blur during Long-Context Missing Items Recommendation
[ "Damien Sileo" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT Large language models (LLMs) can suggest missing elements from items listed in a prompt, which can be used for list completion or recommendations based on users' history. However, their performance degrades when presented with too many items, as they start to suggest items already included in the input list. This occurs at around 100 items for mid-2024 flagship LLMs. We evaluate this phenomenon on both synthetic problems (e.g., finding missing numbers in a given range of shuffled integers) and realistic movie recommendation scenarios. We refer to this issue as attention overflow, as preventing repetition requires attending to all items simultaneously. Although iterative loops can mitigate this problem, their costs increase with the repetition rate, affecting the language models' ability to derive novelty from lengthy inputs. § INTRODUCTION Large language models (LLMs) boast ever-growing context windows, enabling new potential applications. However, the theoretical context length is not a sufficient indication of a model's real performance with a given input size <cit.>. Multiple benchmarks have been proposed to stress-test the actual capabilities of language models to reason over long contexts. Most of these tasks are either pure retrieval or involve a form of reasoning, requiring the identification of a few relevant portions from a large context. We question the effective context window of language models from an opposite angle: asking them to provide the only relevant elements, namely those that are not in a large input. We formulate this as a missing item prediction task. Missing item prediction has multiple applications, notably in conversational recommendation, where users can provide a list of movies they have already watched and ask for new suggestions. This task involves a form of inductive reasoning, in contrast to the deductive reasoning typically explored in long context stress tests. More importantly, it requires comparing a representation to the whole input, and we notice that this is difficult for current LLMs, which leads to the prediction of items already in the input (repetition). Missing item prediction is relevant when models are asked to generate long lists, as we have observed repetitions in this scenario[For example, asking Claude Sonnet 3.5 for 200 movies released in 2022 leads to numerous repetitions: https://claude.site/artifacts/67f091d2-4ab5-4b88-9fce-b4114ade666e[artifact]], but we focus on the movie recommendation use case, where users provide the movies they have watched, and we also create synthetic examples, notably number ranges with a missing element. We quantify the repetition phenomenon with existing off-the-shelf language models and investigate whether fine-tuning can easily address this problem. The generated datasets are publicly available[ https://huggingface.co/datasets/sileod/missing-item-prediction[data:HF-datasets ]]. § RELATED WORK Repetitions in language modeling We study a form of repetition, a well-identified problem in language models <cit.>, which can sometimes lead to text degeneration, where models repeat the same token indefinitely <cit.>. Repetition penalties were proposed to alleviate this issue <cit.>, but they operate at the token level and cannot scale to large contexts where all tokens are already represented.
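For concreteness, one common token-level formulation of such a penalty is sketched below (an illustrative sketch of the widely used logit-rescaling variant, not the specific methods cited above). Because it acts on individual token ids, applying it to an input in which essentially every candidate token already occurs would penalize all candidates indiscriminately, which is why it does not solve missing item prediction.

```python
def apply_repetition_penalty(logits, seen_token_ids, penalty=1.2):
    """Rescale the logits of tokens that already occur in the context or in the
    generation so far, making them less likely to be sampled again. Positive
    logits are divided by the penalty and negative ones multiplied, so the
    score always decreases."""
    for tok in set(seen_token_ids):
        if logits[tok] > 0:
            logits[tok] /= penalty
        else:
            logits[tok] *= penalty
    return logits
```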
Repetitions also exist in more subtle ways, as <cit.> showed that chain-of-thought reasoning contains redundant content. LLM context length stress tests Our work is also related to context window stress testing and language modeling-based recommendation. Previous work has studied the ability of attention mechanisms to identify what is present in long contexts, but not what is missing. The Long-Range Arena <cit.> provides the first systematic analysis of the long-range processing capabilities of text encoders, focusing mainly on algorithmic reasoning and retrieval tasks. BABILong <cit.> uses bAbi reasoning tasks <cit.> and interleaves relevant text with irrelevant input. FlenQA <cit.> applies a similar process to the RuleTaker <cit.> deductive logical reasoning task. Recommendation with LLMs Our study is also related to LLM usage for collaborative filtering <cit.>, where users enumerate a list of items to communicate their tastes. LLMs can also be used in content-based recommendations, where users explicitly mention what they are looking for <cit.>. Here, we do not address the fine-grained relevance of the recommendations (providing an item that users do not already know). Repetition is also related to the novelty metric in recommender systems evaluation <cit.>. § MISSING ITEM PREDICTION We formalize the task of missing item prediction as follows: given a set X of N elements, presented in random order, guess the element y that is missing from X. This is technically an induction task that can be under-determined, but we can construct relatively easy X,y pairs by choosing easily identifiable itemsets 𝒮 (numbers from 0 to 1024, letters, chemical elements...) and randomly removing one element y from 𝒮 to get X. We can use two evaluation metrics: Accuracy: the rate at which a language model returns the expected missing element. Repetition rate: the rate at which a language model returns an element that is already in X. Repetitions are always mistakes. For easily identifiable sets, ideal behavior is perfect accuracy and no repetition. But even in cases where the structure of 𝒮 is under-determined, language models performing missing item prediction should not repeat elements from X. To construct an example of the missing item prediction task, we select an itemset 𝒮, select a random element y, and present a scrambled version of X=𝒮∖{y} in a prompt explicitly asking the model to guess a missing element. We provide the following itemsets: Movies We select a user from the MovieLens 1M dataset who watched more than 2048 movies. Numbers Numbers in numerical form (1...1024). We exclude set extrema from the choice of y for numerical itemsets. Numbers-english We use the same numbers but converted to English using the num2words library [<https://github.com/savoirfairelinux/num2words>]. An example with the Numbers itemset of size 8 is Question: Find the missing element in 5, 7, 1, 3, 6, 8, 4. Answer: 2. § EXPERIMENTS We use the same prompt template for all models: Guess the missing item from this list: {X}. Directly answer with only one item. Item format should match the list format. Provide no explanation. Answer format: "{item}." To construct this prompt template, we iterated on Llama-3-8B-Instruct with the numbers itemset validation data until we obtained a satisfactory output format. We then normalize the outputs with punctuation removal and lowercasing, and use exact matching to compute accuracy and repetition rate. We use powers of 2 starting from 16 as itemset sizes; this ensures that there are enough items to guess the itemset structure.
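A rough sketch of how such examples can be generated and scored is given below (our own illustration of the procedure just described; the prompt wording follows the template above, while function names and normalization details are assumptions).

```python
import random
import re

def make_example(itemset, rng):
    """Build one missing-item example: remove a random element y and shuffle the rest."""
    y = rng.choice(itemset)  # for numerical itemsets, extrema would be excluded upstream
    x = [e for e in itemset if e != y]
    rng.shuffle(x)
    prompt = ("Guess the missing item from this list: " + ", ".join(map(str, x)) + ". "
              "Directly answer with only one item. Item format should match the list "
              'format. Provide no explanation. Answer format: "{item}."')
    return prompt, str(y), [str(e) for e in x]

def normalize(text):
    """Lowercase and strip punctuation before exact matching."""
    return re.sub(r"[^\w\s]", "", text).strip().lower()

def score(answers, golds, contexts):
    """Exact-match accuracy and repetition rate (answer already present in X)."""
    n = len(answers)
    acc = sum(normalize(a) == normalize(y) for a, y in zip(answers, golds)) / n
    rep = sum(normalize(a) in {normalize(e) for e in xs}
              for a, xs in zip(answers, contexts)) / n
    return acc, rep

# Example: one Numbers instance built from the full range 1...1024 minus one element.
prompt, gold, context = make_example(list(range(1, 1025)), random.Random(0))
```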
We generate 200 train examples and 100/100 validation/test examples per itemset size and itemset type. §.§ Zero-shot evaluation We evaluate off-the-shelf instruction-tuned language models through the OpenRouter API. We evaluate Llama3-Instruct 8B and 70B, Gemini 1.5 Flash and Pro, GPT-4o, and Claude 3.5 Sonnet on July 10th 2024, with the default hyperparameters. Figure <ref> shows the evolution of the accuracy and repetition metrics with different itemset sizes for the Numbers and Movies missing item prediction tasks. Most language models solve the missing number prediction task with relatively high accuracy with fewer than 128 items. Increasing model size seems to improve accuracy, as Gemini Pro and Llama-3-70B outperform their smaller counterparts. However, the repetition rates shoot up and the accuracy decreases in all models after 256 items. We cannot interpret the low accuracy of the movie item prediction tasks as a failure because the models can predict relevant movies that are not y. However, we can interpret the growing repetition rate as a failure: it can frustrate users who expect better recommendations as they provide more examples, and it limits the accuracy of conversational recommender systems that do not filter their output to prevent repetitions. §.§ Fine-tuning We now investigate whether fine-tuning can easily address this issue. We fine-tune Llama-3 Instruct 8B using the Unsloth default configuration [<https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing>] (4-bit quantization, LoRA <cit.> with dimension 16, 1 epoch with a learning rate of 2e-4). We fine-tune on 500 numeric examples with itemset sizes below 256 and evaluate on the test set both in-domain and out-of-domain. Figure <ref> shows that fine-tuning improves missing item prediction on in-domain data, but does not generalize to larger itemsets or to different domains, which might indicate a fundamental limit of current attention architectures that may not be solved with data only. §.§ Contrastive evaluation We also evaluated the ability of Llama-3-8B-Instruct to tell whether an element is present in the list or not, by querying either the missing element or an element randomly sampled from the prompt. {X}. Is "{i}" in the previous list? Provide no explanation, directly answer with only "Yes." or "No." Figure <ref> shows the evolution of accuracy with growing itemset sizes. Llama-3-8B-Instruct maintains 75% accuracy below 1024 items[All examples fit in the 8K context window of Llama 3.]. This shows that once the item is explicitly present in the query, the model is much better at identifying it. These results are lower than the Needle in a Haystack evaluation scores of Llama-3 <cit.>, which is due to the high similarity between items. This suggests that context-length stress testing is harder when many prompt elements are similar to each other, and that the problem-lengthening strategy of BABILong <cit.> may be too easy to get around. § ANALYSIS To solve missing item prediction, a transformer language model needs to construct a latent representation of the missing item when predicting the next token. Finding a close representation is relatively simple in the tasks we propose, as language models consistently output items that belong to the item set. However, they also need to compare the latent representation with the latent representations of the prompted items.
At each layer, the transformer can refine the representation to shift it away from the prompted items, but the models lack the depth to do so for many items. § CONCLUSION We introduce a new missing item prediction dataset and show that repetitions occur in movie recommendation, a real-world task, as well as in list completion. This also has implications for the ability of current language models to check exhaustiveness in texts. Our simple examples show that we must be careful when asking language models to produce new content from contextual information, as language models can repeat context elements without noticing it. This finding provides further evidence for the need for caution when interpreting context length <cit.>. We attribute this phenomenon to an overflow of attention, speculating that the model needs to evaluate candidates and compare them to all input items at once. It would be worthwhile to actually analyze the attention heads during this task, even though multi-head attention is hard to interpret <cit.>. Our dataset is publicly available with itemset sizes up to 8192 for future work.
http://arxiv.org/abs/2407.12636v1
20240717150536
Pulse-based variational quantum optimization and metalearning in superconducting circuits
[ "Yapeng Wang", "Yongcheng Ding", "Francisco Andrés Cárdenas-López", "Xi Chen" ]
quant-ph
[ "quant-ph" ]
http://arxiv.org/abs/2407.12436v1
20240717094422
3He adsorbed on molecular hydrogen surfaces
[ "M. C. Gordillo", "J. Boronat" ]
cond-mat.other
[ "cond-mat.other" ]
Corresponding author: cgorbar@upo.es ^1Departamento de Sistemas Físicos, Químicos y Naturales, Universidad Pablo de Olavide, Carretera de Utrera km 1, E-41013 Sevilla, Spain ^2Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, E-18071 Granada, Spain ^3Departament de Física, Universitat Politècnica de Catalunya, Campus Nord B4-B5, 08034 Barcelona, Spain § ABSTRACT Using a diffusion Monte Carlo (DMC) technique, we calculated the phase diagram of ^3He adsorbed on a first solid layer of a molecular hydrogen isotope (H_2, HD and D_2) on top of graphite. The results are qualitatively similar in all cases: a two-dimensional gas spanning from the infinite dilution limit to a second-layer helium density of 0.048 ± 0.004 Å^-2. That gas is in equilibrium with a 7/12 commensurate structure, more stable than any incommensurate triangular solid of similar density. These findings are in reasonably good agreement with available experimental data. ^3He adsorbed on molecular hydrogen surfaces J. Boronat^3 July 22, 2024 ============================================= § INTRODUCTION At the heart of any Monte Carlo calculation lies the same principle that allows us to compute a simple integral by a hit-and-miss method, modified by the introduction of techniques to reduce the statistical variance. The first relevant application of that proposal was the calculation of the properties of a gas of hard spheres in the seminal paper by Metropolis et al. <cit.>. Thus, the basic idea behind the Monte Carlo method consists in transforming the expression that describes the phenomena we are interested in into an integral, and applying a modification of that simple integration recipe to calculate the desired magnitude <cit.>. For classical averages this is straightforward, since the equations that define them are already integrals defined in a multidimensional space. The application to quantum systems is more involved, but it is now routinely carried out with a set of methods known globally as quantum Monte Carlo (QMC). The simplest QMC method consists in the application of the variational principle of Quantum Mechanics to calculate the energy and other observables for a proposed many-body wavefunction <cit.>. The accuracy of this method, known as variational Monte Carlo (VMC), depends on the quality of that wavefunction. This constraint can be removed by using the diffusion Monte Carlo (DMC) algorithm <cit.>, which tackles directly the imaginary-time Schrödinger equation by using the connection between a random walk and the diffusion equation. Therefore, DMC is able, at least in principle, to produce the real ground state of any many-body system by improving upon that initial guess. That technique has been applied successfully to many bosonic and fermionic ensembles of particles <cit.>. In this work, we are going to use the DMC algorithm to study the properties of a system that includes either fermions (^3He atoms, HD molecules) or a mixture of bosons and fermions (^3He adsorbed on top of a H_2 or a D_2 layer). The calculations presented in this work are prompted by experimental results on ^3He adsorbed on a double HD layer on top of graphite, obtained using both NMR techniques <cit.> and calorimetric measurements <cit.>. Those findings are part of a long series of studies on ^3He on clean and preplated graphite, both from the experimental <cit.> and theoretical <cit.> points of view. All those experimental studies of ^3He on HD preplated graphite agree in finding both a liquid/gas and a low-density commensurate solid.
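To make the hit-and-miss principle invoked at the beginning of this introduction concrete, a minimal sketch is given below (a textbook illustration of naive Monte Carlo integration, not part of the calculations reported in this paper):

```python
import random

def hit_and_miss(f, a, b, f_max, n_samples=100_000, seed=0):
    """Hit-and-miss estimate of the integral of a non-negative function f on [a, b]:
    throw uniform points in the bounding box [a, b] x [0, f_max] and count the
    fraction that falls under the curve."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = rng.uniform(a, b)
        y = rng.uniform(0.0, f_max)
        if y <= f(x):
            hits += 1
    box_area = (b - a) * f_max
    return box_area * hits / n_samples

# Example: the integral of x**2 on [0, 1] is 1/3; the estimate fluctuates around that
# value, with a statistical error decreasing as 1/sqrt(n_samples).
estimate = hit_and_miss(lambda x: x * x, 0.0, 1.0, 1.0)
```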
In this work, we will address the nature of both phases by means of DMC calculations. The rest of the paper is organized as follows. In the next section, we describe the DMC algorithm and its application to obtain the equation of state of ^3He on a single molecular hydrogen layer on top of graphite. In Sec. III, we report the results obtained for the equations of state of the ^3He films adsorbed on different molecular hydrogen isotopes. Finally, Sec. IV comprises a brief summary and a discussion of the main conclusions. § METHOD The most ambitious approach to the quantum many-body problem, from a microscopic point of view, is to solve its corresponding Schrödinger equation. The diffusion Monte Carlo (DMC) method does so stochastically, starting by its imaginary-time counterpart <cit.>: -∂Ψ(R,t)/∂ t = (H-E) Ψ(R,t) , with R standing for the positions of all the atoms/molecules in the system. The Hamiltonian H of the system, composed by ^3He atoms and hydrogen molecules, is given by H = ∑_α∑_i=1^N_α[ -ħ^2/2m_α∇_i^2 + V_ ext^(α) (x_i,y_i,z_i) ] + ∑_α∑_i<j^N_α V_ pair^(α,α) (r_ij) + ∑_α∑_β∑_i^N_α∑_j^N_β V_ pair^(α,β) (r_ij) . As in previous literature, we considered graphite as a rigid structure, i.e., its influence on the behavior of ^3He and hydrogen molecules will be modeled by an external potential, V_ ext^(α)(x_i,y_i,z_i), different for each species α = [^3He,(H_2,HD,D_2)]. To release that constraint by allowing the carbon atoms to move around their crystallographic positions does not change the behavior of the first layer of molecular hydrogen at the densities considered in this work <cit.>. In Eq. <ref>, the coordinates x_i, y_i, and z_i correspond to each of the N_α or N_β (Helium or Hydrogen) adsorbate particles with mass m_α. All the individual adsorbate-carbon interactions were explicitly considered, in a full rendition of graphite as a corrugated structure made up of parallel layers separated 3.35 Å in the z direction. In all cases, those V_ ext^(α)(x_i,y_i,z_i) potentials were taken to be of the Lennard-Jones type and no distinction was made between different hydrogen isotopes <cit.>. In particular, the He-C interaction was taken from Ref. carlos, while the (H_2,HD,D_2)-C potential was the one derived in Ref. coleh2. In the Hamiltonian (<ref>), we have as many expressions for V_pair^(α,β) as possible adsorbate pairs, i.e., He-He, He-(H_2,HD,D_2) and (H_2,HD,D_2)-(H_2,HD,D_2). For the ^3He-^3He interaction, we used the standard Aziz potential <cit.>, while for any hydrogen-hydrogen interaction we resort to the Silvera and Goldman expression <cit.>. The Helium-H_2 potential was taken from Ref. <cit.>, previously used in the study of small clusters including ^4He-H_2 mixtures <cit.>. What all those potentials have in common is that they are isotropic interactions, depending only on the distance r_ij between particles i and j. In the case of molecular hydrogen, an elipsoid, the Silvera and Goldman potential was built to reproduce the isotropic properties of solid phases and does so very successfully. In all cases, the hydrogen molecules were not kept fixed but allowed to move around their crystallographic positions. The solution of Eq. (<ref>) can be formally written as Ψ(R', t + Δ t)= ∫ G(R',R,Δ t) Ψ (R, t) d R , with t the imaginary time. The Green's function is given by G(R',R,Δ t) = ⟨R' | exp[-(H -E)Δ t] | R⟩ , with E an energy close to the ground-state value. 
By remembering that Ψ (R,t) can be expanded in terms of a complete set of the Hamiltonian's eigenfunctions, Φ_i (R), with eigenvalues E_i, as Ψ (R,t) = ∑_i c_i e^-(E_i-E) t Φ_i (R), we can see that, successive applications of Eq. <ref> on any initial approximation to the exact wavefunction, will project to the ground state in the t →∞ limit, i.e, this method produces a zero-temperature estimation. However, given the very low temperatures at which the relevant experiments are performed, usually of the order of the mK, that solution is expected to be a very good approximation to what is observed. Any iterative application of Eq. <ref> constitutes a Monte Carlo step in the DMC algorithm. Unfortunately, this procedure, even though it is formally correct, produces very noisy estimations <cit.>. To reduce the statistical variance of the results to a manageable level, one introduces importance sampling. This is done by means of a time-independent trial wave function, ψ(R), as close as possible to the exact solution of Eq. <ref>. We define then an auxiliary function, f(R,t), as f(R,t) = ψ(R) Ψ (R, t), that introduced in Eq. <ref> gives -∂ f(R,t)/∂ t = A(R,t) f(R,t) , with A(R,t) = -ħ^2/2m_i∇_i^2 + ħ^2/2 m_i F(R) + [E_L(R)-E] . At difference with Eq. (<ref>), Eq. <ref> includes a drift term, with F(R) = 2 ψ(R)^-1∇ψ(R), that guides the stochastic process to the regions where the trial function is larger. E_L(R) = ψ(R)^-1 H ψ(R) is the so-called local energy, whose mean value is the exact energy of the system. We considered ^3He adsorbed on a single molecular hydrogen layer, contrarily to the experimental setups of Refs. casey2,ikegami1,ikegami2,masutomi,casey1,casey,fukuyama2,fukuyama3, in which Helium is adsorbed on top of two or more <cit.> HD sheets. To include two layers would have doubled the number of hydrogen molecules in our simulations (see below), and would have implied a considerable increase in the computational complexity and in the simulation time. Moreover, the vertical distance between the ^3He sheet and a second hydrogen layer closest to the graphite would have been large enough (around 6 Å, see for instance the distribution of H_2 layers in Ref. prb2022) to make the influence of that layer on the Helium equation of state negligible, apart from a nearly constant correction in the value of the binding energy. That correction is expected to be small given the shallowness of the He-(H_2,HD,D_2) potential <cit.>. In any case, ours is a new quasi-2D ^3He system whose behaviour could be directly compared with an experimental setup with a single hydrogen layer on top of graphite. Taking all that in mind, and following previous work on ^3He films <cit.>, we considered a two-layer trial wave function of the form ψ( r_1, r_2, …, r_N) = ψ_1( r_1, r_2, …, r_N_1) × ψ_2( r_N_1+1, r_N, …, r_N) , with N_1 the number of hydrogen molecules in the single layer adsorbed on the graphite surface and N the total number of particles (H_2/HD/D_2 molecules and ^3He atoms). The number of ^3He atoms in the second layer is thus N_2 = N-N_1. The trial wave function for the upper ^3He layer is <cit.> ψ_2( r_N_1+1, r_N_1+2, …, r_N) = D^↑ D^↓∏_i=N_1+1^N u_3( r_i) × ∏_i<j^N_2exp[-1/2(b_3/r_ij)^5 ], where D^↑ and D^↓ are Slater determinants including two-dimensional plane waves depending on the second layer particle coordinates (with spins up and down) and whose periodicity is determined by the size of the simulation cell. In all cases, we considered the same number of spin-up and spin-down ^3He atoms. 
The coordinates in the Slater determinants were corrected by backflow terms in the standard way <cit.>, x̃_i = x_i + λ∑_j≠ iexp [-(r_ij - r_b)^2/ω^2] (x_i - x_j) ỹ_i = y_i + λ∑_j≠ iexp [-(r_ij - r_b)^2/ω^2] (y_i - y_j). The optimal values for the parameters in the backflow term were those of the bulk three-dimensional system <cit.>, i.e., λ = 0.35, ω = 1.38 Å, and r_b = 1.89 Å. The one-body function u_3( r) is the numerical solution of the Schrödinger equation that describes a single ^3He atom on top of a hydrogen first layer of density 0.095 Å^-2. This is the largest experimental HD density before a promotion to a second HD layer on top of graphite happens <cit.>. This density is the same as for H_2 promotion to a second layer <cit.>, and comparable to the density D_2 needs to jump to that second layer <cit.>, 0.100 Å^-2. That density is also of the same order of magnitude as the one corresponding to the uppermost HD layer in a ^3He/HD/HD system, studied experimentally in Refs. casey2,ikegami1,ikegami2,masutomi,casey1,casey,fukuyama2,fukuyama3 (∼ 0.092 Å^-2), whose results are directly comparable to those of the present work. With all those considerations in mind, and to avoid size effects as much as possible, we considered a 14 × 8 first layer cell of molecules separated 3.48 Å from each other, i.e., a 48.72 × 48.22 Å^2 simulation cell comprising 224 hydrogen molecules. The remaining parameter b_3 is 2.96 Å, as in previous literature <cit.>, and defines the Jastrow part of the trial wavefunction, designed to avoid the unphysical situation in which two Helium atoms are located one on top of the other. The part of the trial wave function corresponding to the layer in contact with the graphite surface, which contains the different hydrogen isotopes, is taken as ψ_1( r_1, …, r_N_1) = ∏_i^N_1 u( r_i) ∏_i<j^N_1exp[-1/2(b/r_ij)^5 ] × ∏_i^N_1exp{ -a_1 [(x_i-x_ site)^2 + (y_i-y_ site)^2] } . As before, the function u( r) is the numerical solution to the Schrödinger equation that defines the interaction between a single hydrogen molecule and the graphite surface, and it depends on the mass of the different species (H_2, HD or D_2) on top of the carbon layer. The variational Jastrow parameter b was fixed to 3.195 Å for all isotopes, since in previous literature dealing with similar systems <cit.> it was found to be independent of the mass. The last term in Eq. <ref> pins the atoms around their crystallographic positions (x_ site,y_ site), in this case the ones defining a triangular lattice of density 0.095 Å^-2. The a_1 parameters entering that function depend on the mass of the hydrogen adsorbate, and to obtain them we performed several VMC calculations using Eq. <ref> as a wavefunction for different values of a_1. The parameters that produced the lowest energies for the different molecular isotopes were a_1=1.19 Å^-2 (H_2), 1.61 Å^-2 (HD) and 2.02 Å^-2 (D_2). Eq. <ref> would adequately describe a gas or a liquid ^3He layer. On the other hand, to model efficiently a second-layer ^3He solid, one fixes those atoms to the crystallographic positions corresponding to the structure we are interested in. To do so, we will have to multiply Eq. <ref> by ∏_i exp{ -a_2 [ (x_i-x_ site)^2 + (y_i-y_ site)^2] } , in which we used the same parameter for all the solid phases and densities (a_2 = 0.24 Å^-2) <cit.>.
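The structure of these trial-wave-function factors can be illustrated with a schematic sketch (a toy two-dimensional illustration using the parameter values quoted above; the one-body factors u and u_3, which are numerical solutions of one-body Schrödinger equations, and the Slater determinants themselves are omitted):

```python
import numpy as np

# Parameter values quoted in the text (second helium layer unless noted otherwise)
B3 = 2.96                            # He-He Jastrow parameter b_3 (angstrom)
LAM, OMEGA, RB = 0.35, 1.38, 1.89    # backflow parameters lambda, omega, r_b
A2 = 0.24                            # pinning strength a_2 for second-layer solids (angstrom^-2)

def log_jastrow(pos, b=B3):
    """Sum of -(1/2)(b/r_ij)^5 over all pairs; pos is an (N, 2) array of x, y positions."""
    total, n = 0.0, len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            total -= 0.5 * (b / r) ** 5
    return total

def log_pinning(pos, sites, a=A2):
    """Gaussian pinning of each particle to its lattice site (x_site, y_site)."""
    return float(-a * np.sum((pos - sites) ** 2))

def backflow(pos):
    """Backflow-corrected coordinates used inside the Slater determinants."""
    new = pos.astype(float).copy()
    n = len(pos)
    for i in range(n):
        for j in range(n):
            if j != i:
                r = np.linalg.norm(pos[i] - pos[j])
                new[i] += LAM * np.exp(-((r - RB) / OMEGA) ** 2) * (pos[i] - pos[j])
    return new
```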
We have considered four different phases, three commensurate (the 4/7, widely considered in the standard literature <cit.>, the 7/12 <cit.>, and the newly proposed 1/2 structure <cit.>) and an incommensurate triangular one at different densities. The fact that triangular ^3He solids are incommensurate structures with respect to those of the first hydrogen layer implies that the dimensions of the simulation cells corresponding to those second-layer solids do not have to be (and in fact, they are not) the same as the ones for the hydrogen layer. To avoid mismatch problems between those two sheets, we followed the procedure reported in Refs. <cit.>. First, we used as the upper simulation cell the larger piece of a triangular solid of a given density that fits in the 48.72 × 48.22 Å^2 cell defined by the hydrogen substrate. For instance, if we consider that the upper density for the ^3He triangular solid before the promotion of ^3He to a second Helium layer is the one given in Ref. fukuyama2 (0.058 Å^-2) the dimensions of that simulation cell are 44.6 × 46.34 Å^2, corresponding to a 10 × 6 supercell. To take into account all the interactions between any Helium atom and the Hydrogen substrate, we replicate the first layer simulation box to create a nine-cell structure using the vectors that define that Hydrogen sheet. Then, we calculate the corresponding potential terms within a given cutoff distance between the particles in the first and second layer, without using the minimum image convention. That cutoff must be smaller than half the shortest side of the upper simulation cell, in this example 22.3 Å. On the other hand, the Helium-Helium interactions are calculated in a similar way by using the nine vectors (0,0),(0,±46.34),(±44.6,0),(±44.6,±46.34) Å to replicate the initial set of second-layer coordinates using the same 22.3 Å cutoff. The same recipe was employed for the Hydrogen molecules with respect to the second-layer Helium atoms and the molecules closer to the graphite surface. This procedure makes possible to consider any adsorbate density, and not only those which fit exactly the periodicity of the first layer. In the DMC calculations, f(R,t) is represented not by an analytical function but by a set of walkers <cit.>. Each walker is defined by a set of coordinates, R, of all the atoms/molecules of the system. Those positions are evolved in imaginary time by the prescription given in Eqs. <ref> and <ref> until the local energy of the set of particles, E_L( R), varies stochastically around a stable mean <cit.>. That would correspond to the limit t →∞, limit in which we can calculate other thermodynamic properties. The value of those observables is the average over the set of walkers. We have checked that considering more than 300 walkers leaves the results unchanged. To avoid any influence of the initial configurations on the simulations results, we typically dispose of the first 2 10^4 Monte Carlo steps (a change in all the particles positions in all the 300 walkers) in a typical 1.2 10^5 steps long simulation run. To further avoid spurious influences of a particular DMC history, we averaged the energies of three independent Monte Carlo runs. Finally, to fully characterize the DMC algorithm, we have to bear in mind that the ^3He atoms are fermions. This implies that when we interchange the positions of any two of those particles with the same spin, the total (and its approximation, the trial) function should change sign. 
We made sure of that by introducing the Slater determinants D^↑ and D^↓ in Eq. <ref>. Those determinants impose the nodal structure and the positive and negative regions of the wavefunction. In its simplest form (the one described here and aptly called Fixed-Node diffusion Monte Carlo, FN-DMC), the algorithm does not change the position of those nodes, making the energy derived from it an upper bound to the exact value <cit.>. § RESULTS The primary output of any DMC calculation is the energy of the ground state of the system under consideration. This implies that we are operating at T=0 and that the energy is equal to the free energy. From that magnitude, we can obtain the phase diagram of ^3He adsorbed on top of the different molecular hydrogen substrates. Those results are displayed in Figs. <ref>-<ref>. Fig. <ref> displays what happens on the HD surface, the one for which we have experimental information <cit.>. The solid squares correspond to the gas phase described by Eq. <ref> alone, the dotted line being the result of a least-squares third-order polynomial fit to that set of data. What we observe is that, for that phase, the energy increases monotonically as a function of the ^3He density. Moreover, there is no flat region of the curve that we could associate with a liquid-gas transition, as in the first layer of ^3He on graphite <cit.>, i.e., at low densities the stable phase is a gas and not a liquid, at least within the accuracy (± 0.3 K) of our calculation. This is similar to what happens to the second layer of ^3He on ^3He on graphite <cit.>, but it is at odds with its behavior on ^4He, where a very dilute liquid phase was predicted <cit.>. That diluted phase was labeled as liquid because its energy per particle was lower than that corresponding to the infinite dilution limit, i.e., the curve of the energy per Helium atom versus density had a local minimum, something not seen here. The solid structures, described by the product of Eqs. <ref> and <ref>, differ from each other by the set of crystallographic positions that define them. The energy per Helium atom for a triangular solid is given by the open squares in Fig. <ref>. This phase is clearly unstable with respect to both the gas and any of the other registered phases shown in that figure. Those are represented by isolated points: 4/7 (open circle), 7/12 (solid circle) and 1/2 (open triangle). This last structure was proposed to be stable in Ref. fukuyama3 and can be built by locating ^3He atoms on some of the potential minima produced by three neighboring Hydrogen molecules underneath. In that phase not all such minima are occupied, but only the ones needed to produce a honeycomb lattice on the second layer. Unfortunately, our results do not support the stability of that structure, since its energy per atom is larger than that corresponding to a gas structure of the same density. The 4/7 and the 7/12 structures could be stable, though. To check that, in Fig. <ref>, we display the double-tangent Maxwell construction (dashed line) between the 7/12 solid and a gas of density 0.048 ± 0.004 Å^-2. The slope of that line, which joins the inverse-density points with the same derivative of the free energy, corresponds to minus the equilibrium pressure <cit.>. So, from the two possible Maxwell constructions (4/7-gas, not shown, and 7/12-gas), we have to consider only the second, since it corresponds to the lowest pressure value.
This line goes from 20.8 Å^2 (the lowest surface-per-particle value for which the gas-like structure is stable, corresponding to a 0.048 Å^-2 helium density) to 18.04 Å^2, the inverse of the density of the 7/12 registered solid. Taking everything into account, we can say that in the 0-0.048 Å^-2 range, ^3He on HD is a gas that, upon further Helium loading, changes into a 7/12 registered solid of density 0.055 Å^-2. This is in overall agreement with the experimental data in the literature <cit.> and similar to what happens on top of ^3He <cit.> and ^4He <cit.>. The DMC algorithm is able to discriminate between ^3He adsorbed on similar substrates, as can be seen in the comparison between Fig. <ref> and Figs. <ref> and <ref>. We can see, for instance, that the Helium binding energies in the respective dilution limits are different from each other: -30.3 ± 0.2 K for HD, -30.4 ± 0.3 K for D_2 and -28.0 ± 0.2 K for H_2, something that depends exclusively on the mass of the molecules of the first layer, all the interaction potentials being equal. This is in agreement with what happens to ^3He adsorbed on ^4He <cit.> and ^3He <cit.>, two substrates with different masses and the same interaction potentials. In the first case, the ^3He binding energy in the infinite dilution limit for a 0.112 Å^-2 first layer density was -24.45 ± 0.04 K, to be compared to -22.7 ± 0.1 K for a layer of density 0.109 Å^-2 for the lighter isotope. The first value varies very little upon compression of the first layer, increasing to a value of -24.74 ± 0.07 K for an underlying ^4He density of 0.120 Å^-2. This makes us confident that the ∼ 2 K difference between the binding energy of ^3He on both helium substrates is due to the mass difference, with a very weak dependence on density. All of the above implies that, all other things being equal, the larger zero-point motion of a lighter first-layer isotope produces a smoother effective potential surface in which the local minima a single atom can sit upon are less deep than for more localized isotopes. The particular details of the dependence of the energy per ^3He atom on the second-layer density for the gas phases are also substrate dependent, but the slopes of those curves as a function of the ^3He density are similar (not equal) to each other, as can be seen in Fig. <ref>, in which we display all the stable ^3He phases for the different hydrogen substrates. In addition, by following the same procedure involving the respective double-tangent Maxwell constructions, we have found that the stability range for the gas phases is always 0-0.048 Å^-2, irrespective of the Hydrogen isotope (see Figs. <ref> and <ref>). Those gases are also in equilibrium with the same 7/12 commensurate phase, all other solid phases being unstable. The details of the adsorption of the unstable triangular solids are also substrate dependent, but are irrelevant for our conclusions since those phases are not experimentally obtained in the range of densities considered here. In any case, the fact that the D_2 substrate produces a phase whose energy per particle is closest to the gas one can be partially ascribed to the fact that a non-moving fixed substrate (or one with smaller zero-point displacements) can artificially boost the stability of a solid phase adsorbed on top, as can be seen in the case of a second layer of ^4He on graphite <cit.>. § CONCLUSIONS By using the DMC method, we were able to calculate the phase diagram of ^3He adsorbed on top of a layer of different Hydrogen isotopes.
In all cases, those diagrams show stable gases in the 0-0.048 Å^-2 range in equilibrium with 7/12 registered solids. The only difference would come from the Helium-Hydrogen binding energies, something that can in principle be measured, although it is experimentally difficult. In any case, our results compare reasonably well with experimental data for ^3He/HD/HD, which point to the existence of a low-density gas phase that, upon further Helium loading, changes into a commensurate solid. Unfortunately, the nature of that registered phase is different in this work and in the experiment <cit.>. That difference can be ascribed, at least partially, to the small differences in the densities of the underlying first-layer solid, and it is comparable to what happens in a second ^3He layer on helium substrates <cit.>. On the other hand, and importantly, we are able to reproduce the fact that the triangular solid is unstable with respect to any other ^3He phase. We acknowledge financial support from Ministerio de Ciencia e Innovación MCIN/AEI/10.13039/501100011033 (Spain) under Grants No. PID2020-113565GB-C22 and No. PID2020-113565GB-C21, from Junta de Andalucía group PAIDI-205, and AGAUR-Generalitat de Catalunya Grant No. 2021-SGR-01411. We also acknowledge the use of the C3UPO computer facilities at the Universidad Pablo de Olavide. We also thank H. Fukuyama for generously sharing with us unpublished work on the ^3He/HD/HD system.
http://arxiv.org/abs/2407.13018v1
20240717211405
Proof-of-Collaborative-Learning: A Multi-winner Federated Learning Consensus Algorithm
[ "Amirreza Sokhankhosh", "Sara Rouhani" ]
cs.DC
[ "cs.DC", "cs.LG" ]
Proof-of-Collaborative-Learning: A Multi-winner Federated Learning Consensus Algorithm Amirreza Sokhankhosh University of Manitoba, Winnipeg, Canada sokhanka@myumanitoba.ca Sara Rouhani University of Manitoba, Winnipeg, Canada sara.rouhani@umanitoba.ca Received -; accepted - ============================================================================================================================================================================ § ABSTRACT Regardless of their variations, blockchains require a consensus mechanism to validate transactions, supervise added blocks, maintain network security, synchronize the network state, and distribute incentives. Proof-of-Work (PoW), one of the most influential implementations of consensus mechanisms, consumes an extraordinary amount of energy for a task that lacks direct productive output. In this paper, we propose Proof-of-Collaborative-Learning (PoCL), a multi-winner federated learning validated consensus mechanism that redirects the computation power of blockchains to train federated learning models. In addition, we present a novel evaluation mechanism to ensure the efficiency of the locally trained models of miners. We evaluated the security of our evaluation mechanism by introducing and conducting probable attacks. Moreover, we present a novel reward distribution mechanism to incentivize winning miners fairly, and demonstrate that our reward system is fair both within and across all rounds. Blockchain, Consensus, Federated Learning, Incentive Mechanism, Security, Fairness. § INTRODUCTION The decentralized nature of blockchain technology, the incentive mechanisms supporting blockchain networks, and smart contracts—which enable programmable and automated transactions—have all contributed to blockchain's explosive expansion. The Proof-of-Work (PoW) consensus mechanism, which rewards miners for their computing efforts to maintain the network, requires substantial computational power to preserve data integrity and security. The most well-known use of PoW, Bitcoin mining, has an energy demand that is currently equal to the yearly energy usage of nations like Poland <cit.>. These serious environmental issues emphasize the need for more effective consensus mechanisms to retain blockchain's advantages while mitigating its environmental impact. The PoW mechanism tasks miners by finding a specific nonce value via trial and error and rewarding the first successful miner. Because of the inefficiency of this brute-force approach, proposals have emerged to replace it with a more meaningful puzzle <cit.>. Other studies <cit.> have also suggested alternatives with less computational requirements, using various consensus models that focus on efficiency and lower energy consumption. Bravo-Marquez et al. <cit.> proposed Proof of Learning (PoL) where miners are requested to train machine learning models for a given task. In this network, the validators evaluate the models and select a winning miner in each round. Building on a similar idea, Qu et al <cit.> suggested Proof of Federated Learning (PoFL). Federated learning (FL), first proposed by McMahan et al. <cit.>, is a collaborative machine learning approach that trains models across distributed networks. FL improves privacy by retaining data on individual client devices, sharing only model updates with the central server. However, this approach still raise privacy concerns for both the server and clients within an FL network <cit.>. 
An incentive mechanism can motivate clients to truthfully contribute to the global model, thereby strengthening the server's security. In addition, a decentralized governance scheme can eradicate the security concerns of the clients. Hence, to address these problems, Kim et al. <cit.> proposed a blockchain-enabled FL system where model updates are verified and communicated using blockchain and smart contracts. Since then, numerous studies have expanded on this concept, applying it to various technological domains <cit.>. In Proof-of-Federated-Learning (PoFL) <cit.>, requesters send Deep Learning (DL) tasks along with a list of possible data providers. These tasks are distributed among distinct pools of miners, with each pool selecting a DL model for the task. In each pool, a miner is selected as the leader and acts as the central server in the traditional FL paradigm, while others operate as clients. However, there are two major drawbacks to this approach: i) FL is only achieved in pools with a limited number of miners, thereby undermining the efficiency of the trained models, and ii) the framework lacks fairness because the winning global model is not shared with other pools. This leads to an unfair advantage for pools that win in the initial rounds. This disparity significantly affects miners who join the competition in the middle or final rounds. Accordingly, this paper presents Proof-of-Collaborative-Learning (PoCL), a novel decentralized multi-winner FL consensus mechanism that improves model evaluation by using a distributed network of miners. In our framework, miners distribute unlabeled test records to evaluate the trained local models of other miners, who predict these records and report their results. These predictions are evaluated based on accuracy (loss value) and timeliness (prediction time) parameters. Through the implemented smart contracts, the top K miners with the highest votes are selected as winners of each round. Consequently, these winners contribute their models to form an updated global model, and the implemented smart contracts fairly reward them based on the significance of their contributions. This system addresses critical challenges in conventional FL-based consensus mechanisms, such as fairness and incentive alignment, while enhancing overall efficiency. The contributions of this paper are presented as follows: * We propose a novel multi-winner consensus mechanism based on FL. In this framework, training is achieved globally through the contributions of all miners, improving the fairness of the mining competition, as opposed to PoFL. * We present a novel model evaluation mechanism based on miners' test data and the utilization of their local models. We also explore the robustness of this method by proposing and executing potential attacks, demonstrating their failure in the results section. * Lastly, we propose a novel rewarding mechanism that considers the significance of the contribution of each winning miner in reward distribution. Through comprehensive experiments, we show that our reward system is fair both within and across all rounds. § RELATED WORKS Given the multimodal nature of our proposed framework, we categorize related works into three distinct areas for thorough comparison: i) related consensus mechanisms, ii) studies on FL, and iii) the application of blockchain technology within FL contexts. §.§ Consensus Mechanisms Initially, Nakamoto <cit.> introduced Proof-of-Work (PoW) for Bitcoin.
To maintain the ledger's integrity, miners must find a nonce that meets specific hash conditions when adding to the block header. The search for an appropriate nonce value is notably computation-intensive, driving research into more efficient alternatives, leading to solutions for resource-efficient and energy-recycling consensus mechanisms <cit.>. Several studies have introduced consensus mechanisms that are more efficient and less energy-intensive than Proof-of-Work (PoW) <cit.>. For instance, Proof-of-Stake (PoS), suggested by Nguyen et al. <cit.>, grants validators with high stakes the privilege to add new blocks. Proof-of-Work-Time (PoWT) <cit.> seeks to increase mining efficiency and reduce computational waste by incorporating a block time parameter <cit.>. Proof-of-Burn (PoB) <cit.> encourages miners to “burn" coins by sending them to an address from which they cannot be recovered in exchange for increased virtual mining power. Expanding on these innovations, Proof-of-History (PoH) <cit.> provides a chronological verification of events, enhancing blockchain efficiency by confirming transaction sequences. Proof-of-Activity (PoA) <cit.> combines the principles of PoW and PoS, starting with a mining process and subsequently transitioning to a stake-based validator selection for block finalization, thereby bolstering both network security and energy efficiency. Further studies have considered replacing the PoW puzzle with productive tasks to conserve energy <cit.>. Proof-of-eXercise (PoX) <cit.> and Proof-of-Useful-Work <cit.> repurpose mining efforts for real-world scientific computations and polynomial problem-solving, respectively. Primecoin <cit.> searches for prime number sequences. Our focus is on consensus mechanisms that integrate learning algorithms into their core. Bravo-Marquez et al <cit.> developed Proof-of-Learning (PoL), wherein “trainers" submit machine learning models for evaluation by “validators," and the most effective model receives rewards. PoFL <cit.> engages miners in training models within mining pools to compete for rewards, leading to an unfair advantage for the winning pool in early rounds as the victorious model is not shared among pools. Moreover, the network does not train a unified global FL model since each pool develops its own model architecture. Similarly, Proof-of-Training-Quality <cit.> faces similar issues by implementing FL in local committees. In conclusion, we identify a gap in existing consensus mechanisms: a lack of support for a fair and secure, globally validated FL model. Our paper addresses this by proposing a novel mechanism to fill this gap. §.§ Federated Learning McMahan et al. <cit.> originally proposed FL to train machine learning models across distributed networks. A standard FL network comprises a central server and several client devices. The process begins with the server distributing the global model to each client. Clients then train this model locally using their private datasets. Once training is complete, clients send their model updates back to the central server. The server aggregates these updates to enhance the global model. This cycle of distribution, local training, and aggregation constitutes one round of FL. Multiple such rounds are conducted, allowing the global model to progressively improve in accuracy and performance as it learns from a broader range of data across the network. 
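A minimal sketch of the server-side aggregation in such a round, in the style of FedAvg, is given below (an illustration under the standard sample-count weighting; the per-layer list representation and names are our own):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation step: average each layer's parameters across clients,
    weighting every client by the number of local training samples it used.

    client_weights: list over clients, each a list of per-layer numpy arrays.
    client_sizes:   list of local dataset sizes, one per client.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    new_global = []
    for l in range(n_layers):
        layer = sum((size / total) * w[l] for w, size in zip(client_weights, client_sizes))
        new_global.append(layer)
    return new_global

# One round: the server broadcasts the current global model, each client trains
# locally and returns its updated weights plus its dataset size, and the server
# calls fedavg() to form the next global model.
```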
While FL was first introduced within the realm of machine learning, numerous studies, including this paper, have explored its application in training deep learning models in a distributed manner <cit.>. §.§ Blockchain-enabled federated learning Although FL provides significant privacy benefits, privacy still remains a primary concern when implementing this algorithm. In an FL network, no raw data is communicated; however, sharing model parameters can still pose privacy risks for clients <cit.>. For example, the central authority may infer details about local training datasets by conducting Membership Inference Attacks (MIA) on the received model parameters <cit.>. Efforts to address these privacy concerns, such as the integration of differential privacy into federated or deep learning models, often result in reduced model utility <cit.>. Additionally, the standard FL algorithm lacks an incentive mechanism to motivate clients to contribute their computational resources honestly in training the global model <cit.>. To address these challenges, Blockchain-enabled FL systems have emerged <cit.>. The decentralized management of blockchain systems, along with their ability to reward clients, substantially benefits FL. Originally, Kim et al. <cit.> proposed BlockFL, the first blockchain-enabled FL method that utilizes a decentralized ledger to exchange local model updates. Li et al. <cit.> propose a blockchain-based FL framework with committee consensus. In this framework, a committee of honest nodes is randomly selected to validate the model updates proposed by other trainers. Blockchain-empowered FL is utilized in various applications, including the industrial internet <cit.>, smart healthcare <cit.>, and wireless network infrastructure <cit.>, <cit.>. § DESIGN In this paper, we extend the principles of FL to propose a consensus mechanism integrated with a collaborative deep learning algorithm. The traditional FL network comprises a central server and multiple clients. These clients train a shared global machine learning model using their private local datasets and subsequently upload their model updates to the central server. The server aggregates these updates to form an enhanced global model, completing what is known as a round. In this section, we introduce a series of steps that must be taken, in each round, to implement PoCL. To effectively implement our FL-based consensus mechanism, we define the following entities within our network: (i) Administrator: An entity responsible for altering user-defined values including the number of rounds, number of winning miners, and deadlines for each step. The administrator is also responsible for notifying the miners about the details of each step. To ensure that this role does not compromise the network's decentralization, the Administrator's functions can be implemented through decentralized governance mechanisms. For instance, changes proposed by the Administrator could require approval through a distributed consensus process among selected stakeholders or be automated through smart contracts that execute based on predefined rules agreed upon by the network participants. (ii) Requesters: In this network, users who commit deep learning tasks are called requesters. Each request contains a deep learning model, which will be trained globally, and a list of publicly available datasets for training. 
Since sharing private data with multiple miners might bring about privacy concerns for the requesters, we limit our consideration for the training data of each model to only publicly available datasets. Requests are saved in a queue to select the global model trained at each round. (iii) Miners: In charge of adding new blocks to the blockchain, miners train the global deep learning models, predict test records of other miners, and vote on the predictions made by others. Miners compete with each other to be among the winning miners by training the most superior model in a short time. Winner miners are rewarded according to the significance of their contribution to the global models. (iv) Aggregator: An off-chain program that supervises the aggregation of the winner models. The aggregator can shift between different variations of FL according to the requester’s preference. Furthermore, the program computes the contribution of each winning miner and reports it to the blockchain to reward them accordingly. (v) Users: Similar to any user in a blockchain network, they submit transactions to be mined and added to the blockchain. All submitted transactions are stored in a transaction pool, from which miners select transactions to mine. Furthermore, we assume that the peers within the blockchain network possess the capability to execute and store smart contracts. Given these assumptions, we propose a series of actions to be undertaken in each round to achieve a consensus validated by global FL. Figure <ref> and Algorithm <ref> illustrate the key steps of our proposed framework. §.§ Model Proposal Each round is started by selecting a global deep-learning model from the request queue and sending it to the miners, who in turn ask for validated transactions to mine and start training. After receiving their designated transactions and training the global model using their local data, the miners form a model proposal block including the hash of the trained model and a set of unlabeled test records. A curated set of test cases is distributed to clients, ensuring that these cases are representative of the overall data distribution while not compromising data privacy. This step is crucial for evaluating the local models under similar conditions, providing a fair basis for comparison. At the initiation of each round, miners are notified of a period called the model proposal deadline. Miners should take the actions mentioned above and send their model proposal block before the deadline passes; otherwise, they will not be included in the following steps. The administrator determines the model proposal deadline which can vary from one system to another based on distinct miner needs and global models. In some cases, a global model may require more training time to reach an adequate training level. Hence, we consider this as a custom value for the employers of our system. §.§ Prediction Proposal To select the best models, an evaluation step must be performed. In our FL framework, We address the quality of the locally trained model by implementing a distributed evaluation protocol. This protocol allows us to assess the performance of local models trained on miners' devices, ensuring that contributions to the global model are made by the most effective miners. After passing the model proposal deadline, the submitted test records are collected and sent to miners who participated in the model proposal step. The miners predict the given test records by feeding the inputs to the model. 
Afterward, they forward the predictions to a smart contract that manages all miner predictions. To prevent miners from deducing the correct labels of the test records through any means other than their trained models, we restrict the prediction proposal phase by setting a deadline. This deadline is carefully calibrated to be just sufficient for the execution of a forward pass of the global model. Proposals submitted after this deadline are not accepted, to ensure the integrity of the evaluation process. Setting the prediction proposal deadline is another responsibility of the administrator, and its duration may vary from one system to another. Nevertheless, the general rule is to conduct an experiment and calculate the average time it takes miners to complete a forward pass of the global model and send the results. This empirically determined duration can then serve as the basis for establishing the prediction proposal deadline. §.§ Vote Proposal The vote proposal phase is crucial in identifying and distinguishing between honest miners and potential adversaries. During this phase, miners evaluate the predictions submitted by their peers in the preceding step, considering two primary criteria: the loss value associated with the predictions and the time required for the miners to submit their predictions. Firstly, they rank predictions from most to least accurate. In instances of identical accuracy, preference is given to submissions that were made earlier. This prioritization encourages miners to comply with the guidelines set for the prediction proposal phase and to submit their predictions promptly. Furthermore, this approach enhances the system's resilience against the potential attacks discussed in the Security Concerns section. §.§ Winner Selection A Chaincode collects all submitted votes and selects the top K miners with the highest number of votes as winners of the round. Subsequently, their trained models are transmitted to the aggregator to perform an FL aggregation algorithm, such as Federated Averaging (FedAvg) <cit.>. The aggregator verifies the integrity of each winning model by comparing its hash against the hash proposed in the initial step, to ensure that the model has remained untampered since its proposal. §.§ Block Creation In the Model Proposal step, each miner is assigned some validated transactions to mine. In this step, a smart contract gathers the designated transactions of the winning miners into a single block and appends it to the ledger. §.§ Reward Mechanism In this framework, we introduce a fair reward distribution mechanism designed to compensate winning miners in proportion to the significance of their contributions to the global model in the preceding round. We denote the global model as M_G = [W̃^1, W̃^2, ..., W̃^L] and the local model of the i-th miner as M_i = [W^1, W^2, ..., W^L]. In these lists, W^1 and W̃^1 are the weights of the first layers of the local and global models, respectively, and L is the number of layers in the model. Given that the layer sizes can vary, the reward for winning miner i is calculated with a formula that accounts for these variations: R_i = (1/L) ∑_l (1/N_l) ∑_n |W_n^l - W̃_n^l|, where N_l is the number of weights in layer l. In simpler terms, we calculate the average impact that each winning miner has on the current global model and base their rewards on this calculation. At the end of each round, the aggregator sends this information to the smart contract that issues rewards to the winning miners.
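As a concrete reading of this formula, the NumPy snippet below computes R_i as the mean absolute difference between a winning miner's weights and the current global model, averaged over layers. The layer shapes, the number of winners, and the perturbation used to fabricate "local" models are placeholders for illustration only.

```python
import numpy as np

def contribution_reward(local_model, global_model):
    """R_i = (1/L) * sum_l (1/N_l) * sum_n |W_n^l - W~_n^l|:
    the per-layer mean absolute weight difference, averaged over the L layers."""
    assert len(local_model) == len(global_model)
    per_layer = [np.mean(np.abs(W - W_g)) for W, W_g in zip(local_model, global_model)]
    return sum(per_layer) / len(local_model)

# Toy example: a 3-layer model with layers of different sizes.
rng = np.random.default_rng(1)
global_M = [rng.normal(size=s) for s in [(784, 256), (256, 64), (64, 10)]]
winners = {f"miner_{i}": [W + rng.normal(scale=0.05, size=W.shape) for W in global_M]
           for i in range(5)}

rewards = {name: contribution_reward(M_i, global_M) for name, M_i in winners.items()}
for name, r in rewards.items():
    print(name, round(r, 4))
```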
In the results section, we show how this reward mechanism enhances the fairness of our proposed framework. § SECURITY CONCERNS A potential threat to our FL validated consensus mechanism is the risk of miners not following the training rules. Specifically, there is a concern that some miners might attempt to mislead others by generating predictions using other supervised or unsupervised learning algorithms, such as K Nearest Neighbors (KNN), rather than adequately training the global FL model. In this section, we identify two scenarios where an adversary might employ such techniques, and we argue that in both scenarios, such adversaries will not succeed. Additionally, in the results section, we substantiate our assertion by conducting comprehensive simulations of KNN-based attacks across varying data sizes. Firstly, we define the dataset 𝒯_train = {x_i}_i=1^N_train where x_i ∈ℝ^D as our training dataset. If D is considerably large, particularly when 𝒯_train comprises image data, alternative methods to deep learning models generally do not produce results competitive with those achieved by miners training a global deep learning model. Therefore, an adversary employing simpler machine learning algorithms in an attempt to mislead other miners will fail in this competitive scenario. However, the effectiveness of K-Nearest Neighbors (KNN) algorithms can be similar to deep neural networks when data has low dimensionality. That is to say, either approach achieves satisfactory and similar results. In such instances, the complexity introduced by deep convolutional neural networks (CNNs) may not significantly outperform simpler Fully Connected (FC) models. Therefore, we consider a simple, fully connected neural network for our global model in scenarios characterized by low-dimensional data. This decision is based on the principle that the data's simplicity does not require the advanced feature extraction of CNNs, which are suited for high-dimensional datasets like images. Assuming that N_train and N_test are the test and training data sizes, respectively, the time complexity of the KNN algorithm is: T(N_train, N_test, D) = O(N_test.N_train.D + N_test.log(N_test)) where O(N_test.N_train.D) is the time complexity of computing the distance between the test records and all the training records and N_test.log(N_test) is for sorting the distances. Similarly, the time complexity of a forward pass in a fully connected neural network is: T(N_test, D, H_max) = O(N_test.D.H_max) Where H_max is the maximum layer size in the network. This equation is derived based on the most computationally intensive matrix multiplication in the forward pass. Considering the above equations, unless for extremely large values of H_max, the time complexity of a forward pass is smaller than that of a KNN algorithm for a low-dimensional dataset 𝒯. Consequently, in the proposed voting step, the deep learning model is favored for faster computation when these algorithms perform equally well. § IMPLEMENTATION We utilized Hyperledger Fabric <cit.>, a modular and permissioned blockchain platform, to implement PoCL[https://github.com/tcdt-lab/FL-Validated-Learning]. The platform's Chaincode-centric architecture, where Chaincodes function as smart contracts, allowed us to seamlessly integrate our consensus mechanism as a network layer on the existing Hyperledger Fabric infrastructure. We developed smart contracts (Chaincodes) for each critical operation within our comprehensive design. 
This design ensures that no single miner or central authority oversees network operations; instead, governance is distributed across a suite of smart contracts, ensuring decentralized control. Figure <ref> shows a high-level view of our implemented network and demonstrates how distinct components of the PoCL framework are connected. Our implementation revolves around the functionality of the following elements: (i) Peers: As essential nodes in Hyperledger Fabric networks, peers maintain a copy of the ledger and are responsible for endorsing transactions by executing Chaincodes (smart contracts). They validate transaction outputs against the current ledger state before committing them to the blockchain, ensuring consistency and integrity. (ii) Applications: Applications serve as gateways for communicating with the peers of the Hyperledger Fabric network. They enable the submission of transactions and the triggering of Chaincode functions. In our implementation, each peer is associated with one application to facilitate communication. Additionally, the administrator is implemented as one of these applications to simplify the contact and notification processes required for interaction with miners. (iii) Miners: Miners are implemented as Flask servers waiting for applications to notify them to start training. They predict test records and vote on the predictions made by other miners. The TensorFlow framework performs this training process. (iv) Aggregator: Similar to the miners, the aggregator is implemented as a Flask server that awaits the command to aggregate updates into a new global model using the FedAvg algorithm. Notably, our design is flexible and can be adapted to incorporate other variations of FL. (v) Submitter: To accurately simulate users' activities on the blockchain, a program is built to submit transactions to the blockchain at a custom rate. The task of our blockchain is to supervise a cryptocurrency system; therefore, the submitter proposes transfer transactions from one wallet to another. (vi) Adversaries: The miners who attempt to invade the privacy of the system by conducting three malicious actions. First, instead of training the global model using their local datasets, they replace every trained parameter with zero to ruin the performance of the global model should they win the current round. In addition, instead of computing a forward pass to make predictions, they utilize a KNN algorithm. At last, while every honest miner votes from best to worst predictions, they vote from worst to best to increase the chance of winning for less efficient models. § RESULTS We trained a deep Convolutional Neural Network <cit.>, as depicted in Fig. <ref>, to classify the CIFAR-10 dataset using 10 miners with identical capabilities. In each round, the top K=5 miners are selected based on the loss value and prediction time, and their models are aggregated using the FedAvg algorithm. To assess the fairness of our framework, we unevenly distributed the dataset among miners, with miners six to ten receiving four times as many data records as miners one to five. This experimental setup was chosen to evaluate the impact of data volume on each miner's chance of winning in competitive rounds. Fig. <ref> shows the validation loss of each local model over 20 rounds of training. As demonstrated by these plots, all local models show improvement over the course of training. 
Furthermore, to assess the relationship between data size and competitive success, we illustrate the total number of winning rounds for each miner in Figure <ref>. Counterintuitively, miners with smaller data sizes win more rounds. These unexpected results necessitated further examination of the distribution of winning miners across each round, as shown in Figure <ref>. This illustrates that miners with larger data sizes are more likely to win initial rounds. Additionally, from Figure <ref>, we infer that their contributions to the global model in these rounds are highly significant, evidenced by the considerable decrease in validation losses and increase in validation accuracy. Our reward mechanism appropriately acknowledges the contributions of these miners by assigning reward values, as illustrated in Figure <ref>. Although miners with smaller data sizes win more rounds, they receive fewer rewards than miners with larger data sizes. This analysis confirms that our framework promotes fair competition in each round and ensures an equitable reward distribution among winning miners across all rounds. Moreover, we conducted a similar experiment using the same hyperparameters to assess the impact of the KNN attacks on the system. In this experiment, we introduced two adversaries, miners one and six, with the latter possessing four times more data records than the former. The results of this setting are demonstrated in Figure <ref>, showing that neither adversary won any competition round, regardless of data size. § CONCLUSION In this paper, we proposed Proof-of-Collaborative-Learning (PoCL), a multi-winner FL validated consensus mechanism that recycles the energy of the original proof of work. This framework trains the requested models globally using the computation power of the contributing miners. To evaluate miners' performance and select winners, we proposed a novel evaluation step that relies on the predictions miners make on each other's test records. In addition, we verified the robustness of our proposed evaluation mechanism by simulating an attack scenario on the mechanism and demonstrated how these attacks are ineffective in compromising the system. Finally, we proposed a novel reward distribution mechanism that compensates winning miners according to the significance of their contributions. Through extensive experiments, we demonstrated that our reward distribution algorithm fairly compensates miners both within and across all rounds. § FUTURE WORKS In this paper, we proposed a general framework for achieving FL validated consensus through the global contribution of miners. We purposefully designed this framework to be expandable in many directions, including: * In this study, we assumed the training data to be publicly available to prevent security concerns of requesters. Nevertheless, a privacy data-sharing approach can be utilized to train the global model using private data. This approach is particularly useful in private blockchains due to the provided transparency. * For simplicity, we assumed that the requesters select the number of training rounds for their proposed models. However, one might not be aware of the correct number of training rounds required for training one's global model to convergence. In this setting, the framework can be adjusted to stop global training using approaches such as early stopping or by calculating the difference between the global models of consecutive rounds. IEEEtran
http://arxiv.org/abs/2407.11924v1
20240716171517
Holographic QCD Running Coupling for Light Quarks in Strong Magnetic Field
[ "Irina Ya. Aref'eva", "Ali Hajilou", "Alexander Nikolaev", "Pavel Slepov" ]
hep-th
[ "hep-th", "hep-ph" ]
http://arxiv.org/abs/2407.13014v1
20240717210639
High-Contrast Imaging at First-Light of the GMT: The Preliminary Design of GMagAO-X
[ "Jared R. Males", "Laird M. Close", "Sebastiaan Y. Haffert", "Maggie Y. Kautz", "Doug Kelly", "Adam Fletcher", "Thomas Salanski", "Olivier Durney", "Jamison Noenickx", "John Ford", "Victor Gasho", "Logan Pearce", "Jay Kueny", "Olivier Guyon", "Alycia Weinberger", "Brendan Bowler", "Adam Kraus", "Natasha Batalha" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.EP" ]
Length-preserving biconnection gravity and its cosmological implications Sorin V. Sabau July 22, 2024 ======================================================================== § ABSTRACT We present the preliminary design of GMagAO-X, the first-light high-contrast imager planned for the Giant Magellan Telescope. GMagAO-X will realize the revolutionary increase in spatial resolution and sensitivity provided by the 25 m GMT. It will enable, for the first time, the spectroscopic characterization of nearby potentially habitable terrestrial exoplanets orbiting late-type stars. Additional science cases include: reflected light characterization of mature giant planets; measurement of young extrasolar giant planet variability; characterization of circumstellar disks at unprecedented spatial resolution; characterization of benchmark stellar atmospheres at high spectral resolution; and mapping of resolved objects such as giant stars and asteroids. These, and many more, science cases will be enabled by a 21,000 actuator extreme adaptive optics system, a coronagraphic wavefront control system, and a suite of imagers and spectrographs. We will review the science-driven performance requirements for GMagAO-X, which include achieving a Strehl ratio of 70% at 800 nm on 8th mag and brighter stars, and post-processed characterization at astrophysical flux-ratios of 1e-7 at 4 lambda/D (26 mas at 800 nm) separation. We will provide an overview of the resulting mechanical, optical, and software designs optimized to deliver this performance. We will also discuss the interfaces to the GMT itself, and the concept of operations. We will present an overview of our end-to-end performance modeling and simulations, including the control of segment phasing, as well as an overview of prototype lab demonstrations. Finally, we will review the results of Preliminary Design Review held in February, 2024. § INTRODUCTION The 25-40 m Extremely Large Telescopes (ELTs) will transform the study of extrasolar planets. Their larger collecting area, compared to current generation 5-10 m telescopes, will dramatically improve the sensitivity of exoplanet studies including radial velocity measurements and transit spectroscopy on nearby planets. The gains for resolved direct imaging of exoplanets will be even more dramatic. The most profound result of this transformative leap in capabilities will be the ability to search for the signatures of life on planets orbiting other stars. The Astro2020 Decadal Survey<cit.> placed the search for life on exoplanets as a Priority Area for astrophysics over the next decade. In “Pathways to Habitable Worlds”, Astro2020 found that one of the four key capabilities needed to achieve this is: Ground-based extremely large telescopes equipped with high-resolution spectroscopy, high-performance adaptive optics, and high-contrast imaging.<cit.> Preparing the instrumentation suite of the ELTs to deliver on this promise is crucial, but also a demanding technological challenge. It is well known that in the background-noise-limited case, the sensitivity (time to signal-to-noise, SNR) to point sources of diffraction-limited telescopes scales with diameter D^4. Just as importantly, the improvement in spatial resolution with 1/D will allow direct imaging at smaller separations than currently possible with today's telescopes. However, the application of these scaling laws to direct imaging is non-trivial. 
D^4 only obtains if the noise scales with D^2 and exposure time, but this is well-known to not be the case at small separations for typical high-contrast imaging instruments. This is due to the internal systematics driven quasi-static speckle problem<cit.>. Only if quasi-static speckles are controlled and suppressed such that residual speckle lifetimes are short will such an instrument achieve D^4<cit.>. Furthermore, the science foreseen by Astro2020 requires working at small separations, ≲ 5 λ/D, with coronagraphs. This requires exquisite wavefront sensing and control (WFS&C) and so places demanding requirements on a direct imaging instrument. The 25 m Giant Magellan Telescope (GMT) presents the best opportunity to realize the vision of Astro2020 early in the ELT era. Given its unique aperture and smaller D, an instrument can be constructed to achieve higher Strehl with fewer deformable mirror (DM) actuators. This makes the GMT the ideal facility for short wavelength Extreme AO (ExAO) <cit.>. The large-segment design of the GMT provides a straightforward way to deploy mature, well tested DM technology with high actuator density. Equipped with an ExAO-fed coronagraph, the GMT will characterize large numbers of temperate, mature exoplanets for the first time<cit.>, including terrestrial potentially habitable exoplanets. Here we present the results of the preliminary design for GMagAO-X, an instrument in development to provide a first-light ExAO and coronagraph facility for GMT. We will review the science cases, science-driven requirements, and the mechanical, optical, electrical, and software & control preliminary designs. GMagAO-X underwent a successful preliminary design review (PDR) in February, 2024 and is now preparing for final design. § SCIENCE CASES AND REQUIREMENTS The GMagAO-X science team produced the following core science cases. We briefly state them here, see the GMagAO-X Instrument Science Requirements Document (GMAGX-DOC-0001) for detailed discussion. -5pt * Reflected Light Imaging and the Search for New Life -1pt * Characterizing Temperate Giant Exoplanets * Terrestrial Planets: Detecting and Characterizing the Atmosphere * Detection of biosignatures (e.g. H_2 O, O_2). * Reconnaissance for Habitable World's Observatory * Characterizing Young Extrasolar Giant Planets (EGPs) -1pt * Young Self-Luminous EGPs * Rotational Periods of EGPs * Cloud 3-D Structure * Planet Formation at Low Mass and Small Separation -1pt * Accretion onto Planets and Disks * Circumstellar Disk Structure and Disk-Planet Interactions * Stellar Evolution -1pt * Benchmark Binary System Characterization * White Dwarf / Main Sequence Star Binaries * White Dwarf Pollution * Interacting Stars * Resolved Stars * Solar System Science -1pt * Asteroid Density Measurements * Solar System Volatiles §.§ Science Driven Requirements The science and instrument teams distilled the individual science cases discussed above into a set of driving science requirements which GMagAO-X is designed to meet. The resulting key instrument parameters are summarized in Table <ref>. The top-level requirements are: * GMAGX-SCI-001: Exoplanet Characterization in Reflected Light * Requirement: GMagAO-X shall measure reflected light from planets around nearby stars such as Proxima Cen and GJ 876. * GMAGX-SCI-002: Spatial Resolution * Requirement: GMagAO-X shall achieve λ/D spatial resolution at wavelengths equal and longer than 600 nm on unresolved guide stars. 
* Goal: GMagAO-X shall achieve λ/D spatial resolution on resolved guide stars, such as asteroids and resolved stars. * Goal: GMagAO-X shall achieve λ/D spatial resolution at wavelengths longer than 450 nm. * Stretch-Goal: GMagAO-X shall achieve λ/D spatial resolution at wavelengths longer than 350 nm. * GMAGX-SCI-003: Broadband Photometry * Requirement: GMagAO-X shall be capable of performing photometry in narrow-band and 10% bandwidth filter bands. * GMAGX-SCI-004: Low Resolution Spectroscopy * Requirement: GMagAO-X shall be capable of spectroscopy at a resolution between 20-100. * GMAGX-SCI-005: High Resolution Spectroscopy * Requirement: GMagAO-X shall be capable of spectroscopy at resolutions from 1000-65000. * Goal: GMagAO-X shall be capable of spectroscopy at resolutions greater than ℛ=100,000 at least in the Oxygen-A band. * GMAGX-SCI-006: Wavelength Coverage * Requirement: GMagAO-X shall provide science wavelength coverage from 600-1900 nm to provide discrimination between various exoplanet models. * Goal: Wavelength coverage of 450-1900 nm. * Stretch Goal: Wavelength coverage of 350-1900 nm. * GMAGX-SCI-007: Magnitude Range * Requirement: GMagAO-X shall be capable of observing stars in the magnitude range I = -1.5 to 13. * Goal: GMagAO-X shall be capable of observing stars in the magnitude range I = -1.5 to 15. * GMAGX-SCI-008: Coronagraph Contrast * Requirement: GMagAO-X shall be capable of performing photometry at a signal-to-noise ratio of 5 on a point source with a flux ratio of 1e-7 or better with respect to its host star in a 10% bandwidth filter at 4 λ/D. * Goal: GMagAO-X shall be capable of performing SNR=5 photometry on a point source with a flux ratio of 1e-7 or better in a 10% bandwidth filter at 2 λ/D and 1e-8 at 6 λ/D. * Stretch Goal: 1e-8 contrast at 1 λ/D and 1e-9 at 5 λ/D. * GMAGX-SCI-009: Planet Position Measurement * Requirement: GMagAO-X shall be capable of measuring companion position with respect to the star with sufficient precision (TBD) to constrain inclination and the orbital elements. * GMAGX-SCI-010: Field of View * Requirement: GMagAO-X shall provide a field of view of at least 3 arcseconds x 3 arcseconds. * GMAGX-SCI-011: Relative Photometric Stability * Requirement: GMagAO-X shall be capable of 1% (TBC) relative photometric stability over a 4 hr (TBC) period in coronagraphic high-contrast observations. The GMagAO-X Instrument Science Requirements Document (GMAGX-DOC-0001) contains much more discussion about the rationale for each of these 11 requirements and how they flow down from the science cases. These requirements flow down to the instrument level requirements, on which the preliminary design of GMagAO-X is based. § PRELIMINARY DESIGN §.§ MECHANICAL DESIGN The mechanical design of GMagAO-X is shown in Figure <ref>. GMagAO-X will occupy a Folded Port (FP) on the Gregorian Instrument Rotator (GIR) of the GMT. The main structure of GMagAO-X consists of a small optical bench with fore-optics and a fast-steering mirror, a rotating frame containing the main two-level optical subsystem, and onboard electronics racks which primarily house DM electronics and so must be close to the instrument. GMagAO-X also supplies its own custom tertiary mirror (M3). This decision was made to enable a very high quality, but smaller than facility-sized, optic to be used for M3. To provide vibration isolation, the two-level optical table is floating (air-damped). 
We will use an actively controlled, in height and level, system from TMC called “PEPSII”, which we have demonstrated with the existing MagAO-X instrument on the Magellan Clay telescope. See Close et al.<cit.> in these proceedings. The GIR is below the primary mirror (M1), and tips with elevation. The GIR, as its names implies, rotates to track the sky. To maximize stability and provide the ability to use pneumatic vibration isolation, GMagAO-X is designed to counter rotate such that its main optics are gravity invariant (see Figure <ref>). To facilitate this, when GMagAO-X is in operation the GIR will be locked in position such that GMagAO-X will be parallel to the elevation axis of the GMT. Figure <ref> illustrates the crossed roller bearing and drive system for rotating the optical table. When not in operation, GMagAO-X will not be floating and is designed to travel through the full range of motion of the telescope. This includes a locking mechanism to clamp the table down rigidly and ensure that it is constrained for all possible angles of the GIR while other instruments are in operation. The preliminary mechanical design of GMagAO-X has been thoroughly analyzed for structural integrity, and to ensure it meets the demanding seismic survivability standards of the GMT. All aspects of the construction, shipment, assembly, installation and removal, alignment, and night-time operations have been considered in the design. §.§ OPTICAL DESIGN Figure <ref> shows a summary of the optical design of GMagAO-X. The custom M3 reflects the f/8.157 beam of the GMT into the frontend OAP Relay (green). These fore-optics contain a fast steering mirror (FSM) to provide high-stroke and high-speed vibration control. The beam next passes through a pupil alignment periscope, a k-mirror de-rotator, and atmospheric dispersion corrector (ADC). A 3000 actuator DM “woofer” is used to provide large-stroke for correcting low-order aberrations at lower speeds. After the woofer, the beam is relayed to the 21,000 “parallel DM” which provides the high-speed and high-spatial-frequency control. The beam is then split between the wavefront sensors (WFS) and the coronagraph. The wavefront sensing subsystem (not shown in Figure <ref> contains both a Holographic Dispersed Fringe Sensor (HDFS,<cit.>) for coarse phasing control and a high-order WFS. A 3-sided pyramid WFS is currently baselined (see Haffert et al.<cit.> in these proceedings). The coronagraph beam is relayed to the upper level (purple rays in Figure <ref>. The coronagraph contains a 3000 actuator non-common path correcting (NCPC) DM, an architecture which has been demonstrated on-sky with MagAO-X<cit.>. This allows in-coronagraph WFS&C, such as the digging of dark holes<cit.>, without offsetting to the high-order WFS&C system. After the NCPC DM, the coronagraph optics provide a Lyot-style architecture. The baseline coronagraph is the Phase-Apodized Pupil Lyot Knife-Edge Coronagraph (PAPLKEC)<cit.>, which uses the NCPC DM for apodization and a knife-edge mirror as the focal plane mask. To push towards the goal and stretch-goal of GMAGX-SCI-008 the design include Phase Induced Amplitude Apodization (PIAA<cit.>) optics (including inverse PIAA) and transmissive complex focal plane masks. Both focal-plane low-order WFS (FLOWFS) and Lyot-plane LOWFS (LLOWFS) are supported to make use of light rejected by the coronagraphs. 
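To put the coronagraph working angles into physical units, the short script below converts N λ/D to milliarcseconds, assuming a 25.4 m aperture (the text quotes 25 m). At 800 nm it reproduces the roughly 26 mas quoted for 4 λ/D in GMAGX-SCI-008 and in the abstract; the conversion itself is standard.

```python
import math

RAD_TO_MAS = math.degrees(1.0) * 3600.0 * 1000.0       # radians -> milliarcseconds

def lam_over_d_mas(wavelength_m, diameter_m=25.4):
    """Angular diffraction scale lambda/D in milliarcseconds."""
    return wavelength_m / diameter_m * RAD_TO_MAS

for wl_nm in (600, 800, 1600):
    unit = lam_over_d_mas(wl_nm * 1e-9)
    print(f"{wl_nm:5d} nm: 1 l/D = {unit:5.2f} mas | "
          f"2 l/D = {2*unit:5.1f} | 4 l/D = {4*unit:5.1f} | 6 l/D = {6*unit:5.1f} mas")
```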
After the coronagraph, focal plane instrumentation will include imagers covering the optical through 1 μm, and near-IR through H band (see GMAGX-SCI-006). We plan to couple GMagAO-X to the facility G-CLEF spectrograph<cit.> as a fiber fed integral field unit (IFU), and possibly to other facility spectrographs for near-IR coverage. We have also left room for an on-board IFU which is yet to be designed. See Close et al.<cit.> in these proceedings for further details about the optical design of GMagAO-X. §.§.§ The Parallel DM The key to achieving the WFS&C required for the science goals of GMagAO-X is the ability to deploy a high-actuator count DM. Based on the specifications and performance of MagAO-X<cit.>, a ∼14 cm projected pitch is needed, which is equivalent to ∼3000 illuminated actuators per segment. Figure <ref> shows how this can be achieved on the GMT with the “Parallel DM” architecture. We use a hexagonal prism with its 6 segments coated as mirrors. Placed near a pupil plane, this splits the GMT aperture such that each segment is imaged onto a 3K MEMS DM. The central hole passes the center segment to its DM in the back. Crossed folds are used to control polarization effects, and folds are actuated with piston-tip-tilt stages to provide coarse phasing control. Fine high-speed phasing control is provided by the MEMS segments. Figure <ref> shows how the parallel DM fits on the lower table of GMagAO-X. The parallel DM has been prototyped and demonstrated, albeit without MEMS installed, in the High Contrast Adaptive Optics phasing Testbed (HCAT) at the University of Arizona. See Close et al.<cit.> in these proceedings and Kautz et al (submitted to JATIS). §.§.§ Coronagraphs Figure <ref> documents the baseline PAPLKEC coronagraph design. The design easily supports achieving the science goals define above. The GMT aperture presents no significant challenges for modern coronagraph designs. §.§ ELECTRICAL DESIGN The electronics of GMagAO-X are concerned with the safe operation of the rotation system and the air floating system, as well as the operation of the ExAO system and coronagraph components. The main feature of the electrical design is the layout of the equipment racks that will house the drive and control electronics. Figure <ref> illustrates the 5 racks we plan to use. Two racks will be mounted on the instrument frame. These are primarily occupied by DM driver electronics, which due to cable length restrictions must be as close as possible to the devices. Other components with similar cable length and/or latency restrictions are included in these racks. Note that these racks tip with elevation, but do not rotate with the internal barrel and so all cabling from these racks is managed by a cable guide. A third rack below the deck of the GIR, co-rotating with azimuth, will house additional motion control electronics and networking gear. Power distribution is distributed amongst the racks. The entire power requirement of GMagAO-X is well within the allotment for an FP instrument, as is the thermal performance and cooling requirements. At least two additional racks will be located off the telescope in an equipment/computer room, to house the distributed real-time control computer system. This is described further below. Sufficient allocations of optical fiber have been reserved between the GIR and the equipment room to support GMagAO-X's unique needs. 
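The number of DM driver channels these racks must serve follows from the actuator budget of the Parallel DM described above. As a rough consistency check, the arithmetic below recovers the approximately 3000 illuminated actuators per segment and 21,000 in total quoted earlier; the 8.4 m segment diameter used here is an assumption, not a value restated in this text.

```python
import math

segment_diameter_m = 8.4      # assumed GMT primary segment size (not stated in this text)
actuator_pitch_m = 0.14       # ~14 cm projected pitch quoted above
n_segments = 7

acts_across = segment_diameter_m / actuator_pitch_m       # actuators across one segment
acts_per_segment = math.pi / 4.0 * acts_across**2         # illuminated (circular) actuator count
total = n_segments * acts_per_segment

print(f"{acts_across:.0f} actuators across a segment")
print(f"~{acts_per_segment:.0f} illuminated actuators per segment (text quotes ~3000)")
print(f"~{total:.0f} actuators in total (text quotes 21,000)")
```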
An important concern with MEMS DM is cable management, as the ribbon cables typically used to supply per-actuator voltages take up significant space. In order to support occasional maintenance and shipping, we will provide bulkhead connectors on the outside of the MEMS rack, as shown in Figure <ref>. This will allow the rack to be disconnected from the instrument without disturbing either the electronics or the DM segments. We have extensive experience with this system on MagAO-X, which undergoes routine shipping and transportation on the mountain. §.§ SOFTWARE & CONTROL SYSTEM The baseline software system for GMagAO-X is based on the proven MagAO-X software suite. The MagAO-X software is based on a modern C++ API, with a base class inherited by all applications which implements standard housekeeping (startup sequence, event loop, logs, telemetry, and interprocess communication (IPC, both routine and low-latency)). We use a binary logging framework for efficiency. The architecture is multi-application, with multiple threads in each application. The Instrument Neutral Distributed Interface (INDI) protocol is used for soft-real-time IPC, and the ImageStreamIO low-latency library is used for WFS&C image processing and hard-real-time IPC. The MagAO-X framework is robust and fully demonstrated on-sky, as well as being used in several lab testbeds, and in addition, significant effort is underway to prepare it for space-flight applications. The MagAO-X PDR and pre-ship review (PSR) software designs are available online[<https://magao-x.org/docs/handbook/_downloads/78a8f3b30b90bdcb2a4560f4c0981fca/3.3_software.pdf>] [<https://magao-x.org/docs/handbook/_downloads/e9e6b896c53ab5b359d890763de06e96/2_5_Software_Processes.pdf>]. The API reference[<https://magao-x.org/docs/api/>] and source code are also public[<https://github.com/magao-x/MagAOX>]. The real-time control system for GMagAO-X is based on the Compute and Control for Adaptive Optics (CACAO<cit.>) system. CACAO is in routine use at SCExAO<cit.> and on MagAO-X. The shared memory architecture of CACAO (MILK ImageStreamIO) is the backbone of our image transfer and low-latency IPC. We use this for the science cameras as well, enabling low-latency focal plane WFS (FPWFS). In order to enable image acquisition from and control of the large number of devices in GMagAO-X, and to enable the processing throughput needed for a 21,000 actuator AO system, we plan to implement a distributed real-time control system. A key component of CACAO that is needed for GMagAO-X is low-latency computer-to-computer image transfer. This is also used routinely on SCExAO and MagAO-X for wavefront control. The computers and their task allocation are: * WFSCC: WFS Control Computer * Low-latency camera readout (one PCIe slot per camera, dedicated signal line(s) each (cameralink, coaxpress)) * Initial calibrations (dark sub, flat field, masking) applied here before transfer * Low-latency transfer to RTC-1 * Ancillary electronics control (modulator, shutters, filters, etc.) * ICC: Instrument Control Computer * Low-latency camera readout  (one PCIe slot per camera, dedicated signal line(s) each (cameralink, coaxpress)) * Low-latency transfer to RTC-2 * Ancillary electronics control (shutters, filters, etc.) 
* RTC-1: Real-Time Computer for Main AO * Low-latency transfer from WFSCC, to DMCC * PCIe expansion (GPUs) * RTC-2: Real-Time Computer for Coronagraph AO * Low-latency transfer from ICC, to DMCC * PCIe expansion (GPUs) * DMCC: DM Control Computer * Commands to DM drivers (one PCIe per segment, dedicated fiber pair each) * Low-latency transfer from RTC-1 & RTC-2 We make use of PCIe expansion to provide the data acquisition, control output, and computing capacity needed. Figure <ref> illustrates the layout of the GMagAO-X control system. § PERFORMANCE & EXOPLANET YIELD We used dynamical models of the GMagAO-X instrument to assess the performance of the preliminary design. Figure <ref> shows the tip/tilt & vibration control performance of GMagAO-X. This includes the atmosphere, the results of the structural modeling done by the GMT, contributions from GMagAO-X itself (dominated by M3), and the results of closed-loop control. The FSM (a PI S-331 is the baseline) has sufficient stroke to correct the worst case tip/tilt, and the resulting 0.03 λ/D rms WFE residual meets the instrument level requirement set by coronagraph performance. The same analysis was conducted for segment piston. Figure <ref> shows the segment piston control performance of GMagAO-X. This includes the atmosphere, the results of the structural modeling done by the GMT, and the results of closed-loop control. The resultant 3.5 nm rms WFE residual meets the instrument level requirement set by coronagraph performance. A dynamical model of the closed-loop performance was used to predict the performance of the high-order AO system. Figure <ref> shows the Strehl ratio vs. guide-star magnitude. Figure <ref> shows the residual instantaneous contrast due to residual atmospheric speckles, as well as the statistical speckle lifetimes. The results of the performance analysis were used to assess the reflected light imaging exoplanet yield. We base this analysis on the known exoplanets rather than a projected population model. We emphasize this point: the planets under consideration are known to exist. We assessed each planet based on its guide-star magnitude and the resultant predicted GMagAO-X performance. For each planet we estimate its radius based on the RV minimum mass using a mass-to-radius relationship calibrated from measured exoplanets. Geometric albedos were based on a suite of models including Earthshine, Venus, and published models for EGPs. The results are shown in Figure <ref>. GMagAO-X will be capable of characterizing the atmospheres of up to over 200 currently known exoplanets, for which we currently know only a minimum mass. § CONCLUSION GMagAO-X is the ExAO-fed coronagraph planned for first-light of the GMT. It represents the earliest opportunity of the ELT era to begin search nearby terrestrial planets for life. GMagAO-X has passed preliminary design review, and is preparing to begin the final design phase. The scientific potential of GMagAO-X highlights how crucial the US-ELT program is to the future of U.S. Astronomy and motivates a positive decision to move forward with construction of the GMT as rapidly as possible. The GMagAO-X conceptual and preliminary design would not have been possible without the support of the University of Arizona Space Institute. We are also grateful for the support of an anonymous donor to Steward Observatory. spiebib
http://arxiv.org/abs/2407.12599v1
20240717142644
On Diversity in Discriminative Neural Networks
[ "Brahim Oubaha", "Claude Berrou", "Xueyao Ji", "Yehya Nasser", "Raphaël Le Bidan" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CV" ]
arabic On Diversity in Discriminative Neural Networks Brahim Oubaha Mathematical and Electrical Engineering IMT Atlantique Brest, France brahim.oubaha@imt-atlantique.fr Claude Berrou Mathematical and Electrical Engineering IMT Atlantique Brest, France claude.berrou@imt-atlantique.fr Xueyao Ji Center of Brain Sciences Institute of Basic Medical Sciences Beijing, China xy.ji@foxmail.com Yehya Nasser Mathematical and Electrical Engineering IMT Atlantique Brest, France yehya.nasser@imt-atlantique.fr Raphaël Le Bidan Mathematical and Electrical Engineering IMT Atlantique Brest, France raphael.lebidan@imt-atlantique.fr July 22, 2024 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Diversity is a concept of prime importance in almost all disciplines based on information processing. In telecommunications, for example, spatial, temporal, and frequency diversity, as well as redundant coding, are fundamental concepts that have enabled the design of extremely efficient systems. In machine learning, in particular with neural networks, diversity is not always a concept that is emphasized or at least clearly identified. This paper proposes a neural network architecture that builds upon various diversity principles, some of them already known, others more original. Our architecture obtains remarkable results, with a record self-supervised learning accuracy of 99. 57% in MNIST, and a top tier promising semi-supervised learning accuracy of 94.21% in CIFAR-10 using only 25 labels per class. Neural network, diversity, competition, sparsity, self- and semi-supervised learning, ensemble learning. § INTRODUCTION In the information sciences, the principle of diversity consists in combining information from different sources to better estimate the data. Diversity is all the more effective when the sources are decorrelated, i.e. when the information they provide is not processed in the same way and/or does not derive from the same observations. This ideal condition is rarely met, and we generally make do with partially correlated information. It is probably the field of telecommunications that has benefited most from the principle of diversity in moving towards very high-performance systems, both fixed and mobile. In the time domain, channel coding (or error correcting coding) makes it possible to transmit the binary elements of an augmented (redundant) version of the original message at different times (and therefore generally subject to different disturbances) and to benefit from this redundancy in the receiver. In the frequency domain, techniques such as Orthogonal Frequency-Division Multiplexing can more or less eliminate spectrum irregularities and interferences. Pruning techniques may also be considered to remove inappropriate parts of the bandwidth. The spatial dimension is of course also used, with multi-path techniques such as Multiple-Input Multiple-Output taking advantage of the particular properties of the wave paths. 
Other types of diversity can be exploited at higher system levels (multiuser, multistandard, etc.). In contrast, the classic architecture of a neural network, i.e. a few convolution layers followed by a classifier with a simple one-hot output (as many neurons as classes), does not reveal any deliberately introduced diversity technique. It could of course be pointed out that the totality of the weights of a neural network's connections is always oversized and therefore redundant. However, in the absence of a theory on neural network capacity and redundancy, we cannot really speak of intentional, controlled diversity. Analogies can however be drawn between different types of diversity found in digital communications and in neural networks: §.§ Channel Coding Two techniques can be related to channel coding (redundant coding). The first, of high importance in self-supervised and semi-supervised applications, is data augmentation. This involves submitting several distorted versions (rotation, cropping, mirroring, etc.) of the same sample to the network. Redundancy rates are therefore several hundred percent. The second technique involves increasing the length of the network output by multiplying the number of neurons that must be activated for a given class. This is known as distributed coding. The redundancy rate is determined by the length of the output and can be several thousand percent. A theory of this process has been developed under the name of Error Correcting Output Coding (ECOC) <cit.>. §.§ Spatial Coding A convolution layer can be presented as a spatio-temporal encoding layer. This is because the implementation of filters seeking to extract features independently of coordinates involves sharing the synaptic weights of these filters. There is therefore both redundant coding (repeated weights) and spatial coding (the search for a certain invariance with respect to coordinates). Regularization techniques such as dropout or drop-connect can also be assimilated to a form of spatial diversity. Another type of spatial diversity, not often implemented to our knowledge, can be provided by the sparsity of connection matrices. This concept is developed in section II. §.§ Pruning A famous example of pruning in digital communications is Discrete MultiTone (DMT) modulation, which enabled the massive development of the Asymmetric Digital Subscriber Line (ADSL) application. This modulation divides the spectrum into multiple sub-channels whose capacity (number of bits transmitted per unit of time) is evaluated once and for all on a fixed channel (telephone pair). The least favorable sub-channels are assigned the lowest data rates. Some sub-channels may even be discarded. In a neural network, which will also eventually become a fixed device, pruning consists in removing the least discriminating paths with regard to the categories to be recognized. The analogy is relative, however, because in the first case, the aim is to maximize transmission throughput, whereas pruning in a neural network aims to simplify implementation and reduce computational requirements. §.§ Ensemble processing In the world of telecommunications, the most representative example of an ensemble processing is probably a constellation of satellites such as OneWeb or Starlink. In this type of system, the operation is unimodal, meaning that each satellite is entrusted with the task of communicating with the earth using the same transmission mode and the same type of equipment. 
The only parameter that distinguishes one satellite from another, as seen by a potential user, is the link budget, on which the choice of the most favorable satellite is based. In the field of discriminative neural networks, unimodal ensemble processing does not consist in selecting one network among several, but in using all or almost all of them at the same time for the inference task <cit.>. Networks differ in the initialization of their weights or in their hyperparameters, which diversifies the ways in which they learn, particularly with regard to the inevitable local minima. Often, a simple majority vote decision or, if weighted decisions are available, a probabilistic vote is enough to improve performance compared to that of a single network. Ensemble processing can be performed independently by each network (in this case, it is better to call it ensemble inference rather than ensemble learning) or by linking their operations through some information transfer algorithm <cit.>. § COMPETITION AND SPARSITY Competition has played an important role in the evolution of species, most often due to limited resources. The same is true for nearby neurons in the nervous system, because the energy provided by the local blood flow does not allow them all to activate at the same time. It is then not unreasonable to think that the brain relies on some kind of competition in order to learn and memorize while saving energy, and the same applies to artificial networks, by bio-inspired analogy. The first paper to highlight the benefits of using the competition principle in neural networks dates back ten years <cit.>. It showed that the classical activation function of neurons in a hidden layer, usually a sigmoid or ReLU function, can be advantageously replaced by a Local Winner-Takes-All (LWTA) function acting on blocks of several neurons. In a given layer, this technique thus puts into competition all the activities coming from the neurons of the underlying layer through a full matrix of connections. It turns out that this principle works just as well, and perhaps even better, if the matrix is sparse, or even very sparse, although this possibility was not considered in <cit.>. In this situation, the competition is performed between subpopulations of the underlying neurons, rather than within the whole population. These small groups of neurons are selected completely at random, since the drawing of the sparse connection matrix is itself random. In addition to the diversity brought about by multiple, quasi-independent competitions within the considered layer, extra diversity can be added when learning is carried out by an ensemble of networks. As each network is initialized in a different way (each time with a different seed that decides the topology of connections), the number of different competitions in the ensemble is increased. Competition (see <cit.> for relevant references) and sparse connections <cit.> are considered proven and fundamental properties of mammalian cortices. Our contribution appears to be the first to combine these two bio-inspired principles in artificial neural networks, and to evaluate the potential of such architectures for self- and semi-supervised learning. We note that competition and sparsity are very simple to implement: no particular pre- or post-processing is needed to make the best use of them. We will rely only on the now classical functions adopted in neural networks, such as data augmentation, convolutional layers, batch normalization, pseudo-label estimation, etc.
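As a schematic illustration of how these two principles combine, the NumPy sketch below wires a layer through a random sparse matrix and then applies a block-wise Local Winner-Takes-All. The layer widths, sparsity level, and block size are arbitrary choices for the example, not the values used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_weights(n_in, n_out, sparsity=0.85):
    """Random weight matrix in which a fraction `sparsity` of the connections is removed."""
    w = rng.normal(size=(n_in, n_out))
    mask = rng.random((n_in, n_out)) >= sparsity        # keep ~15% of the connections
    return w * mask

def block_lwta(a, block_size):
    """Local Winner-Takes-All: within each block, keep only the largest activity."""
    batch, width = a.shape
    blocks = a.reshape(batch, width // block_size, block_size)
    winners = blocks.argmax(axis=-1)[..., None]
    out = np.zeros_like(blocks)
    np.put_along_axis(out, winners, np.take_along_axis(blocks, winners, axis=-1), axis=-1)
    return out.reshape(batch, width)

x = rng.normal(size=(4, 128))                  # a small batch of input activities
w = sparse_weights(128, 64, sparsity=0.85)     # each output neuron sees a random subpopulation
a = x @ w                                      # quasi-independent competitions
y = block_lwta(a, block_size=8)                # one winner per block of 8 neurons
print(y.shape, (y != 0).mean())                # roughly 1/8 of the activities survive
```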
§ PROPOSED LEARNING METHOD In this section, we start by revisiting preliminary work in self and semi-supervised learning frameworks, in particular those involving the principle of data augmentation and label estimation. Then we introduce the proposed learning method, detailing the architectural design of the models for each dataset. Next, we elaborate on our label estimation algorithm, highlighting its main features and operation. Semi-supervised learning lies between supervised and unsupervised learning. It involves the use of a small amount of labeled data in conjunction with a large amount of unlabeled data during the training process. This approach is particularly beneficial in scenarios where labeled data is scarce or expensive to obtain, but there is an abundance of unlabeled data. Consistency regularization, a key feature of many advanced semi-supervised learning algorithms, exploits unlabeled data based on the principle that a model should produce consistent predictions for different perturbed versions of the same image. This concept was initially introduced in an earlier work <cit.>, and has been more widely recognized in subsequent studies <cit.>. It is implemented by training the model with traditional supervised classification loss and an additional loss function that handles unlabeled data, thus improving the model's ability to learn from a wider spectrum of data. Another common approach in semi-supervised learning is pseudo-labeling <cit.>, where the model uses its predictions on unlabeled data to generate artificial labels. These pseudo-labels are then used in subsequent training to refine the model's performance. Our semi-supervised learning approach leverages the strength of both labeled and unlabeled data, strategically incorporating data augmentation to enhance model performance. This allows us to make efficient use of all available data, combining the reliability of labeled examples with the broader coverage provided by unlabeled examples, thus improving the efficiency and robustness of the model. Conversely, in our self-supervised learning approach, we exclusively rely on unlabeled data for training, omitting the use of labeled data. This method focuses on extracting meaningful patterns and structures from the data itself without direct guidance from explicit labels. To efficiently handle unlabeled data, our approach starts with pseudo-labeling, a critical step where we utilize our model on subtly altered data to generate pseudo-labels. This is instrumental in guiding the learning process, even without standard labeled data. What sets our label estimation process apart is its unique treatment of embeddings. Within each block of size n of the embedding space (embedding refers to the output of the model), we identify the maximum value and assign it a label of 1, while all other elements within the block are set to 0, following the principle of competition presented in Section II. This selective activation within the embedding space effectively highlights the most prominent features or characteristics captured by each block, acting as a form of dimensionality reduction and targeted feature amplification. Subsequently, we expose the same batch of data to strong deformations, such as extensive augmentation or distortion, while utilizing the estimated label as the target. This method effectively challenges the model to maintain its predictions under more significant variations, thereby enhancing its robustness and ability to generalize from complex or noisy data. 
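The block-wise label estimation step can be stated compactly in code. The sketch below is a NumPy rendering of the rule just described (the maximum of each block is set to 1 and the rest to 0); the embedding width and block size n are illustrative placeholders.

```python
import numpy as np

def estimate_pseudo_labels(embeddings, block_size):
    """Within each block of the embedding, set the maximum to 1 and all others to 0."""
    batch, width = embeddings.shape
    blocks = embeddings.reshape(batch, width // block_size, block_size)
    winners = blocks.argmax(axis=-1)[..., None]
    labels = np.zeros_like(blocks)
    np.put_along_axis(labels, winners, 1.0, axis=-1)
    return labels.reshape(batch, width)

# Toy usage: embeddings produced by the model on weakly altered inputs.
rng = np.random.default_rng(0)
weak_embeddings = rng.normal(size=(8, 40))      # e.g. 10 blocks of size n = 4
pseudo = estimate_pseudo_labels(weak_embeddings, block_size=4)

# The same batch, strongly augmented, is then trained against `pseudo`
# with a BCE (MNIST) or MSE (CIFAR-10) loss, as described in the text.
print(pseudo.sum(axis=1))                       # exactly one '1' per block
```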
The complete learning process is illustrated in Figure <ref>. It is worth noting that for the labeled batch, nothing changes compared to other supervised learning approaches. Labeled data continue to serve as a reliable source of ground truth, anchoring the model learning with trustable examples. This combination of reliable labeled data with a creative use of unlabeled data allows our model to benefit from the full spectrum of available information, leading to more effective and comprehensive learning outcomes. §.§ A. Datasets The MNIST dataset <cit.> is a well-known benchmark in the field of machine learning. It consists of 60,000 handwritten digits (0-9), with each digit represented as a 28 × 28 pixels grayscale image. This dataset is commonly used for tasks related to digit recognition and image classification, making it a fundamental resource for testing and developing various machine learning algorithms. The CIFAR-10 dataset <cit.> is another widely used dataset in computer vision. It contains 60,000 colored images, divided into 10 classes, with each class representing various everyday objects or animals, such as cars, birds, or cats. These images are relatively small, 32 × 32 pixels in size, and serve as a valuable benchmark for testing image classification and deep learning models due to their intrinsic diversity and complexity. §.§ B. Model Architecture Our models dedicated to self- and semi-supervised classification on the MNIST and CIFAR-10 datasets use the same general two-part architecture, consisting of an encoder in charge of the features extraction, followed by a Multi-Layer Perceptron (MLP) classifier. §.§.§ Encoder The encoder architecture is tailored to the target dataset. For MNIST ( see Fig. <ref>): The encoder features two convolutional layers, each followed by a ReLU activation function. At the output of each convolutional layer, max pooling is applied to reduce the feature maps' spatial dimensions, thereby decreasing computational complexity and parameters, and enhancing network efficiency. For CIFAR-10 (Fig. <ref>): To increase learning efficiency for this more challenging dataset, the simple encoder for MNIST is replaced with ResNet-18. ResNet-18 is a deeper Residual Network variant, that incorporates residual connections to facilitate deeper network training, improving image classification performance on complex datasets like CIFAR-10. §.§.§ MLP The architecture of our MLP classifier relies on an innovative structure with two distinct sparse layers and specialized processing blocks. The first sparse layer, inserted after the last max pooling stage, achieves a sparsity level of 85%, allowing the network to focus on essential features, thus reducing overfitting and improving efficiency. Coming next, the Add-Compare-Select (ACS) function introduces a competition among neurons, activating only the one with the highest value. A second sparse layer follows, with an increased sparsity of 96%, after which we may find an Add-Normalize-Compare-Select (ANCS) function, but for MNIST only. The ANCS function extends the ACS functionality by incorporating block-by-block normalization. To enable this local normalization, the weights of the final sparse layer are made positive through the utilization of the ReLU activation function. § EXPERIMENTS The performance of the proposed learning architecture has been evaluated on MNIST in a self-supervised learning context, as well as on CIFAR10 in a semi-supervised configuration. 
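Before turning to the experimental details, the sketch below gives one possible PyTorch rendering of the MNIST pipeline described in the previous subsection: a two-convolution encoder followed by two fixed random sparse layers with ACS and ANCS competition. The channel counts, layer widths, block size, and the exact form of the ANCS block normalization are placeholders and approximations, not the authors' settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLinear(nn.Linear):
    """Linear layer with a fixed random binary mask; optionally rectified (positive) weights."""
    def __init__(self, n_in, n_out, sparsity, positive=False):
        super().__init__(n_in, n_out)
        self.positive = positive
        self.register_buffer("mask", (torch.rand(n_out, n_in) >= sparsity).float())

    def forward(self, x):
        w = F.relu(self.weight) if self.positive else self.weight
        return F.linear(x, w * self.mask, self.bias)

def block_wta(a, block_size, normalize=False):
    """ACS: keep only the winner of each block. ANCS adds a per-block normalization first."""
    b, w = a.shape
    blocks = a.view(b, w // block_size, block_size)
    if normalize:                                   # rough stand-in for the ANCS normalization
        blocks = blocks / blocks.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    winners = blocks.argmax(dim=-1, keepdim=True)
    out = torch.zeros_like(blocks).scatter_(2, winners, blocks.gather(2, winners))
    return out.reshape(b, w)

class MnistNet(nn.Module):
    def __init__(self, embed_dim=400, block=4):     # placeholder sizes
        super().__init__()
        self.block = block
        self.conv1 = nn.Conv2d(1, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.sparse1 = SparseLinear(64 * 7 * 7, 512, sparsity=0.85)
        self.sparse2 = SparseLinear(512, embed_dim, sparsity=0.96, positive=True)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 14x14 -> 7x7
        x = x.flatten(1)
        x = block_wta(self.sparse1(x), self.block)                    # ACS competition
        x = block_wta(self.sparse2(x), self.block, normalize=True)    # ANCS (MNIST only)
        return x

net = MnistNet()
print(net(torch.randn(2, 1, 28, 28)).shape)         # torch.Size([2, 400])
```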
§.§ Implementation Details

To ensure consistent, systematic learning and model refinement for both MNIST and CIFAR-10, we found it crucial to organize the training into cycles, epochs, and batches. Cycles ensure complete dataset coverage, epochs allow full iterations over all data batches, and batches enable efficient data processing and incremental model updates, collectively facilitating continuous model improvement. It is important to note that each epoch uses different data augmentation strategies, further enriching the learning and ensuring that the model encounters diverse data representations throughout its training.

For the MNIST dataset, the model is trained over 100 cycles, each cycle consisting of five epochs. At the beginning of each cycle, pseudo-label estimation is performed once, and the resulting pseudo-labels are used for the subsequent five epochs of training. For training, we use the Adam optimizer and a linear learning rate schedule whose damping coefficient decays from 1.0 to 0.001, starting from an initial learning rate of 0.0015. The model's robustness and adaptability are further enhanced by a series of data augmentation techniques: rotation, elastic distortion, random erasing, and center cropping. The Binary Cross-Entropy (BCE) loss is used to measure the difference between predictions and pseudo-labels, paired with a local normalization strategy for block-by-block processing. This comprehensive approach ensures robust learning and the model's effectiveness in discerning complex patterns throughout its extensive training.

Similarly, for CIFAR-10, our model undergoes 300 training cycles, each comprising five epochs, to engage deeply with the complexity of the dataset. At the core of our model is the ResNet-18 encoder, selected for its robust feature extraction capabilities. Stochastic Gradient Descent (SGD) with Nesterov momentum is used to optimize the model, starting with an initial learning rate of 0.03. The learning rate is modulated across the 300 cycles by means of a cosine decay schedule of the form 0.03 cos(8π s / 16S), where s and S represent the current and final training steps, respectively. An MSE loss function is used for both labeled and unlabeled batches. For labeled data, the MSE loss quantifies the difference between the model's predictions and the true labels; for unlabeled data, it measures the discrepancy between predictions and the generated pseudo-labels. The labeled and unlabeled datasets are denoted by X and U, respectively. The batch size for X is set to 64, and the batch size for U is set to 8 times that of X, i.e., 8 × 64 = 512. In terms of data augmentation, our implementation strictly adheres to the strong and weak augmentation strategies outlined in FixMatch <cit.>.

§.§ Evaluation

Our evaluation strategy for the MNIST dataset relies on two distinct methods to assess the performance of our model using the embeddings as features. First, we use K-means clustering <cit.> to categorize the embeddings from the test set into 10 clusters; each cluster is mapped onto one of the ten digits based on the majority label of its member embeddings. Second, we randomly select one labeled instance from each class in the training set as a representative and measure the similarity between the selected instances and the embeddings of the test set to assign class labels. For the CIFAR-10 dataset, the evaluation relies solely on the second method.
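Returning briefly to the CIFAR-10 optimisation settings above, the learning-rate schedule can be written down directly; in the sketch below the momentum value, the total number of steps, and the stand-in model are illustrative assumptions, not our exact configuration.

```python
import math
import torch
from torch import nn, optim

model = nn.Linear(10, 10)                         # stands in for the ResNet-18 encoder + MLP head
total_steps = 300 * 5 * 100                       # cycles * epochs * steps per epoch (illustrative)

optimizer = optim.SGD(model.parameters(), lr=0.03, momentum=0.9, nesterov=True)
# Cosine decay of the form 0.03 * cos(8*pi*s / (16*S)), applied as a multiplier on the base LR.
scheduler = optim.lr_scheduler.LambdaLR(
    optimizer, lambda s: math.cos(8 * math.pi * s / (16 * total_steps)))

for step in range(5):                             # a few dummy steps just to exercise the schedule
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(torch.randn(64, 10)), torch.rand(64, 10))
    loss.backward()
    optimizer.step()
    scheduler.step()
    print(step, scheduler.get_last_lr())
```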
We choose a single representative labeled example per class, and then assess model performance based on the similarity between the selected examples and the test embeddings. This more focused approach adopted for CIFAR-10 allows us to take advantage of the more complex and varied nature of the dataset, aligning the evaluation strategy with the specific challenges and characteristics inherent to CIFAR-10 images. As for the experimental design, we conducted five distinct experiments for both self-supervised and semi-supervised learning methods to confirm the robustness and reliability of our results. Each experiment begins with random initialization of the weights and sparse layers' connections for each network, to avoid any initialization bias and create network diversity. Within each cycle, data augmentation is introduced in a stochastic manner over different sets of training images. Last but not least, to assess the outcomes, we use a dual strategy that combines taking the majority vote from the five models, and computing the average accuracy across different labeled data sets. §.§ Results In Table <ref>, we present the classification accuracy achieved on the MNIST dataset by the proposed self-supervised learning approach. The Table reports the average accuracy across five different labeled data (avg acc), as well as the collective decision-making accuracy obtained by majority vote among the five networks with the same labeled data (maj. vote). Two distinct methodologies were used to assess model performance: one using K-means clustering ('K-means(%)') and the other leveraging cosine similarity ('cosine sim(%)') for each network. Together these metrics provide a clear and comprehensive view of the model's performance, demonstrating the effectiveness of both individual networks and a collective ensemble approach in accurately classifying MNIST digits. [1]Average accuracy (avg acc): this is the mean of the accuracy values obtained from five networks (five distinct instances of the model). [2]Majority vote (maj. vote): this method aggregates the votes from the five networks to decide the class label. Table <ref> presents a comparison of the classification accuracy obtained for semi-supervised learning on CIFAR10 with various well-established as well as more recent methods, including the Π-Model <cit.>, Pseudo-Labeling<cit.>, Mean Teacher<cit.>, UDA<cit.>, MixMatch<cit.>, FixMatch (RA)<cit.>, and Dash<cit.>, each referenced accordingly. The Table also showcases the performance of our method, both in terms of average accuracy (avg. acc) and majority vote accuracy (Maj. Vote). At this early stage of our research on CIFAR-10 classification, the results obtained with our semi-supervised learning method are quite competitive, achieving an accuracy of 94.21% with majority vote. What sets our approach apart is the strategic use of sparsity layers, which significantly reduces the number of parameters in the model, thereby increasing its efficiency and speed. Despite the current state-of-the-art for CIFAR-10 being held by Dash with an accuracy of 95.44%, our method stands out by its promising performance as well as its efficiency. Part of this efficiency gain can be attributed to our use of sparse layers, making our approach an attractive and resource-efficient choice to tackle the CIFAR-10 dataset. 
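The two aggregate metrics reported in the tables correspond to simple procedures that can be sketched as follows; the embedding dimension and the random arrays below are placeholders used only to make the snippet self-contained.

```python
import numpy as np

def classify_by_prototype(test_emb, proto_emb, proto_labels):
    """Assign each test embedding the label of the most cosine-similar class representative."""
    t = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    p = proto_emb / np.linalg.norm(proto_emb, axis=1, keepdims=True)
    return proto_labels[np.argmax(t @ p.T, axis=1)]

def majority_vote(predictions):
    """Combine per-network predictions (n_models x n_samples) into a single label per sample."""
    predictions = np.asarray(predictions)
    return np.array([np.bincount(col).argmax() for col in predictions.T])

test_emb = np.random.randn(100, 64)                # embeddings of 100 test images
proto_emb = np.random.randn(10, 64)                # one representative embedding per class
per_model_preds = [classify_by_prototype(test_emb, proto_emb, np.arange(10)) for _ in range(5)]
ensemble_preds = majority_vote(per_model_preds)
```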
The residual accuracy gap of 1.93% only between our approach and the current leader leaves room for additional improvement, for example by further fine-tuning of hyperparameters and other model enhancements. § CONCLUSION It is possible to design and operate an artificial neural network without having to understand all its components and behavior in detail. However, there are critical applications, such as autonomous driving or medical diagnostics, which require total control over the explainability of operations performed and decisions made, especially when the network makes mistakes <cit.>. One way of ensuring that algorithms behave as desired is to introduce principles and functions that have proved their worth in other technological fields. In this paper, we have compiled a list of concepts from which the telecommunications field has benefited greatly. Among these, it seemed relevant to us to combine redundant coding and spatial diversity in a relatively simple learning architecture integrating competition layers and sparse matrices. The results obtained with self-supervised learning experiments on the MNIST dataset are convincing, with an inference accuracy higher than the previous state-of-the-art. Semi-supervised learning simulations have also been conducted on the more challenging CIFAR-10 dataset. To date, the results are not quite up to the state-of-the-art, yet very close. This is even more promising, as they were obtained without the need for sophisticated mathematical processing. Therefore, our future work will focus on finding the reasons for this performance gap between self-supervised and semi-supervised applications. Throughout our work, we have observed that the classification performance of CIFAR-10 images is very sensitive to the values of hyperparameters, in particular sparsity rates and learning rate, which we have not been able to refine completely. Other avenues could also be investigated: increasing the number of competition-based layers, merging the X and U batches into a single one, replacing binarization (hard decision) with a more progressive function (soft decision), still to be determined. 00 b1 T. G. Dietterich and G. Bakiri. "Error-correcting output codes: A general method for improving multiclass inductive learning programs", The Mathematics of Generalization. CRC Press, pp. 395-407, 2018. b2 L. K. Hansen and P. Salamon, "Neural Network Ensembles", IEEE Trans. on pattern analysis and machine intelligence, vol. 12, n° 10, pp. 993-1001, Oct. 1990. b3 X. Dong et al. "A survey on ensemble learning", Comput. Sci., 2020, vol. 14, n° 2, pp. 241–258, 2020 b4 R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, J. Schmidhuber, "Compete to compute", Advances in Neural Information Processing Systems (NIPS), pp. 2310-2318, 2013. b5 S. Seeman et al., "Sparse recurrent excitatory connectivity in the microcircuit of the adult mouse and human cortex", elife, 7, e37349, 2018. b6 C. Richard et al. "Sparsity through evolutionary pruning prevents neuronal networks from overfitting", Neural Networks vol. 128, pp. 305-312, 2020. b7 Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. "Regularization with stochastic transformations and perturbations for deep semi-supervised learning", Advances in neural information processing systems, 29:1163 1171, 2016. b8 W. Zhang, L. Zhu, J Hallinan, S. Zhang, A. Makmur, Qingpeng Cai, and B. Chin Ooi. "Boostmis: Boosting medical image semi-supervised learning with adaptive pseudo labeling and informative active annotation". In CVPR, 2022. 
b9 T Miyato, S. Maeda, M. Koyama, and S. Ishii. "Virtual adversarial training: a regularization method for supervised and semi-supervised learning". IEEE TPAMI, 2018. b10 D. Lee et al. "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks". In Workshop on challenges in representation learning, ICML, volume 3, page 896, 2013. b11 Y. LeCun, C. Cortes, and C. J. Burges. "The MNIST database of handwritten digits". 1998. b12 A. Krizhevsky, and G. Hinton. "Learning multiple layers of features from tiny images". Master's thesis, Department of Computer Science, University of Toronto. 2009. b13 K. Sohn, D. Berthelot, C. Li, Z. Zhang, N. Carlini, E. D Cubuk, A. Kurakin, H. Zhang, and C. Raffel. "Fixmatch: Simplifying semisupervised learning with consistency and confidence". arXiv preprint arXiv:2001.07685, 2020. b14 G. James , D. Witten , T. Hastie , R. Tibshirani. "An Introduction to Statistical Learning". Springer. 2013. b15 A. Byerly, T. Kalganova, and I. Dear. "No routing needed between capsules. Neurocomput". 463, C (Nov 2021), 545–553. https://doi.org/10.1016/j.neucom.2021.08.064, 2021. b16 X. Ji, J. F. Henriques, A. Vedaldi. "Invariant Information Clustering for Unsupervised Image Classification and Segmentation". ICCV. 2019 b17 A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. "Semi-supervised learning with ladder networks". In Advances in Neural Information Processing Systems, 2015. 5, 6 b18 D.-H. Lee. "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks". In ICML Workshop on Challenges in Representation Learning, 2013. b19 A. Tarvainen and H. Valpola. "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results". In Advances in neural information processing systems, 2017. b20 Q. Xie, Z. Dai, E. Hovy, M.-T. Luong, and Q. V. Le. "Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848, 2019. b21 D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. A. Raffel. "Mixmatch: A holistic approach to semi-supervised learning". In Advances in Neural Information Processing Systems, pages 5050–5060, 2019. b22 Y. Xu, L. Shang, J. Ye, Q. Qian, Y. Li, B. Sun, H. Li, and R. Jin. "Dash: Semi-supervised learning with dynamic thresholding". In International Conference on Machine Learning, pages 11525–11536. PMLR, 2021. b23 R. Saleem et al., "Explaining deep neural networks: A survey on the global interpretation methods", Neurocomputing, pp. 165-180, 2022. b24 I. E. Nielsen, "Robust explainability: A tutorial on gradient-based attribution methods for deep neural networks", IEEE Signal Processing Magazine, vol. 39, n° 4, pp. 73-84, 2022.
http://arxiv.org/abs/2407.12921v1
20240717180014
Finite de Finetti bounds in relative entropy
[ "Lampros Gavalakis", "Oliver Johnson", "Ioannis Kontoyiannis" ]
math.PR
[ "math.PR", "cs.IT", "math.IT" ]
Finite de Finetti bounds in relative entropy

Lampros Gavalakis (Univ Gustave Eiffel, Univ Paris Est Creteil, CNRS, LAMA UMR8050 F-77447 Marne-la-Vallée, France. Email: lampros.gavalakis@univ-eiffel.fr. L.G. has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 101034255 and by the Bézout Labex, funded by ANR, reference ANR-10-LABX-58.)
Oliver Johnson (School of Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UG, U.K. Email: O.Johnson@bristol.ac.uk.)
Ioannis Kontoyiannis (Statistical Laboratory, DPMMS, University of Cambridge, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WB, U.K. Email: yiannis@maths.cam.ac.uk.)

July 22, 2024

§ ABSTRACT
We review old and recent finite de Finetti theorems in total variation distance and in relative entropy, and we highlight their connections with bounds on the difference between sampling with and without replacement. We also establish two new finite de Finetti theorems for exchangeable random vectors taking values in arbitrary spaces. These bounds are tight, and they are independent of the size and the dimension of the underlying space.

§ INFORMATION IN PROBABILITY
Early history. Classical information theory is built on probability theory. Not only is its core toolbox and vocabulary probabilistic but, since its early days, sophisticated probabilistic techniques were effectively utilised in many core information-theoretic problems.
For example, what is arguably the crowning achievement of information theory – Shannon's channel coding theorem <cit.>– is proved with one of the earliest applications of the “probabilistic method” <cit.>. Since then, there has been a consistent influx of modern probabilistic ideas and tools, employed in a very rich manner for the analysis of information-theoretic questions. Equally remarkable has been the mathematical “traffic” in the reverse direction. Information-theoretic results and intuition have found applications in the study of a rapidly growing number of fundamental probabilistic phenomena. In most cases, information theory not only informs our understanding of different aspects of stochastic behaviour, but it also offers new avenues for exploring this behaviour rigorously. In 1958, Hájek used the properties of the relative entropy to explore the absolute continuity between Gaussian measures <cit.>. But the first major results established by the application of information-theoretic tools for the proof of purely probability-theoretic results came in 1959, when Linnik suggested an information-theoretic proof of the central limit theorem <cit.>, and the following year, in 1960, when Rényi used information-theoretic methods to prove that finite-state Markov chains with all positive transitions convergence to equilibrium in relative entropy <cit.>. Then in the 1970s and 80s, Csiszár used his `method of types' <cit.> to establish strong versions of a number of the standard results in large deviations <cit.>. Barron's influence. Over the past 40 years, Andrew Barron has played a major role in promoting the information-theoretic approach to probabilistic fundamentals. Here we briefly outline some of his main contributions in this direction. The Shannon-McMillan-Breiman theorem is probably the most important foundational result of classical information theory, with important links to probability, statistics, ergodic theory, dynamical systems, and beyond. Andrew Barron entered the world of information theory in 1985 with his proof of the most general version of the Shannon-McMillan-Breiman theorem for general ergodic processes with densities <cit.>. In a very different direction, only a year later, he proved what by now is the most well-known version of what has come to be known as the information-theoretic central limit theorem <cit.>: He showed that the relative entropy between the distribution of the standardised sum of independent and identically distributed random variables, and the Gaussian law, converges to zero if and only if it is ever finite. This paper has been very influential, leading to a number of subsequent breakthroughs including <cit.> and <cit.>. Then in 1991, Barron gave an information-theoretic proof of the martingale convergence theorem for nonnegative martingales, and established a number of related convergence results for monotone sequences of σ-algebras <cit.>. Building on earlier work by Rényi <cit.> and Fritz <cit.>, in 2000 Barron established an elegant Markov chain convergence theorem. He used information-theoretic arguments to show that, for a reversible Markov chain (on a general state space) with a unique invariant measure, the relative entropy between the time-n distribution of the chain and its invariant measure, converges to zero if it is ever finite <cit.>. 
And in 2007, in joint work with Madiman <cit.>, Barron developed a number of sharp inequalities for the entropy and Fisher information of sums of independent, continuous random variables, drawing interesting parallels with functional-analytic and statistical considerations. It is certainly fair to say that, at least between the mid-1980s and the late 1990s, Andrew Barron was the main driving force of the information-theoretic approach to probabilistic fundamentals. More information in probability. Over the past 30 years, many more fascinating connections have been established between information-theoretic ideas and core probabilistic results. We only mention some of them very briefly; more details can be found in the relevant historical discussion in <cit.>, and in the earlier reviews by Barron <cit.>, Csiszár <cit.>, and the text <cit.>. The so-called entropy method, introduced Herbst <cit.> and developed by Marton <cit.> and Ledoux <cit.>, has been one of the key approaches to proving concentration of measure inequalities <cit.>, sometimes also in connection with ideas from optimal transport <cit.>. Poisson approximation <cit.> and compound Poisson approximation <cit.> have been extensively studied via an information-theoretic lens. The profound relationship between information theory and functional-analytic inequalities has a long history, dating back to the work of Shannon <cit.>, Stam <cit.> and Blachman <cit.> on the entropy power inequality <cit.>. These include re-interpretations of the Gross' logarithmic Sobolev inequality <cit.>, the Brascamp-Lieb inequality <cit.>, and connections with high-dimensional convex geometry <cit.>. The deep connections between the convergence of diffusions, estimation, relative entropy, and Fisher information, have led to the development of a rich web of results known as `Bakry-Émery theory' <cit.>; see also the relevant work by Brown <cit.>, Barron <cit.> and Guo et al. <cit.>. Motivated by fascinating developments in additive combinatorics and number theory, a series of entropy bounds have been developed by, among others, Ruzsa <cit.> and Tao <cit.>. More recent work in this direction includes <cit.>. Elementary information-theoretic results were applied to some classical questions in probabilistic number theory in <cit.>. Finally, free entropy plays a major role in the noncommutative probability theory developed by Voiculescu <cit.>. § DE FINETTI'S REPRESENTATION THEOREM A random vector X_1^n:=(X_1,…, X_n) is exchangeable if its distribution is invariant under permutations of the indices {1,…,n}. A process {X_n ; n≥ 1} is exchangeable if X_1^n is exchangeable for every n ≥ 1. [Throughout, we use the notation X_i^j to denote the vector of random variables (X_i,…,X_j), i≤ j, and the corresponding lower-case notation x_i^j for individual realisations (x_i,…,x_j) of X_1^j.] De Finetti's celebrated representation theorem, established in the 1930s, states that a binary process is exchangeable if and only if it is a mixture of independent and identically distributed (i.i.d.) sequences. Let {X_n} be an exchangeable process where each X_n takes values in {0,1}. Then there is a unique Borel probability measure μ on [0,1] such that, for every n≥ 1, (X_1^n = x_1^n) = ∫_[0,1]Q^n_p(x_1^n) dμ(p) = ∫_[0,1]∏_i=1^nQ_p(x_i) dμ(p), x_1^n∈{0,1}^n, where Q_p(1) = 1- Q_p(0) = p is the probability mass function of the Bernoulli distribution with parameter p. De Finetti's theorem holds much more generally. 
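Before stating the general result, here is a small numerical illustration of the binary representation. We take μ to be a Beta prior (an arbitrary choice, made only because the integral then has a closed form); the computed probabilities depend on a binary string only through its number of ones, as exchangeability requires.

```python
from math import lgamma, exp
from itertools import product

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def mixture_prob(x, a=2.0, b=3.0):
    """P(X_1^n = x) for the Beta(a, b) mixture of i.i.d. Bernoulli(p) laws:
    the integral of p^s (1-p)^(n-s) against the Beta(a, b) prior, in closed form."""
    n, s = len(x), sum(x)
    return exp(log_beta(a + s, b + n - s) - log_beta(a, b))

for x in product([0, 1], repeat=3):
    print(x, round(mixture_prob(x), 6))    # equal for strings with the same number of ones
```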
For a measurable space (S, 𝒮) we write for the space of probability measures on S, and for the smallest σ-algebra that makes the maps {π_A ; A ∈𝒮} measurable, where each π_A:(S)→[0,1] is defined by P↦π_A(P) := P(A). For a probability measure μ on (,) and k≥1, we write M_k,μ for the mixture of i.i.d. measures on (S^k,^k), defined by: M_k,μ(A):= ∫_Q^k(A) dμ(Q), A ∈𝒮^k. In the case when S is a finite set, we often identify probability measures P on S with the corresponding probability mass functions (p.m.f.s), so that P(x)=P({x}), x∈ S. Similarly we identify (S) with the corresponding simplex in ^c consisting of all probability vectors (P(x) ;x∈ S), and we equip (S) with the Borel σ-algebra generated by the open subsets of (S) in the induced subspace topology. The most general form of de Finetti's theorem is due to Hewitt and Savage: Let S be a compact Hausdorff space equipped with its Baire σ-algebra 𝒮. If {X_n} is an exchangeable process with values in S, then there exists a unique measure μ on the Baire σ-algebra of such that, for each k≥ 1, the law P_k of X_1^k admits the representation: P_k = M_k,μ, Recall that the Baire σ-algebra of a topological space B is the smallest σ-algebra that makes all continuous functions f:B→ measurable <cit.>. A key idea in the proof of Hewitt and Savage is the geometric interpretation of i.i.d. measures as the extreme points of the convex set of exchangeable measures, so that any point in this convex set can be expressed as a mixture of sufficiently many extreme points. This idea also plays an important role in the finite de Finetti regime discussed in the following sections. There is extensive literature extending de Finetti's theorem in a number of different directions. Although in this paper we focus on finite de Finetti bounds and their connection with sampling bounds, we briefly mention some other interesting connections. One exciting such connection is between de Finetti-style theorems and what in the probability literature is known as Gibbs' conditioning principle <cit.>, also referred to as the conditional limit theorem <cit.>. Diaconis and Freedman <cit.> showed that the first k coordinates of an orthogonally invariant random vector in ℝ^n are approximately mixtures of independent Gaussian random variables. This is very similar, in spirit, both to de Finetti's theorem and to the conditional limit theorem, which states the following: Suppose n is large and k is fixed, and let P̂_X_1^n denote the empirical measure induced by the i.i.d. vector X_1^n∼ P^n. Then, conditional on P̂_X_1^n belonging to an atypical set E of probability measures, the law of X_1^k is approximately equal to the i.i.d. law (P^*)^k, where P^* is the relative entropy-closest member of E to P. Exploring this connection further, a finite de Finetti theorem was proved in <cit.> using a finite form of the conditional limit theorem. Interestingly, one of the information-theoretic proofs <cit.> recently developed (see Section <ref>) is also based on the proof of the conditional limit theorem. Another fascinating area is that of exchangeable random graphs, which is also connected to the well known array-version of de Finetti's theorem, known as the Aldous-Hoover theorem; see. e.g., <cit.> and references therein. Extensions of de Finetti-style theorems to mixtures of Markov chains were also established by Diaconis and Freedman <cit.>. A conjecture on partial exchangeability made in that work was recently proven in <cit.>. 
Finally, we mention that there has been renewed interest in exchangeability in statistics, in part motivated by the success of conformal prediction and related methods <cit.>, leading among other things to the notion of weighted exchangeability; see, e.g., the recent work in <cit.>. § FINITE DE FINETTI BOUNDS IN TOTAL VARIATION De Finetti's theorem offers an explicit characterisation of exchangeable processes, which is general, natural, and useful. Historically, it has also been viewed as a powerful justification of the subjective probability point of view in Bayesian statistics <cit.>. In this context, it is interpreted as stating that, an exchangeable binary sequence, for example, can equivalently be viewed as the realisation of an i.i.d. Bernoulli sequence, conditional on the value of the Bernoulli parameter, which is distributed according to a unique prior distribution. In terms of applications, it is natural to ask whether de Finetti's theorem also holds for finite exchangeable sequences. The answer is “yes and no”. Strictly speaking, the exact representation of the distribution of a finite exchangeable vector as a mixture of product distributions does not hold in general, but it does hold approximately. Consider, e.g., a pair (X_1,X_2) of binary random variables with, (X_1=0, X_2 = 1) = (X_1 = 1, X_2 = 0) = 1/2. The random vector (X_1,X_2) is clearly exchangeable, but if a representation like (<ref>) were true for some probability measure μ on [0,1], then we would have, ∫_[0,1]p^2 dμ(p) = ∫_[0,1] (1-p)^2 dμ(p) = 0, which implies that μ({0}) = μ({1}) = 1, a contradiction. The example (<ref>) was given by Diaconis <cit.>. In that work, the set of exchangeable binary measures for which a de Finetti-style representation fails was interpreted in a geometric way, and it was observed that the volume of the region representing those measures decreases, in the following sense: If X_1^k are the first k coordinates of a longer exchangeable binary sequence X_1^n, then for n significantly larger than k the distribution of X_1^k is close to a product distribution. More specifically, it was shown that for each k≤ n there exists a mixing measure μ_n, depending on n but not on k, such that the distribution P_k of X_1^k satisfies, P_kM_k,μ_n≤C_k/n, where C_k is a constant depending only on k. Here the total variation distance (TV) between two measures μ, ν∈ is defined as: μν := 2sup_A ∈𝒮|μ(A) - ν(A)|. Not coincidentally, as we will see below, the hypergeometric probabilities appear in the geometric proof of (<ref>) as the extreme points of the convex set of exchangeable measures embedded in ℝ^k. Results of the form (<ref>) are referred to as finite de Finetti theorems, and they typically state that, if X_1^n is exchangeable and k is small compared to n, then the distribution of X_1^k is in some sense close that of of an i.i.d. mixture. The binary assumption was removed and the the sharpest rates were obtained in Diaconis and Freedman <cit.>, for exchangeable random vectors with values in an arbitrary measurable space: Let X_1^n be an exchangeable random vector with values in a measurable space (S,𝒮). Then there exists a probability measure μ_n on (,) such that, for every k ≤ n, the distribution P_k of X_1^k satisfies, P_kM_k,μ_n≤k(k-1)/2n, where M_k,μ_n is the mixture of product distributions in  (<ref>). 
Unlike the geometric proof of (<ref>), the proof of (<ref>) was based on the elegant connection of finite de Finetti representations with bounds on the difference of sampling without and with replacement, described next. We will make use of a similar argument in Section <ref>, adapted for relative entropy. Empirical measures and types. Let P̂_X_1^n denote the empirical measure induced on (S,𝒮) by the random vector, namely, P̂_X_1^n := 1/n∑_i=1^nδ_X_i, where δ_x denotes the Dirac measure that places a unit mass at x∈ S. Similarly, let P̂_x_1^n denote the empirical measure induced by a fixed string x_1^n∈ S^n. We refer to P̂_x_1^n as the type of x_1^n. If Q is a p.m.f. on a finite set S_0, or a probability measure supported on a finite subset S_0⊂ S, we call Q an n- type if nQ(x) is an integer for each x∈ S_0. Exchangeability and sampling. The key observation here is the following: Let X_1^n be an exchangeable random vector. Then, conditional on P̂_X_1^n=Q, the distribution of X_1^n is uniform on the set of sequences x_1^n with the same type Q. Moreover, for any k≤ n, the distribution P_k of X_1^k is the distribution of sampling without replacement from an urn containing the balls x_1,…,x_n, where x_1^n is any string with P̂_x_1^n=Q. Therefore, letting μ_n denote the law of P̂_X_1^n on ((S),), we have, P_k(A)=∫ h(Q,n,k;A) dμ_n(Q), A∈^k, where h(Q,n,k;·) denotes the multivariate hypergeometric law of drawing k balls without replacement from an urn containing the balls x_1, x_2, …, x_n, where x_1^n is any string in S^n with type P̂_x_1^n=Q. Similarly, the mixture M_k,μ_n with respect to the same mixing measure μ_n can be written, M_k,μ_n(A)=∫ b(Q,n,k;A) dμ_n(Q), A∈^k, where b(Q,n,k;·)=Q^k(·) denotes the multinomial law of drawing k balls with replacement from an urn containing x_1, x_2, …, x_n, for a string x_1^n∈ S^n with type P̂_x_1^n=Q. In view of the expressions (<ref>) and (<ref>), comparing P_k with M_k,μ_n reduces to comparing the hypergeometric and multinomial distributions, as described in more detail in Section <ref>. Note that, as mentioned in <cit.>, the rigorous justification of the representations (<ref>) and (<ref>) follows from the measurability of P̂_X_1^n discussed in the Appendix, and the obvious measurability of the p.m.f.s h(Q,n,k;·) and b(Q,n,k;·) as functions of Q=P̂_X_1^n. The above argument strongly indicates that the “natural" mixing measure to consider for finite de Finetti theorems is the law of the empirical measure P̂_X_1^n induced by X_1^n. Diaconis and Freedman also showed that the O(k^2/n) rate of Theorem <ref> can be improved to O(k/n) when the space S is finite: Let X_1^n be an exchangeable random vector with values in a finite S of cardinality c. Then there exists a Borel probability measure μ_n on such that, for every k≤ n, the distribution P_k of X_1^k satisfies, P_kM_k,μ_n≤2ck/n, where M_k,μ_n is the mixture of product distributions in  (<ref>). The rates in both bounds (<ref>) and (<ref>) were shown in <cit.> to be tight. We explain the tightness of (<ref>) at the end of Section <ref> below, as it follows from the tight approximation of sampling without and with replacement. § SAMPLING BOUNDS IN TOTAL VARIATION Consider an urn containing n balls, each ball having one of c≥ 2 different colours, and suppose we draw out k≤ n of them. 
Beyond the motivation offered above in connection with de Finetti-style representations, comparing the distributions of sampling with and without replacement is a fundamental problem with a long history in probability and statistics; see, e.g., <cit.>. Intuitively, if the number n of balls is large compared to the number k of draws, there should only be a negligible difference in the results of sampling with and without replacement. Write = (ℓ_1,ℓ_2, …, ℓ_c) for the vector representing the number of balls of each colour in the urn, so that there are ℓ_j balls of colour j, 1≤ j≤ c and ℓ_1+ℓ_2+⋯+ℓ_c = n. Let =(s_1,s_2,…,s_c) denote the vector of the numbers of balls of each colour drawn, so that s_1+s_2+⋯+s_c = k. When sampling without replacement, the probability that the colours of the k balls drawn are given by is given by the multivariate hypergeometric p.m.f., H(,n,k;) := ∏_j=1^c ℓ_is_j/nk for all with 0≤ s_j≤ℓ_j for all j, and s_1+⋯+s_c=k. On the other hand, the corresponding p.m.f. B(,n, k; ) of sampling with replacement, is the multinomial, B(, n, k; ) := ks_1,…,s_c∏_j=1^c ( ℓ_j/n)^s_j, for all with s_j≥ 0 and s_1+⋯+s_c=k, where ks_1,…,s_c = k!/∏_j=1^cs_j! is the multinomial coefficient. Note that the p.m.f.s H and B in (<ref>) and (<ref>) involve only the numbers of balls of each colour that are drawn, whereas the corresponding distributions h and b defined in the previous section are over the entire sequence of colours drawn from the urn. Of course the two are simply related: Suppose the composition of the urn is described by the vector or, equivalently, by the n-type Q^() defined by Q^()(j):=ℓ_j/n, for each colour j=1,2,…,c. For the sake of simplicity (and without loss of generality), take the set of colours S to be S={1,2,…,c}. Then, by the definitions of H,B,h and b, for any x_1^k∈ S^k, we have, h(Q^(),n,k;x_1^k) = ks_1,…,s_c^-1 H(,n,k;) b(Q^(),n,k;x_1^k) = ks_1,…,s_c^-1 B(,n,k;), where is the composition of x_1^k, i.e., each s_j is the number of occurrences of j in x_1^k, 1≤ j≤ c. Diaconis and Freedman established the following bound between h and b, and used it to prove Theorem <ref> using the connection between exchangeability and sampling explained in the previous section. Let h(Q,n,k;·) and b(Q,n,k;·) denote p.m.f.s of sampling k balls without and with replacement, respectively, from an urn containing n balls of c different colours, where the n-type Q describes the composition of the balls in the urn. Then: h(Q,n,k;·)b(Q,n,k;·)≤2ck/n. However, if one wants a bound that is independent of the number of colours, then one has to pay a factor k in the bound: In the notation of Theorem <ref>, suppose k balls are drawn from an urn containing n balls of n different colours, so that c=n and Q=Q_U with Q_U(j)=1/n for each j=1,2,…,n. Then: 2(1 - e^-k(k-1)/2n) ≤h(Q_U,n,k;·)b(Q_U,n,k;·)≤k(k-1)/n. The proof of (<ref>) is based on considering the set, B = {x_1^k : x_i=x_j for some 1≤ i<j≤ k}, and noting that h(Q_U,n,k;x_1^k) = 0 for all x_1^k∈ B, which implies, 1/2h(Q_U,n,k;·)b(Q_U,n,k;·) = 1 - n!/(n-k)!n^k. Getting back to de Finetti's theorem, Diaconis and Freedman <cit.> show the inequality, h(Q_U,n,k;·)M_k,μ(·)≥h(Q_U,n,k;·)b(Q_U,n,k;·) for any mixing measure μ. Therefore, since h(Q_U,n,k;·) is the distribution of X_1^k when X_1^n is the (exchangeable) vector obtained by a random permutation of S={1,…,n}, the sharpness of (<ref>) follows from the sharpness of the upper bound in (<ref>). 
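These total variation bounds are easy to check numerically in the uniform-urn case. The sketch below computes the distance by brute-force enumeration (in the 2·sup convention used here), and compares it with the exact expression and with the lower and upper bounds quoted above; the values of n and k are arbitrary.

```python
from itertools import product
from math import factorial, exp

n, k = 8, 3
colors = range(n)

def h_uniform(x):
    """Sampling k balls without replacement from an urn with n distinct balls."""
    return factorial(n - k) / factorial(n) if len(set(x)) == k else 0.0

def b_uniform(x):
    """Sampling k balls with replacement from the same urn."""
    return 1.0 / n**k

tv = sum(abs(h_uniform(x) - b_uniform(x)) for x in product(colors, repeat=k))
exact = 2 * (1 - factorial(n) / (factorial(n - k) * n**k))
print(tv, exact)                                       # both equal 2 * (1 - n! / ((n-k)! n^k))
print(2 * (1 - exp(-k * (k - 1) / (2 * n))), k * (k - 1) / n)   # lower and upper bounds
```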
§ SAMPLING BOUNDS IN RELATIVE ENTROPY For μ, ν∈, the relative entropy between μ and ν is defined as, D(μν) := ∫_Sdμ/dνlogdμ/dν dν, if μ≪ν, and D(μν) = ∞ otherwise, where dμ/dν stands for the Radon-Nikodým derivative of μ with respect to ν, and where log denotes the natural logarithm throughout. In particular, if S is discrete and P,P' are the p.m.f.s corresponding to μ,ν, then, D(μν)=D(PP')=∑_x∈ S:P(x)>0P(x)logP(x)/P'(x). In view of Pinsker's inequality <cit.>, μν≤[2D(μν)]^1/2, relative entropy is considered a stronger notion of “distance" than total variation. It is 0 if and only if μ = ν, and it is locally quadratic around μ=ν <cit.>. Moreover, although not a proper metric, relative entropy is often thought of as a notion of distance between the two measures μ and ν, justified in part by important results in probability and statistics <cit.>. The difference between sampling with and without replacement has also been studied in terms of relative entropy: Let H(,n,k;·) and B(,n,k;·) denote p.m.f.s of sampling without and with replacement from an urn with balls of c colours, as in  (<ref>) and  (<ref>). Then, for any and any k≤ n: D(H(,n,k;·)B(,n,k;·)) ≤(c-1)k(k-1)/2(n-1)(n-k+1). Based on (<ref>) and (<ref>), Stam <cit.> observed that, D(h(Q^(),n,k;·)b(Q^(),n,k;·)) = D(H(,n,k;·)B(,n,k;·)), so that we also have: D(h(Q^(),n,k;·)b(Q^(),n,k;·)) ≤(c-1)k(k-1)/2(n-1)(n-k+1). Moreover, Stam established a closely matching lower bound, showing that the O(k^2/n^2) upper bound in Theorem <ref> is of optimal order in terms of its dependence on k and n, but in general it can be improved: Harremoës and Matúš <cit.> showed that: D(H(,n,k;·)B(,n,k;·)) ≤ (c-1) ( log( n-1/n-k) - k/n + 1/n-k+1). In the special case c=n as in Theorem <ref>, an exact expression is derived in <cit.>, which is interesting to compare with (<ref>) above: D(H(,n,k;·)B(,n,k;·)) =log(n^k(n-k)!/n!). Note that all the bounds in (<ref>), (<ref>) and (<ref>) hold uniformly in . Sharper bounds can be obtained if we allow dependence on . Indeed, a bound which is often sharper was recently given in <cit.>: For any and all 1≤ k ≤ n/2: D(H(,n,k;·)B(,n,k;·)) ≤c-1/2( log( n/n-k) - k/n-1) + k(2n+1)/12n(n-1)(n-k)∑_i=j^c n/ℓ_j + 1/360( 1/(n-k)^3 - 1/n^3) ∑_j=1^c n^3/ℓ_j^3. See <cit.> for some detailed comparisons between (<ref>), (<ref>), (<ref>) and (<ref>). Finally, we emphasise that all the bounds in this section depend (in fact, linearly) on the number of colours c. § FINITE DE FINETTI BOUNDS IN RELATIVE ENTROPY A number of finite de Finetti bounds in relative entropy have recently been established in <cit.>. Let X_1^n be an exchangeable random vector with values in some space (S,), let P_k denote the law of X_1^k for k≤ n, and for any probability measure μ on ((S),), recall the definition of the mixture of product distributions M_k,μ in (<ref>). In <cit.>, it was shown that, if S={0,1}, then there is a mixing measure μ_n such that, for k≤ n, D(P_kM_k,μ_n) ≤5k^2log n/n-k. Then in <cit.> it was shown that, if S is a finite set, then there is a mixing measure μ_n such that a weaker bound of the following form holds for all k sufficiently smaller than n: D(P_k M_k,μ_n)= O((k/√(n))^1/2logn/k). The proofs of both of these results are information-theoretic, and in both cases the mixing measure μ_n is the law of the empirical measure P̂_X_1^n. The bound (<ref>) was proved via conditional entropy estimates, while the proof of (<ref>) explored the connection between exchangeability and the Gibbs conditioning principle. 
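For a two-colour urn, D(H‖B) can be computed exactly by enumerating the number of balls of the first colour drawn; the short sketch below does this and compares the result with the first (Stam-type) bound above. The urn composition and the number of draws are illustrative.

```python
from math import comb, log

def kl_sampling(ell, k):
    """D(H || B) for drawing k balls from a two-colour urn whose composition is ell = (l1, l2)."""
    n = sum(ell)
    div = 0.0
    for s in range(max(0, k - ell[1]), min(k, ell[0]) + 1):   # balls of the first colour drawn
        h = comb(ell[0], s) * comb(ell[1], k - s) / comb(n, k)
        b = comb(k, s) * (ell[0] / n) ** s * (ell[1] / n) ** (k - s)
        if h > 0:
            div += h * log(h / b)
    return div

ell, k = (30, 20), 6
n, c = sum(ell), len(ell)
print(kl_sampling(ell, k), (c - 1) * k * (k - 1) / (2 * (n - 1) * (n - k + 1)))
```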
In the more general case when S is an arbitrary discrete (finite or countably infinite) set, it was shown in <cit.> that (a different) mixing measure μ^*_n exists, such that the following sharper bound holds for all k<n: D(P_kM_k,μ^*_n) ≤k(k-1)/2(n-k-1)H(X_1). Note that this meaningful as long as H(X_1) is finite, and that it gives potentially much sharper estimates when H(X_1) is small. De Finetti-type bounds for random variables with values in abstract spaces (S,) were also derived in <cit.>. The proof of (<ref>) was based on an argument that originated in the quantum information theory literature. The derivations of (<ref>)–(<ref>) employed purely information-theoretic ideas and techniques, but the actual bounds are of sub-optimal rate. A sharp rate was more recently obtained in <cit.>, using Stam's sampling bound in Theorem <ref> combined with the convexity of relative entropy: Let X_1^n be an exchangeable random vector with values in a finite set S of cardinality c. Then there is a Borel probability measure μ_n on (S) such that, for every k ≤ n, the distribution P_k of X_1^k satisfies: D(P_kM_k,μ_n) ≤(c-1)k(k-1)/2(n-1)(n-k+1). Theorem <ref> gives a bound with rate O(k^2/n^2). In view of Pinsker's inequality and the lower bound in total variation obtained by Diaconis and Freedman <cit.>, this bound is of optimal order in its dependence on k and n. Even more recently, Song, Attiah and Yu <cit.> used an adaptation of Freedman's argument <cit.>, again based on considering the set (<ref>), to establish a finite de Finetti theorem with a relative entropy bound that is weaker when S is finite, but which is independent of the alphabet size: Let X_1^n be an exchangeable random vector with values in a discrete (finite or countably infinite) set S. Then there exists a probability measure μ_n on (,), such that, for all k<n, the distribution P_k of X_1^k satisfies, D(P_kM_k,μ_n) ≤log(n^k(n-k!)/n!)≤ -log(1-k(k-1)/2n), where the second inequality holds as long as k(k-1)<2n. In the same work, the bound of Theorem <ref> was shown to be tight. Recall the notation h(Q,n,k;·) for the law of sampling without replacement as in Section <ref>. Recall also the exact expression in (<ref>) above. Let h(Q_U,n,k;·) denote the law of sampling k balls without replacement from an urn containing n balls of n different colours, where Q_U(j)=1/n for j∈ S={1,…,n}. Then, for any mixing measure μ on (S) and any k<n: D(h(Q_U,n,k;·)M_k,μ(·)) ≥log(n^k(n-k)!/n!). Letting as before X_1^n denote the exchangeable random vector obtained as a random permutation of the set S={1,…,n}, and noting that h(Q_U,n,k;·) is then the same as the distribution P_k of X_1^k, Theorem <ref> shows that the bound (<ref>) is indeed tight. It is interesting to note that the finite de Finetti bound in (<ref>) is used in <cit.> in the proof of a strong achievability result for a certain source coding scenario, where an encoder transmits a k-letter information sequence to a randomly activated subset of k out of n possible users. In Section <ref> we give two new finite de Finetti bounds in relative entropy, that are essentially tight, for exchangeable random vectors with values in arbitrary measurable spaces. Theorem <ref> is proved by combining Stam's sampling bound in Theorem <ref> with the representations of P_k and M_k,μ_n in terms of sampling distributions in (<ref>) and (<ref>). 
The proof of Theorem <ref> is a generalization of the proof of Theorem <ref> in <cit.>, combined with the classical representation of relative entropy as a supremum over finite partitions. § NEW FINITE DE FINETTI BOUNDS ON ABSTRACT SPACES Recall from Section <ref> the definition of the σ-algebra associated with the space (S) of probability measures on an arbitrary measurable space (S,), and the definition of the mixture of i.i.d. measures M_n,μ in (<ref>). Let X_1^n be an exchangeable random vector with values in a measurable space (S,). Then there is a probability measure μ_n on ((S),) such that, for each 1≤ k≤ n, the distribution P_k of X_1^k satisfies: D(P_kM_k,μ_n) ≤k(k-1)/2(n-k+1). Under the same assumptions as Theorem <ref>, there is a probability measure μ_n on ((S),) such that, for each 1≤ k≤ n, D(P_kM_k,μ_n) ≤log(n^k(n-k!)/n!)≤ -log(1-k(k-1)/2n), where the second inequality holds as long as k(k-1)<2n. Remarks. Before giving the proofs of Theorems <ref> and <ref>, some remarks are in order. * In both theorems, the mixing measure μ_n is the law of the empirical measure P̂_X_1^n on (S). * Also both theorems give bounds with the same asymptotic behaviour ∼ k^2/2n. The bound in Theorem <ref> is slightly stronger than that in Theorem <ref>, and in fact, in view of the lower bound in Theorem <ref>, it is tight. On the other hand, the proof of Theorem <ref> is very short and quite satisfying in that it is based on a very direct connection with Stam's <cit.> sampling bound (<ref>). * As discussed by Diaconis and Freedman in <cit.>, we note that it is curious that we have finite de Finetti bounds, both in relative entropy and in total variation, under no assumptions whatsoever on the underlying space (S,), whereas for the seemingly weaker infinite-dimensional representation of the classical de Finetti theorem in Theorem <ref> more structure is required. * The choice of the σ-algebra ℱ on (S) is essential for the measurability of the empirical measure P̂_X_1^n, which is needed to define μ_n for both theorems. Indeed, for stronger σ-algebras, the measurability of Dirac measures may fail. For example, if S is a Polish space and we equip ℳ(S) with the Borel σ-algebra with respect to the τ-topology, the Dirac measures are not measurable any more, and therefore neither is the empirical measure P̂_X_1^n; see, e.g., the relevant discussion in <cit.>. * As also mentioned in <cit.>, the dependence of de Finetti-relative entropy upper bounds on the alphabet size is of interest in applications. For example, it is related to the running time of approximation schemes for the minimisation of polynomials of fixed degree over the simplex <cit.>. As it turns out, we have already done almost all of the work for this. Using the definition of M_k,μ_n, the sampling representations in (<ref>) and (<ref>), and Stam's bound (<ref>), D(P_kM_k,μ_n) = D(∫ h(Q,n,k;·) dμ_n(Q) . ∫ b(Q,n,k;·) dμ_n(Q)) ≤∫ D(h(Q,n,k;·) b(Q,n,k;·)) dμ_n(Q) ≤k(k-1)/2(n-k+1), where the first inequality follows from the joint convexity of relative entropy in its two arguments <cit.>, and the second inequality follows from the fact that, for any specific n-type Q, the urn with composition described by Q contains balls of at most n different colours, so we can take c≤ n in (<ref>). We begin by generalising a lower bound in <cit.> based on an argument by Freedman <cit.>. Take W_1,W_2,…,W_k to be i.i.d. and uniformly distributed in {1,2,…,n}, independent of X_1^n. 
For any A∈ and any n-type Q∈(S), from the definitions it is easy to see that, ((X_W_1,X_W_2,…,X_W_k)∈ A |P̂_X_1^n=Q)=Q^k(A). Therefore, the mixture measure M_k,μ_n can be written, M_k,μ_n(A) = ∫((X_W_1,X_W_2,…,X_W_k)∈ A |P̂_X_1^n=Q) dμ_n(Q) = ((X_W_1,X_W_2,…,X_W_k)∈ A) = ∑_w_1^k∈{1,…,n}^k1/n^k ((X_w_1,X_w_2,…,X_w_k)∈ A). Summing instead over all the index vectors w_1^k in the subset D_k of {1,…,n}^k that consists of all those w_1^k that have k distinct elements, and using the exchangeability of X_1^n, M_k,μ_n(A) ≥∑_w_1^k∈ D_k1/n^k ((X_w_1,X_w_2,…,X_w_k)∈ A) = ∑_w_1^k∈ D_k1/n^k (X_1^k∈ A) = n!/(n-k)!n^k P_k(A). Now let ={A_1,…,A_N} be any finite partition of S. Applying the last bound above to each A_i and summing over i=1,…,N, ∑_i=1^NP_k(A_i)logP_k(A_i)/M_k,μ_n(A_i)≤log(n^k(n-k)!/n!). Taking the supremum of this over all finite partitions and recalling <cit.> that the relative entropy between any two probability measures is equal to the supremum over all finite partitions , completes the proof. § APPENDIX Let X_1^n be an arbitrary (measurable) random vector. The measurability of the empirical measure P̂_X_1^n mentioned in Section <ref> immediately follows from the following lemma. The Dirac measures δ_s are measurable maps from (S,𝒮) to (ℳ(S),ℱ), where ℱ is the σ-algebra generated by the maps {π_A :ℳ(S) → [0,1]}_A ∈𝒮 given by π_A(P) = P(A). We need to show that {s: δ_s ∈ F}∈𝒮 for every F∈ℱ. Since ℱ is generated by the maps {π_A}, it is the smallest σ-algebra that contains all of, ⋃_A ∈𝒮σ({P: π_A(P) ∈ B}_ B ∈ℬ([0,1])), where ℬ([0,1]) is the Borel σ-algebra on [0,1]. So it suffices to show that, for every A ∈𝒮, σ({P: π_A(P) ∈ B}_ B ∈ℬ([0,1])) ⊂𝒮. Now we claim that, for every A ∈𝒮, {s:δ_s ∈{P: π_A(P) ∈ B}}_ B ∈ℬ([0,1])∈𝒮, implies (<ref>). But this follows since the collection of sets F ∈ℱ such that {s: δ_s ∈ F}∈𝒮 forms a σ-algebra <cit.>. So it only remains to establish (<ref>). Consider four cases: ({0,1}⊂ B),(0 ∈ B, 1 ∉ B),(1 ∈ B, 0 ∉ B), and ( {0,1}∈ B^c). For each of these cases, the set on the left-hand side of (<ref>) is simply S, A^c, A, and ∅, respectively, all of which are in 𝒮. The result follows.
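As a numerical sanity check of the two bounds established above, recall that in the random-permutation example the relative entropy between P_k and the mixture based on the empirical measure equals log(n^k (n-k)!/n!) exactly, by the matching lower bound discussed earlier. The sketch below evaluates this quantity and the two upper bounds at a few illustrative values of n and k.

```python
from math import lgamma, log

def log_ratio(n, k):
    """log( n^k (n-k)! / n! ), the exact value in the random-permutation example."""
    return k * log(n) + lgamma(n - k + 1) - lgamma(n + 1)

for n, k in [(100, 5), (1000, 30), (10000, 100)]:
    exact = log_ratio(n, k)
    bound_1 = k * (k - 1) / (2 * (n - k + 1))          # bound of the first theorem above
    bound_2 = -log(1 - k * (k - 1) / (2 * n))          # weaker form of the second bound
    print(n, k, round(exact, 4), round(bound_1, 4), round(bound_2, 4))
```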
http://arxiv.org/abs/2407.12597v1
20240717142153
Enhancing Wrist Abnormality Detection with YOLO: Analysis of State-of-the-art Single-stage Detection Models
[ "Ammar Ahmed", "Ali Shariq Imran", "Abdul Manaf", "Zenun Kastrati", "Sher Muhammad Daudpota" ]
cs.CV
[ "cs.CV" ]
1 .001 mode = title]Enhancing Wrist Abnormality Detection with YOLO: Analysis of State-of-the-art Single-stage Detection Models 1]Ammar Ahmed 2]Ali Shariq Imran 1]Abdul Manaf 3]Zenun Kastrati 1]Sher Muhammad Daudpota [1] organization=Dept. of Computer Science, Sukkur IBA University, city=Sukkur, postcode=65200, country=Pakistan [2]organization=Dept. of Computer Science, Norwegian University of Science & Technology (NTNU), city= Gjøvik, postcode=2815, country=Norway [3]organization=Dept. of Informatics, Linnaeus University, city= Växjö, postcode=351 95, country=Sweden § ABSTRACT Diagnosing and treating abnormalities in the wrist, specifically distal radius, and ulna fractures, is a crucial concern among children, adolescents, and young adults, with a higher incidence rate during puberty. However, the scarcity of radiologists and the lack of specialized training among medical professionals pose a significant risk to patient care. This problem is further exacerbated by the rising number of imaging studies and limited access to specialist reporting in certain regions. This highlights the need for innovative solutions to improve the diagnosis and treatment of wrist abnormalities. Automated wrist fracture detection using object detection has shown potential, but current studies mainly use two-stage detection methods with limited evidence for single-stage effectiveness. This study employs state-of-the-art single-stage deep neural network-based detection models YOLOv5, YOLOv6, YOLOv7, and YOLOv8 to detect wrist abnormalities. Through extensive experimentation, we found that these YOLO models outperform the commonly used two-stage detection algorithm, Faster R-CNN, in bone fracture detection. Additionally, compound-scaled variants of each YOLO model were compared, with YOLOv8x demonstrating a fracture detection mean average precision (mAP) of 0.95 and an overall mAP of 0.77 on the GRAZPEDWRI-DX pediatric wrist dataset, highlighting the potential of single-stage models for enhancing pediatric wrist imaging. wrist fracture detection object localization medical imaging pediatric X-ray deep learning YOLO [ [ July 22, 2024 ================= § INTRODUCTION Wrist abnormalities are a common occurrence in children, adolescents, and young adults. Among them, wrist fractures such as distal radius and ulna fractures are the most common with incidence peaks during puberty <cit.>. Timely evaluation and treatment of these fractures are essential to prevent life-long implications. Digital radiography is a widely used imaging modality to obtain wrist radiographs which are then interpreted by surgeons or physicians in training to diagnose wrist abnormalities. However, medical professionals may lack the specialized training to assess these injuries accurately and may rely on radiograph interpretation without the support of an expert radiologist or qualified colleagues <cit.>. Studies have shown that diagnostic errors in reading emergency X-rays can reach up to 26% <cit.>. This is compounded by the shortage of radiologists even in developed countries <cit.> and limited access to specialist reporting in other parts of the world <cit.> posing a high risk to patient care. The shortage is expected to escalate in the upcoming years due to a growing disparity between the increasing demand for imaging studies and the limited supply of radiology residency positions. The number of imaging studies rises by an average of five percent annually, while the number of radiology residency positions only grows by two percent. <cit.>. 
While imaging modalities such as MRI, CT, and ultrasound can assist in the diagnosis of wrist abnormalities, some fractures may still be occult <cit.>. Recent advances in computer vision, more specifically, object detection have shown promising results in medical settings. Some of the positive results of detecting pathologies in trauma X-rays were recently published <cit.>. Computer vision algorithms are accurate, efficient, and more importantly extremely quick to produce results compared to any radiologist or other imaging modalities currently used in practice. For example, radiology imaging delays have been found to independently contribute to longer hospital stays, as indicated by a recent study <cit.>. In addition, a separate study <cit.> found that creating reports from CT scans often took over three hours, with radiologists being responsible for a significant portion (42%) of the delay. The delays in obtaining clinically relevant information can have significant impacts on patients and contribute to unnecessary burdens on health systems, patients, and insurers. Computer vision algorithms can potentially address the delays associated with radiographic interpretation by providing a more efficient and prompt alternative, while still achieving comparable or even superior results. Object detection has emerged as a powerful tool for identifying abnormalities in X-ray images. Its ability to locate and classify various objects within an image has made it a valuable asset in the diagnosis and treatment of various medical conditions. In recent years, significant progress has been made in the development of object detection algorithms, leading to their widespread adoption in the medical community. An earlier approach called the sliding window approach <cit.> for object detection involved dividing an image into a grid of overlapping regions and then classifying each region as containing the object of interest or not. There are several disadvantages of this approach, one of them being that it is computationally expensive as a large number of regions need to be classified. To address these issues, region-based methods were invented. The introduction of Region-based Convolutional Neural Network (R-CNN) <cit.> was the first breakthrough in the application of region-based methods. The main idea behind these methods was to generate candidate object regions and classify only those regions as containing the object of interest or not. Another method developed as an improvement over the sliding window approach was the single-stage detection method which has gained popularity in recent years due to its efficiency and good performance. This approach uses a single forward propagation through the network to predict bounding boxes and class probabilities, eliminating the need to generate candidate object regions, and making it faster than region-based approaches. While two-stage detection generates candidate regions in the first stage and refines them in the second stage at the cost of speed, single-stage detection provides a balance between speed and accuracy by predicting final results in a single pass through the network. Two-stage detection has been the most widely used approach for detecting wrist abnormalities in recent years. However, there has been limited research on the effectiveness of single-stage detectors in detecting various abnormalities in the wrist, including fractures. 
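To give a rough sense of why the sliding-window approach described above is computationally expensive compared with a single forward pass, the following back-of-the-envelope count enumerates the candidate regions a window-based classifier would have to score on a single image; the image size, stride, and window sizes are arbitrary.

```python
image_w = image_h = 640
stride = 16
window_sizes = [(64, 64), (128, 128), (256, 256)]

regions = 0
for ww, wh in window_sizes:
    nx = (image_w - ww) // stride + 1
    ny = (image_h - wh) // stride + 1
    regions += nx * ny

print(f"classifier evaluations for a sliding-window detector: {regions}")
print("forward passes for a single-stage detector:            1")
```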
In this study, we focus on the effectiveness of single-stage detectors in detecting wrist abnormalities; more specifically, we focus on the capabilities of recent versions of the YOLO algorithm (v5, v6, v7, and v8). Additionally, this study is unique in its use of a large, comprehensively annotated dataset called GRAZPEDWRI-DX, presented in a recent publication <cit.>. The characteristics and complexity of the dataset are discussed in Section <ref>. §.§ Study Objective & Research Questions The primary objective of this study is to test the effectiveness of the state-of-the-art YOLO detection models YOLOv5, YOLOv6, YOLOv7, and YOLOv8 on the comprehensively annotated GRAZPEDWRI-DX dataset <cit.>, recently released to the public. We compare the performances of all variants within each YOLO model employed in this study to see whether the use of a compound-scaled version of the same architecture improves its performance. Moreover, this study also investigates how effective these single-stage detection methods are in detecting fractures compared to a two-stage detection method widely used in the past. We hypothesize that fractures in the near vicinity of the wrist in pediatric X-ray images can be detected efficiently using the YOLOv5, YOLOv6, YOLOv7, and YOLOv8 models proposed by <cit.>, <cit.>, <cit.>, and <cit.>, respectively. We test this hypothesis using the comprehensively annotated GRAZPEDWRI-DX dataset. The general objective of the study is to use the GRAZPEDWRI-DX dataset to analyze the potential of utilizing object detection techniques in answering the following research questions (RQ): * To what extent do state-of-the-art YOLO object detection models effectively detect fractures in the vicinity of the wrist in pediatric X-ray images? * In the analysis of wrist images, do the state-of-the-art single-stage detection models outperform a two-stage detection model widely used in the past? * Does the use of compound-scaled variants within each YOLO algorithm improve its performance in detecting fractures in the vicinity of the wrist in pediatric X-ray images? §.§ Contribution The major contributions of this article are as follows: * A thorough performance assessment of state-of-the-art YOLO detection models (YOLOv5, YOLOv6, YOLOv7, and YOLOv8) on the newly released GRAZPEDWRI-DX dataset, a large and diverse set of pediatric X-ray images. To the best of our knowledge, this is the first study of its kind. * An in-depth comparison of the performance of various variants within each YOLO model utilized, including compound, medium, and smaller-scaled versions. * A state-of-the-art mean average precision (mAP) score on the GRAZPEDWRI-DX dataset. * A detailed analysis of the performance of single-stage detection models in comparison to the two-stage detection model widely used in the literature, namely, Faster R-CNN. § RELATED WORK Fracture detection is a crucial aspect in the field of wrist trauma, and computer vision techniques have played a significant role in advancing the research in this area. This section provides a comprehensive overview of the existing studies on fracture detection and highlights the key findings. The studies are divided into two subsections: "Two-stage detection" and "One-stage detection". The first covers studies that have used two-stage detection techniques, while the second focuses on studies that have employed only single-stage detection algorithms.
§.§ Two-stage detection The detection of bone abnormalities, including fracture detection, has been widely studied in the literature, mainly using two-stage detection algorithms. For instance, In a study by <cit.>, a Faster R-CNN model utilizing Visual Geometry Group (VGG16) was applied to identify distal radius fractures in anteroposterior wrist X-ray images. The model achieved a mAP of 0.87 when tested on a set of 1,312 images. It should be noted that the initial dataset consisted of only 95 anteroposterior images, with and without fractures, which were then augmented for training as well as for testing. <cit.> developed two separate Faster R-CNN models with Inception-ResNet for frontal and lateral projections of wrist images. The models were trained on 6,515 and 6,537 images of frontal and lateral projections, respectively. The frontal model detected 91% of fractures, with a specificity of 0.83 and a sensitivity of 0.96. The lateral model detected 96% of fractures, with a specificity of 0.86 and a sensitivity of 0.97. Both models had a high area under the receiver operating characteristic curve (AUC-ROC) values, with the frontal model having 0.92 and the lateral model having 0.93. The overall per-study specificity was 0.73, sensitivity was 0.98, and AUC was 0.89. <cit.> used a two-stage R-CNN method to achieve an average precision (AP) of 0.62 on approximately 4,000 X-ray images of arm fractures in musculoskeletal radiographs, MURA dataset. <cit.> developed a two-stage R-CNN network called ParallelNet, with a TripleNet backbone network, for fracture detection in a dataset of 3,842 thigh fracture X-ray images, achieving an AP of 0.88 at an Intersection over Union (IoU) threshold of 0.5. <cit.> used a Faster R-CNN model with an anchor-based approach, combined with a multi-resolution Feature Pyramid Network (FPN) and a ResNet50 backbone network. They tested the model on 2333 X-ray images of different types of femoral fractures and obtained a mAP score of 0.69. <cit.> developed a deep learning-based pipeline called DeepWrist for detecting distal radius fractures. The model was trained on a dataset of 1946 wrist studies and was evaluated on two test sets. The first test set, comprising 207 cases, resulted in an AP score of 0.99, while the second test set, comprising 105 challenging cases, resulted in an AP of 0.64. The model generated heatmaps to indicate the probability of a fracture near the vicinity of the wrist but did not provide a bounding box or polygon to clearly locate the fracture. The study was limited by the use of a small dataset with a disproportionate number of challenging cases. <cit.> in their study, first classified the images in the Radiopaedia dataset into the fracture and non-fracture categories using CrackNet. After this, they utilized Faster R-CNN for fracture detection on the 1052 bone images in the dataset. With an accuracy of 0.88, a recall of 0.88, and a precision of 0.89, they demonstrated the usefulness of the proposed approach. <cit.> applied a Feature Ambiguity Mitigate Operator model along with ResNeXt101 and a FPN to identify fractures in a collection of 9040 radiographs of various body parts, including the hand, wrist, pelvic, knee, ankle, foot, and shoulder. They accomplished an AP of 0.77. <cit.> proposed a guided anchoring method (GA) for fracture detection in hand X-ray images using the Faster R-CNN model, which was used to forecast the position of fractures using proposal regions that were refined using the GA module’s learnable and flexible anchors. 
They evaluated the method on 3067 images and achieved an AP score of 0.71. <cit.> conducted 20 fracture detection experiments using a dataset of wrist X-ray images from Gazi University Hospital. To improve the results, they developed an ensemble model by combining five different models, named WFD-C. Out of the 26 models evaluated for fracture detection, the WFD-C model achieved the highest average precision of 0.86. This study utilized both two-stage and single-stage detection methods. The two-stage models employed were Dynamic R-CNN, Faster R-CNN, and SABL and DCN models based on Faster R-CNN. Meanwhile, the single-stage models used were PAA, FSAF, RetinaNet and RegNet, SABL, and Libra. <cit.> employed transfer learning with a modified Mask R-CNN to detect and segment fractures using two datasets: a surface crack image dataset of 3000 images and a wrist fracture dataset of 315 images. They first trained the model on the surface crack dataset and then fine-tuned it on the wrist fracture dataset. They achieved an average precision of 92.3% for detection and 0.78 for segmentation on a 0.5 scale, 0.79 for detection, and 0.52 for segmentation on a strict 0.75 scale. §.§ One-stage detection Very few studies have been conducted demonstrating the performance of one-stage detectors in the area of wrist trauma and fracture detection. In the study by <cit.>, a YOLOv2 model was used to detect fractures in a dataset of 5134 spinal CT images, resulting in a mAP of 0.75. In another research by the same authors <cit.>, a Faster R-CNN model was applied to the same dataset, yielding an mAP of 0.73. A recent study by <cit.> compared the performance of the YOLOv4 object detection model <cit.> to that of the U-Net segmentation model proposed by <cit.> and a group of radiologists on the "GRAZPEDWRI-DX" dataset. The authors trained two YOLOv4 models for this study: one for identifying the most probable fractured object in an image and the other for counting the number of fractures present in an image. The first YOLOv4 model achieved high performance, with an AUC-ROC of 0.90 and an F1-score of 0.90, while the second YOLOv4 model achieved an AUC-ROC of 0.90 and an F1-score of 0.96. These results demonstrate the superior performance of YOLOv4 in comparison to traditional methods for fracture detection. The "GRAZPEDWRI-DX" dataset used in this study was recently published <cit.>. The authors presented the baseline results for the dataset using the COCO pre-trained YOLOv5m variant of YOLOv5. The model was trained on 15,327 (of 20,327) images and tested on 1,000 images. They achieved a mAP of 0.93 for fracture detection and an overall mAP of 0.62 at an IoU threshold of 0.5. In conclusion, the literature review shows that the majority of studies on fracture detection have utilized the two-stage detection approach. Additionally, the datasets utilized in these studies tend to be limited in size in comparison to the dataset used in our study. This study builds upon the work of studies <cit.> and <cit.> by conducting a comprehensive comparative study between the state-of-the-art single-stage detection algorithms (YOLOv5, YOLOv6, YOLOv7, and YOLOv8) and a widely used two-stage model Faster R-CNN. The results of this study provide valuable insights into the performance of these algorithms and contribute to the ongoing research in the field of wrist trauma and fracture detection. 
§ MATERIAL & METHODS §.§ Research Design A quantitative (experimental) study is conducted using data from 10,643 wrist radiography studies of 6,091 unique patients collected by the Division of Paediatric Radiology, Department of Radiology, Medical University of Graz, Austria. As shown in Fig. <ref>, the dataset was randomly partitioned into a training set of 15,245 images, a validation set of 4,066 images, and a testing set of 1,016 images. In the following subsection, we describe the various measurements used to assess the performance of the models. §.§ Study Dimensions The following dimensions are used to facilitate the interpretation of results: * Abnormality-(ab): The object detection models were evaluated on their ability to detect different types of abnormalities in the radiographic images. * Fracture-(f): The object detection models were also evaluated on their ability to effectively detect fractures in the radiographic images. * Recall-(r): The proportion of positive instances that were correctly detected by the model. Recall is calculated as TP / (TP + FN), where TP represents the number of true positive detections and FN the number of false negative detections. * Precision-(p): The proportion of positive detections that were actually positive instances. It is calculated by dividing the number of true positive detections (TP) by the sum of true positives and false positives (incorrect detections), i.e., TP / (TP + FP), where FP stands for false positive detections. * Mean Average Precision-(mAP): The average of the per-class average precision, reported in this study at an intersection over union (IoU) threshold of 0.5. It is a widely adopted evaluation metric for object detection models, as it takes into account both precision and recall. §.§ Tools & Instruments Python scripts were used to partition the dataset into training, validation, and testing sets. The deep learning framework PyTorch was used to train the object detection models. To visualize, track, and compare model training, we employed the Weights and Biases (WANDB) platform. To take advantage of our system's graphical processing units (GPUs), we utilized CUDA and cuDNN. All training was performed on a Windows PC equipped with an NVIDIA GeForce RTX 2080 SUPER (with 8,192 MB of video memory), an Intel(R) Xeon(R) W-2223 CPU @ 3.60GHz processor, and 64GB of RAM. The Python version used was 3.9.13. §.§ Deep Learning Models For Object Detection In this study, we employed four single-stage detection models, namely YOLOv5, YOLOv6, YOLOv7, and YOLOv8, as well as a two-stage detection model, Faster R-CNN. To further optimize the performance of the single-stage models, we experimented with multiple variants of each YOLO model, ranging from 5 to 7 variants. This resulted in a total of 23 wrist abnormality detection procedures. We conducted initial training on the various variants of each YOLO model. Subsequently, we selected the highest-performing variant within each YOLO model based on the results obtained and compared them to the two-stage detection model Faster R-CNN. The models, trained for 100 epochs, were observed to converge between 90 and 100 epochs, indicating no additional improvement beyond the 100th epoch; thus, further training was deemed unnecessary.
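For concreteness, the random partition described above can be scripted in a few lines. The sketch below assumes the images sit in a single folder and uses a fixed seed; both the folder layout and the seed are illustrative assumptions rather than details taken from the original pipeline.

```python
import random
from pathlib import Path

random.seed(42)  # illustrative seed, not from the original experiments
images = sorted(Path("GRAZPEDWRI-DX/images").glob("*.png"))  # assumed layout
random.shuffle(images)

# 15,245 / 4,066 / 1,016 split of the 20,327 images
train, val, test = images[:15245], images[15245:19311], images[19311:20327]
for name, split in [("train", train), ("val", val), ("test", test)]:
    Path(f"{name}.txt").write_text("\n".join(str(p) for p in split))
```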
The YOLO (You Only Look Once) algorithm, initially introduced by <cit.> in 2016, is a single-stage object detection approach that uses a single pass of a convolutional neural network (CNN) to make predictions about the locations of objects in an image, making it faster than other approaches to date. In 2021, YOLOv4 achieved the highest mean average precision on the MS COCO dataset while also being the fastest real-time object detection algorithm <cit.>. Since its initial release, the algorithm has undergone several improvements, with versions ranging from v1 to v7 <cit.>, with v5, v6, v7, and v8 being released in 2020, 2022, and 2023 offering smaller volume, higher speed, and higher precision <cit.>. Fig. <ref> illustrates the general structure of YOLO with backbones used in this study such as CSP, VGG, and EELAN. R-CNN proposed by <cit.> was one of the first algorithms to achieve state-of-the-art performance on the PASCAL VOC object detection benchmark. R-CNN is a two-stage algorithm that takes an entire image as input, generates regions likely containing objects, extracts features using a CNN, and classifies objects within these regions. Faster R-CNN is a widely adopted and well-established model within the R-CNN family, known for its efficiency and accuracy in object detection. It has been widely utilized in the medical field, specifically in the detection of bone fractures. The Faster R-CNN model, first introduced by <cit.>, has continued to be a significant and influential contribution to the field of computer vision, remaining one of the most highly cited papers in the field to this day. §.§.§ The YOLOv5 Model The YOLO framework consists of three main components: the backbone, the neck, and the head. First, the input terminal performs various data processing tasks, including adaptive image filling and mosaic data augmentation <cit.>. In our research, we have utilized the same data augmentation and pre-processing methods. Additionally, the YOLOv5 model uses adaptive anchor frame calculation to optimize its performance on different datasets by adjusting its anchor frame size when the dataset changes. The backbone is responsible for extracting image features. It aggregates and forms features at different granularities. The specific CNN architecture used in YOLOv5 is CSPDarknet since this is the best-performing one so far <cit.>. Hence, in our work, we have utilized the same CNN architecture. The CSPDarknet architecture consists of convolutional, pooling, and residual connections represented mathematically as: F_i = f(F_i-1, W_i) + F_i-1 where F_i is the feature maps at the i-th layer, F_i-1 is the feature maps at the (i-1)-th layer, W_i are the weights and biases at the i-th layer, and f(·) is the function applying convolution and pooling operations. The SPP structure is then applied to the feature maps produced by the CSPDarknet to extract features at multiple scales. This can be represented mathematically as: F_SPP = g(F_i) where F_SPP represents the multi-scale feature maps produced by the SPP structure, and g(·) represents the function that applies the SPP operation to the input feature maps F_i. The neck of YOLOv5 uses Path Aggregation Network (PANet) to aggregate features from the backbone and produce higher-level features for the output layers. The same architecture is used in our study. The head constructs output vectors with class probabilities, objectness scores, and the bounding box (coordinates for the box: center, height, and width) representing objects in the image. 
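Before moving on to the model variants, the residual update F_i = f(F_{i-1}, W_i) + F_{i-1} and the multi-scale pooling F_SPP = g(F_i) can be made concrete with a minimal PyTorch sketch of a residual convolutional block and an SPP-style pooling layer. This is a simplified illustration of the two ideas, not the actual CSPDarknet/SPP implementation, and the channel sizes are arbitrary.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """F_i = f(F_{i-1}, W_i) + F_{i-1}: convolutional features plus a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )

    def forward(self, x):
        return self.f(x) + x  # residual connection

class SPP(nn.Module):
    """F_SPP = g(F_i): pool the same feature map at several scales and concatenate."""
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)

feat = torch.randn(1, 64, 80, 80)        # toy feature map
out = SPP()(ResidualBlock(64)(feat))     # 64 * 4 = 256 output channels
```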
The utilized head is the same as in the original implementation. The YOLOv5 model includes five different model variants, namely, YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x. All are compound-scaled variants of the same architecture, and each offers a different trade-off between detection accuracy and speed. The variants differ in the width of the network (the number of channels) and in its depth (the number of layers). In the experimentation of YOLOv5 variants, standard hyperparameters were utilized. The input resolution was fixed at 640 pixels, and the batch size was set to 16. The optimization algorithm employed was Stochastic Gradient Descent (SGD) with an initial learning rate α = 1 × 10^-2, final learning rate α_f = 1 × 10^-2, momentum = 0.937, weight decay = 5 × 10^-4, warmup momentum = 0.8, and warmup bias lr = 0.1. Each variant underwent 100 epochs of training from scratch. During the evaluation phase, each variant was tested on 1,016 randomly selected samples, using an Intersection over Union (IoU) threshold of 0.5 for inference. §.§.§ The YOLOv6 Model YOLOv6 features an anchor-free design and a reparameterized backbone, referred to as EfficientRep, with a VGG-style backbone used in the "n" and "s" variants and a CSP-style backbone in the "m", "l", and "l6" variants. The neck, named Rep-PAN, is similar to YOLOv5, but the head is efficiently decoupled, improving accuracy and reducing computation by not sharing parameters between the classification and detection branches. YOLOv6 also includes five different model variants, namely, YOLOv6n, YOLOv6s, YOLOv6m, YOLOv6l, and YOLOv6l6. Fig. <ref> shows the performance comparison of these variants on the COCO dataset. All five variants of YOLOv6 were trained for 100 epochs from scratch. Standard hyperparameters were used. The input was set to 640 pixels with a batch size of 16 samples. The optimization algorithm employed was Stochastic Gradient Descent (SGD) with the same parameters as used for YOLOv5, including initial and final learning rates, momentum, warmup momentum, weight decay, and warmup bias lr. As before, each variant was tested on 1,016 randomly selected samples, using an Intersection over Union (IoU) threshold of 0.5 for inference. §.§.§ The YOLOv7 Model YOLOv7 also has the three main components discussed before, with several changes. The E-ELAN is a component in YOLOv7 that uses expand, shuffle, and merge cardinality to continuously improve network learning without disrupting the gradient path <cit.>. Other notable changes include model scaling techniques, re-parameterization planning, and a coarse-to-fine auxiliary head. Model scaling is a technique used to adapt key characteristics of a model to align with specific application requirements. This includes adjusting the width (number of channels), depth (number of stages), and resolution (input image size) of the model. The scaling of object detection models requires knowledge of the network depth, width, and resolution on which it is trained. YOLOv7 utilizes a compound scaling technique, which simultaneously scales the depth and width of the network by concatenating layers. This method has been shown through ablation studies to maintain optimal model architecture while scaling for different sizes. Without this technique, an increase in depth alone may cause a decrease in hardware efficiency due to a change in the ratio between input and output channels of a transition layer. YOLOv7's compound scaling technique prevents such negative effects on performance.
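As a rough illustration of how compound scaling produces the different variants, the sketch below scales a base block's repeat count and channel width by per-variant depth and width multiples. The multiplier values follow the commonly published YOLOv5 configuration files and are indicative only; YOLOv7 applies the same idea with its own factors and additional layer concatenation.

```python
import math

# (depth_multiple, width_multiple) as commonly reported for the YOLOv5 variants
SCALES = {"n": (0.33, 0.25), "s": (0.33, 0.50), "m": (0.67, 0.75),
          "l": (1.00, 1.00), "x": (1.33, 1.25)}

def scale_block(base_repeats: int, base_channels: int, variant: str):
    d, w = SCALES[variant]
    repeats = max(round(base_repeats * d), 1)             # scaled depth
    channels = int(math.ceil(base_channels * w / 8) * 8)  # scaled width, multiple of 8
    return repeats, channels

for v in SCALES:
    print(v, scale_block(base_repeats=9, base_channels=512, variant=v))
```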
Re-parameterization techniques aim to create a more robust model by averaging a set of weights. Recent research has focused on module-level re-parameterization, where specific parts of the network are targeted. YOLOv7 uses gradient flow propagation to determine which modules should be re-parameterized. The YOLO network head generates the final predictions, however, an auxiliary head located in the middle of the network can be beneficial during training. The auxiliary head is supervised along with the final head. However, it does not train as efficiently because it is closer to the prediction, thus the YOLOv7 authors experimented with different levels of supervision for the auxiliary head, using a coarse-to-fine approach where supervision is passed back from the final head at various granularities. The YOLOv7 model comprises of seven different variants, which include "P5" models (v7, v7x, and v7-tiny) and "P6" models (d6, e6, w6, and e6e). These variants are compound-scaled versions of the same architecture, each of which offers a different level of detection accuracy and performance when trained on the standard COCO dataset. This variation in performance is illustrated in Fig. <ref>. In the experimentation of these 7 variants of the YOLOv7 model, all variants underwent a training phase with a duration of 100 epochs from scratch with 16 samples as batch size. The standard hyperparameters were applied and the optimization algorithm employed was Stochastic Gradient Descent (SGD) with an initial learning rate α = 1 × 10^-2, final learning rate α_f = 1 × 10^-1, momentum = 0.937, weight decay = 5 × 10^-4, warmup momentum = 0.8, and warmup bias lr = 0.1. The "P5" models within YOLOv7, namely, v7, v7x, and v7-tiny were trained with an input resolution of 640 pixels, while the "P6" models, namely, d6, e6, w6, and e6e were trained with an input resolution of 448 pixels due to computational constraints, and although it may have a negative effect on the model's performance, this issue is compensated for by utilizing mosaic augmentation within YOLOv7. Mosaic augmentation is a technique used to increase the diversity of training data by combining multiple small images into a larger "mosaic" image. This can help to improve the robustness of object detection models by exposing them to a wider variety of object scales, orientations, and backgrounds. During the evaluation phase, all variants were tested on a test set of 1016 samples, using an Intersection over Union (IoU) threshold of 0.5. §.§.§ The YOLOv8 Model YOLOv8 is reported to provide significant advancements in object detection as well as image segmentation when compared to previous YOLO models, particularly in compact versions that are implemented on less powerful hardware. At the time of writing this paper, the architecture of YOLOv8 is not fully disclosed and some of its features are still under development. As of now, it's been confirmed that the system has a new backbone, uses an anchor-free design, has a revamped detection head, and has a newly implemented loss function. We have included the performance of this model on the GRAZPEDWRI-DX dataset as a benchmark for future studies, as further improvements to YOLOv8 may surpass the results obtained in this study. YOLOv8 comes in five versions at the time of release (January 10, 2023), namely, "n", "s", "m", "l", and "x". The performance of these variants on a COCO dataset is shown in Fig. <ref>. 
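For orientation, the from-scratch training configuration shared by these experiments (SGD with the learning-rate, momentum, weight-decay, and warmup settings listed above) can be expressed compactly with the ultralytics Python API. The sketch below is illustrative only: the dataset YAML name is an assumption, the argument names follow the current ultralytics release, and the authors' actual per-version training scripts are not reproduced here.

```python
from ultralytics import YOLO

# Build YOLOv8x from a model definition file, i.e., training from scratch.
model = YOLO("yolov8x.yaml")

model.train(
    data="wrist.yaml",      # assumed dataset description file
    epochs=100,
    imgsz=640,
    batch=16,
    optimizer="SGD",
    lr0=0.01,               # initial learning rate
    lrf=0.01,               # final learning rate fraction
    momentum=0.937,
    weight_decay=5e-4,
    warmup_momentum=0.8,
    warmup_bias_lr=0.1,
    pretrained=False,
)
metrics = model.val()       # reported metrics include mAP at IoU = 0.5
```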
Just as before, all variants underwent 100 epochs of training from scratch, with standard hyperparameters. The image resolution was set at 640 pixels, with a batch size of 16 samples. The optimization algorithm employed was Stochastic Gradient Descent (SGD), as before, with a starting learning rate of 1 × 10^-2, a final learning rate of 1 × 10^-2, a momentum of 0.937, a weight decay of 5 × 10^-4, a warmup momentum of 0.8, and a warmup bias lr of 0.1. As before, all variants were tested on a test set of 1,016 samples, using an IoU threshold of 0.5. §.§.§ Faster R-CNN The Faster R-CNN model consists of three main components: the backbone, the region proposal network (RPN), and a detection network. In this study, a ResNet50 backbone with FPN was used for feature extraction from the input image. Anchors were generated for each feature, and a set of anchor boxes with variable sizes and aspect ratios was created for each anchor. The RPN was responsible for selecting appropriate anchor boxes and passing them on to the next layer. The classifier within the RPN predicted whether an anchor box contained an object, determined by an IoU threshold of 0.5. The regressor within the RPN predicted offsets for the anchor boxes containing objects to fit them tightly to the ground truth labels. Lastly, the RoI pooling layer converted variable-sized proposals to a fixed size to run a classifier and regress a bounding box. Fig. <ref> illustrates the architecture of Faster R-CNN. The Faster R-CNN model underwent 100 epochs of training with an image size of 640 pixels and a batch size of 16 samples. The default parameters were used, and the optimization algorithm employed was Stochastic Gradient Descent (SGD) with a learning rate of α = 1 × 10^-3, a momentum of 0.9, and a weight decay of 5 × 10^-4. It is important to note that, as with the YOLO models, the selection of these parameters is not deliberate; they are the default settings. During the evaluation phase, the model was tested on 1,016 randomly selected samples, using an Intersection over Union (IoU) threshold of 0.5 for inference. §.§ Evaluation Metrics: mAP For the evaluation of object detection, a common way to determine whether the predicted location of an object is correct is to compute the Intersection over Union (IoU). It is defined as the ratio of the intersection of the predicted and ground truth bounding boxes to their union. A visual illustration of IoU is presented in Fig. <ref>. Given a predicted bounding box A and the corresponding ground truth bounding box B in the same image, the IoU can be computed as: IoU(A, B) = |A ∩ B| / |A ∪ B|, where IoU(A, B) ∈ [0, 1]. Commonly, if the IoU > 0.5, we classify the detection as a true positive; otherwise, it is classified as a false positive. Given the IoU, we can count the number of true positives TP and false positives FP and compute the average precision AP for each object class c as follows: AP(c) = TP(c) / (TP(c) + FP(c)). Finally, after computing AP for each object class, we compute the mean average precision mAP, which is the average of AP across all C classes under consideration: mAP = (1/C) ∑_{c=1}^{C} AP(c). mAP is the metric that quantifies the performance of object detection algorithms. Thus, the metric mAP_0.5 indicates mAP for IoU > 0.5. This is the IoU threshold we will be using to make our assessments of the detection models.
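The metrics above translate directly into code. The functions below follow the simplified definitions used in this section, with boxes given as [x1, y1, x2, y2]; full evaluation toolkits additionally integrate precision over recall levels when computing AP, which is omitted here.

```python
def iou(a, b):
    """Intersection over Union of two boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def ap_per_class(tp, fp):
    """AP(c) = TP(c) / (TP(c) + FP(c)), as defined above."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def mean_ap(counts):
    """counts: {class_name: (TP, FP)}; mAP is the average of AP over all classes."""
    aps = [ap_per_class(tp, fp) for tp, fp in counts.values()]
    return sum(aps) / len(aps)

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ≈ 0.143, a false positive at IoU > 0.5
```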
§.§ Supplementary Materials The supplementary materials, including the source code and the dataset split, can be accessed through the following links: * Source code: https://github.com/ammarlodhi255/pediatric-wrist-anomaly-detection * Dataset split: https://drive.google.com/file/d/1ubdO0_j2cr7AKSxVKQlGh99VzfrnyKfe/view?usp=share_link § DATASET The dataset used in this study, GRAZPEDWRI-DX, was presented by the authors in <cit.> and has been made publicly available to encourage computer vision research. The dataset contains pediatric wrist radiographs in PNG format from 6,091 patients (mean age 10.9 years, range 0.2 to 19 years; 2,688 females, 3,402 males, 1 unknown), treated at the Division of Paediatric Radiology, Department of Radiology, Medical University of Graz, Austria. The dataset includes a total of 20,327 wrist images covering lateral and posteroanterior projections. The radiographs were acquired over the span of 10 years between 2008 and 2018 and were comprehensively annotated between 2018 and 2020 by expert radiologists and various medical students. The annotations were validated by three experienced radiologists as the X-ray images were annotated. This process was repeated until a consensus was reached between the annotations and the interpretations of the three radiologists. We chose to use this dataset in our study for the following reasons: * The dataset is quite large, consisting of 20,327 labeled and tagged images, making it suitable for various computer vision algorithms. * To our knowledge, there are no related pediatric datasets publicly available, with others featuring only binary labels or not being as comprehensively labeled as the one we use. * To the best of our knowledge, this is the first comprehensive study of the recently released GRAZPEDWRI-DX dataset using the state-of-the-art computer vision models YOLOv5, v6, v7, and v8. * It contains diverse images of the early stages of bone growth and organ formation in children. Studying the wrist at this stage offers unique insights into the diagnosis, treatment, and prevention of anomalies that are not possible when studying adult wrists. §.§ Analysis of Objects in the Dataset The dataset includes a total of 9 object classes: periosteal reaction, fracture, metal, pronator sign, soft tissue, bone anomaly, bone lesion, foreign body, and text. The object "text" is present in all X-ray images and is used to identify the side of the body (right or left hand) on which the X-ray was taken. The number of objects in the dataset is shown in Table <ref>. The table clearly indicates that "fracture" is the most commonly occurring object in the wrist X-rays of the GRAZPEDWRI-DX dataset. The class "periosteal reaction" has the second largest occurrence, followed by the third largest class, "metal". Meanwhile, the classes "bone anomaly", "bone lesion", and "foreign body" have the lowest occurrence. Note that this table shows how many X-ray images contain a particular object and not the number of times an object is labeled in the dataset. Additionally, the histogram in Fig. <ref> visually shows the class distribution. In Table <ref>, we show the number of images in which a particular anomaly occurs only once, twice, or multiple times. The column "Total" represents the total number of images in which a particular anomaly is present. § RESULTS & DISCUSSION This section presents a comprehensive analysis of the performance of various models for wrist abnormality detection on the GRAZPEDWRI-DX dataset.
A total of 23 detection procedures were conducted using different variants of each YOLO model and a two-stage detection model (Faster R-CNN) on a test set consisting of 1016 randomly selected samples. The performance of each model was evaluated using metrics such as precision, recall, and mean average precision (mAP). We begin by providing a detailed analysis of the variants within each YOLO model. Next, we select the best-performing variant from each YOLO model based on the highest mAP score obtained for the fracture class, as well as across all classes. Finally, we compare these variants to determine the overall best-performing model and evaluate its performance against Faster R-CNN. The results of YOLOv5 variants are presented in Table <ref> and <ref>, showing the performance of the variants across all classes and on the fracture class, respectively. All values are rounded to two decimal places. The results show that the fractures were detected with the highest mAP of 0.95 at IoU = 0.5, with a precision of 0.92, and a recall of 0.90 by the YOLOv5 variant, YOLOv5l. Additionally, the performance of YOLOv5l appears to be satisfactory across all classes with the mAP score of 0.68 at IoU = 0.5. The variant YOLOv5x seems to perform just as well in terms of mAP obtained for the fracture class. In terms of overall performance across all classes, the highest mAP score achieved was 0.69 by the two YOLOv5 variants "m" and "x". The highest precision obtained across all classes is 0.80 by the variant "m", while the highest recall achieved was 0.66 by the variant "s". It can also be observed from the results shown in Table <ref> that as the complexity of the architecture in YOLOv5 increases, its performance improves. Table <ref> displays the mAP scores of all YOLOv5 variants at an IoU threshold of 0.5 for all classes present in the GRAZPEDWRI-DX dataset. It is worth noting that these mAP scores are particularly significant as they are calculated at an IoU threshold of 0.5, which is a commonly used threshold in object detection evaluations. These scores are crucial indicators of the performance of the YOLOv5 variants on the GRAZPEDWRI-DX dataset and provide valuable insights into their abilities to detect objects within the various classes present in the GRAZPEDWRI-DX dataset. Upon examination of the Table <ref>, it can be seen that almost all variants of YOLOv5 demonstrate the capability to detect classes that are in the minority, such as bone anomaly, bone lesion, and foreign body, with considerably good mAP scores as seen in Table <ref>. For instance, despite the limited number of instances of the class "Bonelesion" (only 42, as shown in Table <ref>), the four variants of YOLOv5 ("s", "m," "l," and "x") are able to correctly detect it in all instances where it occurs, with the mAP score of 1.00. Table <ref> and <ref> present the results of YOLOv6 variants, showcasing their performance on all classes and the fracture class, respectively. Variants "n", "s", and "m" achieved the highest mAP of 0.94 at an IoU threshold of 0.5 for detecting fractures. Variants "n", "m", and "l" displayed the highest precision for the fracture class with a value of 0.94, while variant "s" had the highest recall of 0.89. In terms of overall performance across all classes, the highest mAP score of 0.64 at an IoU threshold of 0.5 was obtained by variants "m" and "l", with variant "l" achieving the highest precision of 0.60 and variant "m" having the highest recall of 0.83. 
Table <ref> illustrates that YOLOv6 variants, similar to YOLOv5 variants, exhibit the ability to detect minority classes. However, Table <ref> reveals that, unlike YOLOv5, as the complexity of the model increases from variant "m" to "l" and then to "l6", the mAP score decreases, indicating that complexity beyond variant "m" results in decreased performance. This trend is also observed in Table <ref>, where increasing complexity from variant "l" to "l6" results in decreased performance across all classes. The performance of YOLOv7 variants on both across classes and the fracture class is presented in Tables <ref> and <ref>, respectively. The results indicate that the second variant of the YOLOv7 model exhibits the highest mean average precision (mAP) of 0.94 at an intersection over union (IoU) threshold of 0.5, with a precision of 0.86 and recall of 0.91 for detecting fractures. This variant also demonstrates superior performance across all classes with a mAP of 0.61 at an IoU of 0.5, a precision of 0.79, and a recall of 0.54. The variant YOLOv7x seems to perform just as well in terms of mAP obtained for the fracture class but has a lower mAP score compared to the second variant across all classes. Additionally, it can be observed from our experiments that, in contrast to YOLO5, increasing the complexity of the YOLOv7 architecture, in terms of depth and number of layers, hurts its performance in detecting wrist abnormalities. The only exception to this trend is the increase in performance observed when comparing the smaller variant "YOLOv7-Tiny" to the slightly larger variant "YOLOv7". The "YOLOv7-Tiny" achieved mAP of 0.5 at IoU=0.5, but the "YOLOv7" variant showed an improvement of 0.11 across all classes. Additionally, when focusing on the specific class of fractures, an improvement of 0.01 in the mAP score was observed, suggesting that there is an optimal balance of complexity and performance for this model. The decline in performance for YOLOv7's "P6" models, specifically "W6", "E6", "D6", and "E6E", compared to the "P5" models may be attributed to the reduced image resolution. However, the results across all classes indicate that even with this resolution, the performance of "P6" models either decreases or does not improve at all. It is worth noting that rare classes such as bone anomaly, bone lesion, and foreign body have a very low mAP score and are sometimes not detected at all, as shown in Table <ref>. However, the second variant of YOLOv7 is the only variant able to detect all the minority classes such as "bone anomaly", "bone lesion", and "foreign body". Tables <ref> and <ref> show the performance of YOLOv8 model variants across all classes and on the fracture class, respectively. The YOLOv8 variant "YOLOv8x" achieved the highest mAP of 0.95 for fracture detection at an IoU threshold of 0.5, with a precision of 0.91 and a recall of 0.89. Additionally, it demonstrated superior overall performance across all classes with a mAP of 0.77 at an IoU threshold of 0.5. Table <ref> also shows that all YOLOv8 variants demonstrated good performance in detecting all classes, including minority classes, except the "foreign body" class not being detected by the small and the medium variants. The results suggest that using compound-scaled variants of the YOLOv8 architecture generally improves performance, except for a decrease in mAP scores across all classes when moving from the variant "s" to a medium variant "m", with a decrease of 0.09 in Table <ref>. 
The results of the experimental evaluation using the two-stage detector Faster R-CNN are presented in Table <ref>. The table shows the mean Average Precision (mAP) scores obtained for each class individually as well as the overall mAP across all classes. The results indicate that all variants of the YOLO model outperform Faster R-CNN by a significant margin. This is supported by the fact that the mean mAP score of every YOLO variant was found to be higher than that of Faster R-CNN, both for fracture detection and overall performance across all classes. These findings suggest that the single-stage detection algorithm, YOLO, is a more effective model for this task. Moreover, Faster R-CNN does not seem to exhibit the ability to detect the classes in minority such as "bone anomaly", "bone lesion", and "foreign body". Figures <ref> and <ref> provide an overview of the mAP scores obtained for fracture class as well as across all classes by all YOLO variants and Faster R-CNN. In applications where false positives are costly, a model with high precision may be preferable, while in situations where missing detections are costly, a model with high recall may be more desirable. The mean Average Precision (mAP) serves as a comprehensive measure of the model's performance. Therefore, we selected the best-performing variant within each YOLO model based on the highest mAP achieved for the fracture class and overall performance across all classes. We have also compared their mAP scores to each other as well as with that of the Faster R-CNN model, as illustrated in Table <ref>. We also evaluated the performance of all variants, including Faster R-CNN, on a challenging image containing multiple objects of interest, including 2 fractures, 3 periosteal reactions, 1 metal, and 1 text. The bounding box estimates for these objects from each variant and Faster R-CNN are illustrated in Fig. <ref>. It is clear from Table <ref> that the variant "YOLOv8x" of YOLOv8 is the best-performing variant out of all the variants employed in this study. The results presented in this study using the variant "YOLOv8x" represent a significant improvement upon the ones originally presented in <cit.> for the fracture class. In that paper, the model variant "YOLOv5m" trained on COCO weights achieved a mean average precision (mAP) score of 0.93 for fracture detection and an overall mAP score of 0.62 at an IoU threshold of 0.5. In contrast, the results obtained in this study demonstrate a higher mAP score of 0.95 for fracture detection and an overall mAP of 0.77 at an IoU threshold of 0.5. Fig. <ref>, <ref>, <ref>, and <ref> present the F1 versus Confidence, Recall versus Confidence, Precision versus Confidence, and Precision versus Recall curves, respectively, for the variant "YOLOv8x" across all classes. These curves provide a visual representation of the model's performance on different confidence intervals and allow for a more thorough evaluation of its capabilities. The F1 versus Confidence curve shows the relationship between the model's F1 score, which is a measure of the balance between precision and recall, and the confidence of its predictions. The Recall versus Confidence curve illustrates the model's ability to correctly identify objects, while the Precision versus Confidence curve demonstrates the proportion of correct predictions made by the model. The Precision versus Recall curve shows the trade-off between the model's precision and recall, with higher precision typically corresponding to lower recall and vice versa. 
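To connect these curves to the underlying quantities, the toy snippet below sweeps a confidence threshold over a handful of scored detections and reports precision, recall, and F1 at each threshold; the scores, match flags, and ground-truth count are invented purely for illustration.

```python
import numpy as np

scores = np.array([0.95, 0.9, 0.8, 0.6, 0.4, 0.3])  # detection confidences (toy)
correct = np.array([1, 1, 0, 1, 0, 0])              # 1 = detection matches a ground truth box
n_gt = 4                                            # number of ground-truth objects (toy)

for t in np.arange(0.1, 1.0, 0.2):
    keep = scores >= t
    tp = int(correct[keep].sum())
    fp = int(keep.sum() - tp)
    fn = n_gt - tp
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    print(f"conf>={t:.1f}: precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```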
Additionally, a confusion matrix <ref> is shown for the variant "YOLOv8x". Our study found that the relationship between the complexity of a YOLO model and its performance is not always linear. Our results on the GRAZPEDWRI-DX dataset revealed that the performance of YOLO models did not consistently improve with increasing complexity, except for YOLOv5 and YOLOv8. § CONCLUSION & FUTURE WORK In this study, we aimed to evaluate the performance of state-of-the-art single-stage detection models, specifically YOLOv5, YOLOv6, YOLOv7, and YOLOv8, in detecting wrist abnormalities and compare their performances against each other and the widely used two-stage detection model Faster R-CNN. Additionally, the analysis of the performance of all variants within each YOLO model was also provided. The evaluation was conducted using the recently released GRAZPEDWRI-DX <cit.> dataset, with a total of 23 detection procedures being carried out. The findings of our study demonstrated that YOLO models outperform the commonly used two-stage detection model, Faster R-CNN, in both fracture detection and across all classes present in the GRAZPEDWRI-DX dataset. Furthermore, an analysis of YOLO models revealed that the YOLOv8 variant "YOLOv8x" achieved the highest mAP across all classes of wrist abnormalities in the GRAZPEDWRI-DX dataset, including the fracture class, at an IoU threshold of 0.5. We also discovered that the relationship between the complexity of a YOLO model, as measured by the use of compound-scaled variants within each YOLO model, and its performance is not always linear. Specifically, our analysis of the GRAZPEDWRI-DX dataset revealed that the performance of YOLO variants did not consistently improve with increasing complexity, except for YOLOv5 and YOLOv8. Some variants were successful in detecting minority classes while others were not. These results contribute to understanding the relationship between the complexity of YOLO models and their performance, which is important for guiding the development of future models. Our study highlights the potential of single-stage detection algorithms, specifically YOLOv5, YOLOv6, YOLOv7, and YOLOv8, for detecting wrist abnormalities in clinical settings. These algorithms are faster than their two-stage counterparts, making them more practical for emergencies commonly found in hospitals and clinics. Additionally, the study's results indicate that single-stage detectors are highly accurate in detecting wrist abnormalities, making them a promising choice for clinical use. While this research was conducted, YOLOv8 was the most recent version. The results of this study can serve as a benchmark for evaluating the performance of future models for wrist abnormality detection, as further improvements to either YOLOv8 or future versions of YOLO may surpass the results obtained in this study. It is worth noting that this study didn't explore the entire hyperparameter space and finding the best hyperparameters for each YOLO model may improve wrist abnormality detection performance on the dataset. Computational limitations restricted the input resolution to 640 pixels, but higher resolutions could further improve performance. The study showed that the models had difficulty detecting "bone anomaly", "bone lesion", and "foreign body" due to low instances of these classes, so increasing their instances through augmentation or image generation could enhance performance. 
Additionally, the performance of classification models could also be assessed by exploring the dataset for pure classification tasks without object localization. § ACKNOWLEDGEMENT This work was supported in part by the Department of Computer Science (IDI), Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology (NTNU), Gjøvik, Norway; and in part by the Curricula Development and Capacity Building in Applied Computer Science for Pakistani Higher Education Institutions (CONNECT) Project NORPART-2021/10502, funded by DIKU.
http://arxiv.org/abs/2407.12100v1
20240716180732
Agglomerative Clustering of Simulation Output Distributions Using Regularized Wasserstein Distance
[ "Mohammadmahdi Ghasemloo", "David J. Eckman" ]
stat.ME
[ "stat.ME", "stat.AP", "stat.ML" ]
Agglomerative Clustering of Simulation Output Distributions Using Regularized Wasserstein Distance Mohammadmahdi Ghasemloo, David J. Eckman ============================================================================================================= § ABSTRACT We investigate the use of clustering methods on data produced by a stochastic simulator, with applications in anomaly detection, pre-optimization, and online monitoring. We introduce an agglomerative clustering algorithm that clusters multivariate empirical distributions using the regularized Wasserstein distance and apply the proposed methodology to a call-center model. § INTRODUCTION A simulation model is a computational or mathematical representation of a real-world system designed to study its behavior under various scenarios. Simulation models are used extensively in fields such as engineering, economics, healthcare, and environmental science to predict and analyze the outcomes of different scenarios without the need to experiment in the real world, which can be costly, time-consuming, or impractical. Outputs of a simulation model typically correspond to key performance indicators (KPIs) of interest to the decision maker, e.g., profit, throughput, or service level. For stochastic simulation models, simulating a given scenario generates outputs that vary from replication to replication, thus each scenario has an associated probability distribution describing the stochastic behavior of its outputs. When conducting simulation experiments, the user controls which scenarios are simulated and how many replications are run. Simulation experiments can be easily designed to generate data that satisfies the standard assumptions of being independent and identically distributed, in contrast to most data obtained from the real world. Common tools for analyzing simulation output data include summary statistics (e.g., sample means, variances, and covariances) and visualization tools (e.g., histograms and boxplots). For problems with multiple KPIs, the multivariate empirical distribution produced by the data contains valuable information about system performance, but can be difficult to analyze and plot. To reveal important patterns and relationships that cannot be detected by conventional data analysis methods, we propose clustering the empirical distributions of simulated scenarios. Clustering is an unsupervised learning approach that can help discover important patterns and relationships in complex datasets. In the context of simulation output data, clustering can identify scenarios with similar KPIs, or more precisely, similar output distributions. Moreover, clustering facilitates comparative analysis: by understanding the characteristics of each cluster, a decision maker can draw meaningful comparisons and make informed decisions. We consider three important applications of clustering for enhancing simulation output analysis: anomaly detection, which involves identifying and investigating outliers in simulation outputs; pre-optimization, which involves formulating simulation-optimization problems and identifying promising initial solutions; and online monitoring, which involves tracking the system state over time and using classification methods to detect potentially undesirable system behavior and trigger appropriate actions or alerts. We propose an agglomerative clustering method that uses the complete-linkage criterion for forming clusters.
We choose agglomerative clustering for its flexibility, as it does not require specifying a predetermined number of clusters, and opt for complete linkage because it maintains compact and well-separated clusters. For the algorithm's measure of dissimilarity between distributions, we choose the regularized Wasserstein distance. The (unregularized) Wasserstein distance quantifies the distance between two discrete probability distributions by the minimum amount of “work”—defined as the product of the probability mass that needs to be moved and the distance it needs to be transported—required to transform one distribution into the other <cit.>. The Wasserstein distance is chosen over other metrics like Kullback-Leibler divergence or Jensen-Shannon divergence due to its ability to handle distributions with non-overlapping supports, as arises when working with continuous-valued empirical distributions. Another advantage of the Wasserstein distance is the notion of a barycenter, which acts like an average among distributions. This property is particularly useful for post-hoc analysis, when desiring to summarize each cluster with a representative distribution <cit.>. However, computing the Wasserstein distance between two discrete distributions entails solving a linear program <cit.>, which could become computationally intensive when working with large datasets. The regularized Wasserstein distance, also known as the Sinkhorn distance, adds an entropic regularization term that promotes smoother transport plans, is easier to compute, and more stable, making it well suited for data-intensive applications <cit.>. Our algorithm determines the optimal number of clusters based on the silhouette index <cit.>, a centroid-free metric that evaluates clustering quality in terms of intra-cluster and inter-cluster distances. Clustering has been used for many purposes in many fields, such as analyzing gene expression in bioinformatics <cit.>, market segmentation in economics <cit.>, and textures and shapes in image processing <cit.>. Hierarchical clustering techniques, specifically agglomerative clustering, build nested clusterings by iteratively merging the two most similar clusters <cit.> and produce a dendrogram that depicts the nested clusterings at different levels <cit.>. Adaptations like single, complete, and average linkage offer great flexibility in how agglomerative clustering algorithms merge clusters <cit.>. More recently, clustering has been employed in the study of stochastic simulation models, including for simulation optimization <cit.> and reducing model-form uncertainty <cit.>. To the best of our knowledge, we are the first to propose agglomerative clustering of simulation output distributions and investigate important use cases. The Wasserstein distance is used extensively in machine learning applications, such as in computer vision and pattern recognition to robustly compare visual feature distributions <cit.>. For large-scale problems, variations like the entropy-regularized Wasserstein distance improve computational efficiency. For example, <cit.> employ iterative Bregman projections to efficiently solve regularized transportation problems, thereby improving scalability and accuracy. Our clustering framework uses the algorithm of <cit.> to calculate the regularized Wasserstein distance and regularized Wasserstein barycenters. For a more detailed overview of computational techniques for optimal transport and its practical applications, we direct the reader to <cit.>. 
The Wasserstein distance has previously appeared in k-means clustering, e.g., for clustering market regimes <cit.> and financial data <cit.>. <cit.> extend the classical k-means algorithm to the clustering of one-dimensional, continuous-valued empirical distributions. However, their clustering algorithm, EP-MEANS, becomes computationally inefficient as the sample size increases and is difficult to extend to multivariate empirical distributions. <cit.> generalize the distance-based formulation of k-means to the Wasserstein space but identify several shortcomings, including the irregularity and non-robustness of barycenter-based Wasserstein k-means. To address scalability issues when clustering discrete distributions, <cit.> approximate discrete Wasserstein barycenters for large clusters using a modified Bregman alternating direction method of multipliers (ADMM) approach. Our hierarchical clustering algorithm avoids many of these regularity and scalability issues. <cit.> introduce a hierarchical clustering algorithm that utilizes optimal transport-based distance measures, though the instances being clustered are not themselves distributions, as in our setting. The rest of this paper is organized as follows: In Section <ref>, we elaborate on use cases of clustering simulation output distributions. In Section <ref>, we introduce the relevant notation and propose our agglomerative clustering algorithm. We present the results of several experiments on a call-center staffing problem in Section <ref> and conclude in Section <ref>. § USE CASES We first discuss the utility and versatility of clustering simulation output distributions through several use cases. Anomaly Detection In the context of simulation experiments, anomalies can be categorized as artificial or systemic. An artificial anomaly is typically associated with logic or coding errors within the simulation model, whereas a systemic anomaly is related to inherent features of the system. When using hierarchical clustering algorithms, anomalous output distributions can be identified by examining the dendrogram, the distances between clusters, or the cluster sizes <cit.>. After identifying an anomalous output distribution, one would first scrutinize the simulation code to determine if it is an artificial anomaly. If the anomaly is not artificial, then one might investigate further by, for example, examining the marginal distributions, correlation matrices, and corresponding input variables. Hierarchical clustering algorithms such as ours are expected to be more stable for identifying outliers than non-hierarchical clustering methods whose clusterings strongly depend on the initialization of the clusters. Pre-Optimization In many practical situations, there are tradeoffs between multiple KPIs, and the decision maker may be unable to articulate a priori what constitutes “good” versus “bad” system performance. By clustering output distributions and obtaining the barycenters, the decision maker can be presented with a more manageable number of distributions to compare. The decision maker could then conduct a series of A/B comparisons, wherein pairs of barycenters are compared and some are eliminated based on unformalized notions of preferred performance until a small number of clusters (or scenarios) remain. The clustering analysis can also help the decision maker specify which metrics should be modeled as objectives in a subsequent simulation-optimization problem and which should be treated as constraints. 
Achievable thresholds for the constraints can be set based on the observed performance outcomes of the simulated scenarios. Additionally, by examining the inputs associated with scenarios in a promising cluster, the decision maker can identify promising regions of the input space from which to initiate an optimization search, potentially leading to more rapid progress toward the optimal solution. Online Monitoring This application concerns how the output of a simulation model is influenced by state variables, namely those that evolve over time and can be observed, but not directly controlled, by the decision maker. We envisage an online monitoring framework in which clustering is performed offline and state variables are later tracked in real time with classification algorithms being utilized to help the decision maker anticipate changes in system performance. This approach involves a preliminary simulation experiment in which the scenarios correspond to different initial states, followed by the clustering of the generated outputs. When monitoring the system's state online, classification algorithms can be used to predict the cluster to which an observed state's output distribution may belong. Conversely, if the classification algorithm struggles to assign a state to a single cluster, such as a tie when using k-nearest neighbors, it would suggest that system performance may change soon, potentially prompting intervention. § CLUSTERING SIMULATION OUTPUT DISTRIBUTIONS Suppose there are N scenarios under consideration, and for each Scenario i, i=1,2,…,N, we obtain n_i independent simulation replications. Let 𝐲_il∈ℝ^d denote the vector output of the lth simulation replication at Scenario i and let μ_i := n_i^-1∑_l=1^n_iδ_y_il denote the corresponding empirical distribution, i.e., a discrete probability distribution with support 𝒴_i = {𝐲_i1, …, 𝐲_in_i}, where we ignore any duplicate values in the definition of 𝒴_i, and δ_y_il is the Dirac delta function at y_il. Since we are interested in clustering {μ_1,μ_2,…,μ_N }, and there is no specific ordering among distributions, we henceforth drop the subscript i and denote the probability mass vector, support, and cardinality of the support of an empirical distribution μ as 𝐩_μ, 𝒴_μ, and M_μ = |𝒴_μ|, respectively. §.§ Wasserstein Distance Let Δ_M := {𝐩∈ℝ_+^M∑_l=1^M p_l=1} denote the set of all possible probability mass vectors on a support of size M. For two empirical distributions μ and μ' having probability mass vectors 𝐩_μ∈Δ_M_μ and 𝐩_μ'∈Δ_M_μ', respectively, the polytope of couplings is defined as Π(𝐩_μ, 𝐩_μ') :={γ∈ℝ_+^M_μ× M_μ'γ1_M_μ'=𝐩_μ, γ^T 1_M_μ=𝐩_μ'}, where γ^T indicates the transpose of γ, and 1_M indicates a length-M vector of all 1s. The polytope Π(𝐩_μ, 𝐩_μ') represents the set of all possible matrices γ that redistribute the probability mass from 𝐩_μ to 𝐩_μ', where each matrix entry γ_ll' represents the amount of mass transported from the lth element in 𝒴_μ to the l'th element in 𝒴_μ' for l=1,2,…,M_μ and l' = 1,2,…, M_μ', where the indexing of the supports is arbitrary. The Wasserstein distance between μ and μ', denoted by W (μ,μ'), is defined as the optimal value of the following optimization problem: W (μ,μ') := γ∈Π( 𝐩_μ, 𝐩_μ' )min⟨𝐃,γ⟩, where 𝐃∈ℝ^M_μ× M_μ' is a cost matrix consisting of the pairwise distances between points in 𝒴_μ and 𝒴_μ', and ⟨· , ·⟩ denotes the summation of the element-wise product of two matrices. 
The optimal solution to the linear program posed in (<ref>), denoted by γ^*, is often referred to as the transportation plan matrix and represents the optimal allocation of probability mass from the source distribution μ to the target distribution μ'. The time complexity of algorithms for computing γ^* is proportional to the cube of the support size <cit.>. The regularized Wasserstein distance, on the other hand, can be solved in near-linear time <cit.> and is defined as W_λ (μ, μ') := γ_λ∈Π(p_μ, p_μ')min⟨𝐃, γ_λ⟩ - λ E(γ_λ), where λ is a regularization parameter, and E(γ_λ) is the entropy of the transportation plan matrix γ_λ, defined as E(γ) :=-∑_l=1^M_μ∑_l'=1^M_μ'γ_ll'logγ_ll', where we set γ_ll'logγ_ll' = 0 if γ_ll' = 0. As λ approaches 0, the optimal transportation plan matrix for (<ref>), γ_λ^*, becomes more sparse and approaches γ^* <cit.>. The entropic regularization term incentivizes γ_λ^* to be more diffuse than γ^*, and this induced non-sparsity helps to stabilize the computation of γ_λ^* because (<ref>) is a strongly convex program with a unique solution. An advantage of the regularized Wasserstein distance is that γ^*_λ can be calculated through an efficient iterative procedure involving matrix multiplications, as described in Algorithm <ref>. The stopping criteria in Algorithm <ref> helps to control the computational cost and could involve setting a maximum number of iterations or stopping when the percentage change in the regularized Wasserstein distance is less than some threshold. The regularized Wasserstein distance plays a central role in our proposed clustering algorithm. §.§ An Agglomerative Clustering Algorithm Agglomerative clustering is a hierarchical clustering method that begins by treating each instance as an individual cluster and successively merges the closest pairs based on a specified distance metric, allowing clusters to form organically from the data. We choose to employ agglomerative clustering for several reasons. Firstly, unlike k-means clustering, in which the number of clusters is predefined, agglomerative clustering excels in situations where the optimal number of clusters is unknown. Secondly, centroid-based methods, such as k-means, are sensitive to outliers due to their reliance on a single central point to represent each cluster. Outliers can significantly skew the centroid's location and distort the clustering process. In contrast, the complete-linkage approach commonly used in agglomerative clustering considers the maximum distance between any two points in distinct clusters, making the clustering more robust to outliers. Complete-linkage clustering also considers the farthest points within the merged clusters, resulting in tighter and more spherical cluster formations compared to single-linkage clustering, which can generate elongated clusters due to chaining effects. Thirdly, agglomerative clustering, particularly when employing complete linkage, offers a valuable output in the form of a dendrogram, which depicts the merging process and the distances between clusters at each stage of the algorithm and can aid in comprehending the relationships between instances and in determining an appropriate number of clusters. Fourthly, unlike the k-means algorithm in which one must repeatedly recalculate the centroid of each cluster, agglomerative clustering does not entail calculating centroids. 
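The matrix-scaling iterations referred to above can be sketched in a few lines; this is a generic Sinkhorn-style implementation, not necessarily identical to Algorithm <ref> (in particular, the regularization value, iteration cap, and tolerance below are illustrative choices of ours).

```python
import numpy as np

def sinkhorn(p_mu, p_nu, D, lam=0.05, max_iter=500, tol=1e-9):
    """Entropy-regularized transport between probability vectors p_mu and p_nu.

    D is the cost matrix of pairwise distances. Returns <D, gamma_lam> evaluated
    at the regularized optimal plan (the entropy term of W_lam can be added from
    gamma_lam if the exact objective value is needed) and the plan itself.
    """
    D = np.asarray(D, dtype=float)
    K = np.exp(-D / lam)                      # Gibbs kernel
    u, v = np.ones(len(p_mu)), np.ones(len(p_nu))
    for _ in range(max_iter):                 # stopping criteria as discussed above
        u_prev = u
        u = p_mu / (K @ v)
        v = p_nu / (K.T @ u)
        if np.max(np.abs(u - u_prev)) < tol:
            break
    gamma = u[:, None] * K * v[None, :]
    return float(np.sum(gamma * D)), gamma
```

Smaller values of λ push γ_λ toward the sparse unregularized plan but make the kernel K prone to underflow, which is one practical reason to cap the number of iterations.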
In our setting, the Wasserstein barycenter is a natural choice of centroid, but lacks robustness <cit.>; this idiosyncratic behavior of Wasserstein barycenters renders the centroid-based formulation inadequate for representing inter-cluster instances. To determine the optimal number of clusters, we use the silhouette index proposed by <cit.>. The silhouette index for a given clustering 𝒞 is defined as S_𝒞=1/|𝒞|∑_C ∈𝒞1/|C|∑_μ∈ C S_μ , where S_μ =( b_μ - a_μ )/ max{b_μ, a_μ}, a_μ = (|C_μ|- 1)^-1∑_μ' ∈ C_μ, μ' ≠μ W_λ(μ, μ') is the average regularized Wasserstein distance between μ and every other distribution in the same cluster, and b_μ = min_C' ∈𝒞, C' ≠ C_μ{(|C'|-1)^-1∑_μ' ∈ C' W_λ(μ, μ') } is the minimum average distance between μ and distributions in other clusters. The silhouette index considers both intra-cluster (as in a_μ) and inter-cluster (as in b_μ) distances, and its values fall within the range of -1 to 1, with a higher silhouette index indicating a more favorable clustering. For an individual distribution μ, a silhouette index S_μ close to 1 signifies that μ is well-positioned within its assigned cluster. We now present Algorithm <ref>, an agglomerative algorithm for clustering the multivariate empirical distributions of simulation outputs. In Algorithm <ref>, 𝒟 denotes the distance metric used to calculate the cost matrix between points in the supports. Before applying Algorithm <ref>, the output data is normalized within each dimension to ensure that no one KPI skews the clustering results. To further assess the practicality of Algorithm <ref>, it is essential to consider its computational cost. Step 1 of Algorithm <ref> calculates the pairwise distances between all distributions, the cost of which scales quadratically with the number of distributions, N. Calculating the regularized Wasserstein distance between a pair of distributions with the same support size exhibits a quadratic dependence on the size of the support <cit.>. After obtaining the pairwise distances, the rest of Algorithm <ref> has a cubic dependence on the number of distributions <cit.>. Additionally, calculating the silhouette index for a given clustering scales quadratically with the number of distributions <cit.>. §.§ Wasserstein Barycenter After clustering the distributions, we turn to the regularized Wasserstein barycenter to summarize the information in each cluster. For a given cluster, the regularized Wasserstein barycenter minimizes the average regularized Wasserstein distance between itself and each of the distributions within the cluster, effectively acting as an “average” of the distributions. To compute the barycenter for a cluster C, denoted generically by μ̅, we employ a method that assumes that all distributions have a common support. To conform with this assumption, we manipulate the probability mass vectors of the distributions in each cluster. Specifically, let 𝒴_μ̅ := ⋃_μ∈ C𝒴_μ be the collective support of all distributions in cluster C and let M_μ̅:=|𝒴_μ̅|. For each μ∈ C, 𝐩_μ can be modified by extending it to a length of M_μ̅ by assigning probability masses of 0 to values in 𝒴_μ̅\𝒴_μ, resulting in a modified distribution μ̃ defined on 𝒴_μ̅. The regularized Wasserstein barycenter is a discrete distribution on 𝒴_μ̅ having a probability mass vector 𝐩_μ̅ := 𝐩∈Δ_M_μ̅argmin1/|C|∑_μ∈ C W_λ(μ̃, μ̅). 
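One common way to approximate this minimizer on the shared support is an iterative Bregman-projection scheme; the sketch below is only a generic illustration of that idea and may differ in detail from the procedure adopted in the paper. D is assumed to be the (M_μ̅ × M_μ̅) matrix of pairwise distances between points of the collective support, and λ the same regularization parameter as before.

```python
import numpy as np

def entropic_barycenter(P, D, lam=0.05, n_iter=200):
    """Equal-weight entropy-regularized barycenter of the rows of P.

    P: (n, M) array whose rows are the zero-padded probability vectors of the
    distributions in a cluster, all defined on the shared support of size M.
    Returns a length-M probability vector approximating p_bar.
    """
    K = np.exp(-np.asarray(D, dtype=float) / lam)
    V = np.ones_like(P, dtype=float)
    for _ in range(n_iter):
        U = P / (V @ K.T)                     # one scaling vector per distribution
        KtU = U @ K                           # rows hold K^T u_k
        # geometric mean of the current column marginals across the cluster
        bary = np.exp(np.mean(np.log(V * KtU + 1e-300), axis=0))
        V = bary[None, :] / KtU
    return bary / bary.sum()
```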
Although the Wasserstein barycenter can be derived by minimizing a weighted sum of regularized Wasserstein distances, in this paper we assume that the scenarios are equally important, and hence weighted equally. The optimal probability mass vector 𝐩_μ̅ can be computed using another iterative procedure, given in Algorithm <ref>. § EXPERIMENTS We demonstrate several use cases of the proposed algorithm through experiments involving a discrete-event simulation model of a call center. The call center operates from 8 am to 4 pm and during this time customers call in according to a stationary Poisson process with a rate of 400 customers per hour. This call center serves two classes of customers—regular and premium—with regular customers comprising 60% of incoming calls. Two sets of operators—basic service and premium service—provide initial service to regular and premium customers, respectively. If there are no premium customers in the queue, premium service operators can serve regular customers; however, basic service operators cannot serve premium customers. Additionally, 15% of arriving customers, irrespective of their class, abandon if their initial service does not start within a customer-specific amount of time following a uniform distribution between 0.5 and 3 minutes. After their initial service is completed, 15% of customers, irrespective of their class, require additional service that is provided by a third type of operator: technical. Regular and premium customers are served by the same team of technical operators. Service times from basic service, premium service, and technical operators follow exponential distributions with means of 7, 3, and 10 minutes, respectively. Operator-dependent service rates such as these may arise because premium service operators have more resources, full system access, and extensive experience, and therefore can resolve issues more quickly. When queueing for technical support, premium customers are given priority over regular customers, and customers do not abandon. The call center stops receiving new calls at the end of the workday but continues operating until all customers have been served; this policy imposes overwork on the operators. §.§ Staffing a Fixed Number of Operators Suppose the call-center manager needs to train 49 operators for some combination of basic service, premium service, and technical roles and is interested in five KPIs: the mean time in the system for regular (Y_1) and premium customers (Y_2), and the mean overwork time for basic service (Y_3), premium service (Y_4), and technical operators (Y_5). Assuming that there must be at least one operator of each type, there are 1128 possible staffing configurations (scenarios). We show that even when simulating a fraction of these configurations, clustering can provide valuable insights about the system's behavior. We choose 100 configurations that uniformly cover the space of all configurations and simulate 40 days (replications) under each configuration. We then apply Algorithm <ref> to cluster the obtained empirical distributions for the five KPIs. The dendrogram in Figure <ref> shows the hierarchical clustering of the simulated configurations. Based on the silhouette index plot shown in Figure <ref>, having 7 clusters is a good choice, though having 8 or 9 clusters would also be satisfactory. 
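For reference, the clustering and cluster-count selection just described can be reproduced in spirit with off-the-shelf tooling once the pairwise regularized Wasserstein distances are available; the sketch below assumes SciPy and scikit-learn and a precomputed symmetric N × N distance matrix W, and the library's silhouette score may differ slightly in normalization from Eq. (<ref>).

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.metrics import silhouette_score

def cluster_scenarios(W, max_clusters=10):
    """Complete-linkage clustering of N output distributions from their pairwise
    regularized Wasserstein distances, choosing the cluster count by silhouette."""
    # checks=False tolerates the small nonzero self-distances of the regularized metric
    Z = linkage(squareform(W, checks=False), method="complete")
    best = (-np.inf, None, None)
    for k in range(2, max_clusters + 1):
        labels = fcluster(Z, t=k, criterion="maxclust")
        score = silhouette_score(W, labels, metric="precomputed")
        if score > best[0]:
            best = (score, k, labels)
    return Z, best[1], best[2]   # linkage matrix (for the dendrogram), best k, labels
```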
To more deeply understand the distributions within each cluster, we compute the barycenters of each cluster and plot the marginal cumulative distribution functions (cdfs) for each of the five KPIs. In Figure <ref>, we observe that no cluster consistently outperforms the others, however, Cluster 4 performs well across all five KPIs, whereas the other clusters perform poorly in at least one KPI. Having identified a good cluster, we examine the correlation matrix of Cluster 4 in Figure <ref>. We observe a positive correlation between the overwork of premium service operators and the time spent by both types of customers in the call center, as well as the mean overwork of basic service operators. This suggests that for staffing configurations in Cluster 4, a high overwork time for premium service operators on any given day is associated with both regular and premium customers spending more time in the call center than usual and basic service operators experiencing more overwork time than average. We compare to the correlation matrix of Cluster 5, which performs very well in the first four KPIs but poorly in terms of mean overwork for technical operators. In Figure <ref>, we observe a strong negative correlation between the mean time in the system of regular and premium customers. As seen in Figure <ref>, the staffing configurations that make up Cluster 4 are characterized by having a moderate number of technical operators and a variable number of basic service and premium service operators, where the number of basic service operators is as low as 1 in some configurations. The decision maker might want to identify the staffing configuration among those in Cluster 4 that, say, minimizes the total staffing costs. For instance, if the staffing costs for basic service, premium service, and technical operators were 4, 1, and 1, respectively, then configuration (7, 28, 14) would be the cheapest. §.§ Staffing Subject to a Budget Staffing-cost considerations could alternatively be incorporated into the design of the experiment. As a follow-up to the staffing-cost analysis above, suppose the decision maker's objective were to find a staffing configuration with a desirable output distribution among those having total costs between 50 and 55. There are 2,143 feasible staffing configurations and, as before, we uniformly select 100 configurations, simulate each for 40 days, and apply Algorithm <ref> on the results. The silhouette index recommends five clusters; the marginal distributions of the corresponding barycenters are shown in Figure <ref>. Unlike in the previous experiment, no cluster dominates across all five KPIs: each cluster performs very well in at least one KPI, but suffers in other aspects. The decision maker's priorities play a crucial role in balancing the tradeoffs across KPIs. For instance, if providing good customer service is more important than ensuring favorable conditions for operators, then Cluster 1 is preferable. Conversely, if keeping operators' overwork low is a priority, then Cluster 3 might be preferred. §.§ Monitoring Queue Lengths In this experiment, we illustrate an approach that enables a decision maker to monitor the system in real time and use offline clustering to make staffing adjustments. In our setup, the system consists of 22 basic service, 9 premium service, and 8 technical operators, with other system specifications remaining the same as before. 
The state of the system is represented by a 4-dimensional vector consisting of the queue lengths for regular and premium customers for initial service, and the queue lengths for regular and premium customers for technical service. For a given state, we consider three KPIs: the mean utilization of all operators, the sum of the maximum waiting times of regular and premium customers in the technical queue, and the total number of customers who abandoned the queues (referred to as customer churn), all measured over a one-hour period when starting in that state. To construct a set of scenarios for our offline experiment, we first simulate the system for 5000 days, recording the states at the beginning of each hour along with the corresponding output vector after an hour of observation. We restrict our attention to those states that were observed 10 or more times, of which there were 113. Algorithm <ref> groups the output distributions into three clusters, the barycenters of which are depicted in Figure <ref>. Across all three performance metrics, Cluster 1 performs the best, Cluster 2 performs moderately well, and Cluster 3 performs the worst. We now have the tools to monitor and classify the states visited during a new day by considering the state's two nearest neighbors, as measured in terms of the queue lengths. When the current state's two nearest neighbors belong to Cluster 1, we anticipate good performance in the next hour. Conversely, having both nearest neighbors in Cluster 2 suggests high customer churn and moderate operator utilization, with minimal impact on maximum waiting times. If both nearest neighbors belong to Cluster 3, poor performance across all metrics is expected in the next hour. There will be cases where the nearest neighbors are of different kinds, i.e., we find ourselves in a transition state between clusters. When this is the case, the decision maker should closely monitor trends and be prepared to take preventive actions, such as adjusting staffing in bottleneck areas or reallocating roles among cross-trained operators in a call-center context. Figure <ref> illustrates this monitoring approach over the course of one day. The system starts the day with empty queues (a good state), but before long it begins to oscillate between good and moderate states, before settling into a moderate state around 8:30 and remaining there until 11:15 with occasional transitions. Around 12:30, the system briefly shifts to a bad state, before returning to a moderate state until about 13:30, after which all states are bad. The dashed area represents the times when the call center will be closed within the next hour, but the state classifications during this period can still be useful. The plot suggests several times where preventive action could be taken, e.g., around 11:20, when the system first enters an estimated bad state. A risk-averse decision maker might take preventive action at this time, but if they had waited to see if the situation persisted, they would have discovered that the system recovered on its own. Alternatively, the decision maker could intervene after observing bad states for some given duration. Each approach has its merits, catering to different risk tolerances and operational strategies. § CONCLUSION This paper introduces an efficient agglomerative clustering algorithm for multivariate empirical distributions, motivated by the setting of analyzing simulation output data. 
Clustering simulation output data by scenario can be a powerful approach for anomaly detection, pre-optimization, and classification in online monitoring. Future research directions include clustering simulation output distributions in a streaming-data setting and clustering simulation sample paths, which can provide deeper insights into dynamic system behavior.
http://arxiv.org/abs/2407.11952v1
20240716174603
Young Black Holes Have Smooth Horizons: A Swampland Argument
[ "Chethan Krishnan", "Ranjini Mondol" ]
hep-th
[ "hep-th", "gr-qc" ]
http://arxiv.org/abs/2407.12724v1
20240717164122
An Evaluation of Continual Learning for Advanced Node Semiconductor Defect Inspection
[ "Amit Prasad", "Bappaditya Dey", "Victor Blanco", "Sandip Halder" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
An Evaluation of Continual Learning for Advanced Node Semiconductor Defect Inspection Amit Prasad and Bappaditya Dey Interuniversity Microelectronics Centre, Kapeldreef 75, 3001, Belgium1 SCREEN SPE Germany GmbH, Germany2 Equal Contribution* This research was conducted during Sandip Halder’s tenure at imec' amit.prasad.ext@imec.be, Bappaditya.Dey@imec.be An Evaluation of Continual Learning for Advanced Node Semiconductor Defect Inspection Amit Prasad1,* Bappaditya Dey1,* Victor Blanco1 Sandip Halder2,' July 22, 2024 ===================================================================================== § ABSTRACT Deep learning-based semiconductor defect inspection has gained traction in recent years, offering a powerful and versatile approach that provides high accuracy, adaptability, and efficiency in detecting and classifying nano-scale defects. However, semiconductor manufacturing processes are continually evolving, leading to the emergence of new types of defects over time. This presents a significant challenge for conventional supervised defect detectors, as they may suffer from catastrophic forgetting when trained on new defect datasets, potentially compromising performance on previously learned tasks. An alternative approach involves the constant storage of previously trained datasets alongside pre-trained model versions, which can be utilized for (re-)training from scratch or fine-tuning whenever encountering a new defect dataset. However, adhering to such a storage template is impractical in terms of size, particularly when considering High-Volume Manufacturing (HVM). Additionally, semiconductor defect datasets, especially those encompassing stochastic defects, are often limited and expensive to obtain, thus lacking sufficient representation of the entire universal set of defectivity. This work introduces a task-agnostic, meta-learning approach aimed at addressing this challenge, which enables the incremental addition of new defect classes and scales to create a more robust and generalized model for semiconductor defect inspection. We have benchmarked our approach using real resist-wafer SEM (Scanning Electron Microscopy) datasets for two process steps, ADI and AEI, demonstrating its superior performance compared to conventional supervised training methods. § RELATED WORK In the semiconductor process (mainly, Litho-Etch) domain, numerous approaches have been suggested for defect classification and localisation <cit.>, <cit.>, <cit.>. To the best of the authors' knowledge, the concept of incremental learning <cit.> for multi-class, multi-instance defect detection on SEM images has previously not been explored. § METHODOLOGY §.§ Dataset Original (resist) wafer SEM (Scanning Electron Microscopy) images were obtained during ADI (After Development Inspection) and AEI (After Etch Inspection) stages. Figure <ref> illustrates exemplary defect types in both process steps. The instance distribution per defect class is captured in Table <ref>. §.§ Notations and Preliminaries The following notations have been used in this work. Task (T_p): This is defined as supervised training of a defect detection framework for p classes (0 to p-1) in the dataset of the form (x_i, y_i)_i=1^m (m instances with defect feature x_iand corresponding label y_i). This is denoted by T_p. 
Finetuned task (F_p^q): This is defined as supervised training of a defect detection framework for the next q classes (p to p+q-1) in the dataset of the form (x_i, y_i)_i=1^m, where the framework has previously been trained on the initial p classes (0 to p-1). However, the fine-tuned model is not guaranteed to still identify these initial p classes. This is denoted by F_p^q. Incremental task (𝒯_p^q): This is defined as incremental supervised training of a defect detection framework for the next q classes (p to p+q-1) in the dataset of the form (x_i, y_i)_i=1^m, where the framework has previously been trained on the initial p classes (0 to p-1), enabling it to identify all (p+q) classes. This is denoted by 𝒯_p^q. §.§ Structure of study In this work, we present the following case studies. * Case study 1 (see Section <ref>) examines the effectiveness of the framework in incrementally learning new defect classes while minimizing forgetting of previously trained defect classes on the ADI dataset. * Case study 2 (see Section <ref>) assesses the framework for incrementally learning new defect classes in AEI images while minimizing forgetting of previously trained defect classes across the entire ADI dataset. * Case study 3 (see Section <ref>) compares three training strategies: (i) conventional supervised training with all defect classes at once, (ii) conventional supervised training on the first p defect classes followed by fine-tuning on the new q defect classes, and (iii) the proposed incremental supervised training on the first p defect classes followed by incremental learning of the new q defect classes. We use the Faster-RCNN <cit.> model for all studies. Moreover, for the incremental tasks, we adopt the approach presented in <cit.>, which also uses FRCNN. § CASE STUDY 1 The model starts training with task T_2 (initially trained on 2 defect classes, microbridge and gap), followed by two consecutive incremental training tasks: 𝒯_2^2 (adding 2 more defect classes, bridge and line-collapse) and finally 𝒯_4^1 (adding the last defect class, probable gap), using the ADI dataset. To evaluate performance, the average precision (AP) per defect class is plotted against training iterations, marking the checkpoints where new defect classes are introduced and where continual learning takes place. The results are compared to the conventional fine-tuning approach, where the model is trained on tasks F_2^2 and F_4^1, while keeping all experimental conditions constant. Figure <ref> a) shows how effective incremental learning is at progressively learning defect classes while minimizing catastrophic forgetting. Conversely, Figure <ref> b) shows how swiftly catastrophic forgetting occurs in the case of fine-tuning. § CASE STUDY 2 Defect classes from the AEI dataset are incrementally added following training on the ADI dataset. The model, following task 𝒯_4^1, undergoes training on tasks 𝒯_5^2 and 𝒯_7^3. Similarly, following task F_4^1, the model undergoes fine-tuning for tasks F_5^2 and F_7^3. Figure <ref> illustrates the comparison between the proposed incremental learning and conventional fine-tuning (using AP vs. iteration plots). § CASE STUDY 3 Figure <ref> shows inference results (with corresponding labels, bounding boxes, and confidence scores) from the three training strategies: the first model is trained on task T_10 (incorporating all defect classes simultaneously), while the other two models are derived from tasks 𝒯_7^3 and F_7^3, respectively. The labels are referenced from Table <ref>.
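For concreteness, the class-incremental schedule running through these case studies can be written out programmatically; the sketch below is a toy illustration only, and the annotation format, the class-index ordering (ADI classes 0-4 in the order listed above, AEI classes 5-9), and the function name are our assumptions rather than details given in the paper.

```python
def make_task_splits(annotations, schedule):
    """Split detection annotations into sequential class-incremental tasks.

    annotations: iterable of (image_id, class_id, bbox) tuples
    schedule:    list of class-id groups; an incremental learner (T_p^q) trains on
                 each group while retaining earlier classes, whereas plain
                 fine-tuning (F_p^q) sees only the new group.
    Yields, per step, the class ids seen so far and the annotations of that step.
    """
    for step, class_group in enumerate(schedule):
        group = set(class_group)
        task_anns = [a for a in annotations if a[1] in group]
        seen_so_far = sorted(set().union(*schedule[: step + 1]))
        yield step, seen_so_far, task_anns

# Schedule mirroring the case studies: T_2, T_2^2, T_4^1 on ADI, then T_5^2, T_7^3 on AEI.
schedule = [[0, 1], [2, 3], [4], [5, 6], [7, 8, 9]]
toy_annotations = [(0, 0, (10, 10, 50, 50)), (1, 3, (5, 5, 20, 20))]  # hypothetical entries
for step, seen, anns in make_task_splits(toy_annotations, schedule):
    print(step, seen, len(anns))
```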
Notably, it is observed that the model after task T_7^3 performs comparably to the model trained on task T_10. However, the model obtained after task F_7^3 demonstrates forgetfulness or mislabeling of defects it encountered earlier, as it has only recently been exposed to labels 7, 8, and 9. § CONCLUSION In this study, we demonstrated the effectiveness of a continual learning strategy in progressively learning the classification and localization of semiconductor defect classes in aggressive pitches, while mitigating catastrophic forgetting.
http://arxiv.org/abs/2407.13184v1
20240718054749
HSEmotion Team at the 7th ABAW Challenge: Multi-Task Learning and Compound Facial Expression Recognition
[ "Andrey V. Savchenko" ]
cs.CV
[ "cs.CV", "68T10", "I.4.9" ]
HSEmotion Team at the 7th ABAW Challenge A.V. Savchenko. Sber AI Lab, Moscow, Russia HSE University, Laboratory of Algorithms and Technologies for Network Analysis, Nizhny Novgorod, Russia avsavchenko@hse.ru HSEmotion Team at the 7th ABAW Challenge: Multi-Task Learning and Compound Facial Expression Recognition Andrey V. Savchenko1,20000-0001-6196-0564 ======================================================================================================== § ABSTRACT In this paper, we describe the results of the HSEmotion team in two tasks of the seventh Affective Behavior Analysis in-the-wild (ABAW) competition, namely, multi-task learning for simultaneous prediction of facial expression, valence, arousal, and detection of action units, and compound expression recognition. We propose an efficient pipeline based on frame-level facial feature extractors pre-trained in multi-task settings to estimate valence-arousal and basic facial expressions given a facial photo. We ensure the privacy-awareness of our techniques by using the lightweight architectures of neural networks, such as MT-EmotiDDAMFN, MT-EmotiEffNet, and MT-EmotiMobileFaceNet, that can run even on a mobile device without the need to send facial video to a remote server. It was demonstrated that a significant step in improving the overall accuracy is the smoothing of neural network output scores using Gaussian or box filters. It was experimentally demonstrated that such a simple post-processing of predictions from simple blending of two top visual models improves the F1-score of facial expression recognition up to 7%. At the same time, the mean Concordance Correlation Coefficient (CCC) of valence and arousal is increased by up to 1.25 times compared to each model's frame-level predictions. As a result, our final performance score on the validation set from the multi-task learning challenge is 4.5 times higher than the baseline (1.494 vs 0.32). § INTRODUCTION In recent years, the development of robust and accurate models for facial expression recognition has garnered significant attention due to its applications in various fields, such as human-computer interaction, mental health assessment, and surveillance. However, building models that are accurate in cross-dataset settings, fair, explainable, and robust remains a challenging task. Traditional approaches often struggle with biases introduced by unevenly distributed training data, resulting in models that perform well on average but poorly on underrepresented subgroups. Moreover, the necessity of balancing high performance with ethical considerations like privacy and explainability further complicates the development of these models. Ensuring that the models perform well across all demographic subgroups, and in unconstrained environments (i.e., in-the-wild conditions), poses additional difficulties. These challenges are highlighted by the organizers of a sequence of Affective Behavior Analysis in-the-wild (ABAW) contests <cit.> based on AffWild <cit.> and AffWild2 <cit.> datasets. The 7th ABAW competition <cit.> highlights these challenges by tasking participants with two tasks: 1) multi-task learning (MTL) <cit.> for simultaneous predicting facial Expressions (EXPR), Valence/Arousal (VA), and Action Units (AUs), along with 2) recognizing Compound Expressions (CE) <cit.> in real-world settings. The first tasks have been studied in ABAW-3 <cit.> and ABAW-4 <cit.>, while the latter problem has recently appeared in ABAW-6 <cit.>. 
The winners of these challenges achieve high accuracy by utilizing complex architectures and large ensembles <cit.>, which often require significant computational resources and extensive data processing capabilities. While these models excel in high-end hardware settings, their applicability in mobile or low-resource environments is limited <cit.>. This restricts their deployment in many practical scenarios where real-time processing on personal devices is necessary, emphasizing the need for lightweight yet effective solutions. Our approach to the ABAW-7 challenge focuses on addressing these issues by developing a streamlined pipeline based on efficient, frame-level facial feature extractors. These extractors, pre-trained in a multi-task setting to estimate valence-arousal and basic facial expressions, ensure high accuracy while maintaining low computational demands. To enhance privacy awareness, we utilize lightweight neural network architectures such as MT-EmotiDDAMFN, MT-EmotiMobileFaceNet <cit.> and MT-EmotiEffNet <cit.>, which are designed to run efficiently on mobile devices. This eliminates the need to send facial video data to remote servers, safeguarding user privacy and ensuring compliance with stringent data protection regulations. Our methodology incorporates advanced post-processing techniques to refine the predictions of these models. By applying Gaussian or box filters to smooth the output scores and blending predictions from top-performing models, we significantly enhance facial expression recognition accuracy. Our experiments demonstrate that our final performance score in the multi-task learning challenge outperforms the baseline by a substantial margin, showcasing the efficacy of our approach in balancing accuracy, efficiency, and privacy. The remaining part of this paper is structured as follows. Related works of MTL and CE competition participants are discussed in Section <ref>. Section <ref> described the proposed approach. Thorough experimental results are discussed in Section <ref>. Finally, Section <ref> contains the conclusion and future works. § RELATED WORKS In the MTL task in the ABAW competition, it is required to assign an image of a facial frame to valence, arousal, one of 8 basic expressions, and a subset of 12 AUs. The participants of ABAW-3 were not required to use the training set (s-Aff-Wild2), so most of them <cit.> were trained on larger AffWild2 <cit.> set. Hence, their results on the validation set cannot be directly compared to participants of ABAW-7 and ABAW-4 <cit.> who were forced to refine their models on s-Aff-Wild2. Let us discuss only techniques proposed during the latter competition. The two-aspect information interaction model <cit.> represents interactions between sign vehicles and messages. The SS-MFAR <cit.> extracts facial features using ResNet and leverages adaptive threshold for every class of facial expressions. The thresholds were estimated based on semi-supervised learning. The SMMEmotionNet was used to extract facial embeddings for an ensemble in the solution of the 6th place <cit.>. At the same time, the hybrid CNN (Convolutional Neural Network)-Transformer <cit.> with a fusion of ResNet-18 and a spatial transformer took the 5th place. Slightly better results have been obtained by the cross-attentive module and a facial graph that captures the association among action units <cit.>. The EfficientNet model pre-trained in Multi-Task setting (MT-EmotiEffNet) took the third place <cit.>. 
The top results have been achieved by the Masked Auto-Encoder (MAE) pretrained on unlabeled face images. For example, the EMMA ensemble of pre-trained MAE ViT (Vision Transformer) and CNN took the 2nd place <cit.>. The winner <cit.> adopted ensembles of various temporal encoders, multi-task frameworks, and unsupervised (MAE-based) and supervised (IResNet/DenseNet-based) visual feature representation learning. To encourage the study of cross-dataset settings, the ABAW-6 challenge <cit.> presented the CE recognition problem from the C-EXPR database <cit.>, which does not contain the training set. Indeed, participants have to assign each frame of 56 videos into one of seven categories (Fearfully Surprised, Happily Surprised, Sadly Surprised, Disgustedly Surprised, Angrily Surprised, Sadly Fearful, and Sadly Angry). The fifth place was achieved by the pre-trained visual language model that annotated a subset of the unlabeled data <cit.>. Next, 5 CNNs have been fine-tuned using the generated labels. The audio-visual ensemble of three pre-trained models was utilized in <cit.>, where the predictions for basic expressions are weighted using the Dirichlet distribution and summarized to predict the compound expressions. Three different models (ResNet50, ViT and Multi-scale, and Local Attention Network) were fine-tuned on the dataset annotated with the same compound expressions <cit.>. Their combination with late fusion allowed the authors to reach third place. The lightweight MT-EmotiMobileFaceNet model with a simple sum of predictions corresponding to compound expressions reaches the second place <cit.>. The winner <cit.> adopted the MAE pre-trained on a large facial dataset and the ViT encoder. It was proposed to transform the task to a multi-label recognition task for basic emotions: the ViT encoder was finetuned on the part of the AffWild2 dataset to predict the probability of each basic emotion, which can be combined to make the final prediction for CE. Thus, most solutions of previous competitions adopted ensembles of deep models, which significantly limits their practical applications. Hence, in this paper, we try to simplify the decision-making pipeline for both tasks by leveraging only simple post-processing of lightweight models. § METHODS The main task of this paper is to recognize the emotions of each video frame X(t), t=t_1, t_2, ...,t_N, where 1≤ t_1< t_2< ...<t_N are the observed frame indices. We deal with both continuous and discrete models of emotions. In the former case, the most typical emotional space is the two-factor Russel's circumplex model of affect <cit.> with VA-based emotional encoding. However, three- or even four-factor models have also been studied. Discrete representations have initially appeared as a set of basic expressions of Paul Ekman (anger, happiness, etc.). Moreover, detailed FACS (Facial Action Coding System) <cit.> with specific facial parts (AUs) is also widely used. Finally, due to the complexity of human emotions, it is also important to analyze compound expressions that have appeared simultaneously. We consider two tasks of the ABAW-7 challenge. In the MTL competition, it is necessary to assign X(t) to three emotional representations: * Valence V(t) ∈ [-1,1] and arousal A(t) ∈ [-1,1] (multi-output regression task). * Facial expression c(t) ∈{1,..., C_EXPR}, where C_EXPR=8 is the total number of basic emotions: Neutral, Anger, Disgust, Fear, Happiness, Sadness, Surprise, and Other (multi-class classification). 
* AUs 𝐀𝐔(t)=[AU_1(t),...,AU_C_AU(t)], where C_AU = 12 is the total number of AUs and AU_i(t) ∈{0,1} (multi-label classification). This paper proposes a novel pipeline (Fig. <ref>) for the MTL challenge. The main part is the feature extractor backbones based on lightweight neural network architectures <cit.>, such as EfficientNet-B0, MobileFaceNet, DDAMFN (Dual-Direction Attention Mixed Feature Network) <cit.> and MobileViT. We performed their training as described in <cit.>, namely, firstly, pre-trained them to recognized faces from VGGFace2 dataset, and next fine-tune them on AffectNet to simultaneously classify static facial expressions and predict VA using multi-task loss <cit.>, which is essentially a sum of weighted categorical cross-entropy for facial expressions and CCC (Concordance Correlation Coefficients) for valence and arousal. The resulting models (MT-EmotiEffNet, MT-EmotiDDAMFN, MT-EmotiMobileFaceNet, MT-EmotiMobileViT) <cit.> are used to extract D ≫ 1-dimensional facial embeddings 𝐱 at the output of penultimate layer and 10-dimensional scores 𝐬 (8 logits of emotions from AffectNet plus valence and arousal) at the output of the last layer. Next, we train a simple feed-forward network with three output layers: 1) 2 hyperbolic tangent activations 𝐩_VA∈ [-1,1]^2; 2) softmax layer with C_EXPR outputs (𝐩_EXPR∈ [0,1]^8); and 3) C_AU logistic sigmoids (𝐩_AU∈{0,1}^12) <cit.>. The input of this model is a concatenation of 𝐱 and 𝐬. However, we empirically observed that VAs are predicted better using only logits, so we developed a special slice layer that extracts only 10 last inputs (logits 𝐬) and feeds them into the VA dense layer. Moreover, we studied a straightforward blending of two top models indicated by upper indices ^(1) and ^(2): 𝐩^(blend)_VA(t)=w_VA·𝐩^(1)_VA + (1-w_VA)·𝐩^(2)_VA, 𝐩^(blend)_EXPR(t)=w_EXPR·𝐩^(1)_EXPR + (1-w_EXPR)·𝐩^(2)_EXPR, 𝐩^(blend)_AU(t)=w_AU·𝐩^(1)_AU + (1-w_AU)·𝐩^(2)_AU, where the weights w_VA, w_EXPR and w_AU are chosen using available validation set. As the values of sequential emotions should be smooth, we propose to perform post-processing of our predictions using simple filters. In general, dynamic changes of facial expressions, VA and AUs may be significantly different, so we use separate kernel sizes k_EXPR, k_VA, and k_AU. In practice, we observed that smoothing decreases the quality of AU detection, so we set k_AU=0. Nevertheless, we obtain a sequence of predictions for each task and compute either box filter (simple average): 𝐩^(avg)(t)=∑_|t_i-t| ≤ k𝐩(t_i), or Gaussian filter with the smoothing factor (variance) σ^2: 𝐩^(Gauss)(t)=∑_|t_i-t| ≤ k(exp(-(t-t_i)^2)/2σ^2𝐩(t_i))/∑_t_i ∈Δ T(t)exp(-(t-t_i)^2)/2σ^2. The final decision for the VA prediction task is 𝐩_VA. The index that corresponds to the maximal score 𝐩_EXPR is returned for EXPR classification. The final AU scores are obtained by a hard decision rule in which scores 𝐩_AU are compared with fixed thresholds t_AU. We consider two types of thresholds: fixed threshold t_AU=0.5 and the best thresholds t^*_AU, achieving the top macro-averaged F1-score on the validation set. We modified the pipeline originally described in <cit.> for the CE classification task. In particular, we perform face detection in each frame using the RetinaFace <cit.> and Mediapipe <cit.>. Next, we feed each facial image into an emotional neural network and compute probability scores 𝐩 for 8 basic emotions from AffectNet <cit.>. 
We study two possibilities, namely, leveraging lightweight models pre-trained on AffectNet <cit.> and using them to extract features for a feed-forward neural network with one hidden layer trained on the frame-wise expression classification task from ABAW-6 <cit.>. In the latter case, a straightforward multi-class classification problem was solved with softmax activation and weighted categorical cross-entropy loss using either the official training part or the train and validation set concatenation. Moreover, we implemented the idea of the winner of the previous edition of this challenge <cit.>. We trained a model for multi-label classification with sigmoid activations and weighed binary cross-entropy loss. Next, the scores 𝐩 are aggregated for two classes from the compound expressions. We use three different average functions: arithmetic mean (A-mean), geometric mean (G-mean), and harmonic mean (H-mean). As one frame may contain several detected faces, we examine two options: 1) compute the simple average of CE predictions inside one frame, or 2) choose only the largest face and select its predictions. Finally, we perform smoothing of sequential predictions with kernel size k using either box (<ref>) or Gaussian (<ref>) filters and return the label corresponding to the maximal smoothed score. § EXPERIMENTS §.§ Multi-Task Learning Challenge In this subsection, we provide an ablation study of our approach for the MTL challenge using official training and validation sets from s-Aff-Wild2 <cit.>. Some labels are missed, so 142,333 training facial frames have only 103,917 values of Valence / Arousal, 90,645 expression labels, and 103,316 AUs. The validation set consists of 26,876 faces: all AU and VA are available, but only 15,440 expressions are known. We use official performance measure (P_MTL=P_VA+P_EXPR+P_AU), which is the sum of the mean CCC of valence and arousal (P_VA=(CCC_V+CCC_A)/2); the macro-averaged F1-score across all 8 expression categories (P_EXPR); the average F1-score across all 12 AUs (P_AU). The training results on the cropped and cropped_aligned official sets are shown in Table <ref>. As one can notice, the latter set is much better for our models. Moreover, the best results are obtained by MT-EmotiEffNet-B0 <cit.> and MT-EmotiDDAMFN <cit.>, which will be considered in the remaining experiments. The results of their pre-trained versions without refinement on s-Aff-Wild2 are presented in Table <ref>. As there is no Other category in AffectNet and Contempt emotion in s-Aff-Wild2, we implemented two techniques for matching expressions: assigning all “Contempt” predictions as “Other” or excluding the “Contempt” class from predictions and “Other” from the validation dataset. Though the performance measures on EXPR classification and VA prediction tasks are slightly worse when compared to training a feed-forward net on the s-Aff-Wild2 (Table <ref>), they still demonstrate promising results, especially if rather broad and unspecific “Other” category is ignored. It is also remarkable that though the pre-trained MT-EmotiEffNet-B0 achieves higher mean CCC P_VA when compared to MT-EmotiDDAMFN, the latter can be trained to achieve much better final P_VA (0.4826 vs 0.4433, see Table <ref>). In the next experiments, we study the Gaussian smoothing (<ref>) of predictions (Fig. <ref>, <ref> and <ref>). 
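For reference, the box and Gaussian filters of Eqs. (<ref>)-(<ref>) and the blending of Eqs. (<ref>)-(<ref>) reduce to a few lines of array code; the sketch below assumes SciPy, operates on a (T, C) array of frame-level scores, and relies on library filters whose boundary handling may differ slightly from the truncated sums in the text, so it approximates rather than reproduces the exact post-processing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, uniform_filter1d

def smooth_scores(scores, k=None, sigma=None):
    """Temporal smoothing of frame-level predictions along the time axis:
    box filter over a +/- k window if k is given, Gaussian filter if sigma is given."""
    scores = np.asarray(scores, dtype=float)
    if sigma is not None:
        return gaussian_filter1d(scores, sigma=sigma, axis=0, mode="nearest")
    if k:
        return uniform_filter1d(scores, size=2 * k + 1, axis=0, mode="nearest")
    return scores

def blend(p1, p2, w):
    """Simple blending of two models' frame-level predictions with weight w."""
    return w * np.asarray(p1) + (1.0 - w) * np.asarray(p2)

# Example post-processing (illustrative variable names):
# expr_labels = smooth_scores(p_expr, sigma=5).argmax(axis=1)
# au_pred = (smooth_scores(p_au) >= t_au).astype(int)   # k_AU = 0, i.e., no smoothing for AUs
```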
As one can see, smoothing does not work for AU detection but can significantly improve the results for the other tasks: up to a 0.06 difference in F1-score for EXPR classification and up to a 0.06 difference in mean CCC for VA prediction. Moreover, the smoothing works nicely even for blending the best models (Fig. <ref>). The detailed results of our approach for each category of VA, EXPR, and AU are presented in Table <ref>, Table <ref> and Table <ref>, respectively. The best results of our approach compared to known results for the validation set of the MTL challenge are presented in Table <ref>. We significantly improve the previous best metric P_MTL of MT-EmotiEffNet <cit.>: from 1.30 to 1.42. Moreover, using the recently introduced MT-EmotiDDAMFN in a simple blending ensemble (<ref>) with Gaussian smoothing (<ref>) makes it possible to achieve P_MTL=1.49, which is 5 times better than the baseline VGGFace <cit.> (0.32) and only slightly worse than the first place of the previous challenge <cit.>. §.§ Compound Expression Recognition Challenge In this subsection, we present the results of the CE classification challenge. First, we present the results of refining the EXPR classification feed-forward networks using AffWild2 (Table <ref>). The best F1-score is obtained by EmotiEffNet-B0, while the multi-label classification formulation yields a more accurate network. In the remaining part of this section, we denote these models using the suffix “(EXPR_ft)”. Regarding the results on CE recognition, there are no training/validation sets for this task. Hence, we borrow the approach from <cit.> and estimate the resulting class balance by computing the Kullback-Leibler (KL) divergence between actual and predicted class probabilities (Table <ref>, Fig. <ref>). The actual frequencies of each category were taken from the original paper <cit.>: Fearfully Surprised (14445 frames), Happily Surprised (24915), Sadly Surprised (10780), Disgustedly Surprised (10637), Angrily Surprised (10535), Sadly Fearful (10112), and Sadly Angry (8878). Here, MT-EmotiMobileFaceNet applied to the largest face in a frame with the arithmetic mean of compound class probabilities seems to be the best candidate for submission. However, the Gaussian smoothing of its predictions slightly increases the KL divergence (Fig. <ref>). Cohen's kappa coefficient estimates the inter-rater reliability (Fig. <ref>). This figure has two parts: the top-left corner for models pre-trained on AffectNet and the bottom-left corner for EXPR classifiers trained on AffWild2. The results on the test set (Table <ref>) demonstrate the superiority of our post-processing. Indeed, we increase the F1-score by 4.5% (from 0.2708 to 0.3146) for the best MT-EmotiMobileFaceNet model. § CONCLUSION In this paper, we proposed a novel pipeline (Fig. <ref>) for efficient solutions to various emotion recognition problems. We use lightweight neural network feature extractors pre-trained on AffectNet to simultaneously recognize basic expressions and predict valence and arousal. Our method does not need to fine-tune the neural network model on a new dataset. Experiments on datasets from the ABAW-6 competition show that even pre-trained models demonstrate reasonable performance (Table <ref>), and training a simple feed-forward neural network with one hidden layer makes it possible to reach even better results (Table <ref>).
The post-processing of frame-wise predictions with Gaussian (<ref>) or box filters (<ref>) and the blending of the two best models (<ref>) improve the F1-score of facial expression classification by up to 7% (Fig. <ref>), while the mean CCC of VA prediction is up to 1.25-times better (Fig. <ref>) than the frame-level predictions for both the MTL (Table <ref>) and CE challenges (Table <ref>). As a result, our final performance score on the validation set from the MTL task is 4.5 times higher than the baseline (1.494 vs 0.32). The source code to reproduce our experiments is available at <https://github.com/HSE-asavchenko/face-emotion-recognition/tree/main/src/ABAW/ABAW7>. In the future, it is necessary to continue the study of pre-trained models for feature extractors, similarly to the winners of the previous challenge, who successfully trained an effective facial feature extractor using MAE and ViT <cit.>. Moreover, it is possible to integrate pre-trained speaker emotion recognition models <cit.> to improve the accuracy of our pipeline. §.§.§ Acknowledgements The article was prepared within the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE).
http://arxiv.org/abs/2407.12382v1
20240717080225
Enhancing Fluorescence Correlation Spectroscopy with Machine Learning for Advanced Analysis of Anomalous Diffusion
[ "Nathan Quiblier", "Jan-Michael Rye", "Pierre Leclerc", "Henri Truong", "Abdelkrim Hannou", "Laurent Héliot", "Hugues Berry" ]
q-bio.QM
[ "q-bio.QM", "physics.bio-ph" ]
Enhancing Fluorescence Correlation Spectroscopy with Machine Learning for Advanced Analysis of Anomalous Diffusion Nathan Quiblier, Jan-Michael Rye, Pierre Leclerc, Henri Truong, Abdelkrim Hannou, Laurent Héliot, Hugues Berry § ABSTRACT The random motion of molecules in living cells has consistently been reported to deviate from standard Brownian motion, a behavior coined as “anomalous diffusion”. Fluorescence Correlation Spectroscopy (FCS) is a powerful method to quantify molecular motions in living cells but its application is limited to a subset of random motions and to long acquisition times. Here, we propose a new analysis approach that frees FCS of these limitations by using machine learning to infer the underlying model of motion and estimate the motion parameters. Using simulated FCS recordings, we show that this approach enlarges the range of anomalous motions available in FCS. We further validate our approach via experimental FCS recordings of calibrated fluorescent beads in increasing concentrations of glycerol in water. Taken together, our approach significantly augments the analysis power of FCS to capacities that are similar to the best-in-class state-of-the-art algorithms for single-particle-tracking experiments. § INTRODUCTION Deviation of random motion from standard Brownian motion (BM) has received considerable attention in the literature to describe diverse physical situations <cit.>. For instance, anomalous diffusion, where the mean-squared displacement scales non-linearly with time, ⟨ r^2(t) ⟩=D t^α, has been reported to describe the motion of several proteins or particles in living cells <cit.>. In this case, the exponent α is usually referred to as the anomalous exponent, and D is the diffusion coefficient. All anomalous subdiffusion motion models exhibit α<1, whereas α=1 for standard Brownian motion. However, anomalous subdiffusion is a characteristic shared by several unrelated types of motion. For instance, continuous-time random walk (CTRW), fractional Brownian motion (fBM) or random walk on a fractal support (RWf), all exhibit anomalous subdiffusion while the physical processes they describe are very different: heavy-tailed residence time distribution for CTRW, correlation between successive jumps for fBM or the fractal geometry of the object on which RWf takes place <cit.>. Therefore, the complete characterization of the motion of a biomolecule in a live cell requires the completion of two tasks: (i) a classification or selection task to decide what model is the best at explaining the observations (e.g., BM, fBM, RWf or CTRW) and (ii) an inference or calibration task, to estimate the parameter values of the selected model given an experimental observation. In recent years, the advent of single-particle tracking supra-resolution microscopy <cit.> has generalized the use of individual trajectories to quantify the motion of biomolecules or particles in living cells. A range of methods have been proposed for the classification and inference tasks based on individual trajectories <cit.>, from simple (non-)linear regression <cit.>, statistical tests <cit.> or Bayesian inference <cit.>, to machine- <cit.> and deep-learning <cit.>.
A key factor here is the length of the observed individual trajectories, since for all the methods, the longer the individual trajectories, the better the performance. Experimentally, though, technical limits strongly constraint the typical time of a trajectory, which can be as large as several seconds for membrane proteins <cit.> but is usually closer to milliseconds for motions probed in the nucleus <cit.>. On the other hand, Fluorescence Correlation Spectroscopy (FCS) is the main methodological alternative to single-particle-based techniques for the motion characterization of biomolecules in living cells <cit.>. In FCS, the biomolecules of interest are labelled with a fluorophore, and one monitors the fluctuations of the fluorescence signal due to their interaction with the light beam illuminating the sample. Although alternative approaches have been proposed <cit.>, data analysis in FCS is usually based on the auto-correlation of the fluorescence signal, G(τ). In the case of BM and fBM, theoretical considerations yield explicit non-linear functions for the expression of G(τ) as a function of the correlation delay τ, the parameters of the optical setup and the parameters of the model of motion <cit.>. Fitting this expression to the measured auto-correlation can be used for both model classification and selection with information criteria as well as for parameter inference <cit.>. Each approach, whether FCS or SPT, comes with its own specificity <cit.>. FCS can yield good results with a few individual molecules in the illumination volume, but is not a single-molecule approach, as opposed to SPT. The time scales they address are usually different: typically between 1 μs to 1 ms for FCS vs 100 ms to 1 s for SPT. In SPT, one usually has to reconstruct the trajectories from the measured individual localizations. Tracking errors during these reconstructions can induce significant measurement errors <cit.>. In FCS, the signal-to-noise ratio of the auto-correlation function is usually low, so one has to continuously monitor the signal over long durations (more than 1 second) and average large numbers of consecutive measurements (often more than 100). Because of this, FCS is usually not able to track changes of the motion parameters if they occur over a time scale shorter than several minutes. Finally, analytical expressions for the auto-correlation function G(τ) are available for BM and fBM, but they are still lacking for other anomalous models, e.g. RWf or CTRW <cit.>[Actually, an analytical expression can be obtained for motions defined by stationary processes with anomalous diffusion at all times and Gaussian distribution of the spatial displacements <cit.>. In practice, this usually restricts to fBM.]. Therefore, FCS is usually considered not to be applicable to the characterization of RWf or CTRW. Here we show that most of the above shortcomings of FCS for the classification and characterization of biomolecule motions can be overcome. Instead of fitting the auto-correlation function by a theoretical expression, we use machine learning based on the auto-correlation function to perform the classification and inference tasks. With synthetic FCS data, we show that this approach renders FCS a powerful tool to distinguish between a range of standard and anomalous motions (BM, fBM and CTRW). The performance of our approach for the classification task and for parameter inference is found to be similar to the best-in-class state-of-the-art SPT algorithms on long trajectories. 
Our approach accommodates a wide range of FCS experimental setup parameters (beam width and illumination intensity) and uses recordings that are both unique (one recording per estimation) and short (≥ 100-200 ms). We show that it can be used to accurately track changes of the parameter motions even with 1 Hz parameter-change frequency. Finally we apply the method on experimental data using calibrated beads in water with an increasing concentration of glycerol. Our predictions regarding the model of motion and physical parameters follow the Stokes-Einstein law and serves as a validation of our method. § RESULTS §.§ Motion classification and parameter inference on synthetic data We generated a learning set of more than 2.5 millions simulated FCS experiments, corresponding to 945 values of motion parameters α and D sampled uniformly in (0,1) and (0,10], respectively (see sec. <ref> for details on the generation of synthetic FCS data). For each pair of sampled parameters, 3 sets of trajectories were simulated with the following models: Brownian motion (BM, for which α was set to 1), fractional Brownian motion (fBM) and continuous-time random walk (CTRW) (more details on the models in sec. <ref>). One constant concern in this study was to develop a method that is robust enough to accommodate a wide range of experimental setup parameters, as encountered in the FCS laboratories worldwide. To this end, for each of the 945× 3 trajectories generated with the sampled parameters, we generated 900 FCS recordings by covering a wide range of experimental setup parameters: illumination beam waists ω_xy∈{200, 225,250,275,300} nm, and ω_z ∈{400,500,600} nm and recording durations T_obs∈{0.1, 0.25, 0.5, 0.75, 1,1.25,1.5,2} s. Figure <ref> provides illustrations of the types of trajectories generated (Fig. <ref>a1,b1,c1), as well as the corresponding estimators of the auto-correlation Ĝ(τ) (defined by eq. <ref>) for simulated FCS recordings of 0.15 or 1.5 seconds (Fig. <ref>a2,b2,c2). Due to the FCS signal-to-noise ratio (SNR) of the auto-correlation Ĝ(τ), in fitting FCS methodologies, one typically accumulates and averages a large number of measurements (several hundreds), in order to, precisely, compensate for the lower SNR of individual measurements. Here, our objective was to test whether machine learning could exploit the information contained in individual auto-correlation measurements, despite their low SNR, in the absence of any averaging or accumulation procedure. Figure <ref>d shows the performance of our machine learning strategy for the model classification task. Our strategy, described in sec. <ref> is based on histogram gradient boosting and exclusively uses individual auto-correlation measurements as illustrated in Fig. <ref>a2,b2,c2. Despite the low SNR of individual FCS recordings, our method exhibits very good classification accuracy, as measured by the F_1-score (F_1=TP/[TP+0.5(FN+FP)], with TP = # true positives, FN= # false negatives, FP=# false positives). With observation times larger than 1.0 s, the average F_1-scores reach large values, in the range [0.88-0.90]. As expected, performance decreases with the FCS measurement time, but even with the smallest value used, T_obs=0.1 s, the F_1-scores remain large, with values close to 0.80. Importantly, our algorithm manages to exhibit very similar values for all the beam waists tested, thus suggesting its applicability to a range of experimental setups. Indeed, the F_1-scores for the three ω_z of the figure are very similar. 
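As a schematic of this classification stage: the text specifies histogram gradient boosting applied to individual autocorrelation estimates, but not the exact estimator grid, feature layout, or hyper-parameters, so the sketch below uses the conventional fluctuation autocorrelation estimator and illustrative scikit-learn defaults rather than the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def autocorrelation(F, lags):
    """G_hat(tau) = <dF(t) dF(t+tau)> / <F>^2 for a fluorescence trace F,
    evaluated on a fixed grid of integer lags (one feature vector per recording)."""
    F = np.asarray(F, dtype=float)
    dF = F - F.mean()
    denom = F.mean() ** 2
    return np.array([np.mean(dF[:len(F) - lag] * dF[lag:]) / denom for lag in lags])

def classify_motion(X, y):
    """Train a histogram-gradient-boosting classifier on a matrix X of G_hat(tau)
    curves with motion labels y in {BM, fBM, CTRW}; returns the model and macro F1."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    clf = HistGradientBoostingClassifier(max_iter=300, learning_rate=0.1)
    clf.fit(X_tr, y_tr)
    return clf, f1_score(y_te, clf.predict(X_te), average="macro")
```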
The shadings of these curves show the standard deviations of the score computed for different values of the motion parameters but also for different values of ω_xy, the beam waist in the x and y direction. The amplitudes of the shadings reflect the fact that our algorithm delivers good performance for all the motion parameters and all the beam waists tested. The large values of the F_1-scores exhibited by our algorithm thus reveal its capacity to perform a robust classification of the motion types, even with individual (non-averaged) and short FCS measurements, and even when CTRW is part of the possible motions. Regarding now the regression task, the accuracy of our machine learning algorithm is shown on Figure <ref>, with separate inference of the anomalous exponent α (Fig. <ref>a) and the diffusion coefficient (Fig. <ref>b). The estimation of α exhibits very good accuracy with MAE (mean absolute error) values around 0.12 for the largest observation times, both for fBM and CTRW. Most of the BM trajectories are correctly classified (see Fig <ref>d), corresponding to α set to exactly 1, thus MAE=0. However, for the small fraction of BM trajectories that are incorrectly classified as fBM or CTRW, the inference of α yields values that are different but very close to α=1.0. On average, the MAE for BM is therefore non-zero but still very small. In all cases, the estimation of α of course deteriorates with decreasing recording times, but the loss of accuracy down to T_obs=250 ms remains limited (not larger than 0.15). We therefore conclude that our machine learning strategy delivers good estimates of the value of α even in the case of CTRW motion. The accuracy for the estimation of the diffusion coefficient of BM motions is even better. The MAE values are around 0.70 for long T_obs, a very good performance given that the real value is sampled uniformly at random in (0,10]. Here again the accuracy decreases with smaller observation times, but even with the smaller value used here, T_obs=250 ms, the error is less than twice the error with T_obs=2 s. We compared the accuracy of our method with the standard methodology of FCS, that is based on non-linear fitting of the auto-correlation function. Indeed, for BM and fBM, theoretical expressions can be derived for the decay of the auto-correlation function <cit.>: G̅_BM(τ) = 1/N( 1 + 4D τ/ω_xy^2)^-1( 1 + 4Dτ/ω_z^2)^-1/2 and G̅_fBM(τ) = 1/N( 1 +( 4D τ/ω_xy^2)^α)^-1( 1 +( 4Dτ/ω_z^2)^α)^-1/2 Fitting the expression corresponding to the a priori model of motion of the measured auto-correlation function allows one to estimate the value of the free parameters α and/or D. However, to our knowledge, such an expression is not available for CTRW, so this method cannot be used for parameter estimation in CTRW. We show in figure <ref>c a comparison of the accuracy obtained using the above non-linear fits with the one obtained with our machine learning method. Both methods were applied to individual (non-averaged) auto-correlation functions like those shown in fig. <ref>a2,b2,c2. Given the level of noise present in these recordings, it is not surprising that the estimation of α by standard non-linear fitting is not very good, with accuracies that are 3- to 4-times lower than our machine learning approach (Fig. <ref>a). For the estimation of D, the accuracy of the non-linear fits is markedly better (Fig. <ref>d). Our ML approach is still approx. 
1.8-times more accurate than the standard non-linear fit method at very small T_obs, but the accuracy values of both methods converge at long T_obs. Therefore, the machine-learning approach proposed in the current study demonstrates better accuracy on individual (non-averaged) synthetic FCS recordings than the standard non-linear fit methods. §.§ Monitoring fast variations of the motion parameters We then explored whether our method could be used to monitor rapid changes of the parameters of motion. To this end we used the simulation methodology presented in section <ref> to generate synthetic FCS recordings of 10 s duration, where we changed the parameter of motion every second. For CTRW motion, we resampled the value of the anomalous exponent α every second according to an uniform distribution in (0,1). For BM, we resampled the coefficient of diffusion D with the same frequency, using an uniform distribution in (0,10]. Figure <ref>a1 and b1 show examples of the resulting constant-by-part evolution of the real values of both parameters (red). We then applied our algorithm as a sliding window of length 500 ms with a shift of 100 ms after every prediction. Figure <ref>a1 shows the corresponding estimations of the anomalous exponent for the CTRW case (gray trace). The estimation follows the changes of the true value well, with occasional delays and over estimations especially for large values of real α (>0.85), where our algorithm tends to classify the trajectory as BM, thus setting α to exactly 1. On average, however, the estimation error is large only for the first 500 ms after the parameter change, where the sliding window of the segment overlaps two true values (fig. <ref>a2). Outside of these 500 ms period of overlap, the MAE converges back to the value exhibited with constant α, i.e. around 0.13 for T_obs=0.5 s (compare with fig. <ref>a). The estimation appears slightly better for the estimation of D, that follows the changes of the true value quite closely (fig. <ref>b1). Like for α, the mean error on D drastically increases for the first 500 ms after the change of the true value and then returns to low values (fig. <ref>b2), reaching MAE values similar to those obtained with constant values of the true D (fig. <ref>b). §.§ Application to the analysis of experimental data The previous series of results show that our approach provides a robust and accurate solution to motion classification and inference tasks using synthetic FCS recordings. Interpreting these results as a first validation of our method, we applied it on real experimental data. To this aim, we carried out experimental FCS measurements of calibrated 40 nm fluorescent beads in water with an increased concentration of glycerol (see section <ref>). We applied our algorithm on these 1 second measurement as sliding window of length 500 ms with a shift of 100 ms after every prediction. Figure <ref>a shows the results of the classification task with an increasing concentration of glycerol. With a small concentration of added glycerol (6%), our algorithm classifies most of the motion segments as BM (70%), while a minority is classified as fBM (30%). The corresponding estimation of α evidences a mostly uni-modal distribution for 6% glycerol (Figure <ref>b, blue), with BM motion at α=1. The algorithm also predicts the presence of a residual population with anomalous motion (fBM, with α values around 0.40). The inferred diffusion coefficient D (Fig. 
<ref>c, blue) also exhibits an unimodal distribution centered around 9 µm^2/s, a value that underestimates the theoretical value of 10.4 µm^2/s for this glycerol concentration (red-grey circles). Note that we have trained our algorithm with values of D ∈ (0,10] µm^2/s, so the theoretical value of the diffusion coefficient of the beads in 6% glycerol, 10.4 µm^2/s, is slightly beyond our training range. It is therefore not surprising that our estimations lack accuracy for such low glycerol concentrations. However, with increasing glycerol values, the theoretical value of D is expected to decrease, and enter the training range (0,10]. Therefore, we expect to get better results with larger glycerol concentrations. Accordingly, the fraction of BM segments strongly increases with glycerol concentration so that the fraction of BM segments is larger than 90-95% for 13 to 31 % glycerol (Fig. <ref>a). In this range of glycerol concentrations, the inference of α remains mostly concentrated around 1 (Fig. <ref>b) and the distributions of D exhibit medians that are close to the theoretical values (Fig. <ref>c). For the largest glycerol concentration tested (e.g. 48%), the algorithm predicts a balanced mix of mostly BM and fBM together with rare CTRW motions (less than 10%). In addition to a majority Brownian population at α=1, the inference of α again predicts an anomalous minority population centered on α=0.4. The inference of D remains very good compared to its theoretical value. Therefore our algorithm classifies the bead motions as mostly BM up to 31% glycerol with inferred D values that match their theoretical values predicted from Stokes-Einstein's law. For higher concentrations, however - here 48% glycerol, the motions seem to become more complex, with a significant population of weakly anomalous (fBM) motion. In opposition to the results obtained with our method, the estimations of α and D obtained with standard non-linear fits show much broader distributions, with medians of anomalous exponents centered around 0.8 to 0.9 (Fig. <ref>b, orange). Estimations of the diffusion coefficient with this classical fitting method (Fig. <ref>c, blue) appear closer to the expected theoretical values in terms of medians. However, the distributions of the estimations of D are much broader than our ML estimations. Taken together, these data confirm that our ML methodology is more adapted than the standard non-linear fit for short and individual FCS measurements such as those used in these experiments, in particular because it is less biased towards slightly anomalous motions. We then pushed the analysis further and carried out segmentation of the FCS measurements. To this end, we projected the decision regions of our classification algorithm on a two-dimensional representation. Figure <ref> shows the results of this projection as a ternary diagram where the green region shows the zone where the algorithm decides that the motion is BM, whereas the brown and blue regions show where the decision is fBM or CTRW, respectively. These regions locate positions where the probability of following one model of motion is larger than the probability of following any of the other two motions. To locate the experimental FCS measurements in this 2d-plane, we projected a given experiment as a trajectory made of the classifications given by the successive sliding windows in this ternary coordinate system (full lines with full circles). With low glycerol concentrations (fig. 
<ref> a-d), most of the segments are located or at least end up in the BM domain. For some of the trajectories, the first segment or the first two segments can occasionally be found in the fBM domain, but in all cases, the trajectory quickly converges to the BM domain after this initial segment. This suggests that the minority fraction of segments classified as fBM in Fig.<ref>a is probably due to a lower accuracy for the classification of the very first segments in the trajectories. Inspection of the trajectories obtained with larger glycerol concentrations (48%), confirms the results of fig. <ref>a. These trajectories remain in the center of the triangle, indicating that classification is harder than the other glycerol concentrations (the difference of probabilities between two models is smaller). In addition, the trajectories are more spread out over the regions than for the other concentrations, so that a trajectory can switch classification regions several times, and not only after the first segments, as seen with 6% glycerol. This suggests that with 48% glycerol, the bead motions change and become more complex, in particular with the appearance of a marked heterogeneity of the motion conditions either along time or along the explored space. § DISCUSSION The current study is a first step to widen the applicability of Fluorescence Correlation Spectroscopy (FCS) by using machine learning for FCS recording analysis. We propose a method that is robust enough to be generic regardless of the specific technical characteristics of the setup under consideration. Depending on the laboratory or even on the specific experiment, the value of the beam waists (in x, y or z) or the total brightness can vary. Our machine-learning algorithm has been designed to accommodate a range of values for these parameters. Figures <ref>, <ref> and <ref> demonstrate the performances of our algorithm over a wide range of beam waists (from 200 to 300 nm in x,y and from 400 to 600 nm in z) on synthetic data. The limited dispersion of the resulting performance curves suggests that our method is largely independent of the exact value of the beam waists and should be applicable to a wide gamut of beam sizes. We conclude that our machine learning algorithm should be able to accommodate many experimental setups. That being said, the algorithm cannot be expected to exhibit correct performance for technical characteristics that differ significantly from the value ranges used in the training set. In such a case, the accuracy of our approach, trained on the current parameter ranges, will likely deteriorate. This is for instance the case with our bead experiments with 6% glycerol where the theoretical diffusion coefficient is above the range used for training the algorithm (Fig. <ref>c). For these cases, our algorithm delivers a deteriorated accuracy. However, it is easy to generate a new synthetic learning set with parameter ranges that are better adapted to the specificity of the setup. We provide in parallel with the current article an open-source computer code that can be directly used to generate a new learning set, and train a new version of the algorithm on this more adapted learning set (see section <ref>). The performance of our algorithm for the model classification and parameter inference tasks on FCS recording can be compared to the algorithms developed for the same tasks on single-particle tracking. To this end, the benchmark provided by the anomalous diffusion (AnDi) challenge is especially useful <cit.>. 
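A possible implementation of this sliding-window classification and of the ternary projection is sketched below for illustration; the helper names are hypothetical, clf stands for a trained classifier exposing predict_proba, g_windows for the auto-correlation features of the successive 500 ms windows, and the assignment of motion models to triangle corners is an arbitrary choice that may differ from the orientation used in the figure.

import numpy as np

def ternary_xy(probs):
    # Map probability triplets (rows summing to 1) to 2-d ternary coordinates.
    corners = np.array([[0.0, 0.0],              # BM corner (assumed placement)
                        [1.0, 0.0],              # fBM corner
                        [0.5, np.sqrt(3) / 2]])  # CTRW corner
    return np.asarray(probs) @ corners

def sliding_window_probs(clf, g_windows, setup_features):
    # g_windows: array (n_windows, n_lags) of G_hat computed on 500 ms windows shifted by 100 ms;
    # setup_features: (omega_xy, omega_z, T_obs), appended to every window as in the Methods.
    n = g_windows.shape[0]
    X = np.hstack([g_windows, np.tile(setup_features, (n, 1))])
    return clf.predict_proba(X)   # shape (n_windows, 3): (P_BM, P_fBM, P_CTRW)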
This collaborative open community competition has produced a fair benchmarking of the performance of more than 10 state-of-the-art algorithms on synthetic single-particle tracks (SPT). The proposed tasks included a model classification task (among 5 possible anomalous diffusion models), an inference task (anomalous diffusion exponent α) and a segmentation task in which the model class is altered along the trajectories. Because the data used in this challenge were individual single-particle trajectories, the performance of the algorithms was quantified as a function of the most critical parameter, the length L of the trajectories. It is not possible to directly use the same reference in FCS data, which do not explicitly feature trajectory length. However, since the average length of the imaged trajectories in FCS is expected to increase with the observation time T_obs, we use T_obs below as an FCS proxy for L in SPT. For the classification task, the best-in-class SPT algorithms exhibit F_1 scores ranging from 0.6 (L=40) to 0.9 (L>500), whereas for our FCS-based algorithm, the F_1 scores for classification are larger than 0.88 for T_obs>1 second (fig. <ref>d). Regarding the inference task, the best SPT methods provided MAE values for α ranging from 0.35 (L=40) down to ≈ 0.10 (L>500). For comparison, even if we exclude the case of incorrectly classified BM (<ref>a, brown), the MAE of our FCS method for the estimation of α varied from 0.14 (T_obs=0.25 s, fBM) to approximately 0.11 (T_obs=2 s, CTRW). We conclude from these comparisons that our FCS-based machine-learning approach exhibits performance similar to the best-in-class SPT algorithms of the AnDi challenge. Our method may even be a bit better for short T_obs than SPT methods on short L. However, the limit of these comparisons is that the tasks are not entirely similar: we sampled α∈ (0,1], compared to [0.05,2] in the AnDi challenge, and our set of possible motions takes into account 3 models, instead of 5 models in the AnDi challenge. These differences preclude a precise one-to-one comparison, so we only retain the general conclusion that our method on FCS data yields an accuracy that compares favorably to the best-in-class methods for SPT data. As a validation of our method, we applied it to experimental FCS measurements of calibrated fluorescent beads in solutions with an increasing glycerol concentration. For all the studied glycerol concentrations but the largest one (48%), our algorithm predicts that the bead motion essentially remains Brownian, with a diffusion coefficient that decreases as the glycerol concentration increases. This is in agreement with the behavior expected from the diffusion of spherical molecules at very low Reynolds numbers in viscous fluids or from point tracers among diffusing mobile obstacles (see e.g., <cit.>). Our estimates for the diffusion coefficient D also agree with the values one would expect from the Stokes-Einstein law. However, with very large glycerol concentrations (48%), our algorithm reports a change in the bead motions, which start to depart from pure Brownian motion. Further work is required to confirm the significance of these observations, but we hope that the method introduced in the present article will be helpful to this end. 
§ ONLINE METHODS §.§ Machine learning methods The goal of our machine learning approach is to (i) learn to predict the class of motion M of the random walkers among the set of possible motions ℳ={BM, fBM,CTRW} (classification task), and (ii) estimate the value of the parameters θ_M of this motion, i.e. D for BM and α for fBM and CTRW. §.§.§ Auto-correlation functions Our analysis starts with the collection of photon emission times, {Γ(t), t≤ T_obs} that constitutes the raw data of an FCS experiment (see sec. <ref>). T_obs is the total measurement duration. Let E = (𝒮,ℱ,ℙ) be a probability space with sample space 𝒮, event space ℱ and probability function ℙ. In case of a stationary process (true for BM and a fBM), Γ is L^2([0,T_obs],E), in the sense that ||Γ||_L^2 = ∫_0^T_obs. 𝔼[|Γ(s)|^2]. ds < +∞. In this case, Γ admits an auto-correlation function <cit.> denoted { G(τ),τ∈ [0,T_obs-τ] } that depends on the auto-correlation lag τ but not on time t: G(τ) = ⟨Γ(t)Γ(t+τ)⟩ - ⟨Γ(t)⟩⟨Γ(t+τ)⟩/√(⟨Γ^2(t)⟩⟨Γ^2(t+τ)⟩) where ⟨·⟩ denotes ensemble averaging. To introduce time binning, we first define a few notations: * Number of photons emitted between t_a and t_b: I[t_a,t_b] = ∫_t_a^t_b.Γ(s) ds . * Bin interval: Δτ = T_obs/L , where L the length of the binned vector * Binned value of I: (I[i] )_i ∈ [0,L-1] = (I[i Δτ,(i+1) Δτ] )_i ∈ [0,L-1] Using these notations, we estimate the ensemble-average ⟨Γ(t)⟩ of eq. <ref> by its time-average I = 1/L∑_i=0^L-1 I[i] and its second moment ⟨Γ^2(t)⟩ by I^2, since ⟨Γ^2(t)⟩ = ⟨Γ(t)⟩^2 for a Poisson process. This leads to an approximation of G by its time-averaged auto-correlation estimator Ĝ <cit.>: Ĝ(τ) = 1/L-τ/Δτ∑_i=0^L-1-τ/Δτ. I[i] I[i+τ/Δτ] .-I̅^2/I̅^2 In case Γ is not stationary but still L^2([0,T_obs],E), i.e. for the CTRW in our case, the auto-correlation function eq. (<ref>) is not defined, but it is still possible to construct a partial auto-correlation function <cit.> for every associated t ∈ [0,T_obs], denoted as { G_t(τ),t,τ∈ [0,T_obs-τ] }. The partial auto-correlation function of such a non-stationary process is a quantity characterizing the autocorrelation function of the stationary process associated to the non-stationary process for every t, defined by : G_t (τ) = ⟨Γ(t)Γ(t+τ)⟩ - ⟨Γ(t)⟩⟨Γ(t+τ)⟩/√(⟨Γ^2(t)⟩⟨Γ^2(t+τ)⟩) , In theory, the partial auto-correlation function of a non-stationary process can not be estimated by time averaging, but only by ensemble averaging <cit.>. This is not suitable in our case since we want to produce estimations for each trajectory. However, we still used the time-averaging of eq. <ref> as a feature to quantify the auto-correlation of non-stationary processes based on the ansatz that this feature is still good enough for machine learning algorithms. This ansatz originates from the hypothesis that the process Γ exhibits periodicity at long times, which would mean that the mean on t of its partial auto-correlation function m(τ) =lim_T_obs→∞1/T_obs-τ∫_0^T_obs-τ. G_s(τ) . ds exists and is finite. In this case, the quantity Ĝ(τ) from eq. (<ref>) is also a good estimator for non-stationary processes. As a final step, we normalize the feature Ĝ(i Δ t) obtained from eq. (<ref>) by dividing it by the mean of its first five elements and reduce dimensionnality by keeping only the first K<L/2 values of the sequence, using log sampling of the delay τ. §.§.§ Machine learning methods Learning set. 
A central concern in this work is that our machine learning methods must be robust to the variety of setups used in experimental labs and, in particular, must be able to generalize to a range of beam waists ω_xy and ω_z. To this aim we generated a learning set comprising more than 2.5 million simulated FCS experiments of various durations and beam waists, in the following way: * We first set the value of the motion parameters with uniform sampling: α∼𝒰((0,1)) and D ∼𝒰((0,10]) * Using the algorithms described in section <ref>, we then generated three sets of simulated trajectories using the sampled α and D: one with fBM motion, one with CTRW motion and one with BM motion (for BM, we set α=1). * For each resulting set of trajectories, we sampled the corresponding set of photon emission times for 3 seconds, using the thinning algorithm of section <ref>. The process of photon time sampling was repeated with all possible pairs of beam waists among ω_xy∈{200, 225,250,275,300} nm and ω_z ∈{400,500,600} nm, resulting in 15 FCS simulations per set of trajectories. * In order to analyze the performance of our machine learning algorithms depending on the duration of the FCS experiment, every 3 s FCS simulation described above was split into non-overlapping segments of duration T_obs∈{0.1, 0.25, 0.5, 0.75, 1,1.25,1.5,2} seconds, and every one of the 60 resulting segments was used in the learning set. With this procedure, the number of examples in the learning set was larger for short T_obs than for longer ones (e.g., 10 times more examples with T_obs=0.1 s compared to T_obs=1.0 s). This allowed us to invest more learning effort on shorter observation times than on longer ones. * Finally, we computed the estimator of the auto-correlation Ĝ from eq. (<ref>) for each of the simulation fragments above. We repeated this process 945 times (i.e., 945 samplings of the motion parameters), yielding a learning set of 2,551,500 simulated FCS experiments in total. This learning set was then split into a test set (315 sampled parameter values, i.e. 850,500 simulations, one third of the total) and a training set (the rest of the simulations) using uniform random sampling. Learning algorithm. The initial feature associated with each simulated FCS experiment is a vector of size 1,003, comprising the 1,000 log-sampled values of Ĝ, plus the values of ω_xy, ω_z and T_obs used for this simulation. We used these features to train a classifier C with the Histogram Gradient Boosting Classifier of scikit-learn<cit.> (sklearn.ensemble.HistGradientBoostingClassifier) with default parameters. The classifier yields the predicted model probabilities for the simulation: C(G̅,T_obs, ω_xy, ω_z) = (ℙ_BM,ℙ_fBM,ℙ_CTRW). In a second phase, we trained regressors to determine α and D (Histogram Gradient Boosting Regressor of scikit-learn with default parameters), individually for each pair of ω_xy and ω_z and each candidate model. This resulted in 5× 3 × 3 = 45 regressors, (R_ω_xy,ω_z,M). The input to these regressors is also the vector of size 1,003: (G̅,T_obs, ω_xy, ω_z). For example R_225,600,fBM is trained on data with beam waists ω_xy=225 nm and ω_z=600 nm and with diffusion model fBM. 
These regressors are trained to predict α and D: R_ω_xy,ω_z,M(G̅,T_obs, ω_xy, ω_z)= α̂ if M ∈{fBM,CTRW} D̂ if M = BM The final stage consolidates the classification and the regression tasks above using a last Histogram Gradient Boosting Regressor that takes into account the regression estimation for all beam waists and model classes. This final regressor R̅ learns to predict α and D taking as input the output of the above classifier C and the outputs of the 45 corresponding regressors (vector of size 3+45+3=51): (R_ω_xy,ω_z,M): R̅(C(G̅,T_obs, ω_xy, ω_z),(R_ω_xy,ω_z,M(G̅,T_obs, ω_xy, ω_z) ),T_obs, ω_xy, ω_z) = α̂ if M ∈{fBM,CTRW} D̂ if M = BM For inference or testing, we determine the model class according to the maximal value of ℙ_M estimated by the classifier C and the estimation of the parameter value (α̂ or D̂) according to the prediction of the final regressor R̅. §.§.§ Code availability The entirety of the code used in the present article is available as an open source framework at <https://gitlab.inria.fr/nquilbie/mlfcs>. In particular, the repository offers the possibility to download the trained algorithm for use on a local computer. We also provide the code needed to generate a personalised synthetic learning set and to train the algorithm on it. The repository also proposes a simple interface based on a Jupyter notebook that allows the user to upload their own data (either as direct FCS recordings or the derived auto-correlation functions) and use our trained algorithm for the classification and inference tasks. The gitlab repository comes with a medium-size test set of synthetic trajectories, that can be used to test the performance of the algorithm. The whole training set used here (more than 2.5 million synthetic trajectories), or the experimental FCS measurements of the beads represent a considerable volume of data. The corresponding files are too large to be made available on a open access server, but they can be obtained from the authors upon request. §.§ Experimental data To evaluate our estimation method, we tested it on experimental data. We carried out FCS measurements with calibrated fluorescent beads. FCS measurements were performed using a confocal microscope (Nikon A1R) with a 488 nm diode laser (LBX-488, Oxxius). Experiments were conducted with polystyrene nanobeads (Fluoro-Max G40, Thermo Fisher) with an average diameter of 40 nm and diluted in a water-glycerol mixture to modulate the viscosity and, consequently, the diffusion coefficient. The sample was placed in a glass bottom dish (0.16-0.19 mm, P35G-1.5-20-C MatTek) and FCS measurements were acquired using a 40x NA = 1.25 water immersion objectif (CFI Apo LWD Lambda S). The beam waists were determined as ω_xy=214 nm and ω_z=522 nm. The output signal from the sample was collected with a photon counting module (SPCM-CD, Excelitas), and time tagging was carried out by a time-correlated single photon counting module (HydraHarp 400, PicoQuant). Bead solutions were diluted to reach a concentration of 10^11 particles/mL, resulting in approximately 2 individual beads on average within the focal volume. Assuming that the beads in glycerol solutions are spherical objects and the flows are dominated by the viscous effect, the Reynolds numbers is very small (Re <<1). Then, the theoretical value of their diffusion coefficient can be estimated using the Stokes-Einstein formula D=kT /(6 π·η_0 · r_beads) where η_0 is the viscosity of the glycerol solution and r_beads the bead radius. 
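As an illustration of this estimate, the short sketch below evaluates the Stokes-Einstein prediction for the 40 nm beads; the temperature and the viscosity value used in the example are assumptions to be replaced by the experimental ones.

from math import pi

K_B = 1.380649e-23      # J/K
T = 293.15              # K, assumed room temperature
R_BEAD = 20e-9          # m, bead radius (40 nm diameter)

def stokes_einstein_D(eta):
    # Diffusion coefficient in um^2/s for a sphere of radius R_BEAD in a fluid of viscosity eta (Pa.s)
    d_m2_per_s = K_B * T / (6 * pi * eta * R_BEAD)
    return d_m2_per_s * 1e12   # m^2/s -> um^2/s

# Example: pure water at 20 C (eta ~ 1.0e-3 Pa.s) gives D ~ 10.7 um^2/s, slightly above
# the 10.4 um^2/s quoted for the somewhat more viscous 6% glycerol mixture.
print(stokes_einstein_D(1.0e-3))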
We estimated the dependence of the viscosity η_0 to glycerol concentration according to Ref <cit.>. Using r_beads=20 nm in the Stokes-Einstein formula then yields a theoretical estimate for the bead diffusion coefficient. § SUPPLEMENTARY INFORMATION §.§ Generation of synthetic FCS data §.§.§ Models of random motion This study focuses on models for anomalous diffusion, i.e., random motions for which the mean squared displacement ⟨ r^2(t) ⟩ scales non-linearly with time: ⟨ r^2(t) ⟩ = 2dD t^α where r(t) is the position of ta random walker at time t, ⟨·⟩ denotes ensemble averaging (averaging over a population of walkers at time t), α∈ (0,1] is the anomalous coefficient, D ∈ℝ_+ the diffusion coefficient and d the dimension of the space (here d=3). The literature refers to motions with α < 1 as “subdiffusive” vs “superdiffusive” for α > 1 (α = 1 being standard BM) <cit.>. We note W_i the waiting time between the i-1^th and the i^th jumps of the random walker and consider (W_i )_( i≥ 1 ) the associated i.i.d family of random variables of density λ. We associate it with the jump time J_i of the i^th jump: J_i = ∑_n=1^i. W_n . with i≥ 1 Let Δ_i ∈ℝ^d be the vector in space representing the i^th displacement in space. We note (Δ_i )_( i≥ 1 ) the corresponding family of random variables, of law Δ X_i. The position of the particle in the d-dimensional-space at time t, r(t), with initial position r_0∈ℝ^d is r(t) = r_0 + ∑_i ≥ 0. Δ_i 1_{ J_i < t }. Consider a walker located at position x at time s, that has arrived there at time J_i=t-s. With these notations, the next jump of the walker will happen at time J_i+1 = t - s + W_i+1, and its new position will be x + Δ_i+1. In the current study, we focus on three motion models, that we define below for the spatial dimension d=1: * Brownian motion (BM) <cit.> is a stationary process with independent Gaussian increments: λ = δ_dt, with dt the simulation time step. For BM, (Δ_i )= (𝒩(0, √(2 D dt))) is an i.i.d. Gaussian random variable family ∀ i, α = 1. * Fractional Brownian motion (fBM) <cit.>, which is also a stationary Gaussian process but different from white noise due to the temporal auto-correlation of its increments: λ = δ_dt, 𝔼[Δ_iΔ_j] = D( |i dt|^α + |j dt|^α + |i dt - jdt|^α), α < 1. * Continuous time random walk (CTRW) <cit.>, which also has Gaussian distributed jumps, but which is not a stationary process if the distribution of its residence times is heavy-tailed, for instance according to a power-law: λ(t) =α/ϵ(ϵ/ϵ+t)^α + 1, (Δ_i) = 𝒩(0, √(2 D dt)), ∀ i, α < 1. We used ϵ = 10^-7 throughout this work. In this study, random walks were simulated in d=3 space dimensions by simulating a d=1 independent random walk for each of the 3 dimensions. The random walks were simulated in a sphere Ω of diameter {Ω_x,Ω_y,Ω_z} centered on (x,y,z)=(0,0,0). Their initial location was uniformly distributed in Ω. To keep a constant density of walkers in Ω, some form of boundary condition has to be imposed at the surface of the sphere. We rejected reflective boundaries because they induce artificial correlations that strongly impact the auto-correlation signal. Instead, we used the following condition: whenever a walker leaves the sphere, we remove it from the simulation and replace it by a new walker, the initial location of which is chosen at random over the surface of the sphere. §.§.§ Modelling of FCS measurements We simulated an FCS illumination volume centered at (0,0,0), the center of the spherical domain Ω in which the random walks occur. 
The point spread function (PSF) of the microscope is modelled as a 3d Gaussian with beam waists ω_i << Ω_i, ∀ i∈{x,y,z} <cit.>. In agreement with the experimental situation, we considered identical beam waists in the x and y directions, i.e. ω_x=ω_y≡ω_xy. The illumination intensity Φ is thus given by Φ(x,y,z) =Φ_0 e^-2 (x^2+y^2/ω_xy^2+z^2/ω_z^2), where Φ_0 controls the illumination intensity. The probability that a particle located at (x,y,z) emits a photon is modelled as a Poisson process with a rate proportional to the value of the illumination at this position <cit.>. Since the particle location changes according to the random walk, we model photon emission by a single walking particle as a non-homogeneous Poisson process <cit.>, with time-dependent rate μ(t) = Φ(r(t)). If γ = {γ(t), t ≥ 0 } is the process characterizing the times of photon emission by a single molecule, one has ( γ(t) )_t ≥ 0 = ∂𝒫( ( μ(t) )_t ≥ 0), where ∂𝒫 is the process of the jump times of a Poisson process, i.e. if ( Θ_i )_i≥ 0 are the jump times of 𝒫, then ∂𝒫 = ∑_i ≥ 0δ_Θ_i. Now, to retrieve Γ = {Γ(t) , t≥ 0 }, the counting process characterizing the emission times of photons from an FCS experiment with N molecules, we sum the N processes characterizing each molecule: Γ(t) = ∑_n=1^N γ_n(t). By the additive property of Poisson processes (∂𝒫(σ) +∂𝒫(ν) = ∂𝒫(σ+ν) <cit.>), the intensity of the system can be modelled as the sum of the intensities of the N processes: μ̅(t) = ∑_n=1^N μ_n(t) and Γ = ∂𝒫( ( μ̅ (t) )_t ≥ 0). We simulate the process of photon emission by all the N molecules, Γ, by thinning <cit.>. We suppose that its rate μ̅(t) is bounded for t ∈ [0,T_obs] by its maximal value ||μ̅||_∞ < +∞. We first sample the photon emission times that would be expected from a Poisson process with constant (homogeneous) rate ||μ̅||_∞: Γ̃ = ∂𝒫( ( ||μ̅||_∞)_t ≥ 0). We refer to those as candidate emission times T̃_i: Γ̃ = ∑_i ≥ 0 δ_T̃_i ∼∂𝒫( ||μ̅||_∞). We then reject some of the candidate emission times T̃_i to adapt them to μ̅(t): we associate with every candidate emission time T̃_i a uniformly distributed random variable U_i ∼𝒰(0,||μ̅||_∞) and reject every T̃_i for which U_i > μ̅(T̃_i). The emission times that were not rejected thus define the photon emission times of our initial process: Γ = ∑_i ≥ 0 δ_T̃_i 1_{U_i ≤μ̅(T̃_i)} ∼∂𝒫( ( μ̅ (t) )_t ≥ 0). The output of the simulation is the resulting collection of the times the N random walkers emitted photons, ( T_i )_i≥ 0. Note that the above process is currently in continuous time, but it will be binned later during pre-processing. §.§.§ FCS simulation parameters For the simulations of the current paper, we used the following parameter values: * Size of the spatial domain 𝒟 (μm): (𝒟_x,𝒟_y,𝒟_z)= (1.05,1.05,2.4) * Time step dt= 1 μs * Beam waist (μm): ω_xy∈{0.200,0.225,0.250,0.275,0.300}, ω_z ∈{0.500, 0.600, 0.700}. * Mean number of random walkers in the illuminated volume v(=4/3 πω_xy^2 ω_z): n=5 * Maximal illumination Φ_0=6×10^4
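The following self-contained sketch summarizes the simulation pipeline of this section under simplifying assumptions: a single Brownian walker, no re-injection at the domain boundary, a piecewise-constant emission rate over one time step, and Φ_0 interpreted as a rate in photons per second. It is meant to illustrate the thinning procedure and the binned estimator Ĝ(τ) of eq. (<ref>), not to reproduce the code used to build the actual learning set.

import numpy as np

rng = np.random.default_rng(0)

def simulate_bm(n_steps, dt, D, x0):
    # 3-d Brownian trajectory in um, with D in um^2/s and dt in s
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_steps, 3))
    return x0 + np.cumsum(steps, axis=0)

def illumination(pos, phi0, w_xy, w_z):
    # Gaussian PSF emission rate (photons/s) at positions pos of shape (n, 3), in um
    x, y, z = pos[:, 0], pos[:, 1], pos[:, 2]
    return phi0 * np.exp(-2 * ((x**2 + y**2) / w_xy**2 + z**2 / w_z**2))

def photon_times_by_thinning(rate, dt):
    # Thinning of a non-homogeneous Poisson process whose rate is piecewise constant on the time step
    rate_max = rate.max()
    t_end = rate.size * dt
    n_cand = rng.poisson(rate_max * t_end)
    t_cand = np.sort(rng.uniform(0.0, t_end, n_cand))
    idx = np.minimum((t_cand / dt).astype(int), rate.size - 1)
    keep = rng.uniform(0.0, rate_max, n_cand) <= rate[idx]
    return t_cand[keep]

def autocorr_estimator(photon_times, t_obs, n_bins, max_lag_bins):
    # Binned estimator G_hat(tau) of eq. (5), for lags tau = k * (t_obs / n_bins)
    counts, _ = np.histogram(photon_times, bins=n_bins, range=(0.0, t_obs))
    i_mean = counts.mean()
    g = np.empty(max_lag_bins)
    for k in range(max_lag_bins):
        prod = counts[: n_bins - k] * counts[k:]
        g[k] = (prod.mean() - i_mean**2) / i_mean**2
    return g

# Example: one walker, 0.5 s recording at dt = 1 us, 100 us bins, lags up to 100 ms
dt, t_obs = 1e-6, 0.5
traj = simulate_bm(round(t_obs / dt), dt, D=1.0, x0=np.zeros(3))
rate = illumination(traj, phi0=6e4, w_xy=0.25, w_z=0.6)
g_hat = autocorr_estimator(photon_times_by_thinning(rate, dt), t_obs, n_bins=5000, max_lag_bins=1000)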
http://arxiv.org/abs/2407.12544v1
20240717132735
Two-field inflation from one complex scalar with symmetry breaking
[ "Yoshihiko Abe", "Toshimasa Ito", "Koichi Yoshioka" ]
hep-ph
[ "hep-ph" ]
KUNS-3009 Two-field inflation from one complex scalar with symmetry breaking Yoshihiko Abe^1[yabe3@wisc.edu], Toshimasa Ito^2[toshimasa.i@gauge.scphys.kyoto-u.ac.jp], and Koichi Yoshioka^2[yoshioka@gauge.scphys.kyoto-u.ac.jp] ^1Department of Physics, University of Wisconsin-Madison, Madison, WI 53706, USA ^2Department of Physics, Kyoto University, Kyoto 606-8502, Japan We study two-field inflation derived from a single complex scalar with a nonzero vacuum expectation value. Inflation is characterized by two parameters, the vacuum expectation value and the mass parameter of the phase mode, which give rise to a variety of inflationary structures. We categorize the potential trajectories of the two inflaton fields and determine the parameter regions consistent with current observational data. Furthermore, we examine the reheating process through the inflaton decay to right-handed neutrinos and the subsequent lepton number generation within these parameter regions. Our finding suggests that the existence of multiple fields can significantly alter the possibilities for inflaton oscillations and reheating. § INTRODUCTION Inflation in the early universe gives a natural solution to the horizon and the flatness problems of the big-bang theory, and generates the primordial perturbations <cit.>. The inflationary expansion driven by a single scalar field has been extensively studied. Further, multiple fields inflation models have also been studied with field theoretical <cit.> and other motivations such as supergravity and string theory <cit.>. We consider a two-field inflation model from a single complex scalar field with a non-minimal coupling to gravity, which is assumed to develop a non-vanishing vacuum expectation value (VEV). If we consider only one of the radial and phase (Nambu-Goldstone) components to play the role of the inflaton, the inflationary expansion is realized by the similar dynamics to the new inflation <cit.>, the chaotic inflation <cit.>, the natural inflation <cit.>, the Higgs inflation <cit.>, etc. The VEV and the soft-breaking mass parameter of the phase mode determine the preferred inflation scenario. In this paper, we study the details of two-field inflation from a single complex scalar with symmetry breaking. The inflation trajectories are classified according to the field component that drives inflation, with the VEV and soft-breaking mass parameter being varied. Subsequently, we show the parameter space of our inflation model that is consistent with the current cosmological observations. The classification of trajectories enables us to comprehend the typical inflation features and predictions. It is found that the Higgs-like inflation driven by the radial mode is favored in the wide region, while various other types can realize the successful inflation. Furthermore, we investigate the possibility of reheating the universe through the decay to right-handed (RH) neutrinos from a complex inflaton field with the non-minimal coupling. This paper is organized as follows. In Section <ref>, we introduce our model and classify the inflation trajectories into typical three categories. Section <ref> shows the parameter space consistent with the experimental constraints from the current cosmological observations. In Section <ref>, we discuss the reheating via the inflaton decay to RH neutrinos and evaluate the reheating temperature consistent with the thermal leptogenesis. Section <ref> is devoted to our conclusions. 
The details of the notation and definition of parameters are summarized in Appendices. § MOTIONS OF INFLATON WITH SYMMETRY BREAKING We first introduce our two-field inflation model and analyze the inflation dynamics by solving the equations of motion (EOMs). In this paper, we set the reduced Planck scale M_P ≈ 2.4 × 10^18 GeV to be unity unless otherwise stated. We use the Minkowski metric convention η_μν = (+1, -1, -1, -1). §.§ Complex scalar with non-minimal coupling We consider a complex scalar field Φ with a non-minimal coupling to gravity. In this paper, we start our discussion from the following action: S=∫ d^4x√(-g_J)[-1/2Ω^2(Φ)R_J+g^μν_J∂_μΦ∂_νΦ^*-U(Φ,Φ^*)], where we introduce the non-minimal coupling ξ, Ω^2(Φ)=1+2ξ|Φ|^2. R_J is the Ricci scalar obtained from the Jordan frame metric g_Jμν. Let the potential U(Φ,Φ^*) be given by U(Φ,Φ^*)=λ/2|Φ|^4-μ_Φ^2/2|Φ|^2-m_χ^2/4(Φ^2+Φ^*2)+U_0, as considered in Ref. <cit.> in the context of dark matter. The third term explicitly breaks the U(1) phase rotation symmetry of Φ to Z_2 and generates the pseudo Nambu-Goldstone boson (pNGB) mass. The parameter m_χ (≥0) is called the soft-breaking mass. For a transformation Φ↦ e^i π /2Φ, the action is invariant under a sign flip of m_χ^2, m_χ^2 ↦ - m_χ^2. That implies we can impose m_χ^2 ≥ 0 consistently in the model. The ultraviolet completeness of this pNGB mass term is not specified in this paper, but its origin is discussed in several works <cit.>. We suppose that the scalar field Φ develops a non-vanishing VEV v_ϕ. The stationary condition allows us to derive the relations among μ_Φ^2, U_0, and v_ϕ, demanding that U=0 at the vacuum. If we parameterize Φ in the non-linear representation as Φ = ϕ/√(2) e^i χ, the scalar potential for the fields ϕ and χ is given by U(ϕ,χ)=λ/8(ϕ^2-v_ϕ^2)^2+m_χ^2/4ϕ^2(1-cos2χ). We note that χ is a dimensionless variable. In order to move to the Einstein frame from the Jordan frame <cit.>, we consider the field-dependent Weyl rescaling g_μν = Ω^2 g_Jμν = (1 + ξϕ^2) g_Jμν, where g_μν denotes the metric in the Einstein frame. This redefinition results in the following action S = ∫ d^4x √(-g)[ - R/2 + (1 + ξϕ^2 + 6 ξ^2 ϕ^2)/2(1 + ξϕ^2)^2 ∂_μϕ∂^μϕ + ϕ^2/2(1 + ξϕ^2) ∂_μχ∂^μχ - V(ϕ, χ) ], where R represents the Ricci scalar of g_μν, and the inflaton potential is given by V(ϕ, χ) ≡ U(ϕ, χ)/(1 + ξϕ^2)^2 = [λ (ϕ^2 - v_ϕ^2)^2 + 2m_χ^2 ϕ^2 ( 1 - cos 2χ)]/8(1 + ξϕ^2)^2. In the following, we refer to the first term in (<ref>) as the Higgs potential and the second term as the pNGB potential, respectively. Note that the pNGB potential differs from the usual natural inflation potential in Refs. <cit.> in that the overall coefficient of the potential depends on ϕ, which naturally arises from the pNGB soft-breaking term in our case. The field space metric K_ab(φ) for real scalar fields φ is generally defined by the kinetic term as L_kin. = 1/2 K_ab(φ) ∂_μφ^a ∂_νφ^b g^μν, where φ is the multiplet of scalar fields and the indices a and b run over all components. In the present model, φ^1 = ϕ and φ^2 = χ, and their metric reads from (<ref>) as K_ab = diag( (1+ξϕ^2+6ξ^2ϕ^2)/(1+ξϕ^2)^2, ϕ^2/(1+ξϕ^2) ). 
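For later convenience, the Einstein-frame potential and field-space metric above can be transcribed numerically as in the following sketch (reduced Planck units; this is an illustration, not part of the analysis code, and the parameter values are to be supplied by the user).

import numpy as np

def V(phi, chi, lam, v_phi, m_chi, xi):
    # Einstein-frame inflaton potential V(phi, chi)
    num = lam * (phi**2 - v_phi**2)**2 + 2 * m_chi**2 * phi**2 * (1 - np.cos(2 * chi))
    return num / (8 * (1 + xi * phi**2)**2)

def K_metric(phi, xi):
    # Field-space metric K_ab = diag(K_11, K_22)
    k11 = (1 + xi * phi**2 + 6 * xi**2 * phi**2) / (1 + xi * phi**2)**2
    k22 = phi**2 / (1 + xi * phi**2)
    return np.diag([k11, k22])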
§.§ Slow-roll and slow-turn approximation §.§.§ Equations of motion and inflation parameters The equations of motion for the background fields are given by d^2ϕ/dN^2+γ^1_11(dϕ/dN)^2+γ^1_22(dχ/dN)^2+(3-ε)(dϕ/dN+K^11∂/∂ϕln V) =0, d^2χ/dN^2+2γ^2_12dϕ/dNdχ/dN+(3-ε)(dχ/dN+K^22∂/∂χln V) =0, where the e-folding N is defined by d N = H dt with H being the Hubble parameter, and γ^a_bc is the Levi-Civita connection derived from the field space metric K_ab. The slow-roll (SR) parameter ε is defined by ε - Ḣ/H^2 = 1/2 K_11(dϕ/dN)^2 + 1/2 K_22(dχ/dN)^2. For the detail, see Appendix <ref>. In order to realize the SR inflation, the first SR condition ε≪ 1, is imposed <cit.>. This condition also appears in the case of single-field inflation. The acceleration of the inflaton is characterized by the parameters η^1 = d^2ϕ/dN^2+γ^1_11(dϕ/dN)^2+γ^1_22(dχ/dN)^2, η^2 =d^2χ/dN^2+2γ^2_12dϕ/dNdχ/dN, in the ϕ- and χ-directions, respectively. As there are two fields, it is beneficial to decompose them into the direction of the inflation trajectory and the direction of rotation <cit.>. The former is called as the parallel direction and its unit vector is ê_∥. The latter is the vertical direction to ê_∥ and its unit vector is ê_⊥. Using these unit vectors, the conditions of the acceleration for successful inflation are given by | η_∥/v| | K_abê^a_∥η^b/v| ≪ 1, η_⊥/v K_abê^a_⊥η^b/v≪ 1, where v = √(2 ε) is the speed of field vector <cit.>. Eq. (<ref>) is the second SR condition and Eq. (<ref>) is called the slow-turn (ST) condition. The ST condition is one of the indication of small vertical acceleration. Once the SR conditions (<ref>), (<ref>) and the ST condition (<ref>) are satisfied, the SR and ST parameters are well approximately expressed with the inflaton potential V <cit.>, ε ≈1/2 K^ab_a ln V _b ln V, η_∥/v ≈ - ê_∥^a (_a _b ln V-γ^c_ab_cln V)ê^b_∥, η_⊥/v ≈ - ê_∥^a (_a _b ln V-γ^c_ab_cln V)ê^b_⊥. The unit vector in the parallel and vertical directions are expressed as ê_∥^a = 1/v dφ^a/dN≈-K^ab_b ln V/√(K^cd_c ln V _d ln V), ê_⊥^a ≈±(K^ab-ê_∥^aê_∥^b) (_b _c ln V-γ^d_bc_dln V)ê_∥^c/√((K^ab-ê_∥^aê_∥^b) (_a _c ln V-γ^e_ac_eln V) (_b _d ln V-γ^f_bd_fln V)ê_∥^c ê_∥^d). The direction of ê_⊥ is chosen so that η_⊥≥ 0. We can use the EOMs to show that the above expression of the vertical unit vector is equivalent to K^abJ_bcê_∥^c with some asymmetric J_ab, which is another expression according to the definition of perpendicular to ê_∥. In the case of two-dimensional field space, a simple solution is J_ab=√( K)/vϵ_ab, where ϵ_ab is the totally asymmetric tensor. Note that all of the above quantities are written in terms of fields and their derivatives through the potential V and the field space metric K_ab, which values are obtained by solving the equations of motion (<ref>) and (<ref>). §.§.§ Behavior of slow-roll parameters We start our discussion from the parameter dependence of the SR parameters ε and |η_∥/v|. Figure <ref> shows the dependence of the soft-breaking mass m_χ and the field value ϕ. In the left panels, the m_χ dependence is given with λ=1, v_ϕ=10^-4, ϕ=1, and χ=π/3. The blue, orange, and green lines represent the results obtained for ξ = 10, 10^3, and 10^5, respectively. The value of VEV v_ϕ is motivated from the energy scale of grand unification or lepton number violation. When m_χ is sufficiently smaller than the Planck scale, the inflation is primarily driven by ϕ and effectively reduces to the single-field inflation, analogous to the Higgs inflation. 
In this case, the SR parameters are approximately given by ε ≈ 8(1 + ξ v_ϕ^2)^2 ϕ^2/(ϕ^2 - v_ϕ^2)^2(1+ ξϕ^2 + 6 ξ^2 ϕ^2), η_∥/v ≈ 4(1 + ξ v_ϕ^2)(1 + ξϕ^2) [ v_ϕ^2 + ϕ^2 ( 1 + 2 ξϕ^2 + 12 ξ^2 ϕ^2) ]/(ϕ^2 - v_ϕ^2)^2 ( 1 + ξϕ^2 + 6 ξ^2 ϕ^2)^2. These expressions do not involve the soft-breaking mass. When m_χ is sufficiently large, the pNGB potential tends to dominate the inflation. Consequently, the SR parameters again do not involve m_χ, since m_χ becomes an overall parameter of the inflaton potential. We note that a larger ξ makes the inflaton potential flatter, which results in smaller SR parameters. For the detailed expressions of the SR parameters, see Appendix <ref>. The right panels of Figure <ref> show the ϕ dependence of the SR parameters with λ=1, m_χ=10^-4, χ=π/3 and ξ=10^5 (ξ=10^-2) for v_ϕ=10^-4 (v_ϕ=20). This large VEV is motivated by the fact that the natural inflation requires a VEV beyond the Planck scale <cit.>. A large value of ϕ (> v_ϕ) results in a reduction of the SR parameters due to the proportionality ε∝ϕ^-4 and η_∥/v ∝ϕ^-2, as derived from the approximate formulae presented in Appendix <ref>. These behaviors can be attributed to the dominance of the Higgs potential in the large ϕ region. In contrast, for a small value of ϕ (< v_ϕ), the SR parameters are expressed as ε ≈ 8 ϕ^2/λ^2 v_ϕ^8 (A^2 + m_χ^4 sin^2 2χ), η_∥/v ≈ 4/λ v_ϕ^4 [A^3 + 3A m_χ^4 sin^2 2χ - 2 m_χ^6 cos 2χ sin^2 2χ]/[A^2 + m_χ^4 sin^2 2χ], where A = λ v_ϕ^2 (1 + ξ v_ϕ^2) - 2m_χ^2 sin^2χ. It is found that ε becomes small and |η_∥/v| is constant for sufficiently small ϕ due to the flatness of the inflaton potential. When m_χ≪√(λ) v_ϕ and ξ v_ϕ^2 ≪ 1, we have |η_∥/v| ≈ 4/v_ϕ^2, which implies that the VEV must exceed the Planck scale in order to achieve SR inflation with the Higgs potential in the region of small field value ϕ. §.§ Slow roll versus slow turn Let us consider the condition characterizing the two-field nature of the inflationary trajectory according to Eqs. (<ref>) and (<ref>). Because the turn in the field space can be dominant if |η_∥/v| ≪η_⊥ /v <cit.>, let us define μ := max( η_⊥/v/|η_∥/v|), which is evaluated on the inflationary trajectory. This μ parameter quantifies how strong the two-field nature originating from the movements of ϕ and χ is. We use ϕ_* and χ_* for the horizon-exit values of these fields. As typical values of χ_*, we choose χ_* = 23 π /48, π/3, and π/100, which respectively correspond to the following three cases: * Near the top of the pNGB potential * Middle of the pNGB potential * Near the bottom of the pNGB potential Figure <ref> shows the μ parameter in the (m_χ, ξ) plane for the aforementioned values of χ_*. We set λ = 1 and ϕ_* > v_ϕ = 10^-4 as in Section <ref>, and evaluate μ at the e-folding N = 55. For large m_χ or ξ, it is not possible to satisfy the SR and/or ST conditions for a fixed e-folding, as shown by the white region in Figure <ref>. In the figure, ϕ tends to drive the inflation with the field-turning effect becoming dominant when the pNGB potential is effective as m_χ increases. This implies the SR/ST violating (white) region for large m_χ with a fixed ξ. Conversely, a larger ξ makes ϕ_* smaller in the ϕ_* > v_ϕ region due to the flattening effect of the inflaton potential. For too large ξ, the inflaton potential for ϕ becomes too flat, which may result in a too-slow acceleration. That implies a large ξ results in large μ for a fixed m_χ (a fixed size of the pNGB potential). If m_χ or ξ is relatively large, the inflaton tends to move in the χ-direction. 
It can be seen from the ratio of the gradients of the inflaton potential K^22∂_2 ln V / K^11∂_1 ln V = m_χ^2 ( 1 + ξϕ^2 +6 ξ^2ϕ^2 ) sin 2χ/ϕ [ λϕ^2 + 2m_χ^2 ( 1 - ξϕ^2 ) sin^2χ ], which is proportional to m_χ^2 and ξ^2 when the dynamics of ϕ is dominated by the Higgs potential. We also find a simple approximate formula for the μ parameter for small v_ϕ and m_χ: μ≈ m_χ^2 ξ^5/2 v_ϕ^2 sin (2χ_*) / 8 λ. It is found from this expression that μ vanishes for m_χ=0, and grows with ξ as μ∝ m_χ^2 ξ^5/2. From these behaviors, we find that a large μ can be realized in the large m_χ and large ξ region, which implies a strong two-field nature of the inflation. It is noted that the inflaton potential (<ref>) is rewritten as V(ϕ, χ) = λ[(ϕ^2 - v_ϕ^2)^2 + 2 ( m_χ/√(λ))^2 ϕ^2 ( 1 - cos 2χ)]/8(1 + ξϕ^2 )^2. While we have shown Figure <ref> with a fixed value λ=1, the μ parameter for other values of λ can be obtained by rescaling m_χ as m_χ/√(λ), since μ is roughly given by the ratio of the potential terms and the overall λ dependence is cancelled, as seen from the above potential. §.§ Classification of inflation trajectory The parameter μ defined in Eq. (<ref>) is an appropriate quantity for characterizing the multi-field nature of the inflaton motion. In this section, we classify the inflation trajectories with the help of μ. This classification enables us to comprehend the multi-field nature of inflation without having to examine in great detail the complex motions of the inflatons due to their interactions with each other. Figure <ref> shows typical inflation trajectories in the present two-field model. The vectors in the figures indicate the potential gradients appearing in the EOMs, K^ab∂_b ln V with a,b = 1, 2. The red and blue solid lines represent the trajectories with N=50 and N = 60, respectively. In the shaded region, the SR and/or ST condition is not satisfied. We here adopt the values of v_ϕ = 10^-4 and v_ϕ = 20 as in the previous case. In the upper (lower) three panels, the VEV of Φ is chosen as v_ϕ = 10^-4 (v_ϕ = 20), which corresponds to the large-field (small-field) case with ϕ_* > v_ϕ (ϕ_* < v_ϕ). When m_χ is small and the pNGB potential contribution is irrelevant (the left panels of Figure <ref>), the inflation is mainly driven by ϕ. This is similar to single-field inflation scenarios such as the Higgs inflation and chaotic inflation. In this case, μ≪ 1 as a result of little motion in the χ-direction during the inflation. As m_χ increases, χ also plays the role of an inflaton together with ϕ, which is shown in the middle panels of Figure <ref>. As illustrated in the figures, the multi-field effect becomes pronounced, frequently resulting in μ≳ 1. In the right panels, where m_χ is large, the inflaton trajectory is mainly along the pNGB χ-direction, similar to the natural inflation. In this case, the inflation process is again driven by a single field, which results in μ≪ 1. Let us classify the inflaton trajectories into the following three categories in the present two-field inflation model: * Higgs inflation type driven by ϕ (shown by (a) and (d)) * Mixed type with both ϕ and χ as inflaton (shown by (b) and (e)) * Natural inflation type mainly driven by χ (shown by (c) and (f)) In the following section, we examine the relation between inflationary trajectories and the cosmological observables. The symbols H(iggs), M(ixed), and N(atural) will be used to indicate which of these types of inflation trajectory we are referring to. 
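Before moving to the observational constraints, the small numerical sketch below (not part of the analysis code used for the figures) evaluates the first SR parameter ε ≈ (1/2) K^ab ∂_a ln V ∂_b ln V and the gradient ratio K^22∂_2 ln V / K^11∂_1 ln V quoted above by central finite differences; the potential and metric repeat the transcription given in the previous section, and the sample point is illustrative only (reduced Planck units).

import numpy as np

def V(phi, chi, lam, v_phi, m_chi, xi):
    num = lam * (phi**2 - v_phi**2)**2 + 2 * m_chi**2 * phi**2 * (1 - np.cos(2 * chi))
    return num / (8 * (1 + xi * phi**2)**2)

def K_inv(phi, xi):
    # Inverse of the diagonal field-space metric K_ab
    k11 = (1 + xi * phi**2 + 6 * xi**2 * phi**2) / (1 + xi * phi**2)**2
    k22 = phi**2 / (1 + xi * phi**2)
    return np.diag([1 / k11, 1 / k22])

def eps_and_ratio(phi, chi, params, h=1e-4):
    lam, v_phi, m_chi, xi = params
    lnV = lambda p, c: np.log(V(p, c, lam, v_phi, m_chi, xi))
    # Central finite differences of ln V in the phi and chi directions
    g = np.array([(lnV(phi + h, chi) - lnV(phi - h, chi)) / (2 * h),
                  (lnV(phi, chi + h) - lnV(phi, chi - h)) / (2 * h)])
    Kinv = K_inv(phi, xi)
    eps = 0.5 * g @ Kinv @ g
    ratio = (Kinv[1, 1] * g[1]) / (Kinv[0, 0] * g[0])
    return eps, ratio

# Example point: lambda=1, v_phi=1e-4, m_chi=1e-4, xi=10, at phi=1, chi=pi/3
print(eps_and_ratio(1.0, np.pi / 3, (1.0, 1e-4, 1e-4, 10.0)))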
§ CONSTRAINTS FROM COSMOLOGICAL OBSERVATIONS This section presents an analysis of the inflation observables and the parameter constraints for the current two-field inflation model. §.§ Transfer functions and observables In the context of single-field inflation, the observables are derived from the fluctuations of the inflaton field and the curvature perturbation, denoted by R, in the co-moving gauge on hypersurfaces at constant time <cit.>. In the case of multi-field inflation, this scalar fluctuation is defined as a projection of the multi-field fluctuations onto the tangent direction of the inflation trajectory: R = K_abê_∥^a δφ_f^b/v. The perturbation δφ_f is the multi-field version of the Mukhanov-Sasaki variable <cit.> δφ_f = δφ + ψ dφ/d N, where δφ is the scalar field fluctuation and ψ is a part of the diagonal metric perturbation <cit.>. Furthermore, it is necessary to consider the fluctuations in the vertical direction with respect to the inflation trajectory, which give rise to the isocurvature perturbation <cit.> S = K_abê_⊥^a δφ_f^b/v. These two perturbations R and S have the following relation <cit.> [ R; S ] = [ 1 T_RS; 0 T_SS ][ R_*; S_* ]. The values at the horizon exit are denoted with ∗ such as R_* and S_*. This relation indicates that the curvature perturbation R is not frozen after the horizon exit and the isocurvature perturbation S contributes to it through the transfer function T_RS <cit.>. In contrast to the single-field inflation scenario, T_RS can contribute to the inflationary observables in the multi-field case. The power spectrum 𝒫_R and tensor-to-scalar ratio r are given by <cit.> 𝒫_R = V_*/24π^2 ε_* ( 1 + T_RS^2), r = 16 ε_*/(1 + T_RS^2). In addition, we consider the spectral index n_s and the running spectrum α given by n_s = 1 + d ln𝒫_R/d ln k, α = d n_s/d ln k, and the isocurvature fraction <cit.> β_iso = T_SS^2/(1 + T_RS^2 + T_SS^2). These quantities also depend on the transfer functions <cit.>, and all of these can be expressed by the parameters and field values in the present inflation model. See Appendix <ref> for the details. These inflationary observables are experimentally constrained by the Planck Collaboration <cit.> 𝒫_R = (2.1 ± 0.1) × 10^-9, r< 0.07, n_s = 0.9649 ± 0.0042, α = -0.0045 ± 0.0067, β_iso < 0.038. §.§ Spectral index and running spectrum Figure <ref> shows typical predictions in the (n_s,r) plane in the present two-field model. The power spectrum is normalized by λ as 𝒫_R = 2.1 × 10^-9 such that it is consistent with the Planck observation. The parameters v_ϕ and χ_* are fixed at specific values. In the left panel, we explicitly use v_ϕ = 10^-4 and χ_* = π/3, while v_ϕ = 20 and χ_* = π/3 in the right panel. In addition to these, we explicitly use 0≤ m_χ≤ 1.4× 10^-5 (ξ=10^-3), 0≤ m_χ≤ 2.3× 10^-5 (ξ=10^-2) and 0≤ m_χ≤ 1.1× 10^-4 (ξ=1) in the left panel and 0≤ m_χ≤ 7.0 × 10^-6 (ξ=10^-6 and ξ=10^-3) in the right panel. We first focus on the left panel of Figure <ref>. When m_χ is sufficiently small, the Higgs potential dominates the inflaton potential. Consequently, the prediction of the observables is similar to the Higgs inflation case. In this case, n_s (r) becomes large (small) as N increases. This is due to the fact that the initial value ϕ_* must be sufficiently large in order to flatten the inflaton potential during the inflation, which results in larger n_s and smaller r. The change in n_s occurs more quickly when N is larger, which makes ϕ_* larger, since m_χ appears in the Lagrangian in the combination of m_χ^2 ϕ^2. As m_χ increases, the inflaton motion in the χ-direction also contributes to the inflation. 
Consequently, ϕ_* moves closer to the bottom of the Higgs potential in order to achieve a specific fixed e-folding. This results in a smaller inflaton potential, which in turn implies smaller ε_* due to the Planck normalization. From the single-field inflation analogy, this gives rise to an increase in the value of n_s. From this consideration, we can argue that the value of n_s increases when the pNGB mode also contributes to the inflation, provided that ϕ_* > v_ϕ. Conversely, sufficiently large m_χ can drive the natural-like inflation. The left panel of Figure <ref> shows that larger m_χ results in a reduction in both n_s and r. This feature is also seen in the context of natural inflation. For small ξ and large m_χ, the pNGB part of the action (<ref>) during the inflation is approximated as S ≈∫ d^4x √(-g)[ ϕ_*^2/2 ∂_μχ∂^μχ - m_χ^2 ϕ_*^2/4 ( 1 - cos 2χ ) ]. This action shows that ϕ_* being larger than the Planck scale facilitates a flatter pNGB potential for the natural-like inflation. Consequently, we can conclude that the natural-like inflation is feasible despite v_ϕ being smaller than the Planck scale. We note that a larger ξ flattens the inflaton potential enough and tends to result in a smaller value of r. Let us move to the right panel of Figure <ref>. When m_χ is sufficiently small, we can say that an increase of N results in an increase of n_s and a decrease of r. This is due to the fact that, in order to ensure the flatness of the inflaton potential during inflation, a larger value of N requires initiating the inflation from a flatter part of the potential. In this region, the inflation is driven almost only by the radial component ϕ. On the other hand, if the Higgs potential is no longer dominant, the natural inflation occurs with a particular m_χ, which leads to a large r. This behavior can be seen in the figure, and we can say that natural-like inflation also occurs for a large VEV, as in the single-field natural inflation. Finally, we discuss the relation between ξ and the observables. When m_χ is sufficiently small, the Higgs potential is dominant, so a larger ξ tilts the inflaton potential in the region of ϕ < v_ϕ. This results in smaller n_s and larger r. Conversely, if the pNGB potential is the dominant one, a larger ξ leads to a flatter inflaton potential, which in turn results in a smaller r. These can be read from the right panel of Figure <ref>. §.§ Constraints on potential parameters We show the parameter space in the (m_χ, ξ) plane which is consistent with the cosmological observations by considering three typical initial values χ_*, as discussed in Section <ref>: χ_* = 23 π/48, π/3, π/100. The discussion is mainly divided into the cases of ϕ_* > v_ϕ and ϕ_* < v_ϕ. These cases typically correspond to the large and small field inflation scenarios along the ϕ direction, respectively. We investigate the inflationary features using v_ϕ=10^-4 as the case of small VEV (v_ϕ≪1) and v_ϕ=20 as the case of large VEV (∼O(10)). In addition to them, we also investigate the third case with v_ϕ=5 as a typical intermediate VEV scale (∼O(1)). In the following, we consider the four patterns of the VEV and the initial field value ϕ_*: * ϕ_* > v_ϕ = 10^-4≪ 1 * ϕ_* > v_ϕ = 20 = O(10) * ϕ_* < v_ϕ = 20 = O(10) * ϕ_* < v_ϕ = 5 = O(1) In each pattern, we analyze the inflation with various values of the soft-breaking mass m_χ. The inflaton trajectories are classified into three types, as discussed in Section <ref> and referred to by the symbols H, M and N. 
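For reference, the observables introduced above that depend only on ε_*, V_* and the transfer functions can be evaluated directly; the sketch below is illustrative only and omits n_s and α, which require derivatives along the trajectory.

import numpy as np

def observables(V_star, eps_star, T_RS, T_SS):
    # Power spectrum normalization, tensor-to-scalar ratio and isocurvature fraction
    P_R = V_star / (24 * np.pi**2 * eps_star) * (1 + T_RS**2)
    r = 16 * eps_star / (1 + T_RS**2)
    beta_iso = T_SS**2 / (1 + T_RS**2 + T_SS**2)
    return P_R, r, beta_iso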
In all figures of parameter constraints given below, the Planck normalization = 2.1 × 10^-9 is imposed to determine the coupling constant λ. §.§.§ > v_ϕ and v_ϕ≪ 1 First, we examine the case where the VEV is small and ϕ is rolling down from a large initial value. Figure <ref> shows the parameter space which is consistent with the observations, where we set v_ϕ = 10^-4, > v_ϕ, and N=55. In the white region, the condition N = 55 is not satisfied, which is found in the large m_χ region. This is due to the fact that the Planck normalization provides an upper limit for m_χ, and the SR and ST conditions exclude small for enough e-folding. The pink region is consistent with the observations and N=55. The gray, blue, red colored regions are excluded by the constraints from the observations on n_s, r, and α, respectively. The black (brown) dashed line represents =1 (λ = 1). The Planck normalization and the perturbativity of the quartic coupling λ provides the upper bound on the non-minimal coupling, which is given by ξ≲10^5, as discussed in Refs. <cit.>. The soft-breaking mass parameter m_χ determines which contribution of potential is dominant, the Higgs potential or the pNGB potential. In the following, the parameter space is analyzed with paying attention to the value of m_χ. We find some small allowed region labeled by N around (m_χ , ξ) ∼ (10^-5, 10^-3) in Figure <ref>(a) where the inflation is mainly driven by χ, and ϕ oscillates at the end of inflation. It is noticed that the inflationary expansion in this region is similar to the one discussed in Refs. <cit.>. Small m_χ region: In the small m_χ region, the inflation is dominantly driven by ϕ, which region is labelled by H in the figure. There the spectral index and the tensor-to-scalar ratio are approximately given by n_s ≈ 1 - 8 (3 + 5 ξ^2 + 24 ξ^2 ^2 + 2 ξ^2 ^4 + 12 ξ^3 ^4 )/^2(1 + ξ^2 + 6 ξ^2 ^2)^2 + O ( m_χ^2 ), r ≈128/^2 ( 1 + ξ^2 + 6 ξ^2 ^2) + O ( m_χ^2 ), where the e-folding is expressed as <cit.> N ≈1 - √( 1 + 32 ξ + 192 ξ^2) + 2 (1 + 6 ξ ) ξ^2 + 12 ξln1 + 12 ξ + √(1 + 32 ξ + 192 ξ^2)/2 (1 + 6 ξ) ( 1 + ξ^2)/16 ξ. When ξ is also small, these expressions reduce to n_s ≈ 1 - 3/N + O ( m_χ^2 ), r ≈16/N + O ( m_χ^2 ), and are excluded by the observations as seen in Figure <ref>. This is due to the fact that a smaller ξ results in a more tilted inflaton potential, which makes the spacetime away from the de Sitter space and r becomes larger than the observed value. The above discussion and the approximate formulae lead to the conclusion that there is a lower bound for the non-minimal coupling ξ. This can be read from Figure <ref> as ξ≳ 10^-2, which is consistent with the result in Ref. <cit.>. Next, let us discuss the running spectrum. In the small ξ region, it is approximately given by α≈ - 3/N^2 + O ( m_χ^2 ), which implies that α is consistent with the observation for sufficiently large e-folding N under the current situation. This result is to be expected since the magnitudes of |1-n_s| and |α| are generally hierarchical in single-field inflation <cit.>. On the other hand, for large ξ, we obtain the approximate formula for α, α ≈ - 32 / 9 ξ^2 ^4 . A large value of ξ ensures that α remains sufficiently small and consistent with the experimental data. That means a hierarchy between |1 - n_s| and |α|, and the inflation is mainly achieved by the Higgs potential part and reduces to the single-field inflation scenario effectively. We also discuss the isocurvature fraction for small m_χ. 
The general form is given by (<ref>) and the transfer function T_𝒮𝒮 is written as T_𝒮𝒮 = exp∫_N_*^N β̃ dN'. The explicit form of the function β̃ is found in Appendix <ref>, which can be determined by the model parameters and field values obtained by solving the EOMs. In the present case, β̃ is approximately given by β̃≈ -8/ϕ^2 ( ξ≪ 1 ), β̃≈ - 4 v_ϕ^2 / 3 ϕ^2 (ξ≫ 1). A negative, moderate β̃ leads to an exponentially small T_𝒮𝒮, and thus β_iso is sufficiently small to be consistent with the observational data. Note here that the condition ξ≫ 1 implies ≪ 1. Large m_χ region: If we have a large m_χ and the χ contribution, both ϕ and χ can play the roll of the inflaton when ξ is small, as shown in the left and middle panels of Figure <ref>. For the spectral index, we find that n_s becomes larger with increasing of m_χ under ξ≪ 1 as discussed in Figure <ref>, which leads to a contradiction to the observational data. Conversely, the parameter regions where the inflation is primarily driven by χ can be allowed as shown in the left panel. In particular, the spectral index is approximated as n_s≈ 1 - 4(2 + T_RS^2)/(1 + T_RS^2 )^2sin^2, when m_χ≫√(λ) v_ϕ and ξ≪ 1. The spectral index on this inflation trajectory can match to the observations. That is driven by the pNGB mode with a help of the dynamical radial component. Furthermore, the inflation may also be allowed in the case of ξ≫ 1, where the pNGB mode contributes sufficiently, as shown in the left and middle panels of Figure <ref>. This is due to the fact that a large ξ not only flattens the inflaton potential but also facilitates the movement in the χ-direction as discussed in Section <ref>. On the other hand, if the initial value is sufficiently small as in the right panel, the inflaton moves almost no further in the χ-direction, independently of v_ϕ. Finally, we give a comment on the isocurvature bound β_. The function β appearing in Eq. (<ref>) is approximated as β ≈ - 2 /ϕ^2 sin^2χ (ξ≪1), β ≈ - 2ξ/sin^2 χ (ξ≫1), where we assume m_χ≫√(λ)v_ϕ and v_ϕ≪ 1. Since β is negative in the both cases, we can say that β_iso is sufficiently small in this case. §.§.§ > v_ϕ and v_ϕ = O(10) We consider > v_ϕ with the VEV being a relatively large value of v_ϕ = 20. Figure <ref> shows the allowed parameter region for the inflation under this VEV and field value. The aforementioned colors are used to show the experimentally-excluded regions. In the white region, the e-folding N = 55 cannot be satisfied with a large m_χ, similar to Figure <ref>. Small m_χ region: In the small m_χ region with > v_ϕ = 20 and ξ≫ 1, the spectral index, the tensor-to-scalar ratio, and the running spectral are approximately given by n_s ≈ 1 - 8 v_ϕ^2 / 3 ^2 , r ≈ 64 v_ϕ^4 / 3 ^4 , α ≈ - 32 v_ϕ^4 / 9 ^4 . The relation between and the e-folding reads N = (1 + 6 ξ)(^2- ϕ_e^2) - v_ϕ^2 ln^2/ϕ_e^2 - 6 ( 1 + ξ v_ϕ^2) ln1 + ξ^2/1 + ξϕ_e^2/8 ( 1 + ξ v_ϕ^2), where we introduce ϕ_e for the final value of ϕ, which can be derived from the SR and ST conditions. From these approximated formulae, we find that the inflation in this case is consistent with the observational data. This is mainly due to the fact that, as in the previous case, a large ξ results in the flattening of the inflaton potential. In the case of small ξ, on the other hand, the gradient of the inflaton potential increases, resulting in the inconsistency between theoretical predictions and the observational data, as seen in Figure <ref>. 
Since β is approximately given by (<ref>) and (<ref>) also in this case, we find β_iso sufficiently small. The inflaton behavior discussed here is similar to the previous one with the small VEV v_ϕ≪ 1, since both inflation motions are mainly driven by the radial mode only. Large m_χ region: In this case, the inflaton trajectories largely depend on the initial value . For large m_χ, we have a significant contribution from the pNGB potential, which in turn allows the inflaton to move in the χ-direction. In particular, the natural inflation can be achieved if the initial field values and parameters are tuned, as seen in the middle panel of Figure <ref>. If both scalar fields drive the inflation, a larger ξ can lead to the inflation consistent with the observations, as seen in the left and middle panels. A small value of ξ makes the inflation the chaotic or the natural one, which in any case is excluded by the observational data. In the large VEV case, we have the upper bound on m_χ. For ξ≫ 1, the power spectrum is approximated as P_R≈ (λ^2 + 4m_χ^2sin^2 )^3 (1 + T_RS^2) / 1536 π^2ξ^3 ^2 m_χ^4sin^2 2. A larger v_ϕ requires a larger , which in turn results in a larger m_χ due to the Planck normalization P_R. Accordingly, we find that the experimentally allowed parameter value of m_χ becomes larger in the present case than that of v_ϕ≪ 1 with ξ≫ 1. On the other hand, for ξ≪ 1, the power spectrum is approximated as P_R≈^4 (λ^2 + 4m_χ^2sin^2 )^3 (1 + T_RS^2)/ 1536 π^2 [ λ^2 ^4 +4 m_χ^2 ( λ^2 + m_χ^2 ) sin^2 ] . When m_χ is large, namely the pNGB potential is dominant, we have P_R∝ m_χ^2 since the soft-breaking mass parameter is the overall coefficient of the inflaton potential. Therefore, we find that a large v_ϕ leading a large requires a smaller m_χ for ξ≪ 1. This behavior is opposite to the case of ξ≫ 1. As for the isocurvature bound, the function β in this case becomes β ≈ - 2/ϕ^2sin^2χ (ξ≪ 1), β ≈ - 2ξ/sin^2 χ (ξ≫ 1), for m_χ≫√(λ) v_ϕ and v_ϕ≫ 1. We find that β_ tends to be exponentially small in this type of inflation, similar to the case of v_ϕ≪ 1. §.§.§ < v_ϕ and v_ϕ = O(10) Let us move on to discussing the different type of inflation with < v_ϕ, namely, ϕ rolls down to the vacuum from some smaller field value. As mentioned before, we choose two typical values for the VEV as v_ϕ = 5 and 20. We note that the SR inflation is not possible for a small value of VEV, as shown in Eq. (<ref>). We first consider the large VEV case, namely, v_ϕ = 20. Figure <ref> shows the allowed parameter region in the (m_χ, ξ) plane for < v_ϕ and N = 55. It is noted that a rather small value of ξ is not excluded because the inflaton potential is sufficiently flat when < v_ϕ. On the other hand, the large ξ region is excluded because the inflaton potential becomes tilted when < v_ϕ. These behaviors are in contrast to the previous case with > v_ϕ. Small m_χ region: When m_χ is small, the observables are explicitly written as n_s ≈ 1 - 8 ( 1 + ξ v_ϕ^2 ) / v_ϕ^2 - 8 ( 5 + 6 ξ v_ϕ^2 +ξ^2 v_ϕ^4 ) ^2/ v_ϕ^4, r ≈ 128 ( 1 + ξ v_ϕ^2 )^2 ^2/ v_ϕ^4 (1+ξ^2), α ≈ -64(1+ξ v_ϕ^2)^2(5+ξ v_ϕ^2-12ξ^2 v_ϕ^2)^2/v_ϕ^6, in the region where the SR and ST conditions are satisfied. If ξ≪1, the observables r and α become sufficiently small to be consistent with the observations due to the suppression by a large value of v_ϕ. Furthermore, n_s is in the experimentally allowed range for v_ϕ∼20, as shown in Figure <ref>. We can say again this behavior is due to the flat inflaton potential in the region ϕ_*<v_ϕ. 
On the other hand, ξ≲O(1) leads to a more tilted inflaton potential in the region < v_ϕ. That makes n_s smaller than the observed value. We find this result from the approximate formula of n_s in this case, n_s ≈ 1 - 8/v_ϕ^2 - 8ξ . The severe constraint can be read from this formula and also found in Figure <ref>. A smaller ξ is required for the successful inflation under the current conditions. Finally, we discuss the isocurvature fraction β_iso in this case. The green shaded region in Figure <ref> is experimentally excluded by the isocurvature bound. In the region of m_χ≪ v_ϕ, the function β̃ in (<ref>) is approximated as β̃≈ -8ϕ^2/3v_ϕ^4(1+ξ v_ϕ^2)(3 + 4ξ +12ξ^2 -5ξ^2 v_ϕ^2 +12 ξ^3 v_ϕ^2). For sufficiently small or large ξ, β is always negative and thus β_iso is found to be small enough. When ξ∼𝒪(1), on the other hand, β turns out to be positive, which can lead to a large β_iso and is experimentally disfavored. This behavior is actually seen in Figure <ref>. Large m_χ region: In this region, we have three different patterns of inflation, depending on the initial value . In the case where is small, the inflation is mostly driven by ϕ. If is large, both ϕ and χ contribute to the inflation, as in the left and middle panels of Figure <ref>. These two patterns are denoted by the symbols H and M in the figure. As previously discussed, large values of ξ and m_χ lead to a small n_s. This is due to the fact that the inflaton potential of the radial component is dominated by the quadratic term. Therefore the inflation in which both components join is challenging to achieve in the case of < v_ϕ as opposed to the scenario of > v_ϕ. We also have a characteristic behavior in the region with a large m_χ and a small ξ in the middle panel of Figure <ref>. In this region, the inflation similar to the natural inflation occurs with a very specific value of m_χ, as previously discussed in Figure <ref>. That appears under the conditions that χ_* is tuned to realize a fixed value of e-folding N almost only by the χ motion. Consequently, r becomes larger than the observed value, which is a characteristic feature found from the approximated formula <cit.> r ≈32/^2 sin^2. Since this natural-like inflation is almost driven by a single field χ, the hierarchy between |1-n_s| and |α| is hold. We note that in the region where natural-like inflation occurs, β is approximated as β≈ - 2 /ϕ^2 sin^2χ, which also makes β_iso exponentially small due to its negative value. §.§.§ < v_ϕ and v_ϕ = O(1) As the final pattern of inflation in the present two-field model, consider the intermediate value of VEV ∼O(1). Figure <ref> shows the experimentally allowed parameter region for < v_ϕ = 5. One can see that a large portion of area is more severely constrained by the condition N=55, compared to the large VEV case of < v_ϕ=20. A large ξ is not allowed when v_ϕ is small in order to realize a flat inflaton potential. Besides, compared to the large VEV case, the region with a large m_χ is also excluded. This is because the inflaton potential is flatter for a larger v_ϕ and a smaller m_χ. For large m_χ and < v_ϕ, the power spectrum is approximated as P_R≈λ^3 v_ϕ^12 ( 1 + T_RS^2 ) / 1536 π^2^2 [ λ^2 v_ϕ^4 +4 ( λ v_ϕ^2 + m_χ^2 ) m_χ^2sin^2 ], and it is found that a small v_ϕ only allows a small value of m_χ because of the Planck normalization. The parameter region shown in Figure <ref>, especially in the middle panel, is limited also by the inconsistency with the observed value of n_s. 
We find an approximate formula for it as n_s≈ 1-8(1+ξ v_ϕ^2)/v_ϕ^2 + O( m_χ^2 ), which shows that n_s gives smaller values than the observation in this case <cit.>. This is due to the large acceleration of inflaton in this region. The tensor-to-scalar ratio is also approximated as r ≈128 ^2(1+ξ v_ϕ^2)^2/v_ϕ^4 + O(m_χ^2), which is still smaller in the entire region where the inflaton potential is flat. The approximated formula for α is expressed as Eq. (<ref>). The current case leads to the conclusion ϕ_*≪1, which in turn implies that α is consistent with the observations provided that m_χ is sufficiently small. Finally, we discuss the isocurvature fraction β_. This is found to be large in most entire region in this case. The approximation of β̃ is given by Eq. (<ref>). In contrast to the large VEV case, we find β̃≈ 0 at the beginning of inflation. The inflaton is almost insensitive to the ϕ-direction initially, which results in the approximation β≈ 0 being realized for a prolonged period during inflation. This means that a relatively small VEV under the condition < v_ϕ leads to a large β_, which is inconsistent with the observation. § COSMIC REHEATING In this section, we introduce RH neutrinos with Yukawa coupling to the complex scalar field Φ, and consider the reheating <cit.> so that the inflaton heats the thermal bath by the decay to the neutrino sector. In the vacuum after the inflation epoch, the VEV v_ϕ generates the RH neutrino masses and also the pNGB associated with the lepton number (1)_L symmetry, called the majoron <cit.>. The majoron has been widely studied in the context of dark matter physics <cit.> and the lepton asymmetry generation <cit.>. In this section, we consider this type of RH neutrino couplings can also be available to the reheating process for non-minimally coupled inflaton. §.§ Coupling to RH neutrinos We consider the following action for the RH neutrinos N_i in the Einstein frame, S_N = ∫ d^4x √(-g)[ 1/2 ( 1 + 2 ξ |Φ|^2 )^3/2N_i i D N_i - 1/2 ( 1 + 2 ξ |Φ|^2 )^2 (f_iΦN_i P_R N_i +h.c.) ], where the generation-diagonal Yukawa couplings are introduced between the inflaton and the RH neutrinos, and f_i are generally complex-valued. The covariant derivative D includes the spin connection and P_R is the chiral projection operator. This form of the action, in particular the Φ dependence of the coefficients, can be derived from the original canonical action for N_i in the Jordan frame and Weyl rescaled with the inflaton field, similarly to the inflaton action in Section <ref>. The action has the lepton number symmetry under which N_i and Φ have the charges +1 and -2, respectively. This symmetry may correspond to the U(1) rotation of inflaton field (the origin of pNGB and its soft-breaking mass), which implies a natural support of two-field inflation from a complex scalar. In the following, we focus on the case that ϕ field oscillates at the end of the inflation and decay to the thermal bath through RH neutrinos. We also assume χ stays at a constant value χ̅ during the reheating, and perform the chiral rotation of RH neutrino fields N_i → e^-i γ^5 (χ̅+ f_i) /2 N_i, such that the Yukawa couplings (then the RH neutrino masses) become real valued. During the reheating era, the inflaton oscillates at the bottom of the potential which is determined by the VEV u_ϕ, u_ϕ = √(λ v_ϕ^2Ω^2(v_ϕ)-m_χ^2(1-cos 2χ̅)/λΩ^2(v_ϕ)-ξ m_χ^2(1-cos 2χ̅)). Note that u_ϕ can be vanished if the pNGB potential has a large contribution during the reheating. 
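As a quick numerical illustration of the displaced minimum, the sketch below evaluates u_ϕ from the expression above, assuming Ω²(ϕ) = 1 + ξϕ² (the conformal factor implied by the field-space metric in the Appendix). The parameter values are placeholders, and the branch where the numerator turns negative is mapped to u_ϕ = 0, corresponding to the case where the pNGB contribution drives the minimum to the origin.

```python
import numpy as np

def Omega2(phi, xi):
    # Conformal factor of the non-minimal coupling (assumed Omega^2 = 1 + xi*phi^2)
    return 1.0 + xi * phi**2

def u_phi(lam, xi, v_phi, m_chi, chi_bar):
    """Location of the potential minimum during reheating (u_phi expression in the text)."""
    s = 1.0 - np.cos(2.0 * chi_bar)
    num = lam * v_phi**2 * Omega2(v_phi, xi) - m_chi**2 * s
    den = lam * Omega2(v_phi, xi) - xi * m_chi**2 * s
    if num <= 0.0:                 # pNGB term dominates: the minimum moves to the origin
        return 0.0
    return np.sqrt(num / den)

# Illustrative parameters in reduced Planck units (assumptions, not fitted values)
lam, xi, v_phi, m_chi = 1e-3, 1e-2, 1e-4, 1e-5
for chi_bar in (0.0, np.pi / 4, np.pi / 2):
    print(f"chi_bar = {chi_bar:.2f}: u_phi = {u_phi(lam, xi, v_phi, m_chi, chi_bar):.3e}")
```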
We expand the scalar field around this VEV so that the scalar field has a canonically normalized kinetic term as ϕ = u_ϕ+Ω^2(u_ϕ)/√(Ω^2(u_ϕ)+6ξ^2 u_ϕ^2)ρ, and we obtain the action for the scalar fluctuation and the (normalized) RH neutrinos S = ∫ d^4x √(-g)[ 1/2_μρ^μρ -V(ρ) + 1/2N_i (i D -m_N_i)N_i - 1/2 y_i ρN_i N_i ], The inflaton coupling to the RH neutrinos y_i and their mass eigenvalues m_N_i and m_ρ are given by y_i = Ω(u_ϕ)/√(Ω^2(u_ϕ)+6ξ^2 u_ϕ^2)f_i/√(2), m_N_i = 1/Ω(u_ϕ)f_i/√(2) u_ϕ, m_ρ = √(2λΩ^2(v_ϕ)[u_ϕ^2(1+2Ω^2(v_ϕ))-v_ϕ^2] +m_χ^2(1-5ξ u_ϕ^2)sin^2χ̅)/2Ω^2(u_ϕ)√(Ω^2(u_ϕ)+6ξ^2 u_ϕ^2). §.§ Reheating and leptogenesis We examine whether the inflaton coupling to RH neutrinos can be consistent with the reheating of the universe after inflation. A simplified analysis is performed here by taking into account of rough estimations of seesaw-induced neutrino masses and the condition for leptogenesis. The inflaton decays to the RH neutrino pair, ρ→ N_i N_i, during the oscillation. The width is determined by the Yukawa coupling y_i, Γ_ρ→ N_iN_i = y_i^2/16π m_ρ( 1 - 4 m_N_i^2/m_ρ^2)^3/2. Once the energy is converted to the N_i sector, the Standard Model (SM) fields are thermalized by the decay and scattering of N_i with sufficiently large neutrino Yukawa couplings y_ν_i. The decay process N_i→ L_jH, where L_j and H are the leptons and Higgs fields, is effective when N_i is heavier than the electroweak scale and the decay width is larger than the Hubble parameter at the temperature T. This condition is roughly given by the present model parameters as T < 21 g_*^-1/4y_iv_ϕ(m_ν_i/10^-1 eV)^1/2 T_D, where g_* is the effective degrees of freedom in the radiation with which the temperature is defined. The seesaw-induce light neutrino mass is given by m_ν_i=(y_ν_iv_h)^2/m_N_i with v_h=175 GeV (the generation structure is omitted for simplicity of discussion). The scattering of N_i to the SM sector is also effective if the scattering rate is sufficiently large. This process may be utilized to make the SM sector thermalized radiation component, when the decay process is unavailable in any sense. We roughly evaluate its effectiveness condition by considering the collision term from the scattering, which can be expressed in terms of the present model parameters as T < 0.17 g_*^-3/2y_iv_ϕ(m_ν_i/10^-1 eV) T_S. We have approximated the effective degrees of freedom for the entropy density as equal to that for the energy density. The SM sector is decoupled from the scattering process when its temperature is above T_S. As a typical example of this type of reheating, we focus on the parameter region which we have shown, in the middle panel of Figure <ref>, consistent with the inflationary observable. Figure <ref> shows the classification of the reheating and leptogenesis on the allowed region of (m_χ,ξ) plane. The uncolored region is excluded by the inflation phenomena only. The purple regions in Figure <ref> correspond to u_ϕ=0, namely, the inflaton oscillates around the origin. In this case, the decay process may be ineffective if the RH neutrino mass is regarded as small. According to (<ref>), the SM sector is thermalized up to the temperature T_S with g_*=112, at which the SM sector decouples from the thermal bath. Since the width (<ref>) is large enough, the RH neutrino sector is subsequently heated and reaches the temperature T_N, T_N = 0.17 g_*^-1/4y_im_ρ^1/2, with g_*=5.25. Here T_N is defined by the Hubble time being equal to the width (<ref>). 
(If T_N<T_S, the N_i reheating era does not exist and the reheating temperature is given by T_N.) As soon as this reheating is over, ρ is assumed to immediately roll down to the non-trivial vacuum, where the RH neutrinos are massive and the decay process becomes valid for heating the SM sector. If T_N<T_D, the SM sector is quickly thermalized, and otherwise, the SM thermalization occurs after the universe cools down to T_D. We then find the final reheating temperature T_R for the SM sector is given by T_R = T_N (T_N < T_S) ( (1 - ω ) T_S^3 + ω T_N^3 )^1/3 (T_S < T_N < T_D) ( (1 - ω) T_S^3 + ω T_D^3 )^1/3 (T_S < T_D < T_N) , where ω g_*^3N/(g_*^SM+g_*^3N). In the region of interest shown in Figure <ref>, the relation T_S< T_N< T_D is found to be numerically satisfied everywhere, and the reheating temperature of the radiation then becomes T_R∼ 0.36 T_N∼ 0.027y_im_χ^1/2. These parameter dependence on y_i and m_χ can be seen in the purple regions of Figure <ref>. We comment on the leptogenesis, the lepton number generation by the out-of-equilibrium decay of RH neutrinos <cit.>. The mechanism can work if at least T_R>m_N_i is satisfied. This condition now reads 0.027m_χ^1/2>v_ϕ, which implies m_χ>1.3× 10^-5 for the parameters used in Figure <ref>. Note that this bound on m_χ is independent of the Yukawa couplings and common to three panels in Figure <ref>. One can see from this bound that only a small portion of the purple region is consistent with successful reheating and leptogenesis in the present inflation scenario. Another case is that the VEV u_ϕ is non-vanishing, namely, the inflaton oscillates around the non-trivial vacuum. In Figure <ref>, it corresponds to the colored regions other than purple, where it is numerically checked that the VEV u_ϕ is roughly similar to v_ϕ. In this case, the RH neutrinos N_i are massive and the reheating by the inflaton decay to N_i is possible if m_ρ>2m_N_i. This minimum requirement for reheating imposes the following condition on the model parameters: Ω^2(v_ϕ)λ^1/2 > 2(Ω^2(v_ϕ)+6ξ^2v_ϕ^2)y_i , where we have neglected small χ̅ contributions. Furthermore, the RH neutrino decay to the SM sector can be sufficiently effective, unlike the u_ϕ=0 case. The reheating by the scattering is subdominant as can be seen by comparing the conditions (<ref>) and (<ref>), and is not included in the analysis below. The reheating temperature is then given by (<ref>) with g_*=112, if this temperature is smaller than T_D and hence the decay process is effective until the universe is heated up to (<ref>). That is converted to the constraint on the model parameters as λ^1/2 < 1.5× 10^4 v_ϕ√(Ω^2(v_ϕ)+6ξ^2v_ϕ^2) (m_ν_i/10^-1 eV)^2 . If not satisfied, the reheating temperature for the SM sector is determined by T_D, as discussed above. We find the inequality (<ref>) generally holds in the present inflation scenario unless v_ϕ is quite small, and the reheating temperature T_R is given by (<ref>) with g_*=112: T_R = 0.052λ^1/4y_iv_ϕ^1/2/(Ω^2(v_ϕ)+6ξ^2v_ϕ^2)^1/4(1-4(Ω^2(v_ϕ)+6ξ^2v_ϕ^2)^2y_i^2/Ω^4(v_ϕ)λ)^3/4 , where the phase space factor is explicitly included, though it is not so important for evaluating T_R. As seen from Figure <ref>, the reheating temperature is not continuously connected to the u_ϕ=0 case. For small ξ, it scales as T_R∝ξ^1/2 if a rough estimation λ∝ξ^2 from the power spectrum is supposed. On the other hand, for larger ξ (ξ v_ϕ≫1), T_R is almost constant and depends linearly on the inflaton coupling y_i only. 
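A minimal sketch of the piecewise reheating temperature for the u_ϕ = 0 branch is given below (Python, reduced Planck units), using the T_D, T_S, and T_N estimates and the g_* values quoted above; the input parameters y_i, v_ϕ, and m_ρ are illustrative assumptions rather than values taken from the figures.

```python
import numpy as np

# Reheating temperature for the u_phi = 0 branch (reduced Planck units, M_pl = 1).
# The numerical prefactors and g_* values follow the estimates quoted in the text.
g_SM, g_3N = 106.75, 5.25               # SM + three right-handed neutrinos
omega = g_3N / (g_SM + g_3N)            # ~ 5.25/112

def T_D(y, v_phi, m_nu_01eV=1.0, gstar=112.0):
    return 21.0 * gstar**-0.25 * y * v_phi * np.sqrt(m_nu_01eV)

def T_S(y, v_phi, m_nu_01eV=1.0, gstar=112.0):
    return 0.17 * gstar**-1.5 * y * v_phi * m_nu_01eV

def T_N(y, m_rho, gstar=5.25):
    return 0.17 * gstar**-0.25 * y * np.sqrt(m_rho)

def T_R(y, v_phi, m_rho, m_nu_01eV=1.0):
    tD, tS, tN = T_D(y, v_phi, m_nu_01eV), T_S(y, v_phi, m_nu_01eV), T_N(y, m_rho)
    if tN < tS:
        return tN
    if tN < tD:                                            # T_S < T_N < T_D
        return ((1 - omega) * tS**3 + omega * tN**3) ** (1.0 / 3.0)
    return ((1 - omega) * tS**3 + omega * tD**3) ** (1.0 / 3.0)   # T_S < T_D < T_N

# Example: y_i = 1e-3, v_phi = 1e-4, m_rho ~ m_chi = 1e-5 (assumed inputs)
print(f"T_R = {T_R(1e-3, 1e-4, 1e-5):.2e} M_pl")
```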
The minimum requirement for the reheating (<ref>) also leads to a non-trivial upper bound on y_i. In Figure <ref>, the pink regions are excluded by this reheating condition. A smaller y_i is favored that can be read off from the fact that the brown region in Figure <ref> becomes wider as y_i decreases. For a fixed y_i, (<ref>) turns out to give a lower (upper) bound on ξ for small (large) value of ξ, if a rough estimation λ∝ξ^2 is taken into account. That indeed corresponds to the horizontal brown bands in the (m_χ,ξ) plane. The thermal leptogenesis is also possible for the u_ϕ≠0 case. The reheating temperature needs to be at least higher than the (lightest) RH neutrino mass scale, and that implies the parameter bound from (<ref>) and (<ref>), 4y_i^2 < λΩ^4(v_ϕ)/(Ω^2(v_ϕ)+6ξ^2v_ϕ^2)^2 -50(Ω^2(v_ϕ)λ v_ϕ)^2/3/Ω^2(v_ϕ)+6ξ^2v_ϕ^2. This bound is satisfied in the brown hatched regions in Figure <ref> where the leptogenesis is available. Assuming λ and ξ are determined by the inflation phenomena and v_ϕ is a fixed parameter in our analysis, (<ref>) gives a upper bound on the inflaton Yukawa coupling y_i. This is the phase space bound of the inflaton decay and does not strongly affect the model parameter constraints once the decay channel is open. These behaviors can be seen from the y_i difference in Figure <ref>. Note that (<ref>) (the brown hatched region) is necessarily stronger than (<ref>) (the brown region) since the latter is the vanishing limit of the phase space. For a fixed y_i, (<ref>) gives the bound on ξ, almost independently of m_χ. Taking a rough relation λ∝ξ^2 obtained from the power spectrum in mind, a lower (upper) bound on ξ for small (large) value of ξ can be derived from (<ref>), which corresponds to the horizontal brown-hatched bands in the (m_χ,ξ) plane. We finally comment on several other possibilities of reheating in the present inflation model. In the pink region of Figure <ref>, the inflaton decay to heavy N_i is forbidden and the main decay mode to the SM sector would be a top quark pair. The reheating only with this mode gives a roughly estimated temperature T_R∼10^-14λ^1/4v_ϕ^1/2(m_ν_i/eV)^2, which is suppressed by large N_i masses and becomes around 𝒪(MeV) to 𝒪(GeV) for the parameters in the pink regions. This temperature is just before the big bang nucleosynthesis, but would need to be improved, e.g., for the leptogenesis being successful. As another possibility, the inflaton can also interact with the scalar sector through the Higgs portal coupling, inflaton potential terms, etc. While we do not consider this type of reheating process in this paper, it would be interesting to examine it in connection with the inflaton shift to the non-trivial vacuum, and will be left for future work. § CONCLUSIONS In this paper, we study the two-field inflation originated from one complex scalar field with a non-vanishing VEV and a non-minimal coupling to gravity. We introduce the μ parameter to characterize the multi-field nature and classify the inflaton trajectories in the field space to three typical types, depending on which field components contribute to the inflationary expansion. According to this classification, there are several cases that only the radial component contributes to the inflation, only the pNGB mode contributes, and both do. The parameter μ tends to be large when both modes contribute to the inflation. 
Furthermore, we make the detailed classifications on the VEV, the soft-breaking mass m_χ, and the initial value of the inflaton field and , as the classification covers the above three typical patterns. For each classification, we show the parameter space consistent with the current cosmological observations. It is found that the experimental results favor the inflation mainly driven by the radial component of the scalar field, but we also find other parameter space consistent with the observations for these three types of inflationary expansion. While the natural inflation is known to generally require that the VEV is larger than the Planck scale, we show that VEVs smaller than the Planck scale allow the successful inflation with a sufficient contribution from pNGB modes under the condition ϕ_*>v_ϕ and ξ≲ 1. Moreover, we find that even when VEV is rather small, the inflation is also successful for large m_χ and where the contribution from the pNGB mode is comparable to the radial mode. For a VEV larger than the Planck scale, the pNGB-contributed inflation is possible without fine tuning. On the other hand, the small field inflation with < v_ϕ is shown to generally contradict with the observations unless sufficient contribution of the radial component exists. It is also found that a tiny value of VEV is not favored for < v_ϕ even when the pNGB mode contributes to some extent. We also examine whether the reheating by the inflaton decay to the RH neutrinos is possible where the complex scalar field generating the Majorana mass term plays the roll of inflaton. The VEV at the end of the inflation can be away from the scalar potential vacuum due to non-trivial field value which is a result of the inflation. Taking this into the account and evaluating the reheating temperature, we show that the parameter region with large m_χ is allowed in addition to the region with moderately large ξ. This is because the inflaton oscillates around the origin in the region where m_χ is sufficiently large. We would need detailed numerical analysis to examine the consistency with successful leptogenesis, but that is beyond the scope of this paper. Some mechanism to move the background χ field is also a left future problem so that the non-trivial potential vacuum is realized after the inflation dynamics and reheating. § ACKNOWLEDGMENTS The work of Y.A. is supported by JSPS Overseas Research Fellowships. The work of K.Y. is supported by JSPS KAKENHI Grant No. JP20K03949. § SCALAR FIELD DYNAMICS In this appendix, we summarize the details of the scalar sector. §.§ EOMs The general form of the action for the scalar fields φ^a is S = ∫ d^4x √(-g)[ - R/2 + 1/2 K_ab(φ) _μφ^a ^μφ^b - V(φ) ]. From this, the general EOM of the scalar field reads ∇^2 φ^a + γ^a_bc(φ) _μφ^b ^μφ^c + K^ab(φ) _b V(φ) = 0, where ∇ is the covariant derivative evaluated by the Einstein frame metric g_μν. The Levi-Civita connection in the scalar field geometry is given by γ^a_bc = 1/2 K^ad (_b K_dc + _c K_bd - _d K_bc ). We assume ds^2 = g_μν dx^μ dx^ν = dt^2 - a(t)^2 d x⃗^2, and φ^a depends only on the time. Then the EOM reduces to φ̈^a + 3 H φ̇^a + γ^a_bcφ̇^b φ̇^c + K^ab_b V = 0, where the dot is the derivative with respect to t, and the Hubble parameter is defined as H ȧ/a. When we introduce the e-folding N by dN = H dt, the EOMs (<ref>) become d^2 φ^a/d N^2 + γ^a_bcd φ^b/dNd φ^c/dN + (3 -ε) ( d φ^a/dN + K^ab_b ln V ) =0. 
In this derivation, we have used the Friedmann equation H^2 = 1/3( 1/2 K_abφ̇^a φ̇^b + V(φ) ), Ḣ = - 1/2 K_abφ̇^a φ̇^b, and introduced the slow-roll parameter ε given by ε - Ḣ/H^2 = 1/2 K_abd φ^a/dNdφ^b/dN. §.§ Geometry of scalar fields We consider the complex scalar Φ=ϕ e^iχ/√(2) with the non-minimal coupling ξ. The metric of the scalar field space is found to be K_ab = ( 1 + ξϕ^2 + 6 ξ^2 ϕ^2/(1 + ξϕ^2)^2, ϕ^2/1 + ξϕ^2). The non-vanishing Levi-Civita connection in this space is given by γ^1_11 = - ξϕ ( 1 -6 ξ +ξϕ^2 +6 ξ^2 ϕ^2 )/(1 + ξϕ^2)(1 + ξϕ^2 + 6 ξ^2 ϕ^2), γ^1_22 = - ϕ/1 + ξϕ^2 + 6 ξ^2 ϕ^2, γ^2_12 = γ^2_21 = 1/ϕ(1 + ξϕ^2). Here the index 1 and 2 correspond to ϕ and χ, respectively. The Ricci scalar of this geometry becomes R = 4 ξ ( 1 + 3ξ + ξϕ^2 + 6 ξ^2 ϕ^2 )/(1 + ξϕ^2 + 6 ξ^2 ϕ^2)^2. We note R→ 0 as the non-minimal coupling ξ→ 0. § SLOW-ROLL AND SLOW-TURN PARAMETERS In the space of field φ^a, the velocity is given by its derivative with respect to the e-folding, i.e., dφ^a/dN, and the (covariant) acceleration of the field vector is η^a=d^2φ^a/dN^2+γ^a_bc(dφ^c/dN)(dφ^b/dN). The slow-roll and slow-turn parameters are defined by the (half) size of the velocity vector and the parallel and vertical components of the acceleration vector as <cit.>: ε = 1/2K_abdφ^a/dNdφ^b/dN, η_∥^a/v = 1/v K_abê_∥^a (d^2φ^b/dN^2 +γ^b_dcdφ^c/dNdφ^d/dN), η_⊥^a/v = 1/v K_abê_⊥^a (d^2φ^b/dN^2 +γ^b_dcdφ^c/dNdφ^d/dN). For the acceleration vector, dividing by the speed v implies to express the rate of the change of the field velocity. When the slow-roll and slow-turn approximation are satisfied, ε, |η_∥|/v, η_⊥/v ≪ 1, these parameters can be explicitly written by the inflaton potential and the field space metric, as given in (<ref>)–(<ref>). We here present the slow-roll and slow-turn parameters in our two-field inflation model, especially in the approximation used in the text. When the soft-breaking mass m_χ is sufficiently large, the inflation is mainly driven by the χ potential, and the slow-roll parameters are given by ε ≈2/ϕ^2[ (1 -ξϕ^2)^2/1 + ξϕ^2 + 6 ξ^2 ϕ^2+ (1+ξϕ^2) ^2χ], η_∥/v ≈ 1 + ξϕ^2 /ϕ^2[ 2(1 - ξϕ^2)^2 ( 1 + 3 ξϕ^2 + 12 ξ^2 ϕ^2) tan^2χ/(1 + ξϕ^2 + 6 ξ^2 ϕ^2 )^2 +^2 χ[ 1 + 5 ξϕ^2 + 12 ξ^2 ϕ^2 + 2 ξ^2 ϕ^4 + 12 ξ^3 ϕ^4 + (1 - ξϕ^2)cos2χ] + 4(1- ξϕ^2) ] / [ ( 1 -ξϕ^2)^2 tan^2 χ + (1 + ξϕ^2)(1 + ξϕ^2 + 6 ξ^2 ϕ^2) ]. Both parameters do not depend on m_χ since the inflaton χ potential is proportional to it. When ϕ takes a larger value than v_ϕ during the inflation, the slow-roll parameters are expressed as ε ≈8/λ^2ξ ( 1 + 6 ξ )ϕ^4[ [λ(1 +ξ v_ϕ^2) - 2ξ m_χ^2 sin^2 χ]^2 + m_χ^4 ξ^2 ( 1 + 6 ξ)sin^2 2χ], η_∥/v ≈8 ξ/λϕ^2[ [λ(1 +ξ v_ϕ^2) - 2ξ m_χ^2 sin^2 χ]^3/ξ^2 ( 1 + 6 ξ )^2 + 2[λ(1 +ξ v_ϕ^2) - 2ξ m_χ^2 sin^2 χ] m_χ^4 sin^2 2χ/1 + 6 ξ - ξ m_χ^6 cos 2χsin^2 2χ] / [ [λ(1 +ξ v_ϕ^2) - 2ξ m_χ^2 sin^2 χ]^2/ξ ( 1 + 6 ξ) + ξ m_χ^4 sin^2 2χ] , On the other hand, in the small field region with ϕ < v_ϕ, the slow-roll parameters become ε ≈8 ϕ^2/λ^2 v_ϕ^8[ [λ v_ϕ^2(1 + ξ v_ϕ^2) - 2m_χ^2 sin^2χ]^2 + m_χ^4 sin^2 2χ], η_∥/v ≈4/λ v_ϕ^4[[λ v_ϕ^2(1 + ξ v_ϕ^2) - 2m_χ^2 sin^2χ]^3 + 3[λ v_ϕ^2(1 + ξ v_ϕ^2) - 2m_χ^2 sin^2χ] m_χ^4 sin^2 2χ - 2 m_χ^6 cos 2χsin^2 2χ] / [ [λ v_ϕ^2(1 + ξ v_ϕ^2) - 2m_χ^2 sin^2χ]^2 + m_χ^4 sin^2 2χ]. § FORMULAE OF COSMOLOGICAL OBSERVABLES §.§ Transfer functions In the multi-field inflation, the isocurvature mode turn into the curvature fluctuations, thus, it is important to represent the expression for the transport functions describing the super-horizon evolution of the perturbation, according to Ref. 
<cit.>. The effective mass matrix is defined as M^a_b ≡ K^ac ( ∂_c ∂_b ln V - γ^d_cb ∂_d ln V ) + (1/3) ε R ê_⊥^a K_bc ê_⊥^c. With the expressions of the basis vectors (<ref>) and the Ricci scalar of the field space (<ref>) at hand, the matrix M is fully determined by the couplings and background field values in the present inflation model. With this matrix, the transfer functions are defined as T_SS = exp[ ∫_N_*^N dN' ( K_ac ê_∥^c M^a_b ê_∥^b - K_ac ê_⊥^c M^a_b ê_⊥^b ) ], T_RS = ∫_N_*^N dN' (η_⊥/v) exp[ ∫_N_*^N' dN'' ( K_ac ê_∥^c M^a_b ê_∥^b - K_ac ê_⊥^c M^a_b ê_⊥^b ) ]. These are also functions of the model parameters and field values through the expressions of the basis vectors and the slow-roll parameter (<ref>). Note that the transfer functions (<ref>) and (<ref>) always take positive values. §.§ Cosmological observables Here we give the exact form of the cosmological observables. First, the power spectrum and the tensor-to-scalar ratio take the following explicit forms in our model, P_R = (1 + ξ ϕ_*^2)^4 ( 1 + T_RS^2 ) V(ϕ_*, χ_*)^3 / (3 π^2 ϕ_*^2) / [ [ λ ( 1 + ξ v_ϕ^2 )( ϕ_*^2 - v_ϕ^2 ) + 2 m_χ^2 ( 1 - ξ ϕ_*^2 ) sin^2 χ_* ]^2 / ( 1 + ξ ϕ_*^2 + 6 ξ^2 ϕ_*^2 ) + m_χ^4 ( 1 + ξ ϕ_*^2 ) sin^2 2χ_* ], r = 2 ϕ_*^2 / [ (1 + ξ ϕ_*^2)^4 ( 1 + T_RS^2 ) V(ϕ_*, χ_*)^2 ] × [ [ λ ( 1 + ξ v_ϕ^2 )( ϕ_*^2 - v_ϕ^2 ) + 2 m_χ^2 ( 1 - ξ ϕ_*^2 ) sin^2 χ_* ]^2 / ( 1 + ξ ϕ_*^2 + 6 ξ^2 ϕ_*^2 ) + m_χ^4 ( 1 + ξ ϕ_*^2 ) sin^2 2χ_* ]. The power spectrum, built from the correlation function of the scalar fluctuations, depends on a positive power of T_RS. On the other hand, r, which is the ratio of the tensor to the scalar fluctuations, depends on an inverse power of T_RS. The spectral index n_s and its running α are also written in terms of the field values in the present model under the slow-roll and slow-turn approximation <cit.>, n_s = 1 - 2 ε_* + 2 K_ab* ê_N*^a M^b_c* ê_N*^c, α = [ - K^ab ∂_a ln V ∂_b ( - 2 ε + 2 K_cd ê_N^c M^d_e ê_N^e ) ]_*, where ê_N = ê_∥/√(1 + T_RS^2) + T_RS ê_⊥/√(1 + T_RS^2). The isocurvature fraction is determined by the transfer functions as β_iso = T_SS^2/(1 + T_RS^2 + T_SS^2). Using the explicit expressions of the basis vectors (<ref>), (<ref>) and the transfer functions (<ref>), (<ref>) defined through the matrix (<ref>), all of these cosmological observables can be expressed by the parameters and field values of the present inflation model. In the text, various approximations of these formulae are presented, depending on the magnitude of the coupling constants and field values.
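As a practical companion to the appendices, the sketch below codes the field-space metric and Christoffel symbols quoted above and integrates the background equations of motion in e-folds. The Einstein-frame potential used here is only an illustrative stand-in (the model's full potential is defined in an earlier section and is not reproduced by this placeholder), so the printed number should not be read as a prediction of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

xi = 1e-2   # non-minimal coupling (illustrative)

def K(phi):
    """Field-space metric K_ab in the coordinates (phi, chi), as given in the Appendix."""
    O2 = 1.0 + xi * phi**2
    return np.diag([(O2 + 6.0 * xi**2 * phi**2) / O2**2, phi**2 / O2])

def Gamma(phi):
    """Non-vanishing Christoffel symbols gamma^a_bc quoted in the Appendix."""
    O2 = 1.0 + xi * phi**2
    A = O2 + 6.0 * xi**2 * phi**2
    g111 = -xi * phi * (1.0 - 6.0 * xi + xi * phi**2 + 6.0 * xi**2 * phi**2) / (O2 * A)
    g122 = -phi / A
    g212 = 1.0 / (phi * O2)
    return g111, g122, g212

# Placeholder Einstein-frame potential: an illustrative stand-in, NOT the paper's full V
lam, v_phi, m_chi = 1e-3, 1e-4, 0.0
def V(phi, chi):
    O2 = 1.0 + xi * phi**2
    return (0.25 * lam * (phi**2 - v_phi**2)**2 + 0.5 * m_chi**2 * phi**2 * np.sin(chi)**2) / O2**2

def dV(phi, chi, h=1e-6):
    return np.array([(V(phi + h, chi) - V(phi - h, chi)) / (2 * h),
                     (V(phi, chi + h) - V(phi, chi - h)) / (2 * h)])

def rhs(N, y):
    """Background EOM in e-folds: phi'' + gamma phi'phi' + (3 - eps)(phi' + K^{ab} d_b lnV) = 0."""
    phi, chi, dphi, dchi = y
    Kab = K(phi)
    eps = 0.5 * (Kab[0, 0] * dphi**2 + Kab[1, 1] * dchi**2)
    g111, g122, g212 = Gamma(phi)
    Kinv = np.diag([1.0 / Kab[0, 0], 1.0 / Kab[1, 1]])
    gradlnV = Kinv @ dV(phi, chi) / V(phi, chi)
    ddphi = -(g111 * dphi**2 + g122 * dchi**2) - (3.0 - eps) * (dphi + gradlnV[0])
    ddchi = -(2.0 * g212 * dphi * dchi) - (3.0 - eps) * (dchi + gradlnV[1])
    return [dphi, dchi, ddphi, ddchi]

def end_of_inflation(N, y):
    phi, chi, dphi, dchi = y
    Kab = K(phi)
    return 0.5 * (Kab[0, 0] * dphi**2 + Kab[1, 1] * dchi**2) - 1.0   # eps = 1
end_of_inflation.terminal = True

y0 = [22.0, np.pi / 3, 0.0, 0.0]   # (phi_*, chi_*, phi', chi'), illustrative
sol = solve_ivp(rhs, (0.0, 200.0), y0, events=end_of_inflation, rtol=1e-8, atol=1e-10)
print(f"Inflation ends (eps = 1) after N ~ {sol.t[-1]:.1f} e-folds")
```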
http://arxiv.org/abs/2407.13656v1
20240718163048
Colorado Ultraviolet Transit Experiment Near-Ultraviolet Transmission Spectroscopy of the Ultra-hot Jupiter KELT-9b
[ "Arika Egan", "Kevin France", "Aickara Gopinathan Sreejith", "Luca Fossati", "Tommi Koskinen", "Brian Fleming", "Nicholas Nell", "Ambily Suresh", "P. Wilson Cauley", "Jean-Michele Desert", "Pascal Petit", "Aline A. Vidotto" ]
astro-ph.EP
[ "astro-ph.EP" ]
Arika Egan (0000-0002-4701-8916): Laboratory for Atmospheric and Space Physics, University of Colorado Boulder, 1234 Innovation Drive, Boulder, CO 80303; Applied Physics Laboratory, Johns Hopkins University, 11101 Johns Hopkins Rd, Laurel, MD 20723. Kevin France (0000-0002-1002-3674): Laboratory for Atmospheric and Space Physics, University of Colorado Boulder. Aickara Gopinathan Sreejith (0000-0002-4166-4263): Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, A-8042, Graz, Austria; Laboratory for Atmospheric and Space Physics, University of Colorado Boulder. Luca Fossati (0000-0003-4426-9530): Space Research Institute, Austrian Academy of Sciences, Graz, Austria. Tommi Koskinen (0000-0003-3071-8358): Lunar and Planetary Laboratory, University of Arizona, Tucson, AZ 85721, USA. Brian Fleming (0000-0002-2129-0292): Laboratory for Atmospheric and Space Physics, University of Colorado Boulder. Nicholas Nell (0000-0001-7131-7978): Laboratory for Atmospheric and Space Physics, University of Colorado Boulder. Ambily Suresh (0000-0002-0506-0825): Laboratory for Atmospheric and Space Physics, University of Colorado Boulder. P. Wilson Cauley (0000-0001-9207-0564): Laboratory for Atmospheric and Space Physics, University of Colorado Boulder. Jean-Michel Desert (0000-0002-0875-8401): Anton Pannekoek Institute of Astronomy, University of Amsterdam, Amsterdam, The Netherlands. Pascal Petit (0000-0001-7624-9222): Institut de Recherche en Astrophysique et Planétologie, Université de Toulouse, CNRS, CNES, 14 avenue Edouard Belin, F-31400 Toulouse, France. Aline A. Vidotto (0000-0001-5371-2675): Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA, Leiden, The Netherlands. § ABSTRACT We present new near-ultraviolet (NUV, λ = 2479 – 3306 Å) transmission spectroscopy of KELT-9b, the hottest known exoplanet, obtained with the Colorado Ultraviolet Transit Experiment (CUTE) CubeSat. Two transits were observed on September 28th and September 29th 2022, referred to as Visits 1 and 2, respectively. Using a combined transit and systematics model for each visit, the best-fit broadband NUV light curves give R_p/R_s = 0.136^{+0.0146}_{-0.0125} for Visit 1 and R_p/R_s = 0.111^{+0.0190}_{-0.0162} for Visit 2, with the planet appearing an average of 1.54× larger in the NUV than at optical wavelengths. While the systematics between the two visits vary considerably, the two broadband NUV light curves are consistent with each other. A transmission spectrum with 25 Å bins suggests a general trend of excess absorption in the NUV, consistent with expectations for ultra-hot Jupiters. Although we see an extended atmosphere in the NUV, the reduced data lack the sensitivity to probe individual spectral lines. § INTRODUCTION KELT-9b is the hottest exoplanet discovered to date, with an equilibrium temperature T_eq ≃ 3921 K <cit.>. The planet has a radius R_p = 1.783 ± 0.009 R_J and a mass M_p = 2.44 ± 0.70 M_J <cit.>, and orbits an A0 star with T_eff = 9495 ± 104 K <cit.> at a distance a = 0.0336 AU <cit.> with a period P = 1.4811 days <cit.>, an environment which prevents aerosol formation on the planet's day side <cit.> and possibly drives an escaping outflow. Both Hα <cit.> and Hβ <cit.> were detected to be optically thick out to ∼1.61 R_p.
Several metals have also been detected in the planet's atmosphere (out to 1.10 and between 10^-3 and 10^-6 bar) with ground-based high-resolution transmission spectroscopy, including Fe1, Fe2, Ti2, Mg1, Na1, Cr1, Sc2, and Ca2 <cit.>. The O1 triplet at 7774 Å was measured by <cit.>, and most recently, <cit.> reported new detections of Ni1, Sr1, Tb2, V1, and Ba2. These ground-based detections have provided important constraints on the abundances, temperature-pressure (TP) profile, and overall structure of the atmosphere. Both H-Balmer lines and metal lines are best fit with model atmospheres that include non-local thermodynamic effects (NLTE) <cit.>. Compared with LTE models, the NLTE models predict upper atmospheric temperatures on the order of 8,000 K <cit.>. Quantities of the upper atmosphere, like abundances and mass-loss estimates, have been made using optical absorption features, but these observations are unable to directly probe the upper atmosphere <cit.>. For example, the Hα and Ca2 detections from <cit.> were found at effective altitudes between 1.2 - 1.44 . The O1 triplet in <cit.> was measured with an effective altitude of 1.17 . KELT-9b is believed to have an escaping atmosphere, but detections of such have not been conclusively made, as the planet's Roche lobe is at 2.017 (Section <ref>). Ultraviolet transmission spectroscopy can provide a unique complement to atmospheric characterization studies. The near-ultraviolet (NUV) bandpass between ∼ 1800 - 3500 Å contains hundreds of strong and abundant metal lines like Fe and Mg <cit.> that have been observed escaping in several planetary atmospheres <cit.>. Observations of high-altitude and escaping metal lines can constrain the energy balance and ionization states of the upper atmospheric layers <cit.>. For example, Mg important coolant in the upper atmospheres of HD189733b <cit.> and KELT-20b <cit.>, and Fe was found to be strongly tied to atmospheric heating <cit.>. In addition, NUV observations have been used to assess the presence of scattering hazes <cit.>. Here we present new NUV observations of KELT-9b obtained with the Colorado Ultraviolet Transit Experiment (CUTE), a CubeSat mission dedicated to exploring the upper atmospheres of highly-irradiated ultra-hot Jupiters <cit.>. We first provide an overview of the CUTE CubeSat and describe the KELT-9b observations in Section <ref>. Section <ref> describes the data reduction and general light curve modeling. Broadband NUV light curves with 100 Å and 25 Å transmission spectra are presented in Section <ref>, and a summary is provided in Section <ref>. § OBSERVATORY & OBSERVATIONS §.§ CUTE Instrument Description The CUTE instrument is an NUV spectrograph operating between 2479 Å and 3306 Å with R ∼ 750 and an average dispersion of 0.404 Å pixel-1. CUTE operates in low-Earth orbit with a period of about 95 minutes and an inclination of ∼98. Like transit observations obtained with the Hubble Space Telescope (HST), CUTE time-series observations exhibit gaps in coverage due to Earth occultations. The spectrum is recorded on a passively-cooled back-illuminated CCD with 515 × 2048 (spatial × spectral) active pixels <cit.>. The CCD experiences temperatures between -5 C and -12 C over the course of an orbit, resulting in a temperature-dependent dark current that is included in our systematics modelling and removal, as described in Section <ref> and <cit.>. 
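For orientation, the instrument numbers above can be turned directly into a wavelength scale; the sketch below assumes a purely linear dispersion solution, which is an idealization of the real CUTE wavelength calibration, and the 25 Å bin edges shown are only approximate.

```python
import numpy as np

# CUTE spectral scale: 2048 spectral pixels at ~0.404 A/pixel starting near 2479 A.
npix, disp, lam0 = 2048, 0.404, 2479.0
wave = lam0 + disp * np.arange(npix)
print(f"bandpass: {wave[0]:.0f}-{wave[-1]:.0f} A")             # ~2479-3306 A, as quoted

# Resolution element implied by R ~ 750 at band center, in pixels
lam_c = 0.5 * (wave[0] + wave[-1])
print(f"~{lam_c / 750.0 / disp:.1f} pixels per resolution element at {lam_c:.0f} A")

# Approximate 25 A bins of the kind used later for the transmission spectrum
# (the exact bin edges adopted by the authors are not reproduced here)
edges_25 = np.linspace(wave[0], wave[-1], 34)                  # 33 bins, ~25 A wide
bin_idx = np.clip(np.digitize(wave, edges_25) - 1, 0, 32)      # detector column -> bin index
```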
A two-dimensional spectrum trimmed to 100 rows centered on the spectral trace, and the corresponding one-dimensional spectrum, are shown in Figure <ref>.

Table: CUTE KELT-9b Observations
Visit # | Visit Start (2022 UTC) | Mid-transit Time (2022 UTC) | Visit End (2022 UTC) | Total Orbits | Total Frames | Valid Frames^a
1 | Sept. 27 - 14:54 | Sept. 28 - 01:36 | Sept. 28 - 12:02 | 13 | 46 | 44
2 | Sept. 29 - 02:09 | Sept. 29 - 13:09 | Sept. 30 - 00:47 | 15 | 57 | 54
^a A valid frame has spacecraft pointing jitter < 10″ RMS.

§.§ KELT-9b observations Two successive transits of KELT-9b were observed with CUTE in 2022, on September 28th and September 29th, referred to hereafter as Visit 1 and Visit 2, respectively. Each visit was planned such that the full observation window was 5× the 3.91 hour transit duration, resulting in ≈ ± 10.5 hours on either side of the transit center. Bias and dark frames were obtained before and after each observation window with a 0.75° pointing offset, pointing the CUTE aperture at dark sky while maintaining similar sky background levels. Exposures are 5 minutes each; Visit 1 consists of 13 CUTE orbits and Visit 2 of 15 orbits, each with 4 CCD exposures per orbit. While the CUTE spacecraft is in Earth's shadow for approximately half of the 95-minute orbit, Earth and Moon avoidance angles as well as the time required for the spacecraft to settle into fine-pointing mode reduce the total available observing time to approximately 20 – 30 minutes per orbit. Occasionally, the CUTE spacecraft exhibits anomalously high pointing jitter within a given 5 minute observation, which smears the spectrum across a larger region of the detector. This reduces the per-pixel signal to noise, and integrating over more pixels results in a reduction of the overall signal-to-noise ratio (SNR) of those exposures compared to a low-jitter frame, rendering them unusable in the light curve analysis. Observations with jitter > 10″ RMS are excluded from the light curve analysis. Table <ref> provides a summary of the observations analyzed herein. § DATA ANALYSIS §.§ Data Reduction The data were reduced using a modified version of the CUTE Data Reduction Pipeline <cit.>. Raw stellar frames are corrected for bad pixels and cosmic rays. Bias frames are corrected for bad pixels and cosmic rays, and a master bias frame is created by taking the median of the corrected individual bias frames from each visit; this master bias frame is then subtracted from all stellar frames. The background is estimated and subtracted using the bias-subtracted frame. Bad pixels are replaced with the median of a 3×3 grid of non-bad pixels surrounding the bad pixel. Cosmic rays are identified and replaced using the L.A.Cosmic routine <cit.>. A region is defined around the spectrum and summed to create a one-dimensional stellar + background spectrum. A background region below the spectrum, of the same size as the stellar region, is summed to form a one-dimensional background spectrum that is subtracted from the stellar + background spectrum to produce the final 1D stellar spectrum. The stellar and background regions are shown in the top panel of Figure <ref>, surrounded by the white and orange lines respectively, and the corresponding 1D stellar spectrum is shown in the bottom panel of Figure <ref>. To create a broadband NUV light curve, the entire spectral bandpass is summed and plotted in time; the NUV light curves for Visits 1 and 2 are shown in Figure <ref>. §.§ Light curve & systematics fitting Like other space telescopes (e.g.
HST; ) CUTE exhibits orbital- and visit-dependent systematics. As seen in the top plot of Figure <ref>, the systematics embedded in both visits take different forms, though a transit signal appears to be present in both. Out of transit, the uncorrected counts vary by 10.8% in Visit 1 and 8.8% in Visit 2. CUTE systematics are heavily correlated with its orbit, evident in the trend of decreasing counts as a function of CUTE orbit, a trend visible in the normalized uncorrected light curves for both Visits 1 and 2. These trends are strongly related to the temperature of the CCD, which varies approximately 6 C per orbit <cit.>, starting at higher temperatures as CUTE enters the Earth's shadow and cools until exiting the shadow. In addition to thermal trends, CUTE's wide field of view and compact design () subjects the focal plane to low scattered light levels that change throughout an orbit and throughout a visit. Finally, pointing jitter increases the extent of the spectral trace on the detector, reducing the observation's signal-to-noise ratio. There are several parameters that describe CUTE's pointing that are related to the CCD temperature, scattered light levels, and exhibit covariances amongst each other (e.g. Figure 3 in ): Azimuth and elevation angles of the telescope with respect to (1) the Earth, (2) the Sun, and (3) the Moon; the Earth latitude and longitude over which the observation began; the CCD temperature; and the R.A., Dec., and roll angles of the telescope. Several of these 12 parameters are necessary to detrend the raw light curves and isolate the astrophysical signal from the background. Individually, all of these parameters are correlated with the background levels by some function (e.g. linear, quadratically), as well as related to each other. Due to the covariance among the parameters, it becomes challenging to include all of them, or select a subset that best describes the systematics. To include as many parameters as possible in the detrending analysis, we utilized principal component analysis (PCA) using the 12 parameters listed above. PCA transforms a set of correlated or potentially correlated variables into an orthogonal set of uncorrelated variables, each called a principal component. It does so by scaling each variable to unit variance, calculating the covariance matrix of the unit-scaled set, and then decomposes the covariance matrix into eigenvectors, or the principal components (PCs). Each PC is then responsible for some level of variation in the whole data set. We used the python package <cit.> to carry out the PCA, and the PCs for visits 1 and 2 are shown in Figure <ref>, plotted against the transit phase. From Figure <ref>, it is evident how different the CUTE spacecraft states are between Visits 1 and 2. For both visits, PC 0 is likely heavily influenced by the detector temperature, as each spacecraft orbit, which typically contains 4 exposures (though there are a few in each visit with only 3 or 2 exposures) begins with a higher temperature as the spacecraft enters the Earth's shadow, with successively lower temperatures as the spacecraft cools within the shadow. Hence, the appearance of 4 distinct rows in PC 0 for both Visits 1 and 2 is due to exposures being taken at similar locations in a given orbit. There are PCs that exhibit dips near the mid-transit time, which may have otherwise been mistaken as a transit signal. For example a dip is very evident in PC 6 for Visit 2, and more subtly in PC 0 for Visit 1. 
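A minimal sketch of this detrending-parameter PCA is shown below, assuming scikit-learn as the PCA implementation (the specific package used by the authors is cited in the text but not named in this extraction). The housekeeping array is a random stand-in with one row per exposure and one column per parameter, and the 0.8% variance cut anticipates the model-selection criterion described below.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# housekeeping: (n_exposures, 12) array of per-exposure spacecraft/CCD parameters
# (Earth/Sun/Moon azimuth + elevation, sub-spacecraft lat/lon, CCD temperature,
#  RA/Dec/roll).  Random numbers stand in for the real telemetry here.
rng = np.random.default_rng(0)
housekeeping = rng.normal(size=(44, 12))            # e.g. 44 valid Visit-1 exposures

scaled = StandardScaler().fit_transform(housekeeping)   # scale each parameter to unit variance
pca = PCA(n_components=12)
pcs = pca.fit_transform(scaled)                          # orthogonal principal components

# Keep every PC whose explained variance exceeds the ~0.8% optical transit depth,
# mirroring the starting criterion used for the systematics model
keep = pca.explained_variance_ratio_ > 0.008
print(f"PC 0 carries {100 * pca.explained_variance_ratio_[0]:.1f}% of the variance; "
      f"{keep.sum()} PCs pass the 0.8% cut")
design_matrix = pcs[:, keep]                             # columns entering the flux model
```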
We emphasize that no CUTE observations were used in the PCA for either visit, only CCD temperature and spacecraft orientation values. We model the total flux in an exposure, f, as the sum of a stellar flux offset, F_0, a transit light curve model in time t and transit parameters Θ, T(t, Θ), and a systematics model consisting of first-order polynomials in the 12 PCs plus first-order polynomials in the jitter components, j, along the x, y, and z spacecraft axes: f = F_0 + T(t,Θ) + ∑_{i=1}^{12} a_i PC_i + ∑_{i=x}^{z} b_i j_i, where a_i are the coefficients for the PCs and b_i are the coefficients for the jitter terms j_i. As shown in Figure <ref>, the PCs do not contribute equally to the total variance in the PC dataset; rather, each subsequent PC has a smaller total variance contribution. PC 0 has the largest variance of all the PCs, with 49.61% for Visit 1 and 55.02% for Visit 2.

Table: KELT-9 system parameters, Θ
Parameter | Unit | Symbol | Value | Source
Stellar effective temperature | K | T_eff | 10170 | <cit.>
Stellar mass | M_⊙ | M_* | 1.978 ± 0.023 | <cit.>
Stellar radius | R_⊙ | R_* | 2.178 ± 0.011 | <cit.>
Stellar surface gravity | cgs | log(g) | 4.093 | <cit.>
Semi-major axis | AU | a | 0.03368 | <cit.>
Planet mass | M_J | M_p | 2.44 ± 0.70 | <cit.>
Planet radius | R_J | R_p | 1.783 ± 0.009 | <cit.>
Orbital period | days | P | 1.48111871 | <cit.>
Transit center time | Julian Date | T_c | 2457095.68572 | <cit.>
Inclination | degree | i | 86.79 | <cit.>
Eccentricity | – | ε | 0 | <cit.>
Argument of periastron | degree | ϖ | 90 | <cit.>

The use of PCA with the chosen parameters limits us to first-order polynomials: the raw CUTE counts correlate linearly with the CCD temperature, and since PCA produces a set of orthogonal components, one of which is strongly linear in the CUTE data, all other PCs are limited to first-order polynomials as well. We use <cit.> as T(t, Θ) and let only the planet-to-star radius ratio, R_p/R_s, float as a free parameter. The planet's mid-transit time, semi-major axis, eccentricity, longitude of periapsis, and orbital inclination were fixed to the nominal values listed in Table <ref>. The CUTE data do not have enough coverage during ingress and egress to fit for limb-darkening coefficients, so we instead used the package of <cit.> to calculate quadratic-law limb-darkening coefficients from the stellar parameters in Table <ref>. A python package was used to carry out an MCMC fit to each model.[We provide details of this process in Appendix <ref>.] We first ran Equation <ref> without including jitter to identify which PCs were necessary to produce the best fit. To identify a starting point for which PCs should be included in the model, we use the KELT-9b optical transit depth, ∼0.8%, to define the minimum variance a PC must have to necessarily be included in the model. This requires that PCs 0 – 8 are included for Visit 1 and PCs 0 – 9 for Visit 2. We then reran the model including successive PCs until all 12 were included. As is done in <cit.>, <cit.>, and <cit.>, the best-fit model is identified as the one that minimizes the corrected Akaike information criterion for small samples, AICc, defined as AICc = χ^2 + 2k(k+1)/(n-k-1), where k is the number of free parameters and n is the number of data points. Once the best-fit model is found for the NUV broadband light curve, the same functional form is used to fit the spectral light curves. An analysis of the background levels across the spectral axis of the CCD shows variation, indicating that the systematics model will vary as a function of pixel location along the spatial axis of the detector.
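A condensed sketch of the combined transit-plus-systematics model and the AICc bookkeeping is given below. It assumes the batman package for T(t, Θ) (the text cites a transit-model package without naming it in this extraction), uses placeholder quadratic limb-darkening coefficients in place of the values derived from the stellar parameters, and swaps the MCMC exploration for a simple least-squares fit to keep the example short; the PC and jitter arrays stand in for those constructed above.

```python
import numpy as np
import batman
from scipy.optimize import least_squares

# Nominal KELT-9b parameters from the system table (a/R_* computed from a and R_*)
AU_KM, RSUN_KM = 1.496e8, 6.957e5
p = batman.TransitParams()
p.t0, p.per, p.rp = 0.0, 1.48111871, 0.0804          # mid-transit time [d], period [d], Rp/Rs
p.a = 0.03368 * AU_KM / (2.178 * RSUN_KM)            # semi-major axis in stellar radii (~3.3)
p.inc, p.ecc, p.w = 86.79, 0.0, 90.0
p.u, p.limb_dark = [0.35, 0.20], "quadratic"         # placeholder NUV limb-darkening coefficients

t = np.linspace(-0.45, 0.45, 44)                     # exposure mid-times [d], stand-in cadence
transit = batman.TransitModel(p, t)

def model(theta, pcs, jit):
    """Flux model: offset + transit + linear PC terms + linear jitter term."""
    F0, rp = theta[0], theta[1]
    a_pc  = theta[2:2 + pcs.shape[1]]
    b_jit = theta[2 + pcs.shape[1]:]
    p.rp = rp
    return F0 * transit.light_curve(p) + pcs @ a_pc + jit @ b_jit

def aicc(chi2, k, n):
    """Corrected Akaike information criterion used for model selection."""
    return chi2 + 2.0 * k * (k + 1.0) / (n - k - 1.0)

# Stand-in data: PCs/jitter from the PCA step plus white noise around a synthetic transit
rng = np.random.default_rng(1)
pcs, jit = rng.normal(size=(44, 9)), rng.normal(size=(44, 1))
truth = np.concatenate(([1.0, 0.136], 0.002 * rng.normal(size=10)))
flux = model(truth, pcs, jit) + 0.01 * rng.normal(size=44)

theta0 = np.concatenate(([1.0, 0.08], np.zeros(10)))
fit = least_squares(lambda th: flux - model(th, pcs, jit), theta0)
chi2 = np.sum((fit.fun / 0.01) ** 2)
print(f"Rp/Rs = {fit.x[1]:.3f}, AICc = {aicc(chi2, len(theta0), len(flux)):.1f}")
```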
Additionally, scattered light levels across the CCD are not flat (e.g., see Figure 8 in <cit.>). Therefore, we expect the same general trends to appear across CUTE's bandpass, but with different magnitudes that depend on the spatially-inhomogeneous scattered light levels. This is in contrast to the assumption that systematics vary weakly with wavelength, as is commonly adopted in studies of HST data (e.g., <cit.>). § RESULTS & DISCUSSION We show the broadband NUV light curves for Visits 1 and 2 in Figure <ref>; the top panels show the uncorrected counts normalized to the first three out-of-transit orbits, the middle panels show the best-fit broadband NUV light curve atop the systematics-removed CUTE data with the error regions shaded in the respective colors, and the bottom panels show the residuals. For both visits, the best model included the minimum number of PCs; when each additional PC was included in the model, the AICc grew by less than 1 for both visits, indicating that additional PCs do not improve the fit and that the resulting models are functionally equivalent. For both visits, the best-fit model was found when only the x-axis jitter was included. We find this reasonable, as jitter along the x-axis translates into jitter along the shorter telescope axis, i.e., along the spatial direction of the detector (vertically as shown in Figure <ref>). For both visits, Figure <ref> demonstrates the importance of a long out-of-transit baseline for constraining the out-of-transit continuum level in the presence of noisy data. Whereas CUTE was able to observe KELT-9b for approximately 22 hours per transit, HST STIS with the MAMA detectors, the only other operating instrument capable of obtaining NUV transmission spectroscopy, is typically limited to a maximum of 5 orbits per target, or approximately 8 hours, with rare exceptions granted to allow up to 6 orbits <cit.>. While the broadband raw light curves for Visits 1 and 2 vary considerably from each other, the best-fit light curves are consistent with each other. The best-fit values for Visits 1 and 2, respectively, are (R_p/R_s)_V1 = 0.136^{+0.0146}_{-0.0125} with a reduced chi-squared χ^2_ν = 1.0053, and (R_p/R_s)_V2 = 0.111^{+0.0190}_{-0.0162} with χ^2_ν = 0.9987. Compared to the TESS red-optical value (R_p/R_s)_opt = 0.0804 from <cit.>, KELT-9b appears an average of 1.54× larger in the NUV. This means that the NUV broadband transit is probing relatively low pressures in the atmosphere. The corresponding Roche lobe filling factor is larger than that observed for other hot Jupiters such as HD 209458b and HD 189733b, but not as large as for the ultra-hot Jupiter WASP-121b (e.g., compare Figure <ref> in this paper to Figure 8 in <cit.>). In principle, the results for KELT-9b here appear similar to those for WASP-189b, where CUTE observations indicated that the upper atmosphere of the planet is hotter and more extended than expected (<cit.>). The interpretation of the KELT-9b observations, however, is complicated by the uncertainty on the planet mass, which allows for a range of solutions for the atmospheric structure and mass loss (<cit.>). We attempted a joint fit between the two visits by keeping R_p/R_s as a shared parameter and the systematics models separate for each visit. However, the joint fit produced an R_p/R_s equal to the average of the individual fits with χ^2_ν = 1.35. We therefore carry out the remaining analysis on a per-visit basis. To further explore the light curve differences, we produced transmission spectra with 100 Å and 25 Å wide bins, corresponding to R ∼ 28 and 112 respectively, for each visit.
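The Roche-lobe comparison invoked above, and the R_L value used in the next paragraph, follow from the system parameters in Table <ref>. The sketch below assumes the Eggleton (1983) approximation for the Roche-lobe radius, which reproduces the R_L quoted below and is therefore presumably the cited relation; the filling-factor estimate simply scales the broadband NUV radius ratio by the optical one.

```python
import numpy as np

# Roche-lobe radius via the Eggleton (1983) approximation and the NUV filling factor.
M_J, M_SUN, R_J, AU_KM = 1.898e27, 1.989e30, 71492.0, 1.496e8   # kg, kg, km, km

q = (2.44 * M_J) / (1.978 * M_SUN)              # planet-to-star mass ratio
a_km = 0.03368 * AU_KM                          # orbital separation
RL_km = a_km * 0.49 * q**(2/3) / (0.6 * q**(2/3) + np.log(1 + q**(1/3)))
Rp_km = 1.783 * R_J                             # optical planetary radius

print(f"R_L = {RL_km / Rp_km:.3f} Rp")                                  # ~2.02
print(f"broadband NUV filling factor ~ {0.136 / 0.0804 / (RL_km / Rp_km):.2f}")
```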
Using the approximation from <cit.> and the values in Table <ref>, we calculate the Roche lobe radius to be R_L = 2.017 and include it as a visual reference (a cross-check of this value is sketched after the tables below). For each bin in Figure <ref>, we used the same fitting procedure as in Section <ref>. These transmission spectra are shown in Figure <ref> and tabulated in Tables <ref> and <ref>, respectively.

KELT-9b Transmission Spectrum (planet-to-star radius ratio), 100 Å bin widths
Central λ (Å) | Visit 1 | Visit 2
2533 | 0.174^{+0.0294}_{-0.0246} | 0.152^{+0.0378}_{-0.0300}
2633 | 0.122^{+0.0461}_{-0.0366} | 0.107^{+0.0427}_{-0.0361}
2733 | 0.141^{+0.0521}_{-0.0399} | 0.114^{+0.0433}_{-0.0362}
2833 | 0.094^{+0.0394}_{-0.0363} | 0.130^{+0.0435}_{-0.0346}
2933 | 0.121^{+0.0493}_{-0.0410} | 0.079^{+0.0305}_{-0.0301}
3033 | 0.167^{+0.0218}_{-0.0185} | 0.078^{+0.0299}_{-0.0306}
3133 | 0.146^{+0.0333}_{-0.0247} | 0.107^{+0.0311}_{-0.0249}
3233 | 0.077^{+0.0299}_{-0.0311} | 0.111^{+0.0314}_{-0.0240}

KELT-9b Transmission Spectrum (planet-to-star radius ratio), 25 Å bin widths
Central λ (Å) | Visit 1 | Visit 2
2495 | 0.136^{+0.0666}_{-0.0578} | 0.078^{+0.0335}_{-0.0440}
2520 | 0.211^{+0.0562}_{-0.0429} | 0.225^{+0.0556}_{-0.0458}
2545 | 0.149^{+0.0732}_{-0.0702} | 0.142^{+0.0650}_{-0.0584}
2570 | 0.153^{+0.0750}_{-0.0660} | 0.155^{+0.0576}_{-0.0475}
2595 | 0.134^{+0.0616}_{-0.0548} | 0.112^{+0.0545}_{-0.0624}
2620 | 0.084^{+0.0383}_{-0.0575} | 0.136^{+0.0615}_{-0.0583}
2645 | 0.194^{+0.0775}_{-0.0572} | 0.104^{+0.0488}_{-0.0572}
2670 | 0.102^{+0.0463}_{-0.0506} | 0.110^{+0.0477}_{-0.0414}
2695 | 0.121^{+0.0588}_{-0.0632} | 0.100^{+0.0454}_{-0.0475}
2720 | 0.116^{+0.0571}_{-0.0611} | 0.126^{+0.0554}_{-0.0504}
2745 | 0.143^{+0.0660}_{-0.0545} | 0.109^{+0.0528}_{-0.0568}
2770 | 0.197^{+0.0685}_{-0.0512} | 0.147^{+0.0644}_{-0.0525}
2795 | 0.091^{+0.0426}_{-0.0569} | 0.128^{+0.0671}_{-0.0638}
2820 | 0.192^{+0.0553}_{-0.0411} | 0.144^{+0.0618}_{-0.0502}
2845 | 0.093^{+0.0442}_{-0.0508} | 0.156^{+0.0675}_{-0.0591}
2870 | 0.081^{+0.0353}_{-0.0491} | 0.114^{+0.0546}_{-0.0569}
2895 | 0.091^{+0.0405}_{-0.0505} | 0.062^{+0.0231}_{-0.0355}
2920 | 0.215^{+0.0467}_{-0.0376} | 0.107^{+0.0456}_{-0.0434}
2945 | 0.123^{+0.0550}_{-0.0466} | 0.104^{+0.0434}_{-0.0398}
2970 | 0.095^{+0.0445}_{-0.0599} | 0.110^{+0.0488}_{-0.0459}
2995 | 0.192^{+0.0441}_{-0.0354} | 0.137^{+0.0503}_{-0.0390}
3020 | 0.150^{+0.0501}_{-0.0372} | 0.065^{+0.0244}_{-0.0364}
3045 | 0.141^{+0.0566}_{-0.0446} | 0.087^{+0.0360}_{-0.0346}
3070 | 0.151^{+0.0516}_{-0.0399} | 0.091^{+0.0380}_{-0.0371}
3095 | 0.230^{+0.0409}_{-0.0328} | 0.153^{+0.0593}_{-0.0461}
3120 | 0.153^{+0.0600}_{-0.0455} | 0.120^{+0.0496}_{-0.0398}
3145 | 0.080^{+0.0350}_{-0.0471} | 0.125^{+0.0470}_{-0.0368}
3170 | 0.101^{+0.0449}_{-0.0450} | 0.074^{+0.0304}_{-0.0369}
3195 | 0.081^{+0.0356}_{-0.0483} | 0.147^{+0.0470}_{-0.0364}
3220 | 0.073^{+0.0294}_{-0.0415} | 0.067^{+0.0265}_{-0.0403}
3245 | 0.206^{+0.0422}_{-0.0337} | 0.125^{+0.0461}_{-0.0366}
3270 | 0.083^{+0.0370}_{-0.0520} | 0.142^{+0.0483}_{-0.0383}
3295 | 0.094^{+0.0427}_{-0.0485} | 0.090^{+0.0396}_{-0.0473}

In the 100 Å transmission spectrum, the two visits are generally consistent with each other except at the central wavelength of 3033 Å, where Visit 1 exhibits a transit depth about 2.5σ greater than Visit 2 and ≥3σ larger than the optical light measurement from <cit.>. The light curves for that bin are shown in Figure <ref>. A transmission spectrum with smaller bins would aid in assessing what might be responsible for the difference between the two visits at 3033 Å and in searching for signs of strongly absorbing ions. Additional transit observations would be necessary to reach the signal-to-noise ratios required for finer line detection. The 25 Å spectrum, shown in the bottom half of Figure <ref>, indicates that Visits 1 and 2 are generally consistent with each other within their 1σ uncertainties. The shape of the 25 Å transmission spectrum is generally noisier for Visit 1 than it is for Visit 2, which may be due to unidentified or remaining systematics. There are a few hints of extended absorption in the CUTE NUV transmission spectrum. The 25 Å bin centered at 2520 Å shows that both visits are >1σ above the broadband visible value, with Visit 1 being 1.3σ greater and Visit 2 being 2.05σ greater. Three additional 25 Å bins in Visit 1 also lie >1σ above the broadband NUV value, located at 2920 Å, 3095 Å, and 3245 Å.
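The approximation used for R_L above is not spelled out in the text; as a cross-check (our sketch, not the authors' calculation), the widely used Eggleton (1983) volume-equivalent Roche-lobe formula with the Table <ref> values reproduces the quoted radius:

```python
import numpy as np

# Nominal constants in SI units.
M_SUN_KG = 1.989e30
M_JUP_KG = 1.898e27
R_JUP_M = 7.1492e7
AU_M = 1.495978707e11

def roche_lobe_radius(q, a):
    """Eggleton (1983) volume-equivalent Roche-lobe radius for mass ratio q = M_p / M_*."""
    q13 = q ** (1.0 / 3.0)
    return a * 0.49 * q13**2 / (0.6 * q13**2 + np.log(1.0 + q13))

# Table values: M_p = 2.44 M_J, M_* = 1.978 M_sun, a = 0.03368 AU, R_p = 1.783 R_J.
q = (2.44 * M_JUP_KG) / (1.978 * M_SUN_KG)
R_L = roche_lobe_radius(q, 0.03368 * AU_M)
print(R_L / (1.783 * R_JUP_M))   # ~2.02 planetary radii, close to the quoted R_L = 2.017
```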
The bin at 3095 Å lies >1σ above the planet's Roche lobe, directly suggesting an escaping atmosphere. However, without small wavelength bins, it is challenging to discern which atomic species might be responsible for the increased absorption. While there are no 25 Å bins that lie > 1σ above each visit's respective broadband NUV , there is one bin that lie > 1σ above the visible light . Notably, there are a few bins in Visit 1's 25 Å transmission spectrum that lie 1σ above the planet's Roche lobe. These bins are centered at 2520 Å, 2920 Å, and 3095 Å. However, due to the low sensitivity, more transit observations are required to confirm or refine these detections. Without small wavelength bins, it is challenging to discern what atomic absorbers might be responsible for the higher absorption. As discussed in several papers, the NUV is filled with neutral and ionized atoms that can induce extended transit depths, and KELT-9b has had a wealth of atomic detections made in optical wavelengths, motivating a search for them in the NUV where atomic transitions tend to be stronger than in the optical. For example, Fe, which has several lines present in CUTE's bandpass (e.g. ), has been detected several times with ground-based spectrographs: <cit.> detected 9 individual lines of Fe2 with the PEPSI spectrograph; both Fe1 and Fe2 were observed in <cit.> with TRES; and Fe2 definitively detected in <cit.> with FOCES and in <cit.> with FIES. We note that the spectrum of WASP-121b that shows the clearest NUV signatures of escaping metals to date () can be explained by Fe2 lines that dominate the spectrum and the strong Mg2 h&k lines (see Figure 23 in for a model fit). While the absorption in the CUTE spectrum at 2520 Å coincides with Fe2 lines, absorption due to Fe2 at 2600 Å, which was seen in WASP-121b <cit.>, is not present in the CUTE spectrum. At the same time, neither the model or observations of Fe2 in WASP-121b show strong features at 2920 Å or 3095 Å. The Mg2 resonance lines at 2800 Å are expected to be present in KELT-9b as they have been observed in the ultra-hot Jupiters WASP-12b <cit.>, WASP-189b <cit.> and WASP-121b <cit.>. Interestingly, there is no suggestion in the 100 Å or 25 Å transmission spectra of significantly extended absorption around 2800 Å. It is possible that residual systematics are masking the transit signal, however. In any case, the available data are unable to produce meaningfully smaller resolution transmission spectra. We produced a 25 Å per bin transmission spectrum for Visit 2, shown in Figure <ref> and tabulated in Table <ref>. A predicted NUV transmission spectrum from <cit.> binned to 25 Å is shown in purple. The transmission spectrum comes from a model atmosphere with lower pressures P ≳ 10^-4 bar being modeled with the HELIOS radiative transfer code, and pressures with P ≲ 10^-4 bar and up to P = 10^-10 bar modeled with Cloudy. Cloudy assumes hydrostatic equilibrium which was regarded as a safe assumption to make as atmospheric escape becomes significant only when an outflow velocity becomes a significant fraction of the sound speed. reported KELT-9b's sonic point to be near P = 10^-11 - 10^-12 bar, therefore it is expected that this model underpredicts the extended atmosphere observed by CUTE. 
The CUTE data for Visit 2 exhibit a combination of varying signal and systematics across the bandpass, such that some 25 Å bins clearly display a transit even in the raw data, while other bins either have too little signal or systematics that are too strong to be modeled by our fitting procedure. Those which were unsuccessfully fit are consistent with the = 0.078 from <cit.>. The bins which were successfully fit with a light curve model generally seem to lie above the <cit.> prediction. The transmission spectrum shows several features above the Roche lobe, though no individual bins are detected with >3σ confidence. To date, atmospheric escape has never been directly observed on KELT-9b, though extended absorption in the optical and subsequent modeling have indicated that the planet is undergoing atmospheric escape (e.g. <cit.>). While some points in the CUTE data lie above the Roche lobe boundary, we remain skeptical of the true magnitude of these features because there may be uncaptured systematics in the CUTE data. There is not enough spectral resolution to isolate and fit specific atomic absorption features, but we can make inferences as to what may be responsible for the extended absorption based on previous studies. To do this, we obtained line lists from the NIST Atomic Spectral Database (<cit.>) for all elements with n < 78, up to triply ionized. As the middle and upper atmospheres are predicted to reach temperatures of order 10,000 K, atoms and ions may be collisionally excited to 0.83 eV before absorbing stellar light. We therefore set the maximum lower-level energy to 0.83 eV to narrow the list of potential absorbers. The resulting line lists are given in the appendices. § CONCLUSION Herein we presented NUV transmission spectroscopy of the ultra-hot Jupiter KELT-9b obtained with the CUTE CubeSat. Two consecutive transit observations of KELT-9b, made on September 28th and 29th, 2022, show differing raw light curves but consistent best-fit NUV radii. In order to maximize the number of spacecraft parameters used to detrend the data, we used principal component analysis to transform a set of correlated spacecraft parameters into a set of orthogonal parameters. From that, the two NUV broadband light curves produced consistent transit depths, with Visit 1 having = 0.136^{+0.0146}_{-0.0125} and Visit 2 having = 0.111^{+0.0190}_{-0.0162}. We further produced two transmission spectra, with 100 Å and 25 Å wide bins respectively. Within the error bars, the transmission spectra are consistent between the two visits. However, the data do not have enough sensitivity to spectroscopically isolate specific atomic absorbers. The 25 Å transmission spectrum contains three bins with absorption above the Roche lobe boundary at greater than 1σ significance. However, Visit 2 does not exhibit the same signal, and in general the Visit 1 and Visit 2 25 Å transmission spectra have different shapes, perhaps suggesting that there are uncharacterized systematics remaining in the spectral light curves. It is part of our future work to explore these hints of atmospheric escape. Despite the data quality, the broadband NUV transit depth from the low-resolution CUTE spectra shows promise for the direct detection of an escaping atmosphere on KELT-9b. We are continuing to assess additional methods for removing the systematics present in the data and resolving the disagreement between Visits 1 and 2 in the higher resolution transmission spectra. Additional higher resolution spectra obtained with e.g.
HST STIS will likely provide rich insight into KELT-9b's upper atmosphere. The shapes of the Visit 1 and Visit 2 25 Å and 100 Å transmission spectra differ, likely due to systematics remaining after data reduction. As Visit 1 is more strongly impacted by systematics, we used only Visit 2 to produce a transmission spectrum with 25 Å bins. That transmission spectrum shows several features with extended absorption, including four spectral bins that extend to or beyond the planet's Roche lobe radius of 1.9 R_p,opt, though not with 3σ confidence. As we are currently unable to produce a higher resolution transmission spectrum, we compared potential absorbers to the extant literature. Several of the deeper transit features contain atomic features corresponding to detected ions, such as Mg I and Na I. The bin at 2795 Å contains Mg II, which has not yet been detected in KELT-9b's atmosphere. Lines of neutral and ionized Fe, Ti, and V are present throughout CUTE's bandpass and may be contributing to multiple absorption features. The deepest feature, at 2950 Å, has = 0.188 ± 0.023, corresponding to 2.42 R_p,opt and a transit depth of 3.56 ± 0.053. § ACKNOWLEDGMENTS Much of this work was funded by NASA grants NNX17AI84G and 80NSSC21K166 (PI: K. France). AAV acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 817540, ASTROFLOW). A.G.S. was supported by the Schrödinger Fellowship through the Austrian Science Fund (FWF) [J 4596-N]. We are additionally grateful for the thoughtful and thorough feedback provided by the reviewer. CUTE <cit.>, <cit.>, <cit.>, <cit.> § USING LMFIT enables a modular approach to model creation and curve fitting. The class turns a given function into a model to be fit with, and several classes with different independent variables and parameters can be added together, producing a class. The module has several classes built in, including a polynomial model, , up to the 7th degree. In a , a class can only have a single independent variable even if instantiated multiple times, i.e. the user cannot instantiate + as they have two different independent variables. To enable the sum of polynomials in Eq. <ref>, we utilized the open source nature of and added several more Polynomial models to account for all PCs and jitter terms (i.e. , , , etc.).
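A minimal sketch of this composite-model idea (ours, not the authors' modified code): each detrending vector gets its own model function with a distinct independent-variable name so the terms can be summed. The names below are placeholders, and we assume composite models accept components with different independent variables, which is the behaviour the added Polynomial classes provide.

```python
from lmfit import Model
from lmfit.models import ConstantModel

# One first-order polynomial per detrending vector, each with its own
# independent variable so the terms can be added into a single model.
def lin_pc1(pc1, a1=0.0):
    return a1 * pc1

def lin_pc2(pc2, a2=0.0):
    return a2 * pc2

def lin_jx(jx, b1=0.0):
    return b1 * jx

model = (ConstantModel()      # stellar flux offset F_0
         + Model(lin_pc1)     # first-order term in PC 1
         + Model(lin_pc2)     # first-order term in PC 2 (extend to all PCs used)
         + Model(lin_jx))     # first-order term in the x-axis jitter

params = model.make_params(c=1.0, a1=0.0, a2=0.0, b1=0.0)
# result = model.fit(flux, params, pc1=pc1, pc2=pc2, jx=jx)
# In the full fit, a transit-model term T(t, Theta) is added in the same way.
```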
http://arxiv.org/abs/2407.13111v1
20240718023931
PG-Attack: A Precision-Guided Adversarial Attack Framework Against Vision Foundation Models for Autonomous Driving
[ "Jiyuan Fu", "Zhaoyu Chen", "Kaixun Jiang", "Haijing Guo", "Shuyong Gao", "Wenqiang Zhang" ]
cs.MM
[ "cs.MM", "cs.CV" ]
[1]indicates equal contributions. [2]indicates corresponding author. [3]<https://challenge.aisafety.org.cn/#/competitionDetail?id=13> § ABSTRACT Vision foundation models are increasingly employed in autonomous driving systems due to their advanced capabilities. However, these models are susceptible to adversarial attacks, posing significant risks to the reliability and safety of autonomous vehicles. Adversaries can exploit these vulnerabilities to manipulate the vehicle's perception of its surroundings, leading to erroneous decisions and potentially catastrophic consequences.
To address this challenge, we propose a novel Precision-Guided Adversarial Attack (PG-Attack) framework that combines two techniques: Precision Mask Perturbation Attack (PMP-Attack) and Deceptive Text Patch Attack (DTP-Attack). PMP-Attack precisely targets the attack region to minimize the overall perturbation while maximizing its impact on the target object's representation in the model's feature space. DTP-Attack introduces deceptive text patches that disrupt the model's understanding of the scene, further enhancing the attack's effectiveness. Our experiments demonstrate that PG-Attack successfully deceives a variety of advanced multi-modal large models, including GPT-4V, Qwen-VL, and imp-V1. Additionally, we won first place in the CVPR 2024 Workshop Challenge: Black-box Adversarial Attacks on Vision Foundation Models[3], and code is available at <https://github.com/fuhaha824/PG-Attack>. § INTRODUCTION With the continuous advancement of artificial intelligence technology, vision foundation models have been widely applied in various fields, especially in autonomous driving systems <cit.>. These advanced models possess powerful perception, decision-making, and control capabilities, which can greatly improve the performance and safety of self-driving cars <cit.>. In complex road environments, they can process massive amounts of data from multiple sensors in real time, accurately identify surrounding objects, vehicles, and pedestrians, and make appropriate driving decisions <cit.>. However, despite the impressive performance of vision foundation models, they face a significant challenge: the threat of adversarial attacks <cit.>. Malicious adversaries may exploit vulnerabilities in these models by carefully designing adversarial examples and manipulating the models' perception and understanding of the surrounding environment. Therefore, improving the robustness and safety of vision foundation models in autonomous driving scenarios and defending against the risks of adversarial attacks has become crucial. Developing effective attack methods can help us better understand the threat patterns of adversarial attacks and develop targeted defense measures. However, creating effective adversarial examples for these vision foundation models faces numerous challenges. For instance, vision foundation models consume significant memory, making them difficult to use directly for inferring adversarial attacks, which poses major challenges for deploying such models for attack inference. Additionally, when attacking models like GPT-4V <cit.>, it is crucial to consider the content's integrity; if perturbations cause images to be misclassified as violent or other negative content, OpenAI's policy will prevent their evaluation. To address this challenge, we propose a novel Precision-Guided Adversarial Attack Framework (PG-Attack) that seamlessly integrates two innovative approaches: Precision Mask Perturbation Attack (PMP-Attack) and Deceptive Text Patch Attack (DTP-Attack). PMP-Attack leverages a masked patching approach to pinpoint the attack region, maximizing the representation discrepancy of the target object in the model's feature space while minimizing overall perturbation. DTP-Attack, on the other hand, introduces deceptive text patches to disrupt the model's scene understanding, further augmenting the attack's efficacy. This integrated attack methodology effectively enhances the attack success rate across a wide spectrum of tasks and conditions.
By strategically integrating PMP-Attack and DTP-Attack, our approach aims to maximize the attack success rate while maintaining high SSIM scores, effectively addressing the competition's requirements and constraints. The main contributions of this paper can be summarized as follows: * By integrating masked patches with adversarial attacks, we propose PMP-Attack, a novel attack method that enables precise localization of attack regions while balancing attack effectiveness with structural similarity between pre- and post-attack images. * We innovatively introduce the Deceptive Text Patch Attack (DTP-Attack). DTP-Attack synergistically complements PMP-Attack, disrupting the model's scene understanding and further enhancing the attack's efficacy. * Our experiments demonstrate that PG-Attack successfully deceives a variety of advanced multi-modal large models, including GPT-4V <cit.>, Qwen-VL <cit.>, and imp-V1 <cit.>. Additionally, we won first place in the CVPR 2024 Workshop Challenge: Black-box Adversarial Attacks on Vision Foundation Models, fully demonstrating the effectiveness and impact of this method. § RELATED WORK §.§ Vision Foundation Models Motivated by the success of large language models <cit.>, the field of computer vision has similarly embraced equally powerful models. Qwen-VL's <cit.> visual encoder uses the Vision Transformer (ViT) <cit.> architecture with pre-trained weights from Openclip's <cit.> ViT-bigG. It resizes input images to a specific resolution, splits them into patches with a stride of 14, and generates a set of image features. Imp's <cit.> visual module employs SigLIP-SO400M/14@384 <cit.> as its pretrained visual encoder, enabling it to obtain fine-grained visual representations through large-scale image-text contrastive learning. Additionally, GPT-4V <cit.> offers a more profound understanding and analysis of user-provided image inputs, highlighting the significant advancements in multimodal capabilities within computer vision. §.§ Adversarial Attack Adversarial attacks are classified into white-box attacks <cit.> and black-box attacks <cit.> based on the attacker's knowledge of the target model. In white-box attacks, the attacker has full access to the target model's details, such as its network architecture and gradients. In black-box attacks, the attacker does not have access to the internal information of the target model. Adversarial transferability describes how effectively an attack developed on a source model performs when applied to a different target model. In computer vision, common adversarial attack methods include FGSM <cit.>, I-FGSM <cit.>, PGD <cit.>, etc. In natural language processing, attacks such as TextFooler <cit.>, BAE <cit.>, and BERT-Attack <cit.> manipulate the text by adding, altering, or deleting specific components to achieve the desired attack performance. For attacks on multimodal large models, Zhang et al. <cit.> combine visual and textual bimodal information and propose the first white-box attack, Co-attack, by utilizing the synergistic effect between images and text in VLP models. Then, SGA <cit.> first explored black-box attacks, using data augmentation to generate multiple groups of images, matching them with multiple text descriptions, and comprehensively utilizing cross-modal guidance information to improve the transferability of adversarial examples to black-box models.
CMI-Attack  <cit.> enhances modality interaction by using Embedding Guidance and Interaction Enhancement modules, significantly boosting the attack success rate of transferring adversarial examples to other models. Based on this, we adopt CMI-Attack as the baseline method for our Precision Mask Perturbation Attack. Our approach further refines this by using mask patches to precisely locate attack regions and removing the text attack component, thereby focusing on enhancing the efficacy and subtlety of the visual perturbations. § METHODOLOGY §.§ Problem Formulation Crafting effective adversarial examples that can disrupt a model's performance across multiple tasks—color judgment, image classification, and object counting—is extremely challenging. The key difficulty lies in optimizing perturbations that can subtly alter the model's perception for each individual task, while maintaining high cross-task transferability and image similarity under diverse conditions. Specifically, the adversarial examples must induce misclassification, color confusion, and counting errors simultaneously, without compromising spatial consistency or raising human suspicion. Optimizing for such diverse goals risks getting trapped in local optima, making the design of highly transferable and robust adversarial attacks an intricate endeavor. Furthermore, directly employing multimodal large models to infer adversarial attacks poses a significant challenge due to their immense memory footprint, rendering the direct utilization of such models for attack inference arduous. These challenges require careful planning, experimentation, and a deep understanding of both the target models and the nature of adversarial perturbations. With limited submission opportunities and a need for high naturalness in the adversarial examples, efficient use of resources and iterative refinement are crucial for success in the competition. To address the aforementioned challenges, we have adopted the following measures: * Strategic Problem Transformation: We first view the entire task as a black-box transfer attack problem in the visual question answering (VQA) domain, which can then be transformed into an adversarial attack problem on vision-and-language models that have been widely used to solve VQA tasks. Specifically, we aim to generate input-based adversarial examples that cause the model under evaluation to fail to accurately answer the three types of task questions mentioned above. * Optimized Transferability and Effectiveness: Visual-Language Pre-training (VLP) models such as CLIP <cit.> and TCL <cit.>, which leverage large-scale multimodal pre-training, offer several advantages for generating adversarial examples. Compared to multimodal large models, VLP models require significantly less memory, achieve faster inference speeds, and adversarial examples generated from them exhibit strong transferability. For these reasons, we leverage a VLP model as the source model for generating adversarial examples. §.§ Framework Our proposed method consists of three phases, as illustrated in Figure <ref>. Phase I is the modality expansion phase, where we input the initial dataset into the YOLOv8 model to compute the binary images with the key objects masked. Similarly, to obtain the textual modality of the dataset, we input the dataset into the BLIP model <cit.> and generate image captions through its Image Captioning task. 
Phase II represents the first attack stage of our method, employing the image attack component from the CMI-Attack framework and further enhancing its effectiveness through data augmentation. Notably, considering the challenge's specific SSIM value requirements, we confine the attack range to the target region to achieve optimal performance. We refer to this process as the Precision Mask Perturbation Attack. Phase III constitutes the second attack stage of our method, where we incorporate disruptive text information into the images obtained from the previous stage in a bullet-chat-like manner to further enhance the attack's effectiveness against the VQA task of the black-box model. The disruptive text is designed based on the content of the specific VQA task being attacked, aiming to mislead the model's understanding. We refer to this attack process as the Deceptive Text Patch Attack. The whole description of the PG-Attack is shown in Algorithm <ref>. §.§ Precision Mask Perturbation Attack This involves combining the CMI-Attack with mask patch method. The CMI-Attack <cit.> enhances the overall effectiveness and robustness of the attack by ensuring the perturbations are subtle yet impactful. The mask patch method, on the other hand, targets specific areas of the image to improve the attack's precision and focus. The original CMI-Attack framework incorporates text-based attacks; however, since the competition does not involve text attacks, we have modified the optimization objective of CMI-Attack. The overall optimization goal of our framework is to maximize the semantic distance between the adversarial image Img_Adv generated by the source model in the feature space of the image encoder E_I and the caption in the feature space of the text encoder E_T. This is formally represented by Equation <ref>: max_Img_Adv, Caption𝒟 (E_I (Img_Adv),E_T(Caption)). It is noteworthy that the competition's evaluation metrics incorporate assessments of luminance, contrast, and structure. Therefore, while maintaining the effectiveness of the attack on the target region, minimizing the impact of the attack on other areas will lead to a relatively higher overall SSIM value. To address this, we innovatively employ a mask image to constrain the attack range during each iteration of image perturbation. This constitutes a novel aspect of our approach. The process is formally described by Equation <ref>: X_t=X_t-1· M+(X_t-1+δ)·(1-M), where X_i denotes the image at the i-th attack iteration, M represents the 0-1 matrix obtained from the mask image, and δ denotes the perturbation calculated in the current step to be added. §.§ Deceptive Text Patch Attack DTP-Attack further deceives models by adding a text patch attack to the image. This stage leverages textual elements to further deceive the models, exploiting any weaknesses in handling mixed content (visual and textual). The main algorithmic formula for the DTP-Attack is represented as follows: Img^adv_DTP← Img + RenderText(D,D_Color,D_Size), where Img^adv_DTP represents the adversarial image after applying the DTP-Attack. RenderText(D,D_Color,D_Size) is the function responsible for rendering the text patch onto the image. D represents the textual content, D_Color signifies the color of the text, and D_Size denotes the size of the text. The incorporation of textual elements into the adversarial attack expands the attack surface and increases the complexity of the deception, making it more challenging for the model to discern between genuine and manipulated content. 
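A minimal PyTorch-style sketch of one PMP-Attack iteration (ours, not the released implementation): the distance 𝒟 is taken here as one minus cosine similarity, image_encoder and text_encoder stand in for the surrogate VLP encoders, and the ε budget is an assumed bound not specified in this section.

```python
import torch
import torch.nn.functional as F

def feature_distance(image_encoder, text_encoder, x_adv, caption_tokens):
    """D(E_I(x_adv), E_T(caption)); 1 - cosine similarity is used as a stand-in for D."""
    img_feat = image_encoder(x_adv)
    txt_feat = text_encoder(caption_tokens)
    return (1.0 - F.cosine_similarity(img_feat, txt_feat, dim=-1)).mean()

def pmp_step(x_adv, x_orig, mask, image_encoder, text_encoder, caption_tokens,
             alpha=2 / 255, eps=16 / 255):
    """One iteration of X_t = X_{t-1} * M + (X_{t-1} + delta) * (1 - M).

    mask (M) is 1 outside the target region (left untouched) and 0 inside it,
    so the perturbation is confined to the masked object; eps is an assumed bound.
    """
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = feature_distance(image_encoder, text_encoder, x_adv, caption_tokens)
    loss.backward()
    delta = alpha * x_adv.grad.sign()            # ascend the feature distance
    x_new = x_adv.detach() * mask + (x_adv.detach() + delta) * (1 - mask)
    x_new = torch.max(torch.min(x_new, x_orig + eps), x_orig - eps)
    return torch.clamp(x_new, 0.0, 1.0)

# Phase III (DTP-Attack) then overlays deceptive text patches on the saved image,
# e.g. with PIL.ImageDraw.Draw(img).text((x, y), text, fill=color).
```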
§ EXPERIMENTS §.§ Dataset The dataset is provided by the CVPR 2024 Workshop Challenge and generated using the CARLA simulator. The dataset for Phase I of the competition encompasses 461 images, encompassing key objects such as cars, pedestrians, motorcycles, traffic lights, and road signals. Notably, the cars exhibit a diverse array of colors, including but not limited to red, black, white, alternating green and white, alternating purple and white, alternating black and white, and others. Interestingly, the traffic lights display a reddish-orange hue instead of the typical red, along with yellow and green colors. For Phase II, the dataset consists of 100 images featuring similar key objects to Phase I. §.§ Evaluation Metrics The final score, which serves as the overall evaluation metric for the adversarial attack algorithms, is calculated as a weighted average of two components: the Attack Success Rate (ASR) and the Structural Similarity Index (SSIM). Specifically, for a set of n images, the final score is computed as Equation <ref>: 1/n∑_i=1^nASR_i[α+(1-α)·SSIM(x_i,x_adv)], where ASR_i is the Attack Success Rate for the ith image, SSIM(x_i,x_adv) quantifies the structural similarity between the original image x_i and adversarially perturbed image x_adv, and α (set to 0.5) determines the relative weighting between ASR and SSIM. A higher final score indicates better performance, as it signifies both a high success rate in misleading the target models and a high degree of visual similarity preservation compared to the original images. §.§ Implementation Details Reproduction Process. The reproduction of the attack process requires strictly following the procedures outlined in Figure 1. First, the modality expansion phase is conducted to obtain the captions and target mask images. Subsequently, the captions, original images, and target mask images are utilized in the CMI-Attack framework to generate the adversarial images from the first attack stage. Finally, in the last phase, disruptive text is added to the images, further enhancing the attack capability against the VQA task. Hyperparameter Settings. Regarding the hyperparameter settings, we first followed the image augmentation scheme proposed in SGA <cit.>. Additionally, we further enhanced the CMI-Attack attack settings by applying scaling factors of [1, 1/2, 1/4, 1/8, 1/16] to the images. We also augmented text by replicating each text three times and feeding it into the CMI-Attack attack setting. The attack step was set to 2/255 and the number of attack steps was set to 60. Environment Configuration. Our proposed method is implemented using PyTorch, and the experiments are conducted on an NVIDIA GeForce RTX 3090 GPU. §.§ Ablation Study In this section, we conduct ablation experiments to analyze various parameters of our approach. These parameters include the perturbation ϵ range of the CMI-Attack<cit.> on the mask part, the color of the disruptive text, the quantity of disruptive text, and the font of the disruptive text. The ablation study shows that increasing the perturbation range on the mask part significantly boosts the attack success rate, indicating that larger perturbations are more effective in deceiving the model, as shown in Figure <ref>. Additionally, the text color plays a crucial role in the attack's effectiveness, with black and contrasting color schemes, such as a white background with a black frame, resulting in higher success rates, as demonstrated in Figure <ref>. 
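The ablation variants here are ranked with the challenge score of Equation <ref>, which weights per-image attack success by structural similarity; a minimal sketch of that metric (our illustration) is:

```python
import numpy as np

def final_score(asr, ssim, alpha=0.5):
    """Mean over images of ASR_i * (alpha + (1 - alpha) * SSIM(x_i, x_adv))."""
    asr = np.asarray(asr, dtype=float)    # per-image attack success (0/1 or a rate)
    ssim = np.asarray(ssim, dtype=float)  # SSIM between original and adversarial images
    return np.mean(asr * (alpha + (1.0 - alpha) * ssim))
```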
The effectiveness of disruptive text quantity varies, with six text elements achieving the highest attack success rate, followed by seven and five, suggesting an optimal quantity for maximum disruption, as illustrated in Figure <ref>. Finally, the choice of font does impact the attack success rate, with Times New Roman outperforming Calibri and Arial in misleading the model, as shown in Figure <ref>. Through these ablation experiments, we identify key factors that influence the success rate of our proposed attack, providing insights for further optimization. § CONCLUSION This study highlights the vulnerabilities of vision foundation models in autonomous driving systems by demonstrating the effectiveness of our Precision-Guided Adversarial Attack Framework (PG-Attack). Extensive experimentation showed that adversarial attacks could significantly compromise advanced multi-modal models, including GPT-4V, Qwen-VL, and imp-V1. Our approach achieved first place in the CVPR 2024 Workshop Challenge: Black-box Adversarial Attacks on Vision Foundation Models, setting a new benchmark for attack efficacy and robustness. These findings underscore the critical need for more robust defenses and security measures to protect vision foundation models against adversarial threats. Broader impacts. Our work indicates that downstream tasks of vision foundation models are currently exposed to security risks. PG-Attack aids researchers in understanding vision foundation models from the perspective of adversarial attacks, thereby facilitating the design of more reliable, robust, and secure vision foundation models. By exposing these vulnerabilities, we hope to encourage the development of enhanced security measures and defenses, ultimately contributing to the safer deployment of autonomous driving technologies and other critical applications reliant on vision foundation models.
http://arxiv.org/abs/2407.13051v1
20240717233933
New Characterizations of First Order Sobolev Spaces
[ "Przemysław Górka", "Kacper Kurowski" ]
math.FA
[ "math.FA" ]
§ ABSTRACT We provide new characterizations of Sobolev spaces that are true under some mild conditions. We study modified first order Sobolev spaces on metric measure spaces: -Newtonian space, -Newtonian space, and Gigli-like space. We prove that if the measure is Borel regular and σ-finite, then the modified -Newtonian space is equivalent to the Hajłasz–Sobolev space. Moreover, if additionally the measure is doubling then all modified spaces are equivalent to the Hajłasz–Sobolev space. [2020]Primary 46E36, 30L99, 46E35; Secondary 43A85, 42B35. § INTRODUCTION In recent decades, several definitions of first order Sobolev spaces on metric measure spaces have been proposed. Hajłasz <cit.> defined the so-called Hajłasz–Sobolev space M^1,p(X) on a metric measure space (X, , ) as the space of those f ∈ L^p() for which there exists a nonnegative function g ∈ L^p() such that the inequality f(x) - f(y) ≤ g(x) + g(y) x, y holds for almost every x, y ∈ X. Each function g that satisfies the above inequality is called a Hajłasz gradient of f. Another proposal for a first order Sobolev space on a metric measure space is the Newtonian space N^1,p(X) introduced by Shanmugalingam <cit.>. It is the space of all functions f ∈ L^p() for which there exists a nonnegative Borel function g ∈ L^p() such that the inequality f(γ(a)) - f(γ(b)) ≤∫_γ g holds for p-modulus almost every rectifiable γ [a,b] → X. Yet another definition of a first order Sobolev space on a metric measure space has been proposed by Gigli <cit.>. He defined this space as the space of all functions f ∈ L^2() for which there exists a nonnegative function g ∈ L^2() such that ∫_C[0,1];X f(γ(0)) - f(γ(1)) dμ(γ) ≤∫_C[0,1];X∫_0^1 g(γ(t)) γ̇(t) dt dμ(γ) for all test plans μ, where γ̇ is the metric speed of γ. Other definitions of first order Sobolev spaces on metric measure spaces than the ones listed above have been introduced (see Cheeger <cit.> for instance). Nevertheless, in this paper we will focus only on those three spaces. The advantage of M^1,p spaces is that, unlike most other approaches, the theory is rich without assuming the measure is doubling or the space is connected <cit.>. It is well known that if the measure on the metric space is doubling and supports some Poincaré inequality then the Newtonian space N^1,p and the Hajłasz–Sobolev space M^1,p are equivalent <cit.>. Let us make the following observations. The integral along the curve ∫_γ g is usually defined as a Lebesgue integral. However, we can equivalently treat it as a Lebesgue–Stieltjes integral[The proof of this statement is a part of Remark <ref>] ∫_γ g(γ(t)) dμ_γ(t). The latter interpretation is more general as the latter integral is well-defined for all γ [a,b] → X that are right-continuous and of bounded variation. Next, let us fix x, y ∈ X and let γ_x^y [0,1] → X be defined by γ_x^y(t) = x for t ∈ [0,1/2) and γ_x^y(t) = y for t ∈ [1/2,1]. Then ∫_γ_x^y g = g(y)x,y for any Borel g X → [0,∞]. The right hand side of this expression is similar to the right hand side of the definition of a Hajłasz gradient. We can make this resemblance even more apparent by “symmetrizing” the integral, that is, by taking the average of this integral and the integral along the curve γ_y^x: 1/2(∫_γ_x^y g + ∫_γ_y^x g) = 1/2(g(x) + g(y))x, y.
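To spell out the computation behind these identities, here is a short derivation (our sketch: we write d for the metric, which the extraction leaves implicit, and take μ_γ to be the Lebesgue–Stieltjes measure of the variation function V_γ, as made precise in the Remark referenced above):

```latex
% For the right-continuous two-point curve \gamma_x^y, the variation function
% satisfies V_{\gamma_x^y}(t) = 0 for t < 1/2 and V_{\gamma_x^y}(t) = d(x,y) for
% t \ge 1/2, so its Lebesgue--Stieltjes measure is the point mass d(x,y)\,\delta_{1/2}.
% Hence, for every Borel g \ge 0,
\[
  \int_{\gamma_x^y} g
  = \int_{[0,1]} g\bigl(\gamma_x^y(t)\bigr)\, \mathrm{d}\mu_{\gamma_x^y}(t)
  = g\bigl(\gamma_x^y(\tfrac12)\bigr)\, d(x,y)
  = g(y)\, d(x,y),
\]
% and averaging with the reversed curve \gamma_y^x gives
\[
  \tfrac12 \Bigl( \int_{\gamma_x^y} g + \int_{\gamma_y^x} g \Bigr)
  = \tfrac12 \bigl( g(x) + g(y) \bigr)\, d(x,y),
\]
% which is exactly the right-hand side appearing in the definition of a Hajłasz gradient.
```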
In consequence, if we were to define a modification of Newtonian spaces in which we use the symmetrized integral[We will make this notion rigorous in Definition <ref>] instead of the usual integral along the curve, we might define function spaces that are highly comparable with the Hajłasz–Sobolev spaces. Within this paper we explore this idea and apply similar modifications to the definition of the first order Sobolev spaces introduced by Gigli. The main result of this paper, Theorem <ref>, shows that, if the measure is Borel regular and σ-finite, the modified Newtonian space is equivalent to the Hajłasz–Sobolev space. Also, if, additionally, the measure is doubling, the modified “Gigli-like” space is equivalent to the Hajłasz–Sobolev space. This theorem therefore provides new characterizations of the Hajłasz–Sobolev spaces that are true in rather general settings. The remainder of the paper is structured as follows. We devote Section <ref> to the Preliminaries. In Section <ref> we first recall some of the basic properties of functions of bounded variation from the interval [a,b]. Then we introduce the family of test curves [a,b];X and their reversal. Finally, we introduce the symmetrized integral along curves from this family. In Section <ref> we endow the family [a,b];X with the topology of convergence in measure and discuss examples of functions that are Borel in this topology. Section <ref> is devoted to the introduction of three normed spaces that can be viewed as first order Sobolev spaces. The ones introduced in Subsections <ref> and <ref> are modifications of the Newtonian spaces and the Sobolev spaces introduced by Gigli, respectively. The one introduced in Subsection <ref> can be seen as a space whose definition is “in-between” the definitions of the other two spaces. Section <ref> is dedicated to the comparison between the Hajłasz–Sobolev spaces and the modified Newtonian spaces. Finally, in Section <ref> we compare the Hajłasz–Sobolev spaces and the Gigli-like spaces. We also dedicate this section to Theorem <ref>, which summarizes all the comparisons between the previously mentioned spaces. § PRELIMINARIES Notation We make the convention that |∞ - ∞| = ∞ and |(-∞) - (-∞)| = ∞. Moreover, we let [n]=1,...,n for n ∈ℕ. We will use λ to denote the Lebesgue measure. If (X, ) is a metric space, we will use ℬ(X) to denote the family of Borel subsets of X. Let (X_i, _i) be a metric space and γ_i [a,b] → X_i be a Borel map such that ⊷ (γ_i) is separable, where i∈ [n]. Then, F_γ_1,...,γ_n: [a,b]^n →_i=1^n X_i defined as F_γ_1,...,γ_n(t_1,...,t_n)=(γ_1(t_1),...,γ_n(t_n)) is a Borel map. Let X̃_i ⊷γ_i; then by assumption X̃_i is separable. Therefore, by the Lindelöf Theorem, for every open set U ⊂ X^n we have F_γ_1,...,γ_n^-1[U]=F_γ_1,...,γ_n^-1[_i=1^n X̃_i ∩ U] is a Borel subset of [a,b]^n. Let us remark that, assuming the continuum hypothesis CH, we have separability of images of Borel maps. Let (X, ) be a metric space and γ [a,b] → X be a Borel map. Then, ⊷ (γ) is separable. Suppose that the image of γ is not separable. Then there exist > 0 and an uncountable family t_i_i ∈ I such that γ(t_i)_i ∈ I is a 2-separated family of elements of ⊷γ. We have that ℬ Bγ(t_i), _i ∈ I is a family of pairwise disjoint balls. Each element of ℬ is an open set, hence the union of any subfamily of ℬ is open. Thus, every element of ⋃_j ∈ J Bγ(t_j), J ∈ 2^I is open. In consequence, every element of 𝒜γ^-1⋃_j ∈ J Bγ(t_j), J ∈ 2^I is Borel in [a,b]. Therefore, #𝒜≤𝔠.
Let us notice that for J, J' ∈ 2^I if J J', then γ(t_j) _j ∈ Jγ(t_j) _j ∈ J', hence γ^-1⋃_j ∈ J Bγ(t_j), γ^-1⋃_j ∈ J' Bγ(t_j), , as the balls are pairwise disjoint. This shows that #𝒜≥#2^I. As I is uncountable, assuming the continuum hypothesis we have #𝒜≥ 2^𝔠 and we get a contradiction. § FUNCTIONS OF BOUNDED VARIATION Let [a,b] ⊆. A tuple Δ = t_i_i=0^n shall be called a partition of [a,b] if a = t_0 < ⋯ < t_n = b. The family of partitions of [a,b] shall be denoted by [a,b]. For partition Δ = t_i _i=0^n we define its diameter as: Δmax_i ∈ [n] t_i - t_i-1. If (X, ) is a metric space, γ [a,b] → X, and t_i_i=0^n = Δ∈ [a,b], then we define the Δ-variation of γ by V^Δ(γ) ∑_i=1^n γ(t_i), γ(t_i-1) . A sequence Δ_n _n of partitions of [a,b] will be called normal, if lim_n →∞Δ_n = 0. While we defined partitions to be tuples of elements, we shall often work with them as if they are sets. For example, for Δ, σ∈ [a,b] by Δ∪σ we shall denote the unique element of [a,b] which is a tuple consisting of all terms of Δ and σ arranged in an increasing sequence. We will also use symbol t ∈Δ to mean that t is a term within tuple Δ. Let (X, ) be a metric space. For γ [a,b] → X we define V(γ) = V_γsup_Δ∈ [a,b] V^Δ(γ), and the value of V(γ) shall be called variation of γ. Let (X, ) be a metric space. We shall say that γ [a,b] → X is of bounded variation if V(γ) < ∞. We shall denote by [a,b]; X the family of all functions γ [a,b] → X of bounded variation. We also define [a,b]; X γ∈ [a,b]; X γ is right-continuous ona, b. For all r, t ∈ [a,b] with r ≤ t, if γ∈ [a,b]; X (γ∈ [a,b]; X), then γ|_[r,t]∈ [r,t]; X (γ|_[r,t]∈ [r,t]; X). Let Δ∈ [r,t], then V^Δγ|_[r,t]≤ V^Δ∪a,bγ≤ V(γ). Taking supremum over Δ∈ [r,t] we see V(γ|_[r,t] ) ≤ V(γ) < ∞, so γ|_[r,t]∈ [a,b]; X. The other claim follows from the fact that restricting functions preserves their right-continuity. Let (X, ) be a metric space and γ∈ [a,b]; X. Then * ∀ t ∈a,b ∀ > 0 ∃δ > 0 ∀ r, s ∈ (t, t+δ) γ(r),γ(s) ≤ and ∀ t ∈a,b ∀ > 0 ∃δ > 0 ∀ r, s ∈ (t-δ, t) γ(r),γ(s) ≤. In consequence, for all t ∈ [a,b], if t_n → t^+ (and t ∈a,b) or t_n → t^- (and t ∈a,b), then γ(t_n)_n is a Cauchy sequence. Moreover, if t_n → t^- (or t^+) and s_n → t^- (or t^+) (respectively), then if (r_n) is a sequence which alternates between (t_n) and (s_n) then γ(r_n)_n is a Cauchy sequence. * ∀ r ∈a,b ∀ t ∈a,b lim_s → r^+γ(s), γ(t) exists and ∀ r ∈a,b ∀ t ∈a,b lim_s → r^-γ(s), γ(t) exists. * Without loss of generality, it suffices to prove only one of (<ref>) and (<ref>), as the other can be proved analogously. We will show (<ref>). For this purpose we fix t ∈a,b and suppose the claim is false. Then, there exist > 0 and sequences (s_n)_n and (r_n)_n such that ∀ n ∈ s_n, r_n ∈ a, t, s_n < r_n < s_n+1, and γ(s_n), γ(r_n) ≥. Let us define partitions Δ_k = a, s_1, r_1, s_2, …, s_k-1, r_k-1, s_k, b, where k ∈ℕ. Then, for all k ∈ we have V^Δ_k = γ(a), γ(s_1) + ∑_i=1^k-1γ(s_i), γ(r_i) +γ(r_i), γ(s_i+1) + γ(s_k), γ(b) ≥∑_i=1^k-1γ(s_i), γ(r_i) ≥ (k-1), so V^Δ_k→∞ as k →∞. This contradicts γ∈ [a,b]; X. Hence, the claim is proved. For the claim with Cauchy sequences, it is again sufficient to prove it for one of the sides. Let t ∈a,b and t_n → t^-. Let > 0. There exists δ > 0 as in (<ref>). Then there exists N ∈ such that for n, m ≥ N we have γ( t_n), γ(t_m) ≤ and the claim is proved. The last claim follows from the fact that a sequence alternating between the two still approaches from below (or above). 
* Without loss of generality, it suffices to prove only one of (<ref>) and (<ref>) as the other can be proved analogously. We will show (<ref>). Let r ∈a,b, t ∈ [a,b] and > 0. By property (<ref>) from Lemma <ref> there exists δ > 0 such that if s_1, s_2 ∈ (r, r+δ), then γ(s_1), γ(s_2) ≤. Therefore γ(s_1), γ(t) - γ(t), γ(s_2) ≤γ(s_1), γ(s_2) ≤ and since > 0 is arbitrary, the claim is proved. Let (X, ) be a metric space and γ∈ [a,b]; X. Then ⊷γ is totally bounded and ⊷γ≤ V(γ). It is sufficient to show that every sequence in ⊷γ has a Cauchy subsequence. Let (γ(t_n))_n be a sequence in ⊷γ, where t_n ∈ [a,b]. By compactness of [a,b], there exist t∈ [a,b] and a subsequence (t_n_k)_k which converges to t in such a way that all of its terms are either: strictly smaller than t, strictly greater than t, or equal to t. In either of these cases, sequence γ t_n_k_k is Cauchy by Lemma <ref> or by being a constant sequence. Now, for the second part. Let x, y ∈⊷γ, then from the very definition of variation of γ we have x, y≤ V(γ). Therefore, taking supremum over x, y ∈⊷γ, we have ⊷γ≤ V(γ). Let (X, ) be a metric space and γ∈ [a,b]; X. We define functions ϕ_γ^L, ϕ_γ^R [a,b] →0. ∞ of the left-jumps and the right-jumps of γ by the formulas ∀ t ∈ [a,b] ϕ_γ^L(t) lim_s → t^-γ(s), γ(t) and ϕ_γ^R(t) lim_s → t^+γ(s), γ(t) , where we put ϕ_γ^L(a) = 0 and ϕ^R_γ(b) = 0. Functions ϕ_γ^L and ϕ_γ^R are well-defined by Lemma <ref>. In the case of γ∈ [a,b]; X we have ϕ^R_γ≡ 0; in this case we will also simplify our notation by writing ϕ_γ instead of ϕ^L_γ. Let (X, ) be a metric space and γ∈ [a,b]; X. Then sets L_γ t ∈ [a,b] ϕ_γ^L(t) > 0 and R_γ t ∈ [a,b] ϕ_γ^R(t) > 0 are precisely the sets of points of left- and right-discontinuity of γ. Let (X, ) be a metric space and γ∈ [a,b]; X. Then[If g:[a,b]→ [0,∞], then ∑_t ∈ [a,b] g(t) := sup_K⊂ [a,b], #K<∞∑_t ∈ K g(t).] ∑_t ∈ [a,b]ϕ_γ^L(t) + ϕ_γ^R(t) ≤ V(γ). Moreover, the set of points at which γ is not continuous is at most countable. Let A= s_i_i=1^k be a finite subset of [a,b]. We arrange s_i in ascending order, that is, s_1 < s_2 < ⋯ < s_k. For > 0 there exist r_i_i=1^k ⊂ [a,b] and t_i_i=1^k ⊂ [a,b] such that * r_1 ∈ (a,s_1) if s_1 ≠ a or r_1 = a if s_1 = a, * t_k ∈ (s_k,b) if s_k ≠ b or t_k = b if s_k = b, * For all i ∈ [k-1] we have s_i < t_i < r_i+1 < s_i+1 * For all i ∈ [k] we have: γ(s_i), γ(r_i) ≥ϕ^L_γ(s_i) - /2k and γ(s_i), γ(t_i) ≥ϕ^R_γ(s_i) - /2k. From the definition of the variation of γ we have V(γ) ≥∑_i=1^k γ(r_i), γ(s_i) + γ(s_i), γ(t_i) ≥∑_i=1^k ϕ_γ^L(s_i)- /2k + ϕ_γ^R(s_i)- /2k = ∑_i=1^k ϕ_γ^L(s_i) +ϕ_γ^R(s_i) - . Therefore, since is arbitrary and since A is an arbitrary finite subset of [a,b], we have V(γ) ≥∑_t ∈ [a,b] ϕ^L_γ(t) + ϕ_γ^R(t) as needed. Next, we prove that the set of points at which γ is not continuous is at most countable. For this purpose it is enough to prove that L_γ and R_γ are at most countable. We shall prove that L_γ is at most countable. Since we have L_γ = ⋃_n=1^∞L_n, where L_n t ∈ [a,b] ϕ^L_γ(t) ≥1/n, it is enough to show that L_n is a finite set for every n. Let us suppose there is N ∈ such that L_N is not a finite set. Then, for each k ∈ we can select k elements t_1^k, …, t_k^k ∈ L_N. Therefore, by (<ref>) we have V(γ) ≥∑_n=1^k ϕ^L_γ(t^k_n)≥∑_n=1^k n/N = k/N. In this way we get V(γ) =∞. However, that contradicts γ∈ [a,b]; X. Hence, the claim is proved. Let (X, ) be a metric space and γ∈ [a,b]; X. Then V(γ) = lim_Δ→ 0 Δ∈ [a,b] V^Δ(γ). First of all we shall prove the lemma Let (X, ) be a metric space and γ∈ [a,b]; X. 
Then for every > 0 there exists δ > 0 such that for all r, t ∈ [a,b], r < t with t - r ≤δ we have sup_s ∈ [r,t]γ(r), γ(s) + γ(s), γ(t) ≤γ(r), γ(t) + . Suppose the thesis is false. Then there exists > 0 such that for all n ∈ there are r_n, t_n ∈ [a,b] such that r_n < t_n, t_n - r_n ≤ 1/n, and there exists s_n ∈ [r_n, t_n] such that γ(r_n), γ(s_n) + γ(s_n), γ(t_n) > γ(r_n), γ(t_n) + . Taking into account (<ref>) with compactness of [a,b], each of (r_n)_n, (t_n)_n, and (s_n)_n has convergent subsequence (r_n_k)_k, (t_n_k)_k, and (s_n_k)_k converging to τ. We have three possibilities: * Sequence (r_n_k)_k has a further subsequence (r_n_k_l)_l such that for all l ∈ we have τ≤ r_n_k_l. We then have r_n_k_l, s_n_k_l, t_n_k_l→τ^+ and from the right-continuity of γ at τ, we have γ(r_n_k_l), γ(s_n_k_l) + γ(s_n_k_l), γ(t_n_k_l) 0, γ(r_n_k_l), γ(t_n_k_l) 0. However, this contradicts (<ref>). * Sequence (t_n_k)_k has a further subsequence (t_n_k_l)_l such that for all l ∈ we have t_n_k_l < τ. We have r_n_k_l, s_n_k_l, t_n_k_l→τ^-. Let (τ_l)_l be a sequence whose terms alternate between the terms of the other three other sequences. By Lemma <ref> we know that γτ_l_l is a Cauchy sequence. In consequence, γ(r_n_k_l), γ(s_n_k_l) + γ(s_n_k_l), γ(t_n_k_l) 0, γ(r_n_k_l), γ(t_n_k_l) 0, and this contradicts (<ref>). * Neither of the mentioned cases is true. Then, for all large k ∈ we have r_n_k < τ≤ t_n_k. We then have two possibilities: * Sequence (s_n_k)_k has a subsequence (s_n_k_l)_l such that for all l ∈ we have s_n_k_l < τ. Let (τ_l)_l be a sequence which terms alternate between the terms of [1]r_n_k_l_l and [1] s_n_k_l_l. By Lemma <ref> sequence γτ_l_l is Cauchy. In consequence, |-γ(r_n_k_l), γ(t_n_k_l) +γ(s_n_k_l), γ(t_n_k_l) | ≤γ(r_n_k_l), γ(s_n_k_l) 0, and this contradicts (<ref>). * Sequence (s_n_k)_k has a subsequence (s_n_k_l)_l such that for all l ∈ we have τ≤ s_n_k_l. Let (τ_l)_l be a sequence which terms alternate between the terms of [1]t_n_k_l_l and [1] s_n_k_l_l. By Lemma <ref> sequence γτ_l_l is Cauchy. In consequence, |γ(r_n_k_l), γ(t_n_k_l) - γ(r_n_k_l), γ(s_n_k_l) | ≤γ(s_n_k_l), γ(t_n_k_l) 0, and once again, this contradicts (<ref>). We see that in all cases we have arrived at a contradiction. As such, the thesis of the lemma is true. Now, we are in position to prove the proposition. It is sufficient to show that for any M > 0 such that M < V(γ) there exists δ > 0 such that if Δ∈ [a,b] satisfies Δ≤δ, then M ≤ V^Δ(γ). Since γ∈ [a,b]; X, we have V(γ) < ∞. Let V(γ) - M /2. There exists τ_j_j=0^m = σ∈ [a,b] such that V(γ) - V^σ(γ) ≤. Let δ_1 min_j ∈ [m] τ_j - τ_j-1/2. By Lemma <ref> there exists δ∈ 0, δ_1 such that for all r, t ∈ [a,b], r < t with t - r ≤δ we have ∀ s ∈ [r,t] γ(r), γ(s) + γ(s), γ(t) ≤γ(r), γ(t) + /(m-1). Let (t_i)_i=0^n = Δ∈ [a,b] be any such that Δ≤δ. For j ∈ [m-1] let us denote ^< τ_j max t ∈Δ t < τ_j , τ_j^≤min t ∈Δτ_j ≤ t . By definition, ^< τ_j and τ_j^≤ are consecutive elements of Δ and therefore [0]τ_j^≤ - ^< τ_j ≤δ. Since τ_j ∈ [^< τ_j, τ_j^≤], by definition of δ we have γ(^< τ_j), γ(τ_j) + γ(τ_j), γ(τ_j^≤) ≤γ(^< τ_j), γ(τ_j^≤) + /(m-1). From [0]τ_j^≤ - ^< τ_j ≤δ and the fact that δ < δ_1 = min_j ∈ [m] τ_j - τ_j-1/2, we also have ^< τ_j   <  τ_j  ≤ τ_j^≤ ≤ ^< τ_j+1  <  τ_j+1 ≤ τ_j+1^≤ for all j ∈ [m-2]. Let us denote by I the family of indices i of 0, 1, …, n such that there is j ∈ [m-1] with t_i = τ_j^≤. 
Thus, V^σ∪Δ(γ) = ∑_ i = 1 i ∉ I ^n γ t_i , γ t_i-1 + ∑_ j=1 ^m-1γ^< τ_j, γτ_j + γτ_j, γτ_j^≤ ≤∑_ i = 1 i ∉ I ^n γ t_i , γ t_i-1 + ∑_ j=1 ^m-1γτ_j^≤, γ^< τ_j + /(m-1) = ∑_i=1^nγ t_i , γ t_i-1 + = V^Δ(γ) + . Terefore, by the definition of σ, we have V(γ) ≤ V^σ(γ) + ≤ V^σ∪Δ(γ) + ≤ V^Δ(γ) + 2 . Finally, since = V(γ) - M /2, we conclude M = V(γ) - 2 ≤ V^Δ(γ) and M ≤ V^Δ(γ) for any Δ∈ [a,b] with Δ≤δ, as needed. As a corollary we have. Let (X, ) be a metric space and γ∈ [a,b]; X. Then for t ∈ [a,b] we have Vγ|_ [a,t] + Vγ|_ [t,b] = Vγ. Let (X, ) be a metric space. For γ∈ [a,b]; X we define function V_γ [a,b] →0, ∞ by the formula ∀ t ∈ [a,b] V_γ(t) Vγ|_[a, t]. For γ∈ [a,b]; X function V_γ [a,b] →0, ∞ is non-negative, non-decreasing and bounded by V(γ). Furthermore, V_γ∈ [a,b]; 0, ∞ and ∀ t ∈ [a,b] V_V_γ t = V_γ(t). Finally, we have ∀ t ∈a,b lim_s → t^- V_γ(t) - V_γ(s) = ϕ_γ(t). Hence, V_γ is left-continuous at points of left-continuity of γ. Suppose s, t ∈ [a,b] are such that s ≤ t. Let > 0 and Δ_s ∈ [a,s] be such that V_γ(s) ≤ V^Δ_s( γ|_[a, s] ) +. Then Δ_t Δ_s ∪t∈ [a,t] and V^Δ_s( γ|_[a, s] ) ≤ V^Δ_t( γ|_[a, t] ) ≤ V_γ(t). Hence, V_γ(s) ≤ V_γ(t) + for all > 0. Thus, V_γ(s) ≤ V_γ(t) for s ≤ t and V_γ is non-decreasing. In consequence, V_γ is bounded by V_γ(b) = V(γ). Let us prove that V_γ∈ [a,b]; 0, ∞. For t_i_i=0^n = Δ∈ [a,b] we have V^ΔV_γ = ∑_i=1^n V_γ(t_i) - V_γ t_i-1 = V_γ(t_n) - V_γ(t_0) = V(γ). Thus, by the definition of the variation we have V(V_γ) = V_γ. Furthermore, by Remark <ref> we have γ|_ [a,t] ∈ [a,t]; X for all t ∈ [a,b], and in consequence, V_V_γ t = V V_γ|_[a,t] = V γ|_[a,t] = V_γ(t). Next, we shall show that V_γ is right-continuous. For this purpose we fix s ∈a,b and suppose that V_γ is not right-continuous at s. Then there exists > 0 such that for all t ∈s, b we have 2≤ V_γ(t) - V_γ(s) = Vγ|_[ s, t ], where Corollary <ref> was applied. For n ∈ let t_i^n_i=0^m_n = Δ_n ∈([s, b]) be such that t_1^n-t_0^n ≤ 1/n and Vγ|_[ s, b ] - V^Δ_nγ|_[ s, b ] ≤. Let s_n t_1^n, then Vγ|_[ s, s_n ] +Vγ|_[ s_n, b ] = Vγ|_[ s, b ] ≤ V^Δ_nγ|_[ s, b ] +≤γ(s), γ(s_n) +Vγ|_[ s_n, b ] + , hence Vγ|_[ s, s_n ] ≤γ(s), γ(s_n) + . Therefore, by (<ref>) we get ≤γ(s), γ(s_n). However, since s_n → s^+, this inequality contradicts the right-continuity of γ. Thus, V_γ is right-continuous. Finally, let us prove that ∀ t ∈a,b lim_s → t^- V(t) - V(s) = lim_s → t^-γ (t), γ(s) . Let t ∈a,b, > 0 and for n ∈ let t_i^n_i=0^m_n = Δ_n ∈ [a, t] be such that t_m_n^n-t_m_n-1^n ≤ 1/n and Vγ|_[ a, t ] - V^Δ_nγ|_[ a, t ] ≤. Denote s_n t_m_n-1^n, then Vγ|_[ a, s_n ] +Vγ|_[ s_n, t ] = Vγ|_[ a, t ] ≤ V^Δ_nγ|_[ a, t ] +≤ Vγ|_[ a, s_n ] + γ(s_n), γ(t) + , and thus V_γ(t) - V_γ(s_n) = Vγ|_[ s_n , t ] ≤γ(s_n), γ(t) + . Therefore, since s_n → t^-, we get lim_ s → t^- V_γ(t) - V_γ(s) = lim_ n →∞ V_γ(t) - V_γ(s_n) ≤lim_ n →∞γ(s_n), γ(t) + = lim_ s → t^-γ(t), γ(s) + . Thus, since > 0 was arbitrary, we have lim_ s → t^- V_γ(t) - V_γ(s) ≤lim_ s → t^-γ(t), γ(s) . On the other hand, for all s ∈a,t we have γ(t), γ(s) ≤ Vγ|_ [s, t] = V_γ(t) - V_γ(s), hence lim_ s → t^- V_γ(t) - V_γ(s) ≥lim_ s → t^-γ(t), γ(s) . Thus, gathering the above inequality with (<ref>) the proof follows. Let (X, ) be a metric space and γ∈ [a,b]; X. If γ is continuous, then so is V_γ. Let (X, ) be a metric space. Every element of [a,b] ; X is a Borel map. Let γ∈ [a,b] ; X, then by Lemma <ref> there are at most countably many points of discontinuity of γ. Let us denote this set by D. Set D is Borel as it is countable. This means that [a,b] ∖ D is Borel. 
Function γ|_ [a,b] ∖ D is Borel, since it is continuous. Moreover, the map γ|_D is Borel, since any subset of D is Borel. Let B ∈X, then γ^-1 B = γ|_D^-1B∪γ|_ [a,b] ∖ D^-1B. Hence, γ|_D^-1B is Borel in D, and γ|_ [a,b] ∖ D^-1B is Borel in [a,b] ∖ D. However, as D and [a,b] ∖ D are Borel subsets of [a,b], both of the mentioned preimages are Borel in [a,b]. We conclude that γ^-1 B is Borel in [a,b] and therefore function γ is Borel. Since totally bounded sets are separable, thanks to the above lemma and Corollary <ref> we are able to use Proposition <ref> for maps from [a,b] ; X. Let (X, ) be a metric space. We shall say that a function γ [a,b] → X is a test curve, if * γ∈ [a,b]; X, * γ is left-continuous at b, * The limit lim_ s → t^- γ(s) exists for every t ∈ a, b. We shall denote the family of all test curves γ [a,b] → X by [a,b]; X, and for t∈ (a,b] we define γ(t^-)= lim_ s → t^- γ(s). Let (X, ) be a metric space and x, y ∈ X. Then, γ [a,b] → X be defined by ∀ t ∈ [a,b] γ(t) x, for t ∈ a, (a + b)/2 , y, otherwise belongs to γ∈ [a,b] ; X. Let (X, ) be a metric space and γ∈ [a,b]; X . If t ∈ [a,b] is a continuity point of γ, then γ|_[a,t]∈ [a,t]; X . Let (X, ) be a metric space and γ∈ [a,b]; X, then ⊷γ = ⊷γ∪γ(t^-) t ∈ (a,b] and ⊷γ is compact. We have ⊷γ∪γ(t^-) t ∈ (a,b] ⊆⊷γ. Now, suppose that x ∈⊷γ. Then by compactness of [a,b] there is a sequence (t_n)_n of elements of [a,b] and t∈ [a,b] such that γ(t_n) → x and t_n → t as n →∞. We have then two possibilities: * Sequence (t_n)_n has a subsequence t_n_k_k such that t_n_k < t for all k ∈. Then t_n_k→ t^- and γ t_n_kγ t^-. Hence x = γ(t^-) and thus x∈γ(t^-) t ∈ (a,b]. * For all large n ∈ t ≤ t_n . Then t_n→ t^+ and therefore γ t_nγ t. In this way we get x = γ(t) ∈⊷γ. Thus, we have proved ⊷γ⊆γ(t^-) t ∈ (a,b] ∪⊷γ, and we conclude that ⊷γ = ⊷γ∪γ(t^-) t ∈ (a,b] . Next, we shall prove that ⊷γ is compact. Let (x_n) be a sequence in ⊷γ. Then for every n ∈ we have t_n ∈ [a,b] such that x_n = γ(t_n) or x_n = γ(t_n^-). As [a,b] is compact, we have a subsequence (t_n_k)_k and t ∈ [a,b] such that t_n_k→ t as k →∞. We have three possibilities: * Sequence (t_n_k)_k has a subsequence (t_n_k_l)_l such that t_n_k_l < t for all l ∈. Then t_n_k_l→ t^-. Since γ∈ [a,b]; X, limit γ(t^-) exists, so for every > 0 there exists δ > 0 such that if t - s≤ 2δ for s ∈ [a, t), then γ(t^-), γ(s)≤. This means that for every s ∈ [a,t) such that t - s≤δ we have γ(t^-), γ(s)≤ and γ(t^-), γ(s^-)≤. Hence, γ(t^-), x_n_k_l≤ for all large l ∈. As > 0 is arbitrary, we have x_n_k_l→γ(t^-). * Sequence (t_n_k)_k has a subsequence (t_n_k_l)_l such that t_n_k_l > t for all n ∈. Then t_n_k_l→ t^+. Since γ∈ [a,b]; X, limit γ(t^+) = γ(t) exists, so for every > 0 there exists δ > 0 such that if t - s≤ 2δ for s ∈ (t,b], then γ(t), γ(s)≤. This means that for every s ∈ (t,b] such that t - s≤δ we have γ(t), γ(s)≤ and γ(t), γ(s^-)≤. Hence, γ(t), x_n_k_l≤ for all large l ∈. As > 0 is arbitrary, we have x_n_k_l→γ(t). * For all large k ∈ we have t_n_k = t. Then x_n_k = γ(t) for infinitely many k, or x_n_k = γ(t^-) for infinitely many k. In either case, x_n_k has a further subsequence which converges to γ(t) or γ(t^-). Let (X, ) be a metric space and C ⊆ (a,b) be a dense set. Then * If γ, γ' (a,b) → X are right-continuous and such that γ|_C = γ' |_C, then γ = γ'. * If γ C → X is such that γ(t^+):=lim_C ∋ s → t^+γ(s) exists for t ∈ [a,b), then t ↦γ(t^+) is right-continuous. The statement (1) is straightforward. Now, for the second part of the lemma we fix > 0 and t ∈ [a,b). 
There exists δ > 0 such if s ∈ t, t+2δ∩ C, then γ(t^+), γ(s)≤. Hence, if s ∈ t, t+δ, then γ(t^+), γ(s^+)≤ and t ↦γ(t^+) is right-continuous as needed. For γ∈ [a,b]; X we define γ [a,b] → X by ∀ t ∈ [a,b] γ(t) γ a + b - t ^-, where we use γ a^- = γ(a). Let (X, ) be a metric space and for γ∈ BV([a,b];X) we denote by C_γ the set of continuity points of γ. Then function · has the following properties: * · [a,b]; X→ [a,b]; X, * γ = γ, * C_γ = b+a-C_γ, * For all t ∈ [a,b] we have V_γ(t) + V_γ(t) = V(γ) and hence V(γ) = V(γ). For simplicity of notation, let us define function w [a,b] → [a,b] by the formula w(t) = a+b-t for t ∈ [a,b]. * Let t ∈a, b, then, since γ∈ [a,b]; X, the limit γw(t)^- exists. For > 0, there exists δ > 0 such that if w(s) ∈ a, w(t) satisfies t-s = w(t) - w(s) ≤ 2δ, then [2]γ[1] w(t)^- , γ w(s) ≤. Therefore, if s ∈ (t,b] satisfies |t-s| ≤δ, then γ(t), γ(s) = [2]γ[1] w(t)^- , γ w(s) ^- ≤. As > 0 is arbitrary, we conclude that γ is right-continuous at t for all t ∈a, b. Let t ∈a,b, we will show if γ is left-continuous at w(t), then γ is left-continuous at t. Let >0, since γ be left-continuous at w(t), then γ(t) = γ (w(t))^- = γ w(t). Since γ is right-continuous at w(t), there exists δ > 0 such that if w(s) ∈ w(t), b, satisfies t-s = w(t) - w(s)≤ 2δ, then γ(w(t)), γ(w(s)) ≤. Therefore, if s ∈a,t, satisfies t-s≤δ, then γ(t), γ(s) = γ(w(t)), γ(w(s))^-≤. We conclude that γ is left-continuous at t. Note that, in particular, since γ is left-continuous at a, we have that γ is left-continuous at b. Moreover, we have shown that if t∈ [a,b] is such that γ is continuous at a+b-t, then γ is continuous at t. Let Δ=t_i_i=0^k be a partition of [a,b]. Then we have V^Δγ = V^wΔ(γ) + V^Δγ -V^wΔ(γ) = V^wΔ(γ) + ∑_i=1^k γ(t_i), γ(t_i-1) - γ w t_i, γ w t_i-1 = V^wΔ(γ) + ∑_i=1^kγ wt_i^- , γ w t_i-1^- -γ w t_i, γ w t_i-1 ≤ V^wΔ(γ) + ∑_i=1^kγ w t_i^- , γ w t_i + γ w t_i-1^- , γ w t_i-1 = V^wΔ(γ) + ∑_i=1^kϕ_γw t_i + ϕ_γw t_i-1 ≤ V^wΔ(γ) + 2 ∑_t ∈ [a,b] ϕ_γ(t) ≤ 3 V(γ), where in the last step we used Corollary <ref>. Therefore, V(γ) ≤ 3V(γ) and γ∈ [a,b]; X. By the definition of γ we have ⊷γ⊆⊷γ and ⊷γ is compact by Remark <ref>. Therefore, the existence of left-limits γ(t^-) for t ∈a,b follows from Lemma <ref>. We conclude that γ∈ [a,b]; X. * Next, we will prove that γ = γ. Let t ∈ C_γ, then γ is continuous at a+b-t. Hence, γ is continuous at t. Therefore, γ(t) = γ (a+b-t)^- = γa+b-t = γ (a+b - (a+b-t))^- = γ(t^-) = γ(t), and thus, γ|_C_γ = γ|_C_γ. Since a, b ∈ C_γ, C_γ is dense in (a,b), and γ, γ are right-continuous functions on (a,b), by Lemma <ref> we have γ = γ on the entire [a,b]. * We have previously shown that if γ is continuous at t, then γ is continuous at w(t). Now, if γ is continuous at t, then γ = γ is continuous at w(t). This proves that a+b - C_γ= C_γ. * We know that the set C_γ contains a and b, and is dense in [a,b]. The same is true for C_γ, the set of continuity points of γ. Let t ∈ C_γ. Since C_γ is dense in [a,b], then C_γ∩[a,t] is dense in [a,t]. Therefore, there exists a normal sequence τ_i^n_i=0^m_n = Δ_n of partitions of [a,t] such that Δ_n ⊆ C_γ for all n ∈. Then a+b-Δ_n_n ⊂ C_γ is a normal sequence of partitions of [a+b-t,b]. Therefore, for all n ∈ we have V^Δ_nγ|_[a,t] = ∑_i=0^m_nγ(τ_i^n), γ(τ_i-1^n) = ∑_i=0^m_nγ(w(τ_i^n))^-, γ(w(τ_i-1^n))^- = ∑_i=0^m_nγw(τ_i^n), γw(τ_i-1^n) = V^a+b-Δ_nγ|_[w(t),b]. 
Hence, by Corollary <ref> and Corollary <ref> we have V_γ (t) = Vγ|_[a,t] = Vγ|_[w(t),b] = V_γ(b) - V_γ(w(t)) = V(γ) - V_γ(w(t))= V(γ) - V_γ((w(t))^-), where the last equality follows from Proposition <ref> and the assumption t∈ C_γ. Let us point out that by Proposition <ref> we have V_γ∈ [a,b]; [0,∞) and by point (a) we have V_γ∈ [a,b]; [0,∞). Therefore, for t ∈ C_γ we have V_γ (t) = V(γ) - V_γ(t). Finally, since (a,b) ∋ t ↦ V(γ) - V_γ(t) and (a,b) ∋ t ↦ V_γ(t) are right-continuous maps which coincide on a dense set C_γ and a,b ∈ C_γ, Lemma <ref> finishes the proof. The final part of the lemma is a consequence of the fact that V γ = V_γ(b) = V(γ) - V_γ(a+b-b) = V(γ) - V_γ(a) = V(γ), where we used the fact that a ∈ C_γ. §.§ Integration along a curve Let (X, ) be a metric space and γ∈[a,b];X. Since V_γ∈[a,b];[0, ∞) and V_γ is non-decreasing, by the Caratheodory Extension Theorem there exists a measure[Such a measure will be called a Lebesgue–Stieltjes measure induced by V_γ.] μ_γ defined on [a,b] such that ∀ r, t ∈ [a,b],   r < t μ_γ r, t = V_γ(t) - V_γ(r) and μ_γa = 0. Let f X → be Borel. We define the Lebesgue–Stieltjes integral of f along curve γ by ∫_γ f [a,b] f ∘γμ_γ. We note that f ∘γ is Borel since f is Borel and γ is Borel by Lemma <ref>. Let (X, ) be a metric space and x, y ∈ X. Let γ [a,b] → X be defined by γ(t) x, for t ∈ a, (a + b)/2 , y, otherwise. Then, for Borel map f X → we have ∫_γ f = f(y) x, y. Having in mind the above example we define the symmetrized integral. Let (X, ) be a metric space and γ∈ [a,b] ; X. Let f X → be Borel. We define the symmetrized Lebesgue–Stieltjes integral of f along curve γ by γ f 1/2∫_γ f + ∫_γ f . Let (X, ) be a metric space, x, y ∈ X and let γ [a,b] → X be a curve from Example <ref>, then γ(t) = y, for t ∈ a, (a + b)/2 , x, otherwise. Thus, for Borel map f X → such that (f(x) + f(y)) makes sense we have γ f = (f(x) + f(y)) x, y /2. Let (X, ) be a metric space and γ∈ [a,b]; X. If γ is continuous, then for every Borel map f X → [0,∞] we have γ f = ∫_γ f. Moreover, the right hand side in the above equality coincides with the integral of a Borel function along a rectifiable curve <cit.>. Let w [a,b] → [a,b] be defined by the formula w(t) = a+b-t, then since γ is continuous we have γ = γ∘ w. Moreover, by Lemma <ref> we get V_γ(t) =V(γ) -V_γ(w(t)). Let us observe that μ_γa = 0 = lim_s → b^- V_γ(b) - V_γ(s) = lim_s → b^-μ_γs, b = μ_γb = μ_γ w a , and for r,t ∈ [a,b] we have μ_γr,t = V_γ(t) - V_γ(r) = V_γ w(r) - V_γ w(t) = μ_γ w(t), w(r) = μ_γ w r,t. In this way we have proved that for every B such that B={a} or B=(r,t] we have μ_γ B = μ_γ w B. Therefore, since sets of the mentioned form generate the entire σ-algebra of Borel sets on [a,b], by a standard application of the Dynkin π-λ Lemma, we have μ_γ B = μ_γ w B = (w^-1)_#μ_γ(B) for all Borel sets. Therefore, for all Borel f X → [0,∞] we have ∫_γ f = [a,b] f ∘γμ_γ = [a,b] f ∘γ∘ w (w^-1)_#μ_γ = [a,b] f ∘γμ_γ = ∫_γ f. Now, we shall prove ∫_γ f coincides with the integral of a Borel function along a rectifiable curve. Let γ̃: [0, V(γ)] → X be the arc-length parametrization of γ, i.e, γ = γ̃∘ V_γ. It is well known that γ̃=γ∘ h, where h:[0,V(γ)]→ [a,b] is given by the formula h(t)=inf V_γ^-1[{t}] (see <cit.>). Therefore, ∫_[0,V(γ)] f ∘γ̃ dλ = ∫_[0,V(γ)] f ∘γ∘ h dλ = ∫_[a,b] f ∘γ d h_#λ. Furthermore, for every r, t ∈ [a,b] such that r<t we have h^-1({a})={0} and h^-1((r,t])=(V_γ(r),V_γ(t)]. Thus μ_γ({a})=0=λ({0})=h_#λ({a}) and μ_γ((r,t])=V_γ(t) - V_γ(r) = λ((V_γ(r), V_γ(t)]) = h_#λ ((r,t]). 
Hence, h_#λ=μ_γ on Borel sets of [a,b] and we get ∫_[0,V(γ)] f ∘γ̃ dλ = ∫_[a,b] f ∘γ dμ_γ. Let (X, ) be a metric space, γ∈ [a,b]; X and let us define a Borel measure μ^S_γ on X as follows μ_γ^S 1/2γ_#μ_γ + γ_#μ_γ. Then for every Borel map g X →0, ∞ we have γg = Xgμ^S_γ. Let g X →0, ∞ be Borel. Then γg = 1/2∫_γ g + ∫_γ g = 1/2[a,b] g ∘γμ_γ + [a,b] g ∘γμ_γ = 1/2X gγ_#μ_γ + X g γ_#μ_γ = Xgμ_γ^S . Let (X, ) be a metric space and γ∈ [a,b]; X, then μ^S_γ(⊷γ)=μ^S_γ(X)= V(γ). By Lemma <ref> we have μ^S_γ(X) = 1/2∫_γ 1 + ∫_γ 1 = 1/2 V(γ ) + V(γ) = V(γ). Furthermore, μ^S_γ(⊷γ) ≤μ^S_γ(X) = 1/2γ_#μ_γ (X) + γ_#μ_γ(X) =1/2γ_#μ_γ (⊷γ ) + γ_#μ_γ(⊷γ ) ≤1/2γ_#μ_γ (⊷γ) + γ_#μ_γ(⊷γ) =μ^S_γ(⊷γ). Let ψ [c,d] → [a,b] be defined by the formula ∀ t ∈ [c,d] ψ(t) a + b-a/d-c(t-c). Then the push-forward ψ_# defined by the formula ∀γ∈ [a,b]; X ψ_#(γ) = γ∘ψ has the following properties: * ψ_# [a,b]; X → [c,d]; X is a bijection, * C_ψ_#(γ) = ψ^-1(C_γ), * ψ_#γ = ψ_#(γ), * V_ψ_#(γ) = V_γ∘ψ * (ψ^-1)_#(μ_γ) = μ_ψ_#(γ) * For all Borel f X → [0,∞] we have ∫_γ f = ∫_ψ_#(γ) f and γ f = ψ_#(γ) f. * The map ψ is an increasing homeomorphism. Let γ∈ [a,b]; X, then for all Δ∈ [c,d] we have ψΔ∈ [a,b] and V^ψΔ(γ) = V^Δ(ψ_#(γ)). Therefore, Vψ_#(γ) ≤ V( γ) and we get ψ_#(γ) ∈ [a,b]; X. Next, let us note that since ψ is an increasing homeomorphism, for γ∈ [c,d]; X we have that if t ∈ [a,b] is a point of left-continuity (right-continuity) of γ, then ψ^-1(t) is a point of left-continuity (right-continuity) of ψ_#(γ). Moreover, for all s ∈c,d we have ψ_#(γ)(s^-) = γψ(s) ^-. This means that ψ_#(γ) ∈ [c,d]; X. We have proved that ψ_# [a,b]; X → [c,d]; X. Since ψ^-1 has the same properties as ψ, we have (ψ^-1)_# [c,d]; X → [a,b]; X and (ψ^-1)_# is an inverse of ψ_#. Hence, ψ_# is a bijection. * We have previously shown that ψ^-1C_γ⊂ C_ψ_#(γ). Hence, we also get ψC_ψ_#(γ)⊂ C_γ, which implies ψ^-1C_γ= C_ψ_#(γ). * Let s ∈ d+c -C_ψ_#(γ), then ψ(s) ∈ a+b -C_γ. Hence, ψ_#(γ) (s) = ψ_#(γ) (d+c-s)^- = ψ_#(γ) d+c-s = γψ c+d-s = γ a+b-ψ(s) = γ (a+b-ψ(s))^- = γψ(s) = ψ_#(γ)(s). Therefore, ψ_#(γ) |_d+c -C_ψ_#(γ) = ψ_#(γ) |_d+c -C_ψ_#(γ). Since d+c -C_ψ_#(γ) is dense in [c,d], the equality from third point is a consequence of Lemma <ref>. * We have previously shown that Vψ_#(γ) ≤ V( γ). Since ψ^-1 has the same properties as ψ and (ψ^-1)_# is an inverse of ψ_#, we also have Vψ_#(γ) ≥ V( γ). Hence, Vψ_#(γ) = V( γ). For s ∈ C_ψ_#(γ) let us define function ψ^s [c,s] → [a,ψ(s)] by the formula ∀ r ∈ [c,s] ψ^s(r) a + ψ(s) - a / s -c ( r- c). By a straightforward calculation we have ψ^s = ψ|_[c,s]. Then by Remark <ref> we have ψ^s_#γ|_[a,ψ(s)] = ψ_#(γ) |_[c,s]∈ [c,s]; X. Therefore, by already proved equality of variations we get V_γ( ψ(s)) = Vγ|_[a, ψ(s)] = Vψ^s_#γ|_[a,ψ(s)] = Vψ_#γ|_[c, s] = V_ψ_#γ (s). Therefore, V_γ∘ψ = V_ψ_#γ on C_ψ_#(γ). Set C_ψ_#(γ) is dense in [c,d] and since V_γ∘ψ and V_ψ_#γ are both right-continuous, we have V_γ∘ψ = V_ψ_#γ by Lemma <ref>. * We have μ_ψ_#(γ)(c) = 0 = μ_γ(a) = μ_γ( ψc ) = (ψ^-1)_#(μ_γ)(c), and for r, s ∈ [c,d] μ_ψ_#(γ)(r, t) = V_ψ_#(γ)(t) - V_ψ_#(γ)(r) = V_γ( ψ(t)) - V_γ( ψ(r)) = μ_γψ(r), ψ(t) = μ_γψ r, t = (ψ^-1)_#(μ_γ) r, t . Thus, by the Dynkin Lemma we conclude that μ_ψ_#(γ) = (ψ^-1)_#(μ_γ) on Borel subsets of [c,d]. * Finally, for the last point, let us note that ∫_γ f = [a,b] f ∘γμ_γ = [c,d] f ∘γ∘ψ ((ψ^-1)_#μ_γ) = [c,d] f ∘ (ψ_#(γ)) μ_ψ_#(γ) = ∫_ψ_#(γ) f. Therefore, γf = 1/2∫_γ f + ∫_γ f = 1/2∫_ψ_#(γ) f + ∫_ψ_#γ f = 1/2∫_ψ_#(γ) f + ∫_ψ_#(γ) f = ψ_#(γ)f which proves the last claim. Let (X, ) be a metric space. 
Let r, t ∈ [a,b] satisfy r <t. For γ∈ [a,b]; X we define its left-adjusted restriction γ|_[r,t^-] [r,t] → X by ∀ s ∈ [r,t] γ|_[r,t^-](s) γ(s), for s ∈r,t, γ(t^-), for s = t. It is worth noting that γ|_[r,t^-]∈ [r,t]; X and if γ is continuous at t, then γ|_[r,t^-] = γ|_[r,t]. Let (X, ) be a metric space, t ∈ (a,b] and γ∈ [a,b]; X. Then, for all s ∈a,t we have γ(s) = γ|_a,t^-(s) and V_γ(s) = V_γ|_a,t^-(s). Moreover, since γ|_[0]a,t^- is by definition left-continuous at t, then by Proposition <ref> so is V_γ|_[0]a,t^-. In particular V_γ|_[0]a,t^-=V_γ(t^-). Let (X, ) be a metric space and γ∈ [a,b]; X. If f X → [0,∞] is Borel, then for all t ∈ (a,b) we have γf = γ|_[a,t^-] f + f(γ(t)) + f(γ(t^-)) /2 V_γ(t) - V_γ(t^-) + γ|_[t, b] f. Moreover, if t is a point of continuity of γ, then γf = γ|_[a,t] f + γ|_[t, b] f. Let t ∈ (a,b). We have ∫_γ f = [a,b]f ∘γμ_γ = a, t f ∘γμ_γ + t f ∘γμ_γ + (t,b] f ∘γμ_γ. Therefore by Remark <ref>, we get a, t f ∘γμ_γ = a, t f ∘γ|_[0]a,t^-μ_γ|_[0]a,t^- = a, t f ∘γ|_[0]a,t^-μ_γ|_[0]a,t^- = ∫_γ|_[0]a,t^- f. For similar reasons we have (t,b] f ∘γμ_γ = (t, b] f ∘γ|_[0]t, b μ_γ|_[0] t, b = t, b f ∘γ|_[0]t, b μ_γ|_[0] t, b = ∫_γ|_[0] t, b f. Also t f ∘γμ_γ = f ∘γ(t) V_γ(t) - V_γ(t^-) , where we used the fact that μ_γ( t ) = lim_n →∞μ_γ t-1/n, t = lim_n →∞ V_γ(t) - V_γ t -1/n = V_γ(t) - V_γ(t^-). Therefore, for all t ∈ (a,b) we get ∫_γ f = ∫_γ|_[0]a,t^- f + f ∘γ(t) V_γ(t) - V_γ(t^-) + ∫_γ|_[0] t, b f. Using the previously found formula for γ and point a+b-t, we have ∫_γ f = ∫_γ|_[0]a,(a+b-t)^- f + f ∘γ(a+b-t) V_γ(a+b-t) - V_γ((a+b-t)^-) + ∫_γ|_[0] a+b-t, b f. By Lemma <ref> we get f ∘γ(a+b-t) V_γ(a+b-t) - V_γ((a+b-t)^-) = f ∘γ(a+b-t) V(γ) - V_γ(t^-) - (V(γ) - V_γ(t)) = f∘γ(t^-) V_γ(t) - V_γ(t^-) . Let Φ_t : [a,t] → [a+b-t,b] be defined as follows Φ_t (s)= s+b-t. Then for s ∈ [a,t] we have (Φ_t)_#γ|_ [a+b-t,b] (s) = γ|_ [a+b-t,b]s+b-t = γ s + b - t = γa + b - s +b - t ^- = γa+t-s^- = γ|_[a,t^-]a+t-s^- = γ|_[a,t^-](s). Hence, (Φ_t)_# ( γ|_ [ a+b-t, b]) = γ|_ [a,t^-] and by Proposition <ref> we have ∫_γ|_ [ a+b-t, b] f = ∫_γ|_ [a,t^-] f. Next, let Ψ_t [t,b] → [a, a+b-t] by defined as follows Ψ_t(s) a + s-t. For s ∈t, b we have a+s-t ∈a, a+b-t, hence (Ψ_t)_#γ|_ [a,(a+b-t)^-] (s) = γ|_ [a,(a+b-t)^-]Ψ_t(s) = γ|_ [a,(a+b-t)^-] a+ s - t = γ a+ s - t = γa + b - a+ s - t ^- = γb+t-s^- = γ|_[t,b]b+t-s^- = γ|_[t,b](s). Also (Ψ_t)_#γ|_ [a,(a+b-t)^-] (b) = γ|_ [a,(a+b-t)^-]Ψ_t(b) = γ|_ [a,(a+b-t)^-] a + b -t = γ a + b- t ^- = γ(t) = γ(t) = γ|_[t,b](t) = γ|_[t,b]( (t+b-b)^- ) = γ|_[t,b](b), where we have used part (b) of Lemma <ref>. Therefore, we conclude that (Ψ_t)_#γ|_ [a, (a+b-t)^-] = γ|_[t,b] and by Proposition <ref> we have ∫_γ|_ [a, (a+b-t)^-] f = ∫_γ|_[t,b] f. Therefore we can write ∫_γ f = ∫_γ|_ [t,b] f + f ∘γ(t^-) V_γ(t) - V_γ(t^-) + ∫_γ|_ [a,t^-] f, and finally, we conclude γf = 1/2∫_γ f + ∫_γ f = 1/2( ∫_γ|_[0]a,t^- f + f ∘γ(t) V_γ(t) - V_γ(t^-) + ∫_γ|_[0] t, b f . + . ∫_γ|_ [t,b] f + f ∘γ(t^-) V_γ(t) - V_γ(t^-) + ∫_γ|_ [a,t^-] f ) = γ|_[a,t^-] f + f(γ(t)) + f(γ(t^-)) /2 V_γ(t) - V_γ(t^-) + γ|_[t, b] f as claimed. Now, let t ∈ (a,b) be a point of continuity of γ. It is also a point of continuity of V_γ by Proposition <ref>. Hence, V_γ(t) - V_γ(t^-) = 0. Moreover, from the definition of γ|_[a,t^-] we have γ|_[r,t^-] = γ|_[r,t]. Therefore γf = γ|_[a,t^-] f + f(γ(t)) + f(γ(t^-)) /2 V_γ(t) - V_γ(t^-) + γ|_[t, b] f = γ|_[a,t] f + γ|_[t, b] f as needed. Let (X, ) be a metric space and γ∈([a,b]; X). 
If f X → [0,∞] is Borel and such that γ f < ∞, then: * If t ∈a,b and r_n → t^-, then lim_n →∞γ|_ [r_n, t^-] f =0, * If t ∈a,b and r_n → t^+, then lim_n →∞γ|_ [t, r_n^-] f + f(γ(r_n^-)) + f(γ(r_n))/2γ(r_n^-), γ(r_n) =0. First of all let us observe that if c, d ∈ [a,b] and c<d, then ∫_γ|_ [ a+b-d, a+b-c] f = ∫_γ|_ [c,d^-] f. Indeed, let Φ : [c,d] → [a+b-d,a+b-c] be defined as follows Φ (s)= a+b -c -d +s. Then (Φ)_# ( γ|_ [ a+b-d, a+b-c]) = γ|_ [c,d^-] and by Proposition <ref> we get (<ref>). Without loss of generality we can assume that r_n is strictly monotone. First, let us assume that t ∈a,b and r_n → t^-. Since lim_n→∞μ_γ((r_n, t))=μ_γ(⋂_n ∈r_n, t ) =0, lim_n→∞μ_γ(a+b-t, a+ b -r_n)=0, and γ f < ∞, having in mind (<ref>) we get 2 γ|_ [r_n, t^-] f = ∫_γ|_ [r_n, t^-] f + ∫_γ|_ [r_n, t^-] f = ∫_γ|_ [r_n, t^-] f + ∫_γ|_ [a+b-t, a+b-r_n] f = ∫_r_n, t f∘γ dμ_γ + ∫_a+b-t, a+ b -r_n f ∘γ dμ_γ 0. Next, in the same manner, assuming that t ∈a,b and r_n → t^+ we have 2γ|_ [t, r_n^-] f = ∫_γ|_ [t, r_n^-] f + ∫_γ|_ [t, r_n^-] f = ∫_γ|_ [t, r_n^-] f + ∫_γ|_ [a+b -r_n,a+ b-t] f = ∫_t, r_n f ∘γ dμ_γ + ∫_a+b-r_n, a+b-t f ∘γ dμ_γ 0. Finally, by Lemma <ref> we have ∑_n=2^∞ f(γ(r_n^-)) + f(γ(r_n))/2γ(r_n^-), γ(r_n)≤γ f < ∞, hence f(γ(r_n^-)) + f(γ(r_n))/2γ(r_n^-), γ(r_n)→ 0 as n →∞, which ends the proof. § TOPOLOGY ON TC Let (X, ) be a metric space. By [a,b]; X we will denote the space of Borel functions[ If we assume CH then by Remark <ref> every Borel map γ [a,b] → X has separable image.] γ [a,b] → X with separable image. On the space [a,b]; X we introduce an equivalence relation ∼ by ∀γ, γ' ∈ [a,b]; X γ∼γ' γ = γ ' almost everywhere. We define space [a,b]; X as a quotient [a,b]; X [a,b]; X / ∼. For simplicity, we will also refer to elements of [a,b]; X as Borel functions when we can refer to a representative of its equivalence class. On space [a,b]; X we define a metric _ [a,b]; X× [a,b]; X→ 0, ∞ by the formula[By Proposition <ref> the map t ↦min 1, γ(t), γ'(t) is measurable.] ∀γ, γ' ∈ [a,b]; X _γ, γ' [a,b] min 1, γ(t), γ'(t) t. Let (X, ) be a metric space, then metric _ metrizes convergence in the Lebesgue measure λ. Moreover: * Let γ, γ_n ∈ [a,b]; X for all n ∈. Suppose that γ_n →γ in [a,b]; X, _. Then there exists a subsequence (γ_n_k)_k such that γ_n_k→γ almost everywhere. * The space [a,b]; X, [_] is complete if and only if (X, ) is complete. * The space [a,b]; X, [_] is separable if and only if (X, ) is separable. Let γ, γ_n ∈ [a,b]; X for all n ∈. Suppose that γ_n →γ in [_]. Then for all > 0 λ( γ_n(t), γ(t) > ) ≤ [0]γ_n(t), γ(t) > min(1,) 1 t≤b-a/min(1,)_γ_n, γ 0, so γ_n →γ in measure. Now suppose γ_n →γ in measure. Fix > 0. Then _γ_n, γ≤ (b-a) + λ( γ_n(t), γ(t) > ) (b-a). As > 0 is arbitrary, we have that γ_n →γ in _. * If γ_n →γ in [a,b]; X, _, then γ_n, γ→ 0 in Lebesgue measure. Therefore, by the classical Riesz Theorem, there exists a subsequence such that for almost every t ∈ [a,b] we have γ_n_k(t) , γ(t)→ 0. * Now, let assume that X is complete. Let (γ_n)_n be a Cauchy sequence in [a,b]; X. There exists strictly increasing sequence n_k such that λ(γ_n_k(t), γ_n_k+1(t) > 1/2^k ) < 1/2^k. For k ∈ℕ we define A_k =⋃_j=k^∞γ_n_k(t), γ_n_k+1(t) > 1/2^k, A = ⋂_k=1^∞A_k. Then, λ(A_k) → 0 and λ(A)=0. Let us observe that for t∈ [a,b] ∖ A we have that γ_n_j(t) is Cauchy in X. Indeed, if t ∉ A_k and i ≥ j ≥ k, then we have γ_n_j(t), γ_n_i(t)≤ 1/2^j-1. Since X is complete, we can define the following map γ: [a,b]∖ A → X, γ(t)=lim_j→∞γ_n_j(t). 
Let us observe that for every closed set D ⊂ X we have[(D)_1/l=⋃_x∈ DB(x,1/l)] γ^-1(D)= ([a,b]∖ A) ∩⋂_l=1^∞⋃_k=1^∞⋂_j=k^∞γ_n_j^-1((D)_1/l). Therefore, the function γ̃:[a,b] → X defined by γ̃(t) γ(t), for t ∈ [a,b] ∖ A, x̃, for t ∈ A, where x̃ is a fixed point from X, is a Borel map. Moreover, for a fixed >0 and k ∈ℕ, by (<ref>) we have ([a,b]∖ A) ∩γ_n_j(t), γ̃(t) > ⊂ A_k, for j ≥ k such that /2 > 1/2^j. Therefore, since λ(A_k) → 0, we have that γ_n_j→γ̃ in _. Hence, since γ_n is a Cauchy sequence, γ_n →γ̃ in _. Next, let us suppose that the space [a,b]; X, [_] is complete. We shall prove that (X, ) is complete. For this purpose we fix a Cauchy sequence y_n in X. Now, we define the sequence f_n:[a,b] → X as follows f_n(t)=y_n. It is obvious that f_n is a Cauchy sequence in [a,b]; X, [_]. Therefore, there exists f∈ [a,b]; X, [_] such that f_n → f in _. Furthermore, by i) there exists a subsequence f_n_j and a set of full measure D⊂ [a,b] such that (f_n_j(t),f(t)) → 0 for t∈ D. Let t_0 ∈ D, then in particular (y_n_j,f(t_0)) → 0. Therefore, since y_n is a Cauchy sequence, we have y_n → f(t_0) in (X, ). * Let us assume that X is separable and let S be a countable dense subset. For n ∈ and i ∈ [n] denote A_i,ni-1n(b-a), in(b-a) . We will show that D=⋃_n=1^∞D_n where D_n t ↦∑_i=1^n x_i A_i,n(t) x_i ∈ S for all i ∈ [n] is a countable dense subset in [a,b]; X. It is clear that D is countable. Fix > 0 and γ∈ [a,b]; X. By Lusin's Theorem[Separabilty of X allows us to use the Lusin Theorem, see <cit.>], there exists a compact F ⊆ [a,b] such that [a,b] ∖ F ≤/3 and γ|_F is continuous. As F is compact, γ|_F is absolutely continuous and there exists δ > 0 such that if s - t≤δ, then γ(s), γ(t) ≤/4. Let n ∈ be such that δ≥ (b-a)/n. Then A_i,n≤δ. For every i ∈ [n], if F ∩ A_i,n∅, let x_i ∈ S be such that Bx_i, /3(b-a)∩γ F ∩ A_i,n∅; otherwise, let x_i be arbitrary. Let γ' ∈ [a,b]; X be defined by the formula γ'(t) ∑_i=1^n x_i A_i,n(t). Then γ' ∈ D. Furthermore, if t ∈ F∖b, then γ'(t), γ(t) ≤2/3(b-a). Indeed, for such a t there exists i ∈ [n] such that t ∈ A_i,n∩ F. Thus, there exists t_0 ∈ A_i,n∩ F such that γ(t_0) ∈Bx_i, /3(b-a), and since A_i,n≤δ we have t_0 - t≤δ. Then γ'(t), γ(t) = x_i, γ(t) ≤ x_i, γ(t_0) + γ(t_0), γ(t) ≤2/3(b-a). Finally, _γ, γ' = [a,b] min 1, γ(t), γ'(t) t = [a,b] ∖ F min 1, γ(t), γ'(t) t + F min 1, γ(t), γ'(t) t ≤ [a,b] ∖ F 1 t + F 2/3(b-a)t ≤, which proves that D is dense in [a,b]; X. Next, we shall show the converse implication. For this we suppose that (X, ) is not separable. Hence, there exists δ >0 and uncountable δ-separated set X_δ⊂ X. Then Y_δ :={γ : [a,b] → X: γ≡ x, where x ∈ X_δ} is uncountable min(1, δ)-separated set in [a,b]; X. Therefore, [a,b]; X is not separable. By Lemma <ref> the canonical immersion T [a,b]; X → [a,b]; X is injective. Hence, the map _TC: [a,b]; X × [a,b]; X → [0, ∞) defined by the formula ∀γ, γ' ∈ [a,b]; X _TCγ, γ'[_] T( γ), T( γ') is a metric on [a,b]; X. Let (X, ) be a metric space. For γ∈ [a,b]; X we define its essential variation [a,b]; X→ 0, ∞ by the formula ∀γ∈ [a,b]; X (γ) inf V(γ') γ' ∼γ. Let (X, d) be a metric space. Let γ∈ [a,b]; X be such that there exists γ∈ [a,b]; X such that γ = γ almost everywhere. Then γ = V( γ). Of course we have γ≤ V( γ). Next, let γ' ∈ [a,b]; X be such that γ = γ' almost everywhere. We also have that γ' = γ almost everywhere, so we have set D of full measure in [a,b] (hence, dense) such that γ' = γ in D. 
Since D is dense in [a,b], there exists a normal sequence (t^n_i)_i=0^m_n_n = (Δ_n)_n of partitions of [a,b] such that for all n ∈ we have Δ_n ∖a,b⊆ D. We have Δ_n → 0, so t^n_1→ t^n_0 = a and t^n_m_n-1→ t^n_m_n = b. Hence, since γ∈ [a,b]; X, we have γ t^n_1, γ(a) 0 and γb , γ( t^n_m_n-1) 0. By Proposition <ref> we have that V^Δ_n(γ) → V(γ) as n →∞, and therefore V(γ') ≥lim sup_n →∞ V^Δ_n(γ') = lim sup_n →∞γ' t^n_1, γ'(a) + ∑_i=2^m_n-1γ( t^n_i ), γ t^n_i-1 + γ'b , γ'( t^n_m_n-1) = lim sup_n →∞γ' t^n_1, γ'(a) - γ t^n_1, γ(a) + V^Δ_n(γ) - γb , γ( t^n_m_n-1) + γ'b , γ'( t^n_m_n-1) = V(γ) + lim sup_n →∞γ' t^n_1, γ'(a) + γ'b , γ'( t^n_m_n-1) ≥ V(γ). Hence, V(γ') ≥ V(γ) and in this way we have γ≥ V( γ). Let (X, ) be a complete metric space. Let γ∈[a,b];X be such that ( γ) < ∞, then there exists γ∈ [a,b]; X such that γ = γ almost everywhere. There exists γ' ∈ [a,b]; X such that ( γ) ≤ V( γ' ) and γ = γ' almost everywhere. By Lemma <ref> the set [a,b] ∖ C_γ' of points of discontinuity of γ' is at most countable. Since (X, ) is complete, by Lemma <ref> the quantities γ' |_C_γ' (t^+) and γ' |_C_γ' (b^-) are well defined. We define function γ [a,b] → X by the formula ∀ t ∈ [a,b] γ(t) γ' |_C_γ' (t^+) if t ∈a,b, γ' |_C_γ' (b^-) if t= b. Then, by Lemma <ref> and by simple considerations we have γ∈ [a,b]; X . Moreover, since γ|_C_γ' = γ' |_C_γ' and C_γ' is of full measure in [a,b], then γ = γ almost everywhere. Let (X, d) be a metric space and M ≥ 0. Let (γ_n)_n be a sequence of elements of [a,b]; X such that γ_n →γ in [_] and ( γ_n) ≤ M for all n ∈. If (X, ) is complete, or γ has a right-continuous representative, then (γ) ≤ M. Let > 0, then there exists a sequence (γ_n) of elements of [a,b]; X such that for all n ∈ V(γ_n) ≤ M + and γ_n = γ_n almost everywhere. If γ has a right-continuous representative, then let us denote it by γ. Otherwise, let γ be any representative of of γ. There exists a subsequence (γ_n_k)_k such that γ_n_k→γ almost everywhere. Let D be a set of full measure in [a,b] such that γ_n_k→γ everywhere in D. Let us also require that b ∉ D. Let v_k V_γ_n_k. Then (v_k)_k is a sequence of non-decreasing functions bounded by M +. By Helly's Selection Theorem there exists a non-decreasing function v [a,b] → [0, M+] and a subsequence (v_k_m)_m such that v_k_m→ v everywhere. For all m ∈ and s,t ∈ D such that s ≤ t we have γ_n_k_m(s), γ_n_k_m(t) ≤ v_k_m(t) - v_k_m(s). Hence, by passing with m →∞, for s,t ∈ D such that s ≤ t we have γ(s), γ(t) ≤ v(t) - v(s). Let t ∈a,b∖ D and (r_n)_n ⊂ D such that r_n → t^+, then since v ∈ [a,b]; [0,∞) by the above inequality we have (γ(r_n))_n is a Cauchy sequence which is convergent either because γ is right-continuous, or (X, ) is complete. Hence, γ|_D(t^+) exists for t ∈a,b∖ D. Therefore, we can define γ̂a,b→ X by the formula ∀ t ∈a, b γ̂(t) γ(t), if t ∈ D, γ|_D(t^+), if t ∈ [a,b) ∖ D, and let us define v̂ [a,b] → by the formula ∀ t ∈ [a,b] v̂(t) v(t), if t ∈ D, v |_D(t^+), if t ∈ a, b∖ D, M+2, if t = b. Let us note that v̂(b)≥v̂(b^-) + as v was bounded by M + from above. Then, since D is dense, for all t, s ∈a, b such that s < t we have γ̂(s), γ̂(t) ≤v̂(t) - v̂(s). Since v̂ is non-decreasing and bounded, the limit v̂(b^-) exists. Hence, there exists δ > 0 such that, if r, s ∈ (b-δ, b) and s < r, then γ̂(r), γ̂(s) ≤v̂(r) - v̂(s) ≤. Hence, if we define γ̂(b) = γ̂(t), where t ∈ (b- δ, b) is arbitrary, then for all r, s ∈ (b-δ, b] such that s < r we have γ̂(r), γ̂(s) ≤v̂(r) - v̂(s). Note that we used the fact that v̂(b)≥v̂(b^-) +. 
Finally, for r, s ∈ [a,b] such that s < r we have γ̂(r), γ̂(s) ≤v̂(r) - v̂(s). Since γ̂ = γ on D (which is a set of full measure) and γ ia a representative of γ, we have that γ̂ also is a representative of γ. Next, let Δ = (t_i)_i^m be a partition of [a,b], then V^Δ(γ̂) = ∑_i=1^m γ̂(t_i), γ̂(t_i-1) ≤∑_i=1^m v̂(t_i) - v̂(t_i-1) = v̂(t_m) - v̂(t_0) = v̂(b) - v̂(a) ≤ M + 2 . This shows that V(γ̂) ≤ M + 2 and thus γ≤ M + 2. As > 0 was arbitrary, we have γ≤ M as needed. Let (X, ) be a metric space. Then V [a,b]; X→ 0, ∞ is lower semicontinuous. In the first step we assume that (X, d) is complete. Suppose that γ_n →γ in [a,b]; X, [_TC]. Let T [a,b]; X→ [a,b]; X be the canonical immersion. Then T(γ_n) → T(γ) in [a,b]; X, [_]. By Lemmata <ref> and <ref> we have lim inf_n→∞ V(γ_n) = lim inf_n→∞ T(γ_n)≥ T(γ) = V(γ) which, as (γ_n)_n was arbitrary, is equivalent to lower semicontinuity of V. Now, in the second step we we consider an arbitrary metric space (X, d). Let (X̂, d̂) be a completion of (X, d) and let i : X →X̂ be an isometric embedding. Then, I: [a,b]; X→ [a,b]; X̂ defined as I(γ)(t) =i(γ(t)) is a continuous map, and by the first step V_X̂ [a,b]; X̂→ 0, ∞ is lower semicontinuous. Therefore, V= V_X̂∘ I is lower semicontinuous. Let (X, ) be a metric space, then: * for (s_1, …, s_n) ∈ [a,b]^n and a Borel map f X^n → the function [a,b]; X∋γ↦ fγ(s_1), …, γ(s_n) is Borel, * for every Borel and bounded from below or from above map f X → the function [a,b]; X∋γ↦∫_γ f is Borel, * for every Borel and bounded from below or from above map f X → the function [a,b]; X∋γ↦γ f is Borel. First of all we shall prove the following lemma. Let (X, ) be a metric space and let C be a linear subspace of Borel functions on X with values in ℝ such that * C contains all (bounded) Lipschitz functions on X, * C is closed upon pointwise limits of (equibounded) functions. Then C contains all (bounded and) Borel functions on X. The strategy of the proof is based on the proof of <cit.>. First of all we shall show that C contains characteristic functions of all open sets. It is clear ∅, X∈ C. Therefore, let us fix nonempty open set U ⊆ X such X ∖ U ≠∅. For such U we define the following sets F_n x ∈ U x, X ∖ U≥ 1/n . Then (F_n)_n is an increasing sequence of closed sets such that U = ⋃_n=1^∞ F_n. Let f_n X → be a sequence of Lipschitz and bounded functions given by the formula ∀ x ∈ X f_n(x) min 1, n x, X ∖ U. Since f_n ≡ 1 on F_n, f_n ≡ 0 on X ∖ U and f_n X → [0,1], then f_n → U pointwise. In this way we have proved that U ∈ C. Next, let 𝒜 B∈X B∈ C. We have shown that family 𝒜 contains a π-system of open sets. Let us notice that if B ∈𝒜, then X ∖ B = 1 - B∈ C as 1 is Lipschitz (and bounded). Hence, X ∖ B ∈𝒜. Moreover, if (B_n) is a sequence of disjoint elements of 𝒜, then ⋃_n=1^∞ B_n = lim_n →∞∑_k=1^n B_k, so ⋃_n=1^∞ B_n is a pointwise limit of (equibounded by 1) sequence of elements of C. Thus, ⋃_n=1^∞ B_n∈ C and ⋃_n=1^∞ B_n ∈𝒜. We have shown that 𝒜 is a λ-system containing a π-system of open sets. Hence, by the Dynkin Lemma 𝒜 contains all Borel sets. Now, let f X → be bounded Borel map. There exists M ∈ such that f≤ M everywhere. For n ∈ and i ∈0, …, n, let A^n_i f^-1 M -1 + 2i/n , M -1 + 2(i+1)/n and f_n ∑_i=0^n M -1 + 2i/n A^n_i. Then (f_n)_n is a sequence of functions (equibounded by M), which are linear combinations of indicators of Borel sets, so f_n ∈ C for all n ∈. Moreover, we have f_n - f ≤ 2/n everywhere. Therefore, f_n → f everywhere and we have f ∈ C. 
Thus, we have proved that C contains all bounded Borel functions f X →. In the case when C is closed upon pointwise limits of not necessarily equibounded sequences of elements of C, then, if f X → is Borel, we have that f_n max -n, min n, f is a sequence of elements of C such that f_n → f pointwise. Hence, C contains all Borel functions f X →. (a) Since the map [a,b]; X∋γ↦ fγ(s_1), …, γ(s_n) is a pointwise limit of maps [a,b]; X∋γ↦ f_kγ(s_1), …, γ(s_n), where f_k=max(-k, min(k,f)), we can assume that f X^n →. Let 𝖳𝖾𝗌𝗍^nX f [a,b]^n × X^n →f≤ 1 and Lip(f) ≤ 1 . Then, for all f ∈𝖳𝖾𝗌𝗍^n(X), the map ℳ [a,b]; X ∋γ↦ [a,b]^n f s_1, …, s_n, γ(s_1), …, γ(s_n) s is Lipschitz, hence Borel. Indeed, for all γ, γ' ∈ℳ [a,b]; X we have[The measurability of the map ( s_1, …, s_n)↦ f s_1, …, s_n, γ(s_1), …, γ(s_n) follows from Proposition <ref>.] [a,b]^n f s_1, …, s_n, γ(s_1), …, γ(s_n) s - [a,b]^n f s_1, …, s_n, γ'(s_1), …, γ'(s_n) s ≤ [a,b]^n f s_1, …, s_n, γ(s_1), …, γ(s_n) - f s_1, …, s_n, γ'(s), …, γ'(s_n) s = [a,b]^n min 2, f s_1, …, s_n, γ(s_1), …, γ(s_n) - f s_1, …, s_n, γ'(s_1), …, γ'(s_n) s ≤ [a,b]^n min 2, ∑_i=1^n γ(s_i), γ'(s_i) s ≤∑_i=1^n [a,b] min2, γ(s_i), γ'(s_i) s_i ≤ 2n _ℳγ , γ' . Let l_1, …, l_n, r_1, …, r_n ∈ [a,b] satisfy l_i < r_i for all i ∈ [n]. Fix f ∈𝖳𝖾𝗌𝗍^n(X) and define f_m [a,b]^n × X^n → by f_m(s,x) f(s,x) g_m(s), where g_m [a,b]^n → [0,1] is a smooth function such that g_m ≡ 1 on _i=1^n [l_i, r_i] and g_m ≡ 0 on [a,b]^n ∖_i=1^n [l_i -1/m, r_i +1/m] Since each of f_m is Lipschitz and bounded by 1, there is c_m > 0 such that c_m f_m∈𝖳𝖾𝗌𝗍^n(X). In consequence, ℳ [a,b]; X ∋γ↦ [a,b]^n f_m s_1, …, s_n, γ(s_1), …, γ(s_n) s are continuous, hence Borel. Since f_m are bounded by 1 and the pointwise limit of f_m is a function (s, x) ↦ f(s, x) _i=1^n [l_i, r_i] (s), by the Lebesgue dominated convergence theorem we get that ℳ [a,b]; X ∋γ↦_i=1^n [l_i, r_i] f s, γ(s) s = [a,b]^n f s_1, …, s_n, γ(s_1), …, γ(s_n) _i=1^n [l_i, r_i] (s) s = lim_n →∞ [a,b]^n f_m s_1, …, s_n, γ(s_1), …, γ(s_n) s is Borel. Furthermore, since embedding [a,b]; X↪ℳ [a,b]; X is continuous, we get that TC [a,b]; X ∋γ↦_i=1^n [l_i, r_i] f s, γ(s) s is Borel. Fix l_1, … l_n ∈a,b and let r_i,m∈l_i, 1 be such that r_i, m→ l_i^+. Then for a fixed γ∈𝖳𝖢 [a,b]; X we have fl_1, …, l_n, γ(l_1), …, γ(l_n) - __i=1^n [l_i, r_i,m] f s_1, … s_n, γ(s_1), …, γ(s_n) ds ≤__i=1^n [l_i, r_i,m] fl_1, …, l_n, γ(l_1), …, γ(l_n) - fs_1, …, s_n, γ(s_1), …, γ(s_n) ds ≤__i=1^n [l_i, r_i,m] ∑_i=1^n (s_i - l_i) + γ(s_i), γ(l_i) ds 0, where we have used the right continuity of γ. We conclude that [a,b]; X∋γ↦ f r_1, …, r_n , γ(r_1), …, γ(r_n) , where r_1, …, r_n ∈a,b, is Borel as a pointwise limit of Borel functions. Since elements of [a,b]; X are left-continuous at b, for all tuples (r_i)_i=1^n such that r_i ∈ [a,b] we can select l_i, m∈a,b such that l_i, m→ r_i^- if r_i = b and l_i, m = r_i otherwise. Then f(l_1, m, …, l_n, m, γ(l_1, m), …, γ(l_n, m)) → f( r_1 , …, r_n, γ( r_1 ), …, γ( r_n)) for all γ∈𝖳𝖢 [a,b]; X and [a,b]; X∋γ↦ f r_1 , …, r_n, γ( r_1 ), …, γ( r_n) is Borel. This shows that [a,b]; X∋γ↦ f r_1 , …, r_n, γ( r_1 ), …, γ( r_n) are Borel for all f ∈𝖳𝖾𝗌𝗍^n(X) and r_1, …, r_n ∈ [a,b]. By scaling f by a constant we get the same result for any Lipschitz and bounded f. Since c ↦max( -m, min( m, c)) are 1-Lipschitz, by composing f with the above functions, we can obtain the same result for any Lipschitz function. Finally, for each s = (s_1, …, s_n) ∈ [a,b]^n we define C_s f X^n → Borel [a,b]; X∋γ↦ fγ(s_1), …γ(s_n) is Borel. 
Since every Lipschitz f X^n → can be treated as a Lipschitz function f [a,b]^n × X^n. by (s,x) ↦f(s, x) = f(x), we get that C_s contains all Lipschitz functions f X^n →. Therefore, since X^n is a metric space and C_s is a subspace of Borel functions, by Lemma <ref> we have that C_s contains all Borel functions. (b) First of all we shall prove the following lemma. Let (X, ) be a metric space. Let[By (X) we denote the set of bounded and continuous functions on X.] f ∈(X). Let t^n_i_i=1^m_n = Δ_n _n be a normal sequence of partitions of [a,b]. Then for every γ∈ [a,b]; X we have ∫_γ f = lim_n →∞∑_i=1^m_nγ( t^n_i), γ( t^n_i-1 ) f ∘γ (t^n_i). For n ∈ let A_i t^n_i-1, t^n_i , for i ∈ [m_n], and A_0 a, then for every n ∈ and every i ∈ [m_n] we have μ_γA_i = μ_γ t^n_i-1, t^n_i = V_γ( t^n_i) - V_γ( t^n_i-1 ) < ∞. Next, we define the sequence of simple functions g_n [a, b] → as follows g_n ∑_i=0^m_n f ∘γ t^n_i A_i . Since f ∈(X), there exists M > 0 such that f≤ M everywhere in X. Therefore, g_n ≤ M everywhere. Furthermore, g_n → f ∘γ pointwise in [a,b]. Indeed, let us note that for all n ∈ and all i ∈0, …, m_n we have that g_n( t^n_i ) = f ∘γ( t^n_i ). In particular, g_n(a) = f ∘γ(a) and g_n(b) = f ∘γ(b). Let t ∈ (a,b), then t is a point of right-continuity of γ and, since f is continuous, it is also a point of right-continuity of f ∘γ. For each n ∈, let t_n^≤min s ∈Δ_n t ≤ s . Since Δ_n _n is a normal sequence of partitions, t_n^≤→ t^+ (where we allow that t^≤_n = t). Thus, g_n(t) = g_n( t_n^≤) = f ∘γ( t_n^≤) f ∘γ(t). Now, since M ∈ L^1( μ_γ), by the Lebesgue Dominated Convergence Theorem we have ∑_i=1^m_nμ_γ t^n_i-1, t^n_i f ∘γ (t^n_i) = [a,b] g_n μ_γ [a,b] f ∘γμ_γ = ∫_γ f. Finally, let us notice [a,b] g_n μ_γ - ∑_i=1^m_nγ( t^n_i), γ( t^n_i-1 ) f ∘γ (t^n_i) = ∑_i=1^m_nμ_γ t^n_i-1, t^n_i - γ( t^n_i), γ( t^n_i-1 ) f ∘γ (t^n_i) ≤ M ∑_i=1^m_nμ_γ t^n_i-1, t^n_i - γ( t^n_i), γ( t^n_i-1 ) = M ∑_i=1^m_n V_γ( t^n_i ) - V_γ( t^n_i-1 ) - γ( t^n_i), γ( t^n_i-1 ) = M V_γ( t^n_m_n) - V_γ( t^n_0) - V^Δ_n(γ) = M V(γ) - V^Δ_n(γ) 0. We conclude that ∫_γ f = lim_n →∞ [a,b] g_n μ_γ = lim_n →∞∑_i=1^m_nγ( t^n_i), γ( t^n_i-1 ) f ∘γ (t^n_i) as needed. Let us define the following family C f X → f is Borel and bounded and [a,b]; X∋γ↦∫_γ f is Borel. Let us observe that [a,b];⊆ C. Indeed, let f ∈ X; and t^n_i_i=0^m_n = (Δ_n)_n be a normal sequence of partitions. Then, from (a) we have that functions [a,b]; X∋γ↦∑_i=1^m_n fγ t^n_iγ t^n_i, γ t^n_i-1 are Borel. By Lemma <ref> for all γ∈ [a,b]; X we have ∫_γ f = lim_n →∞∑_i=1^m_n fγ t^n_iγ t^n_i, γ t^n_i-1. Hence, γ↦∫_γ f is Borel as a pointwise limit of a sequence of Borel functions. Now, we will show that C is closed upon pointwise limits of equibounded sequences. Let (f_n)_n be a sequence of elements of C such that there exists M ≥ 0 such that f_n → f pointwise and for all n ∈ we have f_n ≤ M. Let γ∈ [a,b]; X, then by the Lebesgue Dominated Convergence Theorem, we have ∫_γ f_n = [a,b] f_n ∘γμ_γ [a,b] f ∘γμ_γ = ∫_γ f. Therefore, the map [a,b]; X∋γ↦∫_γ f is a pointwise limit of a sequence of Borel functions, hence it is Borel. Thus, f ∈ C. We have shown that X; ⊆ C and that C is bounded upon pointwise limits of equibounded sequences. Hence, by Lemma <ref> family C contains all Borel and bounded functions. Finally, if f X → is Borel and bounded from above or from below, then sequence (f_n)_n defined by f_n max -n, min n, f is a sequence of bounded Borel functions such that f_n → f pointwise. 
Moreover, since f is bounded from above or from below, there exists N ∈ such that f ≤ N or -N ≤ f and subsequence f_n _n ≥ N is monotonic. Therefore, by the Lebesgue Monotone Convergence Theorem, we have ∫_γ f_n = [a,b] f_n ∘γμ_γ [a,b] f ∘γμ_γ = ∫_γ f. Thus, function [a,b]; X∋γ↦∫_γ f is a pointwise limit of a sequence of Borel functions, hence it is Borel. This proves the claim. (c) Since γf = 1/2∫_γ f + ∫_γ f and having in mind (b), it is sufficient to show · [a,b]; X→ [a,b]; X is Borel. We will prove that, in fact, this function is a _TC-isometry. Every element of [a,b]; X is continuous on a set of full measure. Therefore, for γ, γ' ∈ [a,b]; X we have that γ(t) = γ a + b - t , γ'(t) = γ' a + b - t for almost all t ∈ [a,b]. Thus, _ TCγ, γ' = [a,b] min 1, γ(t), γ'(t) t = [a,b] min 1, γ( s), γ'(s ) s = _ TCγ, γ ' , so _ TCγ, γ' = _ TCγ, γ '. § FUNCTION SPACES §.§ -Newtonian Space Within this subsection we introduce the -Newtonian spaces which are a modification of the Newtonian spaces introduced by Shanmugalingam <cit.>. To perform this modification, we replace the family of rectifiable curves with the family [0,1];X and the integral along the curve ∫_γ with the symmetrized integral γ, to define the -Newtonian space X. The theory of the p-modulus can be recreated with almost no changes. However, the results for the p-weak upper gradients require greater care due to the possible discontinuity of the considered curves. Let (X, ) be a metric space and let be a Borel measure on X. We denote the family of non-trivial test curves (i.e. ones with non-zero variation), by γ∈ [0,1]; X V(γ) > 0 . Note that by Corollary <ref> we have that V [a,b]; X→0, ∞ is lower semicontinuous, hence Borel. Thus, is a Borel subset of [a,b]; X. For Γ⊆ we define F(Γ) ρ X → 0, ∞ρ is Borel and γρ≥ 1 for all γ∈Γ. Then, for p ∈ 0, ∞ we define the p-modulus of families of test curves ^p 2^→0, ∞ by the formula ∀Γ⊆ ^p( Γ) inf_ρ∈ F(Γ) ρ^p_L^p(). We shall say that property P holds for ^p almost every γ∈, if the set of γ∈ for which P does not hold has a p-modulus of 0. ^p is an outer measure on . Moreover, for Γ⊆ we have ^p( Γ ) = 0 if and only if there exists ρ∈ F( Γ) such that ρ∈ L^p( ) and γρ = ∞ for all γ∈Γ. First of all we shall prove that ^p is an outer measure. Let us note that ^p(∅) = 0 as 0 ∈ F( ∅). If Γ_1 ⊆Γ_2 ⊆, then F(Γ_2) ⊆ F(Γ_1). Hence, ^p(Γ_1) ≤^p(Γ_2). For n ∈ let Γ_n ⊆. Then, for a fixed > 0 and for all n ∈ there is ρ_n ∈ F( Γ_n) such that ρ_n ^p_L^p() ≤^p( Γ_n ) + 2^-n. Let us define ρ∑_n=1^∞ρ_n^p ^1/p. Then, for all n ∈ we have ρ≥ρ_n, so ρ∈ F( ⋃_ n ∈Γ_n ). Therefore, ^p ⋃_ n =1^∞Γ_n ≤ρ^p_L^p() = X ∑_n=1^∞ρ_n^p = ∑_n=1^∞ X ρ_n^p ≤∑_n=1^∞^p( Γ_n ) + 2^-n = ∑_n=1^∞^p( Γ_n ) + . Since > 0 is arbitrary, we have ^p ⋃_ n =1^∞Γ_n ≤∑_n=1^∞^p( Γ_n ) as needed. Now, we shall prove the characterization of sets of p-modulus 0. Let ρ∈ F(Γ) be such that ρ∈ L^p( ) and γρ = ∞ for all γ∈Γ. Then ρ /n ∈ F(Γ) for all n ∈. Hence, ^p(Γ) ≤ρ / n ^p_L^p() = 1/n^pρ^p_L^p()→ 0 as n →∞. In order to prove the other direction, let ^p(Γ) = 0. Then for all n ∈ there is ρ_n ∈ F(Γ) such that ρ_n _L^p()≤ 2^-n. Then ρ∑_n=1^∞ρ_n ∈ F(Γ), we have ρ_L^p()≤∑_n=1^∞ρ_n _L^p()≤ 1, and γρ = ∑_n=1^∞∫_γ^S ρ_n = ∞ for all γ∈Γ, as claimed. If ρ∈ L^p() is Borel, then ∫_γ^S ρ < ∞ for ^p almost every γ∈. Let (X, ) be a metric space, be a Borel measure on X and f_n, f X → be Borel and such that f_n → f in L^p(), where p ∈1, ∞. Then, there exists a subsequence (f_n_k)_k such that ^p-a.e.γ f_n_k - f→ 0. 
Let (f_n_k)_k be a subsequence of (f_n)_n such that ∀ k ∈ f_n_k - f _L^p()^p ≤ 2^-k(p+1). For k ∈ let us define ρ_k f_n_k - f and Γ_k γ∈γρ_k ≥ 2^-k. Then, for all k ∈ we have 2^k ρ_k ∈ F( Γ_k ) and ^p( Γ_k ) ≤ 2^k ρ_k _L^p()^p = 2^pkρ_k _L^p()^p ≤ 2^pk· 2^-k(p+1) = 2^-k. Since Γγ∈γρ_k ↛0 satisfies Γ⊆⋃_k=m^∞Γ_k for all m ∈, we have ^p( Γ ) ≤∑_k=m^∞^p( Γ_k ) ≤∑_k=m^∞ 2^-k = 2^-m+1 0. Hence ^p( Γ ) = 0 and (f_n_k)_k is the sought subsequence. Let (X, ) be a metric space and f X →. We shall say that a Borel function g X →0, ∞ is an upper S-gradient of f if ∀γ∈ f ∘γ(1) - f ∘γ(0) ≤∫_γ^S g and a p-weak upper S-gradient of f, if ^p-a.e. f ∘γ(1) - f ∘γ(0) ≤∫_γ^S g. We will denote the family of upper S-gradients of f by f and the family of p-weak upper S-gradients of f by f. Let (X, ) be a metric space, be a Borel measure on X and f X → be a measurable map. If g, g' X → [0,∞] are Borel functions such that g is a p-weak upper S-gradient of f and g' = g almost everywhere, then g' also is a p-weak upper S-gradient of f. Let us consider a sequence (ρ_n)_n of Borel functions defined by ρ_n g - g'. Then ρ_n → 0 in L^p(). Hence, by the Fuglede Lemma (Lemma <ref>) we have a subsequence (ρ_n_k)_k such that ^p-a.e. γ g - g' = γρ_n_k 0. Hence, γ g - g' = 0 for ^p almost every γ∈. Therefore, f ∘γ( b) - f ∘γ(a) ≤γ g ≤γ g' + γ g- g' = γ g' for ^p almost every γ∈. Consider the space (X, |.|, λ), where X = [0,1], |.| is the Euclidean metric, and λ is the Lebesgue measure and let f X → be defined by f ∞1. Then there is no Borel g X →0,∞ such that g ∈ L^p(X) and g is a p-weak upper S-gradients of f, even though f = 0 -almost everywhere and every nonnegative Borel function g is an upper S-gradient of the zero function. First, let us note that if g X →0, ∞ is Borel, then γ g ≥ 0 for any γ∈, hence the latter claim follows. Now, let us suppose that g X →0,∞ is a p-weak upper S-gradient of f such that g ∈ L^p(X). Consider γ [0,1] → X given by γ(t) = t for t ∈ [0,1]. Clearly, γ∈. Furthermore f ∘γ(1) - f ∘γ (0) = ∞, but γ g = ∫_0^1 g(t) dt ≤ g _L^p(X) < ∞. Hence, f ∘γ(1) - f ∘γ (0) > γ g. Therefore, ^p(γ) = 0 as g is a p-weak upper S-gradient of f. By Proposition <ref> there is ρ∈ L^p(X) such that γρ = ∞. However, γρ = ∫_0^1 ρ(t) dt ≤ρ_L^p(X) < ∞, so such ρ cannot exist. In consequence, g is not a p-weak upper S-gradient of f. Let (X, ) be a metric space and be a Borel measure on X. Let p ∈1, ∞. We define a space ^1,p(X) f: X →ℝ∫_X |f|^p d< ∞ and L^p() ∩ f ≠∅ . On space ^1,p(X) we define a seminorm ∀ f ∈^1,p(X) f _^1,p(X) f _L^p() + inf_ g g _ L^p( ) , where the infimum is taken over all p-weak upper S-gradients g of f. Also, we define an equivalence relation ∼ by ∀ f, f'∈^1,p(X) f ∼ f' f - f' _^1,p(X) = 0. Finally, we define the TC-Newtonian space, as the quotient space ^1,p(X) ^1,p(X) / ∼. On this space · _^1,p(X) is a norm. Let (X, ) be a metric space with Borel measure . For a function f:X → and g ∈ L^p() ∩ f let us define Γ_1 γ∈γ g = ∞, Γ_2 γ∈∃ s, t ∈ [0,1] such that s<t, γ(s) γ(t^-) and f (γ(s)) - f( γ(t^-)) > γ|_[s,t^-] g , Γ_3 γ∈∃ t ∈ [0,1] such that γ(t^-) γ(t) and f (γ(t^-)) - f( γ(t)) > g (γ(t^-)) + g( γ(t))2γ(t^-), γ(t). Then ^p(Γ_1 ∪Γ_2 ∪Γ_3) = 0. First, let us notice that since g ∈ L^p(), then ^p( Γ_1)=0 by Proposition <ref>. Since g is a p-weak upper S-gradient of f, the family Γ' γ∈ f (γ(1)) - f( γ(0)) > γ g has a p-modulus of 0. In consequence, by Proposition <ref> there is ρ∈ L^p(), ρ≥ 0 such that γρ = ∞ for all γ∈Γ'. Fix γ∈Γ_2 ∖Γ_1. 
There are s, t ∈ [0,1] such that γ(s) γ(t^-) and f (γ(s)) - f( γ(t^-)) > γ|_[s,t^-] g. Let us define Φ :[0,1] → [s,t] as follows Φ(x)=s+(t-s)x. Then by Proposition <ref> the map γ̃ = Φ_# ( γ|_ [ s, t^-]) belongs to [0,1]; X and we have f (γ̃(0)) - f(γ̃(1)) = f (γ(s)) - f( γ(t^-)) > γ|_[s,t^-] g= γ̃ g. Thus, γ̃∈Γ' and γ̃ρ = ∞. Therefore, γρ≥γ|_[s,t^-]ρ = γ̃ρ = ∞, hence ^p(Γ_2 ∖Γ_1) = 0 by Proposition <ref>. Now, fix γ∈Γ_3 ∖Γ_1. There is t ∈ [0,1] such that γ(t^-) γ(t) and f (γ(t^-)) - f( γ(t)) > g (γ(t^-)) + g( γ(t))2γ(t^-), γ(t). Let γ' [0,1] → X be defined as follows γ'(s) = γ(t^-) for s ∈0,1/2 and γ'(s) = γ(t) for s ∈1/2,1. Clearly, γ' ∈ and by Example <ref> we have f (γ'(0)) - f( γ'(1)) = f (γ(t^-)) - f( γ(t)) > g (γ(t^-)) + g( γ(t))/2γ(t^-), γ(t) = g (γ'(0)) + g( γ'(1))/2γ'(0), γ'(1) = γ' g, hence γ' ∈Γ'. Thus, γ'ρ = ∞ and γρ≥ρ (γ(t^-)) + ρ( γ(t))/2γ(t^-), γ(t) = ρ (γ'(0)) + ρ( γ'(1))/2γ'(0), γ'(1) = γ'ρ = ∞. In consequence, ^p(Γ_3 ∖Γ_1) = 0 by Proposition <ref>. Finally, since p-modulus is an outer measure, ^p(Γ_1 ∪Γ_2 ∪Γ_3) ≤^p(Γ_1) +^p(Γ_2 ∖Γ_1) +^p(Γ_3 ∖Γ_1) = 0. Let (X, ) be a metric space and be a Borel measure on X. Let f X → be a measurabe map such that f = 0 -almost everywhere and L^p() ∩ f ≠∅. If f is Borel or is Borel regular, then 0 is a p-weak upper S-gradient of f. Consider the set E' x ∈ X f(x) 0 . Since f = 0 -almost everywhere, we have (E') = 0. If f is Borel, then E' is Borel. If is Borel regular, then there is a Borel set E of measure 0 such that E' ⊆ E. In both cases, we have a Borel set E such that (E) = 0 and E' ⊆ E. Let us consider the families Γ_E'γ∈ E' ∩⊷γ∅ and Γ_E^+ γ∈μ_γ^S E ∩⊷γ > 0 . Our next goal is to show that ^p(Γ_E') = 0. First, we will show that ^pΓ_E^+ = 0. Indeed, let us consider a function ρ∞E. Clearly, ρ is a Borel function such that ρ = 0 -almost everywhere, and hence ρ_L^p( ) = 0. If γ∈Γ_E^+, then we have γρ = ∫_Xρ dμ_γ^S = ∞μ_γ^S E ≥∞μ_γ^S E ∩⊷γ = ∞. Therefore, ^pΓ_E^+ = 0 by Proposition <ref>. Since L^p() ∩ f ≠∅, there is g ∈ L^p() that is a p-weak upper S-gradient of f. Let ΓΓ_1 ∪Γ_2∪Γ_3, where Γ_1, Γ_2, Γ_3 are as in Lemma <ref>. We will show that Γ_E'∖Γ_E^+ ⊆Γ. Fix γ∈Γ_E'∖Γ_E^+. If γ g = ∞, then γ∈Γ. Suppose that γ g < ∞ and γ∉Γ. Then for all s, t ∈ [0,1] such that s<t and γ(s) γ(t^-) we have f(γ(s)) - f(γ(t^-)) ≤γ|_ [s, t^-] g. Also, for all t ∈ [0,1] such that γ(t^-) γ(t) we have f (γ(t^-)) - f( γ(t)) ≤g (γ(t^-)) + g( γ(t))/2γ(t^-), γ(t). We will show that f ∘γ has finite values. First, we will show that f∘γ(0) is finite. Notice that since γ∈Γ_E'⊆, we have V_γ > 0 and there is t ∈0,1 such that γ(0) γ(t). If γ(0)=γ(t^-), then γ(t^-) γ(t), and both f(γ(t^-))=f(γ(0)) and f(γ(t)) are finite since f (γ(t^-)) - f( γ(t)) ≤g (γ(t^-)) + g( γ(t))/2γ(t^-), γ(t)≤γ g < ∞. Otherwise, if γ(0) ≠γ(t^-), then f(γ(0)) and f(γ(t^-)) are finite because f (γ(0)) - f( γ(t^-)) ≤γ|_[0,t^-] g ≤γ g < ∞. Now, let us observe that f(γ(s^-)) is finite for all s ∈0,1. Indeed, if γ(0) = γ(s^-), then this fact is clear. If γ(0) γ(s^-) then f(γ(s^-)) is finite because f (γ(0)) - f( γ(s^-)) ≤γ|_[0,s^-] g ≤γ g < ∞. Finally, we have that f(γ(s)) is finite for all s ∈0,1. Indeed, if γ(s^-) = γ(s), then it is clear. If γ(s^-) γ(s), then we have f (γ(s^-)) - f( γ(s)) ≤g (γ(s^-)) + g( γ(s))/2γ(s^-), γ(s)≤γ g < ∞. We will show next that for all t ∈ [0,1] we have f( lim_r → t^-γ(r)) = lim_r → t^- f( γ(r)) and f( lim_r → t^+γ(r)) = lim_r → t^+ f( γ(r)). First, let (r_n)_n be a sequence of elements of 0,t such that r_n → t^-. Then γ|_[r_n, t^-] g → 0 as n →∞ by Lemma <ref>. 
For all n ∈ we either have γ(r_n) =γ(t^-) or γ(r_n) γ(t^-). Since in the former case we have f(γ(r_n)) -f(γ(t^-)) = 0 due to the finiteness of f∘γ, in both cases we have f (γ(r_n)) - f( γ(t^-)) ≤γ|_[r_n,t^-] g 0, proving the first equality in (<ref>). Next, let (r_n)_n be a sequence of elements of t,1 such that r_n → t^+. By Lemma <ref> we have γ|_[t, r_n^-] g + g (γ(r_n^-)) + g( γ(r_n))/2γ(r_n^-), γ(r_n) 0. For all n ∈ we either have γ(t) = γ(r_n), γ(r_n) γ(t) and γ(r_n^-) = γ(r_n), or γ(r_n) γ(t) and γ(r_n^-) γ(r_n). In the first case we have f(γ(t)) - f(γ(r_n)) = 0 due to the finiteness of f∘γ. In the second case, we have f (γ(t)) - f( γ(r_n)) = f (γ(t)) - f( γ(r_n^-)) ≤γ|_[t,r_n^-] g 0. In the last case we have f (γ(t)) - f( γ(r_n)) ≤ f (γ(t)) - f( γ(r_n^-)) + f (γ(r_n^-)) - f( γ(r_n)) ≤γ|_[t, r_n^-] g + g (γ(r_n^-)) + g( γ(r_n))/2γ(r_n^-), γ(r_n) 0. Therefore, in all cases we have f(γ(t)) - f(γ(r_n))→ 0 as n →∞, proving the second equality in (<ref>). Now, we shall show that f ∘γ is continuous on entire [0,1]. By (<ref>) we have f ∘γ is continuous at points of continuity of γ. On the other hand, notice that if t is a point of discontinuity of γ, then since γ(t^-)= γ(1-t), by Lemma <ref> we have μ_γ^S{γ (t^-) } ≥ 1/2γ_#μ_γ{γ(1-t) }≥1/2μ_γ{1-t } = 1/2(V_γ(1-t) - V_γ((1-t)^-) ) = 1/2(V_γ((1-t)^-) - V_γ(1-t) ) = 1/2(V_γ(t) - V_γ(t^-))= 1/2γ(t^-), γ(t) > 0 and, similarly, μ_γ^S(γ(t)) ≥1/2γ(t^-), γ(t) > 0. Hence γ(t^-), γ(t) ∩ E = ∅ as γ∉Γ_E^+. In consequence, f(γ(t^-)) = f(γ(t)) = 0 and by (<ref>) we have lim_r → t^- f( γ(r)) = f( lim_r → t^-γ(r)) = 0 = f(γ(t)) = f( lim_r → t^+γ(r)) = lim_r → t^+ f( γ(r)). Thus, f ∘γ is also continuous at points of discontinuity of γ. Since γ∈Γ_E', there exists x ∈⊷γ such that f(x) 0. Therefore by Remark <ref> there exists t ∈ [0,1] such that x = γ(t) or x = γ(t^-). However, since f ∘γ is continuous on [0,1] and f( lim_r → t^-γ(r)) = lim_r → t^- f( γ(r)), in both cases we have f(x) = f(γ(t)). Recall that γ∈∖Γ_E^+, so μ_γ^S(X) > 0 and μ_γ^S(E) = 0. In consequence, there is s' ∈ [0,1] such that γ(s') ∉ E or γ(s'^-) ∉ E, hence, as f ∘γ is continuous and f( lim_r → s'^-γ(r)) = lim_r → s'^- f( γ(r)), we have f(γ(s')) = 0. First, suppose that s' < t. Let s sup s' ∈ [0,1] f(γ(s')) = 0 and s' < t . Clearly, s < t, γ(s) γ(t), and γ s, t⊆ E. In consequence, recalling the definitions from subsection <ref>, 2μ_γ^S(E) ≥ 2μ_γ^Sγ s, t≥γ_#μ_γγ s, t≥μ_γ s, t = V_γ(t) - V_γ(s) ≥γ(t), γ(s) > 0, which contradicts that γ∉Γ_E^+. Now, suppose that s' > t. Let s inf s' ∈ [0,1] f(γ(s')) = 0 and s' > t . Clearly, f(γ(s)) = 0, so also f(γ(s^-)) = 0. Thus, s > t, γ(s^-) γ(t), and γ t, s ⊆ E. In consequence, 2μ_γ^S(E) ≥ 2μ_γ^Sγ t, s≥γ_#μ_γγ t, s≥μ_γ t, s = V_γ(s^-) - V_γ(t) ≥γ(s^-), γ(t) > 0, which also contradicts that γ∉Γ_E^+. The obtained contradiction shows that there is no γ∈Γ_E'∖Γ_E^+ such that γ∉Γ, hence Γ_E'∖Γ_E^+ ⊆Γ. From Lemma <ref> we have ^p(Γ) = 0. Thus, recalling that ^p(Γ_E^+) = 0, we have ^p(Γ_E') ≤^p(Γ_E'∖Γ_E^+) + ^p(Γ_E^+) ≤^p( Γ ) + ^p(Γ_E^+) = 0. This implies that 0 is a p-weak upper S-gradient of f. Indeed, we have ^p(Γ_E') = 0 and if γ∈∖Γ_E', then f(γ(0)) = f(γ(1)) = 0 and f(γ(0)) - f(γ(1)) = 0 = γ 0, which ends the proof. §.§ -Newtonian Space In this subsection we introduce the -Newtonian spaces. On the one hand, in the definition of -Newtonian spaces we will require that a certain condition is satisfied for sufficiently many Borel measures defined on [0,1]; X. 
On the other hand, the sense in which we understand the “sufficiently many” part mimics the similar requirement defined for -Newtonian spaces using the p-modulus. As we will see, the main difference between the -Newtonian spaces and the -Newtonian spaces arises due to the fact that we require that not just the upper gradients but also the elements of the -Newtonian space are equivalence classes of Borel functions. Let (X, ) be a metric space. Let μ be a Borel measure on the space [0,1]; X, _. For f X → Borel, bounded from below or above, we define its integral along μ by ∫_μ f [0,1]; X ∫_γ f μ(γ) and the symmetrized integral along μ by μ f [0,1]; X γ f μ(γ) . Let us remark that by Theorem <ref> the maps [0,1]; X∋γ↦∫_γ f and [0,1]; X∋γ↦γ f are Borel. Let (X, ) be a metric space, then we will denote the family of test measures by μ Borel measure on [0,1]; X μ [0,1]; X ∖ = 0 . For Γ⊆ we define F̃(Γ) ρ X → 0, ∞ρ is Borel and μρ≥ 1 for all μ∈Γ. Let (X, ) be a metric space and let be a Borel measure on X. Then, for p ∈ 0, ∞ we define the generalized p-modulus of families of test measures ^p 2^→0, ∞ by the formula ∀Γ⊆ ^p( Γ) inf_ρ∈F̃(Γ) ρ^p_L^p(). We shall say that property P holds for ^p almost every μ∈, if the set of μ∈ for which P does not hold has a generalized p-modulus of 0. Let (X, ) be a metric space and let be a Borel measure on X. Fix Γ⊆ and let Γ̃⊆ be defined by Γ̃δ_γγ∈Γ. Then ^p(Γ̃) = ^p(Γ). Let γ∈Γ, then for all Borel ρ X →0, ∞ we have γρ = [0,1]; X γ'ρ δ_γ(γ') = δ_γρ. Therefore, F(Γ)=F̃(Γ̃). In consequence ^pΓ̃ = inf_ρ∈F̃Γ̃ρ_L^p()^p = inf_ρ∈ FΓρ_L^p()^p = ^pΓ and the claim is proved. Let (X, ) be a metric space and let be a Borel measure on X. Let Γ⊆ be a Borel set such ^p(Γ) = 0. Then for the family Γ̂⊆ defined by Γ̂μ∈μΓ > 0 we have ^pΓ̂ = 0. Since ^p(Γ) = 0, by Proposition <ref> there exists a Borel map ρ X →0, ∞ such that ρ∈ L^p() and for all γ∈Γ we have γρ = ∞. Let μ∈Γ̂. Then, since μΓ > 0, we have μρ = [0,1]; X γρ μ(γ)≥Γ γρ μ(γ) = Γ∞μ(γ) = ∞μΓ = ∞ that is, μρ = ∞. By the analogue for the generalized p-modulus to Proposition <ref> we have ^pΓ̂ = 0. Let (X, ) be a metric space and let be a Borel measure on X. For a Borel function f X → we will say that a Borel function g X →0, ∞ is its generalized p-weak upper S-gradient, if ^p-a.e. [0,1]; X f ∘γ(1) - f ∘γ(0) μ(γ) ≤μ g . Let us remark that by Theorem <ref> the map [0,1]; X∋γ↦ |f ∘γ(1) - f ∘γ(0)| is Borel.[It is worth to remember about our notation.] We will denote the family of generalized p-weak upper S-gradients of f by f Let (X, ) be a metric space and be a Borel measure on X. Let p ∈1, ∞. We define a space 𝒩̃_T̂Ĉ^1,p(X) f: X →ℝ f is Borel, ∫_X |f|^p d< ∞ and L^p() ∩ f ≠∅ . On space 𝒩̃_T̂Ĉ^1,p(X) we define a seminorm ∀ f ∈𝒩̃_T̂Ĉ^1,p(X) f _𝒩_T̂Ĉ^1,p(X) f _L^p() + inf_ g g _ L^p( ) , where the infimum is taken over all generalized p-weak upper S-gradients g of f. Also, we define an equivalence relation ∼ by ∀ f, f'∈𝒩̃_T̂Ĉ^1,p(X) f ∼ f' f - f' _𝒩_T̂Ĉ^1,p(X) = 0. Finally, we define the T̂Ĉ-Newtonian space, as the quotient space 𝒩_T̂Ĉ^1,p(X) 𝒩̃_T̂Ĉ^1,p(X) / ∼. On this space · _𝒩_T̂Ĉ^1,p(X) is a norm. Let (X, ) be a metric space, be a Borel measure on X and let f X → and g X →0,∞ be Borel maps. Then the following statements are equivalent * g is a generalized p-weak upper S-gradient of f, * g is a p-weak upper S-gradient of f. (i) ⇒ (ii) If g X →0,∞ is a generalized p-weak upper S-gradient of f, then there exists D ⊂ such that ^p (∖ D)=0 and for μ∈ D we have [0,1]; X f ∘γ(1) - f ∘γ(0) μ(γ) ≤μ g . 
Let[By Theorem <ref> Γ is a Borel set.] Γγ∈ f ∘γ(1) - f∘γ(0) > γg and Γδ_γγ∈Γ. Let us observe that D∩Γ̃=∅. Indeed, suppose there exists γ∈Γ such that δ_γ∈ D∩Γ̃, then we have f ∘γ(1) - f∘γ(0) = [0,1]; X f ∘γ'(1) - f ∘γ'(0) δ_γ(γ') ≤ [0,1]; X γ'gδ_γ(γ') = γg. This leads us to the contradiction with γ∈Γ. Therefore, ^p (Γ̃) ≤^p ((∖ D)∩Γ̃)+^p ( D∩Γ̃) =0. Hence, by Lemma <ref> we have ^p(Γ) = 0 and g is a p-weak upper S-gradient of f. (ii) ⇒ (i) Now, let g be a p-weak upper S-gradient of f. Then, keeping Γ as it was introduced in the previous implication, we have ^p(Γ) = 0. Then, by Lemma <ref> we have ^pΓ̂ = 0, where Γ̂ = μ∈μΓ > 0 . Hence, for μ∈∖Γ̂ we have [0,1]; X f ∘γ(1) - f ∘γ(0) μ(γ) = ∖Γ f ∘γ(1) - f ∘γ(0) μ(γ) ≤ [0,1]; X γgμ(γ) . This shows that g is a generalized p-weak upper S-gradient of f. Let (X, ) be a metric space and let be a Borel measure on X. If f, f' ∈𝒩̃_T̂Ĉ^1,p(X) are such that f = f' -almost everywhere, then f- f'_𝒩_T̂Ĉ^1,p(X) = 0. It is sufficient to show that if f = 0 -almost everywhere, then f _𝒩_T̂Ĉ^1,p(X) = 0. Since f ∈𝒩̃_T̂Ĉ^1,p(X), there is a g ∈ L^p() that is a generalized p-weak upper S-gradient of f. By Theorem <ref> it is also a p-weak upper S-gradient of f, so f ∈^1,p(X). As f is Borel, by Proposition <ref> the zero function is a p-weak upper S-gradient of f, hence by Theorem <ref> the zero function is a generalized p-weak upper S-gradient of f. Since f = 0 -almost everywhere, we have f _𝒩_T̂Ĉ^1,p(X) = 0. §.§ Gigli-like space Within this subsection we introduce the last of the considered function spaces — a modification of the spaces considered by Gigli <cit.>. This time we replace the family of absolutely continuous curves with [0,1];X and we again replace the integral along the curve with the symmetrized integral γ. In the next definition we define the -admissibilty of Borel measures defined on the space [0,1];X which mimics the notion of test plans present in Gigli's work. Let us note that in Gigli's work the topology on the space of curves is induced by the supremum metric, hence the evaluations maps e_t C[0,1];X→ X, e_t(γ) = γ(t) for t ∈ [0,1], are Borel. Let (X, ) be a metric space and be a Borel measure on X. We will say that a Borel probability measure μ on [0,1];X is -admissible if * There exists a constant C ≥ 0 such that for all t ∈ [0,1] and Borel B ⊆ X we have (e_t)_#μ(B) ≤ C (B), where e_t is the evaluation map e_t [0,1];X → X, e_t (γ) :=γ(t),[Note that here we require that (e_t)_#μ is a Borel measure on X, even if the evaluation map e_t is not Borel.] * [0,1]; X V(γ) μ( γ) < ∞. We will denote the space of -admissible probability measures by 𝒫^()(X ) and ^():= ∩𝒫^()(X). Let (X, ) be a metric space and be a Borel measure on X. We will say that a Borel function g X → [0,∞] is an -upper S-gradient of a Borel measurable function f X → if ∀μ∈^() [0,1]; X f(γ(1)) - f(γ(0)) μ(γ) ≤μ g . We will denote the family of all -upper S-gradients of f by f. Every upper S-gradient is an -upper S-gradient. Let (X, ) be a metric space and be a Borel measure on X. Let p ∈1, ∞. We define a space G̃^1,p(X) f: X →ℝ f is Borel, ∫_X |f|^p d< ∞ and L^p() ∩ f ≠∅. On space G̃^1,p(X) we define a seminorm ∀ f ∈G̃^1,p(X) f _G^1,p(X) f _L^p() + inf_ g g _ L^p( ) , where the infimum is taken over all -upper S-gradients g of f. Also, we define an equivalence relation ∼ by ∀ f, f'∈G̃^1,p(X) f ∼ f' f - f' _G^1,p(X) = 0. Finally, we define the Gigli-like space, as the quotient space G^1,p(X) G̃^1,p(X) / ∼. On this space · _G^1,p(X) is a norm. 
Let (X, ) be a metric space and be a Borel measure on X. If f ∈G̃^1,p(X) is such that f = 0 -almost everywhere, then f _G^1,p(X) = 0. Let E x ∈ X f(x) 0. Since f ∈G̃^1,p(X), f is Borel, hence E is a Borel set. Furthermore, as f = 0 -almost everywhere, (E) = 0. We will show that 0 is an -upper S-gradient of f. Fix μ∈^(), there exists C > 0 such that (e_t)_#μ≤ C for all t ∈ [0,1]. We will show that set E' γ∈ [0,1]; X f(γ(0) ) 0 or f(γ(1) ) 0 satisfies μ(E') = 0. Note that E' is Borel as by Theorem <ref> γ↦ f(γ(t)) is a Borel function for a fixed t ∈ [0,1], hence μ(E') is well-defined. We have μ(E') = μγ∈ [0,1]; X f(γ(0) ) 0 or f(γ(1) ) 0 = μ e_0^-1E∪ e_1^-1E ≤μ e_0^-1E + μ e_1^-1E = (e_0)_#μ(E) + (e_1)_#μ(E) ≤ 2C (E) =0. Therefore, f(γ(0)) = 0 and f(γ(1)) = 0 for μ-almost every γ∈ [0,1]; X, hence f(γ(0)) - f(γ(1)) = 0 for μ-almost every γ∈. In consequence, ∫_ [0,1]; Xf(γ(0)) - f(γ(1)) dμ(γ) = ∫_ [0,1]; X 0 dμ(γ) = ∫_ [0,1]; Xγ 0 dμ(γ). Hence, 0 is an -upper S-gradient of f. This, combined with the fact that f = 0 -almost everywhere, gives us f _G^1,p(X) = 0. § HAJŁASZ–SOBOLEV SPACES VS TC-NEWTONIAN SPACES We devote this section to the comparison of the Hajłasz–Sobolev and the -Newtonian spaces. The main result of this section, Theorem <ref>, shows that the -Newtonian spaces are much more similar to the Hajłasz–Sobolev spaces than the usual Newtonian spaces. Indeed, the theorems showing the equivalence between the Newtonian space N^1,p and the Hajłasz–Sobolev space M^1,p require that the measure μ on the space is doubling and supports some Poincaré inequality (for example, see <cit.> and <cit.>). Let us note that the proof of the (b) part of Theorem <ref> is inspired by the proofs of <cit.> and <cit.>. However, since we allow curves to be discontinuous, the proof becomes much more technical. Let (X, ) be a metric space and be a measure on X. Let f X → be measurable and finite - almost everywhere. We will say that a measurable function g X →0, ∞ is a Hajłasz gradient of f, if there exists a measurable set E ⊂ X of measure 0 such that for every x, y ∈ X ∖ E we have f(y) - f(x) ≤ g(x) + g(y) x,y . We will denote the family of Hajłasz gradients of f by f. Let (X, ) be a metric space and be a measure on X. Let p ∈1, ∞. We define the Hajłasz-Sobolev space M^1,p(X) as the space M^1,p(X) f ∈ L^p() ∃ g ∈ L^p() g ∈ f endowed with the norm f _M^1,p(X) f _L^p() + inf_g ∈ f g _L^p(). The following theorem is the main result of this section. Let (X, ) be a metric space, be a Borel measure on X, and p ∈ [1, ∞). Then, for any measurable functions f X →, g X →0,∞ such that f and g are finite -almost everywhere we have: * If g is a p-weak upper S-gradient of f, then g/2 is a Hajłasz gradient of f, * If is σ-finite and Borel regular, g is a Hajłasz gradient of f, then there exist Borel functions f̃:X →ℝ and g̃: X → [0,∞], equal -a.e. to f,g, respectively, such that 76g̃ is an upper S-gradient of f̃. Having in mind Proposition <ref>, the folowing result is a corollary to Theorem <ref>. Let (X, ) be a metric space, be σ-finite and Borel regular measure, and p ∈ [1, ∞). Then, M^1,p(X)≅^1,p(X). The proof of Theorem <ref> will follow from Theorem <ref> and Theorem <ref>. Let (X, ) be a metric space and be a Borel measure on X. Let f X → be finite -almost everywhere. Then if g X →0, ∞ is a p-weak upper S-gradient of f, then g/2 is a Hajłasz gradient of f. Let Γγ∈ f ∘γ(1) - f ∘γ(0) > γg. Since g is a p-weak upper S-gradient of f, then we have ^pΓ = 0. 
Therefore, there exists a sequence ρ_n X →0, ∞ of Borel functions such that ρ_n_L^p()^p → 0 as n →∞ and γρ_n≥ 1 for all n ∈ and all γ∈Γ. Hence, there exists a subsequence ρ_n_k_k such that ρ_n_k→ 0 -almost everywhere as k →∞. Let E ⊆ X be such that X ∖ E = 0 and for x ∈ E we have ρ_n_k(x) → 0 as k →∞. Let Γ_E γ∈∃ x, y ∈ E γ = x 0, 1/2 + y 1/2, 1 . We observe that Γ_E ⊆∖Γ. Indeed, if γ∈Γ_E, then, since γ(1), γ(0) ∈ E, we have γρ_n_k = ρ_n_k( γ(1)) + ρ_n_k( γ( 0 ) ) /2γ(1), γ(0) 0. In particular, there exists k ∈ such that γρ_n_k < 1 and hence γ∉Γ. Let x, y ∈ E be such that x y. Then we define γ∈Γ_E as follows γ = x 0, 1/2 + y 1/2, 1. For such γ we have f(y) - f(x) = f ∘γ(1) - f ∘γ(0) ≤γg = g(γ(1)) + g(γ(0)) /2γ(1), γ(0) = g(y) + g(x) /2 y, x. Since X ∖ E = 0, this shows that g/2 is a Hajłasz gradient of f. Let (X, ) be a metric space and be σ-finite Borel regular measure on X. Let f X → and g X → [0,∞] be measurable and finite -almost everywhere. If g is a Hajłasz gradient of f, then there exist Borel functions f̃ X →ℝ and g̃ X → [0,∞], equal -a.e. to f,g, respectively, such that 76g̃ is an upper S-gradient of f̃. The proof will be divided into some lemmata and steps. Let (X, ) be a metric space and be a Borel regular and σ-finite measure on X. Let f X → be measurable, finite -almost everywhere and g X → 0, ∞ be a Hajłasz gradient of f. Then there exist Borel functions f': X →ℝ and g' : X → [0, ∞] such that f=f' and g=g' -almost everywhere such that ∀ x, y ∈ X f'(y) - f'(x) ≤ g'(x) + g'(y) x,y . Since is Borel regular and σ-finite, there are Borel functions f and g (with g being non-negative), such that f = f and g = g -almost everywhere. Since is Borel regular, there exists Borel set E ⊆ X such that E = 0, f is finite on X ∖ E and ∀ x, y ∈ X ∖ E f(y) - f(x) ≤g(x) + g(y) y, x . Now, let us define the following Borel maps: f' fX ∖ E and g' gX ∖ E + ∞E. Then we have f' = f and g' = g -almost everywhere. Moreover, by a straightforward checking we have f'(y) - f'(x) ≤ g'(y) + g'(x) x, y for all x, y ∈ X. Let (X, ) be a metric space. Let f X → be measurable and g X →0, ∞ be Borel and such that ∀ x, y ∈ X f(x) - f(y) ≤ g(x) + g(y) x, y. Let γ∈ [0,1]; X. Suppose that for some a, b ∈ [0,1], a < b there exists M > 0 such that [ See Definition <ref>.] ∀ s ∈a,b ϕ_γ(s) ≤ M. Then f(γ(a)) - f(γ(b^-)) ≤ 8 M g(γ(a)) + 16γ|_[0]a, b^-g + 8 M g(γ(b^-)). First of all let us observe that if V_γ(b^-) - V_γ(a) ≤ 4M, then the claim follows immediately. Indeed, f(γ(a)) - f(γ(b^-)) ≤ g(γ(a)) + g(γ(b^-)) [0]γ(a),γ(b^-) ≤[1] g(γ(a)) + g(γ(b^-)) V_γ(b^-) - V_γ(a) ≤ 4Mg(γ(a)) +4Mg(γ(b^-)). In the rest of the proof we assume that V_γ(b^-) - V_γ(a) > 4M. Step 1 We will iteratively construct a tuple t_n _n=0^m of elements of [a,b], where m≥ 2 with the following properties: * t_0 = a and t_m = b, * t_n _n=0^m is strictly increasing, * For all i ∈ [m-1] t_i is a point of continuity of γ, * For all i ∈ [m-1] we have V_γ(t_i) - V_γ(t_i-1 ) ∈M, 3M and V_γ(t_m^-) - V_γ(t_m-1 ) ∈M, 4M. Let t_0 a. Suppose that we have constructed t_i for some i ∈_0. If V_γ(b^-) - V_γ(t_i) ≤ 4M, then we define m i+1 and t_m b. Otherwise, we have V_γ(b^-) - V_γ(t_i) > 4M. We will show that in this case there exists t ∈t_i, b such that γ is continuous at t and V_γ(t^-) - V_γ(t_i) ∈ M , 3M. We will then define t_i+1 t. To prove the existence of such a point t, we will first consider a point s min S, where S τ∈t_i, b V_γ(τ) - V_γ( t_i ) ≥ M . Clearly, S ∅ as b ∈ S. The right-continuity of V_γ implies that inf S ∈ S, hence s min S is well-defined. 
Since t_i ∉ S and V_γ(b^-) - V_γ( t_i ) > 4M, we have s ∈ t_i, b. By the definition of s and S we have V_γ(s^-) - V_γ(t_i) ≤ M. By the assumptions of the current lemma and by Proposition <ref> we have V_γ(s) - V_γ(s^-) = ϕ_γ(s) ≤ M, which implies that V_γ(s) - V_γ(t_i) = V_γ(s) - V_γ(s^-) + V_γ(s^-) - V_γ(t_i) ≤ M + M = 2M. This, combined with the definition of s gives us V_γ(s) - V_γ(t_i) ∈M, 2M. Now, since V_γ is right-continuous at s, there is δ∈ (0, b-s) such that if τ∈s, s+ δ, then 0 ≤ V_γ(τ) - V_γ(s) ≤ M. Since γ can have at most countable points of discontinuity, there is t ∈s, s+ δ such that γ is continuous at t. Note that for such a t we have V_γ(t) - V_γ(t_i) = V_γ(t) - V_γ(s) + V_γ(s) - V_γ(t_i) ∈M, 3M. Hence, t ∈ (t_i, b) with the desired properties exists, and, as previously mentioned, we define t_i+1 t. We will now show that the tuple (t_i)_i=0^m has all the desired properties. Firstly, we have to show that the construction must end, that is, that for some i ∈ we have V_γ(b^-) - V_γ(t_i) ≤ 4M. Let us suppose that for all i ∈ we have V_γ(b^-) - V_γ(t_i) > 4M, then V_γ≥ V_γ(b^-) - V_γ(a) = V_γ(b^-) - V_γ(t_i) + ∑_j=1^i ( V_γ(t_j) - V_γ(t_j-1)) ≥ V_γ(b^-) - V_γ(t_i) + Mi > M(i+4). Since the right-hand side diverges to ∞ as i →∞, we get a contradiction. Hence, there exists i ∈ such that V_γ(b^-) - V_γ(t_i) ≤ 4M. The only desired property of the tuple (t_i)_i=0^m that does not follow directly from the construction is that V_γ(t_m^-) - V_γ(t_m-1 ) ≥ M. Recall that we assume V_γ(b^-) - V_γ(a) > 4M, which implies that m ≥ 2. By the definition of m we have V_γ(b^-) - V_γ(t_m-2) > 4M, hence V_γ(t_m^-) - V_γ(t_m-1 ) = V_γ(b^-) - V_γ(t_m-2) - V_γ(t_m-1)- V_γ(t_m-2) ≥ 4M - 3M = M, as needed. Step 2 For i ∈ [m] denote A_i [0] t_i-1, t_i^- and A_0 A_1, A_m+1 A_m. We will show the existence of a tuple (x_i)_i=0^m+1 of points in X that satisfy the following properties: * x_0 = γ(a), x_m+1 = γ(b^-), * For all i ∈0, …, m+1 we have x_i ∈⊷γ|_A_i, * For all i ∈ [m] we have g(x_i) ≤1/Mγ|_ A_ig, * For all i ∈ [m+1] we have x_i, x_i-1≤ 8M. First of all let us observe that for all i ∈0, …, m+1 we have μ_γ|_ A_i^S⊷γ|_A_i≥ M. Indeed, by Remark <ref>, Remark <ref> and Corollary <ref>, for i ∈ [m] we have μ_γ|_ A_i^S⊷γ|_A_i = μ_γ|_ A_i^SX = V(γ|_ A_i) = V_γ|_ [t_i-1,1] (t_i^-) = V_γ(t_i^-) - V_γ(t_i-1) ≥ M. Moreover, for i ∈0, m+1 the claim follows from the fact that A_0 = A_1 and A_m = A_m+1. We conclude that[γ|_ A_n,ig = 1/μ_γ|_ A_i^S X∫_X g d μ_γ|_ A_i^S = 1/μ_γ|_ A_i^S⊷γ|_A_i∫_⊷γ|_A_i g d μ_γ|_ A_i^S.] γ|_ A_n,ig is well-defined for all i ∈0, …, m+1. Let us now construct the tuple (x_i)_i=0^m+1. We define x_0 γ(a) and x_m+1γ(b^-). Next, we will show that there exists x ∈⊷γ|_A_i such that g(x) ≤γ|_ A_n,ig. Indeed, suppose such x does not exists. We can assume that γ|_ A_n,ig < ∞. Then for all x ∈⊷γ|_A_i we have g(x) > γ|_ A_ig, hence 0 < γ|_ A_i g(x) - γ|_ A_ig = γ|_ A_ig - γ|_ A_ig = 0, where the first inequality follows from the fact that the integrand is a strictly positive function and we integrate it over a set with positive measure. We obtain a contradiction, hence x with the desired property exists — we then define x_i x. The fact that tuple (x_i)_i=0^m+1 satisfies the first two of the desired properties follows immediately from the construction. Let us prove the third property. Fix i ∈ [m]. We then have g(x_i) ≤γ|_ A_ig = 1/μ_γ|_ A_i^S Xγ|_ A_ig≤1/Mγ|_ A_ig, as needed. It remains to show the last property. First, note that for all i ∈ [m+1] there is some y_i ∈⊷γ|_A_i∩⊷γ|_A_i-1. 
Indeed, when i = 1 or i = m+1 it follows from the fact that A_i = A_i-1, and when i ∈ [m] ∖1 it follows from the fact that t_i is a point of continuity of γ, so y_i γ(t_i) has the desired property. Next, note that for all i ∈ [m] we have ⊷γ|_A_i = ⊷γ|_A_i≤ V_γ(t_i^-) - V_γ(t_i-1) ≤ 4M, where in the first inequality we have used Corollary <ref> and Remark <ref>. Hence, since A_0 = A_1 and A_m+1 = A_m, for all i ∈0, …, m+1, we have ⊷γ|_A_i≤ 4M. Therefore, for all i ∈ [m+1] we have x_i, x_i-1≤ x_i, y_i + y_i, x_i-1≤⊷γ|_A_i + ⊷γ|_A_i-1≤ 8M and x_i, x_i-1≤ 8M as claimed. Step 3 We will show the desired inequality. Using the properties of tuple (x_i)_i=0^m+1, we have f(x_0) - f(x_m+1) ≤∑_i=1^m+1 f(x_i-1) - f(x_i) ≤∑_i=1^m+1 g(x_i-1) + g(x_i) x_i-1, x_i ≤∑_i=1^m-1 g(x_i-1) + g(x_i) (8M) = 8Mg(x_0) + ∑_i=1^m 16M g(x_i) + 8Mg(x_m+1) ≤ 8Mg(x_0) + ∑_i=1^m γ|_ A_i16 g + 8Mg(x_m+1) = 8Mg(x_0) + γ|_ [a, b^-] 16g + 8Mg(x_m+1), where in the last equality we have used Lemma <ref> and the fact that for i ∈ [m-1] t_i is a point of continuity of γ. Let (X, ) be a metric space. Let f X → be measurable and g X → [0,∞] be bounded Borel map such that ∀ x, y ∈ X f(x) - f(y) ≤ g(x) + g(y) x, y. Then ∀γ∈ [0,1]; X f(γ(0)) - f(γ(1)) ≤γ18g. Fix γ∈ [0,1]; X. If γ is continuous, then for all s ∈ (0,1) and all M > 0 we have ϕ_γ(s) ≤ M. Hence, by Lemma <ref>, for all M > 0 we have f(γ(0)) - f(γ(1)) = f(γ(0)) - f(γ(1^-)) ≤ 8M g(γ(0)) + γ16g + 8M g(γ(1^-)). Therefore, since g is bounded, by passing to the limit M → 0^+ we obtain f(γ(0)) - f(γ(1)) ≤γ16g. Suppose that γ has points of discontinuity. By Lemma <ref> we have that sup_τ∈ [0,1]ϕ_γ(τ) is finite and t ∈ [0,1] ϕ_γ(t) = sup_τ∈ [0,1]ϕ_γ(τ) is a nonempty finite set. Therefore, the quantity t_0 inf t ∈ [0,1] ϕ_γ(t) = sup_τ∈ [0,1]ϕ_γ(τ) is well defined, t_0 is a point of discontinuity of γ and t_0 ∈ (0,1). Next, for n ∈ let us define t_-n inf t ∈0,t_-(n-1)ϕ_γ(t) = sup_τ∈0,t_-(n-1)ϕ_γ(τ) , if t_-(n-1) >0, 0, if t_-(n-1) =0, t_n sup t ∈t_n-1, 1ϕ_γ(t) = sup_τ∈t_n-1, 1ϕ_γ(τ) , if t_(n-1) <1, 1, if t_(n-1) =1. We shall prove that f(γ(t_0)) - f(γ(1))≤ 8 ϕ_γ(t_0) g(γ(t_0)) + 18γ|_[0]t_0, 1g and f(γ(0)) - f(γ(t_0^-))≤ 8 ϕ_γ(t_0) g(γ(t_0^-))+18γ|_[0]0, t_0^-g. The proof of (<ref>) goes in the same manner as the proof of (<ref>). Therefore, we give the detailed proof of (<ref>). First of all, let us observe that for any m ∈ℕ such that t_m-1<1 we have ∑_i=1^m-1f(γ(t_i)) - f(γ(t_i^-)) ≤ 2 γ|_[0]t_0, 1^-g. Indeed, by our assumption and by Lemma <ref> we have ∑_i=1^m-1f(γ(t_i)) - f(γ(t_i^-)) ≤∑_i=1^m-1 (g(γ(t_i)) + g(γ(t_i^-)))γ(t_i), γ(t_i^-) = 2∑_i=1^m-1g(γ(t_i^-)) + g(γ(t_i)) /2 V_γ(t_i) - V_γ(t_i^-) ≤ 2∑_i=1^m-1(γ|_[0]t_i-1, t_i^-g + g(γ(t_i^-)) + g(γ(t_i)) /2 V_γ(t_i) - V_γ(t_i^-) ) + 2 γ|_[0]t_m-1, 1^-g =2 γ|_[0]t_0, 1^-g. Step 1 Proof of (<ref>) in the case when there is n ∈ such that t_n = 1. Let m ∈ be the smallest such n. Since γ is left-continuous at 1, the definition of m implies that γ has no points of discontinuity within (t_m-1, t_m). Let A_i t_i-1, t_i. for i ∈ [m]. Notice that for all s ∈ A_i we have ϕ_γ(s) ≤ϕ_γ(t_i). In particular, for all s ∈ A_m we have ϕ_γ(s) = 0, as ϕ_γ(1) = 0. Fix ∈0, ϕ_γ(t_m-1). 
By Lemma <ref>, Lemma <ref> and since ϕ_γ(t_i) ≤ϕ_γ(t_i-1) we have ∑_i=1^m f(γ(t_i-1)) - f(γ(t_i^-)) ≤∑_i=1^m-1( 8 ϕ_γ(t_i) g(γ(t_i-1)) + 16γ|_[0]t_i-1, t_i^-g + 8 ϕ_γ(t_i) g(γ(t_i^-)) ) + 8 g(γ(t_m-1)) + 16γ|_[0]t_m-1, 1^-g + 8 g(γ(1^-)) ≤ 8 ϕ_γ(t_0) g(γ(t_0)) + ∑_i=1^m-1 16γ|_[0]t_i-1, t_i^-g + 8 ϕ_γ(t_i) g(γ(t_i^-)) + 8 ϕ_γ(t_i) g(γ(t_i)) + 16γ|_[0]t_m-1, 1^-g + 8 g(γ(1^-)) = 8 ϕ_γ(t_0) g(γ(t_0)) + 16( ∑_i=1^m-1γ|_[0]t_i-1, t_i^-g + g(γ(t_i^-)) + g(γ(t_i)) /2 V_γ(t_i) - V_γ(t_i^-) . + . γ|_[0]t_m-1, 1^-g) + 8 g(γ(1^-)) = 8 ϕ_γ(t_0) g(γ(t_0)) + 16γ|_[0]t_0, 1^-g + 8 g(γ(1^-)). Hence, by the triangle inequality and by (<ref>) we get f(γ(t_0)) - f(γ(1^-)) ≤∑_i=1^m f(γ(t_i-1)) - f(γ(t_i^-)) + ∑_i=1^m-1f(γ(t_i)) - f(γ(t_i^-)) ≤ 8 ϕ_γ(t_0) g(γ(t_0)) + 18γ|_[0]t_0, 1^-g + 8 g(γ(1^-)). By passing to the limit → 0^+ and using the left-continuity of γ at 1, we obtain f(γ(t_0)) - f(γ(1))≤ 8 ϕ_γ(t_0) g(γ(t_0)) + 18γ|_[0]t_0, 1g. Step 2 Proof of (<ref>) in the case when there is no n ∈ such that t_n=1. In this subcase, (ϕ_γ(t_n))_n is a strictly decreasing sequence. By Lemma <ref> we have ϕ_γ(t_n) → 0 as n →∞. For i ∈ let A_i t_i-1, t_i and B_i t_i, 1. Notice that for all s ∈ A_i we have ϕ_γ(s) ≤ϕ_γ(t_i) and for all s ∈ B_i we have ϕ_γ(s) ≤ϕ_γ(t_i). By Lemma <ref>, Lemma <ref> and (<ref>) for all n ∈ we have f(γ(t_0)) - f(γ(1^-)) ≤∑_i=1^n-1f(γ(t_i-1)) - f(γ(t_i^-)) + f(γ(t_n-1)) - f(γ(1^-)) + ∑_i=1^n-1f(γ(t_i)) - f(γ(t_i^-)) ≤∑_i=1^n-1 8 ϕ_γ(t_i) g(γ(t_i-1)) + 16γ|_[0]t_i-1, t_i^-g + 8 ϕ_γ(t_i) g(γ(t_i^-)) + 8 ϕ_γ(t_n) g(γ(t_n-1)) + 16γ|_[0]t_n, 1^-g + 8 ϕ_γ(t_n-1) g(γ(1^-)) + 2γ|_[0]t_0, 1g ≤ 8 ϕ_γ(t_0) g(γ(t_0)) + ∑_i=1^n-1 16γ|_[0]t_i-1, t_i^-g + 8 ϕ_γ(t_i) g(γ(t_i^-)) + 8 ϕ_γ(t_i) g(γ(t_i)) + 16γ|_[0]t_n-1, 1^-g + 8 ϕ_γ(t_n-1) g(γ(1^-)) +2γ|_[0]t_0, 1g = 8 ϕ_γ(t_0) g(γ(t_0)) + 16( ∑_i=1^n-1γ|_[0]t_i-1, t_i^-g + g(γ(t_i^-)) + g(γ(t_i)) 2V_γ(t_i)-V_γ(t_i^-). + .γ|_[0]t_n-1, 1^-g) + 8 ϕ_γ(t_n-1) g(γ(1^-)) +2γ|_[0]t_0, 1g = 8 ϕ_γ(t_0) g(γ(t_0)) + 18γ|_[0]t_0, 1^-g + 8 ϕ_γ(t_n-1) g(γ(1^-)). By passing to the limit n →∞ and using the left-continuity of γ at 1, we obtain f(γ(t_0)) - f(γ(1))≤ 8 ϕ_γ(t_0) g(γ(t_0)) + 18γ|_[0]t_0, 1g, and the proof of (<ref>) follows. Now, we can finish the proof. Indeed, by (<ref>) and (<ref>) we have f(γ(0)) - f(γ(1)) ≤f(γ(0)) - f(γ(t_0^-)) + f(γ(t_0^-)) - f(γ(t_0)) + f(γ(t_0)) - f(γ(1)) ≤ 18γ|_[0]0, t_0^-g + 8 ϕ_γ(t_0) g(γ(t_0^-)) + ϕ_γ(t_0)/2 g(γ(t_0^-)) + g(γ(t_0)) +8 ϕ_γ(t_0) g(γ(t_0)) + 18γ|_[0]t_0, 1g ≤γ 18g, where we have used Lemma <ref> and the fact that ϕ_γ(t_0) = V_γ(t_0) - V_γ(t_0^-). Now we are in a position to continue the proof of Theorem <ref>. Let f' and g' be like in Lemma <ref> and put g̃=g'. It means that functions f': X →ℝ and g̃ : X → [0,∞] are Borel such that f=f' and g=g̃ -almost everywhere such that ∀ x, y ∈ X f'(y) - f'(x) ≤g̃(x) + g̃(y) x,y . Let us consider the sets E_k x ∈ X g̃(x) ≤ 2^k . Since g is finite -almost everywhere, then so is g̃. From (<ref>) and the definition of E_k we have that f' |_E_k is 2^k+1-Lipschitz. Therefore by the McShane's Lemma there exists function f_k' which is a 2^k+1-Lipschitz extension of f' |_E_k. This function has the following form f_k' (· ) inf_ y ∈ E_k f'(y) + 2^k+1·, y. Moreover, let us define g_k' g̃ E_k + 2^k+1 X ∖ E_k . Thus, for all k ∈ we have g_k' ≤ 2g̃. Also, since g̃ is finite -almost everywhere and X ∖ x ∈ X g̃(x) = ∞ = ⋃_k=1^∞ E_k, we have f_k' → f' -almost everywhere. Moreover f_k'(x) - f_k'(z) ≤ g'_k(x) + g'_k(z) x,z for all x,z ∈ X and all k ∈. 
Indeed, if x, z ∈ E_k then f_k'(x) - f_k'(z) = f' (x) - f' (z) ≤g̃(x) + g̃(z) x,z = g_k'(x) + g_k'(z) x,z. If x ∉ E_k or z ∉ E_k, then g'_k(x) + g'_k(z) ≥ 2^k+1 and since f_k' is 2^k+1-Lipschitz we have f_k'(x) - f_k'(z) ≤ 2^k+1 x,z≤ g'_k(x) + g'_k(z) x,z. Now, by (<ref>) we have that functions f'_k and g'_k satisfy the conditions of Lemma <ref>, so for all γ∈ [0,1]; X we have f'_k(γ(0)) - f'_k(γ(1)) ≤γ 18 g'_k≤γ 36g̃. Next, we define f̃lim inf_k →∞ f'_k. Clearly, f̃ = f and g̃ = g -almost everywhere. Fix γ∈, that is, γ∈ [0,1]; X such that V(γ)> 0. If γg̃ = ∞, then f̃(γ(0)) - f̃(γ(1)) ≤γ76g̃. Therefore, we assume that γg̃ < ∞. Step 1 There exists t ∈ (0,1) such that g̃(γ(t)) and g̃(γ(t^-)) are finite. Since γg̃ < ∞, we have μ_γ^S X ∖ x ∈ X g̃(x) < ∞ = 0. Moreover, μ_γ^Sγ (0,1) > 0. Indeed, since γ is left-continuous at 1 we can write μ_γ^Sγ (0,1) ≥ 1/2γ_#μ_γγ (0,1) ≥1/2μ_γ (0,1) = 1/2V_γ(1) - V_γ(0) = 1/2V_γ(1) = 1/2V(γ) > 0. Therefore, gathering (<ref>) with (<ref>) we have μ_γ^Sγ (0,1) ∩ x ∈ X g̃(x) < ∞ > 0, which implies the existence of t ∈ (0,1) such that g̃(γ(t)) < ∞. Next, we shall prove that g̃(γ(t^-)) < ∞. If γ(t) = γ(t^-), then g̃(γ(t^-)) = g̃(γ(t)) < ∞. Therefore, we shall assume that γ(t) γ(t^-). In this case we have[See the proof of Proposition <ref>.] μ_γ^S{γ (t^-) } > 0. Hence, by (<ref>) and (<ref>) we have that g̃(γ(t^-)) < ∞ and the proof of Step 1 is completed. Since g̃(γ(t)) < ∞ and g̃(γ(t^-)) < ∞, there is N ∈ such that γ(t), γ(t^-) ∈ E_N. Hence, for k ≥ N we have f̃(γ(t)) = f'_k(γ(t)) and f̃(γ(t^-)) = f'_k(γ(t^-)). In particular, f̃(γ(t)) and f̃(γ(t^-)) are finite. Step 2 For t from Step 1 we have f̃(γ(t))-f̃(γ(1)) ≤γ36g̃ and f̃(γ(0))-f̃(γ(t^-)) ≤γ36g̃. In order to prove (<ref>) we define γ' [0,1] → X given by the formula γ'(s) γ t+ (1-t)s . We claim that γ' ∈ [0,1]; X and γ' g̃≤γg̃. Indeed, let us define Φ [0,1] → [t,1] as follows Φ(s) = t+ (1-t)s, then γ' = Φ_#( γ|_ [t,1] ). Since γ|_ [t,1] ∈ [t,1]; X, by Proposition <ref> we have γ' ∈ [0,1]; X and γ' g̃ = Φ_#( γ|_ [t,1] ) g̃ = γ|_ [t,1] g̃≤γg̃, where the last inequality follows from the fact that g̃≥ 0. Now, by (<ref>) and (<ref>) for all k ∈ we have f'_k(γ(t)) - f'_k(γ(1)) = f'_k(γ'(0)) - f'_k(γ'(1)) ≤γ'36g̃≤γ36g̃ . Hence, for all k ≥ N we have f'_k(γ(1)) ≤γ36g̃ + f̃(γ(t)). Thus, f̃(γ(1)) = lim inf_k →∞ f'_k(γ(1)) ≤γ36g̃ + f̃(γ(t)) < ∞. On the other hand, a similar calculation shows -∞ < f̃(γ(t)) ≤γ36g̃ + f̃(γ(1)). Note that those inequalities imply that f̃(γ(1)) is finite. As both f̃(γ(t)) and f̃(γ(1)) are finite, we conclude f̃(γ(t))-f̃(γ(1)) ≤γ36g̃ and (<ref>) follows. Now, to prove (<ref>) we define γ” [0,1] → X given by the formula γ”(s) γ(st), if s ∈ [0,1), γ(t^-), if s=1. Then γ”∈ [0,1]; X and γ”g̃≤γg̃. Indeed, let us define Ψ [0,1] → [0,t] as follows Ψ(s) = st, then γ” = Ψ_#( γ|_ [0,t^-] ). Since γ|_ [0,t^-] ∈ [0,t]; X, by Proposition <ref> we have γ”∈ [0,1]; X and γ”g̃ = Ψ_#( γ|_ [0,t^-] ) g̃ = γ|_ [0,t^-] g̃≤γg̃. Now, by (<ref>) and by (<ref>) for all k ∈ we have f'_k(γ(0)) - f'_k(γ(t^-)) = f'_k(γ”(0)) - f'_k(γ”(1)) ≤γ”36g̃≤γ36g̃ . Hence, for all k ≥ N we have f'_k(γ(0)) ≤γ36g̃ + f̃(γ(t^-)). Thus, f̃(γ(0)) = lim inf_k →∞ f'_k(γ(0)) ≤γ36g̃ + f̃(γ(t^-)) < ∞. On the other hand, a similar calculation shows that -∞ < f̃(γ(t^-)) ≤γ36g̃ + f̃(γ(0)). Since f̃(γ(t^-)) and f̃(γ(0)) are finite, we have f̃(γ(t^-))-f̃(γ(0)) ≤γ36g̃ and (<ref>) follows. Now we are in position to finish the whole proof. 
By Step 2, inequality (<ref>) and by Lemma <ref> for k ≥ N we have f̃(γ(0)) -f̃(γ(1)) ≤f̃(γ(0)) - f̃(γ(t^-)) + f̃(γ(t^- )) - f̃(γ(t)) +f̃(γ(t)) - f̃(γ(1)) ≤γ36g̃ + f'_k(γ(t^- )) - f'_k (γ(t)) + γ36g̃ ≤γ72g̃ + g'_k(γ(t^- )) + g'_k(γ(t)) γ(t^- ), γ(t) ≤γ72g̃+ 4g̃(γ(t^- )) + g̃(γ(t)) γ(t^- ),γ(t) /2 ≤γ76g̃. Finally, let us observe that by replacing f̃ by f̃{|f̃|<∞} we ensure that our function has finite values and g̃ is an upper S-gradient of f̃{|f̃|<∞}. Indeed, it is sufficient to note that for all x, y ∈ X we have f̃[0]|f̃|<∞(x) - f̃[0]|f̃|<∞(y) ≤f̃(x)-f̃(y) . If x,y ∈[0]|f̃|<∞, then the inequality is clear. Otherwise, the right hand side equals ∞, hence the inequality is also true. § COMPARISON OF SPACES Let (X, ) be a metric space and let be a Borel regular measure on X. Then for any measurable functions f X → and g X → [0,∞] such that f, g are finite -almost everywhere we have: * If is σ-finite and g is a is a Hajłasz gradient of f, then there are Borel functions f': X →ℝ and g':X → [0,∞] such that f = f' and g = g' -almost everywhere and 76 g' is an -upper S-gradient of f'. * If is doubling and f, g ∈ L^1_loc() are Borel maps such that g is an -upper S-gradient of f, then g/2 is a Hajłasz gradient of f. * By Theorem <ref> there are Borel maps f' X →ℝ and g' X → [0,∞] such that f = f' and g=g' -almost everywhere and 76 g' is an upper S-gradient of f'. Therefore, by Remark <ref> we have 76 g' is an -upper S-gradient of f'. * Under these conditions the Lebesgue differentiation theorem is satisfied [Theorem 1.8]lectures-analysis-metric. Fix x, y ∈ X such that x ≠ y and r ∈ (0, x, y/2). We then define Borel measures μ_x,r and μ_y,r as follows μ_x,r(A) := _ B(x, r) A d and μ_y,r(A) := _ B(y, r) A d. Next, let μ' be a Borel probability measure on X × X defined by μ' =μ_x,r⊗μ_y,r. In particular, for Borel sets A, C ⊆ X we have μ'(A × C) = μ_x,r(A) μ_y,r (C). Next, for w, z ∈ X we define function γ_w^z [0,1] → X by the formula ∀ t ∈ [0,1] γ_w^z (t) w [0,1/2 ) (t) + z [0] 1/2, 1(t). By Example <ref>, γ_w^z ∈ [0,1] ; X. Therefore, h X^2 → [0,1]; X defined by h(w, z) γ_w^z for all w, z ∈ X is well-defined. Moreover, it is continuous, hence Borel. Now, let μ h_#μ'. Then μ is a Borel probability measure on [0,1]; X. Let us note that for all t ∈ [0,1] and every Borel set A ⊂ X we have (e_t)_#μ (A) = μ_x,r(A) [0, 1/2 ) (t) + μ_y,r(A) [0] 1/2, 1(t) ≤ C (A), where C = max B(x, r) ^-1 , B(y, r) ^-1. Moreover, [0,1]; X V(γ) μ(γ) = [0,1]; X V(γ) h_#(μ')(γ) = X^2 w, z μ'(w, z) ≤X^2 r + x, y + r μ'(w, z) = 2r + x, y < ∞. Therefore, μ∈𝒫^()(X ). Furthermore, we have μ( [0,1]; X ∖ )= μ'(Δ), where Δ =(z,z) ∈ X^2 z ∈ X. Now, recall that r < x, y/2, so Δ⊆ X^2 ∖ B(x,r) × B(y,r), hence μ'( Δ ) ≤μ' X^2 ∖ B(x,r) × B(y,r) =μ_x,r(X)μ_y,r(X) - μ_x,r(B(x,r))μ_y,r(B(y,r)) = 1-1 = 0. In this way we have μ∈^(). Now, since g is an -upper S-gradient of f, we have _ B(x, r) f d - _ B(y, r) f d = X^2 f(w) μ'(w,z) - X^2 f(z) μ'(w,z) ≤X^2 f(w) - f(z) μ'(w,z) = [0,1]; X f(γ(1)) - f(γ(0)) h_#(μ')(γ) = [0,1]; X f(γ(1)) - f(γ(0)) μ(γ) ≤ [0,1]; X γ g  μ( γ) = [0,1]; X γ g   h_#μ'( γ) = X^2 g(w) + g(z) /2 w, z μ'(w, z) = X^2 g(w) / 2 w, z μ'(w, z) + X^2 g(z) / 2 w, z μ'(w, z) ≤ X^2 g(w) / 2 r + x, y + r μ'(w, z) + X^2 g(z) / 2 r + x, y + rμ'(w, z) = 2r + x, y/2_ B(x, r) g d + _ B(y, r) g d. Therefore, _ B(y, r) f d - _ B(x, r) f d≤ 2r + x, y/2_ B(x, r) g d + _ B(y, r) g d. The above inequality is true for any x, y ∈ X and r ∈ (0, x, y/2). 
Since μ is Borel regular and doubling, and f, g ∈ L^1_loc(), for -almost all z ∈ X we have _ B(z, ) f d f(z) and _ B(z, ) g d g(z). Therefore, by passing to the limit r → 0^+ for -almost all x, y ∈ X we have f(x) - f(y) ≤x, y/2 g(x) + g(y) and g/2 is a Hajłasz gradient of f. Finally, let us make a comparison between the various definitions of First Order Sobolev spaces. Let (X, ) be a metric space be a Borel measure and p∈ [1, ∞), then: * 𝒩_T̂Ĉ^1,p(X, , ) ↪ N^1,p_TC(X, , ) ∼↪ M^1,p(X, , )[Here, symbol ∼↪ is used to indicatate that the “embedding” is not necessarily injective; however, its kernel consists of equivalence classes in which functions are equal to 0 -almost everywhere] and 𝒩_T̂Ĉ^1,p(X, , ) ↪ M^1,p(X, , ), * If is Borel regular, then 𝒩_T̂Ĉ^1,p(X, , ) ↪ N^1,p_TC(X, , ) ↪ M^1,p(X, , ), * If is σ-finite and Borel regular, then 𝒩_T̂Ĉ^1,p(X), , ) ≅ N^1,p_TC(X, , ) ≅ M^1,p(X, , ) ↪ G^1,p(X, , ), * If is doubling and Borel regular, then 𝒩_T̂Ĉ^1,p(X), , ) ≅ N^1,p_TC(X, , ) ≅ M^1,p(X, , ) ≅ G^1,p(X, , ). * By Theorem <ref> and Corollary <ref> we have 𝒩_T̂Ĉ^1,p(X, , ) ↪ N^1,p_TC(X, , ) and by Theorem <ref> we have N^1,p_TC(X, , ) ∼↪ M^1,p(X, , ). We have 𝒩_T̂Ĉ^1,p(X, , ) ↪ M^1,p(X, , ) since by Corollary <ref> the composition of the previous two embeddings has a trivial kernel, and hence is injective. * follows from (1) and Proposition <ref>. * By Theorem <ref> we have N^1,p_TC(X, , ) ≅ M^1,p(X, , ). Theorem <ref>, Theorem <ref> and Corollary <ref> give M^1,p(X, , ) ↪𝒩_T̂Ĉ^1,p(X, , ), and by Theorem <ref> and Proposition <ref> we obtain M^1,p(X, , ) ↪ G^1,p(X, , ). * It follows from (3), Theorem <ref> and Proposition <ref>. Przemysław Górka Faculty of Mathematics and Information Science, Warsaw University of Technology, Pl. Politechniki 1, 00-661 Warsaw, Poland przemyslaw.gorka@pw.edu.pl Kacper Kurowski Faculty Mathematics and Information Science, Warsaw University of Technology, Pl. Politechniki 1, 00-661 Warsaw, Poland kacper.kurowski.dokt@pw.edu.pl
http://arxiv.org/abs/2407.13281v1
20240718083405
Auditing Local Explanations is Hard
[ "Robi Bhattacharjee", "Ulrike von Luxburg" ]
cs.LG
[ "cs.LG" ]
Auditing Local Explanations is Hard Robi Bhattacharjee, Ulrike von Luxburg July 18, 2024 ======================================================================================================== § ABSTRACT In sensitive contexts, providers of machine learning algorithms are increasingly required to give explanations for their algorithms' decisions. However, explanation receivers might not trust the provider, who potentially could output misleading or manipulated explanations. In this work, we investigate an auditing framework in which a third-party auditor or a collective of users attempts to sanity-check explanations: they can query model decisions and the corresponding local explanations, pool all the information received, and then check for basic consistency properties. We prove upper and lower bounds on the amount of queries that are needed for an auditor to succeed within this framework. Our results show that successful auditing requires a potentially exorbitant number of queries – particularly in high dimensional cases. Our analysis also reveals that a key property is the “locality” of the provided explanations — a quantity that so far has not been paid much attention to in the explainability literature. Looking forward, our results suggest that for complex high-dimensional settings, merely providing a pointwise prediction and explanation could be insufficient, as there is no way for the users to verify that the provided explanations are not completely made-up. § INTRODUCTION Machine learning models are increasingly used to support decision making in sensitive contexts such as credit lending, hiring decisions, admittance to social benefits, crime prevention, and so on. In all these cases, it would be highly desirable for the customers/applicants/suspects to be able to judge whether the model's predictions or decisions are “trustworthy”. New AI regulation such as the European Union's AI Act can even legally require this. One approach that is often held up as a potential way to achieve transparency and trust is to provide local explanations, where every prediction/decision comes with a human-understandable explanation for this particular outcome (e.g., LIME <cit.>, SHAP <cit.>, or Anchors <cit.>). However, in many real-world scenarios, the explanation receivers may not necessarily trust the explanation providers <cit.>. Imagine a company that uses machine learning tools to assist in screening job applications. Because the company is well-advised to demonstrate fair and equitable hiring, it is plausible that it might bias its explanations towards depicting these properties. And this is easy to achieve: the company is under full control of the machine learning model and the setup of the explanation algorithm, and prior literature <cit.> has shown that current explainability tools can be manipulated to output desirable explanations. This motivates the question: what restrictions or procedures could be applied to prevent such explanation cheating, and more specifically, what are ways to verify that the provided explanations are actually trustworthy? One approach is to require that the explanation providers completely publicize their models, thus allowing users or third-party regulators to verify that the provided explanations are faithful to the actual model being used.
However, such a requirement would likely face stiff resistance in settings where machine learning models are valuable intellectual property. In this work, we investigate an alternative approach, where a third-party regulator or a collective of users attempt to verify the trustworthiness of local explanations, simply based on the predictions and explanations over a set of examples. The main idea is that by comparing the local explanations with the actual predictions across enough data one could, in principle, give an assessment on whether the provided explanations actually adhere to the explained model. The goal of our work is to precisely understand when this is possible. §.§ Our contributions: data requirements for auditing. We begin by providing a general definition for local explainability that encompasses many popular explainability methods such as Anchors <cit.>, Smooth-grad <cit.>, and LIME <cit.>. We define a local explanation for a classifier f at a point x as a pair (R_x, g_x), where R_x is a local region surrounding x, and g_x is a simple local classifier designed to approximate f over R_x. For example, on continuous data, Anchors always output (R_x, g_x) where R_x is a hyper-rectangle around x and g_x is a constant classifier; gradient-based explanations such as Smooth-grad or LIME implicitly approximate the decision function f by a linear function in a local region around x. Obviously, any human-accessible explanation that is being derived from such a local approximation can only be trustworthy if the local function g_x indeed approximates the underlying function f on the local region R_x. Hence, a necessary condition for a local explanation to be trustworthy is that the function g_x is close to f on the region R_x, and this should be the case for most data points x sampled from the underlying distribution. To measure how closely a set of local explanations adheres to the original classifier f, we propose an explainability loss function L_γ(E, f), which quantifies the frequency with which f differs by more than γ from the local classifier g_x over the local region R_x (see Sec. <ref> for precise definitions). We then introduce a formalism for auditing local explanations where an auditor attempts to estimate the explainability loss L_γ(E, f). In our formalism, the auditor does so with access to the following objects: * A set of data points X = {x_1, …, x_n} drawn i.i.d from the underlying data distribution. * The outputs of a classifier on these points, f(X) = {f(x_1), …, f(x_n)}. * The provided local explanations for these points E(f, X) = {E(f, x_1), …, E(f, x_n)} Observe that in our formalism, the auditor has only restricted access to the machine learning model and the explanations: they can only interact with them through their evaluations at specific data-points. We have chosen this scenario because we believe it to be the most realistic one in many practical situations, where explanation providers try to disclose as little information on their underlying machine learning framework as possible. In our main result, Theorem <ref>, we provide a lower bound for the amount of data needed for an auditor to accurately estimate L_γ(E, f). A key quantity in our analysis is the locality of the provided explanations. We show that the smaller the provided local regions R_x are, the more difficult it becomes to audit the explainer. 
Intuitively, this holds because estimating the explainability loss relies on observing multiple points within these regions, as illustrated in Panel (b) of Figure <ref>. By contrast, if this fails to hold (Panel (a)), then there is no way to validate how accurate the local explanations are. We also complement our lower bound with an upper bound (Theorem <ref>) that demonstrates that reasonably large local regions enable auditing within our framework. Our results imply that the main obstacle to auditing local explanations in this framework is the locality of the provided explanations. As it turns out, this quantity is often prohibitively small in practice, making auditing practically impossible. In particular, for high-dimensional applications, the local regions R_x given by the explainer are often exponentially small in the data-dimension. Thus the explanations cannot be verified in cases where there does not exist any prior trust between the explanation provider and the explanation receivers. We stress that estimating the local loss L_γ(E, f) serves as a first baseline on the path towards establishing trustworthy explanations. It is very well possible that an explanation provider achieves a small local loss (meaning that the local classifiers closely match the global classifier f) but nevertheless provides explanations that are misleading in some other targeted manner. Thus, we view successful auditing in this setting as a necessary but not sufficient condition for trusting an explanation provider. Our results might have far-reaching practical consequences. In cases where explanations are considered important or might even be required by law, for example by the AI Act, it is a necessary requirement that explanations can be verified or audited (otherwise, they would be completely useless). Our results suggest that in the typical high dimensional setting of modern machine learning, auditing pointwise explanations is impossible if the auditor only has access to pointwise decisions and corresponding explanations. In particular, collectives of users, for example coordinated by non-governmental organizations (NGOs), are never in the position to audit explanations. The only way forward in auditing explanations would be to appoint a third-party auditor who has more power and more access to the machine learning model, be it access to the full specification of the model function and its parameters, or even to the training data. Such access could potentially break the fundamental issues posed by small local explainability regions in our restricted framework, and could potentially enable the third party auditor to act as a moderator to establish trust between explanation receivers and explanation providers. §.§ Related Work Prior work <cit.> on auditing machine learning models is often focused on applying explainability methods to audit the models, rather than the explanations themselves. However, there has also been recent work <cit.> arguing for more rigorous ways to evaluate the performance of various explanation methods. There are numerous approaches for doing so: including performance based on human-evaluation <cit.>, and robustness <cit.>. There has also been a body of work that evaluates explanations based on the general notion of faithfulness between explanations and the explained predictor. Many approaches <cit.> examine neural-network specific measures, and typically rely on access to the neural network that would not be present in our setting. 
Others are often specialized to a specific explainability tool – with LIME <cit.> and Shap <cit.> being especially popular choices. By contrast, our work considers a general form of local explanation, and studies the problem of auditing such explanations in a restricted access setting, where the auditor only interacts with explanations through queries. To our knowledge, the only previous work in a similar setting is <cit.>, in which local explanations are similarly audited based on collecting them on a set of data sampled from a data distribution. However, their work is restricted to a discrete setting where local fidelity is evaluated based on instances that receive identical explanations. In particular, they attempt to verify that points receiving identical explanations also receive identical predictions. By contrast, our work lies within a continuous setting, where a local explanation is said to be faithful if it matches the underlying model over a local region. A central quantity to our analysis is the locality of an explanation, which is a measure of how large the local regions are. Prior work has rarely measured or considered this quantity, with a notable exception being Anchors method <cit.> which utilizes it to assist in optimizing their constructed explanations. However, that work did not explore this quantity beyond treating it as a fixed parameter. § LOCAL EXPLANATIONS §.§ Preliminaries In this work, we restrict our focus to binary classification – we let μ denote a data distribution over ^d, and f:^d →{± 1} be a so-called black-box binary classifier that needs to be explained. We note that lower bounds shown for binary classification directly imply lower bounds in more complex settings such as multi-class classification or regression. For any measurable set, M ⊆^d, we let μ(M) denote the probability mass μ assigns M. We will also let supp(μ) denote the support of μ, which is the set of all points x such that μ({x': ||x - x'|| ≤ r}) > 0 for all r > 0. We define a hyper-rectangle in ^d as a product of intervals, (a_1, b_1] ×…× (a_d, b_d], and let ℋ_d denote the set of all hyper-rectangles in ^d. We let ℬ_d denote the set of all L_2-balls in ^d, with the ball of radius r centered at point x being defined as B(x, r) = {x': ||x - x'|| ≤ r}. We will utilize the following two simple hypothesis classes: 𝒞_d, which is the set of the two constant classifiers over ^d, and ℒ_d, which is the set of all linear classifiers over ^d. These classes serve as important examples of simple and interpretable classifiers for constructing local explanations. §.§ Defining local explanations and explainers One of the most basic and fundamental concepts in Explainable Machine Learning is the notion of a local explanation, which, broadly speaking, is an attempt to explain a complex function's behavior at a specific point. In this section, we describe a general form that such explanations can take, and subsequently demonstrate that two widely used explainability methods, LIME and Anchors, adhere to it. We begin by defining a local explanation for a classifier at a given point. For x ∈^d, and f: ^d →{± 1}, a local explanation for f at x is a pair (R_x, g_x) where R_x ⊆^d is a region containing x, and g_x: R_x →{± 1} is a classifier. Here, g_x is typically a simple function intended to approximate the behavior of a complex function, f, over the region R_x. The idea is that the local nature of R_x simplifies the behavior of f enough to provide intuitive explanations of the classifier's local behavior. 
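To make the above definition concrete, the following minimal Python sketch (ours, not part of the paper) represents a local explanation as a pair consisting of a membership test for R_x and a callable local classifier g_x. The toy classifier f, the fixed box width, and all names are illustrative assumptions rather than anything prescribed by the paper.

import numpy as np
from dataclasses import dataclass
from typing import Callable

@dataclass
class LocalExplanation:
    """A local explanation (R_x, g_x) for a classifier f at a point x."""
    in_region: Callable[[np.ndarray], bool]   # indicator of the local region R_x
    local_clf: Callable[[np.ndarray], int]    # simple local classifier g_x: R_x -> {-1, +1}

def f(x: np.ndarray) -> int:
    # Toy black-box classifier on R^d (purely illustrative).
    return 1 if np.sum(x) >= 0 else -1

def anchors_style_explainer(x: np.ndarray, width: float = 0.5) -> LocalExplanation:
    """An illustrative Anchors-style explanation: a half-open hyper-rectangle around x
    paired with the constant classifier g_x(x') = f(x); the real Anchors algorithm
    searches for such a rectangle instead of using a fixed width."""
    lo, hi = x - width, x + width
    label_at_x = f(x)
    return LocalExplanation(
        in_region=lambda xp: bool(np.all(xp > lo) and np.all(xp <= hi)),  # (lo, hi] box
        local_clf=lambda xp: label_at_x,
    )

x = np.array([0.3, -0.1])
expl = anchors_style_explainer(x)
print(expl.in_region(np.array([0.2, 0.1])), expl.local_clf(np.array([0.2, 0.1])))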
Next, we define a local explainer as a map that outputs local explanations. E is a local explainer if for any f: ^d →{± 1} and any x ∈^d, E(f, x) is a local explanation for f at x. We denote this as E(f, x) = (R_x, g_x). We categorize local explainers based on the types of explanations they output – if ℛ denotes a set of regions in ^d, and 𝒢 denotes a class of classifiers, ^d →{± 1}, then we say E ∈ℰ(ℛ, 𝒢) if for all f, x, E(f, x) outputs (R_x, g_x) with R_x ∈ℛ and g_x ∈𝒢. Local explainers are typically constructed for a given classifier f over a given data distribution μ. In practice, different algorithms employ varying amounts of access to both f and μ – for example, SHAP crucially relies on data sampled from μ whereas gradient-based methods often rely on knowing the actual parameters of the model, f. To address all of these situations, our work takes a black-box approach in which we make no assumptions about how a local explainer is constructed from f and μ. Instead we focus on understanding how to evaluate how effective a given explainer is with respect to a classifier f and a data distribution μ. §.§ Examples of Explainers We now briefly discuss how various explainability tools in practice fit into our framework of local explanations. Anchors: The main idea of Anchors <cit.> is to construct a region around the input point in which the desired classifier to explain remains (mostly) constant. Over continuous data, it outputs a local explainer, E, such that E(f, x) = (R_x, g_x), where g_x is a constant classifier with g_x(x') = f(x) for all x' ∈^d, and R_x is a hyper-rectangle containing x. It follows that the Anchors method outputs an explainer in the class ℰ(ℋ_d, 𝒞_d). Gradient-Based Explanations: Many popular explainability tools <cit.> explain a model's local behavior by using its gradient. By definition, gradients have a natural interpretation as a locally linear model. Because of this, we argue that gradient-based explanations are implicitly giving local explanations of the form (R_x, g_x), where R_x = B(x, r) is a small L_2 ball centered at x, and g_x is a linear classifier with coefficients based on the gradient. Therefore, while the radius r and the gradient g_x being used will vary across explanation methods, the output can nevertheless be interpreted as an explainer in ℰ(ℬ_d, ℒ_d), where ℬ_d denotes the set of all L_2-balls in ^d, and ℒ_d denotes the set of all linear classifiers over ^d. LIME: At a high level, LIME <cit.> also attempts to give local linear approximations to a complex model. However, unlike gradient-based methods, LIME includes an additional feature-wise discretization step where points nearby the input point, x, are mapped into a binary representation in {0, 1}^d based on how similar a point is to x. As a consequence, LIME can be construed as outputting local explanations of a similar form to those outputted by gradient-based methods. Finally, as an important limitation of our work, although many well-known local explanations fall within our definitions, this does not hold in all cases. Notably, Shapley-value <cit.> based techniques do not conform to the format given in Definition <ref>, as it is not clear how to construct the local regions they correspond to, nor what the precise local classifier being used would be. §.§ A measure of how accurate an explainer is We now formalize what it means for a local classifier, g_x, to "approximate" the behavior of f in R_x.
For explainer E and point x, we let the local loss, L(E, f, x), be defined as the fraction of examples drawn from the region R_x such that g_x and f have different outputs. More precisely, we set L(E,f,x) = _x' ∼μ[g_x(x') ≠ f(x') | x' ∈ R_x]. μ is implicitly used to evaluate E, and is omitted from the notation for brevity. We emphasize that this definition is specific to classification, which is the setting of this work. A similar kind of loss can be constructed for regression tasks based on the mean-squared difference between g_x and f. We contend that maintaining a low local loss across most data points is essential for any reasonable local explainer. Otherwise, the explanations provided by the tool can be made to support any sort of explanation as they no longer have any adherence to the original function f. To measure the overall performance of an explainer over an entire data distribution, it becomes necessary to aggregate L(E, f, x) over all x ∼μ. One plausible way to accomplish this would be to average L(E, f, x) over the entire distribution. However, this would leave us unable to distinguish between cases where E gives extremely poor explanations at a small fraction of points as opposed to giving mediocre explanations over a much larger fraction. To remedy this, we opt for a more precise approach in which a user first chooses a local error threshold, 0 < γ < 1, such that local explanations that incur a local loss under γ are considered acceptable. They then measure the global loss for E by determining the fraction of examples, x, drawn from μ that incur a local loss above γ. Let γ > 0 be a user-specified local error threshold. For local explainer E, we define the explainability loss L_γ(E, f) as the fraction of examples drawn from μ that incur a local loss larger than γ. That is, L_γ(E, f) = _x ∼μ[L(E, f, x) ≥γ]. We posit that the quantity L_γ(E, f) serves as an overall measure of how faithfully explainer E adheres to classifier f, with lower values of L_γ(E,f) corresponding to greater degrees of faithfulness. §.§ A measure of how large local regions are The outputted local region R_x plays a crucial role in defining the local loss. On one extreme, setting R_x to consist of a single point, {x}, can lead to a perfect loss of 0, as the explainer only needs to output a constant classifier that matches f at x. But these explanations would be obviously worthless as they provide no insight into f beyond its output f(x). On the other extreme, setting R_x = ^d would require the explainer to essentially replace f in its entirety with g_x, which would defeat the purpose of explaining f (as we could simply use g_x instead). Motivated by this observation, we define the local mass of an explainer at a point x as follows: The local mass of explainer E with respect to point x and function f, denoted Λ(E, f, x), is the probability mass of the local region outputted at x. That is, if E(f, x) = (R_x, g_x), then Λ(E, f, x) = _x' ∼μ[x' ∈ R_x]. Based on our discussion above, it is unclear what an ideal local mass is. Thus, we treat this quantity as a property of local explanations rather than a metric for evaluating their validity. As we will later see, this property is quite useful for characterizing how difficult it is to estimate the explainability loss of an explainer. We also give a global characterization of the local mass called locality. The locality of explainer E with respect to function f, denoted Λ(E, f), is the minimum local mass it incurs.
That is, Λ(E, f) = inf_x ∈ supp(μ)Λ(E, f, x). § THE AUDITING FRAMEWORK Recall that our goal is to determine how explanation receivers can verify provided explanations in situations where there isn't mutual trust. To this end, we provide a framework for auditing local explanations, where an auditor attempts to perform this verification with as little access to the underlying model and explanations as possible. Our framework proceeds in with the following steps. * The auditor fixes a local error threshold γ. * A set of points X = {x_1, …, x_n} are sampled i.i.d from data distribution μ. * A black-box classifier f is applied to these points. We denote these values with f(X) = {f(x_1), …, f(x_n)}. * A local explainer E outputs explanations for f at each point. We denote these explanations with E(f, X) = {E(f, x_1), …, E(f, x_n)}. * The Auditor outputs an estimate A(X, f(X), E(f, X)) for the explainability loss. Observe that the auditor can only have access to the the model f and its corresponding explanations through the set of sampled points. Its only inputs are X, f(X), and E(f, X). In the context of the job application example discussed in Section <ref>, this would amount to auditing a company based on the decisions and explanations they provided over a set of applicants. In this framework, we can define the sample complexity of an auditor as the amount of data it needs to guarantee an accurate estimate for L_γ(E, f). More precisely, fix a data distribution, μ, a classifier, f, and an explainer E. Then we have the following: For tolerance parameters, ϵ_1, ϵ_2, δ > 0, and local error threshold, γ > 0, we say that an auditor, A, has sample complexity N(ϵ_1, ϵ_2, δ, γ) with respect to μ, E, f, if for any n ≥ N(ϵ_1, ϵ_2, δ, γ), with probability at least 1-δ over X = {x_1, …, x_n}∼μ^n, A outputs an accurate estimate of the explainability loss, L_γ(E, f). That is, L_γ(1 + ϵ_1)(E, f) - ϵ_2 ≤ A(X, f(X), E(f, X)) ≤ L_γ(1-ϵ_1)(E, f) + ϵ_2. Next, observe that our sample complexity is specific to the distribution, μ, the classifier, f, and the explainer, E. We made this choice to understand the challenges that different choices of μ, f, and E pose to an auditor. As we will later see, we will bound the auditing sample complexity using the locality (Definition <ref>), which is a quantity that depends on μ, f, and E. § HOW MUCH DATA IS NEEDED TO AUDIT AN EXPLAINER? §.§ A lower bound on the sample complexity of auditing We now give a lower bound on the amount of data needed to successfully audit an explainer. That is, we show that for any auditor A and any data distribution μ we can find some explainer E and some classifier f such that A is highly likely to give an inaccurate estimate of the explainability loss. To state our theorem we use the following notation and assumptions. Recall that ℋ_d denotes the set of hyper-rectangles in ^d, and that 𝒞_d denotes the set of the two constant binary classifiers over ^d. Additionally, we will include a couple of mild technical assumptions about the data distribution μ. We defer a detailed discussion of them to Appendices <ref> and <ref>. We now state our lower bound. Let ϵ_1, ϵ_2 < 1/48 be tolerance parameters, and let γ < 1/3 be any local error threshold. Let μ be any non-degenerate distribution, and λ > 0 be any desired level of locality. Then for any auditor A there exists a classifier f: ^d →{± 1} and an explainer E ∈ℰ(ℋ_d, 𝒞_d) such that the following conditions hold. * E has locality Λ(E, f) = λ. 
* There exist absolute constants c_0, c_1 > 0 such that if the auditor receives n ≤ c_0/(max(ϵ_1, ϵ_2) λ^(1 - c_1 max(ϵ_1, ϵ_2))) many points, then with probability at least 1/3 over X = {x_1, …, x_n}∼μ^n, A gives an inaccurate estimate of L_γ(E, f). That is, A(X, f(X), E(f, X)) ∉ [L_γ(1+ϵ_1)(E, f) - ϵ_2, L_γ(1-ϵ_1)(E, f) + ϵ_2]. In summary, Theorem <ref> says that auditing an explainer requires an amount of data that is inversely proportional to its locality. Notably, this result does not require the data-distribution to be adversarially chosen, and furthermore applies when the explainer E can be guaranteed to have a remarkably simple form, being in ℰ(ℋ_d, 𝒞_d). Proof intuition of Theorem <ref>: The main intuition behind Theorem <ref> is that estimating the local explainability loss, L(E, f, x), requires us to observe samples from the regions R_x. This would allow us to obtain an empirical estimate of L(E, f, x) by simply evaluating the fraction of points from R_x that the local classifier, g_x, misclassifies. This implies that the locality λ is a limiting factor as it controls how likely we are to observe data within a region R_x. However, this idea alone isn't sufficient to obtain our lower bound. Although the quantity Ω(1/λ^(1 - O(ϵ))) does indeed serve as a lower bound on the amount of data needed to guarantee seeing a large number of points within a region, R_x, it is unclear what a sufficient number of observations within R_x is. Even if we don't have enough points in any single region, R_x, to accurately estimate L(E, f, x), it is entirely plausible that aggregating loose estimates of L(E, f, x) over a sufficient number of points x might allow us to perform some type of estimation of L_γ(E, f). To circumvent this issue, the key technical challenge is constructing a distribution of functions f and fixing m = O(1/ϵ) such that observing fewer than m points from a given region, R_x, actually provides zero information about which function was chosen. We include a full proof in Appendix <ref>. §.§ An upper bound on the sample complexity of auditing. We now show that if λ is reasonably large, then auditing the explainability loss L_γ(E,f) can be accomplished. As mentioned earlier, we stress that succeeding in our setting is not a sufficient condition for trusting an explainer – verifying that the local explanations g_x match the overall function f is just one property that a good explainer would be expected to have. Thus the purpose of our upper bound in this section is to complement our lower bound, and further support that the locality parameter λ is the main factor controlling the sample complexity of an auditor. Our auditing algorithm proceeds by splitting the data into two parts, X_1 and X_2. The main idea is to audit the explanations given for points in X_1 by utilizing the data from X_2. If we have enough data, then it is highly likely for us to see enough points in each local region to do this. We defer full details for this procedure to Appendix <ref>. We now give an upper bound on its sample complexity. There exists an auditor, A, for which the following holds. Let μ be a data distribution, f be a classifier, and E be an explainer. Suppose that E has locality λ with respect to μ and f. Then A has sample complexity at most N(ϵ_1, ϵ_2, δ, γ) = Õ(1/ϵ_2^2 + 1/(λγ^2ϵ_1^2)). This bound shows that the locality is sufficient for bounding the sample complexity for auditing local explanations. We defer a full proof to Appendix <ref>.
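To make the split-sample idea above concrete, here is a minimal sketch of such an auditor (ours; the actual procedure, including how estimation error and empty regions are handled, is deferred to the appendix of the paper). It treats each provided explanation as a pair consisting of a membership test for R_x and the local classifier g_x, estimates the local loss of every audited point from the held-out points that land in its region, and reports the fraction of audited points whose estimated local loss exceeds γ.

def audit(X, f_values, explanations, gamma, split=0.5):
    """Estimate the explainability loss L_gamma(E, f) from samples only.

    X            : sequence of points drawn i.i.d. from mu
    f_values     : the provider's predictions f(x_i), one per point
    explanations : list of (in_region, local_clf) pairs, one per point
    """
    n = len(X)
    m = int(split * n)
    flagged = 0
    for i in range(m):                                       # audited points (X_1)
        in_region, local_clf = explanations[i]
        hits = [j for j in range(m, n) if in_region(X[j])]   # held-out points (X_2) landing in R_{x_i}
        if not hits:
            continue                                         # no evidence for this region; the full procedure handles this case
        disagree = sum(local_clf(X[j]) != f_values[j] for j in hits)
        if disagree / len(hits) >= gamma:                    # empirical local loss at x_i exceeds the threshold
            flagged += 1
    return flagged / m                                       # empirical estimate of L_gamma(E, f)

Seeing held-out points inside each region is exactly where the locality enters: with n samples one expects on the order of nλ points per region, which is the source of the 1/λ factor in both bounds.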
Observe that the dependency on λ is O(1/λ), which matches the dependency in our lower bound provided that ϵ_1, ϵ_2 → 0. § THE LOCALITY OF PRACTICAL EXPLAINABILITY METHODS CAN BE EXTREMELY SMALL Theorems <ref> and <ref> demonstrate that the locality λ characterizes the amount of data needed for an auditor to guarantee an accurate estimate of the explainability loss L_γ(E, f). It follows that if λ is extremely small, then auditing could require a prohibitive amount of data. This leads to the following question: how small is λ for practical explainability algorithms? To answer this, we will examine several commonly used algorithms that adhere to our framework. We begin with gradient-based methods, which can be construed as providing an explainer in the class ℰ(ℬ_d, ℒ_d), where ℬ_d denotes the set of L_2 balls in ^d, and ℒ_d denotes the set of linear classifiers. To understand the impact of dimension on the locality of such explainers, we begin with a simple theoretical example. Let μ be the data distribution over ^d that is a union of three concentric spheres. Specifically, x ∼μ is equally likely to be chosen uniformly at random from the sets S_1 = {x: ||x|| = 1-α}, S_2 = {x: ||x|| = 1}, and S_3 = {x: ||x|| = 1 + β}, where α, β are small d-dependent constants (defined in Appendix <ref>). Let f: ^d →{± 1} be any classifier such that f(x) = 1 if x ∈ S_1 ∪ S_3 and f(x) = -1 if x ∈ S_2. Observe that μ is a particularly simple data distribution over three spherical manifolds, and f is a simple classifier that distinguishes its two parts. We illustrate this distribution in panel (a) of Figure <ref>. Despite its simplicity, locally explaining f with linear classifiers faces fundamental challenges. We illustrate this in Figure <ref>. Choosing a large local neighborhood, as done at point A, leads to issues posed by the curvature of the data distribution, meaning that it is impossible to create an accurate local linear classifier. On the other hand, choosing a neighborhood small enough for local linearity, as done at point B, leads to local regions that are exponentially small with respect to the data dimension. We formalize this in the following theorem. Let μ, f be as described above, and let E be any explainer in ℰ(ℬ_d, ℒ_d). Let x^* be any point chosen on the outer sphere, S_3. Then E outputs an explanation at x^* that either has a large local loss, or that has a small local mass. That is, either L(E, f, x^*) ≥ 1/6, or Λ(E, f, x^*) ≤ 3^(1-d). Theorem <ref> demonstrates that if a locally linear explanation achieves even a remotely reasonable local loss, then it necessarily must have an extremely small local mass. This suggests that gradient-based explanations will be exponentially local with respect to the data dimension, d. We believe that this is also exhibited in practice, particularly over image data, where explanations are often verified based on perceptual validity, rather than relevance to practical training points beyond the point being explained. For example, the explanations given by SmoothGrad <cit.> are visualized as pixel-by-pixel saliency maps. These maps often directly correspond to the image being explained, and are clearly highly specific to it (see e.g. Figure 3 of <cit.>). As a result, we would hardly expect the implied linear classifier to have much success over almost any other natural image. This in turn suggests that the locality would be extremely small.
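To convey how prohibitive this is quantitatively, the following back-of-the-envelope computation (ours, not from the paper) evaluates the locality bound 3^(1-d) from the theorem above together with the order-1/λ sample requirement it implies for an auditor.

# Rough numbers implied by the concentric-spheres bound: any locally linear
# explanation with local loss below 1/6 has local mass at most 3^(1-d).
for d in (5, 10, 20, 50, 100):
    lam = 3.0 ** (1 - d)
    print(f"d = {d:3d}   locality <= {lam:.2e}   so roughly 1/lambda >= {1.0 / lam:.2e} samples")

Already at d = 20 the implied number of queries exceeds 10^9.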
We also remark that a similar argument can be made for Lime, which also tends to validate its explanations over images perceptually (for example, see Figure 4 of <cit.>). Unlike the previous methods, Anchors <cit.> explicitly seeks to maximize the local mass of its explanations. However, it abandons this approach for image classifiers, where it instead maximizes a modified form of locality based on superimposing pixels from the desired image with other images. While this gives perceptually valid anchors, the types of other images that fall within the local region are completely unrealistic (as illustrated in Figure 3 of <cit.>), and the true locality parameter is consequently extremely small. Thus, although Anchors can provide useful and auditable explanations in low-dimensional, tabular data settings, we believe that they too suffer from issues with locality for high-dimensional data. In particular, we note that it is possible to construct examples similar to Theorem <ref> that are designed to force highly local Anchors-based explanations.

§ CONCLUSION

Our results in Section <ref> demonstrate that the locality of a local explainer characterizes how much data is needed to audit it; smaller local regions require larger amounts of data. Meanwhile, our discussion in Section <ref> shows that typical local explanations are extremely local in high-dimensional space. It follows that in many cases, auditing solely based on point-wise decisions and explanations is impossible. Thus, any entity without model access, such as a collective of users, is never in a position to guarantee trust in a machine learning model. We believe that the only way forward is through a more powerful third-party auditor that crucially has more access to the machine learning model, as this could potentially circumvent the fundamental challenges posed by small explainability regions. We believe that investigating the precise types of access this would entail is an important direction for future work that might have broad practical consequences.

§ PROOF OF THEOREM <REF>

§.§ An additional assumption

We also include the assumption that the locality parameter is small compared to the tolerance parameters. More precisely, we assume that λ < ϵ_2^2. We believe this to be an extremely mild assumption considering that we typically operate in the regime where λ is exponentially small in the dimension, d, whereas the tolerance parameters are typically between 10^-2 and 10^-3.

§.§ Main Proof

Fix ϵ_1, ϵ_2, γ, λ, and μ, as given in the theorem statement. Our goal is to show the existence of a classifier f and an explainer E so that the auditor, A, is likely to incorrectly estimate the parameter L_γ(E, f). To do so, our strategy will instead be to consider a distribution over choices of (E, f), and show that in expectation over this distribution, A estimates L_γ(E, f) poorly. To this end, we define the following quantities:
* Let E be the explainer given in Section <ref>.
* Let f^* be the random classifier defined in Section <ref>.
* Let n be any integer with n ≤ 1/(2592 max(ϵ_1, ϵ_2) λ^{1 - 8max(ϵ_1, ϵ_2)}).
* Let X be a random variable for a set of points {x_1, …, x_n} drawn i.i.d. from μ.
* Let Y = f^*(X) be a random variable for {f^*(x_1), f^*(x_2), …, f^*(x_n)}. Y has randomness over both f^* and X.
* Let Δ^n = (^d)^n ×{± 1}^n, and σ denote the measure over Δ^n induced by (X, Y).
* By definition, E's output is independent of the function f^*. Thus, we will abbreviate A's output by writing A(X, f^*(X), E(f^*, X)) = A(X, Y, E).
This emphasizes that both X and the output of E are independent of f^*.
* We let I^* denote the interval in which the auditor seeks to output an estimate. That is, I^* = [L_γ(1+ϵ_1)(E, f^*) - ϵ_2, L_γ(1-ϵ_1)(E, f^*) + ϵ_2].

Using this notation, we seek to lower bound the probability that the auditor fails; that is, we seek to lower bound _f^*, X[A(X, Y, E) ∉ I^*]. To do so, let T_1 denote the event T_1 = (1/2 - ϵ_2 < L_γ(1+ϵ_1)(E, f^*) ≤ L_γ(1 - ϵ_1)(E, f^*) < 1/2 + ϵ_2), and T_0 denote the event T_0 = (1/2 + 3ϵ_2 < L_γ(1+ϵ_1)(E, f^*) ≤ L_γ(1 - ϵ_1)(E, f^*) < 1/2 + 5ϵ_2). The key observation is that any estimate, A(X, Y, E), can be inside at most one of the intervals I_1 = [1/2 - ϵ_2, 1/2 + ϵ_2] and I_2 = [1/2 + 3ϵ_2, 1/2 + 5ϵ_2]. Using this, we can re-write our desired probability through the following integration. Let (x, y) denote specific choices of (X, Y). Note that in this context, x represents a set of points in (^d)^n, and y represents a set of labels in {± 1}^n. We then have the following: _f^*, X[A(X, Y, E) ∉ I^*] = ∫_Δ^n_f^*[A(x, y, E) ∉ I^* | X=x, Y=y]dσ(x,y) ≥∫_Δ^n_f^*[T_1|X=x, Y=y](A(x, y, E) ∉ I_1) + _f^*[T_0|X=x, Y=y](A(x, y, E) ∉ I_2)dσ(x,y) ≥∫_Δ^nmin(_f^*[T_1|X=x, Y=y], _f^*[T_0|X=x, Y=y])dσ(x,y), where the last inequality holds because at least one of the events, A(x, y, E) ∉ I_1 and A(x, y, E) ∉ I_2, must hold. To bound this last quantity, we utilize Lemma <ref>. Let S^* = {(x, y): [T_1 | X=x, Y=y], [T_0 | X=x, Y=y] ≥2/5}. By Lemma <ref>, we have that σ(S^*) ≥5/6. It follows that _f^*, X[A(X, Y, E) ∉ I^*] ≥∫_Δ^nmin(_f^*[T_1|X=x, Y=y], _f^*[T_0|X=x, Y=y])dσ(x,y) ≥∫_S^*min(_f^*[T_1|X=x, Y=y], _f^*[T_0|X=x, Y=y])dσ(x,y) ≥∫_S^*2/5dσ(x,y) = 2/5σ(S^*) ≥1/3, which completes the proof, as this implies that with probability at least 1/3, the auditor's estimate is not sufficiently accurate.

§.§ Non-degenerate Distributions

Theorem <ref> includes the assumption that μ is non-degenerate, which is defined as follows. We say that a data distribution μ over ^d is non-degenerate if for all x ∈^d, there exists 1 ≤ i ≤ d such that μ({x': x_i' = x_i}) = 0. Being non-degenerate essentially means that at any point, x, the data distribution μ places no point mass along at least one feature. This condition holds for any distribution with a well-defined density over ^d (such as a Gaussian) and is also met for most practical datasets in which any of the features is globally continuously distributed (e.g., mass in kg over a distribution of patients). We exclude data distributions with point masses because they can pose particularly simple cases in which there is a strict lower bound on how small the local region assigned to a given point can be. For example, in the extreme case where μ is concentrated on a single point, auditing any model or explanation over μ is trivial. We now show a useful property of non-degenerate distributions.

Let μ be a non-degenerate distribution and R be a hyper-rectangle. Then R can be partitioned into two hyper-rectangles, R_1, R_2 such that μ(R_1), μ(R_2) ≥μ(R)/4. Let R = (a_1, b_1] × (a_2, b_2] ×…× (a_d, b_d]. First, suppose that there exists 1 ≤ i ≤ d such that for all r ∈ (a_i, b_i], μ({x: x_i = r}∩ R) ≤μ(R)/4. Let r^* = sup{r: μ(R ∩{x: x_i ≤ r}) ≤μ(R)/4}. It follows that setting R_1 = R ∩{x: x_i ≤ r^*} and R_2 = R ∖ R_1 will suffice, as R_1 will have probability mass at least μ(R)/4 and probability mass at most μ(R)/2. Otherwise, suppose that no such i exists. Then there exist r_1, r_2, …, r_d such that μ(R ∩{x: x_i = r_i}) > 0 for each 1 ≤ i ≤ d.
It follows that the point (r_1, …, r_d) violates Definition <ref>, which is a contradiction. Thus some i exists, which allows us to apply the above argument, finishing the proof. §.§ Constructing f^* and E We begin by partitioning the support of μ into hyper-rectangles such that each rectangle has probability mass in the interval [α/4, α]. We then further partition these rectangles into a large number of equal parts. Formally, we have the following: Let α > 0 be fixed, and K > 0 be any integer. Then for some integer L > 0, there exists a set of hyper-rectangles, {R_i^j: 1 ≤ i ≤ L, 1 ≤ j ≤ K} such that the following hold: * R_i^1, … R_i^K partition rectangle R_i. * For all 1 ≤ i ≤ L, α≤μ(R_i) ≤ 4α. * For all 1 ≤ i ≤ L and 1 ≤ j ≤ K, μ(R_i)/4K≤μ(R_i^j) ≤μ(R_i)/K. First construct R_1, …, R_L by using the following procedure: * Begin with the set 𝒜 = {R^*} where R^* is a large rectangle containing the support of μ. * If 𝒜 contains a rectangle, R, such that μ(R) > 4α, then apply Lemma <ref> to split R into two rectangles with mass at least μ(R)/4 and mass at most 3μ(R)/4. * Repeat step 2 until no such rectangles, R, exist. This process clearly must terminate in a set of rectangles each of which has mass in the desired range, and also must terminate as a single rectangle can only be cut at most log1/α/log3/4 times. Next, to construct R_i^1, R_i^2, R_i^K, we simply utilize an analogous procedure, this time starting with {R_i} and replacing α with μ(R_i)/4K. We now construct a fixed explainer, E. Let E denote the explainer so that for all x ∈ supp(μ), we have E(x) = (R_x, g^+1) where R_x is the unique hyper-rectangle, R_i that contains x, and g^+1 is the constant classifier that always outputs +1. We now construct a distribution over functions f, and let f^* be a random function that follows this distribution. We have the following: Let f^* be a random classifier mapping ^d to {± 1} constructed as follows. Let m be an integer and 0 ≤ p_1, …, p_2m, q_1, …, q_2m≤ 1 be real numbers that satisfy the conditions set forth in Lemma <ref>. Then f^* is constructed with the following steps: * Let P be a binary event that occurs with probability 1/2. * If P occurs, then set r_i = p_i for 1 ≤ i ≤ p_2m. Otherwise set r_i = q_i. * If x ∉∪_i=1^L R_i, then f^*(x) = +1. * For each rectangle R_i, randomly select 1 ≤ k ≤ 2m at uniform. * For each sub-rectangle R_i^j, with probability r_k, set f(x) = -1 for all x ∈ R_i^j, and with probability 1-r_k, set f^*(x) = +1 for all x ∈ R_i^j. Note that m is constructed based on ϵ_1, ϵ_2, and γ which we assume to be provided as in the statement of Theorem <ref>. §.§ Properties of f^* and E We now prove several useful properties of this construction. To do so, we use the following notation: * We let f^* denote the random variable representing the way f is generated. We use f^* = f to denote the event that f^* equals a specific function f:^d →{± 1}. * We let P denote the indicator variable for the binary event used in Section <ref> to construct f. * We let m denote the integer from Lemma <ref> that is used to construction f^*. * We let X = (x_1, …, x_n) ∼μ^n denote a random variable of n i.i.d selected points from μ. We use x to denote a specific instance of X. * We let Y = (y_1, …, y_n) be a random variable over labels constructed by setting y_i = f^*(x_i). We similarly use y to denote a specific instance of Y. * We let σ denote the measure over (^d ×{± 1})^n associated with (X, Y) . 
* We let Δ^n denote the domain of the pair of random vectors (X, Y) (as done in Section <ref>) We begin with a bounds on the probability that we see any rectangle that has a large number of points selected from it. Let R_1, …, R_L be as defined in section <ref>, and m as given. Let U denote the subset of Δ^n such that U = {(x, y): ∃ 1 ≤ i ≤ z, |X ∩ R_i| ≥ 2m }. Then σ(U) ≤1/180. We bound the probability that a single rectangle, R_i, contains at least 2m points from X, and then apply a union bound over all L rectangles. By construction, μ(R_i) ≤ 4λ, which implies that for each point x_j ∈ X the probability that X_j falls within rectangle R_i is at most 4λ. Thus, for any set of 2m distinct points from X, the probability that they all fall within R_i is at most (4λ)^2m. By taking a union bound over all n2m subsets of 2m point from X, and substituting our assumed upper bound for n (point 3. of Section <ref>), we have the following [|X ∩ R_i| ≥ 2m] ≤n2m(4λ)^2m ≤(en/2m)^2m(4λ)^2m ≤(e/2m1/2592max(ϵ_1, ϵ_2)λ^1 - 8max(ϵ_1, ϵ_2))^2m(4λ)^2m = (4λ) (e/2m4^1 - 1/2mλ^1- 1/2m/2592max(ϵ_1, ϵ_2)λ^1 - 8max(ϵ_1, ϵ_2))^2m. By definition (see Lemma <ref>), m ≥1/16max(ϵ_1, ϵ_2). Substituting this, and noting that λ^1-1/2m is increasing with respect to m (since λ < 1), we have [|X ∩ R_i| ≥ 2m] ≤ (4λ) (e/2m4^1 - 1/2mλ^1- 1/2m/2592max(ϵ_1, ϵ_2)λ^1 - 8max(ϵ_1, ϵ_2))^2m ≤ (4λ) (e8max(ϵ_1, ϵ_2)/14λ^1- 8max(ϵ_1, ϵ_2)/2592max(ϵ_1, ϵ_2)λ^1 - 8max(ϵ_1, ϵ_2))^2m ≤ (4λ) (96/2592)^2m < λ/180. Finally, we apply a union bound over all rectangles. Observe that there are at most 1/λ such rectangles because by construction each rectangle has mass at most λ. Thus, our total probability is at most 1/λλ/180 which is at most 1/180 as desired. Next, we leverage the properties from the construction of f to bound the conditional probability of P=1 when (x, y) ∉ U. Let (x, y) be in the support of σ so that (x, y) ∉ U. Then [P = 1|(X, Y) = (x,y)] = [P = 0|(X, Y) = (x,y)] = 1/2. Our main idea will be to use Bayes-rule, and show that [(X, Y)= (x,y) | P = 1] = [(X, Y) = (x,y) | P = 0. This will suffice due to the fact that the prior distribution for P is uniform over {0, 1}. To do so, we first note that X is independent from P. For this reason, it suffices to show that [Y=y | P = 1, X = x] = [Y = y | P = 0, X= x]. To do so, we will express these probabilities in terms of the real numbers, p_1, …, p_2m and q_1, …, q_2m from which they were constructed (see Definition <ref>). For each rectangle, R_i (see Lemma <ref>), let Y ∩ R_i denote the function values of all points in the set X ∩ R_i. It follows from step 4 of Definition <ref> that the values in Y ∩ R_i and Y ∩ R_j are independent from each other. Thus, we can re-write our desired probability as [Y=y | P = 1, X = x] = ∏_i=1^L [(Y ∩ R_i) = (y ∩ R_i)| P= 1, (X ∩ R_i) = (x ∩ R_i)]. We now analyze the latter quantity for a rectangle, R_i. For convenience, let us relabel indices so that x ∩ R_i = {x_1, x_2, …, x_l} and y ∩ R_i = {y_1, …, y_l} for some integer l ≥ 0. We also let X_1, …, X_l and Y_1, …, Y_l denote the corresponding values for X ∩ R_i and Y ∩ R_i. We now further assume that that for all x_a, x_b ∈{x_1, …, x_l}, that x_a and x_b are contained within different sub-rectangles, R_i^a, R_i^b (see Definition <ref>). If this isn't the case, observe that we can simply remove the pair (x_b, y_b), as by the construction of f^*, this will be forced to be identical to (x_a, y_a). 
By applying this assumption, we now have that for a given choice of the parameter r_k (step 4 of Definition <ref>), the values of y_1, …, y_l are mutually independent. Utilizing this, we have [(Y ∩ R_i) = (y ∩ R_i)| P= 1, (X ∩ R_i) = (x ∩ R_i)] = 1/2m∑_j=1^2m∏_k=1^l (y_k/2 - y_kp_j + 1/2) = 1/2m∑_j=1^2m F(p_j), Where F is a polynomial of degree l. Here, the expression, y_k/2 - y_kp_j + 1/2 simply evaluates to p_j if y_k = -1 (as p_j is the probability of observing a -1) and 1 - p_j otherwise. Next, observe that the only difference when performing this computation for P = 0 is that we use the real numbers, q_1, … q_2m instead. Thus, we have, [(Y ∩ R_i) = (y ∩ R_i)| P= 0, (X ∩ R_i) = (x ∩ R_i)] = 1/2m∑_j=1^2m∏_k=1^l (y_k/2 - y_kq_j + 1/2) = 1/2m∑_j=1^2m F(q_j), To show these two expression are equal, by assumption (x, y) ∉ U which implies that l < 2m. Furthermore, by Lemma <ref>, ∑_k=1^2m p_k^t =∑_k=1^2m q_k^t,for all 0 ≤ t ≤ l. It follows that ∑_k=1^2m F(p_k) = ∑_k=1^2m F(q_k), which implies our desired result. Next, we bound the probability of events related to the value of L_γ, the parameter that the Auditor seeks to estimate. Let T_1 denote the event that 1/2 - ϵ_2 < L_γ(1+ϵ_1)≤ L_γ(1-ϵ_1) < 1/2 + ϵ_2. Let T_0 denote the event that 1/2 + 3ϵ_2 < L_γ(1+ϵ_1) < L_γ(1-ϵ_1)≤1/2 + 5ϵ_2. Then taken over the randomness of the entire construction, [T_1, P =1], [T_0, P = 0] ≥89/180, where P is the binary event defined above. By definition, [P=1] = [P = 0] = 1/2. Thus, it suffices to show that [T_1|P = 1], [T_0|P=0] ≥89/90. We begin with the case that P= 1 (the case for P=0 will be similar). For each rectangle, R_i, let r(R_i) denote the choice of r_k made for R_i in step 5 of Definition <ref>. The crucial observation is that the value of r(R_i) nearly determines the local loss that E pays for points in R_i with respect to f^*. In particular, if the number of sub-rectangles, K, is sufficiently large, then by the law of large numbers, we have that with high probability over the choice of f^*, for all rectangles R_i and for all x ∈ R_i, |L(E, f^*, x) - r(R_i)| < 0.01γ(ϵ). Let us fix K to be any number for which this holds, and assume that this value of K is set throughout our construction. Next, recall by Lemma <ref> that p_1 < p_2 < … < p_m < γ(1 - 2ϵ_1) < γ(1 + 2ϵ_1) < p_m+1 < … < p_2m. Recall that r(R_i) is chosen at uniform among {p_1 … p_2m} (step 5 of Definition <ref>). It follows from Equation <ref> that for any x ∈ R_i, and for any α∈{γ(1-ϵ_1), γ(1+ϵ_1)} that _f^*[L(E, f^*, x) ≥α for all x ∈ R_i] = 1/2. Furthermore, because we are conditioning on P = 1, the value of f^* within each rectangle, R_i, is independent. This implies that we can bound the behavior of L_α(E, f^*) by expressing as a sum of independent variables. Let α∈{γ(1-ϵ_1), γ(1+ϵ_1)}, we have by Hoeffding's inequality that [L_α(E, f^*) ∈[1/2 - ϵ_2, 1/2 + ϵ_2]] = [(∑_i=1^L μ(R_i)(L(E, f^*, x) ≥α for all x ∈ R_i)) ∈[1/2 - ϵ_2, 1/2 + ϵ_2]] ≥ 1 - 2exp(-2ϵ_2^2/∑_i=1^L μ(R_i)^2) ≥ 1 - 2exp(-2ϵ_2^2/16λ) ≥ 1 - 1/180 = 179/180; The penultimate inequality holds since μ(R_i) ≤ 4λ for each R_i, and because there are at most 1/λ such rectangles. The last inequality holds because λ < ϵ_2^2 by the assumption in Section <ref>. Thus by taking a union bound over both values of α, we have that L_α(E, f^*) ∈[1/2 - ϵ_2, 1/2 + ϵ_2] with probability at least 89/90. This completes our proof for the case P =1. For P= 0, we can follow a nearly identical argument. 
The only difference is that the values of q (see Lemma <ref>) are selected so that _f^*[L(E, f^*, x) ≥α] ≥1/2 + 4ϵ_2. This results in the expected loss falling within a different interval, and an identical analysis using Hoeffding's inequality gives the desired result. The main idea of proving Theorem <ref> is to show that for many values of X, Y, the conditional probabilities of T_1 and T_0 occurring are both fairly large. This, in turn, will cause the Auditor to have difficulty as its estimate will necessarily fail for at least one of these events. To further assist with proving this, we have the following additional lemma. Let S^* denote the subset of (^d ×{± 1})^n such that S^* = {(x, y): [T_1 | (X, Y) = (x,y)], [T_0 | (X,Y) = (x,y)] ≥2/5}. Then σ(S^*) ≥5/6. Let S_1' = {(x,y): [T_1 | (X,Y) = (x,y)] < 2/5}, and similarly S_2' = {(x,y): [T_2 | (X, Y) = (x,y)] < 2/5}. Then S^* = (^d ×{± 1})^n ∖(S_1' ∪ S_2'). Thus it suffices to upper bound the mass of S_1' and S_2'. To do so, let U be the set defined in Lemma <ref>. Then we have 89/180 ≤[T_1, P = 1] = ∫_(^d ×{± 1})^n[T_1, P =1|(X, Y) = (x, y)]dσ ≤∫_S_1'[T_1, P =1|(X, Y) = (x, y)]dσ + ∫_U ∖ S_1'[T_1, P =1|(X, Y) = (x, y)]dσ + ∫_Δ^n ∖ (S_1' ∪ U)[T_1, P =1|(X, Y) = (x, y)]dσ < 2/5σ(S_1') + (σ(U) - σ(U ∩ S_1')) + 1/2(σ(Δ^n ∖ U) - σ((Δ^n ∖ U) ∩ S_1')) ≤(2/5 - 1/2)σ(S_1') + 1/2σ(Δ^n ∖ U) + σ(U) ≤1/2179/180 + 1/180 - σ(S_1')/10 Here we are simply leveraging the fact that [P=1|X, Y = x,y] is precisely 1/2 when x, y are not in U, and consequently that the probability of T_1 and P = 1 is at most 2/5, 1, and 1/2 when (x,y) is in the sets S_1', U ∖ S_1' and (Δ^n ∖ U) ∖ S_1' respectively. Finally, simplifying this inequality gives us σ(S_1') ≤1/12. A completely symmetric argument will similarly give us that σ(S_2') ≤1/12. Combining these with a union bound, it follows that σ(S^*) ≥5/6, as desired. §.§ Technical Lemmas For all 0 < γ, ϵ_1, ϵ_2 < 1/48, there exists m > 0, and real numbers 0 ≤ p_1, p_2, …, p_2m, q_1, …, q_2m≤ 1 such that the following four conditions hold: * For all 0 ≤ t ≤ 2m-1, ∑_i=1^2m p_i^t = ∑_i=1^2m q_i^t. * p_1 ≤ p_2 ≤…≤ p_m < γ(1-2ϵ_1) < γ(1+2ϵ_1) < p_m+1≤… p_2m. * q_1 ≤ q_2 ≤…≤ q_m-1 < q_m = q_m+1 = γ(1+2ϵ_1) < q_m+2≤… q_2m. * 1/4ϵ_2≥ m ≥1/8max(2ϵ_1, ϵ_2)+1. Let l denote the largest integer that is strictly smaller than 1/8max(2ϵ_1, ϵ_2), and let ϵ = 1/8l. It follows that ϵ≥max(2ϵ_1, ϵ_2). Let P_l and Q_l be as defined in Lemma <ref>. Let m = 2l. Then it follows from the definitions of m, l that m = 2l ≤1/4max(2ϵ_1, max(ϵ_2)≤1/4ϵ_2, which proves the first part of property 4 in Lemma <ref>. For the second part, by the definition of l, we have l ≥1/8max(2ϵ_1, ϵ_2) - 1. Since ϵ_1, ϵ_2 ≤1/48, it follows that l ≥ 2 which implies m = 2l ≥ l+2 ≥1/8max(2ϵ_1, ϵ_2) + 1. Next, let p_1, …, p_4l denote the (sorted in increasing order) roots of the polynomial P_l'(x) = P_l( x - γ(1+2ϵ_1)/2γϵ). Since the roots of P_l are explicitly given in Lemma <ref>, it follows that the middle two roots of P_l'(x) (which are the values of p_m and p_m+1) satisfy p_m = γ(1+2ϵ_1-2ϵ), p_m+1 = γ(1+2ϵ_1 + 2ϵ). Because ϵ' > ϵ, these values clearly satisfy the inequalities given by point 2 in the Lemma statement. Next, define q_1, …, q_4l as the (sorted in increasing order) roots of the polynomial, Q_l'(x) = Q_l(x - γ(1+2ϵ_1)/2γϵ). Again using Lemma <ref>, we see that q_m = q_m+1 = γ(1+2ϵ_1), which satisfies point 3. To see that p_i and q_i are indeed in the desired range, we simply note that by substitution, both p_1 and p_2 must be larger than γ(1+2ϵ_1) - 4l(2γϵ). 
However, by definition, 4l(2γϵ) = γ. Thus, this quantity is larger than 0 which implies that p_1 and q_1 are both positive. Because γ < 1/10, a similar argument implies that p_2m and q_2m are at most 1. Finally, point 1 follows from the fact that p_1, …, p_2m and q_1, …, q_2m are the complete sets of roots of two polynomials that have matching coefficients for the first 2m coefficients. it follows by basic properties of Newton sums that ∑ p_i^t = ∑ q_i^t for 0 ≤ i ≤ 2m-1, and this proves point 1. For any l > 0, let P_l(x) = ((x+1)(x+3) … (x+4l - 1))((x-1)(x-3)…(x-4l+1)). Let Q_l(x) = P_l(x) - P_l(0). Then Q_l has 2l-1 distinct real roots over the interval (-4l, -1), 2l-1 distinct real roots over the interval (1, 4l), and a double root at x = 0. By symmetry, P_l'(0) = Q_l'(0) = 0, and by definition Q_l(0) = 0. It follows that x = 0 is a double root. Next, fix 1 ≤ i ≤ l - 1. By definition, we have that P_l(4i-1) = P_l(4i + 1) = 0. We also have that P_l(4i) = ∏_j = 1^2(l+i)(2j-1) ∏_j=1^2(l-i)(2j - 1). Meanwhile, we also have that P_l(0) = (∏_j=1^2l (2j-1))^2. By directly comparing terms, it follows that P_l(i) is strictly larger than P_l(0). Thus, by the intermediate value theorem, Q_l must have at least one root in both (4i-1, 4i) and (4i, 4i+1). Using a similar argument, we can also show that Q_l has at least one root in (4l-1, 4l). Since P_l is an even function, it follows that Q_l is as well which means it symmetrically has roots in the intervals (-4i, -4i+1) for 1 ≤ i ≤ l and (-4i - 1, 4i) for 1 ≤ i ≤ l-1. Taken all together, we have constructed 2(l + l-1) = 4l - 2 distinct intervals that each contain a root. Since Q_l also has a double root at x = 0, it follows that this must account for all of its roots as deg(Q_l) = deg(P_l) = 4l. § PROOF OF THEOREM <REF> §.§ Algorithm description The main idea of our auditor, , is to essentially performs a brute-force auditing where we choose a set of points, X_1, and attempt to assess the accuracy of their local explanations by using a a wealth of labeled data, X_2, to validate it. Our algorithm uses the following steps (pseudocode given in Algorithm <ref>). * (lines 1 -3) We first partition X based on the tolerance parameters, ϵ_1, ϵ_2, δ. X_1 will be the set of points that we validate, and X_2 will be the set of points we use for validation. * (lines 8), For each point x in X_1, we check whether there are enough points from X_2 that fall within its local region, R_x, to accurate estimate its local loss. * (line 9-13) For each point satisfying the criteria in line 8, we evaluate its empirical local loss and then tally up how many points have a loss that is larger than γ. * (line 17) We output the proportion of points with loss larger than γ among all points whose loss we measured. At a high level, we expect this algorithm to succeed as long as we have enough data in each of the local regions induced from points in X_1. §.§ Notation We use the following: * Let δ, ϵ_1, ϵ_2, γ be the tolerance parameters defined in Definition <ref>. * Let λ = Λ(E, f) denote the locality of E,f w.r.t. data distribution μ. * Let X_1 be the set of points that are specifically being audited. * Let X_2 be the set of points being used to audit. * Let |X_1| = m. By definition, m > log1/δ/ϵ_2^2. * We set |X_2| = n' = n - m. By definition, n' > stuff. * For any x ∈^d, we let E(x) = (R_x, g_x) be the local explanation outputted for x by explainer E. 
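To make the procedure concrete, here is a minimal sketch of the auditor in the notation above (our own rendering of Algorithm <ref>; the callable interfaces for E and f, and the handling of the case where no point is audited, are our assumptions rather than part of the pseudocode).

```python
import numpy as np

def audit(X, f, E, gamma, eps1, eps2, delta):
    """Sketch of the splitting auditor.

    X : (n, d) array of unlabeled samples drawn from mu
    f : callable, f(points) -> labels in {-1, +1}
    E : callable, E(x) -> (in_region, g_x), where in_region(points) -> boolean mask
        for membership in R_x and g_x(points) -> labels; this mirrors E(x) = (R_x, g_x)
    Returns the fraction of audited points whose empirical local loss exceeds gamma.
    """
    m = int(np.ceil(61 / eps2**2 * np.log(12 / delta)))            # size of X_1
    X1, X2 = X[:m], X[m:]
    y2 = f(X2)
    # minimum number of validation points required inside a region before auditing it
    k = int(np.ceil(np.log(176 / (eps2 * delta)) / (2 * (gamma * eps1) ** 2)))

    flagged, audited = 0, 0
    for x in X1:
        in_region, g_x = E(x)
        mask = in_region(X2)
        if mask.sum() < k:        # not enough data to judge this explanation; skip it
            continue
        audited += 1
        local_loss = np.mean(g_x(X2[mask]) != y2[mask])
        if local_loss > gamma:
            flagged += 1
    return flagged / max(audited, 1)   # returns 0 if no region could be audited
```

The returned quantity is exactly the ratio r'/(r' + b') analyzed in the remainder of this appendix.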
We also define the following quantities related to estimating how frequently the local loss outputted by the explainer E is above the desired threshold, γ. * Let r^* = _x ∼μ[L(E, f, x) ≥γ(1 + ϵ_1)]. * Let g^* = _x ∼μ[γ(1 - ϵ_1) ≤ L(E, f, x) ≤γ(1 + ϵ_1)]. * Let b^* = _x ∼μ[L(E, f, x) ≤γ(1 - ϵ_1)]. Here, r^* denotes the probability that a point has a large local error, b^*, the probability a point has a low local error, and g^*, the probability of an "in-between" case that is nearby the desired threshold, γ. By the definition of sample complexity (Definition <ref>), the goal of Algorithm <ref> is to output an estimate that is inside the interval, [r^* - ϵ_2, r^* + g^* + ϵ_2]. Next, we define r, g, b as versions of these quantities that are based on the sample, X_1. * Let r = _x ∼ X_1[L(E, f, x) ≥γ (1+ ϵ_1)]. * Let g = _x ∼ X_1[γ(1 - ϵ_1) ≤ L(E, f, x) ≤γ(1 + ϵ_1)]. * Let b = _x ∼ X_1[L(E, f, x) ≤γ(1 - ϵ_1)]. Observe that while x is drawn at uniform from X_1 in these quantities, we still use the true loss with respect toe μ, L(E, f, x), to determine whether it falls into r, g or b. Because of this, it becomes necessary to define two more fully empirical quantities that serve as estimates of r and b (we ignore g as it will merely contribute to a "margin" in our estimation terms). * Let r' = _x ∼ X_1 [(_x' ∼ X_2[g_x(x') ≠ f(x') | x' ∈ R_x] > γ) and |X_2 ∩ R_x| ≥log176/ϵ_2δ/2 (γϵ_1)^2 ]. * let b' =_x ∼ X_1 [(_x' ∼ X_2[g_x(x') ≠ f(x') | x' ∈ R_x] ≤γ) and |X_2 ∩ R_x| ≥log176/ϵ_2δ/2 (γϵ_1)^2 ]. The final estimate outputted by Algorithm <ref> is precisely r'/r' + b'. Thus, our proof strategy will be to show that for sufficiently large samples, r, g, b are relatively accurate estimates of r^*, g^*, b^*, and in turn r' and b' are relatively accurate estimates of r and b. Together, these will imply that our estimate is within the desired interval, [r^*, b^*]. §.§ The main proof (Theorem <ref>) Let n ≥61/ϵ_2^2log12/δ + log176/ϵ_2δ/2λγ^2ϵ_1^2log44log176/ϵ_2δ/ϵ_2δγ^2ϵ_1^2. By ignoring log factors, we see that n = Õ(1/ϵ_2^2 + 1/λγ^2ϵ_1^2), thus satisfying the desired requirement in Theorem <ref>. Let X_1 and X_2 be as in Algorithm <ref>, and let m, n' denote |X_1| and |X_2 respectively. Directly from Algorithm <ref>, it follows that , m = 61/ϵ_2^2log12/δ, n' ≥log176/ϵ_2δ/2λγ^2ϵ_1^2log44log176/ϵ_2δ/ϵ_2δγ^2ϵ_1^2. By letting ϵ = ϵ_2/7, and k = log16/ϵδ/2(γϵ_1)^2, we have that m ≥1/2ϵ^2log12/δ, and that n' ≥log176/ϵ_2δ/2λγ^2ϵ_1^2log44log16/ϵδ/ϵ_2δγ^2ϵ_1^2 = log16/ϵδ/2λγ^2ϵ_1^2log4log16/ϵδ/ϵδγ^2ϵ_1^2 = klog8k/δϵ/λ. Our bounds on m, k and n' allow us to apply Lemmas <ref> and <ref> along with a union bound to get that the following equations hold simultaneously with probability at least 1-δ over X ∼μ^n: |r - r^*|. |g - g^*|, |b - b^*| ≤ϵ, r(1-2ϵ) ≤ r' ≤ r + g + bϵ b(1-2ϵ) ≤ b' ≤ rϵ + g + b. Recall that our goal is to show that r'/r' + b'∈ [r^* - ϵ_2, r^*+g^*+ ϵ_2] holds with probability at least 1-δ. Thus, it suffices to show that this a simple algebraic consequence of equations <ref>, <ref>, and <ref>. To this end, we have r'/r' + b' (a)≥r(1-2ϵ)/r(1-2ϵ)+ rϵ + g + b ≥r(1-2ϵ)/r+ b+ g ≥r/r+b+g - 2ϵ (b)≥r^* - ϵ/r^* + b^* + g^* + 3ϵ - 2ϵ = r^*/1 + 3ϵ - ϵ/1+3ϵ - 2ϵ ≥ r^*(1 - 3ϵ) - ϵ - 2ϵ ≥ r^* - 4ϵ - 2ϵ (c)≥ r^* - ϵ_2. Here step (a) follows from Equations <ref> and <ref>, (b) from Equation <ref>, and (c) from the fact that ϵ = ϵ_2/11. 
For the other side of the inequality, we have r'/r' + b' (a)≤r + g + bϵ/r + g + bϵ + b(1-2ϵ) ≤r + g + bϵ/(r + g + b)(1-ϵ) ≤r + g/(r+g+b)(1-ϵ) + ϵ/1-ϵ ≤r + g/r+g+b + 2ϵ + ϵ(1+2ϵ) (b)≤r^* + g^* + 2ϵ/r^* + g^* + b^* - 3ϵ + 3ϵ + 2ϵ^2 = r^* + g^* + 2ϵ/1 - 3ϵ + 3ϵ + 2ϵ^2 (c)≤ (r^* + g^* )(1 + 4ϵ) + 2ϵ(1+4ϵ) + 3ϵ + 2ϵ^2 ≤ r^* + g^* + 6ϵ + 8ϵ^2 + 3ϵ + 2ϵ^2 (d)≤ r^* + g^* + 7ϵ + 4ϵ (e)≤ r^* + g^* + ϵ_2 Here step (a) follows from Equations <ref> and <ref>, (b) from Equation <ref>, (c) from the fact that 1/1-3ϵ≤ 1 + 4ϵ, (d) from ϵ = ϵ_2/11≤1/8, 1/2, and (e) from the ϵ = ϵ_2/11. §.§ Concentration lemmas In this section, we show several lemmas that allow us to bound the behavior of the random variables r, g, b, r' and b' (defined in section <ref>). We also use m and n' as they are defined in Section <ref> to be the sizes of |X_1| and |X_2| respectively. Finally, we also let ϵ = ϵ_2/11. We begin by bounding the differences between r, g, b and r^*, g^*, b^*. Suppose that m ≥1/2ϵ^2log12/δ. Then with probability at least 1- δ/2 over X_1 ∼μ^m, the |r - r^*|, |g - g^*|, |b - b^*| ≤ϵ. Observe that r is the average of m i.i.d binary variables each of which have expected value r^*. It follows by Hoeffding's inequality that [|r - r^*| > ϵ] ≤ 2 exp(-2 (ϵ m)^2/m) ≤ 2 exp( -2ϵ^2 1/2ϵ^2log12/δ) = δ/6. By an identical argument, we see that the same holds for [|g - g^*| > ϵ] and [|b- b^*| > ϵ]. Applying a union bound over all three gives us the desired result. Next, we show that if n' is sufficiently large, then it is highly likely that for any given point x, we observe a large number of points from X_2 within the explanation region, R_x. Let x ∈ supp(μ), and let k > 0 be an integer. Suppose that n' ≥k log8k/δϵ/λ. Then with probability at least 1 - δϵ/8 over X_2 ∼μ^m, |R_x ∩ X_2| ≥ k. Partition X_2 into k sets, X_2^1, X_2^2, …, X_2^k each of which contain at least log8k/δϵ/λ i.i.d points from μ. Because each point is drawn independently, we have that for any 1 ≤ i ≤ k, [X_2^i ∩ R_x = ∅] = (1 - _x' ∼μ[x' ∈ R_x] )^log8k/δϵ/λ ≤(1 - λ)^log8k/δϵ/λ ≤exp( -log8k/δϵ) = δϵ/8k. Here we are using the definition of λ as a lower bound on the probability mass of R_x. Next, we show that if R_x has a sufficient number of points, then it is quite likely for the empirical estimate of the local loss at x to be accurate. Let x ∈ supp(μ), and let k ≥log16/ϵδ/2 (γϵ_1)^2. Then conditioning on there being at least k elements from X_2 in R_x, the empirical local loss at x differs from the true local loss by at most γϵ_1 with probability at least 1 - δϵ/8. That is, _X_2 ∼μ^n'[|L(E, f, x) - 1/|X_2 ∩ R_x|∑_x' ∈ X_2 ∩ R_x(g_x(x') ≠ f(x'))| > γϵ_1 | |X_2 ∩ R_x| ≥ k ] ≤δϵ/8. The key idea of this lemma is that the distribution of k points drawn from μ conditioned on being in R_x is precisely the marginal distribution over which L(E, f, x) is defined. In particular, this means that the points in X_2 ∩ R_x can be construed as i.i.d drawn from the marginal distribution of μ over R_x. Given this observation, the rest of the proof is a straightforward application of Hoeffding's inequality. Letting L̂(E, f, x) = 1/|X_2 ∩ R_x|∑_x' ∈ X_2 ∩ R_x(g_x(x') ≠ f(x')) and K = |X_2 ∩ R_x|, we have _X_2 ∼μ^n'[ |L(E, f, x) - L̂(E, f, x)| > γϵ_1 | K ≥ k ] ≤ 2 exp( - 2(Kγϵ_1)^2/K) ≤ 2 exp(-log16/δϵ) = δϵ/8, as desired. It follows by a union bound that the probability that least one of the sets in {X_2^i ∩ R_x: 1 ≤ i ≤ k} is empty is at most δϵ/8. Thus with probability at least 1 - δϵ/8, all the sets are non-empty which implies that |R_x ∩ X_2| ≥ k, completing the proof. 
Finally, we use the previous two lemmas to show that r' and b' closely approximate r and b. Let k ≥log16/ϵδ/2 (γϵ_1)^2, and suppose that n' ≥k log8k/δϵ/λ. Then with probability at least 1 - δ/2 over X_2 ∼μ^n', the following equations holds: r(1-2ϵ) ≤ r' ≤ r + g + bϵ, b(1-2ϵ) ≤ b' ≤ rϵ + g + b. We begin by defining subsets of X_1 that correspond to r, g, b, r' and b'. We have * Let R = {x ∈ X_1: L(E, f, x) ≥γ (1+ ϵ_1)}. * Let G = {x ∈ X_1: γ(1 - ϵ_1) ≤ L(E, f, x) ≤γ(1 + ϵ_1)}. * Let B = {x ∈ X_1: L(E, f, x) ≤γ(1 - ϵ_1)]}. * Let R' = {x ∈ X_1: (_x' ∼ X_2[g_x(x') ≠ f(x') | x' ∈ R_x] > γ ) and |X_2 ∩ R_x| ≥log176/ϵ_2δ/2 (γϵ_1)^2}. * Let B' = {x ∈ X_1: (_x' ∼ X_2[g_x(x') ≠ f(x') | x' ∈ R_x] ≤γ ) and |X_2 ∩ R_x| ≥log176/ϵ_2δ/2 (γϵ_1)^2}. Observe that r, g, b, r', and b' are the probabilities that x ∼ X_1 is in the sets R, G, B, R', and B' respectively. Our strategy will be to use the previous lemmas to bound the sizes of the intersections, R' ∩ R, R' ∩ B, B' ∩ R, B' ∩ B'. To this end, let x ∈ R be an arbitrary point. By Lemma <ref>, with probability at least 1 - δϵ/8 over X_2 ∼μ^n', x ∈ R' ∪ B'. Furthermore, by Lemma <ref> (along with the definition of R), with probability at most δϵ/8, x ∈ B'. Applying linearity of expectation along with Markov's inequality, we get the following two bounds: _X_2[|R ∩ (X_1 ∖ (R' ∩ B')| > |R|ϵ] ≤𝔼_X_2[ |R ∩ (X_1 ∖ (R' ∩ B')|]/|R|ϵ ≤|R|δϵ/8/|R|ϵ = δ/8, _X_2[|R ∩ B' | > |R|ϵ] ≤𝔼_X_2[ |R ∩ B'|]/|R|ϵ ≤|R|δϵ/8/|R|ϵ = δ/8. Applying an analogous line of reasoning stating with x ∈ B, we also have _X_2[|B ∩ (X_1 ∖ (R' ∩ B')| > |B|ϵ] ≤δ/8, _X_2[|B ∩ R' | > |B|ϵ] ≤δ/8. Applying a union bound, none of these events occur with probability at least 1 - δ/2 over X_2 ∼μ^n'. Thus, it suffices to show that they algebraically imply the desired inequalities. To this end, suppose none of them hold. Then we have, r' = |R'|/|X_1| = |R' ∩ B| + |R'∩ G| + |R' ∩ R|/|X_1| ≤|B|ϵ + |G| + |R|/|X_1| = bϵ + g + r, r' = |R'|/|X_1| = |R' ∩ B| + |R'∩ G| + |R' ∩ R|/|X_1| ≥0 + 0 + |R| - |R ∩ B'| - |R ∖ (B' ∪ R')|/|X_1| ≥|R| - |R|ϵ - |R|ϵ/|X_1| = r(1-2ϵ). The upper and lower bounds on b' are analogous. § PROOF OF THEOREM <REF> §.§ Definitions and Notation Let α = 1/3670016d^4 and β = 1/3584d^2. Let S_1, S_2, S_3 be three (d-1)-spheres centered at the origin with radii (1-α), 1, and 1+β respectively for 0 < α, β. Let μ denote the data distribution so that x ∼μ is selected by first selecting i ∈{1, 2, 3} at uniform, and then selecting x from S_i at uniform. Let f denote the classifier ^d →{± 1} such that f(x) = +1 ||x||^2 ≤ 1 - α/2 -1 1- α/2 < ||x||^2 ≤ 1 + β/2 +1 ||x^2|| > 1 + β/2. Let x^* be an arbitrary point chosen on S_3, and let g be any linear classifier, and B(a, r) be any L_2-ball that contains x^*. There exists x ∈ S_2 and 0 ≤θ_1, θ_2, θ_3 ≤π such that S_1 ∩ B(a, r) = C(S_1, x(1-α), θ_1), S_2 ∩ B(a, r) = C(S_2, x, θ_2), S_3 ∩ B(a, r) = C(S_3, x(1+β), θ_3), where C(S, x, θ) denotes the spherical cap of angle θ centered at x on (d-1)-sphere S (see Definition <ref>). §.§ Main Proof We begin by showing that the structure of the data distribution μ provides significant difficulty for linear classifiers. At a high level, the curvature of the spheres, S_1, S_2, S_3, make separating them linearly only possible for small portions of the sphere. We formalize this with the following lemma. Let θ≥π/4. Let x be an arbitrary point on S_2, and let T_1(x, θ), T_3(x,θ) denote the sets T_1(x,θ) = C(S_2, x, θ) ∪ C(S_1, x(1-α), θ), T_3(x,θ)= C(S_2, x, θ) ∪ C(S_3, x(1+β), θ). 
Let g: ^d →{± 1} denote any linear classifier. Then g exhibits a loss of at least 1/3 over the conditional distribution of μ restricted to either T_1 or T_3. That is, _x' ∼μ[g(x') ≠ f(x') | x' ∈ T_1(x,θ)], _x' ∼μ[g(x') ≠ f(x') | x' ∈ T_3(x,θ)] ≥1/3. Next, we show that if the local explanation region B(a, r),contains a sufficiently large probability mass, then it also must include a region that takes the form given by T_1 or T_3 from Lemma <ref>. Suppose that μ(B(a, r)) ≥ 3^1-d. Let T_1 and T_3 be as defined in Lemma <ref>. Then there exist x ∈ S_2 and θ≥π/4 such that at least one of the following hold: * T_1(x, θ) ⊆ B(a, r), and μ(T_1(x, θ))/μ(B(a, r))≥1/2. * T_3(x, θ) ⊆ B(a, r), and μ(T_3(x, θ))/μ(B(a, r))≥1/2. We are now prepared to prove Theorem <ref>. (Theorem <ref>) Suppose B(a, r) ≥ 3^1-d. Then by Lemma <ref>, there exists θ≥π/4 such that either T_1(x,θ) or T_3(x, θ) is a subset of B(a, r) and satisfies the conditions outlined above. Suppose that T_1(x, θ) ⊆ B(a, r) (the other case is analogous). Let g be any linear classifier. Then it follows from Lemmas <ref> and <ref> that the loss g incurs over the conditional distribution of μ over B(a, r) can be bounded as follows: _z ∼ B(a, r)[g(z) ≠ f(z)] ≥[z ∈ T_1(x, θ)][g(z) ≠ f(z) | z ∈ T_1(x, θ)] ≥1/21/3 = 1/6, which concludes the proof. §.§ Proof of Lemma <ref> We will show that the claim holds for T_3(x, θ) as the proof for T_1(x, θ) is nearly identical (as α < β). Let w ∈^d be a unit vector and b ∈ be a scalar such that g(z) = 1 ⟨ w, z ⟩≥ b -1 ⟨ w, z ⟩ < b . Our main strategy will be to find a large set of points within T_3(x, θ) such that g(z) = g(z(1+β)) for all z within this set. This will force g to misclassify either z or z(1+β) which will lead to our desired error bound. To this end, define T^* = {z ∈ C(S_2, x, θ): g(z) = -1, g(z(1+β)) = +1, |⟨ x, z ⟩| ≤cosπ/8}. μ(T^*)/μ(C(S_2, x, θ))≤1/10. Let z be selected at uniform from C(S_2, x, θ) ∖(C(S_2, x, π/8) ∪ C(S_2, -x, π/8)). Note that z definitionally satisfies that |⟨ x, z ⟩| ≤cosπ/8. It suffices to upper bound the probability that g(z) ≠ g(z(1+β)). Let C_ϕ = {z: ⟨ z, x ⟩ = cosϕ}. Our main idea is to condition on z ∈ C_ϕ, and then integrate over all choices of ϕ. That is, if we let ϕ denote the random variable representing the angle between x and z, then _z[g(z) = -1, g(z(1+β)) = +1 ] = 𝔼_ϕ_z|ϕ[g(z) = -1, g(z(1+β)) = +1 ]. We will now bound the latter quantity. Fix any ϕ, and observe that the conditional distribution, z|ϕ can be written as z = xcosϕ + usinϕ where u is a random vector in R^d-1 that is uniformly distributed over the unit sphere, S^d-2⊆ R^d-1. Rewriting the condition that g(z) ≠ g(z(1+β)) in terms of u, observe that g(z) = -1, g(z(1+β)) = +1 ⟨ w, z ⟩≤ b ≤⟨ w, z(1+β) ⟩ b/1+β≤⟨ w, z ⟩≤ b b/1+β - ⟨ x cosϕ, w ⟩≤⟨ w, usinϕ⟩≤ b - ⟨ x cosϕ, w ⟩ ⟨ w, u ⟩∈[s, s + β/sinϕ], where s is a constant that depends solely on b, w, x, and ϕ. Note that we are using the fact that |b| ≤ (1+β) as otherwise g would trivially output the same label over all z ∼μ). By applying Lemma <ref> along with the fact that (by definition of ϕ) β/sinϕ≤β/sinπ/8≤1/1370d^2, we have that _u[u ∈[s, s + β/sinϕ]] ≤1/10, which implies the desired result. μ(C(S_2, x, π/8) ∪ C(S_2, -x, π/8))/μ(C(S_2, x, θ))≤7/30. By symmetry, μ(C(S_2, x, π/8)) = μ(C(S_2, -x, π/8)) so it suffices to bound one of them. 
Since θ≥π/4 by assumption, applying Lemma <ref>, we have μ(C(S_2, x, π/8) ∪ C(S_2, -x, π/8))/μ(C(S_2, x, θ)) ≤2μ(C(S_2, x, π/8))/μ(C(S_2, x, θ)) ≤2μ(C(S_2, x, π/8))/μ(C(S_2, x, π/4)) ≤ 21/2(sinπ/8/sinπ/4)^d-2 ≤7/30, as d ≥ 5 in the assumption of Theorem <ref>. We are now prepared to prove the main lemma. (Lemma <ref>) Let A^* ⊆ C(S_2, x, θ) be defined as the set of all points for which g classifies both the point and its image in (1+β)S_3 correctly. That is, A^* = {z ∈ C(S_2, x, θ): g(z) = -1, g((1+β)z) = +1}. By the previous two lemmas, we have μ(A^*)/μ(C(S_2, x, θ)) ≤μ(T^* ∪ C(S_2, x, π/8) ∪ C(S_2, -x, π/8))/μ(C(S_2, x, θ)) ≤1/10 + 7/30 = 1/3 Each z ∈ A^* is a point for which both z and (1+β)z are correctly classified, and each z ∈ C(S_2, x, θ) ∖ A^* corresponds to either z being misclassified, or (1+β)z being misclassified. It follows that the overall accuracy of g over T_3(x, θ) is at most _z ∼ T_3(x, θ)[g(z) = f(z)] ≤_z ∼ C(S_2, x, θ)[z ∈ A^*] + 1/2_z ∼ C(S_2, x, θ)[z ∉ A^*] ≤1/2(1 + _z ∼ C(S_2, x, θ)[z ∈ A^*]) ≤2/3 Thus g must incur loss at least 1/3 over T_3(x, θ), as desired. §.§ Proof of Lemma <ref> Throughout this section, we assume that μ(B(a, r)) ≥ 3^1-d. max(θ_1, θ_2, θ_3) ≥π/3. Assume towards a contradiction that this does not hold. Let x be as in Lemma <ref>. Then by the Definition of μ (Definition <ref>) and Lemma <ref>, it follows that μ(B(a, r)) = μ(C(S_1, x(1-α), θ_1)) + μ(C(S_2, x, θ_2)) + μ(C(S_3, x(1+β), θ_3)) = 1/3(Ψ(θ_1) + Ψ(θ_2) + Ψ(θ_3)) < Ψ(π/3), where Ψ is as defined in Section <ref>. However, Lemma <ref> implies that Ψ(pi/3) ≤ 3^1-dΨ(π) = 3^1-d. This contradicts our assumption on μ(B(a, r)) and implies the desired result. r ≥ 1-α. Lemma <ref> implies that B(a, r) must intersect some sphere among S_1, S_2, S_3 in a spherical cap of an angle at least π/3. Basic geometry implies that r ≥min(rad(S_1), rad(S_2), rad(S_3)) where rad(S_i) denotes the radius of S_i. The desired result follows from the fact that 1-α - rad(S_1) ≤ rad(S_2), rad(S_3). |θ_2 - max(θ_1, θ_3)| ≤1/4d. We first compute θ_1, θ_2, θ_3 in terms of a, r, α, and β. We begin with θ_2, and note that the expressions for θ_1 and θ_3 can be similarly derived. To this end, we have S_2 ∩ B(a, r) = {x: ||x|| = 1, ||x-a||≤ r} = {x: ||x|| = 1, ⟨ x, x ⟩ - 2⟨ x, a ⟩ + ⟨ a, a ⟩≤ r^2} ={x: ||x|| = 1, ⟨x/||x||, a/||a||⟩≥1 + a^2 - r^2/2a}, where we use a to denote ||a|| in a slight abuse of notation. It follows from Lemma <ref> that cosθ_2 = 1 + a^2 - r^2/2a. We can similarly show that cosθ_1 = (1-α)^2 + a^2 - r^2/2(1-α)a, cosθ_3 = (1 + β)^2 + a^2 - r^2/2(1+β)a. Let h: → be the function defined as h(s) = s^2 + a^2 - r^2/2sa. Thus, cosθ_1 = h(1-α), cosθ_2 = h(1), cosθ_3 = h(1+β). Note that in cases where h is outside of the interval [-1, 1] (meaning θ_i would not be defined), we simply set θ_i equal to π and 0 respectively, as these quantities still accurately describe the intersection between B(a, r) and the corresponding sphere, S_i. Case 1: 0 ≤ a ≤β/2 By definition, B(a, r) contains x^* and therefore intersects S_3. It follows from the triangle inequality that r ≥ 1 + β/2. However, this implies that B(a, r) must contain the entirety of S_2 and S_1, which implies that θ_1= θ_2 = max(θ_1, θ_3) = π, thus implying the lemma statement. Case 2: β/2 < a ≤ 1-α If r > 1 + 2β, then B(a, r) will contain S_1, S_2 and S_3, which implies θ_1 = θ_2 = θ_3 = π (implying the lemma statement). Thus, assume r ≤ 1+2β. Differentiating h w.r.t. s gives h'(s) = 1/2a(1 + r^2-a^2/s^2). 
By Lemma <ref>, r^2 ≥ a^2, which implies that h'(s) is nonnegative for s ∈ [1-α, 1+β]. Furthermore, we have that over the interval, [1-α, 1+β], h'(s) = 1/2a(1 + r^2-a^2/s^2) ≤1/β(1 + (1+2β)^2 - (β/2)^2/(1-α)^2) = 1/β(1 + 1 + 4β + 3.75β^2/(1-α)^2) ≤1/β(1 + 1 + 4(0.25) + 3.75(0.25)^2/0.875^2) ≤4/β. This is obtained by substituting appropriate upper and lower bounds for r, a, s, α, and β. Because h'(s) is nonnegative over the interval, we must have that h(1-α) ≤ h(1) ≤ h(1+β) which implies θ_1 ≥θ_2 ≥θ_3 (as cos is a decreasing function). It follows from our upper bound on h'(s) that |cosθ_2 - cos(max(θ_1, θ_3))| = cos(θ_2) - cos(θ_1) = h(1) - h(1-α) = ∫_1-α^1 h'(s)ds ≤∫_1-α^1 4/βds = 4α/β. Applying Lemma <ref> implies that |θ_2 - max(θ_1, θ_3)| ≤ 8√(α/β) = 1/4d, which implies the lemma statement. Case 3: a > 1-α First suppose that |a- r| > 3. If r > a+3, then the triangle inequality implies that S_1, S_2, S_3 ⊆ B(a, r) which implies the desired result. On the other hand, if r < a -3, then we must have a > 3, and that B(a, r) is disjoint from S_1, S_2, S_3 which again implies the desired result. Thus, we assume that |a- r| ≤ 3. We now use a similar strategy to the previous case, and bound the derivative, h'(s). By substituting that |a-r| ≤ 3, we have, for s ∈ [1-α, 1+β], |h'(s)| = |1/2a(1 + r^2-a^2/s^2)| = |1/2a(1 + (2a + 3)(3)/s^2)| = |1/2a(1 + (2a + 3)(3)/(1-α)^2)| = |1/2a(1 + (2a + 3)(3)/(0.875)^2)| ≤|1/2a(1 + 4(2a+3))| ≤|1/2a(13 + 8a)| ≤ 4 + 10 = 14. Here we are exploiting the fact that 1 - α≥√(3)/2, 0.65. It follows by the same argument given in Case 2 that |cosθ_2 - cos(max(θ_1, θ_3))| ≤ 14β. Applying Lemma <ref> implies |θ_2 - max(θ_1, θ_3)| ≤ 4√(14β) = 1/4d, as desired. Now we are ready to prove the lemma. (Lemma <ref>) Let x be as in Lemma <ref>, and let θ^* = max(θ_1, θ_2, θ_3). Then by applying Lemma <ref> to the Definition of μ (Definition <ref>) gives us μ(B(a, r)) = μ(C(S_1, x(1-α), θ_1)) + μ(C(S_2, x, θ_2)) + μ(C(S_3, x(1+β), θ_3)) = 1/3Ψ(θ_1) + 1/3Ψ(θ_2) + 1/3Ψ(θ_3) ≤Ψ(θ^*). Here Ψ denotes the function defined in Section <ref>. Next, let θ = min(max(θ_1, θ_3), θ_2). Let T_1(x, θ) and T_3(x, θ) be as defined in Lemma <ref>. Observe that if θ_1 ≥θ_3, then T_1(x, θ) ⊆ C(S_1, x(1-α), θ_1) ∪ C(S_2, x, θ_2) ⊆ B(a, r), and otherwise, T_3(x, θ) ⊆ C(S_3, x(1+β), θ_3) ∪ C(S_2, x, θ_2) ⊆ B(a, r). Thus, at least one of these sets is part of B(a, r). We now show that these sets have the desired mass. By the definition of θ^*, we have μ(T_1(x, θ))/μ(B(a, r)), μ(T_3(x, θ))/μ(B(a, r))≥2μ(C(S_2, x, θ))/3μ(C(S_2, x, θ^*)). Next, Lemma <ref> implies that θ^* ≥π/3, and Lemma <ref> implies that θ^* - θ≤1/4d. It follows that θ≥θ^* - 1/4d≥θ^*(1 - 1/4d). Substituting this, we find that 2μ(C(S_2, x, θ))/3μ(C(S_2, x, θ^*)) = 2/3Ψ(θ)/Ψ(θ^*) ≥2/3Ψ(θ^*(1 - 1/4d))/Ψ(θ^*) ≥2/3(1 - 1/4d)^d-1 ≥2/3(1/e)^1/4 ≥1/2, where the last steps follow from Lemmas <ref> and <ref>. This completes the proof. §.§ Technical Lemmas Suppose ϕ_1, ϕ_2 ∈ [0, π] such that |cos(ϕ_1) - cos(ϕ_2)| ≤ c. Then |ϕ_1 - ϕ_2| ≤ 4√(c). WLOG, suppose ϕ_1 ≤ϕ_2. Let x = ϕ_2-ϕ_1. Using the sum to product rules, it follows that α ≥ |cosϕ_1 - cosϕ_2| = |-2sinϕ_1 - ϕ_2/2sinϕ_1 + ϕ_2/2| ≥|2sinx/2sinϕ_1+ ϕ_2/2|. However, observe that π - ϕ_1 + ϕ_2/2≥ϕ_2 - ϕ_1 + ϕ_2/2 = x/2 and that ϕ_1+ϕ_2/2≥0 + 0 +x/2 = x/2. It follows that ϕ_1+ϕ_2/2∈ [x/2, π - x/2], which implies that sinϕ_1 + ϕ_2/2≥sinx/2. Substituting this, we have c ≥|2sinx/2sinϕ_1+ ϕ_2/2| ≥ 2sin^2x/2 We now do casework based on x. First suppose that x ≥π/2. Then c ≥ 2sin^2π/4 = 1. 
By definition, x ≤π, so it follows that x ≤ 4√(α), implying the desired result. Otherwise, if x ≤π/2, then sinx/2≥x/4, as the function t ↦sin(t) - t/2 is nonnegative on the interval [0, π/2]. Substituting this, we see that c ≥x^2/8. Thus x ≤√(8c) < 4√(c), as desired. For 0 ≤ c ≤ 1 and 0 ≤θ≤π, sin(cθ) ≥ csin(θ). Let f(θ) = sin(cθ) - csin(θ). Observe that f(0) = 0. Furthermore, for θ∈ [0, π], we have f'(θ) = ccos(cθ) - ccos(θ) = c(cos(cθ) - cos(θ)). Since cos is a decreasing function on the interval [0, π], it follows that cos(cθ) ≥cos(θ), which implies f'(θ) ≥ 0. Thus f is non-decreasing on the interval, and the desired inequality holds. For all x > 1, (1 - 1/x)^x-1≥1/e. Let f(x) = (1 - 1/x)^x-1. It is well known that lim_x →∞ f(x) = 1/e and lim_x → 1^+f(x) = 1. Thus it suffices to show that f(x) is a non-increasing function. To do so, we will show that ln f(x) is non-increasing by taking its derivative. We have d(ln f(x))/dx = d/dx((x-1)lnx-1/x) = d/dx((x-1)ln (x-1) - (x-1)ln x ) = (ln(x-1) + x-1/x-1) - (ln(x) + x-1/x) = 1/x - (ln(x) - ln(x-1)) = 1/x - ∫_x-1^x 1/tdt ≤1/x - ∫_x-1^x 1/xdt = 1/x - 1/x = 0. Let z be a point chosen at uniform over S_2, and let w be a fixed unit vector. Then if t ≤1/1370d^2, then for any s ∈, _z[⟨ w, z ⟩∈ [s, s+t]] ≤1/10. Let θ denote the random variable that represents the angle between w and z. Applying Lemma <ref>, it follows that for some choice of s' ∈ that _z[⟨ w, z ⟩∈ [s, s+t] ≤_θ[θ∈ [s', s'+4√(t)]. We now bound this quantity by utilizing the quantity Ψ (defined in Section <ref>). We have, _θ[θ∈ [s', s'+4√(t)] = ∫_s'^s' + 4√(t)sin^(d-2)ϕ dϕ/∫_0^πsin^(d-2)ϕ dϕ ≤2∫_π/2 - 2√(t)^π/2sin^(d-2)ϕ dϕ/2∫_0^π/2sin^(d-2)ϕ dϕ = 1 - Ψ(π/2 - 2√(t))/Ψ(π/2). Here we have simply chosen the interval of length 4√(t) that maximizes the corresponding the integral. Next, we continue by applying Lemmas <ref> and <ref> to get _θ[θ∈ [s', s'+4√(t)] ≤ 1 - Ψ(π/2 - 2√(t))/Ψ(π/2) ≤ 1 - (1 - 4√(t)/π)^d-1 ≤ 1 - (1 - 1/29d)^d-1 = 1 - ((1 - 1/29d)^29(d-1))^1/29 ≤ 1 - ((1 - 1/29d)^29d - 1)^1/29 ≤ 1 - (1/e)^1/29 ≤1/10, as desired. §.§ Spherical Caps Let S be a (d-1) sphere centered at the origin, let 0 ≤θ≤π be an angle, and let x ∈ S be a point. We let C(S, x, θ) denote the spherical cap with angle θ centered at x, and it consists of all points, x' ∈ S^d-1, such that ⟨ x, x' ⟩/||x||||x'||≥cosϕ. Here we take the convention of associating C(S_i, x_i, 0) with both the empty set and with {x_i}. While these are distinct sets, they both have measure 0 under π. We also associate C(S_i, x_i, π) with the entirety of S_i. We let Ψ(θ) denote the ratio of the (d-1)-surface area of the region, C(S, x, θ), to the (d-1)-surface area of the entire sphere. Thus, Ψ(θ) denotes the fraction of the sphere covered by a spherical cap of angle θ. By standard integration over spherical coordinates, we have Ψ(θ) = ∫_0^θsin^(d-2)ϕ dϕ/∫_0^πsin^(d-2)ϕ dϕ. Next, we bound Ψ(θ) with the following inequality. Let 0 ≤θ≤π and let 0 ≤ c ≤ 1. Then Ψ(cθ)/Ψ(θ)≥ c^d-1. By applying Lemma <ref> to the definition of Ψ, we have the following manipulations. Ψ(c ϕ)/Ψ(ϕ) = ∫_0^cϕsin^d-2θ dθ/∫_0^ϕsin^d-2θ dθ = ∫_0^ϕsin^d-2(c u)(cdu)/∫_0^ϕsin^d-2θ dθ ≥∫_0^ϕ(csin ( u))^d-2(cdu)/∫_0^ϕsin^d-2θ dθ = c^d-1. We similarly have an upper bound on this ratio. Let 0 ≤θ≤π/2 and 0 ≤ c ≤ 1. Then Ψ(cθ)/Ψ(θ)≤ c(sin cϕ/sinϕ)^d-2. We similarly have, Ψ(c ϕ)/Ψ(ϕ) = ∫_0^cϕsin^d-2θ dθ/∫_0^ϕsin^d-2θ dθ = ∫_0^ϕsin^d-2(c u)(cdu)/∫_0^ϕsin^d-2θ dθ ≤∫_0^ϕ(sin(u)sin cϕ/sinϕ)^d-2(cdu)/∫_0^ϕsin^d-2θ dθ = c(sin cϕ/sinϕ)^d-2. 
Here we are using the fact that t ↦sin ct/sin t is a non-decreasing function for t ∈ [0, π]. § NEURIPS PAPER CHECKLIST * Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: Justification: we mention all main results in both the abstract and introduction. Furthermore everything within these sections is revisited in the body. Guidelines: * The answer NA means that the abstract and introduction do not include the claims made in the paper. * The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. * The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. * It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. * Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: Justification: all our assumptions are clearly stated, and we use the limitations of our methods as avenues for future work. We also stress that our framework is by no means comprehensive for evaluating local explanations. Guidelines: * The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. * The authors are encouraged to create a separate "Limitations" section in their paper. * The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. * The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. * The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. * The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. * If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. * While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. * Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? 
Answer: Justification: This is a theory paper, and doing so is its main focus. Our proofs are all included in the appendix. * Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: Justification: This is a theory paper. * Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: Justification: This is a theory paper. * Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: Justification: This is a theory paper. * Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: Justification: This is a theory paper. * Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: Justification: This is a theory paper. * Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics <https://neurips.cc/public/EthicsGuidelines>? Answer: Justification: This paper does not utilize any datasets or human subjects. It also does not contribute towards any sort of harm. * Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: Justification: We discuss the consequences of our results. Because they are theoretical, we do not believe our results can be used in a directly harmful manner. * Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: Justification: No models or data are released in this paper. * Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: Justification: No code or models are used. * New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: Justification: No new assets are released. * Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: Justification: No human subjects were involved. * Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: Justification: No human subjects were involved.
http://arxiv.org/abs/2407.13655v1
20240718163045
Can dissipation induce a transition between many-body localized and thermal states?
[ "Yutao Hu", "Chao Yang", "Yucheng Wang" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn", "cond-mat.stat-mech", "physics.optics", "quant-ph" ]
These authors contributed equally to this work. Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China International Quantum Academy, Shenzhen 518048, China Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China These authors contributed equally to this work. Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China International Quantum Academy, Shenzhen 518048, China Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China Corresponding author: wangyc3@sustech.edu.cn Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China International Quantum Academy, Shenzhen 518048, China Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China § ABSTRACT The many-body mobility edge (MBME) in energy, which separates thermal states from many-body localization (MBL) states, is a critical yet controversial concept in many-body systems. Here we examine the quasiperiodic t_1-t_2 model that features a mobility edge. With the addition of nearest-neighbor interactions, we demonstrate the potential existence of an MBME. Then we investigate the impact of a type of bond dissipation on the many-body system by calculating the steady-state density matrix and analyzing the transport behavior, and demonstrate that dissipation can cause the system to predominantly occupy either the thermal region or the MBL region, irrespective of the initial state. Finally, we discuss the effects of increasing system size. Our results indicate that dissipation can induce transitions between thermal and MBL states, providing a new approach for experimentally determining the existence of the MBME. Can dissipation induce a transition between many-body localized and thermal states? Yucheng Wang July 22, 2024 =================================================================================== § INTRODUCTION Closed quantum many-body systems with disorder or quasiperiodic potentials can exhibit localization <cit.>, wherein the system cannot act as its own heat bath and thus does not reach thermodynamic equilibrium. Many-body localization (MBL) exhibits several intriguing properties, such as the absence of conductivity (even at finite temperatures) <cit.>, specific spectral properties <cit.>, entanglement entropy that follows an area law <cit.>, and the slow logarithmic growth of entanglement entropy <cit.>. In addition to its fundamental theoretical importance, MBL has potentially significant applications in quantum information <cit.>, time crystals <cit.>, and other areas, and has therefore garnered widespread theoretical and experimental attention. For example, MBL phenomena have been observed in platforms such as cold-atom systems <cit.>, circuits with superconducting qubits <cit.>, and trapped-ion <cit.> systems. However, even in these highly controlled implementations, MBL systems are influenced by at least a slight coupling to the environment, such as inelastic scattering from lasers, which may ultimately disrupt localization.
Furthermore, given the observation of MBL in traditional solid-state experiments <cit.> and the potential applications of MBL phenomena, the influence of dissipation on MBL becomes even more unavoidable, and it has therefore attracted extensive research attention <cit.>. When coupled to a thermal bath, dissipation can lead to infinite heating in the long-time limit, so MBL is typically considered unstable in this setting <cit.>, as observed in cold-atom experiments <cit.>. Although dissipation can disrupt MBL, transport in the dissipation-induced delocalized phase is different from that in a typical thermalized phase <cit.>. Additionally, once the dissipation is removed, the system returns to a many-body localized state. Can dissipation drive a transition from an MBL state to a thermal state? Conversely, can a certain type of dissipation drive a transition from a thermal state to an MBL state? The concept of the mobility edge (ME) is a central idea in the field of localization physics. Similar to how MBL can be seen as the counterpart of Anderson localization in many-body systems, a natural question arises: can the concept of the ME be extended to many-body systems? In other words, is there a critical energy that separates thermal states from many-body localized states, known as the many-body mobility edge (MBME)? Numerous numerical results and some experiments suggest the existence of the MBME <cit.>. However, due to computational size limitations, the existence of a definitive MBME remains uncertain. For instance, Ref. <cit.> argues that local fluctuations in a system with a putative MBME can act as mobile bubbles, inducing global delocalization, and hence that an MBME cannot exist. If an MBME exists, could the introduction of dissipation lead to some interesting results? Would such results provide insights into determining the MBME? In this work, we demonstrate that by introducing a special type of controllable dissipation into a system that, at numerically accessible sizes, exhibits an MBME, we can drive the system into a steady state that predominantly consists of either thermal states or many-body localized states, independent of the initial state. Therefore, within the limits of computable finite sizes, we can assert that dissipation can induce transitions between thermal and MBL states. At the same time, our results also provide a feasible approach for determining the existence of the MBME in simulated systems such as cold atoms. § MODEL AND RESULTS We consider the quasiperiodic t_1-t_2 model <cit.> with nearest-neighbor (NN) interactions, given by H = -∑_j( t_1c_j+1^†c_j+t_2c_j+2^†c_j+ H.c.) +U∑_jn_jn_j+1 +2λ∑_jcos( 2πω j+ϕ) n_j, where t_1 and t_2 are the NN and next-NN hopping amplitudes, respectively, U is the NN interaction strength, ω is an irrational number, λ is the quasiperiodic potential strength, and ϕ is a phase offset. Without loss of generality, we take t_1=1, t_2=0.2, and ω =( √(5)-1) /2. Unless otherwise stated, we will use open boundary conditions in subsequent computations. In the non-interacting limit (U→ 0), this model displays a single-particle ME, as shown in Fig. <ref>(a), where we present the inverse participation ratio (IPR) of each eigenstate, which, for an arbitrary m-th eigenstate |ψ_m⟩=∑_j^Lψ_m,jc_j^†|∅⟩ with L being the system size, is defined as IPR=∑_j=1^L|ψ _m,j| ^4. It is known that for extended and localized states, the IPR tends to 0 or a finite nonzero value, respectively. From Fig.
<ref>(a), we see that the extended and localized states are separated by a single ME. In general, the t_1-t_2 model does not have an analytical expression for the ME. However, when t_1/t_2 is sufficiently large, the NN and next-NN hopping terms can be approximated by an exponential hopping with strength t=t_1e^p with p=ln(t_1/t_2) <cit.>. The latter has an exact expression for the ME, and based on this, we can obtain an approximate expression for the ME of the t_1-t_2 model in this case: E_c=-λ(t_1^2+t_2^2)+t_1^3/t_1t_2, as shown by the dashed line in Fig. <ref>(a). When the NN interaction is included, the single-particle ME may lead to the emergence of an MBME. To characterize the localized and thermal properties of this system, we consider the ratio of adjacent energy gaps <cit.>, r_i=min( δ _i+1,δ _i) /max(δ _i+1,δ _i), where δ _i=E_i+1-E_i represents the energy spacing with the eigenvalues E_i listed in ascending order. For a system in the thermal region, the level statistics follow the Wigner-Dyson distribution, and the average ⟨ r⟩ converges to 0.53. In the MBL region, the level statistics are Poisson, with ⟨ r⟩≈ 0.39. We rescale the many-body spectrum as ϵ _i=( E_i-E_g) /( E_max-E_g), where E_g( E_max) is the eigenenergy of the ground (highest excited) state. Based on this, we divide the eigenvalues into 10 different energy windows and average over samples and over all gaps in each window to obtain ⟨ r⟩, as shown in Fig. <ref>(b). We observe that as the strength of the quasiperiodic potential λ increases, the states in the uppermost window become localized first, similar to the single-particle case. In the inset of Fig. <ref>(b), we fix λ=0.8. The red and blue lines correspond to the ⟨ r⟩ of the lowest energy window and the highest energy window, respectively, which converge to 0.53 and 0.39. This indicates that the states in the highest window become localized while the states in the lowest window remain thermalized, suggesting the existence of an MBME. In the following discussion, we fix λ=0.8 and one-third filling. Next, we introduce dissipation acting on pairs of neighboring sites j and j+1, described by <cit.> O_j=(c_j^†+ac_j+1^†)(c_j-ac_j+1), where a=± 1, and j=1,2,…,L-1. This form of dissipation can be implemented using cold atoms in optical superlattices <cit.> or through arrays of superconducting microwave resonators <cit.>. This bond dissipation conserves the particle number but modifies the relative phase between adjacent lattice sites. It synchronizes them from an in-phase (out-of-phase) mode to an out-of-phase (in-phase) mode when a is set to -1 (1). The dissipative dynamics of the density matrix ρ is governed by the Lindblad master equation dρ(t)/dt=ℒ[ ρ( t) ] = -i[ H,ρ( t) ] +Γ∑_j( O_jρ O_j^†-1/2{ O_j^†O_j,ρ}) , where ℒ is the Lindbladian superoperator, and the strength of the jump operators is set to Γ, which is taken to be independent of the lattice site. Our results are essentially independent of the value of Γ, and without loss of generality, we set Γ=1. Here, ℒ is time-independent, so we can express ρ(t)=e^ℒtρ(0). The steady state is defined as ρ_s=ρ(t→∞), which corresponds to the eigenstate of the Lindbladian ℒ with zero eigenvalue, i.e., ℒ[ρ_s] = 0. We analyze the properties of the steady-state density matrix ρ_s in the eigenbasis of the many-body Hamiltonian H, that is, ρ_nm=⟨ψ_n|ρ_s|ψ_m⟩, where |ψ_n⟩ and |ψ_m⟩ denote the eigenstates of H. Fig. <ref> illustrates that the system's steady state primarily occupies the low-energy thermal region when a=1 [Fig.
<ref>(a)], whereas it predominantly occupies the high-energy MBL region when a=-1 [Fig. <ref>(b)]. This means that by adjusting the dissipation, we can control whether the steady state of this system predominantly resides in the thermalization region or in the MBL region. We note that even if the steady state resides in a localized region in the eigenbasis of the Hamiltonian, the system's dynamics in the presence of dissipation are not necessarily localized. For instance, dissipation mechanisms like dephasing can disrupt coherence and consequently destroy the system's localization properties. The bond dissipation described by Eq. (<ref>) in our study, when expanded, contains terms related to dephasing dissipation, which also undermine localization. However, when dephasing is what disrupts localization, coherence and localization can be restored once the dissipation is removed, completely eliminating the effects caused by dephasing. In contrast, the bond dissipation we discuss selectively stabilizes the system in specific regions through state selection. Upon removing the dissipation, the effects it induces do not vanish. This can be seen from the evolution of the density matrix after the dissipation is abruptly removed: ρ(t)=∑_mne^i(E_m-E_n)tρ_mn|ψ_m⟩⟨ψ_n|, where E_m and E_n are the eigenvalues corresponding to the many-body eigenstates |ψ_m⟩ and |ψ_n⟩. We observe that the diagonal elements of ρ (where E_m=E_n) do not change with time, while the off-diagonal elements fluctuate over time. For long observation times, however, it is the time-averaged dynamics over the observation interval that matters, and the contribution of the off-diagonal elements averages to zero. Therefore, when the steady state primarily occupies the localized (thermalization) region, removing the dissipation reveals behavior characteristic of the localized (thermalization) region. Based on this observation, we can conclude that dissipation serves as an intermediate process capable of inducing a transition between thermalization and MBL states. Experimental platforms such as ultracold atoms <cit.> can probe the transport properties of a system. Next, we therefore examine the transport properties of the system after it reaches a steady state and the dissipation is removed. By using these properties to distinguish between thermalization and MBL behaviors, we can experimentally verify that dissipation can induce transitions between thermal and localized states. We assume the system reaches a steady state at time t_0, after which the dissipation is removed. At this point, the system is again in a non-equilibrium state, and we then study its response to a probe electric field. The change in current δ I induced by a weak probe electric field can be derived from linear response theory: ⟨δ I(t)⟩=∫_t_0^tLσ(t,t^')E(t^')dt^', where σ(t,t^') is the non-equilibrium conductivity, which depends only on the time difference t-t^' for an equilibrium state <cit.>. Here we consider a delta-function probe field E(t)=Eδ(t-t_0), such that ⟨δ I(t)⟩=LEσ(t,t_0). For convenience, we set e=ħ=E=1. The conductivity in a finite-size system can then be written as <cit.> σ(t,t^')=i/Nθ(t-t^') Tr[ρ(t_0)[Î(t),B̂(t^')]], where the position operator is B=∑_jjc_j^†c_j and the current operator is I=dB/dt=-i[B,H].
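As a concrete illustration of the steady-state computation described above, the following Python sketch builds the t_1-t_2 Hamiltonian with the quasiperiodic potential, the bond dissipators O_j, and the vectorized Lindbladian, and then extracts ρ_s as the eigenvector of ℒ with eigenvalue closest to zero, expressed in the Hamiltonian eigenbasis. For brevity it is restricted to the single-particle sector of a short chain (the results in the text are for the interacting many-body problem), the parameter values are simply the illustrative ones quoted above, and the code is a minimal sketch rather than the authors' implementation.

```python
import numpy as np

# Illustrative parameters quoted in the text (single-particle sector only).
L, t1, t2, lam = 8, 1.0, 0.2, 0.8
omega, phi = (np.sqrt(5) - 1) / 2, 0.0
Gamma, a = 1.0, -1.0          # a = +1 selects the other dissipation mode

# Single-particle t1-t2 Hamiltonian with the quasiperiodic on-site potential.
H = np.zeros((L, L))
for j in range(L):
    H[j, j] = 2 * lam * np.cos(2 * np.pi * omega * (j + 1) + phi)
    if j + 1 < L:
        H[j, j + 1] = H[j + 1, j] = -t1
    if j + 2 < L:
        H[j, j + 2] = H[j + 2, j] = -t2

E, psi = np.linalg.eigh(H)                    # eigenvalues / eigenvectors
ipr = np.sum(np.abs(psi) ** 4, axis=0)        # IPR of each eigenstate

# Bond dissipators O_j = (|j> + a|j+1>)(<j| - a<j+1|) in the site basis.
jump_ops = []
for j in range(L - 1):
    ket = np.zeros((L, 1)); ket[j, 0], ket[j + 1, 0] = 1.0, a
    bra = np.zeros((1, L)); bra[0, j], bra[0, j + 1] = 1.0, -a
    jump_ops.append(ket @ bra)

# Vectorized Lindbladian, using vec(A rho B) = (B^T kron A) vec(rho)
# with column-stacking of rho.
I = np.eye(L)
Lind = -1j * (np.kron(I, H) - np.kron(H.T, I))
for O in jump_ops:
    OdO = O.conj().T @ O
    Lind = Lind + Gamma * (np.kron(O.conj(), O)
                           - 0.5 * (np.kron(I, OdO) + np.kron(OdO.T, I)))

# Steady state = eigenvector of the Lindbladian with eigenvalue closest to zero.
evals, evecs = np.linalg.eig(Lind)
rho_s = evecs[:, np.argmin(np.abs(evals))].reshape(L, L, order="F")
rho_s = rho_s / np.trace(rho_s)

# Occupations of the steady state in the Hamiltonian eigenbasis, rho_nm.
rho_nm = psi.conj().T @ rho_s @ psi
print("IPR of the eigenstates:", ipr.round(3))
print("steady-state occupations:", np.real(np.diag(rho_nm)).round(3))
```

Brute-force vectorization of the Lindbladian, as done here, scales as the fourth power of the Hilbert-space dimension, which is why the many-body calculations in the text are restricted to small system sizes or handled with more specialized methods.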
The current change in the thermal region is expected to show large fluctuations because it is significantly affected by the electric field, whereas in the MBL state the current change should exhibit smaller fluctuations due to the minimal influence of the electric field [see the appendix <ref>]. In Fig. <ref>, the blue line represents the a=1 case. As discussed earlier, in this scenario the steady state primarily occupies the thermalization region near the ground state. After removing the dissipation, we observe that the current change consistently exhibits larger oscillations. In contrast, when a=-1 (the orange line in Fig. <ref>), the system's steady state mainly occupies the many-body localized region near the highest excited state. After removing the dissipation, we see that the current change oscillates with a smaller amplitude and evolves to near zero at a faster rate, exhibiting relatively localized properties. § DISCUSSION ON THE EFFECTS OF SYSTEM SIZE As discussed earlier, in small systems there exists an MBME, and dissipation can induce transitions between thermal and MBL states. However, when the system size increases, the existence of an MBME becomes a controversial issue. If there is no MBME, then the dissipation-induced transitions we discussed here would not exist. We use the density matrix renormalization group (DMRG) method to study the thermalization and localization properties of the ground state and the highest excited state in large systems. To characterize the localization properties of these two states, we consider the density distribution of the single-particle excitation, defined as the difference in the density distribution after introducing an additional particle into the many-body system, i.e., δ n_j=ρ _N+1( j) -ρ _N( j), where ρ _N( j) =⟨ψ^N _g(e)|n_j|ψ^N _g(e)⟩ and |ψ^N _g(e)⟩ is the ground (highest excited) state of the system with N particles. We add a particle to a system with size L=144 and particle number N=32. Fig. <ref> shows the distribution of δ n at each lattice site. We see that for the ground state (the orange dots), the single-particle excitation is distributed quite uniformly across different lattice sites, indicating its extended character. In contrast, for the highest excited state (the blue dots), δ n_j is primarily concentrated on a few lattice sites, showing localized behavior. These results correspond to those in Fig. <ref>, indicating a significant difference in localization properties between the ground state and the highest excited state in larger systems. Similarly, we can also define the IPR of the single-particle excitation (SIPR) as SIPR=∑_jδ n_j^2/∑_j|δ n_j|. When the distribution of the single-particle excitation is extended or localized, the corresponding SIPR tends, respectively, to 0 or to a finite nonzero value. The SIPR values corresponding to the ground state and the highest excited state are 0.023 (close to 0) and 0.301, respectively [Fig. <ref>], further indicating that they are extended and localized, respectively. Our results suggest that the MBME might exist in large systems. In this case, the transition between thermal and MBL states induced by dissipation should also be present in large systems. This provides a potential method for experimentally determining whether the MBME exists. Although the MBME has been observed in some systems, such as superconducting qubit circuits <cit.>, the system sizes used in experiments are still not large enough. Cold-atom systems can simulate larger systems, but measuring the MBME in such systems presents significant challenges <cit.>.
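The single-particle-excitation diagnostics just defined are straightforward to evaluate once the density profiles ρ_N(j) and ρ_N+1(j) are available (for example, from a DMRG calculation). The short Python sketch below only illustrates the formulas for δ n_j and the SIPR; the density profiles it uses are synthetic stand-ins, not the data behind the figures.

```python
import numpy as np

def sipr(delta_n):
    """SIPR = sum_j delta_n_j^2 / sum_j |delta_n_j|, as defined in the text."""
    delta_n = np.asarray(delta_n, dtype=float)
    return np.sum(delta_n ** 2) / np.sum(np.abs(delta_n))

L = 144
rng = np.random.default_rng(0)

# Synthetic stand-ins for the N- and (N+1)-particle density profiles.
rho_N = rng.uniform(0.2, 0.25, size=L)
extended_excitation = np.full(L, 1.0 / L)            # spread over all sites
localized_excitation = np.zeros(L)
localized_excitation[70:73] = [0.2, 0.6, 0.2]        # concentrated on a few sites

# delta n_j = rho_{N+1}(j) - rho_N(j) in the two scenarios.
dn_extended = (rho_N + extended_excitation) - rho_N
dn_localized = (rho_N + localized_excitation) - rho_N

print("SIPR, extended-like excitation:", round(sipr(dn_extended), 3))    # close to 0
print("SIPR, localized-like excitation:", round(sipr(dn_localized), 3))  # order one
```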
Based on our results, by adjusting dissipation, we can selectively place the system in either the thermal region or the MBL region, facilitating measurements. This makes it possible to detect the MBME in cold atom experiments. § CONCLUSION We have investigated the impact of a type of bond dissipation on the quasiperiodic t_1-t_2 model with NN interactions, which possesses a mobility edge in the absence of interactions. By calculating the level spacing distribution, we observed the potential existence of a MBME. We further analyzed the distribution of the steady-state density matrix and revealed that dissipation can drive the many-body system into specific states primarily located in either the thermal region or the MBL region, regardless of the initial states. Thus, dissipation can be used to induce transitions between thermal and MBL states, allowing for the control of particle transport behaviors, as demonstrated by our analysis of the system's transport properties after removing dissipation. Finally, we used DMRG to study single-particle excitations in the ground state and highest excited state for large system sizes, demonstrating their different localization behaviors. Our findings suggest that dissipation can trigger transitions between thermal and MBL states, offering a novel method for experimentally identifying the presence of the MBME. § ACKNOWLEDGEMENTS This work is supported by National Key R&D Program of China under Grant No.2022YFA1405800, the National Natural Science Foundation of China (Grant No.12104205), the Key-Area Research and Development Program of Guangdong Province (Grant No. 2018B030326001), Guangdong Provincial Key Laboratory (Grant No.2019B121203002). § CHANGE IN CURRENT: AN EXAMPLE USING THE AA MODEL   We take the interacting Aubry-André (AA) model (i.e., with next-nearest-neighbor hopping strength t_2=0 in the Hamiltonian of Eq. (<ref>)) as an example to illustrate the different current changes between thermal and many-body localized states after introducing an electric field. We still fix one-third filling. We consider only the ground state and select two extreme values of the quasiperiodic potential: λ=0.1, representing the thermal state property, and λ=10, representing the MBL state property. As shown in Fig. <ref>, the current change in the thermal state has a large amplitude and a long oscillation period, similar to the case of a=1 shown in Fig. <ref> of the main text, while the MBL state exhibits a small amplitude around 0 and fast oscillations, similar to the case of a=-1 shown in Fig. <ref> of the main text. This can be understood as the current in the thermal state being significantly affected by the electric field, whereas the MBL state is minimally influenced by the electric field. 99 review1 D. A. Abanin, E. Altman, I. Bloch, and M. Serbyn, Colloquium: Many-body localization, thermalization, and entanglement, Rev. Mod. Phys. 91, 021001 (2019). review2 R. Nandkishore and D. A. Huse, Many-body localization and thermalization in quantum statistical mechanics, Annu. Rev. Condens. Matter Phys. 6, 15 (2015). Basko2006 D. M. Basko, I. L. Aleiner, and B. L. Altshuler, Metal–insulator transition in a weakly interacting many-electron system with localized single-particle states, Ann. Phys. (Amsterdam) 321, 1126 (2006). Gornyi2005 I. V. Gornyi, A. D. Mirlin, and D. G. Polyakov, Interacting Electrons in Disordered Wires: Anderson Localization and Low-T Transport, Phys. Rev. Lett. 95, 206603 (2005). Huse2007 V. Oganesyan and D. A. 
Huse, Localization of interacting fermions at high temperature, Phys. Rev. B 75, 155111 (2007). Huse2010 A. Pal and D. A. Huse, Many-body localization phase transition, Phys. Rev. B 82, 174411 (2010). Serbyn2016 M. Serbyn and J. E. Moore, Spectral statistics across the many-body localization transition, Phys. Rev. B 93, 041424(R) (2016). Chiara2006 G. De Chiara, S. Montangero, P. Calabrese, and R. Fazio, Entanglement entropy dynamics of Heisenberg chains, J. Stat. Mech. (2006) P03001. Prosen2008 M.Žnidarič, T. Prosen, and P. Prelovšek, Many-body localization in the Heisenberg XXZ magnet in a random field, Phys. Rev. B 77, 064426 (2008). Moore2012 J. H. Bardarson, F. Pollmann, and J. E. Moore, Unbounded Growth of Entanglement in Models of Many-Body Localization, Phys. Rev. Lett. 109, 017202 (2012). Serbyn2013 M. Serbyn, Z. Papić, and D. A. Abanin, Universal Slow Growth of Entanglement in Interacting Strongly Disordered Systems, Phys. Rev. Lett. 110, 260601 (2013). Altman2015 E. Altman and R. Vosk, Universal Dynamics and Renormalization in Many-Body-Localized Systems, Annu. Rev. Condens. Matter Phys. 6, 383 (2015). Nayak2014 B. Bauer and C. Nayak, Analyzing Many-Body Localization with a Quantum Computer, Phys. Rev. X 4, 041021 (2014). NormanYao M. P. Zaletel, M. Lukin, C. Monroe, C. Nayak, F. Wilczek, and N. Y. Yao, Colloquium: Quantum and classical discrete time crystals, Rev. Mod. Phys. 95, 031001 (2023). IBloch2015 M. Schreiber, S. S. Hodgman, P. Bordia, H. P. Lüschen, M. H. Fischer, R. Vosk, E. Altman, U. Schneider, and I. Bloch, Observation of many-body localization of interacting fermions in a quasirandom optical lattice, Science 349, 842 (2015). IBloch2016 P. Bordia, H. P. Lüschen, S. S. Hodgman, M. Schreiber, I. Bloch, and U. Schneider, Coupling Identical one-dimensional Many-Body Localized Systems, Phys. Rev. Lett. 116, 140401 (2016). Islam2015 R. Islam, R. Ma, P. M. Preiss, M. Eric Tai, A. Lukin, M. Rispoli and M. Greiner, Measuring entanglement entropy in a quantum many-body system, Nature 528, 77 (2015). Roushan2017 P. Roushan, C. Neill, J. Tangpanitanon, V. M. Bastidas, A. Megrant, R. Barends, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, A. Fowler, B. Foxen, M. Giustina, E. Jeffrey, J. Kelly, E. Lucero, J. Mutus, M. Neeley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. White, H. Neven, D. G. Angelakis, and J. Martinis, Spectral signatures of many-body localization with interacting photons, Science 358, 1175 (2017). HHWang2021 Q. Guo, C. Cheng, Z.-H. Sun, Z. Song, H. Li, Z. Wang, W. Ren, H. Dong, D. Zheng, Y.-R. Zhang, R. Mondaini, H. Fan and H. Wang, Observation of energy-resolved many-body localization, Nature Phys. 17, 234 (2021). JSmith2016 J. Smith, A. Lee, P. Richerme, B. Neyenhuis, P. W. Hess, P. Hauke, M. Heyl, D. A. Huse, and C. Monroe, Many-body localization in a quantum simulator with programmable random disorder, Nature Phys. 12, 907 (2016). Lake2207 A. Nietner, A. Kshetrimayum, J. Eisert, and B. Lake, A route towards engineering many-body localization in real materials, arXiv:2207.10696. DAHuse2014 R. Nandkishore, S. Gopalakrishnan, and D. A. Huse, Spectral features of a many-body-localized system weakly coupled to a bath, Phys. Rev. B 90, 064203 (2014). Johri2015 S. Johri, R. Nandkishore, and R. N. Bhatt, Many-Body Localization in Imperfectly Isolated Quantum Systems, Phys. Rev. Lett. 114, 117401 (2015). DAHuse2015 D. A Huse, R. Nandkishore, F. Pietracaprina, V. Ros, and A. Scardicchio, Localized systems coupled to small baths: From Anderson to zeno, Phys. Rev. 
B 92, 014203 (2015). Hyatt2017 K. Hyatt, J. R. Garrison, A. C. Potter, and B. Bauer, Many-body localization in the presence of a small bath, Phys. Rev. B 95, 035132 (2017). Fischer2016 M. H. Fischer, M. Maksymenko, and E. Altman, Dynamics of a Many-Body-Localized System Coupled to a Bath, Phys. Rev. Lett. 116, 160401 (2016). Levi2016 E. Levi, M. Heyl, I. Lesanovsky, and J. P. Garrahan, Robustness of Many-Body Localization in the Presence of Dissipation, Phys. Rev. Lett. 116, 237203 (2016). Medvedyeva M. V. Medvedyeva, T. Prosen, and M. Žnidarić, Influence of dephasing on many-body localization, Phys. Rev. B 93, 094205 (2016). Everest2017 B. Everest, I. Lesanovsky, J. P. Garrahan, and E. Levi, Role of interactions in a dissipative many-body localized system, Phys. Rev. B 95, 024310 (2017). Knap2017 S. Gopalakrishnan, K. R. Islam, and M. Knap, Noise-Induced Subdiffusion in Strongly Localized Quantum Systems, Phys. Rev. Lett. 119, 046601 (2017). LNWu2019 L.-N. Wu, A. Schnell, G. D. Tomasi, M. Heyl, and A. Eckardt, Describing many-body localized systems in thermal environments, New J. Phys. 21, 063026 (2019). JRen2020 J. Ren, Q. Li, W. Li, Z. Cai, and X. Wang, Noise-Driven Universal Dynamics towards an Infinite Temperature State, Phys. Rev. Lett. 124, 130602 (2020). Gopalakrishnan R. Nandkishore and S. Gopalakrishnan, Many body localized systems weakly coupled to baths, Annalen der Physik 529, 1600181 (2016). Roeck2017 W. De Roeck and F. Huveneers, Stability and instability towards delocalization in many-body localization systems, Phys. Rev. B 95, 155129 (2017). Luitz2017 D. J. Luitz, F. Huveneers, and W. De Roeck, How a Small Quantum Bath Can Thermalize Long Localized Chains, Phys. Rev. Lett. 119, 150602 (2017). Richter2024 J. Richter, Temporal relaxation of disordered many-body quantum systems under driving and dissipation, arXiv:2403.03315. Denisov2018 I. Vakulchyk, I. Yusipov, M. Ivanchenko, S. Flach, and S. Denisov, Signatures of many-body localization in steady states of open quantum systems, Phys. Rev. B 98, 020202(R) (2018) Bloch2017 H. P. Lüschen, P. Bordia, S. S. Hodgman, M. Schreiber, S. Sarkar, A. J. Daley, M. H. Fischer, E. Altman, I. Bloch, and U. Schneider, Signatures of Many-Body Localization in a Controlled Open Quantum System, Phys. Rev. X 7, 011034 (2017). Bloch2019 A. Rubio-Abadal, J.-yoon Choi, J. Zeiher, S. Hollerith, J. Rui, I. Bloch, and C. Gross, Many-Body Delocalization in the Presence of a Quantum Bath, Phys. Rev. X 9, 041014 (2019). MBME0 J. A. Kjäll, J. H. Bardarson, and F. Pollmann, Many-body localization in a disordered quantum ising chain, Phys. Rev. Lett. 113, 107204 (2014). MBME1 D. J. Luitz, N. Laflorencie, and F. Alet, Many-body localization edge in the random-field Heisenberg chain, Phys. Rev. B 91, 081103(R) (2015). MBME2 I. Mondragon-Shem, A. Pal, T. L. Hughes, and C. R. Laumann, Many-body mobility edge due to symmetry-constrained dynamics and strong interactions, Phys. Rev. B 92, 064203 (2015). MBME3 T. Devakul and R. R. Singh, Early breakdown of area-law entanglement at the many-body delocalization transition, Phys. Rev. Lett. 115, 187201 (2015). MBME4 S. Nag and A. Garg, Many-body mobility edges in a one-dimensional system of interacting Fermions, Phys. Rev. B 96, 060203(R) (2017). MBME5 B. Villalonga, X. Yu, D. J. Luitz, and B. K. Clark, Exploring one-particle orbitals in large many-body localized systems, Phys. Rev. B 97, 104406 (2018). MBME6 X. Wei, C. Cheng, G. Xianlong, and R. 
Mondaini, Investigating many-body mobility edges in isolated quantum systems, Phys. Rev. B 99, 165137 (2019); X. Wei, R. Mondaini, and G. Xianlong, Characterization of many-body mobility edges with random matrices, arXiv:2001.04105. MBME7 T. Chanda, P. Sierant, and J. Zakrzewski, Many-body localization transition in large quantum spin chains: The mobility edge, Phys. Rev. Res. 2, 032045(R) (2020). MBME8 R. Yousefjani and A. Bayat, Mobility edge in long-range interacting many-body localized systems, Phys. Rev. B 107, 045108 (2023). noME W. De Roeck, F. Huveneers, M. Müller, and M. Schiulaz, Absence of many-body mobility edges, Phys. Rev. B 93, 014203 (2016). t1t2 J. Biddle, B. Wang, D. J. Priour, Jr., and S. Das Sarma, Localization in one-dimensional incommensurate lattices beyond the Aubry-André model, Phys. Rev. A 80, 021603(R) (2009). Biddle1 J. Biddle and S. Das Sarma, Predicted mobility edges in one-dimensional incommensurate optical lattices: An exactly solvable model of Anderson localization, Phys. Rev. Lett. 104, 070601 (2010). Biddle2 J. Biddle, D. J. Priour, B. Wang, and S. Das Sarma, Localization in one-dimensional lattices with non-nearest-neighbor hopping: Generalized Anderson and Aubry-André models, Phys. Rev. B 83, 075105 (2011). BD1 S. Diehl, A. Micheli, A. Kantian, B. Kraus, H. P. Büchler, and P. Zoller, Quantum states and phases in driven open quantum systems with cold atoms, Nat. Phys. 4, 878 (2008). BD2 B. Kraus, H. P. Büchler, S. Diehl, A. Kantian, A. Micheli, and P. Zoller, Preparation of entangled states by quantum Markov processes, Phys. Rev. A 78, 042307 (2008). BD3 S. Diehl, A. Tomadin, A. Micheli, R. Fazio, and P. Zoller, Dynamical Phase Transitions and Instabilities in Open Atomic Many-Body Systems, Phys. Rev. Lett. 105, 015702 (2010); S. Diehl, E. Rico, M. A. Baranov, P. Zoller, Topology by Dissipation in Atomic Quantum Wires, Nat. Phys. 7, 971 (2011). BD4 C.-E. Bardyn, M. A. Baranov, C. V. Kraus, E. Rico, A. Imamoǧlu, P. Zoller, S. Diehl, Topology by dissipation, New J. Phys. 15, 085001 (2013). BD5 Y. Liu, Z. Wang, C. Yang, J. Jie, and Y. Wang, Dissipation induced extended-localized transition, Phys. Rev. Lett. 132, 216301 (2024). BD6 D. Marcos, A. Tomadin, S. Diehl, and P. Rabl, Photon condensation in circuit quantum electrodynamics by engineered dissipation, New J. Phys. 14, 055005 (2012). BD7 I. Yusipov, T. Laptyeva, S. Denisov, and M. Ivanchenko, Localization in Open Quantum Systems, Phys. Rev. Lett. 118, 070402 (2017). BD8 O. S. Vershinina, I. I. Yusipov, S. Denisov, M. V. Ivanchenko, T. V. Laptyeva, Control of a single-particle localization in open quantum systems, Europhys. Lett. 119, 56001 (2017); I. I. Yusipov, T. V. Laptyeva, M. V. Ivanchenko, Quantum jumps on Anderson attractors, Phys. Rev. B 97, 020301 (2018); I. Vakulchyk, I. Yusipov, M. Ivanchenko, S. Flach, and S. Denisov, Signatures of many-body localization in steady states of open quantum systems, Phys. Rev. B 98, 020202(R) (2018). BD9 Y. Peng, C. Yang, and Y. Wang, Manipulating Relaxation Time in Boundary-Dissipative Systems via Bond Dissipation, arXiv:2406.04183. GLindblad G. Lindblad, On the generators of quantum dynamical semigroups, Commun. Math. Phys. 48, 119 (1976). HPBreuer H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, Oxford, 2002). transportExp1 C.-C. Chien, S. Peotta, and M. D. Ventra, Quantum transport in ultracold atoms, Nat. Phys. 11, 998 (2015). transportExp2 J.-P. Brantut, J. Meineke, D. Stadler, S. Krinner, and T. 
Esslinger, Conduction of ultracold fermions through a mesoscopic channel, Science 337, 1069 (2012). transportExp3 J.-P. Brantut, C. Grenier, J. Meineke, D. Stadler, S. Krinner, C. Kollath, T. Esslinger, and A. Georges, A thermoelectric heat engine with ultracold atoms, Science 342, 713 (2013). transportExp4 S. Krinner, D. Stadler, J. Meineke, J.-P. Brantut, and T. Esslinger, Superfluidity with disorder in a thin film of quantum gas, Phys. Rev. Lett. 110, 100601 (2013). transportExp5 S. Krinner, D. Stadler, D. Husmann, J.-P. Brantut, and T. Esslinger, Observation of quantized conductance in neutral matter, Nature 517, 64 (2015). transport1 D. M. Kennes, E. Y. Wilner, D. R Reichman, and A. J. Millis, Nonequilibrium optical conductivity: General theory and application to transient phases, Phys. Rev. B 96, 054506, (2017). transport2 M. Saha, S. K. Maiti, and A. Purkayastha, Anomalous transport through algebraically localized states in one dimension. Phys. Rev. B 100, 174201 (2019). BlochME T. Kohlert, S. Scherg, X. Li, H. P. Lüschen, S. Das Sarma, I. Bloch, and M. Aidelsburger, Observation of Many-Body Localization in a One-Dimensional System with a Single-Particle Mobility Edge, Phys. Rev. Lett. 122, 170403 (2019).
http://arxiv.org/abs/2407.11927v1
20240716171833
Bayesian Causal Forests for Longitudinal Data: Assessing the Impact of Part-Time Work on Growth in High School Mathematics Achievement
[ "Nathan McJames", "Ann O'Shea", "Andrew Parnell" ]
stat.ML
[ "stat.ML", "cs.LG", "stat.AP" ]
Bayesian Causal Forests for Longitudinal Data: Assessing the Impact of Part-Time Work on Growth in High School Mathematics Achievement Nathan McJames^1,2, Corresponding Author: nathan.mcjames.2016@mumail.ie. This work has emanated from research conducted with the financial support of Science Foundation Ireland under grant number 18/CRT/6049. In addition, Andrew Parnell's work was supported by a Science Foundation Ireland Career Development Award (17/CDA/4695) and an SFI Research Centre award (12/RC/2289_P2). For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. Ann O'Shea^2, Andrew Parnell^1,2 ^1Hamilton Institute, Maynooth University, Co. Kildare, Ireland ^2Department of Mathematics and Statistics, Maynooth University, Co. Kildare, Ireland Received —; accepted — ======================================================================================================================================== § ABSTRACT Modelling growth in student achievement is a significant challenge in the field of education. Understanding how interventions or experiences such as part-time work can influence this growth is also important. Traditional methods like difference-in-differences are effective for estimating causal effects from longitudinal data. Meanwhile, Bayesian non-parametric methods have recently become popular for estimating causal effects from single-time-point observational studies. However, there remains a scarcity of methods capable of combining the strengths of these two approaches to flexibly estimate heterogeneous causal effects from longitudinal data. Motivated by two waves of data from the High School Longitudinal Study, the NCES' most recent longitudinal study, which tracks a representative sample of over 20,000 students in the US, our study introduces a longitudinal extension of Bayesian Causal Forests. This model allows for the flexible identification of both individual growth in mathematical ability and the effects of participation in part-time work. Simulation studies demonstrate the predictive performance and reliable uncertainty quantification of the proposed model. Results reveal the negative impact of part-time work for most students, but hint at potential benefits for those students with an initially low sense of school belonging. Clear signs of a widening achievement gap between students with high and low academic achievement are also identified. Potential policy implications are discussed, along with promising areas for future research.
Keywords: Part-Time Work; Bayesian Non-Parametrics; Causal Inference; Longitudinal Analysis; Student Achievement § INTRODUCTION For many high school students, part-time jobs have become an integral part of their daily routine, just as important as homework, studying, and completing assignments <cit.>. The reasons for seeking part-time work can vary widely among students. Some work to support their families financially, others to develop their character, gain maturity, or simply to earn spending money <cit.>. Regardless of the reasons for students choosing to work part-time, however, this work can have a significant impact on their educational journey <cit.>. Our study introduces a new approach for modelling individual-level growth in student achievement, and explores the causal effect of intensive part-time work on this growth, where part-time work is defined as upwards of 20 hours of work per week during the school year <cit.>. Estimating causal effects from longitudinal data is a challenging but essential task. Established methods include inverse probability weighting <cit.>, two-way fixed effects <cit.>, and difference-in-differences <cit.>. A key limitation of many of these approaches is that they often rely on strong assumptions that may not be appropriate for the target data. The parallel trends assumption of the difference-in-differences method, for example, assumes that the treatment group would have followed a similar trajectory to the control group had they not received treatment <cit.>. This can easily be violated in practice, as confounding variables may influence both the probability of receiving treatment and the trajectories of the outcome of interest. Students who self-select into part-time work, for example, may experience less growth than their peers even without part-time work <cit.>. Some work has been conducted to tackle this limitation by relaxing the assumption of parallel trends conditional on covariates <cit.>, but important limitations remain. Other methods rooted in structural equation modelling, such as G-estimation <cit.> and longitudinal extensions of targeted minimum loss-based estimation <cit.>, excel in estimating causal effects from longitudinal data when faced with challenges such as drop-out, time-varying covariates, and dynamic treatment regimes. A weakness of these methods, however, is that they are often restricted to estimating average causal effects, without the ability to explore individual-level variations or heterogeneity in responses to treatment. This is an important limitation, especially in the context of part-time work, as there is research to show that the effects of part-time employment can vary significantly depending on factors such as gender, motivations for working part-time, and socioeconomic backgrounds <cit.>. When understanding heterogeneity in causal effects is important, Bayesian non-parametric methods based on Bayesian Additive Regression Trees <cit.> and Bayesian Causal Forests <cit.> have become the gold standard. The default implementations of these methods are only applicable to single-time-point observational data, however, precluding the study of trends in educational outcomes over time. Our study extends BART and BCF to the setting of longitudinal data.
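For readers less familiar with the difference-in-differences estimator referred to above, the short Python sketch below computes it on simulated two-period data. The data-generating numbers are entirely hypothetical and are chosen only to show how a confounder that affects growth, not just levels, violates the parallel trends assumption and biases the estimate; it is an illustration, not part of the analysis in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Simulated two-period panel: the confounder u raises the chance of treatment
# and also lowers growth, so the parallel trends assumption fails.
u = rng.normal(size=n)
z = rng.binomial(1, 1 / (1 + np.exp(-u)))        # treatment between the two periods
y1 = 50 + 2 * u + rng.normal(size=n)             # pre-period outcome
true_effect = -1.5
y2 = y1 + 5 - 1.0 * u + true_effect * z + rng.normal(size=n)

# Canonical DiD estimate:
# (treated post - treated pre) - (control post - control pre).
did = ((y2[z == 1].mean() - y1[z == 1].mean())
       - (y2[z == 0].mean() - y1[z == 0].mean()))
print(f"DiD estimate: {did:.2f}  (true effect: {true_effect})")
# The gap between the two numbers is the bias caused by non-parallel trends.
```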
By combining the flexibility of these methods with the highly interpretable structure of the difference-in-differences model, we simultaneously relax the parallel trends assumption of the DiD methods, while also allowing for the study of individual-level variations in the growth curves of student achievement, and the heterogeneous impact of part-time work on this growth. While other studies <cit.> have also introduced longitudinal extensions of BART and BCF, with a focus on situations where there is a staggered adoption of treatment, our proposed model assumes a very different structure, and includes three important features targeted specifically at our motivating data. First, our model places separate priors directly over the growth trajectories and the effects of treatment on this growth. This allows us to inform the model with prior information, and comes with the added advantage of allowing the incorporation of model explainability tools and variable importance metrics directly associated with the parameters of interest <cit.>. The model structure also accommodates time-varying covariates, such as evolving levels of student motivation, which are common in education studies. Finally, while not the main focus of the present study, an additional feature not previously included in BCF models is the ability to handle missing data in the covariates or the treatment assignment. We tackle this issue with a feature borrowed from <cit.>, and a novel update step for the treatment status indicator. The remainder of our paper is structured as follows: In Section <ref> we describe our motivating dataset, the High School Longitudinal Study of 2009 (HSLS), and outline some key features of the data. Section <ref> introduces the proposed model, and shows how we extend BART and BCF to provide a foundation for estimating growth curves of student achievement and heterogeneous treatment effects of part-time work. To further support the credibility of our proposed methodology, Section <ref> applies our model to simulated data designed to mirror the characteristics of the HSLS dataset. We benchmark our performance against other potential candidate models, showcasing the unique capabilities of our model in overcoming challenges that remain difficult for existing approaches. In Section <ref>, we apply our model to the HSLS data and present the results of our study. Finally, we conclude our paper in Section <ref> with a discussion of our findings, implications for policy, and areas for future work. § DATA The High School Longitudinal Study of 2009 <cit.> is an ongoing study of a nationally representative sample of high school students in the US. It is the most recent of a series of five longitudinal studies launched by the National Center for Education Statistics. The first wave of data collection for HSLS took place in the fall of 2009 at the start of the academic year, when the students were in the ninth grade. More than 20,000 high school students took part in this first wave of the study. A follow-up of these students then took place in the spring of 2012, when the students were in the eleventh grade. Further follow-ups have also taken place in 2013, 2016, and 2017 to discover how the students are progressing in the years after high school, but these did not involve mathematics achievement tests, so we will focus solely on the first two waves of the data.
Due to some students dropping out of high school, some schools closing, and others declining to continue their participation in the study, the second wave of data collection involved just under 19,000 of the original Wave 1 sample. Data collection during HSLS followed a similar procedure during both waves of the study. Mathematics achievement was assessed during both waves using a computer-delivered assessment with questions designed to measure the algebraic reasoning abilities of the students. The resulting achievement estimates assigned to the students were calculated using Item Response Theory <cit.>. The contextual data gathered as part of the study was based on a survey answered by the students, a parent, school administrators, and school teachers. Information collected from the student survey includes characteristics such as sex, age, self-concept in mathematics, sense of school belonging, and other details such as participation in activities like part-time work. Data from the parent survey includes important socioeconomic variables such as family income, parental employment, and education. School-related data includes information such as the administrator's perception of the overall climate within the school, and the level of expectations of student academic success. To ensure a representative sample of students, a stratified, two-stage random sampling design was employed by the study organisers. This involved first approaching eligible high schools, 944 of which agreed to participate in the study, and then randomly sampling students from each of those schools, leading to a total sample of 21,444 participating students. Sampling weights resulting from this design are provided in the dataset to account for non-participation bias and were used to appropriately weight the results discussed later in the paper. Table <ref> of the supplementary material provides weighted summary statistics for a subset of the categorical variables from the base year (Wave 1), and also provides mean achievement levels from both waves of the study. Our study uses the public-use version of the HSLS data. Some of the data in this public-use version of the dataset has been obfuscated or removed in order to maintain the anonymity of the students and the schools who took part in the study. Therefore, a school identifier indicating which students attend the same school is not available in this version of the dataset, precluding a hierarchical modelling strategy. The restricted-use version of the dataset does include this information but is only available with strict controls in place. This is a limitation of our study, but it ensures our results are more easily reproducible without requiring a restricted-use version of the dataset. Furthermore, there is evidence to suggest that part-time work is more likely to be influenced by student- and family-related variables than by school-related variables <cit.>, partially mitigating the potential for unmeasured confounders to bias our results. § METHODOLOGY §.§ The Model Our motivating dataset consists of two waves, but for the sake of generality in this section we will describe how the model applies to datasets of up to T waves of student data. We are interested in modelling trajectories of student achievement where we have data on n_1 students participating in an initial base-year assessment, and subsequent follow-ups on n_2… n_T of the same students during waves 2 to T. We allow for the possibility of drop-out, whereby n_T≤ n_T-1≤…≤ n_1.
We will represent the contextual data associated with student i up to time t by x_i,t, where t=1 indicates the data is from the base year (Wave 1), and subsequent values of t indicate the data encompasses extra information collected up to and including Wave t. We will not distinguish between data from different surveys or questionnaires, so x_i,t captures all of the student, parent, and school-level data associated with student i up to time t. Given the accumulation of information on students over time as they complete more surveys from additional waves, the number of columns in x_i,t will be less than the number of columns in x_i,t+1. To distinguish between students who do and do not work part-time, we will let Z_i, t+1 be a binary indicator of length n_t+1 recording, for each student, whether they reported having a part-time job which involved them working on average 20 hours or more per week during the period between Waves t and t+1. For the achievement data, let y_i,t denote the observed mathematics achievement of student i recorded at time t. Our research questions concern two quantities of interest. The first is related to the growth in mathematics achievement between Waves t and t+1, which we will denote by G_i,t+1=y_i,t+1-y_i,t. The second concerns the impact of part-time work on this growth. To understand this impact, we adopt the Neyman-Rubin causal model <cit.>, and postulate that for each individual i there are two potential growth values: one that would be observed if the student worked part-time, G_i,t+1(Z_i,t+1=1), and one that would be observed if the student did not, G_i,t+1(Z_i,t+1=0). With these quantities defined, the impact of part-time work on the growth in student achievement during this period is captured by τ_i,t+1=G_i,t+1(Z_i,t+1=1)-G_i,t+1(Z_i,t+1=0). Of course, we only ever observe one of these potential growth values, namely G_i,t+1=G_i,t+1(Z_i,t+1=1)Z_i,t+1+G_i,t+1(Z_i,t+1=0)(1-Z_i,t+1), so we make the following assumptions: * The Stable Unit Treatment Value Assumption. We assume that the potential growth values of every student i between periods t and t+1 are independent of whether or not any other student j worked part-time in any period. * The Sequential Ignorability Assumption. We assume that conditional on their observed characteristics and treatment history up to the period of interest, the potential growth values of student i are independent of whether or not they worked part-time. Notationally, we assume that G_i,t+1(Z_i,t+1=0), G_i,t+1(Z_i,t+1=1) ⊥ Z_i,t+1 | x_i,t+1, Z_i,t. * The Overlap Assumption. We assume that for every observed covariate and treatment history, there is a non-zero probability of working, or not working, part-time during any period of interest: 0<P(Z_i,t+1=1|x_i,t+1, Z_i,t)<1. If these conditions hold <cit.>, then we may write that E[G_i,t+1(Z_i,t+1)|x_i,t+1] = E[G_i,t+1|Z_i,t+1, x_i,t+1]. Our model for student achievement across all waves of data then becomes: y_i,t = μ(x_i,1) + ∑_w=1^T-1( δ_w+1(x_i,w+1, y_i,1… y_i,w, π̂_i,w+1) + τ_w+1(x_i,w+1, y_i,1… y_i,w)Z_i,w+1) I(t>w) + ϵ_i,t, where the bracketed term is denoted G_w+1(x_i,w+1, y_i,1… y_i,w, π̂_i, w+1), and where the different parts of the model work together in a cumulative fashion to predict different parts of a student's mathematics achievement. Predictions for achievement at Wave 1 are given by μ(), while achievement at any subsequent Wave t is given by adding this to a cumulative sum of achievement growths, G_w+1().
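To make the additive structure of this model concrete before describing its components, the Python sketch below simulates data from the T = 2 version of the equation above. The functional forms chosen for μ(·), δ_2(·), and τ_2(·), and all numerical values, are arbitrary stand-ins for the unknown functions that the tree ensembles described next are meant to estimate; the sketch is purely illustrative and is not the simulation study reported later in the paper.

```python
import numpy as np

rng = np.random.default_rng(2023)
n, sigma = 2000, 1.0

# Covariates observed at Wave 1 (x1) and an extra covariate available by Wave 2 (x2).
x1 = rng.normal(size=(n, 3))
x2 = rng.normal(size=n)

# Arbitrary stand-ins for the unknown functions the tree ensembles will estimate.
mu = 50 + 3 * x1[:, 0] - 2 * (x1[:, 1] > 0)           # Wave 1 achievement level
delta2 = 4 + 1.5 * x1[:, 0] - 1.0 * x1[:, 1] + x2     # growth without part-time work
tau2 = -2 + 1.0 * (x1[:, 2] < -0.5)                   # heterogeneous effect of part-time work

# Treatment (20+ hours of work per week) depends on a covariate that also
# drives growth, so a naive comparison of growth is confounded.
pi = 1 / (1 + np.exp(-(0.5 * x1[:, 1] - 0.5)))
z2 = rng.binomial(1, pi)

# Observed achievement: y_{i,1} = mu + noise, y_{i,2} = mu + delta2 + tau2 * Z + noise.
y1 = mu + rng.normal(scale=sigma, size=n)
y2 = mu + delta2 + tau2 * z2 + rng.normal(scale=sigma, size=n)

growth = y2 - y1
naive_gap = growth[z2 == 1].mean() - growth[z2 == 0].mean()
print(f"naive treated-vs-control growth gap: {naive_gap:.2f}")
print(f"average true effect of part-time work: {tau2.mean():.2f}")
```

The gap between the two printed numbers illustrates why the model regresses the growth on covariates (and a propensity score) rather than relying on simple treated-versus-control comparisons.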
Within each time period, δ_w+1() and τ_w+1() represent the growth that would have been realised without part-time work, and the expected impact of part-time work on this growth respectively. The additional covariate π̂_i,w+1 included in the δ_w+1() part of the model is a propensity score, which estimates the probability of observation i receiving treatment during this period conditional on their covariates. This inclusion follows the advice of <cit.>, who demonstrated that incorporating this “clever covariate” can help mitigate the issue of regularisation-induced confounding. Finally, ϵ_i,t represents the error term for student i at time t, which we assume to be normally distributed with mean 0 and variance σ^2, ϵ_i,t∼ N(0, σ^2). In our model, the contributions made by μ() and each of δ_w+1() and τ_w+1() come from ensembles of n_μ, n_δ, and n_τ regression trees based on the BART model of <cit.>. For ease of exposition as we discuss the Bayesian backfitting MCMC algorithm by which the regression trees are fit to the data, let us consider the simplest scenario where n_μ=n_δ=n_τ=1 and T=2, leaving the general case for the supplementary material. The MCMC sampler begins with each of μ(), δ_2(), and τ_2() initialised as stumps (decision trees where the root is also the sole terminal node, and the terminal node parameter of each tree is set to zero). Next, each iteration starts by selecting at random one of four possible operations (grow, prune, change, or swap) to apply to the μ() tree in order to propose a new tree structure. This proposal is then accepted or rejected with a Metropolis-Hastings step before the terminal node (or now possibly nodes) of the μ() tree are updated via a Gibbs-sampling step which attempts to explain any leftover variation in the partial residual y_i,t less the contribution from δ_2() and τ_2(). Analogous operations are then applied to the δ_2() and τ_2() trees before the residual variance parameter is also updated via Gibbs-sampling. This cycle repeats for a specified number of iterations, providing a desired number of posterior draws for the tree structure and terminal node parameters of μ(), δ_2(), and τ_2(), as well as the residual variance parameter σ^2. Overfitting is prevented through the use of the tree prior from <cit.>, which specifies that the probability of any node at depth d being non-terminal is given by α(1+d)^-β. Therefore, for a tree T with terminal nodes h_1 ... h_K, and non-terminal nodes b_1 ... b_L, we have that: P(T)=∏_l=1^L α(1+d(b_l))^-β∏_k=1^K [1-α(1+d(h_k))^-β]. The strength of this prior can be adjusted through setting different values for α and β. For the μ() and δ() trees we adopt the default prior from <cit.>, of α=0.95, β=2, while for the τ() trees we impose stronger regularisation as we expect there to be less heterogeneity in the effects of part-time work than in y itself, choosing α=0.25, β=3 as suggested by <cit.>. To ensure each tree contributes approximately equally to the overall prediction, the terminal node parameters of each tree are given a normal prior. In each type of tree, we have μ∼ N(0, σ_μ^2), δ∼ N(0, σ_δ^2), τ∼ N(0, σ_τ^2). After scaling y to follow a standard normal distribution prior to fitting the model, a sensible choice for σ_μ^2 is 1/n_μ, ensuring the terminal node parameters in the μ() trees have adequate room to cover the range of the data. Similarly, we use a prior of σ_δ^2=1/n_δ, but given we expect the magnitude of the treatment effects to be relatively small in comparison to y we set σ_τ^2=0.5^2/n_τ.
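As a small illustration of how this prior penalises deep trees, the sketch below computes log P(T) from the depths of a tree's internal and terminal nodes. The node-depth representation is an assumption of the sketch, not a detail of any particular BART implementation.

```python
import math

def tree_log_prior(terminal_depths, internal_depths, alpha, beta):
    """log P(T): an internal node at depth d splits with probability
    alpha * (1 + d) ** -beta; a terminal node at depth d does not split."""
    logp = sum(math.log(alpha * (1 + d) ** -beta) for d in internal_depths)
    logp += sum(math.log(1 - alpha * (1 + d) ** -beta) for d in terminal_depths)
    return logp

# A depth-1 tree (one split, two leaves) under the default mu()/delta() prior...
print(tree_log_prior(terminal_depths=[1, 1], internal_depths=[0], alpha=0.95, beta=2))
# ...and under the stronger tau() prior, which assigns this split a lower prior probability.
print(tree_log_prior(terminal_depths=[1, 1], internal_depths=[0], alpha=0.25, beta=3))
```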
Finally, the conjugate prior for σ^2 is an inverse gamma distribution: σ^2 ∼Inverse-Gamma(ν/2, νλ/2), for which we have found that a reliable default choice is ν=3 and λ=0.1. §.§ Special Features Two challenges related to missing data required us to build some extra functionality into our model. The first challenge was related to missing data in the covariates. Missing data in the covariates can arise for several reasons in the dataset. For example, some questions may have been purposely skipped by students, their parents, or teachers, and other times the answer to a particular question may not have been known. On average, 1.9% of the data was missing, and the most data missing for any particular variable was 19%. Common approaches for dealing with missing data in the covariates include single or multiple imputation <cit.>, but an extra possibility specific to tree-based models is the approach developed by <cit.>, which involves treating missing data as an important feature of the data, operating under the assumption that the data is missing at random. To summarise, this approach directs observations with missing data to the left or right child of a node being split on, allowing the model to learn from any relationship between missingness and the outcome variable and to handle missing data as an integral part of the model, thus accounting for the uncertainty that missingness introduces. The second challenge was related to missing data in the treatment Z_i,2 itself, as not all students answered the question on how many hours they worked part-time during school weeks. This type of missingness affected 3.4% of the observations in the data. This challenge is addressed by introducing an additional Gibbs-sampling step at the end of each iteration of the MCMC sampler, where the missing Z_i,2 values are themselves treated as parameters to be updated, with prior probability p_i, conditional on the rest of the data: P(Z_i,2=1|…) = 1 / ( 1 + ((1-p_i)/p_i) exp{ [ (y_i,2-μ_i-δ_i-τ_i)^2 - (y_i,2-μ_i-δ_i)^2 ] / (2σ^2) } ). These two added features allowed us to keep a full representative sample of students while accounting for the added uncertainty introduced into our results by the presence of missing data. One final challenge that is common when working with assessment data of student achievement is the use of plausible values <cit.>. In order to prevent the computer delivered assessment from taking unduly long, it was only possible for HSLS to present each student with a limited number of questions. This introduces some room for error in the achievement estimates of the students, and as a result, HSLS provides researchers with five plausible values of student achievement from the posterior of each student's achievement estimate. In line with best practice, we therefore ran five chains of our model, one applied to each plausible value of student achievement, and pooled them together after burn-in to appropriately handle this uncertainty.
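The conditional probability above translates directly into a single Gibbs draw for a missing treatment indicator, as in the hedged sketch below; the numeric inputs are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_missing_z(y2, mu_i, delta_i, tau_i, p_i, sigma2):
    """One Gibbs draw for a missing part-time-work indicator Z_{i,2},
    following the conditional probability stated above."""
    resid_treated = y2 - mu_i - delta_i - tau_i
    resid_control = y2 - mu_i - delta_i
    log_odds_shift = (resid_treated ** 2 - resid_control ** 2) / (2.0 * sigma2)
    prob_z1 = 1.0 / (1.0 + ((1.0 - p_i) / p_i) * np.exp(log_odds_shift))
    return rng.binomial(1, prob_z1), prob_z1

# Toy inputs: current fitted values for one student with missing Z_{i,2}.
z_draw, prob = sample_missing_z(y2=0.4, mu_i=0.0, delta_i=0.6, tau_i=-0.08,
                                p_i=0.15, sigma2=0.5)
print(z_draw, round(prob, 3))
```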
A second strong connection is with the Bayesian Causal Forest model developed by <cit.>, which also uses BART as a foundation for estimating causal effects. Both methods could potentially be applied to our research questions but fall short of offering the same abilities in this context as our own model in several important ways which are worth discussing. The most natural way for BCF to be applied to our problem setting would be to manually calculate the growth values G_i,t+1 for each student i, and each time period t to t+1. Applying the BCF model to a specific time period would then yield the following, allowing us to recover what our model captures with the δ_t+1() and τ_t+1() part of the model: G_i,t+1=δ_t+1(x_i,t+1, y_i,t, π̂_i,t+1) + τ_t+1(x_i,t+1, y_i,t)Z_i,t+1+ϵ_i, ϵ_i∼ N(0, σ^2) A key limitation of this approach is that it does not model the full data generating process, only the growth between Waves t and t+1. This means that students who participate in Wave t but not in Wave t+1 (and consequently have no calculable G_i,t+1) are excluded from the model and are unable to inform the predictions made by the model. Secondly, manually calculating the G_i,t+1 growth values (to be used as the response variable in this approach), is likely to lead to a smaller signal to noise ratio in the response as the error terms from y_i,t and y_i,t+1 combine, making it more difficult for the model to detect the relationships it is trying to model. A BART only approach could also be applied to the data in a similar way using G_i,t+1 as the response, but this approach would share the same limitations. Of course, if treatment effects were the only quantity of interest then it would also be possible to apply BART or BCF directly to y_i,t+1, but this would preclude any inference on the growth values G_i,t+1, so would fail to address this aspect of our study. Researchers more familiar with difference-in-differences <cit.> based approaches might like to think of our model as a Bayesian non-parametric DiD model where our δ_t+1() trees model the difference for the control group (non part-time workers), and our τ_t+1() trees model the difference in this difference experienced by the treatment group (the part-time workers). Crucially, our approach handles this situation much more flexibly than traditional DiD based methods, as the flexibility of the δ_t+1() trees means we can relax the assumption of parallel trends conditional on the covariates of the students, and the τ_t+1() trees also allow us to capture heterogeneity in the effects of part-time work which is often not possible with DiD based methods. See the supplementary material for an illustration of how our proposed model fits into this framework. Finally, our method also shares similarities with causal methods applicable to longitudinal data such as G-estimation <cit.>, or longitudinal extensions of targeted minimum loss based estimation <cit.>. Both methods have gained popularity owing to their ability to handle complex situations such as time varying confounding, situations where the primary interest is in the causal effect of a series of sustained or irregular treatments, and where the interest is in the lagged effects of a treatment. Our focus however, will be on heterogeneity in the direct effect of a single period of part-time work on the immediately following mathematics assessment, which is not achievable with the available implementations of these methods. 
Additionally, our model will also provide insights into the growth trajectories of student achievement, a feature that is not modelled by these other approaches. § SIMULATION STUDIES In this section, we assess our proposed model's performance in a simulation study designed to match the features of the motivating HSLS data. We also compare our proposed model with alternative approaches in order to highlight the added performance offered by our method. Our simulation study consists of two data generating processes. DGP1 focuses on heterogeneity in treatment effects and growth curves, making it well-suited to flexible approaches based on BART and BCF. It features two waves of data to accommodate the alternative methods which cannot handle multiple time periods. DGP2 is inspired by a synthetic dataset from the R package . This process includes more than two time points and features time-varying covariates. It focuses on estimating the Average Treatment Effect, enabling a fair comparison of our method with the and LTMLE packages, which do not support the estimation of heterogeneous treatment effects, but are correctly specified for the features of this DGP. §.§ Data Generating Process 1 Our first data generating process is based on a modified version of the first Friedman dataset <cit.>, a common benchmarking dataset featuring non-linear effects and interaction terms. We will use this dataset to assess how well each of the flexible causal machine learning methods can capture heterogeneity in the growth curves of student achievement, and the treatment effects themselves. We simulate ten covariates measured at Wave 1: x_1… x_10, and a second observation of each of these ten variables again at the final Wave 2: x_11… x_20, where the second observation of each variable is equal to the first plus a small amount of random noise, e.g., x_16=x_6+r, with r a uniform random variable between 0 and 0.4. The structure of the simulated achievement level of each student is of the form described earlier: y_i,t = μ(x_i,1) + ( δ_2(x_i,2, y_i,1, π̂_i,2) + τ_2(x_i,2, y_i,1)Z_i,2 )I(t>1) + ϵ_i,t, ϵ_i,t∼ N(0, σ^2), where the bracketed term is the growth G(x_i,2, y_i,1, π̂_i,2) between Waves 1 and 2, μ(x_i,1) = 10sin(π x_1x_2)+20(x_3-0.5)^2+10x_4+5x_5, δ_2()=(1/3)μ(x_i,1)+3x_11^2+2x_15^2, and τ_2()=-x_4-x_14^2-x_15^3. The true propensity scores are given by p_i=P(Z_i,2=1|x_i,2)=plogis(μ_i^*+δ_i^*), where μ_i^*+δ_i^* is a normally scaled version of each of the original μ_i+δ_i values. The compared methods are our longitudinal BCF model, BART using the approach outlined in <cit.>, a standard BCF model from <cit.>, and the causal Generalised Random Forest model (GRF) from <cit.>. The recently proposed BCF extension by <cit.> would also make an excellent method for comparison when a documented R package becomes available. As outlined earlier, given that the longitudinal BCF model is the only one capable of directly modelling the growth curves, we will apply the other competing methods to the transformed outcome y_i,2-y_i,1, the difference in outcomes between Waves 1 and 2, to enable the prediction of growth using BART and BCF. For the longitudinal BCF model, we use 100 trees in the μ() part of the model responsible for predicting y_i,1 at Wave 1, 70 trees in the δ() part of the model responsible for predicting the growth under control, and 30 trees in the τ() part of the model responsible for predicting the heterogeneous treatment effects.
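For readers who want to replicate this setup, a minimal generator for DGP1 is sketched below. Where the description leaves details implicit, the sketch makes explicit assumptions: covariates are drawn Uniform(0,1) as in the standard Friedman benchmark, the noise standard deviation is set to 1, and the standardisation used for the true propensity scores is a simple z-score of μ + δ_2.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Wave 1 covariates x1..x10 (Uniform(0,1) draw is an assumption of this sketch).
x_w1 = rng.uniform(0, 1, size=(n, 10))
# Wave 2 re-measurements x11..x20: the Wave 1 value plus Uniform(0, 0.4) noise.
x_w2 = x_w1 + rng.uniform(0, 0.4, size=(n, 10))

x1, x2, x3, x4, x5 = (x_w1[:, j] for j in range(5))
x11, x14, x15 = x_w2[:, 0], x_w2[:, 3], x_w2[:, 4]

mu = 10 * np.sin(np.pi * x1 * x2) + 20 * (x3 - 0.5) ** 2 + 10 * x4 + 5 * x5
delta2 = mu / 3 + 3 * x11 ** 2 + 2 * x15 ** 2      # growth under control
tau2 = -x4 - x14 ** 2 - x15 ** 3                   # heterogeneous treatment effect

# True propensity scores: logistic transform of the standardised mu + delta2.
score = mu + delta2
p = 1.0 / (1.0 + np.exp(-(score - score.mean()) / score.std()))
z = rng.binomial(1, p)

sigma = 1.0                                        # assumed noise scale
y1 = mu + rng.normal(0, sigma, n)
y2 = mu + delta2 + tau2 * z + rng.normal(0, sigma, n)
print(y1[:3].round(2), y2[:3].round(2), z[:10])
```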
For the standard BCF approach, we use 170 trees in the prognostic part of the model which will provide estimates for the growth under control, and 30 trees for estimating the treatment effects. The BART and GRF approaches both use 200 trees in total. Each simulation consists of 500 training observations, and 1000 test observations. The Bayesian methods are run for 500 burn-in and 500 post burn-in iterations. Satisfactory convergence was assessed via visual inspection of the posterior samples for a small subset of the 1000 replications of the data generating process. Table <ref> summarises how the compared approaches perform when tasked with predicting δ_i and τ_i across 1000 replications of the simulation. For δ_i, this performance is evaluated using the average root mean squared error (RMSE) over the 1000 simulations: RMSE=√(1/N∑_i=1^N(δ_i-δ̂_̂î)^2). The equivalent metric used for τ_i is the precision in estimating heterogeneous effects (PEHE): PEHE=√(1/N∑_i=1^N(τ_i-τ̂_̂î)^2), also averaged over the 1000 simulations. Mean coverage rates of the 95% credible intervals, bias, and credible interval widths are also provided for both δ_i and τ_i. A visual representation of these results can be found in Figure <ref>. The clearest differences in Figure <ref> relate to model performance predicting the δ_i values, with our proposed LBCF model achieving much lower RMSE values. Comparison with the GRF model was not possible here, as the GRF model output only provides treatment effect estimates. In the right panel of Figure <ref>, the differences are more subtle, but the proposed model performs marginally better than the BART and BCF methods, which in turn both outperform the GRF based approach. Finally, the LBCF estimates are the least biased of all the compared methods, and are accompanied by close to ideal coverage rates. The credible interval widths from the LBCF estimator are similar to the competing methods when estimating the treatment effects, but are considerably narrower than the competing methods when estimating the growth values, offering a high degree of precision. §.§ Data Generating Process 2 Our second data generating process comes from the R package <cit.>, which implements G-estimation for longitudinal data. Our focus here is on estimating the average treatment effect. As described in <cit.>, the dataset includes: * A baseline covariate U∼ N(0,1) * Covariates L_t∼ N(1+L_t-1+0.5A_t-1+U), t=1,2,3, A_0=0 * Exposure A_t∼Bin(1, expit(1+0.1L_t+0.1A_t-1)), t=1,2,3 * Time varying outcome Y_t∼ N(1+A_t+γ_tA_t-1+∑_i=1^tL_t+U, 1), t=2,3,4 * Constants (γ_1, γ_2, γ_3)=(0, 0.5, 0.5) In this simulation study, the baseline covariate U remains fixed, while the time varying covariates L_t change at each wave in response to the values of the preceding covariates, and whether or not treatment was received. The likelihood of receiving treatment also depends on previous covariates and treatments. Note that while the time varying outcome depends on the treatment status at the current and previous time points, we will only estimate the direct effect of treatment at time t on y_t. The methods we will compare are G-estimation as implemented by , longitudinal targeted minimum loss based estimation from the LTMLE package, and our proposed method. The and LTMLE approaches will use the default settings of the R packages, which make them the correctly specified models, while our approach will use the same setup from the previous simulation study. 
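The evaluation metrics reported across both simulation studies reduce to a few lines of code. The sketch below computes RMSE for the growth values, PEHE for the treatment effects, and the coverage of 95% intervals; the toy arrays stand in for posterior summaries and carry no particular meaning.

```python
import numpy as np

def rmse_delta(delta_true, delta_hat):
    """Root mean squared error for the predicted growth-under-control values."""
    return float(np.sqrt(np.mean((delta_true - delta_hat) ** 2)))

def pehe(tau_true, tau_hat):
    """Precision in estimating heterogeneous effects for the treatment effects."""
    return float(np.sqrt(np.mean((tau_true - tau_hat) ** 2)))

def coverage(truth, lower, upper):
    """Share of 95% credible intervals that contain the true value."""
    return float(np.mean((lower <= truth) & (truth <= upper)))

rng = np.random.default_rng(2)
tau_true = rng.normal(-0.08, 0.1, 1000)
tau_hat = tau_true + rng.normal(0, 0.05, 1000)
print(pehe(tau_true, tau_hat), coverage(tau_true, tau_hat - 0.1, tau_hat + 0.1))
```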
As before, we will run 1000 replications of the simulation study, but will evaluate performance on the training sample of 500 observations (the LTMLE and packages can not make predictions on unseen data). Figure <ref> visualises the ATE estimates from the proposed approach, the package, and the LTMLE package. For the and LTMLE results, only one boxplot is shown. In the case of the results, this is because the package assumes the treatment has the same effect at all time points. In this simulation, this assumption is valid, but in general, the ability of our model to provide separate estimates at each time point is likely to be valuable. With the LTMLE package, it is necessary to define a contrast in order to estimate the effect of some sequence of treatments on the final observed outcome variable (in this case Y_3). For the simulation above, we tasked the LTMLE package with estimating the effect of the treatment sequence (A_1=0, A_2=0, A_3=1) relative to (A_1=0, A_2=0, A_3=0). This will recover the direct effect of A_3 on Y_3, which is equal to the direct effect of A_t on y_t, consistent across time. In contrast, our proposed LBCF model is able to provide ATE estimates for both the effect of A_2 on Y_2, and the effect of A_3 on Y_3, offering a more detailed and flexible analysis. Figure <ref> visualises the absolute bias in estimating the ATE for each of the approaches over 1000 replications of DGP2. The package is the best performer here, closely followed by the LBCF estimates which are consistently accurate across both time periods. We note, however, that the package assumes the treatment effect is the same at all time points, and this is unlikely to always be valid. The bias of the LTMLE package is consistently much higher, indicating the model often struggled to identify the true ATE from the data. A similar pattern is observed in Table <ref>, which provides additional information on the coverage rates, and mean credible/confidence interval widths. Here, the coverage achieved by the package and the two estimates provided by the proposed LBCF model are very close to ideal. The LTMLE package appears to underestimate the uncertainty in its estimates, however, and only achieves 76.8% coverage. The LBCF model's credible interval widths at both time points are slightly wider than those of the package but remain significantly narrower than the LTMLE package's confidence interval widths. In summary, the results from both data-generating processes in our simulation study underscore the proposed model's ability to provide flexible and accurate predictions, even when confronted with highly non-linear growth patterns or heterogeneity in treatment effects. The model achieved near-ideal coverage rates, exhibited minimal bias, and produced narrower credible intervals compared to other non-parametric causal models. In the second data-generating process, where the proposed model was benchmarked against a correctly specified G-estimation model, the LBCF model matched its strong performance, without making the same assumption that the treatment effect was consistent over time. Encouraged by the robust performance of our proposed model, we proceed to the next section, where we apply the longitudinal BCF method to the motivating HSLS dataset to assess the impact of part-time work on student achievement. § APPLICATION TO HIGH SCHOOL LONGITUDINAL STUDY Recall that HSLS includes two waves of data, with student achievement and other background characteristics measured at both time points. 
We are interested in understanding the amount by which the mathematics achievement of the students increases between these waves, how this growth depends on the characteristics of the students, the effect of part-time work on this growth, and how this effect is potentially moderated by other observed variables. We apply our model to this dataset using the same model structure from the simulation study, with the same number of trees, but run a larger number of burn-in (3000) and post burn-in iterations (2000), to ensure satisfactory convergence. As described in the methodology, missing data is handled internally by the model, so there is no requirement for multiple imputation. The plausible values of student achievement are appropriately accounted for by pooling 5 separate chains, each of which were applied to one of the 5 sets of plausible values. Sampling weights are also accounted for by appropriately weighting the average treatment effect results displayed below. Figure <ref> shows the posterior distribution of the average growth, and a histogram of the individual δ_i estimates for each student present in Wave 2 of the dataset. The average growth is close to 0.63, and the majority of the growth estimates are positive, indicating that most students are expected to increase their mathematics achievement between Waves 1 and 2. Within the sample there is large variation, however, with some students predicted to increase their mathematics achievement by up to 2 units on the achievement scale, while for a small number of students, mathematics achievement is actually predicted to decrease by a small amount. For context, achievement at Wave 1 was normally distributed with a mean of approximately 0, and a standard deviation of approximately 1. Therefore, an increase in achievement by two units, or two standard deviations, is quite significant. To identify key moderating variables contributing to the variation in δ_i values, variable importance measures were calculated for the δ() trees by counting how often different variables were selected for the splitting rules used in this part of the model. This investigation identified the achievement of the students measured at Wave 1 as being highly influential. Prompted by this finding, we created Figure <ref> which shows a scatter plot of the δ_i predictions versus the achievement of the students measured at Wave 1. The very strong positive relationship between Wave 1 achievement and predicted growth indicates that students who initially perform well in mathematics are predicted to increase their achievement by substantially more than those with lower achievement levels. At the extremely high levels of Wave 1 achievement, students are predicted to increase achievement by 1.5 units on average, while for students at the opposite end of the spectrum, growth in achievement is minimal. This observation points to a widening achievement gap between students at the high and low ends of the achievement spectrum <cit.>. The posterior distribution of the average treatment effect for working part-time at an intensity of greater than 20 hours per week between Waves 1 and 2 is displayed in Figure <ref>. The posterior mean of the ATE is approximately -0.08, with a 95% credible interval ranging from -0.050 to -0.110. This indicates that on average, part-time work is expected to reduce the growth in student achievement between Waves 1 and 2 by between 0.050 and 0.110 units. 
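A split-count importance measure of the kind described above can be computed as in the following sketch; the tree representation and variable names are hypothetical and are only meant to illustrate the counting scheme, not the internals of the fitted model.

```python
from collections import Counter

def split_count_importance(trees):
    """trees: one list of (variable, threshold) splitting rules per tree in an
    ensemble. Importance = relative frequency with which each variable is
    chosen for a splitting rule."""
    counts = Counter(var for tree in trees for (var, _) in tree)
    total = sum(counts.values()) or 1
    return {var: n / total for var, n in counts.items()}

# Toy posterior draw of three small trees (variable names are hypothetical).
toy_trees = [[("wave1_achievement", 0.2), ("school_belonging", 1.5)],
             [("wave1_achievement", -0.1)],
             [("family_income", 3.0)]]
print(split_count_importance(toy_trees))
```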
To contextualise this effect size, note that the standard deviation of the δ_i growth values in achievement is approximately 0.44. Thus, the observed effect size corresponds to a decrease in achievement growth by nearly 0.2 standard deviations, which can be considered a medium to large effect size <cit.>. A histogram of the individual conditional average treatment effects (ICATEs) for each of the students in the sample can be found in Figure <ref>. The majority of the ICATEs are centered quite close to the ATE of -0.08, but there are also signs of heterogeneity. Notably, there is an interesting tail of the histogram stretching across into a positive area where the effect of part-time work is actually predicted to have a positive effect on achievement growth. To explore this finding further, we calculated variable importance metrics for the τ() trees in our model to identify any variables that might strongly moderate the treatment effect. The most influential variable resulting from this analysis was a measure of the students' sense of school belonging at Wave 1 of the study. Figure <ref> visualises this variable's relationship with the ICATEs from the model. The results suggest that the students predicted to experience a positive effect from part-time work are those with an initially low sense of school belonging. This interesting finding, which might initially appear quite strange, aligns well with some `traditional' views that part-time work can benefit students. Early research has suggested, for example, that part-time employment can provide students with greater time management skills <cit.>, and other benefits such as a sense of purpose and responsibility. These benefits can be especially pronounced among students with low achievement or a diminished sense of belonging in school <cit.>. This sense of purpose and responsibility acquired through part-time work could serve to re-focus students, leading to spillover effects benefiting their academic performance <cit.>. Therefore, while part-time work may be associated with negative outcomes for the majority of students, there may be certain subgroups, such as students experiencing a low sense of belonging in school, who may experience positive effects from employment. In summary, this section presented two key findings from the analysis of the HSLS data. Firstly, substantial variation was observed in the extent to which students improved their achievement between Waves 1 and 2. Further analysis showed that this variation was driven primarily by the baseline achievement levels of the students, with initially high performing students showing much higher growth than their peers. These students, starting from a solid foundation of high achievement may find it easier to build upon their academic progress, as they are in a better place to acquire and digest new knowledge in class. The second key finding was that on average, part-time work had a modest, but negative effect on the growth of student achievement. This supports the “zero-sum" argument that part-time work detracts from study time, homework completion, and rest, hindering academic progress as a result. A notable exception was that students with initially low school belonging might actually benefit from part-time work, highlighting the ability of our model to capture complex relationships between student performance and employment. 
§ DISCUSSION Drawing on longitudinal data from the High School Longitudinal Study of 2009, our study introduced an innovative method for modeling growth in student achievement. Our model also estimates the causal impact of interventions such as part-time work on this growth. By extending Bayesian Additive Regression Trees <cit.> and Bayesian Causal Forests <cit.>, the primary strength of our model lies in its ability to flexibly capture both individual growth trajectories in student achievement and the potentially heterogeneous treatment effects of part-time work, which may be influenced by various covariates. This approach contrasts with many existing methods that either lack the flexibility to model individual variations or are confined to single time-point observational data, precluding an analysis of achievement growth over time. Our model was also equipped with two special features that allowed it to handle missing data in the covariates and the treatment status indicator. Simulation study results from Section <ref> provide strong support for the impressive predictive performance of the model, which demonstrated clear advantages over three competing methods when tasked with predicting growth values at the individual student level, and heterogeneous treatment effects. Close to ideal coverage rates were also achieved. The proposed model also showed strong performance in a second simulation study, matching the performance of two correctly specified models designed specifically for use with longitudinal datasets. The results from our model application to the motivating HSLS data produced some interesting findings. First, the model was able to reveal a large disparity in the predicted growth values among students with initially high and low levels of academic achievement. This finding of a widening achievement gap underscores the importance of early interventions in schools and academic institutions. By addressing achievement gaps at the elementary and middle school levels, policy decisions can prevent these disparities from becoming entrenched. This is especially important given previous research which indicates that it becomes much more challenging to effectively remedy these gaps by the ninth or eleventh grade <cit.>. On average, part-time work was found to have a negative effect on student achievement, with the 95% credible interval for the ATE ranging from -0.050 to -0.110. This is important, as we calculated nearly 50% of students in our sample participated in some level of part-time work during high school, and more than 15% of students participated in intensive part-time work, requiring upwards of 20 hours of work a week. Large amounts of heterogeneity were apparent in the ICATEs, however, and an analysis of the variable importance metrics from the model identified sense of school belonging during Wave 1 as a significant contributor to this variation. The finding that students with a low sense of school belonging may actually be benefiting slightly from part-time work ties in with previous findings that show students can benefit from the routine, sense of purpose and responsibility that part-time work can provide <cit.>. From a policy perspective, however, we do not recommend that students beginning to disengage from the school system should take on intensive part-time work. Instead, we suggest that further research is needed to explore how disengaging students can be encouraged to find a sense of purpose or routine through other activities such as sports or youth programs. 
Alternatively, part-time work with moderate hours may be a more balanced approach. A limitation of the model proposed in our study is that owing to the fact each growth period and associated treatment effect is dedicated a separate BART model, the computational cost of running the model may become quite large in settings with many waves of data. Replacing the BART models with more efficient XBART models as in <cit.> and <cit.> would therefore make a promising area for future work, widening the applicability of the proposed method. Given the flexibility and widely adopted nature of the underlying BART framework, a natural extension of the longitudinal causal model adopted in our study might be to survival data <cit.>. Other natural extensions could include allowing multivariate <cit.> or multinomial outcomes <cit.>, or the incorporation of random effects <cit.>. Additionally, given the specificity of our results to a representative sample of ninth to eleventh grade high school students from the US, an application of a similar model to other countries or grade levels would be of interest. More generally, we expect that the model's flexibility will allow it to be applied to a wide variety of datasets across diverse fields and application areas. 1 xx Abadie2005abadie2005semiparametric Abadie, A. 2005, `Semiparametric difference-in-differences estimators', The Review of Economic Studies 72(1), 1–19. [Angrist et al.]Angrist, Imbens Rubin1996angrist1996identification Angrist, J. D., Imbens, G. W. Rubin, D. B. 1996, `Identification of causal effects using instrumental variables', Journal of the American Statistical Association 91(434), 444–455. Bachman Schulenberg2014bachman2014part Bachman, J. G. Schulenberg, J. 2014, How part-time work intensity relates to drug use, problem behavior, time use, and satisfaction among high school seniors: Are these consequences or merely correlates?, in `Risks and Problem Behaviors in Adolescence', Routledge, pp. 198–213. [Cai et al.]Cai, Choi, Hansen Harrell2016cai2016item Cai, L., Choi, K., Hansen, M. Harrell, L. 2016, `Item response theory', Annual Review of Statistics and Its Application 3(1), 297–321. Callaway Sant’Anna2021callaway2021difference Callaway, B. Sant’Anna, P. H. 2021, `Difference-in-differences with multiple time periods', Journal of Econometrics 225(2), 200–230. [Chipman et al.]Chipman, George McCulloch2010chipman2010bart Chipman, H. A., George, E. I. McCulloch, R. E. 2010, `BART: Bayesian additive regression trees', The Annals of Applied Statistics 4(1), 266–298. Donald Lang2007donald2007inference Donald, S. G. Lang, K. 2007, `Inference with difference-in-differences and other panel data', The Review of Economics and Statistics 89(2), 221–233. [Entwisle et al.]Entwisle, Alexander Olson2000entwisle2000early Entwisle, D. R., Alexander, K. L. Olson, L. S. 2000, `Early work histories of urban youth', American Sociological Review 65(2), 279–297. Friedman1991friedman1991multivariate Friedman, J. H. 1991, `Multivariate adaptive regression splines', The Annals of Statistics 19(1), 1–67. [Hahn et al.]Hahn, Murray Carvalho2020hahn2020bayesian Hahn, R. P., Murray, J. S. Carvalho, C. M. 2020, `Bayesian regression tree models for causal inference: Regularization, confounding, and heterogeneous effects (with discussion)', Bayesian Analysis 15(3), 965–1056. He Hahn2023he2023stochastic He, J. Hahn, P. R. 2023, `Stochastic tree ensembles for regularized nonlinear regression', Journal of the American Statistical Association 118(541), 551–570. 
Hernán Robins2020hernan2020causal Hernán, M. A. Robins, J. M. 2020, Causal Inference: What If, Boca Raton: Chapman & Hall/CRC. Hill2011hill2011bayesian Hill, J. L. 2011, `Bayesian nonparametric modeling for causal inference', Journal of Computational and Graphical Statistics 20(1), 217–240. Hogan Lancaster2004hogan2004instrumental Hogan, J. W. Lancaster, T. 2004, `Instrumental variables and inverse probability weighting for causal inference from longitudinal observational studies', Statistical Methods in Medical Research 13(1), 17–48. [Howieson et al.]Howieson, McKechnie, Hobbs Semple2012howieson2012new Howieson, C., McKechnie, J., Hobbs, S. Semple, S. 2012, `New perspectives on school students’ part-time work', Sociology 46(2), 322–338. Imai Kim2021imai2021use Imai, K. Kim, I. S. 2021, `On the use of two-way fixed effects regression models for causal inference with panel data', Political Analysis 29(3), 405–415. [Ingels et al.]Ingels, Pratt, Herget, Burns, Dever, Ottem, Rogers, Jin Leinwand2011ingels2011high Ingels, S. J., Pratt, D. J., Herget, D. R., Burns, L. J., Dever, J. A., Ottem, R., Rogers, J. E., Jin, Y. Leinwand, S. 2011, `High school longitudinal study of 2009 (HSLS: 09): Base-year data file documentation. NCES 2011-328.', National Center for Education Statistics . [Inglis et al.]Inglis, Parnell Hurley2022ainglis2022visualizations Inglis, A., Parnell, A. Hurley, C. 2022a, `Visualizations for Bayesian additive regression trees', arXiv preprint arXiv:2208.08966 . [Inglis et al.]Inglis, Parnell Hurley2022binglis2022visualizing Inglis, A., Parnell, A. Hurley, C. B. 2022b, `Visualizing variable importance and variable interaction effects in machine learning models', Journal of Computational and Graphical Statistics 31(3), 766–778. Kablaoui Pautler1991kablaoui1991effects Kablaoui, B. N. Pautler, A. J. 1991, `The effects of part-time work experience on high school students', Journal of Career Development 17(3), 195–211. Kapelner Bleich2015kapelner2015prediction Kapelner, A. Bleich, J. 2015, `Prediction with missing data via Bayesian additive regression trees', Canadian Journal of Statistics 43(2), 224–239. [Khorramdel et al.]Khorramdel, von Davier, Gonzalez Yamamoto2020khorramdel2020plausible Khorramdel, L., von Davier, M., Gonzalez, E. Yamamoto, K. 2020, Plausible values: principles of item response theory and multiple imputations, Springer International Publishing. King et al.1989king1989improving King, A. J. et al. 1989, Improving Student Retention in Ontario Secondary Schools. Student Retention and Transition Series., ERIC. Kraft2020kraft2020interpreting Kraft, M. A. 2020, `Interpreting effect sizes of education interventions', Educational Researcher 49(4), 241–253. [Krantsevich et al.]Krantsevich, He Hahn2023krantsevich2023stochastic Krantsevich, N., He, J. Hahn, P. R. 2023, Stochastic tree ensembles for estimating heterogeneous effects, in `International Conference on Artificial Intelligence and Statistics', PMLR, pp. 6120–6131. Kurz2022kurz2022augmented Kurz, C. F. 2022, `Augmented inverse probability weighting and the double robustness property', Medical Decision Making 42(2), 156–167. Lee Staff2007lee2007work Lee, J. C. Staff, J. 2007, `When work matters: The varying impact of work intensity on high school dropout', Sociology of Education 80(2), 158–178. [Lendle et al.]Lendle, Schwab, Petersen van der Laan2017lendle2017ltmle Lendle, S. D., Schwab, J., Petersen, M. L. van der Laan, M. J. 
2017, `LTMLE: an R package implementing targeted minimum loss-based estimation for longitudinal data', Journal of Statistical Software 81, 1–21. Lin Tsai2020lin2020missing Lin, W.-C. Tsai, C.-F. 2020, `Missing value imputation: a review and analysis of the literature (2006–2017)', Artificial Intelligence Review 53, 1487–1509. [McCall et al.]McCall, Hauser, Cronin, Kingsbury Houser2006mccall2006achievement McCall, M. S., Hauser, C., Cronin, J., Kingsbury, G. G. Houser, R. 2006, `Achievement gaps: An examination of differences in student achievement and growth. the full report.', Northwest Evaluation Association . [McJames et al.]McJames, O’Shea, Goh Parnell2024mcjames2024bayesian McJames, N., O’Shea, A., Goh, Y. C. Parnell, A. 2024, `Bayesian causal forests for multivariate outcomes: application to Irish data from an international large scale education assessment', Journal of the Royal Statistical Society Series A: Statistics in Society p. qnae049. [Monahan et al.]Monahan, Lee Steinberg2011monahan2011revisiting Monahan, K. C., Lee, J. M. Steinberg, L. 2011, `Revisiting the impact of part-time work on adolescent adjustment: Distinguishing between selection and socialization using propensity score matching', Child Development 82(1), 96–112. [Morgan et al.]Morgan, Farkas, Hillemeier Maczuga2016morgan2016science Morgan, P. L., Farkas, G., Hillemeier, M. M. Maczuga, S. 2016, `Science achievement gaps begin very early, persist, and are largely explained by modifiable factors', Educational Researcher 45(1), 18–35. Murray2021murray2021log Murray, J. S. 2021, `Log-linear Bayesian additive regression trees for multinomial logistic and count regression models', Journal of the American Statistical Association 116(534), 756–769. Myint2024myint2024controlling Myint, L. 2024, `Controlling time-varying confounding in difference-in-differences studies using the time-varying treatments framework', Health Services and Outcomes Research Methodology 24(1), 95–111. Robins1997robins1997causal Robins, J. M. 1997, Causal inference from complex longitudinal data, in `Latent variable modeling and applications to causality', Springer, pp. 69–117. Robotham2012robotham2012student Robotham, D. 2012, `Student part-time employment: characteristics and consequences', Education+ Training 54(1), 65–75. [Roth et al.]Roth, Sant’Anna, Bilinski Poe2023roth2023s Roth, J., Sant’Anna, P. H., Bilinski, A. Poe, J. 2023, `What’s trending in difference-in-differences? a synthesis of the recent econometrics literature', Journal of Econometrics 235(2), 2218–2244. [Rowley et al.]Rowley, Edmunds, Dufur, Jarvis Silveira2020rowley2020contextualising Rowley, K. J., Edmunds, C. C., Dufur, M. J., Jarvis, J. A. Silveira, F. 2020, `Contextualising the achievement gap: Assessing educational achievement, inequality, and disadvantage in high-income countries', Comparative Education 56(4), 459–483. Sekhon2008sekhon2008neyman Sekhon, J. S. 2008, `The Neyman-Rubin model of causal inference and estimation via matching methods', The Oxford Handbook of Political Methodology 2, 1–32. https://doi.org/10.1093/oxfordhb/9780199286546.003.0011 Singh Ozturk2000singh2000effect Singh, K. Ozturk, M. 2000, `Effect of part-time work on high school mathematics and science course taking', The Journal of Educational Research 94(2), 67–74. [Sparapani et al.]Sparapani, Logan, McCulloch Laud2016sparapani2016nonparametric Sparapani, R. A., Logan, B. R., McCulloch, R. E. Laud, P. W. 
2016, `Nonparametric survival analysis using Bayesian additive regression trees (bart)', Statistics in Medicine 35(16), 2741–2753. [Splawa-Neyman et al.]Splawa-Neyman, Dabrowska Speed1990splawa1990application Splawa-Neyman, J., Dabrowska, D. M. Speed, T. 1990, `On the application of probability theory to agricultural experiments. Essay on principles. Section 9.', Statistical Science 5(4), 465–472. https://doi.org/10.1214/ss/1177012031 [Steinberg et al.]Steinberg, Greenberger, Garduque McAuliffe1982steinberg1982high Steinberg, L. D., Greenberger, E., Garduque, L. McAuliffe, S. 1982, `High school students in the labor force: Some costs and benefits to schooling and learning', Educational Evaluation and Policy Analysis 4(3), 363–372. [Tompsett et al.]Tompsett, Vansteelandt, Dukes De Stavola2022tompsett2022gesttools Tompsett, D., Vansteelandt, S., Dukes, O. De Stavola, B. 2022, `gesttools: General purpose G-estimation in R', Observational Studies 8(1), 1–28. Wager Athey2018wager2018estimation Wager, S. Athey, S. 2018, `Estimation and inference of heterogeneous treatment effects using random forests', Journal of the American Statistical Association 113(523), 1228–1242. [Wang et al.]Wang, Martinez Hahn2024wang2024longbet Wang, M., Martinez, I. Hahn, P. R. 2024, `Longbet: Heterogeneous treatment effect estimation in panel data', arXiv preprint arXiv:2406.02530 . Wu2005wu2005role Wu, M. 2005, `The role of plausible values in large-scale surveys', Studies in Educational Evaluation 31(2-3), 114–128. [Wundervald et al.]Wundervald, Parnell Domijan2022wundervald2022hierarchical Wundervald, B., Parnell, A. Domijan, K. 2022, `Hierarchical embedded Bayesian additive regression trees', arXiv preprint arXiv:2204.07207 . [Yeager et al.]Yeager, Bryan, Gross, Murray, Krettek Cobb, HF Santos, Gravelding, Johnson Jamieson2022yeager2022synergistic Yeager, D. S., Bryan, C. J., Gross, J. J., Murray, J. S., Krettek Cobb, D., HF Santos, P., Gravelding, H., Johnson, M. Jamieson, J. P. 2022, `A synergistic mindsets intervention protects adolescents from stress', Nature 607(7919), 512–520. Zimmerman Kitsantas2005zimmerman2005homework Zimmerman, B. J. Kitsantas, A. 2005, `Homework practices and academic achievement: The mediating role of self-efficacy and perceived responsibility beliefs', Contemporary Educational Psychology 30(4), 397–417. § SUPPLEMENTARY MATERIALS - TABLE OF SUMMARY STATISTICS § SUPPLEMENTARY MATERIALS - LBCF DIAGRAM § SUPPLEMENTARY MATERIALS - LBCF ALGORITHM boxruled
http://arxiv.org/abs/2407.12771v1
20240717175149
The Role of Network and Identity in the Diffusion of Hashtags
[ "Aparna Ananthasubramaniam", "Yufei Zhu", "David Jurgens", "Daniel Romero" ]
cs.SI
[ "cs.SI", "cs.CL", "cs.CY" ]
Aparna Ananthasubramaniam*, Yufei “Louise” Zhu, David Jurgens, Daniel M. Romero (University of Michigan, Ann Arbor, MI, USA; *Corresponding author) The Role of Network and Identity in the Diffusion of Hashtags ============================================================= § ABSTRACT Although the spread of behaviors is influenced by many social factors, existing literature tends to study the effects of single factors (most often, properties of the social network) on the final cascade. In order to move towards a more integrated view of cascades, this paper offers the first comprehensive investigation into the role of two social factors in the diffusion of 1,337 popular hashtags representing the production of novel culture on Twitter: 1) the topology of the Twitter social network and 2) the performance of each user's probable demographic identity. Here, we show that cascades are best modeled using a combination of network and identity, rather than either factor alone. This combined model best reproduces a composite index of ten cascade properties across all 1,337 hashtags. However, there is important heterogeneity in what social factors are required to reproduce different properties of hashtag cascades. For instance, while a combined network+identity model best predicts the popularity of cascades, a network-only model has better performance in predicting cascade growth and an identity-only model in adopter composition. We are able to predict what type of hashtag is best modeled by each combination of features and use this to further improve performance. Additionally, consistent with prior literature, the combined network+identity model outperforms the single-factor counterfactuals by the widest margin among hashtags used for expressing racial or regional identity, stance-taking, talking about sports, or for variants of existing cultural trends with very slow- or fast-growing communicative need. In sum, our results point to the utility of multi-factor models in predicting cascades, in order to account for the varied ways in which network, identity, and other social factors play a role in the diffusion of hashtags on Twitter. Keywords: hashtags, cascade prediction, cascade evaluation, social network, social identity. § INTRODUCTION Roughly 1 in 5 posts on Twitter (now known as 𝕏) contain hashtags. The ubiquity of hashtags likely stems from their pragmatic and social functions in the process of cultural production on Twitter: 1) to facilitate and codify the creation of new culture and 2) to enable easy dissemination of new culture that is produced <cit.>. As such, modeling the spread of cultural innovation on Twitter requires a strong mechanistic understanding of how hashtags spread among users on the platform, as well as the diverse mechanisms underlying the creation and adoption of hashtags. The spread of behaviors, known as cascades, has been primarily studied through the lens of social networks, including analyzing the effects of different network topologies and contagion processes <cit.>, or how these effects vary by properties like the hashtag's topic and semantics <cit.>. Prior work suggests that many types of social context help in shaping how far, how fast, and to whom artifacts diffuse <cit.>.
For instance, hashtags are often used to explicitly signal the user's social identity or affiliation <cit.>; in these cases, the Twitter network affords users exposure to the hashtag, but each user's identity helps determine whether they adopt the hashtag and, therefore, also shapes future exposures <cit.>. For example, <cit.> theorizes that the spread of hashtags on Black Twitter is driven by a combination of network and identity. Users coin these hashtags to “perform” their racial identity online. Adopters are often part of the Black Twitter network and, as such, continued adoption largely occurs within this community because 1) these users are more likely to be exposed to the hashtag, 2) exposed users outside the community tend not to adopt the hashtag if it does not signal their racial identity, which 3) minimizes exposure and adoption outside the community. As this example illustrates, the dynamics underlying network-only diffusion likely differ from the dynamics when diffusion involves a combination of network and other social factors like identity. As such, models of hashtag dissemination would likely benefit from including multiple interacting social factors. However, even in the context of studying identity-related hashtags, prior work has largely focused on the impact of a single factor (e.g., either network or identity) when modeling the diffusion of hashtags. In this work, we investigate the role of two social factors in the adoption of innovation: 1) the topology of Twitter's social network and 2) the probable demographic identity of users. We study the spread of 1,337 popular hashtags in a network of nearly 3M users on Twitter. These hashtags represent the production of novel culture (e.g., #learnlife, #gocavs). In order to compare the effects of network and identity, we simulate the diffusion of each hashtag using: 1) a Network-only model where hashtags spread through the Twitter network using a modified linear-threshold model, 2) an Identity-only model where hashtags diffuse between users who share relevant identities, and 3) a combined Network+Identity model that includes both social factors. We evaluate how well these models reproduce ten commonly studied properties of cascades, including their popularity, growth, and adopter composition. Overall, network and identity best reproduce a composite measure of all ten properties. However, there is important heterogeneity in the role of network and identity; for instance, hashtags expressing regional or racial identity, and those discussing sports or news, have the highest comparative advantage with the Network+Identity model. We create a better-performing customized model, selecting whether to study each hashtag using either network alone, identity alone, or network and identity together. Our work underscores the importance of building models that integrate multiple social factors. § RELATED WORK §.§ Modeling Cultural Diffusion Online. The diffusion of behavior and information online is a topic of significant study in the literature. See <cit.>, <cit.>, and <cit.> for recent reviews of this literature. Some key points from these reviews: Empirical models often aim to predict some property of the final cascade given some information about its initial adopters. Many such papers adapt models developed to simulate offline behaviors from first principles, including the Susceptible-Infectious-Recovered (SIR) compartment model, the linear threshold model of complex contagion, and stochastic simulations like Hawkes models or Poisson processes.
Other papers use deep learning for the predictive task, including graph representation learning and predictive models from features of the network, adopters, and early parts of the cascade. Our work builds on these studies by using a more recent agent-based model of diffusion that accounts for diffusion dynamics particular to Twitter. For instance, by adopting a usage-based instead of adopter-based model, our framework accounts for frequency effects in the adoption <cit.>; and by modeling the fading of attention online our model allows for cultural artifacts to stop being used over time (e.g., to model hashtags that are used temporarily and then exit the lexicon) <cit.>. Using a first-principles model also allows us to test the specific mechanisms associated with network and identity that are encoded in the model – and to explicitly test the effects of network and identity in cultural diffusion rather than simply using network or identity features in our model. We also introduce a novel dataset of hashtag cascades and an ten-factor evaluation framework to support future work in this area. §.§ Social Factors in the Adoption of Hashtags. Prior work often attributes hashtag adoption to factors related to network, identity, lifecycle, discourse. Network factors include the position of initial adopters in the social network and simulating the diffusion of innovation through a social network <cit.>. Identity factors include wanting to join or signal membership to a certain community <cit.>. Lifecycle factors include the hashtag's growth trajectory <cit.> Discursive factors include the hashtag's relevance, topicality, and ease of use (e.g., length) <cit.>. In addition to individual social factors, some theoretical models of diffusion posit that the interaction of multiple social factors may play a role in the diffusion of hashtags. For instance, qualitative studies of sports hashtags <cit.> and racial hashtags <cit.> have suggested that hashtags pertaining to specific communities on Twitter may spread via network effects (someone is exposed to a hashtag, a precursor for adoption, when a member of their network uses it) and identity effects (an exposed user chooses to use the hashtag if they are a member of this community and want to signal this identity). However, most empirical models do not incorporate the interaction of network and identity. For instance, <cit.> notes a number of articles that, separately, describe the effect of “network factors” and “user factors” (e.g., identity) on the propagation of misinformation, but none that describe the effects of both network and user factors. Similarly, <cit.> lists several papers that model adoption decisions based on either “neighboring relations” (i.e., the network) or “individual/group characteristics” (like identity), but not both, while <cit.> describes hashtags as being either “identity-based” or “bond-based.” Our work builds in this prior literature by empirically modeling the interaction of two social factors, network and identity, in cultural diffusion. In addition, we explore the conditions under which the interaction of these two social factors is especially important in modeling the diffusion, and propose a combined model to predict which factors (network and/or identity) best model the properties of a given hashtag's cascade. 
§ METHODS In order to test the roles of network and identity in the diffusion of hashtags, we 1) collect cascades of popular hashtags from Twitter, 2) estimate the identity of each user and the network among the users, and 3) use an agent-based model to simulate these cascades using both network and identity. To do this, we adapt the methods of <cit.>. We summarize all key points of the methods in this section, and the original paper has full details. §.§ Modeling Diffusion of Innovation Testing our study's hypotheses requires comparing empirical cascades against cascades simulated using the Twitter network and users' demographic identities. In this section, we describe how we produce the needed synthetic data. §.§.§ Simulation Formalism The diffusion of hashtags on Twitter is modeled using a common agent-based setup: a set of initial adopters use the hashtag at time t=0; and at each subsequent timestep, each agent will decide whether to use the hashtag depending on prior adoption by other agents. In the popular linear threshold model, an agent will use the hashtag if the (weighted) fraction of their network neighbors who are already adopters crosses a certain threshold <cit.>. To better simulate the dynamics underlying cultural production, <cit.> adapts the classic linear threshold model in two ways that are well-suited for our research question: First, adoption is usage-based rather than user-based. That is, rather than representing adoption as a binary property of the agent (i.e., an agent is either “an adopter” or “not an adopter”), each exposed agent i could use the hashtag h at each timestep with some time-varying probability p_ih. Therefore, unlike the linear threshold model, an agent can 1) use the hashtag multiple times, or 2) decide not to use it in one timestep but then decide to use it later. This assumption is consistent with the role of repeated exposure in the adoption of textual innovation <cit.>. Second, the model uses not only the topology of the social network but also the identity of agents to model the diffusion of innovation. As shown in Equation <ref>, the probability of an agent i's adoption of hashtag h p_ih is proportional to (i) the similarity between their identity and the hashtag's identity δ_ih; and (ii) the fraction of their neighbors j who adopted the hashtag, weighted by tie strength w_ij and similarity in identity δ_ij. p_ih∼ S ·δ_ih∑_j ∈ neighbors who adopted w_jiδ_ji/∑_k ∈ all neighbors w_kiδ_ki Therefore, consistent with prior work on adoption of innovation <cit.>, the network influences 1) the hashtags an agent is exposed to (opportunity to adopt) and 2) the agents’ level of exposure (likelihood of adopting). Consistent with prior work on identity performance and signalling <cit.>, the effects of identity are modeled in two ways: 1) agents preferentially use hashtags that match their own identity, and 2) agents give higher weight to exposure from demographically similar network neighbors. §.§.§ Model Parameters Each hashtag has a different propensity to be used on Twitter, due to differences in factors like the size of potential audience, communicative need, and novelty (e.g., a hashtag about a TV show with a small audience is likely to get fewer uses than a hashtag about a TV show with a large audience) <cit.>. Accordingly, in Equation <ref>, each hashtag is associated with a different constant of proportionality S. 
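A minimal sketch of the adoption probability defined above is given below. The cosine similarity used for δ(·,·), the clipping of the probability at 1, and all toy inputs are assumptions of the sketch rather than details fixed in the text.

```python
import numpy as np

def identity_similarity(u, v):
    """Similarity delta(., .) between two identity vectors; cosine similarity is
    used here purely for illustration."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def adoption_probability(S, i, hashtag_identity, neighbors, adopted, w, identity):
    """Probability that agent i uses hashtag h at this timestep, combining the
    hashtag-identity match with identity-weighted exposure from neighbors."""
    num = sum(w[(j, i)] * identity_similarity(identity[j], identity[i])
              for j in neighbors if adopted[j])
    den = sum(w[(k, i)] * identity_similarity(identity[k], identity[i])
              for k in neighbors)
    exposure = num / den if den > 0 else 0.0
    p = S * identity_similarity(identity[i], hashtag_identity) * exposure
    return min(p, 1.0)   # clipped so it can be used as a Bernoulli probability

# Toy example: agent 0 has two neighbors; only neighbor 1 has used the hashtag.
identity = {0: np.array([1.0, 0.0]), 1: np.array([0.8, 0.2]), 2: np.array([0.1, 0.9])}
w = {(1, 0): 3.0, (2, 0): 1.0}
print(adoption_probability(S=0.5, i=0, hashtag_identity=np.array([0.9, 0.1]),
                           neighbors=[1, 2], adopted={1: True, 2: False},
                           w=w, identity=identity))
```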
The S parameter is termed stickiness because larger values of this parameter bias the model towards higher levels of—or “stickier”—adoption. The stickiness of each hashtag is calibrated to the empirical cascade size (number of uses) using a nested grid search on a parameter space of [0.1, 1]: first, identifying the interval of width 0.1 in which the model best approximates the empirical cascade size; and second, identifying the best fitting stickiness value in that interval using a grid search with step size 0.01. Grid searches are performed using one run of the model at each value of stickiness. The model has three hyperparameters that apply across all hashtags. These are taken from the original paper, which tuned the parameters to the empirical cascade size with the same set of users. §.§.§ Comparing Network and Identity To understand the effects of network and identity, we compare the full Network+Identity model described above against two counterfactuals: 1) the Network-only model, where we simulate the spread of the hashtag through just the network with no identity effects (this is achieved by setting δ_ij=1 and δ_ih=1) and 2) the Identity-only model, where we eliminate the effects of homophily by running simulations on a configuration model random graph with the same users and degree distribution as the original network. §.§ Network and Identity Estimation This section elaborates on how network and identity are estimated. Each agent in this model is a user on Twitter who is likely located in U.S.A., based on the geographic coordinates tagged on their tweets <cit.>. There are 3,959,711 such users in the Twitter Decahose, a 10% sample of tweets from 2012 to 2022. Since we use the same agent identities and network to model the diffusion of all hashtags during this ten-year period, the network and identities are inferred from 2018 data, which is at the midpoint of this timeframe (e.g., identities are from the 2018 American Community Survey and House of Representatives elections, the network is inferred from interactions between 2012 and 2018). §.§.§ Agent Identities In this model, identity includes an agent's affiliations towards 25 identities within five demographic categories: (i) race/ethnicity (identities include different racial and ethnic groups such as non-Hispanic white, Black/African American, etc.); (ii) socioeconomic status (identities include categories of income level, educational attainment, and labor-force status); (iii) languages spoken (identities include the top six languages spoken in U.S.A.: English, Spanish, French, Chinese, Vietnamese, Tagalog); (iv) political affiliation (identities are Democrat, Republican, or Other Party); and (v) geographic location. Each agent's demographic identity is modeled as a vector Υ∈ [0,1]^25 whose entries represent the composition of the user's Census tract and Congressional district. An agent's location is inferred using the geographic coordinates they tweeted from, using the high-precision algorithm from <cit.>. An agent's political affiliation is the fraction of votes each party got in the agent's Congressional district during the 2018 House of Representatives election. An agent's race, socioeconomic status, and languages spoken are the fraction of the Census tract with the corresponding identity. §.§.§ Network This study uses a weighted Twitter mutual mention network, which has been shown to model information diffusion well <cit.>.
In particular, the nodes in this network are all agents and there is an edge between agents i and j if both users mentioned the other at least once in the Twitter Decahose sample. The strength of the edge from i to j is proportional to the number of times user i mentioned user j in the sample. Although all ties are reciprocated, the network is directed because the strength of the edge from i to j may not match the strength of the edge from j to i. This network contains 2,937,405 users and 29,153,138 edges. §.§ Hashtags This study models the spread of 1,337 popular hashtags between 2013 and 2022. This section describes how hashtags and their initial adopters and identities are selected. §.§.§ Definition As this paper seeks to study the roles of network and identity in the lifecycle of the production of novel culture, we select hashtags where: * Hashtag is well-adopted: Sufficiently popular hashtags can be considered cultural objects, used to allow Twitter users to position their own thoughts in the context of a broader conversation <cit.>. To ensure that the hashtag was popular enough to be considered part of a “broader conversation,” we included only hashtags with 1,000 or more uses in our Decahose sample. * Hashtag is new: Our goal is to model the spread of hashtags from when they're coined to their dissemination. Therefore, we include hashtags that have had low adoption before the data collection window (i.e., they are novel) and whose initial adopters we can identify in our data. * Hashtag represents truly innovative culture: We are interested in modeling the diffusion of cultural innovation on Twitter. Therefore, hashtags of interest do not reference common words or phrases, and are not simply the names of existing named entities (e.g., celebrities, movie titles). Instead, they are neologisms or novel phrases that are partly or wholly created by the community. §.§.§ Identification We apply the above definition to systematically select hashtags from the Twitter Decahose sample between January 2012 and December 2022. First, we collect all tweets from the Decahose sample that were posted by the 2,937,405 users in our network. These tweets contain 198,988 hashtags that were used at least 100 times. Next, we filter these hashtags, as follows: * Popularity: To limit our study to hashtags that eventually became popular, we eliminate 116,477 hashtags that were used fewer than 1,000 times between 2013 and 2022. Frequencies are counted without considering case (e.g., #GoSox is considered the same hashtag as #gosox). While some studies may also consider less popular hashtags, we eliminate these because many of the properties we're interested in cannot be calculated or are too noisy on small cascades. * Novelty: To limit our study to newly coined hashtags, we eliminate 77,134 hashtags that were used more than 100 times in 2012 (e.g., #obama2012, #sup, #sobad, #sandlot). * Innovativeness: To ensure the hashtag represents production of novel culture (e.g., it is not a reference to some named entity, a common phrase, or a dictionary word), we eliminate 3,144 hashtags that were entries in the Merriam Webster English-language dictionary (e.g., #explore, #dirt) or in Wikidata, a repository of popular named entities and phrases (e.g., #domesticviolence, #billcosby, #interiordesign).
Since hashtags cannot contain certain characters that might appear in the dictionary and Wikidata (e.g., spaces, apostrophes, periods), we replaced these characters with both spaces and underscores to ensure that we eliminate hashtags using these different conventions. Two authors reviewed a sample of 100 of these hashtags and determined that 84% of them were examples of novel cultural production, rather than references to entities, dictionary words, or other non-cultural or existing cultural references (annotation guidelines in Appendix <ref>). * Presence of Seed Nodes: To ensure that the hashtag was coined between 2013 and 2022, we eliminate 896 hashtags whose cascade began before 2013 (e.g., #theedmsoundofla, #southernstreets, #rastafarijams). The procedure to identify seed nodes is described in Section <ref>. After this filtering, we were left with 1,337 hashtags. §.§.§ Initial Adopters Each cascade's initial adopters are the users whose adoption of the hashtag 1) was likely not influenced by prior usage on Twitter and 2) likely influenced future adoption of the hashtag. To identify these users, we first find instances where each hashtag had a period of contiguous usage, by looking for periods of time when the hashtag was used at least 100 times in the Decahose sample (likely at least 1,000 times overall) with less than a month's gap between successive uses. Prior work has determined that after one month of inactivity, the subsequent usage of a hashtag is likely not in response to any prior usage <cit.>. Additionally, a hashtag's usage in a prior period is more likely to be remembered if it was used more frequently in that period <cit.>. As such, we assume that the cascade starts during the first period where the hashtag was used more than 1,000 times: any usage prior to this start date is likely unrelated to the cascade, as it was too infrequent for users in the cascade to have a high likelihood of adoption, and adopters after the start date are likely to remember the usage in this first period because of its high frequency. The hashtag's initial adopters are the first ten users to use the hashtag after the start date. §.§.§ Hashtag Identity Each hashtag signals an identity, determined by the composition of its initial adopters. Initial adopters who are more strongly aligned with a particular identity are more likely to coin hashtags that signal that identity <cit.>. Accordingly, if the median initial adopter is sufficiently extreme in any given register of identity (in the top 25th percentile of that identity, using the threshold from <cit.>), the hashtag signals that identity. § EVALUATING SIMULATED CASCADES When comparing empirical and simulated adoption, researchers often choose to focus on reproducing certain desired properties of a cascade rather than predicting exactly which individuals will adopt the focal behavior, because there is a high degree of stochasticity in adoption decisions <cit.>. However, the properties used in the literature vary widely and are often uncorrelated in their performance (common metrics include measures like cascade size, growth, properties of the adopter subgraph, and virality). In order to comprehensively study the effects of network and identity on the diffusion of hashtags on Twitter, we develop a framework to analyze a model's ability to reproduce ten different properties of cascades, related to a cascade's popularity, growth, and adopter composition.
This requires evaluating models across all ten measures and then combining the ten evaluation scores into a composite Cascade Match Index (cmi) to measure the overall performance across the ten measures. To enable error analysis, we do not compare the distribution of properties over all trials; instead we calculate the cmi score for each pair of simulated and empirical cascades and then average errors over all simulations. For each of the ten metrics, we explain 1) what property of the hashtag is being measured and 2) how comparisons between pairs of simulated and empirical cascades are made. §.§ Popularity Cascades are often modeled with the goal of understanding the dynamics underlying popularity <cit.>. More popular hashtags experience high levels of adoption or adoption in parts of the social network that are very distant from the initial adopters, increasing the influence they have on popular culture. M1: Level of Usage One of the most common metrics used to measure the popularity of a new behavior is simply how often the behavior is used. M1 calculates the number of times a hashtag is used in each cascade, including repeated usage by a user. Comparing simulated and empirical usage requires a measure that operates on a logarithmic rather than a linear scale (e.g., not relative error), because the level of usage could span several orders of magnitude. For instance, if the empirical cascade had 1,000 uses in the Decahose sample (or an expected 10,000 uses on all of Twitter), simulation 1 had 5,000 uses, and simulation 2 had 20,000 uses, a measure like relative error would show that simulation 1 has smaller error than simulation 2 (|10,000 - 5,000| vs. |10,000 - 20,000|); however, since cascades often grow exponentially <cit.>, it would be better for both to have the same magnitude of error since one is half as big and the other is twice as big as the empirical cascade. Therefore, we compare the ratio of simulated to empirical usage on a logarithmic scale |log(M1_sim/10 · M1_emp)|, henceforth referred to as the log-ratio error. We compare M1_sim to 10 · M1_emp because the empirical cascades are drawn from a 10% sample of Twitter and, therefore, we expect M1_emp to be 10 times larger on all of Twitter. M2: Number of Adopters In addition to the level of usage, another popular way of measuring popularity is the number of unique adopters in a cascade. M2 looks at the number of unique users in the downsampled cascade who adopted each hashtag. Unlike M1, M2 does not consider repeated usage and may be much lower than M1 when a cascade experiences a high volume of usage by a small group of users (e.g., for niche cascades that are really popular among a small group of users); however, in many cases, M1 and M2 are likely to be correlated. Since, like M1, the number of adopters also scales exponentially, comparisons between empirical and simulated cascades are made using the log-ratio error. M3: Structural Virality Another way of measuring the popularity of a hashtag is to assess how deeply the hashtag has permeated the network <cit.>. Structural virality measures exactly this. When initial adopters are not known, structural virality is operationalized as the mean distance between all pairs of adopters (the Wiener index). However, as initial adopters are known in our models, structural virality is defined as the average distance between each adopter and the nearest seed node. 
Unlike M1 and M2, path lengths in a network vary in a smaller range (e.g., prior work has found that paths are usually between 3 and 12 hops long <cit.>). Therefore, comparisons between the structural virality of each simulated and empirical cascade are made using relative error with respect to the empirical cascade |M3_sim-M3_emp|/M3_emp. §.§ Growth In order to understand how hashtags become viral, many studies look not just at the popularity of a hashtag but also how its adoption shifts over time <cit.>. There are a number of commonly studied properties of cascades that measure how they grow. M4: Shape of Adoption Curve The shape of a hashtag's adoption curve (or the number of uses over time) is indicative of different mechanisms that may promote or inhibit a cascade's growth <cit.>. M4 is modeled by splitting both the simulated and empirical time series into T evenly-spaced intervals, where T is the smaller of a) the number of timesteps in the simulation and b) the number of hours in the empirical cascade. To make the empirical curve comparable to the simulated curve, we first truncate the adoption curve's right tail once adoption levels remain low for a sustained period of time, to match the simulation's stopping criteria. We compare the empirical and simulated curves using the dynamic time warping (DTW) distance between them. M5: Usage per Adopter Hashtags where users tend to use the hashtag more often have different growth patterns than hashtags where each adopter uses the hashtag fewer times <cit.>. M5 calculates the average number of times each adopter used the hashtag. Comparisons between simulated and empirical cascades are made with the relative error. M6: Edge Density The structure of the adopter subgraph of the network often reflects how a cascade grows and spreads through the network <cit.>. In particular, M6 captures the connectivity of the adopter subgraph: it is operationalized as the number of edges, or edge density, within the adopter subgraph.[Another commonly studied property of the adopter subgraph is the number of connected components. We chose not to use the number of connected components because the corresponding error was reasonably correlated with edge density, so they didn't seem like sufficiently different measures; additionally, unlike edge density, the connected components often change dramatically after downsampling.] Since edges in the adopter subgraph can be very sparse or very dense and these scenarios change the number of edges by several orders of magnitude, the empirical and simulated edge densities are compared using the log-ratio error. M7: Growth Predictivity In many cases, it is useful to be able to predict how big a cascade will become based on a small set of initial adopters <cit.>. In order to test how well each model achieves this task, we attempt to predict the size of each empirical cascade based on the characteristics of the first 100 adopters in each simulation using a multi-layer perceptron regression with 100 hidden layers, an Adam optimizer, and ReLU activation. Predictors include a set of 711 attributes from <cit.> that are not directly used by our models: the timestep at which each of the first 100 adopters used the hashtag; the degree of each adopter in the full network and adopter subgraph (note that the identity-only model preserves degrees of each agent); and the age and gender of each adopter, inferred using <cit.>'s demographic inference algorithm, etc.
Simulated and empirical cascades are compared using the relative error of the predicted cascade size. §.§ Adopters In addition to modeling popularity and growth, there has been significant research on how certain subpopulations come to adopt new culture <cit.>. We identify a set of three measures of how well a simulated cascade reproduces the composition of adopters. M8: Demographic Similarity New culture is often adopted in demographically (e.g., racially, socioeconomically, linguistically) homogenous groups. This may occur when the cultural artifact is explicitly signaling an affiliation with the demographic identity (e.g., #strugglesofbeingblack) or when the artifact does not explicitly acknowledge an identity but ends up being used more in one group than another by convention (e.g., Democrats use more swear words online <cit.>) <cit.>. We compare the distribution of demographic attributes from Section <ref> in adopters from empirical and simulated cascades. Since there are many demographic attributes, we construct a one-dimensional measure of these attributes using a propensity score. This propensity score is the predicted probability obtained by regressing a binary variable indicating whether the user is from the simulated or empirical cascade on the demographic attributes. This propensity score has two important properties: 1) users that are adopters in both cascades will not factor into the construction of the propensity score since they are represented as both 1's and 0's in the logistic regression; and 2) if the empirical and simulated adopters have similar demographic distributions, the propensity scores of adopters in the empirical cascade will have a similar distribution as the propensity scores of adopters in the simulated cascade <cit.>. The differences in demographics between simulated and empirical cascades are measured using the Kullback–Leibler (KL) divergence of the distribution of the empirical adopters' and simulated adopters' propensity scores. M9: Geographic Similarity Another property of interest is whether a model can reproduce where adopters of a hashtag are located in U.S.A. <cit.>. The location of adopters is modeled as a smoothed county-level distribution of the fraction of users in the county who adopted the hashtag. Geographic similarity is measured as the Lee's L spatial correlation <cit.> between the spatial distributions of empirical and simulated usage <cit.>. M10: Network Property Similarity Another property of cascades is the position of adopters within the network (e.g., the communities they belong to, their centrality) <cit.>. We calculate each user's position in the network along four relatively low-correlated (Pearson's R<0.5) network properties, including PageRank, eigencentrality, transitivity, and community membership (using the Louvain community detection algorithm <cit.>). Similar to M8, we represent the adopters' network positions using a propensity score, and compare the distribution of empirical adopters' and simulated adopters' propensity scores using KL divergence. §.§ Composite Metric In order to evaluate how well each model reproduces properties M1 through M10, we construct a composite Cascade Match Index (cmi) encompassing all ten metrics. The cmi is defined as the normalized similarity between simulated and empirical cascades, averaged over all ten metrics. See Section <ref> for details.
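To illustrate how the ten per-metric comparisons might be combined into the composite index, the sketch below converts the distance-based scores into similarities, standardizes each metric, and averages. It is a simplified illustration rather than the exact construction described in the appendix: here the z-scores are taken across the three models' scores for a single hashtag, whereas the paper standardizes across all models and trials, and the metric labels and score dictionaries are hypothetical placeholders.

```python
import numpy as np

ERROR_METRICS = {"M1", "M2", "M3", "M4", "M5", "M6", "M7"}   # distances/errors: lower is better
SIMILARITY_METRICS = {"M8", "M9", "M10"}                     # similarities: higher is better

def cascade_match_index(scores_by_model):
    """scores_by_model maps a model name to {metric: value} for one hashtag, e.g.
    {"Network+Identity": {...}, "Network-only": {...}, "Identity-only": {...}}.
    Returns one CMI per model: the mean of z-scored similarities over the ten metrics."""
    models = list(scores_by_model)
    cmi = {m: [] for m in models}
    for metric in sorted(ERROR_METRICS | SIMILARITY_METRICS):
        vals = np.array([scores_by_model[m][metric] for m in models], dtype=float)
        if metric in ERROR_METRICS:
            vals = -vals                 # additive inverse turns errors into similarities
        sd = vals.std()
        z = (vals - vals.mean()) / sd if sd > 0 else np.zeros_like(vals)
        for m, z_m in zip(models, z):
            cmi[m].append(z_m)
    return {m: float(np.mean(v)) for m, v in cmi.items()}
```

In the analyses below, a higher cmi indicates a better match between a simulated cascade and its empirical counterpart.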
The ten measures comprising the cmi are overall poorly correlated with each other (Figure <ref>), suggesting that M1-M10 do, in fact, measure distinct properties of the cascade and are not redundant. § NETWORK AND IDENTITY MODEL DIFFERENT ATTRIBUTES OF A CASCADE Now we use the methods from the prior sections to test our main hypothesis: that network and identity together better predict properties of cascades compared to network or identity alone. §.§ Experimental Setup To test our hypothesis, we simulate hashtag cascades using the Network+Identity, Network-only, and Identity-only models, and determine which one best reproduces properties of the empirical cascades. For each of the 1,337 hashtags and three models, we 1) seed the model at the hashtag's initial adopters, 2) fit the stickiness parameter, 3) run five simulations at this parameter, and 4) compare properties of the simulated and empirical cascades. Then we construct the cmi and compare values across the three models. §.§ Results Figure <ref> shows that the Network+Identity model outperforms the Network-only and Identity-only counterfactuals on the composite cmi—suggesting that, on the whole, hashtag cascades are best modeled using a combination of network and identity. However, our results also suggest that, while models involving both network and identity are most performant overall, there is important heterogeneity in what social factors are required to reproduce different properties of hashtag cascades. Thus, while incorporating network structure and identity into the model leads to the highest overall performance, the network-only or identity-only model may be a better choice for some features of interest. As shown in Table <ref>, the Network+Identity model had the top performance on a larger number of individual metrics (5 of 10) than the Network-only (2) or Identity-only (3) models. Notably, however, the Network+Identity model did not have the top performance on all metrics. Overall, the Network-only model tended to perform best on popularity-related metrics; it had the highest score on M2 and M3, as well as a higher score on a composite index of the three popularity-related measures (Figure <ref>). On the other hand, the Identity-only model tends to perform better on adopter-related metrics, while growth-related metrics were best modeled by a combination of both factors. Moreover, on the whole, the Network+Identity model had the highest score on the cmi in 42% (2,791) of trials, while the Network-only model had the highest score in 30% (1,992) and the Identity-only model in 28% (1,902) of trials. A possible explanation for the heterogeneity in performance is that different mechanisms are responsible for different properties of cascades. For instance, when we select the model that has the highest score on the cmi for each trial (we'll call this the optimal customized model), the average score on the cmi improves from 0.06 in the Network+Identity model to 0.27 with the optimal customized model (a jump of 0.21 points, in contrast to a jump of 0.11 points between the Network+Identity and Identity-only model) (Figure <ref>, pink vs. dark blue bars); moreover, the performance on popularity, growth, and adopter characteristics each improves in this optimal customized model as well. This suggests that the mechanisms underlying the diffusion of hashtags are likely heterogeneous: most hashtags are best modeled by a combination of network and identity, but some are better modeled by network alone or identity alone.
Identifying which of these three mechanisms best applies to the hashtag can lead to significant predictive gains. Since our goal is to produce a unified model that simultaneously reproduces all properties of cascades, one option is to identify conditions under which network and identity are needed—that is, to create a predicted customized model that uses features of the hashtag and early adopters to decide whether to use network or identity or both, instead of an optimal customized model where the model selection is performed post-hoc. We explore this idea in the next section. § ROLES OF NETWORK AND IDENTITY IN DIFFERENT CONTEXTS The diffusion of hashtags specifically, as well as the process of cultural production more generally, varies across contexts. For instance, hashtags with demographically homogenous initial adopters are more likely to be used to signal identity <cit.>. Additionally, hashtags have different patterns of diffusion depending on their topic or semantic context <cit.>. The goals of this section are to understand whether information about the hashtag and its initial adopters 1) are associated with model performance and 2) can be used to develop a predicted customized model. §.§ Experimental Setup In order to understand the relationship between the context in which each hashtag was coined and the role of network and identity, we run a linear regression to test the association between the cmi and several properties of the hashtag. As shown in Equation <ref>, we estimate the effect of each covariate c_i on the cmi of each model (e.g., β_1 estimates the effect of the first covariate on cmi in the Network+Identity model; β_1 + β_1^N estimates the effect in the Network-only model). Our regression estimates the effect of each property after controlling for all other properties (e.g., the effect of racial similarity in initial adopters is independent of the effect of their geographic proximity, even though these two factors are correlated). CMI ∼β_0 + ∑_i β_i c_i + ∑_i β_i^I c_i * 1_Id-only + ∑_i β_i^N c_i * 1_Net-only Covariates are four sets of properties of the hashtag's context (the distribution of each property is in Figure <ref>): Topic. The topic of a hashtag (e.g., whether it is related to sports, pop culture, or some other subject matter) may be associated with the extent to which network and identity play a role in its diffusion. For instance, prior work has shown that hashtags related to different topics may diffuse at different scales and via different mechanisms <cit.>. Therefore, we include each hashtag's topic, measured using the model from <cit.>, as a covariate in Equation <ref>. Appendix <ref> has more details. Communicative Need. Properties of hashtag cascades may also be attributable to differences in the hashtags' semantic roles <cit.>. For instance, hashtags that are in higher demand (e.g., because they belong to a fast-growing subset of the semantic space) or lower supply (e.g., because there are fewer alternatives to choose among) may have higher levels of communicative need and, therefore, different social factors may be responsible for their spread <cit.>. 
<cit.> quantified communicative need using two measures: 1) semantic sparsity, or how many similar hashtags exist in the lexicon when the focal hashtag was introduced (a hashtag in a sparse space may be in higher demand since there are fewer hashtags that can serve the same function); and 2) semantic growth, or the growth in the semantic space over time (a hashtag in a high-growth space may be in higher demand since it serves a purpose of increasing popularity). For instance, a hashtag like #broncosnation (signifying support for the city of Denver's local football team) has low semantic sparsity, because many cities had similar sports hashtags when it was coined; it also has low semantic growth because, while sports team hashtags are popular, the use of these sorts of hashtags has remained fairly stable over time. Appendix <ref> has details on how these measures are operationalized. Identity. As described in Section <ref>, each hashtag's identity is based on the demographics of the first ten adopters. Since the identities of early adopters may influence the perception of the hashtag <cit.>, and since having more homogenous initial adopters may lead to stronger perceptions, covariates include the mean similarity of initial adopters within each component of identity (location, race, socioeconomic status, languages spoken, political affiliation). Initial Network Position. Another factor in a hashtag's diffusion is where in the network the hashtag is introduced <cit.>. For instance, more central initial adopters or those belonging to larger communities in the network may be able to spread the hashtag more broadly because of their influence. Therefore, covariates include the median initial adopter's PageRank, eigencentrality, and how many initial adopters fall into each of the network's communities. §.§ Results Figures <ref>- <ref> show the results of the regression model (Equation <ref>); the y-axes plot the predicted cmi for each model, corresponding to different levels of each covariate, conditional on all other covariates. In general, the Network+Identity model performs as well as or better than the other models under all conditions. This suggests that the conclusions from Section <ref>—that network and identity better predict cascades together than separately—are robust. In addition, there are three key takeaways about model performance. First, the Network+Identity model tends to outperform the other models in cases where there is a theoretical expectation that network and identity would each contribute a portion of the underlying diffusion mechanism. For instance, when initial adopters have a high level of racial similarity, the Network+Identity model's performance improves while other models get worse; this is consistent with the theoretical framework of <cit.>, where hashtags used to signal racial identity on Black Twitter diffuse via a mechanism that combines network and identity. Similarly, regional hashtags may require network and identity to constrain adopters to the local area <cit.>; consistent with this expectation, the Network+Identity model has its highest performance among hashtags that promote regional culture, including sports hashtags (which often express support for local teams), news hashtags (which are often related to regional events), and hashtags whose initial adopters are located near each other. 
The Network+Identity model also has the best performance on geographic distribution of adoption, suggesting a connection between this model and the ability to predict geographic localization <cit.>. Similarly, hashtags related to certain topics—sports, film/TV/video, diaries/daily life, and news/social concern—tend to be better modeled by the Network+Identity simulations, and prior work has shown that network and identity contribute to their growth. Accordingly, identity also mediates the spread of sports hashtags on the Twitter network, so only fans of a specific team adopt the hashtag but the hashtag can still be seen by supporters of rival teams <cit.>. The other types of hashtags are often used in conversations that involve stance-taking and, in the process, identity signaling (e.g., sharing their opinion on issues of social concern, their favorite TV show, and aspects of daily life) <cit.>. Second, the Network+Identity model may outperform the Network-only and Identity-only models because hashtags that diffuse via two mechanisms are more likely to become popular than hashtags diffusing via just one <cit.>. For instance, the Network+Identity model outperforms baselines among hashtags in very slow- or very fast-growing areas, but not among hashtags with moderate growth (Figure <ref>). Similarly, the model has its highest comparative advantage when initial adopters are moderately central. In cases of extreme growth or moderate initial adopter centrality, hashtags that diffuse via multiple mechanisms (network and identity) may be overrepresented in our sample of popular hashtags. These hashtags are also likely to pertain to a smaller Third, the Network+Identity model often has its strongest comparative advantage when the Network-only and Identity-only models perform well. For instance, all three models perform well when the hashtag is related to topics like sports, film, pop culture, and daily life; or in moderate ranges of covariates like linguistic, socioeconomic, and political identity, centrality, and semantic growth. This suggests that, even when single-variable models have relatively high predictive power, combining multiple social factors can improve performance. §.§ Selecting Among Models Since there are associations between the characteristics of the hashtags and the relative performance of the three models, we develop a predicted customized model that uses these characteristics to determine whether network alone, identity alone, or both together would perform best on the cmi. Using the features described in Section <ref>, we trained a random forest classifier to predict whether each hashtag would be best predicted by the Network+Identity, Network-only, or Identity-only model. Predictions were obtained using a repeated 5-fold cross-validation (the model was trained on sets 2-5 and predictions generated for set 1; then trained on sets 1 and 3-5 and predictions generated for set 2; and so on). The random forest classifier weakly outperforms a baseline that always selects the Network+Identity model (0.44 vs. 0.41 accuracy); however, in spite of this, the predicted customized model significantly outperforms the Network+Identity model on the cmi (Figure <ref>, light blue bars), suggesting that the classifier may be picking out examples of hashtags that are “obviously” or “easily” identifiable as being better-modeled by network or identity alone and where the single-variable models are associated with significant predictive improvements over the Network+Identity model. 
This predicted customized model achieves its gain in performance by better reproducing properties related to popularity (where it equals the Network-only model's performance) and adopter characteristics, and trading off slightly lower performance on the growth-related measures (Figure <ref>, comparing light and dark blue bars). These results suggest that the initial characteristics of cascades can, in some cases, signal the driving mechanism behind the hashtag's diffusion and therefore the best model to estimate the cascade. § DISCUSSION Our work suggests that modeling cultural production and the adoption of cultural innovation requires explicitly incorporating the role of multiple social factors in the process of diffusion. This study examines the role of network and identity in the diffusion of novel hashtags on Twitter. In order to test the roles of network and identity in diffusion, we evaluate whether a model containing network and identity better reproduces properties of each hashtag's cascade than models containing just network or just identity—and whether this holds across different types of hashtags. The results support our hypothesis from three standpoints. First, the model with both identities and network better reproduces an aggregate of cascade properties than models with identity or network alone. Second, many individual properties are also better modeled with network and identity together. Third, these findings are true across many different types of hashtags (different topics, identities, etc.). These findings are significant because most existing work has focused on the effects of single factors (e.g., network or identity) rather than creating a model that combines multiple social factors to explain the diffusion of behaviors and culture. Our work suggests that there is value in adding extra complexity by modeling multiple interacting factors. However, our analysis also reveals that there is important heterogeneity in the roles network and identity play in cultural production. For instance, network structure does a worse job modeling the adopter composition of cascades, while identity underperforms at modeling a cascade's popularity. Additionally, there are several contexts where the network and identity likely offer non-duplicative conditions for diffusion or jointly confer some selective advantage to new hashtags. Under these conditions, it is especially important for models of cascades to combine both factors. Finally, our analysis has two limitations that can be addressed by future work: First, our model only considered network and identity, but did not integrate other social factors known to influence the spread of innovation (e.g., the type of relationships between users or the perception or planned use of a hashtag). This limitation could be responsible for some heterogeneity in performance (perhaps factors other than network and identity are required to model hashtag cascades and are particularly important for reproducing certain properties or in the extremes of some hashtag characteristics' parameter space). However, such factors are difficult to model at scale and, thus, were outside the scope of the paper. Second, our Network+Identity model always used both network and identity rather than selecting which features would work best for each hashtag. Our work was a first step towards developing such a customized model. However, future work could likely improve upon this initial model.
In order to facilitate future work, we release a database of the 1,337 hashtags included in this study, which were coined between 2013 and 2022, used frequently, and likely to represent novel cultural production; using a 10% sample of Twitter, we develop a database of each hashtag's adoption and a rich set of features like the hashtag's topic, embedding, communicative need, and the identities of adopters. We also release a composite cmi that, based on a comprehensive literature review, compares empirical and simulated cascades across ten frequently-studied properties of cascades related to their popularity (e.g., cascade size), growth (e.g., shape of the growth curve), and adopter composition (e.g., demographic similarity). § APPENDIX § ANNOTATION PROMPT Would the coining of this hashtag be an example of cultural production (Yes/No)? In this case, cultural production is the process of creating and disseminating new, innovative culture. While “culture” is a broad term, our definition excludes hashtags that make reference to entities by their official name (e.g., a person by their full name or stage name, a location, a song title), common phrases, and single words, since those hashtags do not seem innovative. However, the following types of hashtags can and should be considered examples of cultural production, because their existence requires innovative choices and combinations of words: nicknames or fan-created names for entities, slogans, combinations of dictionary words, and acronyms. Examples of hashtags to say `Yes' to: #goravens, #rio2016, #votefreddie, #blacklivesmatter, #myboyfriendnotallowedto, #incomingfreshmenadvice § CONSTRUCTING THE CASCADE MATCH INDEX Since M1-M7 are compared using a measure of distance or error (i.e., closer to 0 is better) and M8-M10 are compared using similarity scores, we convert M1-M7 from difference scores into similarity scores by taking their additive inverse. This means that higher values of the cmi correspond to better fit between empirical and simulated cascades. Additionally, since each measure is on a different scale, we standardize all similarities using a z-score; to facilitate cross-model comparisons, z-scores are calculated across all three models (Network+Identity, Network-only, Identity-only) rather than within each model. Finally, since model parameters are calibrated to the cascade size, and since empirical cascades (which came from the Twitter Decahose) are expected to be 10% the size of simulated cascades, we downsample the larger cascade to match the size of the smaller one for properties M2-M10 (e.g., if the simulated cascade ends up being 10 times bigger than the empirical cascade, we randomly sample 10% of the simulated cascade and compare that downsampled cascade to the empirical cascade). This downsampling ensures that the comparison between the empirical and simulated cascade is independent of size—e.g., that certain models do not better match properties because they were easier to calibrate to the correct cascade size. § HASHTAG CHARACTERISTICS §.§ Topic We define a hashtag's topic as the most frequent topic of the tweets it appears in, where tweet topics are inferred using <cit.>'s supervised multi-label topic classifier.
From the original set of 23 topics, we combine categories containing fewer than 50 hashtags into other categories that they most frequently co-occur with (e.g., Learning & Educational with Youth & Student Life), and end up with seven categories: diaries and daily life (379 hashtags, e.g., #relationshipwontworkif, #learnlife, #birthdaybehavior), sports (269 hashtags, e.g., #seahawksnation, #throwupthex, #dunkcity), celebrity and pop culture (213 hashtags, e.g., #freesosa, #beyoncebowl, #kikifollowspree), film/TV/video (154 hashtags, e.g., #iveseeneveryepisodeof, #betterbatmanthanbenaffleck, #doctorwho50th), news and social concern (130 hashtags, e.g., #impeachmentday, #getcovered, #saysomethingliberalin4words), music (103 hashtags, e.g., #lyricsthatmeanalottome, #nameanamazingband, #flawlessremix), and other hobbies (89 hashtags, e.g., #camsbookclub, #amazoncart, #polyvorestyle). §.§ Semantic Sparsity and Growth Semantic sparsity and growth are measured as follows: Each hashtag's 250-dimensional embedding is constructed by training the word2vec algorithm over a window of 5 tokens and 800 epochs; in order to ensure that the hashtags in our study have high enough token frequency to be included in the final model, word2vec was trained on all tweets containing the 1,337 hashtags in our sample and a random sample of 20 million other tweets containing hashtags in our Twitter Decahose sample. Using the resulting word embeddings, semantic sparsity is the number of hashtags that were used in similar contexts at the time when the hashtag was coined (similarity means the cosine similarity of the embeddings is at least 0.3,[The threshold of 0.3 is slightly lower than the threshold of 0.35 used in the original paper, so that more hashtags have neighbors.] representing the supply of similar hashtags) and the semantic growth is the Spearman rank correlation between the frequency of all tokens that are similar to the hashtag and the month (where a correlation of 1 means that words that are similar to the hashtag are becoming more popular over time, and 0 means the hashtag is used in contexts of static popularity).
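To make these two measures concrete, a minimal sketch using gensim and scipy is given below. The corpus, monthly frequency tables, and lexicon variables are hypothetical placeholders and the function names are ours; only the embedding dimension, window, number of epochs, and the 0.3 similarity threshold follow the description above.

```python
from gensim.models import Word2Vec
from scipy.stats import spearmanr

# tokenized_tweets: list of token lists (tweets containing the sampled hashtags)
model = Word2Vec(tokenized_tweets, vector_size=250, window=5, epochs=800, min_count=1)

def semantic_neighbors(hashtag, candidates, threshold=0.3):
    """Hashtags used in similar contexts: cosine similarity of embeddings >= threshold."""
    return [h for h in candidates
            if h != hashtag and h in model.wv and model.wv.similarity(hashtag, h) >= threshold]

def semantic_sparsity(hashtag, lexicon_at_coinage):
    """Number of similar hashtags already in the lexicon when the focal hashtag is coined."""
    return len(semantic_neighbors(hashtag, lexicon_at_coinage))

def semantic_growth(hashtag, monthly_counts, vocabulary):
    """Spearman rank correlation between the month index and the total monthly frequency
    of tokens similar to the focal hashtag (1 = similar tokens are gaining popularity)."""
    neighbors = semantic_neighbors(hashtag, vocabulary)
    months = sorted(monthly_counts)                      # e.g. keys like "2014-03"
    totals = [sum(monthly_counts[m].get(h, 0) for h in neighbors) for m in months]
    rho, _ = spearmanr(range(len(months)), totals)
    return rho
```

With the placeholders filled in, semantic_sparsity and semantic_growth return the two communicative-need covariates used in the regressions above.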
http://arxiv.org/abs/2407.13283v1
20240718083617
Heterogeneous Clinical Trial Outcomes via Multi-Output Gaussian Processes
[ "Owen Thomas", "Leiv Rønneberg" ]
stat.ME
[ "stat.ME", "stat.AP" ]
Owen Thomas*[1] and Leiv Rønneberg[2]. [1] HØKH, Akershus University Hospital, Lørenskog, Norway. [2] MRC Biostatistics Unit, University of Cambridge, Cambridge, UK. *Corresponding author: owen.thomas@ahus.no. Summary: We make use of Kronecker structure for scaling Gaussian Process models to large-scale, heterogeneous, clinical data sets. Repeated measures, commonly performed in clinical research, facilitate computational acceleration for nonlinear Bayesian nonparametric models and enable exact sampling for non-conjugate inference, when combinations of continuous and discrete endpoints are observed. Model inference is performed in Stan, and comparisons are made with brms on simulated data and two real clinical data sets, following a radiological image quality theme. Scalable Gaussian Process models compare favourably with parametric models on real data sets with 17,460 observations. Different GP model specifications are explored, with components analogous to random effects, and their theoretical properties are described. Suggested citation: Thomas O. and Rønneberg L. (2024), Heterogeneous Clinical Trial Outcomes via Multi-Output Gaussian Processes, arxiv.org, 2024;00:1–6. Abbreviations: GP, Gaussian Process; HMC, Hamiltonian Monte Carlo; RCT, Randomised Control Trial; ARD, Automatic Relevance Determination; NUTS, No U-Turn Sampler; CTA, Computed Tomography Angiography; keV, kiloelectronVolts; ROI, Region Of Interest; SVM, Support Vector Machine; ESS, Effective Sample Size; HU, Hounsfield Unit; ICM, Intrinsic Coregionalization Model § INTRODUCTION Clinical research is often performed with structured data built into the study design, sometimes by repeated measurements of individuals at different time points or locations, or using different measurement methods simultaneously on the same individuals. For example, longitudinal studies follow the same cohort repeatedly at different points using the same measurement process, while many Randomised Control Trials (RCTs) will simultaneously measure primary and secondary endpoints reflecting different aspects of a clinical process. In this article, we observe that either of these types of structured measurement can be used to enable computational tractability of a class of complex statistical models that would otherwise not scale to real clinical data sets of thousands of measurements. Specifically, appropriate repeated measurements enable the pursuit of exact inference for Gaussian Process (GP)<cit.> models by representing their covariance matrices as Kronecker products<cit.>. GPs are Bayesian nonparametric models that are capable of capturing nonlinear covariate dependence or multi-output correlations<cit.> that would ordinarily be neglected by commonly-used parametric statistical models. Here we present and run new GP models using repeated measurements in the covariate structure and multiple heterogeneous (mixed continuous and discrete) outputs: exact inference in Stan<cit.> is scaled to tens of thousands of measurements on the processors of a domestic-issue laptop. The models represent the joint covariance between all of the data as Kronecker-structured, and perform Hamiltonian Monte Carlo (HMC) sampling for hyperparameters, missing output values, and on the latent GP space for non-conjugate inference in the presence of heterogeneous outputs.
The clinical data sets follow the theme of radiological image quality, in which the covariates consist of patient characteristics, body locations and time, while the outputs represent measurements of image quality, either continuously "objectively" on the Hounsfield scale<cit.> of radiodensity, or discretely "subjectively" from expert evaluations. § MAIN ARTICLE CONTRIBUTIONS * The observation that widely-used repeated measurements in clinical research, either in the form of making the same measurement at different points, or making different measurements simultaneously, facilitates the use of Kronecker structure in the covariance matrix of Gaussian Process models and thereby the scaling of exact sampling for Bayesian nonparametric models to large clinical data sets with modest computational resources. * An implementation of the relevant Gaussian Process models in Stan<cit.> (with wrapper in R), allowing for running the inference on multiple real world data sets, and benchmarking various model specifications such as covariance function choices. * An empirical investigation using real-world clinical data sets into the predictive abilities of the Gaussian Process models compared to standard parametric regression models run in brms<cit.>, demonstrating the utility of more complex models. § METHODS In this section, we describe the methodological concepts relevant to the models implemented in this article. §.§ Repeated Measurements The value of repeated measurements is well-established and widely-understood in clinical research. The use of random effects is common when data is drawn from a structured population for which a hierarchical model is appropriate, for example when measuring different individuals repeatedly, or in a multi-centre study. Further structure emerges if data is collected in a systematic way, for example at consistent follow-up times for the entire population, or at multiple, consistent anatomical locations in the body for scans or biopsies. Data can also exhibit repetitions at the outcome level, when different outcomes of interest are measured at the same locations, individuals, and time points: this is common in RCTs with a combination of primary and secondary endpoints. Within a regression framework, the multiple outcomes can be represented as a matrix Y, and the covariates corresponding to treatments, locations, times, patient characteristics, or anything else that might influence the outcomes, can be represented as a covariate matrix X. Different clinical study designs will impose different structure on the covariate matrix X. If we consider a design in which N_1 individuals are measured at N_2 time points, at N_3 anatomical locations, with the time points and anatomical locations being identical between individuals, then we can divide columnwise the long-format covariate matrix X of height N_1 N_2 N_3 into matrices X_1, X_2, and X_3, with X_1 containing information x_1 about the N_1 unique individuals, repeated N_2 N_3 times, X_2 containing information x_2 about the N_2 unique times of measurement, repeated N_1 N_3 times, and X_3 containing information x_3 about the N_3 unique anatomical locations, repeated N_1 N_2 times. There are in addition N_4 distinct outcomes measured for every value of X, resulting in an outcome matrix Y of size N_1 N_2 N_3 by N_4. §.§ Gaussian Processes Gaussian Processes (GPs) are nonlinear, Bayesian models designed for flexible, probabilistic supervised learning <cit.>.
They model a dependent variable y conditional on independent variables X, via a mean function m(x) and covariance[also known as kernel function] function k(x,x',θ), defining a latent variable f that is joint-normally distributed over all the observed data points. The covariance functions are described by hyperparameters θ that can be learned from data, while the mean function will be taken to be zero here with no loss of generality. The latent function f can be passed through a Gaussian likelihood with a noise variance σ^2 to model a continuous output y: f ∼ 𝒢𝒫(0, k(x,x',θ)) y | x = f(x) + ϵ, ϵ∼𝒩(0, σ^2) Specific choices of covariance functions k(x,x',θ) have corresponding implicit parametric basis functions. One advantage of the "function-space" formulation is the ability to use a finite representation of a function with a potentially infinite-dimensional parametric representation: this is the sense in which the models are considered "nonparametric", in that they avoid specifying a parametric model for the latent mean function. §.§ Scalability and Kronecker-Structure Covariance Matrices Asserting a GP with a covariance function k(x,x',θ) over a data set with N covariate observations defines an N× N covariance[also known as kernel matrix] matrix 𝐊, where the matrix element 𝐊[i,j] is equal to the covariance function k(x^i,x^j',θ) evaluated at the ith and jth data point. One challenge of working with GPs is the need to perform a decomposition of the N× N covariance matrix, which is the dominant computational demand when evaluating the marginal likelihood for sampling or computing the predictive distributions. This results in 𝒪(N^3) cubic scaling in computational costs with the number of data points N, ruling out exact inference for generic covariance matrices for larger data sets. Various methods exist to enable approximate inference for larger data sets<cit.>, while exact inference is possible for larger data sets when there is structure in the covariance matrix that can be exploited, such as Kronecker or Toeplitz<cit.>. In instances where data can be represented as lying in a grid structure, with separable covariance structure between each dimension of the grid, the full covariance matrix can be represented as a Kronecker product between the dimensions. A Kronecker product is an operation performed on two matrices generating a third matrix composed of each individual element of the first matrix separately multiplied with the entire second matrix, combined in a blockwise fashion, i.e. for two matrices A and B: 𝐀⊗𝐁 = [ a_11𝐁 ⋯ a_1n𝐁; ⋮ ⋱ ⋮; a_n1𝐁 ⋯ a_nn𝐁 ] For a data set consisting of covariates measured over a two dimensional grid, for example the pixels of an image, the covariance matrix over the entire data set (𝐊_12 = K([x_1,x_2],[x'_1,x'_2])) can be represented as the Kronecker product between two covariance matrices representing each grid dimension independently (𝐊_1 = K(x_1,x'_1), 𝐊_2 = K(x_2,x'_2)), with the constraint that the covariance functions used are separable, i.e. for a grid composed of n_1 vertical grid points x_1, and n_2 horizontal grid points x_2, then: 𝐊_12 = 𝐊_1⊗𝐊_2.
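This identity is straightforward to verify numerically. The NumPy sketch below builds a separable squared-exponential kernel over a small two-dimensional grid, checks that the full covariance matrix equals the Kronecker product of the per-dimension matrices, and confirms that Cholesky factors of the small components reproduce the full matrix; the grid sizes, lengthscales, and jitter are arbitrary illustrative choices rather than values from this article.

```python
import numpy as np

def sq_exp(x, xp, ls):
    """Squared-exponential covariance matrix for 1-d inputs x and xp with lengthscale ls."""
    d = x[:, None] - xp[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Two grid dimensions, e.g. vertical and horizontal pixel coordinates
x1 = np.linspace(0.0, 1.0, 8)
x2 = np.linspace(0.0, 2.0, 6)
K1, K2 = sq_exp(x1, x1, 0.3), sq_exp(x2, x2, 0.7)

# Long-format grid (x2 varying fastest) and the separable product kernel evaluated on it
X = np.array([(a, b) for a in x1 for b in x2])
K12 = sq_exp(X[:, 0], X[:, 0], 0.3) * sq_exp(X[:, 1], X[:, 1], 0.7)

assert np.allclose(K12, np.kron(K1, K2))   # K_12 = K_1 (x) K_2

# Decomposing the small factors is enough, since chol(K_1 (x) K_2) = chol(K_1) (x) chol(K_2),
# so the O((n_1 n_2)^3) factorisation is replaced by O(n_1^3) + O(n_2^3) work.
jitter = 1e-10
L1 = np.linalg.cholesky(K1 + jitter * np.eye(len(x1)))
L2 = np.linalg.cholesky(K2 + jitter * np.eye(len(x2)))
L = np.kron(L1, L2)
assert np.allclose(L @ L.T, K12, atol=1e-6)
```

The same argument extends factor by factor to the three-component clinical design discussed next.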
We can apply this reasoning to the clinical scenario described in <ref>: for a longitudinal study consisting of N_1 individuals measured at N_2 time points at N_3 anatomical locations, the covariate data X can be represented as lying on a three dimensional grid, where one of the grid dimensions x_1 is the variation between individuals, the second x_2 is the variation in time, and the third x_3 is the variation in anatomical location. Consequently, defining separate covariance matrices representing the variation across individuals (the N_1× N_1 matrix 𝐊_1), variation over time (the N_2× N_2 matrix 𝐊_2), and anatomical variation (the N_3× N_3 matrix 𝐊_3) we can represent the covariance between the covariate measurements X as: 𝐊_X = 𝐊_1 ⊗𝐊_2 ⊗𝐊_3. Conveniently, when a decomposition of the full matrix is required, the components of the Kronecker product can be decomposed separately, such that the 𝒪(N^3) = 𝒪(N_1^3 N_2^3 N_3^3) computational demands become 𝒪(N_1^3 + N_2^3 + N_3^3). For datasets with appropriate grid structure or repeated measurements, this enables exact inference for tens of thousands of data points on a personal computer. This has been used previously for image data, where the pixels lie on a regularly spaced grid, or when there are multiple endpoints evaluated at the same covariate locations. In this article, we note that the widespread use of repeated measurements in clinical research can be represented within the Kronecker structure described above. §.§ Heterogeneous Multi-Task Gaussian Processes The GP framework extends naturally to multi-dimensional responses 𝐘, analogous to the case of multivariate regression<cit.>. The different response dimensions (also known as "tasks" or "outputs") are appended into a single vector, and a between-task covariance matrix 𝐊_out is used to model correlations between the tasks. When the different tasks are evaluated at the same values of the covariates, generating a covariance matrix between covariates 𝐊_X, then Kronecker structure again emerges in the full covariance matrix 𝐊_f= 𝐊_out⊗𝐊_X, and computational accelerations become possible. Different tasks often correspond to different methods of evaluating some outcome, and often exhibit heterogeneity of distributions, e.g. each task may be variously continuous valued, binary or follow some other distribution. In this case, each output will require different likelihood functions to map from the latent function f to observation y. For likelihoods other than Gaussian, the latent variable cannot be integrated out analytically and inference must be performed for the latent variable, through sampling, variational inference, or another approximation. All of the data sets used in this article use one Kronecker component for the multi-output correlation, and three Kronecker grid components for the covariates, which are grouped into grid dimensions x_1, x_2 and x_3. The full covariance structure is therefore: 𝐊_f = 𝐊_out⊗𝐊_1 ⊗𝐊_2 ⊗𝐊_3 §.§ Covariance Matrix Design and Random Effects Imposing Kronecker structure on the full covariance matrix puts some constraints on the types of covariance functions that can be used. Principally, the covariance function must be separable between the different grid components x_1 and x_2, meaning the overall covariance function can be represented as the product of covariance functions defined over each of the grid components, i.e. k([x_1,x_2],[x'_1,x'_2]) = k_1(x_1,x'_1)k_2(x_2,x'_2).
Many commonly used multi-dimensional covariance functions exhibit this property, but some designs that might be desirable for interpretation do not exhibit separability. Two are discussed below: additive covariance functions and random effects covariance functions. An additive covariance function represents a common covariance function over different dimensions as a sum of separate covariance functions<cit.>, i.e. k([x_1,x_2],[x'_1,x'_2]) = k_1(x_1,x'_1) + k_2(x_2,x'_2). Additive covariance functions might be desired if we are interested in isolating the contribution to the output variation from one particular dimension. This is especially useful in medical research when one intervention or treatment is considered to be of central clinical relevance. Within the Kronecker framework, it is possible to assert an additive covariance function within each component of the Kronecker product, but the resulting covariance function over the entire data set is a product of the sum within the component with the covariance functions over the rest of the dimensions, making it difficult to interpret the sum components separately, i.e.: (𝐊_1 + 𝐊_2) ⊗𝐊_3 = (𝐊_1⊗𝐊_3) + (𝐊_2⊗𝐊_3) Random effects are often desirable in the presence of repeated measurements, where we are not necessarily interested in the variation between individuals but we would still like to include it in the model <cit.>. This can be performed in an elegant way within GP regression models by including a structured diagonal noise component representing the random variation across individuals. In the Kronecker context, this can be achieved by adding spherical noise to the Kronecker product component that represents the variation between individuals, representing the "noise" sampled when moving between individuals. As this is an additive covariance function where one of the covariance functions is diagonal noise, the same problem emerges when combining additive covariance functions with Kronecker structure: the resulting covariance function components cannot be interpreted totally straightforwardly, as they are multiplied with the covariance functions associated with the other Kronecker components. Formally, a traditional random effects model is represented on the left-hand side of the following inequality, and the model we implemented on the right: (𝐊_1⊗𝐊_2) + (σ^2 ℐ_1⊗ℐ_2) ≠ (𝐊_1 + σ^2 ℐ_1) ⊗𝐊_2 This formulation of a random effect is equivalent to adding some extra variance to the coefficients corresponding to non-patient specific covariates. We detail this in Appendix <ref>, for the case where all kernels are linear and thus GP regression is equivalent to Bayesian linear regression. We ran separate models with and without the "random effect" noise in the individual-level covariance function to explore its influence on the model fit and predictions. We call the models with and without this the mixed-effect GP ("GP.m") and the fixed-effect GP ("GP.f"). The covariance functions for each covariate were chosen conditional on the data. For continuous-valued covariates, a squared-exponential covariance function with a lengthscale hyperparameter enabling automatic relevance determination (ARD) was used. For binary covariates, a linear covariance function was used with a variance hyperparameter, as a more complex covariance function would be unnecessary for binary data. Nominal or ordinal covariates were one-hot encoded to corresponding binary variables and a linear covariance function was used.
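The inequality above is easy to confirm numerically; the short check below (included only for illustration) shows that adding spherical noise inside one Kronecker factor is not the same as adding a traditional random-effect noise term to the full covariance:

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)); K1 = A @ A.T + np.eye(3)   # individual-level block
B = rng.normal(size=(4, 4)); K2 = B @ B.T + np.eye(4)   # e.g. the time block
sigma2 = 0.5
I1, I2 = np.eye(3), np.eye(4)

classic = np.kron(K1, K2) + sigma2 * np.kron(I1, I2)    # traditional random-effect noise
implemented = np.kron(K1 + sigma2 * I1, K2)             # noise added inside one Kronecker factor

print(np.allclose(classic, implemented))                # False: the two formulations differ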
The multi-output covariance matrix 𝐊_out is parametrised as a Cholesky-decomposed correlation matrix multiplied with a diagonal matrix representing the separate output variances. §.§ Full Model Specification In summary, the three-component Kronecker model used in this article, with n_g Gaussian-distributed outputs y_g, n_b Bernoulli-distributed outputs y_b, and repeated covariates partitioned into repeated measure grid components x_1, x_2 and x_3, becomes:
ρ_1, ρ_2, ρ_3 ∼ InvGamma(2,1)   (lengthscales for each kernel Kronecker component)
α ∼ InvGamma(2,1)   (kernel variances for each output)
α^(n) ∼ InvGamma(2,1)   (noise variance for each Gaussian-distributed output)
η ∼ 𝒩(0,1)   (standard Normal latent η for computation purposes)
L ∼ LkjCholesky(3)   (correlation matrix between outputs with LkjCholesky prior)
L^(n) ∼ LkjCholesky(3)   (noise correlation matrix between Gaussian outputs with LkjCholesky prior)
σ^2_re ∼ InvGamma(2,1)   (variance of optional random effects kernel component)
K_1 = k(x_1,x_1,ρ_1)   (first Kronecker kernel matrix component)
K_2 = k(x_2,x_2,ρ_2)   (second Kronecker kernel matrix component)
K_3 = k(x_3,x_3,ρ_3) + σ^2_re ℐ   (third Kronecker kernel matrix component, with optional random effects)
f = ( diag(α) L ⊗ chol(K_3) ⊗ chol(K_2) ⊗ chol(K_1) ) η   (latent variable f constructed from the kernels and η)
Σ^(n) = diag(α^(n)) L^(n) ( diag(α^(n)) L^(n) )^T   (noise covariance between Gaussian-distributed outputs)
y_g ∼ 𝒩(f[1:n_g , ], Σ^(n))   (distribution of the n_g Gaussian-distributed outputs y_g)
y_b ∼ Bernoulli(Φ(f[n_g + 1:n_g + n_b , ]))   (distribution of the n_b Bernoulli-distributed outputs y_b)
§.§ Stan Implementation The methods described here were implemented in Stan <cit.>, a probabilistic programming language designed for Bayesian inference, in which model specification is performed explicitly within the language, and No U-Turn Sampler (NUTS) Hamiltonian Monte Carlo (HMC) <cit.> is performed for the model parameters. The Gaussian Process was represented for inference with a standardised, uncorrelated Gaussian latent variable η, which was reshaped and transformed by the covariance Kronecker components to form the latent variable f conditioned on the covariance. HMC was performed for the GP hyperparameters and noise terms, the latent function representation η, the latent function f for discrete-valued tasks, and the missing values of the outputs. The Stan programs were called from within the R programming language via Rstan <cit.>. Similar computational speedups have been achieved in Stan previously <cit.>. §.§ Missing Data In the case of missing values of the output variable y, which often arise when exploiting Kronecker structure over an incomplete grid, the Bayesian framework offers a convenient solution: the missing y values are treated as parameters to be inferred, using the distribution implied by the latent variable f and likelihood p(y|f), conditional on the observed covariates. The scalability associated with the Kronecker decomposition is preserved, while interpretable posterior distributions for the unobserved data are provided. The inference for continuous-valued missing outputs y can be integrated in a straightforward way with most inference schemes. Given that Stan cannot perform inference over discrete variables, the missing binary outputs pose a challenge. This was resolved by calculating the log-likelihood contributions according to the Bernoulli likelihood with an appropriate link function, with the missing y varying continuously between zero and one.
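As a rough illustration of this relaxation (written in Python rather than Stan, and assuming a probit link), the same Bernoulli log-likelihood expression can be evaluated both for an observed 0/1 label and for a missing label treated as a continuous parameter in (0, 1), keeping the expression differentiable in the missing value:

import numpy as np
from scipy.stats import norm

def bernoulli_loglik_relaxed(y, f):
    # Log-likelihood contribution of a binary output under a probit link.
    # Observed entries of y are 0 or 1; missing entries are treated as parameters
    # constrained to (0, 1), so the same expression remains smooth in them.
    p = norm.cdf(f)                                   # probit link on the latent GP value
    return y * np.log(p) + (1.0 - y) * np.log1p(-p)

# An observed label and a continuously relaxed "missing" label near 0.5.
print(bernoulli_loglik_relaxed(np.array([1.0, 0.47]), np.array([0.3, -0.1])))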
This allows for efficient gradient-informed inference procedures, and the interpretation of the imputed variables as probabilities. §.§ Radiological Image Quality The clinical theme of this article is radiological image quality, a field in which it is common to combine continuous-valued "objective" measurements of image quality with discrete-valued "subjective" expert evaluations of image quality. This article uses two real data sets, the first of which is observational time-series data, and the second of which comes from an RCT with thorough repeated measures. In each case the effect of primary interest is the effect of the volume of contrast medium per body weight on image quality. The first real data set was previously published as a clinical study in <cit.>. The image quality was measured in venous blood vessels of 53 patients, with continuous-valued attenuation in Hounsfield Units as the objective endpoint, and binary evaluations of image quality from three different consultant radiologists as the subjective endpoints. Each patient was injected with a different quantity of contrast medium per body weight, and evaluations of the images were taken at six thirty-second interval time points after injection. Evaluations were also performed for each side of the body. Further covariates used were gender, age and tube voltage. Further clinical details can be found in the original publication. The combination of 53 patients, 4 endpoints, 7 time points and 2 body sides implies a covariance structure of size 2,968, but the use of the Kronecker product for each of these contributions means that the computations are readily tractable on a personal laptop. The second real data set comes from a Randomised Control Trial assessing image quality in arterial blood vessels following Computed Tomography Angiography (CTA) <cit.>. 210 patients were randomised to receive full versus half doses of contrast volume per body weight, with measurements repeated at 7 different locations in the body, and 3 different spectral energy levels measured in kiloelectronVolts (keV). Examples of images recorded at different body levels and energy levels are presented in Figure <ref>. Attenuation level and image noise recorded in selected Regions Of Interest (ROIs), each measured in Hounsfield units, were used as continuous-valued "objective" outcomes, and two consultant radiologists evaluated every image on a nominal scale for image quality. Because of the very skewed distribution of the nominal data, these were simplified to a binary variable of "Excellent" vs every other level, resulting in two binary "subjective" endpoints. Examples of the ordinal scale of subjective image quality are shown in Figure <ref>. Sex, age and flow speed of injection were also collected as relevant covariates. The combination of 210 patients, 4 endpoints, 7 body locations and three energy levels resulted in a data set of size N=17,640, but again the Kronecker structure enabled exact inference with limited computational resources. § RELATED WORK Gaussian Processes have been used for decades under various names such as kriging or Bayesian kernel regression, and are related to the widely used Support Vector Machines (SVMs). Their use has increased in the past couple of decades with the advent of greater computational resources, and much research has occurred concerning their scalability to larger data sets under contemporary computational constraints.
These methods often include variational inference schemes via the "inducing point" framework <cit.>, or spectral methods to approximate the full covariance function via sampled Fourier features <cit.>. The speedups possible through Kronecker structure have been used for some time, but have previously been applied to structured image data <cit.>, drug combinations <cit.>, spatio-temporal modelling <cit.>, and multi-task regression <cit.>. Heterogeneous multiple outputs have also received research focus: while multiple Gaussian-distributed outputs allow for analytical marginalisation of the latent variable, the presence of discrete or other non-Gaussian-distributed outputs forces the use of non-conjugate inference methods such as variational methods, Expectation Propagation or sampling of the latent variable. Here we opt for the latter solution, aided by the development of the Stan programming language and the underlying continuity and smoothness of all latent parameters of interest. § RESULTS The results describing the behaviour of the trained models are presented here. We ran experiments using 10-fold cross-validation for two GP models with and without the random effect component in the individual-level covariance function ("GP.m" and "GP.f"), and two parametric models run in brms also with or without individual-level random effects ("brms.m" and "brms.f"). §.§ Comparison in brms Two brms models were used as comparison methods ("brms.m" and "brms.f"), with and without random effects at the individual level, respectively. Both models had linear fixed effects based on the covariates for each data set, i.e. for the simulated data example, each output was modelled with the following model formulae: with p being shared between the outputs. Either Gaussian or Binomial likelihoods were then added to the objects, and all outputs were learned jointly in a single call to . Code is available in the supplementary material. For the first real data set, an interaction was included between time and contrast volume per unit body mass, and for the second real data set, an interaction was included between randomisation group and energy level. §.§ Data Simulation Two distinct latent parametric functions were used to simulate the data, providing a nonlinear relationship between the outputs and the covariates: f_1 = exp(.15x_1) - .6x_2^2 + sin(3x_3) f_2 = - exp(-.15x_1) + |3x_2| - cos(3x_3) Two further latent functions were defined for the binomial outputs: f_3 = - f_1 and f_4 = f_2. All four of the latent functions then had Gaussian noise of mean zero and standard deviation 0.1 added: the resulting noisy samples for the first two outputs became the observed continuous variables, while the final two noisy samples were pushed through a probit link function and rounded to generate the observed binomial variables. No random effects were included in the data simulation process. The covariates were sampled from 𝒰_[-5,5]. 20 unique grid points were sampled for x_1, 7 for x_2, and 3 for x_3. Combined with the four outputs and the Kronecker structure, this resulted in a total number of 1,680 unique observations. The statistical models that included random effects at the individual level ("GP.m" and "brms.m") treated the third covariate x_3 as the individual-level grid dimension, for the sake of comparison, but this choice is not expected to have a large influence on the results, as there was no variation in the simulated data beyond the observed covariates and shared noise.
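For concreteness, the simulation just described can be reproduced along the following lines (a Python sketch under the stated settings; the random seed and the grid ordering are our own choices):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)

# Grid of covariates: 20 x 7 x 3 unique points sampled from U[-5, 5].
x1 = rng.uniform(-5, 5, 20)
x2 = rng.uniform(-5, 5, 7)
x3 = rng.uniform(-5, 5, 3)
X1, X2, X3 = np.meshgrid(x1, x2, x3, indexing="ij")   # 20 * 7 * 3 = 420 grid points

# Latent parametric functions used in the simulation study.
f1 = np.exp(0.15 * X1) - 0.6 * X2 ** 2 + np.sin(3 * X3)
f2 = -np.exp(-0.15 * X1) + np.abs(3 * X2) - np.cos(3 * X3)
f3, f4 = -f1, f2.copy()

# Add Gaussian noise (sd = 0.1); the first two outputs keep the noisy values,
# the last two are pushed through a probit link and rounded to give binary labels.
noisy = [f + rng.normal(0.0, 0.1, f.shape) for f in (f1, f2, f3, f4)]
y_gaussian = noisy[:2]
y_binomial = [np.round(norm.cdf(f)) for f in noisy[2:]]   # 4 outputs x 420 = 1,680 observations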
§.§ Losses and Testing The models were evaluated using 10-fold cross-validation to predict the test outputs given the training data and test covariates. Loss functions were evaluated using the mean predictive f_pm across posterior samples and the observed data, using an absolute/L1 loss for Gaussian outputs y_g and, for binomial outputs y_b, the logarithm of one minus the probability of the observed data ("the log probability of the wrong answer"), i.e. l(y_g, f_pm) = |y_g - f_pm| and l(y_b, f_pm) = log(Φ(-f_pm (2y_b - 1))), where Φ is the probit link. This resulted in distinct populations of losses per method, with one evaluated loss per output data point. These are plotted as histograms for each output in Figures <ref>, <ref>, <ref>, with the binomial losses represented on the log scale as well as the probability scale. The differences between populations of losses for each output were tested formally using a paired Wilcoxon rank sum test, corrected for four-fold multiple testing between methods. The p-values from the Wilcoxon tests were further supplemented by rank-biserial correlations as effect sizes. Nonparametric tests were chosen for model evaluation in order to avoid making distributional assumptions about the populations of losses, and to make the results robust to monotonic transformations in the losses. For each of the four models on each of the data sets, we present posterior summaries of the interpretable model parameters from the first of the ten CV folds in <ref>. Considering the size of the data sets and the randomisation process used, we consider posterior summaries of a single CV fold to be representative of the population as a whole. §.§ Results for Simulated Data We see the predictive loss results in Table <ref> and Figure <ref>. We see that the brms model effectively fails and returns samples from the prior on binomial output 1 and Gaussian output 1, which are derived from the same latent parametric function. The GP models consequently registered significantly lower losses with large effect sizes. We see a small and possibly spurious effect of the GP models appearing to avoid predictions around the prior value of p=0.5. For the second continuous output, we again see the GP models outperforming the brms models, with GP.f performing substantially better than GP.m. For the second binomial output, we see that the brms models get many of the labels correct with high confidence, but also many of the labels wrong with high confidence. Consequently, the results here are more mixed, with the only significant differences being brms.m outperforming both GP.m and brms.f. We would expect from the absolute value present in <ref> that the smooth GP model would find the second and fourth endpoints more challenging to model accurately. Parameter posterior summaries are presented for all four models trained on this data set in <ref>. §.§ Observational Time-Series Data We see the predictive loss results for the first real data set in Table <ref> and Figure <ref>. We see that all of the methods were confidently correct in their predictions of labels for the binomial outputs, suggesting that this is a relatively easy classification problem. That said, when considering the ranking of the predictions, we see that the GP.m model outperforms the other methods on the binomial outputs. We see a small number of extremely confident predictions from the brms model, especially for output 3, which is a concern even if they are correct.
When considering the continuous output, we see more mixed results between the methods, with the only clearly significant result being that the GP.f model appears to perform uniformly worse than the other methods. Parameter posterior summaries are presented for all four models trained on this data set in <ref>. §.§ Randomised Control Trial Data We see the predictive loss results for the second real data set in Table <ref> and Figure <ref>. We again see that all four of the methods predict the correct label confidently most of the time, suggesting that this is a relatively easy prediction problem. The brms.f method appears to return a small number of predictions from the prior for the second binomial output. We see from the hypothesis testing that the GP.m model outperforms the other three methods for the first binomial output (with a small effect size relative to GP.f), whereas the GP.f model outperforms the other methods for the second binomial output. For the continuous outputs, the GP.m model outperforms the three other methods in prediction, while the GP.f model performs as well as or slightly better than the brms models. Parameter posterior summaries are presented for all four models trained on this data set in <ref>. § DISCUSSION In this work, we have demonstrated that the repeated measurements used in applied medical research can be used to scale complex Bayesian nonparametric Gaussian Process models to practical clinical questions. The more complex models have been shown to have increased predictive ability compared to the widely used parametric models, indicating that this may be worthwhile if achieving optimal model specification is a concern. Model specification is particularly relevant in medical work when there are specific clinical questions or causal hypotheses to investigate: a poorly specified model is prone to biasing estimates of the parameters of interest and hence possibly giving misleading results. The question remains as to how to represent clinical hypotheses within a GP framework: while in principle parametric regression models with finite linear combinations of features have corresponding covariance function representations, it may not be straightforward to interpret covariance function variances and lengthscales in a practical way. Preexisting familiarity with ANOVA and similar models that explain contributions to the variance rather than the functional form of the mean may be a helpful reference point. For randomised data, in which one of the covariate dimensions has been generated by randomised interventions, it is common to estimate causal effects with parametric models, sometimes including other covariates to increase power. Such estimates will be potentially biased by the limited expressive capabilities of the parametric representation and corresponding model misspecification issues. If including other covariates, the covariance function representation would have the advantage of including more flexible function spaces that might more accurately reflect the true generative process, with interpretation of the parameters associated with non-randomised covariates being less important. We can therefore expect a more accurate estimation of the underlying causal effect when using a more flexible nonparametric model. With non-randomised observational data, we may still be interested in achieving some insight into the causal mechanism underlying the data, with all of the appropriate caveats of potential confounding.
In this case, the covariance function representation may still help by providing a more flexible function space that increases precision of the causal estimate of interest, and possibly reducing the bias of the estimated causal effect by more accurately modelling the influence of observed confounders. The separability of the covariance functions necessary to exploit Kronecker structure is potentially an important limitation: it is somewhat analogous to being forced to include interaction effects in a parametric model. If the practitioner is interested in extracting a standalone main effect for the purposes of interpretation, or encoding separate mean functions as random effects at an individual level, then this may hinder interpretation. Further work exploring the interpretation of additively structured Kronecker sub-components would help to elucidate the implications of this constraint. As observed earlier, the stricter constraint of having the training data lie on a (mostly) complete grid is obeyed surprisingly frequently in clinical research, as the importance of performing repeated measurements to isolate different contributions to variation is well-understood. The use of a Gaussian Process object also opens up the possibility of using Bayesian Optimization<cit.> type algorithms to measure new data points that are optimally informative, according to some acquisition function. The analytical posterior mean and variance of the latent GP object could be used to assess which unsampled areas of the covariate space would be best explored to reduce uncertainty in the effect of interest. Given the relatively strict grid structure necessary, performing this iteratively may be challenging, but the posterior predictive estimates provided by the fitted GPs could still be used to motivate future research study designs. In conclusion, in this article, we have demonstrated the ability and utility of scaling Gaussian Process models to large real-world clinical data sets through the use of Kronecker-structure covariance matrices and repeated measurements in the data. § ACKNOWLEDGMENTS We thank Thien Trung Tran, Cathrine Helgestad Kristiansen and Peter Lauritzen for providing the radiological data used in this article. §.§ Author contributions OT helped with conceptualising the project, constructing the code, running the experiments, and writing the manuscript. LR helped with model design, providing the starting codebase, writing the manuscript, and deriving theoretical results. §.§ Financial disclosure None reported. §.§ Conflict of interest The authors declare no potential conflict of interests. § STAN OUTPUTS § RANDOM EFFECTS IN THE INTRINSIC COREGIONALIZATION MODEL In this appendix, we demonstrate the effect of adding a random effect to an Intrinsic Coregionalization Model (ICM), in the simplest case where the covariance functions over the inputs and the outputs are both linear. Recall that the ICM for a multi-output GP over m outputs can be written as [ f_1; ⋮; f_m ]∼𝒢𝒫(0,B κ_x(x,x')), where B is the m × m coregionalisation matrix giving the covariance over the outputs, and κ_x(x,x') the covariance function over the inputs. In the case of a fully observed dataset, we can write this using the Kronecker product 𝐟(𝐗)∼𝒩(0,B⊗ K_x). 
By exploiting the connections between Gaussian Processes and Bayesian linear regression, in the case of a linear kernel over the inputs, κ_x(𝐱,𝐱')=1+𝐱^T𝐱', the ICM is equivalent to the following linear model: [ 𝐟_1(𝐗); ⋮; 𝐟_m(𝐗) ]= [ 𝐗 0 ⋯ 0; 0 𝐗 ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ 𝐗 ][ β_1; β_2; ⋮; β_m ], [ β_1; β_2; ⋮; β_m ]∼𝒩(0,B⊗ℐ_(p+1)), where 𝐗∈ℝ^m× (p+1) denotes the design matrix of a linear regression including the intercept, β_j∈ℝ^(p+1) the corresponding output-specific coefficient vector, and ℐ_(p+1) the (p+1)-dimensional identity matrix. Hence, this is equivalent to fitting a linear model to each output, with output-specific coefficients β_j for j=1,…,m. Due to the joint distribution over the coefficient vectors, these linear models are correlated in the prior, allowing the models to influence each other, borrowing strength across outputs. When working with multiple covariance functions that are multiplied together, we need to consider the effect on the induced feature space. Consider a set of covariates that can be blocked into two sets 𝐗=[𝐗^(1),𝐗^(2)], where 𝐗^(2) corresponds to patient-specific covariates, while 𝐗^(1) are the other covariates in the model. In this paper, we consider covariance structures that decompose across these groups of covariates, e.g. κ(𝐗,𝐗')=κ_1(𝐗^(1),𝐗'^(1))κ_2(𝐗^(2),𝐗'^(2)). Generally, multiplying together kernels has the effect of modelling interactions between the covariates, e.g. if both κ_1 and κ_2 are linear kernels, the induced feature space is ϕ_1× 2={1,x^(1)_1,…,x^(1)_p_1,x^(2)_1,…,x^(2)_p_2,x^(1)_1x^(2)_1,…,x^(1)_p_1x^(2)_p_2}, and GP regression is equivalent to performing a linear regression using the extended basis ϕ_1× 2. Similarly, in the multi-output setting GP regression becomes equivalent to the model in equation (<ref>), but with the m × ((p_1+1)(p_2+1)) matrix Φ_1 × 2 taking the place of X, and modulo some dimensionality changes on the identity matrix in the prior over the coefficients. Further complicating the model by adding a random effect to κ_2 as in Section <ref>: [ f_1; ⋮; f_m ]∼𝒢𝒫(0,B κ_1(𝐗^(1),𝐗^(1))(κ_2(𝐗^(2),𝐗^(2))+γ^2 ℐ_m)), has the effect of adding extra variance to the coefficients corresponding to the non-patient specific covariates. The model becomes [ 𝐟_1(𝐗); ⋮; 𝐟_m(𝐗) ]= [ Φ_1× 2 0 ⋯ 0; 0 Φ_1× 2 ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ Φ_1× 2 ][ β_1; β_2; ⋮; β_m ], where [ β_1; β_2; ⋮; β_m ]∼𝒩(0,B⊗ℐ̃_(p_1+1)(p_2+1)), where ℐ̃_(p_1+1)(p_2+1) is a block diagonal matrix with blocks {(1+γ^2)ℐ_(p_1+1),ℐ_p_2,ℐ_p_1p_2}.
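The equivalence can also be checked numerically; the following sketch (illustrative only) compares the ICM covariance under a linear kernel with the covariance implied by the correlated Bayesian linear regression above:

import numpy as np

rng = np.random.default_rng(3)
n, p, m = 6, 2, 3                                      # data points, covariates, outputs
X = rng.normal(size=(n, p))
Xd = np.hstack([np.ones((n, 1)), X])                   # design matrix with intercept

A = rng.normal(size=(m, m)); B = A @ A.T + np.eye(m)   # coregionalisation matrix

# GP view: ICM covariance with the linear kernel k(x, x') = 1 + x^T x'.
K_x = 1.0 + X @ X.T
cov_gp = np.kron(B, K_x)

# Linear-model view: f = blockdiag(Xd, ..., Xd) beta, with beta ~ N(0, B kron I_(p+1)).
D = np.kron(np.eye(m), Xd)
cov_lin = D @ np.kron(B, np.eye(p + 1)) @ D.T

assert np.allclose(cov_gp, cov_lin)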
http://arxiv.org/abs/2407.12658v1
20240717154215
FastSAM-3DSlicer: A 3D-Slicer Extension for 3D Volumetric Segment Anything Model with Uncertainty Quantification
[ "Yiqing Shen", "Xinyuan Shao", "Blanca Inigo Romillo", "David Dreizin", "Mathias Unberath" ]
eess.IV
[ "eess.IV" ]
FastSAM-3DSlicer Y. Shen et al. Johns Hopkins University, Baltimore, MD, 21218, USA University of Maryland School of Medicine and R Adams Cowley Shock Trauma Center, Baltimore, MD, 21201, USA {yshen92,unberath}@jhu.edu FastSAM-3DSlicer: A 3D-Slicer Extension for 3D Volumetric Segment Anything Model with Uncertainty Quantification Yiqing Shen1 Xinyuan Shao1 Blanca Inigo Romillo1 David Dreizin2 Mathias Unberath1() July 22, 2024 ================================================================================================================ § ABSTRACT Accurate segmentation of anatomical structures and pathological regions in medical images is crucial for diagnosis, treatment planning, and disease monitoring. While the Segment Anything Model (SAM) and its variants have demonstrated impressive interactive segmentation capabilities on image types not seen during training without the need for domain adaptation or retraining, their practical application in volumetric 3D medical imaging workflows has been hindered by the lack of a user-friendly interface. To address this challenge, we introduce FastSAM-3DSlicer, a 3D Slicer extension that integrates both 2D and 3D SAM models, including SAM-Med2D, MedSAM, SAM-Med3D, and FastSAM-3D. Building on the well-established open-source 3D Slicer platform, our extension enables efficient, real-time segmentation of 3D volumetric medical images, with seamless interaction and visualization. By automating the handling of raw image data, user prompts, and segmented masks, FastSAM-3DSlicer provides a streamlined, user-friendly interface that can be easily incorporated into medical image analysis workflows. Performance evaluations reveal that the FastSAM-3DSlicer extension running FastSAM-3D achieves low inference times of only 1.09 seconds per volume on CPU and 0.73 seconds per volume on GPU, making it well-suited for real-time interactive segmentation. Moreover, we introduce an uncertainty quantification scheme that leverages the rapid inference capabilities of FastSAM-3D for practical implementation, further enhancing its reliability and applicability in medical settings. FastSAM-3DSlicer offers an interactive platform and user interface for 2D and 3D interactive volumetric medical image segmentation, offering a powerful combination of efficiency, precision, and ease of use with SAMs. The source code and a video demonstration are publicly available at <https://github.com/arcadelab/FastSAM3D_slicer>. § INTRODUCTION Precise segmentation of anatomical structures and pathological regions from medical images is essential for accurate diagnosis, treatment planning, and monitoring of disease progression <cit.>. However, manual segmentation is time-consuming, labor-intensive, and prone to inter-observer variability. Deep-learning-driven automatic segmentation models have shown promise in reducing manual effort, they often struggle to generalize across diverse datasets, anatomical variations, and unseen pathologies during inference <cit.>. The Segment Anything Model (SAM) <cit.> and its variants for volumetric medical images, such as SAM-Med3D <cit.>, have emerged as flexible zero-shot solutions that enable interactive segmentation of novel objects without requiring retraining. These models are designed to generalize across various tasks and datasets through large-scale pre-training and support for manual prompting. 
The interactive nature of SAM-based segmentation makes fast inference times particularly important, as it allows users to make immediate adjustments and corrections, improving the accuracy and efficiency of the segmentation process. FastSAM-3D <cit.>, a computationally efficient 3D SAM architecture, has been specifically optimized for real-time interactive segmentation of 3D volumes such as computed tomography (CT). FastSAM-3D utilizes a compact Vision Transformer (ViT) encoder <cit.>, distilled from the larger SAM-Med3D <cit.>, and incorporates an efficient 3D Sparse Flash Attention <cit.> mechanism to reduce computational costs while maintaining high segmentation quality. Despite the potential of efficient SAM variants to enable highly responsive interactive segmentation, there is currently a lack of user-friendly interfaces to facilitate their practical application in medical image analysis workflow. Interactive prompting in 3D medical volumes is more challenging compared to 2D interfaces for natural images, due to the increased complexity of visualizing 3D data on a 2D screen and the need for seamless integration with existing medical image analysis workflows. 3D Slicer[<https://www.slicer.org>] <cit.> is an open-source software platform widely used for the analysis and visualization of volumetric medical images. It supports a variety of plug-ins and offers a robust framework for the integration of new tools, making it an ideal choice for creating an interface for interactive volumetric image segmentation, e. g., with FastSAM-3D. In this manuscript, we describe FastSAM-3DSlicer, a plugin for integrating 2D and 3D SAM models, including the efficient FastSAM-3D, into the well-established image analysis platform 3D Slicer. This extension enables users to load 3D volumes, interactively annotate structures of interest using point prompts, and visualize the resulting segmentations in real time within a familiar software environment. In summary, our contributions are three-fold. Firstly, we propose a novel 3D Slicer-based extension for both 2D and 3D SAM models, including the efficient FastSAM-3D, for volumetric image segmentation. It enables interactive prompting in a 3D manner with SAM. Secondly, we show the quantitative comparison of the inference time of different SAM models within our interface on both CPU and GPU environments. Finally, we propose an uncertainty quantification scheme based on the fast inference speed of FastSAM-3D in our extension. It can guide the user for better prompting. § METHODS §.§.§ Overview of the 3D Slicer Extension The interface for our FastSAM-3DSlicer extension is illustrated in Fig. <ref>, with its overall implementation details shown in Fig. <ref>. Unlike previous works integrating 2D SAMs into 3D Slicer, such as TomoSAM <cit.> and SAMME <cit.>, FastSAM-3D's efficiency at the volume level eliminates the need to prepare image embeddings before interactive segmentation (i.e., user prompting) in 3D Slicer. This improvement enhances the user experience by reducing pre-processing time and allowing for more immediate interaction with the data. Upon importing a 3D volumetric image into FastSAM-3DSlicer, the extension automatically generates three node types, including a volume node, a segmentation node, and two markup nodes. These node types represent specific data structures within 3D Slicer that manage different aspects of the image and segmentation process. 
The volume node contains the raw 3D image in grayscale, represented as a NumPy array, which serves as the input for the SAM model. The segmentation node stores all the segmented masks, which are updated when users add new point prompts and generated when users perform the add mask operation. The two markup nodes hold all the include and exclude points that the user can add interactively. When a user adds an include or exclude point prompt, FastSAM-3DSlicer first converts this point from RAS to XYZ coordinates based on the affine matrix stored in the input NIfTI file. Using this new point prompt and previous input point prompts, FastSAM-3DSlicer crops or pads the raw image to match the selected SAM model's input size and feeds it into the image encoder to generate image embeddings in real time. The prompt encoder processes all the input points to generate prompt embeddings. The mask decoder then translates the image embeddings, prompt embeddings, and previous masks into a new mask. This segmentation mask is resized to the raw image dimensions based on saved coordinate information and updated in the segmentation node. In addition to supporting the FastSAM-3D model, FastSAM-3DSlicer also supports SAM-Med3D <cit.>, MedSAM <cit.>, and SAM-Med2D <cit.>, all of which follow the same structural process. §.§.§ Uncertainty Quantification Scheme Uncertainty quantification in FastSAM-3DSlicer provides users with a measure of confidence in the segmentation results, which can be used to guide user prompting. Regions with higher uncertainty indicate a greater need for additional prompts. Our method leverages the efficiency of FastSAM-3D by running the image encoder once and performing multiple decoding steps to generate an ensemble of segmentations. It begins with the initial segmentation, where the image encoder processes the input 3D volumetric image 𝐈 to produce its image embedding 𝐄 = Encoder(𝐈). Next, the mask decoder generates the initial segmentation logits 𝐌_0 based on the image embedding 𝐄 and the initial set of point prompts (𝐏_0) provided by the user, i.e. 𝐌_0 = Decoder(𝐄, 𝐏_0). The segmentation mask 𝐌_0 is the binarized segmentation logits 𝐌_0, obtained by applying a threshold τ: 𝐌_0 = 1(𝐌_0 > τ), where 1(·) is the indicator function. To quantify uncertainty, subsequent point prompts are sampled from the initial segmentation mask 𝐌_0. These pseudo-point prompts are used to run the decoder multiple times, each time producing a slightly different segmentation mask due to variations in the sampled prompts. Let 𝐏_i denote the point prompts sampled from the initial segmentation mask 𝐌_0 for the ith iteration: 𝐏_i = SamplePrompts(𝐌_0). The decoder is then run N times using these sampled prompts, while keeping the image encoder constant, to produce N different segmentation masks {𝐌_i}_i=1^N with 𝐌_i = Decoder(𝐄, 𝐏_i). Since the image encoder only runs once, the majority of the computational efficiency is preserved, as the decoder accounts for a smaller proportion of the total computation. Inspired by self-ensembling <cit.>, the segmentation logits from each decoder run are averaged to produce the final ensemble result 𝐌: 𝐌 = 1/N∑_i=1^N𝐌_i. This averaging process not only provides a robust final segmentation mask but also allows for the calculation of uncertainty.
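A compact sketch of this scheme is given below; encoder, decoder and sample_prompts are placeholders standing in for the FastSAM-3D components rather than actual library calls, and the default threshold and tensor shapes are assumptions made purely for illustration:

import numpy as np

def uncertainty_ensemble(image, prompts0, encoder, decoder, sample_prompts,
                         n_runs=5, tau=0.0):
    # Encode the volume once; only the (cheap) decoder is re-run for the ensemble.
    E = encoder(image)                                # image embedding, computed once
    logits0 = decoder(E, prompts0)                    # initial segmentation logits
    mask0 = (logits0 > tau).astype(np.float32)        # binarised initial mask

    # Re-run the decoder with pseudo-prompts sampled from the initial mask.
    runs = [decoder(E, sample_prompts(mask0)) for _ in range(n_runs)]
    logits = np.stack(runs)                           # shape: (n_runs, D, H, W)

    ensemble = logits.mean(axis=0)                    # averaged logits
    uncertainty = logits.std(axis=0)                  # per-voxel standard deviation
    return (ensemble > tau).astype(np.uint8), uncertainty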
Specifically, the variability among the N segmentation masks can be used to estimate the uncertainty, formally expressed as the standard deviation or variance of the logits at each voxel: Uncertainty(x) = √(1/N∑_i=1^N (𝐌_i(x) - 𝐌(x))^2) where x denotes a voxel in the 3D volume. § EXPERIMENTS §.§.§ Implementation Details We implemented the proposed 3D Slicer extension using 3D Slicer version 5.6.2 and Python version 3.10. The experiments were conducted in two distinct environments to evaluate inference times. The first environment was a CPU-only setup utilizing a laptop-level AMD Ryzen 5 5500U CPU, while the second environment employed a GPU setup with one NVIDIA RTX 2060 GPU. Following previous work <cit.>, we use the test set of totalsegmentator <cit.> to test the inference time. All data are prepared in NIfTI format for loading. Code for the 3D Slicer extension is available at <https://github.com/arcadelab/FastSAM3D_slicer>. A video demo for our 3D slicer extension is available at <https://www.youtube.com/watch?v=oJ9ZhnPWqSs>. §.§.§ Results for Inference Time Table <ref> presents a comparison of inference times for different SAMs with both CPU and GPU environments within our FastSAM-3DSlicer extension. The SAM-Med2D <cit.> exhibits an inference time of 1.52 seconds per slice on the CPU and 0.52 seconds per slice on the GPU. While SAM-Med2D <cit.> is effective for 2D slice-based segmentation, it fails to address volume-level segmentation in 3D Slicer, as the user needs to provide prompts for each slice individually, which results in longer inference time. MedSAM <cit.> shows considerably higher inference times, with 48.9 seconds per slice on the CPU and 12.69 seconds per slice on the GPU. The increased processing time is due to its higher resolution of 1024×1024 compared to the 256×256 resolution of SAM-Med2D <cit.>. Its high computational cost limits its practicality for real-time applications in image analysis settings. The SAM-Med3D <cit.>, optimized for 3D segmentation, achieves 7.75 seconds per volume on the CPU and 1.76 seconds per volume on the GPU. This demonstrates a substantial improvement over MedSAM <cit.>, particularly in GPU environments, making it a more viable option for volume-level segmentation. FastSAM-3D <cit.> significantly outperforms the other models in terms of inference speed (p<0.01). It achieves 1.09 seconds per volume on the CPU and 0.73 seconds per volume on the GPU. This reduction in inference time highlights the efficiency and optimization of FastSAM-3D for real-time interactive segmentation of 3D volumes. FastSAM-3D demonstrates the lowest inference times on volume-level segmentation across both CPU and GPU environments, affirming its suitability for integration into image analysis workflows where speed and efficiency are critical. The improvements in inference time not only facilitate faster segmentation but also enable more immediate and iterative interaction with the volumetric data, thereby enhancing the overall utility of the 3D Slicer extension in medical image analysis applications. §.§.§ Illustrative Visual Examples Fig. <ref> presents a qualitative comparison of segmentation results obtained from different SAM models integrated into the FastSAM-3DSlicer interface. The examples showcase the performance of each model on various anatomical structures and regions of interest. FastSAM-3D and SAM-Med3D generate segmentations for the entire 3D volume, demonstrating their ability to capture spatial context and produce coherent masks. 
In contrast, MedSAM and SAM-Med2D operate on 2D slices, resulting in smaller and more localized segmentations when visualized in the 3D view. Across all examples, FastSAM-3D exhibits the highest agreement with the ground truth, highlighting its superior performance in terms of both efficiency and accuracy for real-time 3D interactive segmentation within the FastSAM-3DSlicer. §.§.§ Results for Uncertainty Quantification Fig. <ref> showcases the uncertainty quantification results obtained through the self-ensembling approach in the FastSAM-3DSlicer extension. The results demonstrate that FastSAM-3D exhibits the lowest overall uncertainty among the compared models, highlighting its robustness and reliability for real-time 3D interactive segmentation. It allows for the estimation of uncertainty by calculating the variance among the ensemble of segmentations. The uncertainty information provided by FastSAM-3DSlicer is particularly valuable for guiding user interactions, as it identifies regions where additional user prompts may be required to improve segmentation accuracy. By focusing on areas of high uncertainty, users can iteratively refine the segmentation results, leading to more precise and reliable delineations of anatomical structures or regions of interest. § CONCLUSION We presented FastSAM-3DSlicer, a 3D Slicer extension designed to facilitate real-time, interactive segmentation of 3D volumetric medical images with SAMs. Our extension integrates SAM-Med2D, MedSAM, SAM-Med3D, and FastSAM-3D, offering a user-friendly interface that automates the handling of user prompts and segmented masks within the familiar 3D Slicer environment. Our extension demonstrates superior performance in terms of inference time, particularly with the FastSAM-3D model, which achieves low inference times on both CPU and GPU environments. This makes FastSAM-3D highly suitable for real-time applications, reducing the computational burden while maintaining high segmentation quality. Furthermore, the integration of an innovative uncertainty quantification scheme leverages the rapid inference capabilities of FastSAM-3D, providing users with additional information about the reliability of the segmentation results. Overall, by combining computational efficiency, precision, and ease of use, FastSAM-3DSlicer addresses the need for a user-friendly interface for SAMs, thereby enhancing decision-making processes and improving patient outcomes in medical settings. splncs04
http://arxiv.org/abs/2407.12735v1
20240717165542
EchoSight: Advancing Visual-Language Models with Wiki Knowledge
[ "Yibin Yan", "Weidi Xie" ]
cs.CV
[ "cs.CV" ]
EchoSight: Advancing Visual-Language Models with Wiki Knowledge Yibin Yan Weidi Xie § ABSTRACT Knowledge-based Visual Question Answering (KVQA) tasks require answering questions about images using extensive background knowledge. Despite significant advancements, generative models often struggle with these tasks due to the limited integration of external knowledge. In this paper, we introduce EchoSight, a novel multimodal Retrieval-Augmented Generation (RAG) framework that enables large language models (LLMs) to answer visual questions requiring fine-grained encyclopedic knowledge. To strive for high-performing retrieval, EchoSight first searches wiki articles by using visual-only information; subsequently, these candidate articles are further reranked according to their relevance to the combined text-image query. This approach significantly improves the integration of multimodal knowledge, leading to enhanced retrieval outcomes and more accurate VQA responses. Our experimental results on the Encyclopedic VQA and InfoSeek datasets demonstrate that EchoSight establishes new state-of-the-art results in knowledge-based VQA, achieving an accuracy of 41.8% on Encyclopedic VQA and 31.3% on InfoSeek. § INTRODUCTION Visual Question Answering (VQA) addresses the challenge of enabling machines to understand and respond to questions about visual content, typically images or videos. Broadly, this task can be divided into two categories: standard VQA <cit.> with questions that can be answered directly from the visual content, for example, counting objects, identifying colors, or recognizing simple actions, which rely solely on commonsense and information present in the image; and knowledge-based VQA <cit.> requiring additional context or external knowledge, such as historical facts, detailed object properties, or specific situational contexts not evident in the visual content. Addressing these two types of questions presents different challenges for VQA systems. Questions that draw answers directly from visual content demand robust image understanding capabilities, encompassing tasks such as object detection, scene recognition, and spatial reasoning. Conversely, questions requiring external knowledge call for additional mechanisms to access and integrate information from external sources. In this paper, we focus on the latter type of visual question answering, by building a retrieval-augmented multimodal system that enables searching an external knowledge base for more nuanced understanding and accurate responses. Despite the recent accomplishments in developing Visual-language Models (VLMs) <cit.>, knowledge-based VQA remains challenging. This complexity primarily stems from two aspects.
(i) Existing VLMs struggle to adequately encode all essential knowledge, due to its limited model capacity, and infrequent inclusion of encyclopedic, long-tail information in their training data <cit.>. (ii) The visual component of the questions often provides limited help in addressing the queries, as establishing a meaningful connection between entity knowledge and visual attributes can be difficult. For example, an image of a church alone does not reveal information about its construction date. In this paper, we introduce EchoSight, a novel retrieval-augmented vision-language system designed for knowledge-based visual question answering. EchoSight employs a dual-stage search mechanism that integrates a retrieval-and-reranking process with the Retrieval Augmented Generation (RAG) paradigm. Initially, the system performs a visual-only retrieval from an external knowledge base, to effectively narrow the knowledge search space, only focusing on candidates that are closely align with the visual context of the reference image. In the subsequent multimodal reranking stage, the system refines the candidates ranking by incorporating both the reference image and the textual query. This approach guarantees that the selected results are pertinent not only visually, but also contextually to the multimodal query. After acquiring the most relevant information through this coarse-to-fine grained search, our model generates the precise answer to the posed question. Overall, we present three contributions: First, we propose a multimodal retrieval-augmented generation framework, termed as EchoSight, that enables LLMs to answer visual questions that require fine-grained encyclopedic knowledge; Second, we adopt a retrieval-and-reranking scheme to improve retrieval performance, specifically, it first searches images with visual-only information, and then conduct a fine-grained multimodal reranking on the candidates; Third, we conduct thorough experiments on both Encyclopedic VQA <cit.> and InfoSeek <cit.> benchmarks, EchoSight demonstrates state-of-the-art performance on both benchmarks, significantly outperforming existing VLMs or other retrieval-augmented architectures. § METHOD This section starts with the problem formulation of retrieval-augmented VQA (Sec. <ref>), followed by detailing the retrieval-and-reranking module in EchoSight (Sec. <ref>), and finally the answer generation module (Sec. <ref>). §.§ Problem Formulation Given a reference image, and question of free-form texts, our goal is to construct a visual question answering system, that can benefit from the access of an external knowledge base. In our case, this is a million-scale dataset of entity articles and their corresponding images from Wikipedia webpage, i.e., ℬ = {(a_1, I_1), …, (a_n, I_n)}. The overall architecture of our proposed method, EchoSight, is illustrated in Figure <ref>. It consists of four main components: an external knowledge base (KB), a retriever, a reranker, and an answer generator. (i) The process begins with the retriever, which utilizes the reference image to filter and extract relevant KB entries with similar images; (ii) Next, the reranker takes these candidate entries and employs their textual contents to have them reranked, based on their relevance to both the reference image and the textual question; (iii) Finally, the reranked KB entries are fed into the answer generator to produce the final answer. 
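The sketch below shows one way the three stages could be wired together; vis_encode, qformer_score, llm and the kb structure are hypothetical placeholders used purely to illustrate the data flow, not EchoSight's actual implementation:

import numpy as np

def echosight_answer(ref_image, question, kb, vis_encode, qformer_score, llm,
                     top_k=20, alpha=0.5):
    # Stage 1: visual-only retrieval by cosine similarity over image embeddings.
    q = vis_encode(ref_image)
    q = q / np.linalg.norm(q)
    sims = np.array([float(q @ e) for e in kb["image_embeddings"]])  # pre-normalised embeddings
    candidates = np.argsort(-sims)[:top_k]

    # Stage 2: multimodal reranking of the candidates' article sections.
    best_score, best_section = -np.inf, None
    for i in candidates:
        for sec in kb["entries"][i]["sections"]:
            score = alpha * sims[i] + (1 - alpha) * qformer_score(ref_image, question, sec)
            if score > best_score:
                best_score, best_section = score, sec

    # Stage 3: retrieval-augmented answer generation with an off-the-shelf LLM.
    return llm(best_section, question)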
§.§ Retrieval and Reranking The goal of this stage is to identify relevant entries from a large-scale external knowledge base using the given reference image and question. We employ a two-stage procedure: first, a visual-only search identifies candidates that are visually similar to the query image. Subsequently, a multimodal reranking process evaluates both visual and textual information to reorder the retrieved entries. This ensures that the most pertinent article entry can be ranked at the top, facilitating efficient and accurate answer generation. Visual-only Search. Given the extensive size of the knowledge base, potentially encompassing millions of image-article pairs, optimizing the efficiency of the image search process is critical. To achieve this, we transform all images into vectors and utilize the cosine similarity metric to assess their proximity to a reference image. S_Ω= { s_i = ⟨v_r/||v_r||·v_i/||v_i||⟩, i= 1, …, n }, where v_r= Φ_vis(I_ref) and v_i = Φ_vis(I_i) denote the visual embedding for reference image and database image, respectively, computed by a pre-trained visual encoder. We employ the FAISS library <cit.> for vector search, and keep the top k best-matched images and their corresponding wiki article entries from the knowledge base, i.e., ℰ_v = {(a_1, I_1), …, (a_k, I_k)}, k ≪ n. Multimodal Reranking. After initially filtering the candidates based on visual similarities, the reranker module integrates both textual and visual inputs from the multimodal query and the top k retrieved Wikipedia article entries. This stage aims to prioritize entries that are most pertinent to the question, ensuring the articles with the highest relevance are ranked at the top. Specifically, we employ the Q-Former <cit.> architecture to extract multimodal information from the reference image and textual question, resulting in 32 query tokens. z_m^i = Q-Former( I_ref, Q )^i, where z_m^i denotes the ith query token embedding of the reference image I_ref and textual question Q. On the candidates side, we break each of the wiki articles into sections, with each section prefixed by the article's title, for example, a_i = {sec_1^i, sec_2^i, …, sec_p^i }, and further encode them with Q-Former's text encoder. We initialize the Q-Former with BLIP-2's weights and fine-tune it with all parameters trainable except for the visual encoder. The reranking score for each section is calculated as follows: S_r^sec = max_1 ≤ i ≤ N_q( sim(z_m^i, z_s^sec) ), where S_r^sec is the reranking score for section "sec", determined using the Q-Former's Image-to-Text Correspondence (ITC) method. This method computes the highest pairwise similarity between each multimodal query token embedding z_m^i from the reference image and question pair, and the [CLS] token embedding of a Wikipedia article section z_s^sec. N_q denotes the number of query tokens. In the final step of multimodal reranking, the reranker combines the visual similarity score from the previous stage and the reranking score into a weighted sum: sec_vl = argmax_sec∈ℰ_v( α· S_v^sec + (1 - α) · S_r^sec), where sec_vl refers to the highest-ranked entry section produced by the reranker, α is a weight parameter that balances the visual similarity score S_v^sec and the reranking score S_r^sec. Note that S_v^sec is calculated in the visual-only search stage using the best-matched image from the wiki entry to which sec belongs. Reranker Training. Here, we implement hard negative sampling within a contrastive learning framework.
Specifically, hard negative samples are selected from examples that are visually similar yet contextually distinct, i.e., examples for which the initial visual-only retrieval was unsuccessful. With such training, the reranker is forced to select the most relevant articles for the multimodal queries, enhancing the overall accuracy and effectiveness of the system <cit.>. The training objective of the reranker is given as follows: ℒ = - logexp(max_1 ≤ i ≤ N_qsim(z_m^i, z_s))/∑_j=1^Nexp(max_1 ≤ i ≤ N_qsim(z_m^i, z_s^j)), where z_s denotes the positive section embedding, and N is the total number of samples including both the positive and the negatives. §.§ Answer Generation with LLMs Once the relevant entries are identified from the knowledge base, large language models (LLMs) will integrate such information to answer the questions, i.e., A = LLM(sec_vl, Q), where the off-the-shelf LLM acts as an answer generator, sec_vl denotes the retrieved wiki article section, and Q refers to the target question. Compared to existing generative VLMs, such retrieval-augmented generation (RAG) <cit.> equips the model with the essential contextual knowledge, improving the system's ability to handle complex questions that demand precise and detailed knowledge. § EXPERIMENTS §.§ Datasets Encyclopedic VQA <cit.> contains 221k unique question and answer pairs each matched with (up to) 5 images, resulting in a total of 1M VQA samples. These images are derived from iNaturalist 2021 (iNat21) <cit.> and Google Landmarks Dataset V2 (GLDv2) <cit.>. The visual questions focus on fine-grained categories and instances. There are single-hop and two-hop questions that require different reasoning steps in the dataset. Notably, the dataset provides a controlled knowledge base with 2M Wikipedia articles with images, ensuring all the questions can be answered if the correct Wikipedia article is given. For our experiments on E-VQA, we consider the single-hop questions using the provided 2M knowledge base. InfoSeek <cit.> comprises 1.3M visual information-seeking questions, covering more than 11K visual entities from OVEN <cit.>. InfoSeek provides a knowledge base with 100K Wikipedia articles with images. The questions of the dataset are diverse and the answers can be referenced from Wikipedia. There are a human-labeled 8.9K collection and an automatically generated 1.3M collection in InfoSeek. Due to the unavailability of ground truth for the test split, we report evaluation results on the validation split. We note that the original authors did not publicly release their knowledge base; we therefore filter a 100K knowledge base from E-VQA instead. We will release ours to the community for reproduction and future comparison. §.§ Metrics To evaluate the performance of our proposed retrieval-augmented QA model, we focus on two aspects, namely, retrieval and question answering. The retrieval results gauge the system's capability to accurately retrieve relevant articles from a large-scale multimodal knowledge base, while the question answering results assess its holistic effectiveness in providing precise and correct answers to visual questions. Metrics for Retrieval. We utilize the standard metric Recall@K. Recall@K assesses whether the correct article entries appear among the top k retrieved results. An article is considered correct only if its URL exactly matches the target URL, making our retrieval evaluation more stringent and precise compared to methods that only match the content of answers to the retrieved articles.
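Under this convention, Recall@K reduces to an exact URL match within the top-K retrieved entries; a minimal reading of the metric (our own illustration, not the authors' evaluation code) is:

def recall_at_k(retrieved_urls, target_url, k):
    # An article counts as correct only if its URL exactly matches the target URL.
    return float(target_url in retrieved_urls[:k])

# Averaged over the evaluation set, e.g. for K = 20:
# recall20 = sum(recall_at_k(r, t, 20) for r, t in zip(all_retrieved, all_targets)) / len(all_targets)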
Metrics for Question Answering. Here, we follow the conventional practise, use different metrics depending on the considered datasets. For E-VQA dataset <cit.>, we use the BEM (Balanced Evaluation Metric) score <cit.>, while for the InfoSeek dataset <cit.>, we employ the VQA accuracy <cit.> and Relaxed Accuracy <cit.>. These metrics are chosen to align with the evaluation settings specific to each dataset. §.§ Implementation Details The Retriever. We compute the visual embedding for the reference images and images from database with a frozen Eva-CLIP vision encoder (Eva-CLIP-8B) <cit.>. The pooled last-layer embedding are used as the features for computing cosine similarity between images, with FAISS library. The Reranker. The reranking module is initialized with pre-trained BLIP-2 <cit.> weights using the LAVIS Library <cit.>. The number of query tokens N_q is 32 and weighting parameter α is 0.5. Instead of using in-batch contrastive learning, we employ hard negative sampling, where each positive sample is paired with N = 24 negative samples. In practise, a positive sample is constructed using the evidence section text from the corresponding Wikipedia article. While for negative samples, we perform a visual-only search on the reference images. Knowledge base entries with images that fail to match the reference images ranked within the top k are selected as negative samples. During training, we randomly sample sections from these negative entries as well as from the non-evidence sections of the positive entries. Note that, as only E-VQA dataset provides labeled evidence sections for all its training data, we train the reranker on this dataset, and directly use it on InfoSeek in a zero-shot manner. We adopt OneCycleLR <cit.> scheduler, with AdamW <cit.> optimizer. We use learning rate 10^-4, batch size 6, and the negative samples per example being 24. For training the reranker module with 900K examples, 150K steps require 40 hours on 1 Nvidia A100 (80G). The Answer Generator. We use Mistral-7B-Instruct-v0.2 <cit.> as the question generator for E-VQA and LLaMA-8B-Instruct <cit.> for InfoSeek. §.§ Results In this section, we present experimental results on the E-VQA and InfoSeek benchmarks. On Retrieval. The experiment results for the retrieval tasks across different configurations are detailed in Table <ref> and Table <ref>. The CLIP I-T setting involves using CLIP for cross-modal similarity search, from the reference image to the Wikipedia article. The articles are represented as CLIP embedding of their title and descriptions. The `Google Lens' refers to the approach used in Encyclopedic VQA <cit.>, where Google Lens indexes billions of images from the Internet, not limited to Wikipedia, to find and return the most closely matching images along with an entity prediction. The best corresponding knowledge base entry identified by Google Lens is considered the result of its retrieval effort. Given its vast image index and capability to associate images with relevant entities, Google's retrieval can be viewed as a upperbound in E-VQA retrieval. From both tables, we can draw the observation that, our proposed reranking module has shown to significantly improve the retrieval performance, for example, it improves Recall@1 from 13.3% to 36.5% on E-VQA benchmark, 45.6% to 53.2% on InfoSeek, largely bridging the gap towards the `Google Lens' upperbound. VQA Results. As shown in Table <ref>, we present the comparison to state-of-the-art approaches on final VQA results. 
For methods that do not utilize an external knowledge base or retrieval system, we present the results of large language models (LLMs), and multimodal large language models (MLLMs). The vanilla method refers to scenarios where only the textual question of the multimodal query is provided. The performance of multimodal-LLMs, including BLIP2 <cit.> and LLaVA <cit.>, are reported in Wiki-LLaVA <cit.>, where both the reference image and question are simultaneously processed. For methods with external knowledge bases, we compare with Wiki-LLaVA <cit.> and DPR_V+T^* <cit.>. It is clear that our proposed EchoSight (w. reranking) has outperform the prior works by a significant margin, even approaching the upperbound results reported by original E-VQA <cit.> benchmark, where two giant models are adopted, i.e., `Google Lens' for knowledge retrieval, and PaLM as answer generation. §.§ Ablation Study For all experiments in ablation study, we use the E-VQA dataset. On the retrieval side, we conduct the following ablation studies: (i) to compare different visual backbones in retrieval module, (ii) to study the impact of reranking scope and (iii) to investigate the importance of hard negative sampling. On final answer generation, we carry out ablation studies on: (i) the impact of different language models, (ii) to experiment the answer generator under oracle retrieval results. [1]The E-VQA accuracy is tested with Mistral-7B and InfoSeek accuracy is tested with LLaMA3-8B. Impact of vision backbones. We assess the effect of different visual backbones on the retrieval stage, as detailed in Table <ref>. We compare the Vision Transformer (ViT) from EvaCLIP-8B <cit.> with OpenAI's CLIP-ViT-Large <cit.>. The EvaCLIP-8B's ViT achieves a recall@20 of 48.8%, outperforming the CLIP-ViT-Large, which scored 32.2%. This substantial improvement is likely due to EvaCLIP-8B's larger parameter size and more extensive training dataset, allowing it to develop more robust representations. While the initial Recall@1 shows a modest difference between the two models (10% for CLIP-ViT-Large and 13% for EvaCLIP-8B), adopting our multimodal reranking significantly boosts performance, increasing Recall@1 to 23.8% and 36.5% for CLIP-ViT-Large and EvaCLIP-8B, respectively. This results in a marked 13% difference, underscoring the effectiveness of our approach, especially when combined with a more capable backbone. Impact of reranking scope. The reranking scope refers to the number of candidates considered by the reranker module. Involving a higher reranking scope means calculating more embeddings during the reranking process. The reranking scope, which can be any number up to k, i.e., the total number of candidates returned by the retriever. As shown in Table <ref>, our reranker can consistently improve the results with increasing scope from Top-5 to Top-500, though it will significantly increase the computation cost, resulting in diminishing returns. Considering the balance of efficiency and quality, the scope of 20 candidate entries is used when reporting our final VQA accuracy on E-VQA and InfoSeek. Impact of hard negative sampling. The training strategy of the reranker module is critical for its performance. Rather than using randomly selected, irrelevant article entries, we employ a hard negative sampling during training, i.e., top negative candidates returned by the retriever. This approach ensures the reranker to be trained on more demanding examples, thereby improving its performance and robustness. 
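As a sketch of this hard-negative training objective (the contrastive loss with max-over-query-token similarity given in the Reranker Training paragraph), the snippet below is illustrative only; the tensor shapes, the use of cosine similarity for sim(·,·), and the helper name are our assumptions rather than the released code.

    import torch
    import torch.nn.functional as F

    def reranker_contrastive_loss(query_tokens, pos_section, neg_sections):
        # query_tokens: (N_q, d)    multimodal query-token embeddings from the Q-Former
        # pos_section:  (d,)        [CLS] embedding of the ground-truth evidence section
        # neg_sections: (N_neg, d)  [CLS] embeddings of hard-negative sections
        candidates = torch.cat([pos_section.unsqueeze(0), neg_sections], dim=0)  # (1 + N_neg, d)
        sims = F.normalize(query_tokens, dim=-1) @ F.normalize(candidates, dim=-1).T
        logits = sims.max(dim=0).values            # max over the N_q query tokens, one logit per section
        target = torch.zeros(1, dtype=torch.long)  # the positive section sits at index 0
        return F.cross_entropy(logits.unsqueeze(0), target)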
The effects of different training strategies on reranking performance are detailed in Table <ref>. Consistency of EchoSight across LLMs. The choice of LLMs influences the RAG paradigm greatly <cit.>. We compare PaLM <cit.>, GPT-4 <cit.>, Mistral-7B-Instruct-v0.2 <cit.> and LLaMA3-8B-Instruct <cit.> as answer generators. Specifically, we provide them with same reranking results (KB entries). As shown in Table <ref>, the accuracy results are calculated with BEM <cit.> following <cit.>. The results indicate that though better language models yield better scores, the overall performance across all tested language models is quite stable. This validates our method adapts well across modern language models. Effect of oracle retrieval. Oracle retrieval indicates that the correct Wikipedia entry is always provided for generating the answer. As shown in Table <ref>, LLMs can almost flawlessly answer the question if oracle retrieval is provided. § RELATED WORK §.§ Visual Question Answering Visual Question Answering (VQA) is the task of answering open-ended questions based on an image with natural language response. VQA tasks can be divided into two types: standard VQA and knowledge-based VQA. Standard VQA. Datasets such as VQAv1 <cit.>, VQAv2 <cit.>, and VizWiz <cit.> focus on questions that can be answered by analyzing the image content alone, without external information. These datasets typically cover questions about objects in the image, their attributes and other perceptual details that can be inferred from the visual input. Knowledge-based VQA. The task involves questions that require information not present in the image. Pioneering datasets like OK-VQA <cit.> and A-OKVQA <cit.>, which include questions needing knowledge beyond what is visually depicted, necessitate the integration of external world knowledge and commonsense reasoning. However, both datasets focus primarily on commonsense and general world knowledge, often neglecting more specialized or encyclopedic facts, and they do not provide external knowledge bases. To fill this gap, datasets such as Encyclopedic VQA (E-VQA) <cit.> and InfoSeek <cit.> have been developed. These datasets utilize Wikipedia as a knowledge base to provide detailed and specific information on various topics. E-VQA covers a wide range of topics like animals, plants, and landmarks, while InfoSeek focuses on info-seeking questions about various visual entities. These datasets require models to recognize visual entities and accurately retrieve and use relevant information from external sources <cit.>. §.§ Vison Language Models for VQA Advances in Vision Language Models (VLMs) such as GPT-4V <cit.>, Gemini <cit.>, LLaVA <cit.>, and Phi-3-Vision <cit.> have demonstrated impressive capabilities in standard Visual Question Answering (VQA) tasks, exhibiting strong image analysis and accurate response generation <cit.>. However, these models encounter difficulties with knowledge-based VQA due to issues such as hallucination, where responses are generated based on nonexistent content and internal biases <cit.>, and the lack of efficient knowledge retrieval mechanisms which hampers the integration of external knowledge bases for reasoning <cit.>. Recently, research has shifted towards retrieval-augmented generative systems. While Retrieval-Augmented Generation (RAG) has been well-established in Large Language Models (LLMs), its application in VLMs remains underexplored. 
Systems like KAT <cit.>, REVIVE <cit.>, and REVEAL <cit.> show promise for questions involving commonsense reasoning, yet they struggle with complex, knowledge-intensive tasks like Encyclopedic VQA (E-VQA) and Infoseek. These limitations stem from their restricted ability to fetch and incorporate precise information from extensive encyclopedic knowledge bases <cit.>. EchoSight addresses these issues through a novel two-stage process combining visual-only retrieval and multimodal reranking. This approach significantly enhances the alignment between retrieved textual knowledge and visual content, leading to improved performance on benchmarks such as Encyclopedic VQA and InfoSeek. § CONCLUSION In this paper, we introduced EchoSight, a novel retrieval-augmented vision language system designed to address the challenges of knowledge-based Visual Question Answering (VQA). Our approach enhances the retrieval capabilities of multimodal models through a two-stage process: initial visual-only retrieval followed by a multimodal reranking stage. This methodology significantly improves the alignment between visual and textual information, leading to more accurate and contextually relevant answers. Experimentally, we have conducted thorough ablation studies to demonstrate the effectiveness of our proposed components. While comparing to existing state-of-the-art approaches on the Encyclopedic VQA and InfoSeek datasets, EchoSight demonstrates significant performance improvement, with an accuracy of 41.8% on E-VQA and 31.3% on InfoSeek. The success of EchoSight highlights the importance of efficient retrieval processes and the integration of multimodal information in enhancing the performance of large language models (LLMs) in knowledge-based VQA tasks. § LIMITATIONS Although our proposed EchoSight demonstrates impressive performance on Knowledge-based VQA like Encyclopedic-VQA and InfoSeek, several limitations must be acknowledged. EchoSight's performance is heavily dependent on the quality and comprehensiveness of the underlying knowledge base used for retrieval. Domain-specific knowledge not covered in these databases may lead to sub-optimal performance in specialized queries. In addition, the retrieval process, especially when involving multimodal reranking of candidates, introduces significant computational overheads, making it less suitable for real-time applications. These overheads can impact the efficiency and response time of the system. Future work focusing on improving the quality of knowledge bases and mitigating computational overheads remains to be explored. § DATASET DETAILS In this section, we provide more details of in the Dataset we used. We summarize the statistics of in Table <ref>. §.§ E-VQA We focus only on Single-hop questions of E-VQA <cit.>, namely Templated, Automatic, and Multi Answer questions in the table. §.§ InfoSeek And for Infoseek <cit.>, due to the missing entities in the knowledge-base we use, we remove the examples in the dataset. Specifically, 916,385 examples in training split out of 934,048 are kept (98.1%), and 71,335 examples of validation split out of 73,620 are kept (96.9%). Therefore, the results we obtain with our knowledge base are consistent with the dataset's original setting while considering for the limitations of our knowledge base. § QUALITATIVE RESULTS §.§ reranking results Qualitative results of our EchoSight's multimodal reranking are as shown in Figure <ref>. 
§.§ VQA results As shown in Figure <ref>, our EchoSight demonstrates significant improvements in multimodal understanding and generation tasks compared to the state-of-the-art GPT-4V <cit.>. § PROMPT TEMPLATE §.§ E-VQA The prompt we use for the LLM when testing on E-VQA <cit.> is shown as follows: USER: Context: <CONTEXT> Question: <QUESTION> The answer is: §.§ InfoSeek Because InfoSeek <cit.> is evaluated with strict exact-match metrics, we have to consider the format of the prompt so that the generated answer is comparable with the ground truth. We therefore include a one-shot example to keep the answer format consistent; the prompt we use for InfoSeek is: SYSTEM: You always answer the question the user asks. Do not answer anything else. USER: Context: The southern side of the Alps is next to Lake Como. Question: Which body of water is this mountain located in or next to? Just answer the questions, no explanations needed. Short answer is: Lake Como Context: <CONTEXT> Question: <QUESTION> Just answer the questions, no explanations needed. Short answer is:
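For completeness, a small sketch of how these templates might be assembled and passed to the answer generator; the helper name and the schematic generation call are placeholders, not a specific library API.

    def build_prompt(context: str, question: str, infoseek_style: bool = False) -> str:
        if not infoseek_style:
            # E-VQA template
            return f"Context: {context}\nQuestion: {question}\nThe answer is:"
        # InfoSeek template: a fixed one-shot example constrains the answer format
        one_shot = ("Context: The southern side of the Alps is next to Lake Como.\n"
                    "Question: Which body of water is this mountain located in or next to?\n"
                    "Just answer the questions, no explanations needed. Short answer is: Lake Como\n")
        return (one_shot +
                f"Context: {context}\nQuestion: {question}\n"
                "Just answer the questions, no explanations needed. Short answer is:")

    # A = LLM(sec_vl, Q): the assembled prompt is then sent to the instruction-tuned LLM,
    # e.g. answer = llm.generate(build_prompt(top_section_text, question))  # schematic call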
http://arxiv.org/abs/2407.13285v1
20240718083708
Collaborative real-time vision-based device for olive oil production monitoring
[ "Matija Šuković", "Igor Jovančević" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.ET", "I.2.10" ]
Collaborative real-time vision-based device for olive oil production monitoring 1st Matija Šuković Faculty of Natural Sciences and Mathematics University of Montenegro Cetinjska 2, 81000 Podgorica, Montenegro matija.sukovic23@gmail.com 2nd Igor Jovančević Faculty of Natural Sciences and Mathematics University of Montenegro Cetinjska 2, 81000 Podgorica, Montenegro igorj@ucg.ac.me ===================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT This paper proposes an innovative approach to improving quality control of olive oil manufacturing and preventing damage to the machinery caused by foreign objects. We developed a computer-vision-based system that monitors the input of an olive grinder and promptly alerts operators if a foreign object is detected, indicating it by using guided lasers, audio, and visual cues. computer vision, agrifood, high-end IoT device © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. § INTRODUCTION An olive grinding machine is used to extract extra virgin olive oil from freshly harvested olives. An entry-level olive grinder is a small unit targeted for medium to large-sized olive farms. Grinders like these require the fruits to be washed and cleaned from any foreign objects, such as leaves or branches, before entering the machine. To reduce the manual labor of cleaning the olives, these grinders are commonly paired with a washing unit, such as the one shown in Fig. <ref>, which automatically washes the olives and removes leaves and small branches. While it decreases the manual labor, the washing unit indirectly increases the risk of foreign objects, e.g. rocks or small tools and pieces, entering the grinder, due to the reduced manual inspection of olives. Rocks are especially problematic, as they are often picked together with the fruit since olive trees thrive on rocky terrain. They can have similar shape, size, and even color to the fruit, making them hard to spot. Yet if one enters the grinder, the machinery needs to be halted right away as the rock will wear down the blades at best, and jam the machine causing immense damage at worst. In such case, the grinder has to be halted and opened to extract the foreign object(s), which exposes the crushed olives to air, making them unsuitable for extra virgin olive oil and causing them to be discarded. The washing unit is designed in such a way that it allows for an easy visual inspection as the olives are dumped into it, by having a wide input area for the fruits to spread as they are slowly fed to the machine. The operating personnel is expected to perform a manual inspection as they dump freshly harvested olives from bags and boxes. 
We propose to enrich this process with a device that will perform an automatic visual inspection powered by computer vision, reducing strain on workers and decreasing the risk of rocks entering the grinder, thus lessening the chance of expensive damage happening to the machinery and, in general, increasing the quality of the end product. § SOLUTION CONCEPT The concept of the prototype solution, shown in Fig. <ref>, can be thought of as a digital assistant in the olive oil production process. The device is to be installed above the washing unit. It features LED lights and a camera facing down, overlooking the entrance of the machine. The frames captured by the camera are fed to a single-board computer running an Artificial Intelligence (AI) model that detects the rocks. If any are detected, the device fires a series of audio and visual alerts to draw the workers' attention. Most notably, it guides a laser pointer to a detected rock by controlling a pan-tilt head with a laser diode. The laser beam makes it trivial for the workers to locate and extract the rock before it enters the machine. The target hardware to run the AI model on is either a Raspberry Pi or a device from the NVIDIA Jetson series, depending on how demanding the final model will be. The goal is for the end product to be low-cost and thus affordable by small olive farms that use entry-level grinders. With this in mind, we will focus on tuning and optimizing the model to run well on the cheaper Raspberry Pi, resorting to a more expensive Jetson device if necessary. For the camera of the prototype, we chose a Raspberry Pi Camera Module v3. This camera is suitable for our application for several reasons. It has a high-quality sensor capable of capturing images at a resolution of 12MP (megapixel). It also has auto-focus built-in, which helps in keeping the image sharp no matter how many olives are contained in the washing unit. Lastly, it is compact enough to not get in the way of workers, and it is designed to be easily integrated with embedded devices. For future prototypes we will consider using a 3D camera in order to improve the laser tracking, or fitting the camera to a pan-tilt motor to give the device added functionalities, such as surveillance of the machinery while the washing unit is not operating. § RELATED WORK The problem we are tackling in this work is fairly unique in its nature. Foreign object detection in agriculture and food industries is done in controlled environments designed to provide the best possible visibility and clarity of the scene, whereas we are building on top of an existing system which was not optimised for automatic inspection. A solution for a similar use case was proposed by a team from the ICT Research Institute in Korea, where they used deep learning methods to detect foreign objects in almond and green onion flake food processing <cit.>. In their work, the authors present a synthetic method of obtaining a dataset for visual food inspection, where highly spread-out objects are exposed to several light sources, including a light platform which makes objects easily detectable on the bright background. They also trained a deep learning model to perform image segmentation on datasets acquired with the proposed method. The lab conditions created by the authors may be achievable in highly-automated food processing, but for our use case we need to rely on the surrounding conditions as little as possible, since our device will not be working in a controlled environment. 
The task of our model is much more complex because of this, and so we need real-world data as opposed to creating synthetic datasets. The authors haven't taken model optimisation into account since it is not a concern for their use case, but in our work we will need to optimise the model as much as possible, as it is meant to run on low-powered embedded devices. As opposed to image segmentation done by the authors, where objects are detected on pixel-level, our model will be an object detector, which is only concerned with finding the bounding box of the object. This information is enough to point a laser to the center of the bounding box, and thus the detected object. § DATA ACQUISITION To develop a supervised AI model capable of detecting foreign objects, we were required to collect data of sufficient quality and amount. For this purpose, we obtained access to an olive grinder during the harvesting season, where we could set up equipment for data acquisition during normal operation of the machine (Fig. <ref>). A wood construction was built to hold a camera above the input of the washing unit. A ring of LED lights was installed around the camera lens, which helped make the foreign objects more apparent and reduced the impact of the day-night cycle to the quality of the data. These lights also proved to be useful to human operators during the manual inspection process. The camera was connected to a single-board computer, which would start taking pictures at a rate of one per second whenever the washing unit was operating. The equipment was installed at the beginning of the harvesting season in late October, and it was collecting data until the end of the season around mid December. In this period, 81 304 images were obtained in total. In order to obtain enough positive samples, foreign objects were often manually tossed in the machine whenever it would be safe to do so. During the first month, these objects were a variety of rocks, debris and tools. It became evident, however, that rocks are the most common naturally occurring foreign object, and so for the remainder of the harvesting season we shifted our focus to acquiring as many samples of rocks as possible. The data acquisition process resulted in two substantial datasets: one for general foreign object detection in olive oil production, and one focusing on rock detection for the same use case. The latter was cleaned and annotated, resulting in a dataset counting 1878 images containing 5245 annotations. An example image of this dataset is shown in Fig. <ref>. Due to the high frequency of picture taking, the raw dataset contains a large amount of highly correlated images. The first step in cleaning the dataset was to reduce the amount of similar images, as these would not be beneficial for model training, and could even cause overfitting to the train data. A fingerprinting algorithm called 'dHash'<cit.> was applied to the images. This algorithm generates hash codes from images by crushing them down to a size of 9x8, applying grayscale transformation, and using the differences between adjacent pixels as the input to a hash function. The resulting hashes will be similar to each other if the respective source images are similar. A similarity threshold of 98% was used when grouping similar images. For each group, one or more images were manually selected for preservation depending on the quality of their content, before deleting the rest. During the annotation process, further cleaning was applied by manually deleting some images. 
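For reference, the fingerprinting step described above can be sketched as follows (an illustrative re-implementation; the exact resize order, bit packing, and threshold handling in our pipeline may differ):

    from PIL import Image

    def dhash(path: str, hash_size: int = 8) -> int:
        # Shrink to (hash_size + 1) x hash_size, convert to grayscale, then encode the
        # sign of each horizontal intensity difference as one bit of the fingerprint.
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
        px = list(img.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                left = px[row * (hash_size + 1) + col]
                right = px[row * (hash_size + 1) + col + 1]
                bits = (bits << 1) | int(right > left)
        return bits

    def hash_similarity(h1: int, h2: int, hash_size: int = 8) -> float:
        # 1 minus the normalized Hamming distance; pairs above ~0.98 are grouped as near-duplicates.
        return 1.0 - bin(h1 ^ h2).count("1") / float(hash_size * hash_size)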
For example, if a particular rock appeared in too many consecutive frames without moving or changing position, some of those frames would be discarded to prevent overfitting. We applied tiling to the training dataset to improve our model's performance on detecting small objects. Every image in the dataset was split into patches sized 640x640 each (Fig. <ref>). This size was chosen because it is commonly used in pre-trained models, and is thus the best image size to use when fine-tuning these models. Our original images are sized 1920x1080, and tiling produced six patches for each image. The patches overlap on the y axis by roughly 37%, but there is no overlap on the x axis due to the width of the original image being divisible by the width of the patch. This is not ideal, since having some overlap is useful to prevent loss of data in cases where an annotation is located at the line on which the image splits. This is a rare occurrence in our case, since our target objects are so small, it is unlikely that they will be split by tiling. The overlap on the y axis should ideally be less then what we have. As we work on training new models, we will experiment with various patch sizes to see what gives us the best results. We will need to take into account the inference time, as increasing the number of patches will have a negative effect on the time it takes the model to process a single image. At inference-time, we are using SAHI (Slicing Aided Hyper Inference)<cit.> to tile the input image, run inference on each patch, and combine the results into the final predictions for the original image. In order to provide our model more examples of target objects, various transformations were applied to the original dataset. This is a common practice when working with computer vision tasks, where the existing dataset is expanded by applying transformations such as flips, rotations, color tweaks and so on. Using the Albumentations library<cit.>, a data augmentation pipeline was created through which images were passed. The pipeline contains various augmentations, each having a set probability for being applied. Images can be flipped horizontally and vertically, randomly rotated up to 10 degrees, or randomly rotated by 90 degrees. Some transformations were always applied, to ensure that the pipeline never generates exact duplicates. For example, the pipeline would always apply a random shift in brightness and contrast, as well as CLAHE (Contrast Limited Adaptive Histogram Equalization) <cit.>. Lastly, it would apply an elastic transform to the image, which generates a random displacement vector for every pixel. All the mentioned transformations give results diverse enough to allow us to generate randomly augmented samples on demand, by passing the original images through the pipeline as many times as it is required. (Fig. <ref>) § OBJECT DETECTION MODEL With the dataset ready, work began on training an AI model for rock detection. This proved to be a difficult task for several reasons. The images we are working with (Fig. <ref>) are high-resolution and contain many small objects. Since the targeted rocks can be so similar to the background olives, our dataset has a very low signal-to-noise ratio, making it difficult for models to extract features that are specific to the target object. One way we combat this problem is by adding an amount of background images (images that contain no target objects) to the dataset, which helps the model better understand the subtle differences between olives and rocks. 
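As a concrete illustration of the augmentation pipeline described earlier in this section, the following Albumentations sketch mirrors the listed transforms; the probabilities and parameter values are illustrative assumptions, not the exact configuration we used:

    import albumentations as A

    augment = A.Compose(
        [
            A.HorizontalFlip(p=0.5),
            A.VerticalFlip(p=0.5),
            A.Rotate(limit=10, p=0.5),          # random rotation of up to 10 degrees
            A.RandomRotate90(p=0.5),
            A.RandomBrightnessContrast(p=1.0),  # always applied
            A.CLAHE(p=1.0),                     # always applied
            A.ElasticTransform(p=1.0),          # always applied
        ],
        bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
    )

    # Usage on one annotated image (image: HxWx3 array, bboxes in YOLO format):
    # out = augment(image=image, bboxes=bboxes, class_labels=class_labels)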
Another issue comes from the fact that our target objects can be very small compared to the image size. This poses a problem for state-of-the-art computer vision models since they are all based on Convolutional Neural Networks (CNN). The main building block of these neural networks is the convolutional layer, where a kernel is passed over an image, performing a convolution operation, in essence summarising several adjacent pixels into a single value (Fig. <ref>). With each passage through a convolutional layer, more abstract features are extracted from the image. The problem with small objects in large images is that their features can get lost after passing the first few convolutional layers of the neural network, being overwhelmed by the features of the background. In order to surpass this issue, on top of tiling the dataset, additional changes needed to be applied to the model architecture. For the development of the prototype, we settled with the YOLO version 8<cit.> architecture. This is a state-of-the-art computer vision model which strikes a good balance between complexity and accuracy. When compared to some popular alternatives, it is slightly less accurate while being much faster and less demanding. This is important for our use case, as the trained model will have to run on a low-power embedded device, so it needs to be simple and well optimised. YOLO offers several model types, ranging from 'nano' to 'extra large'. The models get progressively more complex with slower inference time, but higher overall performance. For this prototype we use a modified version of the 'large' model, with an added P2[P2 refers to the number of strides of the feature pyramids used in the model.] head built specifically to increase performance on detecting smaller objects. § LASER TRACKING In this section, we describe the hardware and software behind the laser tracking feature of the prototype. Once the AI model detects a rock amongst the olives, the device needs to communicate that information to the nearby operators as soon as possible in a simple and straightforward manner, so that the threat can be safely handled. Flashing lights and warning sounds can attract the operators' attention, but they are not helpful when it comes to pinpointing the location of the foreign object. Our model, being an object detector, has that information, so the device ought to pass it to the human operators somehow. A display showing the location of the threat would suffice, but this approach generates some friction during the workflow, as operators now need to analyze the display, and then map what they see to the real world. To reduce this friction, we decided to implement a laser tracking system, where the device can use a laser head to point to the rock. This approach ensures that dangerous objects can be removed as quickly as possible, with the least amount of strain on the operator. Suffice it to say that mounting a controllable laser head to our device would pose a serious threat to the retinas of the surrounding workers. Eyesight damage is a real health hazard when working with lasers, so we needed to ensure the safety of the users of our device. Lasers with a rated power of less than 5mW are deemed safe for the human eye, as the reflex to blink will kick in before the laser can damage the eye's retina. The problem is that these lasers are simply not powerful enough for our purpose, since their beam is not sufficiently visible under the strong lights of our device. 
We needed to utilize more powerful diodes, but that meant we had to deal with a health hazard. If left as is, workers would be required to wear protective goggles during the operation. This would hinder their ability to perform their tasks, so we never considered it as a viable option. We solved this problem by mounting an adjustment lens to the laser diode. With the added lens, we were able to scatter the laser beam enough that it no longer poses a threat, while still being sufficiently visible. For the chassis of the pan-tilt motor controlling the laser, we modified and repurposed a CAD (Computer Aided Design) model (Fig. <ref>) from an open source DIY (Do It Yourself) project called LaserCat, featuring a toy designed to entertain cats by shooting a laser around a room for them to chase. The design utilizes two micro servo motors to allow for the pan-tilt motion of a head containing a laser diode. Two major modifications were applied to the original CAD model. Firstly, the author designed the laser head with a 5mW laser diode in mind, which can be commonly found in a 6mm form factor. We are using a more powerful diode for our project, so the head had to be modified to be able to house a larger 9mm diode. Secondly, in order to make calculations easier when performing laser tracking, the general geometry of the pan-tilt motor had to be modified. For example, in the original model the position of the laser diode is offset in relation to the rotation axis of the pan servo (Fig. <ref>). It would require performing additional calculations to mitigate this and precisely point a laser to a target. Instead of doing that, the CAD model was modified so that the laser diode lays exactly on the pan motor's rotation axis. With this modification, the origin of the laser beam can be mapped to a single point in space, no matter the angle of the two servo motors. The laser head was 3D printed and mounted to the same construction that was used for data acquisition. Using this setup we were able to develop and test the laser tracking software, which takes a point on an image captured by the camera and calculates the angles the two servos need to take for the laser to shine on the target point. When performing calculations, the camera sensor is treated as the origin point of the Cartesian coordinate system (Fig. <ref>). We treat the image taken by the camera as a 2D plane at a distance H from the camera sensor. We physically measure the value of H as the height of the camera sensor in relation to the input of the washing unit. The laser's origin point is located at (x_l,y_l,z_l). The final prototype will be constructed in such a way that z_l=0, and values of x_l and y_l are precisely known. Also known are the coordinates of the target point on the image plane, marked as (x_t,y_t). With these values known, we can derive the values for φ - the pan angle, and θ - the tilt angle. We project the point of the laser origin onto the image plane. We then calculate d - the distance between this projection and the target point: d = √((x_t - x_l)^2 - (y_t - y_l)^2) Now we can calculate φ and θ: φ = arccos(x_l + x_t/d), θ = arctan(d/H) These values need to be normalised to a value between -1 and 1, which corresponds to the servos' rotation range of -90^∘ to 90^∘ respectively. Depending of the quadrant which the target point occupies, the values may also need to be inverted. This method of determining the servo angles is precise if we assume that the target is at height H from the device. 
This, of course, will not be the case most of the time. A rock can be deep in the washing unit or high on top of a pile of olives, and there is little we can do to estimate the exact height at which it is located from the 2D image alone. We can tackle this problem by either modifying the hardware or the software of our device. From the hardware side, we could add a second camera to obtain binocular vision, granting us the ability to estimate depth of the environment the sensors are observing. At a higher cost, we could instead mount a reputable 3D camera which would precisely calculate the distance from the lens to every visible point. This remains an option for the future, but for the prototype solution we opted to correct errors in laser tracking with software. Once the initial angles are determined and the two servos take their positions, the laser turns on and the camera takes a picture of the scene. On the picture, the laser dot can be located with little difficulty. Once the dot is detected, we can calculate the error caused by the depth variance and adjust the tilt servo accordingly. § MODEL EVALUATION AND PERFORMANCE We evaluate the performance of the trained models primarily based on their Average Precision (AP)[AP is a widely used metric in computer vision projects, which combines metrics such as precision, recall and IoU (Intersection over Union) into a single value that gives a good indication of overall model performance.]. This metric alone, however, will not be enough to properly evaluate the model's performance, because in this case we value some aspects of the model more than the others. For example, we want the recall metric to be as high as possible, which measures the ratio between the number of correctly detected objects and number of occurring objects in total. The model is preferred to make some false alarms while detecting most of the rocks, rather than being precise and not raising an alert unless it is absolutely certain that it spotted a rock. Thus, while it is important to keep the AP metric as high as possible, we will also be focusing on keeping the recall metric high by tuning the model to prefer scoring well on this metric over precision. At the time of writing this paper, our best performing model scored 85.3% precision with 52.4% recall on never-seen-before test images. These results suggest that our device is able to detect more than half of the appearing rocks with high certainty, raising fairly little false alarms. Upon investigation of the results, it is evident that the smaller the rock appears to be, the less likely it is to be detected. In the following months we will keep working on training new models, ensuring that the performance gets even better for the final prototype. Once we install the prototype, it will be able to gather more data during its use, ensuring that the model keeps evolving and improving as time passes. § ACKNOWLEDGMENT We would like to thank Lučka Olive for collaborating with us and granting us unhindered access to their machinery. This work would not have been possible without their open-minded attitude towards new technologies and willingness to take risks in order to modernise and improve the process of olive oil production. This work was partially supported by Erasmus+ Project No. 2022-1-PL01-KA220-HED-000088359 entitled "The Future is in Applied Artificial Intelligence" (FAAI) <cit.>, which aims to join together Higher Education Institutions (HEI) and businesses. 
In this context, this project has to bridge the current AI skills gap, build an AI ecosystem of key partners, promote AI business opportunities, and support the creation of internship programs in AI. The FAAI project activities focus on HEI trainers, undergraduate and postgraduate students, and business managers. Furthermore, the project is promoting among businesses and young people the enormous opportunities provided by AI to build the ecosphere of modern society. The given work was performed within the framework of the FAAI work package 4 entitled "Artificial Intelligence framework for training in HE" and presents a real use case that is offered for studying applied AI. 00 almonds Son, G.J., Kwak, D.H., Park, M.K., Kim, Y.D., & Jung, H.C. (2021). U-Net-Based Foreign Object Detection Method Using Effective Image Acquisition System: A Case of Almond and Green Onion Flake Food Process. Sustainability, 13(24). dHash David Oftedal (2014), Difference Hash - An algorithm for comparing images based on their visual characteristics (2014), <https://01101001.net/differencehash.php> SAHI Akyon, F., Altinuc, S., & Temizel, A. (2022). Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection. 2022 IEEE International Conference on Image Processing (ICIP), 966-970. albumentations Buslaev, Alexander & Parinov, Alex & Khvedchenya, Eugene & Iglovikov, Vladimir & Kalinin, Alexandr. (2018), Albumentations: fast and flexible image augmentations. CLAHE Stephen M. Pizer, E. Philip Amburn, John D. Austin, Robert Cromartie, Ari Geselowitz, Trey Greer, Bart ter Haar Romeny, John B. Zimmerman, Karel Zuiderveld, Adaptive histogram equalization and its variations, <https://www.sciencedirect.com/science/article/pii/S0734189X8780186X> YOLOv8 Jocher, G., Chaurasia, A., & Qiu, J. (2023). Ultralytics YOLO (Version 8.0.0) [Computer software]. <https://github.com/ultralytics/ultralytics> FAAI The Future is in Applied Artificial Intelligence (FAAI). (2022-2024), <https://faai.ath.edu.pl>
http://arxiv.org/abs/2407.13478v1
20240718125610
Empowering 5G PRS-Based ISAC with Compressed Sensing
[ "Esen Ozbay", "Pradyumna Kumar Bishoyi", "Marina Petrova" ]
eess.SP
[ "eess.SP" ]
Empowering 5G PRS-Based ISAC with Compressed Sensing § ABSTRACT To enable widespread use of Integrated Sensing and Communication (ISAC) in future communication systems, an important requirement is the ease of integration. A possible way to achieve this is to use existing communication reference signals for sensing, such as the 5G Positioning Reference Signal (PRS). Existing works have demonstrated promising results by using the PRS with classical signal processing techniques. However, this approach suffers from a loss of SNR due to the sparse resource allocation. In this work, we improve upon existing results by combining the 5G PRS with compressed sensing methods. We demonstrate that our method achieves better noise robustness compared to the existing works and has super-resolution properties, making it an ideal choice for range-Doppler map generation and target detection even in noisy environments. Compressed sensing, integrated sensing and communication, 5G PRS, 6G. § INTRODUCTION Integrated sensing and communication (ISAC) has become one of the most promising technologies of the upcoming sixth-generation (6G) wireless systems for accommodating the diverse requirements of advanced services like autonomous driving, digital twins, smart factories, and extended reality (XR) <cit.>. The ISAC technology empowers existing cellular base stations (BSs) with sensing capabilities, allowing cellular wireless networks to provide not only high communication data rates but also accurate sensing and precise positioning services. The ISAC-enabled BS exploits the mutual benefit between sensing and communication <cit.>. On the one hand, the sensing functionality offers assistance to communications in terms of beam training and beam tracking, i.e., sensing-assisted communication. On the other hand, in communication-assisted sensing, existing communication signals are reused for sensing to gather prior information about the surrounding targets. The primary focus of our work is to study and design communication-assisted sensing by leveraging the existing 5G communication signals and making minimal modifications to the communication infrastructure, which remains an interesting open research problem. In this direction, there are a few recent studies that investigate the feasibility and suitability of using current 5G New Radio (NR) orthogonal frequency division multiplexing (OFDM)-based communication signals for sensing purposes <cit.>. For example, Liu et al. in <cit.> demonstrate that pilot signals possess unique benefits over data signals, primarily due to their strong auto-correlation characteristics. This makes them particularly suitable for sensing applications. Following this, the authors in <cit.> investigated the sensing performance of two 5G standard-compliant downlink pilot signals, i.e., the channel state information reference signal (CSI-RS) and the demodulation reference signal (DMRS). The numerical simulations demonstrate that in regions with high signal-to-noise ratio (SNR), the CSI-RS pattern offers greater accuracy in range estimation compared to DMRS. However, for velocity estimation, both patterns yield the same level of accuracy. Further, in <cit.>, the authors conducted an analysis of the sensing performance of the positioning reference signal (PRS). They demonstrated that the PRS signal is both feasible and effective, especially when compared to the DMRS and CSI-RS pilot signals.
The PRS offers the following advantages: (i) its 31-bit long Gold sequence provides good auto-correlation property. (ii) There are four types of comb structures which allow for flexible time-frequency resource mapping. (iii) The different comb structure enables interference-free BS multiplexing, allowing multiple BSs to perform their sensing operation simultaneously. This motivates us to analyze the PRS-based sensing for ISAC system. One of the main challenges for sensing parameter extraction using PRS is due to the sparsity in both the time and frequency domains. Applying the conventional 2D fast Fourier transform (FFT), i.e., the periodogram, to process the echo signal received from the sparse PRS structure results in a decrease in SNR and an increase in range-Doppler ambiguities. A suitable signal-processing tool to overcome this limitation is compressed sensing (CS), which can perform the estimation of sparse signals from under-sampled measurements <cit.>. The performance improvement of CS compared to the periodogram is demonstrated in <cit.>, where CS was used with a stepped carrier OFDM signal to improve the output SNR. One of the popular CS algorithms is approximate message passing (AMP), which is an iterative method for solving the ℓ_1-minimization problem <cit.>. Further, in <cit.>, a complex extension to AMP, namely CAMP, is proposed to improve the sensing performance of an automotive radar operating at 77 GHz. The authors show that the CAMP algorithm significantly improved the SNR of the range-Doppler map while having a complexity similar to that of the periodogram. In this paper, we study the potential of CAMP <cit.> to enhance the performance of PRS-based sensing in the context of an ISAC system. We consider a ISAC system where the BS performs monostatic sensing by transmitting PRS signal and collecting the echo. In our analysis, we include all four PRS comb patterns from the 5G standard, namely 2, 4, 6, and 12. Increasing the comb size results in a sparser signal pattern in the time-frequency domain. In contrast to the periodogram-based estimation used in <cit.>, we apply the CS-based algorithm to analyze the sensing echo and produce a range-Doppler map, which is then used for target detection. Different from <cit.>, we deploy CAMP in a system based on existing 5G infrastructure. The contributions of this paper are as follows: * We analyze the performance of a 5G-based ISAC system utilizing PRS waveform as a sensing signal. We employ the CAMP algorithm <cit.> to process the received echo signal. The approach effectively reduces the degradation of the SNR caused by the non-continuous time-frequency PRS comb structure. This aspect is crucial to maintaining good sensing performance in terms of target detection. * Through simulations, we demonstrate that the CAMP-based scheme significantly improves the SNR of the range-Doppler map. It also outperforms the periodogram-based scheme in terms of accurately distinguishing targets that are in close proximity to each other. Notation: The following notation is used in this paper: lowercase letters represent scalars; uppercase letters represent two-dimensional signals; and , , represent the Discrete Fourier Transform in the subcarrier and OFDM symbol axes, respectively. Similarly, and represent the inverse Fourier transforms. The rest of this paper is organized as follows: in Section <ref>, we explain the system model. Section <ref> describes the proposed CAMP algorithm-based sensing. 
Section <ref> discusses performance results and Section <ref> concludes this paper. § SYSTEM MODEL We consider a 5G-compliant BS operating in the millimeter-wave (mmWave) band, located in an urban environment. The BS acts as monostatic radar for target detection. We assume that the BS has full-duplex capability and can cancel out any self-interference at the receiver end. The BS transmits an OFDM grid consisting of N_sy consecutive OFDM symbols with N_sc subcarriers (SCs) each. The transmitted signal carries both PRS signal and data signal, which are scheduled orthogonal in time in order to reduce interference between them. The set of resource elements (REs)[RE is the smallest physical resource in 5G NR, corresponding to one subcarrier in frequency domain and one OFDM symbol in time domain] allocated for PRS is denoted as 𝒫. The echo signal received from the target(s) is analyzed to create a range-Doppler map. Additionally, the map is utilized to detect the targets, enabling subsequent target identification. A block diagram of the transceiver module is given in Fig. <ref>. Our primary objective is to effectively analyze the echo signal to provide a range-Doppler map that is free from any ambiguity. §.§ The 5G PRS The PRS comb structure has four configurable parameters. The comb size, K_c, the time gap, g, the repetition factor, F, and the number of resource blocks, N_RB <cit.>. The 5G standards define a `comb pattern for each value of K_c. The PRS resource allocation is obtained by repeating these comb patterns in the time-frequency grid. Note that the density of PRS symbols in the OFDM grid is equal to 1/K_c. The total bandwidth spanned by the PRS is configured by the parameter N_RB. The comb pattern is repeated 12*N_RB/K_c times in the frequency axis in order to span 12*N_RB SCs. The number of OFDM symbols spanned by the PRS is configured by g and F. The comb pattern is repeated a total of F times in the time axis, with K_c(g-1) symbols left blank between each repetition. Assuming that g=1 (i.e., the comb patterns are placed consecutively), the PRS signal spans a total of K_cF OFDM symbols. A sample OFDM grid for K_c=12, g=1 is depicted in Fig. <ref>. §.§ The Transmitted Signal The signal transmitted by the BS can be represented by S = [ s_0,0 s_0,(N_sy-1); ⋮ ⋱ ⋮; s_N_sc,0 s_(N_sc-1),(N_sy-1); ], where s_n,m represents a QPSK symbol carried on the n-th subcarrier of the m-th OFDM symbol. The corresponding time-domain transmitted signal is s(t)=∑_m=0^N_sy-1∑_n=0^N_sc-1s_n,me^-j2π nΔ ftg(t-mT_s), where Δ f is the OFDM subcarrier spacing (SCS), g(t) is the pulse shape, T_s=1/Δ f+T_CP is the OFDM symbol duration including the cyclic prefix (CP), and T_CP is the CP duration. Note that, as mentioned above, only the REs with (n,m)∈𝒫 are allotted for PRS. §.§ The Received Echo Signal The channel experienced by the transmitted signal is the sum of echoes caused by the targets and the clutter. We model each target as a group of reflection centers <cit.>, making up a total of L reflection centers. A reflection center with radar cross-section (RCS) σ_l, distance R_l from the BS and radial speed v_l causes a radar echo with delay τ_l, Doppler shift f_D,l and attenuation α_l. These are given by τ_l=2R_lc, f_D,l= 2v_l/λ_c α_l=√(G_t G_r λ_c^2σ_l/(4π)^3 R_l^4), where c is the speed of light, λ_c is the carrier wavelength, and G_t, G_r are the BS transmit and receive antenna gains, respectively. Note that only the line-of-sight (LoS) path between the BS and the target is considered in this analysis. 
Any non-line-of-sight (NLoS) path caused by the targets is assumed to be negligible. After OFDM demodulation, the effective channel caused by target l over the (n,m)-th RE is <cit.> h^(l)_n,m≜α_l e^-j2π f_cτ_le^-j2π n Δ f τ_le^j2π mT_sf_D,l. Then, the received signal y_n,m can be expressed as y_n,m = s_n,m(h^cl_n,m + ∑^L_l=1 h^(l)_n,m) + w_n,m = s_n,mh_n,m + w_n,m, where w_n,m∼𝒞𝒩(0, N_0) is AWGN, and h^cl_n,m is the channel created by the clutter. §.§ Channel Estimation and Range-Doppler Estimation In order to construct a range-Doppler map from the received echo signal y_n,m, ∀ (n,m)∈𝒫, the BS must first estimate the effective channel over each OFDM symbol, h_n,m. The expression of the estimated channel, ĥ_n,m is ĥ_n,m = y_n,m/s_n,m, (n,m)∈𝒫 0, otherwise. Considering channel estimate in (<ref>), typically, a periodogram is computed to generate the range-Doppler map. The output after performing 2D-FFT over the received signal Y is = ||^2 = |{{H}}|^2. One of the major issues with applying the periodogram over PRS-based sensing is that PRS's sparse time-frequency structure results in a lower SNR gain. This leads to artefacts in the range-Doppler map and eventually increases the target misdetection rate. We illustrated this effect in our simulation results in Section <ref>. In order to address this issue, we employ CS-based processing, which has superior anti-noise capabilities and also can accurately estimate the sensing parameters from the sparse time-frequency structures. § CS-BASED PROCESSING OVER 5G PRS In this section, we present the 2D-CAMP Algorithm, which is used to obtain the range-Doppler map. The 2D-CAMP is an iterative solution to the ℓ_1-minimization problem, and yields a sparse estimate of the range-Doppler map from an incomplete OFDM grid <cit.>. The 2D-CAMP algorithm is designed to yield a sparse output. In other words, most elements of the output of the 2D-CAMP are equal to zero. This makes the 2D-CAMP particularly fitted for sensing applications in sparse channels, such as the mmWave channel. Another property of the 2D-CAMP is that it is designed to be used with incomplete measurements. This makes 2D-CAMP suited for ISAC scenarios where there are no contiguous blocks of time-frequency resources that can be exclusively allocated to sensing, such as the case of PRS-based sensing. The 2D-CAMP is described in Algorithm <ref>. §.§ Description of the 2D-CAMP The inputs to the CAMP algorithm are as follows: the two-dimensional channel estimate signal Ĥ_N_sc× N_sy, number of iterations N_iter, the tunable thresholding parameter τ_CAMP, and the PRS resource allocation set 𝒫. At each iteration t, the 2D-CAMP algorithm has the following steps: * Calculate a noisy estimate _t from the previous residual E_t-1, * Calculate the noise variance σ_t as the median of _t, * From _t, obtain a sparse estimate _t by performing element-wise soft thresholding, where , soft(x, λ_th) = x|x|-λ_th/|x|, |x| > λ_th 0, |x| ≤λ_th, * Calculate a new residual from the current estimate. §.§ Selection of the Parameter τ_CAMP τ_CAMP is the only tunable parameter of the 2D-CAMP and determines the sensitivity of the algorithm. For higher values of τ_CAMP, the soft thresholding step is more selective, leading to better noise robustness but a higher chance of missing weak targets. Lower values of τ_CAMP are more sensitive to both noise and to weak targets <cit.>. To achieve a desired false alarm probability P_FA, τ_CAMP can be selected as τ_CAMP = -√(ln(P_FA)). 
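To make the iteration concrete, a simplified NumPy sketch of the 2D-CAMP steps listed above is given below. It is a plain iterative-thresholding reading of the algorithm: the Onsager correction term of the full CAMP recursion is omitted, the range/Doppler transform is represented schematically by an orthonormal 2D FFT pair, and the variable names are ours. We also read the threshold rule as τ_CAMP = √(-ln P_FA), which reproduces τ_CAMP ≈ 4 for the P_FA = 10^-7 used in the evaluation.

    import numpy as np

    def soft_threshold(x, lam):
        # Complex soft thresholding soft(x, lam) applied element-wise.
        mag = np.abs(x)
        return np.where(mag > lam, (mag - lam) / np.maximum(mag, 1e-12) * x, 0.0)

    def camp_2d(H_hat, prs_mask, p_fa=1e-7, n_iter=30):
        # H_hat:    N_sc x N_sy channel-estimate grid, zero outside the PRS set P
        # prs_mask: 1 on REs carrying PRS, 0 elsewhere
        tau = np.sqrt(-np.log(p_fa))              # threshold tied to the false-alarm rate (assumed reading)
        X = np.zeros_like(H_hat, dtype=complex)   # sparse range-Doppler estimate
        E = H_hat.copy()                          # residual in the time-frequency domain
        for _ in range(n_iter):
            X_noisy = X + np.fft.fft2(E, norm="ortho")               # noisy estimate from the residual
            sigma = np.median(np.abs(X_noisy))                       # noise level as in step 2
            X = soft_threshold(X_noisy, tau * sigma)                 # element-wise soft thresholding
            E = prs_mask * (H_hat - np.fft.ifft2(X, norm="ortho"))   # new residual on observed REs only
        return np.abs(X) ** 2                     # range-Doppler map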
§.§ Computational Complexity The 2D-CAMP algorithm contains a double FFT at each iteration. This is the most complex operation in the algorithm, therefore complexity of the 2D-CAMP is determined by the complexity of the 2D-FFT, 𝒪(NlogN · MlogM). This is the same as the complexity of the periodogram <cit.>. § PERFORMANCE EVALUATION In this section, we evaluate the performance of CAMP algorithm on the 5G PRS-based sensing system. The simulation scenario is depicted in Fig. <ref>. We have considered a street named Kackertstrasse, which is located in Aachen, Germany. The BS is located at the beginning of the street and five vehicles are moving in lane. The 2D coordinates and the velocities of the BS and each vehicle are denoted in the Fig. <ref>. The vehicles were modelled using the reflection center model, where each vehicle is modelled as a group of reflection centers with different RCS values and positions <cit.>. Whether a particular reflection center is visible to the BS is determined according to the incidence angle. Each vehicle can have either one or two reflection centers that are visible to the BS. Further, we simulate the clutter with the MATLAB Ray Tracing tool by using a 3D-model of the street[The 3D street model was taken from openstreetmap.org.]. The BS is operating in FR 2 band and transmits PRS signal configured with the parameters K_c=12, N_RB=135, g=1, F=28. This yields a total bandwidth of 200 MHz and a CPI of 3 ms. For the detection of targets (vehicles), the Constant False Alarm Rate (CFAR) method with P_FA=10^-7 is used. Note that the power levels in the range-Doppler maps are normalized so that the global maximum in each map is equal to 1, and all the figures depict the relative power. The rest of the system parameters are given in Table <ref>. §.§.§ De-Noising Fig. <ref> depicts the range-Doppler maps obtained with the 2D-FFT (the periodogram) and the 2D-CAMP (τ_CAMP=3.4), respectively. Brighter colors represent higher received power for a given pair of range and speed values. Comparing the two figures, we observe that the 2D-CAMP yields a noiseless range-Doppler estimate of the targets. This is because the 2D-CAMP always produces sparse estimates, as opposed to the periodogram, which makes no assumptions about the map. In addition, we observe that the power level received from weaker targets is higher in 2D-CAMP when compared to the periodogram. For example, the relative power for target C increases from -46.7 dB to -43.2 dB in 2D-CAMP, which brings it above the detection threshold of CFAR. Furthermore, when comparing the two schemes, we can see that the 2D-CAMP improves the power level while maintaining a lower noise level and achieves higher SNR. This results in the generation of a noiseless range-Doppler map, leading to improved target detection. In Fig. <ref>, the CFAR with the periodogram output can only detect four of the nine reflection centers that are present. In contrast, the CFAR with the 2D-CAMP output can detect seven of them. §.§.§ Super-Resolution Capabilities In Fig. <ref>, we assess the super-resolution capabilities by increasing the FFT sizes of the periodogram. Specifically, we set N=5N_sc and M=5N_sy, and compare the resulting output with that of the 2D-CAMP (τ_CAMP=4) scheme. It can be seen that the 2D-CAMP produces a range-Doppler map that exhibits improved distinction between target peaks in comparison to the periodogram. 
For example, targets C and D produce two clearly distinguishable clusters when processed with the 2D-CAMP, whereas the periodogram produces peaks that are not clearly distinguishable. This is because, with each iteration of the 2D-CAMP, the thresholding step eliminates energy between the peaks and produces two distinct clusters. Conversely, the map obtained with the periodogram yields peaks that intersect with each other, making it difficult to distinguish close-by targets. This clearly shows that the super-resolution capability of the 2D-CAMP technique makes it well-suited for high-precision localization applications. §.§.§ The Effect of the Comb Size To fully understand the effect of time-frequency sparsity, in Fig. <ref>, we compare the range-Doppler maps obtained with the 2D-CAMP (τ_CAMP=3.4) for different K_c values. It can be observed that the range-Doppler maps are very similar for the strong targets (A, B, C and D), even though they are constructed from signals with varying SNR values. However, as the value of K_c increases, the received signal from the weaker target (E) becomes less distinct from the noise level. Adjusting the value of τ_CAMP can effectively compensate for this issue, as it enhances the sensitivity of the 2D-CAMP to targets. Nevertheless, it is important to acknowledge that lowering τ_CAMP too much can cause the appearance of ghost targets. § CONCLUSION In this paper, we studied the 2D-CAMP, a CS algorithm, for obtaining the range-Doppler map using the 5G PRS as a sensing signal. We observed that the 2D-CAMP yielded a less noisy range-Doppler map than the periodogram method. Furthermore, the 2D-CAMP showed a better capability to distinguish targets that are close to each other, while having the same asymptotic complexity as the periodogram. Given the limited resources available for sensing in ISAC scenarios, the 2D-CAMP is a promising signal processing scheme for 5G-based sensing. In the future, we will explore different clutter suppression methods in combination with the CAMP to mitigate the effects of multipath. § ACKNOWLEDGMENT This work was partially funded under the Excellence Strategy of the Federal Government and the Länder under grant "RWTH-startupPD 441-23".
http://arxiv.org/abs/2407.13227v1
20240718072546
Solving the Model Unavailable MARE using Q-Learning Algorithm
[ "Fei Yan", "Jie Gao", "Tao Feng", "Jianxing Liu" ]
eess.SY
[ "eess.SY", "cs.SY" ]
a]Fei Yanfyan@home.swjtu.edu.cn, a]Jie Gaogjie303@163.com, a,b]Tao Fengsunnyfengtao@163.com, c]Jianxing Liujx.liu@hit.edu.cn [a]School of Information Science and Technology, Southwest Jiaotong University, Chengdu, Sichuan, PR China [b]National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Chengdu, China [c]Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, PR China Modified algebraic Riccati equation (MARE); model unavailable; Q-learning (QL) algorithm. § ABSTRACT In this paper, the discrete-time modified algebraic Riccati equation (MARE) is solved when the system model is completely unavailable. To achieve this, firstly a brand new iterative method based on the standard discrete-time algebraic Riccati equation (DARE) and its input weighting matrix is proposed to solve the MARE. For the single-input case, the iteration can be initialized by an arbitrary positive input weighting if and only if the MARE has a stabilizing solution; nevertheless a pre-given input weighting matrix of a sufficiently large magnitude is used to perform the iteration for the multi-input case when the characteristic parameter belongs to a specified subset. Benefit from the developed specific iteration structure, the Q-learning (QL) algorithm can be employed to subtly solve the MARE where only the system input/output data is used thus the system model is not required. Finally, a numerical simulation example is given to verify the effectiveness of the theoretical results and the algorithm. § INTRODUCTION The Riccati equation constitutes an essential component within the framework of Linear Quadratic Regulator (LQR) and has been extensively investigated <cit.>. Nevertheless, the practical considerations of imprecise detection, packet losses and stochastic disturbances cannot be disregarded, which will introduce significant uncertainties into estimation of target state. Kalman filter and optimal estimation are two typical techniques to address this problem, which are closely linked to a modified algebraic Riccati equation (MARE) that was initially derived decades ago <cit.>. Since then, the MARE has gained compelling attention owing to its broad applications in Kalman filtering <cit.>, intermittent observations <cit.>, network synchronization <cit.> and optimal estimator <cit.>, etc. Additionally, the MARE has been further broadened to systems that are impacted by external disturbances constrained by input saturation <cit.> and discrete-time mean-field systems characterized by input delays<cit.>. It has been established that the stability characteristics of such systems are intimately linked to the stabilizing solution of MARE. Considerable amount of researches have concentrated on the solution of MARE. In the context where delectability is not assumed, the uniqueness of the almost stabilizing solution has been discussed in <cit.>. In an effort to further tackle this problem, the framework of cone-invariant operators has been employed, yielding an explicit, necessary, and sufficient condition for the existence of a mean-square stabilizing solution <cit.>. Subsequent contributions have focused on the explicit characterization of the solution. An analytic solution for the homogeneous MARE for single-input systems <cit.> and a closed-form solution in terms of closed-loop poles locations <cit.> have been deduced, respectively. 
However, inherent requirement of solving a set of Linear Matrix Inequalities (LMIs), which requires accurate system matrices, fails to adapt these methods to model unavailable scenarios. In scenarios where the system model is unavailable, QL algorithm <cit.> has been employed to resolve such problems. Subsequently, it has been implemented in multi-agent discrete-time graphical games <cit.>, optimal tracking control <cit.> and optimal output regulation <cit.>. In <cit.>, the QL algorithm is developed to solve a single DARE, thereby facilitating the computation of the optimal feedback gain. Then the conditions for the consensus of multi-agent system is determined and the consensus is achieved when system model is completely unavailable. However, to best of our knowledge, the development of a model unavailable algorithm to solve the MARE when system model is completely unavailable remains an unsolved problem. In this paper, we aim to develop a new iteration method to solve the MARE, and then use it to propose a model unavailable algorithm based on the QL algorithm. In <cit.>, the authors rewrite the MARE as a standard DARE with specific constraint and thus facilitating an iterative method. However, the initial conditions to the algorithm still hinge on the LMI problem and the iteration can only be started by trial and error, which is not suitable to the model unavailable scenario. In current study, we will address this problem by proposing a new iterative algorithm to solve the MARE. Specially, in the single-input case, we show that iteration can be started by an arbitrary positive input weighting; in the multi-input case, we reduce the selection of the input weighting matrix within the entire matrix space to a single sufficient large scalar. Based on this, the QL algorithm can be employed to solve the MARE when system matrices are completely unavailable. The structure of this paper is as follows. The problem is established in section 2. New iterative method to solve MARE is presented in section 3. In section 4, a model unavailable method based on QL algorithm is developed. Finally, simulation is given in section 5 and the conclusion is drawn in section 6. Notations: λ _i^u(A) denotes the unstable eigenvalues of matrix A. λ _max(A) denotes the maximum eigenvalue of matrix A. ℝ^n × m denotes matrix space. I_n denotes n-dimensional identity matrix. Matrix A ≻(≽) B means matrix A-B is positive (semi-) definite. A ≺(≼) B means matrix A-B is negative (semi-) definite. § PROBLEM FORMULATION Consider the following discrete-time MARE X = A^TXA - γA^TXB(B^TXB + R)^ - 1B^TXA + Q where A and B are system matrices, the characteristic parameter γ∈ (0,1), the input weighting matrix R is positive definite and the state weighting matrix Q is semi-positive definite. Throughout this paper, it is assumed that (A,B) is stabilizable and (A,√(Q)) is detectable. Then, it is well known that the MARE (<ref>) has a stabilizing positive definite solution when the characteristic parameter γ is larger than a critical value γ _c <cit.> which is bounded by 1 - 1/max_i|λ_i^u(A)|^2 ≤γ_c ≤1 - 1/∏_i |λ_i^u(A)|^2. In literature <cit.>, the MARE can be simply solved by the LMI technique and iterative method only when the system matrices are precisely obtained. However, this paper aims to solve the MARE when the system matrices A and B are completely unavailable. 
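As a point of reference before the new iteration is introduced, the short sketch below evaluates the eigenvalue bounds in (<ref>) and the residual of the MARE for a candidate solution, assuming only standard NumPy routines; the example system is hypothetical and purely illustrative.

```python
import numpy as np

def gamma_c_bounds(A):
    """Eigenvalue bounds on the critical parameter gamma_c."""
    mags = np.abs(np.linalg.eigvals(A))
    unstable = mags[mags > 1.0]
    if unstable.size == 0:
        return 0.0, 0.0                     # no strictly unstable modes: both bounds vanish
    lower = 1.0 - 1.0 / np.max(unstable) ** 2
    upper = 1.0 - 1.0 / np.prod(unstable ** 2)
    return lower, upper

def mare_residual(X, A, B, Q, R, gamma):
    """Residual of the MARE; zero (up to round-off) at a solution X."""
    G = A.T @ X @ B @ np.linalg.inv(B.T @ X @ B + R) @ B.T @ X @ A
    return A.T @ X @ A - gamma * G + Q - X

# Illustrative (hypothetical) open-loop-unstable single-input system:
A = np.array([[1.2, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
print(gamma_c_bounds(A))   # one unstable mode at 1.2, so lower = upper = 1 - 1/1.44
```

Both routines above require exact knowledge of the system matrices A and B.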
Therefore, a new iterative method should be proposed which is especially suitable for developing a model unavailable algorithm only using the input/output date of the dynamic system. § SOLVING MARE BY A NEW ITERATIVE METHOD In this section, we will propose a new iterative method to solve the MARE. For the single-input case, the iterative method can be initialized by an arbitrary positive input weighting R=r>0 if and only if the characteristic parameter γ∈ (1 - (∏_i |λ _i^u(A)|^2)^-1,1), i.e., the MARE has a stabilizing solution. For the multi-input case, we restrict to the case of γ∈ (γ,1) with γ being an estimate of γ_c, which facilitates us to propose the model free algorithm in the next section. It should be pointed out that the estimate γ=1-(∏_i |λ _i^u(A)|^2)^-1 for the single-input case. §.§ Single-Input Case For the single-input case, the control input matrix B=b∈ℝ^n, and the input weighting matrix R=r reduced to a positive scalar. The following lemma is required for the theoretical development. Consider the following DARE X = A^TXA - A^TXb(b^TXb + σ)^ - 1b^TXA + Q, where (A,B) is stabilizable and (A,√(Q)) is detectable. Defining that β = b^TXb/σ, then the scalar β converges to its minimum as σ→ +∞ and the infimum is given by β_c = ∏_i |λ_i^u(A)|^2-1. Proof. First, we show that X/σ is non-increasing as σ→∞. Consider the following quadratic performance index J_σ(x(0)) = x_0^TX_σx_0 = ∑_k = 0^∞[x^T(k)Qx(k) + u^T(k)σu(k)]. Suppose 0 < σ_1 < σ_2, then we will obtain J_σ_1^*(σ_2x(0)) = x_0^T(σ_2X_σ_1)x_0 = ∑_k = 0^∞[σ_2x^T(k)Qx(k) + σ_2u^T(k)σ_1u(k)] > ∑_k = 0^∞[σ_1x^T(k)Qx(k) + σ_1u^T(k)σ_2u(k)] ≥ x_0^T(σ_1X_σ_2)x_0 = J_σ_2^*(σ_1x(0)), where X_σ_i(i = 1,2) is the solution of the following DARE X_σ_i = A^TX_σ_iA - A^TX_σ_ib(b^TX_σ_ib + σ_i)^ - 1b^TX_σ_iA + Q. Since this is true for any x_0∈ℝ^n, the inequality σ_2X_σ_1 > σ_1X_σ_2 holds, that is, (X_σ_1/σ_1) > (X_σ_2/σ_2). This indicates that b^TX_σb/σ decreases with respect to σ. Due to the fact that b^TX_σb/σ > 0, the lower bound β _c exists as σ→∞. Then, consider the following scalar δ= √(σ/σ+ b^TXb) = √(1/1 + b^TX/σb), now we can see that the value δ reaches its maximum as b^TXb/σ reaches its minimum, i.e., σ→∞. In the view of Lemma 5 in <cit.>, the scalar δ has a supremum δ_c =lim_σ→∞ √(σ/σ+ b^TXb) = 1/∏_i |λ_i^u(A)|. Solving β _c from equation (<ref>), we will have β_c= lim_σ→∞ b^TXb/σ= ∏_i |λ_i^u(A)|^2-1. For the single-input MARE X = A^TXA - γA^TXb(b^TXb + r)^ - 1b^TXA + Q, there must exist a positive value ω_γ and a positive definite matrix X_γ such that X_γ = A^TX_γA - A^TX_γb(b^TX_γb + ω_γ)^ - 1b^TX_γA + Q, ω_γ = 1/γr + 1 - γ/γb^TX_γb. Proof. For any given positive initial value ω_0, we substitute ω_0 into the following standard algebraic Riccati equation X = A^TXA - A^TXb(b^TXb + ω)^ - 1b^TXA + Q, which yields a stabilizing solution X_0. If the pair (ω_0,X_0) satisfies equation (<ref>) and (<ref>) already, then take ω_γ = ω_0 and X_γ = X_0, the issue is addressed. Otherwise, we consider the following two situations, respectively. 1) The pair (ω_0,X_0) satisfies equation (<ref>) such that ω_0 > 1/γr + 1 - γ/γb^TX_0b. Then, we construct ω_1 which satisfies ω_1 = 1/γr + 1 - γ/γb^TX_0b. Obviously, we will have ω_0≥ω_1 > 0. Similarly, substituting ω_1 into DARE (<ref>) yields the second stabilizing solution X_1. It follows <cit.> that X_0≥X_1 > 0 when ω_0 is larger than ω_1. 
Then, ω_2=r/γ + [(1 - γ )b^TX_0b]/γ can also be constructed and we will have two non-increasing sequences ω_0≥ω_1≥ω_2 > 0 and X_0≽X_1≽X_2≻ 0 for the same reason. Subsequently, two non-increasing sequences {ω_0,ω_1,ω_2, …} and {X_0,X_1,X_2, …} are easily obtained, where ω_i > 0 and X_i = X_i^T ≻ 0. As a result, these two sequences converge to ω_γ and X_γ that satisfies (<ref>). 2) The pair (ω_0,X_0) satisfies equation (<ref>) such that ω_0 < 1/γr + 1 - γ/γb^TX_0b. Following the same development way in situation 1), we obtain two non-decreasing sequences {ω_0,ω_1,ω_2, …}, {X_0,X_1,X_2, …}, where ω_i > 0 and X_i = X_i^T ≻ 0. Suppose sequence ω_i is bounded, these two sequences will definitely converge to ω_γ and X_γ. In the converse, i.e., the sequence ω_i is unbounded, we will have lim_ω_i → + ∞1/γr/ω_i + 1 - γ/γb^TXb/ω_i = lim_ω_i → + ∞1 - γ/γγ _c/1 - γ _c ≤ 1 - γ _c/γ _cγ _c/1 - γ _c = 1. due to the fact that γ∈ (γ _c,1]. This indicates that with the proposed iterative method, there exists ω_i large enough such (<ref>) holds, i.e., the stabilizing solution X_γ to MARE (<ref>) is certainly to be obtained. In <cit.>, the authors have proposed a similar iterative method to solve the MARE. However, the algorithm must started by an inequality coupling with a DARE thus the iteration can only be start by trial and error. On the contrary, Theorem <ref> shows that the initial conditions for the input weighting ω can be an arbitrary given positive scalar to start the iteration. Therefore, the proposed iterative method is more ease of use. It needs to be emphasized that the crucial significance of such an improvement of the initialization of the iteration will establish the solid foundation for proposing the model free algorithm in the following section. §.§ Multi-Input Case For the multi-input case, the input matrix B∈ℝ^n × m is assumed to be full column rank. The results in Lemma <ref> are first extended to multi-input case and we will give an estimate of the critical value γ_c. For simplicity, consider the following multi-input DARE with the input weighting matrix Σ=σ I_m X = A^TXA - A^TXB(B^TXB + σI_m)^ - 1B^TXA + Q, then the scalar β=λ _max(B^TXB/σ) which is obviously non-increasing with respect to σ in the view of Lemma <ref>. Thus the infimum of β exists which is denoted by β_c = lim_σ→∞λ _max(B^TXB/σ). Defining that γ̅= β_c/1 + β_c, then we have the following conclusion. For the DARE X = A^TXA - A^TXB(B^TXB + ωI_m)^ - 1B^TXA + Q and any given γ∈ (γ̅,1), there exists a finite sufficient large value ω_t such that when ω > ω_t, it holds that ωI_m ≻1/γR + 1 - γ/γB^TXB. Furthermore, the MARE (<ref>) has a stabilizing positive definite solution. Proof. First we proof that a finite sufficient value ω_t exists for any given γ∈ (γ̅,1) such that equation (<ref>) holds when ω>ω_t. Then an iterative method similar to single-input case can be developed when ω > ω_t. We know that β = λ _max(B^TXB/ω) is monotonically decreasing with respect to ω and the positive definite matrices Ω=ω I_m and X satisfy (<ref>). As ω→∞, γ̃ converges to its infimum which is γ̅= β_c /(1 + β_c ). Then, for any given γ∈ (γ̅,1), there exists a finite sufficient large value ω_t such that γ_t ∈ (γ̅,γ). When taking ω > ω_t, we will obtain 1 > [(1-γ)/γ](B^TXB/ω). Note that ω_t is sufficient large and ω > ω_t, which means R/ω→ 0. Then we will have 1 > 1/γλ_max[R/ω + (1 - γ)/γB^TXB/ω], which ensures the inequality (<ref>). Then the iterative method is developed as follows. Construct matrix Ω_1 as in <ref> and we have Ω≽Ω_1≻ 0. 
Substituting Ω_1 into DARE (<ref>), we will have the stabilizing solution X_1. It follows <cit.> that X≽X_1≻ 0. Then Ω_2, Ω_3,… can be constructed in the same way and as a result, two non-increasing sequences {Ω_0,Ω_1,Ω_2, …} and {X_0,X_1,X_2, …}, where Ω_i≻ 0 and X_i = X_i^T ≻ 0, will be obtained. Obviously, these two sequences will converge to Ω_γ and X_γ such that X_γ = A^TX_γA - A^TX_γB(B^TX_γB + Ω_γ)^ - 1B^TX_γA + Q, Ω_γ = 1/γR + 1 - γ/γB^TX_γB. Then, the positive definite solution X_γ is obtained. Theorem <ref> is analogous to Theorem <ref> in the previous subsection but with some constraints specifically for multi-input case. In a special case where matrix Ω be with a specialized structure given by Ω=ωI_m, the MARE (<ref>) admits an positive definite solution if and only if γ is larger than the estimate value γ̅ which can be determined via Q-Learning algorithm when input weighting matrix R is characterized by a sufficient large magnitude. Given that ω_t is a finite value for any given γ within the interval (γ̅,1], one can infer that a scalar ω of sufficient large magnitude must exists, which permits the start of iteration. From this perspective, the QL algorithm can be adapted to solve MARE when the system model is completely unavailable. § SOLVING MARE WITHOUT SYSTEM MODEL In this section, the QL algorithm will be employed to derive the solution to MARE (<ref>) when system matrices A and b are unavailable. In Theorem <ref>, it is established that a single standard DARE needs to be considered in each iteration. This suggests that addressing MARE equates to solve a series of DAREs. Then, a model free algorithm can be developed as follows. Define the ℚ-function as ℚ(x(k),u(k)) = [ [ x(k); u(k) ]]^T ℍ[ [ x(k); u(k) ]] = [ [ x(k); u(k) ]]^T [ [ ℍ_xx ℍ_ux^T; ℍ_ux ℍ_uu ]] [ [ x(k); u(k) ]], where the kernel matrix ℍ is partitioned into ℍ_xx = Q + A^TXA, ℍ_uu = r + b^TXb, ℍ_ux = b^TXA. Then, we know that the optimal control is given by u(k)=-ℍ_uu^-1 ℍ_uu x(k). And γ_c can be computed by γ_c = 1-δ_c^2 =lim_r →∞ √(r(ℍ_uu)^-1). Therefore, the critical value γ_c can be obtained by means of δ_c. Besides, the solution X_γ and positive value ω_γ in equation (<ref>) can be expressed by X_γ = ℍ_xx - ℍ_ux^T(ℍ_uu)^ - 1ℍ_ux, Ω_γ = 1 - γ /γℍ_uu + r. In view of this, QL algorithm emerges as a way to address a series of DAREs when system model is not accessible. The variables ω and X can be computed iteratively until conditions in equations (<ref>) are satisfied, i.e., the solution has been acquired. Algorithm 1 Input: A control policy u(k)=-Kx(k)+n(k), where K is optimal feedback gain stabilizes (A,b), n(k) is an exploring noise; a predefined small threshold ε > 0; an any given initial value ω_0>0. Step 1: Let Z(k) = [x^T(k) u^T(k)]^T and solve ℍ^t from Z(k) = [x^T(k)Qx(k)] + r[u^t(k)]^Tu^t(k) + Z^T(k + 1)^tZ(k + 1). Step 2: Obtain the iterative optimal control u^t + 1(k) as u^t + 1(k) = - (ℍ_uu^t)^ - 1ℍ_ux^tx(k). Step 3: Let t ← t + 1, repeat Step 1 and Step 2 until ||ℍ_t + 1 - ℍ_t|| < ε. Step 4: Compute γ_c by γ_c = 1-δ_c^2 =1-r(ℍ_uu)^-1. Step 5: If modified parameter γ>γ_c, take ω_n=ω_0 as the initialized input weighting matrix. Otherwise, the algorithm ends. Step 6: Repeat Step 1 to Step 3 and compute X_n by X_n = ℍ_xx - ℍ_ux^T(ℍ_uu)^ - 1ℍ_ux. Step 7: Obtain the iterative input weighting matrix ω_n+1 as ω_n+1 = 1 - γ/γ(ℍ_uu-ω_n + r) + r. Step 8: Let n ← n + 1, repeat Step 6 to Step 7 until ||ω_n-1 - γ/γℍ_uu + r|| < ε. Step 9: Obtain the solution x_γ=x_n. End. 
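To make the structure of Steps 5 to 8 concrete, the sketch below gives a model-based analogue in which the data-driven estimate of ℍ from Steps 1 to 3 is replaced by a direct DARE solve with SciPy's solve_discrete_are. It is intended only as a cross-check of the iteration logic, not as a substitute for the model-free algorithm; the wrapper name and tolerances are illustrative, and the value quoted in the comment refers to the example reported later in the simulation section.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def solve_mare_iterative(A, b, Q, r, gamma, omega0=10.0, tol=1e-10, max_iter=200):
    """Model-based analogue of Steps 5-8: iterate a standard DARE in omega.

    In Algorithm 1 the DARE solve below is replaced by the Q-learning
    estimate of the kernel matrix H obtained from input/output data.
    """
    omega = omega0
    for _ in range(max_iter):
        X = solve_discrete_are(A, b, Q, np.array([[omega]]))  # DARE with weight omega
        # Fixed-point relation from Theorem: omega = r/gamma + ((1-gamma)/gamma) b^T X b
        omega_next = r / gamma + (1.0 - gamma) / gamma * (b.T @ X @ b).item()
        if abs(omega_next - omega) < tol:
            return X, omega_next
        omega = omega_next
    return X, omega

A = np.array([[1.0, 0.5], [0.0, 1.0]])
b = np.array([[0.0], [1.0]])
X_g, omega_g = solve_mare_iterative(A, b, np.eye(2), r=10.0, gamma=0.8)
# For this example (the one used in the simulation section) the iteration
# converges to omega_gamma ~ 15.44.
```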
As t → + ∞, it is established in <cit.> that the QL algorithm converges, and the iterative kernel matrix ℍ converges to its true value. Provided the input weighting r is chosen sufficiently large, the critical value γ_c in the single-input case can therefore be approximated to an arbitrary degree of precision. Consequently, the existence of a stabilizing solution of the MARE (<ref>) can be verified even when the system matrices A and b are entirely unknown. It is evident that an error is introduced at each iteration step. When implementing the QL algorithm, these errors accumulate over the iterations, potentially preventing convergence, particularly in high-dimensional problems. In light of Theorem <ref>, if the pairs (ω_l,X_l) and (ω_r,X_r) satisfy inequalities (<ref>) and (<ref>), respectively, then X_l≺X_γ≺X_r and ω_l≤ω_γ≤ω_r hold. Hence, it suffices to consider an appropriately selected parameter ω: only a single DARE (<ref>) needs to be solved at each step until condition (<ref>) is met. A dichotomy (bisection) method based on the QL algorithm can also be developed, which ensures higher accuracy. § SIMULATION The system matrices are given by A=[ [ 1 0.5; 0 1; ] ], b=[ [ 0; 1; ] ]. Clearly, matrix A is neutrally unstable. Set Q = I_2, r = 10, and γ=0.8. Taking ω_0 = 10 as the initial value, we compute ω_γ and X_γ by the iterative method in Theorem <ref>; after 7 iterations we obtain X_γ = [ [ 5.5114 5.2161; 5.2161 11.7659; ]], ω_γ = 15.4415 within an error of 10^-7. When the system matrices are unavailable, we run Algorithm 1 to obtain the solution of the MARE (<ref>). Fig. 1 shows the convergence process of γ_c^t, where γ_c^120=0.1653. As shown in Fig. 2, the kernel matrix ℍ converges to its true value with an error of less than 10^-10 at each iteration. For each iteration, we compute X_n and the norm of ω_n - [(1 - γ )/γ ]ℍ_uu - r, which is presented in Fig. 3; the error falls below 10^-12 after 60 iterations. Let Δ(X)=A^TXA -γA^TXb(b^TXb + r)^ - 1b^TXA + Q. The norm of X_n-Δ(X_n) at the nth iteration is presented in Fig. 4 and can be used to identify the solution: when the difference vanishes, X_n=Δ(X_n) and the MARE (<ref>) holds, so the solution is obtained as X_γ=X_n. After 60 iterations, the solution X_γ and the positive value ω_γ are obtained as X_γ = [ [ 5.5114 5.2161; 5.2161 11.7659; ]], ω_γ = 15.4415, with an error of ε = 10^ - 12, which shows that the MARE is solved by Algorithm 1. § CONCLUSION In this study, the modified algebraic Riccati equation (MARE) is solved when the system model is completely unavailable. A novel iterative approach is developed for both the single-input and the multi-input case. Furthermore, an estimate of the critical value γ_c is deduced, which can be obtained via the QL algorithm when the system model is unavailable. For the single-input case, the positive definite solution of the MARE can be obtained starting from an arbitrarily specified positive input weighting. Moreover, a dichotomy method is introduced to guarantee the accuracy of the solution. For the multi-input case, the results are presented for the particular instance Ω=ω I_m: given a positive definite matrix Ω with a sufficiently large parameter ω, the problem is solved whenever the characteristic parameter γ is larger than the pre-determined estimate γ̅. Based on this, a model-free algorithm is developed by which the MARE is solved.
http://arxiv.org/abs/2407.12926v1
20240717180107
Single-Pulse Gamma-Ray Bursts have Prevalent Hard-to-Soft Spectral Evolution
[ "Ian Busby", "Davide Lazzati" ]
astro-ph.HE
[ "astro-ph.HE" ]
Department of Physics, Oregon State University, 301 Weniger Hall, Corvallis, OR 97331, USA Department of Physics, Oregon State University, 301 Weniger Hall, Corvallis, OR 97331, USA § ABSTRACT We analyze the spectral evolution of 62 bright Fermi gamma-ray bursts with large enough signal to noise to allow for time resolved spectral analysis. We develop a new algorithm to test for single-pulse morphology that is insensitive to the specific shape of pulses. Instead, it only checks whether or not there are multiple, isolated, statistical significant peaks in the light curve. In addition, we carry out a citizen science test to assess light curve morphology and spectral evolution. We find that, no matter the adopted assessment method, bursts characterized by single-peaked prompt emission light curves have a greater tendency to also have a consistently decaying peak energy, or hard-to-soft spectral evolution. This contrasts the behavior of multi-peaked bursts, for which the tendency is to have a peak frequency that is not monotonically decreasing. We discuss this finding in the theoretical framework of internal/external shocks, and find it to be consistent with at least some single pulse bursts being associated with particularly high-density environments. § INTRODUCTION Gamma-Ray Bursts (GRBs), the brightest explosions in the present universe <cit.>, have been the subject of intense study for more than 50 years <cit.>. Many discoveries and theoretical breakthroughs have allowed for the establishment of a standard model to interpret the variety of observations. In this model, all bursts are cosmological in origin <cit.> and there are two predominant classes: short bursts that last less than about 2 s and long bursts that last more than approximately 2 seconds <cit.>. All bursts are characterized by the presence of a central engine, either a black hole or a neutron star, that releases a relativistic, possibly magnetized outflow. Long bursts are associated with the core collapse of massive, compact, and fastly rotating stars, their duration set by the accretion time of bound stellar material on an accretion disk surrounding the engine <cit.>. Short bursts, instead, are associated with the merger of compact binary systems either made of two neutron stars or, perhaps, by a neutron star and a black hole <cit.>. In their case, the burst duration is expected to be driven by the viscous timescale of the accreting material. While not all bursts clearly fit into this scenario (e.g., ), it is a model that has had significant success in accounting for most observed properties of both individual bursts and of the ensemble of several thousand observed events <cit.>. One property of bursts that has so far eluded a robust interpretation and classification is their spectral behavior <cit.>. Most bursts are characterized by a non-thermal, broad band spectrum <cit.>. The hardness ratio of short bursts is higher than that of long ones <cit.>. Additional information can be extracted by looking at the evolution of the peak frequency of the spectrum ϵ_peak. This is defined as the photon energy where the ν F(ν) function peaks. Also in this case the behavior of ϵ_peak during the prompt phase of the burst defines two classes. In most events ϵ_peak tracks the luminosity (e.g., ): it grows at the beginning of a pulse, peaks when the luminosity is the highest, and decreases afterwards. If a burst is characterized by multiple pulses, this tracking repeats for as many pulses as are observed. 
This is also true across different bursts, as testified by the existence of the Golenetskii correlation <cit.>. A second class of events, instead, displays a consistently monotonic hard-to-soft behavior, in which the peak energy of the photons decreases with time, irrespective of the burst luminosity and of whether the burst is characterized by a single or multiple pulses (e.g., ). The origin of these two classes is unclear, partly because it can be studied only for the subclass of bright bursts, for which a time-resolved spectral analysis can be carried out. Recently, <cit.> have proposed a possible origin for bursts with hard-to-soft evolution. In their model some bursts take place within the accretion disk of supermassive black holes at the center of their host galaxies, and are therefore embedded in a gas that is many orders of magnitude denser than the interstellar medium. <cit.> show that in that case the burst prompt emission is not produced by either a photospheric or internal shock component but rather by the early onset of the external shock. The pulses of these bursts would inevitably be longer than their separation, merging in a single, possibly undulating observed pulse. Because pulse peak frequency would decrease with time (like in the afterglow), the envelope pulse would display hard-to-soft evolution, even during rebrightenings and in the initial growing phase of the pulse. A clear prediction of their model is therefore that hard-to-soft evolution should be preponderant in single pulse bursts, while the tracking behavior should instead predominantly be observed in multi-pulse bursts. In this paper we study a sample of bright bursts for which time-resolved spectroscopy was carried out looking for evidence of such an imbalance of spectral evolution types. This paper is organized as follows: in Section 2 we present the sample of burst and the techniques used to classify bursts both in the temporal and spectroscopic domains. In Section 3 we present our results and discuss their statistical significance. Finally, in Section 4 we discuss our results and possible strategies to improve their significance. § METHODS §.§ The sample The sample of bursts analyzed in this work was collected from the time-resolved spectra catalogue produced by <cit.>. Of the 81 bursts in the catalogue, 62 were utilized in this study. <cit.> used CSPEC data for 15 of the bursts, which they note has lower temporal resolution with higher spectral resolution, and a duration of around 8000 seconds. The other bursts in the catalogue were created using TTE data. For the creation of light-curves in this study TTE data was used. TTE has a duration of around 300 seconds <cit.>. <cit.> fit each burst on multiple time intervals. Four different spectral models were used: BAND, COMP, SBPL, and power law <cit.>. The Band function (BAND; ) is a four parameter piece-wise function with an exponentially smooth transition between two power laws <cit.>. The smoothly broken power law (SBPL; ) is also a smooth transition between two asymptotic power-law behaviours. In general, SBPL has a parameter that controls the length and smoothness of the transition. In the <cit.> catalogue, this parameter is fixed at 0.3. The Comptonized model (COMP) is a 3 parameter power law fit with an exponential cutoff. Finally, the power law fit is a simple 2 parameter power law. Additional details about the fits used can be found in <cit.> and the references therein. 
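For reference, the sketch below writes down the COMP and Band photon models in the peak-energy parametrization commonly used for GBM catalogue fits (pivot energy 100 keV); the exact conventions adopted in the <cit.> catalogue may differ slightly, and the parameter values are purely illustrative.

```python
import numpy as np

def comp_model(E, A, alpha, Ep, Epiv=100.0):
    """Comptonized (COMP) photon model; the nu-F-nu spectrum peaks at Ep [keV]."""
    return A * (E / Epiv) ** alpha * np.exp(-E * (2.0 + alpha) / Ep)

def band_model(E, A, alpha, beta, Ep, Epiv=100.0):
    """Band (BAND) photon model written in terms of the peak energy Ep [keV]."""
    E = np.atleast_1d(E).astype(float)
    Ec = (alpha - beta) * Ep / (2.0 + alpha)          # break energy between the two power laws
    low = A * (E / Epiv) ** alpha * np.exp(-E * (2.0 + alpha) / Ep)
    high = (A * ((alpha - beta) * Ep / ((2.0 + alpha) * Epiv)) ** (alpha - beta)
            * np.exp(beta - alpha) * (E / Epiv) ** beta)
    return np.where(E < Ec, low, high)

# nu-F-nu spectrum: E^2 N(E); its maximum sits at Ep for alpha > -2.
E = np.geomspace(10.0, 1e4, 500)                      # keV
nuFnu = E ** 2 * band_model(E, A=0.01, alpha=-1.0, beta=-2.3, Ep=300.0)
print(E[np.argmax(nuFnu)])                            # ~300 keV
```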
For a given interval of a given burst, <cit.> provides the best functional fit in their catalogue. Thus a burst may have different functional fits for different time intervals. Peak energy was of principle interest for this work, thus any spectral fit that did not include a peak energy value was omitted. In particular, the power law fits have no defined peak energy, and therefore any fit using the power law could not be used. All bursts with at least 4 spectral fits that included well-defined peak energy values were included. Of the 19 excluded, 16 were excluded due to having too few spectra with peak energy values. Due to the difference in duration for data used, some GRBs were excluded as a majority of spectra occurred after or before the TTE data. The identification numbers of excluded bursts and the reason for their exclusion are reported in Table <ref>. While we used the spectral fits from <cit.>, we re-accumulated burst light curves to ensure consistency and a format suitable for our horizontal line technique (see below). Light-curve TTE data was collected from the HEASARC BROWSE GBM Burst Catalogue <cit.> [https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbrst.html]. From the 12 NaI detectors the two with the brightest signals were chosen. For each burst we define a region of interest as: max(T_90start - 0.5T_90,T_dstart) ≤ t ≤min(T_90start +2T_90,T_end) where T_90start is the start of the T_90 period, T_dstart is the earliest available TTE data, and T_end is the end of the TTE data. If the burst is sufficiently short, the region of interest lasts 2.5T_90. Unfortunately, not all bursts have TTE data coverage over the entire 2.5T_90 period. TTE includes data from 30 seconds prior to the trigger time and lasts for around 300 seconds <cit.>. In the case that the T_90start-0.5T_90 occurred before 30 seconds prior to trigger time, the earliest TTE data was used as the beginning of the region of interest. Similarly, if T_90start + 2T_90 occurred after the 300 second duration, the last TTE time was used as the end of the region of interest. For all bursts, the region of interest was divided into 120 equal length bins. Broadly, the length of each bin was the length of the region of interest for the burst divided by 120. For bursts where the 2.5T_90 fell within the TTE data, bin duration was given simply by T_bin = T_90/48. This method ensured that the primary burst behavior was well described in the light curve. §.§ Analysis In order to classify the light-curves and time resolved spectra we developed two independent techniques. The first is an exercised in citizen science, in which individuals from the public were asked to classify light curves and spectra. The second, instead, is a set of computer algorithms that automatically classify burst behavior. In the citizen science project 24 participants were asked to rank all 62 light-curves and spectra. Light-curves and spectra were unlabelled and given to participants separately to minimize bias. Participants classified on a ternary system. Each light-curve was classified as a FRED, unsure, or not a FRED[Note that we used the terminology FRED (Fast Rise, Exponential Decay) even though individuals were not asked to evaluate the specific decay shape of the pulses]. Similarly the spectra were classified as "hard-to-soft", unsure, or not "hard-to-soft". 
Prior to classification participants were given an instruction sheet that gave brief descriptions of the meanings of FRED and hard-to-soft, as well as example light-curves from outside the data set. For both sets of data, a score of "yes" was assigned a value of 1, "unsure" a value of 0.5, and "no" a value of 0. The final score for a burst was the average score across all 24 participants. The uncertainty on the scores were taken using the standard error of the mean. Computational methods were also used to determine if given light-curves and time resolved spectra were single-peaked and hard-to-soft, respectively. By definition, a set of time resolved spectra being hard-to-soft means that the peak frequency for a given time is not larger than the peak frequency for all previous time steps. Let ϵ_peak,i be the peak energy at some time index i and let σ_ϵ,i be the error in peak energy at time index i. Let j be some time step before i. To determine if a given peak frequency is not larger than the previous points, the difference in peak energy should be negative within one uncertainty. In other words, ϵ_peak,i-ϵ_peak,j - √(σ_ϵ_i^2+σ_ϵ_j^2)<0, j<i should be true for all peak energies of index j, where j<i. Code was produced that systematically verified that Equation <ref> was true for every time value i, for all j before i. While testing a given burst, each time a spectra of the burst fails this test a point is added to a running total for that burst. After running the test on all peak energy values for that burst, the hard-to-soft score is the total number of times the burst violates Equation <ref>. This score, however, grows with the number of peak energy values. The total number of tests ran for N peak energy values is 1+2+...+(N-1), or N(N-1)/2. To normalize for the availability of peak energies, the score for a given burst is thus divided by N(N-1)/2. Finally, to place the test on a more meaningful scale, the normalized score was subtracted from one, so that a hard-to-soft burst would correspond to a score close to one. To test for single-peak behavior we developed a test in which a series of horizontal lines was used. To understand the logic of the test, consider a burst with a single peak. If a horizontal line is placed along the burst, all the points of the burst which are above the line should be in sequence. In other words, a single peak should only have one upward crossing and one downward crossing. Taking advantage of this fact, a set of sixteen equally spaced horizontal lines was placed on each burst. The highest line was placed two count-rate standard errors below the peak count rate. Photon counts follow a Poisson distribution thus the standard error for a count N is √(N). The average background count was determined by taking the median of all counts outside the region defined by T_90start<t<T_90start+T_90. The lowest line was placed two uncertainty above the background. This lowest line was excluded from the test, to ensure that any unusual background behavior would not affect the test. In total, fifteen lines affected the score in our test. For a given line, we determined the set of points which were at least one uncertainty above the test line. The test then determines if between the first and last crossing any points fell at least 1 uncertainty below the line. See Figure <ref> for an example line and a few light curve cases that yield similar scores. For every point that fell at least 1 uncertainty below the line, a tally was added to a running total for that burst. 
This test was run on all fifteen lines. To normalize the scores, they were divided by 1800. This comes from the fact that there were 15 lines and 120 bins on the region of interest. If there were perfect delta spikes at both ends of the region of interest, the score would then be 15·120, giving 1800. For a more general version, a test consisting of m lines with N_bins bins on the region of interest would be normalized by m· N_bins. Finally, the normalized score was subtracted from one. This places the scores on a scale from zero to one, where a score of one means that the burst was perfectly single peaked, no points violated the horizontal line test. Conversely, a score of 0 represents a burst with multiple extremely well separate narrow peaks. The virtue of using multiple lines is best seen in Figure <ref>. A single line may miss some behavior, such as in <ref>C where only one line catches the shallow valley behavior. This is similarly helpful for narrow, deep valleys, in which a single line would only show one point as violating the test, but the deep peak is caught by multiple lines, increasing the score as seen in <ref>B. This test is versatile in its ability to catch multiple different multi-peaked behaviors through the multiple lines. It is also powerful in not requiring the pulse to have any specific analytical description. § RESULTS §.§ Comparing methods The human and computer rankings generally agreed with each other. Comparisons of the human classifications and computer classifications for both hard-to-soft and single peaked metrics can be seen in Figures <ref> and <ref>, respectively. In both Figures <ref> and <ref> there is a positive trend suggesting the tests agree. It is important to note that while both the human and computational tests are on a 0 to 1 scale, they do not hold the same meaning. For instance, it is common for the human single-peak test scores to be near 0, as this simply means a majority of the participants stated "no" for the classification. However, on the computational single-peaked test, a score of 0 would mean that the burst has well-separated delta-function like peaks. This is an ideal multi-peaked burst, and as such most bursts do not have scores close to 0 even with multiple-peaks. Similarly, scores for the computational hard-to-soft test would have to be strictly monotonically increasing which again is unlikely. Thus a score of 0 is again quite unlikely. This means while a score of 0.8 for the human single-peak test indicates the burst is single-peaked, a similar score on the computational single-peak test does not. This holds similarly true for the hard-to-soft test. For both the computational single-peak and hard-to-soft tests a score must be much closer to 1 for the burst to be considered single-peaked or hard-to-soft than in the human tests. In order to quantitatively compare the results, both Pearson and Spearman tests were run to test for linear correlation and monotonic correlation, respectively. Table <ref> shows the Pearson and Spearman scores for both the single peak and hard-to-soft tests as well as the probability of a set of random data showing a correlation as strong or stronger. For both sets of tests, there was both a fairly strong Spearman and Pearson correlation with a high probability. This shows that the tests generally agreed with one another; a human scoring of hard-to-soft typically corresponded to a computational scoring of hard-to-soft and similarly for the single-peak test. 
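For reproducibility, the following is a compact Python sketch of the two computational scores defined in the Analysis subsection and of the correlation comparison just described. The array inputs, the Poisson error model for the background level, and the strict versus non-strict inequality choices are illustrative simplifications rather than the exact implementation used here.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def hard_to_soft_score(Ep, sigma_Ep):
    """1 minus the fraction of pairwise tests (j < i) that violate the monotonicity condition."""
    Ep, s = np.asarray(Ep, float), np.asarray(sigma_Ep, float)
    N = len(Ep)
    fails = sum(Ep[i] - Ep[j] - np.hypot(s[i], s[j]) >= 0
                for i in range(N) for j in range(i))
    return 1.0 - fails / (N * (N - 1) / 2.0)

def single_peak_score(counts, background, n_lines=16):
    """Horizontal-line test: 1 minus the normalized tally of dips below each line."""
    counts = np.asarray(counts, float)
    err = np.sqrt(counts)                            # Poisson standard errors
    top = counts.max() - 2.0 * np.sqrt(counts.max()) # two standard errors below the peak
    bottom = background + 2.0 * np.sqrt(background)  # two uncertainties above the background
    lines = np.linspace(bottom, top, n_lines)[1:]    # lowest line excluded from scoring
    tally = 0
    for line in lines:
        above = np.where(counts - err > line)[0]     # points at least one uncertainty above
        if above.size < 2:
            continue
        seg = slice(above[0], above[-1] + 1)         # between first and last crossing
        tally += int(np.sum(counts[seg] + err[seg] < line))
    return 1.0 - tally / ((n_lines - 1) * len(counts))

# Comparing human and computational rankings over the burst sample:
# r_p, p_p = pearsonr(human_scores, computational_scores)
# r_s, p_s = spearmanr(human_scores, computational_scores)
```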
There are several outliers that can be seen in Figures <ref> and <ref>. The most noticeable outlier for the single peak test is GRB110407998, whose light-curve and spectra are shown in Figure <ref>. GRB110507998 was ranked as more single peaked than other bursts with similar computational scores. The light-curve seems to show a generally single peaked behavior aside from a small secondary peak at 10 seconds. The horizontal line test picked up on this second peak, whereas the human eye seemed to generally classify this as background. The second peak rises up nearly 1000 photons/s above the background, well above the uncertainty. The most noticeable outlier in the hard-to-soft tests is GRB110721200 whose light-curve and spectra can be seen in Figure <ref>. GRB110721200 was generally considered much more hard-to-soft than other bursts with similar computational rankings. The spectra of GRB110721200 decrease until around 3 seconds before increasing slightly and then decreasing further. The computational hard-to-soft test picked up several failure points in the section of increasing peak energy around 3 seconds. Humans ranked this as hard-to-soft. Except for these outliers, the computational and human results generally agreed. It is difficult to directly compare the results given the different scales for the tests. For instance, only a handful of bursts scored a single-peaked human score greater than 0.8, whereas almost all bursts had computational single peak scores above 0.8. That being said, from Figure <ref>, the human test seemed to have more bursts sitting in the range around 0.5, or the "unsure" range when the computational test would consider them multi-peaked. This may indicate that the computational test is stricter in its classification of bursts as single-peaked. §.§ Correlation between light-curve and spectral evolution The hard-to-soft scores and single peaked scores were directly compared for both the computational and human methods. We will first consider the unbinned results. Figures <ref> and <ref> show the human and computational results respectively. To test for potential correlations the Pearson and Spearman tests were again run. Scores and probability p-values can be seen in Table <ref>. These tests are readily applicable, as they do not require uncertainty, and for neither the human nor computational tests are there well-defined uncertainties. On both the Pearson and Spearman tests, the human scores showed significant positive correlations. This indicates that bursts that were more single-peaked were similarly more hard-to-soft. The computational tests were less significant with respect to the Pearson tests, indicating they are not a linear correlation. However, the Spearman test was far more significant which indicates that there is a monotonic correlation between the single-peaked and hard-to-soft scores. This demonstrates that a more single-peaked burst is more hard-to-soft, albeit not linearly. To further analyze the data, both the human and computational scores were binned with respect to the single-peak scores. The hard-to-soft scores for each burst in a given bin were averaged and the standard error was calculated on the hard-to-soft scores. The human scores were binned into two bins corresponding to single-peaked and multi-peaked scores. Single-peaked scores were any burst with a score greater than 0.667, multi-peak bursts were the remaining bursts. The computational rankings were binned unevenly. 
Looking at the unnormalized single-peak scores for the horizontal line test, there was a clumping of bursts with unnormalized scores of three or less. This corresponds to final scores of at least 0.9983. Any burst with a computational single-peak score less than this was considered multi-peaked. The number of combined uncertainties between the two bins were calculated for both the human and computational tests. The results are shown in the last column of Table <ref>. For both the human and computational tests there is a significant difference between the hard-to-soft scores for the single-peaked and multi-peaked bursts. In particular, in both cases the single-peaked bursts had significantly larger hard-to-soft scores as compared to multi-peaked bursts. Along with the results of the Pearson and Spearman scores, this associates single-peaked behavior with hard-to-soft behavior. § SUMMARY AND DISCUSSION We have analyzed a set of Fermi GBM bright gamma-ray burst for which time-resolved spectroscopy was available <cit.>. The aim of our research was to investigate and quantify whether single-pulsed events have a stronger tendency to be characterized by the so-called hard-to-soft spectral evolution. In this case, the peak energy of the burst photons' spectra decrease monotonically in time, irrespective of the light curve behavior. Burst spectra and light curves were categorized in terms of their being single peaked and having a hard-to-soft evolution both with specifically designed software and by human interviews. While qualitative evidence of such a behavior has been cited in recent literature (e.g., ) this is, to our best knowledge, the first attempt at a comprehensive and quantitative study. We find that the human and software-based classification of single-pulse and hard-to-soft behaviors are highly correlated with each other. We also find statistically significant evidence that single-peaked bursts have spectral evolution predominantly characterized as hard-to-soft. The statistical significance of this finding is stronger for the human classification, especially with the Pearson test. This result supports the model by <cit.>, in which burst that explode in very dense environments — like inside the accretion disks of supermassive black holes — are single-pulsed and display coherent hard-to-soft evolution. Despite the strong (>4 σ) statistical significance, our results do not support a unique identification of a spectral behavior with a light curve class. As shown in Figures <ref> and <ref> there are single-peaked bursts with non hard-to-soft behavior as well as multi-peaked events with consistently decreasing peak energy. This should not be surprising and may be due to multiple reasons. On the one hand, there may intrinsically be bursts with these different properties. In addition, our analysis is by no means exhaustive due to data limitations. Consider for example a burst with multiple peaks with decreasing intensity (the first pulse is the brightest and the last the dimmest). This is not uncommon in multi-peaked bursts. If, however, the time resolved spectral analysis is carried out in such a way to have one spectrum for each pulse, it would be likely to see a monotonic trend in the peak frequency as well. In addition, a burst with sparse time-resolved spectroscopy may show hard-to-soft behavior accidentally. 
Alternatively, a burst that displays a single peak as the result of many sub-peaks fusing together could have a non-monotonic spectral evolution and yet be categorized as a single-pulse event. Finally, only the best spectral fits were available, meaning that some bursts had different spectral functional fits for different time intervals. The peak energy values between different fits may differ slightly, contributing an additional source of noise to the results. Given the relatively small size of our sample, it is not possible to investigate further the origin of the correlation that we found, nor to elaborate in detail on the source of contaminating events. Further research may include a larger burst sample and/or spectral intervals that are specifically designed to test the <cit.> model. Alternatively, one may look at positional coincidence with the centers of the host galaxies, as in the still unique case of GRB 191019A <cit.>. Software: Python (https://www.python.org/). We would like to thank the referee for their careful and insightful comments that led to an improved manuscript. We thank Giancarlo Ghirlanda and Rosalba Perna for useful discussions. DL acknowledges support from NSF grant AST-1907955.
http://arxiv.org/abs/2407.12477v2
20240717110411
Composite solutions to a liquid bilayer model
[ "Georgy Kitavtsev" ]
math.AP
[ "math.AP", "physics.flu-dyn", "35B36, 35C20, 35G61, 76D08, 76D45" ]
http://arxiv.org/abs/2407.12723v1
20240717164037
The Future of Learning: Large Language Models through the Lens of Students
[ "He Zhang", "Jingyi Xie", "Chuhao Wu", "Jie Cai", "ChanMin Kim", "John M. Carroll" ]
cs.HC
[ "cs.HC", "cs.CY" ]
The Future of Learning: Large Language Models through the Lens of Students

hpz5211@psu.edu, jzx5099@psu.edu, cjw6297@psu.edu, jpc6982@psu.edu, jmc56@psu.edu (College of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, USA); cmk604@psu.edu (College of Education, Pennsylvania State University, University Park, PA 16801, USA)

§ ABSTRACT As Large-Scale Language Models (LLMs) continue to evolve, they demonstrate significant enhancements in performance and an expansion of functionalities, impacting various domains, including education. In this study, we conducted interviews with 14 students to explore their everyday interactions with ChatGPT. Our preliminary findings reveal that students grapple with the dilemma of utilizing ChatGPT's efficiency for learning and information seeking, while simultaneously experiencing a crisis of trust and ethical concerns regarding the outcomes and broader impacts of ChatGPT. The students perceive ChatGPT as being more “human-like” compared to traditional AI. This dilemma, characterized by mixed emotions, inconsistent behaviors, and an overall positive attitude towards ChatGPT, underscores its potential for beneficial applications in education and learning. However, we argue that despite its human-like qualities, the advanced capabilities of such intelligence might lead to adverse consequences. Therefore, it's imperative to approach its application cautiously and strive to mitigate potential harms in future developments.

CCS Concepts: Human-centered computing, Human computer interaction (HCI); Human-centered computing, Empirical studies in collaborative and social computing; Social and professional topics, Computing education.

§ INTRODUCTION In the swiftly evolving landscape of artificial intelligence (AI), large-scale language models (LLMs) have emerged as a pivotal force for innovation and a driving force behind next-generation efficiency. LLMs, with their seemingly “omnipotent” capabilities, especially traits of general-purpose technologies <cit.>, are increasingly performing at levels comparable to humans in various tasks <cit.>.
Their remarkable performance enhancements, coupled with an expanding range of functionalities—including text processing, question-answer dialogues, programming, image interpretation, and video creation—have captivated both academic and industrial communities. These advancements are progressively convincing the broader public of the significance of these models <cit.>. Understanding and utilizing LLMs are becoming essential skills in our daily lives <cit.>, akin to the use of personal computers and smartphones. Beyond comparisons with human capabilities, LLMs, due to their vast training data scale, complexity, more comprehensive understanding of tasks, and versatility in various scenarios are replacing some traditional AI tools. They surpass these traditional tools in a range of tasks, including writing <cit.>, diagnosis <cit.>, and retrieval <cit.>. This advancement illustrates that LLMs are actively redefining our perceptions of future possibilities in diverse fields. As  <cit.> states in their book, this powerful AI will have an impact on areas including autonomous driving, career choices, virtual companions, education, ethical concepts, and broader social issues. Among these, education is at the forefront of discussion, particularly since the COVID-19 pandemic. Education, long considered a crucial aspect of societal welfare <cit.>, has undergone several significant transformations and challenges, from campus social distancing <cit.> and virtual classrooms <cit.> to independent study <cit.> and collaborative learning <cit.>. In the context of the rapid development of LLMs, education, due to its importance, has naturally become one of the first fields to be impacted by such technologies, with related issues attracting increasing attention from researchers. Prior to this, the impact of AI on education had already been a topic of widespread discussion in the HCI community <cit.>. In this study, we focused on ChatGPT, a prime example of LLMs. We conducted semi-structured interviews with 14 participants from diverse educational and professional backgrounds to gain valuable insights into their experiences with LLMs' applications. By collating and analyzing their perspectives, we further elucidate the practical challenges and opportunities encountered in the utilization of LLMs. We discuss the potential opportunities to leverage LLMs in the education context. Specifically, this study aims to address the following research questions: RQ1. What are the impacts and scenarios of using ChatGPT on Intentional Learning and Incidental Learning? RQ2. What are the attitudes towards collaborative learning with LLMs from students' perspective? By exploring these questions, we seek to understand how ChatGPT and similar technologies can be integrated into educational settings to enhance learning outcomes and foster a more interactive and efficient learning environment. § RELATED WORK - AI IN EDUCATION LLMs are seen as valuable learning tools in educational settings now. For example, Kazemitabaar et al. <cit.> developed CodeAid, an LLM-powered programming assistant, and found that CodeAid significantly influenced student engagement and learning, highlighting distinct usage patterns and the effectiveness of responses in aiding programming tasks. Jin et al. <cit.> examined the use of LLMs as teachable agents in programming education. While their study showed benefits in knowledge-building and metacognitive skills, challenges with authentic interactions and knowledge transfer were noted. Han et al. 
<cit.> highlighted the potential benefits of AI in providing adaptive teaching materials and personalized feedback, while also addressing significant concerns about authorship, agency, and misinformation. These insights underscore the need for careful design and regulation of educational AI platforms. Shaer et al. <cit.> examined the use of LLMs in group ideation within educational settings. Their research showed that LLMs could enhance creativity and support collaborative innovation, especially during the idea generation process. However, the study also pointed out the limitations and biases of non-human agents in evaluating ideas. These studies collectively contribute to understanding the impact of LLMs in education, demonstrating their potential to enhance learning and creativity while also highlighting the importance of addressing trust, dependency, and ethical concerns <cit.>. However, we found that previous research primarily emphasizes the advantages of LLMs as tools, highlighting their effectiveness <cit.>, or focuses on attitudes towards their use <cit.>, such as concerns or trust <cit.>, and discussions on learning models remain relatively underexplored. Therefore, while our findings align with the overall trend in the literature, that LLMs are effective but can be optimized further, we specifically address research questions related to learning models, especially intentional learning and incidental learning. Overall, while there are some concerns, there is a generally positive attitude toward the use of LLMs in education <cit.>. § METHODS §.§ Participants Recruitment We recruited 14 participants (P1-P14) through social media and the authors' network of contacts, including 11 females and 3 males; 1 undergraduate student, 1 master's student, and the rest were PhD students.The age range of the participants was 18 to 35 years (median = 27, SD ≈ 3.08). The study was conducted under the approval of the university's Institutional Review Board (IRB). At the conclusion of the study, each participant received a $10 gift card (or an equivalent amount) as compensation. §.§ Data Collection and Analysis This study was conducted online through video conferencing software (e.g., Zoom). The research process was recorded after the informed consent of all participants. The interviews were semi-structured, lasted from 30 minutes to 1 hour. Initially, we asked participants to introduce their professional experiences and background information. Subsequently, we delved into the main challenges they encountered while using ChatGPT. Finally, we discussed the potential of integrating ChatGPT and other LLMs into the educational sector with the participants. Through thorough review, analysis, and reflection on the recorded sessions and their respective codings, we unearthed insights regarding participants' experiences using ChatGPT, as well as their attitudes towards applying ChatGPT or other LLMs in the field of education. We conducted reflexive thematic analysis (RTA) on the collected data <cit.>. The data analysis generally adhered to a six-step procedure: dataset familiarization, data coding, initial theme generation, theme development and review, theme refinement and definition, and report composition. After each interview and experiment, the research team briefly discussed the outcomes. All recordings were transcribed and coded by the first author and at least one other author. 
During this research process, the researchers met at least once a week to discuss the progress of the study, the results of the interviews and experiments, the findings and problems, and to continually refine the themes and processes. § FINDINGS §.§ Intentional Learning through ChatGPT Intentional learning through ChatGPT embodies a focused and purposeful approach to acquiring knowledge or skills. In this context, participants engage with this tool, aiming for specific goals: to obtain targeted information more efficiently and clearly than traditional search engines, or to spark creative thinking. §.§.§ ChatGPT as an Alternative to Traditional Search Engines Participants have shifted from traditional search engines like Google to posing questions directly to ChatGPT, which has significantly improved their efficiency in information retrieval. These inquiries often pertain to established concepts, theories, or general information. One of the most notable benefits of ChatGPT is its ability to rapidly synthesize and integrate complex information from multiple sources. This capability significantly surpasses traditional search engines in terms of speed and convenience. The efficiency of ChatGPT not only streamlines information retrieval but also impacts user reliance and search behavior. “When I have questions, I don't really want to use search engines now. I feel that when I ask something, it [ChatGPT] gives me a direct answer, so I don't need to search through numerous responses [provided by search engines] to find what I want. Moreover, I think the results given by both [ChatGPT and search engines] are not too different.” (P5) In addition to its efficiency in information retrieval, ChatGPT excels in breaking down intricate concepts into more understandable terms, such as in the fields of finance and economics. This capability aids in immediate understanding for individuals without a background in these fields and makes specialized knowledge accessible to a broader audience. “For example, for some knowledge about finance and economics, the search might be very complex, and very hard to understand. So, at this time, I use ChatGPT. I think that it must have seen these, its database includes these contents. I can ask it to explain complex concepts in an accessible manner.” (P9) Although ChatGPT can provide rapid and effective responses, participants have significant hesitation in fully trusting its output, primarily due to concerns about accuracy and potential content fabrication. This is because, fundamentally, ChatGPT “does not search like a search engine itself searches” (P9), but rather generates responses based on its training data. The accuracy and bias of these responses are significant points of concern, as highlighted by P12: “I guess the only concern I have is that I don't know if it's really accurate.” Users have encountered inaccuracies and overly literal responses, leading them to rephrase questions or restart interactions for clarity. This lack of confidence in the verifiability of ChatGPT's results, especially in the absence of source information, also extends to its effectiveness in subjective tasks and concerns about its randomness and unstructured responses. §.§.§ ChatGPT as a Mentor to Inspire Ideas Another advantage of ChatGPT in intentional learning is its role in fostering inspiration. Particularly in situations where creative ideas are needed, ChatGPT can quickly provide insights to stimulate users' thinking.
For instance, P5 described integrating ChatGPT into the design process: “When we are in the design process, especially during the initial brainstorming phase, we incorporate ChatGPT into this process, asking it to see if it can provide any good inspiration.” This approach is not unique to design. P7 found ChatGPT helpful to “draft an outline for interview questions”. Similarly, P8 used ChatGPT to brainstorm possible topics and titles for writing tasks. These instances highlight the role of ChatGPT in aiding the generation of creative ideas and enriching the brainstorming phases. “Sometimes, when I feel that my writing wasn't good, I ask ChatGPT to help revise it or give me some suggestions for possible topics. Also, when I'm not sure what title I should use for an article, I let it [ChatGPT] give me a possible title, and then I use this title.” (P8) A major concern with the increasing reliance on tools like ChatGPT is the potential erosion of critical thinking and learning abilities. As these technologies take over tasks traditionally requiring human cognition, there is a risk of individuals becoming overly dependent on AI for problem-solving and creative thinking. P11 succinctly captures this apprehension: “I feel that everyone might gradually lose their ability to think, as people won't be willing to think anymore. They'll just rely on machines to do the thinking for them.” This concern is further echoed in the context of younger learners. P9 highlighted the potential adverse effects on students who are still developing critical thinking skills. This underscores the importance of balancing the use of ChatGPT with the need to maintain and cultivate independent thought and learning processes. “I only used it [ChatGPT] to do some coding or refine my writing, which probably doesn't matter. However, say if a secondary or high school student use it for coursework, then they probably won't learn anything due to the lack of their thinking process.” (P9) §.§ Incidental Learning through ChatGPT It has been well established that learning can happen in environments that are neither structured nor classroom-based, such as the workplace <cit.>. Through using ChatGPT to perform daily tasks and tackle problems, our participants have demonstrated the informal and incidental learning that happens via ChatGPT. §.§.§ ChatGPT Handling Email Communications Participants mentioned that they extensively use ChatGPT for tasks that are “lacking in creativity or high in repetitiveness,” such as drafting emails, weekly reports, speeches, and the like. For example, P14, a teaching assistant (TA), uses ChatGPT to accelerate her process of responding to student emails due to the large number of student inquiries. “A TA has to answer numerous student questions and respond to many emails. In fact, I use ChatGPT extensively for replying to emails. After using it frequently, you start to notice the vocabulary used, and you can learn a bit from it. Eventually, you might not even need ChatGPT anymore. You'll be able to write on your own, which I think is very good.” (P14) While P14 might not intend to learn knowledge through writing e-mails, she mentioned that, through her interactions with ChatGPT, she has learned some writing techniques from the process, as she started to “notice the vocabulary”. §.§.§ ChatGPT as a Professional Writing Assistant Another major use of ChatGPT frequently mentioned by participants is as an assistant to refine their writing for professional purposes.
For instance, P10 often uses ChatGPT as a translator to look up information in other languages, stating, “ChatGPT can quickly translate content into another language, and the quality of translated content is quite good.” P3, on the other hand, uses ChatGPT as a tool for enhancing grammar in conjunction with other software. She explained, “After using Grammarly, I sometimes feel that the sentences aren't very authentic. Then, I put the whole paragraph in ChatGPT to polish it, and I find that the changes ChatGPT makes are particularly good, very authentic.” In this case, P3 explicitly compared ChatGPT's performance with other professional tools in educational contexts, such as Grammarly, and felt ChatGPT was more authentic. Similarly to the email communication scenario, the use of ChatGPT for translating and refining text can incidentally enhance users' language proficiency. However, this incidental learning process may not naturally happen in every usage scenario. P1, an undergraduate student, expresses concerns about dependency and laziness stemming from using ChatGPT. Instead, he strongly supports the use of ChatGPT in tasks that are highly repetitive because they are not meant for enhancing one's knowledge or skills. §.§ ChatGPT Sparked Ethical Consideration in Coursework for both Students and Teachers Participants generally viewed LLMs favorably, acknowledging their strengths in rapid data processing, efficiency in providing overviews or summaries, generating preliminary insights, and user-friendly format. During the interviews, we learned that the participants had already been using or encountering tasks handled by LLMs in their work or studies to varying extents. For instance, P14 noticed articles generated directly by ChatGPT in classroom assignments and expressed concerns about intellectual property and ethics, stating, “You [the student] need to rely on yourself to complete the content for the assignment to be truly effective. It [the assignment] should be a product of labor.” As an educational professional, she implemented measures to restrict the use of ChatGPT: “This year, after discussing with my lecturer, we [decided to] directly prohibit students from using ChatGPT in the entire class.” However, although most participants expressed a certain level of concern, some even implementing restrictions on the use of ChatGPT in educational settings, they acknowledged, as P4 noted, that “This trend is inevitable... there's no way to stop students from using these technologies [LLMs-based application].” It's clear that the concerns are not against LLM tools per se but rather about how they are used. Even though there are potential ethical risks in using ChatGPT in educational settings, at the end of the interviews, P14 also expressed interest in and suggested a course at the school on how to harness ChatGPT, discussing “how to use prompts, as well as some limitations or advantages of the application itself, or even the current progress, these are all things that can be included in a course.” In addition, all participants in the interview study expressed interest in understanding how to use ChatGPT better and how to design prompts more effectively. They also hope schools or businesses can offer specialized courses related to ChatGPT or integrate them into the curriculum as a part of assignments and teaching. § DISCUSSION In our investigation into the integration of ChatGPT in educational settings, we first delved deeply into how participants are utilizing this tool and what their concerns are.
We find that ChatGPT is predominantly used for a variety of purposes, such as general information inquiries, literature reviews, content creation, language refining, data organization, and inspiration in content summarization. We delve into the dilemmas faced by students in their interactions with ChatGPT, exploring the dual aspects of efficiency and trust. This includes the complexities of their engagement with these advanced artificial intelligence tools, highlighting the inherent potential and pitfalls in the use of ChatGPT. The discussion not only reflects the evolving relationship between humans and AI in the context of education but also contemplates the broader implications of this interaction for knowledge acquisition, critical thinking, and ethical considerations in the era of rapidly advancing AI technologies. §.§ Student's Dilemmas in Collaborating with LLMs: Efficiency and Trust Crisis From the interview results, participants show great interest in using LLMs-based applications but experience a sense of distrust during their use. This is primarily due to the “black box” issue inherent in LLMs <cit.>. Applying “explainable AI” is considered a sound choice <cit.>. In response to this, on the one hand, researchers are continually enhancing the explainability of LLMs through methods like modeling design <cit.>, “jailbreaking” <cit.>, or prompt engineering <cit.>. On the other hand, we observed that participants' behavior when utilizing LLM applications tends to be more utilitarian, paralleling the widely discussed challenges of data privacy and convenience in the information age <cit.>. In collaboration with LLMs, participants are willing to trade the reliability of real data for seemingly usable results in exchange for higher efficiency. This trade-off can create a toxic effect; such collaborative behavior may effectively address the immediate issues but increases the transmission of incorrect information <cit.>, which is very dangerous in the field of education. Over time, this might exacerbate the formation of information cocoons and intensify existing biases <cit.>. User scrutiny might help mitigate these risks; however, reviewing and investigating the authenticity of the content generated by LLMs can lead to a significant amount of additional work <cit.>. The personal experience of users is crucial for swiftly reviewing and discerning the authenticity of generated content, yet such rich experience is often not applicable to novices, especially for understanding tacit knowledge <cit.>. If they use LLMs and receive incorrect information, novices might be unable to discern this and could be influenced by it. Novices might need to be more cautious than experienced individuals when using LLMs. From this perspective, the role of a reviewer or mentor for LLM-generated content could emerge as a new profession or replace TAs. Educators should supervise novices using LLM-based applications, having students include their LLM use in submissions. Tutors can then evaluate these submissions, prompting students to rethink potential shortcomings in the generated results, thereby promoting critical thinking and exploring new possibilities. Furthermore, we would like to emphasize the potential and importance of LLMs in facilitating incidental learning. Incidental learning is often described as learning that occurs unconsciously, where learning is a byproduct of another activity <cit.>.
As participants have mentioned, using ChatGPT for extensive email editing can subtly instill better grammar and vocabulary. When students use LLMs to complete tasks, apart from the repetitive nature of the operation, which can be seen as a form of continuous intentional learning <cit.>, this kind of writing may trigger and enhance incidental learning <cit.>. This is similar to what the participants mentioned about learning vocabulary and grammar while writing emails. Further research is needed to explore the role of LLMs in facilitating incidental learning, including analyzing how interactions with LLMs influence the subconscious acquisition of skills and knowledge. §.§ Students' Concerns About More “Intelligent” Artificial Intelligence Building on the previous discussion of human trust in AI, it is essential to delve deeper into the intelligence of ChatGPT within HCI, particularly compared to traditional AI systems. ChatGPT's intelligence is multifaceted, encompassing various components that highlight its uniqueness in HCI. Beyond performance, it is crucial to examine its explainability and interactive capabilities. Firstly, ChatGPT's intelligence is evident in its ability to understand and generate natural language, surpassing the simple keyword matching of traditional search engines. ChatGPT can contextually analyze queries, understand nuances, and provide targeted responses, thanks to extensive training on vast text data. However, this raises concerns about potential bias and inaccuracies, unlike traditional AI systems that rely on structured and vetted data <cit.>. Issues of fabricated facts in LLMs can be partially mitigated by prompt engineering but still require human review <cit.>. Users generally trust traditional search engines like Google over LLMs because ChatGPT's advanced language processing creates a semblance of understanding and empathy <cit.>, often leading to the attribution of “human-like” qualities to the AI <cit.>. This is because humans have the ability to interpret external realities rather than merely follow instructions <cit.>. The “human-like” communication of LLM applications weakens the perception that these AIs simply follow instructions. Lacking the seemingly “mindful brain” attributed to LLMs <cit.>, traditional algorithm-based search engines are able to offer results that garner greater trust from humans. They achieve this by structuring and filtering data in a way that aligns with human social experiences, primarily through mechanisms like keyword searching and algorithmic curation. This process delivers facts in an emotionless, factual manner. Even though the ranking of results provided by search engines may be influenced by recommendation algorithms, the results themselves are not considered to be fabricated by AI. For experienced users, ChatGPT's outputs are used cautiously and selectively. However, our preliminary results indicate that participants are concerned about the potential for ChatGPT's interaction style to foster lazy thinking. Students, in particular, may lack the ability to accurately judge the content, leading to dependence and impaired judgment, similar to the effects of alcohol. To effectively utilize LLM-based applications, they must produce transparent and traceable results, necessitating heightened vigilance from users. ChatGPT represents a significant AI advancement, requiring thorough examination from ethical, epistemological, and temporal perspectives to ensure integration into human knowledge frameworks.
Despite its capabilities, there remains a hesitance to fully trust advanced AI systems. Ensuring the responsible and transparent use of such technologies is essential to maintain information integrity in the AI era. From a learner's perspective, “human-like” AI systems might be seductive yet potentially harmful. Trust in ChatGPT due to its interactive capabilities could lead to the spread of incorrect knowledge. As noted by Preiksaitis and Rose <cit.>, in rapidly evolving educational fields, knowledge is always outdated. Hence, AI can trigger deeper thinking, but students must approach AI critically and skeptically to navigate its imperfections in accuracy and reliability. §.§ Limitations and Future Work There are some limitations to our study. Firstly, our participants were mainly PhD students, recruited because they are more familiar with technology and ChatGPT in general and have advanced knowledge in education and advanced skills with novel technology; as a result, they may not be representative of undergraduates in education. Technology literacy is an important factor that can shape students' behaviors and perceptions; undergraduates with low literacy or who have just started to use ChatGPT may have different perceptions. Future work should extend our findings to diverse student groups with different educational backgrounds, such as levels, majors, and programs. Secondly, the study mentioned some interesting points that were not covered in depth, such as copyright and the ambiguous differentiation between AI- and human-generated content, which could potentially lead to the spread of misinformation and knowledge erosion. Future work should explore the potential disruption caused by LLMs in the education context and try to mitigate these negative impacts in advance through management, technological infrastructure development, and education pedagogy design. § SUMMARY We have summarized some valuable future work from the results and discussion sections, such as (1) the impact of “human-like” AI on students' judgment, as mentioned in the discussion section. However, the ubiquity and specific severity of such impacts in the educational field remain unclear. Future studies could investigate aspects like the degree of students' dependence on LLM-based applications in education and how students' judgment and learning methods are affected by LLM-based applications. (2) We suggest increasing human tutor or supervisor involvement to monitor and guide students' use of ChatGPT. Future research may provide more specific training programs, as well as address more specific questions like who is more suitable to act as a ChatGPT supervisor in an educational environment. Is it teachers with expertise in a particular academic field? Teachers with a background in ChatGPT research? Student TAs? Or another AI? This work was supported by the Center for Socially Responsible Artificial Intelligence (CSRAI) at Pennsylvania State University. This work is part of the “Optimizing Large-Scale Language Model-Based AI Integration and Human-Computer Interaction in Educational Scenarios” project, funded by the Big Ideas Grant (BIG) Summer 2023, CSRAI. We would like to extend our thanks to all participants for their valuable involvement.
http://arxiv.org/abs/2407.12776v1
20240717175539
Experimental Demonstration of a Quantum-Optimal Coronagraph Using Spatial Mode Sorters
[ "Nico Deshler", "Itay Ozer", "Amit Ashok", "Saikat Guha" ]
astro-ph.IM
[ "astro-ph.IM", "quant-ph" ]
Authors contributed equally to this work. Wyant College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA Authors contributed equally to this work. Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA Wyant College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA § ABSTRACT An ideal direct imaging coronagraph, which selectively rejects the fundamental mode of a telescope, has been shown to achieve the quantum information limits for exoplanet detection and localization. In this study, we experimentally implement this quantum-optimal coronagraph using spatial mode (de)multiplexing. Our benchtop system includes a forward and inverse pass through a free-space programmable spatial mode sorter, designed to isolate photons in a point spread function (PSF)-adapted basis. During the forward pass, the fundamental mode is rejected, effectively eliminating light from an on-axis point-like star. On the inverse pass, the remaining modes are coherently recombined, enabling direct imaging of a faint companion. We develop a probabilistic measurement model that accounts for combined effects of fundamental shot noise and experimental noise specific to our benchtop setup, such as modal cross-talk, dark noise, and ambient background illumination. We leverage this measurement model to formulate a maximum-likelihood estimator of the exoplanet position given an image captured with the coronagraph. Using this approach, we successfully localize an artificial exoplanet at sub-diffraction distances (<σ) from its host star under a 1000:1 star-planet contrast ratio. Our system accurately localizes the exoplanet up to an absolute error <0.03σ over the separation range [0, 0.6]σ. Finally, we numerically evaluate the precision of our experimental coronagraph against state-of-the-art coronagraphs subject to comparable noise models. Experimental Demonstration of a Quantum-Optimal Coronagraph Using Spatial Mode Sorters Saikat Guha July 22, 2024 ====================================================================================== § INTRODUCTION The challenge of discovering habitable planets beyond our solar system has motivated astronomers to develop a diverse repertoire of exoplanet detection techniques. Broadly speaking, transit photometry, radial velocity, gravitational microlensing, and astrometry methods all monitor perturbations to the brightness, position, and spectrum of a prospective host star over time to infer the presence and dynamics of a faint orbiting companion <cit.>. While these methods have enjoyed great success in detecting exoplanets, contributing over 5,500 confirmed discoveries to date <cit.>, they fundamentally rely on indirect observations which provide limited information about more detailed planetary features. Remotely characterizing atmospheric composition, weather patterns, surface temperature, and surface gravity is crucial for understanding extrasolar chemical environments and identifying potential biosignatures <cit.>. By comparison, direct imaging techniques aspire to spatially observe/resolve orbiting exoplanets, providing more comprehensive planetary data <cit.>. However, direct imaging faces two compounding phenomenological challenges. 
First, exoplanets are extremely faint compared to their host stars, with relative brightness factors ranging from 10^-5 for Hot Jupiters to 10^-11 for Exo-Earths in the habitable zone. Second, the distance between an exoplanet and its host star often falls below the optical resolution capabilities of current space-based telescopes, residing in the so-called 'sub-diffraction regime' <cit.>. When imaged with a conventional telescope, light from the exoplanet overlaps with prominent diffraction features of the host star. This overlap, combined with the overwhelming shot noise generated by the bright star, effectively renders the exoplanet undetectable. Developments in coronagraph techniques have enabled complete nulling of an on-axis point-like star so that, in the absence of ambient background illumination, only light from the exoplanet reaches the detector <cit.>. In this way, state-of-the-art coronagraphs suppress photon shot noise intrinsic to measurements of classical states of light, thereby enhancing the signal-to-noise ratio of exoplanet signatures <cit.>. Inspired by new insights in passive superresolution imaging <cit.>, we recently reported the quantum information limits for exoplanet detection and localization <cit.>. Our findings revealed that these limits are achieved by a direct-imaging coronagraph that exclusively rejects the fundamental mode of the telescope. In contrast, current state-of-the-art coronagraphs discard information-bearing photons in higher-order modes <cit.>, resulting in sub-optimal performance over the sub-diffraction regime, as illustrated in Figure <ref>. Quantum-optimal coronagraphs, however, preserve information at sub-diffraction star-planet separations, where an abundance of exoplanets are expected to reside, given current statistical models <cit.>. In this work, we propose a quantum-optimal direct imaging coronagraph using a spatial mode sorter implemented with a multi-plane light converter (MPLC) <cit.>. To the best of our knowledge, this is the first experimental verification of a coronagraph design that theoretically saturates the quantum limits for exoplanet discovery tasks. Applying a maximum likelihood estimator to images collected with our bench-top setup, we localize an artificial exoplanet at sub-diffraction separations from an artificial host star with a contrast ratio of 1000:1. Denoting the Rayleigh diffraction limit of our imaging system as σ, the absolute error of the empirical mean MLE remains below 0.03σ over the separation range [0, 0.6]σ. The empirical precision (standard deviation) of the MLE varies over the sub-diffraction regime between ∼ 0.1σ and ∼0.01σ for an exoplanet in the range [0,0.1]σ and [0.1,0.6]σ, respectively. We invoke a probabilistic measurement model to characterize the impact of real-world noise sources and experimental constraints expected in an operating environment which hinder the attainment of quantum-limited performance set by shot noise. § METHODS §.§ Experimental Design Figure <ref>(a) illustrates the working principles of a quantum-optimal coronagraph design based on two cascaded mode sorters. The first mode sorter decomposes the incident optical field into a PSF-adapted transverse spatial mode basis <cit.> in order to isolate and eliminate photons in the fundamental mode. The second sorter inverts the mode decomposition, coherently recombining light in the residual modes to form an image of the exoplanet on a detector array. This scheme can be viewed as spatial mode filtering. 
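As a toy numerical illustration of this mode-filtering picture (a sketch under simplifying assumptions, not the processing code used in this work), the snippet below builds a small orthonormal mode basis on a grid, removes the fundamental-mode component of an input field, and coherently recombines the rest. The placeholder basis is generated by Gram-Schmidt from Gaussian-weighted polynomials and merely stands in for the PSF-adapted (Fourier-Zernike) modes discussed here.

```python
import numpy as np

def orthonormal_modes(grid, n_modes):
    """Toy stand-in for a PSF-adapted mode basis: polynomials in y times a
    Gaussian envelope, orthonormalized on the grid by Gram-Schmidt."""
    x, y = grid
    envelope = np.exp(-(x**2 + y**2))
    raw = [envelope * y**k for k in range(n_modes)]
    modes = []
    for m in raw:
        for q in modes:
            m = m - np.vdot(q, m) * q        # remove projections onto earlier modes
        modes.append(m / np.linalg.norm(m))  # normalize
    return modes

def null_fundamental(field, modes):
    """Project the field onto the mode basis, discard the fundamental (index 0),
    and coherently recombine the remaining modes: the ideal coronagraph action."""
    coeffs = [np.vdot(m, field) for m in modes]
    coeffs[0] = 0.0                          # reject the fundamental mode
    return sum(c * m for c, m in zip(coeffs, modes))

# toy example: an off-axis "exoplanet" field survives nulling, an on-axis "star" does not
x, y = np.meshgrid(np.linspace(-4, 4, 128), np.linspace(-4, 4, 128))
modes = orthonormal_modes((x, y), n_modes=4)
star = modes[0]                               # on-axis point source = fundamental mode
planet = np.exp(-(x**2 + (y - 0.3)**2))       # slightly displaced source
planet /= np.linalg.norm(planet)
print(np.linalg.norm(null_fundamental(star, modes)))    # ~0: star is nulled
print(np.linalg.norm(null_fundamental(planet, modes)))  # >0: some planet light survives
```

Running the toy example shows that an on-axis source (a pure fundamental mode) is nulled almost completely, while a slightly displaced source retains part of its energy in the higher-order modes, which is exactly the contrast-enhancing behavior exploited in the experiment.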
Our experimental setup shown in Figure <ref>b emulates these working principles by double-passing the optical field through a single mode sorter implemented on a 3-plane MPLC <cit.>. On the forward pass, the MPLC spatially demultiplexes the optical field in the Fourier-Zernike modes (Appendix <ref>), which constitute a PSF-adapted basis for circular apertures, and focuses light in each mode to a distinct Gaussian spot on the sorting plane. The spot corresponding to the fundamental mode is directed to the opening of a pinhole mirror and absorbed at a beam dump. The remaining modes reflect off the pinhole mirror and are sent backwards through the mode sorter. The unitary nature of spatial mode sorting inverts the mode transformation during the backward pass. Non-reciprocal polarization elements split the optical path for the forward (pre-nulling) and backward (post-nulling) pass, sending the filtered field to a detector. Through this process, the field at the detector plane is identical to the field at the focal plane minus optical contribution from the fundamental mode. The 4f imaging system used in our setup is characterized by a circular aperture diameter D = 400 μm and a focal length f = 200 mm operating at wavelength λ = 532 nm, yielding a Rayleigh resolution of σ = 1.22λ f/D = 324 μm on the object plane. We prioritize demonstrating exoplanet localization at sub-Rayleigh star-planet separations; the regime where quantum-optimal coronagraphs offer the greatest theoretical advantage over existing high-performance coronagraph designs. To sample this separation regime, we align the coronagraph to a bright on-axis point-source (artificial star) and vertically step the position of a second dim point-source (artificial exoplanet) r⃗_e = (0,y_e) over the discrete domain y_e ∈𝒴 = [-a:Δ:+a] with endpoints a = .85 σ and sampling step size Δ = .0215 σ. Light from a sub-diffraction exoplanet couples predominantly to lower-order Fourier-Zernike modes. We therefore configured the MPLC to sort a truncated basis {ψ_0(r⃗),ψ_1(r⃗),ψ_2(r⃗),ψ_3(r⃗)} where ψ_0(r⃗) is the fundamental mode of the imaging system (Figure <ref>c). Collectively, these modes contain a majority of the energy in the field generated by a sub-Rayleigh companion as shown in Figure <ref>. We define the nominal region of support for this truncated basis to be |y_e| ≤ .6 σ. Expanding the basis to more than four modes was found to significantly degrade the cross-talk of the mode sorter due to the limited number of phase masks available on our programmable MPLC. In principle, introducing more masks would allow one to sort more modes and temper modal cross-talk, enabling access to greater star-planet separations and contrasts. §.§ Measurement Model We invoke a theoretically and empirically-driven probability distribution for the direct imaging measurement. Let X(r⃗_0) ∈ℝ^M be a random vector containing the number of photons measured at each pixel of the detector over fixed exposure time T when imaging a single point-source located at position r⃗_0. The number of photons measured in each pixel are independent and modeled with Poisson statistics, X(r⃗_0) ∼Poiss(λ_0 q(r⃗_0) +λ_B p_B) where λ_0 ∈ℝ is the photon flux entering the pupil from the point source, q(r⃗_0) ∈ℝ^M is the post-nulling photon arrival probability at each detector pixel after propagating the pupil field through the coronagraph, and p_B∈ℝ^M is an experimentally-observed background distribution with flux rate λ_B ∈ℝ (Appendix <ref>). 
For simplicity, the flux rates λ_0,λ_B are given in photons per integration period T. The action of the coronagraph on the incoming optical field appears in the post-nulling photon arrival probability, q(r⃗_0) = |ΨΩ C Ω^†Ψ^†ψ_0(r⃗_0)|^2, where the input field ψ_0(r⃗_0) is a shifted version of the fundamental mode induced by illumination by a point source located at r⃗_0. The |·|^2 operation is applied element-wise to convert the field at each pixel to intensity. We have also introduced several system-dependent matrices: Ψ∈ℂ^M × K is a truncated change-of-basis matrix that transforms a field from its modal representation to its spatial representation, C ∈ℝ^K × K is a diagonal nulling matrix which acts to reject the fundamental mode, and Ω∈ℂ^K × K is the unitary cross-talk matrix of the mode sorter whose entries are determined from calibration measurements (Appendix <ref>). A synthetic measurement of a star-planet system Y = X_s + X_e is constructed by adding multiple measurement realizations of the artificial star and exoplanet illuminated independently such that X_s = ∑_i = 1^N_sX^(i)(0⃗) and X_e = X(r⃗_e), where r⃗_e is the position of the exoplanet. The choice of N_s sets the star-planet contrast. We employ this synthetic measurement scheme to circumvent interference effects that would arise if both star and exoplanet were illuminated simultaneously by the laser source in our setup. The synthetic measurement thus constitutes an approximation of measuring two incoherent point sources with unequal brightness. The complete measurement model is given by, Y(r⃗_e) ∼Poiss( Λ_0 p(r⃗_e) + Λ_B p_B), where Λ_0 = Nλ_0, Λ_B = Nλ_B, and N=N_s + 1 is the total number of measurement realizations. We define the photon distribution from the star-planet system as, p(r⃗_e) = (1-b)q(0⃗) + b q(r⃗_e), with b = 1/N representing the relative brightness of the exoplanet. For our particular experimental setup we have integer constants K=4, M=77^2, and N_s = 1000. § RESULTS §.§ Exoplanet Localization In Figure <ref>, we compare simulated and experimental images of an artificial star-planet system captured with our coronagraph. For separations |y_e| > 0.1σ, where the signal-to-noise ratio (SNR) of the exoplanet exceeds ∼1, there are strong qualitative similarities between the simulated and experimental image intensity profiles, indicating that the exoplanet signal dominates over background noise. However, at separations below this threshold, where the vast majority of exoplanet light is discarded with the fundamental mode, the noise in our experimental system becomes more prominent than the exoplanet signal. Additionally, we find that the asymmetry observed between images of the exoplanet positioned at equal distances above and below the optical axis (± y_e) emerges due to modal cross-talk. For each exoplanet location y_e ∈𝒴, we compiled repeated synthetic measurements Y^(i)(y_e) for i = 1,…,ℓ (with ℓ = 100) with exposure time T = 0.1 s. To localize the exoplanet we employ a maximum likelihood estimator (MLE), y̌^(i)_e(y_e) = argmax_y_e' ∈𝒴 log P( Y^(i)(y_e) | y_e'). Figure <ref>(a) shows a map of the log-likelihood ℒ(y_e;y_e') = log P( Y(y_e) | y_e') averaged over all experimental trials. The likelihood map exhibits a peak ridgeline (maximum likelihood) that corresponds almost exactly with the ground-truth position of the exoplanet, though exoplanet positions near the optical axis and outside of the modal region of support demonstrate greater estimator uncertainty (weakly-peaked likelihood).
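For concreteness, the grid-search MLE just described amounts to evaluating a Poisson log-likelihood for every candidate separation and taking the argmax. The sketch below is a schematic re-implementation under simplified assumptions; the arrays `q_library`, `q_star`, and `p_background` are placeholders for the calibrated post-nulling and background distributions, not quantities released with this work.

```python
import numpy as np

def log_likelihood(counts, y_index, q_library, q_star, p_background,
                   flux, bg_flux, b):
    """Poisson log-likelihood of a measured count image for a candidate
    exoplanet position (up to a constant independent of the candidate)."""
    p = (1.0 - b) * q_star + b * q_library[y_index]       # star-planet mixture
    rate = flux * p + bg_flux * p_background              # expected counts per pixel
    return np.sum(counts * np.log(rate) - rate)           # drop the log(counts!) term

def mle_position(counts, candidates, q_library, q_star, p_background,
                 flux, bg_flux, b):
    """Grid-search maximum-likelihood estimate of the exoplanet separation."""
    scores = [log_likelihood(counts, i, q_library, q_star, p_background,
                             flux, bg_flux, b) for i in range(len(candidates))]
    return candidates[int(np.argmax(scores))]

# toy usage with random placeholder distributions (M pixels, G candidate separations)
rng = np.random.default_rng(0)
M, G = 77**2, 80
q_library = rng.dirichlet(np.ones(M), size=G)    # placeholder q(y_e') per candidate
q_star, p_background = q_library[G // 2], np.full(M, 1.0 / M)
candidates = np.linspace(-0.85, 0.85, G)         # separations in units of sigma
truth = 25
counts = rng.poisson(1000.0 * ((1 - 1e-3) * q_star + 1e-3 * q_library[truth])
                     + 50.0 * p_background)
print(mle_position(counts, candidates, q_library, q_star, p_background,
                   flux=1000.0, bg_flux=50.0, b=1e-3))
```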
Figure <ref>b shows the MLEs obtained for all repeated measurements in the exoplanet translation scan. The estimator effectively localizes the exoplanet within the nominal region of support |y_e| ≤ .6σ of the truncated mode basis. Outside of this domain, the estimator experiences a bias due to the finite support of the truncated experimental mode set. §.§ Statistical Performance Analysis To quantify the performance of our coronagraph, we analyze the statistical error and imprecision of the MLE over repeated experimental trials. For a given ground-truth exoplanet position, we denote the mean and variance of the MLE to be y̅_e ≡𝔼_Y|y_e[y̌_e] and σ^2_e = 𝕍_Y|y_e[y̌_e] respectively. The unbiased empirical estimators of the MLE mean and variance are given by, y̌̅̌_e(y_e) ≡1/ℓ∑_i=1^ℓy̌_e^(i)(y_e), and σ̌^2_e (y_e) ≡1/(ℓ-1)∑_i=1^ℓ(y̌_e^(i)(y_e) - y̌̅̌_e (y_e) )^2. Figure <ref>(a) shows the statistical error of the MLE over the complete domain of the exoplanet positional scan. In the subdomain |y_e| ∈ [0.1,0.6]σ, the absolute error is |y̌̅̌_e - y_e|<0.02σ with an estimator imprecision σ̌_y_e≈ 0.01σ. In the subdomain |y_e|∈ [.01,.1]σ, the absolute error is |y̌̅̌_e - y_e|<0.03σ with imprecision σ̌_y_e≈ 0.1σ. Figure <ref>(b) shows the empirical imprecision of the MLE as a function of the exoplanet position. We find that this empirical imprecision curve corresponds with the classical Cramer-Rao Lower Bound (CRLB) computed for our experimental measurement model. When the exoplanet is near the optical axis, most of its photons couple to the fundamental mode and are discarded. Thus, the majority of detected photons are supplied by the background, giving rise to a central spike in the imprecision. Additionally, two secondary peaks in imprecision appear outside the region of support for the truncated mode basis. § CONCLUSION In review, we have experimentally developed a quantum-optimal direct imaging coronagraph using spatial mode (de)multiplexing. We demonstrate high-accuracy localization of an artificial exoplanet at sub-diffraction distances from its host star under 1000:1 star-planet brightness contrast. The performance of our experimental system at deeply sub-diffraction separations is fundamentally limited by a combination of detector dark noise, stray light, and modal cross-talk. At large star-planet separations our system performance is limited by the truncation of the mode basis, which may be improved by extending the number of MPLC phase planes. Overall, we believe this work further substantiates the potential value of mode-sorting solutions for astronomical imaging tasks. Looking forward, even-order coronagraphs may be implemented using mode filtering principles similar to those presented here. Such a coronagraph would temper photon shot noise induced by a star of finite extent <cit.>. Furthermore, extending functionality to broadband sources is necessary for analyzing the spectrum of the exoplanet. It is well-known that broadband sources introduce modal mismatch such that the cross-talk matrix becomes wavelength-dependent. For sub-diffraction exoplanets, the constraints on modal mismatch become more stringent <cit.>. Interestingly, preliminary numerical work suggests that the MPLC method may be used to simultaneously sort spectral and spatial modes <cit.>, providing a common solution for exoplanet spectroscopy and/or broadband nulling of the star. § ACKNOWLEDGEMENTS ND acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2137419.
IO and SG acknowledge that this research was supported by Raytheon and recognize the contributions of Jaime Bucay and Mark Meisner for their insights.
http://arxiv.org/abs/2407.12725v1
20240717164203
Is Sarcasm Detection A Step-by-Step Reasoning Process in Large Language Models?
[ "Ben Yao", "Yazhou Zhang", "Qiuchi Li", "Jing Qin" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT Elaborating a series of intermediate reasoning steps significantly improves the ability of large language models (LLMs) to solve complex problems, as such steps would encourage LLMs to think sequentially. However, human sarcasm understanding is often considered an intuitive and holistic cognitive process, in which various linguistic, contextual, and emotional cues are integrated to form a comprehensive understanding of the speaker's true intention, which is argued not to be limited to a step-by-step reasoning process. To verify this argument, we introduce a new prompting framework called SarcasmCue, which contains four prompting strategies, viz. chain of contradiction (CoC), graph of cues (GoC), bagging of cues (BoC) and tensor of cues (ToC), which elicits LLMs to detect human sarcasm by considering sequential and non-sequential prompting methods. Through a comprehensive empirical comparison on four benchmarking datasets, we show that the proposed four prompting methods outperform standard IO prompting, CoT and ToT by a considerable margin, and that non-sequential prompting generally outperforms sequential prompting. § INTRODUCTION Sarcasm is a subtle linguistic phenomenon that uses rhetorical devices such as hyperbole and figuration to convey true sentiments and intentions that are opposite to the literal meanings of the words used <cit.>. Sarcasm detection aims to combine different types of cues, such as linguistic features, contextual information, and emotional knowledge, to form a comprehensive understanding of the author's sarcastic attitude. Owing to its inherent ambivalence and figurative nature, sarcasm detection has persistently proven a formidable challenge spanning the eras from feature engineering to prompt engineering <cit.>. Recent large language models have demonstrated impressive performance in downstream natural language processing (NLP) tasks, in which “System 1” tasks - the fast, unconscious, and intuitive ones, e.g., sentiment classification, topic analysis, etc. - have been argued to be successfully performed <cit.>. Instead, increasing efforts have been devoted to the other class of tasks - “System 2”, which requires slow, deliberative and multi-step thinking, such as logical, mathematical, and commonsense reasoning tasks <cit.>. To improve the ability of LLMs to solve such complex problems, a widely adopted technique is to decompose complex problems into a series of intermediate solution steps prior to answer generation, and elicit LLMs to think step-by-step, such as chain of thought (CoT) <cit.>, tree of thought (ToT) <cit.>, graph of thought (GoT) <cit.>, etc. However, sarcasm detection, as a holistic, intuitive, and non-rational cognitive process, is arguably in noncompliance with step-by-step logical reasoning due to two main reasons: (1) sarcasm expression does not strictly conform to formal logical structures, such as the law of hypothetical syllogism (i.e., if 𝒜⇒ℬ and ℬ⇒𝒞, then 𝒜⇒𝒞). For example, “Poor Alice has fallen for that stupid Bob; and that stupid Bob is head over heels for Claire; but don't assume for a second that Alice would like Claire”; (2) sarcasm judgment is typically a fluid combination of various cues, where each cue holds equal importance to the judgment of sarcasm, and there is no rigid sequence of steps among them. As shown in Fig.
<ref>, linguistic, contextual and emotional factors are all crucial for rendering the sentence as sarcastic. Hence, the main research question can be summarized as: RQ: Is human sarcasm detection a step-by-step reasoning process? To answer this question, we propose a theoretical framework, called SarcasmCue, based on the sequential and non-sequential prompting paradigm. It consists of four prompting methods, i.e., chain of contradiction (CoC), graph of cues (GoC), bagging of cues (BoC) and tensor of cues (ToC). A cue is similar to a thought, which is concretely a coherent language sequence related to linguistics, context, or emotion that serves as an intermediate indicator toward identifying sarcasm, such as rhetorical devices, emotional words, etc. Each of the four prompting methods has its own focus and advantages. Specifically, * CoC. It builds upon CoT prompting and harnesses the quintessential property of sarcasm (namely the contradiction between surface sentiment and true intention). It aims to: (1) identify the literal meaning and surface sentiment by extracting keywords, sentimental phrases, etc.; (2) deduce the true intention by scrutinizing special punctuation, rhetorical devices, cultural background, etc.; and (3) determine the inconsistency between surface sentiment and true intention. It has a typical linear structure. * GoC. Generalizing over CoC, GoC frames the problem of sarcasm detection as a search over a graph and treats various cues (e.g., linguistic, contextual, emotional cues, etc.) as nodes, with the relations across cues represented as edges. Different from CoC and ToT, it allows language models to flexibly choose and weigh multiple cues when detecting sarcasm, rather than following a fixed hierarchy or linear reasoning path, unconstrained by the need for unique predecessor nodes. It represents a graphical structure. In summary, both CoC and GoC follow a step-by-step reasoning process. * BoC. In contrast, BoC and ToC are proposed based on the assumption that sarcasm detection is not a step-by-step reasoning process. BoC is a bagging approach that constructs a pool of diverse cues and creates multiple cue subsets through randomly sampling q cues at each round. LLMs are employed to generate multiple predictions based on these subsets, and such predictions are aggregated to produce the final result via majority voting. It has a set-based structure. * ToC. ToC treats each type of cues (namely linguistic, contextual, and emotional cues) as an independent, orthogonal view for sarcasm understanding and constructs a multi-view representation through the tensor product of these three types of cues. It allows language models to leverage higher-order interactions among the cues. ToC can be visualized as a 3D volumetric structure, where each coordinate axis corresponds to a distinct type of cue. This tensorial method aims to offer a more comprehensive and expressive means of fusing diverse cues. We present empirical evaluations of the proposed prompting approaches across four sarcasm detection benchmarks over 2 SOTA LLMs (i.e., GPT-4o, LLaMA 3-8B), and compare their results against 3 SOTA prompting approaches (i.e., standard IO prompting, CoT, ToT). We show that the proposed four prompting methods outperform standard IO prompting, CoT and ToT by a margin of 2%, and that non-sequential prompting generally outperforms sequential prompting. Between the two LLMs, GPT-4o consistently beats LLaMA by a striking margin across all tasks.
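To make the sequential character of CoC concrete, the sketch below chains the three step prompts (quoted verbatim in the framework section that follows) through a generic chat interface. The `chat` callable is an assumed placeholder for whatever LLM client is available; the wrapper is illustrative rather than the authors' released implementation.

```python
def chain_of_contradiction(text, chat):
    """Three-step CoC prompting: surface sentiment -> true intention -> verdict.

    `chat` is any callable mapping a list of {"role", "content"} messages to
    the assistant's reply string (e.g., a thin wrapper around an LLM API).
    """
    steps = [
        f"Given the input sentence [{text}], what is the SURFACE sentiment, "
        "as indicated by clues such as keywords, sentimental phrases, emojis?",
        "Deduce what the sentence really means, namely the TRUE intention, by "
        "carefully checking any rhetorical devices, language style, unusual "
        "punctuation, common senses.",
        "Based on Step 1 and Step 2, evaluate whether the surface sentiment "
        "aligns with the true intention. If they do not match, the sentence is "
        "probably 'Sarcastic'. Otherwise, the sentence is 'Not Sarcastic'. "
        "Return the label only.",
    ]
    history = [{"role": "system", "content": "You are a sarcasm detection assistant."}]
    reply = ""
    for prompt in steps:                 # each step conditions on all previous turns
        history.append({"role": "user", "content": prompt})
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
    return reply                         # label produced by the final step

# usage with a dummy client that always answers "Not Sarcastic"
print(chain_of_contradiction("What a great day to be stuck in traffic!",
                             lambda messages: "Not Sarcastic"))
```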
The main contributions are concluded as follows: * Our work is the first to investigate the step-wise nature of sarcasm judgment by using both sequential and non-sequential prompting methods. * We propose a new prompting framework that consists of four sub-methods, viz. chain of contradiction (CoC), graph of cues (GoC), bagging of cues (BoC) and tensor of cues (ToC). * Comprehensive experiments over four datasets demonstrate the superiority of the proposed prompting framework in zero-shot sarcasm detection. § RELATED WORK This section reviews two lines of research that form the basis of this work: CoT prompting and sarcasm detection. §.§ Chain-of-Thought Prompting Inspired by the step-by-step thinking ability of humans, CoT prompting was proposed to “prompt” language models to produce intermediate reasoning steps that lead to the final answer. Wei et al. wei2022chain made a formal definition of CoT prompting in LLMs and proved its effectiveness by presenting empirical evaluations on arithmetic reasoning benchmarks. This work pioneered the use of CoT prompting in NLP. However, its performance hinged on the quality of manually crafted prompts, which was a costly and unstable process. To fill this gap, Auto-CoT was proposed to automatically construct demonstrations with questions and reasoning chains <cit.>. Different from Auto-CoT, Diao et al. diao2023active presented an Active-Prompt approach to determine which questions were the most important and helpful to annotate from a pool of task-specific queries, for reducing the human engineering workload. The impressive results of CoT prompting have sparked a surge of exploration into designing CoT prompting strategies across various tasks <cit.>. For instance, Wang et al. wang2024grammar used formal grammars as the intermediate reasoning steps for domain-specific language generation. Furthermore, Yao et al. yao2024tree introduced a non-chain prompting framework, namely ToT, which made LLMs consider multiple different reasoning paths and self-evaluated choices to decide the next course of action. They proved the effectiveness of the ToT approach on the tasks requiring non-trivial planning or search. Beyond CoT and ToT approaches, Besta et al. besta2024graph modeled the information generated by an LLM as an arbitrary graph (i.e., GoT), where units of information were considered as vertices and the dependencies between these vertices were edges. Although the above-mentioned approaches have shown exceptional performance on various arithmetic and logical reasoning tasks, all of them adopt the sequential decoding paradigm of “let LLMs think step by step”. Contrarily, it is argued that sarcasm judgment does not conform to step-by-step logical reasoning, and there is a need to develop non-sequential prompting approaches. §.§ Sarcasm Detection Sarcasm detection is habitually treated as a text classification task, where the target is to identify whether the given text is sarcastic or not <cit.>. It has evolved from early rule based and statistical learning based approaches to traditional neural methods, such as CNN, RNN, and further advanced to modern neural methods epitomized by Transformer models. In early stage, the rule based approaches infer the overall sarcasm polarity based on the refined sarcasm rules, such as the occurrence of the interjection word <cit.>. Statistical learning based approaches mainly employ statistical learning techniques, e.g., SVM, RF, NB, etc., to extract patterns and relationships within the data <cit.>. 
As deep learning based architectures have shown superiority over statistical learning, numerous base neural networks, e.g., CNN <cit.>, LSTM <cit.>, GCN <cit.>, etc., have been predominantly utilized during the middle stage of sarcasm detection research, aiming to learn and extract complex features in an end-to-end fashion. As the field of deep learning continues to evolve, sarcasm detection research has stepped into the era of pre-trained language models (PLMs). An increasing number of researchers are designing sophisticated PLM architectures to serve as encoders for obtaining effective text representations. For example, Liu et al. liuetal2022dual proposed a dual-channel framework by modeling both literal and implied sentiments separately. They also constructed two conflict prompts to elicit PLMs to generate the sarcasm polarity <cit.>. Qiao et al. qiao2023mutual presented a mutual-enhanced incongruity learning network to take advantage of the underlying consistency between the two modules to boost the performance. Tian et al. tianetal2023dynamic proposed a dynamic routing Transformer network to activate different routing transformer modules for modeling the dynamic mechanism in sarcasm detection. However, the above-mentioned works still focus on how to utilize PLMs to extract effective features, without leveraging the extraordinary in-context learning capabilities of LLMs. In contrast, this paper makes the first attempt to explore the potential of prompting LLMs in sarcasm detection. § THE PROPOSED FRAMEWORK: SARCASMCUE The overall schematic illustration of the proposed SarcasmCue framework is shown in Fig. <ref>. We qualitatively compare SarcasmCue to other prompting approaches in Tab. <ref>. SarcasmCue is the only one to fully support chain-based, tree-based, graph-based, set-based and multidimensional array-based reasoning. It is also the only one that simultaneously supports both sequential and non-sequential prompting methods. §.§ Task Definition Consider a sarcasm detection task. Given the data set 𝒟={ ( 𝒳, 𝒴 ) }, where 𝒳 = {x_1, x_2, …, x_n} denotes the input text sequence and 𝒴 = {y_1, y_2, …, y_n} denotes the output label sequence. We use ℒ_θ to represent a large language model with parameter θ. Our task is to leverage a collection of cues 𝒞={c_1, c_2,...,c_k} to bridge the input 𝒳 and the output 𝒴, where each cue c_i is a coherent language sequence related to linguistics, context, or emotion that serves as an intermediate indicator toward identifying sarcasm. §.§ Chain of Contradiction We capture the inherent paradoxical nature of sarcasm, which is the incongruity between the surface sentiment and the true intention, and introduce chain of contradiction, a CoT-style paradigm that allows LLMs to decompose the problem of sarcasm detection into intermediate steps and solve each before making a decision (Fig. <ref> (a)). Each cue c_k∼ℒ_θ^CoC ( c_k| 𝒳,c_1,c_2,...,c_k-1 ) is sampled sequentially, then the output 𝒴∼ℒ_θ^CoC ( 𝒴| 𝒳,c_1,...,c_k ). A specific instantiation of CoC involves three steps: Step 1. We first ask the LLM to detect the surface sentiment via the following prompt p_1: 7.5cmGiven the input sentence [𝒳], what is the SURFACE sentiment, as indicated by clues such as keywords, sentimental phrases, emojis? The output sequence y_1 ∼ℒ_θ^CoC (𝒴|p_1 ) is generated from the language model ℒ_θ^CoC conditioned on input prompt p_1. Step 2.
We then ask LLM to carefully discover the true intention via the following prompt p_2: 7.5cmDeduce what the sentence really means, namely the TRUE intention, by carefully checking any rhetorical devices, language style, unusual punctuation, common senses. The output sequence, denoted as y_2, is generated from the language model conditioned on prompt p_2 as well as the previous interaction p_1, y_1, formulated as y_2∼ℒ_θ^CoC (𝒴|p_1, y_1, p_2 ). Step 3. We finally ask LLM to examine the consistency between surface sentiment and true intention and make the final prediction: 7.5cmBased on Step 1 and Step 2, evaluate whether the surface sentiment aligns with the true intention. If they do not match, the sentence is probably `Sarcastic'. Otherwise, the sentence is `Not Sarcastic'. Return the label only. y_3 is therefore generated based on a joint understanding of the preceding context y_1, y_2 and p_1, p_2, p_3: y_3∼ℒ_θ^CoC ( 𝒴|p_1, y_1, p_2, y_2, p_3 ). The sarcasm label is identified from y_3 as the output of CoC. Notably, CoC is built based on the presumption that all the cues are linearly correlated, and detects human sarcasm through step-by-step reasoning. Different from the original CoT, however, the steps are explicitly designed for the sarcasm detection context. Further details are presented in Algorithm <ref> in App. <ref>. §.§ Graph of Cues The linear structure of CoC restricts it to a single path of reasoning. To fill this gap, we introduce graph of cues, a GoT-style paradigm that allows LLMs to flexibly choose and weigh multiple cues, unconstrained by the need for unique predecessor nodes (Fig. <ref> (b)). GoC frames the problem of sarcasm detection as a search over a graph, and is formulated as a tuple ( ℳ, 𝒢, ℰ ), where ℳ is the cue maker used to define what are the common cues, 𝒢 is a graph of “sarcasm detection process”, ℰ is cue evaluator used to determine which cues to keep selecting and in which order. Unlike ToT and GoT, GoC does not involve the modules of “thought generator” and “thought aggregation”. 1. Cue maker. Human sarcasm judgment often relies on the combination and analysis of one or more cues to achieve an accurate understanding. Such cues can be broadly categorized into three types: linguistic cues, contextual cues and emotional cues. Linguistic cues refer to the linguistic features inherent in the text, including keywords, rhetorical devices, punctuation and language style. Contextual cues refer to the environment and background of the text, including topic, cultural background, common knowledge. Emotional cues denote the emotional stance conveyed by the text, including emotional words, special symbols (such as emojis) and emotional contrasts. A total number of 4+3+3=10 cues are adopted. 2. Graph construction. In 𝒢= ( V,E ), the cues are regarded as vertices constituting the vertex set V, while the relations across cues form the edge set E. If there is an edge between cues c_k and c_j, it is considered that c_k and c_j are closely related. Given the cue c_k, the cue evaluator ℰ considers cue c_j to provide the most complementary information to c_k, which would combine with c_k to facilitate a deep understanding of sarcasm. 3. Cue evaluator. We involve 𝒢 in the LLM detecting sarcasm process. To advance this process, the cue evaluator ℰ assesses the current progress towards judging sarcasm by means of determining whether the cumulative cues obtained so far are sufficient to yield an accurate judgment. If so, the search goes to an end. 
Otherwise, it serves as a heuristic for the search algorithm, determining which additional cues to select and in what order, to further the detection process. Similar to ToT, an LLM is used as the cue evaluator ℰ. We employ a voting strategy to determine the most valuable cue for selection, by explicitly comparing multiple potential cue candidates in a voting prompt, such as: 7.5cmGiven an input text 𝒳, the target is to accurately detect sarcasm. Now, we have collected the keyword information as the first step: {keywords}, judge if this provides over 95% confidence for accurate detection. If so, output the result. Otherwise, from the remaining cues {rhetorical devices, punctuation, ...}, vote the most valuable one to improve accuracy and confidence for the next step. This step can be formulated as ℰ (ℒ_θ^GoC, c_j+1 ) ∼ Vote{ℒ_θ^GoC ( c_j+1|𝒳, c_1,2,...,j ) }_c_j+1∈{c_j+1,...,c_k}. In a nutshell, it greedily selects the most valuable cue until the final judgment is reached. Although the GoC enables the exploration of many possible paths across the cue graph, its nature remains grounded in a step-by-step reasoning paradigm (see Algorithm <ref> in App. <ref>). §.§ Bagging of Cues We further relax the assumption that the cues for sarcasm detection are inter-related. We introduce bagging of cues, an ensemble learning based paradigm that allows LLMs to independently consider varied combinations of cues without assuming a fixed order or dependency among them (Fig. <ref> (c)). BoC constructs a pool of the pre-defined k=10 cues 𝒞. From this pool, 𝒯 subsets are random sampled, each consisting of q  (i.e., 1≤ q ≤ k ) cues. BoC thus leverages LLMs to generate 𝒯 independent sarcasm predictions ŷ_t based on the cues of each subset. Finally, such predictions are aggregated using a majority voting mechanism to produce the final sarcasm detection result. This approach embraces randomness in cue selection, enhancing the LLM's ability to explore numerous potential paths, thus improving the robustness and accuracy of sarcasm detection. BoC consists of the following key steps: Step 1. Cue subsets construction. A total of 𝒯 cue subsets 𝒮_t ∈ [1, 2, ..., 𝒯]={(c_t_1, c_t_2,...,c_t_q), t ∈ [1, 2, ..., 𝒯] } are created by randomly sampling without replacement from the complete pool of cues 𝒞. Each sampling is independent. Step 2. LLM prediction. For each subset 𝒮_t, an LLM ℒ_θ^BoC is used to independently make sarcasm prediction through the comprehensive analysis of the cues in the subset and the input text. This can be conceptually encapsulated as ŷ_t ∼ℒ_θ ^BoC (𝒴| 𝒮_t, 𝒳 ). Step 3. Prediction aggregation. These individual predictions are then combined using an aggregation function, i.e., majority voting, to yield the final prediction: Y∼ Vote ( {ŷ_1, ŷ_2,...,ŷ_𝒯} ). BoC treats all cues as independent and does not follow the step-by-step reasoning paradigm for sarcasm detection (see Algorithm <ref> in App. <ref>). §.§ Tensor of Cues CoC and GoC methods mainly handle low-order interactions between cues, while BoC assumes cues are independent. To capture high-order interactions among cues, we introduce tensor of cues, a novel paradigm that allows LLMs to amalgamate three types of cues (viz. liguistic, contextual and emotional cues) into a high-dimensional representation (Fig. <ref> (d)). ToC treats each type of cues as an independent, orthogonal view for sarcasm understanding, and constructs a multi-view representation through the tensor product of such three types of cues. 
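(Before turning to the details of ToC below, here is a compact sketch of the BoC loop described in the previous subsection: sample 𝒯 random cue subsets of size q, query the model once per subset, and aggregate by majority vote. The cue pool follows the ten cues listed under GoC, while `predict` is a placeholder for the per-subset LLM call; this is an illustrative sketch rather than the authors' released code.)

```python
import random
from collections import Counter

CUE_POOL = [
    "keywords", "rhetorical devices", "punctuation", "language style",   # linguistic
    "topic", "cultural background", "common knowledge",                  # contextual
    "emotional words", "special symbols", "emotional contrasts",         # emotional
]

def bagging_of_cues(text, predict, n_subsets=5, q=4, seed=0):
    """BoC: independent predictions on random cue subsets plus majority vote.

    `predict(text, cues)` should return 'Sarcastic' or 'Not Sarcastic',
    e.g., by prompting an LLM to judge `text` using only the listed cues.
    """
    rng = random.Random(seed)
    votes = []
    for _ in range(n_subsets):
        subset = rng.sample(CUE_POOL, q)          # sampling without replacement
        votes.append(predict(text, subset))       # one independent LLM prediction
    return Counter(votes).most_common(1)[0][0]    # majority vote

# toy usage with a dummy predictor that always answers 'Not Sarcastic'
print(bagging_of_cues("Oh great, another Monday.", lambda t, c: "Not Sarcastic"))
```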
We first ask the LLM to extract linguistic, contextual, and emotional cues respectively via a simple prompt. Taking linguistic cue extraction as an example:

Instruction: Please extract the linguistic cues from the input sentence for sarcasm detection, such as keywords, rhetorical devices, punctuation and language style. Input: [𝒳]

We take the outputs of the LLM's final hidden layer as the embeddings of the linguistic, contextual and emotional cues, and apply a tensor fusion mechanism to fuse the cues as additional inputs to the sarcasm detection prompt. Inspired by the success of the tensor fusion network (TFN) for multi-modal sentiment analysis <cit.>, we apply token-wise tensor fusion to aggregate the cues. In particular, the embeddings are projected onto a low-dimensional space, i.e., Lin = ( e_1^l, e_2^l,...,e_L^l )^T, Con = ( e_1^c, e_2^c,...,e_L^c )^T, Emo = ( e_1^e, e_2^e,...,e_L^e )^T. Suppose the LLM has a hidden dimensionality of d. Fully-connected layers f_lin, f_con, f_emo are constructed to map the embeddings to dimensionalities d_l, d_c and d_e, respectively, for the linguistic, contextual and emotional cues. Then, a tensor product is computed to combine the cues into a high-dimensional representation 𝒵 = ( e_1, e_2,...,e_L )^T, where e_i = [ e_i^l; 1 ]⊗[ e_i^c; 1 ]⊗[ e_i^e; 1 ], ∀ i ∈ [1,2,...,L]. The additional value of 1 facilitates an explicit rendering of single-cue features and bi-cue interactions, leading to a comprehensive fusion of the different cues encapsulated in each fused token e_i ∈ℝ^(d_l+1) × (d_c+1) × (d_e+1). The values of d_l, d_c and d_e are carefully chosen such that the dimensionality of the fused token is precisely d [otherwise the fused tokens are truncated to d-dimensional vectors]. This enables an integration of the aggregated cues into the main prompt via:

Consider the information provided in the current cue above. Classify whether the input text is sarcastic or not. If you think the Input text is sarcastic, answer: yes. If you think the Input text is not sarcastic, answer: no. Input: [𝒳]

The embedded prompt above is prepended with the aggregated cue sequence 𝒵 before being fed to the LLM. As the model is expected to output a single token, “yes” or “no”, by design, we take the logit of the first generated token and decode the label accordingly as the output of ToC. ToC facilitates deep interactions among these cues, providing a powerful and flexible framework for processing complex linguistic phenomena (see Algorithm <ref> in App. <ref>). Notably, as ToC manipulates cues at the vector level via neural structures, it requires access to the LLM's internals and calls for supervised training on a collection of labeled samples. During training, the weights of the LLM are frozen, and the linear weights in f_lin, f_con, f_emo are updated to adapt the LLM to the task context. § EXPERIMENTS §.§ Experiment Setups Datasets. Four benchmark datasets are selected as the experimental beds, viz. IAC-V1 <cit.>, IAC-V2 <cit.>, SemEval 2018 Task 3 <cit.> and MUStARD <cit.>. IAC-V1 and IAC-V2 are from the Internet Argument Corpus (IAC) <cit.>, specifically designed for the task of identifying and analyzing sarcastic remarks within online debates and discussions. They encompass a balanced mixture of sarcastic and non-sarcastic comments. SemEval 2018 Task 3 is collected using irony-related hashtags (i.e., #irony, #sarcasm, #not) and is subsequently manually annotated to minimise the amount of noise in the corpora.
It emphasizes the challenges inherent in identifying sarcasm within the constraints of its concise format, and highlights the importance of context and linguistic subtleties in recognizing sarcasm. MUStARD is compiled from popular TV shows including Friends, The Golden Girls, The Big Bang Theory, etc. It consists of 690 samples with a total of 3,000 utterances. Each sample is a conversation consisting of several utterances. In this work, we only use the textual information. The statistics for each dataset are shown in Table <ref>. Baselines. A wide range of SOTA baselines are included for comparison. They are: * PLMs.  (1) RoBERTa <cit.>, (2) BNS-Net <cit.>, (3) DC-Net <cit.>, (4) QUIET <cit.> and (5) SarcPrompt <cit.> are five SOTA PLM-based approaches for sarcasm detection via pre-trained language modeling and refined representations. * Prompt tuning.  (6) IO, (7) CoT <cit.> and (8) ToT <cit.> are three SOTA prompting approaches that leverage advanced prompting techniques to enhance LLM performance. * LLMs.  (9) GPT-4o[https://openai.com/index/hello-gpt-4o/] and (10) LLAMA 3-8B-Instruct[https://llama.meta.com/llama3/] are among the strongest general-purpose LLMs. Implementation. We have implemented the prompting methods for GPT-4o and LLaMA3-8B-Instruct, and report the performance of the PLMs as given in their original papers. The GPT-4o methods are implemented with the official OpenAI Python API library[https://github.com/openai/openai-python], while the LLaMA methods are implemented based on the Hugging Face Transformers library[https://huggingface.co/docs/transformers]. All prompting strategies are implemented for GPT-4o and LLaMA3-8B-Instruct except for ToC, which can only be deployed on open-source LLMs. Following previous works in this field, LangChain[https://github.com/langchain-ai/langchain] is employed for the implementation of ToT and GoC. For the training of ToC, the cross-entropy loss between the output logit and the true label token is computed to update the weights of the fully-connected layers. §.§ Main Results We report both Accuracy and Macro-F1 results for SarcasmCue and the baselines in a zero-shot setting in Table <ref>, except for ToC, which requires supervised training for context adaptation. LLMs do not possess a unique advantage in sarcasm detection. Since sarcasm indicates the manifestation of sentiments and intentions opposite to the literal meaning of the text, it usually violates the logical reasoning pipelines that LLMs are known to excel at <cit.>. This is empirically validated in the experiments, where LLMs are observed to have consistently lower performance than PLMs in terms of average F1 scores across the four datasets. This highlights the need to investigate prompting strategies for adapting LLMs to sarcasm detection, towards which this work has made the first attempt and achieved preliminary success. Human sarcasm detection does not necessarily follow a step-by-step reasoning process. The comparison between sequential (CoT, CoC, GoC, ToT) and non-sequential (BoC, ToC) prompting strategies fails to provide clear empirical evidence on whether sarcasm detection follows a step-by-step reasoning process. Nevertheless, the results on LLaMA3-8B-Instruct are more indicative than those on GPT-4o, since the latter has a strong capacity on its own (IO) and does not significantly benefit from any prompting strategy on top of it.
On LLaMA3-8B-Instruct, where in-context guidance is necessary for sarcasm detection due to its poor IO performance, non-sequential approaches apparently offer more benefits than sequential ones, with a remarkable margin consistently present on all four datasets. This seems to support our hypothesis that sarcasm has a non-sequential nature. SarcasmCue successfully adapts LLMs to sarcasm detection. The proposed prompting strategies in the SarcasmCue framework achieve an overall superior performance to the baseline prompting methods and bring about accuracy increases over the original LLMs in a zero-shot setting. In particular, by explicitly designing the reasoning steps for sarcasm detection, CoC beats CoT by a tremendous margin on GPT-4o, whilst performing on par with CoT on LLaMA3-8B-Instruct, an interesting result that further suggests the non-sequential nature of sarcasm detection. By pre-defining the set of cues along three main aspects, GoC and BoC manage to guide LLMs to reason along the correct paths, leading to more accurate judgments of sarcasm than the freestyle thinking in ToT. The proposed trainable neural architecture in ToC achieves an effective tensor fusion of multi-aspect cues for sarcasm detection, pushing performance to a level comparable to PLMs without tuning the LLM parameters. § ACKNOWLEDGMENTS This work is supported by the National Science Foundation of China under grant No. 62006212, a Fellowship from the China Postdoctoral Science Foundation (2023M733907), and the Natural Science Foundation of Hunan Province of China (242300421412). § CONCLUSION In this work, we aim to study the step-wise reasoning nature of sarcasm detection, and introduce a prompting framework (called SarcasmCue) containing four sub-methods, viz. chain of contradiction (CoC), graph of cues (GoC), bagging of cues (BoC) and tensor of cues (ToC). It elicits human-like sarcasm detection from LLMs by considering both sequential and non-sequential prompting methods. Our comprehensive evaluations across multiple benchmarks and state-of-the-art LLMs demonstrate that SarcasmCue outperforms traditional methods, with GoC and ToC showing particularly strong performance. In the future, we plan to develop a multi-modal version of SarcasmCue for multi-modal sarcasm detection. § LIMITATIONS The proposed SarcasmCue model has several limitations: (1) It incorporates only three types of cues – linguistic, contextual, and emotional – while other potentially useful cues, such as multimodal information, have not been integrated, potentially limiting the model's comprehensive understanding of sarcasm; (2) the performance of SarcasmCue is influenced by the capabilities of the underlying large language models (LLMs), meaning it performs better with more powerful LLMs. § ALGORITHMS OF FOUR PROMPTING METHODS 1. CoC. We present further details of CoC in Algorithm <ref>. 2. GoC. We present further details of GoC in Algorithm <ref>. 3. BoC. We present further details of BoC in Algorithm <ref>. 4. ToC. We present further details of ToC in Algorithm <ref>.
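To make the appendix algorithms more concrete, the following minimal Python sketch illustrates the BoC procedure (random cue subsets plus majority voting). It is our own illustration, not the paper's released code: the helper llm_classify stands in for a chat-completion call to GPT-4o or LLaMA3-8B-Instruct, and the prompt wording only approximates the prompts described above.

```python
import random
from collections import Counter

# The full cue pool used by SarcasmCue (4 linguistic + 3 contextual + 3 emotional cues).
CUE_POOL = [
    "keywords", "rhetorical devices", "punctuation", "language style",           # linguistic
    "topic", "cultural background", "common knowledge",                          # contextual
    "emotional words", "special symbols (e.g., emojis)", "emotional contrasts",  # emotional
]

def boc_predict(text, llm_classify, num_subsets=5, subset_size=4, seed=0):
    """Bagging of Cues: sample T cue subsets, query the LLM once per subset,
    and aggregate the per-subset labels by majority vote.

    `llm_classify(prompt) -> "Sarcastic" | "Not Sarcastic"` is a hypothetical
    placeholder for an LLM call; it is not part of the paper's code.
    """
    rng = random.Random(seed)
    votes = []
    for _ in range(num_subsets):
        # Step 1: sample one cue subset without replacement (independently per subset).
        subset = rng.sample(CUE_POOL, subset_size)
        # Step 2: one independent prediction conditioned on this subset and the input text.
        prompt = (
            f"Analyse the following cues of the input text: {', '.join(subset)}. "
            "Based on them, decide whether the text is 'Sarcastic' or 'Not Sarcastic'. "
            f"Return the label only.\nInput: {text}"
        )
        votes.append(llm_classify(prompt).strip())
    # Step 3: majority vote over the T independent predictions.
    label, _ = Counter(votes).most_common(1)[0]
    return label
```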
http://arxiv.org/abs/2407.13563v1
20240718143258
Para-Hermitian rational matrices
[ "Froilán Dopico", "Vanni Noferini", "María C. Quintana", "Paul Van Dooren" ]
math.NA
[ "math.NA", "cs.NA", "65F15, 15A18, 15A22, 15A54, 93B18, 93B20, 93B60" ]
Para-Hermitian rational matrices [The work of all the authors has been partially supported by the Agencia Estatal de Investigación of Spain MCIN/AEI/10.13039/501100011033/ through grants PID2019-106362GB-I00 and RED2022-134176-T.]
Froilán Dopico (Departamento de Matemáticas, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911 Leganés, Spain; dopico@math.uc3m.es), Vanni Noferini and María C. Quintana (Department of Mathematics and Systems Analysis, Aalto University, P.O. Box 11100, FI-00076, Finland; vanni.noferini@aalto.fi, maria.quintanaponce@aalto.fi), Paul Van Dooren (Department of Mathematical Engineering, Université catholique de Louvain, Avenue Georges Lemaître 4, B-1348 Louvain-la-Neuve, Belgium; paul.vandooren@uclouvain.be).
§ ABSTRACT In this paper we study para-Hermitian rational matrices and the associated structured rational eigenvalue problem (REP). Para-Hermitian rational matrices are square rational matrices that are Hermitian for all z on the unit circle that are not poles. REPs are often solved via linearization, that is, using matrix pencils associated to the corresponding rational matrix that preserve the spectral structure. Yet, non-constant polynomial matrices cannot be para-Hermitian. Therefore, given a para-Hermitian rational matrix R(z), we instead construct a *-palindromic linearization for (1+z)R(z), whose eigenvalues that are not on the unit circle preserve the symmetries of the zeros and poles of R(z). This task is achieved via Möbius transformations. We also give a constructive method that is based on an additive decomposition into the stable and anti-stable parts of R(z). Analogous results are presented for para-skew-Hermitian rational matrices, i.e., rational matrices that are skew-Hermitian upon evaluation on those points of the unit circle that are not poles. Keywords: rational matrices, linearization, linear system matrices, strong minimality, para-Hermitian, *-palindromic, Möbius. AMS subject classifications: 65F15, 15A18, 15A22, 15A54, 93B18, 93B20, 93B60. § INTRODUCTION Rational matrices R(t) play a fundamental role in systems and control theory <cit.>, where they typically represent the transfer function of a linear time invariant system. In some important applications they have certain symmetries – called self-conjugacy – that need to be preserved for solving the underlying problem. For instance, such self-conjugate matrices may represent the spectral density function of a stochastic process, or play an important role in problems such as spectral factorization, Wiener filtering or optimal control.
The most common instances of self-conjugate properties that one finds in the literature are rational matrices R(t) that are Hermitian (or skew-Hermitian) for every point t (that is not a pole of R(t)) on one of the following three curves: the real axis ℝ, the imaginary axis iℝ and the unit circle S^1. To emphasize the distinction between these three cases, we will use the variable x in the real axis case, the variable s in the imaginary axis case and the variable z in the unit circle case; when discussing results that hold for every arbitrary rational matrix, we will use t as the generic variable. In this paper, we focus on para-Hermitian and para-skew-Hermitian rational matrices. To be more precise, these are square rational matrices that satisfy the following entry-wise properties (see, e.g., <cit.> for the para-Hermitian case): Let 𝔽⊆ℂ be a field. A rational matrix R(z) ∈𝔽(z)^m× m is para-Hermitian (resp., para-skew-Hermitian) if, for all i and j, R(z)_ij = \overline{R(1/z̄)_ji} (resp., R(z)_ij = -\overline{R(1/z̄)_ji}). If R(z) satisfies the property in (<ref>), we shall write R^*(z) = R(1/z ) (resp., R^*(z) = -R(1/z ) ). Para-Hermitianity is one of various possible self-conjugate properties of rational matrices. Other common such properties are mentioned in Table <ref>, where we also recall the name associated with each property. Note that, in the notation R^*(t), the superscript ^* means that the coefficients of R(t) are involved in the Hermitian conjugation, but not the variable t; hence, R^*(t) := [R(t̄)]^*. Depending on the self-conjugate property of each class of rational matrices R(t), it is easy to see that its poles and zeros will be distributed with certain symmetries, that can be described respectively as mirror images with respect to the real line ℝ, the imaginary axis iℝ or the unit circle S^1. In particular, the poles and zeros of a para-Hermitian (or para-skew-Hermitian) rational matrix R(z) which are not on the unit circle S^1 appear in pairs (λ, 1/λ̄), symmetric with respect to S^1. In other words, the two elements of each such pair have the same phase and reciprocal moduli. Moreover, if the coefficients of R(z) are real then the poles and zeros that are not real also come in complex conjugate pairs (λ, λ̄), implying that they come in quadruples (λ, 1/λ̄, λ̄, 1/λ) if they are not on S^1. This property is very important and ought to be preserved when computing the poles and zeros of R(t). Para-Hermitian matrices, not necessarily rational, that are analytic on the unit circle are extremely relevant in signal processing <cit.>; see in particular <cit.> and the references therein for an extensive survey of their applications in various fields of engineering. There is some inconsistency in the literature about the name para-Hermitian. The most prevalent custom (see, e.g., <cit.> and the references therein) is to call simply “para-Hermitian" those matrices that depend on a variable z and evaluate to a Hermitian matrix on the unit circle (except possibly on their singularities). However, in some papers <cit.> these are called “discrete-time para-Hermitian" or “para-Hermitian on the unit circle", to distinguish them from matrices that depend on a variable s and are Hermitian on the imaginary axis (except possibly on their singularities), which are called “continuous-time para-Hermitian" or “para-Hermitian on the imaginary axis".
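Before fixing the nomenclature, here is a quick numerical illustration of the definition above and of the unit-circle symmetry it induces. The NumPy sketch below checks both the entry-wise identity R(z) = [R(1/z̄)]^* and Hermitianity on S^1 for a simple hand-made Laurent-type example R(z) = A + Bz + B^*/z with A Hermitian; the example and the code are ours and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
A = (A + A.conj().T) / 2                       # Hermitian constant term

def R(z):
    # toy para-Hermitian example: R(z) = A + B z + B^* / z
    return A + B * z + B.conj().T / z

# entry-wise definition: R(z) equals the conjugate transpose of R(1/conj(z)) ...
for z in rng.standard_normal(5) + 1j * rng.standard_normal(5):
    assert np.allclose(R(z), R(1 / np.conj(z)).conj().T)
# ... and, equivalently, R(z) is Hermitian on the unit circle
for theta in np.linspace(0.0, 2 * np.pi, 7):
    z = np.exp(1j * theta)
    assert np.allclose(R(z), R(z).conj().T)
print("para-Hermitian checks passed")
```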
In this paper, we opt for the first of these two naming conventions (plain “para-Hermitian"), which is both less pedantic and more common in the literature; we use instead the name “*-even rational matrix" for the case of Hermitianity on the imaginary axis, analogously to what is done in the polynomial case <cit.>. We point out that *-even rational matrices are called para-Hermitian in <cit.>; this clarification can be helpful to readers as some of the results in <cit.> will be used in some proofs. Poles and zeros of rational matrices can be computed via linearizations, which ideally should preserve the structure of the original rational matrix. In Section <ref> we review some facts about the structural data of rational matrices and of their linear system matrices. In particular, we recall the notion of strongly minimal linearization <cit.>. In Section <ref>, we show that it is impossible to linearize a para-Hermitian rational matrix R(z) with a *-palindromic linear system matrix. We instead show, in Section <ref>, how to construct strongly minimal *-palindromic linearizations for the rational matrix (1+z)R(z), via Möbius transforms, if R(z) is para-Hermitian. In Section <ref>, we introduce an additive decomposition for rational matrices in terms of their stable and anti-stable parts, and show how to construct *-palindromic strongly minimal linearizations for (1+z)R(z) based on this decomposition. Finally, in Section <ref>, we pay particular attention to the construction of *-palindromic strongly minimal linearizations from the Taylor expansion around infinity (Subsection <ref>) and from the partial fraction decomposition (Subsection <ref>) of the stable part of R(z). Some conclusions and lines of future research are discussed in Section <ref>. § PRELIMINARIES Below, 𝔽 denotes either the field of real numbers ℝ or the field of complex numbers ℂ; 𝔽[t]^m× m and 𝔽(t)^m× m denote the sets of m × m matrices whose entries are in the ring of polynomials 𝔽[t] and in the field of rational functions 𝔽(t), respectively. The elements of 𝔽[t]^m× m and 𝔽(t)^m× m are called, respectively, polynomial matrices (or matrix polynomials) and rational matrices. Polynomial matrices can have finite zeros (i.e., finite eigenvalues) but no finite poles, while a rational matrix R(t)∈𝔽(t)^m× m can have both finite poles and zeros. These are defined as follows via the local Smith–McMillan form <cit.>. Given λ_0∈ℂ, there exist rational matrices M_ℓ(t) and M_r(t) invertible at λ_0 [A rational matrix M(t)∈ℂ(t)^m× m is said to be invertible at λ_0∈ℂ if the constant matrix M(λ_0) is bounded (i.e., M(λ_0)∈ℂ^m× m) and invertible.] such that M_ℓ(t)R(t)M_r(t) = diag( (t-λ_0)^d_1 ,…, (t-λ_0)^d_r, 0_(m-r)× (m-r) ), where d_1≤ d_2 ≤⋯≤ d_r are integers and r is the normal rank of R(t). The diagonal matrix in (<ref>) is unique and is called the local Smith-McMillan form of R(t) at λ_0 (see e.g. <cit.>). The exponents d_i are called the structural indices or invariant orders of R(t) at λ_0. If there are strictly positive indices 0 < d_p≤⋯≤ d_r in (<ref>), then λ_0 is a zero of R(t) with partial multiplicities (d_p, … , d_r). If there are strictly negative indices d_1≤⋯≤ d_q <0 in (<ref>), then λ_0 is a pole of R(t) with partial multiplicities (-d_1, … , -d_q). The structural indices of R(t) at infinity are defined as those of R(1/t) at zero. The list of structural data of a rational matrix is not only formed by its finite and infinite pole and zero structures but also by its left and right minimal indices. Minimal bases and indices were introduced by Forney in <cit.>.
The minimal bases and minimal indices of a rational matrix R(t) are those associated with the following rational vector subspaces [ 𝒩_r (R)={x(t)∈ℂ(t)^m× 1: R(t)x(t)=0}, and; 𝒩_ℓ (R)={y(t)^T∈ℂ(t)^1× m: y(t)^T R(t)=0}, ] which are called the right and left null-spaces of R(t), respectively. If R(t) is singular, then these null-spaces are non-trivial. If 𝒩_r (R) (resp., 𝒩_ℓ (R)) is non-trivial, it has minimal bases and minimal indices, which are called the right (resp., left) minimal bases and minimal indices of R(t). Rosenbrock's polynomial system matrices associated with a rational matrix R(t) contain its pole and zero information, whenever minimality conditions are satisfied <cit.>. In order to linearize a rational matrix R(t), one can thus consider linear polynomial system matrices associated with R(t). These are block partitioned pencils of the form L(t):= [[ -t A_1 +A_0 t B_1 - B_0; t C_1 -C_0 t D_1-D_0 ]] =: [ -A(t) B(t); C(t) D(t) ] , where A(t) is square and regular, i.e., det A(t) ≢ 0, and R(t) =D(t)+C(t)A(t)^-1B(t), i.e., the Schur complement of A(t) in L(t) is R(t). Then, R(t) is said to be the transfer function of L(t). One can obtain the finite pole and zero structure of R(t) from the eigenvalue structures of the polynomial matrices A(t) and L(t), respectively, provided L(t) is irreducible or minimal, meaning that the matrices [[ -A(t_0) B(t_0) ]] and [[ -A(t_0); C(t_0) ]] have, respectively, full row and column rank for all t_0∈ℂ <cit.>. In the special case that L(t) is minimal and B(t), C(t) and D(t) are constant matrices, then (<ref>) is said to be a minimal generalized state-space realization of R(t). Minimal polynomial system matrices in subsets of ℂ and at infinity are defined and studied in <cit.>. It was shown in <cit.> that one can recover both the finite and infinite polar and zero structure of R(t) from the pencils A(t) and L(t) provided the pencils in (<ref>) have full rank also at infinity. Minimality at ∞ means that the matrices [ -A_1 B_1 ] and [ -A_1; C_1 ] have, respectively, full row and column rank. If L(t) is both minimal (in ℂ) and minimal at ∞ then L(t) is said to be strongly minimal <cit.> or, also, a strongly minimal linearization of R(t). Moreover, in this situation, the eigenvectors, root vectors, and minimal bases of R(t) can be easily recovered from those of L(t), and their minimal indices are the same <cit.>. We aim to linearize para-Hermitian rational matrices with structured linear system matrices, as in (<ref>), in such a way that the chosen structure preserves the pole and zero symmetries of the original rational matrix. However, a non-constant polynomial matrix P(z) and, in particular, a non-constant pencil cannot satisfy Definition <ref>. That is, given a para-Hermitian rational matrix R(z), it is not possible to construct a non-constant para-Hermitian linear system matrix of R(z). Given a polynomial matrix P(t) of degree d, the reversal matrix polynomial of P(t) is rev P(t):=t^d P(1/t ). Recall that a matrix polynomial P(z) is *-palindromic (resp., *-anti-palindromic) if it satisfies <cit.> rev P^*(z)=P(z) (resp., rev P^*(z)=-P(z)). Then, one could try to construct a *-palindromic linear system matrix of R(z), whose eigenvalues also have the same symmetries, i.e., they also appear in pairs (λ, 1/λ̄), symmetric with respect to S^1, and to apply to such a linear system matrix a structure-preserving algorithm for computing its eigenvalues <cit.>.
But, as we show in the following Section <ref>, it is impossible to linearize a para-(skew-)Hermitian rational matrix with an (anti-)*-palindromic system matrix. However, if one considers the rational matrix H(z):=(1+z)R(z) instead of R(z), we then show how to construct a *-palindromic (resp., *-anti-palindromic) linear system matrix for H(z) when R(z) is para-Hermitian (resp., para-skew-Hermitian). Ultimately, the point is that, if R(z) is para-Hermitian, then H(z) satisfies rev_1 H^*(z)=H(z), where rev_1 H(z):=z H(1/z ), and, as we show in Theorem <ref>(a), the transfer function of a *-palindromic linear system matrix satisfies (<ref>). Analogously, if R(z) is para-skew-Hermitian, then H(z) satisfies rev_1 H^*(z)=-H(z). § IT IS IMPOSSIBLE TO LINEARIZE A PARA-HERMITIAN RATIONAL MATRIX WITH A PALINDROMIC SYSTEM MATRIX We will show in this section that it is impossible to construct a *-palindromic (resp., *-anti-palindromic) linear system matrix of a para-Hermitian (resp., para-skew-Hermitian) rational matrix R(z). First, observe that Theorem <ref>(a) and Corollary <ref> do not assume any minimality on the structured linear system matrix. See Theorem <ref>(b) for a result on minimality. Consider a *-palindromic (resp., *-anti-palindromic) linear system matrix L(z). Then the following statements hold: (a) The transfer function H(z) of L(z) satisfies rev_1 H^*(z)=H(z) (resp., rev_1 H^*(z) =-H(z)). (b) L(z) is minimal if and only if it is strongly minimal. We only discuss in detail the *-palindromic case, since the *-anti-palindromic one is analogous. A *-palindromic linear system matrix has the form L (z) = z [ -A C; B^* D ] + [ -A^* B; C^* D^* ], where zA+A^* is regular. For (a), it is easy to check that rev_1 H^*(z)=z H^*(1/z )=H(z), where H(z) = (z D + D^*) + (z B^* + C^*) (z A + A^*)^-1 (z C+ B). To prove (b), first note that strong minimality implies minimality by definition. For the converse, assume that L(z)=:z L_0^*+L_0 is minimal. In particular, the first block row and the first block column of L(0)=L_0 have full rank. But, taking conjugate transposes, the same property holds for L_0^*, implying minimality at ∞ and therefore strong minimality. Let R(z)∈𝔽(z)^m× m be a nonzero rational matrix. If R(z) is para-Hermitian (resp., para-skew-Hermitian) then there is no *-palindromic (resp., *-anti-palindromic) linear system matrix whose transfer function is R(z). Again we only treat the *-palindromic case, as the *-anti-palindromic one is analogous. Suppose by contradiction that R(z) is the transfer function of a *-palindromic linear system matrix. Then, by Theorem <ref>(a), zR^* ( 1/z )=R(z). But, since R(z) is para-Hermitian, R^* ( 1/z )=R(z), and thus zR ( z )=R ( z ), which is a contradiction since R(z) ≠ 0. While in the following sections <ref> and <ref> we make statements that cover both the para-Hermitian and the para-skew-Hermitian case, we give proofs only for the para-Hermitian case, as the para-skew-Hermitian one is completely analogous and left as an exercise. § PARA-HERMITIAN RATIONAL MATRICES AND MÖBIUS TRANSFORM In this section we consider the following Möbius transform T and its inverse T^-1: T: x ⟼ z=(i-x)/(i+x), and T^-1: z ⟼ x=i(1-z)/(1+z). Note that the Möbius transformation T is minus the Cayley transform. We will use the fact that T maps x ∈ℝ to T(x) ∈ S^1 and, conversely, its inverse T^-1 maps z ∈ S^1 to T^-1(z) ∈ℝ. Given a para-Hermitian rational matrix R(z), we can apply the change of variable z=T(x) in R(z). Namely, R(z) ⟼ R(T(x))=:G(x).
Then, we obtain that G(x) is Hermitian, i.e., G^*(x) =G(x). Analogously, if G(x) is a Hermitian rational matrix, then the change of variable x = T^-1(z) maps G(x) ⟼ G(T^-1(z))=:R(z) and R(z) is para-Hermitian. We formalize this discussion in Lemma <ref>. A rational matrix R(z)∈ℂ(z)^m× m is para-Hermitian (resp., para-skew-Hermitian) if and only if G(x):=R(T(x))∈ℂ(x)^m× m is Hermitian (resp., skew-Hermitian), where T is the Möbius transformation in (<ref>). Suppose that R(z) is para-Hermitian, i.e., R^*( 1/z ) = R(z) for all z∈ℂ. Then, for any x∈ℂ, G^*(x) = R^*( (-i-x)/(-i+x) ) = R( (i-x)/(i+x) ) = G(x) and hence G(x) is Hermitian. Conversely, assume now that G(x) is Hermitian, i.e., G^*(x ) = G(x) for all x∈ℂ. Then, for any z∈ℂ, R^*( 1/z ) = G^*( -i(1-1/z)/(1+1/z) ) = G( i(1-z)/(1+z) ) = R(z). Now, we can state and prove Theorem <ref>, which is one of the main results of this paper. Before proceeding with the proof, we explain the key ideas. Observe that, given a para-Hermitian rational matrix R(z), and taking into account Lemma <ref>, we can linearize the Hermitian rational matrix G(x):=R(T(x)) with a Hermitian pencil S(x). For that, we can construct a Hermitian strongly minimal linear system matrix S(x) of G(x) as in <cit.>. We can now consider the Möbius transformation T^-1, then the rational matrix Q(z):=S(T^-1(z)) must be para-Hermitian with least common denominator (1+z). Finally, if we multiply Q(z) by (1+z), we obtain that (1+z)Q(z)=:L(z) is a *-palindromic (see Remark <ref>) linear system matrix for (1+z)R(z). Note that if a rational matrix Q(z) is para-Hermitian, that is, Q^*( 1/z ) = Q(z), and L(z):=(1+z)Q(z) is a pencil, then L(z) must be *-palindromic. Indeed, rev_1 L^*(z)=L(z) by (<ref>). Let R(z)∈ℂ(z)^m× m be a rational matrix. R(z) is para-Hermitian (resp., para-skew-Hermitian) if and only if there exists a strongly minimal *-palindromic (resp., *-anti-palindromic) linearization of (1+z)R(z). To prove the necessity, assume first that R(z) is para-Hermitian; then we follow the next steps: * Consider the change of variable z=T(x), where T is the Möbius transformation in (<ref>), and set G(x):=R(T(x))=R( (i-x)/(i+x) ). By Lemma <ref>, G(x) is a Hermitian rational matrix. * Linearize G(x) with a Hermitian strongly minimal linear system matrix S(x)=[ U_2- x U_1 V_2- x V_1; V_2^*- x V_1^* W_2- x W_1 ]=:[[ U(x) V(x); Y(x) W(x) ]], with U(x)∈ℂ[x]^ n× n nonsingular, Y(x)=V^*(x) and U_2, U_1, W_1, W_2 Hermitian matrices. Strong minimality means that rank[ U(x_0); Y(x_0) ]= rank[ U(x_0) V(x_0) ]=n for all x_0∈ℂ, and that rank[ U_1; V_1^* ]= rank[ U_1 V_1 ]=n. It is always possible to construct such a Hermitian pencil S(x) by <cit.>. * Consider the inverse Möbius transformation x=T^-1(z) to obtain the following para-Hermitian rational matrix Q(z): Q(z):=S(T^-1(z))=[[ U(i(1-z)/(1+z)) V(i(1-z)/(1+z)); Y(i(1-z)/(1+z)) W(i(1-z)/(1+z)) ]]=:[[ Ũ(z) Ṽ(z); Ỹ(z) W̃(z) ]]. Moreover, W̃(z)- Ỹ(z) Ũ(z)^-1Ṽ(z)=R(z). * Finally, we set L(z) :=(1+z) Q(z)=[ (1+z)U_2-i(1-z) U_1 (1+z)V_2-i(1-z) V_1; (1+z)V_2^*-i(1-z) V_1^* (1+z)W_2-i(1-z) W_1 ] which is a *-palindromic (see Remark <ref>) linear system matrix for (z+1)R(z). In addition, L(z) is strongly minimal. To see that L(z) is strongly minimal, consider first any z_0∈ℂ with z_0≠ -1; then by (<ref>) we have that rank[ (1+z_0)U_2-i(1-z_0) U_1; (1+z_0)V_2^*-i(1-z_0) V_1^* ]= rank[ U(i(1-z_0)/(1+z_0)); Y(i(1-z_0)/(1+z_0)) ] =n, and rank[ (1+z_0)U_2-i(1-z_0) U_1 (1+z_0)V_2 -i(1-z_0) V_1 ] = rank[ U(i(1-z_0)/(1+z_0)) V(i(1-z_0)/(1+z_0)) ] =n, so that L(z) is minimal at z_0. Minimality at -1 follows from the fact that S(x) is minimal at ∞, that is, by (<ref>).
Finally, minimality at ∞ follows from the fact that S(x) is minimal at -i. For the para-skew-Hermitian case, we obtain that G(x) is skew-Hermitian by Lemma <ref>. Then, we instead construct a skew-Hermitian strongly minimal linearization S(x). For that, we can use <cit.>. Finally, by following the steps above, we obtain a strongly minimal *-anti-palindromic linear system matrix of (1+z)R(z). To prove the sufficiency, let L(z) be a strongly minimal *-palindromic (resp., *-anti-palindromic) linearization of (1+z) R(z) =: H(z). Then, Theorem <ref>(a) implies that z (1+1/z) R^*(1/z) = ± (1+z) R(z), where + stands for the palindromic case and - for the anti-palindromic. Thus, R^*(1/z) = ± R(z). Since in Theorem <ref> we obtain a linear system matrix L(z) for (z+1)R(z) instead of R(z), it is important to know how to recover the structural data of R(z) from those of L(z). The minimal indices are the same <cit.>, since multiplication by (z+1) does not change the left and right rational null-spaces of R(z). For the same reason the minimal bases can be recovered as described in <cit.>. The recovery of the invariant orders requires some analysis. For any finite z_0≠ -1 the invariant orders of (z+1)R(z) and those of R(z) at z_0 are the same and are related to those of L(z) as in <cit.>. Moreover, it is easy to recover the invariant orders at -1 and at ∞ of R(z) from those of L(z) as we state in the following Proposition <ref>. Let R(t)∈ℂ(t)^m× n be a rational matrix with normal rank r and let L(t):=[[ -A(t) B(t); C(t) D(t) ]]∈ℂ[t]^(p+m)× (p+n), with A(t) nonsingular, be a linear system matrix of (1+t)R(t). Then the following statements hold: (a) Assume that L(t) is minimal at -1. Let d_1≤⋯≤ d_s be the partial multiplicities of A(t) at -1 and let d̃_1≤⋯≤d̃_u be the partial multiplicities of L(t) at -1. Then: (a.1) The invariant orders at -1 of (1+t)R(t) are (-d_s,… ,-d_1, 0,…,0, d̃_1, d̃_2,…, d̃_u), where the number of zeros in the middle is r-s-u. (a.2) The invariant orders of R(t) at -1 are (-d_s,… ,-d_1, 0,…,0, d̃_1, d̃_2,…, d̃_u) - (1,1,…, 1). (b) Assume that L(t) is minimal at ∞. Let e_1≤⋯≤ e_s be the partial multiplicities of rev_1 A(t) at 0 and let ẽ_1≤⋯≤ẽ_u be the partial multiplicities of rev_1 L(t) at 0. Then: (b.1) The invariant orders of (1+t)R(t) at ∞ are (-e_s,… ,-e_1, 0,…,0, ẽ_1, ẽ_2,…, ẽ_u) - (1,1,…, 1), again with r-s-u zeros in the middle. (b.2) The invariant orders of R(t) at ∞ are (-e_s,… ,-e_1, 0,…,0, ẽ_1, ẽ_2,…, ẽ_u). Statement (a.1) follows from <cit.>. For (a.2), we consider the local Smith–McMillan form at -1 of (1+t)R(t). That is, there exist rational matrices M_1(t) and M_2(t) invertible at -1 such that (1+t)R(t) = M_1(t) diag( (t+1)^q_1 ,…, (t+1)^q_r, 0_(m-r)× (n-r) ) M_2(t) , where q_i, for i=1,…,r, are the invariant orders at -1 of (1+t)R(t). If we divide (<ref>) by (1+t), we obtain that the invariant orders at -1 of R(t) are q_i-1, for i=1,…,r. This, together with (a.1), proves (a.2). Statement (b.1) follows from <cit.>. For (b.2), we consider <cit.> applied to the linear system matrix rev_1 L(t), which is minimal at 0, and we obtain that (-e_s,… ,-e_1, 0,…,0, ẽ_1, ẽ_2,…, ẽ_u) are the invariant orders of t(1+1/t)R(1/t)=(1+t)R(1/t) at 0. Finally, note that the invariant orders at 0 of R(1/t) are equal to those of (1+t)R(1/t), since (1+t) is invertible at 0. §.§ Alternative approaches: The choice of the Möbius transform We note that the use of the Möbius transform T in (<ref>) involves complex arithmetic when the rational matrix R(z) has real coefficients.
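Before turning to alternative choices of the Möbius transform, here is a minimal numerical illustration of Lemma <ref>: for a para-Hermitian toy example (the same Laurent-type construction used earlier, which is ours and not taken from the paper), G(x) = R(T(x)) evaluates to a numerically Hermitian matrix at every real x.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 2
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
A = (A + A.conj().T) / 2

def R(z):
    # toy para-Hermitian example: R(z) = A + B z + B^* / z
    return A + B * z + B.conj().T / z

def T(x):
    # minus the Cayley transform: maps the real axis onto the unit circle
    return (1j - x) / (1j + x)

# G(x) := R(T(x)) should be (numerically) Hermitian for every real x that is not a pole
for x in np.linspace(-5.0, 5.0, 11):
    G = R(T(x))
    assert np.allclose(G, G.conj().T)
print("G(x) = R(T(x)) is Hermitian on the real axis")
```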
To avoid complex arithmetic when R(z) has real coefficients, we can instead consider the Möbius transform B and its inverse B^-1: B: s ⟼ z=(1+s)/(1-s), and B^-1: z ⟼ s=(z-1)/(z+1). The Möbius transformation B is also called the bilinear transformation (see <cit.>) and maps the open left half of the complex plane onto the inside of the unit disk and the imaginary axis onto the unit circle. Given the transfer function of a discrete-time system, we can use the bilinear transformation to obtain the transfer function of a continuous-time system. In addition, if the transfer function is para-Hermitian, after the transformation it will be *-even, as we prove in Lemma <ref>. A rational matrix R(z)∈ℂ(z)^m× m is para-Hermitian (resp., para-skew-Hermitian) if and only if G(s):=R(B(s))∈ℂ(s)^m× m is *-even (resp., *-odd), where B is the Möbius transformation in (<ref>). Suppose R(z) is para-Hermitian, i.e., R^*( 1/z ) = R(z) for all z∈ℂ. Then, for any s∈ℂ, G^*(s) = R^*( (1+s)/(1-s) ) = R( (1-s)/(1+s) ) = G(-s). Conversely, suppose G(s) is *-even, i.e., G^*(s ) = G(-s) for all s∈ℂ. Then, for any z∈ℂ, R^*( z ) = G^*( (z-1)/(z+1) ) = G( (-z+1)/(z+1) ) = G( (-1+1/z)/(1+1/z) ) = R(1/z). Theorem <ref> can also be proved by using the bilinear transformation B instead of minus the Cayley transform T, since *-even (resp., *-odd) rational matrices admit *-even (resp., *-odd) linearizations <cit.>. Both proofs of Theorem <ref> are constructive, so that one can obtain a strongly minimal *-palindromic (resp., *-anti-palindromic) linear system matrix of (1+z)R(z) from it. However, the linear system matrices that one obtains from the proofs are not unique and will depend on the linearization that one constructs after the corresponding Möbius transformation for the Hermitian (resp., skew-Hermitian) rational matrix G(x) or for the *-even (resp., *-odd) rational matrix G(s). As we mentioned above, we can consider the linearizations for Hermitian (resp., skew-Hermitian) or *-even (resp., *-odd) rational matrices constructed in <cit.>, but this implies computing the matrix coefficients of the Laurent expansion around infinity of G(x) or G(s). In the next sections, we give alternative methods that do not involve such computations when the corresponding rational matrix has no poles on the unit circle. To optimize the accuracy of numerical algorithms, it might also be desirable to make a choice of the Möbius transform so that the invariant orders at z=-1 are not “shifted", as explained in Proposition <ref>, if z=-1 is a pole or a zero of R(z). To avoid this, we can instead consider the Möbius transform B_α and its inverse B_α^-1: B_α: s ⟼ z=(ᾱ+α s)/(α-ᾱ s), and B_α^-1: z ⟼ s=(α z - ᾱ)/(ᾱ z + α), with α∈ℂ, α≠ 0. Then, analogous to Lemma <ref>, we have the following Lemma <ref>. A rational matrix R(z)∈ℂ(z)^m× m is para-Hermitian (resp., para-skew-Hermitian) if and only if G(s):=R(B_α(s))∈ℂ(s)^m× m is *-even (resp., *-odd), where B_α is the Möbius transformation in (<ref>). By using the Möbius transformation B_α in (<ref>) and Lemma <ref>, we can construct a *-palindromic linearization for H_α(z):=(α +ᾱ z )R(z), as we state in Theorem <ref>. Then, we can choose α in B_α such that -α^2/|α|^2 is neither a pole nor a zero of R(z), to prevent the invariant orders of finite poles and/or finite zeros of R(z) from being “shifted". Theorem <ref> can be proved by following similar steps to those in the proof of Theorem <ref>. Note that if R(z) is para-Hermitian, then H_α(z) satisfies (<ref>), i.e., rev_1 H_α^*(z)=H_α(z). Let R(z)∈ℂ(z)^m× m be a rational matrix and α∈ℂ, α≠ 0.
R(z) is para-Hermitian (resp., para-skew-Hermitian) if and only if there exists a strongly minimal *-palindromic (resp., *-anti-palindromic) linearization of (α +ᾱ z )R(z). §.§ Parametrizing para-Hermitian rational matrices The definition of para-Hermitian (resp., para-skew-Hermitian) rational matrices does not provide an explicit method for constructing all the matrices in this class. As a direct corollary of the results in this section we present in Corollary <ref> different ways to generate all the matrices in this class. Observe that this result does not assume any minimality of the involved matrices. Let 0≠α∈ℂ. Then R(z) ∈ℂ (z)^m× m is para-Hermitian (resp., para-skew-Hermitian) if and only if there exist constant matrices A∈ℂ^n× n, C, B ∈ℂ^n× m, and D∈ℂ^m× m such that the pencil z A + A^* (resp., z A - A^*) is regular and R(z) = 1/(α + ᾱ z) [ (z D + D^*) + (z B^* + C^*) (z A + A^*)^-1 (z C+ B) ] (resp., R(z) = 1/(α + ᾱ z) [ (z D - D^*) + (z B^* - C^*) (z A - A^*)^-1 (z C- B) ] ). It is easy to check that (<ref>) (resp., (<ref>)) is para-Hermitian (resp., para-skew-Hermitian). The fact that any para-Hermitian (resp., para-skew-Hermitian) rational matrix can be written as in (<ref>) (resp., (<ref>)) follows from Theorem <ref> and the structure of any *-palindromic (resp., *-anti-palindromic) pencil. § DECOMPOSITION INTO STABLE AND ANTI-STABLE PARTS Every rational matrix R(t) can be written with an additive decomposition as in Lemma <ref>, where the rational matrices R_in(t) and R_out(t) are called the stable and anti-stable parts of R(t), respectively; here, a rational matrix is called stable (resp. anti-stable) if all its poles have moduli strictly smaller (resp. strictly larger) than 1. If t=z and R(z) is para-Hermitian, the proper rational matrix R_p(z) defined in Lemma <ref> is also para-Hermitian. Then we will see in Theorem <ref> that we only need to perform a Möbius transform on R_p(z) to construct a strongly minimal *-palindromic linearization L(z) of (1+z)R(z). In particular, if R(z) has no poles on the unit circle, we can construct L(z) without considering a Möbius transform, as we state in Theorem <ref>. Let R(t) ∈ℂ (t)^m × n be a rational matrix. Then there exists an additive decomposition of the form R(t)= R_in(t) + R_out(t) + R_S^1(t) + R_0, where R_in(t) is a strictly proper rational matrix that has all its poles inside the open unit disk, and is therefore stable; R_out(t) is such that R_out(0)=0 and has all its poles (infinity included) strictly outside the unit circle, and is therefore anti-stable; R_S^1(t) is a strictly proper rational matrix that has all its poles on the unit circle; and R_0 is a constant matrix. Moreover, the decomposition in (<ref>) is unique. In addition, R(z) ∈ℂ (z)^m × m is para-Hermitian (resp., para-skew-Hermitian) if and only if R_in^*(z) = R_out(1/z) (resp., R_in^*(z) = -R_out(1/z)), and the proper rational matrix R_p(z):=R_S^1(z)+R_0 is para-Hermitian (resp., para-skew-Hermitian). Consider the (unique) partial fraction expansion R(t) = P(t) + ∑_λ∑_j=1^d(λ) 1/(t-λ)^j A_λ,j, where the first sum runs over the finite poles λ of R(t), P(t) is a polynomial matrix, d(λ) denotes the degree of the pole λ, and A_λ,j∈ℂ^m× n are constant matrices. We can now define R_in(t) := ∑_|λ|<1∑_j=1^d(λ) 1/(t-λ)^j A_λ,j and R_S^1(t) := ∑_|λ|=1∑_j=1^d(λ) 1/(t-λ)^j A_λ,j. It is clear that R_in(t) and R_S^1(t) are strictly proper, and that the poles of U(t):=R_in(t)+R_S^1(t) are those poles of R(t) that are in the closed unit disk; the uniqueness of R_in(t) and of R_S^1(t) follows from the uniqueness of the partial fraction expansion.
Moreover, S(t):=R(t)-U(t) is analytic at 0, because the coefficients of the Laurent expansion around t=0 with negative indices of R(t) and U(t) are equal by construction. Hence, we can (uniquely) define R_0:=S(0) and R_out(t):=S(t)-R_0. We assume now that R(z) is para-Hermitian. Then, since R^*(z)=R(1/z), we have that R_in^*(z) + R_out^*(z) + R_p^*(z)=R_in(1/z) + R_out(1/z) + R_p(1/z), where R_p(z):=R_S^1(z)+R_0. We note that both R_in^*(z) and R_out(1/z) are strictly proper rational matrices that have all their poles in the open unit disk. Since the decomposition in (<ref>) for R^*(z) is unique, this implies that R_in^*(z)=R_out(1/z). Then, the rational matrix H(z):=R_in(z)+R_out(z) is para-Hermitian and, therefore, R_p(z)=R(z)-H(z) is also para-Hermitian. Finally, let us assume that (<ref>) holds and that R_p(z) is para-Hermitian (resp., para-skew-Hermitian). Then, R^* (z) = R_in^* (z) + R_out^* (z) + R_p^* (z) = ± R_out (1/z) ± R_in (1/z) ± R_p (1/z) = ± R(1/z), where + stands for the para-Hermitian case and - for the para-skew-Hermitian. This concludes the proof. Next, we assume that R(z) is para-Hermitian (resp., para-skew-Hermitian) and has no poles on the unit circle. That is, the additive decomposition in (<ref>) is of the form R(z)= R_in(z) + R_out(z) + R_0, with R_in^*(z)=R_out(1/z) and R_0^*=R_0, (resp., R(z)= R_in(z) + R_out(z) + R_0, with R_in^*(z)= - R_out(1/z) and R_0^*=-R_0 ). Then, we have the following result. Let R(z)∈ℂ(z)^m× m be a para-Hermitian (resp., para-skew-Hermitian) rational matrix having no poles on the unit circle. Consider an additive decomposition of R(z) as in (<ref>) such that R_S^1(z)=0, and a minimal generalized state-space realization of R_in(z): R_in(z) = B( z A_1 - A_0)^-1C, with A_1 invertible. Then, R_out(z)= z C^*( A_1^* - z A_0^* )^-1B^*, (resp., R_out(z)= z C^*( z A_0^*-A_1^* )^-1B^*) is a minimal generalized state-space realization of R_out(z), and the following pencil L(z) is a strongly minimal *-palindromic (resp., *-anti-palindromic) linearization of (1+z)R(z): L(z) = [[ 0 A_0 - z A_1 C; z A_0^* - A_1^* 0 B^*(1+z); z C^* B(1+z) R_0(1+z) ]] . (resp., L(z) = [[ 0 A_0 - z A_1 C; A_1^* - z A_0^* 0 -B^*(1+z); -z C^* B(1+z) R_0(1+z) ]] ). The expression of the realization of R_out (z) follows directly from R^*_in (z) = R_out (1/z) in (<ref>). The minimality of the realization of R_out (z) at z 0 follows from the minimality of the realization of R_in (z) and at z=0 from the invertibility of A_1. It is clear that L(z) is *-palindromic and that its transfer function is (1+z) R(z). Then, the minimality of L(z) at any finite z -1 follows from the fact that R_in(z) and R_out(z) have no poles in common. The minimality at z=-1, follows from the fact that -1 is not a pole of R(z) by assumption. Finally, the strong minimality of L(z) follows from Theorem <ref>(b). In Section <ref>, we show how to construct a strongly minimal *-palindromic linearization of (1+z) R(z) for a para-Hermitian rational matrix R(z) without poles on the unit circle from the Taylor expansion of R_in(z) and from the partial fraction decomposition of R_in(z), instead of directly from a minimal generalized state-space realization (<ref>) as we did in Theorem <ref>. If there are poles on the unit circle, we can consider a Möbius transform for the para-Hermitian proper rational matrix R_p(z), as we described in Section <ref>, to construct a strongly minimal *-palindromic linearization L_p(z) for (1+z)R_p(z). 
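Returning for a moment to the case without poles on the unit circle, the following NumPy sketch makes the previous theorem concrete: it builds the pencil L(z) from a randomly generated realization (A_0, A_1, B, C) with A_1 invertible and a Hermitian R_0, and verifies numerically that (i) L(z) = zK + K^*, so that it is *-palindromic, and (ii) the Schur complement of its leading 2n×2n block equals (1+z)R(z). The data are random and chosen by us; only the block structure comes from the theorem.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
A1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))   # assumed invertible
A0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Bm = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Cm = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
R0 = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
R0 = (R0 + R0.conj().T) / 2                                           # Hermitian constant part

def ct(M):
    return M.conj().T

def R(z):
    # R = R_in + R_out + R_0 with R_in = B (z A1 - A0)^{-1} C and
    # R_out = z C^* (A1^* - z A0^*)^{-1} B^*, as in the theorem
    Rin = Bm @ np.linalg.solve(z * A1 - A0, Cm)
    Rout = z * ct(Cm) @ np.linalg.solve(ct(A1) - z * ct(A0), ct(Bm))
    return Rin + Rout + R0

def L(z):
    # the strongly minimal *-palindromic pencil of the theorem
    Z = np.zeros((n, n))
    return np.block([
        [Z,                  A0 - z * A1,   Cm],
        [z * ct(A0) - ct(A1), Z,            (1 + z) * ct(Bm)],
        [z * ct(Cm),         (1 + z) * Bm,  (1 + z) * R0],
    ])

# (i) *-palindromic structure: L(z) = z*K + K^*, with K = L(1) - L(0)
K = L(1.0) - L(0.0)
assert np.allclose(L(0.0), ct(K)) and np.allclose(L(1.0), K + ct(K))
# (ii) the Schur complement of the leading 2n x 2n block equals (1+z) R(z)
for z in rng.standard_normal(4) + 1j * rng.standard_normal(4):
    Lz = L(z)
    A_blk, B_blk = Lz[:2 * n, :2 * n], Lz[:2 * n, 2 * n:]
    C_blk, D_blk = Lz[2 * n:, :2 * n], Lz[2 * n:, 2 * n:]
    schur = D_blk - C_blk @ np.linalg.solve(A_blk, B_blk)
    assert np.allclose(schur, (1 + z) * R(z))
print("palindromic structure and transfer function verified")
```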
When R(z) does have poles on the unit circle, as we state in Theorem <ref>, we can combine L_p(z) with a *-palindromic linearization constructed from the stable and anti-stable parts, to obtain a strongly minimal *-palindromic linearization for (1+z)R(z). In Theorem <ref>, whose proof is immediate and left as an exercise to the reader, we only consider for brevity the para-Hermitian case, but the para-skew-Hermitian case is analogous. Let R(z)∈ℂ(z)^m× m be a para-Hermitian rational matrix expressed by the additive decomposition in Lemma <ref>. Consider a strongly minimal *-palindromic linearization of (1+z)[R_in(z)+R_out(z)]: L_in/out(z) := [[ 0 A_0 - z A_1 C; z A_0^* - A_1^* 0 B^*(1+z); z C^* B(1+z) 0 ]], as in Theorem <ref>, and a strongly minimal *-palindromic linearization of (1+z)R_p(z): L_p(z):=[[ -A_p(z) B_p(z); C_p(z) D_p(z) ]]. Then the following pencil is a strongly minimal *-palindromic linearization of (1+z)R(z): L(z):= [[ 0 A_0 - z A_1 0 C; z A_0^* - A_1^* 0 0 B^*(1+z); 0 0 -A_p(z) B_p(z); z C^* B(1+z) C_p(z) D_p(z) ]]. § CONSTRUCTION OF LINEARIZATIONS FROM DIFFERENT REPRESENTATIONS OF R_IN(Z) In this section we assume that R(z) is a para-Hermitian rational matrix having no poles on the unit circle. That is, R_p(z) in the additive decomposition (<ref>) is a constant matrix R_0 and R(z) can be written as in (<ref>). Otherwise, we can consider a Möbius transform for the para-Hermitian proper rational matrix R_p(z) and use Theorem <ref>. Our first goal is to complement the construction in Theorem <ref> by showing how to construct explicitly a minimal generalized state-space realization of R_in (z) as in (<ref>) from a Taylor expansion of R_in (z) around the point at infinity. Observe in this context that there are infinitely many realizations as in (<ref>) of any R_in (z). In the second place, we explore explicit constructions of strongly minimal *-palindromic linearizations of (1 + z)R(z) from the partial fraction decomposition of R_in (z). For brevity, we omit in this section the statements for para-skew-Hermitian rational matrices. They can be easily deduced as in previous sections. §.§ Construction from the Taylor expansion of R_in(z) The strictly proper rational matrix R_in(z) in (<ref>) can be represented via its Taylor expansion around the point at infinity. Namely, R_in(z):= R_-1 z ^-1 + R_-2 z ^-2 + R_-3 z ^-3 + ⋯ . In this section, we construct a strongly minimal *-palindromic linear system matrix as in Theorem <ref>, when R_in(z) is represented as in (<ref>), by using the algorithm in <cit.> or <cit.>. For that, let us consider the following block Hankel matrix H and shifted block Hankel matrix H_σ associated with R_in(z): H := [[ R_-1 R_-2 … R_-k; R_-2 R_-3 … R_-k-1; ⋮ ⋮ ⋮; R_-k R_-k-1 … R_-2k+1 ]], H_σ := [[ R_-2 R_-3 … R_-k-1; R_-3 R_-4 … R_-k-2; ⋮ ⋮ ⋮; R_-k-1 R_-k-2 … R_-2k ]]. Then for sufficiently large k the rank r_f of H equals the total polar degree of R_in(z), i.e., the sum of the degrees of the denominators in the Smith-McMillan form of R_in(z) <cit.>. We assume in the sequel that we are taking such a sufficiently large k. The algorithm in <cit.> for strictly proper rational matrices or <cit.> implies that the linear system matrix in Lemma <ref> is a strongly minimal linearization for R_in(z). <cit.> Let R_in(z)∈ℂ(z)^m× n be a strictly proper rational matrix as in (<ref>). Let H and H_σ be the block Hankel matrices in (<ref>) and r_f:= rank H.
Let U:=[[ U_1 U_2 ]] and V:=[[ V_1 V_2 ]] be unitary matrices such that U^* HV =[[ H 0; 0 0 ]] =[[ U_1^* HV_1 0; 0 0 ]], where H is r_f × r_f and invertible. Let us now partition the matrices U_1 and V_1 as follows U_1=[[ U_11; U_21 ]] and V_1=[[ V_11; V_21 ]], where the matrices U_11 and V_11 have dimension m× r_f and n× r_f, respectively. Then L_sp(z):= [[ U_1^* H_σ V_1 - z H H V_11^*; | U_11H 0 ]] is a strongly minimal linearization for R_in(z). In particular, R_in(z) = U_11H(z H- U_1^* H_σ V_1)^-1H V_11^*. Then, we have the following result for the construction of a strongly minimal *-palindromic linear system matrix of (1+z)R(z), that follows from Lemma <ref> and Theorem <ref> just by replacing (<ref>) by (<ref>). Let R(z)∈ℂ(z)^m× m be a para-Hermitian rational matrix having no poles on the unit circle. Consider an additive decomposition of R(z) as in (<ref>) and a minimal generalized state-space realization of R_in(z) as in Lemma <ref>. Namely, R_in(z) = U_11H(z H- U_1^* H_σ V_1)^-1H V_11^* . Then, R_out(z) = z V_11H^*( H^* - z V_1^*H_σ^* U_1)^-1H^* U_11^* is a minimal generalized state-space realization of R_out(z), and the following pencil L(z) is a strongly minimal *-palindromic linear system matrix of (1+z)R(z): L(z) = [[ 0 U_1^* H_σ V_1 - z H H V_11^*; z V_1^* H_σ^* U_1 - H^* 0 H^* U_11^* (1+z); z V_11H^* U_11H(1+z) R_0(1+z) ]]. §.§ Construction from the partial fraction decomposition of R_in(z) Let us assume that Ω_in is the set of poles of the para-Hermitian rational matrix R(z) ∈ℂ(z)^m× m inside the unit circle, that Ω_out is the set of poles of R(z) outside the unit circle, and that R(z) has no poles on the unit circle. We also assume that the strictly proper rational matrix R_in(z) in (<ref>) is expressed via its partial fraction decomposition. Namely, R_in(z) = ∑__i ∈Ω_in R__i(z) with R__i(z):= ∑_j=1^d_iR_j/(z-_i)^j, where d_i denotes the degree of the pole λ_i. Then, since R_out(z) = R_in^*(1/z), we have that R_out(z) = ∑__i ∈Ω_in R__i^*(1/z) with R__i^*(1/z)= ∑_j=1^d_iz^j R_j^*/(1-z_i)^j. Let us first assume that R(z) has only one pole inside S^1. That is, taking into account (<ref>), R(z) can be written as R(z)= R_(z) + R_0+ R_^*(1/z) with R_(z)= ∑_j=1^dR_j/(z-)^j and R_0^* = R_0 . We set: K_(z):=[ (-z) I_m ; I_m (-z) I_m ; ⋱ ⋱ ; I_m (-z) I_m ]∈ℂ[z]^d m× d m, F:=[ R_d; R_d-1; ⋮; R_1 ]. Then, by assuming that R_d is invertible, we obtain the following result. Let R(z)∈ℂ(z)^m× m be a para-Hermitian rational matrix as in (<ref>) with λ inside the unit circle S^1 and R_d invertible. Then the following pencil L_(z) is a strongly minimal *-palindromic linearization for (1+z)R(z): L_(z) := [[ 0 K_(z) F; [2pt/2pt] | K_^*(z) 0 (1+z)(e_d⊗ I_m); | z F^* (1+z)(e_d^T⊗ I_m) (1+z)R_0 ]], where e_d is the last canonical vector of ^d and ⊗ denotes the Kronecker product. It is easy to see that L_(z) is *-palindromic, and the strong minimality follows from the facts that |λ|<1 and R_d is invertible. To show that the transfer function of L_(z) is (1+z)R(z) we can use <cit.>. First, we set L_(z) = [[ -A(z) B(z); C(z) D(z) ]]. Now, we consider the following rational matrix: H(z):=[[ (1+z)z^d-1(1-z)^dI_m ⋯ 1+z1-zI_m Z_d^T(z) ⋯ Z_1^T(z) I_m ]], where Z_k(z):=∑_j=k^dR_j/(z-)^j-k+1, for k=1,…,d. Then, by <cit.>, [[ C(z) D(z) ]]H(z)^T=(1+z)R(z) is the transfer function of L_(z), since [[ -A(z) B(z) ]] H(z)^T=0, H(z) has full row normal rank and the right-most block of H(z) is I_m. 
We now construct an equivalent linear system matrix to L_(z), that is also a strongly minimal *-palindromic linearization for (1+z)R(z), where R(z) is as in (<ref>), when R_d is invertible. For that, we consider the block Hankel matrix: H:= [[ R_d; R_d-1; R_d ⋮; R_d R_d-1 … R_1 ]] ∈ℂ^dm × dm, and we set M_(z):=[ S_d(z); R_d + S_d-1(z); S_d(z) ⋮ ; S_d(z) R_d + S_d-1(z) ⋯ R_2 + S_1(z) ], G:=[ R_d^*; R_d-1^*; ⋮; R_1^* ], with S_j(z):=(-z)R_j. Then, we obtain the following result. Let R(z)∈ℂ(z)^m× m be a para-Hermitian rational matrix as in (<ref>) with λ inside the unit circle S^1 and R_d invertible. Then the following pencil L_(z) is a strongly minimal *-palindromic linearization for (1+z)R(z): L_(z) = [[ 0 M_(z) F; [2pt/2pt] | M_^*(z) 0 (1+z) G; | z F^* (1+z) G^* (1+z) R_0 ]]. Observe that L_(z) satisfies the following identity: L_(z) = [[ I_dm ; H^* ; I_m ]] L_(z) [[ I_dm ; H ; I_m ]], where L_(z) is the strongly minimal *-palindromic linearization in Theorem <ref>. Note that L_(z) is also *-palindromic. In addition, since R_d is invertible, L_(z) is also a strongly minimal linearization for (1+z)R(z), since the transformation in (<ref>) preserves the transfer function of L_(z) <cit.> and the (strong) minimality in that case. If R_d is not invertible, we can construct a trimmed linear system matrix from L_(z) that is a strongly minimal *-palindromic linearization for (1+z)R(z), as we show in Theorem <ref>. The proof of Theorem <ref> is inspired by the proof of <cit.>, a result which is valid only for (unstructured) polynomial matrices. Let R(z)∈ℂ(z)^m× m be a para-Hermitian rational matrix as in (<ref>) with λ inside the unit circle S^1 and R_d 0. Let H be the block Hankel matrix in (<ref>), and let r:=rank(H). Consider unitary matrices U=[[ U_1 U_2 ]] and V=[[ V_1 V_2 ]] that “compress” the matrix H as follows: U^* H V =[[ 0 0; 0 U_2^* H V_2 ]]=: [[ 0 0; 0 H ]], where H is of dimension r× r and invertible. Now, we set: L_(z):=[[ U^* ; V^* ; I_m ]] L_(z) [[ U ; V ; I_m ]], where L_(z) is the linear system matrix in Theorem <ref>. Then, L_(z) is a “compressed” pencil of the form L_(z)=:[[ 0 0 0 0 0; 0 0 0 A_c(z) F_c; [2pt/2pt] 0 0 0 0 0; 0 A_c^* (z) 0 0 (1+z) G_c; 0 z F_c^* 0 (1+z) G_c^* (1+z) R_0 ]], where A_c (z) ∈ℂ[z]^r× r is a regular pencil, and L_c(z):= [[ 0 A_c(z) F_c; A_c^* (z) 0 (1+z) G_c; z F_c^* (1+z) G_c^* (1+z) R_0 ]] is a strongly minimal *-palindromic linearization for (1+z)R(z). We define the following pencils based on submatrices of (<ref>): [ X_1(z); X_2(z) ] := [[ K_(z); | (z+1)(e_d^T⊗ I_m) ]] and [ Y_1(z) Y_2(z) ]:= [[ K_^T(z) e_d⊗ I_m ]]. Then, we have that [ X_1(z); X_2(z) ] H= [ M_(z); (1+z) G^* ] and H [ Y_1(z) Y_2(z) ]= [ M_(z) F ]. Since [ X_1(z); X_2(z) ] and [ Y_1(z) Y_2(z) ] have full column rank and full row rank, respectively, for all z_0∈ℂ and also at ∞, i.e., the matrix coefficients of z of these two pencils have full rank, then [ M_(z); (1+z) G^* ] and [ M_(z) F ] have rank r for all z_0∈ℂ and at ∞. Moreover, the right null space of [ M_(z); (1+z) G^* ] is spanned by the columns of V_1 and the left null space of [ M_(z) F ] is spanned by the rows of U_1^*. Analogously, we set: [ X_1(z); X_2(z) ] := [[ ( K_^*(z))^T; | z(e_d^T⊗ I_m) ]] and [ Y_1(z) Y_2(z) ]:= [[ K_^*(z) (1+ z)(e_d⊗ I_m) ]], and we obtain that [ X_1(z); X_2(z) ] H^*= [ M_^*(z); z F^* ] and H^* [ Y_1(z) Y_2(z) ]= [ M_^*(z) (1+z)G ]. Observe that [ X_1(z); X_2(z) ] and [ Y_1(z) Y_2(z) ] have also full column rank and full row rank, respectively, for all z_0∈ℂ and at ∞. 
Then [ M_λ^*(z); z F^* ] and [ M_λ^*(z) (1+z) G ] have rank r for all z_0∈ℂ and at ∞. Moreover, the right null space of [ M_λ^*(z); z F^* ] is spanned by the columns of U_1 and the left null space of [ M_λ^*(z) (1+z) G ] is spanned by the rows of V_1^*. Then, (<ref>) and (<ref>) and the observations about the null spaces and the ranks imply the compressed form (<ref>), and that the first and second block columns and that the first and second block rows of the 3× 3 partitioned pencil L_c(z) in (<ref>) have each full rank r for all z_0 ∈ℂ and at ∞. This implies that L_c(z) is a strongly minimal linear system matrix, provided that A(z):=[[ 0 A_c(z); A_c^* (z) 0 ]] is regular and that A_c(z) and A_c^* (z) have no eigenvalues in common. Indeed, we now prove that A_c(z) is regular, which implies that A_c^* (z) and, thus, A(z) are regular. We will also prove that A_c(z) and A_c^* (z) do not have eigenvalues in common (finite nor infinite). Observe that [ 0 0; 0 A_c (z) ] = U^* K_λ (z) H V. Since K_λ (z) is invertible for all z_0 ∈ℂ, z_0 λ, and at ∞, we get that A_c (z) ∈ℂ[z]^r × r is invertible for all z_0 ∈ℂ, z_0 λ, and at ∞. Therefore A_c (z) is regular and has at most one eigenvalue equal to λ. The equality [ 0 0; 0 A_c^* (z) ] = V^* H^* K_λ^* (z) U proves analogously that A_c^* (z) is regular and has at most one eigenvalue equal to 1/λ. Since λ is inside the unit circle 1/λ and A_c(z) and A_c^* (z) do not have eigenvalues in common. Since L_c (z) is clearly *-palindromic, it only remains to prove that the transfer function of L_c(z) is (1+z)R(z). For that, we define the rational matrix N(z):=[[ (1+z)z^d-1(1-z)^dI_m ⋯ 1+z1-zI_m 1(z-)^d I_m ⋯ 1z- I_m I_m ]], which satisfies that [[ 0 0 0 0 0; 0 0 0 A_c(z) F_c; [2pt/2pt] 0 0 0 0 0; 0 A_c^* (z) 0 0 (1+z) G_c ]][ U^* 0 0; 0 V^* 0; 0 0 I_m ]N(z)^T=0. In particular, [[ 0 A_c(z) F_c; [2pt/2pt] A_c^* (z) 0 (1+z) G_c ]] M(z)^T=0, where M(z)^T:= [ [0 I_r]U^* 0 0; 0 [0 I_r]V^* 0; 0 0 I_m ]N(z)^T. Then, we have by <cit.> that the transfer function of the system matrix L_c(z) is [ z F_c^* (1+z) G_c^* (1+z) R_0 ] M(z)^T = [ z F^* (1+z) G^* (1+z) R_0 ]N(z)^T = (1+z)R(z). If the para-Hermitian rational matrix R(z) has more than one pole inside S^1, i.e., Ω_in={_1,…,_p} and R(z)= ∑_i=1^p R__i(z) + R_0+ ∑_i=1^p R__i^*(1/z) , where R__i(z) and R__i^*(1/z) are as in (<ref>) and (<ref>), we can construct a strongly minimal *-palindromic linearization from each pole _i and combine them, to obtain a strongly minimal *-palindromic linearization of (1+z)R(z). More precisely, if L__i(z) is a strongly minimal *-palindromic linearization of (1+z)[R__i(z)+R__i^*(1/z)], for i=1,…,p, of the form: L__i(z):= [[ 0 P_i(z) F_i; P_i^*(z) 0 (1+z) G_i; z F_i^* (1+z) G_i^* 0 ]], constructed by using Theorem <ref>, Theorem <ref> or Theorem <ref>, then L(z)=[[ 0 P_1(z) F_1; P_1^*(z) 0 (1+z) G_1; [2pt/2pt] ⋱ ⋮; [2pt/2pt] 0 P_p(z) F_p; P_p^*(z) 0 (1+z) G_p; z F_1^* (1+z) G_1^* ⋯ z F_p^* (1+z) G_p^* (1+z) R_0 ]] is a strongly minimal *-palindromic linearization of (1+z)R(z). We can consider an alternative decomposition for rational matrices that does not require knowledge of their stable and anti-stable parts, based on the polar sections at 0 and ∞. Namely, if a para-Hermitian rational matrix R(z)∈ℂ(z)^m× m is not proper then it must have a pole at z=∞ and a pole at z=0, with the same partial multiplicities. 
Hence, one can decompose R(z) as follows: R(z) = R_p(z) + R_0(z) + R_∞(z), R_0(z) = ∑_i=1^d R_-i z^-i, R_∞(z) = ∑_i=1^d R_i z^i, where R_p(z) is a proper para-Hermitian rational matrix, and R_0(z) and R_∞(z) are the polar sections at z=0 and z=∞, respectively, in the Laurent series of R(z) around infinity. The fact that their highest degree coefficient d is the same, follows from the para-Hermitian property. Moreover, this also implies that R_-i = R_i^*, i=1,…,d. If we also assume that the proper rational matrix R_p(z) in this decomposition (<ref>) is a constant matrix R_0, then R(z) is a para-Hermitian rational matrix of the form R(z)=R_dz^d+⋯+R_1 z+ R_0 + R_-11z+⋯+ R_-d1z^d, with R_i^*=R_-i, for i=1,…,d, and R_0^*=R_0. Then, if R_-d is invertible, we can apply Theorems <ref> or <ref> with λ = 0 to construct a strongly minimal *-palindromic linearization of (1+z)R(z). If R_-d is not invertible, we can apply Theorem <ref> with λ = 0. We consider here the representation in (<ref>) in the simplest case d=1. That is, R(z) is a m× m para-Hermitian rational matrix of the form R(z)=R_1 z+ R_0 + R_-11z, meaning that R_1^*=R_-1 and R_0^*=R_0. Let us first assume that R_-1 (and hence also R_1) is invertible. Then, the following linear system matrix L(z) is a strongly minimal *-palindromic linearization for (1+z)R(z): L(z)= [[ 0 -z I_m R_-1; - I_m 0 I_m (1+z); z R_1 I_m (1+z) R_0(1+z) ]]. Observe that L(z) above can be obtained from the pencil (<ref>) taking d=1 and λ=0. If R_1 is not invertible, i.e., R_1=r<m. Then, we can write R_1=L_1U_1^*, with L_1,U_1∈^m× r and L_1 = U_1 = r. And, since R_1^*=R_-1, we have that R_-1=U_1 L_1^*. We can thus construct the following trimmed pencil L(z)= [[ 0 -z I_r L_1^*; - I_r 0 U_1^* (1+z); z L_1 U_1 (1+z) R_0(1+z) ]] , that is a strongly minimal *-palindromic linearization for (1+z)R(z). § CONCLUSIONS AND FUTURE WORK Given a para-Hermitian (resp., para-skew-Hermitian) rational matrix R(z), in this paper we show how to construct a strongly minimal *-palindromic (resp., *-anti-palindromic) linearization for H(z):=(1+z)R(z), whose eigenvalues preserve the symmetries of the zeros and poles of R(z), and its minimal indices preserve the equality of the left and right minimal indices in the singular case. In some cases, the proposed techniques require some computations to be performed to construct the linearization; these computations can be performed by using reliable tools such as the SVD. However, a full analysis of the possible stability of the method is left as an open problem. To obtain our main results, we develop several other results on para-Hermitian and para-skew-Hermitian matrices that are also interesting by themselves, as the structured properties of several Möbius transforms acting on these classes of matrices, the properties of additive decompositions into stable and anti-stable parts, and ways to parameterize explicitly para-Hermitian and para-skew-Hermitian matrices. 01 AhmadMehrmann Sk. S. Ahmad, V. Mehrmann, Backward errors for eigenvalues and eigenvectors of Hermitian, skew-Hermitian, H-even and H-odd matrix polynomials, Linear and Multilinear Algebra, 61(9) (2013), 1244–1266 AmMaZa15 A. Amparan, S. Marcaida, I. Zaballa, Finite and infinite structures of rational matrices: a local approach, Electron. J. Linear Algebra, 30 (2015), 196–226. Antoulas A. C. Antoulas, Approximation of Large-Scale Dynamical Systems, SIAM, Philadelphia, 2005. BN G. Barbarino, V. 
Noferini, On the Rellich eigendecomposition of para-Hermitian matrices and the sign characteristics of palindromic matrix polynomials, Linear Algebra Appl., 672 (2023), 1–27. DMQVD F. M. Dopico, S. Marcaida, M. C. Quintana, P. Van Dooren, Local linearizations of rational matrices with application to rational approximations of nonlinear eigenvalue problems, Linear Algebra Appl., 604 (2020), 441–475. DNZ F. M. Dopico, V. Noferini, I. Zaballa, Rosenbrock's theorem on system matrices over elementary divisor domains, submitted, available at https://arxiv.org/abs/2406.18218. DQV F. M. Dopico, M. C. Quintana, P. Van Dooren, Linear system matrices of rational transfer functions, “Realization and Model Reduction of Dynamical Systems. A Festschrift to honor the 70th birthday of Thanos Antoulas”, pp. 95-113, Springer-Verlag (2022). dopquinvan2022 F. M. Dopico, M. C. Quintana, P. Van Dooren, Strongly minimal self-conjugate linearizations for polynomial and rational matrices, SIAM J. Matrix Anal. Appl., 43(3) (2022), 1354–1381. For75 G. D. Forney, Minimal bases of rational vector spaces, with applications to multivariable linear systems, SIAM J. Control, 13 (1975), 493–520. Geninetal Y. Genin, Y. Hachez, Y. Nesterov, R. Stefan, P. Van Dooren, S. Xu, Positivity and linear matrix inequalities, Eur. J. Control, 8(3) (2002), 275–298. realC. Heij, A. Ran, F. van Schagen, Introduction to Mathematical Systems Theory: Linear Systems, Identification and Control, Birkhäuser Verlag, Basel, 2007. Kai80 T. Kailath, Linear Systems, Prentice Hall, Englewood Cliffs, NJ, 1980. KressnerQRpal D. Kressner, C. Schröder, D. Watkins, Implicit QR algorithms for palindromic and even eigenvalue problems, Numer. Algor., 51 (2009), 209–238. GoodVibra D. S. Mackey, N. Mackey, C. Mehl, V. Mehrmann, Structured polynomial eigenvalue problems: Good vibrations from good linearizations, SIAM J. Matrix Anal. Appl., 28 (2006), 1029–1051. antitriangular D. S. Mackey, N. Mackey, C. Mehl, V. Mehrmann, Numerical methods for palindromic eigenvalue problems: Computing the anti‐triangular Schur form, Numer. Linear Algebra Appl., 16(1) (2009), 63-86. M4Mob D. S. Mackey, N. Mackey, C. Mehl, V. Mehrmann, Möbius transformations of matrix polynomials, Linear Algebra Appl., 470 (2015), 120–184. Nof12 V. Noferini, The behaviour of the complete eigenstructure of a polynomial matrix under a generic rational transformation, Electron. J. Linear Algebra, 23 (2012), 607–624. NV23 V. Noferini, P. Van Dooren, Root vectors of polynomial and rational matrices: theory and computation, Linear Algebra Appl., 656 (2023), 510–540. Rosen H. H. Rosenbrock, State-space and Multivariable Theory, Thomas Nelson and Sons, London, 1970. vandooren-laurent-1979 P. Van Dooren, P. Dewilde, J. Vandewalle, On the determination of the Smith-McMillan form of a rational matrix from its Laurent expansion, IEEE Trans. Circuit Syst., 26(3) (1979), 180–189. WPP S. Weiss, J. Pestana, I. K. Proudler, On the existence and uniqueness of the eigenvalue decomposition of a parahermitian matrix, IEEE Trans. Signal Process., 66(10) (2018), 2659–2672.
http://arxiv.org/abs/2407.12299v1
20240717034108
Dispersive Bootstrap of Massive Inflation Correlators
[ "Haoyuan Liu", "Zhehan Qin", "Zhong-Zhi Xianyu" ]
hep-th
[ "hep-th", "astro-ph.CO", "hep-ph" ]
§ Dispersive Bootstrap of Massive Inflation Correlators
Haoyuan Liu^a liuhy23@mails.tsinghua.edu.cn, Zhehan Qin^a qzh21@mails.tsinghua.edu.cn, Zhong-Zhi Xianyu^a,b zxianyu@tsinghua.edu.cn ^aDepartment of Physics, Tsinghua University, Beijing 100084, China ^bPeng Huanwu Center for Fundamental Theory, Hefei, Anhui 230026, China
§ ABSTRACT Inflation correlators with massive exchanges are central observables of cosmological collider physics, and are also important theoretical data for us to better understand quantum field theories in dS. However, they are difficult to compute directly due to many technical complications of the Schwinger-Keldysh integral. In this work, we initiate a new bootstrap program for massive inflation correlators with dispersion relations on complex momentum planes. We classify kinematic variables of a correlator into vertex energies and line energies, and develop two distinct types of dispersion relations for each of them, respectively called vertex dispersion and line dispersion relations. These dispersion methods allow us to obtain full analytical results of massive correlators from a knowledge of their oscillatory signals alone, while the oscillatory signal at the tree level can be related to simpler subgraphs via the cutting rule. We further apply this method to massive loop correlators, and obtain new analytical expressions for loop diagrams much simpler than existing results from spectral decomposition. In particular, we show that analyticity demands the existence of an “irreducible background” in the loop correlator, which is unambiguously defined, free of UV divergence, and independent of renormalization schemes.
§ INTRODUCTION There have been active and ongoing efforts in the study of n-point correlation functions of primordial curvature fluctuations in recent years <cit.>. These functions are, on the one hand, observables extracted from cosmic microwave background (CMB) or large-scale structure (LSS) data, and, on the other hand, generated by quantum processes of particle production and interaction during cosmic inflation. Therefore, these correlation functions, subsequently called inflation correlators, are the central objects that bridge the observational data with quantum field theory in inflationary spacetime. A particular class of correlation functions mediated by massive particles has attracted much attention in recent years <cit.>. 
A propagating massive particle during inflation could impact the inflaton fluctuations through a resonant process, and leaves a distinct pattern in the inflation correlators as logarithmic oscillations in momentum ratios. The logarithmic nature is a consequence of exponential expansion of the inflating universe <cit.>, while the oscillations encode rich physical information about the massive particles. For these reasons, the logarithmic oscillations have been dubbed “clock signals” and “cosmological collider (CC) signals.” The phenomenological studies of CC physics have identified many scenarios producing large CC signals <cit.>, which are promising targets for the current and upcoming CMB and LSS observations <cit.>. To connect theory predictions to observational data, it is crucial to perform efficient and accurate computations of inflation correlators. It's not surprising that progress from analytical studies can facilitate this process. Theory-wise, inflation correlators encode important data of quantum field theories in the bulk de Sitter (dS), and are interesting objects in their own rights. Given the great success of amplitude program in other spacetime backgrounds such as Minkowski and AdS, we are now increasingly motivated in developing amplitude techniques in dS, which are more relevant to our very own universe. Many progresses have been made recently in the study of dS correlators or cosmological correlators in general. Relevant to this work is the analytical structure of massive inflation correlators in momentum space, which have been explored in recent years from different angles, e.g., <cit.>. To explain this analytical structure, it is convenient to start from a soft limit where the momentum K of a bulk massive propagator goes to zero <cit.>. As will be detailed below, a general graph in this limit can be separated into three pieces: a nonlocal signal which is in nonanalytic in the soft momentum K in the form of a branch cut; a local signal which is analytic in K, but nonanalytic in the energy ratios also in the form of a branch cut; and finally, a background which is analytic in both momentum K and other energy variables. Although we use the analytical property to classify the signals and the background, this classification has a practical consequence when doing real computations. To explain this point, we note that a bulk computation of a given graph involves a time integral at each bulk vertex and a momentum integral for each independent loop <cit.>. In particular, the bulk propagators contain a part that depends on the ordering of its two time variables, and this makes the bulk time integral heavily nested. Therefore, a direct integration is typically difficult.[See, however, a recently proposed method to compute arbitrary nested time integrals <cit.>.] However, a curious observation is that the computation of signals (both nonlocal and local) is generally simpler than the background. The reason is that, to get the signals, one can execute appropriate cuts of the graph to remove certain nested time integrals. The simplicity of signals also shows up in final results: Typically, both the signal and the background are (generalized) hypergeometric functions of momentum ratios, but the background is of higher “transcendental weight”[Here we are using the term “transcendental weight” to characterize the complexity of hypergeometric series arising in inflation correlators. Very loosely, an irreducible hypergeometric function of n-variables can be thought of as having weight n. 
This meaning can be made precise by the family-chain decomposition, as explained in <cit.>.] than the signal <cit.>. In addition, a closer inspection shows that the computation of nonlocal signal is simpler than that of local signal. To get the nonlocal signal, one can take a simpler nonlocal cut of the graph, which replaces the cut propagator by its real part <cit.>. The nonlocal signal also obeys the on-shell factorization at arbitrary loop orders <cit.>. In comparison, the computation of local signals requires a subtle and asymmetric cut, which depends on external kinematics and also retains the imaginary part of the propagator <cit.>. Besides, it remains challenging to identify local signals at arbitrary loop orders although some progress is ongoing. To recapitulate, our past experience shows that there is a “hierarchy” in the complexity and also the difficulty of computing the three parts of a given graph: In descending order, we have background > local signal > nonlocal signal. Thus, it is tempting to ask if we can bootstrap the full result of a given graph starting from its signal part alone, or better, if we can bootstrap the full shape with the knowledge of the nonlocal signal only. To answer these questions, in this work, we initiate a “dispersive” bootstrap program for massive inflation correlators, with the dispersion relation as a key ingredient. The dispersion relation is a very well studied technique, tailored to recover the full function from knowledge of its discontinuities. As the first step, we apply the dispersion relations and get full analytical expressions for a range of massive inflation correlators at both tree and 1-loop levels. The ingredient of the dispersion integral can be either the full signal (both local and nonlocal) or the nonlocal signal alone. Technically, these ingredients can be obtained by computing factorized time integrals, which correspond to simpler subgraphs at the tree level. The essential idea of this method is schematically illustrated in Fig. <ref>. The dispersion relation is an old tool. It has played a central role in the flat-space S-matrix bootstrap program <cit.>. There have also been many studies on the cutting rule and dispersion relations in CFT <cit.>. Given many types of cutting rules for inflation correlators proposed recently <cit.>, it is a natural next step to try to “glue” those cut subgraphs back together. While there are many discussions on dispersion relations at a conceptual level, we are not aware of any previous study using dispersion relations to explicitly bootstrap massive inflation correlators. We fill this gap by providing explicit calculations with dispersion relations for a few typical examples. Our results at the tree level are not new; All the tree correlators considered in this work have been worked out using other methods, and our method here is by no means “simpler” than existing methods such as cosmological bootstrap <cit.> or partial Mellin-Barnes representation <cit.>. Rather, we use these known examples as tests of principle for the dispersive bootstrap method. We expect that one can use this method to “glue” more subgraphs and get full results for more complicated graphs, either analytically or numerically, where other methods may not be immediately applicable. On the other hand, at the 1-loop level, we do obtain new analytical expressions for a class of 1-loop 3-point functions. 
Our expressions are substantially simpler than known results obtained with spectral decomposition <cit.>, and are far easier to implement numerically. This result shows that the dispersive bootstrap can be a promising way to compute inflation correlators with massive loops, which we will further develop in a future study. An appealing feature of our dispersion technique at the 1-loop level is that it is insensitive to the renormalization ambiguities, because the UV sensitive part of the 1-loop correlator can always be subtracted by a local counterterm and thus is local and analytic. In a sense, the background part of the 1-loop diagram obtained by the dispersion relation can be viewed as an “irreducible” companion of the signals, whose existence is enforced by the correct analytical behavior of the full correlator. Outline of this work At the heart of our dispersive bootstrap is a detailed understanding of the analytical structure of a specific graph contribution to an inflation correlator. In general, after properly removing all tensor structures, a tree-graph contribution to the inflation correlator is a scalar function of two types of kinematic variables: the vertex energies and the line energies. The vertex energy is the magnitude sum of momenta of all external lines at a vertex, while a line energy is the magnitude of the momentum flowing in an internal line. For physically reachable kinematical configurations (henceforth physical regions), vertex and line energies are necessarily positive real. However, to develop dispersion relations, we need to study a graph as a function of complex energies. Our strategy is to consider only one variable being complex at a time, with all other variables staying in their physical regions. We can complexify either a vertex energy or a line energy. In both cases, a massive inflaton correlator develops branch points on the corresponding complex plane, connected by branch cuts. With these branch cuts, we can build corresponding dispersion integrals which compute the full correlator. Thus, we have two distinct types of dispersion relations: the vertex dispersion relation built on a vertex energy complex plane, and the line dispersion relation built on a line energy complex plane. As we shall see, for a four-point correlator with single massive exchange, the vertex dispersion relation computes the whole graph from its signal, both local and nonlocal. On the other hand, the line dispersion relation computes the whole graph from its nonlocal signal only. While the vertex and line dispersion relations can be constructed for very general tree graphs, in this work, for definiteness, we will focus on 4-point correlators with s-channel massive exchange (Fig. <ref>) and the related 3-point single-exchange correlators (Fig. <ref>), the only exception being the 3-point 1-loop bubble graph (Fig. <ref>), which is related to tree graphs via spectral decomposition. In Sec. <ref>, we begin with a brief review of inflation correlators and the dispersion relation. In particular, we introduce the four-point seed integral ℐ^p_1p_2(k_12,k_34,k_s) in (<ref>) which is the central object to be studied in this work. Here k_i≡| k_i| (i=1,⋯,4,s) are magnitudes of momenta (also called energies) shown in Fig. <ref>, and k_ij≡ k_i+k_j. A very important technical step is the analytical continuation of inflation correlators on the complex energy plane. Thus, in Sec. 
<ref>, we use a few toy examples to explain how to take analytical continuation by contour deformation of an integral expression as a function of its (unintegrated) parameters. Then, we put this method in use in Sec. <ref> and identify the branch cut of the seed integral ℐ^p_1p_2(k_12,k_34,k_s) on the complex k_12 plane. With this method, we can compute the discontinuity of the seed integral across this branch cut without computing the integral itself, as summarized in (<ref>), which is the main result of this section. Then, in Sec. <ref>, we use the vertex dispersion relation to bootstrap a few 3-point and 4-point correlators. For the 3-point correlator, we also consider a one-loop example, where we make use of the loop signal computed via spectral decomposition and dispersively bootstrap the full loop correlator. While our computation of tree graphs recovers previously known results, we get a new analytical expression for the 3-point 1-loop correlator substantially simpler than the existing result. In Sec. <ref>, we switch to a different perspective and consider the seed integral ℐ^p_1p_2(k_12,k_34,k_s) on the complex k_s plane. We show that the seed integral also possesses a few branch points on k_s plane which are connected by branch cuts. The discontinuities of these branch cuts are again computable. Remarkably, all the discontinuities in this case can be related to the discontinuity of the nonlocal signal alone, as shown in (<ref>). So, we can build up a line dispersion relation connecting the whole seed integral with its nonlocal signal. Then, in Sec. <ref>, we use the line dispersion to recover the full seed integral from the nonlocal signal. This calculation has the advantage that it uses a minimal set of data to bootstrap the full shape, but the drawback that the computation is complicated. It is nevertheless a useful proof of concept and points to possibilities of (analytical or semi-analytical) computation of more complicated correlators from their readily available nonlocal signal alone. We provide further discussions and outlooks in Sec. <ref>. In the first two appendices, we collect a few frequently used notations (App. <ref>) and special functions, together with their useful properties (App. <ref>). We collect the details of analytical evaluations of vertex and line dispersion integrals in App. <ref> and App. <ref>, respectively. Finally, in App. <ref>, we use a simple 1-loop correlator in Minkowski spacetime to demonstrate the relation between the dispersive method and a conventional calculation with dimensional regularization. Comparison with previous works The dispersion relation is a topic with rich history. It is not surprising that this relation, together with several closely related concepts such as discontinuities, the optical theorem, cutting rules, has been explored in the context of cosmological correlators (and, relatedly, the wavefunction coefficients) from various different angles <cit.>. There are a few similarities and differences between the discontinuities studied in the previous works and the current work, on which we very briefly comment here. In previous works such as <cit.>, the discontinuity of an amplitude (typically a wavefunction coefficient) is normally defined to be the difference between the amplitude and its complex conjugate with one or several energies' signs flipped. In this combination, one can replace one or a product of several propagators by the real part. (It was the imaginary part in <cit.> due to a different convention.) 
Since the real part of a bulk propagator is always factorized, the discontinuity of an amplitude defined in this way possesses a cutting rule. The nice thing about this definition is that it has a natural origin from the the unitarity of the theory, and therefore, one can use this discontinuity to formulate an optical theorem for cosmological amplitudes <cit.>. Generalized to the loop level, such a discontinuity can be expressed as momentum integrals of products of (discontinuity of) tree sub-diagrams <cit.>. The dispersion relations for wavefunctions were used to construct wavefunction coefficients with massless scalars in <cit.>. Similar dispersion relations in full Mellin space were discussed in <cit.>. In comparison, the discontinuity we are going to use is defined with respect to a correlator alone, without invoking its complex conjugate. More importantly, for the dispersion relation to work as a bootstrap tool, we need to identify all branch cuts of a correlator on the entire complex plane of an energy, where the energy can take arbitrary unphysical value. To extract this information, it is essential to take analytical continuation of a correlator beyond its physical domain, which is not a trivial task as we shall show. Furthermore, our starting point is the correlators rather than the bulk propagators, so our dispersive bootstrap can be used to directly construct the full correlators, for both tree and loop diagrams, rather than the integrand as in <cit.>. With that said, there is certainly a connection between our definition of discontinuity of a correlator and the discontinuity defined in previous works. For instance, we find that the discontinuity of a tree diagram is also factorized, and expressible in terms of factorized part of propagators. Also, in the line dispersion relation introduced in this work, the discontinuity in the squeezed limit corresponds exactly to the nonlocal signal, so the discontinuity also obeys the nonlocal cutting rule and the factorization theorem as the nonlocal signal <cit.>. It would be interesting to explore the deeper connections between this work and previous works such as <cit.> where basic properties of amplitudes such as unitarity and locality are manifest. We leave this to future exploration. Notations and conventions We work in the slow-roll limit of the inflation where the spacetime is described by the inflation patch of the dS spacetime, and the spacetime metric reads s^2=a^2(τ)(-τ^2+x^2). Here x∈ℝ^3 is the spatial comoving coordinate, τ∈(-∞,0) is the conformal time, and a(τ)=-1/(Hτ) is the scale factor with H being the constant Hubble parameter. We take the energy unit H=1 throughout this work. We use bold italic letters such as k to denote 3-momenta and the corresponding italic letter k≡| k| to denote its magnitude, which is also called an energy. For sums of several indexed quantities, we use a shorthand notation such as k_12≡ k_1+k_2. Other frequently used variables are collected in App. <ref>. Finally, we make heavy use of the discontinuity of a complex function across its branch cut and it is useful to fix our convention from the very beginning. In this work, the branch cut of a function f(z) appears almost always on the real axis of z. Therefore, we define the discontinuity of a function f(z) for such a branch cut as: Disc_zf(z)≡lim_ϵ→0^+[f(z+ϵ)-f(z-ϵ)].    
(z∈ℝ) § ANALYTICAL STRUCTURE ON A COMPLEX VERTEX-ENERGY PLANE §.§ Inflation correlators In this subsection, we set the stage by reviewing the basic kinematic structure of the correlation functions to be studied in this work. We consider generic boundary correlators of a massless or conformal scalar field, with arbitrary massive bulk exchanges. Apart from a three-point example in the next section, we will mostly consider tree-level diagrams. Also, we assume all bulk fields are directly coupled, i.e., without derivatives acting on them. Generalizations to derivative couplings or spinning exchanges are straightforward by including appropriate tensor structures. Vertex energies and line energies Using the standard diagrammatic rule in the Schwinger-Keldysh (SK) formalism <cit.>, it is straightforward to write down an integral expression for any tree-level correlation function. For definiteness, let us consider a scalar theory with a conformal scalar field ϕ_c and a collection of N_F massive fields _A(A=1,⋯, N_F). In dS, a conformal scalar field ϕ_c has an effective mass m^2=2, while the masses of _A can be arbitrary. We assume these fields are coupled directly via polynomial interactions with (possibly) power time dependences. Then, the SK integral for a generic tree-level correlator of ϕ_c takes the following form: 𝒢( k_1,⋯, k_N)=∑_å_1,⋯,å_V=±∫_-∞^0∏_ℓ=1^V[τ_ℓ(-å_ℓ)(-τ_ℓ)^p_ℓ]∏_i=1^NC_å_i(k_i,τ_i)∏_j=1^ID_å_j_j(K_j;τ_j,τ_j'). This is an integral of V time variables τ_ℓ for all V vertices, with the integrand being products of time-dependent coupling factors (-τ_ℓ)^p_ℓ and two types of propagators. We assume the powers p_ℓ are not too negative such that the graph remains perturbative in the τ→ 0 limit. The bulk-to-boundary propagator C_å(k;τ) is constructed from a conformal scalar field ϕ_c with mass m^2=2: C_å(k;τ)=ττ_f2ke^å kτ. Here |τ_f|≪ 1 is a final time cutoff, and is introduced to characterize the leading fall-off behavior of a conformal scalar as τ→ 0. In physical situations with external modes being massless scalars or tensors, this cutoff is unnecessary.[Also, the case of external massless mode can be conveniently obtained from the conformal scalar case here by acting appropriate differential operators of kinematic variables <cit.>.] Moreover, D_å(k;τ_1,τ_2) is the bulk propagator for the massive scalar field with mass m: D_-+ (k;τ_1,τ_2) =  π4e^-πν(τ_1τ_2)^3/2H_ν^(1)(-kτ_1)H_-ν^(2)(-kτ_2), D_+- (k;τ_1,τ_2) =  π4e^-πν(τ_1τ_2)^3/2H_-ν^(2)(-kτ_1)H_ν^(1)(-kτ_2), D_±± (k;τ_1,τ_2)=  D_∓±^(ν)(k;τ_1,τ_2)θ(τ_1-τ_2)+D_±∓^(ν)(k;τ_1,τ_2)θ(τ_2-τ_1), where H_ν^(j)(z)(j=1,2) is the Hankel function of j'th type. In this work, we choose to be in the principal series, namely, m>3/2, so that the mass parameter ν≡√(m^2-9/4) is positive, and we get oscillatory signals from . Generalization to complementary scalar with 0<m<3/2 is completely straightforward. In (<ref>), we have summations over all SK indices å_ℓ=± for all V vertices. When doing so, we require each of the SK indices appearing in the subscript of propagators to be identified with the corresponding index on the vertex to which the propagator attach. It is trivial to see that the conformal scalar bulk-to-boundary propagator (<ref>) satisfies the relation C_å(k_1;τ)⋯ C_å(k_n;τ)=C_å(k_1+⋯+k_n;τ) up to multiplications of prefactors τ_ℓτ_f/(2k_ℓ). 
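As a cross-check of the propagators just quoted, the following sketch (our own, not from the paper; mpmath assumed, since it handles Hankel functions of imaginary order) evaluates D_{±∓} and the time-ordered D_{++} numerically, verifies that D_{+-} is the complex conjugate of D_{-+}, and checks that the time-ordered propagator is continuous at τ_1 = τ_2. We read the Hankel order as iν for a principal-series field (the explicit factors of i appear to have dropped out of the extracted equations), and ν = 2 is an arbitrary test value.

from mpmath import mp, mpf, pi, exp, hankel1, hankel2

mp.dps = 30
nu = mpf(2)                           # mass parameter nu = sqrt(m^2 - 9/4), H = 1 units

def D_mp(k, t1, t2):                  # D_{-+}(k; tau_1, tau_2)
    pref = pi / 4 * exp(-pi * nu) * (t1 * t2) ** mpf('1.5')
    return pref * hankel1(1j * nu, -k * t1) * hankel2(-1j * nu, -k * t2)

def D_pm(k, t1, t2):                  # D_{+-}(k; tau_1, tau_2)
    pref = pi / 4 * exp(-pi * nu) * (t1 * t2) ** mpf('1.5')
    return pref * hankel2(-1j * nu, -k * t1) * hankel1(1j * nu, -k * t2)

def D_pp(k, t1, t2):                  # time-ordered D_{++}
    return D_mp(k, t1, t2) if t1 > t2 else D_pm(k, t1, t2)

k, t1, t2 = mpf(1), mpf('-2.3'), mpf('-0.7')
# D_{+-} is the complex conjugate of D_{-+}
print(abs(D_pm(k, t1, t2) - D_mp(k, t1, t2).conjugate()))
# continuity of the time-ordered propagator across tau_1 = tau_2
print(abs(D_pp(k, t1, t1 + mpf('1e-8')) - D_pp(k, t1, t1 - mpf('1e-8'))))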
As a result, the graph 𝒢({ k_i}) depends on all spatial vector momenta { k_i} only through two particular classes of scalar variables, the vertex energiesE_ℓ(ℓ=1,⋯,V) and the line energiesK_j (j=1,⋯,I): A vertex energy is assigned to each vertex of the tree diagram, and equals to the magnitude sum of the momenta of all external lines (bulk-to-boundary propagators) attached to the vertex. A line energy, on the other hand, is assigned to each internal line (bulk propagator) of the tree diagram, and equals to the magnitude of the momentum flowing through this bulk line. Clearly, by momentum conservation, a line energy can always be expressed as the magnitude of a vector sum of the momenta of all external lines at either side of the bulk line. Following the above analysis, we can always write the graph as: 𝒢( k_1,⋯, k_N)=∏_i=1^N(τ_f2k_i)×𝒢(E_1,⋯, E_V;K_1,⋯,K_I). We emphasize that this dependence works only for a particular diagram. Since we will develop dispersion relations at the diagrammatic level, this set of variables suit our purpose well. Explicitly, we have: 𝒢({E_ℓ};{K_j})=∑_å_1,⋯,å_V=±∫_-∞^0∏_ℓ=1^V[τ_ℓ(-å_ℓ)(-τ_ℓ)^p_ℓe^å_ℓ E_ℓτ_ℓ]∏_j=1^ID_å_j_j(K_j;τ_j,τ_j'). The dispersion relations always involve analytical continuation of the correlator in the complex plane of some variables. Typically, we consider the complex plane of only one variable at a time, and keep all other variables fixed in their physical region. For the tree diagram 𝒢({E_ℓ},{K_j}), we can choose to analytically continue a vertex energy E_ℓ or a line energy K_j. With these two choices, we can respectively develop a vertex dispersion relation, and a line dispersion relation. Each of them has its own merits and drawbacks. Four-point seed integral To be concrete, we will derive explicit dispersion relations for a tree-level four-point function of a conformal scalar ϕ_c with single exchange of a massive scalar in the s-channel, shown in Fig. <ref>. Dispersion relations for more general correlation functions have similar structures and will be presented in a future work. Assuming a direct coupling -12√(-g)ϕ_c^2, the integral expression for this graph reads: 𝒢_s( k_1,⋯, k_4) = -^2∑_å,=±å∫_-∞^0τ_1(-τ_1)^4τ_2(-τ_2)^4 × C_å(k_1;τ_1)C_å(k_2;τ_1)C_(k_3;τ_2)C_(k_4;τ_2)D_å(k_s;τ_1,τ_2). In light of the explicit expression for the conformal propagator (<ref>), it is useful to define the following dimensionless seed integral, as introduced in <cit.>, which enables direct generalization to arbitrary interactions and massless scalar/tensor external modes:[Note that our choice of arguments of the seed integral ℐ^p_1p_2 is different from previous papers including <cit.>, where the seed integral is defined to be a function of two dimensionless momentum ratios, often chosen as r_1=k_s/k_12 and r_2=k_s/k_34. Here, we prefer to explicitly retain the dependence on the three energies k_12, k_34, and k_s, since it is more transparent to consider the analytical property of the seed integral on the complex plane of an energy variable instead of a momentum ratio. ] ℐ^p_1p_2_𝖺𝖻(k_12,k_34,k_s) =-𝖺𝖻 k_s^5+p_12∫_-∞^0τ_1τ_2 (-τ_1)^p_1(-τ_2)^p_2e^𝖺k_12τ_1+𝖻k_34τ_2D_𝖺𝖻 (k_s;τ_1,τ_2); ℐ^p_1p_2(k_12,k_34,k_s)≡∑_å,=±ℐ^p_1p_2_å(k_12,k_34,k_s). The introduction of arbitrary power factors (-τ_i)^p_i(i=1,2) is to take account of various interaction types and external mode functions. The exponents p_1,2 can in general take complex values (as in models with resonant background). 
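The dependence on vertex and line energies is simple to set up in practice. A small kinematics sketch (our own; numpy assumed) computes k_12, k_34, and k_s = |k_1 + k_2| from random external 3-momenta, checks the triangle inequalities that define the physical region quoted at the end of this subsection, and forms the dimensionless ratios r_1 = k_s/k_12 and r_2 = k_s/k_34 used in explicit computations.

import numpy as np

rng = np.random.default_rng(3)
k1, k2, k3 = rng.standard_normal((3, 3))
k4 = -(k1 + k2 + k3)                      # total momentum conservation

mag = np.linalg.norm
k12 = mag(k1) + mag(k2)                   # vertex energy of the left vertex
k34 = mag(k3) + mag(k4)                   # vertex energy of the right vertex
ks  = mag(k1 + k2)                        # line energy of the s-channel exchange

print(k12, k34, ks)
print("physical region:", 0 <= ks <= k12 and 0 <= ks <= k34)

r1, r2 = ks / k12, ks / k34               # momentum ratios, both in (0, 1]
print("0 < r1, r2 <= 1:", 0 < r1 <= 1 and 0 < r2 <= 1)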
However, we will take p_1,2 to be real purely to reduce the complication of the analysis. The generalization to complex p_1,2 is straightforward. By construction, it is evident that the seed integral ℐ^p_1p_2_𝖺𝖻(k_12,k_34,k_s) is dimensionless, and thus can be expressed as a function of dimensionless momentum ratios. We will exploit this fact when doing explicit computations. Also, the graph 𝒢_s( k_1,⋯, k_4) is expressible in terms of the seed integral as: 𝒢_s( k_1,⋯, k_4)= ^2τ_f^416k_1k_2k_3k_4 k_sℐ^-2,-2(k_12,k_34,k_s). Thus, we have reduced the whole problem to an analysis of the seed integral. It is certainly possible to compute the entire seed integral by other methods such as partial Mellin-Barnes representation <cit.> or bootstrap equations <cit.>. However, to be in accordance with the spirit of the dispersive bootstrap, we avoid such a direct computation, but pay more attention to the analytical structure of the seed integral itself. Finally, it is worth noting that the physical regions of the energies (k_12,k_34,k_s) are given by 0≤ k_s≤ k_12 and 0≤ k_s≤ k_34 due to the triangle inequalities from momentum conservation. §.§ Dispersion relations In the current and next subsections, we make some mathematical preparations for deriving the vertex dispersion relation in Sec. <ref>. In this subsection, we very briefly explain what a dispersion relation is for nonexperts. Readers familiar with this topic are free to skip this entire subsection. At the mathematical level, a dispersion relation is nothing but a clever manipulation of the contour integral on a complex plane. As a simple but very typical example, suppose we have f(r) as a function of complex variable r, which possesses a branch cut along the negative real axis r<0, but is otherwise analytic everywhere. Furthermore, it is convenient (but not necessary) to assume that f(r) decreases fast enough as |r|→∞. Suppose that all quantitative information we have about f(r) is its discontinuity along the branch cut: Disc_rf(r)≡lim_→ 0^+[ f(r+)-f(r-)].     (r∈ℝ) Then, a dispersion relation makes use of this quantitative information to recover the original function f(r) for an arbitrary given point r on the complex plane. As shown in the left panel of Fig. <ref>, we enclose the given point r by a small contour 𝒞. Then, we have the following equality by virtue of the residue theorem: f(r)=∫_𝒞 r'2πf(r')r'-r. Now, as shown in the right panel of Fig. <ref>, we can deform the contour 𝒞 to a big circle 𝒞' without changing the answer of the integration. The new contour 𝒞' is chosen with radius |r'|→∞ except on the negative real axis, to which the contour approaches from both sides. By our assumption of the analytical property of f(r), the integration of f(r')/(r-r') along the big circle at |r'|→∞ vanishes. Then, we get: f(r)=∫_𝒞' r'2πf(r')r'-r=∫_-∞^+∞ r'2πDisc_r'f(r')r'-r. Thus, by performing an integration along the branch cut, we recover the value of f(r) at any point r. The requirement that f(r') decreases faster enough when |r'|→∞ is to make sure that the integration over f(r')/(r-r') vanishes along the large circle at infinity. This requirement can be loosen: So long as f(r') is bounded by a power function of finite order, namely |f(r')/r'^n|→ 0 as |r'|→∞ for some n∈ℤ_+, we can consider the following new function g(r_1,⋯,r_n;r'): g(r_1,⋯;r_n;r')≡f(r')(r'-r_1)⋯ (r'-r_n) , where r_1⋯ r_n are n arbitrarily chosen points. Then it is clear that g(r_1,⋯,r_n;r') decreases fast enough at infinity. 
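As a sanity check of the unsubtracted relation f(r) = ∫ dr'/(2πi) Disc f(r')/(r'-r), the following short numerical sketch (our own toy example, not from the paper; numpy and scipy assumed) reconstructs f(r) = 1/√(1+r), which is analytic apart from a cut on r' ∈ (-∞, -1] and decays at infinity, from its discontinuity Disc f(r') = -2i/√(-r'-1) across that cut.

import numpy as np
from scipy.integrate import quad

def f(r):
    return 1.0 / np.sqrt(1.0 + r)

def disc_f(x):                      # discontinuity across the cut, x < -1
    return -2j / np.sqrt(-x - 1.0)

def dispersive_f(r):
    # parameterize the cut as r' = -1 - t with t in (0, infinity)
    integrand_re = lambda t: (disc_f(-1.0 - t) / (-1.0 - t - r)).real
    integrand_im = lambda t: (disc_f(-1.0 - t) / (-1.0 - t - r)).imag
    re, _ = quad(integrand_re, 0.0, np.inf)
    im, _ = quad(integrand_im, 0.0, np.inf)
    return (re + 1j * im) / (2j * np.pi)

for r in [0.5, 2.0, 0.3 + 0.4j]:
    print(r, f(r), dispersive_f(r))

The reconstructed values agree with the direct evaluation to the quadrature accuracy; the same routine applies verbatim to the subtracted function g introduced above when f itself does not decay at infinity.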
So, we can use g in place of f to do dispersion integral, at the expense that we need the values of f(r') at n discrete points r'=r_1,⋯, r_n. This way of dealing with large-circle divergence is called subtraction, and the number n is called the order of the subtraction. It is worth mentioning that the study of dispersion relations has a long history in physics, with the Kramers-Krönig relation in classical electrodynamics as a notable early example <cit.>. In the S-matrix bootstrap program for relativistic field theories, the dispersion relations played a central role <cit.>. In these examples, the desired analytical property of the scattering amplitude is closely related to causality <cit.>. At the perturbative level, the analytical properties can also be diagnosed by methods such as Landau analysis <cit.>. At a fixed order in perturbation theory, the dispersion relation relates loop amplitudes with tree amplitudes, and in well-situated cases, it allows one to reconstruct loop amplitudes from simpler tree amplitudes. More remarkably, one can exploit the dispersion relation beyond the perturbation theory <cit.>. This has been shown useful in the study of hadron physics, e.g., <cit.>. Also, one can use the dispersion relation to connect UV and IR parts of a theory and derive nontrivial positivity bounds for low-energy effective theories <cit.>. §.§ Analytical continuation by contour deformation To derive a dispersion relation for the seed integral in (<ref>), we need to understand its analytical property as a function of complex energies. Now we face an obvious problem: While the original seed integral is well defined for energies taking physical values, it is not for arbitrary complex energies. Therefore, we need to redefine the seed integral so that it also applies to complex energies. We want to do it without evaluating the full integral. To see how this works, we demonstrate our method with three toy examples, before considering the full seed integral in the next subsection. One-fold integral First, let us consider a complex function I(z) for z∈ℂ defined by the following integral: I(z)≡∫_0^∞ w e^ zw, where the integral contour is chosen to be the positive real axis. The integral is convergent when z≠ 0 and Arg z∈(0,π), and is integrated to the following result: I(z)=-1 z. Clearly, this expression is analytic everywhere in z except when z=0. In particular, it is analytic for Arg z∉ (0,π), where the original integral (<ref>) is no longer well defined. So the question is how we can modify the original integral so that it is well defined for arbitrary z≠ 0. The answer for this example is simple enough. Indeed, let us consider the following integral: Ĩ_(z)≡∫_0^z^-1e^∞ w e^ z w, where is a small positive real number. That is, the contour is deformed to approach w=∞ from the direction Arg w=-Arg z+. This integral is convergent for any z≠ 0, and, in the mean time, we have: lim_→ 0^+Ĩ_(z)=I(z).    (Im z>0) Therefore, we can take Ĩ_→ 0^+(z) as the analytical continuation of the original integral I(z) to any z≠ 0. The lesson here is that, when z takes a value at which the original integral I(z) is not convergent (at infinity), we can deform the contour properly so that the integral is convergent again. One-fold integral with a branch cut Taking analytical continuation by deforming the integral contour may have obstructions when the integrand contains branch cuts. 
To see this, consider the following example: J(z)≡∫_0^∞ w e^ zw√(w), The situation is similar: The integral is well defined when Arg z∈(0,π), but the integrated result is analytic in a larger region: J(z)=√(π)e^3π/42z^3/2.     (0<Arg z<π) This time we can consider the following integral: J̃_(z)= ∫_0^z^-1e^∞ w e^ z w√(w). We still have J̃_→ 0^+(z)=J(z) when Im z>0. However, the new phenomenon here is that the integral possesses a branch point at z=0 due to the factor √(w). For definiteness, we can take the branch cut to be along the negative real axis w∈(-∞,0). Then we see that this branch cut implies the existence of a branch cut of the integral J̃_δ(z) along z∈ (-∞,0) when →0^+. Let us compute the discontinuity of this branch cut: Disc_zJ̃_(z) =  J̃_(ze^-)-J̃_(ze^+) = ∫_0^z^-1e^+e^∞ w e^ z w√(w)-∫_0^z^-1e^-e^∞ w e^ z w√(w) = -∫_0^-∞ u Disc_u[ e^-|z|u e^√(u)] = 2∫_0^∞ u e^+|z|u e^√(u) . Then we let → 0, and get: Disc_zJ(z) = 2lim_→ 0^+∫_0^∞ u e^+|z|u e^√(u) =-√(π)e^+π/4 |z|^3/2. Therefore, we have found a relation between the discontinuity of the integral J(z) and the integrand. The lesson from this example is that deforming the contour to approach the branch cut of the integrand from two different directions will lead to a discontinuity of the integral itself, and this contour deformation procedure provides us a way to relate the discontinuities of the integral and the integrand. Two-fold integral We will have to deal with time orderings when studying the seed integral. So, our third example will be a two-fold time-ordered integral: K(z_1,z_2)≡∫_0^∞ w_1 w_2 e^ z_1w_1+ z_2w_2θ(w_1-w_2). Again, when Im z_1>0 and Im z_2>0 hold at the same time, the integral is well defined, and can be directly integrated to: K(z_1,z_2)=-1z_1(z_1+z_2).     (Im z_1>0  and Im z_2>0) Now we want to analyze the above integral for more general choice for z_1 and z_2. In particular, we assume that z_2>0 stays in the positive real axis, while z_1∈ℂ can take arbitrary complex values. Then, we can first rewrite the original integral as an iterated integral: K(z_1,z_2)≡∫_0^∞ w_1∫_0^w_1 w_2 e^ z_1w_1+ z_2w_2 =1 z_2∫_0^∞ w_1 [e^ (z_1+z_2)w_1-e^ z_1w_1]. As shown above, the inner-layer integral is trivially convergent, and we only need to deal with the w_1-integral, which may be divergent. Now, we want to deform the contour of w_1-integral to make it convergent for any z_1≠ 0 and z_1+z_2≠ 0. As a consistent deformation of the original integral, we should use the same contour for both terms. Then, we need a judicious choice for the direction along which the contour goes to infinity. That is, we want to deform the integral contour in the following way: 1 z_2[∫_0^(z_1+z_2)^-1e^_1∞ w_1 e^ (z_1+z_2)w_1-∫_0^z_1^-1e^_2∞ w_1 e^ z_1w_1], such that two conditions hold at the same time: 1) 0<_1,_2<π so that both integrals converge; 2) _1-_2=Arg (z_1+z_2)-Arg z_2 mod 2π, so that both integrals share the same contour. Clearly, the two conditions can always be satisfied simultaneously, except when Arg (z_1+z_2)-Arg z_1=π, in which case no contour deformation works. For z_2>0, this corresponds to z_1>-z_2. So, we conclude that, the above contour deformation always works well for any z_1≠ -z_2 and z_2>0 so long as z_1 is away from an interval on the negative real axis (-z_2,0). How to deal with this interval? The solution is to rewrite the original integral in a different way: K(z_1,z_2)=∫_0^∞ w_1 w_2 e^ z_1w_1+ z_2w_2[1-θ(w_2-w_1)]. 
Then, the first term is factorized and thus is trivial, and the second term is again a nested integral but with the role of w_1 and w_2 switched. Thus, all above analysis still applies to this nested integral, and the contour deformation trick applies for all z_1≠-z_2 except in the interval z_1∈(-∞,-z_2). So the lesson is that, when we try to take the analytical continuation of a nested integral by deforming the contour, the two cases of z_1<-z_2 and -z_2<z_1<0 should be treated separately. §.§ Vertex dispersion relation of the seed integral Now we come back to the seed integral (<ref>). We observe that the integrands of ℐ_𝖺𝖻^p_1p_2 contain exponential functions, power functions and Hankel functions. Moreover, the opposite-sign integrals ℐ_±∓^p_1p_2 are factorized, meaning that the integrands are of product form f(τ_1)g(τ_2). On the contrary, the same-sign integrals ℐ_±±^p_1p_2 are nested, due to the time-ordered factor θ(τ_1-τ_2) or θ(τ_2-τ_1). Although the seed integrals are much more complicated than the toy examples considered above, they share some common features. In particular, the integrand of a seed integral is regular along the integral path, so that any potential singular behavior must be from a diveregence in the early time limit. (The integral is always convergent in the late-time limit by our assumption of IR regularity.) Therefore, let us consider the asymptotic behavior of Hankel functions in the early time limit τ→-∞: H^(1)_(-kτ)∼C_1√(-kτ)e^- kτ, H^(2)_-(-kτ)∼C_2√(-kτ)e^+ kτ, where C_1 and C_2 are kτ-independent constants. So, we see that, although the integrand of the seed integral is complicated, its behavior at the early time limit is simple, and is controlled by exponential functions, much like the toy examples considered above. Then, the previous discussion shows that, if we allow {k_12,k_34,k_s} to be outside the physical region, the seed integral in its original form (<ref>) can not always be convergent. To make sense of the seed integral for arbitrary {k_12,k_34,k_s}, we need analytical continuation. Below, we carry out this analytical continuation on the complex plane of the vertex energy k_12 while k_34 and k_s are fixed within their physical regions. From this result, we will derive the vertex dispersion relation. Factorized integrals We start from the factorized integrals ℐ_±∓^p_1p_2 which are free of time orderings and thus simpler. Without loss of generality, we focus on ℐ^p_1p_2_+-, and the treatment for ℐ^p_1p_2_-+ is very similar. Below we rewrite ℐ^p_1p_2_+- in an explicitly factorized form: ℐ^p_1p_2_+-(r_1,r_2) =  π4e^-πν𝒰^p_1_+(k_12,k_s)𝒰^p_2_-(k_34,k_s). 𝒰^p_1_+(k_12,k_s)=  k_s^5/2+p_1∫_-∞^0τ_1 (-τ_1)^3/2+p_1 e^+ k_12τ_1H_-ν^(2)(-k_sτ_1), 𝒰^p_2_-(k_34,k_s)=  k_s^5/2+p_2∫_-∞^0τ_2 (-τ_2)^3/2+p_2e^- k_34τ_2H_ν^(1)(-k_sτ_2). To analyze the behavior of ℐ^p_1p_2_+- on the complex k_12 plane, it suffices to consider 𝒰^p_1_+(k_12,k_s) alone. Of course, the integrals 𝒰^p_1_±(k_12,k_s) are simple enough to be done directly. However, we prefer to analyze their analytical structure without really evaluating them. Thus we will defer the direct integration until next section, where one can find the explicit results of 𝒰^p_1_±(k_12,k_s) in (<ref>). For convenience, let us fix k_s in the physical region k_s>0. (We can also fix k_34 in the physical region k_34>k_s although this is irrelevant for the analysis of 𝒰^p_1_+(k_12,k_s).) 
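The early-time asymptotics quoted in (<ref>), which control the convergence discussed next, are also easy to verify numerically. In the sketch below (our own; mpmath assumed), we use the standard large-argument expansion H^{(1)}_{iν}(x) ≈ √(2/(πx)) e^{i(x - iνπ/2 - π/4)}, which identifies the constants as C_1 = √(2/π) e^{νπ/2 - iπ/4} and C_2 = C_1^*; the printed differences fall off roughly like 1/x.

from mpmath import mp, mpf, pi, exp, sqrt, hankel1, hankel2

mp.dps = 30
nu = mpf(2)                                   # arbitrary test value of nu
C1 = sqrt(2 / pi) * exp(nu * pi / 2 - 1j * pi / 4)

for x in [mpf(10), mpf(100), mpf(1000)]:
    # strip off the oscillation and the 1/sqrt(x) falloff, compare with C_1, C_2
    approx1 = hankel1(1j * nu, x) * sqrt(x) * exp(-1j * x)
    approx2 = hankel2(-1j * nu, x) * sqrt(x) * exp(+1j * x)
    print(x, abs(approx1 - C1), abs(approx2 - C1.conjugate()))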
From (<ref>), we know that, at the early time limit, the integrand of 𝒰^p_1_+(k_12,k_s) is controlled by the exponential factor: e^(k_12+k_s)τ_1 . Therefore, for fixed integral contour τ_1∈(-∞,0), the phase of k_12+k_s controls the convergence of 𝒰^p_1_+(k_12,k_s) when τ_1→-∞. For example, if Im[k_12+k_s]>0, the integral in (<ref>) will diverge, although the function 𝒰^p_1_+(k_12,k_s) can actually be analytically continued to this region. In order to make this analytical continuation, we improve the definition of 𝒰_+^p_1 in (<ref>) by deforming the integration contour of 𝒰^p_1_+(k_12,k_s) in the following way, similar to what we did for the first two toy examples before: 𝒰_+^p_1(k_12,k_s)≡ k_s^5/2+p_1∫_-(1- 0^+)(k_12+k_s)^-1∞^0τ_1 (-τ_1)^3/2+p_1 e^ k_12τ_1H_-ν^(2)(-k_sτ_1). Clearly, this new definition agrees with (<ref>) for all k_12 and k_s in the interior of the physical region. On the other hand, the change of integration path is continuous in k_12, so is the integral 𝒰_+^p_1(k_12,k_s) for generic values of k_12. (One can see this point more explicitly by taking derivative of 𝒰_+^p_1(k_12,k_s) with respect to k_12.) However, like the second toy example (<ref>), the integrand of 𝒰_+^p_1(k_12,k_s) contains a branch point at τ_1=0 due to the Hankel function and the power factor. The branch point emanates a branch cut which we take to be on the positive real axis τ_1∈(0,+∞), and this branch cut can be an obstacle for contour deformation. As a result, when the integration contour is brought to the vicinity of the branch cut of the integrand, a discontinuity may occur and lead to a branch cut for 𝒰_+^p_1(k_12,k_s) with respect to k_12. Since the branch cut of the integrand in (<ref>) is on the positive real axis of τ_1, the integral contour of 𝒰_+^p_1(k_12,k_s) has a chance to approach this branch cut if (k_12+k_s) has a phase close to ±π. For k_s>0, this corresponds to k_12∈(-∞,-k_s). Thus we conclude that the only possible branch cut of 𝒰_+^p_1(k_12,k_s) on the complex k_12 plane for fixed k_s>0 is in the interval k_12∈ (-∞,-k_s), with the two branch points k_12=-∞ and k_12=-k_s. (In fact, the point k_12=-k_s is often divergent, since the integral 𝒰_+^p_1(k_12,k_s) at this point is typically divergent in the early time limit no matter how we deform the contour. This is called a partial energy singularity in the literature.) Apart from this integral as well as the two endpoints, the function 𝒰_+^p_1(k_12,k_s) is analytical in k_12 everywhere else. Now let us determine the discontinuity across the branch cut of 𝒰_+^p_1(k_12,k_s) at k_12∈(-∞, -k_s). Using the method identical to (<ref>), we have: Disc_k_12𝒰_+^p_1(k_12,k_s) = k_s^5/2+p_1∫_∞^0τ_1 Disc_τ_1[e^+ k_12τ_1(-τ_1)^3/2+p_1H^(2)_-(-k_sτ_1)].    (k_12<-k_s) Then, using the known discontinuity of the power function and the Hankel function as given in (<ref>) and (<ref>), we get: Disc_τ_1[e^+ k_12τ_1(-τ_1)^3/2+p_1H^(2)_-(-k_sτ_1)] = 2cosh(π)(-1)^p_1e^+ k_12τ_1τ_1^3/2+p_1H^(2)_-(k_sτ_1) θ(τ_1) . Therefore, Disc_k_12𝒰_+^p_1(k_12,k_s)=  2cosh(π)(-1)^p_1 k_s^5/2+p_1∫_∞^0τ_1 e^+ k_12τ_1τ_1^3/2+p_1H^(2)_-(k_sτ_1) =  2cosh(π)(-1)^p_1+1 k_s^5/2+p_1∫_-∞^0τ_1 e^- k_12τ_1(-τ_1)^3/2+p_1H^(2)_-(-k_sτ_1) =  2cosh(π)(-1)^p_1+1𝒰_+^p_1(-k_12,k_s). Now it is trivial to put back all factors independent of k_12 in (<ref>), and get the discontinuity for ℐ_+-^p_1p_2. The discontinuity of the other factorized seed integral ℐ_-+^p_1p_2 can be analyzed in the same way and the result is very similar. 
So we summarize the results for both factorized seed integrals together: Disc_k_12ℐ_±∓^p_1p_2(k_12,k_34,k_s)=2cosh(π)(-1)^p_1+1ℐ_±∓^p_1p_2(-k_12,k_34,k_s)θ(-k_12-k_s). Nested integrals Now we move on to the nested seed integrals ℐ_±±^p_1p_2. We will focus on ℐ_++^p_1p_2 and the treatment of ℐ_–^p_1p_2 is similar. Substituting the same-sign type propagators (<ref>) in (<ref>), we get an explicit expression for the integral ℐ_++^p_1p_2: ℐ_++^p_1p_2(k_12,k_34,k_s) =-π4 e^-πνk_s^5+p_12∫_-∞^0τ_1τ_2 (-τ_1)^3/2+p_1(-τ_2)^3/2+p_2e^ k_12τ_1+ k_34τ_2 ×[H^(1)_(-k_sτ_1)H^(2)_-(-k_sτ_2)θ(τ_1-τ_2)+H^(2)_-(-k_sτ_1)H^(1)_(-k_sτ_2)θ(τ_2-τ_1)]. As before, we fix k_s and k_34 to be in the interior of the physical region, i.e., k_34>k_s>0, and analyze the integral ℐ_++^p_1p_2(k_12,k_34,k_s) on the complex k_12 plane, where we need to perform analytical continuation by contour deformation. The way to deform the contour has been indicated in the third toy example (<ref>). In particular, for fixed values of k_34>k_s>0 and for arbitrary real k_12, we need to consider separately two cases: k_12<-k_34 and -k_34<k_12<k_s, both in the unphysical region. In each case, we need to pick up a specific ordering for the two time variables. Let us first analyze the case of k_12<-k_34, for which we choose to rewrite the integral as: ℐ_++^p_1p_2=  ℐ_++,F,>^p_1p_2+ℐ_++,N,>^p_1p_2, ℐ_++,F,>^p_1p_2≡ -π4e^-πk_s^5+p_12∫_-∞^0τ_1τ_2 e^ k_12τ_1+ k_34τ_2  × (-τ_1)^3/2+p_1(-τ_2)^3/2+p_2H^(1)_(-k_sτ_1)H^(2)_-(-k_sτ_2), ℐ_++,N,>^p_1p_2≡ -π4e^-πk_s^5+p_12∫_-∞^0τ_1τ_2 e^ k_12τ_1+ k_34τ_2(-τ_1)^3/2+p_1(-τ_2)^3/2+p_2 ×[H^(2)_-(-k_sτ_1)H^(1)_(-k_sτ_2)-H^(1)_(-k_sτ_1)H^(2)_-(-k_sτ_2)]θ(τ_2-τ_1). The subscript > means that we are working with the condition |k_12|>|k_34|. This notation is in line with the one taken in <cit.>. The analysis for the factorized integral ℐ_++,F,>^p_1p_2 is identical to that for ℐ_±∓^p_1p_2, and we have: Disc_k_12ℐ_++,F,>^p_1p_2(k_12,k_34,k_s) =  2cosh(π)(-1)^p_1+1ℐ_++,F,>^p_1p_2(-k_12,k_34,k_s)θ(-k_12-k_34). On the other hand, given the asymptotic behavior of the Hankel functions (<ref>), the analysis for the iterated integral ℐ_++,N,>^p_1p_2 is in parallel with the one for our third toy example (<ref>). In particular, when τ_1,τ_2→-∞, the integrand of ℐ_++,N,>^p_1p_2 behaves, up to unimportant power functions (denoted as #), like: # e^ (k_12+k_s)τ_1e^ (k_34-k_s)τ_2 -# e^ (k_12-k_s)τ_1e^ (k_34+k_s)τ_2. Thus, after finishing the inner-layer integral over τ_2, we get four terms which behave in the τ_1→-∞ limit like (again, up to unimportant power functions and constant coefficients): e^ (k_12+k_34)τ_1, e^ (k_12+k_s)τ_1, e^ (k_12+k_34)τ_1 , e^ (k_12-k_s)τ_1 . Therefore, for fixed k_34>k_s>0, one can deform the integration contour on the τ_1 plane to make all above four terms convergent, if k_12 is away from the interval (-k_34,0). This is exactly the condition k_12<-k_34 that we imposed from the very beginning. Then, we see that the integral ℐ_++,N,>^p_1p_2 is analytic everywhere in k_12 when k_12 is away from the negative real axis. On the negative real axis, the interval (-k_34,0) is not covered by the current case, while the interval (-∞,-k_34) may contain a branch cut due to the potential discontinuities of the integrand of ℐ_++,N,>^p_1p_2. However, by a direct computation, we can show that ℐ_++,N,>^p_1p_2 is in fact free of branch cut even in (-∞,-k_34). 
Explicitly: Disc_k_12ℐ_++,N,>^p_1p_2(k_12,k_34,k_s) = -π4 e^-πνk_s^5+p_12lim_ϵ→0^+{∫_∞ e^ϵ^0τ_1∫_τ_1^0τ_2 e^(-k_12+ϵ)τ_1+ k_34τ_2(-τ_1)^3/2+p_1(-τ_2)^3/2+p_2 ×[H^(2)_-(-k_sτ_1)H^(1)_(-k_sτ_2)-H^(1)_(-k_sτ_1)H^(2)_-(-k_sτ_2)]-(→-)}. We can reparameterize the two time variables in (<ref>) so that the two nested integrals in the curly brackets can be combined: -π4e^-πk_s^5+p_12∫_∞^0τ_1∫_τ_1^0τ_2{ e^(-k_12+)τ_1^+e^ k_34τ_2^+(-τ_1^+)^3/2+p_1(-τ_2^+)^3/2+p_2 ×[H^(2)_-(-k_sτ_1^+)H^(1)_(-k_sτ_2^+)-H^(1)_(-k_sτ_1^+)H^(2)_-(-k_sτ_2^+)]-(→-)}, where τ_1,2^+≡τ_1,2e^. Thus, (<ref>) says that we can find the discontinuity of the nested integral ℐ_++,N,>^p_1p_2 by computing a “discontinuity” of its integrand. Then, using the known discontinuities of the Hankel and power functions on their branch cuts, collected in (<ref>) and (<ref>), it is straightforward to show that the integrand of (<ref>) actually vanishes. Thus we conclude that ℐ_++,N,>^p_1p_2 has no branch cut in k_12 when k_12∈(-∞,-k_34). The other case with -k_34<k_12<k_s always satisfies |k_12|<|k_34|, and therefore we separate the integral ℐ_++^p_1p_2 in a different way: ℐ_++^p_1p_2 =  ℐ_++,F,<^p_1p_2+ℐ_++,N,<^p_1p_2, ℐ_++,F,<^p_1p_2≡ -π4e^-πk_s^5+p_12∫_-∞^0τ_1τ_2 e^ k_12τ_1e^ k_34τ_2 ×(-τ_1)^3/2+p_1(-τ_2)^3/2+p_2H^(2)_-(-k_sτ_1)H^(1)_(-k_sτ_2), ℐ_++,N,<^p_1p_2≡ -π4e^-πk_s^5+p_12∫_-∞^0τ_1τ_2 e^ k_12τ_1e^ k_34τ_2(-τ_1)^3/2+p_1(-τ_2)^3/2+p_2 ×[H^(1)_(-k_sτ_1)H^(2)_-(-k_sτ_2)-H^(2)_-(-k_sτ_1)H^(1)_(-k_sτ_2)]θ(τ_1-τ_2) . Like before, the subscript “<” here means that the way we split the integral works when |k_12|<|k_34|. Then, in complete parallel with the previous case, we can show that the the factorized part ℐ_++,F,<^p_1p_2 has a branch cut in the interval -k_34<k_12<-k_s, whose discontinuity is proportional to the factorized integral ℐ_++,F,<^p_1p_2 itself: Disc_k_12ℐ^p_1p_2_++,F,<(k_12,k_34,k_s) =2cosh(π)(-1)^p_1+1ℐ^p_1p_2_±±,F,<(-k_12,k_34,k_s)θ(k_12+k_34)θ(-k_12-k_s). On the other hand, the nested part ℐ_++,N,<^p_1p_2 does not have any branch cut in the region where it is defined (namely, |k_12|<|k_34|). Therefore, the discontinuity in this case is also fully from the factorized integral. Above we have present a detailed analysis for the integral ℐ_++^p_1p_2. The treatment for ℐ_–^p_1p_2 is completely the same. In particular, one can separate ℐ_–^p_1p_2 into ℐ_–,F,≷^p_1p_2 and ℐ_–,N,≷^p_1p_2 when |k_12|≷|k_34|. So, we can summarize our result for both same-sign seed integrals as follows: Disc_k_12ℐ_±±^p_1p_2(k_12,k_34,k_s) =2cosh(π)(-1)^p_1+1ℐ_±±,F,>^p_1p_2(-k_12,k_34,k_s)θ(-k_12-k_34) +2cosh(π)(-1)^p_1+1ℐ^p_1p_2_±±,F,<(-k_12,k_34,k_s)θ(k_12+k_34)θ(-k_12-k_s). Summary Now we have completed the analysis for the seed integral ℐ^p_1p_2_å on the complex k_12 plane, with k_34 and k_s fixed in the interior of their physical region k_34≥ k_s≥ 0. The discontinuities of all four SK branches are given in (<ref>) and (<ref>), respectively. When performing the dispersion integrals, we don't have to separate the seed integral according to their SK branches. Therefore, it is useful to sum over SK indices å,=± and to get the analytical structure for the full seed integral ℐ^p_1p_2 in (<ref>): Disc_k_12ℐ^p_1p_2(k_12,k_34,k_s)=2cosh(π)(-1)^p_1+1ℐ^p_1p_2_S(-k_12,k_34,k_s)θ(-k_12-k_s). 
In this expression, we have defined the signal part of the seed integral as: ℐ_S^p_1p_2(k_12,k_34,k_s) =  (ℐ_++,F,>^p_1p_2(k_12,k_34,k_s)+ℐ_–,F,>^p_1p_2(k_12,k_34,k_s))θ(|k_12|-|k_34|) +(ℐ_++,F,<^p_1p_2(k_34,k_12,k_s)+ℐ_–,F,<^p_1p_2(k_12,k_34,k_s))θ(|k_34|-|k_12|) +ℐ_+-^p_1p_2(k_12,k_34,k_s)+ℐ_-+^p_1p_2(k_12,k_34,k_s). Eqs. (<ref>) and (<ref>) are the main results of this section. They form the basis for the vertex dispersion relation, detailed in the next section. Note that the “signal” defined in (<ref>) is nothing but the sum of all factorized pieces in (<ref>) and (<ref>), and it is this signal piece that is responsible for all discontinuities of the seed integral on the k_12 plane. On the other hand, it does agree with the signal defined in previous works through the analytical properties in k_s/k_12 and k_s/k_34 as k_s→ 0 <cit.>. Thus the results (<ref>) and (<ref>) make precise our intuition that the CC signal corresponds to the nonanalyticity of the correlator. § BOOTSTRAPPING CORRELATORS WITH VERTEX DISPERSION RELATION In this section, we put the vertex dispersion relation to use, bootstrapping a few 3-point and 4-point correlators with massive exchanges. We begin with the simplest example, the 3-point tree correlator with a single massive exchange, in Sec. <ref>. The dispersive bootstrap yields a closed-form analytical expression for this example, identical to the one found with the improved bootstrap equation in <cit.>. Then, in Sec. <ref>, we bootstrap the 3-point correlator mediated by two massive fields via a bubble loop. We will show that, with the additional input of the spectral decomposition explored in a previous work <cit.>, the vertex dispersion relation can be generalized to loop processes, leading to analytical expressions much simpler than those found with the pure spectral method in <cit.>. In particular, our one-loop result here features a neat separation of the renormalization-dependent local part and the renormalization-independent nonlocal part, thus allowing for an unambiguous extraction of on-shell effects from the loop process. Finally, in Sec. <ref>, we bootstrap the 4-point correlator with a single massive exchange in the s-channel. This is a well-studied example, and we use it to demonstrate the use of the vertex dispersion relation for kinematics more complicated than the 3-point examples. §.§ Three-point single-exchange graph We begin with the simplest nontrivial example, namely a 3-point correlator with a single massive exchange. To be specific, we will consider the single massive exchange generated by the following interactions: Δℒ=λ_2 a^3φ'σ+1/2λ_3a^2φ'^2σ, where φ is a massless scalar field (typically the inflaton fluctuation in the context of CC physics), and σ is a real massive scalar field of mass m. For convenience, we take m>3/2 so that the mass parameter ν is a positive real, although the generalization to light mass 0<m<3/2 is straightforward. Also, λ_3 and λ_2 in (<ref>) are coupling constants, and the powers of the scale factor a=-1/τ are inserted to ensure scale invariance. Then, there is a single independent tree diagram, shown in Fig. <ref>, that contributes to the 3-point correlator φφφ at the leading order λ_2λ_3, together with the other two diagrams obtained by trivial momentum permutations. This process appears in a simple realization of the original quasi-single-field inflation with the dim-6 inflaton-spectator coupling (∂_μϕ)^2σ^2 <cit.>, and turns out to be the leading signal in this model, with a signal strength comparable to those of the double-massive-exchange and triple-massive-exchange graphs. 
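For orientation, the following small numerical helper (ours, not part of the original text) translates a given mass m of the σ field into the mass parameter ν, assuming the standard dS relation ν=√(m²/H²−9/4) for a principal-series scalar, with Hubble units H=1 as used above; it also prints the period of the logarithmic oscillation expected from complex powers of the form u^{±iν} in the signal. Python with mpmath is assumed for this and all later code sketches.

```python
# Minimal helper (our sketch): mass parameter of a principal-series scalar in dS,
# assuming the standard relation nu = sqrt(m^2/H^2 - 9/4) with H = 1, and the
# period of the log-oscillation coming from complex powers u^(+i nu), u^(-i nu).
from mpmath import mp, sqrt, pi

mp.dps = 15

def mass_parameter(m):
    """nu for a principal-series scalar of mass m (in Hubble units, m > 3/2)."""
    if m <= 1.5:
        raise ValueError("principal series requires m > 3/2 (in units of H)")
    return sqrt(m**2 - mp.mpf(9)/4)

for m in [1.6, 2.0, 3.0, 5.0]:
    nu = mass_parameter(m)
    print("m =", m, " nu =", nu, " oscillation period in ln(u):", 2*pi/nu)
```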
The time integral for the diagram in Fig. <ref> can be expressed in terms of the 4-point seed integral in (<ref>) as: φ_ k_1φ_ k_2φ_ k_3'= _2_3k_1k_2k_3^4[ℐ^0,-2(k_12,k_3,k_3)+2 perms], Therefore, technically, the 3-point function we are going to compute can be viewed as a limiting case of a 4-point correlator with k_4→ 0^+. Then, the problem reduces to the computation of ℐ^0,-2(k_12,k_3,k_3). For this particular integral, it turns out useful to use a new variable u≡2k_3/k_123. With this definition, the physical region 0≤ k_3≤ k_12 can be written as u∈[0,1]. It is known that this variable is useful for obtaining closed-form analytical expressions for many 3-point functions <cit.>. From the perspective of dispersion integral, the simplification can be observed from the fact that the partial-energy limit and the total-energy limit merge into a single limit u→ -∞, while the branch point k_12=∞ corresponds to u=0. Thus, the branch cut of the full 3-point function extends from 0 to -∞ on the entire negative real axis on the complex u plane, which makes the dispersion integral simpler. To avoid potential confusions, we use a new notation 𝒳(u) to denote the dimensionless three-point seed integral as a function of u=2k_3/k_123: 𝒳_å(u=2k_3/k_123)≡ℐ^0,-2_å(k_12,k_3,k_3). Then, we can rewrite the full 3-point seed integral as: 𝒳(u)≡∑_å,=±𝒳_å(u)=[𝒳_++,N(u)+𝒳_++,F(u)+𝒳_+-(u)]+c.c., where 𝒳_++,N and 𝒳_++,F are nested and factorized part of 𝒳_++, defined from the corresponding seed integrals as in (<ref>); See (<ref>) and (<ref>). Our dispersive bootstrap of 𝒳(u) comes naturally with two steps following the result in (<ref>): First, we will compute the signal part 𝒳(u) and its discontinuity across the branch cut. Second, we will perform the dispersion integral along the branch cut to get the full result. Below we carry out these two steps in turn. Computing the signal From the analysis of the previous section, we know that the discontinuity of 𝒳(u) on the negative real axis is fully from 𝒳_±±,F(u) and 𝒳_±∓(u), which can be combined together into the signal 𝒳_S(u): 𝒳_S(u) ≡  𝒳_++,F(u)+𝒳_+-(u)+c.c. =  ℐ^0,-2_++,F,>(k_12,k_3,k_3)+ℐ^0,-2_+-(k_12,k_3,k_3)+c.c. =  π4e^-πν[-𝒰_-^0(e^πk_12,k_3)𝒰_+^-2(k_3,k_3) +𝒰_+^0(k_12,k_3)𝒰_-^-2(k_3,k_3)]+c.c.. In writing this expression, we have removed the θ functions in the original expression (<ref>), because the relation |k_12|>|k_34| always holds true in the regions of our interest, including the region u∈(-∞,0) where the branch cut lies, and the physical region u∈(0,1). Also, we note that, in the final expression, we have a factor 𝒰^0_-(e^πk_12,k_3), in which the first argument k_12 should be analytically continued by a rotation of e^π. One can readily check that this way of analytical continuation brings 𝒰^p_1_-(k_12,k_3) to the corresponding τ_1-integral appears in ℐ_++,F,>^p_1p_2. As mentioned before, the single-layer integrals 𝒰 can be directly done, and the results are: 𝒰_±^p(K_1,K_2)=e^∓π4(3+2p)+π2π(K_2K_1)^5/2+p[(K_2K_1)^-𝐅^p_(K_2/K_1)+(→-)], where we have defined a function F_ν^p(z) for later convenience: 𝐅^p_(z)≡-π^1/22^3/2+pcsch(π)_2ℱ_1[54+p2-2,74+p2-2 1-|z^2]. Here _2ℱ_1 is the dressed version of Gauss's hypergeometric function, whose definition is collected in App. <ref>. It is straightforward to insert (<ref>) into (<ref>) to get an expression for the signal 𝒳_S. However, there are two small technical points worth mentioning. 
First, we want to write 𝒳_S as a function of u=2k_3/k_123, and this can be easily done by using the following identity of the hypergeometric function: F1[ a,b 2b|2r1+r]=(1+r)^aF1[a2,a+12 b+1/2|r^2]. Second, in (<ref>) we have a factor 𝒰_+^-2(k_3,k_3), which involves the cancellation of divergence between the two functions 𝐅^-2_±(k_3/k_3): 𝐅^-2_(k_3/k_3)+𝐅^-2_-(k_3/k_3)=(2π^3)^1/2sech(π). With these two points clarified, we obtain the signal part of 3-point correlator: 𝒳_S(u)= π(+sinh(π))4sinh(2π) _2ℱ_1[12-,52- 1-2|u]u^5/2-+(→-). Now, we can quote our previous result (<ref>) to get the discontinuity of 𝒳(u). After switching to u=2k_3/k_123 as the argument, the result reads: Disc_u𝒳(u) =-Disc_k_12ℐ^0,-2(k_12,k_3,k_3) =2cosh(π)ℐ^0,-2_S(-k_12,k_3,k_3)θ(-k_12-k_s) =2cosh(π)𝒳_S(uu-1)θ(-u), in which the minus sign in the first line follows from the relation u=2k_3/k_123. Then, from (<ref>), we find the discontinuity of the full 3-point correlator as: Disc_u𝒳(u)=π4(-csch(π)) _2ℱ_1[12-,52- 1-2|u](-u)^5/2-+(ν→-ν). Here we have used the following identity to simplify the expression: F1[ a,b c|u]=(1-u)^-aF1[ a,c-b c|uu-1]. Of course, the discontinuity in (<ref>) can be directly read from the analytical expression for 𝒳_S in (<ref>) by using the known analytical properties of hypergeometric function _2ℱ_1[⋯|u] and the power function u^5/2-ν. However, from (<ref>), we can check that the discontinuity from the hypergeometric functions get canceled. The net discontinuity (<ref>) is fully from the power factor u^5/2-ν, and this will be the key ingredient for our computation of dispersion integral in the next part. Dispersion integral With the discontinuity of the function 𝒳(u) known, we are ready to form a dispersion integral, which computes the full correlator from its (factorized) discontinuity. 𝒳(u)=u^32π∫_-∞^0 u'u'^ 3(u'-u)Disc_u'𝒳(u'). Here we have introduced a third-order subtraction (u'^3) to make sure that contour integral vanishes on the large circle. To understand this choice, we note that the large u limit of 𝒳(u) corresponds to the total-energy limit k_123→ 0. By power counting of time, one can see that the seed integral behaves like 𝒳(u)∼ u^2 as |u|→∞. Thus, a third-order subtraction suffices to make the dispersion integral well defined. (<ref>) is a well documented integral and can be directly done by . However, it is instructive to compute this integral more explicitly, by using the partial Mellin-Barnes (PMB) representation <cit.>. This method will be useful for more complicated integrals in the following sections where we do not have readily available integral formulae. Also, as we shall see, there is a nice correspondence between the pole structure of the Mellin integral and the analytical property of the final result. (See Fig. <ref>). To apply the PMB technique, we use the following MB representation for the hypergeometric function:[Generally, there is certain flexibility to deform the integral contour, so long as all poles coming from “Γ(+a s+⋯) are to the left of the contour, and those poles from “Γ(-b s+⋯)” are to the right. (Here a,b∈ℝ_+.) For convenience, here we just label the lower/upper bound of the integral as ∓∞.] _2ℱ_1[12∓ν,52∓ν 1∓2ν|u']=∫_-∞^∞ s2π (-u')^-sΓ[ s,12-s∓ν,52-s∓ν 1-s∓2ν]. We see that the MB representation effectively turns a complicated dependence on u' into a simple power function (-u')^-s. As a result, the dispersion integral over u' in (<ref>) can now be trivially carried out as: u^32π∫_-∞^0 u' (-u')^5/2∓ν-su'^ 3(u'-u)=[π(s±ν)]2u^5/2-s∓ν. 
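Before finishing the Mellin integral, it may be useful to see the subtracted dispersion step itself at work in a completely elementary setting. The following sketch is ours and not part of the original derivation: it reconstructs the toy function f(u)=u^α, with 2<Re α<3 chosen to mimic the signal-like power behavior encountered above, from its discontinuity across the negative real axis, using precisely a third-order subtraction and the convention Disc f(u')=f(u'+i0)−f(u'−i0); the quadrature relies on mpmath.

```python
# Toy third-order-subtracted dispersion integral (our illustration, not the
# correlator computation): for f(u) = u^alpha with 2 < Re(alpha) < 3, the
# discontinuity across u' < 0 is Disc f(u') = 2i sin(pi*alpha) (-u')^alpha, and
#   f(u) = u^3/(2*pi*i) * Integral_{-inf}^{0} du' Disc f(u') / (u'^3 (u' - u)).
from mpmath import mp, mpc, sin, pi, quad, inf

mp.dps = 25

def reconstruct(u, alpha):
    disc = lambda w: 2j*sin(pi*alpha)*(-w)**alpha        # discontinuity on the cut
    integrand = lambda w: disc(w)/(w**3*(w - u))          # third-order subtraction
    return u**3/(2*pi*1j)*quad(integrand, [-inf, 0])

for alpha in [mpc(2.4), mpc(2.5, -0.35)]:                 # real and complex powers
    for u in [mp.mpf('0.3'), mp.mpf('0.8')]:
        print(alpha, u, abs(u**alpha - reconstruct(u, alpha)))   # should be tiny
```

The same check also makes the choice of subtraction order transparent: with Re α between 2 and 3, a lower-order subtraction would spoil the convergence of the contour at infinity, while a higher one is unnecessary.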
It then remains to finish the Mellin integral over s: 𝒳(u)= ∫_-∞^∞ s2πu^5/2-s-ν[π(s+ν)]Γ[s,12-s-ν,52-s-ν 1-s-2ν] ×π8(1+ csch(π))+(→-). Since we have taken the variable u in the physical region, i.e., u∈ (0,1), we can perform the above Mellin integral by closing the contour from the left side. The integrand decreases fast enough when s goes to infinity in the left plane, so that the integral over the large semi-circle on the left plane vanishes, and we can finish the integral by collecting the residues of all poles to the left side of the original integration contour. From (<ref>), we see that there are two sets of left poles contributing to the final results, whose origins are highlighted in red and blue colors:[When computing integrals via PMB representation, if a Gamma function contributes poles, then all of its poles need to be collected. For example, here there are a set of poles from Γ[s], then we need to pick up the whole set of these poles, i.e., s=-n where n=0,1,2,⋯. The case where poles come from [π(s+)] is a little subtler: If we change the upper bound of the integral over u' from 0 to -ϵ where ϵ is a small positive real number, one can find we will get Γ[1/2+s+] instead of [π(s+)]. It is only in the limit ϵ→0 that the Gamma function Γ[1/2+s+] will meet another Gamma function Γ[1/2-s-] and give rise to [π(s+)]. This implies when considering poles from [π(s+)] we actually need to collect all poles from Γ[1/2+s+], while poles from Γ[1/2-s-] should be omitted. This analysis gives us another set of poles, i.e., s=-1/2-n- where n=0,1,2,⋯.] { s=-12-n-ν; s=-n.    (n=0,1,2,⋯) . We also show these poles in the right panel of Fig. <ref>. Clearly, from the factor u^5/2-s-ν in the integrand in (<ref>), we see that the poles s=-1/2-n-ν correspond to the background, whose residues sum to: 𝒳_B(u) =∑_n=0^∞(1+ csch(π))(-1)^n8u^3+nΓ[ 1+n,3+n,-1/2-n- 3/2+n-]+(→-) =-2u^31+4^2F2[1,1,3 3/2+,3/2-|u]. On the other hand, the poles at s=-n in the integrand of (<ref>) give rise to the signal: 𝒳_S(u) =∑_n=0^∞sech(π)(1+ csch(π))π8u^5/2+n-Γ[1/2+n-,5/2+n- 1+n,1+n-2]+(→-) =π(+sinh(π))4sinh(2π) _2ℱ_1[12-,52- 1-2|u]u^5/2-+(→-). Thus, the whole three-point correlator 𝒳(u) is neatly expressed as a sum of the signal and the background: 𝒳(u) =  {π(+sinh(π))4sinh(2π) _2ℱ_1[12-,52- 1-2|u]u^5/2-+(→-)}  -2u^31+4^2 F2[1,1,3 3/2+,3/2-|u]. This agrees with the results found previously using a different method <cit.>. To recapitulate our strategy, the PMB representation converts special functions into simple power functions, making the dispersion integrals easier to compute. Thereafter, the integration over Mellin variables can be directly computed via residue theorem. Therefore, the PMB representation provides a convenient way to calculate dispersion integrals analytically. For inflation correlators more complicated than the one considered here, the PMB representation remains useful, and will be shown below. §.§ Three-point one-loop bubble graph Although most of the discussions of this work focus on tree-level processes, the dispersion technique can also be applied to loop processes. In this subsection, we will explore a simple 1-loop diagram with dispersion relations, with the help of the technique of spectral decomposition <cit.>. Our example comes from the following interactions between the massless scalar φ and the principal scalar : Δ=_44a^2φ'^2σ^2+_32a^3φ'σ^2. Then, at _4_3, there is a unique diagram (up to trivial permutations) contributing to the 3-point correlator of φ with a bubble loop formed by ; See Fig. <ref>. 
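Since the rest of this subsection leans on the interplay between dispersion integrals and loop renormalization, a flat-space warm-up may help fix ideas. The sketch below is our own toy illustration (independent of the Minkowski comparison presented in the appendix): it takes the textbook massive bubble in Feynman-parameter form, with the mass set to one and overall couplings stripped off, and checks numerically that a once-subtracted dispersion integral over the two-particle cut reproduces it, the subtraction constant playing the role of the scheme-dependent local term discussed below.

```python
# Flat-space warm-up (our illustration): the subtracted bubble function
#   F(s) = - Integral_0^1 dx log(1 - x(1-x)*s - i*eps),   with m = 1 and F(0) = 0,
# is analytic except for a cut on s in [4, inf), where Im F(s+i0) = pi*sqrt(1-4/s).
# A once-subtracted dispersion integral over that cut recovers F(s) below threshold.
from mpmath import mp, mpc, quad, log, sqrt, pi, inf

mp.dps = 20

def F_direct(s):
    """Feynman-parameter form, evaluated slightly above the real s axis."""
    s = mpc(s, 1e-12)
    return -quad(lambda x: log(1 - x*(1 - x)*s), [0, 1])

def F_dispersive(s):
    """Once-subtracted dispersion: F(s) = (s/pi) * Int_4^inf ds' Im F(s')/(s'(s'-s))."""
    imF = lambda sp: pi*sqrt(1 - 4/sp)
    return s/pi*quad(lambda sp: imF(sp)/(sp*(sp - s)), [4, inf])

for s in [-5.0, -1.0, 2.0, 3.5]:              # spacelike and below-threshold points
    print(s, F_direct(s).real, F_dispersive(s))
```

Note that no regulator is ever introduced on the dispersive side: the would-be UV divergence of the bubble sits entirely in the subtraction constant, here fixed by the choice F(0)=0.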
Similar to the tree-level case, we can extract a dimensionless seed integral from the correlator: φ_ k_1φ_ k_2φ_ k_3'_1-loop=λ_3λ_4/(8k_1k_2k_3^4)[ 𝒥^0,-2(2k_3k_123)+2 perms]. Here 𝒥^p_1p_2 is the corresponding seed integral, defined as a function of the momentum ratio u=2k_3/k_123: 𝒥^p_1p_2(u)≡-1/2∑_𝖺,𝖻=±𝖺𝖻 k_3^5+p_12∫_-∞^0τ_1τ_2(-τ_1)^p_1(-τ_2)^p_2e^𝖺k_12τ_1+𝖻k_3τ_2𝐐^(ν)_𝖺𝖻(k_3;τ_1,τ_2). Here, 𝐐^(ν)_𝖺𝖻 denotes the 3-momentum loop integral: 𝐐^(ν)_𝖺𝖻(k_3;τ_1,τ_2)≡∫^3q(2π)^3D^(ν)_𝖺𝖻(q;τ_1,τ_2)D^(ν)_𝖺𝖻(| k_3- q|;τ_1,τ_2). Here we explicitly mark out the mass parameter ν of the propagators, as it is important in the following analysis. As explained in previous works <cit.>, the loop integral (<ref>) can be recast as a (continuous) linear superposition of massive propagators D^(ν')_𝖺𝖻 with different values of ν', weighted by a spectral function ρ^dS_ν(ν'): 𝐐^(ν)_𝖺𝖻(k_3;τ_1,τ_2)=∫_-∞-ϵ^+∞+ϵ''πρ^dS_ν(ν')D^(ν')_𝖺𝖻(k_3;τ_1,τ_2). With the assumption that both the time integrals in (<ref>) and the spectral integral in (<ref>) are convergent,[The convergence of the spectral integral (<ref>) requires a proper regularization procedure, such as dimensional regularization, to make the spectral function ρ^dS_ν(ν') finite in the first place. However, as we will see below, our treatment of the loop process is completely independent of the regularization, and we can safely stay in d=3 throughout the discussion.] we can switch the order of the two integrals and write the loop correlator 𝒥^p_1p_2_ν as a spectral integral over the tree correlator ℐ^p_1p_2_ν': 𝒥^p_1p_2_ν(u)=∫_-∞-ϵ^+∞+ϵ''2πρ^dS_ν(ν')ℐ^p_1p_2_ν'(k_12,k_3,k_3). Now we specialize to the case of (p_1,p_2)=(0,-2) as indicated in (<ref>), and form a dispersion integral for 𝒥^0,-2. Such a dispersion integral is possible, because all the 3-point tree-level correlators ℐ_ν'^0,-2 with different mass parameters ν' satisfy the same dispersion relation (<ref>). Therefore, their linear superposition in (<ref>) should satisfy a similar dispersion integral. However, we should expect that the subtraction order for the loop correlator differs from the tree-level one due to the different UV behavior. Therefore, let us write down the dispersion integral for the loop seed integral 𝒥^0,-2(u) on the u plane in the following way: 𝒥^0,-2_ν(u)=u^m2π∫_-∞^0 u'u'^m(u'-u)Disc_u'𝒥_S^0,-2(u'). Here we leave the subtraction order m arbitrary, and we will determine it later. As mentioned above, the loop seed integral 𝒥^p_1p_2 has been computed purely from spectral decomposition in <cit.>. However, the result in <cit.> shows a significant hierarchy in the degree of complication between the signal and the background: The signal part of the loop diagram is a discrete sum of tree signals weighted by simple coefficients, which can be understood as summing over all quasinormal modes of the loop. On the other hand, the background part is quite complicated: after regularization and renormalization, it contains a highly intricate special function coming from the renormalized spectral function. Below, we shall exploit this hierarchy, using the signal computed via the spectral decomposition to bootstrap the full correlator, thus bypassing any complications of regularization and renormalization. Thus, our starting point will be the signal part of the loop seed integral computed via the spectral decomposition <cit.>: 𝒥^0,-2_S(u)= u^4+2ν128πsin(-2πν)∑_n=0^∞(3+4ν+4n)(1+n)_1/2(1+2ν+n)_1/2(12+ν+n)_1/2(32+ν+n)_1/2 ×_2ℱ_1[2+2ν+2n,4+2ν+2n 4+4ν+4n|u]u^2n+(ν→-ν). We then need to get the discontinuity of the signal along the branch cut. 
For the signal (<ref>), its discontinuity along the branch cut u∈(-∞,0) is simply contributed by the u^±2 factor, which is similar to the tree-level case. The result is Disc_u𝒥^0,-2_S(u)= (-u)^4+2ν64π∑_n=0^∞(3+4ν+4n)(1+n)_1/2(1+2ν+n)_1/2(1/2+ν+n)_1/2(3/2+ν+n)_1/2 ×_2ℱ_1[2+2ν+2n,4+2ν+2n 4+4ν+4n|u]u^2nθ(-u)+(ν→-ν). Now we are ready to use (<ref>) and (<ref>) to compute the full correlator. However, at this point, we need to choose a subtraction (namely, to choose a value of m in (<ref>)) to make sure that the integral (<ref>) converges when u→0 and u→-∞. Examining the behavior of the integrand in these two limits, we see that the convergence as u→ 0 requires m≤4 while the convergence as u→ -∞ prefers a large m. So, m=4 is an optimal choice. Similar to the 3-point tree-level case, for every term in (<ref>), the dispersion integral can be done either by directly or by PMB representation. The final result for the loop seed integral 𝒥^0,-2(u) is again the sum of the signal 𝒥_S^0,-2(u) and the background 𝒥^0,-2_BG(u). The signal is already given in (<ref>), and the background is given by: 𝒥^0,-2_BG(u)= u^4128πsin(2πν)∑_n=0^∞(3+4n+4ν)(1+n)_1/2(1+n+2ν)_1/2(12+n+ν)_1/2(32+n+ν)_1/2 ×_3ℱ_2[1,2,4 1-2n-2ν,4+2n+2ν|u]+(ν→-ν). Here _3ℱ_2 is the dressed version of the generalized hypergeometric function, whose definition is collected in App. <ref>. Some readers may find it mysterious that no UV divergence ever shows up in our calculation. The reason is in fact clear: The UV divergence in this 1-loop correlator can be fully subtracted by a local counterterm ⊃δ_λ aφ'^3 with divergent coefficient _. At the correlator level, this counterterm produces a contact diagram ∝δ_λ u^3, and thus is analytical on the entire u plane. If we follow the standard loop calculation, we would find a divergent part proportional to u^3, plus a finite part with more complicated u dependence. Then we can use any convenient regularization method to remove the divergence, and use any proper renormalization scheme to determine the finite coefficient of the u^3 term. The arbitrariness of the coefficient of the u^3 term is an intrinsic uncertainty of the loop calculation. We think that this is an important lesson, especially for readers not very familiar with loop calculations, so let us reiterate it: When computing a superficially UV divergent loop correlator, the UV divergence is simply an artifact of our computation method and unphysical. Therefore, we may find a method so that UV divergences never appear and we never need to do UV regularization. Indeed, our dispersion method here is such an example where the regularization is never needed. On the contrary, when computing a 1-loop correlator with whatever methods, the result may contain a finite number of terms (in our case, the number is 1), whose kinematic dependence is totally fixed but coefficient undetermined. Indeed, the kinematic dependences of these terms are simply given by the corresponding tree graphs from the local counterterm in ordinary calculations, while the coefficients of these terms are never fixed by computation only; Instead, they should be determined by a renormalization condition, or, in a loose sense, by experimental data. Thus, to summarize: in a UV-divergent loop correlator, the UV divergence may be avoidable, but the renormalization ambiguity is not avoidable. For readers familiar with flat-space loop calculations with dimensional regularization, in App. 
<ref>, we provide a direct comparison between our dispersive calculation and the more conventional computation for a Minkowski 1-loop correlator, where one can see explicitly that the dispersion integral itself is free of any UV divergence or renormalization dependence, and that all renormalization-dependent information is fully encoded in the subtraction point. Coming back to our dispersion method, it is now clear that the renormalization ambiguity cannot be probed by the nonanalyticity of the correlator, and therefore we are not going to recover it from a dispersion integral. What we did recover in (<ref>), therefore, is a background free of any UV ambiguity, whose existence is demanded by the analyticity of the correlator. For this reason, we call it the irreducible background. The physical meaning of this irreducible background is clear: For the loop diagram in question, we can imagine integrating out all loop modes and getting infinitely many effective 3-point self-interaction vertices of the external mode, with an increasing number of derivatives. These derivative couplings contribute to the 3-point function in the form of a Taylor expansion in u, starting from u^3. Except for the renormalization-dependent term ∝ u^3, all terms starting from u^4 are UV free and unambiguously determined by the loop computation. They can still be treated as coming from local (albeit derivative) interactions, but the coefficients of these interactions are unambiguous predictions of the model. Our result for the background (<ref>) precisely recovers these terms. With the above remark on renormalization ambiguity in mind, we can summarize our result for the loop seed integral as: 𝒥^0,-2(u) =Cu^3-u^4128πsin(2πν)∑_n=0^∞(3+4ν+4n)(1+n)_1/2(1+2ν+n)_1/2(12+ν+n)_1/2(32+ν+n)_1/2 ×{_2ℱ_1[2+2ν+2n,4+2ν+2n 4+4ν+4n|u]u^2n+2ν-_3ℱ_2[1,2,4 1-2n-2ν,4+2n+2ν|u]} +(ν→-ν). Here the first term Cu^3 is a local term, whose coefficient C is to be determined by a renormalization scheme. The rest of the terms, including the signal and the irreducible background, are free from renormalization ambiguities. They are both organized as an infinite summation over quasinormal modes of the bubble loop. Although it is difficult to analytically compare our result (<ref>) with the known background obtained from the spectral decomposition in <cit.>, we find that their numerical results only differ by a u^3-term, which is exactly the undetermined local part Cu^3 in (<ref>). Given the very complicated form of the background in <cit.>, we consider this agreement a rather nontrivial check of both methods. §.§ Four-point single-exchange graph As our last application of the vertex dispersion relation, we return to the 4-point seed integral (<ref>). Once again, we work with a particular choice of the exponents (p_1,p_2)=(-2,-2). As explained in (<ref>), this corresponds to the case of the nonderivative coupling σϕ_c^2 between the conformal scalar ϕ_c in the external legs and a general principal massive scalar in the bulk line. Similar to the previous 3-point examples, we want to exploit the scale invariance of the process, which implies that the seed integral ℐ^-2,-2(k_12,k_34,k_s) depends on the three energy variables only through two independent momentum ratios. For the 4-point case, it is convenient to choose the following pair of ratios: r_1≡ k_s/k_12, r_2≡ k_s/k_34. The physical region 0≤ k_s≤min{k_12,k_34} then corresponds to r_1,2∈[0,1]. We then translate the analytical structure of the seed integral on the complex k_12 plane (Fig. 
<ref>) to the complex r_1 plane, keeping r_2∈(0,1) staying in the interior of the physical region. We show the result in Fig. <ref>, where the total-energy pole k_12=-k_34, the partial energy pole k_12=-k_s, and the signal branch point k_12→-∞ correspond to r_1=-r_2, r_1=-1, and r_1=0, respectively. Also, the branch cut is now entirely moved to the interval r_1∈(-1,0). To highlight that we are working with r_1 and r_2 as arguments of the seed integral, we use a new notation 𝒴(r_1,r_2) for the 4-point seed integral: 𝒴(r_1=k_s/k_12,r_2=k_s/k_34)≡ℐ^-2,-2(k_12,k_34,k_s). Then, from (<ref>), we can read the signal 𝒴_S(r_1,r_2)≡ℐ_S^-2,-2(k_12,k_34,k_s) of the seed integral, which is responsible for all the discontinuities: 𝒴_S(r_1,r_2) =  𝒴_S,>(r_1,r_2)θ(r_2-r_1)+𝒴_S,<(r_1,r_2)θ(r_1-r_2); 𝒴_S,>(r_1,r_2)=  ℐ^-2,-2_+-(k_12,k_34,k_s)+ℐ^-2,-2_++,F,>(k_12,k_34,k_s)+c.c. =  π4e^-πν[𝒰_+^-2(k_12,k_s)𝒰_-^-2(k_34,k_s) -𝒰_-^-2(e^πk_12,k_s)𝒰_+^-2(k_34,k_s) ]+c.c.; 𝒴_S,<(r_1,r_2)=  𝒴_S,>(r_2,r_1). Using the expressions for 𝒰^p_± in (<ref>), we can find the explicit result for the signal: 𝒴_S,>(r_1,r_2)=(1-sinh(π)2πr_1^1/2-𝐅_^-2(r_1)+(→-))(r_2^1/2-𝐅_^-2(r_2)+(→-)), where F_ν^p is defined in (<ref>). Then, with r_2∈(0,1) fixed in the interior of the physical region, the discontinuity of the seed function 𝒴(r_1,r_2) on the real axis of r_1 is itself a piecewise function of r_1: Disc_r_1𝒴(r_1,r_2) =2cosh(π)𝒴_S(-r_1,r_2)θ(-r_1)θ(r_1+1) =2cosh(π)[𝒴_S,>(-r_1,r_2)θ(r_1+r_2)θ(-r_1)+𝒴_S,<(-r_1,r_2)θ(r_1+1)θ(-r_1-r_2)]. This result is derived directly from (<ref>), although there is a sign difference in Disc_r_1𝒴(r_1,r_2) and Disc_k_12ℐ^-2,-2(k_12,k_34,k_s) due to the relation r_1=k_s/k_12. Since the seed integral 𝒴(r_1,r_2) is regular when |r_1|→∞ and r_2 fixed at a finite point, we can directly construct a dispersion integral for 𝒴(r_1,r_2) from (<ref>), with a first-order subtraction to ensure the vanishing integral along the large circle: 𝒴(r_1,r_2)=  r_12π∫_-1^0 rr(r-r_1)Disc_r𝒴(r,r_2) =  cosh(π)r_1π[∫_-r_2^0 r𝒴_S,>(-r,r_2)r(r-r_1)+∫_-1^-r_2 r𝒴_S,<(-r,r_2)r(r-r_1)], With the explicit expressions for the signal 𝒴_S in (<ref>) and (<ref>), the dispersion integral (<ref>) can be rewritten as: 𝒴(r_1,r_2)=[1-sinh(π)2π^2cosh(π)r_1𝐈_ν^(1)(r_1,r_2)+(→-) ][r_2^1/2-𝐅_^-2(r_2)+(→-)] +[r_1𝐈_ν^(2)(r_1,r_2)+(→-)][1-sinh(π)2π^2cosh(π)r_2^1/2-𝐅_^-2(r_2)+(→-)], where 𝐈_ν^(1) and 𝐈_ν^(2) are the two integrals that are derived from the vertex dispersion relation: 𝐈_ν^(1)(r_1,r_2)≡ ∫_-r_2^0 r(-r)^1/2-𝐅_^-2(-r)r(r-r_1), 𝐈_ν^(2)(r_1,r_2)≡ ∫_-1^-r_2 r(-r)^1/2-𝐅_^-2(-r)r(r-r_1). Unlike the 3-point case where the integrals extend from -∞ to 0, the integrals here are defined on finite intervals (-r_2,0) and (-1,-r_2), making the calculation more involved. Still, we can get their analytical results by using the PMB representation, although the actual computation is quite lengthy. We collect the main steps and the final results for these two integrals in App. <ref>. Once 𝐈_ν^(1) and 𝐈_ν^(2) are obtained, we get the full expression of the seed integral 𝒴(r_1,r_2), which can be further simplified and separated into the signal and the background, namely, 𝒴(r_1,r_2)=𝒴_S(r_1,r_2)+𝒴_BG(r_1,r_2). The simplification is spelled out in App. <ref>. Here, we only show the final result for the background 𝒴_BG(r_1,r_2), since the signal 𝒴_S(r_1,r_2) has been given in (<ref>): 𝒴_BG(r_1,r_2) =  𝒴_BG,>(r_1,r_2)θ(r_1-r_2)+𝒴_BG,>(r_2,r_1)θ(r_2-r_1); 𝒴_BG,>(r_1,r_2) =   2^ν(πν)√(2π)∑_n=0^∞(1+n-ν)_n-1/2n! 
_2ℱ_1[1,12-2n+ 32-2n+|-r_1r_2] ×_2ℱ_1[14+2,34+2 1+|r_2^2]r_1 (r_22)^2n +(ν→-ν). This expression appears different from the known results in the literature <cit.>, but a direct numerical check shows that they agree with each other. Therefore, we have successfully bootstrapped the 4-point correlation function with a single massive exchange using dispersion integrals. As we can see, for this particular 4-point example, performing the dispersion integral is by no means simpler than performing the nested time integral directly <cit.>. Rather, our calculation here serves as a proof of principle, and shows that the dispersion relations really work for correlators with more complicated kinematics than the 3-point single-exchange diagram. On the other hand, we can anticipate that the use of dispersion relations can bring significant simplifications to the 4-point correlators at the 1-loop level. We will pursue this 1-loop calculation in a future work. § ANALYTICAL STRUCTURE ON THE COMPLEX LINE-ENERGY PLANE In the previous two sections, we considered the analytical properties and dispersion relations of inflation correlators in the complex plane of a vertex energy. Starting from this section, we are going to study the analytical properties of inflation correlators from a different perspective, by treating a line energy as a complex variable. In general, inflation correlators with massive exchanges also develop branch cuts on the complex plane of line energies. Therefore, it is possible to develop a different type of dispersion relation on the line-energy plane, which we call line dispersion relations. As we shall show, branch cuts on the complex plane of a line energy can all be connected to the nonlocal signal of the inflation correlator. Therefore, a line dispersion relation allows us to compute the entire inflation correlator from its nonlocal signal alone. At first sight, it may appear trivial that the branch cuts on a line energy plane can be entirely attributed to the nonlocal signal. Indeed, recall that the nonlocal signal with respect to a line energy K_i refers to the part of the correlator which develops complex powers in K_i in the soft K_i limit: lim_K_i→ 0𝒢({E_ℓ},{K_j})∼ f({E_ℓ},{K_j})K_i^±ν+g({E_ℓ},{K_j}), where both f and g are analytic at K_i=0, i.e., they have ordinary Taylor expansions at K_i=0. Therefore, the nonlocal signal, by definition, is associated with the branch point at K_i=0 generated by the complex-power term f({E_ℓ},{K_j})K_i^±ν. However, things are less trivial than they appear: The functions f and g are analytic in K_i only within a finite domain around K_i=0 where their Taylor expansions converge. Outside the convergence domain, these two functions could well develop new nonanalytic behaviors, including branch cuts, on the K_i plane. These new nonanalyticities, in particular the ones in g, are not obviously related to the nonlocal signal. Therefore, it is quite remarkable that all branch cuts on the K_i plane, including those not generated by nonlocal signals, can actually be connected to the nonlocal signal alone. In this section, we will spell out the details of reducing the entire correlator to its nonlocal signal. In this sense, we may say that the nonlocal signal by itself knows all about the whole correlator. Recall from the previous two sections that a vertex dispersion integral relates an inflation correlator with its signal, both local and nonlocal. On the other hand, the line dispersion enables the recovery of the full correlator from the nonlocal signal alone. 
Therefore, we see that the line dispersion is more “economic” than vertex dispersion in that it can generate the full correlator from a smaller set of data. This may have a practical advantage for bootstrapping inflation correlators, since the nonlocal signal appears easier to identify and to compute than the local signal, especially at the loop level <cit.>. Therefore, we may expect that the line dispersion relation may be a useful tool to bootstrap some complicated loop correlators whose full analytical results remain out of reach with currently known methods. Defining the nonlocal signal Clearly, the nonlocal signal plays a central role in the line dispersion relation. By definition, the nonlocal signal is a term in the correlator that develops complex powers K^±ν in the soft line energy limit K→ 0, namely the f(E_ℓ,K_j)K_i^±ν term in (<ref>). Now let us identify this piece in the four-point seed integral ℐ^p_1p_2(k_12,k_34,k_s) in (<ref>) without really computing it. When we fix the two vertex energies k_12 and k_34 in their physical domain and let k_s→ 0, the seed integral ℐ^p_1p_2(k_12,k_34,k_s) is well convergent in the early time limit. Thus, its analytical behavior at k_s=0 is fully determined by the analytical property of the integrand in k_s, which in turn is determined by the bulk propagator D_å(k_s;τ_1,τ_2). Clearly, all four bulk propagators listed in (<ref>)-(<ref>) are constructed from a pair of Hankel functions H_+ν^(1) and H_-ν^(2). Thus, we can regroup these Hankel functions to separate all bulk propagators into a piece analytic at k_s=0 and a piece that contains complex powers in k_s. In practice, this can be neatly done by rewriting each Hankel function as a linear combination of Bessel function of the first kind J_±ν; See (<ref>). Then, the Hankel product in the propagator can be rewritten as: H ^(2)_-(-k_sτ_1)H^(1)_(-k_sτ_2) = csch(π)^2J_(-k_sτ_1)J_-(-k_sτ_2)+(1+(π))^2J_-(-k_sτ_1)J_(-k_sτ_2) -csch(π)(1+(π))[J_(-k_sτ_1)J_(-k_sτ_2)+J_-(-k_sτ_1)J_-(-k_sτ_2)]. In this expression, we have two types of terms: One involves a product of two J_±ν with opposite orders, namely J_±νJ_∓ν, as listed in the first line on the right hand side of (<ref>); The other type involves a product of two J_±ν with the same order, namely J_±νJ_±ν, listed in the last line of (<ref>). By expanding these Bessel J functions in the k_s→ 0 limit, it is straightforward to see that the opposite-order terms J_±νJ_∓ν are analytic as k_s→ 0, while the same-order terms J_±νJ_±ν behaves like k_s^±2ν as k_s→ 0. Thus, the same-order terms in the propagators precisely give rise to the nonlocal-signal part of the seed integral, while the opposite-order terms contain no nonlocal signal. Either by more careful inspection of the integral or by direct calculation, one can confirm that the opposite-order terms correspond to the local signal and the background, but we will not need this detailed separation between the local signal and the background in this section. Incidentally, from the boundary viewpoint, the same-order part can be viewed as the two-point correlator of a given conformal block with dimension Δ=32±ν, while the opposite-order part the correlator between a conformal block and its shadow. Based on the above observation, we now separate all four bulk propagators D_å(k;τ_1,τ_2) according to their analytic property at k→ 0 in the following way: D_å (k;τ_1,τ_2) =Σ(k;τ_1,τ_2)+Ω_å(k;τ_1,τ_2). 
Here the same-order propagators Σ(k;τ_1,τ_2) involve terms with same-order Bessel-J products, and thus are nonanalytic at k=0: Σ(k;τ_1,τ_2) ≡ -π(τ_1τ_2)^3/24sinh(πν)[J_(-kτ_1)J_(-kτ_2)+J_-(-kτ_1)J_-(-kτ_2)], while the four opposite-order propagators Ω_å(k;τ_1,τ_2) involve terms with opposite-order Bessel-J products, and thus are analytic at k=0: Ω_±∓(k;τ_1,τ_2) ≡  π(τ_1τ_2)^3/24sinh(πν)[((πν)-1)J_±(-kτ_1)J_∓(-kτ_2)-(ν→-ν)], Ω_±±(k;τ_1,τ_2) ≡  Ω_∓±(k;τ_1,τ_2)θ(τ_1-τ_2)+Ω_∓±(k;τ_1,τ_2)θ(τ_2-τ_1). We have deliberately removed the SK indices in the same-order propagators Σ(k;τ_1,τ_2), to highlight the fact that this propagator is actually independent of the SK contours: All four choices of the SK labels å,=± yield the same expression Σ(k;τ_1,τ_2). This is closely tied to the fact that the nonanalytic part of the propagator is real and symmetric in the two time variables τ_1 and τ_2. In particular, the symmetry under τ_1↔τ_2 renders the time-ordering θ functions ineffective in the same-sign propagators. However, let us immediately clarify that the same-order propagator Σ(k;τ_1,τ_2) is not the symmetrization of the original bulk propagator D_å(k;τ_1,τ_2) with respect to τ_1↔τ_2. As one can directly check, the opposite-order propagator Ω_±±(k;τ_1,τ_2) also contains a piece that is symmetric with respect to τ_1↔τ_2 but is nevertheless analytic at k_s=0. In fact, this additional piece corresponds to a part of the local signal that is symmetric in k_12↔ k_34. Now, we can put the above separated bulk propagator back into the seed integral, and separate the seed integral accordingly: ℐ_å^p_1p_2(k_12,k_34,k_s)=  𝒫_å^p_1p_2(k_12,k_34,k_s)+𝒬_å^p_1p_2(k_12,k_34,k_s), where 𝒫_å^p_1p_2(k_12,k_34,k_s) and 𝒬_å^p_1p_2(k_12,k_34,k_s) are respectively nonanalytic and analytic at k_s→ 0 when k_12 and k_34 stay in the interior of their physical domain, with the definitions: 𝒫_å^p_1p_2(k_12,k_34,k_s) ≡ -𝖺𝖻 k_s^5+p_12∫_-∞^0τ_1τ_2 (-τ_1)^p_1(-τ_2)^p_2e^𝖺k_12τ_1+𝖻k_34τ_2Σ(k_s;τ_1,τ_2), 𝒬_å^p_1p_2(k_12,k_34,k_s) ≡ -𝖺𝖻 k_s^5+p_12∫_-∞^0τ_1τ_2 (-τ_1)^p_1(-τ_2)^p_2e^𝖺k_12τ_1+𝖻k_34τ_2Ω_å(k_s;τ_1,τ_2). We note that, although the same-order propagator Σ(k;τ_1,τ_2) itself is independent of the SK indices, the nonanalytic integrals 𝒫_å^p_1p_2(k_12,k_34,k_s) still depend nontrivially on å,=± through the exponential factors e^å k_12τ_1+ k_34τ_2. To complete our list of new definitions, we can also define the integrals with SK indices summed: 𝒫^p_1p_2(k_12,k_34,k_s)≡∑_å,=±𝒫_å^p_1p_2(k_12,k_34,k_s); 𝒬^p_1p_2(k_12,k_34,k_s)≡∑_å,=±𝒬_å^p_1p_2(k_12,k_34,k_s). From the above discussion, we see that 𝒫^p_1p_2 is nothing but the nonlocal signal, while 𝒬^p_1p_2 is the sum of the local signal and the background: 𝒫^p_1p_2(k_12,k_34,k_s)=  ℐ_NS^p_1p_2(k_12,k_34,k_s), 𝒬^p_1p_2(k_12,k_34,k_s)=  ℐ_LS^p_1p_2(k_12,k_34,k_s)+ℐ_BG^p_1p_2(k_12,k_34,k_s). Same-order integral Now let us briefly look at the two integrals defined in (<ref>) and (<ref>). First, consider the same-order integral 𝒫^p_1p_2(k_12,k_34,k_s). Combining (<ref>), (<ref>), and (<ref>), we see that the nonlocal signal can be directly expressed as a sum of factorized time integrals: ℐ_NS^p_1p_2(k_12,k_34,k_s) =π4sinh(πν)∑_å,,=±å𝒱_^p_1(å k_12,k_s)𝒱_^p_2( k_34,k_s), where we have introduced two single-layer integrals 𝒱_±^p(E,K), defined by: 𝒱_±^p(E,K) ≡   K^5/2+p∫_-∞^0τ (-τ)^3/2+pe^+ EτJ_±ν(-Kτ). This integral can be directly done and the result is expressed in terms of the (dressed) Gauss's hypergeometric function: (See App. 
<ref> for our definition of the dressed hypergeometric functions.) 𝒱_±^p(E,K)=  2^3/2+p√(π)(K/E)^5/2+p±ν_2ℱ_1[1/2(5/2+p±ν),1/2(7/2+p±ν) 1±ν|K^2/E^2]. Thus, the computation of the nonlocal signal involves only single-layer integrals, which is a direct consequence of the nonlocal-signal cutting rule studied in the literature <cit.>. Parity of the opposite-order integral Next, let us turn to the opposite-order integrals 𝒬_å^p_1p_2. Unlike the nonlocal signal, these integrals involve genuine time orderings that cannot be removed, resulting in final expressions of higher “transcendental weight” <cit.>, and thus they are more difficult to compute. We are going to compute them using dispersion relations below. Here, without computing them directly, we point out that the integral 𝒬_å^p_1p_2 has a very useful property: It possesses a fixed parity under the parity transformation of the line energy: k_s→ -k_s. To see this point, we make use of a property of the Bessel J function, given in App. <ref>, which shows that J_±ν(e^πz)J_∓ν(e^πw)=J_±ν(z)J_∓ν(w). As a result, the opposite-order propagator Ω_å(k;τ_1,τ_2) is invariant under the sign flip of its energy: Ω_å(-k;τ_1,τ_2)=Ω_å(k;τ_1,τ_2). With this property and the definition of the opposite-order integral in (<ref>), it is straightforward to see that 𝒬_å^p_1p_2 has a fixed parity (-1)^1+p_12 under the k_s-parity transformation k_s→ -k_s: 𝒬_å^p_1p_2(k_12,k_34,-k_s)=(-1)^1+p_12𝒬_å^p_1p_2(k_12,k_34,k_s), where (-1)^1+p_12 comes entirely from the prefactor k_s^5+p_12 in our definition of 𝒬_å^p_1p_2. This property will be very useful for our following derivation of the line dispersion relation. Analyticity along the positive real axis After a brief analysis of the same-order and opposite-order integrals, let us now come back to the main goal of this section, namely, to diagnose the nonanalyticity of the seed integral ℐ^p_1p_2(k_12,k_34,k_s) on the complex k_s plane. The strategy is similar to what we adopted in Sec. <ref>, namely, to use the contour-deformation method. With this method, we will show that the seed integral ℐ^p_1p_2(k_12,k_34,k_s) is analytic everywhere on the complex k_s plane, except for a possible branch cut lying on the whole negative real axis. In the next part, we shall relate the discontinuity across this branch cut to that of the nonlocal signal ℐ_NS^p_1p_2. Similar to the behavior in the vertex energy plane, the seed integral is obviously analytic in k_s for Im k_s≠ 0, a direct consequence of the contour-deformation argument. More nontrivial is the following fact: θ(k_s)Disc_k_sℐ^p_1p_2_å(k_12,k_34,k_s)=0. That is, the seed integral is analytic in k_s for all k_s>0. This is quite remarkable because the region k_s>0 is not entirely physical: Physically allowed k_s satisfies 0≤ k_s≤min{k_12,k_34}. Thus, the statement (<ref>) in particular implies that the seed integrals ℐ^p_1p_2_å are regular at the boundaries of the physical region where k_s=k_12 or k_s=k_34. As is well known, the absence of singularities at these folded configurations is a consequence of choosing the Bunch-Davies initial state for all fluctuating modes in the bulk. Now let us prove (<ref>) rigorously using our contour-deformation method. Once again, the analytical behavior of seed integrals on the complex k_s plane is governed by the UV behavior of the integrands, namely the convergence of the integral as τ_1,2→-∞. So let us look at these UV regions for the opposite-sign integrals ℐ_±∓^p_1p_2 and same-sign integrals ℐ_±±^p_1p_2, respectively. 
First, we consider the opposite-sign integral: ℐ_+-^p_1p_2(k_12,k_34,k_s) =  π e^-πν4 k_s^5+p_12∫_-∞^0τ_1τ_2 (-τ_1)^3/2+p_1(-τ_2)^3/2+p_2  × e^ k_12τ_1- k_34τ_2H_-ν^(2)(-k_sτ_1)H_ν^(1)(-k_sτ_2). Clearly, the integrand is well defined in the entire integration region for all k_s>0. Furthermore, we only consider IR finite processes so that the integral is convergent as τ_1,2→ 0. Thus, any potential singularity of the integral on the complex k_s plane must come from the UV divergences when τ_1,2→ -∞. However, it is easy to see this never happens for k_s>0. In fact, using the asymptotic behavior of the Hankel functions (<ref>), we see that the integrand behaves like: e^+(k_12+k_s)τ_1e^-(k_34+k_s)τ_2, up to irrelevant power factors of τ_1 and τ_2. We see that, for physical values of k_12 and k_34, and for any k_s>0, the phases of these factors never change sign or hit zero. Thus, we conclude that, with the original choices of the integration contour (with proper prescriptions), the opposite-sign integral ℐ_+-^p_1p_2(k_12,k_34,k_s) is regular for any k_s>0. With completely the same argument, we can also show that the other opposite-sign integral ℐ_-+^p_1p_2(k_12,k_34,k_s) is regular for any k_s>0 as well. The two same-sign seed integrals can be analyzed similarly. Let us consider the all-plus integral: ℐ_++^p_1p_2(k_12,k_34,k_s) =-π e^-πν4 k_s^5+p_12∫_-∞^0τ_1τ_2 (-τ_1)^3/2+p_1(-τ_2)^3/2+p_2e^ k_12τ_1+ k_34τ_2  ×[H_ν^(1)(-k_sτ_1)H_-ν^(2)(-k_sτ_2)θ(τ_1-τ_2)+H_-ν^(2)(-k_sτ_1)H_ν^(1)(-k_sτ_2)θ(τ_2-τ_1)]. Parallel to the previous argument, the integral can develop singular behaviors on the complex k_s plane only through UV divergences. They occur when either the earlier time variable or both the time variables go to -∞. Taking the θ(τ_1-τ_2) part of (<ref>) as an example: -π e^-πν4 k_s^5+p_12∫_-∞^0τ_2∫_τ_2^0τ_1 (-τ_1)^3/2+p_1(-τ_2)^3/2+p_2e^ k_12τ_1+ k_34τ_2H_ν^(1)(-k_sτ_1)H_-ν^(2)(-k_sτ_2). In the UV limit τ_1,τ_2→-∞, the integrand behaves like e^+(k_12-k_s)τ_1e^+(k_34+k_s)τ_2 up to unimportant power factors. Then, after finishing the τ_1 integral, we get two terms, each behaves in the τ_2→-∞ limit as e^+(k_12+k_34)τ_2 and e^+(k_34+k_s)τ_2, respectively. Clearly, both phases stay positive for all k_s>0, so that the integral is well convergent with its original contour. One can similarly analyze the θ(τ_2-τ_1) term in (<ref>) and gets the same result. The analysis for ℐ_–^p_1p_2 is also the same. Thus we conclude that the same-sign seed integrals are also regular for all k_s>0. This completes the proof of (<ref>). Discontinuity in the line energy From (<ref>) we see that any possible branch cuts of the seed integrals ℐ_å^p_1p_2 must lie in the negative real axis. In other words, we have: Disc_k_sℐ^p_1p_2(k_12,k_34,k_s) = θ(-k_s)Disc_k_s[ℐ_NS^p_1p_2(k_12,k_34,k_s)+𝒬^p_1p_2(k_12,k_34,k_s)]. Here we have used (<ref>) with all SK indices summed, as well as (<ref>), namely, the same-order integral 𝒫^p_1p_2 with all SK indices summed is nothing but the nonlocal signal ℐ^p_1p_2_NS. Now we are ready to relates this discontinuity with that of the nonlocal signal. To this end, we make use of the above result (<ref>), written in terms of ℐ^p_1p_2_NS and 𝒬^p_1p_2: θ(k_s)Disc_k_s[ℐ_NS^p_1p_2(k_12,k_34,k_s)+𝒬^p_1p_2(k_12,k_34,k_s)]=0. Then, using the parity of the opposite-order integral 𝒬 in (<ref>), we get: θ(k_s)Disc_k_s[ℐ_NS^p_1p_2(k_12,k_34,k_s)-(-1)^1+p_12𝒬^p_1p_2(k_12,k_34,-k_s)]=0. 
This can be equivalently written as: θ(-k_s)Disc_k_s𝒬^p_1p_2(k_12,k_34,k_s) = (-1)^1+p_12θ(-k_s)Disc_k_sℐ_NS^p_1p_2(k_12,k_34,-k_s) . Substituting this relation back to (<ref>), we finally get: Disc_k_sℐ^p_1p_2(k_12,k_34,k_s) =  Disc_k_s[ℐ_NS^p_1p_2(k_12,k_34,k_s)-(-1)^p_12ℐ_NS^p_1p_2(k_12,k_34,-k_s)]θ(-k_s). That is, the discontinuity of the full seed integral can be completely related to that of the nonlocal signal alone. This is the central result of the current section, and forms the basis for the line energy dispersion relation, to be discussed below. § BOOTSTRAPPING CORRELATORS WITH LINE DISPERSION RELATION From the analytical structure of the seed integral on the complex k_s plane, it is straightforward to construct dispersion integrals, which relate the whole seed integral with its nonlocal signal alone. For clarity, let us still specialize to the case of p_1=p_2=-2. Once again, we use the fact that the dimensionless seed integral depends only on two independent momentum ratios, and we have freedom to choose them. A convenient choice is r_1=k_s/k_12 and x≡ k_34/k_12, so that the analytical structure of the seed integral in the line energy k_s is manifest on the complex r_1 plane. Again, to avoid potential confusions, we introduce a new variable for the seed integral with this particular choice of arguments: 𝒵(r_1=k_s/k_12,x=k_34/k_12)≡ℐ^-2,-2(k_12,k_34,k_s). The integral for the nonlocal signal 𝒵_NS(r_1,x) is likewise defined. Then, with this new notation, the discontinuity of the seed integral (<ref>) can be rewritten as: Disc_r_1𝒵(r_1,x) = Disc_r_1[ 𝒵_NS(r_1,x)-𝒵_NS(-r_1,x)] θ(-r_1), where x remains in the physical region x>0. We show this result in Fig. <ref>. From this result we learn a lesson: The singularity structure of the seed integral as a function of one momentum ratio, say r_1, is crucially dependent on how we choose and fix other ratios. This is made clear by comparing Fig. <ref>, where we fix r_2, and Fig. <ref>, where we fix x=r_1/r_2. There is nothing mysterious here: In the most general situation, the seed integral is to be treated as a function of multiple complex variables, whose singularity structure on a multidimensional complex space can be quite complicated. The dispersion relations considered in this work, on the other hand, are always formulated on a fixed complex dimension-1 submanifold, where we only see the projections of higher dimensional singularities. By fixing different ratios, we are working on different complex dimension-1 submanifolds, and it is not surprising that the projections of singularities on these submanifolds are different. With the discontinuity given in (<ref>), we can directly write down a dispersion integral for the seed integral 𝒵(r_1,x): 𝒵(r_1,x)=r_12π∫_-∞^0 rr(r-r_1)Disc_r_1[ 𝒵_NS(r_1,x)-𝒵_NS(-r_1,x)]. Here we have introduced a first-order subtraction. This choice follows from the asymptotic behavior of the seed integral 𝒵(r_1,x) in the limit |r_1|→∞. Note that r_1=∞ is the total energy limit where 𝒵(r_1,x) diverges at most logarithmically by power counting of time in the time integral. So, a first-order subtraction is sufficient to make the dispersion integral well defined. 
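As an elementary sanity check of such a first-order-subtracted dispersion integral (our toy example, not the seed integral itself), one can reconstruct a simple function with a finite branch cut from its discontinuity alone; the cut of 𝒵 extends over the whole negative axis, but the subtraction mechanics is identical, and mpmath is again assumed.

```python
# Toy first-order-subtracted dispersion integral (our illustration):
# f(r) = sqrt(r/(1+r)) is analytic except for a cut on r in (-1, 0), approaches a
# constant as |r| -> infinity, and obeys f(0) = 0, so one subtraction suffices:
#   f(r) = r/(2*pi*i) * Integral_{-1}^{0} dw Disc f(w) / (w (w - r)),
# with Disc f(w) = f(w+i0) - f(w-i0) = 2i sqrt(-w)/sqrt(1+w) on the cut.
from mpmath import mp, sqrt, pi, quad

mp.dps = 25

def f_dispersive(r):
    disc = lambda w: 2j*sqrt(-w)/sqrt(1 + w)
    return r/(2*pi*1j)*quad(lambda w: disc(w)/(w*(w - r)), [-1, 0])

for r in [mp.mpf('0.2'), mp.mpf('1.0'), mp.mpf('7.5')]:
    exact = sqrt(r/(1 + r))
    print(r, exact, f_dispersive(r).real)     # should agree to high precision
```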
The nonlocal signal of the seed integral has been presented in (<ref>), and here we rewrite it as a function of r_1 and x: 𝒵_NS(r_1=k_s/k_12,x=k_34/k_12) =π4sinh(πν)∑_å,,=±å𝒱_^-2(å k_12,k_s)𝒱_^-2( k_34,k_s), Using the result for 𝒱^-2_𝖼 in (<ref>), we get an explicit expression for 𝒵_NS as: 𝒵_NS(r_1,x)=1-sinh(π)2πx^-1/2+r_1^1-2𝐅_^-2(r_1/x)𝐅_^-2(r_1)+(→-), where 𝐅_^p is defined in (<ref>). Discontinuity of the nonlocal signal To evaluate the dispersion integral (<ref>), we need the discontinuity of the nonlocal signal 𝒵_NS. It is possible to get this discontinuity by analyzing the integral expression for 𝒱_±^p without really evaluating it, like we did for 𝒰_±^p before. However, here we choose to present the discontinuity directly by known analytical properties of power functions and Gauss's hypergeometric functions. Notice that the power function r_1^-2ν has a branch cut in the negative real axis, and that the Gauss's hypergeometric function in 𝐅_^-2 factors has a branch cut when its argument z∈(1,∞). Thus, all three r_1-dependent factors in (<ref>) make contributions to the discontinuity of the nonlocal signal. More explicitly: * r_1^±2 contributes a branch cut for r_1∈(-∞,0). * 𝐅_±^-2(r_1/x) contributes a branch cut for r_1∈(-∞,-x)∪ (x,∞). * 𝐅_±^-2(r_1) contributes a branch cut for r_1∈(-∞,-1)∪ (1,∞). Then, it is straightforward to see that the nonlocal signal 𝒵_NS(r_1,x) in (<ref>) has a branch cut for any real value of r_1 except when 0<r_1<min{x,1}, in which 𝒵_NS(r_1,x) is real. In addition, the discontinuity across the branch cut is itself discontinuous at r_1=± x and r_1=± 1. We show these branch cuts in Fig. <ref>, where we make manifest the contributions from different factors in (<ref>). Incidentally, when we compute a quantity like Disc_z [f(z)g(z)] where both f and g have discontinuities, there are multiple equivalent ways to express it in terms of the discontinuity of individual factor. For instance, when z>0, we have Disc_z [f(z)g(z)] = f(z^+)Disc_z[g(z)]+Disc_z[f(z)]g(z^-) =f(z^-)Disc_z[g(z)]+Disc_z[f(z)]g(z^+). Here and below, we introduce the shorthand notation z^±≡ z e^± with an infinitesimal positive real. Thus, when computing the discontinuity of products of functions, one can make various choices. To fix our choice, we infinitesimally displace some branch cuts into complex plane, as shown in Fig. <ref>. According to this prescription, for instance, when we compute the discontinuity across the cyan branch cut (from F_ν^-2(r_1) factor) in the negative real axis, we should evaluate the other two factors on the lower edges of the gray and green branch cut, namely, we take (r_1^+)^1-2ν and F_ν^-2(r_1^+/x).[Note that, for negative r_1, r_1^+ corresponds to the lower edge of the branch cut and r^- to the upper edge.] The structure of the branch cut suggests that we should break the line dispersion integral (<ref>) into three pieces, each corresponding to the branch cut from a given factor, and also to a wiggly line of a given color in Fig. <ref>. Below we work out the discontinuity across each of these branch cuts. First, the gray branch cut in Fig. <ref> is contributed by the power factor r_1^1-2ν in (<ref>). We define the discontinuity from this branch cut into the following function: D_^(1)(r_1,x) ≡  Disc_r_1[r_1^1-2ν]𝐅^-2_(r_1^-/x)𝐅^-2_(r_1^-) = -2sinh(2πν)(-r_1)^1-2ν𝐅^-2_(r_1^-/x)𝐅^-2_(r_1^-). Here the two hypergeometric factors are taking values from the upper edges of their branch cuts on r_1 plane, consistent with our displacement of the branch cuts in Fig. <ref>. 
Second, the green branch cut in Fig. <ref> is contributed by the hypergeometric factor F_±ν^-2(± r_1/x) in (<ref>). For negative r_1, the discontinuity across this branch cut is given by the following function: D_^(2)(r_1,x) ≡   (r_1^+)^1-2Disc_r_1[𝐅^-2_(r_1/x)]𝐅^-2_(r_1^-) = -e^-2πν(-r_1)^1-2νG_ν(r_1/x)F_ν^-2(r_1^-), where we have defined the discontinuity of F_ν^-2(z) along its branch cut to be G_ν. Using the known property of the Gauss's hypergeometric function in (<ref>), we can find an explicit expression for G_ν: 𝐆_(z)≡Disc_z𝐅_^-2(z)=-√(2π^3) csch (π) F1[1/4-2,34-2 1|1-z^2].    (z<-1) Third, the cyan branch cut in Fig. <ref> is contributed by the hypergeometric factor F_±ν^-2(± r_1) in (<ref>). For negative r_1, we define the discontinuity across this branch cut into the following function: D_^(3)(r_1,x) ≡   (r_1^+)^1-2𝐅^-2_(r_1^+/x)Disc_r_1[𝐅^-2_(r_1)] = -e^-2πν(-r_1)^1-2ν[𝐅^-2_(r_1^-/x)-G_ν(r_1/x)]G_ν(r_1) , where we have used the relation 𝐅^-2_(r_1^+/x)=𝐅^-2_(r_1^-/x)+𝐆_(r_1/x) to rewrite a hypergeometric factor in terms of its value across the branch cut. In summary, for negative values of r_1, the discontinuity across the branch cuts of the nonlocal signal 𝒵_NS(r_1,x) is given by: Disc_r_1𝒵_NS(r_1<0,x)=D_^(1)(r_1,x)θ(-r_1)+D_^(2)(r_1,x)θ(-r_1-x)+D_^(3)(r_1,x)θ(-r_1-1). On the other hand, as shown in Fig. <ref>, the green and cyan branch cuts also extend to positive real values of r_1. However, the discontinuities for these “positive” branch cuts are not independent, since the two “positive” branch cuts can be related to the corresponding two “negative” branch cuts by a 180^∘ rotation around the origin r_1=0 via the lower plane (in order not to cross the gray branch cut). Now, using the facts that the function 𝐅^-2_(r_1) is even in r_1, and that (e^-πr_1)^1-2ν=-e^-2πνr_1^1-2ν for r_1>0, we have: Disc_r_1𝒵_NS(r_1>0,x)=e^2πνD_^(2)(r_1,x)θ(r_1-x)+e^2πνD_^(3)(r_1,x)θ(r_1-1). Thus we have found the explicit expressions for all five branch cuts shown in Fig. <ref>. Line dispersion integral and the result Based on the previous analysis of the branch cut of the nonlocal signal, we are now ready to find the explicit expression for the line dispersion integral. Combining (<ref>) and (<ref>), we see that the dispersion integral (<ref>) now boils down to three integrals J_i^ν(i=1,2,3): 𝒵(r_1,x)=-+sinh(π)4π^2x^-1/2+r_1[𝐉_ν^(1)(r_1,x)+𝐉_ν^(2)(r_1,x)+𝐉_ν^(3)(r_1,x)]+(→-), where the three terms (J_ν^(1),J_ν^(2),J_ν^(3)) correspond to integrals around the gray, green, and cyan branch cuts in Fig. <ref>, respectively: 𝐉_ν^(1)(r_1,x)≡ ∫_-∞^0 rD_^(1)(r,x)r(r-r_1), 𝐉_ν^(2)(r_1,x)≡ ∫_-∞^-x r(1-e^2π)D_^(2)(r,x)r(r-r_1), 𝐉_ν^(3)(r_1,x)≡ ∫_-∞^-1 r(1-e^2π)D_^(3)(r,x)r(r-r_1). These three integrals can be computed analytically via PMB representation, although the details are quite lengthy. We collect them in App. <ref>, and present the results below. As mentioned many times before, the 4-point seed integral 𝒵(r_1,x) can be written as a sum of nonlocal signal (NS), the local signal (LS), and the background (BG): 𝒵(r_1,x)=𝒵_NS(r_1,x)+𝒵_LS(r_1,x)+𝒵_BG(r_1,x). It turns out that the nonlocal signal 𝒵_NS is contributed only by the integral around the gray branch cut, namely 𝐉_ν^(1). The result is simply identical to our input (<ref>), which we collect here for completeness: 𝒵_NS(r_1,x)=1-sinh(π)2πx^-1/2+r_1^1-2𝐅_^-2(r_1/x)𝐅_^-2(r_1)+(→-), The local signal 𝒵_LS receives contributions from the integrals around the gray and the green branch cuts, namely 𝐉_ν^(1) and 𝐉_ν^(2), respectively. 
The result is: 𝒵_LS(r_1,x)=  𝒵_LS,>(r_1,x)θ(1-|x|)+𝒵_LS,>(r_1/x,1/x)θ(|x|-1), 𝒵_LS,>(r_1,x)=  1-sinh(π)2πx^-1/2-r_1𝐅^-2_-(r_1/x)𝐅_^-2(r_1)+(→-). Finally, the background 𝒵_BG receives contributions from the integrals around all three branch cuts, namely 𝐉_ν^(i)(i=1,2,3), whose result can be simplified into the following form: 𝒵_BG(r_1,x)=  𝒵_BG,>(r_1,x)θ(1-|x|)+𝒵_BG,>(r_1/x,1/x)θ(|x|-1), 𝒵_BG,>(r_1,x)= ∑_n=0^∞8(-x)^n r_1/(1+2n)^2+4^2 F2[1,1/2+n/2,1+n/2 5/4+n/2-/2,5/4+n/2+/2|r_1^2]. This expression for 𝒵_BG has a different look from known results in <cit.>, but is identical to the latter. In fact, the background part is a two-variable hypergeometric function known as Kampé de Fériet function and allows for many different series representations <cit.>. § CONCLUSIONS AND OUTLOOKS As the dS counterparts of flat-space scattering amplitudes, inflation correlators possess distinct analytical structure. For general massive-exchange processes, branch cuts usually appear in the complex plane of appropriate kinematic variables, which connect physics in the UV and IR regions. In the IR regions, such branch cuts are closely related to logarithmic oscillations of the correlators in the physical regions, known as CC signals in the context of Cosmological Collider physics. The CC signals have received many studies in recent years. In particular, it has been shown that, although the computation of general inflation correlators is difficult, we can apply the cutting rule and the factorization theorem to extract CC signals in the squeezed limit <cit.>. In comparison, the corresponding branch cuts on the complex domain beyond the physical region are less understood and deserve more studies. In this work, we explore the analytical properties of massive inflation correlators as functions of two types of kinematic variables: the vertex energies and the line energies. For both types of energies of a tree-level correlator, we identified the total-energy and partial-energy poles, the signal branch point, together with the branch cuts connecting them. Based on this structure, we developed two distinct dispersion relations: a vertex dispersion relation which relates the correlator to its full signal, and a line dispersion relation which relates the correlator to the nonlocal signal alone. With these dispersion relations, we have successfully bootstrapped a few tree-level and 1-loop massive inflation correlators. At 1-loop level, our method is manifestly UV finite and free from any regularization procedure. This allows us to neatly single out the renormalization-independent part of the correlator, which is unambiguously determined by analyticity. Although there have been scattered studies on analytical properties of inflation correlators (and the related wavefunction coefficients), to our best knowledge, the dispersion relations have not been used to bootstrap the full massive inflation correlators. Our work filled this gap by providing a few proof-of-principle calculations. While the computation itself can often become lengthy compared to other existing methods for simple examples, it nevertheless shows the potential power of the dispersion techniques in bootstrapping more complicated diagrams. Thus, we consider this work a first step in carrying out a more extensive program of dispersive bootstrap. Naturally, many directions are open to further explorations, and we conclude this work by mentioning some of them. 
A natural first task is to chart all nonanalyticities of a given tree diagram, beyond the 4-point single exchange. This includes not only the locations of poles and branch cuts, but also the discontinuities across all branch cuts. With these data, we can imagine recursively bootstrapping more complicated diagrams from simple sub-diagrams, either analytically or numerically. Next, it would be very interesting to explore the potential of the dispersive bootstrap for loop diagrams. We have seen that dispersion relations could be advantageous in bootstrapping one-loop diagrams, including the absence of UV divergences, the simplified expressions, and the separation of renormalization-dependent and -independent parts. These advantages encourage us to consider more complicated loop processes. As a concrete first step, we may try to combine the dispersive and spectral methods and bootstrap 1-loop bubble processes with spinning exchanges and derivative couplings, and this will be explored in a follow-up work. Beyond the bubble topology, it is not immediately clear that techniques like spectral decomposition are still available. Nevertheless, it looks promising to us to implement the dispersive techniques numerically for loop processes. We plan to investigate this route in a future work. As we pointed out many times, the dispersive bootstrap is, at its core, an idea to reconstruct the whole diagram from knowledge of its sub-diagrams. In this regard, what we have considered in this work is the most straightforward realization, namely, exploiting the complex energy planes. It is also interesting to search for “dispersion relations” with not only complex energies, but also other complex parameters. In flat space, it has been very fruitful to consider scattering amplitudes on complex planes of mass, angular momentum, and even spacetime dimensions. We can imagine that the analytical structures in these complex parameters could also bring us new insights and new methods for inflation correlators. Also, it has been shown recently that the parity-odd part of a cosmological correlator (or a wavefunction coefficient) automatically factorizes under rather general conditions <cit.>. Thus, it would be very interesting to develop dispersion techniques for parity-violating theories. Last but not least, the dispersion relations in flat spacetime or in CFT are usually tied to nonperturbative properties of amplitudes, and are used to make nonperturbative statements about the unitarity and positivity of the theory. On the other hand, in this work, we only apply the dispersion techniques at the diagrammatic level. How are the two approaches related? From a pure diagrammatic analysis, is it possible to gain insights applicable to all orders in perturbation theory? Similar to flat-space situations, we believe that, at least for simple kinematics with full dS isometries, it is possible to make progress along these directions. We leave all these interesting topics for future studies. Acknowledgments We thank Xingang Chen, Enrico Pajer, Carlos Duaso Pueyo, Sébastien Renaux-Petel, Xi Tong, Lian-Tao Wang, Yi Wang, Denis Werth, Jiayi Wu, Hongyu Zhang, and Yuhang Zhu for useful discussions. This work is supported by NSFC under Grants No. 12275146 and No. 12247103, the National Key R&D Program of China (2021YFC2203100), and the Dushi Program of Tsinghua University. 
§ NOTATIONS In this appendix, for readers' convenience, we collect some frequently used variables in Table <ref>, together with the numbers of equations where they are defined or first appear. § USEFUL FUNCTIONS AND PROPERTIES In this appendix, we collect a few special functions and their properties used in the main text. These are standard material, and we quote them from <cit.>. Euler Gamma products and fractions In this work we use the following shorthand notation for the productions and fractions of Euler Γ functions: Γ[a_1,⋯,a_n]≡Γ(a_1)⋯Γ(a_n); Γ[ a_1,a_2,⋯,a_m b_1,b_2,⋯,b_n]≡Γ(a_1)Γ(a_2)⋯Γ(a_m)Γ(b_1)Γ(b_2)⋯Γ(b_n). With this notation, the Pochhammer symbol (a)_n is defined as (a)_n≡Γ[ a+n a]. Hypergeometric functions The (generalized) hypergeometric function is used in this work, whose standard form is defined by the following series when convergent, and by analytical continuation otherwise: _pF_q[ a_1,a_2,⋯,a_p b_1,b_2,⋯,b_q|z]≡∑_n=0^∞(a_1)_n(a_2)_n⋯(a_p)_n(b_1)_n(b_2)_n⋯(b_q)_nz^nn!. In particular, _2F_1 is known as the Gauss's or ordinary hypergeometric function. There are a few useful variations whose definitions are different from the standard form only in prefactors. First, the regularized hypergeometric function _pF_q is defined by: _pF_q[ a_1,a_2,⋯,a_p b_1,b_2,⋯,b_q|z]≡1Γ[b_1,b_2,⋯,b_q]_pF_q[ a_1,a_2,⋯,a_p b_1,b_2,⋯,b_q|z]. It is called regularized, because, when the argument z is not at the singular points, the regularized hypergeometric function is an entire function of all the parameters (a_1,⋯,a_p,b_1,⋯,b_q). Second, we frequently use the “dressed" hypergeometric function _pℱ_q in the main text, because it simplifies a lot of expressions: _pℱ_q[ a_1,a_2,⋯,a_p b_1,b_2,⋯,b_q|z]≡Γ[a_1,a_2,⋯,a_p]Γ[b_1,b_2,⋯,b_q]_pF_q[ a_1,a_2,⋯,a_p b_1,b_2,⋯,b_q|z]. It is useful to note that the Gauss's hypergeometric function _2F_1[⋯|z] in general has two branch points at z=1 and z=∞. It is our convention to choose the branch cut connecting these two points to lie in the interval z∈(1,∞) on the real axis. We define the value of _2F_1[⋯|z] when z>1 by its value on the lower edge of the branch cut. Then, the value on the upper edge is determined by the discontinuity across the branch cut. More explicitly: { _2F_1[ a,b c|z^+] = 2π e^π(a+b-c)Γ[ c a+b-c+1,c-a,c-b]_2F_1[ a,b a+b-c+1|1-z] +e^2π(a+b-c)_2F_1[ a,b c|z], _2F_1[ a,b c|z^-]=_2F_1[ a,b c|z]. . For power functions with non-integer powers, we can get a branch cut along the negative real axis by restricting the argument of variable in (-π,π]. (ze^π)^p=e^π pz^p, (ze^-π)^p=e^-π pz^p.    (z>0) Bessel functions In this work, we used the standard Bessel J function, especially its analytical property. Generically, a Bessel J function J_ν(z) has a branch cut on the negative real axis, connecting z=0 and z=-∞. The discontinuity across this branch cut is conveniently captured by the following identity: J_±ν(e^mπz)=e^∓ mνπJ_±ν(z).    (z>0) More frequently appeared in the main text are Hankel functions H_ν^(1) and H_ν^(2), which can be expressed in terms of Bessel J function as: H^(1)_(z)=(1+(π))J_(z)-csch(π)J_-(z), H^(2)_-(z)=-csch(π)J_(z)+(1+(π))J_-(z). Consequently, the Hankel functions H_ν^(j)(z)(j=1,2) possess branch cuts on the negative real axis of z, whose discontinuity can be found from the following identities: H^(1)_(ze^π)=-e^πH^(2)_(z), H^(1)_(ze^-π)= 2cosh(π)H^(1)_(z)+e^πH^(2)_(z), H^(2)_-(ze^π)=e^πH^(1)_-(z)+ 2cosh(π) H^(2)_-(z), H^(2)_-(ze^-π)=-e^πH^(1)_-(z).    
(z>0) § VERTEX DISPERSION INTEGRAL WITH PMB REPRESENTATION In this section we collect some details of computing the 4-point single-exchange correlator from the vertex dispersion integral (<ref>). As shown in Sec. <ref>, the vertex dispersion integral for the 4-point tree seed integral can be reduced to (<ref>), which in turns amount to the computation of two integrals I^(j)_ν (j=1,2), which we collect here again: 𝐈_^(1)(r_1,r_2)≡∫_-r_2^0 r(-r)^1/2-𝐅_^-2(-r)r(r-r_1), 𝐈_^(2)(r_1,r_2)≡∫_-1^-r_2 r(-r)^1/2-𝐅_^-2(-r)r(r-r_1). Computing 𝐈_^(1) and 𝐈_^(2) Now we compute the two integrals above with PMB representation. For 𝐈_^(1), we take the MB representation of 𝐅_^-2(-r), which is given by: 𝐅_^-2(-r)=∫_-∞^∞ s2π(-r)^-s𝔽_(s), where 𝔽_(s)≡ -π2^-1+s+e^-π s/2sinh(π)Γ[s2,12-s- 1-s2-]. then the original integral I_^(1) becomes: I_^(1)=∫_-r_2^0 r∫_-∞^∞ s2π(-r)^1/2-s-r(r-r_1)𝔽_(s). The integral over r can then be finished directly, which gives: ∫_-r_2^0 r(-r)^1/2-s-r(r-r_1)=r_2^1/2-s-r_1_2ℱ_1[12-s-,1 32-s-|-r_2r_1]. Now, the original integral I_^(1) has been recasted into an integral over a Mellin variable s: I_^(1)=∫_-∞^∞ s2πr_2^1/2-s-r_1_2ℱ_1[12-s-,1 32-s-|-r_2r_1]𝔽_(s). Again, we use residue theorem to compute the integral over s. For r_2∈(0,1), we need to close the contour from the left side, and get a set of poles coming from Γ[s/2] in 𝔽_(s): s=-2n.    (n=0,1,2,⋯) Summing up all residues we get:[Here we introduce the notation Res(𝐈,s) to represent the residue of integral 𝐈 of the pole at s, multiplying an extra factor 2π for simplicity. For example, if for integral 𝐈 there is only one pole s inside the contour, then the final result is simply 𝐈=±Res(𝐈,s), where the plus/minus sign depends on the direction of the contour.] I_^(1)=∑_n=0^∞Res(I_^(1),-2n), where Res(I_^(1),-2n)=-π2^-2n+r_2^1/2+2n-sinh(π)r_1Γ[12+2n- 1+n,1+n-]_2ℱ_1[1,12+2n- 32+2n-|-r_2r_1]. This completes the computation of I_^(1). Next we consider I_^(2) in (<ref>). Again we use the MB representation of 𝐅^-2_ (<ref>) and get: I_^(2)=∫_-1^-r_2 r∫_-∞^∞ s2π(-r)^1/2-s-r(r-r_1)𝔽_(s). So the integral over r can be done: ∫_-1^-r_2 r (-r)^1/2-s-r(r-r_1)= Γ[12+s+] ×(r_2^-1/2-s-_2F_1[1,1/2+s+ 3/2+s+|-r_1r_2]-_2 F_1[1,1/2+s+ 3/2+s+|-r_1]), where _2 F_1[⋯] is the regularized Gauss's hypergeometric function whose definition is collected in App. <ref>. Again the integral over s can be finished via residue theorem. Closing the contour from the left side, there are two sets of poles: one from Γ[s/2] in 𝔽_(s), another from Γ[1/2+s+] contributed by the integral over r (<ref>): { s=-2n, s=-12-n-.    (n=0,1,2,⋯) . Then we get I_^(2)=∑_n=0^∞Res(I_^(2),-2n)+∑_n=0^∞Res(I_^(2),-12-n-), where Res(I_^(2),-2n)= -π2^-2n+sinh(π)Γ[12+2n- 1+n,1+n-] ×(r_2^-1/2+2n-_2ℱ_1[1,12-2n+ 32-2n+|-r_1r_2]-_2ℱ_1[1,12-2n+ 32-2n+|-r_1]), and Res(I_^(2),-12-n-)= π2^-3/2-ne^π(-1-2n+2)/4sinh(π)Γ[-14-n2-2 54+n2-2] ×(r_2^n_2F_1[1,-n 1-n|-r_1r_2]-_2F_1[1,-n 1-n|-r_1]). Note that _2F_1[1,-n 1-n|x]=Γ[n](-x)^n,    (n=0,1,2,⋯) so residues from the second set of poles actually vanish: Res(I_^(2),-12-n-)=0. This completes the computation of I_^(2). Let us collect the explicit results for both integrals I_^(j)(j=1,2) here for future reference: I_^(1)(r_1,r_2)= ∑_n=0^∞-π2^-2n+r_2^1/2+2n-sinh(π)r_1Γ[12+2n- 1+n,1+n-]_2ℱ_1[1,12+2n- 32+2n-|-r_2r_1], I_^(2)(r_1,r_2)= ∑_n=0^∞-π2^-2n+sinh(π)Γ[12+2n- 1+n,1+n-] ×(r_2^-1/2+2n-_2ℱ_1[1,12-2n+ 32-2n+|-r_1r_2]-_2ℱ_1[1,12-2n+ 32-2n+|-r_1]). 
Simplifying the result In principle, we can just substitute the above results for I_^(j)(j=1,2) into (<ref>) to get an analytical expression for the tree seed integral, but this expression is obviously to be simplified. Now we describe how to massage this expression, using various functional identities, to get a reasonably simplified result. For definiteness, we consider the case 0<r_1<r_2<1 without loss of generality. Given this relation, the result can be separated into the signal and the background without introducing θ factors. Then, for I_^(1) shown in (<ref>), we use the following relation: F1[ a,b b+1|-x]=x^-bΓ[ a-b,1+b a]-b× x^-aa-bF1[ a,a-b 1+a-b|-1x], through which we get _2ℱ_1[1,12+2n- 32+2n-|-r_2r_1]=π sech(π)(r_1r_2)^1/2+2n--r_1r_2_2ℱ_1[1,12-2n+ 32-2n+|-r_1r_2]. Then I_^(1)(r_1,r_2)= -∑_n=0^∞π2^-2n+sinh(π)Γ[12+2n- 1+n,1+n-] ×(π sech(π)r_1^-1/2+2n--r_2^-1/2+2n-_2ℱ_1[1,12-2n+ 32-2n+|-r_1r_2]) =  Λ_1^(r_1,r_2)-Λ_2^(r_1,r_2), where Λ_1^(r_1,r_2)≡ ∑_n=0^∞-π^22^-2n+r_1^-1/2+2n-sinh(π)cosh(π)Γ[12+2n- 1+n,1+n-], Λ_2^(r_1,r_2)≡ ∑_n=0^∞-π2^-2n+r_2^-1/2+2n-sinh(π)Γ[12+2n- 1+n,1+n-]_2ℱ_1[1,12-2n+ 32-2n+|-r_1r_2]. Also, if we define Λ_3^(r_1,r_2)≡∑_n=0^∞-π2^-2n+sinh(π)Γ[12+2n- 1+n,1+n-]_2ℱ_1[1,12-2n+ 32-2n+|-r_1], then the second integral I_^(2) can be expressed as: I_^(2)(r_1,r_2)=Λ_2^(r_1,r_2)-Λ_3^(r_1,r_2). Now, substituting (<ref>) and (<ref>) into (<ref>), we get 𝒴(r_1,r_2)= [(1-sinh(π)2π^2cosh(π)r_1[Λ_1^(r_1,r_2)-Λ_2^(r_1,r_2)]+(→-))r_2^1/2-𝐅_^-2(r_2) +(r_1[Λ_2^(r_1,r_2)-Λ_3^(r_1,r_2)]+(→-))1-sinh(π)2π^2cosh(π)r_2^1/2-𝐅_^-2(r_2)] +(→-). Signal One can show that the terms associated with Λ_1^± give the signal part. In fact, the summation in (<ref>) can be done: Λ_1^(r_1,r_2) =∑_n=0^∞-π^22^-2n+r_1^-1/2+2n-sinh(π)cosh(π)Γ[12+2n- 1+n,1+n-] =π sech(π)r_1^-1/2-𝐅_^-2(r_1), As a result, the terms involving (<ref>) in (<ref>) gives: 𝒴_S,>(r_1,r_2)=(1-sinh(π)2πr_1^1/2-𝐅_^-2(r_1)+(→-))(r_2^1/2-𝐅_^-2(r_2)+(→-)). This is exactly the signal part of the 4-point tree seed integral. Background It follows that all other terms besides the Λ_1^ terms give rise to the background: 𝒴 _BG,>(r_1,r_2) = [(1-sinh(π)2π^2cosh(π)r_1[-Λ_2^(r_1,r_2)]+(→-))r_2^1/2-𝐅_^-2(r_2) +(r_1[Λ_2^(r_1,r_2)-Λ_3^(r_1,r_2)]+(→-))1-sinh(π)2π^2cosh(π)r_2^1/2-𝐅_^-2(r_2)] +(→-). This expression can be further simplified thanks to several cancellations. First, it is easy to see that all terms including Λ_2^±(r_1,r_2)𝐅^-2_±(r_2) cancel out. Second, it can be shown that all terms involving Λ_3^±(r_1,r_2) cancel out. As a result, the background of 4-point seed integral can be simplified into: 𝒴_BG,>(r_1,r_2)= π^2sinh(π)cosh(π)×r_1× r_2^1/2+Λ_2^(r_1,r_2)𝐅^-2_-(r_2)+(→-) = ∑_n=0^∞((π)π^1/22^1/2+2n-× r_1× r_2^2n×Γ[12+2n- 1+n,1+n-] ×_2ℱ_1[1,12-2n+ 32-2n+|-r_1r_2]_2ℱ_1[14+2,34+2 1+|r_2^2])+(→-). This is still not the expression found in previous works, but we have checked numerically that it agrees with known results, as mentioned in the main text. There are a large number of functional identities and resummation tricks with which one may prove the agreement analytically, but we shall not pursue this pure mathematical exercise in this work. Instead, in the rest of this appendix, we prove the cancellation of Λ_3^ terms. More precisely, we shall prove Λ_3^(r_1,r_2)+Λ_3^-(r_1,r_2)=0. To this end, we use the standard series representation for the dressed hypergeometric function in Λ_3^: _2ℱ_1[ a,b c|x]=∑_m=0^∞Γ[ a+m,b+m c+m,1+m]x^m. 
Then, the expression (<ref>) for Λ_3^ can be rewritten as: Λ_3^(r_1,r_2)=∑_n=0^∞∑_m=0^∞-π2^-2n+sinh(π)Γ[12+2n- 1+n,1+n-](-r_1)^m12-2n+m+. In this expression, the sum over n can be directly finished: Λ_3^(r_1,r_2)=∑_m=0^∞-π2^1+(-r_1)^m(1+2m+2)sinh(π)F2[14-2,34-2,-14-m2-/2 3/4-m2+2,1-|1]. Then, we use the following identity of F2 (Eq. (4.3.4) of <cit.>): F2[ a,b,c d,e|1]= Γ[1-a,d,e,c-b e-b,d-b,1+b-a,c]F2[ b,1+b-d,1+b-e 1+b-a,1+b-c|1] +Γ[1-a,d,e,b-c e-c,d-c,1+c-a,b]F2[ c,1+c-d,1+c-e 1+c-a,1+c-b|1]. Then Λ_3^ can be rewritten as: Λ_3^(r_1,r_2)= ∑_m=0^∞(-r_1)^msinh(π){2^-7/2-m((-1)^m+1-1)Γ[1+m,-14-m2+2,-14-m2-2] +2^3/2π^2(2+m)cosh(π)Γ[14+2,14-2]F2[1+m2,34+2,34-2 32,2+m2|1]}. From this expression, it is easy to see Λ_3^-(r_1,r_2)=-Λ_3^(r_1,r_2). § LINE DISPERSION INTEGRAL WITH PMB REPRESENTATION In this appendix, we spell out the details of computing the three integrals (<ref>)-(<ref>) arising from the line dispersion relation for the 4-point seed integral (<ref>). The strategy is again the PMB representation. Computing 𝐉_^(1) For the first integral 𝐉_^(1), we take the MB representation for D_ν^(1) appeared in the integrand, whose expression is given in (<ref>).[A fine point is that the arguments of the two hypergeometric factors in (<ref>) are taken values from the lower edge of the branch cut, where our MB representation for F_ν^-2 in (<ref>) is valid. On the contrary, if we want to evaluate the F_ν^-2 on the upper edge of the branch cut, such as F_ν^-2(z^+) when z>1, we need to begin with F_ν^-2(z^-), and add back the discontinuity across the branch cut using (<ref>).] Then we get: J_^(1)(r_1,x)=-2sinh(2π)∫_-∞^0 r∫_-∞^∞ s_12π s_22π(-r)^1-s_1-s_2-2x^s_2r(r-r_1)𝔽_(s_1)𝔽_(s_2), where 𝔽_ν(s) is given in (<ref>). Then, the r integral is again directly done, giving: J_^(1)(r_1,x)=-2πsinh(2π)∫_-∞^∞ s_12π s_22πr_1^-s_1-2(r_1/x)^-s_2sin(π(s_1+s_2+2))𝔽_(s_1)𝔽_(s_2). When applying the residue theorem to compute this integral, we meet a subtlety here due to the “mixed poles” such as those from 1/sin[π(s_1+s_2+2)], and we need to deal with these poles carefully. Below we spell out some details. First, consider the s_1-integral. Since we want to obtain a result for physical r_1∈ (0,1), the factor r_1^-s_1 says that we should close the contour from the left side on the s_1 plane. There are two sets of left poles in s_1, respectively from Γ(s_1/2) and 1/sin[π(s_1+s_2+2ν)]: s_1=-2n_1, s_1=-s_2-n_1-2ν.    (n_1=0,1,2,⋯) After evaluating the Mellin integrand in (<ref>) on these two sets of poles respectively, we are left with an s_2 integral. The analysis of the s_2 integral depends on which sets of s_1-poles we take. Suppose we take the first set of poles s_1=-2n_1. Then, we are left with a factor of (r_1/x)^-s_2=r_2^-s_2. For physical r_2∈(0,1), we should pick up left s_2-poles of the integrand. Examining the integrand in (<ref>), we see that there are two sets of left poles:[One may naively think that the second left poles s_2=2n_1-n_2+2ν with fixed n_1 and n_2∈ℕ are not all “left,” in the sense that some of these poles have positive real part when 2n_1-n_2>0. However, we emphasize that the criterion for a pole being left or right is not the sign of its real part, but rather the sign in front of the natural number n_2 that parameterize the set of poles. Therefore, in this case, all poles with a -n_2 term should be counted as left poles.] s_2=-2n_2, s_2=2n_1-n_2-2ν.    
(n_2=0,1,2,⋯) However, some of “poles” from the second set (e.g., the second line in (<ref>)) coincide with zeros from the factor 1/Γ(1-s_2/2-ν) in 𝔽_ν(s_2), which locate at s_2=2n+2-2ν with n=0,1,2,⋯. Thus, the poles in the second line of (<ref>) clash with these zeros if 2n_1-n_2 happens to be a positive even integer. As a result, among all poles in (<ref>), only the following ones make nonzero contributions to the final results: s_2=-2n_2,     (n_2=0,1,2,⋯) s_2=2n_1-(2n_2+1)-2ν, (n_2=0,1,2,⋯) s_2=2n_1-2n_2-2ν. (n_2=n_1,n_1+1,n_1+2,⋯) So much for the poles involving s_1=-2n_1. Now let us return to (<ref>) and consider the second set of s_1-poles, namely s_1=-s_2-n_1-2ν. After evaluating the Mellin integrand (<ref>) at these poles, we get a factor x^s_2. Since we are considering the region k_12>k_34, namely x=k_34/k_12<1, the factor x^s_2 suggests that we should take the right s_2-poles. There is only one set of right poles from the factor Γ[1/2-s_2-] in 𝔽_(s_2): s_2=12+n_2-.    (n_1,2=0,1,2,⋯) Naively, one may expect that there is another set of right poles coming from the factor Γ(s_1/2)=Γ[(-s_2-n_1-2)/2] in 𝔽_(s_1), since we are now evaluating s_1 at s_1=-s_2-n_1-2ν. However, this is overcounting since the pole from the Γ(s_1/2) factor has already been included in (<ref>). So, we should not include them again here. In summary, to compute J_^(1)(r_1,x) we need to pick up the following four sets of poles from the Mellin integrand in (<ref>): { s_1=-2n_1,s_2=-2n_2,     (n_1,n_2∈ℕ) s_1=-2n_1,s_2=2n_1-(2n_2+1)-2, (n_1,n_2∈ℕ) s_1=-2n_1,s_2=2n_1-2n_2-2, (n_1∈ℕ;n_2-n_1∈ℕ) s_1=-s_2-n_1-2,s_2=12+n_2-. (n_1,n_2∈ℕ) . The contributions from these poles in order are: Υ_1^(r_1,x)= ∑_n_1=0^∞∑_n_2=0^∞-×2^1-2n_1-2n_2+2νπ^3 csch(πν)^2r_1^2n_1+2n_2-2νx^-2n_2 ×Γ[12+2n_1-ν,12+2n_2-ν 1+n_1,1+n_2,1+n_1-ν,1+n_2-ν], Υ_2^(r_1,x)= ∑_n_1=0^∞∑_n_2=0^∞(-1)^3/2+n_1+n_2+ν2^-2n_2π^2(πν)r_1^1+2n_2x^-1+2n_1-2n_2-2 ×Γ[12+2n_1-ν,-12+n_1-n_2-ν,32-2n_1+2n_2+ν 1+n_1,32-n_1+n_2,1+n_1-ν], Υ_3^(r_1,x)= ∑_n_1=0^∞∑_n_2=n_1^∞(-1)^-n_1+n_22^1-2n_2π^2e^-πν(π)r_1^2n_2x^2n_1-2n_2-2 ×Γ[12+2n_1-ν,n_1-n_2-ν,12-2n_1+2n_2+ν 1+n_1,1-n_1+n_2,1+n_1-ν], Υ_4^(r_1,x)= ∑_n_1=0^∞∑_n_2=0^∞^2n_2-n_12^-n_1π^2e^-πν(πν)r_1^n_1x^1/2+n_2- ×Γ[ 1+n_1+n_2,-14-n_12-n_22-ν2,14+n_22-ν2 1+n_2,34-n_22-ν2,54+n_12+n_22-ν2]. Then we get the full result for the integral J_^(1) as: J_^(1)(r_1,x)=Υ_1^(r_1,x)+Υ_2^(r_1,x)+Υ_3^(r_1,x)+Υ_4^(r_1,x). Computing 𝐉_^(2) For the second integral J_^(2) in (<ref>), we again take the MB representation of its numerator D_ν^(2) in (<ref>). Then, the integral J_^(2) becomes: J_^(2)(r_1,x) = ∫_-∞^-x r(1-e^-2π)(-r)^1-2r(r-r_1)𝐆_(r/x)𝐅_^-2(r) =  (1-e^-2π)∫_-∞^-x r∫_-∞^∞ s_12π s_22π(-r)^1-s_1-s_2-2x^s_1r(r-r_1)𝔾_(s_1)𝔽_(s_2), where 𝔾_(s_1) is the MB representation of 𝐆_(r/x): 𝔾_(s_1)=-2^2s_1+(π)Γ[s_1,s_1+,12-2s_1-]. After finishing the integral over r, we get: J_^(2)(r_1,x)= (1-e^-2π)∫_-∞^∞ s_12π s_22πx^-s_2-2Γ[s_1+s_2+2] ×_2F_1[1,s_1+s_2+2 1+s_1+s_2+2|-r_1x]𝔾_(s_1)𝔽_(s_2). The analysis of poles are very similar to the previous case and here we only list the result. That is, there are two sets of poles contributing to the integral when all momentum ratios taking values from their physical region, together with the condition k_12>k_34: { s_1=12+n_1-,s_2=-2n_2, s_1=12+n_1-,s_2=-12-n_1-n_2-.    (n_1,2=0,1,2,⋯) . 
The contribution from the first set of poles is: Υ_5^(r_1,x)= ∑_n_1=0^∞∑_n_2=0^∞(-1)^1/2+n_12^1/2+n_1-2n_2+νπe^-π(π) ×Γ[12+2n_2-,14+n_12-2,14+n_12+2 1+n_1,1+n_2,1+n_2-] ×_2ℱ_1[1,1/2+n_1-2n_2+ 32+n_1-2n_2+|-r_1x]x^2n_2-2, and the contribution from the second set of poles is: Υ_6^(r_1,x)= ∑_n_1=0^∞∑_n_2=0^∞^3/2+3n_2+3n_12^-1-n_1πe^-32π(π)r_1^n_1x^1/2+n_2- ×Γ[ 1+n_1+n_2,14+n_22-2,14+n_22+2,-14-n_12-n_22-2 1+n_2,54+n_12+n_22-2]. Then, the second integral J_^(2) can be expressed as: J_^(2)(r_1,x)=Υ_5^(r_1,x)+Υ_6^(r_1,x). Computing 𝐉_^(3) Finally, we consider the third integral J_^(3) in (<ref>). We again take the MB representation of the numerator D_ν^(3) in (<ref>) and finish the r integral, which gives: J_^(3)(r_1,x)= (1-e^-2π)∫_-∞^∞ s_12π∫_-∞^∞ s_22πx^s_1Γ[s_1+s_2+2] ×_2F_1[1,s_1+s_2+2 1+s_1+s_2+2|-r_1]𝔾_(s_2)(𝔽_(s_1)-𝔾_(s_1)). After closing the contour from the right plane of s_1 then from the left plane of s_2, we get three sets of poles contributing residues: { s_1=12+n_1-,s_2=-2n_2, s_1=12+n_1-,s_2=-2n_2-2, s_1=12+n_1-,s_2=-12-n_1-n_2-.    (n_1,2=0,1,2,⋯) . The contributions from these poles are given respectively by: Υ_7^(r_1,x)= ∑_n_1=0^∞∑_n_2=0^∞(-1)^n_12^-1/2-2n_1+n_2+(π)( e^-π+(-1)^1+n_2) ×Γ[12+2n_1-,-n_1+,14+n_22-2,14+n_22+2 1+n_1,1+n_2] ×_2ℱ_1[1,12-2n_1+n_2+ 32-2n_1+n_2+|-r_1]x^1/2+n_2-, Υ_8^(r_1,x)= ∑_n_1=0^∞∑_n_2=0^∞(-1)^n_12^-1/2-2n_1+n_2-(π)( e^-π+(-1)^1+n_2) ×Γ[1/2+2n_1+,-n_1-,14+n_22-2,14+n_22+2 1+n_1,1+n_2] ×_2ℱ_1[1,1/2-2n_1+n_2- 3/2-2n_1+n_2-|-r_1]x^1/2+n_2-, Υ_9^(r_1,x)= ∑_n_1=0^∞∑_n_2=0^∞2^-2-n_1(π)( e^-π+(-1)^1+n_2)(-r_1)^n_1x^1/2+n_2- ×Γ[ 1+n_1+n_2,-14-n_12-n_22-2,-14-n_12-n_22+2 1+n_2] ×Γ[14+n_22-2,14+n_22+2]. Then, the result for the integral J_^(3) is: J_^(3)(r_1,x)=Υ_7^(r_1,x)+Υ_8^(r_1,x)+Υ_9^(r_1,x). Simplifying the result Now we have finished the computation of the three integrals J_ν^(j)(j=1,2,3) in (<ref>)-(<ref>). According to (<ref>), the final result of the seed integral is the sum of nine series Υ_ℓ^ν with ℓ=1,⋯,9. By looking at the dependence on various momentum ratios, it is straightforward to observe the following patterns: {Υ_1^ν}⊂nonlocal signal; {Υ_ℓ^ν;ℓ=2,3,5}⊂local signal; {Υ_ℓ^ν;ℓ=4,6,7,8,9}⊂background. Below, we will simply these 9 series according the above grouping. Nonlocal signal The nonlocal signal, which only comes from Υ^±_1, can be directly obtained by finishing the double sum in (<ref>), and the result is: Υ_1^(r_1,x)=2π r_1^-2 𝐅_^-2(r_1)𝐅_^-2(r_1/x), This gives exactly the nonlocal signal which is the starting point of the line dispersion integral: 𝒵_NS(r_1,x)=1-sinh(π)2πr_1^1-2x^-1/2+𝐅_^-2(r_1)𝐅_^-2(r_1/x)+(→-). Local signal Local signal comes from Υ_2^±, Υ_3^±, and Υ_5^±, whose explicit results are respectively (<ref>), (<ref>), and (<ref>). For Υ_3^, its double sum can be directly cpmputed, and the result is Υ_3^(r_1,x)=2π1+tanh(π)x^-2𝐅_^-2(r_1)𝐅^-2_-(r_1/x). The simplification of Υ_5^ is more complicated. We first expand the _2ℱ_1 factor in (<ref>) and get Υ_5^(r_1,x)= ∑_n_1=0^∞∑_n_2=0^∞∑_n_3=0^∞(-1)^1/2+n_1+n_32^1/2+n_1-2n_2+νπ12+n_1-2n_2+n_3+e^-π(π)r_1^n_3x^2n_2-n_3-2 ×Γ[12+2n_2-,14+n_12-2,14+n_12+2 1+n_1,1+n_2,1+n_2-]. Then the sum over n_1 can be finished: Υ_5^(r_1,x)= 2^-1/2-2n_2+π e^-π(π)(-r_1)^n_3x^2n_2-n_3-2Γ[12+2n_2- 1+n_2,1+n_2-] ×{_3ℱ_2[14-2,14+2,14-n_2+n_32+2 12,54-n_2+n_32+2|1] -2×_3ℱ_2[34-2,34+234-n_2+n_32+2 32,74-n_2+n_32+2|1]}. Thereafter, we use the formula (<ref>) again and get Υ_5^(r_1,x)= ∑_n_2=0^∞∑_n_3=0^∞π^32^1-n_3e^-πcsch(π)(-r_1)^n_3x^2n_2-n_3-2 ×Γ[12+2n_2-,12-2n_2+n_3+ 1+n_2,1+n_2-,1-n_2+n_32,1-n_2+n_32+]. 
The next key step is to devide this result into two parts, based on the parity of the summation index n_3. When n_3 is odd, we replace n_3 by 2n_3+1 and get Υ_5,odd^(r_1,x)= ∑_n_2=0^∞∑_n_3=0^∞-π^3 4^-n_3e^-πcsch(π)r_1^1+2n_3x^-1+2n_2-2n_3-2 ×Γ[12+2n_2-,32-2n_2+2n_3+ 1+n_2,1+n_2-,32-n_2+n_3,32-n_2+n_3+]. When n_3 is even, we replace n_3 by 2n_3 and get Υ_5,even^(r_1,x)= ∑_n_2=0^∞∑_n_3=n_2^∞π^3 2^1-2n_3e^-πcsch(π)r_1^2n_3x^2n_2-2n_3-2 ×Γ[12+n_2-,12-2n_2+2n_3+ 1+n_2,1+n_2-,1-n_2+n_3,1-n_2+n_3+]. Note that the sum over n_3 is not from 0 to ∞ as some terms vanish because of the factor Γ[1-n_2+n_3] in the denominator. This explains our split of Υ_5^ into two parts: Comparing (<ref>) with (<ref>), we find Υ_2^(r_1,x)+Υ_5,odd^(r_1,x)=0. Then we only need to compute Υ_5,even^, which can be directly done: Υ_5,even^(r_1,x)=2π1+(π)x^-2𝐅_^-2(r_1)𝐅^-2_-(r_1/x). Then we get the whole local signal: Υ_3^(r_1,x)+Υ_5,even^(r_1,x)=2π x^-2 𝐅_^-2(r_1)𝐅^-2_-(r_1/x). Consequently, 𝒵_LS(r_1,x)=1-sinh(π)2πr_1x^-1/2-𝐅_^-2(r_1)𝐅^-2_-(r_1/x)+(→-). Background The background comes from 5 terms: Υ_4^±, Υ_6^±, Υ_7^±, Υ_8^±, and Υ_9^±. First, for Υ_4^±, one can directly finish the sum over n_1 in (<ref>) and get: Υ_4^(r_1,x)= ∑_n=0^∞(-2)^nπ^5/2e^-π(π)x^1/2+n-Γ[14+n2-2 1+n,34-n2-2] ×{ r_1[π4(1+2n+2)]_3ℱ_2[1,1+n2,32+n2 74+n2-2,74+n2+2|r_1^2] -[π4(1+2n+2)]_3ℱ_2[1,12+n2,1+n2 54+n2-2,54+n2+2|r_1^2]}. We can also deal with Υ_6^ and Υ_9^ in a similar way. Adding these three terms together, we get Υ_4^(r_1,x)+Υ_6^(r_1,x)+Υ_9^(r_1,x) =(-1)^1+n16π^2csch(π)(1+2n)^2+4^2x^1/2+n-F2[1,12+n2,1+n2 54+n2-2,54+n2+2|r_1^2]. For the rest two terms Υ_7^ and Υ_8^, we apply the same procedure used when simplifying Υ_5^. Taking Υ_7^ as an example, we expand the _2ℱ_1 factor in (<ref>) and get Υ_7^(r_1,x)= ∑_n_1=0^∞∑_n_2=0^∞∑_n_3=0^∞(-1)^1+n_1+n_32^-1/2-2n_1+n_2+12-2n_1+n_2+n_3+((-1)^n_2- e^-π)(π) × r_1^n_3x^1/2+n_2-Γ[-n_1+,12+2n_1-14+n_22-2,14+n_22+2 1+n_1,1+n_2]. Then the sum over n_1 can be finished: Υ_7^(r_1,x)= ∑_n_2=0^∞∑_n_3=0^∞2^-2+n_2√(π)((-1)^1+n_2-e^-π)(π)csch(π)(-r_1)^n_3x^1/2+n_2- ×Γ[14+n_22-2,14+n_22+2 1+n_2]_3ℱ_2[14-2,34-2,-14-n_22-n_32-2 34-n_22-n_32-2,1-|1]. One can simplify Υ_8^ in the same way and get: Υ_8^(r_1,x)= ∑_n_2=0^∞∑_n_3=0^∞2^-2+n_2√(π)(e^-π+(-1)^n_2)(π)csch(π)(-r_1)^n_3x^1/2+n_2- ×Γ[14+n_22-2,14+n_22+2 1+n_2]_3ℱ_2[14+2,34+2,-14-n_22-n_32+2 34-n_22-n_32+2,1+|1]. After that, we use the transformation of F2 in (<ref>) again, and find: _3ℱ_2[14-2,34-2,-14-n_22-n_32-2 34-n_22-n_32-2,1-|1]-(→-)=0, which leads to a nontrivial result: Υ_7^(r_1,x)+Υ_8^(r_1,x)=0. Therefore, the background is totally from the three series in (<ref>), and therefore we get: 𝒵_BG,>(r_1,x)=∑_n=0^∞8(-x)^n r_1/(1+2n)^2+4^2F2[1,1/2+n/2,1+n/2 5/4+n/2-/2,5/4+n/2+/2|r_1^2]. This completes our computation of line dispersion integral for the 4-point tree seed integral. § DISPERSION INTEGRAL FOR A MINKOWSKI ONE-LOOP CORRELATOR One feature of the dispersive bootstrap is that the UV divergence in the ordinary computation of 1-loop correlators is totally absent. This may be unfamiliar to some readers, so we use a simple example to connect our dispersion method with the more familiar traditional calculation by dimensional regularization. Our example will be a 4-point 1-loop equal-time correlator of four scalar particles ϕ_i with masses m_i (i=1,2,3,4), mediated by a pair of massive scalar with mass m running in a bubble loop, shown in Fig. <ref>. We take the two vertices to be ϕ_1ϕ_2^2 and ϕ_3ϕ_4^2. Then, the diagram in Fig. 
<ref> is computed by the following integral: 𝒢 = - ∑_å,=±å∫_-∞^0 t_1 t_2 D_å^(m_1)(k_1;τ_1)D_å^(m_2)(k_2;τ_1)D_^(m_3)(k_3;τ_2)D_^(m_4)(k_4;τ_2) ×∫^d q(2π)^dD^(m)_å(q;τ_1,τ_2)D^(m)_å(| k_s- q|;τ_1,τ_2), where D_å^(m)(k;τ_1,τ_2) is the bulk scalar propagator with mass m, and is given by: D_±∓^(m)(k;t_1,t_2)=e^± E(t_1-t_2)2E, D_±±^(m)(k;t_1,t_2)=D_∓±^(m)(k;t_1,t_2)θ(τ_1-τ_2)+D_±∓^(m)(k;t_1,t_2)θ(τ_2-τ_1), where E≡√(k^2+m^2). A computation of this diagram with dimensional regularization has been done in App. F of <cit.>. Here we directly quote the result. By setting d=3- and let → 0, we have: 𝒢_DR = 1256π^2E_1E_2E_3E_4E_1234[2-_E+log4π+2 +2E_12-E_34∫_0^1ξ (E_34logE_12+E_minμ_R-E_12logE_34+E_minμ_R) ]+, where ξ is a Feynman parameter, E_min≡√(k_s^2+m^2/[ξ(1-ξ)]), E_i=√(k_i^2+m_i^2), and μ_R is the renormalization scale. Note that the divergent term 1/ is proportional to 1/(E_1E_2E_3E_4E_1234), and is what we would get by computing a contact diagram with ϕ_1ϕ_2ϕ_3ϕ_4 interaction. This is nothing but the local counterterm we should separate from the bare Lagrangian. The finite part of the counterterm is determined by a renormalization condition, and here we can choose the standard MS scheme, and remove the term proportional to 2/-γ_E+log4π in (<ref>) altogether. Then, we get: 𝒢_MS =  1256π^2E_1E_2E_3E_4E_1234ℐ(E_12,E_34,k_s), ℐ_m(E_12,E_34,k_s)≡   2+2E_12-E_34∫_0^1ξ (E_34logE_12+E_minμ_R-E_12logE_34+E_minμ_R). To make things even simpler, we set the loop mass m=0, and so that the integral over ξ can be done, leading to the following expression: ℐ_0(E_12,E_34,k_s) = 2+2E_12-E_34(E_34logE_12+k_sμ_R-E_12logE_34+k_sμ_R) . Now, let us consider the dispersion integral for ℐ_0(E_12,E_34,k_s) on the complex-E_12 plane, with E_34 and k_s fixed in the physical region. Clearly the only discontinuity comes from the first logarithmic factor: Disc_E_12ℐ_0(E_12)=4π E_34E_12-E_34,     E_12∈(-∞,-k_s). Then, we can use this discontinuity to form a dispersion integral. To determine the subtraction order, we note that the full correlator ℐ_0 approaches to a constant as E_12→∞: lim_E_12→∞ℐ_0(E_12)=2-2logE_34+k_sμ_R. Therefore our dispersion integral should have a first-order subtraction: ℐ_0(E_12)=ℐ_0(0)+E_122π∫_-∞^-k_s E Disc_Eℐ_0(E)E(E-E_12). This equality can be directly verified by finishing the integral. The lesson to be learned here is that the dispersion integral itself is independent of μ_R and is convergent. Similarly, had we started from the discontinuity of the regularized version (<ref>) to do the dispersion integral, we will not get any term ∝ 1/. So, the dispersion method here is free of UV regularization procedure; On the other hand, the renormalization-scale dependence cannot be removed. In the dispersion calculation, this dependence is introduced by the subtraction point ℐ_0(0). Clearly, our “ further modified”MS subtraction corresponds to choosing ℐ_0(0)=2 -2log(k_s/μ_R). utphys ] ]
http://arxiv.org/abs/2407.13463v1
20240718123626
End-To-End Clinical Trial Matching with Large Language Models
[ "Dyke Ferber", "Lars Hilgers", "Isabella C. Wiest", "Marie-Elisabeth Leßmann", "Jan Clusmann", "Peter Neidlinger", "Jiefu Zhu", "Georg Wölflein", "Jacqueline Lammert", "Maximilian Tschochohei", "Heiko Böhme", "Dirk Jäger", "Mihaela Aldea", "Daniel Truhn", "Christiane Höper", "Jakob Nikolas Kather" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Dyke Ferber (1, 2), Lars Hilgers (2), Isabella C. Wiest (2, 3), Marie-Elisabeth Leßmann (2, 4), Jan Clusmann (2, 5), Peter Neidlinger (2), Jiefu Zhu (2), Georg Wölflein (6), Jacqueline Lammert (7), Maximilian Tschochohei (8), Heiko Böhme (9, 10, 11, 12), Dirk Jäger (1), Mihaela Aldea (13), Daniel Truhn (14), Christiane Höper (15), Jakob Nikolas Kather (1, 2, 4, +)
1. Department of Medical Oncology, National Center for Tumor Diseases (NCT), Heidelberg University Hospital, Heidelberg, Germany
2. Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
3. Department of Medicine II, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
4. Department of Medicine I, University Hospital Dresden, Dresden, Germany
5. Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
6. School of Computer Science, University of St Andrews, St Andrews, United Kingdom
7. Department of Gynecology and Center for Hereditary Breast and Ovarian Cancer, University Hospital rechts der Isar, Technical University of Munich (TUM), Munich, Germany
8. Google Cloud, Munich, Germany
9. National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
10. German Cancer Research Center (DKFZ), Heidelberg, Germany
11. Medical Faculty and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
12. Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany
13. Department of Medical Oncology, Gustave Roussy, Villejuif, France; Paris Saclay University, Kremlin Bicêtre, France
14. Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Germany
15. AstraZeneca GmbH, Germany
+ Corresponding author: jakob-nikolas.kather@alumni.dkfz.de, Jakob Nikolas Kather, MD, MSc, Professor of Clinical Artificial Intelligence, Else Kröner Fresenius Center for Digital Health, Technische Universität Dresden, DE – 01062 Dresden, Phone: +49 351 458-7558, Fax: +49 351 458 7236, Mail: jakob_nikolas.kather@tu-dresden.de, ORCID ID: 0000-0002-3730-5348
§ ABSTRACT §.§.§ Background Identifying suitable clinical trials for cancer patients is crucial to advance treatment modalities and patient care. However, due to the inconsistent format of medical free-text documents and the often highly complex logic in the trials' eligibility criteria, this process is not only extremely challenging for medical doctors, but also time-consuming and prone to errors. This results in insufficient, and often delayed, inclusion of oncology patients in clinical trials. The recent advent of Large Language Models (LLMs) has demonstrated considerable potential for interpreting electronic health records (EHRs), suggesting that they hold great promise to facilitate accurate trial matching at scale. §.§.§ Patients and Methods We generated 51 realistic oncology-focused patient EHRs. For each, a database of all 105,600 oncology-related clinical trials worldwide from clinicaltrials.gov was accessed by GPT-4o to identify a pool of suitable trial candidates with minimal human supervision. Patient eligibility was then screened by the LLM at the criterion level across a selection of trials from the candidate trial pool and compared against a baseline defined by human experts. We then used criterion-level AI feedback to iterate over discrepant AI and human results, refining the human ground truth where necessary. 
§.§.§ Results Our approach successfully identified relevant, human-preselected candidate trials in 93.3% of test cases from all trials available worldwide and achieved a preliminary accuracy of 88.0% (1,398/1,589) when matching patient-level information on a per-criterion basis, using the initial human evaluation as the baseline. Utilizing LLM feedback to interactively re-evaluate human scores revealed that 39.3% of the criteria on which the model initially disagreed with the human baseline were either ambiguous or inaccurately annotated by humans, leading to a total model accuracy of 92.7% after refining the human ground truth eligibility definitions. §.§.§ Conclusion We present an end-to-end pipeline for clinical trial matching using LLMs, demonstrating high precision both in screening for appropriate clinical trials at scale and in matching selected candidate trials to individual patients, in some cases even exceeding the performance of qualified medical doctors. Additionally, our pipeline can operate either fully autonomously or with human supervision and is not intrinsically restricted to cancer, offering a scalable solution to enhance patient-trial matching for the real world. § KEYWORDS Clinical Trial Matching • Oncology Trials • Eligibility Criteria • Artificial Intelligence • Large Language Model • GPT-4o § INTRODUCTION In oncology, clinical trials serve two purposes: they offer potential therapeutic benefits to cancer patients across all disease stages, ranging from early intervention to experimental treatments for those with limited or exhausted standard care options1,2. They are also crucial to advance scientific research, as new treatments can only be approved through rigorous clinical testing. However, the practical realization of clinical trial enrollments remains far from satisfactory, from both patient and clinician perspectives. For clinicians, identifying suitable trials is often time-consuming3 and complex due to patient-related factors such as performance status or comorbidities, logistical challenges like regional trial availability, systemic issues including lack of access to genomic testing, and the difficulty clinicians might face in locating available trials, all of which contribute to low enrollment rates of only 2-3% of potential trial candidates4. Overall, there are three primary reasons for this: First, the sheer volume of data generated during oncologic treatment, including hospital stay records as well as genomic and imaging data, accumulates rapidly, drastically increasing the burden on physicians5–7. These data are typically fragmented and unstructured8, comprising free text, tabular records, and more8. Second, the complexity and volume of clinical trials tailored to oncology further complicate the process. There are approximately 500,000 studies registered on ClinicalTrials.gov, out of which 105,732 are dedicated to patients with cancer as of May 2024. Like patient records, trial information is often unstructured, such as plain-text eligibility criteria, and requires complex logical combinations of disease conditions, histologic subtypes, molecular markers and comorbidities9. Third, from a patient's perspective, due to the evolution of the disease and the need to avoid patient attrition from deteriorating clinical conditions, it is crucial that the time to inclusion and treatment initiation in clinical trials is kept as short as possible. 
From a practical perspective, addressing these challenges requires clinicians to follow a two-step process: first, they must screen for potential trial candidates based on key patient criteria such as tumor type, stage, mutations and availability within the patient's area of residence; then, they need to perform detailed one-on-one matches of all the patient's information with each candidate trial's eligibility criteria. So far, computational support tools designed to simplify this process have focused on only one of these steps at a time. For the first step, systems have primarily used embedding techniques, where patient and trial text data are converted into a numerical representation space and matched based on approximate mathematical similarity10,11. For the second step, most tools focus on converting unstructured text from patient records and trial information into a tabular-like format. For instance, Criteria2Query uses machine learning and rule-based methods to parse inclusion and exclusion criteria into a structured format accessible via database queries12. Only recently, with advances in generative AI, particularly Large Language Models (LLMs) like GPT-413, extracting and structuring information from medical documents has been drastically simplified14. The potential of LLMs has also been explored for matching patients to clinical trials based on comparing eligibility criteria to patient records15. For instance, den Hamer et al.16 demonstrated that LLMs can accurately provide eligibility labels such as 'yes', 'no', or 'unknown' when given both trial information and patient data as input at the same time. In oncology, Wong et al.17 extended this idea to account for complex logical conditions using a hierarchical matching procedure, showing that GPT-4 can excel at this task even without additional training. Fine-tuning LLMs on annotated trial data has markedly improved their performance even further. This approach has facilitated the development of a local, privacy-preserving model that closely rivals the capabilities of proprietary, large cloud-based LLMs18. Recently, the same research team created OncoLLM19, a new model that significantly reduces the performance gap with the current leading model, GPT-4. Nevertheless, the aforementioned projects have several limitations: First, they tend to focus on either step one or step two of the process, rather than integrating both. Additionally, for step one, attributes such as sex, location, recruitment status or intervention type take on only discrete values, which would be managed more effectively through direct selection or filtering than through embedding-based approaches that rely on inexact similarity matches. Second, all current LLM-based methods rely heavily on narrowly engineered prompts, which can be lengthy and cumbersome (Wong et al.17 report prompts of up to four pages). Third, due to the free-text form in which eligibility criteria and patient information are processed by the model20, there is no guarantee that the responses will adhere strictly to the required criteria structure. We herein present a fully end-to-end pipeline for clinical trial matching, which we designed to overcome the aforementioned limitations. 
Our approach is based on two principles: using LLMs as central reasoning agents21 capable of taking actions, and programmatically enforcing trial eligibility criteria as structured programming objects rather than plain free text, thereby ensuring the model consistently outputs validly annotated information22. Our contributions are the following:
* To the best of our knowledge, we present the first truly end-to-end pipeline for clinical trial matching, starting with searching relevant trial candidates for a given patient from all cancer trials available worldwide and ending with fully-annotated trial eligibility criteria for a relevant set of trials.
* We provide an extensive benchmark encompassing 51 oncology cases and matching over 1,580 individual trial criteria that have been annotated by five human experts. We provide evidence that our pipeline excels both at reliably filtering relevant trials from tens of thousands of candidates and at providing highly accurate one-on-one eligibility matches with criterion-level feedback and explanations for users.
* We demonstrate that LLMs can outperform medical doctors in clinical trial matching. Our findings reveal that nearly 40% of the initially contradictory answers between GPT-4o and physicians were accepted as valid responses upon refining the human baseline with criterion-level AI feedback, resulting in an overall criterion-level accuracy of 92.7% for our pipeline.
* By enforcing trial eligibility criteria as structured programming objects rather than relying on them as free-text inputs, we guarantee that the LLM always outputs precisely and validly annotated information.
§ METHODS §.§.§ Clinical Trial Composition Data was sourced from ClinicalTrials.gov on May 13, 2024, by filtering for the Condition/disease “cancer”, yielding a total of 105,600 registered clinical trials, provided in a JavaScript Object Notation (JSON) file. Subsequently, we programmatically filter each clinical trial by selecting relevant metadata, including fields like recruitment status, available centers (locations) or allowed disease conditions. Next, to allow the generation of vector embeddings from free text, we combine several metadata fields, such as the brief and official titles, detailed trial descriptions and brief summaries, into a structured plain-text field. §.§.§ Database Generation When finding appropriate clinical trials for patients, physicians most often need to first filter by specific, structured criteria like the locations of participating centers, recruitment status or allowed disease conditions, and then also examine free-text descriptions to precisely match patient conditions to exclusion and inclusion criteria. From a computational perspective, we are thus confronted with the fact that certain attributes, such as discrete metadata fields with a set of discrete allowed options, require exact matches, whereas others need to be matched based on free text; this requires handling the issue of synonyms - such as recognizing that “lung metastases” and “pulmonary metastases” are equivalent - where exact pattern matches are unsuitable. To address these issues, we developed a hybrid database that effectively combines exact field matching with vector proximity search to find clinical trials that most closely correspond to patient descriptions in representation space. 
For the former, we employed a local instance of a No-SQL database23 (MongoDB), which offers several advantages in this context, including high scalability, a flexible schema design for sending nested requests, and robust performance when handling large datasets. Next, we generate vector embeddings - numerical representations - of the preprocessed free text for each clinical trial using the “BAAI/bge-large-en-v1.5” embedding model locally, producing vector embeddings with a dimensionality of 768 from text chunks of up to 512 tokens each. As clinical trial information is most often considerably longer, we split the text into chunks with a 50-character overlap to ensure comprehensive coverage and to avoid information loss, for example from splitting text in the middle of a sentence. We store all text embeddings in a local collection of a vector database (ChromaDB24) for efficient similarity search, using cosine distance as the default search metric throughout our experiments. §.§.§ Clinical Case Generation Our experiments are based on published synthetic cases by Benary et al.25, which include ten fictional patient vignettes representing seven different tumor types, primarily lung adenocarcinoma (four cases), each annotated with various mutations (59 in total). To create a more realistic setting, we extended these cases to full medical EHR reports, using in-house original patient reports as templates and including clinical descriptions of patient diagnoses, comorbidities, molecular information, short imaging descriptions from staging CT or MRI scans, and patient history at different levels of detail. To ensure reliable matching of patients to existing clinical trials, we initially selected candidate trials through manual search or by utilizing those approved by a molecular tumor board from Lammert et al26. This led to a total of 15 patient cases, which we refer to as base cases in the following. We then aligned the clinical case descriptions to either meet or conflict with the respective trial eligibility criteria. This procedure was performed by first manually crafting patient reports based on medical expertise, then utilizing ChatGPT (GPT-4) for iterative refinement of style, grammar and language flow, leading to a total of 51 case vignettes. The final versions of these were evaluated for clinical realism, completeness and linguistic authenticity, and were approved by one physician with expertise in oncology before performing the experiments. §.§.§ Trial Matching Pipeline Specifications The pipeline consists of two main components: the hybrid No-SQL-&-Vector database and an LLM that acts at its core to sequentially orchestrate database search, trial retrieval and finally trial matching with patient information. We utilized the “GPT-4o” model through the OpenAI integration in Python. Model hyperparameters were kept at their default settings. As the LLM operates programmatically to access the database, its outputs cannot be plain text, but need to be valid programmatic data types and occasionally also adhere to certain constraints, such as belonging to a fixed, discrete set of options (like the current recruitment status of a trial). Otherwise, invalid requests would lead to failures in accessing the database. We therefore constrain model output types by setting type hints in pydantic27. 
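To make this design choice concrete, the sketch below shows how such type-constrained outputs could look in practice. The paper does not publish its exact schemas, so all class and field names here are hypothetical and the snippet assumes pydantic v2; it only illustrates the principle of restricting discrete attributes to allowed options and forcing each criterion-level answer into True/False/unknown plus a free-text rationale.

```python
# Minimal sketch (hypothetical class and field names, assuming pydantic v2) of
# type-constrained LLM outputs for database filtering and criterion matching.
from typing import List, Literal
from pydantic import BaseModel, Field

# Discrete metadata such as the recruitment status is restricted to a fixed set
# of allowed options, so the model cannot emit values the database would reject.
RecruitmentStatus = Literal[
    "RECRUITING", "ACTIVE_NOT_RECRUITING", "NOT_YET_RECRUITING",
    "COMPLETED", "TERMINATED",
]

class TrialFilter(BaseModel):
    """Structured No-SQL prefilter extracted by the LLM from a patient EHR."""
    conditions: List[str] = Field(..., description="e.g. 'non-small cell lung cancer'")
    keywords: List[str] = Field(default_factory=list, description="e.g. 'KRAS G12C'")
    countries: List[str] = Field(default_factory=list)
    status: List[RecruitmentStatus] = ["RECRUITING", "ACTIVE_NOT_RECRUITING"]

class CriterionResult(BaseModel):
    """One inclusion or exclusion criterion, matched against the patient."""
    criterion: str      # unaltered criterion text as structured from the trial
    reasoning: str      # chain-of-thought style justification for the decision
    eligible: Literal["True", "False", "unknown"]

class TrialMatch(BaseModel):
    nct_id: str
    inclusion: List[CriterionResult]
    exclusion: List[CriterionResult]

# The LLM is asked to answer with JSON conforming to these schemas; a validation
# failure raises a pydantic error and can trigger a retry of the request.
# (llm_response_text is a placeholder for the raw model output.)
# match = TrialMatch.model_validate_json(llm_response_text)
```

Enforcing the schema at parse time, rather than merely requesting a particular text format in the prompt, is what guarantees that every criterion comes back exactly once and with a valid label.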
The entire pipeline, which we illustrate in Figure 1, consists of a sequential chain of LLM requests, where each LLM call is executed as a structured Chain-of-Thought (CoT) module: Upon invocation with a plain text description of a cancer patient and a user instruction (with varying levels of detail), the LLM extracts relevant metadata to prefilter the database in a No-SQL fashion. For all discrete attributes, we provide all available options as enforced type hints in a zero-shot manner; for open-ended free text search terms like disease conditions or keywords (for instance to filter free text for specific mutations) we provide manually crafted, few-shot examples. Next, each LLM output is converted into a valid No-SQL database query to extract all matching trials by their National Clinical Trial identifier (NCTId). Subsequently, to enhance the diversity of the retrieval step and remove uninformative information from the patient descriptions, GPT-4o is instructed to generate a maximum of five different queries from the main patient information, where the retrieval is constrained to filter only from the preselected pool of trials by NCTId. This step is performed iteratively over all queries. We end up with a collection of n top-matching trials from the vector search, from which we use NCTIds to retrieve full trial information. As outlined later, we additionally experimented with using reranking (Cohere rerank-english-v3.028), which we omit from our final pipeline due to lack of additional benefits. Instead, for each trial, the LLM processes the fields containing brief trial summaries and detailed descriptions and discards trials that are deemed irrelevant to the patient. It then structures eligibility criteria programmatically with up to two levels of nested conditions and performs an element-by-element match of the structured inclusion and exclusion criteria to the patient information, returning only boolean values (True if patient is eligible according to a criterion, False otherwise) or unknown if the information provided to the LLM was insufficient to make a decision. To guide the model's response in handling edge cases, we define few-shot examples: For instance, if a potential comorbidity is not mentioned in the patient's EHR, the model is instructed to assume its absence unless the eligibility criteria require explicit exclusion. However, if any documented symptoms or indications in the EHR make the comorbidity plausible, the model should indicate that the information is insufficient (unknown). Additionally, for each single criterion, we receive an explanation by the model based on Chain-of-Thought reasoning. One constraint we make during testing the model is that we permit it to include trials in an active but not currently recruiting status as an explicit design choice to ensure consistency with the trials described previously26. §.§.§ Human evaluations Evaluations of all 51 trial candidates were conducted by five professionals experienced in medical oncology. To ensure one-to-one matches, the same criteria splits defined by GPT-4o for each trial were used for human annotations, with evaluations categorized as eligible, not eligible, or unknown. These ratings were performed using a browser-based interface that provided access to the full patient EHR, the trial NCTId, the trial's official title, brief summary, and GPT-4o structured inclusion and exclusion criteria (Supplementary Figure 1). 
Each human evaluator worked independently, with results later aggregated using a majority vote as the aggregation criterion. During the second stage, where discrepant AI-human results were compared, we collected consensus responses through discussions of the model's criterion-level explanations among three of the original evaluators, leading to either acceptance or rejection of the model's response. § RESULTS §.§.§ Target Trial Identification Performance We hypothesized that the process of filtering relevant trials on clinicaltrials.gov could be optimized using GPT-4o to write No-SQL database queries, thereby reducing the manual burden on physicians. We evaluate this idea on a subset of 15 base cases from our EHR collection, using either clinical trials from Lammert et al26 or potential target trials manually identified from clinicaltrials.gov. All prompts are provided in Supplementary Table 1. Our results indicate that using GPT-4o is sufficient to write a No-SQL query that filters all (15/15) potentially relevant trials - those that were preselected via manual search by humans for each patient base case - thus narrowing the initial pool of over 100,000 trials to a few hundred candidates (Figure 2, left). However, due to the variability in the number of trials for different conditions - such as rare mutations or tumor types yielding only a handful of trials, while others result in hundreds - it is not always feasible to process all filtered trials directly through an LLM. We therefore employed vector similarity search to enrich trials with highest potential relevance by calculating the cosine distance between trial information and patient EHRs in a representation space. We selected the top k=50 trials with the lowest cosine distance. These trials were then processed by GPT-4o, which was instructed to discard any irrelevant trials that falsely appeared relevant due to semantic overlap (Figure 2, right). As an example, consider a patient with “non-small cell lung cancer” and a clinical trial that is eligible only for “small cell lung cancer.” Despite the high semantic similarity (low cosine distance) between these terms, the patient would be ineligible for the trial. This discrepancy is accounted for by instructing GPT-4o to discard such trials, ensuring only relevant trials are selected. Our results demonstrate that this combined approach is highly effective, reducing the number of candidate trials from hundreds to 20-30. Notably, 14 out of the 15 target trials (93.3%) fall within the top 10 trial options, and 10 out of 15 are ranked within the top 5 trials (Figure 2, right). Additionally, we evaluated the potential benefits of incorporating reranking models. Although these models have shown promising results in optimizing text retrieval tasks and relevance sorting for efficiency29, we did not observe significant improvements when applied to the full text of the trials using Cohere's rerank-english-v3.028. Therefore, we omitted reranking and considered the selected trials from the previous step as final. Our findings demonstrate the potential of combining No-SQL database and vector similarity search with GPT-4o to effectively reduce the number of trials to a few candidate options, ensuring that only the most relevant ones are prioritized for each patient. 
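As a rough illustration of this hybrid filtering-plus-similarity step, the sketch below hand-writes one possible No-SQL prefilter and the subsequent vector query; in the actual pipeline both the filter and the (up to five) query strings are generated by GPT-4o, and the database, collection and metadata field names used here are assumptions rather than the authors' schema.

```python
# Illustrative sketch of the hybrid retrieval step; database, collection and
# metadata field names are assumptions, not the authors' actual schema.
from pymongo import MongoClient
import chromadb
from sentence_transformers import SentenceTransformer

trials = MongoClient("mongodb://localhost:27017")["trials_db"]["trials"]
chunks = chromadb.PersistentClient(path="./chroma").get_collection("trial_chunks")
embedder = SentenceTransformer("BAAI/bge-large-en-v1.5")

# 1) Exact-match prefilter (written by GPT-4o in the actual pipeline).
nosql_query = {
    "status": {"$in": ["RECRUITING", "ACTIVE_NOT_RECRUITING"]},
    "conditions": {"$regex": "non-small cell lung", "$options": "i"},
    "locations.country": "Germany",
}
candidate_ids = [doc["nct_id"] for doc in trials.find(nosql_query, {"nct_id": 1})]

# 2) Vector similarity search, restricted to the prefiltered pool of NCTIds.
patient_query = ("Stage IV NSCLC with KRAS G12C mutation, progression after "
                 "first-line chemo-immunotherapy, ECOG 1")
query_emb = embedder.encode([patient_query], normalize_embeddings=True).tolist()
hits = chunks.query(
    query_embeddings=query_emb,
    n_results=50,                              # top k = 50 by cosine distance
    where={"nct_id": {"$in": candidate_ids}},  # only consider prefiltered trials
)
# Deduplicate chunk-level hits back to whole trials, preserving ranking order.
top_nct_ids = list(dict.fromkeys(m["nct_id"] for m in hits["metadatas"][0]))
```

Deduplicating chunk-level hits back to trial identifiers mirrors the fact that each trial was split into overlapping text chunks before embedding.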
§.§.§ Inclusion and Exclusion Criteria Accuracy Next, we evaluated the criterion-level accuracy of GPT-4o across all 51 oncology EHRs for one target trial each, resulting in a total of 1,589 evaluable criteria, including both flat and nested ones. We show an example of how GPT-4o internally structures these eligibility criteria in Supplementary Tables 2 and 3 and provide an example of GPT-4o's full trial annotations, including the unaltered eligibility criteria and criterion-level AI reasoning, in Supplementary Table 4. For each criterion, the model was instructed to return one of three responses: True if the patient was eligible based on that criterion alone, False if the patient was not eligible, or “unknown” if the available data was inadequate to make a decision. In cases involving nested criteria, where the criterion “header” was not directly evaluable (e.g., “All patients:” or “(At least) one of the following:”), the model was additionally instructed to provide a global criterion result that reflects the logical aggregation of the nested criteria. We use the majority answer from annotations generated by five independent board-licensed physicians on all 1,589 criteria as a human baseline against which to compare the model's performance, which we highlight in Figure 3. Notably, as elaborated below, we do not regard the human annotations as ground truth. Our results demonstrate that GPT-4o achieves an overall - preliminary - accuracy of 88.0% (calculated as the number of criteria where human and LLM decisions agree, divided by the total number of criteria, Figure 3), with similar performance when considering inclusion and exclusion criteria separately (87.5% and 88.6%, respectively). All patient cases can be found in Supplementary Table 5. Additionally, we find that GPT-4o achieved 96.5% accuracy when focusing solely on True or False answers by either the model or the human observers (“True/False”). The same observation is made when excluding only model N/A answers, which led to 96.5% of the answers being considered correct upon comparison to the human annotations (“no AI N/A”). We consider the metric excluding model N/A outputs an even better indicator of the model's performance, because outputs pointing out the model's inability to answer a criterion due to insufficient patient or trial information are less critical in real-world settings than incorrectly assigning ineligibility or eligibility. In summary, we show that besides finding relevant trial candidates, GPT-4o can evaluate patient eligibility for these selected trials with very high criterion-level accuracy. §.§.§ Refining human baseline with AI feedback To better understand the reasons behind differences in model versus human annotations at the criterion level, we re-evaluated all 191 cases where answers did not align. This process was performed by three of the original observers, who debated these discrepancies: we found that 39.3% (75 out of 191) of the initially conflicting answers were accepted after considering the model's reasoning and re-assessing the patient case and specific criteria. Following this refinement of the human baseline, our trial matching pipeline showed a 4.7% improvement in performance, achieving an overall accuracy of 92.7%. Furthermore, our model consistently performed above the 97% accuracy threshold when focusing on True and False answers only (“True/False”) or excluding model responses referring to missing information (“no AI N/A”) from the measurement (Figure 3A). 
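For transparency about how these three numbers relate, the following sketch computes them from paired per-criterion labels; the exact subset definitions for the “True/False” and “no AI N/A” variants are our reading of the text above, and the example data are purely illustrative.

```python
# Sketch of the three accuracy variants from paired (human, model) labels per
# criterion; labels are "True", "False" or "unknown". Example data only.
from typing import Dict, List, Tuple

def accuracies(pairs: List[Tuple[str, str]]) -> Dict[str, float]:
    def acc(subset: List[Tuple[str, str]]) -> float:
        return sum(h == m for h, m in subset) / len(subset) if subset else float("nan")

    overall = acc(pairs)
    # "True/False": restrict to criteria with a definite answer on both sides
    # (one plausible reading of the definition given in the text).
    tf_only = acc([(h, m) for h, m in pairs if "unknown" not in (h, m)])
    # "no AI N/A": drop only criteria where the model itself answered "unknown".
    no_ai_na = acc([(h, m) for h, m in pairs if m != "unknown"])
    return {"overall": overall, "true_false": tf_only, "no_ai_na": no_ai_na}

example = [("True", "True"), ("False", "False"), ("True", "unknown"), ("unknown", "True")]
print(accuracies(example))  # -> {'overall': 0.5, 'true_false': 1.0, 'no_ai_na': 0.666...}
```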
We next investigated the types of refinements human annotators made upon reviewing the model's answers (Figure 3B). We found that a substantial number of corrections to human annotations were necessary where annotators had initially considered a patient eligible or ineligible for a certain criterion although the information needed to make that decision was in fact absent (74.7%, 56/75 and 8%, 6/75, respectively). More importantly, however, we found scenarios in which human annotators corrected eligible to ineligible decisions (10.7%, 8/75) and ineligible to eligible (6.7%, 6/75), indicating instances where human annotators made substantial mistakes that could be corrected using AI feedback. In summary, we show that GPT-4o can match, and likely even exceed, the performance of qualified physicians in evaluating patient trial eligibility.

§ DISCUSSION

In this work, we describe and validate a fully end-to-end approach for leveraging LLMs for clinical trial matching, using oncology cases as an example. Overall, we demonstrate how GPT-4o can first effectively screen potential trial candidates from a collection of over one hundred thousand clinical trials registered on clinicaltrials.gov and then match the selected candidates to patient records on a criterion-by-criterion basis. This has several real-world advantages. From a clinical perspective, physicians must filter out the more than 99.9% of trials that are irrelevant due to differing tumor types, disease stages, or distant locations. Additionally, they must consider what type of trials they are specifically looking for: Should they target a particular molecular alteration? Are they seeking trials for treatment-naive patients, or for those refractory to other therapies? Consequently, physicians are often forced to rely on ad hoc searches rather than structured methods to find suitable clinical trials. Given the inherent capabilities of state-of-the-art LLMs, we show that the process of filtering relevant trials by keyword-based search can be automated using GPT-4o, which can itself write queries for a No-SQL database, guided with or without human supervision, such as “Please find a Phase 1 trial for the patient in Germany” or “Could you please find a clinical trial for the patient's KRAS mutation (or all of the patient's mutations)?” Our approach leverages the robust capabilities of GPT-4o in generating valid computer code, allowing programmatic access to trial databases with only optional human guidance. For instance, GPT-4o can request trials in specific locations or target particular mutations, or combinations of both if given as an instruction by medical professionals. This makes our system highly scalable and flexible, extending its applicability beyond the pre-selection of trials from a single center demonstrated by Gupta et al.19 Moreover, we are convinced that the inherent reasoning capabilities of LLMs21, particularly in light of future advancements, will allow them to handle complex logic internally. In contrast, the approach by Wong et al.17 explicitly forces LLMs to rewrite eligibility logic into structured Disjunctive Normal Form (DNF), imposing constraints on the model by limiting the combination of categories such as disease state, histology, and biomarkers through logical conditions (and, or, all, any, not, etc.). This method also alters the original trial criteria, complicating human evaluation.
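To make this contrast concrete, the sketch below shows one way per-criterion verdicts can stay mapped one-to-one to the unaltered criterion text while a nested header (e.g., “All patients:” or “(At least) one of the following:”) receives a global result aggregated from its children. The three-valued aggregation rules and the `aggregate_nested`/`header_kind` names are illustrative assumptions rather than the pipeline's actual data model.

```python
from typing import Literal

Verdict = Literal["True", "False", "unknown"]


def aggregate_nested(header_kind: Literal["all", "any"],
                     child_verdicts: list[Verdict]) -> Verdict:
    """Three-valued aggregation of nested criterion verdicts.

    "all": every child must hold; a single False sinks the header, while an
           unresolved child leaves it unresolved.
    "any": one satisfied child suffices; otherwise an unresolved child
           leaves the header unresolved.
    """
    if header_kind == "all":
        if "False" in child_verdicts:
            return "False"
        return "unknown" if "unknown" in child_verdicts else "True"
    # header_kind == "any"
    if "True" in child_verdicts:
        return "True"
    return "unknown" if "unknown" in child_verdicts else "False"


# Example: header "(At least) one of the following:" with three sub-criteria
# evaluated independently against the patient record.
print(aggregate_nested("any", ["False", "unknown", "False"]))  # -> "unknown"
print(aggregate_nested("all", ["True", "True", "True"]))       # -> "True"
```

Because each child verdict remains attached to its original wording, a physician can still audit the header's global result without the eligibility logic having been rewritten.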
Our approach ensures that the model's output can be mapped back to the original criteria on a one-to-one basis, with each criterion accompanied by a detailed chain of reasoning explaining the model's decision. This allows medical doctors to fact-check each decision of the model, ensuring explainability and trust. By understanding why the model reaches a particular conclusion and identifying potential errors, we can better understand the capabilities and limitations of current LLMs in managing EHR data. Additionally, we demonstrate that our system's performance can match, and under certain conditions even surpass, that of human experts in criterion decision tasks. Although the results are not directly comparable due to different trials and patient data (Gupta et al. utilize real-world de-identified EHR cases) and potential variations in how trial criteria are processed, our overall pipeline achieves an accuracy of 92.7%, and 97.4% when excluding N/A samples. This exceeds the results others have previously achieved with GPT-4, reporting accuracies of 68% and 72%, respectively19. Moreover, our program-rather-than-prompt strategy ensures that responses consistently adhere to the required format, reducing the burden of finding optimal, often highly specific and narrow prompts. We can therefore guarantee that, regardless of the length or complexity of the criteria, we receive validly annotated and unaltered criteria back from the LLM, which is not the case when criteria are handled as plain, free text. This approach also allows broad transferability of our system to other medical domains with minimal adjustments, such as addressing domain-specific edge cases.

Nevertheless, our study has several limitations: In real-world scenarios, under current regulatory restrictions, GPT-4o is not a suitable candidate due to its cloud-based nature, which necessitates transferring sensitive patient data to proprietary servers. Thus, we consider GPT-4o a best-in-class model suitable for proof-of-concept purposes. We anticipate that local model solutions will catch up in performance in the near future, making them more suitable for clinical application. Additionally, real-world patient data will be required to fully validate the applicability of our system, incorporating longer and even more diverse patient documents. For instance, laboratory values may be nested in spreadsheets, and imaging data might be stored separately, with all relevant patient information distributed across various documents. Moreover, we aim to evaluate the model's ability to accurately rank and prioritize the most relevant trials, enabling doctors to quickly identify the best options for their patients. Although our system currently provides scores based on the number and ratio of fulfilled eligibility criteria, we have not yet established a sophisticated measure for quantitative evaluation. We plan to develop and refine this using real-world data in the near future.

Despite these challenges, our work demonstrates that an LLM can autonomously narrow down relevant trials from thousands to a manageable handful and accurately match these trials criterion by criterion. To our knowledge, our study is the closest to mirroring the real-world scenario of how medical doctors interact with clinical trial databases like clinicaltrials.gov.
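As an illustration of the program-rather-than-prompt idea and of a simple score based on the number and ratio of fulfilled criteria, the sketch below defines a structured per-criterion record (using pydantic, which is among the cited tools) and a naive ranking score. The `CriterionResult` schema, its field names, and the `fulfilled_ratio` rule are assumptions made for illustration; the study's actual schema and scoring may differ.

```python
from typing import Literal

from pydantic import BaseModel


class CriterionResult(BaseModel):
    """One unaltered eligibility criterion plus the model's structured verdict."""
    criterion_text: str                      # original trial wording, unchanged
    kind: Literal["inclusion", "exclusion"]
    verdict: Literal["True", "False", "unknown"]
    reasoning: str                           # chain of reasoning for human review


def fulfilled_ratio(results: list[CriterionResult]) -> float:
    """Naive trial score: share of decidable criteria the patient satisfies.
    Inclusion criteria count when met ("True"), exclusion criteria count when
    not triggered ("False"); "unknown" answers are left out of the denominator."""
    decidable = [r for r in results if r.verdict != "unknown"]
    if not decidable:
        return 0.0
    satisfied = sum((r.kind == "inclusion" and r.verdict == "True") or
                    (r.kind == "exclusion" and r.verdict == "False")
                    for r in decidable)
    return satisfied / len(decidable)


# Toy usage (fictitious criteria, not taken from a real trial).
results = [
    CriterionResult(criterion_text="ECOG performance status 0-1",
                    kind="inclusion", verdict="True",
                    reasoning="ECOG 1 documented in the most recent assessment."),
    CriterionResult(criterion_text="Known untreated brain metastases",
                    kind="exclusion", verdict="False",
                    reasoning="No brain imaging findings reported."),
    CriterionResult(criterion_text="Prior treatment with a KRAS-targeted agent",
                    kind="inclusion", verdict="unknown",
                    reasoning="Treatment history does not mention KRAS-targeted therapy."),
]
print(fulfilled_ratio(results))  # -> 1.0 over the two decidable criteria
```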
This evidence suggests significant potential for our approach, particularly as we show, for the first time, that AI feedback can enhance the performance of medical specialists in identifying suitable clinical trials for their patients.

§ ACKNOWLEDGEMENTS

We thank OpenAI for supporting our work through a researcher access grant.

§ REFERENCES

1. Bouzalmate-Hajjaj, A., Massó Guijarro, P., Khan, K. S., Bueno-Cavanillas, A. & Cano-Ibáñez, N. Benefits of Participation in Clinical Trials: An Umbrella Review. Int. J. Environ. Res. Public Health 19, (2022).
2. Unger, J. M., Cook, E., Tai, E. & Bleyer, A. The Role of Clinical Trial Participation in Cancer Research: Barriers, Evidence, and Strategies. Am Soc Clin Oncol Educ Book 35, 185–198 (2016).
3. Penberthy, L. T., Dahman, B. A., Petkov, V. I. & DeShazo, J. P. Effort required in eligibility screening for clinical trials. J. Oncol. Pract. 8, 365–370 (2012).
4. Unger, J. M., Vaidya, R., Hershman, D. L., Minasian, L. M. & Fleury, M. E. Systematic Review and Meta-Analysis of the Magnitude of Structural, Clinical, and Physician and Patient Barriers to Cancer Clinical Trial Participation. J. Natl. Cancer Inst. 111, 245–255 (2019).
5. Oxentenko, A. S., West, C. P., Popkave, C., Weinberger, S. E. & Kolars, J. C. Time spent on clinical documentation: a survey of internal medicine residents and program directors. Arch. Intern. Med. 170, 377–380 (2010).
6. Rule, A., Bedrick, S., Chiang, M. F. & Hribar, M. R. Length and Redundancy of Outpatient Progress Notes Across a Decade at an Academic Medical Center. JAMA Netw Open 4, e2115334 (2021).
7. Moy, A. J. et al. Measurement of clinical documentation burden among physicians and nurses using electronic health records: a scoping review. J. Am. Med. Inform. Assoc. 28, 998–1008 (2021).
8. Kong, H.-J. Managing Unstructured Big Data in Healthcare System. Healthc. Inform. Res. 25, 1–2 (2019).
9. Bradley, J., Kelly, K. & Stinchcombe, T. E. The Ever-Increasing Number of Trial Eligibility Criteria: Time to Bend the Curve. Journal of thoracic oncology: official publication of the International Association for the Study of Lung Cancer vol. 12 1459–1460 (2017).
10. Zhang, X., Xiao, C., Glass, L. M. & Sun, J. DeepEnroll: Patient-Trial Matching with Deep Embedding and Entailment Prediction. arXiv [cs.AI] (2020).
11. Gao, J., Xiao, C., Glass, L. M. & Sun, J. COMPOSE: Cross-Modal Pseudo-Siamese Network for Patient Trial Matching. in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 803–812 (Association for Computing Machinery, New York, NY, USA, 2020).
12. Yuan, C. et al. Criteria2Query: a natural language interface to clinical databases for cohort definition. J. Am. Med. Inform. Assoc. 26, 294–305 (2019).
13. OpenAI et al. GPT-4 Technical Report. arXiv [cs.CL] (2023).
14. Wiest, I. C. et al. From Text to Tables: A Local Privacy Preserving Large Language Model for Structured Information Retrieval from Medical Documents. medRxiv 2023.12.07.23299648 (2023) doi:10.1101/2023.12.07.23299648.
15. Jin, Q. et al. Matching Patients to Clinical Trials with Large Language Models. ArXiv (2024).
16. den Hamer, D. M., Schoor, P., Polak, T. B. & Kapitan, D. Improving Patient Pre-screening for Clinical Trials: Assisting Physicians with Large Language Models. arXiv [cs.LG] (2023).
17. Wong, C. et al. Scaling Clinical Trial Matching Using Large Language Models: A Case Study in Oncology. in Proceedings of the 8th Machine Learning for Healthcare Conference (eds. Deshpande, K. et al.) vol. 219 846–862 (PMLR, 11–12 Aug 2023).
18. Nievas, M., Basu, A., Wang, Y. & Singh, H. Distilling large language models for matching patients to clinical trials. J. Am. Med. Inform. Assoc. (2024) doi:10.1093/jamia/ocae073.
19. Gupta, S. K. et al. PRISM: Patient Records Interpretation for Semantic Clinical Trial Matching using Large Language Models. arXiv [cs.CL] (2024).
20. Wornow, M. et al. Zero-Shot Clinical Trial Patient Matching with LLMs. arXiv [cs.CL] (2024).
21. Truhn, D., Reis-Filho, J. S. & Kather, J. N. Large language models should be used as scientific reasoning engines, not knowledge databases. Nat. Med. 29, 2983–2984 (2023).
22. Singhvi, A. et al. DSPy Assertions: Computational Constraints for Self-Refining Language Model Pipelines. arXiv [cs.CL] (2023).
23. Cattell, R. Scalable SQL and NoSQL data stores. SIGMOD Rec. 39, 12–27 (2011).
24. Chroma. https://www.trychroma.com/.
25. Benary, M. et al. Leveraging Large Language Models for Decision Support in Personalized Oncology. JAMA Netw Open 6, e2343689 (2023).
26. Lammert, J. et al. Expert-guided large language models for clinical decision support in precision oncology. (2024) doi:10.2139/ssrn.4855985.
27. Welcome to pydantic - pydantic. https://docs.pydantic.dev/latest/.
28. Introducing Rerank 3: A new foundation model for efficient enterprise search & retrieval. Cohere https://cohere.com/blog/rerank-3.
29. Sasazawa, Y., Yokote, K., Imaichi, O. & Sogawa, Y. Text Retrieval with Multi-Stage Re-Ranking Models. arXiv [cs.IR] (2023).

§.§ Data availability statement

All clinical trial information used can be accessed and downloaded manually via https://clinicaltrials.gov/ as detailed in “Methods - Clinical Trial Composition”. Note that available trials and information on existing trials will change over time. We release all 51 synthetic EHR notes, which are based on case vignettes published by Benary et al.25, in Supplementary Data Table 5.

§.§ Code availability statement

All methods necessary to reproduce our results are extensively documented. While we plan to enhance our pipeline further, we are committed to offering researchers access to our findings and methodologies in the near future. We release code from the current implementation for research purposes upon publication in a scientific journal here: https://github.com/Dyke-F/llm-trials.

§.§ Ethics statement

This study does not include confidential information. All research procedures were conducted exclusively on publicly accessible, anonymized patient data and in accordance with the Declaration of Helsinki, maintaining all relevant ethical standards. The overall analysis was approved by the Ethics commission of the Medical Faculty of the Technical University Dresden (BO-EK-444102022).

§.§ Statement on Use of Artificial Intelligence Tools

In accordance with the COPE (Committee on Publication Ethics) position statement of 13 February 2023 (https://publicationethics.org/cope-position-statements/ai-author), the authors hereby disclose the use of the following artificial intelligence models during the writing of this article: GPT-4 (OpenAI) for checking spelling and grammar.

§.§ Author Contributions

DF designed and performed the experiments, evaluated and interpreted the results and wrote the initial draft of the manuscript. DF, LH and PN developed the case vignettes. ICW, JC, ML, LH and DF analyzed the results; ICW, LH and DF performed the eligibility re-evaluation. JZ designed the web interface for eligibility evaluation.
CH provided expertise for the discussion of the implications of the findings. All authors contributed to writing the manuscript. MA, DJ, DT and JNK supervised the study.

§.§ Funding

JNK is supported by the German Cancer Aid (DECADE, 70115166), the German Federal Ministry of Education and Research (PEARL, 01KD2104C; CAMINO, 01EO2101; SWAG, 01KD2215A; TRANSFORM LIVER, 031L0312A; TANGERINE, 01KT2302 through ERA-NET Transcan; Come2Data, 16DKZ2044A; DEEP-HCC, 031L0315A), the German Academic Exchange Service (SECAI, 57616814), the German Federal Joint Committee (TransplantKI, 01VSF21048), the European Union's Horizon Europe and innovation programme (ODELIA, 101057091; GENIAL, 101096312), the European Research Council (ERC; NADIR, 101114631), the National Institutes of Health (EPICO, R01 CA263318) and the National Institute for Health and Care Research (NIHR, NIHR203331) Leeds Biomedical Research Centre. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care. This work was funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them. JC is supported by the Mildred-Scheel-Postdoktorandenprogramm of the German Cancer Aid (grant #70115730). DT is funded by the German Federal Ministry of Education and Research (TRANSFORM LIVER, 031L0312A), the European Union's Horizon Europe and innovation programme (ODELIA, 101057091), and the German Federal Ministry of Health (SWAG, 01KD2215B). GW is supported by Lothian NHS. JL is supported by the TUM School of Medicine and Health Clinician Scientist Program (project no. H-08). CH contributed to this work in her personal interest outside of her employment at AstraZeneca GmbH. The views expressed are those of the author(s) and not necessarily those of AstraZeneca, the NHS, the NIHR or the Department of Health and Social Care. No other funding is disclosed by any of the authors.

§.§ Competing Interests

JNK declares consulting services for Owkin, France; DoMore Diagnostics, Norway; Panakeia, UK, and Scailyte, Basel, Switzerland; furthermore JNK holds shares in Kather Consulting, Dresden, Germany; and StratifAI GmbH, Dresden, Germany, and has received honoraria for lectures and advisory board participation by AstraZeneca, Bayer, Eisai, MSD, BMS, Roche, Pfizer and Fresenius. DT received honoraria for lectures by Bayer and holds shares in StratifAI GmbH, Germany. ICW received honoraria from AstraZeneca. The authors have no additional financial or non-financial conflicts of interest to disclose.

§ SUPPLEMENTARY DATA

Supplementary Table 1. Model Instructions for Trial Search.
Supplementary Table 2. Unstructured eligibility criteria for NCT02227251: Selinexor (KPT-330) in Patients With Relapsed/Refractory Diffuse Large B-Cell Lymphoma (DLBCL).
Supplementary Table 3. LLM-structured eligibility criteria for NCT02227251: Selinexor (KPT-330) in Patients With Relapsed/Refractory Diffuse Large B-Cell Lymphoma (DLBCL).
Supplementary Table 4. Fully evaluated clinical trial eligibility criteria for Patient 1.1.1.
Supplementary Figure 1. Web-based user interface for assessing trial eligibility criteria on a per patient basis.
Supplementary Table 5. Clinical Case EHRs.
Note: Color highlighting is for illustration only, and not provided to the model. ===== Patient 1.1 ===== Patient Information Name: Emily Johnson Born: October 15, 1967 Address: 5678 Maple Avenue, Springfield, IL, United States Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV and unresectable lung adenocarcinoma; adenocarcinoma histology, PD-L1 Initial Detection: March 10, 2023, following persistent chest pain and dyspnea Biopsy Date: March 25, 2023 Molecular Profile: KRAS p.G13D, TP53 p.A276G, PTPRS R238*, ZFHX3 p.F2994L, CDH1 p.D433N Therapy Overview Initial Treatment: None Comorbidities: Well-managed and not contraindicating any treatment Comorbidities Hyperlipidemia Osteoarthritis Gastroesophageal Reflux Disease (GERD) Iron deficiency anemia   Medication Simvastatin 40mg once daily (0-0-1) Esomeprazole 20mg once daily (1-0-0) Ferrous sulfate one tablet daily (1-0-0)   Performance Status: ECOG Performance Status 1 Chronological Medical Findings: January 15, 2020: Routine annual physical examination revealed elevated blood pressure. Diagnosed with hypertension and prescribed Lisinopril 10 mg daily. March 20, 2020: Complaints of frequent heartburn and acid reflux. Diagnosed with GERD and prescribed Esomeprazole 20 mg daily. February 5, 2021: Follow-up for hypertension showed well-controlled blood pressure. Lisinopril dosage maintained. May 25, 2021: Complained of knee pain and stiffness. Diagnosed with osteoarthritis. Recommended over-the-counter NSAIDs for pain management. April 15, 2022: Routine check-up revealed elevated blood glucose levels. Diagnosed with Type 2 Diabetes Mellitus and prescribed Metformin 500 mg twice daily. August 10, 2022: Routine cholesterol check indicated high cholesterol levels. Diagnosed with hyperlipidemia and prescribed Simvastatin 20 mg daily. November 15, 2022: Follow-up for diabetes and hyperlipidemia. Dosages adjusted: Metformin increased to 1000 mg twice daily, Simvastatin increased to 40 mg daily. February 20, 2023: Complained of shortness of breath and chronic cough. Diagnosed with mild COPD. Prescribed Salbutamol inhaler. July 10, 2023: Routine follow-up showed stable condition with controlled comorbidities. Blood pressure, blood sugar, and cholesterol levels within target ranges.   March 10, 2024: Experienced persistent chest pain and shortness of breath. Chest X-ray and CT scan revealed a mass in the right lung. March 15, 2024: CT Angiography, Pulmonary Arteries: Tumor Size: Approximately 4.7 cm in diameter. Bronchial Obstruction: Partial obstruction of the right main bronchus leading to atelectasis of the right upper lobe. Urgent suspicion of a tumor-atelectasis complex in the right upper lobe of the lung. Mucus present in the lower lobe bronchi on the right. Suspicion of mediastinal lymph node metastases. No evidence of pulmonary artery embolism. Lymph Nodes: Enlarged, FDG-positive lymph nodes in the mediastinum. March 25, 2024: CT-guided lung biopsy: Diagnosed with non-small cell lung cancer (NSCLC), adenocarcinoma. Molecular diagnostics: KRAS p.G13D, TP53 p.A276G, PTPRS R238*, ZFHX3 p.F2994L, CDH1 p.D433N. April 20, 2024: Detailed assessment of health status. ECOG performance status 1.  
April 21, 2024: Routine Lab: Leukocytes 4,200/mcL, Lymphocytes 600/mm³, Absolute Neutrophil Count (ANC) 1,200/mcL Platelets 150 × 10³/uL Hemoglobin 9.0 g/dL Total Bilirubin 1.3 mg/dL Aspartate Aminotransferase (AST) 50 U/L Alanine Aminotransferase (ALT) 60 U/L Alkaline Phosphatase 200 U/L Creatinine 1.4 mg/dL L-thyroxin (T4) 8.5 µg/dL Thyroid Stimulating Hormone (TSH) 2.0 µIU/mL Blood Glucose 90 mg/dL Cholesterol 180 mg/dL HCG within normal range. April 22, 2024: Molecular tumor board: Recommendation for trial inclusion.   ===== Patient 1.1.1 ===== Patient Information Name: Emily Johnson Born: October 15, 1967 Address: 5678 Maple Avenue, Springfield, IL, United States Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV and unresectable lung adenocarcinoma; adenocarcinoma histology, PD-L1 Initial Detection: March 10, 2023, following persistent chest pain and dyspnea Biopsy Date: March 25, 2023 Molecular Profile: KRAS p.G13D, TP53 p.A276G, PTPRS R238*, ZFHX3 p.F2994L, CDH1 p.D433N Therapy Overview Initial Treatment: None Comorbidities: Well-managed and not contraindicating any treatment Comorbidities Hyperlipidemia Osteoarthritis Gastroesophageal Reflux Disease (GERD) Iron deficiency anemia   Medication Simvastatin 40mg once daily (0-0-1) Esomeprazole 20mg once daily (1-0-0) Ferrous sulfate one tablet daily (1-0-0)   Performance Status: ECOG Performance Status 1 Chronological Medical Findings: January 15, 2020: Routine annual physical examination revealed elevated blood pressure. Diagnosed with hypertension and prescribed Lisinopril 10 mg daily. March 20, 2020: Complaints of frequent heartburn and acid reflux. Diagnosed with GERD and prescribed Esomeprazole 20 mg daily. February 5, 2021: Follow-up for hypertension showed well-controlled blood pressure. Lisinopril dosage maintained. May 25, 2021: Complained of knee pain and stiffness. Diagnosed with osteoarthritis. Recommended over-the-counter NSAIDs for pain management. April 15, 2022: Routine check-up revealed elevated blood glucose levels. Diagnosed with Type 2 Diabetes Mellitus and prescribed Metformin 500 mg twice daily. August 10, 2022: Routine cholesterol check indicated high cholesterol levels. Diagnosed with hyperlipidemia and prescribed Simvastatin 20 mg daily. November 15, 2022: Follow-up for diabetes and hyperlipidemia. Dosages adjusted: Metformin increased to 1000 mg twice daily, Simvastatin increased to 40 mg daily. February 20, 2023: Complained of shortness of breath and chronic cough. Diagnosed with mild COPD. Prescribed Salbutamol inhaler. July 10, 2023: Routine follow-up showed stable condition with controlled comorbidities. Blood pressure, blood sugar, and cholesterol levels within target ranges.   March 10, 2024: Experienced persistent chest pain and shortness of breath. Chest X-ray and CT scan revealed a mass in the right lung. March 15, 2024: CT Angiography, Pulmonary Arteries: Tumor Size: Approximately 4.7 cm in diameter. Bronchial Obstruction: Partial obstruction of the right main bronchus leading to atelectasis of the right upper lobe. Urgent suspicion of a tumor-atelectasis complex in the right upper lobe of the lung. Mucus present in the lower lobe bronchi on the right. Suspicion of mediastinal lymph node metastases. No evidence of pulmonary artery embolism. Lymph Nodes: Enlarged, FDG-positive lymph nodes in the mediastinum. March 25, 2024: CT-guided lung biopsy: Diagnosed with non-small cell lung cancer (NSCLC), adenocarcinoma. 
Molecular diagnostics: KRAS p.G13D, TP53 p.A276G, PTPRS R238*, ZFHX3 p.F2994L, CDH1 p.D433N. April 20, 2024: Detailed assessment of health status. ECOG performance status 1.  April 21, 2024: Routine Lab: Leukocytes 2,700/mcL, Lymphocytes 438/mm³, Absolute Neutrophil Count (ANC) 1,200/mcL Platelets 150 × 10³/uL Hemoglobin 9.0 g/dL Total Bilirubin 1.3 mg/dL Aspartate Aminotransferase (AST) 50 U/L Alanine Aminotransferase (ALT) 60 U/L Alkaline Phosphatase 200 U/L Creatinine 1.4 mg/dL L-thyroxin (T4) 8.5 µg/dL Thyroid Stimulating Hormone (TSH) 2.0 µIU/mL Blood Glucose 90 mg/dL Cholesterol 180 mg/dL HCG +++++ April 22, 2024: Molecular tumor board: Recommendation for trial inclusion.   ===== Patient 1.1.2 ===== Patient Information Name: Emily Johnson Born: October 15, 1967 Address: 5678 Maple Avenue, Springfield, IL, United States   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV and unresectable lung adenocarcinoma, M+ (BRAIN); adenocarcinoma histology, Initial Detection: March 10, 2023, following persistent chest pain and dyspnea Biopsy Date: March 25, 2023 Molecular Profile: KRAS p.G13D, TP53 p.A276G, PTPRS R238*, ZFHX3 p.F2994L, CDH1 p.D433N   Therapy Overview Initial Treatment: None Comorbidities: Well-managed and not contraindicating any treatment   Comorbidities Hyperlipidemia Osteoarthritis Gastroesophageal Reflux Disease (GERD) Iron deficiency anemia HIV   Medication Simvastatin 40mg once daily (0-0-1) Esomeprazole 20mg once daily (1-0-0) Ferrous sulfate one tablet daily (1-0-0) Biktarvy (1-0-0)   Performance Status: ECOG Performance Status 1   Chronological Medical Findings: January 15, 2020: Routine annual physical examination revealed elevated blood pressure. Diagnosed with hypertension and prescribed Lisinopril 10 mg daily. March 20, 2020: Complaints of frequent heartburn and acid reflux. Diagnosed with GERD and prescribed Esomeprazole 20 mg daily. February 5, 2021: Follow-up for hypertension showed well-controlled blood pressure. Lisinopril dosage maintained. May 25, 2021: Complained of knee pain and stiffness. Diagnosed with osteoarthritis. Recommended over-the-counter NSAIDs for pain management. April 15, 2022: Routine check-up revealed elevated blood glucose levels. Diagnosed with Type 2 Diabetes Mellitus and prescribed Metformin 500 mg twice daily. August 10, 2022: Routine cholesterol check indicated high cholesterol levels. Diagnosed with hyperlipidemia and prescribed Simvastatin 20 mg daily. November 15, 2022: Follow-up for diabetes and hyperlipidemia. Dosages adjusted: Metformin increased to 1000 mg twice daily, Simvastatin increased to 40 mg daily. February 20, 2023: Complained of shortness of breath and chronic cough. Diagnosed with mild COPD. Prescribed Salbutamol inhaler. July 10, 2023: Routine follow-up showed stable condition with controlled comorbidities. Blood pressure, blood sugar, and cholesterol levels within target ranges. March 10, 2024: Experienced persistent chest pain and shortness of breath. Chest X-ray and CT scan revealed a mass in the right lung. March 15, 2024: CT Angiography, Pulmonary Arteries: Tumor Size: Approximately 4.7 cm in diameter. Bronchial Obstruction: Partial obstruction of the right main bronchus leading to atelectasis of the right upper lobe. Urgent suspicion of a tumor-atelectasis complex in the right upper lobe of the lung. Mucus present in the lower lobe bronchi on the right. Suspicion of mediastinal lymph node metastases. No evidence of pulmonary artery embolism. 
Lymph Nodes: Enlarged, FDG-positive lymph nodes in the mediastinum. March 20, 2024: MRI-Brain: three metastatic lesions consistent with primary lung cancer. Lesions are located in the left frontal lobe, left parietal lobe, and left occipital lobe, measuring 1.2 cm, 1.5 cm, and 1.8 cm in diameter, respectively. Surrounding vasogenic edema is noted, causing mild mass effect on adjacent brain structures. No evidence of hemorrhage or hydrocephalus observed. March 25, 2024: CT-guided lung biopsy: Diagnosed with non-small cell lung cancer (NSCLC), adenocarcinoma. Molecular diagnostics: KRAS p.G13D, TP53 p.A276G, PTPRS R238*, ZFHX3 p.F2994L, CDH1 p.D433N. April 20, 2024: Detailed assessment of health status. ECOG performance status 1.  April 21, 2024: Routine Lab: Leukocytes 4,200/mcL, Lymphocytes 600/mm³, Absolute Neutrophil Count (ANC) 1,200/mcL Platelets 150 × 10³/uL Hemoglobin 9.0 g/dL Total Bilirubin 1.3 mg/dL Aspartate Aminotransferase (AST) 50 U/L Alanine Aminotransferase (ALT) 60 U/L Alkaline Phosphatase 200 U/L Creatinine 1.4 mg/dL L-thyroxin (T4) 8.5 µg/dL Thyroid Stimulating Hormone (TSH) 2.0 µIU/mL Blood Glucose 90 mg/dL Cholesterol 180 mg/dL HCG within normal range. SO2 (room air) 87% April 22, 2024: Molecular tumor board: Recommendation for trial inclusion.   ===== Patient 2.1 ===== Patient Information Name: Sarah Mitchell Born: June 12, 1998 Address: 8765 Pine Street, Springfield, IL   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV urachal adenocarcinoma; m+ (LYM, PULMONARY) Initial Detection: January 18, 2024, following persistent hematuria and abdominal discomfort Biopsy Date: January 28, 2024 Molecular Profile: KRAS p.G12V, BCORL p.R1332*, TP53 p.H214fs7, CDKN2C p.L65F, MAP3K1 p.T949_E950insT, MYCN p.E47fs8, CTNNA1 p.K577_L578TKL, JAK1 p.I597M, FANCL p.T367fs*12+, PIK3CA amplification (n6), MYC amplification (n6), MYCL1 amplification (n6), SOX2 amplification (n6), MUTYH amplification (n6)   Therapy Overview Chemotherapy: Began February 1 - April 22, 2024 (Cisplatin + 5-FU)   Comorbidities Seasonal Allergies   Medication Cetirizine 10mg as needed   Performance Status: ECOG Performance Status 1   Chronological Medical Findings: January 18, 2024: Presented with hematuria and abdominal discomfort. Abdominal ultrasound revealed a mass in the bladder dome. January 22, 2024: CT scan abdomen/pelvis: Mass located at the bladder dome, measuring approximately 3.0 cm in diameter. Evidence of local invasion into surrounding structures + several enlarged local lymph nodes are noted, with the largest lymph node located near the pelvic sidewall, measuring 1.5 cm. Chest CT scan: Multiple metastatic lesions are present in both lungs. The largest metastasis is located peripherally in the left lung and measures approximately 3.1 cm in diameter. Other smaller metastatic nodules scattered throughout both lung fields. January 28, 2024: Multiple Biopsies of bladder (CT-guided): Histology confirmed urachal adenocarcinoma. Molecular panel sequencing revealed mutations: KRAS p.G12V, BCORL p.R1332*, TP53 p.H214fs7, CDKN2C p.L65F, MAP3K1 p.T949_E950insT, MYCN p.E47fs8, CTNNA1 p.K577_L578TKL, JAK1 p.I597M, FANCL p.T367fs*12+, PIK3CA amplification (n6), MYC amplification (n6), MYCL1 amplification (n6), SOX2 amplification (n6), MUTYH amplification (n6). February 1, 2024: Initiated chemotherapy with Cisplatin + 5-FU. April 23, 2024: CT scan abdomen/pelvis: Mass located at the bladder dome, now measuring approximately 4.5 cm in diameter. 
Increased evidence of local invasion into surrounding structures. Several enlarged local lymph nodes are noted, with the largest lymph node located near the pelvic sidewall, now measuring 2.0 cm. Chest CT scan: Increased number and size of metastatic lesions in both lungs. The largest metastasis is located peripherally in the left lung and now measures approximately 4.0 cm in diameter. Numerous other metastatic nodules, with some showing an increase in size, are scattered throughout both lung fields. April 25, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 4,500/mcL, platelet count 250,000/mcL, total bilirubin 0.8 mg/dL, AST/ALT within normal limits, creatinine 0.8 mg/dL, hemoglobin 14.0 g/dL, serum albumin 4.0 g/dL, lipase and amylase within normal limits. Serum HCG test negative.  April 28, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. Patient in good clinical condition, willing to participate in a trial.   ===== Patient 2.1.1 =====   Patient Information Name: Sarah Mitchell Born: June 12, 1998 Address: 8765 Pine Street, Springfield, IL   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV urachal adenocarcinoma; m+ (LYM, PULMONARY) Initial Detection: January 18, 2024, following persistent hematuria and abdominal discomfort Biopsy Date: January 28, 2024 Molecular Profile: KRAS p.G12V, BCORL p.R1332*, TP53 p.H214fs7, CDKN2C p.L65F, MAP3K1 p.T949_E950insT, MYCN p.E47fs8, CTNNA1 p.K577_L578TKL, JAK1 p.I597M, FANCL p.T367fs*12+, PIK3CA amplification (n6), MYC amplification (n6), MYCL1 amplification (n6), SOX2 amplification (n6), MUTYH amplification (n6)   Therapy Overview Chemotherapy: Began February 1 - April 22, 2024 (Cisplatin + 5-FU)   Comorbidities Seasonal Allergies Platin-induced Neuropathy (April 2024)   Medication Cetirizine 10mg as needed   Performance Status: ECOG Performance Status 1   Chronological Medical Findings: January 18, 2024: Presented with hematuria and abdominal discomfort. Abdominal ultrasound revealed a mass in the bladder dome. January 22, 2024: CT scan abdomen/pelvis: Mass located at the bladder dome, measuring approximately 3.0 cm in diameter. Evidence of local invasion into surrounding structures + several enlarged local lymph nodes are noted, with the largest lymph node located near the pelvic sidewall, measuring 1.5 cm. Chest CT scan: Multiple metastatic lesions are present in both lungs. The largest metastasis is located peripherally in the left lung and measures approximately 3.1 cm in diameter. Other smaller metastatic nodules scattered throughout both lung fields. January 28, 2024: Multiple Biopsies of bladder (CT-guided): Histology confirmed urachal adenocarcinoma. Molecular panel sequencing revealed mutations: KRAS p.G12V, BCORL p.R1332*, TP53 p.H214fs7, CDKN2C p.L65F, MAP3K1 p.T949_E950insT, MYCN p.E47fs8, CTNNA1 p.K577_L578TKL, JAK1 p.I597M, FANCL p.T367fs*12+, PIK3CA amplification (n6), MYC amplification (n6), MYCL1 amplification (n6), SOX2 amplification (n6), MUTYH amplification (n6). February 1, 2024: Initiated chemotherapy with Cisplatin + 5-FU. Chemotherapy abrogated before completion of the last cycle due to severe neuropathy limiting daily activities. April 23, 2024: CT scan abdomen/pelvis: Mass located at the bladder dome, now measuring approximately 4.5 cm in diameter. Increased evidence of local invasion into surrounding structures. 
Several enlarged local lymph nodes are noted, with the largest lymph node located near the pelvic sidewall, now measuring 2.0 cm. Chest CT scan: Increased number and size of metastatic lesions in both lungs. The largest metastasis is located peripherally in the left lung and now measures approximately 4.0 cm in diameter. Numerous other metastatic nodules, with some showing an increase in size, are scattered throughout both lung fields. cMRI: 1 single brain metastasis in the frontal lobe. April 25, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 4,500/mcL, platelet count 250,000/mcL, total bilirubin 0.8 mg/dL, AST/ALT within normal limits, creatinine 0.8 mg/dL, hemoglobin 14.0 g/dL, serum albumin 4.0 g/dL, lipase and amylase within normal limits. Serum HCG test negative.  April 28, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. Patient in overall good clinical condition, however persistent neuropathy (no improvements). She is willing to participate in clinical trials.   ===== Patient 2.1.2 =====  Patient Information Name: Sarah Mitchell Born: June 12, 1998 Address: 8765 Pine Street, Springfield, IL   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV urachal adenocarcinoma; m+ (LYM, PULMONARY) Initial Detection: January 18, 2024, following persistent hematuria and abdominal discomfort Biopsy Date: January 28, 2024 Molecular Profile: KRAS p.G12V, BCORL p.R1332*, TP53 p.H214fs7, CDKN2C p.L65F, MAP3K1 p.T949_E950insT, MYCN p.E47fs8, CTNNA1 p.K577_L578TKL, JAK1 p.I597M, FANCL p.T367fs*12+, PIK3CA amplification (n6), MYC amplification (n6), MYCL1 amplification (n6), SOX2 amplification (n6), MUTYH amplification (n6)   Therapy Overview Chemotherapy: Began February 1 - April 22, 2024 (Cisplatin + 5-FU)   Comorbidities Diabetes type II Seasonal Allergies   Medication Cetirizine 10mg as needed Ceftriaxone 1g 1-0-0 Metformin 500mg BID (paused) Glyburide 5mg BID (paused) Insulin under monitoring   Performance Status: ECOG Performance Status 1   Chronological Medical Findings: January 18, 2024: Presented with hematuria and abdominal discomfort. Abdominal ultrasound revealed a mass in the bladder dome. January 22, 2024: CT scan abdomen/pelvis: Mass located at the bladder dome, measuring approximately 3.0 cm in diameter. Evidence of local invasion into surrounding structures + several enlarged local lymph nodes are noted, with the largest lymph node located near the pelvic sidewall, measuring 1.5 cm. Chest CT scan: Multiple metastatic lesions are present in both lungs. The largest metastasis is located peripherally in the left lung and measures approximately 3.1 cm in diameter. Other smaller metastatic nodules scattered throughout both lung fields. January 28, 2024: Multiple Biopsies of bladder (CT-guided): Histology confirmed urachal adenocarcinoma. Molecular panel sequencing revealed mutations: KRAS p.G12V, BCORL p.R1332*, TP53 p.H214fs7, CDKN2C p.L65F, MAP3K1 p.T949_E950insT, MYCN p.E47fs8, CTNNA1 p.K577_L578TKL, JAK1 p.I597M, FANCL p.T367fs*12+, PIK3CA amplification (n6), MYC amplification (n6), MYCL1 amplification (n6), SOX2 amplification (n6), MUTYH amplification (n6). February 1, 2024: Initiated chemotherapy with Cisplatin + 5-FU. April 23, 2024: CT scan abdomen/pelvis: Mass located at the bladder dome, now measuring approximately 4.5 cm in diameter. 
Increased evidence of local invasion into surrounding structures. Several enlarged local lymph nodes are noted, with the largest lymph node located near the pelvic sidewall, now measuring 2.0 cm. Chest CT scan: Increased number and size of metastatic lesions in both lungs. The largest metastasis is located peripherally in the left lung and now measures approximately 4.0 cm in diameter. Numerous other metastatic nodules, with some showing an increase in size, are scattered throughout both lung fields. April 25, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 4,500/mcL, platelet count 250,000/mcL, total bilirubin 0.8 mg/dL, AST/ALT within normal limits, creatinine 0.8 mg/dL, hemoglobin 14.0 g/dL, serum albumin 4.0 g/dL, lipase and amylase within normal limits. Serum HCG test negative.  April 28, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. Patient in good clinical condition, willing to participate in a trial. May 1, 2024: Patient presents with fever, flank pain, and dysuria. Hospitalized for further evaluation and treatment. Ultrasound: Enlarged kidney with signs of inflammation, consistent with pyelonephritis. Blood culture: Pending. Urine culture: Pending. Started on IV antibiotics: Ceftriaxone 1g.  CRP: 15 mg/dL. Leukocytes: 18,000/mcL. HbA1c 8.3%. Paused Metformin/Glyburide, started on insulin with close monitoring   ===== Patient 2.2 ===== Patient Information Name: Sarah Mitchell Born: June 12, 1998 Address: 8765 Pine Street, Springfield, IL   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV urachal adenocarcinoma; m+ (LYM, PULMONARY) Initial Detection: January 18, 2024, following persistent hematuria and abdominal discomfort Biopsy Date: January 28, 2024 Molecular Profile: KRAS p.G12V, BCORL p.R1332*, TP53 p.H214fs7, CDKN2C p.L65F, MAP3K1 p.T949_E950insT, MYCN p.E47fs8, CTNNA1 p.K577_L578TKL, JAK1 p.I597M, FANCL p.T367fs*12+, PIK3CA amplification (n6), MYC amplification (n6), MYCL1 amplification (n6), SOX2 amplification (n6), MUTYH amplification (n6)   Therapy Overview Chemotherapy: Began February 1 - April 22, 2024 (Cisplatin + 5-FU)   Comorbidities Seasonal Allergies   Medication Cetirizine 10mg as needed   Performance Status: ECOG Performance Status 1   Chronological Medical Findings: January 18, 2024: Presented with hematuria and abdominal discomfort. Abdominal ultrasound revealed a mass in the bladder dome. January 22, 2024: CT scan abdomen/pelvis: Mass located at the bladder dome, measuring approximately 3.0 cm in diameter. Evidence of local invasion into surrounding structures + several enlarged local lymph nodes are noted, with the largest lymph node located near the pelvic sidewall, measuring 1.5 cm. Chest CT scan: Multiple metastatic lesions are present in both lungs. The largest metastasis is located peripherally in the left lung and measures approximately 3.1 cm in diameter. Other smaller metastatic nodules scattered throughout both lung fields. January 28, 2024: Multiple Biopsies of bladder (CT-guided): Histology confirmed urachal adenocarcinoma. Molecular panel sequencing revealed mutations: KRAS p.G12V, BCORL p.R1332*, TP53 p.H214fs7, CDKN2C p.L65F, MAP3K1 p.T949_E950insT, MYCN p.E47fs8, CTNNA1 p.K577_L578TKL, JAK1 p.I597M, FANCL p.T367fs*12+, PIK3CA amplification (n6), MYC amplification (n6), MYCL1 amplification (n6), SOX2 amplification (n6), MUTYH amplification (n6). 
February 1, 2024: Initiated chemotherapy with Cisplatin + 5-FU. April 23, 2024: CT scan abdomen/pelvis: Mass located at the bladder dome, now measuring approximately 4.5 cm in diameter. Increased evidence of local invasion into surrounding structures. Several enlarged local lymph nodes are noted, with the largest lymph node located near the pelvic sidewall, now measuring 2.0 cm. Chest CT scan: Increased number and size of metastatic lesions in both lungs. The largest metastasis is located peripherally in the left lung and now measures approximately 4.0 cm in diameter. Numerous other metastatic nodules, with some showing an increase in size, are scattered throughout both lung fields. April 25, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 4,500/mcL, platelet count 250,000/mcL, total bilirubin 0.8 mg/dL, AST/ALT within normal limits, creatinine 0.8 mg/dL, hemoglobin 14.0 g/dL, serum albumin 4.0 g/dL, lipase and amylase within normal limits. Serum HCG test negative.  April 28, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. Patient in good clinical condition, willing to participate in a trial.   ===== Patient 2.2.1 ===== Patient Information Name: Sarah Mitchell Born: June 12, 1998 Address: 8765 Pine Street, Springfield, IL   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV urachal adenocarcinoma; m+ (LYM, PULMONARY) Initial Detection: January 18, 2024, following persistent hematuria and abdominal discomfort Biopsy Date: January 28, 2024 Molecular Profile: KRAS p.G12V, BCORL p.R1332*, TP53 p.H214fs7, CDKN2C p.L65F, MAP3K1 p.T949_E950insT, MYCN p.E47fs8, CTNNA1 p.K577_L578TKL, JAK1 p.I597M, FANCL p.T367fs*12+, PIK3CA amplification (n6), MYC amplification (n6), MYCL1 amplification (n6), SOX2 amplification (n6), MUTYH amplification (n6)   Therapy Overview Chemotherapy: Began February 1 - April 22, 2024 (Cisplatin + 5-FU)   Comorbidities Seasonal Allergies Recurrent gastrointestinal bleedings due to tumor infiltration of the rectum, requiring transfusions   Medication Cetirizine 10mg as needed   Performance Status: ECOG Performance Status 1   Chronological Medical Findings: January 18, 2024: Presented with hematuria and abdominal discomfort. Abdominal ultrasound revealed a mass in the bladder dome. January 22, 2024: CT scan abdomen/pelvis: Mass located at the bladder dome, measuring approximately 3.0 cm in diameter. Evidence of local invasion into surrounding structures + several enlarged local lymph nodes are noted, with the largest lymph node located near the pelvic sidewall, measuring 1.5 cm. Chest CT scan: Multiple metastatic lesions are present in both lungs. The largest metastasis is located peripherally in the left lung and measures approximately 3.1 cm in diameter. Other smaller metastatic nodules scattered throughout both lung fields. January 28, 2024: Multiple Biopsies of bladder (CT-guided): Histology confirmed urachal adenocarcinoma. Molecular panel sequencing revealed mutations: KRAS p.G12V, BCORL p.R1332*, TP53 p.H214fs7, CDKN2C p.L65F, MAP3K1 p.T949_E950insT, MYCN p.E47fs8, CTNNA1 p.K577_L578TKL, JAK1 p.I597M, FANCL p.T367fs*12+, PIK3CA amplification (n6), MYC amplification (n6), MYCL1 amplification (n6), SOX2 amplification (n6), MUTYH amplification (n6). February 1, 2024: Initiated chemotherapy with Cisplatin + 5-FU. 
April 23, 2024: CT scan abdomen/pelvis: Mass located at the bladder dome, now measuring approximately 4.5 cm in diameter. Rectal tumor invasion. Increased evidence of local invasion into surrounding structures. Several enlarged local lymph nodes are noted, with the largest lymph node located near the pelvic sidewall, now measuring 2.0 cm. Chest CT scan: Increased number and size of metastatic lesions in both lungs. The largest metastasis is located peripherally in the left lung and now measures approximately 4.0 cm in diameter. Numerous other metastatic nodules, with some showing an increase in size, are scattered throughout both lung fields. April 25, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 4,500/mcL, platelet count 250,000/mcL, total bilirubin 0.8 mg/dL, AST/ALT within normal limits, creatinine 0.8 mg/dL, hemoglobin 8.4 g/dL (due to recurrent GI bleedings), serum albumin 4.0 g/dL, lipase and amylase within normal limits. Serum HCG test negative.  April 28, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. Patient in good clinical condition, willing to participate in a trial.   ===== Patient 3.1 =====  Patient Information Name: Thomas Meyer Born: January 12, 1966 Address: Schlossallee 1, Dresden, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Masaoka-Koga Stage IVb thymic adenocarcinoma (metastases to the lungs, liver and spines T6, T9, L3) Initial Detection: March 15, 2023, following persistent chest pain and cough, shortness-of-breath Biopsy Date: March 28, 2023 Molecular Profile: Germline: BRCA2 p.K3326* (1N); Tumor: SMAD4 p.C363R (1N), TP53 p.305fs (2N_LOH), CDKN1B p.K100N (2N_LOH), ATM p.E1666* (4N), MAP3K8 p.H236Y (1N), TRAF1 p.R70H (2N), HDAC2 p.R409* (1N), TMEM111-TDRD3 fusion, PRKDC-CDH17 fusion, EXT1-MAGI2 fusion; Overexpressed genes: ERBB2, ERBB3, PDGFRB, TGFA, EGF, FGFR3, MET.   Therapy Overview Initial Treatment: Chemotherapy: Began April 5, 2023, with combination of Doxorubicin, Cisplatin, Vincristine and Cyclophosphamide (ADOC). Partial response after the initial two chemotherapy cycles completed by June, 2023. Continued chemotherapy until November 2023 (progressive disease). Subsequent Treatment: Second-line treatment with Carboplatin plus Paclitaxel starting Nov 24, 2023. Current Status: Disease progression as of May 2024, with new metastatic lesions in the lungs. Comorbidities Former Smoker 25 py Hypertension Stage 2 Type 2 Diabetes Mellitus Hyperlipidemia Gastroesophageal Reflux Disease (GERD) H/o cholecystectomy 2011 Medication Losartan 50mg once daily HCT 12.5mg once daily Metformin 1000mg once daily Atorvastatin 40mg once daily Omeprazole 20mg once daily XGEVA    Performance Status: ECOG Performance Status 1 Chronological Medical Findings: March 15, 2023: Presented with persistent chest pain and cough and SOB.  March 20, 2023: CT scan of the chest: Mass in the anterior mediastinum measuring approximately 6.0 cm with evidence of local invasion into surrounding structures. Multiple pulmonary nodules suggestive of metastasis.  March 28, 2023: CT-guided biopsy of mediastinal mass. Histology confirmed thymic adenocarcinoma. Whole exome sequencing revealed germline mutation BRCA2 p.K3326*, tumor mutations: SMAD4 p.C363R, TP53 p.305fs, CDKN1B p.K100N, ATM p.E1666*, MAP3K8 p.H236Y, TRAF1 p.R70H, HDAC2 p.R409*, TMEM111-TDRD3 fusion, PRKDC-CDH17 fusion, EXT1-MAGI2 fusion. 
Overexpressed genes: ERBB2, ERBB3, PDGFRB, TGFA, EGF, FGFR3, MET. April 5, 2023: Initiated chemotherapy with Doxorubicin, Cisplatin, Vincristine and Cyclophosphamide (ADOC). Patient in sophisticated conditions, first therapy today. June 20, 2023: Follow-up CT scan showed partial response with a decrease in the size of the primary tumor and pulmonary nodules. Continued chemotherapy regimen. November 15, 2023: Follow-up imaging CT chest/abdomen: disease progression with new metastatic lesions. Multiple hepatic lesions, with the largest lesion in segment VIII measuring 4.5 cm, and another lesion in segment II measuring 3.0 cm. Bone scan indicates metastatic involvement of the spine, with lesions identified at T6, T9, and L3 vertebrae. Additional findings include new pulmonary nodules and further enlargement of the primary mass in the anterior mediastinum. November 24, 2023: Started second-line therapy with Carboplatin plus Paclitaxel.  March 10, 2024: Follow-up CT scan: PD. Progression of disease with increased size of liver metastases and new bone lesions. May 20, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. June 1, 2024: Detailed assessment of health status confirmed adequate organ function. All routine labs within normal limits.   ===== Patient 3.1.1 ===== Patient Information Name: Thomas Meyer Born: January 12, 1966 Address: Schlossallee 1, Dresden, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Masaoka-Koga Stage IVb thymic adenocarcinoma (metastases to the lungs, liver and spines T6, T9, L3) Initial Detection: March 15, 2023, following persistent chest pain and cough, shortness-of-breath Biopsy Date: March 28, 2023 Molecular Profile: Germline: BRCA2 p.K3326* (1N); Tumor: SMAD4 p.C363R (1N), TP53 p.305fs (2N_LOH), CDKN1B p.K100N (2N_LOH), ATM p.E1666* (4N), MAP3K8 p.H236Y (1N), TRAF1 p.R70H (2N), HDAC2 p.R409* (1N), TMEM111-TDRD3 fusion, PRKDC-CDH17 fusion, EXT1-MAGI2 fusion; Overexpressed genes: ERBB2, ERBB3, PDGFRB, TGFA, EGF, FGFR3, MET.   Therapy Overview Initial Treatment: Chemotherapy: Began April 5, 2023, with combination of Doxorubicin, Cisplatin, Vincristine and Cyclophosphamide (ADOC). Partial response after the initial two chemotherapy cycles completed by June, 2023. Continued chemotherapy until November 2023 (progressive disease). Subsequent Treatment: Second-line treatment with Carboplatin plus Paclitaxel starting Nov 24, 2023. Current Status: Disease progression as of May 2024, with new metastatic lesions in the lungs. Comorbidities Former Smoker 25 py Interstitial Lung disease (ILD) Hypertension Stage 2 Type 2 Diabetes Mellitus Hyperlipidemia Gastroesophageal Reflux Disease (GERD) H/o cholecystectomy 2011 Medication          Prednisone 10mg 1x Losartan 50mg once daily HCT 12.5mg once daily Metformin 1000mg 1x/d Atorvastatin 40mg once daily Omeprazole 20mg daily XGEVA q4w   Performance Status: ECOG Performance Status 1 Chronological Medical Findings: March 15, 2023: Presented with persistent chest pain and cough and SOB.  March 20, 2023: CT scan of the chest: Mass in the anterior mediastinum measuring approximately 6.0 cm with evidence of local invasion into surrounding structures. Multiple pulmonary nodules suggestive of metastasis. Evidence of known Interstitial Lung Disease (ILD) with diffuse interstitial markings and fibrosis. March 28, 2023: CT-guided biopsy of mediastinal mass. Histology confirmed thymic adenocarcinoma. 
Whole exome sequencing revealed germline mutation BRCA2 p.K3326*, tumor mutations: SMAD4 p.C363R, TP53 p.305fs, CDKN1B p.K100N, ATM p.E1666*, MAP3K8 p.H236Y, TRAF1 p.R70H, HDAC2 p.R409*, TMEM111-TDRD3 fusion, PRKDC-CDH17 fusion, EXT1-MAGI2 fusion. Overexpressed genes: ERBB2, ERBB3, PDGFRB, TGFA, EGF, FGFR3, MET. April 5, 2023: Initiated chemotherapy with Doxorubicin, Cisplatin, Vincristine and Cyclophosphamide (ADOC). Patient in sophisticated conditions, first therapy today. June 20, 2023: Follow-up CT scan showed partial response with a decrease in the size of the primary tumor and pulmonary nodules. Continued chemotherapy regimen. November 15, 2023: Follow-up imaging CT chest/abdomen: disease progression with new metastatic lesions. Multiple hepatic lesions, with the largest lesion in segment VIII measuring 4.5 cm, and another lesion in segment II measuring 3.0 cm. Bone scan indicates metastatic involvement of the spine, with lesions identified at T6, T9, and L3 vertebrae. Additional findings include new pulmonary nodules and further enlargement of the primary mass in the anterior mediastinum. Signs of known ILD, stable. November 24, 2023: Started second-line therapy with Carboplatin plus Paclitaxel.  March 10, 2024: Follow-up CT scan: PD. Progression of disease with increased size of liver metastases and new bone lesions. May 20, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. June 1, 2024: Detailed assessment of health status confirmed adequate organ function. All routine labs within normal limits.   ===== Patient 3.1.2 =====   Patient Information Name: Thomas Meyer Born: January 12, 1966 Address: Schlossallee 1, Dresden, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Masaoka-Koga Stage IVb thymic adenocarcinoma (metastases to the lungs, liver and spines T6, T9, L3) Initial Detection: March 15, 2023, following persistent chest pain and cough, shortness-of-breath Biopsy Date: March 28, 2023 Molecular Profile: Germline: BRCA2 p.K3326* (1N); Tumor: SMAD4 p.C363R (1N), TP53 p.305fs (2N_LOH), CDKN1B p.K100N (2N_LOH), ATM p.E1666* (4N), MAP3K8 p.H236Y (1N), TRAF1 p.R70H (2N), HDAC2 p.R409* (1N), TMEM111-TDRD3 fusion, PRKDC-CDH17 fusion, EXT1-MAGI2 fusion; Overexpressed genes: ERBB2, ERBB3, PDGFRB, TGFA, EGF, FGFR3, MET.   Therapy Overview Initial Treatment: Chemotherapy: Began April 5, 2023, with combination of Doxorubicin, Cisplatin, Vincristine and Cyclophosphamide (ADOC). Partial response after the initial two chemotherapy cycles completed by June, 2023. Continued chemotherapy until November 2023 (progressive disease). Subsequent Treatment: Second-line treatment with Carboplatin plus Paclitaxel starting Nov 24, 2023. Current Status: Disease progression as of May 2024, with new metastatic lesions in the lungs. Comorbidities Coronary Artery Disease (CAD), status post percutaneous coronary intervention (PCI) with stent placement in 2018 Interstitial Lung disease (ILD) Hypertension Stage 2 Type 2 Diabetes Mellitus Hyperlipidemia Gastroesophageal Reflux Disease (GERD) H/o cholecystectomy 2011 Former Smoker 25 py   Medication Aspirin 100 1-0-0 Clopidogrel 75mg 1-0-0 Prednisone 10mg 1x Losartan 50mg once daily HCT 12.5mg once daily Metformin 1000mg 1x/d Atorvastatin 40mg once daily Omeprazole 20mg daily XGEVA q4w   Performance Status: ECOG Performance Status 1 Chronological Medical Findings: March 15, 2023: Presented with persistent chest pain and cough and SOB.  
March 20, 2023: CT scan of the chest: Mass in the anterior mediastinum measuring approximately 6.0 cm with evidence of local invasion into surrounding structures. Multiple pulmonary nodules suggestive of metastasis. Evidence of known Interstitial Lung Disease (ILD) with diffuse interstitial markings and fibrosis. March 28, 2023: CT-guided biopsy of mediastinal mass. Histology confirmed thymic adenocarcinoma. Whole exome sequencing revealed germline mutation BRCA2 p.K3326*, tumor mutations: SMAD4 p.C363R, TP53 p.305fs, CDKN1B p.K100N, ATM p.E1666*, MAP3K8 p.H236Y, TRAF1 p.R70H, HDAC2 p.R409*, TMEM111-TDRD3 fusion, PRKDC-CDH17 fusion, EXT1-MAGI2 fusion. Overexpressed genes: ERBB2, ERBB3, PDGFRB, TGFA, EGF, FGFR3, MET. April 5, 2023: Initiated chemotherapy with Doxorubicin, Cisplatin, Vincristine and Cyclophosphamide (ADOC). Patient in sophisticated conditions, first therapy today. June 20, 2023: Follow-up CT scan showed partial response with a decrease in the size of the primary tumor and pulmonary nodules. Continued chemotherapy regimen. November 15, 2023: Follow-up imaging CT chest/abdomen: disease progression with new metastatic lesions. Multiple hepatic lesions, with the largest lesion in segment VIII measuring 4.5 cm, and another lesion in segment II measuring 3.0 cm. Bone scan indicates metastatic involvement of the spine, with lesions identified at T6, T9, and L3 vertebrae. Additional findings include new pulmonary nodules and further enlargement of the primary mass in the anterior mediastinum. Signs of known ILD, stable. November 24, 2023: Started second-line therapy with Carboplatin plus Paclitaxel.  March 10, 2024: Follow-up CT scan: PD. Progression of disease with increased size of liver metastases and new bone lesions. May 20, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. June 1, 2024: Detailed assessment of health status confirmed adequate organ function. All routine labs within normal limits. Patient claims newly intermittent chest pain - next week appointment at in-house cardiology department. ===== Patient 3.2 =====   Patient Information Name: Tim Müller Born: January 03, 1966 Address: Parkallee 10, Dresden, Germany Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Masaoka-Koga Stage IVb thymic adenocarcinoma (metastases to the lungs, liver and spines T6, T9, L3) Initial Detection: March 15, 2023, following persistent chest pain and cough, shortness-of-breath Biopsy Date: March 28, 2023 Molecular Profile: Germline: del BRCA2 mutation; Tumor: SMAD4 p.C363R (1N), TP53 p.305fs (2N_LOH), CDKN1B p.K100N (2N_LOH), ATM p.E1666* (4N), MAP3K8 p.H236Y (1N), TRAF1 p.R70H (2N), HDAC2 p.R409* (1N), TMEM111-TDRD3 fusion, PRKDC-CDH17 fusion, EXT1-MAGI2 fusion; Overexpressed genes: ERBB2, ERBB3, PDGFRB, TGFA, EGF, FGFR3, MET.   Therapy Overview Initial Treatment: Chemotherapy: Began April 5, 2023, with combination of Doxorubicin, Cisplatin, Vincristine and Cyclophosphamide (ADOC). Partial response after the initial two chemotherapy cycles completed by June, 2023. Continued chemotherapy until November 2023 (progressive disease). Subsequent Treatment: Second-line treatment with Carboplatin plus Paclitaxel starting Nov 24, 2023. Current Status: Disease progression as of May 2024, with new metastatic lesions in the lungs. 
Comorbidities Former Smoker 25 py Hypertension Stage 2 Type 2 Diabetes Mellitus Hyperlipidemia Gastroesophageal Reflux Disease (GERD) H/o cholecystectomy 2011 Medication Losartan 50mg once daily HCT 12.5mg once daily Metformin 1000mg once daily Atorvastatin 40mg once daily Omeprazole 20mg once daily XGEVA Performance Status: ECOG Performance Status 1 Chronological Medical Findings: March 15, 2023: Presented with persistent chest pain, cough, and SOB. March 20, 2023: CT scan of the chest: Mass in the anterior mediastinum measuring approximately 6.0 cm with evidence of local invasion into surrounding structures. Multiple pulmonary nodules suggestive of metastasis. March 28, 2023: CT-guided biopsy of mediastinal mass. Histology confirmed thymic adenocarcinoma. Whole exome sequencing revealed: germline BRCA2 mutation (del), tumor mutations: SMAD4 p.C363R, TP53 p.305fs, CDKN1B p.K100N, ATM p.E1666*, MAP3K8 p.H236Y, TRAF1 p.R70H, HDAC2 p.R409*, TMEM111-TDRD3 fusion, PRKDC-CDH17 fusion, EXT1-MAGI2 fusion. Overexpressed genes: ERBB2, ERBB3, PDGFRB, TGFA, EGF, FGFR3, MET. April 5, 2023: Initiated chemotherapy with Doxorubicin, Cisplatin, Vincristine and Cyclophosphamide (ADOC). Patient in good general condition, first therapy today. June 20, 2023: Follow-up CT scan showed partial response with a decrease in the size of the primary tumor and pulmonary nodules. Continued chemotherapy regimen. November 15, 2023: Follow-up imaging CT chest/abdomen: disease progression with new metastatic lesions. Multiple hepatic lesions, with the largest lesion in segment VIII measuring 4.5 cm, and another lesion in segment II measuring 3.0 cm. Bone scan indicates metastatic involvement of the spine, with lesions identified at T6, T9, and L3 vertebrae. Additional findings include new pulmonary nodules and further enlargement of the primary mass in the anterior mediastinum. November 24, 2023: Started second-line therapy with Carboplatin plus Paclitaxel. March 8, 2024: End of chemotherapy. March 10, 2024: Follow-up CT scan: PD. Progression of disease with increased size of liver metastases and new bone lesions. March 20, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. April 5, 2024: Detailed assessment of health status confirmed adequate organ function. All routine labs within normal limits.

===== Patient 4.1 =====

Patient Information Name: David Gärtner Born: March 22, 1965 Address: Cologne, Domstrasse 1, Germany Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV oropharyngeal carcinoma (Metastatic: LYM, HEP, OSS) Initial Detection: February 10, 2023, following persistent sore throat and difficulty swallowing Biopsy Date: February 20, 2023 Molecular Profile: PIK3CA p.E545K (AF 25%), MAPK1 p.E322K (AF 10%), FGFR3 p.D786N (AF 30%) Therapy Overview Initial Treatment: Radiochemotherapy: Began March 1, 2023, with a regimen of Cisplatin (200 mg/m2) paired with local radiotherapy (70 Gy). Partial response noted after the initial radiochemotherapy completed by June 15, 2023. Follow-up CT scan shows disease progression in September 2023. Subsequent Treatment: Began September 15, 2023, immunotherapy with Nivolumab (240mg/2weeks). Current Status: Disease progression as of March 2024, with new metastatic lesions identified. ECOG 1. 
Comorbidities Active Smoker 50 py Hypertension Stage 1 Hyperlipidemia Peripheral artery disease Fontaine 2a Diverticular disease CDD 3a Medication Lisinopril 20mg 1-0-0 Simvastatin 20mg 0-0-0-1 XGEVA, Vitamin D Chronological Medical Findings: February 10, 2023: Presented with persistent sore throat and difficulty swallowing. CT scan of the neck revealed a suspicious mass in the oropharynx. February 15, 2023: CT scan of the neck: Mass in the oropharynx measuring approximately 4.5 cm with evidence of local invasion into surrounding structures. Multiple enlarged cervical lymph nodes are noted, with the largest measuring approximately 2.2 cm in the right level II region. These nodes exhibit round morphology and loss of the fatty hilum, characteristics suggestive of metastatic involvement. Additional enlarged lymph nodes are present in levels III and IV on the right side. February 18, 2023: Staging CT (chest and abdomen): No signs of distant metastasis. February 20, 2023: Biopsy of the oropharyngeal mass performed. Histology confirmed oropharyngeal carcinoma. Molecular panel sequencing revealed mutations: PIK3CA p.E545K (AF 25%), MAPK1 p.E322K (AF 10%), FGFR3 p.D786N (AF 30%). Tumor purity was 60%. March 1, 2023: Initiated radiochemotherapy with Cisplatin and 5-Fluorouracil alongside local radiotherapy (70 Gy). June 15, 2023: Follow-up CT scan showed partial response with a decrease in the size of the primary tumor and cervical lymph nodes. September 10, 2023: Follow-up imaging CT Neck/Chest/Abdomen: Disease progression (PD). Several new hypodense lesions identified in the liver: The largest lesion located in segment VI, measuring approximately 3.1 cm in diameter. Smaller lesions scattered throughout the right hepatic lobe. Multiple new pulmonary nodules are detected in both lungs. The largest nodule located in the right lower lobe, measuring approximately 1.5 cm in diameter. Additional smaller nodules are distributed throughout the bilateral lung fields. No evidence of pleural effusion or pneumothorax. The oropharyngeal mass remains present, with no significant change in size compared to the previous scan. The previously noted enlarged cervical lymph nodes remain prominent, with no significant interval change in size or number. September 15, 2023: Began immunotherapy with Nivolumab. December 18, 2023: Follow-up CT scan Neck/Chest/Abdomen: Stable disease. December 2023 - February 2024: Continuation of Nivolumab. February 20, 2024: Follow-up CT scan Neck/Chest/Abdomen: Progression of disease. Enlargement of multiple known hypodense lesions in the liver, with the largest now measuring 4.5 cm in segment VI (previously 3.1 cm). New lytic lesions in the pelvis. Previously noted pulmonary nodules remain stable with no significant interval change. Stable primary tumor and cervical lymphadenopathy. March 3, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. March 10, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 4,500/mcL, platelet count 250,000/mcL, total bilirubin 0.8 mg/dL, AST/ALT within normal limits, creatinine 0.8 mg/dL, hemoglobin 14.0 g/dL, serum albumin 4.0 g/dL, lipase and amylase within normal limits. Patient in good clinical condition. 
===== Patient 4.1.1 =====    Patient Information Name: David Gärtner Born: March 22, 1965 Address: Cologne, Domstrasse 1, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV oropharyngeal carcinoma (Metastatic: LYM, HEP, OSS) Initial Detection: February 10, 2023, following persistent sore throat and difficulty swallowing Biopsy Date: February 20, 2023 Molecular Profile: PIK3CA p.E545K (AF 25%), MAPK1 p.E322K (AF 10%), FGFR3 p.D786N (AF 30%)   Therapy Overview Initial Treatment: Radiochemotherapy: Began March 1, 2023, with a regimen of Cisplatin (200 mg/m2) paired with local radiotherapy (70G). Partial response noted after the initial radiochemotherapy completed by June 15, 2023. Follow-up CT scan shows disease progression in September 2024. Subsequent Treatment: Began September 15, immunotherapy with Nivolumab (240mg/2weeks).   Current Status: Disease progression as of March 2024, with new metastatic lesions identified. ECOG 1.   Comorbidities Active Smoker 50 py Hypertension Stage 1 Hyperlipidemia Peripheral artery disease Fontaine 2a Diverticular disease CDD 3a   Medication Lisinopril 20mg 1-0-0 Simvastatin 20mg 0-0-0-1 XGEVA, Vitamin D   Chronological Medical Findings: February 10, 2023: Presented with persistent sore throat and difficulty swallowing. CT scan of the neck revealed a suspicious mass in the oropharynx. February 15, 2023: CT scan of the neck: Mass in the oropharynx measuring approximately 4.5 cm with evidence of local invasion into surrounding structures. Multiple enlarged cervical lymph nodes are noted, with the largest measuring approximately 2.2 cm in the right level II region. These nodes exhibit round morphology and loss of the fatty hilum, characteristics suggestive of metastatic involvement. Additional enlarged lymph nodes are present in the levels III and IV on the right side. February 18, 2023: Staging CT (chest and abdomen): No signs of distant metastasis. February 20, 2023: Biopsy of the oropharyngeal mass performed. Histology confirmed oropharyngeal carcinoma. Molecular panel sequencing revealed mutations: PIK3CA p.E545K (AF 25%), MAPK1 p.E322K (AF 10%), FGFR3 p.D786N (AF 30%). Tumor purity was 60%. March 1, 2024: Initiated radiochemotherapy with Cisplatin and 5-Fluorouracil alongside local radiotherapy (70G). June 15, 2023: Follow-up CT scan showed partial response with a decrease in the size of the primary tumor and cervical lymph nodes. September 10, 2023: Follow-up imaging CT Neck/Chest/Abdomen: Disease progression (PD). Several new hypodense lesions identified in the liver: The largest lesion located in segment VI, measuring approximately 3.1 cm in diameter. Smaller lesions scattered throughout the right  hepatic lobe. Multiple new pulmonary nodules are detected in both lungs. The largest nodule located in the right lower lobe, measuring approximately 1.5 cm in diameter. Additional smaller nodules are distributed throughout the bilateral lung fields. No evidence of pleural effusion or pneumothorax. The oropharyngeal mass remains present, with no significant change in size compared to the previous scan. The previously noted enlarged cervical lymph nodes remain prominent, with no significant interval change in size or number. September 15, 2023: Began immunotherapy with Nivolumab. December 18, 2023: Follow-up CT scan Neck/Chest/abdomen: Stable disease. December-February 2023: Continuation Nivolumab. February 20, 2024: Follow-up CT scan Neck/Chest/Abdomen: Progression of disease. 
Enlargement of multiple known hypodense lesions in the liver, with the largest now measuring 4.5 cm in segment VI (previously 3.1cm). New lytic lesions in the pelvis. Previously noted pulmonary nodules remain stable with no significant interval change. Stable primary tumor and cervical lymphadenopathy. March 3, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. March 10, 2024: Visit in the Emergency department with fever for 3 days, shortness of breath + cough + severe headaches. Routine labs: ANC 15,000/mcL, platelet count 200,000/mcL, total bilirubin 1.2mg/dL, AST/ALT 1.5 x ULN, creatinine 1.1 mg/dL, hemoglobin 12.0 g/dL, serum albumin 3.5 g/dL, leukocytes 18,000/mcL, CRP 23 mg/dL. Chest X-ray and CT scan confirmed pneumonia. Hospitalized for further evaluation and treatment. Blood and sputum cultures were taken and are pending. Patient started on IV antibiotics: Ceftriaxone and Azithromycin.   ===== Patient 4.1.2 =====    Patient Information Name: David Gärtner Born: March 22, 1965 Address: Cologne, Domstrasse 1, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV oropharyngeal carcinoma (Metastatic: LYM, HEP, OSS) Initial Detection: February 10, 2023, following persistent sore throat and difficulty swallowing Biopsy Date: February 20, 2023 Molecular Profile: PIK3CA p.E545K (AF 25%), MAPK1 p.E322K (AF 10%), FGFR3 p.D786N (AF 30%)   Therapy Overview Initial Treatment: Radiochemotherapy: Began March 1, 2023, with a regimen of Cisplatin (200 mg/m2) paired with local radiotherapy (70G). Partial response noted after the initial radiochemotherapy completed by June 15, 2023. Follow-up CT scan shows disease progression in September 2024. Subsequent Treatment: Began September 15, immunotherapy with Nivolumab (240mg/2weeks).   Current Status: Disease progression as of March 2024, with new metastatic lesions identified. ECOG 1.   Comorbidities Active Smoker 50 py Hypertension Stage 1 Hyperlipidemia Peripheral artery disease Fontaine 2a Diverticular disease CDD 3a Epilepsy, Focal Onset Impaired Awareness Seizures NYHA Class II Heart Failure COPD, GOLD Stage 3 (Severe)   Medication Levetiracetam 500mg 1-0-1 (for epilepsy) Metoprolol Succinate 50mg 1-0-0 (for heart failure) Tiotropium 18mcg 1-0-0 (for COPD) Salbutamol Inhaler 100mcg as needed (for COPD) Lisinopril 20mg 1-0-0 Simvastatin 20mg 0-0-0-1 XGEVA, Vitamin D   Chronological Medical Findings: February 10, 2023: Presented with persistent sore throat and difficulty swallowing. CT scan of the neck revealed a suspicious mass in the oropharynx. February 15, 2023: CT scan of the neck: Mass in the oropharynx measuring approximately 4.5 cm with evidence of local invasion into surrounding structures. Multiple enlarged cervical lymph nodes are noted, with the largest measuring approximately 2.2 cm in the right level II region. These nodes exhibit round morphology and loss of the fatty hilum, characteristics suggestive of metastatic involvement. Additional enlarged lymph nodes are present in the levels III and IV on the right side. February 18, 2023: Staging CT (chest and abdomen): No signs of distant metastasis. February 20, 2023: Biopsy of the oropharyngeal mass performed. Histology confirmed oropharyngeal carcinoma. Molecular panel sequencing revealed mutations: PIK3CA p.E545K (AF 25%), MAPK1 p.E322K (AF 10%), FGFR3 p.D786N (AF 30%). Tumor purity was 60%. 
March 1, 2024: Initiated radiochemotherapy with Cisplatin and 5-Fluorouracil alongside local radiotherapy (70G). June 15, 2023: Follow-up CT scan showed partial response with a decrease in the size of the primary tumor and cervical lymph nodes. September 10, 2023: Follow-up imaging CT Neck/Chest/Abdomen: Disease progression (PD). Several new hypodense lesions identified in the liver: The largest lesion located in segment VI, measuring approximately 3.1 cm in diameter. Smaller lesions scattered throughout the right  hepatic lobe. Multiple new pulmonary nodules are detected in both lungs. The largest nodule located in the right lower lobe, measuring approximately 1.5 cm in diameter. Additional smaller nodules are distributed throughout the bilateral lung fields. No evidence of pleural effusion or pneumothorax. The oropharyngeal mass remains present, with no significant change in size compared to the previous scan. The previously noted enlarged cervical lymph nodes remain prominent, with no significant interval change in size or number. September 15, 2023: Began immunotherapy with Nivolumab. December 18, 2023: Follow-up CT scan Neck/Chest/abdomen: Stable disease. December-February 2023: Continuation Nivolumab. February 20, 2024: Follow-up CT scan Neck/Chest/Abdomen: Progression of disease. Enlargement of multiple known hypodense lesions in the liver, with the largest now measuring 4.5 cm in segment VI (previously 3.1cm). New lytic lesions in the pelvis. Previously noted pulmonary nodules remain stable with no significant interval change. Stable primary tumor and cervical lymphadenopathy. March 3, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. March 10, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 4,500/mcL, platelet count 250,000/mcL, total bilirubin 0.8 mg/dL, AST/ALT within normal limits, creatinine 0.8 mg/dL, hemoglobin 14.0 g/dL, serum albumin 4.0 g/dL, lipase and amylase within normal limits. Patient in good clinical condition.   ===== Patient 4.1.3 =====    Patient Information Name: David Gärtner Born: March 22, 1965 Address: Cologne, Domstrasse 1, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV oropharyngeal carcinoma (Metastatic: LYM, HEP, OSS) Initial Detection: February 10, 2023, following persistent sore throat and difficulty swallowing Biopsy Date: February 20, 2023 Molecular Profile: PIK3CA p.E545K (AF 25%), MAPK1 p.E322K (AF 10%), FGFR3 p.D786N (AF 30%)   Therapy Overview Initial Treatment: Radiochemotherapy: Began March 1, 2023, with a regimen of Cisplatin (200 mg/m2) paired with local radiotherapy (70G). Partial response noted after the initial radiochemotherapy completed by June 15, 2023. Follow-up CT scan shows disease progression in September 2024. Subsequent Treatment: Began September 15, immunotherapy with Nivolumab (240mg/2weeks).   Current Status: Disease progression as of March 2024, with new metastatic lesions identified. ECOG 1.   
Comorbidities Active hepatitis C virus (HCV) infection (HCV antibody +), HCV RNA elevated (March 10, 2024) Active Smoker 50 py Alcoholic (6 bottles of beer / day) Regular marijuana use (up to 10 joints per day)   Hypertension Stage 1 Hyperlipidemia Peripheral artery disease Fontaine 2a Diverticular disease CDD 3a   Medication Sofosbuvir/Velpatasvir 400mg/100mg 1-0-0 Lisinopril 20mg 1-0-0 Simvastatin 20mg 0-0-0-1 XGEVA, Vitamin D   Chronological Medical Findings: February 10, 2023: Presented with persistent sore throat and difficulty swallowing. CT scan of the neck revealed a suspicious mass in the oropharynx. February 15, 2023: CT scan of the neck: Mass in the oropharynx measuring approximately 4.5 cm with evidence of local invasion into surrounding structures. Multiple enlarged cervical lymph nodes are noted, with the largest measuring approximately 2.2 cm in the right level II region. These nodes exhibit round morphology and loss of the fatty hilum, characteristics suggestive of metastatic involvement. Additional enlarged lymph nodes are present in the levels III and IV on the right side. February 18, 2023: Staging CT (chest and abdomen): No signs of distant metastasis. February 20, 2023: Biopsy of the oropharyngeal mass performed. Histology confirmed oropharyngeal carcinoma. Molecular panel sequencing revealed mutations: PIK3CA p.E545K (AF 25%), MAPK1 p.E322K (AF 10%), FGFR3 p.D786N (AF 30%). Tumor purity was 60%. March 1, 2024: Initiated radiochemotherapy with Cisplatin and 5-Fluorouracil alongside local radiotherapy (70G). June 15, 2023: Follow-up CT scan showed partial response with a decrease in the size of the primary tumor and cervical lymph nodes. September 10, 2023: Follow-up imaging CT Neck/Chest/Abdomen: Disease progression (PD). Several new hypodense lesions identified in the liver: The largest lesion located in segment VI, measuring approximately 3.1 cm in diameter. Smaller lesions scattered throughout the right  hepatic lobe. Multiple new pulmonary nodules are detected in both lungs. The largest nodule located in the right lower lobe, measuring approximately 1.5 cm in diameter. Additional smaller nodules are distributed throughout the bilateral lung fields. No evidence of pleural effusion or pneumothorax. The oropharyngeal mass remains present, with no significant change in size compared to the previous scan. The previously noted enlarged cervical lymph nodes remain prominent, with no significant interval change in size or number. September 15, 2023: Began immunotherapy with Nivolumab. December 18, 2023: Follow-up CT scan Neck/Chest/abdomen: Stable disease. December-February 2023: Continuation Nivolumab. February 20, 2024: Follow-up CT scan Neck/Chest/Abdomen: Progression of disease. Enlargement of multiple known hypodense lesions in the liver, with the largest now measuring 4.5 cm in segment VI (previously 3.1cm). New lytic lesions in the pelvis. Previously noted pulmonary nodules remain stable with no significant interval change. Stable primary tumor and cervical lymphadenopathy. March 3, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. March 10, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 4,500/mcL, platelet count 250,000/mcL, total bilirubin 0.8 mg/dL, AST/ALT within normal limits, creatinine 0.8 mg/dL, hemoglobin 14.0 g/dL, serum albumin 4.0 g/dL, lipase and amylase within normal limits. 
HCV RNA at 2,500,000 IU/mL. Patient in good clinical condition.

===== Patient 5.1 =====

Patient Information Name: Lisa Müller Born: April 12, 1960 Address: Hamburg, Hafenstrasse 3, Germany Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV lung adenocarcinoma (M+: HEP, LYM, BONE, ADRENAL) Initial Detection: November 21, 2023, following persistent cough and weight loss Biopsy Date: November 28, 2023 Molecular Profile: EGFR p.E746_A750del (AF 43%), TP53 p.A138_Q144del (AF 37%), MET Amplification FISH positive; Tumor Purity: 30%; Tumor Mutational Burden (TMB): 3.8 Mut/Mb Therapy Overview Initial Treatment: Chemotherapy: Began December 2023, with a regimen of Pembrolizumab, Carboplatin and Pemetrexed. Partial response after the initial chemotherapy cycle completed by May 1, 2024. Continued chemotherapy until May 2024 (progressive disease). Comorbidities Hyperlipidemia Osteoarthritis Psoriasis vulgaris H/o cholecystectomy 2007 Smoking history 45 py Medication Atorvastatin 40mg once daily Hydrocortisone cream Ibuprofen 400mg as needed XGEVA Novalgin 500mg 2-2-2-2 Performance Status: ECOG Performance Status 1 Chronological Medical Findings: November 8, 2023: Presented with persistent cough and weight loss (-6kg/3 mo) at her primary care physician. One week of antibiotic treatment for a suspected airway infection brought no clinical improvement; a chest x-ray revealed a tumorous lesion in the left lung. November 21, 2023: CT scan of the chest: Mass in the left upper lobe measuring approximately 5.0 cm with evidence of local invasion into surrounding structures, including the left main bronchus and adjacent vascular structures. Enlarged mediastinal lymph nodes, particularly in the subcarinal and right paratracheal regions, with the largest node measuring 1.8 cm. Additional moderate pleural effusion on the left side. Multiple liver lesions suggestive of metastasis, with the largest lesion in segment VIII measuring 3.5 cm and another lesion in segment IVa measuring 2.2 cm. Adrenal metastasis on the left side. Bone metastases in C3, T3, T4, and T7. November 28, 2023: CT-guided tumor biopsy: Histology confirmed lung adenocarcinoma. Molecular panel sequencing revealed mutations: EGFR p.E746_A750del (AF 43%), TP53 p.A138_Q144del (AF 37%), MET Amplification FISH positive. Tumor purity was 30%. Tumor Mutational Burden (TMB) was 3.8 Mut/Mb. December 5, 2023: Initiated chemotherapy with Carboplatin and Pemetrexed + immunotherapy with Pembrolizumab. March 10, 2024: Follow-up CT scan: Partial Response. Continued chemotherapy regimen. March - May 2024: Continued therapy with Carbo/Pem + Pembrolizumab. May 07, 2024: CT-scan Chest/Abdomen: Significant disease progression. Mass in the left upper lobe has increased to approximately 6.5 cm with further invasion into the left main bronchus and adjacent vascular structures. Enlarged mediastinal lymph nodes are now more prominent, especially in the subcarinal and right paratracheal regions, with the largest node now measuring 2.5 cm. Progressive left-sided pleural effusion. Abdomen and Pelvis: Multiple liver lesions, with the largest in segment VIII now measuring 4.5 cm and another in segment IVa measuring 3.0 cm. New metastatic lesions observed in segments V and VI. The adrenal metastasis on the left side has increased in size to 2.5 cm. Bone Metastases: Increased metastatic involvement with new lesions identified in the spine, including C2, T5, and L1, in addition to the previously noted C3, T3, T4, and T7. 
May 10, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. May 15, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 4,800/mcL, platelet count 220,000/mcL, total bilirubin 0.9 mg/dL, AST/ALT within normal limits, creatinine 0.9 mg/dL, hemoglobin 13.5 g/dL, serum albumin 4.2 g/dL, lipase and amylase within normal limits.

===== Patient 5.1.1 =====

Patient Information Name: Lisa Müller Born: April 12, 1960 Address: Hamburg, Hafenstrasse 3, Germany Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV lung adenocarcinoma (M+: HEP, LYM, BONE, ADRENAL) Initial Detection: November 21, 2023, following persistent cough and weight loss Biopsy Date: November 28, 2023 Molecular Profile: EGFR p.E746_A750del (AF 43%), TP53 p.A138_Q144del (AF 37%), MET Amplification FISH positive; Tumor Purity: 30%; Tumor Mutational Burden (TMB): 3.8 Mut/Mb Therapy Overview Initial Treatment: Chemotherapy: Began December 2023, with a regimen of Pembrolizumab, Carboplatin and Pemetrexed. Partial response after the initial chemotherapy cycle completed by May 1, 2024. Continued chemotherapy until May 2024 (progressive disease). Comorbidities Hyperlipidemia Osteoarthritis Psoriasis vulgaris H/o cholecystectomy 2007 Smoking history 45 py Medication Atorvastatin 40mg once daily Hydrocortisone cream Ibuprofen 400mg as needed XGEVA Novalgin 500mg 2-2-2-2 Prednisone 40mg daily Performance Status: ECOG Performance Status 2 Chronological Medical Findings: November 8, 2023: Presented with persistent cough and weight loss (-6kg/3 mo) at her primary care physician. One week of antibiotic treatment for a suspected airway infection brought no clinical improvement; a chest x-ray revealed a tumorous lesion in the left lung. November 21, 2023: CT scan of the chest: Mass in the left upper lobe measuring approximately 5.0 cm with evidence of local invasion into surrounding structures, including the left main bronchus and adjacent vascular structures. Enlarged mediastinal lymph nodes, particularly in the subcarinal and right paratracheal regions, with the largest node measuring 1.8 cm. Additional moderate pleural effusion on the left side. Multiple liver lesions suggestive of metastasis, with the largest lesion in segment VIII measuring 3.5 cm and another lesion in segment IVa measuring 2.2 cm. Adrenal metastasis on the left side. Bone metastases in C3, T3, T4, and T7. November 28, 2023: CT-guided tumor biopsy: Histology confirmed lung adenocarcinoma. Molecular panel sequencing revealed mutations: EGFR p.E746_A750del (AF 43%), TP53 p.A138_Q144del (AF 37%), MET Amplification FISH positive. Tumor purity was 30%. Tumor Mutational Burden (TMB) was 3.8 Mut/Mb. December 5, 2023: Initiated chemotherapy with Carboplatin and Pemetrexed + immunotherapy with Pembrolizumab. March 10, 2024: Follow-up CT scan: Partial Response. Continued chemotherapy regimen. March - May 2024: Continued therapy with Carbo/Pem + Pembrolizumab. May 07, 2024: CT-scan Chest/Abdomen: Significant disease progression. Mass in the left upper lobe has increased to approximately 6.5 cm with further invasion into the left main bronchus and adjacent vascular structures. Enlarged mediastinal lymph nodes are now more prominent, especially in the subcarinal and right paratracheal regions, with the largest node now measuring 2.5 cm. Progressive left-sided pleural effusion. 
Additionally, there are diffuse ground-glass opacities and reticular markings throughout both lungs, suspicious for immune mediated Pneumonitis. Abdomen and Pelvis: Multiple liver lesions, with the largest in segment VIII now measuring 4.5 cm and another in segment IVa measuring 3.0 cm. New metastatic lesions observed in segments V and VI. The adrenal metastasis on the left side has increased in size to 2.5 cm. Bone Metastases: Increased metastatic involvement with new lesions identified in the spine, including C2, T5, and L1, in addition to the previously noted C3, T3, T4, and T7. May 08, 2024: Started on Prednisone 40 mg daily because of Lung findings. Follow-up CT scan and pulmonary function tests scheduled. Patient advised on potential side effects and the need for regular monitoring. May 10, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. May 15, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 4,800/mcL, platelet count 220,000/mcL, total bilirubin 0.9 mg/dL, AST/ALT within normal limits, creatinine 0.9 mg/dL, hemoglobin 13.5 g/dL, serum albumin 4.2 g/dL, lipase and amylase within normal limits.   ===== Patient 6.1 =====      Patient Information Name: Ehrich, Wolfgang born: 18.08.1968 Address: Kurfürstendamm 1, Berlin, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IVA, M1a (contralateral metastases, malignant pleural effusions),  KRAS G12C mutant non-small cell lung cancer (NSCLC) Initial Detection: March 22, 2023, following symptoms of persistent cough and weight loss Biopsy Date: April 15, 2023, squamous cell lung cancer, PDL1 3% Molecular Profile: Molecular alterations: KRAS p.G12C (AF 18%), KRAS p.G12C (AF  18%), KEAP1 p.L276F (AF 45%), STK11 p.K83Tfs*13 (AF 38%).   Therapy Overview Combined Immuno-chemotherapy: Began May 5, 2023, with Cisplatin, Pemetrexed and Pembrolizumab, partial response noted after cycle completion by August 10, 2023, continuation of therapy until october 2023 (progressive disease) Current Status Health Condition: Stable with an ECOG performance status of 1   Comorbidities Former Smoker: 40 py Hypertension Stage I COPD GOLD 2 Type 2 Diabetes Mellitus Hyperlipidemia   Medication          Lisinopril 20mg 1-0-0          Metformin 1000mg 1-0-1          Atorvastatin 40mg 0-0-0-1          Tiotropium (Inhaler) on demand          Novalgin 500mg 1–1-1-1  Apixaban 5mg 1-0-1    Chronological Medical Findings: March 22, 2023: Experienced persistent cough and weight loss. Chest X-ray and CT scan revealed a mass in the left lung. Referred to oncologist. CT-Angiography: Tumor Size: Approximately 4.5 cm in diameter. At least 2 contralateral metastases. Bronchial Obstruction: Partial obstruction of the left main bronchus leading to atelectasis of the left upper lobe. Suspicion of mediastinal lymph node metastases. No evidence of pulmonary artery embolism. Thrombus in the left atrium at the transition to the auricle. Emphysematous and fibrotic changes in the lung parenchyma. Urgent suspicion of a tumor-atelectasis complex in the left upper lobe of the lung. Mucus present in the lower lobe bronchi on the left. Lymph Nodes: Enlarged, FDG-positive lymph nodes in the mediastinum, particularly in regions 4R and infracarinal. April 15, 2023: Lung biopsy via bronchoscopy: Endobronchial tumor manifestation in the distal left main bronchus extending to the upper lobe. 
Acute and chronic atrophic tracheobronchitis. Collapsed bronchial system in the affected area. Biopsy taken. Diagnosed with squamous non-small cell lung cancer (NSCLC), molecular diagnostics: KRAS G12C mutant. April 27, 2023: Ventilation: Moderate obstruction, no restriction. Increased airway resistance and slight hyperinflation. Tiffeneau index (FEV1/FVC) at 42.34%, z-score -3.32. FEV1: 0.93 L (42% predicted), z-score -2.89.Total lung capacity (TLC): 5.86 L (103% predicted), z-score 0.22. Forced vital capacity (FVC): 2.19 L, z-score -1.5. Residual volume (RV): 3.67 L, z-score 2.44. RV/TLC: 62.68%, z-score -1.18. May 5, 2023: Initiated on platinum-based immunochemotherapy regimen (Cisplatin, Pemetrexed, Pembrolizumab). August 10, 2023: Completed initial therapy cycle. Partial response as per CT chest / abdomen +PET CT:  Moderate reduction in tumor size to approximately 4.2 cm. Contralateral metastases still present, but no new lesions. Partial bronchial obstruction persists with ongoing atelectasis in the left upper lobe. Mediastinal lymph nodes remain enlarged and FDG-positive, although with reduced metabolic activity. Thrombus in the left atrium remains unchanged. Emphysematous and fibrotic changes are stable. Overall, mild response observed with no significant progression, as per RECIST stable disease. August-October: Continued chemotherapy with Cisplatin/Pemetrexed and Pembrolizumab. October 13, 2023: Follow-up CT (chest + abdomen): SD / Progressive Disease. New nodule in the right lung (1cm). Slight increase in the size of previously noted FDG-positive lymph nodes in the mediastinum. No additional metastatic lesions were detected. The patient has continued to tolerate the current treatment regimen well, with no significant adverse effects reported. October 17, 2023: Tumor board: SD. Continuation of therapy. October 25, 2023: Continuation of Cisplatin (dose reduced), Pemetrexed and Pembrolizumab. January 12, 2024: Follow-up CT scan abdomen and chest, FDG-PET-CT: Progressive Disease with three new metastases in the right lung and additional enlarged FDG-positive lymph nodes in the mediastinum. MRI scan of the brain conducted; no evidence of metastatic disease. Incidental findings included mild age-related cerebral atrophy and scattered white matter hyperintensities consistent with chronic microvascular ischemic changes. March 17, 2024: Tumorboard recommends considering clinical trial options due to limited response to standard therapies. April 20, 2024: Detailed assessment of health status. ECOG performance status 1. All routine labs, including liver and renal function tests, within normal limits.   ===== Patient 6.1.1 =====      Patient Information Name: Ehrich, Wolfgang born: 18.08.1968 Address: Kurfürstendamm 1, Berlin, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IVA, M1a (contralateral metastases, malignant pleural effusions),  KRAS G12C mutant non-small cell lung cancer (NSCLC) Initial Detection: March 22, 2023, following symptoms of persistent cough and weight loss Biopsy Date: April 15, 2023, squamous cell lung cancer, PDL1 3% Molecular Profile: Molecular alterations: KRAS p.G12C (AF 18%), KRAS p.G12C (AF  18%), KEAP1 p.L276F (AF 45%), STK11 p.K83Tfs*13 (AF 38%).   
Therapy Overview Combined Immuno-chemotherapy: Began May 5, 2023, with Cisplatin, Pemetrexed and Pembrolizumab, partial response noted after cycle completion by August 10, 2023, continuation of therapy until october 2023 (progressive disease) Current Status Health Condition: Stable with an ECOG performance status of 1   Comorbidities Chronic heart failure (NYHA Class III), reduced ejection fraction (HFrEF) of 35% Post Myocardial Infarction (2021), 2 coronary stents Former Smoker: 40 py Hypertension Stage I COPD GOLD 2 Type 2 Diabetes Mellitus Hyperlipidemia   Medication          Lisinopril 20mg 1-0-0          Metformin 1000mg 1-0-1 ASS 100mg 1-0-0 Carvedilol 12.5mg 1-0-1 Furosemide 40mg 1-0-1 Apixaban 5mg 1-0-1 Atorvastatin 40mg 0-0-0-1          Tiotropium (Inhaler) on demand Novalgin 500mg 1-1-1-1     Chronological Medical Findings: March 22, 2023: Experienced persistent cough and weight loss. Chest X-ray and CT scan revealed a mass in the left lung. Referred to oncologist. CT-Angiography: Tumor Size: Approximately 4.5 cm in diameter. At least 2 contralateral metastases. Bronchial Obstruction: Partial obstruction of the left main bronchus leading to atelectasis of the left upper lobe. Suspicion of mediastinal lymph node metastases. No evidence of pulmonary artery embolism. Thrombus in the left atrium at the transition to the auricle. Emphysematous and fibrotic changes in the lung parenchyma. Urgent suspicion of a tumor-atelectasis complex in the left upper lobe of the lung. Mucus present in the lower lobe bronchi on the left. Lymph Nodes: Enlarged, FDG-positive lymph nodes in the mediastinum, particularly in regions 4R and infracarinal. April 15, 2023: Lung biopsy via bronchoscopy: Endobronchial tumor manifestation in the distal left main bronchus extending to the upper lobe. Acute and chronic atrophic tracheobronchitis. Collapsed bronchial system in the affected area. Biopsy taken. Diagnosed with squamous non-small cell lung cancer (NSCLC), molecular diagnostics: KRAS G12C mutant. April 27, 2023: Ventilation: Moderate obstruction, no restriction. Increased airway resistance and slight hyperinflation. Tiffeneau index (FEV1/FVC) at 42.34%, z-score -3.32. FEV1: 0.93 L (42% predicted), z-score -2.89.Total lung capacity (TLC): 5.86 L (103% predicted), z-score 0.22. Forced vital capacity (FVC): 2.19 L, z-score -1.5. Residual volume (RV): 3.67 L, z-score 2.44. RV/TLC: 62.68%, z-score -1.18. May 5, 2023: Initiated on platinum-based immunochemotherapy regimen (Cisplatin, Pemetrexed, Pembrolizumab). August 10, 2023: Completed initial therapy cycle. Partial response as per CT chest / abdomen +PET CT:  Moderate reduction in tumor size to approximately 4.2 cm. Contralateral metastases still present, but no new lesions. Partial bronchial obstruction persists with ongoing atelectasis in the left upper lobe. Mediastinal lymph nodes remain enlarged and FDG-positive, although with reduced metabolic activity. Thrombus in the left atrium remains unchanged. Emphysematous and fibrotic changes are stable. Overall, mild response observed with no significant progression, as per RECIST stable disease. August-October: Continued chemotherapy with Cisplatin/Pemetrexed and Pembrolizumab. October 13, 2023: Follow-up CT (chest + abdomen): SD / Progressive Disease. New nodule in the right lung (1cm). Slight increase in the size of previously noted FDG-positive lymph nodes in the mediastinum. No additional metastatic lesions were detected. 
The patient has continued to tolerate the current treatment regimen well, with no significant adverse effects reported. October 17, 2023: Tumor board: SD. Continuation of therapy. October 25, 2023: Continuation of Cisplatin (dose reduced), Pemetrexed and Pembrolizumab. January 12, 2024: Follow-up CT scan abdomen and chest, FDG-PET-CT: Progressive Disease with three new metastases in the right lung and additional enlarged FDG-positive lymph nodes in the mediastinum. MRI scan of the brain revealed multiple metastases, specifically three lesions in the left hemisphere: one in the left frontal lobe, one in the left parietal lobe, and one in the left occipital lobe. Incidental findings included mild age-related cerebral atrophy and scattered white matter hyperintensities consistent with chronic microvascular ischemic changes. March 17, 2024: Tumorboard recommends considering clinical trial options due to limited response to standard therapies. April 20, 2024: Detailed assessment of health status. Patient currently in ECOG performance status 1. All routine labs, including liver and renal function tests, within normal limits.   ===== Patient 6.1.2 =====       Patient Information Name: Ehrich, Wolfgang born: 18.08.1968 Address: Kurfürstendamm 1, Berlin, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IVA, M1a (contralateral metastases, malignant pleural effusions),  KRAS G12C mutant non-small cell lung cancer (NSCLC) Initial Detection: March 22, 2023, following symptoms of persistent cough and weight loss Biopsy Date: April 15, 2023, squamous cell lung cancer, PDL1 3% Molecular Profile: Molecular alterations: KRAS p.G12C (AF 18%), KRAS p.G12C (AF  18%), KEAP1 p.L276F (AF 45%), STK11 p.K83Tfs*13 (AF 38%).   Therapy Overview Combined Immuno-chemotherapy: Began May 5, 2023, with Cisplatin, Pemetrexed and Pembrolizumab, partial response noted after cycle completion by August 10, 2023, continuation of therapy until october 2023 (progressive disease) Current Status Health Condition: Stable with an ECOG performance status of 1   Comorbidities Chronic heart failure (NYHA Class III), reduced ejection fraction (HFrEF) of 35% Post Myocardial Infarction (2021), 2 coronary stents Former Smoker: 40 py Hypertension Stage I COPD GOLD 2 Type 2 Diabetes Mellitus Hyperlipidemia   Medication          Lisinopril 20mg 1-0-0          Metformin 1000mg 1-0-1 ASS 100mg 1-0-0 Carvedilol 12.5mg 1-0-1 Furosemide 40mg 1-0-1 Apixaban 5mg 1-0-1 Atorvastatin 40mg 0-0-0-1          Tiotropium (Inhaler) on demand Novalgin 500mg 1-1-1-1     Chronological Medical Findings: March 22, 2023: Experienced persistent cough and weight loss. Chest X-ray and CT scan revealed a mass in the left lung. Referred to oncologist. CT-Angiography: Tumor Size: Approximately 4.5 cm in diameter. At least 2 contralateral metastases. Bronchial Obstruction: Partial obstruction of the left main bronchus leading to atelectasis of the left upper lobe. Suspicion of mediastinal lymph node metastases. No evidence of pulmonary artery embolism. Thrombus in the left atrium at the transition to the auricle. Emphysematous and fibrotic changes in the lung parenchyma. Urgent suspicion of a tumor-atelectasis complex in the left upper lobe of the lung. Mucus present in the lower lobe bronchi on the left. Lymph Nodes: Enlarged, FDG-positive lymph nodes in the mediastinum, particularly in regions 4R and infracarinal. 
April 15, 2023: Lung biopsy via bronchoscopy: Endobronchial tumor manifestation in the distal left main bronchus extending to the upper lobe. Acute and chronic atrophic tracheobronchitis. Collapsed bronchial system in the affected area. Biopsy taken. Diagnosed with squamous non-small cell lung cancer (NSCLC), molecular diagnostics: KRAS G12C mutant. April 27, 2023: Ventilation: Moderate obstruction, no restriction. Increased airway resistance and slight hyperinflation. Tiffeneau index (FEV1/FVC) at 42.34%, z-score -3.32. FEV1: 0.93 L (42% predicted), z-score -2.89.Total lung capacity (TLC): 5.86 L (103% predicted), z-score 0.22. Forced vital capacity (FVC): 2.19 L, z-score -1.5. Residual volume (RV): 3.67 L, z-score 2.44. RV/TLC: 62.68%, z-score -1.18. May 5, 2023: Initiated on platinum-based immunochemotherapy regimen (Cisplatin, Pemetrexed, Pembrolizumab). August 10, 2023: Completed initial therapy cycle. Partial response as per CT chest / abdomen +PET CT:  Moderate reduction in tumor size to approximately 4.2 cm. Contralateral metastases still present, but no new lesions. Partial bronchial obstruction persists with ongoing atelectasis in the left upper lobe. Mediastinal lymph nodes remain enlarged and FDG-positive, although with reduced metabolic activity. Thrombus in the left atrium remains unchanged. Emphysematous and fibrotic changes are stable. Overall, mild response observed with no significant progression, as per RECIST stable disease. August-October: Continued chemotherapy with Cisplatin/Pemetrexed and Pembrolizumab. October 13, 2023: Follow-up CT (chest + abdomen): SD / Progressive Disease. New nodule in the right lung (1cm). Slight increase in the size of previously noted FDG-positive lymph nodes in the mediastinum. No additional metastatic lesions were detected. The patient has continued to tolerate the current treatment regimen well, with no significant adverse effects reported. October 17, 2023: Tumor board: SD. Continuation of therapy. October 25, 2023: Continuation of Cisplatin (dose reduced), Pemetrexed and Pembrolizumab. January 12, 2024: Follow-up CT scan abdomen and chest, FDG-PET-CT: Progressive Disease with three new metastases in the right lung and additional enlarged FDG-positive lymph nodes in the mediastinum. MRI scan of the brain revealed multiple metastases, specifically three lesions in the left hemisphere: one in the left frontal lobe, one in the left parietal lobe, and one in the left occipital lobe. Incidental findings included mild age-related cerebral atrophy and scattered white matter hyperintensities consistent with chronic microvascular ischemic changes. March 17, 2024: Tumorboard recommends considering clinical trial options due to limited response to standard therapies. April 20, 2024: Detailed assessment of health status. Patient currently in ECOG performance status 2. Routine labs: GOT 103 U/L, GPT 112 U/L, Creatinine 2.3 mg/dL   ===== Patient 6.2 =====        Patient Information Name: Ehrich, Wolfgang born: 18.08.1968 Address: Kurfürstendamm 1, Berlin, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IVA, M1a (contralateral metastases, malignant pleural effusions),  KRAS G12C mutant non-small cell lung cancer (NSCLC) Initial Detection: March 22, 2023, following symptoms of persistent cough and weight loss Biopsy Date: April 15, 2023, squamous cell lung cancer, PDL1 3% Molecular Profile: Molecular alterations: KRAS p.G12C (AF 18%), KRAS p.G12C (AF  18%), KEAP1 p.L276F (AF 45%), STK11 p.K83Tfs*13 (AF 38%).   
Therapy Overview Combined Immuno-chemotherapy: Began May 5, 2023, with Cisplatin, Pemetrexed and Pembrolizumab, partial response noted after cycle completion by August 10, 2023, continuation of therapy until october 2023 (progressive disease) Current Status Health Condition: Stable with an ECOG performance status of 1   Comorbidities Former Smoker: 40 py Hypertension Stage I COPD GOLD 2 Type 2 Diabetes Mellitus Hyperlipidemia   Medication          Lisinopril 20mg 1-0-0          Metformin 1000mg 1-0-1          Atorvastatin 40mg 0-0-0-1          Tiotropium (Inhaler) on demand          Novalgin 500mg 1–1-1-1  Apixaban 5mg 1-0-1    Chronological Medical Findings: March 22, 2023: Experienced persistent cough and weight loss. Chest X-ray and CT scan revealed a mass in the left lung. Referred to oncologist. CT-Angiography: Tumor Size: Approximately 4.5 cm in diameter. At least 2 contralateral metastases. Bronchial Obstruction: Partial obstruction of the left main bronchus leading to atelectasis of the left upper lobe. Suspicion of mediastinal lymph node metastases. No evidence of pulmonary artery embolism. Thrombus in the left atrium at the transition to the auricle. Emphysematous and fibrotic changes in the lung parenchyma. Urgent suspicion of a tumor-atelectasis complex in the left upper lobe of the lung. Mucus present in the lower lobe bronchi on the left. Lymph Nodes: Enlarged, FDG-positive lymph nodes in the mediastinum, particularly in regions 4R and infracarinal. April 15, 2023: Lung biopsy via bronchoscopy: Endobronchial tumor manifestation in the distal left main bronchus extending to the upper lobe. Acute and chronic atrophic tracheobronchitis. Collapsed bronchial system in the affected area. Biopsy taken. Diagnosed with squamous non-small cell lung cancer (NSCLC), molecular diagnostics: KRAS G12C mutant. April 27, 2023: Ventilation: Moderate obstruction, no restriction. Increased airway resistance and slight hyperinflation. Tiffeneau index (FEV1/FVC) at 42.34%, z-score -3.32. FEV1: 0.93 L (42% predicted), z-score -2.89.Total lung capacity (TLC): 5.86 L (103% predicted), z-score 0.22. Forced vital capacity (FVC): 2.19 L, z-score -1.5. Residual volume (RV): 3.67 L, z-score 2.44. RV/TLC: 62.68%, z-score -1.18. May 5, 2023: Initiated on platinum-based immunochemotherapy regimen (Cisplatin, Pemetrexed, Pembrolizumab). August 10, 2023: Completed initial therapy cycle. Partial response as per CT chest / abdomen +PET CT:  Moderate reduction in tumor size to approximately 4.2 cm. Contralateral metastases still present, but no new lesions. Partial bronchial obstruction persists with ongoing atelectasis in the left upper lobe. Mediastinal lymph nodes remain enlarged and FDG-positive, although with reduced metabolic activity. Thrombus in the left atrium remains unchanged. Emphysematous and fibrotic changes are stable. Overall, mild response observed with no significant progression, as per RECIST stable disease. August-October: Continued chemotherapy with Cisplatin/Pemetrexed and Pembrolizumab. October 13, 2023: Follow-up CT (chest + abdomen): SD / Progressive Disease. New nodule in the right lung (1cm). Slight increase in the size of previously noted FDG-positive lymph nodes in the mediastinum. No additional metastatic lesions were detected. The patient has continued to tolerate the current treatment regimen well, with no significant adverse effects reported. October 17, 2023: Tumor board: SD. Continuation of therapy. 
October 25, 2023: Continuation of Cisplatin (dose reduced), Pemetrexed and Pembrolizumab. January 12, 2024: Follow-up CT scan abdomen and chest, FDG-PET-CT: Progressive Disease with three new metastases in the right lung and additional enlarged FDG-positive lymph nodes in the mediastinum. MRI scan of the brain conducted; no evidence of metastatic disease. Incidental findings included mild age-related cerebral atrophy and scattered white matter hyperintensities consistent with chronic microvascular ischemic changes. March 17, 2024: Tumorboard recommends considering clinical trial options due to limited response to standard therapies. April 20, 2024: Detailed assessment of health status. ECOG performance status 1. All routine labs, including liver and renal function tests, within normal limits.   ===== Patient 6.2.1 =====         Patient Information Name: Ehrich, Wolfgang born: 18.08.1968 Address: Kurfürstendamm 1, Berlin, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IVB, M1c (brain metastases), KRAS G12C mutant non-small cell lung cancer (NSCLC) Initial Detection: March 22, 2023, following symptoms of persistent cough and weight loss Biopsy Date: April 15, 2023, squamous cell lung cancer, PDL1 3% Molecular Profile: Molecular alterations: KRAS p.G12C (AF 18%), KRAS p.G12C (AF  18%), KEAP1 p.L276F (AF 45%), STK11 p.K83Tfs*13 (AF 38%).   Therapy Overview Combined Immuno-chemotherapy: Began May 5, 2023, with Cisplatin, Pemetrexed and Pembrolizumab, partial response noted after cycle completion by August 10, 2023, continuation of therapy until october 2023 (progressive disease) Current Status Health Condition: Stable with an ECOG performance status of 1   Comorbidities Former Smoker: 40 py Hypertension Stage I COPD GOLD 2 Type 2 Diabetes Mellitus Hyperlipidemia   Medication          Lisinopril 20mg 1-0-0          Metformin 1000mg 1-0-1          Atorvastatin 40mg 0-0-0-1          Tiotropium (Inhaler) on demand          Novalgin 500mg 1–1-1-1  Apixaban 5mg 1-0-1    Chronological Medical Findings: March 22, 2023: Experienced persistent cough and weight loss. Chest X-ray and CT scan revealed a mass in the left lung. Referred to oncologist. CT-Angiography: Tumor Size: Approximately 4.5 cm in diameter. At least 2 contralateral metastases. Bronchial Obstruction: Partial obstruction of the left main bronchus leading to atelectasis of the left upper lobe. Suspicion of mediastinal lymph node metastases. No evidence of pulmonary artery embolism. Thrombus in the left atrium at the transition to the auricle. Emphysematous and fibrotic changes in the lung parenchyma. Urgent suspicion of a tumor-atelectasis complex in the left upper lobe of the lung. Mucus present in the lower lobe bronchi on the left. Lymph Nodes: Enlarged, FDG-positive lymph nodes in the mediastinum, particularly in regions 4R and infracarinal. April 15, 2023: Lung biopsy via bronchoscopy: Endobronchial tumor manifestation in the distal left main bronchus extending to the upper lobe. Acute and chronic atrophic tracheobronchitis. Collapsed bronchial system in the affected area. Biopsy taken. Diagnosed with squamous non-small cell lung cancer (NSCLC), molecular diagnostics: KRAS G12C mutant. April 27, 2023: Ventilation: Moderate obstruction, no restriction. Increased airway resistance and slight hyperinflation. Tiffeneau index (FEV1/FVC) at 42.34%, z-score -3.32. FEV1: 0.93 L (42% predicted), z-score -2.89.Total lung capacity (TLC): 5.86 L (103% predicted), z-score 0.22. 
Forced vital capacity (FVC): 2.19 L, z-score -1.5. Residual volume (RV): 3.67 L, z-score 2.44. RV/TLC: 62.68%, z-score -1.18. May 5, 2023: Initiated on platinum-based immunochemotherapy regimen (Cisplatin, Pemetrexed, Pembrolizumab). August 10, 2023: Completed initial therapy cycle. Partial response as per CT chest / abdomen + PET CT: Moderate reduction in tumor size to approximately 4.2 cm. Contralateral metastases still present, but no new lesions. Partial bronchial obstruction persists with ongoing atelectasis in the left upper lobe. Mediastinal lymph nodes remain enlarged and FDG-positive, although with reduced metabolic activity. Thrombus in the left atrium remains unchanged. Emphysematous and fibrotic changes are stable. Overall, mild response observed with no significant progression, as per RECIST stable disease. August-October: Continued chemotherapy with Cisplatin/Pemetrexed and Pembrolizumab. October 13, 2023: Follow-up CT (chest + abdomen): SD / Progressive Disease. New nodule in the right lung (1 cm). Slight increase in the size of previously noted FDG-positive lymph nodes in the mediastinum. No additional metastatic lesions were detected. The patient has continued to tolerate the current treatment regimen well, with no significant adverse effects reported. October 17, 2023: Tumor board: SD. Continuation of therapy. October 25, 2023: Continuation of Cisplatin (dose reduced), Pemetrexed and Pembrolizumab. January 12, 2024: Follow-up CT scan abdomen and chest, FDG-PET-CT: Progressive Disease with three new metastases in the right lung and additional enlarged FDG-positive lymph nodes in the mediastinum. MRI scan of the brain revealed multiple metastases, specifically three lesions in the left hemisphere: one in the left frontal lobe, one in the left parietal lobe, and one in the left occipital lobe. January 16, 2024: Begin Sotorasib (Lumakras) 960 mg per day. March 17, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. April 20, 2024: Detailed assessment of health status. ECOG performance status 1. All routine labs, including liver and renal function tests, within normal limits.

===== Patient 6.3 =====

Patient Information Name: Ehrich, Wolfgang Born: 18.08.1968 Address: Italy Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IVA, M1a (contralateral metastases), KRAS G12C mutant non-small cell lung cancer (NSCLC) / adenocarcinoma of the lung Initial Detection: March 22, 2024, following symptoms of persistent cough and weight loss (-5kg) Biopsy Date: April 15, 2024, adenocarcinoma, PD-L1 3% Molecular Profile: Molecular alterations: KRAS p.G12C (AF 18%), KEAP1 p.L276F (AF 45%), STK11 p.K83Tfs*13 (AF 38%). Therapy Overview None. Current Status Health Condition: Stable with an ECOG performance status of 1 Allergies: None. Comorbidities Former Smoker: 40 py Hypertension Stage I COPD GOLD 2 Type 2 Diabetes Mellitus Hyperlipidemia Medication Lisinopril 20mg 1-0-0 Metformin 1000mg 1-0-1 Atorvastatin 40mg 0-0-0-1 Tiotropium (Inhaler) on demand Novalgin 500mg 1-1-1-1 Apixaban 5mg 1-0-1 Chronological Medical Findings: March 22, 2024: Experienced persistent cough and weight loss. Chest X-ray and CT scan revealed a mass in the left lung. Referred to oncologist. CT-Angiography: Tumor Size: Approximately 4.5 cm in diameter. At least 2 contralateral metastases. 
Bronchial Obstruction: Partial obstruction of the left main bronchus leading to atelectasis of the left upper lobe. Suspicion of mediastinal lymph node metastases. No evidence of pulmonary artery embolism. Thrombus in the left atrium at the transition to the auricle. Emphysematous and fibrotic changes in the lung parenchyma. Urgent suspicion of a tumor-atelectasis complex in the left upper lobe of the lung. Mucus present in the lower lobe bronchi on the left. Lymph Nodes: Enlarged, FDG-positive lymph nodes in the mediastinum, particularly in regions 4R and infracarinal. April 15, 2024: Lung biopsy via bronchoscopy: Endobronchial tumor manifestation in the distal left main bronchus extending to the upper lobe. Acute and chronic atrophic tracheobronchitis. Collapsed bronchial system in the affected area. Biopsy taken. Diagnosed with non-small cell lung cancer (NSCLC) (adenocarcinoma), molecular diagnostics: KRAS G12C mutant. April 27, 2024: Ventilation: Moderate obstruction, no restriction. Increased airway resistance and slight hyperinflation. Tiffeneau index (FEV1/FVC) at 42.34%, z-score -3.32. FEV1: 0.93 L (42% predicted), z-score -2.89.Total lung capacity (TLC): 5.86 L (103% predicted), z-score 0.22. Forced vital capacity (FVC): 2.19 L, z-score -1.5. Residual volume (RV): 3.67 L, z-score 2.44. RV/TLC: 62.68%, z-score -1.18. April 20, 2024: Detailed assessment of health status. ECOG performance status 1. All routine labs, including liver and renal function tests, within normal limits. Discussion in tumor board conference: palliative systemic treatment or clinical trial enrollment. ===== Patient 6.3.1 =====         Patient Information Name: Ehrich, Wolfgang born: 18.08.1968 Address: Italy   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IVA, M1a (contralateral metastases, pleural effusions),  KRAS G12C mutant non-small cell lung cancer (NSCLC) / adenocarcinoma of the  lung Initial Detection: March 22, 2024, following symptoms of persistent cough and weight loss (-5kg) Biopsy Date: April 15, 2024, adenocarcinoma, PDL1 3% Molecular Profile: Molecular alterations: KRAS p.G12C (AF 18%), KRAS p.G12C (AF  18%), KEAP1 p.L276F (AF 45%), STK11 p.K83Tfs*13 (AF 38%).   Therapy Overview None. Current Status Health Condition: Stable with an ECOG performance status of 1 Allergies: None. Comorbidities Former Smoker: 40 py Hypertension Stage I COPD GOLD 2 Type 2 Diabetes Mellitus Hyperlipidemia   Medication          Lisinopril 20mg 1-0-0          Metformin 1000mg 1-0-1          Atorvastatin 40mg 0-0-0-1          Tiotropium (Inhaler) on demand          Novalgin 500mg 1–1-1-1  Apixaban 5mg 1-0-1    Chronological Medical Findings: March 22, 2024: Experienced persistent cough and weight loss. Chest X-ray and CT scan revealed a mass in the left lung. Referred to oncologist. CT-Angiography: Tumor Size: Approximately 4.5 cm in diameter. At least 2 contralateral metastases. Bronchial Obstruction: Partial obstruction of the left main bronchus leading to atelectasis of the left upper lobe. Suspicion of mediastinal lymph node metastases. No evidence of pulmonary artery embolism. Thrombus in the left atrium at the transition to the auricle. Emphysematous and fibrotic changes in the lung parenchyma. Urgent suspicion of a tumor-atelectasis complex in the left upper lobe of the lung. Mucus present in the lower lobe bronchi on the left. Lymph Nodes: Enlarged, FDG-positive lymph nodes in the mediastinum, particularly in regions 4R and infracarinal. 
Bilateral pleural effusions, additionally mild pericardial effusion. April 15, 2024: Lung biopsy via bronchoscopy: Endobronchial tumor manifestation in the distal left main bronchus extending to the upper lobe. Acute and chronic atrophic tracheobronchitis. Collapsed bronchial system in the affected area. Biopsy taken. Diagnosed with non-small cell lung cancer (NSCLC) (adenocarcinoma), molecular diagnostics: KRAS G12C mutant. April 27, 2024: Ventilation: Moderate obstruction, no restriction. Increased airway resistance and slight hyperinflation. Tiffeneau index (FEV1/FVC) at 42.34%, z-score -3.32. FEV1: 0.93 L (42% predicted), z-score -2.89.Total lung capacity (TLC): 5.86 L (103% predicted), z-score 0.22. Forced vital capacity (FVC): 2.19 L, z-score -1.5. Residual volume (RV): 3.67 L, z-score 2.44. RV/TLC: 62.68%, z-score -1.18. April 20, 2024: Detailed assessment of health status. ECOG performance status 2. All routine labs, including liver and renal function tests, within normal limits. Discussion in tumor board conference: palliative systemic treatment or clinical trial enrollment.        ===== Patient 6.4 =====         Patient Information Name: Ehrich, Wolfgang born: 18.08.1968 Address: Kurfürstendamm 1, Berlin, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IVA, M1a (contralateral metastases, malignant pleural effusions),  KRAS G12C mutant non-small cell lung cancer (NSCLC) Initial Detection: March 22, 2023, following symptoms of persistent cough and weight loss Biopsy Date: April 15, 2023, squamous cell lung cancer, PDL1 3% Molecular Profile: Molecular alterations: KRAS p.G12C (AF 18%), KRAS p.G12C (AF  18%), KEAP1 p.L276F (AF 45%), STK11 p.K83Tfs*13 (AF 38%).   Therapy Overview Combined Immuno-chemotherapy: Began May 5, 2023, with Cisplatin, Pemetrexed and Pembrolizumab, partial response noted after cycle completion by August 10, 2023, continuation of therapy until october 2023 (progressive disease) Current Status Health Condition: Stable with an ECOG performance status of 1   Comorbidities Former Smoker: 40 py Hypertension Stage I COPD GOLD 2 Type 2 Diabetes Mellitus Hyperlipidemia   Medication          Lisinopril 20mg 1-0-0          Metformin 1000mg 1-0-1          Atorvastatin 40mg 0-0-0-1          Tiotropium (Inhaler) on demand          Novalgin 500mg 1–1-1-1  Apixaban 5mg 1-0-1    Chronological Medical Findings: March 22, 2023: Experienced persistent cough and weight loss. Chest X-ray and CT scan revealed a mass in the left lung. Referred to oncologist. CT-Angiography: Tumor Size: Approximately 4.5 cm in diameter. At least 2 contralateral metastases. Bronchial Obstruction: Partial obstruction of the left main bronchus leading to atelectasis of the left upper lobe. Suspicion of mediastinal lymph node metastases. No evidence of pulmonary artery embolism. Thrombus in the left atrium at the transition to the auricle. Emphysematous and fibrotic changes in the lung parenchyma. Urgent suspicion of a tumor-atelectasis complex in the left upper lobe of the lung. Mucus present in the lower lobe bronchi on the left. Lymph Nodes: Enlarged, FDG-positive lymph nodes in the mediastinum, particularly in regions 4R and infracarinal. April 15, 2023: Lung biopsy via bronchoscopy: Endobronchial tumor manifestation in the distal left main bronchus extending to the upper lobe. Acute and chronic atrophic tracheobronchitis. Collapsed bronchial system in the affected area. Biopsy taken. 
Diagnosed with squamous non-small cell lung cancer (NSCLC), molecular diagnostics: KRAS G12C mutant. April 27, 2023: Ventilation: Moderate obstruction, no restriction. Increased airway resistance and slight hyperinflation. Tiffeneau index (FEV1/FVC) at 42.34%, z-score -3.32. FEV1: 0.93 L (42% predicted), z-score -2.89.Total lung capacity (TLC): 5.86 L (103% predicted), z-score 0.22. Forced vital capacity (FVC): 2.19 L, z-score -1.5. Residual volume (RV): 3.67 L, z-score 2.44. RV/TLC: 62.68%, z-score -1.18. May 5, 2023: Initiated on platinum-based immunochemotherapy regimen (Cisplatin, Pemetrexed, Pembrolizumab). August 10, 2023: Completed initial therapy cycle. Partial response as per CT chest / abdomen +PET CT:  Moderate reduction in tumor size to approximately 4.2 cm. Contralateral metastases still present, but no new lesions. Partial bronchial obstruction persists with ongoing atelectasis in the left upper lobe. Mediastinal lymph nodes remain enlarged and FDG-positive, although with reduced metabolic activity. Thrombus in the left atrium remains unchanged. Emphysematous and fibrotic changes are stable. Overall, mild response observed with no significant progression, as per RECIST stable disease. August-October: Continued chemotherapy with Cisplatin/Pemetrexed and Pembrolizumab. October 13, 2023: Follow-up CT (chest + abdomen): SD / Progressive Disease. New nodule in the right lung (1cm). Slight increase in the size of previously noted FDG-positive lymph nodes in the mediastinum. No additional metastatic lesions were detected. The patient has continued to tolerate the current treatment regimen well, with no significant adverse effects reported. October 17, 2023: Tumor board: SD. Continuation of therapy. October 25, 2023: Continuation of Cisplatin (dose reduced), Pemetrexed and Pembrolizumab. January 12, 2024: Follow-up CT scan abdomen and chest, FDG-PET-CT: Progressive Disease with three new metastases in the right lung and additional enlarged FDG-positive lymph nodes in the mediastinum. Primary tumor 5.1 cm in diameter. MRI scan of the brain conducted; no evidence of metastatic disease. Incidental findings included mild age-related cerebral atrophy and scattered white matter hyperintensities consistent with chronic microvascular ischemic changes. March 17, 2024: Tumorboard recommends considering clinical trial options due to limited response to standard therapies. April 20, 2024: Detailed assessment of health status. ECOG performance status 1. All routine labs, including liver and renal function tests, within normal limits.          ===== Patient 7.1 =====         Patient Information Name: Jessica Smith Born: August 10, 1982 Address: Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV metastatic malignant melanoma (hepatic, M. pectoralis major)  Initial Detection: January 5, 2024, following a rapidly growing mole and enlarged lymph nodes Biopsy Date: January 15, 2024 Molecular Profile: Tumor Purity: 80%; Tumor Mutational Burden (TMB): 12.8 Mut/Mb; NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%) Therapy Overview Initial Treatment: None so far. Health Condition: ECOG 1   Comorbidities Former smoker 10 py Hypertension Stage 1 Mild Asthma H/o appendectomy 2014 Medication Amlodipine 10mg 1-0-0 Albuterol inhaler as needed   Chronological Medical Findings: January 5, 2024: Presented with a rapidly growing mole on the left arm and enlarged lymph nodes in the axillary region. 
January 10, 2024: CT scan of the chest and abdomen: Solid tumor in the left axillary region measuring approximately 3.5 cm with evidence of local invasion into surrounding soft tissues and possibly the pectoralis major muscle. Demonstrates irregular borders and heterogeneous enhancement. Multiple hypodense lesions noted throughout the liver, suggestive of metastatic disease. The largest lesion is located in segment VIII, measuring approximately 2.8 cm in diameter. Additional smaller lesions scattered in both hepatic lobes. January 15, 2024: Biopsy of the left axillary mass performed. Histology confirmed melanoma. Molecular panel sequencing: NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%). Tumor purity 80%. Tumor Mutational Burden (TMB) 12.8 Mut/Mb. January 16, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 5,300/mcL, platelet count 140,000/mcL, total bilirubin 1.1 mg/dL, AST/ALT within range, creatinine 1.1 mg/dL, hemoglobin 10.5 g/dL, serum albumin 3.4 g/dL, lipase and amylase within normal limits.          ===== Patient 7.1.1 =====         Patient Information Name: Jessica Smith Born: August 10, 1982 Address: Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV metastatic malignant melanoma (HEP, M. pectoralis major)  Initial Detection: January 5, 2024, following a rapidly growing mole and enlarged lymph nodes Biopsy Date: January 15, 2024 Molecular Profile: Tumor Purity: 80%; Tumor Mutational Burden (TMB): 12.8 Mut/Mb; NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%) Therapy Overview Initial Treatment: Immunotherapy: Began February 1, 2024, with Nivolumab and Ipilimumab, partial response noted after the initial treatment cycle completed by May 15, 2024. Continued Nivolumab maintenance until December 2024 (progressive disease). Current Status: Disease progression as of December 2024, with new metastatic lesions identified. Health Condition: ECOG 1   Comorbidities Former smoker 10 py Hypertension Stage 1 Mild Asthma H/o appendectomy 2014 Medication Amlodipine 10mg 1-0-0 Albuterol inhaler as needed   Chronological Medical Findings: January 5, 2023: Presented with a rapidly growing mole on the left arm and enlarged lymph nodes in the axillary region. January 10, 2023: CT scan of the chest and abdomen: Solid tumor in the left axillary region measuring approximately 3.5 cm with evidence of local invasion into surrounding soft tissues and possibly the pectoralis major muscle. Demonstrates irregular borders and heterogeneous enhancement. Multiple hypodense lesions noted throughout the liver, suggestive of metastatic disease. The largest lesion is located in segment VIII, measuring approximately 2.8 cm in diameter. Additional smaller lesions scattered in both hepatic lobes. January 15, 2023: Biopsy of the left axillary mass performed. Histology confirmed melanoma. Molecular panel sequencing: NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%). Tumor purity 80%. Tumor Mutational Burden (TMB) 12.8 Mut/Mb. February 1, 2023: Initiated combined immunotherapy with Nivolumab and Ipilimumab. May 5, 2023: CT scan showed partial response with a decrease in the size of the primary tumor and axillary lymph nodes. Partial response also regarding liver mets. Continued maintenance therapy with Nivolumab. September 15, 2023: Follow-up imaging: SD. 
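The response calls in this chronology (partial response, SD, disease progression) are the standard RECIST 1.1 categories, which are driven by the change in the sum of target-lesion diameters plus the appearance of new lesions. A minimal sketch in Python, assuming the axillary mass and the segment VIII liver lesion are taken as target lesions; the function and the example sums are illustrative and not part of the record:

def recist_1_1(baseline_sum_mm, nadir_sum_mm, current_sum_mm, new_lesions=False):
    # Classify response from sums of target-lesion longest diameters (mm), per RECIST 1.1.
    if new_lesions:
        return "PD"  # any new lesion means progressive disease
    if current_sum_mm == 0:
        return "CR"
    change_from_baseline = (current_sum_mm - baseline_sum_mm) / baseline_sum_mm
    change_from_nadir = (current_sum_mm - nadir_sum_mm) / nadir_sum_mm
    if change_from_nadir >= 0.20 and (current_sum_mm - nadir_sum_mm) >= 5:
        return "PD"  # >=20% and >=5 mm absolute increase over the nadir
    if change_from_baseline <= -0.30:
        return "PR"  # >=30% decrease from baseline
    return "SD"

# Illustrative baseline: axillary mass 35 mm + liver segment VIII lesion 28 mm.
baseline = 35 + 28
print(recist_1_1(baseline, nadir_sum_mm=baseline, current_sum_mm=baseline))                   # SD
print(recist_1_1(baseline, nadir_sum_mm=baseline, current_sum_mm=35 + 50, new_lesions=True))  # PD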
September - December 2023: Continuation of Nivolumab.  December 18, 2023: Follow-up CT scan: Disease progression with new metastatic lesions in the liver and bones. Multiple enlarged lymph nodes persistent in the left axillary region, consistent with known metastatic melanoma. No significant change in size or number compared to the previous scan. Liver demonstrates multiple hypodense lesions throughout both hepatic lobes. The large lesion located in segment VIII now measures approximately 5.0 cm in diameter. Previously noted lesions have increased in size, with the largest lesion in segment IVa now measuring 4.2 cm (previously 3.1 cm). New lytic lesions are identified in the thoracic spine, specifically at T5 and T8 vertebral bodies, suggestive of metastatic disease. December 21, 2023: Bone scan confirmed multiple metastatic lesions in the thoracic spine. January 4, 2024: Tumor board review recommended considering eligibility for clinical trials due to limited response to standard and investigational therapies. January 16, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 5,300/mcL, platelet count 140,000/mcL, total bilirubin 1.1 mg/dL, AST/ALT within range, creatinine 1.1 mg/dL, hemoglobin 10.5 g/dL, serum albumin 3.4 g/dL, lipase and amylase within normal limits.          ===== Patient 7.1.2 =====         Patient Information Name: Jessica Smith Born: August 10, 1982 Address: Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV metastatic malignant melanoma (hepatic, M. pectoralis major, Bone)  Initial Detection: January 5, 2024, following a rapidly growing mole and enlarged lymph nodes Biopsy Date: January 15, 2024 Molecular Profile: Tumor Purity: 80%; Tumor Mutational Burden (TMB): 12.8 Mut/Mb; NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%) Therapy Overview Initial Treatment: None so far. Health Condition: ECOG 1   Comorbidities Former smoker 10 py Hypertension Stage 1 Mild Asthma H/o appendectomy 2014 Systemic Lupus Erythematosus (SLE) diagnosed in 2022, presenting with joint pain, fatigue, and a malar rash Medication Hydroxychloroquine 200 mg, once daily Prednisone 5 mg, once daily Amlodipine 10mg 1-0-0 Albuterol inhaler as needed   Chronological Medical Findings: January 5, 2024: Presented with a rapidly growing mole on the left arm and enlarged lymph nodes in the axillary region. January 10, 2024: CT scan of the chest and abdomen: Solid tumor in the left axillary region measuring approximately 3.5 cm with evidence of local invasion into surrounding soft tissues and possibly the pectoralis major muscle. Demonstrates irregular borders and heterogeneous enhancement. Multiple hypodense lesions noted throughout the liver, suggestive of metastatic disease. The largest lesion is located in segment VIII, measuring approximately 2.8 cm in diameter. Additional smaller lesions scattered in both hepatic lobes. January 15, 2024: Biopsy of the left axillary mass performed. Histology confirmed melanoma. Molecular panel sequencing: NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%). Tumor purity 80%. Tumor Mutational Burden (TMB) 12.8 Mut/Mb. January 16, 2024: Detailed assessment of health status confirmed adequate organ function. 
Routine labs within normal limits: ANC 5,300/mcL, platelet count 140,000/mcL, total bilirubin 1.1 mg/dL, AST/ALT within range, creatinine 1.1 mg/dL, hemoglobin 10.5 g/dL, serum albumin 3.4 g/dL, lipase and amylase within normal limits.   ===== Patient 7.1.3 =====         Patient Information Name: Jessica Smith Born: August 10, 1982 Address: Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV metastatic malignant melanoma (hepatic, M. pectoralis major, brain)  Initial Detection: January 5, 2024, following a rapidly growing mole and enlarged lymph nodes Biopsy Date: January 15, 2024 Molecular Profile: Tumor Purity: 80%; Tumor Mutational Burden (TMB): 12.8 Mut/Mb; NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%) Therapy Overview Initial Treatment: None so far. Health Condition: ECOG 1   Comorbidities Former smoker 10 py Hypertension Stage 1 Mild Asthma H/o appendectomy 2014 Systemic Lupus Erythematosus (SLE) diagnosed in 2022, presenting with joint pain, fatigue, and a malar rash Medication Hydroxychloroquine 200 mg, once daily Prednisone 5 mg, once daily Amlodipine 10mg 1-0-0 Albuterol inhaler as needed   Chronological Medical Findings: January 5, 2024: Presented with a rapidly growing mole on the left arm and enlarged lymph nodes in the axillary region. January 10, 2024: CT scan of the chest and abdomen: Solid tumor in the left axillary region measuring approximately 3.5 cm with evidence of local invasion into surrounding soft tissues and possibly the pectoralis major muscle. Demonstrates irregular borders and heterogeneous enhancement. Multiple hypodense lesions noted throughout the liver, suggestive of metastatic disease. The largest lesion is located in segment VIII, measuring approximately 2.8 cm in diameter. Additional smaller lesions scattered in both hepatic lobes. cMRI: Imaging reveals five small brain metastases: A 1.2 cm lesion in the right frontal lobe. A 0.8 cm lesion in the left parietal lobe. A 0.6 cm lesion in the right occipital lobe. A 0.7 cm lesion in the left cerebellum. A 0.9 cm lesion in the right temporal lobe. All lesions with heterogeneous enhancement and associated with surrounding vasogenic edema. No evidence of midline shift or significant mass effect at this time.  January 15, 2024: Biopsy of the left axillary mass performed. Histology confirmed melanoma. Molecular panel sequencing: NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%). Tumor purity 80%. Tumor Mutational Burden (TMB) 12.8 Mut/Mb. January 16, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 5,300/mcL, platelet count 140,000/mcL, total bilirubin 1.1 mg/dL, AST/ALT within range, creatinine 1.1 mg/dL, hemoglobin 10.5 g/dL, serum albumin 3.4 g/dL, lipase and amylase within normal limits.           ===== Patient 7.2 =====          Patient Information Name: Jessica Smith Born: August 10, 1982 Address: Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV metastatic malignant melanoma (hepatic, M. pectoralis major) Initial Detection: January 5, 2024, following a rapidly growing mole and enlarged lymph nodes Biopsy Date: January 15, 2024 Molecular Profile: Tumor Purity: 80%; Tumor Mutational Burden (TMB): 12.8 Mut/Mb; NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%) Therapy Overview Initial Treatment: None so far.   
Health Condition: ECOG 1   Comorbidities Former smoker 10 py Hypertension Stage 1 Mild Asthma H/o appendectomy 2014 Medication Amlodipine 10mg 1-0-0 Albuterol inhaler as needed   Chronological Medical Findings: January 5, 2024: Presented with a rapidly growing mole on the left arm and enlarged lymph nodes in the axillary region. January 10, 2024: CT scan of the chest and abdomen: Solid tumor in the left axillary region measuring approximately 3.5 cm with evidence of local invasion into surrounding soft tissues and possibly the pectoralis major muscle. Demonstrates irregular borders and heterogeneous enhancement. Multiple hypodense lesions noted throughout the liver, suggestive of metastatic disease. The largest lesion is located in segment VIII, measuring approximately 2.8 cm in diameter. Additional smaller lesions scattered in both hepatic lobes. January 15, 2024: Biopsy of the left axillary mass performed. Histology confirmed melanoma. Molecular panel sequencing: NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%). Tumor purity 80%. Tumor Mutational Burden (TMB) 12.8 Mut/Mb. January 16, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 5,300/mcL, platelet count 140,000/mcL, total bilirubin 1.1 mg/dL, AST/ALT within range, creatinine 1.1 mg/dL, hemoglobin 10.5 g/dL, serum albumin 3.4 g/dL, lipase and amylase within normal limits.     ===== Patient 7.2.1 =====          Patient Information Name: Jessica Smith Born: August 10, 1982 Address: Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV metastatic malignant melanoma (hepatic, M. pectoralis major, brain)  Initial Detection: January 5, 2024, following a rapidly growing mole and enlarged lymph nodes Biopsy Date: January 15, 2024 Molecular Profile: Tumor Purity: 80%; Tumor Mutational Burden (TMB): 12.8 Mut/Mb; NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%) Therapy Overview Initial Treatment: None so far. Health Condition: ECOG 1   Comorbidities Former smoker 10 py Hypertension Stage 1 Mild Asthma H/o appendectomy 2014 Systemic Lupus Erythematosus (SLE) diagnosed in 2022, presenting with joint pain, fatigue, and a malar rash Medication Hydroxychloroquine 200 mg, once daily Prednisone 5 mg, once daily Amlodipine 10mg 1-0-0 Albuterol inhaler as needed   Chronological Medical Findings: January 5, 2024: Presented with a rapidly growing mole on the left arm and enlarged lymph nodes in the axillary region. January 10, 2024: CT scan of the chest and abdomen: Solid tumor in the left axillary region measuring approximately 3.5 cm with evidence of local invasion into surrounding soft tissues and possibly the pectoralis major muscle. Demonstrates irregular borders and heterogeneous enhancement. Multiple hypodense lesions noted throughout the liver, suggestive of metastatic disease. The largest lesion is located in segment VIII, measuring approximately 2.8 cm in diameter. Additional smaller lesions scattered in both hepatic lobes. cMRI: Imaging reveals five small brain metastases: A 1.2 cm lesion in the right frontal lobe. A 0.8 cm lesion in the left parietal lobe. A 0.6 cm lesion in the right occipital lobe. A 0.7 cm lesion in the left cerebellum. A 0.9 cm lesion in the right temporal lobe. All lesions with heterogeneous enhancement and associated with surrounding vasogenic edema. No evidence of midline shift or significant mass effect at this time.  
January 15, 2024: Biopsy of the left axillary mass performed. Histology confirmed melanoma. Molecular panel sequencing: NF1 p.I1605fs (AF 39%), TP53 c.672+1GA (AF 50%), RB1 p.Q846* (AF 20%), TERT p.R859Q (AF 41%). Tumor purity 80%. Tumor Mutational Burden (TMB) 12.8 Mut/Mb. January 16, 2024: Detailed assessment of health status confirmed adequate organ function. Routine labs within normal limits: ANC 5,300/mcL, platelet count 140,000/mcL, total bilirubin 1.1 mg/dL, AST/ALT within range, creatinine 1.1 mg/dL, hemoglobin 10.5 g/dL, serum albumin 3.4 g/dL, lipase and amylase within normal limits.     ===== Patient 8.1 =====           Name: Müller, David Born: 22.03.1970 Address: Hauptstraße 1, Heidelberg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV FGFR2 mutant intrahepatic cholangiocarcinoma, peritoneal carcinomatosis Initial Detection: March 5 2022, following symptoms of jaundice and abdominal pain Molecular Profile:  Panel (Tumor purity 80%), TMB 1.2 Mut/Mb. Molecular alterations: FGFR2::BICC1 Fusion, TP53 p.E258* (AF 52%).   Therapy Overview Initial Treatment: Right hemihepatectomy with additional lymphadenectomy June 10, 2023. Histopathology: iCCA, T1b, N1, R0 resection. Adjuvant chemotherapy: Began June 20, 2023, with Capecitabine. Follow-up CT September 2023 shows multiple new liver lesions and peritoneal metastasis. Subsequent Treatment: September - December 2023: 6 cycles Gemzar/Cisplatin + Durvalumab. January - March 2024: Second line chemotherapy with FOLFOX.   Current Status: ECOG 1   Comorbidities Hypothyroidism   Medication          Levothyroxine 75µg 1-0-0   Chronological Medical Findings: February 1, 2023: Complaint of jaundice and abdominal pain. Ultrasound revealed a mass in the liver. Weight loss of -15kg/5 months.  March 5, 2023: MRI of the abdomen: Significant mass measuring approximately 5.5 cm in the right hepatic lobe, consistent with intrahepatic cholangiocarcinoma. Lesion with irregular borders and heterogeneous enhancement patterns. Evidence of bile duct dilation proximal to the mass, suggestive of obstructive cholestasis. Additionally, several enlarged lymph nodes noted in the perihepatic region, displaying increased uptake on FDG-PET, suggestive of potential metastasis. No vascular invasion observed, but the proximity of the mass to the right portal vein concerning for possible future involvement. No signs of distant metastasis present in the visualized organs. June 10, 2023: Right hemihepatectomy and lymphadenectomy. Histopathology reveals intrahepatic cholangiocarcinoma. pT1b, pN2, pM0, R0. Molecular pathology report: Panel (Tumor purity 80%), TMB 1.2 Mut/Mb. Molecular alterations: FGFR2::BICC1 Fusion, TP53 p.E258* (AF 52%). June 20, 2023: DPD status normal. Initiated adjuvant chemotherapy with Capecitabine.  September 15, 2023: Follow-up CT (chest + abdomen): Multiple new lesions in the remaining liver tissue, highly suggestive of tumor recurrence. New small nodules in the peritoneum, up to 1 cm, likely peritoneal metastasis. FDG-PET shows elevated activity in hepatic and lymph node lesions. Mild right-sided pleural effusion noted, no significant respiratory compromise. September 17, 2023: Initiated Therapy with Gemzar/Cisplatin + Durvalumab. September - December 2023: 6 cycles Gemzar/Cisplatin + Durvalumab. January 5, 2024: Follow-up MRI scan abdomen/liver: progressive disease (PD) with growth of all liver lesions and increased involvement of adjacent hepatic structures. 
The peritoneal nodules showed slight growth. Moderate ascites. No evidence of direct vascular invasion, but the tumor's close relationship with the hepatic artery and portal vein concerns potential future involvement. The liver parenchyma shows signs of chronic liver disease, possibly secondary to ongoing cholestasis and tumor-related liver dysfunction. MRI of the brain was conducted concurrently, revealing no evidence of metastatic disease. Incidental findings included mild age-related cerebral atrophy and scattered white matter hyperintensities, consistent with chronic microvascular ischemic changes. January - March 2024: Second line chemotherapy with FOLFOX. March 16, 2024: Progressive disease (PD) with significant growth of all liver lesions. The largest lesion in segment IVb has increased to 7.5 cm in diameter, with invasion into the adjacent hepatic structures. The peritoneal nodules have shown further growth, with the largest nodule now measuring 2.5 cm. Ascites: Moderate to severe ascites is present, with a noticeable increase compared to the previous scan. Vascular Involvement: No direct vascular invasion detected yet, but the lesions now encase the hepatic artery and portal vein, raising significant concerns for potential imminent involvement. Liver Parenchyma: The liver parenchyma shows worsening signs of chronic liver disease, likely secondary to ongoing cholestasis and tumor-related liver dysfunction. Evidence of hepatic decompensation is apparent, with diffuse nodularity and fibrosis indicative of cirrhosis. Additional Findings: Splenomegaly, consistent with portal hypertension. January 20, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies.  January 21, 2024: Patient in good shape, routine lab results within normal ranges. Willing to participate in potential trials.   ===== Patient 8.1.1 =====             Name: Müller, David Born: 22.03.1970 Address: Hauptstraße 1, Heidelberg, Germany Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV FGFR2 mutant intrahepatic cholangiocarcinoma, peritoneal carcinomatosis Initial Detection: March 5 2022, following symptoms of jaundice and abdominal pain Molecular Profile:  Panel (Tumor purity 80%), TMB 1.2 Mut/Mb. Molecular alterations: FGFR2::BICC1 Fusion, TP53 p.E258* (AF 52%).   Therapy Overview Initial Treatment: Right hemihepatectomy with additional lymphadenectomy June 10, 2023. Histopathology: iCCA, T1b, N1, R0 resection. Adjuvant chemotherapy: Began June 20, 2023, with Capecitabine. Follow-up CT September 2023 shows multiple new liver lesions and peritoneal metastasis. Subsequent Treatment: September - December 2023: 6 cycles Gemzar/Cisplatin + Durvalumab. January - March 2024: Second line chemotherapy with FOLFOX.   Current Status: ECOG 1   Comorbidities Hypothyroidism Coronary Artery Disease (CAD) Status post Myocardial Infarction (MI) on January 10, 2024 ECG January 10, 2024: ST-segment elevation in leads V2-V4, consistent with anterior wall myocardial infarction. Reciprocal ST-segment depression in leads II, III, and aVF. Q waves present in leads V1-V3, indicating myocardial necrosis. T-wave inversions in leads V2-V4. QTc time of 485 ms. Heart rate: 95 bpm. PR interval: 160 ms. QRS duration: 100 ms.   
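The QTc of 485 ms reported in the ECG above is a rate-corrected interval; the most widely used correction is Bazett's formula, QTc = QT / sqrt(RR) with RR in seconds. A minimal sketch, assuming a hypothetical measured QT of about 385 ms, since the uncorrected QT is not given in the record:

import math

def qtc_bazett(qt_ms, heart_rate_bpm):
    # Bazett rate correction: QTc = QT / sqrt(RR), with RR in seconds.
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_s)

# Hypothetical measured QT of 385 ms at the reported heart rate of 95 bpm
print(round(qtc_bazett(385, 95)))  # ~484 ms, consistent with the reported QTc of 485 ms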
Medication Levothyroxine 75µg 1-0-0 Ass 100, once daily Clopidogrel 75 mg, once daily Atorvastatin 80 mg, once daily Metoprolol 50 mg, twice daily Lisinopril 10 mg, once daily   Chronological Medical Findings: February 1, 2023: Complaint of jaundice and abdominal pain. Ultrasound revealed a mass in the liver. Weight loss of -15kg/5 months.  March 5, 2023: MRI of the abdomen: Significant mass measuring approximately 5.5 cm in the right hepatic lobe, consistent with intrahepatic cholangiocarcinoma. Lesion with irregular borders and heterogeneous enhancement patterns. Evidence of bile duct dilation proximal to the mass, suggestive of obstructive cholestasis. Additionally, several enlarged lymph nodes noted in the perihepatic region, displaying increased uptake on FDG-PET, suggestive of potential metastasis. No vascular invasion observed, but the proximity of the mass to the right portal vein concerning for possible future involvement. No signs of distant metastasis present in the visualized organs. June 10, 2023: Right hemihepatectomy and lymphadenectomy. Histopathology reveals intrahepatic cholangiocarcinoma. pT1b, pN2, pM0, R0. Molecular pathology report: Panel (Tumor purity 80%), TMB 1.2 Mut/Mb. Molecular alterations: FGFR2::BICC1 Fusion, TP53 p.E258* (AF 52%). June 20, 2023: DPD status normal. Initiated adjuvant chemotherapy with Capecitabine.  September 15, 2023: Follow-up CT (chest + abdomen): Multiple new lesions in the remaining liver tissue, highly suggestive of tumor recurrence. New small nodules in the peritoneum, up to 1 cm, likely peritoneal metastasis. FDG-PET shows elevated activity in hepatic and lymph node lesions. Mild right-sided pleural effusion noted, no significant respiratory compromise. September 17, 2023: Initiated Therapy with Gemzar/Cisplatin + Durvalumab. September - December 2023: 6 cycles Gemzar/Cisplatin + Durvalumab. January 5, 2024: Follow-up MRI scan abdomen/liver: progressive disease (PD) with growth of all liver lesions and increased involvement of adjacent hepatic structures. The peritoneal nodules showed slight growth. Moderate ascites. No evidence of direct vascular invasion, but the tumor's close relationship with the hepatic artery and portal vein concerns potential future involvement. The liver parenchyma shows signs of chronic liver disease, possibly secondary to ongoing cholestasis and tumor-related liver dysfunction. MRI of the brain was conducted concurrently, revealing no evidence of metastatic disease. Incidental findings included mild age-related cerebral atrophy and scattered white matter hyperintensities, consistent with chronic microvascular ischemic changes. January - March 2024: Second line chemotherapy with FOLFOX. March 16, 2024: Progressive disease (PD) with significant growth of all liver lesions. The largest lesion in segment IVb has increased to 7.5 cm in diameter, with invasion into the adjacent hepatic structures. The peritoneal nodules have shown further growth, with the largest nodule now measuring 2.5 cm. Ascites: Moderate to severe ascites is present, with a noticeable increase compared to the previous scan. Vascular Involvement: No direct vascular invasion detected yet, but the lesions now encase the hepatic artery and portal vein, raising significant concerns for potential imminent involvement. Liver Parenchyma: The liver parenchyma shows worsening signs of chronic liver disease, likely secondary to ongoing cholestasis and tumor-related liver dysfunction. 
Evidence of hepatic decompensation is apparent, with diffuse nodularity and fibrosis indicative of cirrhosis. Additional Findings: Splenomegaly, consistent with portal hypertension. January 10, 2024: Patient presented with severe chest pain. Diagnosed with an acute myocardial infarction. Underwent emergency coronary angiography, revealing 90% Occlusion in the LAD and 70% occlusion in the RCA. Two DES stents placed. Started on aspirin, clopidogrel, atorvastatin, metoprolol, and lisinopril. January 20, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies.  January 21, 2024: Patient in good shape, routine lab results within normal ranges. Willing to participate in potential trials.   ===== Patient 8.1.2 =====           Name: Müller, David Born: 22.03.1970 Address: Hauptstraße 1, Heidelberg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV FGFR2 mutant intrahepatic cholangiocarcinoma, peritoneal carcinomatosis Initial Detection: March 5 2022, following symptoms of jaundice and abdominal pain Molecular Profile:  Panel (Tumor purity 80%), TMB 1.2 Mut/Mb. Molecular alterations: FGFR2::BICC1 Fusion, TP53 p.E258* (AF 52%).   Therapy Overview Initial Treatment: Right hemihepatectomy with additional lymphadenectomy June 10, 2023. Histopathology: iCCA, T1b, N1, R0 resection. Adjuvant chemotherapy: Began June 20, 2023, with Capecitabine. Follow-up CT September 2023 shows multiple new liver lesions and peritoneal metastasis. Subsequent Treatment: September - December 2023: 6 cycles Gemzar/Cisplatin + Durvalumab. January - March 2024: Second line chemotherapy with FOLFOX.   Current Status: ECOG 1   Comorbidities Hypothyroidism Hepatitis C   Medication          Levothyroxine 75µg 1-0-0 Sofosbuvir 400 mg, once daily Velpatasvir 100 mg, once daily   Chronological Medical Findings: February 1, 2023: Complaint of jaundice and abdominal pain. Ultrasound revealed a mass in the liver. Weight loss of -15kg/5 months.  March 5, 2023: MRI of the abdomen: Significant mass measuring approximately 5.5 cm in the right hepatic lobe, consistent with intrahepatic cholangiocarcinoma. Lesion with irregular borders and heterogeneous enhancement patterns. Evidence of bile duct dilation proximal to the mass, suggestive of obstructive cholestasis. Additionally, several enlarged lymph nodes noted in the perihepatic region, displaying increased uptake on FDG-PET, suggestive of potential metastasis. No vascular invasion observed, but the proximity of the mass to the right portal vein concerning for possible future involvement. No signs of distant metastasis present in the visualized organs. June 10, 2023: Right hemihepatectomy and lymphadenectomy. Histopathology reveals intrahepatic cholangiocarcinoma. pT1b, pN2, pM0, R0. Molecular pathology report: Panel (Tumor purity 80%), TMB 1.2 Mut/Mb. Molecular alterations: FGFR2::BICC1 Fusion, TP53 p.E258* (AF 52%). June 20, 2023: DPD status normal. Initiated adjuvant chemotherapy with Capecitabine.  September 15, 2023: Follow-up CT (chest + abdomen): Multiple new lesions in the remaining liver tissue, highly suggestive of tumor recurrence. New small nodules in the peritoneum, up to 1 cm, likely peritoneal metastasis. FDG-PET shows elevated activity in hepatic and lymph node lesions. Mild right-sided pleural effusion noted, no significant respiratory compromise. September 17, 2023: Initiated Therapy with Gemzar/Cisplatin + Durvalumab. 
September - December 2023: 6 cycles Gemzar/Cisplatin + Durvalumab. January 5, 2024: Follow-up MRI scan abdomen/liver: progressive disease (PD) with growth of all liver lesions and increased involvement of adjacent hepatic structures. The peritoneal nodules showed slight growth. Moderate ascites. No evidence of direct vascular invasion, but the tumor's close relationship with the hepatic artery and portal vein concerns potential future involvement. The liver parenchyma shows signs of chronic liver disease, possibly secondary to ongoing cholestasis and tumor-related liver dysfunction. MRI of the brain was conducted concurrently, revealing no evidence of metastatic disease. Incidental findings included mild age-related cerebral atrophy and scattered white matter hyperintensities, consistent with chronic microvascular ischemic changes. January - March 2024: Second line chemotherapy with FOLFOX. March 16, 2024: Progressive disease (PD) with significant growth of all liver lesions. The largest lesion in segment IVb has increased to 7.5 cm in diameter, with invasion into the adjacent hepatic structures. The peritoneal nodules have shown further growth, with the largest nodule now measuring 2.5 cm. Ascites: Moderate to severe ascites is present, with a noticeable increase compared to the previous scan. Vascular Involvement: No direct vascular invasion detected yet, but the lesions now encase the hepatic artery and portal vein, raising significant concerns for potential imminent involvement. Liver Parenchyma: The liver parenchyma shows worsening signs of chronic liver disease, likely secondary to ongoing cholestasis and tumor-related liver dysfunction. Evidence of hepatic decompensation is apparent, with diffuse nodularity and fibrosis indicative of cirrhosis. Additional Findings: Splenomegaly, consistent with portal hypertension. January 20, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies.  January 21, 2024: Patient progressively in bad shape, stays in bed almost all day, routine lab results within normal ranges. Willing to participate in potential trials.   ===== Patient 9.1 =====            Patient Information  Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%), p.H1047R (AF 30%). HER2 FISH positive.   Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel plus Trastuzumab. Partial response noted after three months. Subsequent Treatment: Second-line chemotherapy with carboplatin and paclitaxel initiated on December 1, 2023, due to disease progression. Current Status: Progressive disease with lymphatic, pulmonary and hepatic metastasis.   ECOG 1   Comorbidities Former smoker 30 py Hypertension Stage 1 Type 2 Diabetes Mellitus Hyperlipidemia Benign Prostatic Hyperplasia (BPH)   Medication Amlodipine 10 mg 1-0-0 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l. 100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. 
A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab.  October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesion within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Routine Labs: Comprehensive blood work indicated normal liver and renal function. The patient maintained an ECOG performance status of 1.              ===== Patient 9.1.1 =====             Patient Information  Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%), p.H1047R (AF 30%). HER2 FISH positive.   Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel plus Trastuzumab. Partial response noted after three months. Subsequent Treatment: Second-line chemotherapy with carboplatin and paclitaxel initiated on December 1, 2023, due to disease progression. Current Status: Progressive disease with lymphatic, pulmonary and hepatic metastasis.   ECOG 2   Comorbidities Former smoker 30 py Hypertension Stage 1 Type 2 Diabetes Mellitus HFrEF NYHA II Hyperlipidemia Benign Prostatic Hyperplasia (BPH)   Medication Candesartan 12 mg 1-0-0 Metoprolol 47,5 mg 1-0-0 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l. 100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. 
A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab.  October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesion within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Routine Labs: Comprehensive blood work indicated normal liver and moderately reduced renal function (eGFR 65 ml/min/1.73m2). The patient maintained an ECOG performance status of 2.              ===== Patient 9.1.2 =====             Patient Information  Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%), p.H1047R (AF 30%). HER2 FISH positive.   Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel plus Trastuzumab. Partial response noted after three months. Subsequent Treatment: Second-line chemotherapy with carboplatin and paclitaxel initiated on December 1, 2023, due to disease progression. Current Status: Progressive disease with lymphatic, pulmonary and hepatic metastasis.   ECOG 3   Comorbidities Former smoker 30 py Hypertension Stage 1 Type 2 Diabetes Mellitus Hyperlipidemia Benign Prostatic Hyperplasia (BPH)   Medication Candesartan 12 mg 1-0-0 Metoprolol 47,5 mg 1-0-0 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l. 100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. 
A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III. June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab. October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesions within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Status assessment before possible study enrollment. Patient shows reduced overall health, ECOG performance status now 3. Lab results show liver and kidney injury: Total bilirubin 4.5 mg/dl, AST 230 U/L, ALT 180 U/L, AP 320 U/L, GGT 30 U/L, Albumin 2.3 g/dl. Creatinine 3.2 mg/dl, eGFR 20.6 ml/min/1.73m2.   ===== Patient 9.1.3 =====   Patient Information Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%), p.H1047R (AF 30%). HER2 FISH positive.   Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel plus Trastuzumab. Partial response noted after three months. Subsequent Treatment: Second-line chemotherapy with carboplatin and paclitaxel initiated on December 1, 2023, due to disease progression. Current Status: Progressive disease with lymphatic, pulmonary and hepatic metastasis.   ECOG 3   Comorbidities Former smoker 30 py Hypertension Stage 1 Type 2 Diabetes Mellitus Hyperlipidemia Benign Prostatic Hyperplasia (BPH) UICC Stage III melanoma, diagnosed 10/2021 (currently on Nivolumab maintenance)   Medication Candesartan 12 mg 1-0-0 Metoprolol 47.5 mg 1-0-0 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l.
100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III. June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab. October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesions within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Status assessment before possible study enrollment. Patient shows reduced overall health, ECOG performance status now 3. Lab results show liver and kidney injury: Total bilirubin 4.5 mg/dl, AST 230 U/L, ALT 180 U/L, AP 320 U/L, GGT 30 U/L, Albumin 2.3 g/dl. Creatinine 3.2 mg/dl, eGFR 20.6 ml/min/1.73m2.   ===== Patient 9.2 =====   Patient Information Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%). HER2 FISH positive.   Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel (70 mg/m2) plus Trastuzumab (8 mg/kg). Partial response noted after three months. Subsequent Treatment: Second-line chemotherapy with carboplatin (6 mg/m2/min) and paclitaxel (200 mg/m2 q3week) initiated on December 1, 2023, due to disease progression. Current Status: Progressive disease with lymphatic, pulmonary and hepatic metastasis.
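The doses in this therapy overview are body-surface-area based (docetaxel 70 mg/m2, paclitaxel 200 mg/m2), weight based (trastuzumab 8 mg/kg), or, for carboplatin, AUC based; the listed "6" presumably refers to a target AUC of 6 mg/mL·min dosed via the Calvert formula. A minimal sketch of how such absolute doses are typically derived, assuming the Mosteller BSA formula and hypothetical height, weight, and GFR values that are not part of the record:

import math

def bsa_mosteller(height_cm, weight_kg):
    # Body surface area (m^2) by the Mosteller formula.
    return math.sqrt(height_cm * weight_kg / 3600.0)

def carboplatin_dose_calvert(target_auc, gfr_ml_min):
    # Calvert formula: dose (mg) = target AUC (mg/mL*min) x (GFR + 25); GFR is commonly capped at 125 mL/min.
    return target_auc * (min(gfr_ml_min, 125) + 25)

# Hypothetical anthropometrics (not in the record): 178 cm, 80 kg, GFR 90 mL/min
bsa = bsa_mosteller(178, 80)                   # ~1.99 m^2
print(round(200 * bsa))                        # paclitaxel 200 mg/m2 -> ~398 mg
print(round(70 * bsa))                         # docetaxel 70 mg/m2  -> ~139 mg
print(round(carboplatin_dose_calvert(6, 90)))  # target AUC 6        -> 690 mg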
ECOG 1   Comorbidities Former smoker 30 py Hypertension Stage 1 Type 2 Diabetes Mellitus Hyperlipidemia Benign Prostatic Hyperplasia (BPH)   Medication Amlodipine 10 mg 1-0-0 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l. 100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab.  October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesion within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Routine Labs: Comprehensive blood work indicated normal liver and renal function. The patient maintained an ECOG performance status of 1.   ===== Patient 9.2.1 =====               Patient Information  Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%). HER2 FISH positive.   Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel (70 mg/m2) plus Trastuzumab (8 mg/kg). Partial response noted after three months. Subsequent Treatment: Second-line chemotherapy with carboplatin (6 mg/m2/min) and paclitaxel (200 mg/m2 q3week) initiated on December 1, 2023, due to disease progression. Current Status: Progressive disease with lymphatic, pulmonary and hepatic metastasis.   
ECOG 1   Comorbidities   HIV (current viral load undetectable)   Former smoker 30 py Hypertension Stage 1 Type 2 Diabetes Mellitus Hyperlipidemia Benign Prostatic Hyperplasia (BPH)   Medication Amlodipine 10 mg 1-0-0 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l. 100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1 Bictegravir/Emtricitabine/Tenofovir alafenamide 50mg/200mg/25mg 1-0-0   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab.  October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesion within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Routine Labs: Comprehensive blood work indicated normal liver and renal function. The patient maintained an ECOG performance status of 1.                ===== Patient 9.2.2 =====               Patient Information  Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%). HER2 FISH positive.   Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel (70 mg/m2) plus Trastuzumab (8 mg/kg). Partial response noted after three months. Subsequent Treatment: Second-line chemotherapy with carboplatin (6 mg/m2/min) and paclitaxel (200 mg/m2 q3week) initiated on December 1, 2023, due to disease progression. 
Current Status: Progressive disease with lymphatic, pulmonary and hepatic metastasis.   ECOG 1   Comorbidities Former smoker 30 py Hypertension Stage 1 HFrEF NYHA II Type 2 Diabetes Mellitus Generalized Epilepsy Hyperlipidemia Benign Prostatic Hyperplasia (BPH)   Medication   Candesartan 12 mg 1-0-0 Metoprolol 47,5 mg 1-0-0 Valproic acid 500mg 2-0-2 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l. 100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab.  October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesion within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Routine Labs: Comprehensive blood work indicated normal liver and renal function. The patient maintained an ECOG performance status of 1.   ===== Patient 9.2.3 =====               Patient Information  Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%). HER2 FISH positive.   Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel (70 mg/m2) plus Trastuzumab (8 mg/kg). Partial response noted after three months. Subsequent Treatment: Second-line chemotherapy with carboplatin (6 mg/m2/min) and paclitaxel (200 mg/m2 q3week) initiated on December 1, 2023, due to disease progression. 
Current Status: Progressive disease with lymphatic, pulmonary, hepatic and brain metastasis.   ECOG 1   Comorbidities Alcohol dependence Former smoker 30 py Hypertension Stage 1 Type 2 Diabetes Mellitus Hyperlipidemia Benign Prostatic Hyperplasia (BPH)   Medication Amlodipine 10 mg 1-0-0 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l. 100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab.  October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesion within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 5, 2024: MRI scan of the brain revealed multiple metastases, specifically two lesions in the left hemisphere: one in the left frontal lobe and one in the left occipital lobe. Incidental findings included mild age-related cerebral atrophy and scattered white matter hyperintensities consistent with chronic microvascular ischemic changes. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Routine Labs: Comprehensive blood work indicated normal liver and renal function. The patient maintained an ECOG performance status of 1.                ===== Patient 9.3 =====               Patient Information  Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%). HER2 FISH positive.   
Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel plus Trastuzumab. Partial response noted after three months. Subsequent Treatment: Second-line chemotherapy with carboplatin and paclitaxel initiated on December 1, 2023, due to disease progression. Current Status: Progressive disease with lymphatic, pulmonary and hepatic metastasis.   ECOG 1   Comorbidities Former smoker 30 py Hypertension Stage 1 Type 2 Diabetes Mellitus Hyperlipidemia Benign Prostatic Hyperplasia (BPH)   Medication Amlodipine 10 mg 1-0-0 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l. 100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab.  October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesion within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Routine Labs: Comprehensive blood work indicated normal liver and renal function. The patient maintained an ECOG performance status of 1.   ===== Patient 9.3.1 =====               Patient Information  Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%). HER2 FISH positive.   Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel plus Trastuzumab. Partial response noted after three months. 
Subsequent Treatment: Second-line chemotherapy with carboplatin and paclitaxel initiated on December 1, 2023, due to disease progression. Current Status: Progressive disease with lymphatic, pulmonary and hepatic metastasis.   ECOG 2   Comorbidities Former smoker 30 py Hypertension Stage 1 Type 2 Diabetes Mellitus Hyperlipidemia Benign Prostatic Hyperplasia (BPH) CKD KDIGO G4   Medication Amlodipine 10 mg 1-0-0 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l. 100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab.  October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesion within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Routine Labs: Comprehensive blood work indicated normal organ function except for known reduced kidney function: eGFR 21.56 ml/min/1.73m2, Creatinine 3.0 mg/dl. Current ECOG performance status 2.   ===== Patient 9.3.2 =====               Patient Information  Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%). HER2 FISH positive.   Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel plus Trastuzumab. Partial response noted after three months. 
Subsequent Treatment: Second-line chemotherapy with carboplatin and paclitaxel initiated on December 1, 2023, due to disease progression. Current Status: Progressive disease with lymphatic, pulmonary and hepatic metastasis.   ECOG 1 Active P. jirovecii pneumonia   Comorbidities Former smoker 30 py Hypertension Stage 1 Type 2 Diabetes Mellitus Hyperlipidemia Benign Prostatic Hyperplasia (BPH)   Medication Sulfamethoxazole/Trimethoprim 400mg/80mg 5–5-5-5  Amlodipine 10 mg 1-0-0 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l. 100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab.  October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesion within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Routine Labs: Comprehensive blood work indicated normal liver and renal function. The patient maintained an ECOG performance status of 1.                ===== Patient 9.3.3 =====               Patient Information  Name: Mueller, Max Born: 25.03.1945 Address: 456 Oak Street, Hamburg, Germany   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: Stage IV salivary duct carcinoma Initial Detection: June 10, 2023, following symptoms of persistent facial swelling and pain Biopsy Date: July 5, 2023 Molecular Profile: Tumor Mutational Burden (TMB) of 10.5 Mut/Mb, HRAS p.Q61R (AF 44%), PIK3CA p.E545K (AF 39%). HER2 FISH positive.   Therapy Overview Initial Treatment: Chemotherapy: Initiated on August 1, 2023, with Docetaxel plus Trastuzumab. Partial response noted after three months. 
Subsequent Treatment: Second-line chemotherapy with carboplatin and paclitaxel initiated on December 1, 2023, due to disease progression. Current Status: Progressive disease with lymphatic, pulmonary, hepatic and brain metastasis.   ECOG 1   Comorbidities Former smoker 30 py Hypertension Stage 1 Type 2 Diabetes Mellitus Hyperlipidemia Benign Prostatic Hyperplasia (BPH)   Medication Amlodipine 10 mg 1-0-0 Metformin 1000 mg 1-0-1 Empagliflozin 10mg 1-0-0 Atorvastatin 40 mg 0-0-0-1 Omeprazole 20 mg 1-0-0 Tamsulosin 0.4 mg 1-0-0 Fentanyl TTS 25 mcg every 3 days Fentanyl s.l. 100 mcg as needed up to 4 times a day Ibuprofen 600 1-1-1   Chronological Medical Findings: June 10, 2023: Patient presented with persistent facial swelling and pain. A CT scan of the head and neck revealed a mass in the left parotid gland measuring approximately 5 cm with extensive local invasion into the surrounding soft tissues and suspected involvement of multiple regional lymph nodes in levels II and III June 12, 2023: Staging CT-scan (chest and abdomen). Multiple nodular lesions are identified in the right lung, consistent with metastatic disease. The largest lesion is located in the right lower lobe, measuring approximately 2.5 cm in diameter. Additional smaller nodules are noted in the right upper and middle lobes, with the largest of these measuring up to 1.2 cm. No signs of metastatic involvement in the abdomen. June 15, 2023: Brain MRI. No signs of brain metastasis. July 5, 2023: Ultrasound-guided biopsy confirmed salivary duct carcinoma with high TMB and specific genetic mutations (HRAS p.Q61R, PIK3CA p.E545K). FISH positive for HER2 amplification. July 12, 2023: Started on Docetaxel and Trastuzumab.  October 15, 2023: Follow-Up imaging: CT scan of the head and neck showed a reduction in tumor size to approximately 3.5 cm. Regional lymph nodes remained enlarged but showed decreased metabolic activity on PET scan. All pulmonary lesions show minimal reduction in size compared to previous scan. No new metastatic lesions. January 1, 2024: Follow-up CT scan (neck, chest and abdomen) indicated disease progression. Primary tumor remains stable in size, as well as known lymph node metastases. Pulmonary metastases all show tumor growth with the largest lesion in the right lower lobe now measuring 2.8 cm in diameter. Multiple, previously unknown hypodense lesion within the left liver lobe, compatible with metastatic disease. PET scan shows high metabolic activity. January 4, 2024: MRI scan of the brain revealed multiple metastases, specifically two lesions in the right hemisphere: one in the right frontal lobe and one in the right parietal lobe. Incidental findings included mild age-related cerebral atrophy and scattered white matter hyperintensities consistent with chronic microvascular ischemic changes. January 9, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. March 15, 2024: Routine Labs: Comprehensive blood work indicated normal liver and renal function. The patient maintained an ECOG performance status of 1.   ===== Patient 10.1 =====               Name: Miller, Jane Born: 25.07.1965 Address: Main Street 78, Potsdam   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV EGFR mutated non-small cell lung cancer (NSCLC) Initial Detection: January 10, 2023, following symptoms of persistent cough and chest pain Biopsy Date: February 5, 2023, adenocarcinoma of the lung Molecular Profile: Panel (tumor purity 60%). 
EGFR p.E746_A750del (AF 50%), EGFR T790M, EGFR p.C797S (AF 29%), STK11 p.C210* (AF 39%).   Therapy Overview Initial Treatment: Targeted Therapy: Began March 1, 2023, with Osimertinib  (T790M). Partial response noted after initial therapy cycle completed by June 15, 2023. Continued therapy until November 2021 (progressive disease). Subsequent Treatment: Further treatment with Pembrolizumab in combination with Paclitaxel/Carboplatin/ Bevacizumab and Atezolizumab initiated on December 1, 2023. Staging CT shows disease progression after 6 months. Current Status: ECOG 1 Comorbidities Current smoker: 35 py Hypertension Stage 1 Hyperlipidemia: Managed with Simvastatin 20 mg daily COPD GOLD 2 Medication Losartan 50mg 1-0-0 Simvastatin 20mg daily 0-0-0-1 Albuterol (inhaler) on demand  Tiotropium (inhaler) on demand             Chronological Medical Findings: January 2023: Complaints of persistent cough and bloody sputum. Weight loss of -10kg / 5 months. Chest X-ray revealed a mass in the right lung.  January 10, 2023: Comprehensive CT scan (chest and abdomen): Solid, spiculated mass in the right upper lobe of the lung measuring approximately 3.8 cm. Additionally, three small hypoattenuating hepatic lesions noted, to be considered as metastases. Enlarged hilar and subcarinal lymph nodes. Right adrenal gland slightly enlarged, warranting further investigation for metastatic involvement. No evidence of pleural effusion or significant vascular invasion was present. Additional notes: Minor atelectasis in the left lower lobe and mild emphysematous changes in both lungs, consistent with the patient's history of chronic obstructive pulmonary disease (COPD). The abdominal organs, aside from the hepatic lesions and possible adrenal metastasis, appeared unremarkable. February 5, 2023: Biopsy and molecular testing confirmed adenocarcinoma with EGFR T790M mutation. Material sent for further panel testing. Initiated Osimertinib therapy on March 1, 2021. Patient signed consent. Patient in clinical good condition. Feb.-June 2023: Antineoplastic targeted therapy with Osimertinib. June 15, 2023: Follow-up CT scan (chest and abdomen), PET CT scan: Partial response (PR) to treatment. The primary lung mass in the right upper lobe decreased in size, now measuring approximately 3 cm in diameter, down from 3.8 cm. The three previously noted hypoattenuating hepatic lesions have also shown slight reduction in size and decreased metabolic activity, suggesting a positive response to systemic therapy. No new metastatic lesions detected in the liver or other abdominal organs. The previously enlarged hilar and subcarinal lymph nodes have reduced in size, indicating a favorable response to treatment. The right adrenal gland remains slightly enlarged but stable, with no significant change noted, and it continues to show no signs of active disease. Overall: PR.  July 1, 2023: Continuation of Osimertinib therapy October 3, 2023: CT scan Chest/Abd.: PD. The primary lung mass in the right upper lobe has increased in size, now measuring approximately 4.5 cm in diameter (prev 3,0 cm). The previously noted hypoattenuating hepatic lesions have also shown slight growth. Additional new metastasis in S7. Small pleural effusion on the right side, minor atelectasis in the left lower lobe has slightly worsened. Hilar and subcarinal lymph nodes drastically enlarged in size. Right adrenal gland with second metastasis. No new metastatic lesions were detected in the liver or other abdominal organs.  
December 1, 2023: Begin Paclitaxel/Carboplatin/Bevacizumab and Atezolizumab. Received written consent from the patient, ECOG 1. May 10, 2024: CT Lung/Abdomen: Progressive Disease. Primary lung mass in the right upper lobe has increased in size, now measuring approximately 4.5 cm in diameter, up from 3 cm. The previously noted hypoattenuating hepatic lesions have also shown slight growth. Small pleural effusion on the right side. Minor atelectasis in the left lower lobe slightly worsened, likely due to the progressive nature of the disease and the presence of pleural effusion. The hilar and subcarinal lymph nodes, prev. reduced in size, now slightly enlarged again. Right adrenal gland remains slightly enlarged and stable with no significant change in size. No new metastatic lesions were detected in the liver or other abdominal organs. Summary:          Primary Lung Mass: Increased in size to 4.5 cm.          Hepatic Lesions: Slight growth and increased metabolic activity.          Lymph Nodes: Slightly re-enlarged hilar and subcarinal nodes.          Pleural Effusion: Small right-sided pleural effusion noted.          Atelectasis: Slight worsening of minor atelectasis in the left lower lobe.          Adrenal Gland: Remains slightly enlarged; increased metabolic activity.   Overall Assessment: Disease progression (PD).   May 15, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. May 18, 2024: Detailed assessment of health status. ECOG performance 1. All routine labs, including liver and renal function tests, within normal limits.                 ===== Patient 10.1.1 =====                Name: Miller, Jane Born: 25.07.1965 Address: Main Street 78, Potsdam   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV EGFR mutated non-small cell lung cancer (NSCLC) Initial Detection: January 10, 2023, following symptoms of persistent cough and chest pain Biopsy Date: February 5, 2023, adenocarcinoma of the lung Molecular Profile: Panel (tumor purity 60%). EGFR p.E746_A750del (AF 50%), EGFR T790M, EGFR p.C797S (AF 29%), STK11 p.C210* (AF 39%).   Therapy Overview Initial Treatment: Targeted Therapy: Began March 1, 2023, with Osimertinib  (T790M). Partial response noted after initial therapy cycle completed by June 15, 2023. Continued therapy until November 2021 (progressive disease). Subsequent Treatment: Further treatment with Pembrolizumab in combination with Paclitaxel/Carboplatin/ Bevacizumab and Atezolizumab initiated on December 1, 2023. Staging CT shows disease progression after 6 months. Current Status: ECOG 1 Comorbidities Current smoker: 35 py Hypertension Stage 1 Hyperlipidemia: Managed with Simvastatin 20 mg daily COPD GOLD 2 Medication Losartan 50mg 1-0-0 Simvastatin 20mg daily 0-0-0-1 Albuterol (inhaler) on demand  Tiotropium (inhaler) on demand             Chronological Medical Findings: January 2023: Complaints of persistent cough and bloody sputum. Weight loss of -10kg / 5 months. Chest X-ray revealed a mass in the right lung.  January 10, 2023: Comprehensive CT scan (chest and abdomen): Solid, spiculated mass in the right upper lobe of the lung measuring approximately 3.8 cm. Additionally, three small hypoattenuating hepatic lesions noted, to be considered as metastases. Enlarged hilar and subcarinal lymph nodes. Right adrenal gland slightly enlarged, warranting further investigation for metastatic involvement. No evidence of pleural effusion or significant vascular invasion was present. 
Additional notes: Minor atelectasis in the left lower lobe and mild emphysematous changes in both lungs, consistent with the patient's history of chronic obstructive pulmonary disease (COPD). The abdominal organs, aside from the hepatic lesions and possible adrenal metastasis, appeared unremarkable. February 5, 2023: Biopsy and molecular testing confirmed adenocarcinoma with EGFR T790M mutation. Material sent for further panel testing. Initiated Osimertinib therapy on March 1, 2021. Patient signed consent. Patient in clinical good condition. Feb.-June 2023: Antineoplastic targeted therapy with Osimertinib. June 15, 2023: Follow-up CT scan (chest and abdomen), PET CT scan: Partial response (PR) to treatment. The primary lung mass in the right upper lobe decreased in size, now measuring approximately 3 cm in diameter, down from 3.8 cm. The three previously noted hypoattenuating hepatic lesions have also shown slight reduction in size and decreased metabolic activity, suggesting a positive response to systemic therapy. No new metastatic lesions detected in the liver or other abdominal organs. The previously enlarged hilar and subcarinal lymph nodes have reduced in size, indicating a favorable response to treatment. The right adrenal gland remains slightly enlarged but stable, with no significant change noted, and it continues to show no signs of active disease. Overall: PR.  July 1, 2023: Continuation of Osimertinib therapy October 3, 2023: CT scan Chest/Abd.: PD. The primary lung mass in the right upper lobe has increased in size, now measuring approximately 4.5 cm in diameter (prev 3,0 cm). The previously noted hypoattenuating hepatic lesions have also shown slight growth. Additional new metastasis in S7. Small pleural effusion on the right side, minor atelectasis in the left lower lobe has slightly worsened. Hilar and subcarinal lymph nodes drastically enlarged in size. Right adrenal gland with second metastasis. No new metastatic lesions were detected in the liver or other abdominal organs.  December 1, 2023: Begin Paclitaxel/Carboplatin/Bevacizumab and Atezolizumab. Received written consent from the patient, ECOG 1. May 8, 2024: Stopped platin because of severe neuropathy (CTCAE III) May 10, 2024: CT Lung/Abdomen: Progressive Disease. Primary lung mass in the right upper lobe has increased in size, now measuring approximately 4.5 cm in diameter, up from 3 cm. The previously noted hypoattenuating hepatic lesions have also shown slight growth. Small pleural effusion on the right side. Minor atelectasis in the left lower lobe slightly worsened, likely due to the progressive nature of the disease and the presence of pleural effusion. The hilar and subcarinal lymph nodes, prev. reduced in size, now slightly enlarged again. Right adrenal gland remains slightly enlarged and stable with no significant change in size. No new metastatic lesions were detected in the liver or other abdominal organs. Summary:          Primary Lung Mass: Increased in size to 4.5 cm.          Hepatic Lesions: Slight growth and increased metabolic activity.          Lymph Nodes: Slightly re-enlarged hilar and subcarinal nodes.          Pleural Effusion: Small right-sided pleural effusion noted.          Atelectasis: Slight worsening of minor atelectasis in the left lower lobe.          Adrenal Gland: Remains slightly enlarged; increased metabolic activity.   Overall Assessment: Disease progression (PD). May 15, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. 
May 18, 2024: Detailed assessment of health status. ECOG performance 1. All routine labs, including liver and renal function tests, within normal limits. May 20, 2024: Presentation with shortness of breath via the emergency department. CT scan with multiple infiltrates. CRP elevated at 190. Started on Meropenem. Admitted for inpatient care.   ===== Patient 10.1.2 =====   Name: Miller, Jane Born: 25.07.1965 Address: Main Street 78, Potsdam   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV EGFR mutated non-small cell lung cancer (NSCLC) Initial Detection: January 10, 2023, following symptoms of persistent cough and chest pain Biopsy Date: February 5, 2023, adenocarcinoma of the lung Molecular Profile: Panel (tumor purity 60%). EGFR p.E746_A750del (AF 50%), EGFR T790M, EGFR p.C797S (AF 29%), STK11 p.C210* (AF 39%).   Therapy Overview Initial Treatment: Targeted Therapy: Began March 1, 2023, with Osimertinib (T790M). Partial response noted after initial therapy cycle completed by June 15, 2023. Continued therapy until November 2023 (progressive disease). Subsequent Treatment: Further treatment with Pembrolizumab in combination with Paclitaxel/Carboplatin/Bevacizumab and Atezolizumab initiated on December 1, 2023. Staging CT shows disease progression after 6 months. Current Status: ECOG 1 Comorbidities Current smoker: 35 py Hypertension Stage 1 Hyperlipidemia: Managed with Simvastatin 20 mg daily COPD GOLD 2 Medication Losartan 50mg 1-0-0 Simvastatin 20mg daily 0-0-0-1 Albuterol (inhaler) on demand Tiotropium (inhaler) on demand   Chronological Medical Findings: January 2023: Complaints of persistent cough and bloody sputum. Weight loss of 10 kg over 5 months. Chest X-ray revealed a mass in the right lung. January 10, 2023: Comprehensive CT scan (chest and abdomen): Solid, spiculated mass in the right upper lobe of the lung measuring approximately 3.8 cm. Additionally, three small hypoattenuating hepatic lesions noted, to be considered as metastases. Enlarged hilar and subcarinal lymph nodes. Right adrenal gland slightly enlarged, warranting further investigation for metastatic involvement. No evidence of pleural effusion or significant vascular invasion was present. Additional notes: Minor atelectasis in the left lower lobe and mild emphysematous changes in both lungs, consistent with the patient's history of chronic obstructive pulmonary disease (COPD). The abdominal organs, aside from the hepatic lesions and possible adrenal metastasis, appeared unremarkable. February 5, 2023: Biopsy and molecular testing confirmed adenocarcinoma with EGFR T790M mutation. Material sent for further panel testing. Initiated Osimertinib therapy on March 1, 2023. Patient signed consent. Patient in good clinical condition. Feb.-June 2023: Antineoplastic targeted therapy with Osimertinib. June 15, 2023: Follow-up CT scan (chest and abdomen), PET CT scan: Partial response (PR) to treatment. The primary lung mass in the right upper lobe decreased in size, now measuring approximately 3 cm in diameter, down from 3.8 cm. The three previously noted hypoattenuating hepatic lesions have also shown slight reduction in size and decreased metabolic activity, suggesting a positive response to systemic therapy. No new metastatic lesions detected in the liver or other abdominal organs. The previously enlarged hilar and subcarinal lymph nodes have reduced in size, indicating a favorable response to treatment. 
The right adrenal gland remains slightly enlarged but stable, with no significant change noted, and it continues to show no signs of active disease. Overall: PR. July 1, 2023: Continuation of Osimertinib therapy. October 3, 2023: CT scan Chest/Abd.: PD. The primary lung mass in the right upper lobe has increased in size, now measuring approximately 4.5 cm in diameter (prev. 3.0 cm). The previously noted hypoattenuating hepatic lesions have also shown slight growth. Additional new metastasis in S7. Small pleural effusion on the right side, minor atelectasis in the left lower lobe has slightly worsened. Hilar and subcarinal lymph nodes drastically enlarged in size. Right adrenal gland with second metastasis. No new metastatic lesions were detected in the liver or other abdominal organs. December 1, 2023: Begin Paclitaxel/Carboplatin/Bevacizumab and Atezolizumab. Received written consent from the patient, ECOG 1. May 10, 2024: CT Lung/Abdomen: Progressive Disease. Primary lung mass in the right upper lobe has increased in size, now measuring approximately 4.5 cm in diameter, up from 3 cm. The previously noted hypoattenuating hepatic lesions have also shown slight growth. Small pleural effusion on the right side. Minor atelectasis in the left lower lobe slightly worsened, likely due to the progressive nature of the disease and the presence of pleural effusion. The hilar and subcarinal lymph nodes, prev. reduced in size, now slightly enlarged again. Right adrenal gland remains slightly enlarged and stable with no significant change in size. No new metastatic lesions were detected in the liver or other abdominal organs. Summary: Primary Lung Mass: Increased in size to 4.5 cm. Hepatic Lesions: Slight growth and increased metabolic activity. Lymph Nodes: Slightly re-enlarged hilar and subcarinal nodes. Pleural Effusion: Small right-sided pleural effusion noted. Atelectasis: Slight worsening of minor atelectasis in the left lower lobe. Adrenal Gland: Remains slightly enlarged; increased metabolic activity.   Overall Assessment: Disease progression (PD). May 11, 2024: Seizures. CT scan: three brain metastases: a 2.5 cm lesion in the left frontal lobe, a 1.8 cm lesion in the right parietal lobe, and a 1.2 cm lesion in the cerebellum. Initiated Prednisone. Started Keppra (levetiracetam). Consultation with the radiation oncology team, which recommended whole-brain radiotherapy (30 Gy in 10 fractions). Surgical resection deemed not possible due to location of metastases. May 15, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. May 18, 2024: Detailed assessment of health status. ECOG performance 1. All routine labs, including liver and renal function tests, within normal limits.   ===== Patient 10.1.3 =====   Name: Miller, Jane Born: 25.07.1965 Address: Main Street 78, Potsdam   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV EGFR mutated non-small cell lung cancer (NSCLC) Initial Detection: January 10, 2023, following symptoms of persistent cough and chest pain Biopsy Date: February 5, 2023, adenocarcinoma of the lung Molecular Profile: Panel (tumor purity 60%). EGFR p.E746_A750del (AF 50%), EGFR T790M, EGFR p.C797S (AF 29%), STK11 p.C210* (AF 39%).   Therapy Overview Initial Treatment: Targeted Therapy: Began March 1, 2023, with Osimertinib (T790M). Partial response noted after initial therapy cycle completed by June 15, 2023. 
Continued therapy until November 2021 (progressive disease). Subsequent Treatment: Further treatment with Pembrolizumab in combination with Paclitaxel/Carboplatin/ Bevacizumab and Atezolizumab initiated on December 1, 2023. Staging CT shows disease progression after 6 months. Current Status: ECOG 1 Comorbidities Current smoker: 35 py Hypertension Stage 1 Hyperlipidemia: Managed with Simvastatin 20 mg daily COPD GOLD 2 Diabetes Mellitus (II)  Diabetic Retinopathy Medication Losartan 50mg 1-0-0 Simvastatin 20mg daily 0-0-0-1 Metformin  (800 mg 1-0-1) Albuterol (inhaler) on demand  Tiotropium (inhaler) on demand Lucentis              Chronological Medical Findings: January 2023: Complaints of persistent cough and bloody sputum. Weight loss of -10kg / 5 months. Chest X-ray revealed a mass in the right lung.  January 10, 2023: Comprehensive CT scan (chest and abdomen): Solid, spiculated mass in the right upper lobe of the lung measuring approximately 3.8 cm. Additionally, three small hypoattenuating hepatic lesions noted, to be considered as metastases. Enlarged hilar and subcarinal lymph nodes. Right adrenal gland slightly enlarged, warranting further investigation for metastatic involvement. No evidence of pleural effusion or significant vascular invasion was present. Additional notes: Minor atelectasis in the left lower lobe and mild emphysematous changes in both lungs, consistent with the patient's history of chronic obstructive pulmonary disease (COPD). The abdominal organs, aside from the hepatic lesions and possible adrenal metastasis, appeared unremarkable. February 5, 2023: Biopsy and molecular testing confirmed adenocarcinoma with EGFR T790M mutation. Material sent for further panel testing. Initiated Osimertinib therapy on March 1, 2021. Patient signed consent. Patient in clinical good condition. Feb.-June 2023: Antineoplastic targeted therapy with Osimertinib. June 15, 2023: Follow-up CT scan (chest and abdomen), PET CT scan: Partial response (PR) to treatment. The primary lung mass in the right upper lobe decreased in size, now measuring approximately 3 cm in diameter, down from 3.8 cm. The three previously noted hypoattenuating hepatic lesions have also shown slight reduction in size and decreased metabolic activity, suggesting a positive response to systemic therapy. No new metastatic lesions detected in the liver or other abdominal organs. The previously enlarged hilar and subcarinal lymph nodes have reduced in size, indicating a favorable response to treatment. The right adrenal gland remains slightly enlarged but stable, with no significant change noted, and it continues to show no signs of active disease. Overall: PR.  July 1, 2023: Continuation of Osimertinib therapy October 3, 2023: CT scan Chest/Abd.: PD. The primary lung mass in the right upper lobe has increased in size, now measuring approximately 4.5 cm in diameter (prev 3,0 cm). The previously noted hypoattenuating hepatic lesions have also shown slight growth. Additional new metastasis in S7. Small pleural effusion on the right side, minor atelectasis in the left lower lobe has slightly worsened. Hilar and subcarinal lymph nodes drastically enlarged in size. Right adrenal gland with second metastasis. No new metastatic lesions were detected in the liver or other abdominal organs.  December 1, 2023: Begin Paclitaxel/Carboplatin/Bevacizumab and Atezolizumab. Received written consent from the patient, ECOG 1. May 10, 2024: CT Lung/Abdomen: Progressive Disease. 
Primary lung mass in the right upper lobe has increased in size, now measuring approximately 4.5 cm in diameter, up from 3 cm. The previously noted hypoattenuating hepatic lesions have also shown slight growth. Small pleural effusion on the right side. Minor atelectasis in the left lower lobe slightly worsened, likely due to the progressive nature of the disease and the presence of pleural effusion. The hilar and subcarinal lymph nodes, prev. reduced in size, now slightly enlarged again. Right adrenal gland remains slightly enlarged and stable with no significant change in size. No new metastatic lesions were detected in the liver or other abdominal organs. Summary:          Primary Lung Mass: Increased in size to 4.5 cm.          Hepatic Lesions: Slight growth and increased metabolic activity.          Lymph Nodes: Slightly re-enlarged hilar and subcarinal nodes.          Pleural Effusion: Small right-sided pleural effusion noted.          Atelectasis: Slight worsening of minor atelectasis in the left lower lobe.          Adrenal Gland: Remains slightly enlarged; increased metabolic activity.   Overall Assessment: Disease progression (PD). May 15, 2024: Tumor board recommends considering clinical trial options due to limited response to standard therapies. May 18, 2024: Detailed assessment of health status. Patient progressively in worse conditions, currently ECOG performance 2. All routine labs, including liver and renal function tests, within normal limits.                  ===== Patient 11.1 =====                 Patient Information Name: Smith, Anna Born: 10.03.1980 Address: Another Avenue 89, Augsburg Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV well-differentiated, non-functioning neuroendocrine tumor of the pancreas Initial Detection: June 5, 2021, following symptoms of abdominal pain and weight loss Biopsy Date: July 1, 2021, well-differentiated neuroendocrine tumor of the pancreas Molecular Profile: High Tumor Mutational Burden (TMB-H, >=10 mut/Mb, F1CDx assay) Therapy Overview Initial Treatment: Chemotherapy: Began July 20, 2021, with Capecitabine and Temozolomide regimen. Partial response noted after initial chemotherapy cycle completed by October 10, 2021. Continued therapy until January 2022 (progressive disease). Subsequent Treatment: Further chemotherapy with Everolimus initiated on February 1, 2022. Staging CT shows disease progression after 6 months. Current Status: ECOG 1 Comorbidities Hypertension Stage 2 History of appendectomy 2001 Medication Ramipril 10mg 1-0-0 Amlodipin 5mg 1-0-0 Hydrochlorothiazide 12.5mg 1-0-0           Chronological Medical Findings: May 10, 2021: Complained of abdominal pain and weight loss. Abdominal ultrasound revealed a mass in the pancreas. Referred to oncologist. No symptoms attributable to carcinoid syndrome. June 5, 2021: CT scan of the abdomen and pelvis: Revealed a well-defined mass in the head of the pancreas measuring approximately 4.5 cm with surrounding lymphadenopathy. No signs of vascular invasion, but multiple small hepatic lesions were identified, indicating metastatic disease. No evidence of bowel obstruction, but slight dilation of the pancreatic duct was observed. July 1, 2021: Biopsy and molecular testing confirmed a well-differentiated neuroendocrine tumor with a high tumor mutational burden (TMB-H, >=10 mut/Mb, F1CDx assay), Grade 3. Initiated Capecitabine and Temozolomide chemotherapy regimen on July 20, 2021. 
October 10, 2021: Follow-up CT scan (chest, abdomen, and pelvis): Partial response to treatment observed, with the primary pancreatic mass reducing to 3.5 cm in diameter. Hepatic lesions showed slight reduction in size and metabolic activity. No new metastatic lesions were detected, but mild ascites persisted. January 1, 2022: Continued Capecitabine and Temozolomide therapy, with staging scans showing moderate disease progression. Primary mass increased to 4.0 cm, with new peritoneal nodules. February 1, 2022: Initiated Everolimus therapy due to progression on previous regimen. August 1, 2022: Follow-up MRI scan (abdomen and pelvis): MRI indicated further disease progression, with the primary tumor enlarging to 4.8 cm and increased involvement of adjacent hepatic structures. Peritoneal nodules showed slight growth, and moderate ascites was present. There was no evidence of bowel obstruction or significant vascular invasion. November 15, 2022: Tumor board recommends considering clinical trial options due to limited response to standard therapies. December 1, 2022: Detailed assessment of health status. ECOG performance status 1. All routine labs, including liver and renal function tests, within normal limits.                  ===== Patient 11.1.1 =====                 Patient Information Name: Smith, Anna Born: 10.03.1980 Address: Another Avenue 89, Augsburg Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV well-differentiated, non-functioning neuroendocrine tumor of the pancreas Initial Detection: June 5, 2021, following symptoms of abdominal pain and weight loss Biopsy Date: July 1, 2021, well-differentiated neuroendocrine tumor of the pancreas Molecular Profile: High Tumor Mutational Burden (TMB-H, >=10 mut/Mb, F1CDx assay) Therapy Overview Initial Treatment: Chemotherapy: Began July 20, 2021, with Capecitabine and Temozolomide regimen. Partial response noted after initial chemotherapy cycle completed by October 10, 2021. Continued therapy until January 2022 (progressive disease). Subsequent Treatment: Further chemotherapy with Everolimus initiated on February 1, 2022. Staging CT shows disease progression after 6 months. Current Status: ECOG 1 Comorbidities Hypertension Stage 2 History of appendectomy 2001 Systemic lupus erythematodes (last systemic therapy 09/2020)   Medication Ramipril 10mg 1-0-0 Amlodipin 5mg 1-0-0 Hydrochlorothiazide 12.5mg 1-0-0  Hydroxychloroquine 200mg 1-0-0      Chronological Medical Findings: May 10, 2021: Complained of abdominal pain and weight loss. Abdominal ultrasound revealed a mass in the pancreas. Referred to oncologist. No symptoms attributable to carcinoid syndrome. June 5, 2021: CT scan of the abdomen and pelvis: Revealed a well-defined mass in the head of the pancreas measuring approximately 4.5 cm with surrounding lymphadenopathy. No signs of vascular invasion, but multiple small hepatic lesions were identified, indicating metastatic disease. No evidence of bowel obstruction, but slight dilation of the pancreatic duct was observed. July 1, 2021: Biopsy and molecular testing confirmed a well-differentiated neuroendocrine tumor with a high tumor mutational burden (TMB-H, >=10 mut/Mb, F1CDx assay), Grade 3. Initiated Capecitabine and Temozolomide chemotherapy regimen on July 20, 2021. October 10, 2021: Follow-up CT scan (chest, abdomen, and pelvis): Partial response to treatment observed, with the primary pancreatic mass reducing to 3.5 cm in diameter. Hepatic lesions showed slight reduction in size and metabolic activity. 
No new metastatic lesions were detected, but mild ascites persisted. January 1, 2022: Continued Capecitabine and Temozolomide therapy, with staging scans showing moderate disease progression. Primary mass increased to 4.0 cm, with new peritoneal nodules. February 1, 2022: Initiated Everolimus therapy due to progression on previous regimen. August 1, 2022: Follow-up MRI scan (abdomen and pelvis): MRI indicated further disease progression, with the primary tumor enlarging to 4.8 cm and increased involvement of adjacent hepatic structures. Peritoneal nodules showed slight growth, and moderate ascites was present. There was no evidence of bowel obstruction or significant vascular invasion. November 15, 2022: Tumor board recommends considering clinical trial options due to limited response to standard therapies. December 1, 2022: Detailed assessment of health status. ECOG performance status 1. All routine labs, including liver and renal function tests, within normal limits.   ===== Patient 11.1.2 =====                  Patient Information Name: Smith, Anna Born: 10.03.1980 Address: Another Avenue 89, Augsburg Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV well-differentiated, non-functioning neuroendocrine tumor of the pancreas Initial Detection: June 5, 2021, following symptoms of abdominal pain and weight loss Biopsy Date: July 1, 2021, well-differentiated neuroendocrine tumor of the pancreas Molecular Profile: High Tumor Mutational Burden (TMB-H, >=10 mut/Mb, F1CDx assay) Therapy Overview Initial Treatment: Chemotherapy: Began July 20, 2021, with Capecitabine and Temozolomide regimen. Partial response noted after initial chemotherapy cycle completed by October 10, 2021. Continued therapy until January 2022 (progressive disease). Subsequent Treatment: Further chemotherapy with Everolimus initiated on February 1, 2022. Staging CT shows disease progression after 6 months. Current Status:   ECOG 1   Comorbidities Smoker 35 py Alcohol dependence Hepatitis C Hypertension Stage 2 History of appendectomy 2001   Medication Glecaprevir/Pibrentasvir 100mg/40mg 3-0-0 Ramipril 10mg 1-0-0 Amlodipin 5mg 1-0-0 Hydrochlorothiazide 12.5mg 1-0-0           Chronological Medical Findings: May 10, 2021: Complained of abdominal pain and weight loss. Abdominal ultrasound revealed a mass in the pancreas. Referred to oncologist. No symptoms attributable to carcinoid syndrome. June 5, 2021: CT scan of the abdomen and pelvis: Revealed a well-defined mass in the head of the pancreas measuring approximately 4.5 cm with surrounding lymphadenopathy. No signs of vascular invasion, but multiple small hepatic lesions were identified, indicating metastatic disease. No evidence of bowel obstruction, but slight dilation of the pancreatic duct was observed. July 1, 2021: Biopsy and molecular testing confirmed a well-differentiated neuroendocrine tumor with a high tumor mutational burden (TMB-H, >=10 mut/Mb, F1CDx assay), Grade 3. Initiated Capecitabine and Temozolomide chemotherapy regimen on July 20, 2021. October 10, 2021: Follow-up CT scan (chest, abdomen, and pelvis): Partial response to treatment observed, with the primary pancreatic mass reducing to 3.5 cm in diameter. Hepatic lesions showed slight reduction in size and metabolic activity. No new metastatic lesions were detected, but mild ascites persisted. January 1, 2022: Continued Capecitabine and Temozolomide therapy, with staging scans showing moderate disease progression. Primary mass increased to 4.0 cm, with new peritoneal nodules. 
February 1, 2022: Initiated Everolimus therapy due to progression on previous regimen. August 1, 2022: Follow-up MRI scan (abdomen and pelvis): MRI indicated further disease progression, with the primary tumor enlarging to 4.8 cm and increased involvement of adjacent hepatic structures. Peritoneal nodules showed slight growth, and moderate ascites was present. There was no evidence of bowel obstruction or significant vascular invasion. November 15, 2022: Tumor board recommends considering clinical trial options due to limited response to standard therapies. December 1, 2022: Detailed assessment of health status. ECOG performance status 1. Routine labs show elevated liver enzymes: ALT 100 U/L, AST 89 U/L, total bilirubin 2.8 mg/dl, direct bilirubin 1.6 mg/dl, Albumin 3.0 g/dl .                    ===== Patient 11.1.3 =====                   Patient Information Name: Smith, Anna Born: 10.03.1980 Address: Another Avenue 89, Augsburg   Overview of Tumor Diagnosis and Therapy Tumor Diagnosis Diagnosis: UICC Stage IV well-differentiated, non-functioning neuroendocrine tumor of the pancreas Initial Detection: June 5, 2021, following symptoms of abdominal pain and weight loss Biopsy Date: July 1, 2021, well-differentiated neuroendocrine tumor of the pancreas Molecular Profile: High Tumor Mutational Burden (TMB-H, >=10 mut/Mb, F1CDx assay)   Therapy Overview Initial Treatment: Chemotherapy: Began July 20, 2021, with Capecitabine and Temozolomide regimen. Partial response noted after initial chemotherapy cycle completed by October 10, 2021. Continued therapy until January 2022 (progressive disease). Subsequent Treatment: Further chemotherapy with Everolimus initiated on February 1, 2022. Staging CT shows disease progression after 6 months. Current Status: ECOG 1   Comorbidities Hypertension Stage 2 History of appendectomy 2001 History of active tuberculosis 2003   Medication Ramipril 10mg 1-0-0 Amlodipin 5mg 1-0-0 Hydrochlorothiazide 12.5mg 1-0-0           Chronological Medical Findings: May 10, 2021: Complained of abdominal pain and weight loss. Abdominal ultrasound revealed a mass in the pancreas. Referred to oncologist. No symptoms attributable to carcinoid syndrome. June 5, 2021: CT scan of the abdomen and pelvis: Revealed a well-defined mass in the head of the pancreas measuring approximately 4.5 cm with surrounding lymphadenopathy. No signs of vascular invasion, but multiple small hepatic lesions were identified, indicating metastatic disease. No evidence of bowel obstruction, but slight dilation of the pancreatic duct was observed. July 1, 2021: Biopsy and molecular testing confirmed a well-differentiated neuroendocrine tumor with a high tumor mutational burden (TMB-H, >=10 mut/Mb, F1CDx assay), Grade 3. Initiated Capecitabine and Temozolomide chemotherapy regimen on July 20, 2021. October 10, 2021: Follow-up CT scan (chest, abdomen, and pelvis): Partial response to treatment observed, with the primary pancreatic mass reducing to 3.5 cm in diameter. Hepatic lesions showed slight reduction in size and metabolic activity. No new metastatic lesions were detected, but mild ascites persisted. January 1, 2022: Continued Capecitabine and Temozolomide therapy, with staging scans showing moderate disease progression. Primary mass increased to 4.0 cm, with new peritoneal nodules. February 1, 2022: Initiated Everolimus therapy due to progression on previous regimen. 
August 1, 2022: Follow-up MRI scan (abdomen and pelvis): MRI indicated further disease progression, with the primary tumor enlarging to 4.8 cm and increased involvement of adjacent hepatic structures. Peritoneal nodules showed slight growth, and moderate ascites was present. There was no evidence of bowel obstruction or significant vascular invasion. August 5, 2022: Brain MRI. Multiple metastases, specifically three lesions in the right hemisphere: two in the right parietal lobe, and one in the right occipital lobe. Incidental findings included scattered white matter hyperintensities consistent with chronic microvascular ischemic changes. November 15, 2022: Tumor board recommends considering clinical trial options due to limited response to standard therapies. December 1, 2022: Detailed assessment of health status. ECOG performance status 1. All routine labs, including liver and renal function tests, within normal limits.
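Note on the dosing parameters quoted in the records above (illustrative only, not part of the source notes): body-surface-area doses such as Docetaxel 70 mg/m2 or Paclitaxel 200 mg/m2 are scaled by the patient's BSA, and the carboplatin parameter written as "6 mg/m2/min" is presumably the Calvert target AUC, conventionally expressed in mg/mL·min and converted to an absolute dose from renal function, which is why the CKD variant with eGFR 21.56 mL/min/1.73 m2 is clinically relevant. A minimal Python sketch of these standard formulas follows; the height, weight, and creatinine values are hypothetical and not taken from the records.

# Illustrative sketch (not from the patient records): standard formulas behind
# the dosing parameters quoted above. All patient measurements are hypothetical.

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area in m^2 (Mosteller formula)."""
    return ((height_cm * weight_kg) / 3600.0) ** 0.5

def crcl_cockcroft_gault(age_y: float, weight_kg: float,
                         creatinine_mg_dl: float, female: bool) -> float:
    """Creatinine clearance in mL/min (Cockcroft-Gault)."""
    crcl = (140.0 - age_y) * weight_kg / (72.0 * creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

def carboplatin_dose_calvert(target_auc: float, gfr_ml_min: float) -> float:
    """Total carboplatin dose in mg (Calvert: dose = AUC x (GFR + 25))."""
    return target_auc * (gfr_ml_min + 25.0)

# Hypothetical inputs for a patient born 1945; none of these appear in the notes.
bsa = bsa_mosteller(height_cm=178, weight_kg=82)            # about 2.0 m^2
docetaxel_mg = 70 * bsa                                      # 70 mg/m2 regimen
crcl = crcl_cockcroft_gault(79, 82, creatinine_mg_dl=1.0, female=False)
carbo_mg = carboplatin_dose_calvert(target_auc=6.0, gfr_ml_min=crcl)
print(f"BSA {bsa:.2f} m^2, docetaxel {docetaxel_mg:.0f} mg, "
      f"CrCl {crcl:.0f} mL/min, carboplatin {carbo_mg:.0f} mg")

With these hypothetical inputs the sketch yields a BSA near 2 m^2 and a Calvert dose of roughly 570 mg; if renal function were close to the eGFR of about 22 mL/min reported for the CKD variant, the same target AUC would correspond to only about 280 mg, illustrating why reduced kidney function changes the carboplatin dose.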
http://arxiv.org/abs/2407.13278v1
20240718083155
Deep Time Series Models: A Comprehensive Survey and Benchmark
[ "Yuxuan Wang", "Haixu Wu", "Jiaxiang Dong", "Yong Liu", "Mingsheng Long", "Jianmin Wang" ]
cs.LG
[ "cs.LG" ]
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. XX, No. X Wang et al.: Deep Time Series Models: A Comprehensive Survey and Benchmark § ABSTRACT Time series, characterized by a sequence of data points arranged in a discrete-time order, are ubiquitous in real-world applications. Different from other modalities, time series present unique challenges due to their complex and dynamic nature, including the entanglement of nonlinear patterns and time-variant trends. Analyzing time series data is of great significance in real-world scenarios and has been widely studied over centuries. Recent years have witnessed remarkable breakthroughs in the time series community, with techniques shifting from traditional statistical methods to advanced deep learning models. In this paper, we delve into the design of deep time series models across various analysis tasks and review the existing literature from two perspectives: basic modules and model architectures. Further, we develop and release Time Series Library (TSLib) as a fair benchmark of deep time series models for diverse analysis tasks, which implements 24 mainstream models, covers 30 datasets from different domains, and supports five prevalent analysis tasks. Based on TSLib, we thoroughly evaluate 12 advanced deep time series models on different tasks. Empirical results indicate that models with specific structures are well-suited for distinct analytical tasks, which offers insights for research and adoption of deep time series models. Code is available at https://github.com/thuml/Time-Series-Libraryhttps://github.com/thuml/Time-Series-Library. Time series analysis, deep time series models, survey, benchmark Deep Time Series Models: A Comprehensive Survey and Benchmark Yuxuan Wang, Haixu Wu, Jiaxiang Dong, Yong Liu, Mingsheng Long, Jianmin Wang Yuxuan Wang, Haixu Wu, Jiaxiang Dong, Yong Liu, Jianmin Wang, and Mingsheng Long are with the School of Software, BNRist, Tsinghua University, Beijing 100084, China. E-mail: wangyuxu22@mails.tsinghua.edu.cn. Yuxuan Wang, Haixu Wu and Jiaxiang Dong contributed equally to this work. Corresponding author: Mingsheng Long, mingsheng@tsinghua.edu.cn. July 22, 2024 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Time series refers to a sequence of data points indexed in a discrete-time order <cit.>, which are omnipresent in real-world applications, such as financial risk assessment, energy sustainability, and weather forecasting. Driven by the increasing availability of vast amounts of time series data across various domains, the community of time series analysis has witnessed tremendous advancements. Compared to image and text data, which have objectively prescribed syntax or intuitive patterns, the semantic information of time series data is primarily derived from the temporal variation <cit.>. This presents significant challenges in understanding the data, such as identifying sequential dependencies, trends, seasonal patterns, and complicated dynamics. 
Consequently, analyzing time series data requires sophisticated methods to capture and utilize these complex temporal representations. Given the crucial role of time series data in real-world applications <cit.>, time series analysis has been a longstanding research direction. Time series analysis encompasses the process of analyzing the temporal variation to understand time series data and make accurate predictions and informed decisions. One of the essential cornerstone of time series analysis is discovering the underlying patterns in time series data, which involves the intricate temporal dependencies and variate correlations inherent within the data. By capturing these complex dependencies, time series models can effectively reveal the underlying dynamics, and facilitate various downstream tasks, including forecasting, classification, imputation, and anomaly detection. Traditional time series methods, such as AutoRegressive Integrated Moving Average (ARIMA) <cit.>, Exponential Smoothing, and Spectral Analysis <cit.>, have long served as stalwart tools in time series analysis. These models, grounded in statistical methodologies, have been instrumental in discovering patterns, trends, and seasonality within temporal variations. However, their capabilities are hindered due to the inherent limitations of capturing complex nonlinear relationships and long-term dependencies present in real-world time series data. The rigid assumptions of linearity and stationarity that underpin traditional models constrain their adaptability to eventful and evolving data flows. Deep models have garnered significant attention and achieved remarkable performance across various domains, including natural language processing (NLP) <cit.>, <cit.>, computer vision (CV) <cit.>, <cit.>, and recommendation systems <cit.>. In recent years, deep learning models <cit.> have demonstrated their capability to capture the intricate dependencies within time series data, making deep learning models a powerful tool for time series analysis over traditional statistical methods. More recently, Transformer models with attention mechanisms, originally developed for natural language processing tasks, have presented stunning power in processing large-scale data <cit.> and have also been adapted for learning time series data. These architectures offer the advantage of selectively focusing on different parts of the input sequence, allowing for more nuanced discovery of both temporal and variable dependencies in time series. Related Surveys Although various time series models designed for different analysis tasks have emerged in recent years, there is a lack of a comprehensive overview of existing methods, covering both tasks and models. Previous reviews focus exclusively on either a specific model architecture or an analysis task. For example, <cit.> reviews deep learning methods for specific time series analysis tasks while failing to include advanced architecture such as Transformer. Several surveys <cit.> provide up-to-date reviews for time series analysis focusing on specific deep learning architectures(i.e., Graph Neural Network and Transformer). Recently, BasicTS <cit.> and TFB <cit.> introduce forecasting benchmarks that enable an unbiased evaluation of existing approaches but do not provide an overview of the architectural design of those deep models. In this survey, we provide a comprehensive review of deep time series models for researchers and practitioners, starting from the basic modules to modern architectures. 
To foster practical applications, a time series benchmark is offered for a fair evaluation and identifying the effective scope of existing models. Our survey is organized as follows. Section 2 provides the background concepts of time series analysis. Section 3 introduces the basic modules that are widely utilized in prevalent deep time series models. Section 4 reviews the existing deep time series models in terms of the architecture design. Section 5 introduces the proposed open-source benchmark—Time Series Library (TSLib)—and presents extensive experimental comparison with detailed analysis. Section 6 provides a brief discussion of future research directions while Section 7 summarizes this survey. § PRELIMINARIES §.§ Time Series Time series is a sequence of T observations ordered by time, which can be denoted as 𝐗 = {𝐱_1, 𝐱_2, ..., 𝐱_T}∈ℝ^T× C, where 𝐱_t ∈ℝ^C represents the observed values at time point t and C is the number of variables. Since time series data are physical measurements obtained from sensors, systems are often recorded with multiple variables. Consequently, real-world time series usually recorded in a multivariate form. Theoretical studies <cit.> have shown that when there are two or more non-stationary series, a linear combination of them can be stationary. This co-integration property helps in uncovering and modeling long-term relationships among non-stationary series. Therefore, the essence of time series analysis is to capture and utilize the temporal dependencies and inter-variable correlations within the observations. Temporal Dependency Given the sequential nature inherent in the observations, one evident technological paradigm is to capture the temporal dependence of a set of historical data. The basic idea of temporal dependencies is the intricate correlations between time points or sub-series. Traditional statistical models have laid the groundwork for modeling temporal dependencies. Prominent models include ARIMA (Autoregressive Integrated Moving Average) <cit.> have been extensively studied for capturing complex temporal patterns in the time series modality. Owing to their simplicity and interpretability, these statistical methods remain popular for tasks where the underlying temporal dynamics do not exhibit high complexity. Considering the high-dimensionality and non-stationarity of real-world time series, the research focus shifted towards deep learning for time series analysis. These advanced methods are designed to handle more complex temporal dynamics and offer greater flexibility in capturing the temporal dependency of time series data. Variate Correlation In addition to capturing temporal dependencies, understanding the variate correlations within high-dimensionality plays a pivotal role in analyzings multivariate time series. These correlations refer to the complex interactions and associations among different variables changing across the time. They provide valuable insights into the underlying dynamics and dependencies among the measurements, enabling a more comprehensive understanding of the latent process. Traditional approaches, such as Vector Autoregressive (VAR) models <cit.>, extend the concept of autoregression to multiple variables and can capture the relationships between multiple quantities as they evolve over time. Technically, VAR represents each variable as a linear combination of its lagged values and the lagged values of all other variables in the model, which results in an inability to capture complex and non-linear relationships. 
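To make this classical baseline concrete, the following minimal sketch (an illustration, not code from any of the surveyed works) fits a VAR(p) model by ordinary least squares with NumPy; the function names and the toy data are illustrative assumptions.

import numpy as np

def fit_var(x: np.ndarray, p: int) -> np.ndarray:
    # x: (T, C) multivariate series; returns coefficients of shape (C*p + 1, C),
    # where the last row holds the intercepts.
    T, C = x.shape
    rows = []
    for t in range(p, T):
        # stack the p previous observations plus a constant term
        rows.append(np.concatenate([x[t - k] for k in range(1, p + 1)] + [np.ones(1)]))
    design = np.stack(rows)          # (T - p, C*p + 1)
    target = x[p:]                   # (T - p, C)
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coef

def forecast_var(x: np.ndarray, coef: np.ndarray, p: int) -> np.ndarray:
    # one-step-ahead forecast from the last p observations
    z = np.concatenate([x[-k] for k in range(1, p + 1)] + [np.ones(1)])
    return z @ coef

rng = np.random.default_rng(0)
series = rng.standard_normal((200, 3)).cumsum(axis=0)    # toy 3-variate series
print(forecast_var(series, fit_var(series, p=2), p=2))   # predicted next observation

As the sketch makes explicit, every prediction is a single linear map of the stacked lags, which is exactly the restriction that motivates the deep models discussed next.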
Recently, advanced deep models, such as Graph Neural Networks <cit.> and Transformers <cit.>, have also been introduced for variate correlation modeling. §.§ Time Series Analysis Tasks Based on the understanding of underlying patterns and trends within time series data, time series analysis encompasses various downstream applications, including forecasting <cit.>, imputation <cit.>, classification <cit.>, and anomaly detection <cit.>, each serving distinct purposes in diverse application domains. We illustrate representative time series analysis tasks in Figure <ref>. Forecasting is a fundamental task in time series analysis that requires models to uncover temporal dependencies and dynamic patterns in the data. By capturing the relationships between past and future data, the forecasting model aims to predict future values or trends of the input series. Missing data due to sensor failures, data corruption, or absent measurements are ubiquitous in practical applications, leading to a growing demand for time series imputation to obtain higher-quality data. Unlike forecasting, which predicts future values based on historical observations, imputation focuses on reconstructing missing values using the available contextual information. Anomaly detection involves identifying unusual or abnormal patterns within a time series, which can indicate critical events, system faults, or outliers requiring further investigation. Lastly, classification assigns a label or category to a given time series based on its characteristics, a task widely utilized in fields such as medical diagnosis. § BASIC MODULES Time series modeling approaches have evolved significantly, transitioning from traditional statistical models to sophisticated deep learning models. Despite these advancements, many classical tools and analytical algorithms remain widely used and continue to serve as foundational design principles in modern deep models. In this section, we focus on the major tools of classical time series analysis and demonstrate how they have been integrated as fundamental components in contemporary deep time series models. §.§ Stationarization As a foundational concept in time series analysis, stationarity refers to the property of a time series where its statistical properties remain constant over time. A stationary time series has a constant mean and variance, which simplifies statistical analysis and makes it easier to capture the underlying patterns and behavior within a time series. Since many statistics-based time series analysis methods take stationarity as a basic assumption, stationarization of time series data has become an essential module. There are ways of transforming non-stationary time series into stationary. Traditional time series models stationarize the time series through differencing or log-transformation. In recent deep learning approaches, data normalization <cit.> takes the role of stationarization in a simple but effective way, which standardizes the value distribution of observations while maintaining the intrinsic variations and further helps mitigate the distribution shift between the source and target domains. The deep adaptive input normalization (DAIN) layer <cit.> was proposed to adaptively stationarize time series data according to their original distribution. RevIN <cit.> introduces reversible instance normalization to time series data, which is an effective normalization-and-denormalization method with learnable affine transforms to make the model bypass the non-stationary inputs. 
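As an illustration of this normalization-and-denormalization idea, the following is a minimal PyTorch sketch in the spirit of RevIN; the class name, interface, and the way statistics are cached are illustrative assumptions rather than the official implementation.

import torch
import torch.nn as nn

class ReversibleInstanceNorm(nn.Module):
    # Normalize each input window per variate with a learnable affine transform,
    # and invert the transform on the model output (in the spirit of RevIN).
    def __init__(self, num_variates: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_variates))
        self.beta = nn.Parameter(torch.zeros(num_variates))

    def normalize(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, variates); statistics are computed per instance and per variate
        self.mean = x.mean(dim=1, keepdim=True).detach()
        self.std = torch.sqrt(x.var(dim=1, keepdim=True, unbiased=False) + self.eps).detach()
        return (x - self.mean) / self.std * self.gamma + self.beta

    def denormalize(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, horizon, variates); undo the affine transform, then restore the statistics
        return (y - self.beta) / (self.gamma + self.eps) * self.std + self.mean

# usage sketch: x_norm = revin.normalize(x); y = backbone(x_norm); y_hat = revin.denormalize(y)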
Non-Stationary Transformer <cit.> (Stationary for short in the following) proposes a simpler but more effective series stationarization technique that improves the predictive capability for non-stationary series without extra parameters. Specifically, for a sequence with T time stamps and C variates 𝐗 = {𝐗_1, 𝐗_2, ..., 𝐗_T}∈ℝ^T× C, the outline of Stationary <cit.> can be summarized as: μ_𝐱 = 1/T∑_i=1^T 𝐗_i, σ_𝐱^2 = 1/T∑_i=1^T (𝐗_i - μ_𝐱)^2, 𝐗^' = (𝐗 - μ_𝐱)/√(σ_𝐱^2 + ϵ), 𝐘^'=Model(𝐗^'), 𝐘̂ = √(σ_𝐱^2 + ϵ) 𝐘^' + μ_𝐱, where ϵ is a small constant for numerical stability, and μ_𝐱,σ_𝐱^2∈ℝ^1× C are the variate-specific mean and variance. To recover the distribution and non-stationarity of the original series, a de-normalization module is further used to augment the model output 𝐘^' with the mean and variance statistics of the inputs. The idea of stationarization and the above-mentioned techniques have been widely used in subsequent deep time series models <cit.>. The recent SAN <cit.> rethinks the nature of non-stationary data, splitting it into non-overlapping, equally-sized slices and performing normalization on each slice. Specifically, based on the evolving trends of statistical properties, SAN introduces a statistics prediction module to predict the distributions of future slices. §.§ Decomposition Decomposition <cit.>, as a conventional approach in time series analysis, can disentangle time series into several components with categorized patterns, and is particularly useful for exploring complex series variations. In previous work, diverse decomposition paradigms have been explored. §.§.§ Seasonal-Trend Decomposition Seasonal-trend decomposition <cit.> is one of the most common practices to make raw data more predictable, which can separate the series into several different components: trend, seasonal, cyclical, and irregular, namely 𝐗 = 𝐓 + 𝐂 + 𝐒 + 𝐈, where the trend component 𝐓 represents the overall long-term pattern of the data over time, the cyclical component 𝐂 reflects repeated but non-periodic fluctuations within the data, the seasonal component 𝐒 indicates the repetitive patterns over a fixed period, and the irregular component 𝐈 is the residual or remainder of the time series after the other components have been removed. The trend-seasonality decomposition can be achieved by using mathematical tools such as filters or exponential smoothing <cit.>. Previous statistical approaches mainly adopt the trend-seasonality decomposition as data pre-processing <cit.>. In deep models, Autoformer <cit.> first introduces the idea of decomposition into deep learning architectures and proposes a series decomposition block as a basic module to extract the seasonal and trend-cyclical parts of deep features and input series, whose computation process can be formalized as: 𝐗_𝒯 = AvgPool(Padding(𝐗)), 𝐗_𝒮 = 𝐗 - 𝐗_𝒯. The series decomposition block is concisely implemented based on a temporal average pooling layer with a padding operation to keep the sequence length unchanged. This design captures the trend 𝐗_𝒯, and the remainder is taken as the seasonal part 𝐗_𝒮. The proposed series decomposition block has been widely used in follow-up works <cit.> as a native building block of deep models to disentangle the underlying patterns of deep features. §.§.§ Basis Expansion Basis expansion is a mathematical method used to represent a function or a set of data points in terms of a new set of pre-defined functions.
These new functions form a basis for a function space, meaning any function in that space can be expressed as a linear combination of these basis functions. In the context of time series analysis, basis expansion is used to reveal complex non-linear temporal relationships by decomposing the time series into a combination of basic variations, which also enhances interpretability. As a representative model, N-BEATS <cit.> presents a hierarchical decomposition of time series by utilizing fully connected layers to produce expansion coefficients for both backward and forward forecasts. For the l-th block in the proposed hierarchical architecture, the operation can be written as follows: 𝐗_l = 𝐗_l-1 - 𝐗̂_l-1, (𝐗̂_l, 𝐘̂_l) = Block_l (𝐗_l), where 𝐗̂_l-1 is the backcast result, which restricts the block to approximate the input signal 𝐗_l-1; 𝐗_l then removes the well-estimated portion 𝐗̂_l-1 from the input 𝐗_l-1, thereby providing a hierarchical decomposition. 𝐘̂_l is the partial forecast based on the decomposed input 𝐗_l, and the final forecast 𝐘̂ = ∑_l 𝐘̂_l is the sum of all partial forecasts. Subsequently, N-HiTs <cit.> redefines N-BEATS by incorporating subsampling layers before the fully connected blocks, which enhances the input decomposition via multi-frequency data sampling and the future prediction via multi-scale interpolation. DEPTS <cit.> puts forward a novel decoupled formulation for periodic time series by introducing the periodic state as a hidden variable and then develops a deep expansion module on top of residual learning to conduct layer-by-layer expansions between observed signals and hidden periodic states. Similarly, DEWP <cit.> is also a stack-by-stack expansion model to handle multivariate time series data, where each stack consists of a variable expansion block to capture dependencies among multiple variables and a time expansion block to learn temporal dependencies. §.§.§ Matrix Factorization The above-mentioned two decomposition methods are proposed for univariate series or applied to multivariate series in a variate-independent way. Here, we discuss a factorization-based decomposition for multivariate series. Specifically, many multivariate time series in real-world scenarios can also be regarded as high-dimensional data. They can be formalized in the form of a matrix, whose rows correspond to variates and columns correspond to time points. Since variables in multivariate time series tend to be highly correlated, the data can often be reduced to a more compact space. Matrix factorization methods <cit.> work by decomposing the high-dimensional series data into the product of two matrices in a lower-dimensional latent space. For a multivariate time series 𝐗∈ℝ^T× C, as shown in Figure <ref>, the matrix can be approximated by the product of two lower-rank embedding matrices, 𝐗 ≈ 𝐗̄𝐅, in which 𝐅∈ℝ^k × C, 𝐗̄∈ℝ^T × k, and k is a hyperparameter. Besides the estimation, regularizers are used to avoid overfitting in the factorization. Going beyond the canonical design that takes the squared Frobenius norm as the regularizer, Temporal Regularized Matrix Factorization (TRMF) <cit.> designs an autoregressive-based temporal regularizer to describe temporal dependencies among latent temporal embeddings. Further, <cit.> extended TRMF with a new spatial autoregressive regularizer to estimate low-rank latent factors by simultaneously learning the spatial and temporal autocorrelations.
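As an illustration, the following minimal sketch factorizes a multivariate series 𝐗 ≈ 𝐗̄𝐅 by alternating least squares with only a squared Frobenius-norm (ridge) penalty; it omits the temporal and spatial regularizers of TRMF and its extensions, and the function name and defaults are illustrative assumptions.

import numpy as np

def factorize(x: np.ndarray, k: int, lam: float = 0.1, iters: int = 50):
    # Approximate x (T x C) as x_bar @ f with x_bar (T x k) and f (k x C),
    # using alternating least squares with a ridge penalty lam.
    T, C = x.shape
    rng = np.random.default_rng(0)
    x_bar = rng.standard_normal((T, k))
    f = rng.standard_normal((k, C))
    eye = lam * np.eye(k)
    for _ in range(iters):
        x_bar = x @ f.T @ np.linalg.inv(f @ f.T + eye)          # update temporal embeddings
        f = np.linalg.inv(x_bar.T @ x_bar + eye) @ x_bar.T @ x  # update variate factors
    return x_bar, f

# usage sketch: x_bar, f = factorize(series, k=4); reconstruction = x_bar @ f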
NoTMF <cit.> integrates the vector autoregressive process with differencing operations into the classical low-rank matrix factorization framework to better model real-world time series data with trend and seasonality. Eliminating the need for tuning regularization parameters, BTF <cit.> is a fully Bayesian model that integrates the probabilistic matrix factorization and vector autoregressive process into a single probabilistic graphical model. Instead of using an autoregressive-based temporal regularization, DeepGLO <cit.> utilizes a temporal convolution network for regularization to capture non-linear dependencies. LSTM-GL-ReMF <cit.> contains an LSTM-based temporal regularizer to learn complex long-term and short-term non-linear temporal correlations and a Graph Laplacian spatial regularizer <cit.> to capture spatial correlations. §.§ Fourier Analysis Fourier analysis <cit.> can convert a physical signal into the Fourier domain to highlight the inherent frequency properties of the original data and has been a well-acknowledged analysis tool in extensive areas. Since time series are usually recorded as a sequence of discrete time points by sampling the original continuous signals, Fourier analysis has become one of the mainstream tools in time series modeling and has been demonstrated favorable effectiveness and efficiency <cit.>. Introducing the Fourier domain not only augments the representation of the original series but also provides a global view since the frequency spectrum distribution, which can indicate essential periodic properties of time series. In practice, Fast Fourier Transform (FFT) <cit.> and Wavelet Transform (WT) <cit.> as the basic algorithms connecting the discrete temporal domain to the frequency domain, have gained increasing popularity in the modular design of deep time series models <cit.>. Existing approaches can be roughly divided into two categories: time-domain and frequency-domain modeling. §.§.§ Time-Domain Modeling The fundamental principle behind the Fourier transform is that sequential data can be decomposed and represented by a series of periodic signals. Consequently, it can be used to identify potentially dominant periods and their corresponding frequencies in the data by analyzing the highest amplitude components. As a typical practice, TimesNet <cit.> employs the Fast Fourier Transform (FFT) to extract the most significant frequencies with the highest amplitude values, subsequently reshaping the 1D time series data into a 2D space based on the identified periods for better representation learning. Following TimesNet, PDF <cit.> posits that frequencies with larger values facilitate a more discernible distinction between long-term and short-term relationships. In addition to exploiting the information of the sequence obtained by the Fourier Transformer, some works attempt to perform efficient computation through the Fast Fourier Transformer. Auto-correlation is a fundamental concept in time series analysis that measures the dependence between observations at different time points within a sequence of data. The Wiener-Khinchin theorem <cit.> provides a mathematical relationship between the auto-correlation function and the power spectral density (PSD) of a stationary random process, where the auto-correlation function represents the inverse Fourier transform of the PSD. 
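The following minimal NumPy sketch illustrates both uses of the FFT described above: selecting dominant periods from the amplitude spectrum, as in TimesNet-style period estimation, and obtaining the auto-correlation function as the inverse transform of the power spectrum via the Wiener-Khinchin relation; the function names and the top-k convention are illustrative assumptions.

import numpy as np

def dominant_periods(x: np.ndarray, k: int = 2) -> np.ndarray:
    # Return the k periods whose frequencies carry the largest amplitudes (x: 1D, length T).
    spec = np.abs(np.fft.rfft(x - x.mean()))
    spec[0] = 0.0                          # ignore the zero-frequency (DC) component
    top = np.argsort(spec)[-k:][::-1]      # indices of the k largest amplitudes
    return len(x) // top                   # period = T / frequency index

def autocorrelation(x: np.ndarray) -> np.ndarray:
    # Auto-correlation as the inverse FFT of the power spectrum (Wiener-Khinchin).
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    acf = np.fft.irfft(power, n=len(x))
    return acf / acf[0]                    # normalize so that the lag-0 value is 1

# t = np.arange(256); x = np.sin(2 * np.pi * t / 32) + 0.1 * np.random.default_rng(0).standard_normal(256)
# dominant_periods(x) then contains 32 among its leading entries, and autocorrelation(x) peaks near lag 32.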
Taking the data as a real discrete-time process, Autoformer <cit.> proposes an Auto-Correlation mechanism with an efficient Fast Fourier Transforms to capture the series-wise correlation. The frequency-domain representation provides information about the amplitudes and phases, where low-frequency components correspond to slower variations or trends in the signal, and high-frequency components capture fine details or rapid variations. A significant body of work has focused on leveraging frequency-domain information to enhance the model's capability in capturing temporal dependencies. FiLM <cit.> introduces Frequency Enhanced Layers (FEL) which combine Fourier analysis with low-rank approximation to keep the part of the representation related to low-frequency Fourier components and the top eigenspace to effectively reduce the noise and boost the training speed. FITS <cit.> integrates a low-pass filter (LPF) to eliminate high-frequency components above a specified cutoff frequency, thereby compressing the model size while preserving essential information. From an opposite idea, FEDformer <cit.> posits that retaining only low-frequency components is insufficient for time series modeling, as it may dismiss important fluctuations in the data. Based on the above considerations, to capture the global view of time series, FEDformer represents the series by randomly selecting a constant number of Fourier components, including both high-frequency and low-frequency components. §.§.§ Frequency-Domain Modeling Building on time-frequency analysis in signal processing, several approaches have been developed to study time series simultaneously in both the time and frequency domains. ATFN <cit.> comprises an augmented sequence-to-sequence model that learns the trending features of complex non-stationary time series, along with a frequency-domain block designed to capture dynamic and intricate periodic patterns. TFAD <cit.> introduces a time-frequency analysis-based model that employs temporal convolutional networks to learn both time-domain and frequency-domain representations. Some works have developed specialized deep learning architecture to process the frequency domain of time series. STFNet <cit.> applies Short-Time Fourier Transform to input signals and applies filtering, convolution, and pooling operations directly in the frequency domain. StemGNN <cit.> combines Graph Fourier Transform (GFT) and Discrete Fourier Transform to model both inter-series correlations and temporal dependencies. EV-FGN <cit.> uses a 2D discrete Fourier transform on the spatial-temporal plane of the embeddings and performs graph convolutions for capturing the spatial-temporal dependencies simultaneously in the frequency domain. FreTS <cit.> leverages Discrete Fourier Transform (DFT) to transform the data into the frequency domain spectrum and introduces frequency domain MLPs designed for complex numbers with separated modeling for the real parts and the imaginary parts. FCVAE <cit.> integrates both the global and local frequency features into the condition of Conditional Variational Autoencoder (CVAE) concurrently. Recent TSLANet <cit.> propose a lightweight Adaptive Spectral Block (ASB) to replace the self-attention mechanism, which is achieved via Fourier-based multiplications by global and local filters. FourierDiffusion <cit.> explores extending the score-based SDE formulation of diffusion to complex-valued data and therefore implements time series diffusion in the frequency domain. 
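To illustrate this line of work, the following minimal sketch performs FITS-style low-pass filtering in the frequency domain and a naive frequency-domain extension of a series; the fixed rescaling stands in for the learned complex-valued layer of FITS, and the cutoff choice and function names are illustrative assumptions (the sketch assumes cutoff <= len(x)//2 + 1).

import numpy as np

def low_pass(x: np.ndarray, cutoff: int) -> np.ndarray:
    # Keep only the `cutoff` lowest-frequency components of a 1D series.
    spec = np.fft.rfft(x)
    spec[cutoff:] = 0.0                    # discard high-frequency content
    return np.fft.irfft(spec, n=len(x))

def extend_in_frequency(x: np.ndarray, horizon: int, cutoff: int) -> np.ndarray:
    # Naively extend a series by placing its low-passed spectrum into a longer
    # spectrum and rescaling amplitudes to the new length (a fixed mapping in
    # place of a learned complex-valued layer).
    L = len(x)
    spec = np.fft.rfft(x)[:cutoff]
    longer = np.zeros((L + horizon) // 2 + 1, dtype=complex)
    longer[:cutoff] = spec * (L + horizon) / L
    return np.fft.irfft(longer, n=L + horizon)[-horizon:]   # the extrapolated segment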
§ MODEL ARCHITECTURES As we have discussed in Section 2, the time series model needs to unearth the intrinsic temporal dependencies and variate correlations that lie in observations. In this section, we provide a technical review of the existing deep time series models. As we have presented in Figure <ref>, existing works can be classified into five categories based on their backbone architecture, namely MLP-based, RNN-based, CNN-based, GNN-based, and Transformer-based. §.§ Multi-Layer Perceptrons As a representation of traditional statistical time series models, the Auto-regressive (AR) model assumes that the model output depends linearly on its own historical values. Inspired by the remarkable performance of auto-regressive models, Multi-Layer Perceptrons (MLP) have become a popular architecture for modeling time series data. As a representative work of linear-based models, N-BEATS <cit.> is a pure MLP-based deep time series model without any time-series-specific knowledge to capture the temporal patterns in time series. Specifically, as described in Equ. (<ref>) N-BEATS consists of deep stacks of fully-connected layers with two residual branches in each layer, one is for the backcast prediction and the other one is the forecast branch. Extending the idea of neural basis expansion analysis, N-HiTs <cit.> use multi-rate signal sampling and hierarchical interpolation and N-BEATSx <cit.> incorporate exogenous variables to enhance the prediction. Recent research by DLinear <cit.>, also referred to as LTSF-Linear, challenges the effectiveness of complicated deep architecture in temporal modeling. It argues a simple linear regression in the raw space that achieves remarkable performance in both modeling and efficiency. As illustrated in Figure <ref>, prevalent MLP-based deep time series models consist of simple linear layers primarily designed for forecasting tasks. Also lightweight but effective, FITS <cit.> advocates time series analysis can be treated as interpolation exercises within the complex frequency domain and further introduces a complex-valued linear layer to learn amplitude scaling and phase shift in the frequency domain. Inspired by MLP-Mixer <cit.> in computer vision, several works have attempted to utilize MLPs to model both temporal and variate dependencies. TSMixer <cit.> contains interleaving time-mixing and feature-mixing MLPs to extract information from different perspectives. To better model the global dependencies in time series data, FreTS <cit.> investigates the learned patterns of frequency-domain MLPs which are operated on both inter-series and intra-series scales to capture channel-wise and time-wise dependencies in multivariate data. Recent works have moved beyond using simple linear layers over discrete time points. TimeMixer suggests that time series exhibit distinct patterns in different sampling scales and proposes an MLP-based multiscale mixing architecture. TiDE <cit.> incorporates exogenous variables to enhance the time series prediction. Based on Koopman theory and Dynamic Mode Decomposition (DMD) <cit.>, which is a dominant approach for analyzing complicated dynamical systems, Koopa <cit.> hierarchically disentangles dynamics through an end-to-end predictive training framework and can utilize real-time incoming series for online development. §.§ Recurrent Neural Networks Recurrent Neural Networks (RNNs) are specifically designed to model sequential data <cit.>, such as natural language processing <cit.> and audio modeling <cit.>. 
Since time series are also serial in nature, RNNs have emerged as a popular choice for analyzing time series data <cit.>. Existing RNN-based deep time series models focus on combating the gradient vanishing problem caused by the vanilla recurrent structure and modeling the mutual correlation among multivariate variables. Previous works <cit.> use variants of RNN to model temporal dependencies. LSTNet <cit.> combines the recurrent structure with the convolutional layer to capture both the short-term local dependency between variables and long-term patterns for time series. Moreover, a novel recurrent-skip component based on the periodic pattern is introduced to alleviate gradient vanishing in modeling long-term dependencies. Similarly, DA-RNN <cit.> combines the recurrent unit with a dual-stage attention mechanism to adaptively extract relevant series at each time step. Beyond deterministic forecasts, DeepAR <cit.> proposes an auto-regressive recurrent network model to predict the probability distribution of further time points. Technologically, it learns not only the seasonal behavior with time series but dependencies on given covariates across time series, allowing the model to make predictions even when there is little or no historical data. Also based on Markovian state representation, the State Space Model (SSM) <cit.> is another classical mathematical framework that captures the probabilistic dependence between observed measurements in stochastic dynamical systems. Concretely, a single-input single-output (SISO) linear state space model is defined as follows: d/ dtx(t) = 𝐀x(t) + 𝐁u(t), y(t) = 𝐂x(t) + 𝐃u(t), where u(t), x(t), y(t) are input signal, latent state, and output signal respectively. The system is characterized by the matrices 𝐀∈ℝ^N × N, 𝐁∈ℝ^N × 1, 𝐂∈ℝ ^1 × N, 𝐃∈ℝ^1 × 1 can be learned by the deep neural network. SSMs have proven their effectiveness and efficiency in processing well-structured time series data, but traditional approaches have to refit each time series sample separately and therefore cannot infer shared patterns from a dataset of similar time series. With the rise of deep learning models, modern SSMs are often implemented in a recurrent manner. By adapting and propagating a deterministic hidden state, RNNs are able to represent long-term dependencies in continuous data which offer an alternative to classical state space models. Therefore, some work <cit.> have attempted to fuse classical state space models with deep neural networks. Representative like Deep State Spaces Model (DSSM) <cit.>, using a recurrent neural network (RNN) to parametrize a particular linear SSM, takes advantage of incorporating structural assumptions and learning complex patterns. Structured State Space sequence model (S4) <cit.> introduces a new parameterization for the SSM by conditioning matrix 𝐀 with a low-rank correction, allowing it to be diagonalized stably, which empowers the model with better long-term modeling capacity. Similar to S4, LS4 <cit.> is a generative model with latent space evolution following a state space ordinary differential equations (ODE). Recent work on Mamba <cit.> has emerged as a powerful method for modeling long-context sequential data while scaling linearly with sequence length. Utilizing a simple selection mechanism that parameterizes the SSM parameters based on the input, Mamba can discern the importance of information in a manner similar to the attention mechanism, posing a potentially effective way to sequential modeling. 
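To make the recurrence concrete, the following minimal sketch simulates the SISO state space model above after a bilinear discretization with step Δ; this is an illustrative simplification rather than the structured parameterizations actually used by S4 or Mamba.

import numpy as np

def simulate_ssm(u: np.ndarray, A: np.ndarray, B: np.ndarray,
                 C: np.ndarray, D: float, delta: float = 1.0) -> np.ndarray:
    # Discretize the continuous SISO system with a bilinear transform and run the
    # resulting recurrence over the scalar input signal u.
    # A: (N, N), B: (N, 1), C: (1, N).
    N = A.shape[0]
    I = np.eye(N)
    inv = np.linalg.inv(I - (delta / 2.0) * A)
    A_bar = inv @ (I + (delta / 2.0) * A)      # discrete state matrix
    B_bar = inv @ (delta * B)                  # discrete input matrix
    x = np.zeros((N, 1))
    outputs = []
    for u_t in u:                              # sequential (recurrent) evaluation
        x = A_bar @ x + B_bar * u_t
        outputs.append((C @ x).item() + D * u_t)
    return np.array(outputs)

# usage sketch: y = simulate_ssm(np.sin(np.arange(100) / 5.0), A=-np.eye(4),
#                                B=np.ones((4, 1)), C=np.ones((1, 4)), D=0.0, delta=0.1)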
§.§ Convolutional Neural Networks Since the semantic information of time series is mainly hidden in the temporal variation, Convolutional neural networks (CNN) <cit.> have become a competitive backbone for their ability to capture local features and pattern recognition. By leveraging convolutions and hierarchical feature extraction, CNNs have shown remarkable success in various computer vision tasks, such as image classification <cit.>, segmentation <cit.> and object detection <cit.>. Considering the temporal continuity of time series data, previous works <cit.> apply one-dimensional CNN (1D CNN) to capture the local patterns of time series data. Recent SCINet<cit.> applies normal convolutions with a hierarchical downsample-convolve-interact architecture to capture dynamic temporal dependencies at different temporal resolutions of time series data. Inspired by the idea of masked convolution <cit.>, Wavenet<cit.> introduces causal convolution and dilated causal convolution to model long-range temporal causality. Similar to Wavenet, Temporal Convolutional Networks (TCN) <cit.> uses a stack of dilated convolutional kernels with progressively enlarged dilation factors to achieve a large receptive field. However, the limited receptive field of TCN makes it difficult for them to capture global relationships in time series data. Based on TCN, MICN<cit.> is a local-global convolution network that combines different convolution kernels to model temporal correlation from a local and global perspective. ModernTCN <cit.> boosts the traditional TCN to capture cross-time and cross-variable dependency by DWConv and ConvFFN separately. Considering that DWConv is proposed to learn temporal information, it is operated variate-independently to learn the temporal dependency of each univariate time series. Beyond 1D space, motivated by the periodicity properties of time series data, TimesNet <cit.> transforms the 1D time series 𝐗_1𝐃 data into a set of 2D tensors 𝐗_2D = {𝐗^1_2D, ..., 𝐗^k_2D} in each TimesBlock based on the estimated period lengths, where the inter-period variations are presented in tensor columns and inner-period ones are shown in tensor rows. Here k is a hyperparameter, corresponding to multiple 1D-to-2D transformations with different periods. Then it applies inception block <cit.> to process the transformed 2D tensors, which can be summarized as: 𝐗^i_2D =Reshape(Padding(𝐗_1D)), i∈{1,⋯, k} 𝐗^i_2D =Inception(𝐗^i_2D), i∈{1,⋯, k} 𝐗^i_1D =Trunc(Reshape(𝐗^i_2D)), i∈{1,⋯, k}, where 𝐗^i_2D is the i-th transformed 2D tensor. After passing through the inception block Inception(·), the learned 2D representations are transformed back to 1D for aggregation. These transformations enable TimesNet to effectively capture both multi-scale intraperiod-variation and interperiod-variation simultaneously. Furthermore, by leveraging hierarchical convolutional layers, TimesNet is capable of learning both high-level and low-level representations, facilitating comprehensive time series analysis across four distinct tasks. §.§ Graph Neural Networks Analyzing multivariate time series data is often challenging due to the complex and often non-linear correlations between variables. To address this challenge, Graph neural networks (GNNs) <cit.> have been widely adopted in time series analysis. 
By modeling multivariate data as a spatiotemporal graph, where each node represents a variable, GNNs can extract relationships among neighboring nodes and capture the temporal evolution of node attributes over time, thereby providing a robust framework for understanding the underlying dynamics of multivariate time series. The core goal of GNN architecture is to model the underlying topological relations in multivariate data, therefore existing GNN-based works can be roughly divided into two categories based on whether graph structure is part of the input into the model. DCRNN <cit.> models the spatial dependency of traffic as a diffusion process on a directed graph and uses diffusion convolution to capture the spatial dependency, alongside a recurrent neural network to capture temporal dynamics. Similarly, STGCN <cit.> integrates graph convolutional networks to model the spatial dependencies among traffic sensors with temporal convolutions to capture the temporal dependencies in the traffic time series data. Graph WaveNet <cit.> combines graph convolution with dilated casual convolution and learns an adaptive dependency matrix through node embedding, enabling the model to automatically capture hidden spatial dependencies in spatial-temporal graph data. Similarly, AGCRN <cit.> enhances the traditional graph convolutional network with node adaptive parameter learning and data-adaptive graph generation modules, allowing for the automatic capture of spatial and temporal correlations without a pre-defined graph structure. MTGNN <cit.> introduces a graph learning layer to adaptively learn the graph adjacency matrix, thereby capturing hidden relationships among multivariate time series data. STFGNN <cit.> employs a Spatial-Temporal Fusion Graph Neural Network with a generated temporal graph to learn localized spatial-temporal heterogeneity and global spatial-temporal homogeneity. StemGNN <cit.> leverages the advantages of both the Graph Fourier Transform (GFT) and the Discrete Fourier Transform (DFT), modeling multivariate time series in the spectral domain. §.§ Transformers In the view of the great success in the field of natural language processing <cit.> and computer vision <cit.>, Transformers have also emerged as a powerful backbone for time series analysis. Benefiting from the self-attention mechanism <cit.>, Transformer-based models can capture long-term temporal dependencies and complex multivariate correlations. As overviewed in Figure <ref>, existing Transformer-based time series models can be categorized based on the granularity of representation used in the attention mechanism, namely point-wise, patch-wise, and series-wise approaches. §.§.§ Point-wise Dependency Due to the serial nature of time series, most existing Transformer-based works use a point-wise representation of time series data and apply attention mechanisms to capture the correlations among different time points. Among these point-wise modeling approaches, Data Embedding is a crucial component that maps the value of time series data to a high-dimensional representation. 
Given time series 𝐗∈ℝ ^T × C with corresponding time stamp information 𝐗^mark∈ℝ ^T × D, where C is the number of variates and D is the number of timestamp types, the embedding module can be summarized as follows: 𝐇_t = Projection(𝐗_t) + PE(𝐗_t) + TE(𝐗^mark_t), where 𝐇_t∈ℝ^d_model and d_model is the dimension of the embedded representation, the value projection Projection: ℝ^C ↦ℝ^d_model and the timestamp embedding TE: ℝ^D ↦ℝ^d_model are implemented by channel-dimension linear layers, and PE(·) denotes the absolute position embedding to preserve the sequential context of the input series. To better apply the Transformer architecture to the time series domain, researchers have explored two aspects: designing pre-processing modules and modifying the attention mechanisms. As discussed in Section 3.1, RevIN <cit.> and Stationary <cit.> achieved superior performance by introducing Normalization and De-Normalization modules before and after the vanilla Transformer. Besides, Stationary <cit.> further proposes De-stationary Attention to avoid the over-stationarization problem. Given that the canonical attention approach leads to a quadratic computational complexity, numerous efficient Transformers <cit.> have been proposed to mitigate the complexity caused by point-wise modeling, which are summarized in Table <ref>. LogSparse <cit.> proposes Convolutional Self-Attention to replace canonical attention by employing causal convolutions to produce queries and keys in the self-attention layer. Informer <cit.> introduces a Query Sparsity Measurement, where a larger value indicates a higher chance of containing the dominant information in self-attention. Based on the proposed sparsity measurement, it further designs a ProbSparse self-attention that uses only the top queries with the largest measurement values, which reduces the quadratic complexity in time and memory. Pyraformer <cit.> constructs a multi-resolution C-ary tree and develops a Pyramidal Attention Mechanism, in which every node can only attend to its neighboring, adjacent, and children nodes. With the calculated attention mask, Pyraformer can capture both short- and long-term temporal dependencies with linear time and space complexity. §.§.§ Patch-wise Dependency Patch-based architectures play a crucial role in Transformer models for both Natural Language Processing (NLP) <cit.> and Computer Vision (CV) <cit.>. Since point-wise representations are insufficient to capture local semantic information in temporal data, several studies <cit.> have been devoted to exploring patch-level temporal dependencies within time series data. The pioneering work Autoformer <cit.> proposes an Auto-Correlation Mechanism, which captures the series-wise dependencies of time series to replace canonical point-wise self-attention. Based on stochastic process theory <cit.>, Auto-Correlation utilizes the Fast Fourier Transform to discover the time-delay similarities between different sub-series. A time delay module is further proposed to aggregate similar sub-series from underlying periods instead of the relations between scattered points, which is the first exploration of sub-series-level modeling in Transformer-based models. Different from modifying the attention mechanism, most recent works utilize patch-wise representations of time series data and perform a self-attention mechanism to capture patch-wise dependencies <cit.>.
PatchTST <cit.> and follow-up works split time series 𝐗 into a sequence of overlapped patches and embed each patch following: {𝐏_1, 𝐏_2, ..., 𝐏_N} = Patchify(𝐗), 𝐇_i = PatchEmbed(𝐏_i) + 𝐖_ pos^i. Assume P, N is patch length and the corresponding number of patches split, and 𝐏_i denotes the i-th patch with sequence length P. The patches are mapped to the latent space through a temporal linear projection PatchEmbed: ℝ^ P ↦ℝ^d_model and a learnable position embedding 𝐖_ pos∈ℝ ^ d_model× N. Based on the vanilla attention mechanism, PatchTST <cit.> learns the patch-wise dependencies. Going beyond PatchTST, recent Pathformer <cit.> proposes a multi-scale Transformer-based model with adaptive pathways. Based on the patch division of different scales, the adaptive pathways select the patch sizes with the top K weights generated by the router to capture multi-scale characteristics. The success of PatchTST also benefits from channel-independence design, where each temporal patch-level token only contains information from a single series. In addition to capturing the patch-level temporal dependencies within one single series, recent approaches <cit.> have endeavored to capture interdependencies among patches from different variables over time. Crossformer <cit.> introduces a Two-Stage Attention layer containing a Cross-Time Stage and a Cross-Dimension Stage to efficiently capture the cross-time and cross-variate dependencies between each patch token. For the obtained embedded vector 𝐇∈ℝ ^ N × C × d_ model, the overall attention stage can be described as follow: 𝐙^ time = MSA^ time( 𝐇, 𝐇, 𝐇) 𝐁 = MSA_1^ dim (𝐑, 𝐙^ time, 𝐙^ time) 𝐙^ dim = MSA_2^ dim (𝐙^ time, 𝐁, 𝐁), where 𝐑∈ℝ ^𝐍×𝐂× d_model is a learnable vector array used as a router to gather information from all dimensions and then distribute the gathered information. §.§.§ Series-wise Dependency Further expanding the receptive field, there are also some works that attempt to use the tokenization of the whole time series to capture inter-series dependencies. iTransformer <cit.> introduce VariateEmbed to multivariate data, and for i-th variable 𝐗^(i), it can be simply formulated as follows: 𝐇^(i) = VariateEmbed(𝐗^(i)) where VariateEmbed: ℝ^T →ℝ^d_ model is instantiated as trainable linear projector. Based on the global representations of each series, iTransformer utilizes the vanilla Transformer without any architectural modifications to capture mutual correlations in multivariate time series data. Similarly, TimeXer<cit.> focuses on forecasting with exogenous variables and utilizes patch-level and series-level representations for endogenous and exogenous variables, respectively. Additionally, an endogenous global token is introduced to TimeXer, which serves as a bridge in-between and therefore captures intra-endogenous temporal dependencies and exogenous-to-endogenous correlations jointly. § TIME SERIES LIBRARY Time series analysis has emerged as an important research area, attracting significant attention from both academia and industry. Recently, extensive exploration of deep learning based methods for time series analysis has resulted in significant advances. However, the issue of fair benchmarking poses a pressing challenge in this domain. The absence of fair, rational, and comprehensive benchmarks can lead to biased comparisons between different methods and hinder accurate evaluation of their effectiveness, potentially inflating domain advances or hindering practical applications. 
This presents a substantial obstacle to understanding advances and fostering robust development within the field. In the domain of time series analysis, several benchmarks have been proposed, such as DGCRN <cit.>, LibCity <cit.>, DL-Traff <cit.>, TS-bench <cit.>, and BasicTS <cit.>. More specifically, Autoformer <cit.> proposed a standard long-term forecasting benchmark covering different practical applications. Further, to verify the generality of different time series analysis models, TimesNet <cit.> builds a more comprehensive model generalization benchmark covering five mainstream time series analysis tasks. However, these benchmarks typically have some limitations. One issue with current time series benchmarks is their limited coverage of time series analysis tasks and specific domains, which limits their practical applications. Moreover, these benchmarks often fail to provide detailed discussions and comprehensive summaries of task types, model architectures, and specific baseline methods. As a result, they do not effectively guide the design of more efficient time series analysis methods or drive further development in the field. To effectively address these issues, we introduce and implement Time Series Library (TSLib), a benchmark for fair and comprehensive comparing and evaluating the performance of deep time series models across various time series analysis tasks. As shown in Figure <ref>, TSLib encompasses a unified model experiment pipeline, standardized evaluation protocols, extensive and diverse real-world datasets, mainstream and advanced time series analysis models, and unified experimental validation and analysis process. In our Time Series Library, we meticulously followed the official codes and implemented 24 widely used and advanced deep time series analysis models. These models are derived from four canonical deep learning architectures. Users can choose from these models based on their specific practical usage scenarios. The code is available at https://github.com/thuml/Time-Series-Libraryhttps://github.com/thuml/Time-Series-Library. As follows, we will provide a detailed description of our TSLib, including the design and implementation principles (Section <ref>), the evaluation protocols and metrics (Section <ref>), the dataset descriptions (Section <ref>), and the main results of models with different architectures (Section <ref>). §.§ Design and Implementation Principle TSLib is designed based on the well-established factory pattern and implements a unified interface between data and model objects, thus enabling a clear separation between deep model creation and usage, promoting modularity and flexibility. By loading different data and model objects and combining specific task heads during model training, TSLib enables different datasets and models to be shared and extended, allowing easy switching between various time series analysis tasks. These design and implementation principles provide enhanced flexibility and scalability for our TSLib. Furthermore, as illustrated in Figure <ref>, TSLib introduces a unified experimental pipeline covering the overall process of the model training and evaluation, which includes data source, data processing, model training and analysis, and model performance evaluation. Data Source Our TSLib provides extensive support for a wide range of diverse and multi-type datasets in a variety of formats, including ".csv", ".npz", ".txt", etc. 
As shown in Figure <ref> and Table <ref>, TSLib currently supports more than 30 datasets with different sampled frequencies across four mainstream time series analysis tasks, all derived from real-world scenarios in domains such as energy, transportation, economics, weather, and medicine, etc. Moreover, TSLib excels in scalability, allowing for the effortless integration of new data sources of different data types. Data Processing Data processing plays a pivotal role in guaranteeing stable training within the realm of time series analysis. Within the Time Series Library, a multitude of data processing steps are conducted, including time window splitting, data batch generation, etc. Subsequently, the raw data is partitioned into separate sets for training, validation, and testing purposes, enabling streamlined model training and equitable comparisons. These steps serve as indispensable prerequisites for attaining precise and dependable results across a range of diverse time series analysis tasks. Moreover, our TSLib provides additional support for numerous crucial and effective data processing strategies based on different model design principles <cit.>, <cit.>, <cit.>, <cit.> to enhance model performance and training efficiency. We encapsulate these various design strategies within our basic data processing layer, encompassing techniques such as data normalization, time-frequency decomposition, Fourier analysis, and more. When utilizing TSLib, users have the flexibility to select these strategies to improve training effect based on their specific requirements and objectives. Model Training and Analysis After the data processing phase, the raw time series data is transformed into the desired format for model training. Model training forms the crux of the entire experiment pipeline, where we fine-tune the model parameters based on the input to predict the output with minimal error. Our primary goal during model training is to obtain the best possible trainable parameters that result in a significant improvement in model performance. Each model has its own unique design and training objective. The model analysis procedure is to determine the optimal model parameters by comparing the correlation between the training and validation losses. Our TSLib includes complete log printing and result storage functions record and evaluate the training process. By employing rational model analysis techniques, we can efficiently obtain models with superior performance and stronger generalization. Performance Evaluation Model evaluation is a crucial step in verifying the effectiveness and generalization of trained time series models. It involves model prediction and performance evaluation, providing insights into the efficacy of the trained model. In TSLib, we provide evaluation support for four mainstream time series analysis tasks: classification, imputation, forecasting (long-term or short-term), and anomaly detection. Each task comes with its specific evaluation metric, enabling a comprehensive assessment of the performance of models. These metrics play a crucial role in determining the effectiveness of the trained model and its suitability for the intended task. §.§ Evaluation Protocols In order to conduct a fair and comprehensive model performance verification, our Time Series Library is designed to provide standardized evaluation protocols for four mainstream time series analysis tasks following <cit.>. 
The primary goal of these standardized and unified evaluation protocols is to quantify the effectiveness of different time series analysis methods with varying architectures. Additionally, they provide valuable insights into the strengths and limitations of different methods across diverse time series analysis tasks. By establishing these standardized evaluation protocols, we aim to promote fair comparisons between different methods and improve our understanding of their performance in various time series analysis scenarios. For long-term forecasting and imputations, we rely on Mean Square Error (MSE) and Mean Absolute Error (MAE) as the primary evaluation metrics. These metrics help us accurately assess the accuracy of our predictions and imputations. For short-term forecasting, we use the Symmetric Mean Absolute Percentage Error (SMAPE) and Mean Absolute Scaled Error (MASE) as metrics, which focus on absolute errors and reduce the impact of outliers, providing reliable evaluations of forecast accuracy across different datasets and methodologies. In the case of time series classification tasks, we utilize Accuracy as the evaluation metric. Accuracy measures the overall prediction performance by calculating the ratio of correctly classified samples to the total number of samples. For anomaly detection, we employ the F1 score to validate the identification of abnormal values. The F1 score represents a balanced combination of precision and recall, offering a comprehensive assessment of a classifier's performance, especially when dealing with imbalanced classes in the context of anomaly detection. §.§ Datasets TSLib includes a variety of mainstream datasets across different numbers of samples and categories, a richness of tasks, and a diversity of domains. In this section, we will focus on introducing representative datasets from various time series analysis tasks included in the Time Series Library. Classification Time series classification aims to assign a label or category to a time series based on its temporal features. To evaluate this capability, we selected ten multivariate datasets from the UEA Time Series Classification Archive <cit.>, supported in our TSLib. These datasets cover a range of practical tasks, including gesture, action, audio recognition, and medical diagnosis through heartbeat monitoring. We pre-processed the datasets according to the descriptions provided in <cit.>. Detailed dataset descriptions are shown in Table <ref>. Imputation Due to glitches, the collected time series data may contain partially missing values, posing a challenge for time series analysis. Therefore, time series imputation is a crucial task in real-world applications, aiming to fill in missing values within a time series based on contextual observations from the data. For our benchmark, we selected Electricity Transformer Temperature (ETT) <cit.>, Electricity, and Weather to evaluate the performance of time series imputation tasks with different missing ratios. Forecasting Time series forecasting is an essential task in time series analysis, and it has been widely explored in academic and industry domains. By leveraging historical patterns and trends, the model can predict future values or trends of a time series. Time series forecasting can be broadly divided into two types: long-term forecasting and short-term forecasting. 
For the long-term time series forecasting task, a wide range of datasets are included in our benchmark, including Electricity Transformer Temperature (ETT), Electricity, Weather, Traffic, Exchange, and Illness (ILI). For the short-term forecasting task, we selected the M4 dataset <cit.>, which comprises six sub-datasets with varying sampling frequencies and domains. Anomaly Detection Anomaly detection involves identifying unusual or abnormal patterns in a time series. These anomalies can indicate critical events, faults, or outliers that require attention or further investigation. There are some mainstream anomaly detection datasets supported in TSLib, such as Server Machine Dataset (SMD) <cit.>, Mars Science Laboratory rover (MSL) <cit.>, Soil Moisture Active Passive satellite (SMAP) <cit.>, Secure Water Treatment (SWaT) <cit.>, and Pooled Server Metrics (PSM) <cit.> which are collected from a variety of industrial scenarios. §.§ Main Results To examine the strengths and limitations of various methods in mainstream time series analysis tasks, we select twelve representative models from our TSLib. These models encompass four popular deep model architectures and address tasks, including long-term and short-term forecasting, classification, imputation, and anomaly detection. Baselines To conduct a fair comparative analysis and thoroughly explore the effectiveness of different model architectures in various time series analysis tasks, we conduct comparative comparison experiments using state-of-the-art models designed based on different deep architectures. As shown in Figure <ref>, we select several advanced and representative Transformer-based models: iTransformer <cit.>, PatchTST <cit.>, Autoformer <cit.>, Non-Stationary Transformer (Stationary) <cit.>, and FEDformer <cit.> to verify the performance. Additionally, we consider TimesNet <cit.> and SCINet <cit.> as the CNN-based models to compare. For RNN-based models, we included the novel and effective Mamba <cit.>. Finally, we include DLinear <cit.>, N-BEATS <cit.>, and TiDE <cit.> as representative MLP-based models for analysis. It is important to mention that TiDE <cit.> is designed to be dependent on specific timestamps, and cannot be easily adapted to part tasks without timestamps, such as short-term forecasting, anomaly detection, and classification. Unified Experimental Settings For the long-term forecasting task, we conducted two experimental settings to ensure a fair and comprehensive comparison: unified hyperparameter and hyperparameter searching. For the unified hyperparameter, we conducted experiments using a range of standardized hyperparameters set across different datasets. This allows us to accurately evaluate the relative performance of time series models with different deep architectures while keeping other factors constant. As for the “hyperparameter searching” scenario, we conducted separate hyperparameter searches for different model architectures and time series datasets. This approach enabled us to identify the best performance of different time series analysis models. By employing above both settings, we obtain a comprehensive understanding of the forecasting performance of different time series models. For the remaining tasks, we maintained the standard experimental settings as outlined in <cit.> to validate the performance of different time series models. 
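Before turning to the results, the task-specific metrics used in the evaluation protocols above can be written down in a few lines. The conventions below (e.g., the SMAPE scaling and the seasonal lag m in MASE) are common choices and may differ in detail from the library's exact implementation.

import numpy as np

def mse(y, yhat):
    return np.mean((y - yhat) ** 2)

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def smape(y, yhat, eps=1e-8):
    # symmetric percentage error, bounded and less sensitive to outliers
    return 100.0 * np.mean(2.0 * np.abs(y - yhat) / (np.abs(y) + np.abs(yhat) + eps))

def mase(y, yhat, y_insample, m=1):
    # scale by the in-sample error of a naive seasonal forecast with period m
    scale = np.mean(np.abs(y_insample[m:] - y_insample[:-m]))
    return np.mean(np.abs(y - yhat)) / scale

def accuracy(labels, preds):
    return np.mean(labels == preds)

def f1_score(labels, preds, eps=1e-8):
    tp = np.sum((preds == 1) & (labels == 1))
    fp = np.sum((preds == 1) & (labels == 0))
    fn = np.sum((preds == 0) & (labels == 1))
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)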
Overall Results Based on the overall results of the models across different architectures in Figure <ref>, we surprisingly find that the MLP-based models, which are generally simpler and have lower computational overhead, perform well on the time series forecasting task. However, these models appear to be less effective in other types of tasks, which requires the model to learn more informative representations. On the contrary, the CNN-based models demonstrate more comprehensive capabilities and excel in classification, imputation, and anomaly detection tasks. The RNN-based models, while performing well on anomaly detection tasks, show limited effectiveness compared to other model architectures. In contrast, the Transformer-based models have demonstrated highly competitive performance across various time series analysis tasks. This can be attributed to the powerful data modeling capabilities inherent in the transformer architecture, which contribute to its overall and consistently superior performance across diverse time series analysis tasks. It further shows that Transformer-based models hold significant research value and application potential in the field of time series analysis and have emerged as a particularly promising option in the time series domain. As illustrated in Figure <ref>, we have also included more detailed results and a top three performance leaderboard for four representative time series analysis tasks. These results clearly show that the Transformer-based models, namely iTransformer <cit.> and PatchTST <cit.>, exhibit superior forecasting capabilities compared to other models for both long-term and short-term forecasting tasks. This further proves that it is of great significance and value to explore different modeling methods of temporal tokens in time series. Additionally, TimesNet <cit.> shows a more comprehensive and effective performance covering time series classification, imputation, and anomaly detection tasks. It has pioneered a milestone in the general time series analysis model. We believe that TSLib can provide useful start code, valuable insights on model properties and model selection guidance for future research and real-world applications. § FUTURE DIRECTIONS In this section, we provide a discussion on the promising directions for time series analysis. §.§ Time Series Pre-training A pretraining-finetuning learning paradigm is a two-stage approach commonly used in nature language processing (NLP) <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and computer vision (CV) <cit.>, <cit.>, <cit.>, <cit.>. Pre-training establishes the basis of the abilities of Large Models through unsupervised learning <cit.>. Fine-tuning can improve the performance of the pre-trained model on a specific task or domain. Due to the limited availability of labeled datasets, self-supervised pre-training <cit.> has garnered significant attention and has been extensively investigated in the domains of natural language modeling and computer vision. Self-supervised pre-training paradigm significantly reduces labeling expenses and benefits for diverse downstream tasks. Notably, recent research efforts have introduced several self-supervised pre-training methods tailored for time series data, which can be primarily classified into contrastive learning <cit.> and masked time series modeling <cit.>. 
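Both families of pre-training objectives can be summarized compactly. As one illustration, a contrastive (InfoNCE-style) loss over two augmented views of the same series is sketched below; the encoder and the augmentations are left abstract, and the method-specific designs (temporal, contextual, hierarchical, or frequency-domain contrasts, as well as masked reconstruction) are surveyed next.

import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) representations of two augmented views of the same series
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # cosine similarities between all pairs
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)     # positives on the diagonal, negatives elsewhere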
Contrastive learning refers to learning the representations of data by contrasting between similar and dissimilar pairs, where similar sample pairs are learned to be close to each other and dissimilar pairs are far apart <cit.>. Although SimCLR <cit.> has demonstrated remarkable success in the domain of computer vision, directly applying SimCLR to the field of time series data yields unsatisfactory results due to the insufficient modeling of temporal dependencies. CPC <cit.> introduced contrastive predictive coding, which utilizes model-predicted features as positive samples and randomly-sampled features as negative samples, to obtain time series representations that are advantageous for downstream tasks. TimeCLR <cit.> proposes a DTW data augmentation to generate phase shift and amplitude-change phenomena which can preserve time series structure and feature information. TS-TCC <cit.> employs efficient data augmentations designed for time-series data, and learns discriminative representations from the proposed Temporal Contrasting module and Contextual Contrasting module. TS2Vec <cit.> employs a hierarchical contrastive learning method and defines the contrastive loss from both instance-wise and patch-wise perspectives across different augmented context views, resulting in a robust contextual representation for each timestamp. Furthermore. LaST <cit.> takes the idea of variational inference theory <cit.> and proposes seasonal-trend representations learning and disentanglement mechanisms. CoST <cit.> proposes a contrastive learning framework to learn disentangled seasonal-trend representations for long sequence time series data. TF-C <cit.> develop frequency-based contrastive augmentation to leverage rich spectral information and explore time-frequency consistency. Masked modeling is a reconstruction-based method, which can predict a masked token in a sequence based on the context unmasked part <cit.>. TST <cit.> follows the pre-training paradigm proposed in BERT <cit.> and proposes a pre-training framework for multivariate time series. Further, PatchTST <cit.> segments the time series data into multiple non-overlapping patches and proposes a patch-level masked modeling approach. SimMTM <cit.> relates masked modeling to manifold learning and presents a neighborhood aggregation design for reconstruction based on the similarities learned in series-wise representation space. HiMTM <cit.> proposes a novel hierarchical masked time series pre-training framework to effectively capture the multi-scale characteristics of time series data. TimeSiam <cit.> constructs an asymmetric masking reconstruction task to capture intrinsic temporal correlations between randomly sampled past and current subseries and learn internal time-dependent representations based on Siamese networks. §.§ Large Time Series Models With the advent of Large Language Models (LLMs), the utilization of large-scale models to tackle time series downstream tasks has gained significant attention as the direction of future research. Current researches present the following two possible roadmaps to large time series models. §.§.§ Time Series Foundation Models Recent advancements in deep learning, particularly with the emergence of foundation models (FMs), have demonstrated significant progress in natural language processing (NLP) and computer vision (CV) domains <cit.>. 
Different from prior deep models, foundation models are pre-trained on massive amounts of data, which enables them to have a wide range of general knowledge learned from diverse domains. Given their success in capturing contextual information and semantic understanding, it is promising to explore a generalized time series foundation model that can effectively learn complex temporal dependencies and capture the underlying dynamics inherent in time series data. Early attempts such as TimeGPT <cit.>, Lag-LlaMa <cit.>, and Timer <cit.> focus solely on univariate time series data. Nevertheless, in real-world forecasting scenarios, it is crucial to involve additional information that is related to the temporal variation of the target time series and must be taken into account, such as weather conditions or holidays. MOIRAI <cit.> tries to flatten multivariate time series into a single sequence containing all variate, but its generalization capabilities to other downstream analysis tasks are under-explored. In addition to modeling inter-series dependencies, modeling the relationship between time series and external factors in other can achieve a better understanding of time series data. These external factors may be in the form of other modalities, such as text data or calendar data, and thus multimodal learning is a future trend in the development of a multi-modal time series foundation model. §.§.§ Adaptation of Large Language Models Large Language Models (LLMs) have made significant strides in solving various natural language processing tasks. Exemplified by the success of models like GPTs <cit.> and LLaMA <cit.>, LLMs have proven adept at generalizing to unseen tasks by simply following provided prompts. Therefore, it has become a promising future research direction to unleash the power of LLMs in the field of time series. Here are two paradigms for adapting LLMs. Fine-Tuning Pre-trained Language Models Based on the similar sequential nature, fine-tuning pre-trained language models to equip them with time series analysis capabilities has become a promising research topic. When applying LLMs to time series, it is essential to tokenize the time series before feeding it to a pre-trained model. Thus, adapting an LLM for time series included two key components: time series tokenization and efficient fine-tuning methods for time series analysis tasks. LLM4TS <cit.> proposes a two-stage fine-tuning strategy, including the time-series alignment stage to align LLMs with the nuances of time series data, and the fine-tuning stage for downstream tasks. LLMTime <cit.> treats time series forecasting as next-token prediction in text and attempts to encode time series data as a string of numerical digits. Recent Chronos <cit.> introduces a pre-trained probabilistic time series model based on existing Transformer-based language model architectures. Technologically, Chronos tokenizes time series values using scaling and quantization into a fixed vocabulary and trains the model on these tokenized time series via the cross-entropy loss. Benefiting from the generative capability of LLMs, most of the existing research focuses on time series forecasting tasks. GPT4TS <cit.> propose a unified framework for diverse time series analysis tasks by using a pre-trained GPT-2 model and fine-tuning the positional embeddings and the parameters of the layer normalization for each analysis task. 
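As an illustration of the scaling-and-quantization style of tokenization described above (in the spirit of Chronos), a real-valued series can be mapped to a fixed vocabulary of integer tokens as sketched below. The specific scaling rule, clipping range, and vocabulary size are assumptions made for illustration and will differ between models.

import numpy as np

def tokenize(series, vocab_size=4096, clip=15.0):
    scale = np.mean(np.abs(series)) + 1e-8           # mean scaling of the context window
    scaled = np.clip(series / scale, -clip, clip)
    bins = np.linspace(-clip, clip, vocab_size - 1)  # uniform bin edges
    return np.digitize(scaled, bins), scale          # token ids in {0, ..., vocab_size - 1}

def detokenize(tokens, scale, vocab_size=4096, clip=15.0):
    centers = np.linspace(-clip, clip, vocab_size)   # approximate bin centers
    return centers[tokens] * scale                   # approximate inverse of tokenize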
Prompting Large Language Models Recent Large Language Models exhibit the strong abilities of in-context learning <cit.> and instruction following <cit.>. Therefore, the paradigm of leveraging natural language instructions or task examples to guide the model in addressing novel tasks has emerged as a groundbreaking approach <cit.>, which has become a potential solution for time series analysis tasks <cit.>. Recent literature, such as PromptCast <cit.>, UniTime <cit.>, and TimeLLM <cit.> focus on investigating a prompt template to enable LLMs to perform the forecasting task. There are other works represented by Autotimes <cit.>, that attempt to design soft prompts for time series data. However, existing prompting approaches are tailored for forecasting, and how to empower LLMs to other time series tasks besides forecasting is relatively unexplored. §.§ Practical Applications §.§.§ Handling Extremely Long Series Deep time series models have demonstrated remarkable performance across a wide range of downstream tasks, their applicability to longer time series data is often limited by scalability and high computational complexity. In industrial time series analysis, high-frequency sampling results in lengthy historical data, impeding the practical implementation of advanced deep models. Existing methods usually include patching techniques to enable them to handle long sequences, and when the input length becomes longer, the patch length can be increased accordingly to reduce the computational complexity. However, model performance is closely tied to patch length; hence, solely increasing patch size to reduce complexity may compromise capabilities. Therefore, addressing the limitations of deep models in handling longer time series could be a promising topic. §.§.§ Utilizing Exogenous Variables Since variations within the time series are often influenced by external factors, it is crucial to include exogenous variables in the analysis to gain a more comprehensive understanding of these factors. Exogenous variables, which are widely discussed in time series prediction tasks, are included in the model for uniform training in modeling time series data, without requiring separate analysis. However, in practical applications, different from multivariate time series analysis, the main variables and covariates usually occupy different positions. Given the crucial role played by exogenous variables in real-world applications, it is essential to explore a unified framework for modeling the relationships between the endogenous and exogenous variants, which allows for a more comprehensive understanding of interrelations and causality among different variants, leading to better and more reliable model performance, as well as interpretability. §.§.§ Processing Heterogeneous Data In the field of time series analysis, there is still an unexplored area related to the modeling of heterogeneous time series data. Heterogeneous time series data encompasses a wide range of diverse characteristics, such as varying sampling rates, irregularities, and different length scales. These diverse features make it challenging to develop models that can effectively capture the underlying patterns and relationships within the data. Moreover, the need for fixed-size inputs in current deep learning models limits their ability to handle the dynamic nature of heterogeneous time series data. 
Addressing these challenges requires innovative approaches that can adapt to the unique nature of each individual time series while still capturing the overarching patterns across multiple series. This may involve developing new techniques for feature extraction, incorporating domain knowledge into model design, or exploring alternative architectures that are better suited to handle variable-length inputs. As researchers continue to explore this unexplored area in time series analysis, there is potential for significant advances in areas such as finance, healthcare, and environmental monitoring. By improving our ability to model and analyze heterogeneous time series data, we can gain deeper insights into complex systems and make more informed decisions based on predictive analytics. Overall, further research in this area holds great promise for advancing our understanding of temporal data dynamics and enhancing the capabilities of time series modeling in real-world applications. § CONCLUSION In this survey, we provide a systematic review of deep models in time series analysis and introduce Time-Series Library (TSLib) as a fair benchmark for deep time series models across various analysis tasks. Compared with previous reviews that focus on a specific analysis task or model architecture, this paper provides a comprehensive survey and overview of existing deep models for time series analysis, ranging from forecasting, classification, imputation, and anomaly detection. We first present a detailed review of the universal modules that are widely used among time series models, including normalization, decomposition, and Fourier analysis. Next, we summarize existing deep time series models from the perspective of backbone architecture. Based on the review of existing literature, we introduce a practical open-source library, Time Series Library (TSLib), which has included representative deep time series models that can be a fair evaluation benchmark in the field of time series analysis. Finally, we discuss future research directions for deep time series models based on the recent development of the AI community and the practical application needs of time series analysis in real-world scenarios. IEEEtran [ < g r a p h i c s > ]Yuxuan Wang received the BE degree from Beihang University in 2022. She is now working towards the PhD degree in computer software at Tsinghua University. Her research interests include machine learning and time series analysis. [ < g r a p h i c s > ]Haixu Wu received the BE degree in software engineering from Tsinghua University in 2020. He is working towards the PhD degree in computer software at Tsinghua University. His research interests include scientific machine learning and spatiotemporal learning. [ < g r a p h i c s > ]Jiaxiang Dong received the ME degree in computer science and technology from Nankai University in 2018. He is currently working toward the PhD degree in computer software at Tsinghua University. His research interests include machine learning and time series pre-training. [ < g r a p h i c s > ] Yong Liu received the BE degree in software engineering from Tsinghua University in 2021. He is working towards the PhD degree in computer software at Tsinghua University. His research interests include time series analysis and large time series models. [ < g r a p h i c s > ]Mingsheng Long received the BE and PhD degrees from Tsinghua University in 2008 and 2014 respectively. He was a visiting researcher with UC Berkeley from 2014 to 2015. 
He is currently a tenured associate professor with the School of Software, Tsinghua University. He serves as an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence and Artificial Intelligence Journal, and as an area chair of major machine learning conferences, including ICML, NeurIPS, and ICLR. His research is dedicated to machine learning theory, algorithms, and models, with special interests in transfer learning and domain adaptation, deep learning and foundation models, scientific learning, and world models. [ < g r a p h i c s > ]Jianmin Wang received the BE degree from Peking University, China, in 1990, and the ME and PhD degrees in computer software from Tsinghua University, China, in 1992 and 1995, respectively. He is a full professor with the School of Software, Tsinghua University. His research interests include Big Data management systems and large-scale data analytics. He led the development of a product data and lifecycle management system, which has been deployed in hundreds of enterprises in China. He is leading the development of the Tsinghua DataWay Big Data platform in the National Engineering Lab for Big Data Software.
http://arxiv.org/abs/2407.12977v1
20240717194659
ALARIC: A NLL accurate Parton Shower algorithm
[ "Florian Herren" ]
hep-ph
[ "hep-ph" ]
ALARIC: A NLL accurate Parton Shower algorithm July 22, 2024 ==================================================================================== § INTRODUCTION Parton shower algorithms are ubiquitous in Monte Carlo event generators for collider experiments. They evolve high energy partons, quarks and gluons, down to the hadronization scale through subsequent soft and collinear splittings. These splittings correspond to the logarithmically enhanced regions of QCD matrix elements and thus parton showers resum the corresponding logarithms. A priori, parton shower algorithms are only accurate at the leading logarithmic (LL) level for generic observables and possibly accurate at the next-to-leading-logarithmic (NLL) level for a select few observables <cit.>. Yet, to better assess parton shower uncertainties and improve event generation for the LHC experiments, parton showers at NLL or above are required. To this end, a significant amount of work has been dedicated to understanding the features of parton shower algorithms critical to their logarithmic accuracy, and the first NLL accurate showers, such as the PanScales showers <cit.>, have been developed. Despite this progress, no publicly available NLL accurate shower has been implemented in a general purpose event generator and compared to actual data. To this end, we developed the ALARIC algorithm <cit.> and found an analytic proof that it is NLL accurate, while also maintaining a simple kinematics mapping, facilitating the matching to fixed-order calculations. Here, we present the key concepts, show first results and summarize further developments. § KEY CONCEPTS In the following we discuss the two main concepts relevant to the NLL accuracy of ALARIC: the treatment of soft radiation and the recoil scheme. In addition, we briefly describe the matching to next-to-leading order (NLO) calculations. Soft radiation In the soft limit the squared matrix element factorizes as _n⟨1,…,n|1,…,n⟩_n=-8πα_s∑_i,k≠ j _n-1⟨1,…,j\,…,n| T_i T_k w_ik,j|1,…,j\,…,n⟩_n-1 , where j is the label of the soft gluon, i,k denote the constituents of the dipoles, the T_i are colour insertion operators and w_ik,j=p_ip_k/(p_ip_j)(p_jp_k) is the Eikonal factor. To avoid double counting of soft-collinear contributions, the Eikonal factor has to be distributed properly over the partons i and k, such that its collinear limit can be matched to the collinear splitting functions. In ALARIC, we write the Eikonal factor as w_ik,j=W_ik,j/E_j^2 , where W_ik,j=1-cosθ_ik/(1-cosθ_ij)(1-cosθ_jk) , and perform a partial fraction decomposition of W_ik,j: W_ik,j=W̅_ik,j^i+W̅_ki,j^k , where W̅_ik,j^i=1-cosθ_ik/ (1-cosθ_ij)(2-cosθ_ij-cosθ_jk) . The resulting W̅_ik,j^i are strictly positive, thus allowing for an interpretation as the probability of a soft splitting; an explicit check of this decomposition is sketched below. Furthermore, in contrast to angular ordered showers, our treatment of the Eikonal factor allows us to populate the whole phase space including angular correlations, thus also capturing non-global logarithms. We can now combine the W̅_ik,j^i with the regular collinear splitting functions, by subtracting the collinear limit of the Eikonal: 1/2p_ip_jP_(ij)i(z)→1/2p_ip_jP_(ij)i(z) +δ_(ij)i T_i^2 [W̅_ik,j^i/E_j^2-w_ik,j^(coll)(z)] , which now depends on the direction of the colour spectator k. Recoil scheme The second important ingredient is the recoil scheme.
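Before moving on, the partial fraction decomposition quoted above can be checked explicitly by combining the two terms over a common denominator, W̅_ik,j^i+W̅_ki,j^k = (1-cosθ_ik)/(2-cosθ_ij-cosθ_jk) [ 1/(1-cosθ_ij) + 1/(1-cosθ_jk) ] = (1-cosθ_ik)/((1-cosθ_ij)(1-cosθ_jk)) = W_ik,j , so the full soft radiator function is recovered without double counting, while each term is separately non-negative because every angular factor 1-cosθ lies between 0 and 2.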
Instead of using the colour spectator as recoil partner, we preserve its direction and magnitude, allowing us to define the additional direction entering the collinear splitting functions. Additionally, we preserve the direction of the emitter and compensate the recoil by the sum of all multipole momenta K̃. Furthermore, we require that the invariant mass of the multipole is conserved, leading us to the momenta after the splitting: p_i =z p̃_i , p_k = p̃_k , p_j =(1-z) p̃_i+v(K̃-(1-z+2κ) p̃_i)+k_⊥ , K =K̃-v(K̃-(1-z+2κ) p̃_i)-k_⊥ , depicted in Fig. <ref>. Here, z is the momentum fraction of the emitter, v = p_ip_j/(p_iK̃) and κ = K̃^2/(2p̃_iK̃). Finally, the whole configuration is boosted back into the original frame. Previous emissions are only mildly affected by the boost, which is the basis of the proof of NLL accuracy in Ref. <cit.>. For initial-initial or initial-final dipoles the mapping works similarly. NLO matching To combine ALARIC with fixed-order NLO calculations, we follow the MC@NLO method <cit.>. To subtract the double counting, the splitting kernels need to be integrated in D=4-2ϵ dimensions, which is not easily possible for all shower algorithms. ALARIC's similarity to the identified particle subtraction scheme of Ref. <cit.> enables a simple computation of the required terms. We follow Ref. <cit.> for the actual calculation. The only non-trivial integral involves the H-operator <cit.>: ∫_0^1dz 𝐇_i(p_1,…,p_i,…,p_m;n;z) =- α_s/2π∑_k=1,k≠ i^m 𝐓_i𝐓_k/𝐓_i^2{ 𝒦^i +δ_i Li_2(1- 2p̃_ip̃_k K̃^2/(p̃_iK̃)(p̃_kK̃)) -∫_0^1dz P^i_ reg(z) ln(p_ip_k)n^2/2(p_i n)(p_kn) } . The final integral cannot be computed in closed form since n, as depicted in Fig. <ref>, depends implicitly on z. However, its numerical evaluation is straightforward. § RESULTS While an analytic proof exists that the ALARIC recoil scheme is NLL accurate, we can explicitly check it for observables with known NLL resummation results. To this end we tested several observables in e^+e^- collisions following Ref. <cit.>. In Fig. <ref> we show the test for the leading Lund plane declustering scale in the Cambridge algorithm, y_23. While the ALARIC result converges nicely to the NLL result in the limit α_s → 0, the DIRE algorithm as implemented in SHERPA <cit.> does not. With our preliminary implementation of ALARIC in SHERPA, we can also compare results obtained with ALARIC to e^+ e^- collision data. In Fig. <ref> we compare to the y_23 data from JADE/OPAL <cit.> and find perfect agreement, despite using a standard hadronization tune. However, differences between ALARIC and DIRE are small, indicating that the logarithmic accuracy of the shower plays only a minor role in the description of this observable. § OUTLOOK The formulation of ALARIC discussed in these proceedings is sufficient to describe processes in e^+e^- collisions involving only massless quarks. The algorithm has since been extended to properly describe the evolution of massive quarks <cit.> and to proton-proton collisions, including multi-jet merging <cit.>. While some implementation work is still required, ALARIC is on track to be part of an upcoming release of the SHERPA event generator. Consequently, experiments at the LHC will be able to harness fully differential NLL accurate predictions in their analyses. § ACKNOWLEDGMENTS FH thanks Stefan Höche, Frank Krauss, Marek Schönherr and Daniel Reichelt for the fruitful collaboration leading to the ALARIC shower.
This research was supported in part by the Swiss National Science Foundation (SNF) under contract 200021-212729. FH acknowledges support by the Alexander von Humboldt Foundation. 99 Dasgupta:2018nvj M. Dasgupta, F. A. Dreyer, K. Hamilton, P. F. Monni and G. P. Salam, JHEP 09 (2018), 033 [erratum: JHEP 03 (2020), 083] doi:10.1007/JHEP09(2018)033 [arXiv:1805.09327 [hep-ph]]. Dasgupta:2020fwr M. Dasgupta, F. A. Dreyer, K. Hamilton, P. F. Monni, G. P. Salam and G. Soyez, Phys. Rev. Lett. 125 (2020) no.5, 052002 doi:10.1103/PhysRevLett.125.052002 [arXiv:2002.11114 [hep-ph]]. Herren:2022jej F. Herren, S. Höche, F. Krauss, D. Reichelt and M. Schoenherr, JHEP 10 (2023), 091 doi:10.1007/JHEP10(2023)091 [arXiv:2208.06057 [hep-ph]]. Frixione:2002ik S. Frixione and B. R. Webber, JHEP 06 (2002), 029 doi:10.1088/1126-6708/2002/06/029 [arXiv:hep-ph/0204244 [hep-ph]]. Catani:1996vz S. Catani and M. H. Seymour, Nucl. Phys. B 485 (1997), 291-419 [erratum: Nucl. Phys. B 510 (1998), 503-504] doi:10.1016/S0550-3213(96)00589-5 [arXiv:hep-ph/9605323 [hep-ph]]. Hoche:2018ouj S. Höche, S. Liebschner and F. Siegert, Eur. Phys. J. C 79 (2019) no.9, 728 doi:10.1140/epjc/s10052-019-7212-7 [arXiv:1807.04348 [hep-ph]]. Gleisberg:2003xi T. Gleisberg, S. Hoeche, F. Krauss, A. Schalicke, S. Schumann and J. C. Winter, JHEP 02 (2004), 056 doi:10.1088/1126-6708/2004/02/056 [arXiv:hep-ph/0311263 [hep-ph]]. Gleisberg:2008ta T. Gleisberg, S. Hoeche, F. Krauss, M. Schonherr, S. Schumann, F. Siegert and J. Winter, JHEP 02 (2009), 007 doi:10.1088/1126-6708/2009/02/007 [arXiv:0811.4622 [hep-ph]]. Sherpa:2019gpd E. Bothmann et al. [Sherpa], SciPost Phys. 7 (2019) no.3, 034 doi:10.21468/SciPostPhys.7.3.034 [arXiv:1905.09127 [hep-ph]]. JADE:1999zar P. Pfeifenschneider et al. [JADE and OPAL], Eur. Phys. J. C 17 (2000), 19-51 doi:10.1007/s100520000432 [arXiv:hep-ex/0001055 [hep-ex]]. Assi:2023rbu B. Assi and S. Höche, Phys. Rev. D 109 (2024) no.11, 114008 doi:10.1103/PhysRevD.109.114008 [arXiv:2307.00728 [hep-ph]]. Hoche:2024dee S. Höche, F. Krauss and D. Reichelt, [arXiv:2404.14360 [hep-ph]].
http://arxiv.org/abs/2407.12770v1
20240717175113
Deconfined quantum criticality of frustrated hard-core dipolar bosons
[ "Ya-Nan Wang", "Wen-Long You", "Wen-Yi Zhang", "Su-Peng Kou", "Gaoyong Sun" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China Key Laboratory of Aerospace Information Materials and Physics (Nanjing University of Aeronautics and Astronautics), MIIT, Nanjing 211106, China College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China Key Laboratory of Aerospace Information Materials and Physics (Nanjing University of Aeronautics and Astronautics), MIIT, Nanjing 211106, China College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China Key Laboratory of Aerospace Information Materials and Physics (Nanjing University of Aeronautics and Astronautics), MIIT, Nanjing 211106, China Department of Physics, Beijing Normal University, Beijing 100875, China Corresponding author: gysun@nuaa.edu.cn College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China Key Laboratory of Aerospace Information Materials and Physics (Nanjing University of Aeronautics and Astronautics), MIIT, Nanjing 211106, China § ABSTRACT Deconfined quantum critical points (DQCPs) are proposed as unconventional second-order phase transitions beyond the Landau-Ginzburg-Wilson paradigm. The nature and experimental realizations of DQCPs are crucial issues of importance. We illustrate the potential for DQCPs between the valence bond solid state and the antiferromagnetic phase to arise in optical lattices containing frustrated dipolar bosons subject to hard-core constraints. The emergence of DQCPs is comprehended through the fusion of two Berezinskii-Kosterlitz-Thouless (BKT) transitions. The DQCPs and the BKTs are confirmed by the finite-size scaling of ground-state fidelity susceptibilities. The numerical analysis reveals varying critical exponents of the correlation length in DQCPs and the logarithmic scaling in BKTs, respectively. This work offers a promising platform for realizing DQCPs and provides valuable insights into their nature within the framework of topological phase transitions. Deconfined quantum criticality of frustrated hard-core dipolar bosons Gaoyong Sun ===================================================================== Introduction-. The quantum phase transition is one of the fundamental concepts in realms of statistical physics and condensed matter physics <cit.>. Continuous phase transitions are effectively characterized by the Landau-Ginzburg-Wilson (LGW) paradigm through the utilization of local order parameters based on the concept of spontaneous symmetry breaking. Berezinskii–Kosterlitz–Thouless (BKT) transitions <cit.> and deconfined quantum critical points (DQCPs) <cit.> stand as two renowned examples that extend beyond the framework of the LGW theory. BKT transitions are infinite-order phase transitions that occur between quasi-ordered phases and disordered phases <cit.>, while DQCPs represent second-order phase transitions between two ordered phases <cit.>. Over last decades, DQCPs have garnered significant attention, particularly in exploring the nature of phase transitions (first or second order) <cit.>, quantum criticalities <cit.>, duality phenomena <cit.>, and emergent symmetries <cit.>. In previous studies, BKT transitions and DQCPs were investigated separately. Here, we demonstrate that DQCPs can arise from the fusion of two BKT transitions, offering essential insights into comprehending both BKT transitions and DQCPs. 
Despite the wide range of theoretical explorations on DQCPs <cit.>, there are relatively few experimental realizations <cit.>. Ultracold bosons, serving as an ideal quantum simulator, provide a versatile platform for engineering many-body physics <cit.>. Taking into account the interplay of interactions and lattice geometry, ultracold bosons unveil numerous rich novel quantum phenomena <cit.>. The zig-zag lattice, being the simplest form of a triangular lattice, embodies distinctive geometric frustration and topological structure, serves as an ideal candidate for exploring quantum phases <cit.>. For example, hard-core bosons arranged in a triangular lattice configuration can induce a supersolid phase, distinguished by the coexistence of solid and superfluid characteristics <cit.>. Recently, there has been significant focus on proposed experimental explorations of DQCPs within the domain of programmable quantum simulators, involving systems such as arrays of Rydberg quantum simulators <cit.>, trapped ions quantum simulators <cit.>, and state-dependent optical lattices with ultracold bosons <cit.>. In this letter, we instead explore a frustrated dipolar Bose-Hubbard (FDBH) model in the hard-core limit in one dimension, where nearest-neighbor interactions arise from dipolar interactions <cit.>. We show that the FDBH model hosts DQCPs, aligning with recent theoretical proposals <cit.> that describe a continuous quantum phase transition from an antiferromagnetic (AFM) state to a valence bond solid (VBS) state. We employ the density matrix renormalization group (DMRG) technique <cit.> to investigate quantum phase transitions of FDBH model by varying nearest-neighbor interactions. Our study reveals the presence of three distinct phases, delineated by two BKT transitions and one DQCP through the fidelity susceptibility. Intriguingly, our proposal reveals that DQCPs emerge from the fusion of two BKT transitions, providing an essential insight into the understanding of both BKT transitions and DQCPs. Model-. We consider interacting dipolar bosons within a one-dimensional lattice accounting for both nearest-neighbor and next-nearest-neighbor hopping terms, as illustrated in Fig.<ref>(a). The corresponding Hamiltonian of the FDBH model is described by H = ∑_r ( t_1 b_r^†b_r+1 + t_2 b_r^†b_r+2 + h.c.) + V n_r n_r+1+U/2n_r(n_r-1). In this Hamiltonian, b_r (b_r^†) denote the bosonic annihilation (creation) operators at the rth site. And, n_r=b_r^†b_r represents the particle number operator, while t_1 ≥ 0 and t_2 ≥ 0 denote the amplitudes of the nearest-neighbor and next-nearest-neighbor hopping, respectively. Moreover, V ≥ 0 and U ≥ 0 signify the strengths of the nearest-neighbor and on-site interactions. The frustrated one-dimensional chain is equivalent to the zigzag chain [cf. Fig. <ref>(b)], which can be formed by the incoherent superposition of an optical triangular lattice <cit.>. Ultracold bosons offer a wealth of physics arising from the interplay between frustrations induced by lattice geometry and interactions <cit.>. For instance, it is demonstrated that ultracold bosons subject to a three-body constraint exhibit chiral superfluid and Haldane insulator phases within zig-zag optical lattices <cit.>. In the hard-core limit (U →∞), the FDBH model in Eq.(<ref>) becomes H = ∑_r ( t_1 b_r^†b_r+1 + t_2 b_r^†b_r+2 + h.c.) + V n_r n_r+1, with a constraint n_r={ 0,1 }. 
In the following, we will investigate the physics underlying the hard-core FDBH model, as illustrated in Eq.(<ref>), wherein the total number of sites in the chain is assumed to be divisible by four in order to comprehensively account for the translation symmetry-breaking phase. Berezinskii-Kosterlitz-Thouless transitions-. Subject to the hard-core constraints (U →∞), the FDBH model described in Eq.(<ref>) can be effectively transformed into a spin model, as presented below: H = ∑_r t_1/2(σ_r^xσ_r+1^x+σ_r^yσ_r+1^y)+t_2/2(σ_r^xσ_r+2^x+σ_r^yσ_r+2^y) + V/4(1-σ_r^z)(1-σ_r+1^z), by utilizing the transformations σ^x_r = b^†_r + b_r,   σ^y_r = i(b^†_r - b_r),   σ^z_r = 1 - 2n_r, with (σ^x_r, σ^y_r, σ^z_r) being Pauli matrices. Here, the particle occupation numbers are represented by spins. Specifically, σ^z_r = -1 denotes a down spin when n_r = 1, and σ^z_r = +1 signifies an up spin when n_r = 0. Utilizing spin operators to explore the physics of the hard-core bosonic model is a straightforward and effective approach. When V=0, the system shown in Eq.(<ref>) becomes the XY model incorporating both the nearest-neighbor and next-nearest-neighbor hopping terms. In particular, when t_2 = 0, the system's ground state is characterized by the gapless XY (superfluid (SF) in bosonic language) phase (see Appendix <ref> for details). Upon increasing t_2, the system undergoes a phase transition from the XY phase to the VBS (bond-ordered-wave (BOW) in bosonic language) phase <cit.>. To verify whether the transition between the XY phase and the VBS phase persists in the presence of the interaction V, we calculate the ground-state phase diagram of the FDBH model described in Eq.(<ref>) by varying the parameters t_2/t_1 and V/t_1 with the DMRG method. During DMRG simulations, we keep 500 matrix states and enforce n_max=1 per site. The full phase diagram is shown in Fig.<ref>, which is obtained from the fidelity susceptibility <cit.>, χ_L=1/L lim_δλ→ 0 -2 ln F(λ,λ+δλ)/(δλ)^2, where F(λ,λ+δλ)=|⟨Ψ_0(λ)|Ψ_0(λ+δλ)⟩| is the fidelity with the control parameter λ={ t_2/t_1, V/t_1 }. The numerical results support the argument that the phase transition persists even under weak interaction. In Fig.<ref>(a), we depict the scaling behavior of the fidelity susceptibility at an interaction strength of V/t_1=0.1 by varying the next-nearest-neighbor hopping t_2/t_1. It is observed that the fidelity susceptibility increases gradually as the system size increases. Furthermore, both the maximum fidelity susceptibility χ_L^m and the peak positions λ_m themselves are demonstrated to be in excellent agreement with the scaling laws [cf. Fig.<ref>(c) and (d)] proposed for BKT transitions <cit.>, as described by: χ_L^m ≃χ_∞ - 1/ln(aL), λ_m ≃λ_c + 1/ln^2(aL), where χ_∞ represents the fidelity susceptibility in the thermodynamic limit, λ_c is the critical point, and a is a positive nonuniversal constant serving as a cutoff. The transition from the XY phase to the VBS phase is further validated through level crossing (see Appendix <ref> for details) and through the order parameter [cf. Fig.<ref>(b)] characterizing the VBS phase under open boundary conditions, as expressed in: O_VBS = 1/3( |⟨σ_r·σ_r+1⟩| - |⟨σ_r+1·σ_r+2⟩| ), where σ_r=(σ^x_r, σ^y_r, σ^z_r). When t_2=0, the model in Eq.(<ref>) represents the quantum spin-1/2 antiferromagnetic XXZ chain, which undergoes a BKT transition from the XY phase (V/t_1 < 2) to the AFM (density wave (DW) with all particles located in one of the legs in the bosonic language) phase (V/t_1 > 2) <cit.>.
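The spin Hamiltonian and the fidelity susceptibility defined above can be cross-checked on very small chains with exact diagonalization. The sketch below (dense matrices, open boundaries, L of order 10) is only an illustration of the definitions and assumes nothing about the DMRG setup actually used in this work.

import numpy as np
from numpy.linalg import eigh

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
id2 = np.eye(2)

def embed(op, site, L):
    # place a single-site operator at `site` of an L-site chain via Kronecker products
    out = np.array([[1.0]])
    for i in range(L):
        out = np.kron(out, op if i == site else id2)
    return out

def hamiltonian(L, t1, t2, V):
    H = np.zeros((2**L, 2**L), dtype=complex)
    one = np.eye(2**L)
    for r in range(L):
        for d, t in ((1, t1), (2, t2)):
            if r + d < L:  # open boundary conditions
                H += 0.5 * t * (embed(sx, r, L) @ embed(sx, r + d, L)
                                + embed(sy, r, L) @ embed(sy, r + d, L))
        if r + 1 < L:
            H += 0.25 * V * (one - embed(sz, r, L)) @ (one - embed(sz, r + 1, L))
    return H

def fidelity_susceptibility(L, t1, t2, V, dV=1e-3):
    g0 = eigh(hamiltonian(L, t1, t2, V))[1][:, 0]       # ground state at V
    g1 = eigh(hamiltonian(L, t1, t2, V + dV))[1][:, 0]  # ground state at V + dV
    F = abs(np.vdot(g0, g1))
    return -2.0 * np.log(F) / (L * dV**2)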
Our numerical results obtained from the fidelity susceptibility confirm that the critical value is located at V/t_1 = 2, as illustrated in Fig.<ref>. We then investigate whether this phase transition persists in the presence of the next-nearest-neighbor hopping using the DMRG. The fidelity susceptibility is computed as a function of V/t_1 at t_2=0.1 for large open chains. The numerical results indicate that the peak of the fidelity susceptibility gradually shifts towards the left, and its magnitude slowly increases with increasing system size, as shown in Fig.<ref>(a). Finite-size scaling analysis, presented in Fig.<ref>(c) and (d), demonstrates that the fidelity susceptibility continues to adhere to the scaling laws proposed for BKT transitions <cit.>, as outlined in Eq.(<ref>) and Eq.(<ref>). Moreover, it is observed that the long-range spin correlations in the z-direction under periodic boundary conditions, O_AFM = ⟨σ_r^zσ_r+L/2^z⟩, begin to increase from zero as the interaction V increases [cf. Fig.<ref>(b)], indicating the existence of the AFM phase in the z-direction for V > V_c. Deconfined quantum critical point-. We have demonstrated that the BKT transition between the XY phase and the VBS state, as well as the BKT transition between the XY phase and the AFM phase, persist even in the presence of small next-nearest-neighbor hopping (t_2/t_1) and interaction (V/t_1). Now, we turn to explore the transitions in the regime characterized by large values of t_2/t_1 and V/t_1. Interestingly, we find that the system undergoes a phase transition from the AFM phase to the VBS state as t_2/t_1 or V/t_1 increase further. As an example, we compute the fidelity susceptibility and the order parameters at t_2/t_1=0.7 [cf. Fig.<ref>]. It is observed that the system resides in the VBS phase for 0<V/t_1<2.12, characterized by a non-zero O_VBS and vanishing O_AFM. Conversely, in the regime 2.12<V/t_1<3, the reverse occurs, indicating that the system is in the AFM phase [cf. Fig.<ref>(b)]. Notably, both order parameters, O_AFM and O_VBS, seem to smoothly vanish at this same point, suggesting an unconventional nature for this quantum phase transition. This transition is argued to be a second-order phase transition with the correlation length critical exponent ν=1.11 (see Appendix <ref> for details), as demonstrated in Fig.<ref>(a), (c), and (d) from the finite-size scaling of the fidelity susceptibility <cit.>, χ_L^m∝ L^(2/ν-1), where χ_L^m represents the fidelity susceptibility per site at the peak position λ=λ_m, which converges towards the critical point λ_c as L tends to infinity. As the AFM phase breaks the ℤ_2 spin-flip symmetry and the VBS phase breaks translation symmetry, the direct continuous second-order phase transition occurring between the AFM phase and the VBS state indicates that the phase transition is a DQCP beyond the LGW theory. Interestingly, the DQCP arises from the merging of two BKT transitions. These transitions can be understood through the concept of domain walls, which are topological defects in one-dimensional systems. Beginning from the VBS phase (i.e., at t_2/t_1=0.7 and V=0), which is characterized by a nonzero O_VBS, increasing the interaction strength V/t_1 induces the formation of AFM domain walls. Domain walls in the VBS phase bind the AFM phase (and vice versa), leading to the destruction of the VBS phase and the emergence of the AFM phase as V/t_1 increases further.
Consequently, a direct transition from the VBS phase to the AFM phase occurs under conditions of large t_2 and V, indicating the presence of a DQCP. This DQCP is linked to the two BKT transitions through a multicritical point characterized by an emergent high symmetry. We note that our finding is analogous to the phenomena described in frustrated two-dimensional models <cit.>, where a gapless quantum spin liquid (QSL) phase gradually develops from the AFM-VBS transition, leading to two phase transitions between gapped and gapless phases (the AFM-QSL and QSL-VBS transitions). Consequently, a DQCP emerging from two BKT transitions may be a universal phenomenon, offering crucial insights into the understanding of both BKT transitions and DQCPs. Conclusion-. In summary, we employed the DMRG method to investigate novel physical phenomena in the frustrated dipolar Bose-Hubbard model and to provide a comprehensive phase diagram of the model through the exploration of fidelity susceptibilities and order parameters. We identified two gapped phases, the AFM phase and the VBS phase, along with one gapless XY phase, separated by two BKT transitions and a DQCP. Remarkably, we found that the DQCP arises from the fusion of two BKT transitions in this model. Moreover, we propose that this model could be realized using dipolar bosons. Acknowledgments-. G.S. thanks Luis Santos for the valuable comments. G.S. is appreciative of support from the NSFC under Grant No. 11704186, "the Fundamental Research Funds for the Central Universities, NO. NS2023055". W.-L.Y. is supported by the NSFC under Grant No. 12174194, Opening Fund of the Key Laboratory of Aerospace Information Materials and Physics (Nanjing University of Aeronautics and Astronautics), MIIT, Top-notch Academic Programs Project of Jiangsu Higher Education Institutions (TAPP), and stable supports for basic institute research under Grant No. 190101. Y.-N.W. and W.-Y.Z. are supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Nos. KYCX24_0526 and KYCX23_0347, respectively. S.-P.K. is appreciative of support from the NSFC under Grant Nos. 11974053 and 12174030, and the National Key R&D Program of China under Grant No. 2023YFA1406704. This work is partially supported by the High Performance Computing Platform of Nanjing University of Aeronautics and Astronautics. § NON-INTERACTING ENERGY SPECTRUM We first discuss the case of free bosons (V=0 and U=0), for which the Hamiltonian simplifies to: H=∑_r t_1(b_r^†b_r+1+h.c.)+t_2(b_r^†b_r+2+h.c.) Using the Fourier transformation, b_r=1/√(N)∑_k e^ikr b_k, b_r^†=1/√(N)∑_k' e^-ik'r b_k'^†, the model described in Eq.(<ref>) can be transformed into momentum space, H =t_1∑_kk'δ_kk'e^ikb_k'^†b_k + t_1∑_kk'δ_kk'e^-ik'b_k'^†b_k + t_2∑_kk'δ_kk'e^2ikb_k'^†b_k + t_2 ∑_kk'δ_kk'e^-2ik'b_k'^†b_k =∑_k[ t_1(e^ik+e^-ik)+t_2(e^2ik+e^-2ik) ] b_k^†b_k =∑_k(2t_1cosk+2t_2cos2k) b_k^†b_k =∑_kϵ(k) b_k^†b_k The Hamiltonian is diagonalized, with the spectrum ϵ(k)=2 t_1 (cosk+jcos2k), where j=t_2/t_1. Using the trigonometric identity cos2k=2cos^2k-1, the spectrum can be rewritten as: ϵ(k)=2 t_1 (2jcos^2k+cosk-j). The minimum of ϵ(k) is obtained by setting its derivative with respect to k to zero, which gives: d/dkϵ(k)=-2 t_1(4jcosksink+sink) = 0 The condition sink=0 or 1+4jcosk=0 offers two families of solutions, and which of them minimizes the spectrum ϵ(k) depends on the frustration parameter j <cit.>. Specifically: When j < 1/4, the dispersion ϵ(k) displays a single minimum at k=π because sink=0.
Conversely, when j > 1/4, the spectrum ϵ(k) exhibits two distinct minima located at k = ±arccos(-1/4j), arising from the condition 1+4jcosk=0. The critical value j = 1/4 marks a special point known as the Lifshitz point. The system exhibits either a superfluid phase when j < 1/4 or a chiral superfluid phase when j > 1/4. Next, we consider bosons in the hard-core limit. The Hamiltonian subject the hard-core constraint (U→∞) is, H=∑_r t_1(b_r^†b_r+1+h.c)+t_2(b_r^†b_r+2+h.c) with n_r=b_r^†b_r={ 0, 1 } represent particle number operators with a hard-core constraint at site r. In the hard-core limit with infinite interaction, only one boson can occupy a single lattice site, behaving similarly to spinless fermions. Therefore, the superfluid phase and the chiral superfluid phase observed in free bosons are expected to transform into a gapless phase and a phase characterized by symmetry breaking. § LEVEL CROSSING FOR THE BKT TRANSITION To further pinpoint the critical point of the system, we calculated the distributions of several lowest energy levels by imposing U(1) symmetry, specifically conserving total spin along the z-axis (equivalent to conserving total particle numbers in bosonic language), S_total^z=∑_rS_r^z. When the number of particles in the system is a multiple of four, the ground state is a non-degenerate singlet state <cit.>. In the XY phase, the first excited state exhibits S_total^z=1, whereas in the VBS phase, the first excited state exhibits S_total^z=0. Therefore, the XY-VBS phase transition can be identified by the level crossing between the doublet and the singlet of the first excited states. To identify the phase transition via level crossings, we investigated the energy gap Δ E = E_m^(n) - E_m^'^(n^') of the FDBH model using exact diagonalization with lattice sizes L=12 and L=20 at V=0.1 for 0<t_2/t_1<0.5 as illustrated in Fig.<ref>. Here, E_m^(n) denotes energy levels where m=0,±1 represents the total spin along the z-direction, and n denotes the energy level (e.g., ground state, first excited state). The red squares depict the energy difference between the singlet ground state and the first excited state with S_total^z=0, which is non-degenerate except at the Majumdar-Ghosh point t_2=0.5. The blue circles correspond to the energy difference between ground states with S_total^z=1 and S_total^z=0. The critical point is identified where these two lines cross. For L=12, the XY phase to VBS phase transition occurs at t_2c=0.37, while for L=20, it shifts slightly to t_2c=0.39. This shift to higher t_2c values with increasing lattice size indicates a pronounced finite-size effect.. In the hard-core boson limit, spin conservation is related to particle number conservation through the equation, S_total^z = L/2 - N_total, where L is the total number of lattice sites and N_total represents the total number of particles in the system. This mapping allows us to interpret spin states in terms of particle configurations. For instance, when S_total^z=0, the system corresponds to a half-filled particle case. By varying the particle number within a specific subspace, one can identify excited states. § CORRELATION LENGTH CRITICAL EXPONENT Quantum many-body systems demonstrate finite-size scaling and universal behaviors near quantum critical points, allowing for the extraction of critical exponents. The fidelity susceptibility near the critical point becomes more pronounced as the system size increases, highlighting significant finite-size scaling effects at criticality. 
It is well-established that the fidelity susceptibility exhibits finite-size scaling near the critical point, χ_F(λ_m)∼ L^2 / ν, for second-order phase transitions. Here, ν denotes the correlation length critical exponent, and λ_m represents the peak position of the fidelity susceptibility for a system of size L. The correlation length critical exponent is determined by fitting the logarithm of both sides of Eq. (<ref>): lnχ_F(λ_m)∝ 2 / νln L Here, lnχ_F(λ_m) corresponds to the natural logarithm of the fidelity susceptibility evaluated at the peak position λ_m, and L denotes the system size. In the vicinity of the phase transition, we keep the interaction strength fixed and compute the ground state fidelity of the system across various sizes (including L=128, 144, 192, 240, and 288) as a function of the next-nearest-neighbor hopping t_2/t_1. By fitting the maximum fidelity against system size using the logarithmic relationship in Eq. (<ref>), we extract the slope of the fit, thereby determining the correlation length critical exponent, as illustrated in Fig. <ref>. From the figure, it is observed that the critical exponent decreases gradually with increasing interaction strength. This suggests that the system exhibits varying universality classes for different quantum critical points.
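As a concrete illustration of the fitting procedure described above, the exponent ν can be read off from the slope of a log-log fit of the peak fidelity susceptibility against system size; the arrays below are synthetic placeholders standing in for the DMRG peak values.

import numpy as np

def extract_nu(system_sizes, chi_peak):
    # ln chi_F(lambda_m) ~ (2/nu) ln L  =>  nu = 2 / slope
    slope, _intercept = np.polyfit(np.log(system_sizes), np.log(chi_peak), 1)
    return 2.0 / slope

# example with synthetic data constructed to obey nu = 1.0
L = np.array([128, 144, 192, 240, 288])
chi_max = 0.05 * L**2.0
print(extract_nu(L, chi_max))  # ~1.0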
http://arxiv.org/abs/2407.13756v1
20240718175535
Challenge of direct imaging of exoplanets within structures: disentangling real signal from point source from background light
[ "Jialin Li", "Laird M. Close", "Jared R. Males", "Sebastiaan Y. Haffert", "Alycia Weinberger", "Katherine Follette", "Kevin Wagner", "Daniel Apai", "Ya-Lin Wu", "Joseph D. Long", "Laura Perez", "Logan A. Pearce", "Jay K. Kueny", "Eden A. McEwen", "Kyle Van Gorkom", "Olivier Guyon", "Maggie Y. Kautz", "Alexander D. Hedglen", "Warren B. Foster", "Roz Roberts", "Jennifer Lumbres", "Lauren Schatz" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.IM" ]
Challenge of direct imaging of exoplanets within structures: disentangling real signal from point source from background light July 22, 2024 ====================== § ABSTRACT The high contrast and spatial resolution requirements for directly imaging exoplanets require effective coordination of wavefront control, coronagraphy, observation techniques, and post-processing algorithms. However, even with this suite of tools, identifying and retrieving exoplanet signals embedded in resolved scattered light regions can be extremely challenging due to the increased noise from scattered light off the circumstellar disk and the potential misinterpretation of the true nature of the detected signal. This issue pertains not only to imaging terrestrial planets in habitable zones within zodiacal and exozodiacal emission but also to young planets embedded in circumstellar, transitional, and debris disks. This is particularly true for Hα detection of exoplanets in transitional disks. This work delves into recent Hα observations of three transitional disk systems with MagAO-X, an extreme adaptive optics system for the 6.5-meter Magellan Clay telescope. We employed angular differential imaging (ADI) and simultaneous spectral differential imaging (SSDI) in combination with KLIP, a PCA algorithm in post-processing, for optimal starlight suppression and quasi-static noise removal. We discuss the challenges in protoplanet identification with MagAO-X in environments rich with scattered and reflected light from disk structures and explore a potential solution for removing noise contributions from real astronomical objects with current observation and post-processing techniques. § INTRODUCTION Protoplanetary disks are the birth sites of stars and planets; thus, their structures and chemical compositions provide valuable insights into the underlying mechanisms driving the formation processes of the host stars and their systems. High spatial resolution and contrast observation techniques, spanning from millimeter to infrared wavelengths, have revealed a variety of unique disk features, such as spirals, arcs, and gaps. It remains unclear whether these substructures can be attributed solely or partially to ongoing planet formation or other processes within the disks. However, interactions between giant protoplanets and their host disks can alter the environment, and the presence of these substructures is often interpreted as a sign of ongoing planet formation <cit.>. Since these larger-scale substructures caused by planetary perturbations are more easily detected than the planets themselves, systems with spiral arms and gaps are targets of interest for high-angular-resolution observations in near-infrared wavelengths. This enables a direct search for protoplanets embedded in disks because they are still radiating remnant energy from the formation process. Although there are reports of such observations, the planetary nature of these companion candidates is ambiguous or challenged (e.g., AB Aur b <cit.>; HD 169142 b <cit.>). PDS 70 is the only system known to host confirmed protoplanets, both located within its transitional disk cavity. Its inner planet, PDS 70 b, was first observed via NIR imaging <cit.> and subsequently confirmed through the detection of accretion excess emission in the Hα line and in UV continuum <cit.>, and Hα differential imaging revealed a second accreting planet in the system, PDS 70 c <cit.>. Although other detections of protoplanetary candidates have been reported, they are either located at wider separations or require further study to confirm their planetary nature.
The lack of confirmed detections can be partially attributed to the multitude of challenges involved in directly imaging exoplanets at small separations. A suite of dedicated hardware, technologies, and observing and post-processing techniques is needed to meet the requirements of high angular resolution at small separations (≤4λ/D) and high contrast (≤10^-3). Due to the complex morphology of transitional disks, observation of these embedded protoplanets faces another challenge: the unambiguous separation of disk and planet signals. Hα differential imaging is one approach to addressing some of these challenges, as Hα (656.3 nm) is the strongest hydrogen recombination line within the optical and infrared wavelengths and should only come from the accreting bodies themselves. When compared to the features observed at IR wavelengths, distinctions between disk structures and protoplanets can be made. With observations at infrared wavelengths alone, light from disk structures can be misinterpreted as a protoplanet (e.g., <cit.>), and the contrast-ratio-to-planet-mass relationship in the infrared decreases drastically at lower masses (< 5M_Jup), making observations of lower-mass giant planets more challenging <cit.>. The detection of a point source in Hα, an emission-line tracer of giant planet accretion with a more linear contrast-ratio-to-planet-mass relationship, lowers the threshold of detection and allows for a clearer interpretation of the nature of the detection <cit.>. In this paper, we discuss recent observations from MaxProtoPlanetS, an Hα protoplanet survey aiming to discover accreting exoplanets with MagAO-X, an extreme AO (ExAO) instrument on the 6.5m Magellan Clay telescope. A brief summary of MagAO-X and the observational targets is provided in Section <ref>. Post-processing procedures for analyzing these morphologically complex systems are described in Section <ref>, and the results of our observations are detailed in Section <ref>. We discuss the various reduction methods used to eliminate noise in each dataset, and their similarities to solutions proposed for mitigating exozodiacal dust around planets in the habitable zone, in Section <ref>, and present the conclusion in Section <ref>. § OBSERVATION AND ANALYSIS §.§ Observations with MagAO-X Observations were made using MagAO-X, an ExAO system designed to perform coronagraphic imaging at visible wavelengths (0.5-1.0 μm) at high Strehl; it hosts a 97-actuator woofer and a 2040-actuator tweeter along with a pyramid wavefront sensor (PWFS) and a Lyot-coronagraph system that feeds light into a dual-EMCCD simultaneous differential imaging (SDI) science camera system. The coronagraph contains a third deformable mirror for correction of non-common-path errors <cit.>. The low-noise (<0.6 rms e^- read noise) EMCCD pyramid WFS OCAM2K detector enables Strehls of >50% while closed loop at 2 kHz. To achieve its science goals of detection and characterization of Solar System-like exoplanets, MagAO-X has recently gone through a Phase II upgrade. Most notably, a new post-AO 1000-actuator MEMS device was added inside the coronagraph to enable Focal Plane Wavefront Sensing (FPWFS) and improved Focus Diversity Phase Retrieval (FDPR <cit.>) performance on sky, increasing the Strehl to ∼28% at Hα on sky with faint (V∼12 magnitude) targets <cit.>. With this suite of technologies, the largest and deepest (Hα∼10^-4) survey for protoplanets, MaxProtoPlanetS (PI: Laird Close), has commenced <cit.>. 
As a part of this project, we observed the following objects in Angular Differential Imaging (SDI) mode with the dual-EMCCD SDI system through an Hα continuum filter (λ_o=0.668 μm, Δλeff=0.008 μm) and an Hα filter spanning three observation run: (1)HD 34700, located at 356.5±6.1 parsecs, is a young T Tauri binary with an approximate age of 5 Myr and equal mass components ∼2M_⊙, is separated by approximately 0.0007". It is known to have three external stellar companions <cit.> and exhibits multiple spiral arms along with a massive central cavity with an estimated radius of 175 AU <cit.>. The first observation, done on Dec 4 2022, used a wide Hα filter(λ_o=0.656μm, Δλeff=0.009μm[Filter specifications can be found in the digital MagAO-X instrument handbook at <https://magao-x.org/docs/handbook/index.html>]) along with EM gain of 100 on both science cameras at 4Hz. The V band seeing of the night were affected heavily by the wind, varying from roughly 0.6" to 1.0". As we tracked the object through transit, a total of 51^∘ of sky rotation was obtained. A narrow Hα filter(λ_o=0.656 μm, Δλ eff=0.001 μm) for the latter epoch of observation on Mar 5 2023. The seeing averaged to be approximately 0.5" throughout the night. An exposure time of 0.25 s, which is equivalent to a 4 Hz readout speed, with EM gains of 50 and 200 were set for science cameras in continuum and Hα to avoid saturation. Despite the better conditions, we were only able to obtain about 30^∘ of sky rotation due to its sunset transit time. After data selection, which is detailed in <ref>, we kept 104 min and 132 min of data respectively for the 2022 and 2023 epoch. (2) HD 142527 is a binary transitional disk system (∼ 5 Myr) located at a distance of 159.3±0.7 pc. Its primary star is a Herbig Ae/Be star with a mass of ∼2 M_⊙ <cit.>, while it has a lower mass (∼0.35 M_⊙) stellar companion situated 15 AU away from the central star <cit.>. Observations in both sub-millimeter and scattered infrared light reveal that the disk of this system exhibits multiple spiral arms and contains a central cavity with a radius of approximately ∼140 AU (e.g., <cit.>). This target was observed on the night of Mar 8, 2023 with average seeing of 0.75". The cameras were running at 4 Hz with an EM gain of 40 and 150 respectively through the Hα continuum camera and Hα narrow camera. We took 160 min of data and acquired 120^∘ of rotation. (3) MaxProtoPlanetS 1 is a newly imaged ALMA face-on disk with a dust depleted gap at 0.6" with I band magnitude of ∼11 <cit.>, placing this object at the faint end of the MaxProtoPlanetS sample. We observed this target on Mar 20 and Mar 25 of 2024 with seeing conditions on both nights being <0.5". Again, we utilize the SDI imaging mode through the narrow Hα and Hα continuum filters. The integration time was set to be 3 seconds per frame for the first observation, and the EM gain was set to 500 for the Hα camera and 300 for Hα continuum camera. We obtained a total of ∼ 70^∘ degrees of rotation for approxmately 70 mins of integration time. Due to its overhead position during transit, 45^∘ of data were before obtained before transit and 25^∘ after transit with 100^∘ rotation spaced between. For our second observation of this object, the EM gain was set to 200 for the Hα camera and 600 for Hα continuum camera. This latter dataset contained 65 mins of 1s exposure frames after transit, yielding ∼ 45^∘ rotation. §.§ Data Selection and Reduction The selection and reduction procedures are identical for the different science cameras. 
First, we apply a 10-20% cut on the dataset by peak counts from the source in each 1024x1024 frame. The center of the star is roughly estimated to be located at the pixel with the maximum counts for this initial selection. A more precise central location is identified in later stages of reduction. For more efficient processing, a box of 256x256 pixels centered on the central pixel is cropped out of each image. Based on the peak count of the central pixel, frames with values greater than 60,000 counts are rejected as they have surpassed the detector saturation limit. The remaining frames are aligned via cross-correlation with bi-cubic interpolation through the OpenCV package to account for sub-pixel shifts <cit.>. Frames with more than a half-pixel offset from the reference PSF are rejected. We note that the sub-pixel offset measurement is done to a precision of a hundredth of a pixel within a 32x32 cutout around the center; precision can be increased at the cost of longer run time. An additional ∼20-30% of frames are discarded for their large offsets for most datasets. With worse data quality (e.g., low Strehl ratio, increased seeing), the percentage can increase to roughly 50%. Since the expected Hα flux from the protoplanets is low, it is key to preserve every photo-electron from the planet to ensure the survival of planet light after PSF subtraction. The passing frames are block averaged by time or parallactic angle (PA) for better performance of the pipeline. The former method combines a set number of frames into one via summation, creating a combined frame with a longer exposure time and a larger number of Hα counts. This number varies depending on the size of the dataset; we have found that constraining this parameter to between 50 and 200 frames leads to higher efficiency and ensures diversity in PA for increased ADI performance. The PA value of each combined frame is the average of the PA values associated with the individual frames. The latter is an alternative method for observations without data through transit, as it creates a more evenly spaced data cube to enhance ADI performance. The final step before PSF subtraction is applying a high-pass filter to the combined frames, which serves as a substitute for dark and flat subtraction. Additionally, it also removes some of the extensive scattered light from the central star and dust present in the system, which occupies the lower spatial frequencies. A median image of the combined frames is created to determine the true center of the aligned images, the FWHM, and the peak of the PSF. We used the Python package PyKLIP <cit.> to perform both KLIP and ADI to increase the post-processed contrast. There are three key parameters in PyKLIP that affect the stellar PSF reconstruction and subtraction: annuli, subsections, and numbasis. The PSF is modeled in annular segments (annuli), each subdivided into equal subsections (subsections), with a number of KL modes or principal components (numbasis). PyKLIP produces a de-rotated data cube containing the images after the requested numbers of principal components have been removed. In our case, our post KLIP-ADI data cube contains images with 1, 5, 10, 20 and 50 KL modes subtracted. The movement parameter, an exclusion criterion for picking reference PSFs, was limited to values ranging from 0 to 5 pixels. 
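Before the movement criterion is discussed further below, the sketch that follows illustrates the bookkeeping of the frame selection and combination steps just described (peak-count cut, saturation rejection, cropping, offset-based rejection, and block averaging by parallactic angle). It is a minimal NumPy illustration under stated assumptions — the array layout, the 20% cut fraction, the block size of 100 frames, and the assumption that the star sits far from the frame edge are ours — and is not the actual MaxProtoPlanetS reduction code.

```python
import numpy as np

def select_and_combine(frames, parangs, offsets, cut_frac=0.20,
                       sat_limit=60000, max_offset=0.5, block_size=100):
    """Illustrative frame selection and block averaging (assumed parameters).

    frames  : (N, 1024, 1024) array of raw camera frames
    parangs : (N,) parallactic angles in degrees
    offsets : (N,) measured sub-pixel offsets from the reference PSF, in pixels
    """
    # 1) Keep the brightest (1 - cut_frac) fraction of frames by peak counts.
    peaks = frames.reshape(len(frames), -1).max(axis=1)
    keep = peaks >= np.quantile(peaks, cut_frac)
    # 2) Reject saturated frames (peak above the detector limit).
    keep &= peaks < sat_limit
    # 3) Reject frames whose alignment residual exceeds half a pixel.
    keep &= np.abs(offsets) <= max_offset
    frames, parangs = frames[keep], parangs[keep]

    # 4) Crop a 256x256 box around the brightest pixel of the median image
    #    (a stand-in for the more careful centering done in the real pipeline;
    #    assumes the star is at least 128 pixels from every edge).
    med = np.median(frames, axis=0)
    cy, cx = np.unravel_index(np.argmax(med), med.shape)
    frames = frames[:, cy - 128:cy + 128, cx - 128:cx + 128]

    # 5) Block-average: sum consecutive frames and average their PAs.
    nblk = len(frames) // block_size
    combined = np.array([frames[i*block_size:(i+1)*block_size].sum(axis=0)
                         for i in range(nblk)])
    block_pa = np.array([parangs[i*block_size:(i+1)*block_size].mean()
                         for i in range(nblk)])
    return combined, block_pa
```

The combined frames and averaged PAs would then be high-pass filtered and passed to PyKLIP (with the annuli, subsections, numbasis, and movement settings quoted above) for the KLIP-ADI subtraction.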
In other words, the reconstruction of the target PSF does not utilize images where the rotation of the companion between target and reference is less than the given movement value. If the movement values are small, the final reduced images can be more susceptible to self-subtraction. The final ASDI images are created by multiplying the KLIP-ADI reduced continuum image by a scale factor before subtracting it from the Hα image to account for the flux difference from primary star at the two wavelengths. This should eliminate the residual starlight and scattered light from disk structures. In the case where the disk structures are not located close to the point source of interest, the scale factor is a ratio of flux of the central star between the two filters. To account for the change in diffraction pattern with wavelength, the continuum image is scaled spatially by the ratio of the two wavelengths. However, in the case where a bright disk structure is present, this step of resizing the continuum image is omitted and the scaling factor for flux is determined through minimizing the average flux difference of apertures placed on disk structure around the region of interest between the two wavelengths for each mode parameter. Further discussion on optimization of the free parameters of the different reduction routines used for datasets and their effects on the interpretation of the results can be found in Section <ref>. §.§ Astrometry and Photometry of Companions Candidates As PSF subtraction algorithms can distort planet signal, we obtained companion astrometry and photometry through the Bayesian KLIP Astrometry (BKA) technique with the forward modeling feature in PyKLIP <cit.> for accurate measurements and uncertainties on the companion parameters. The initial position of the planet is obtained through a Gaussian fit in ASDI images, and a grid of forward modeled negative planets are injected within a FWHM of such position into the datacube of combined frames. The injected planets are all of the same brightness and thus minimizing the total flux within a circular aperture with radius being FWHM of the PSF centered around the initial position should be the optimal location. To account for the uncertainty of the initial position, the optimal location is taken to be the median value of positions with minimized total flux with aperture radii ranging from 0.75 FWHM to 1.5 FWHM. The contrast of the companion can be determined in a similar method through injecting negative fake planets of various contrast at the optimized position, but we minimize the root-mean-square (RMS) rather than total flux. § RESULTS §.§ Objects with Known Companions or Companion Candidates in the Literature HD 142527: Shown in Figure <ref>, we successfully resolved the stellar binary of HD 142527 with a SNR=5 at ∼0.054" (∼5 AU) in both science filters, as well as the disk spanning 1" in radius. There are no additional detections of new companion candidates in the gaps of this system. §.§ Objects with New Companion Candidates HD 34700: We report the tentative (∼ 4σ) detection of a point source with excess Hα flux in both the Dec 4 and Mar 8 observations. The reduced Hα, continuum images and final SDI cubes from both epochs are shown in Figure <ref>. The brightness of the object and its massive disk introduces light contamination in the position of the observed point source and the region around it (represented by the circular aperture in Figure <ref> and beyond). 
In the first epoch, the point source appears to be an extended structure of the disk in both the Hα and final ASDI image. Although a similarly prominent source was lacking in the continuum images at the position of the structure excess Hα flux, we do note that there is a significant amount of background light either from the scattered disk light or remaining flux from the stellar PSF. If treating this source as a potential companion, it is an 5.3 σ point source detection at separation of 61.0±1.27 pixels and PA of 341.0±0.72^∘, with a contrast of (5.5±2.7)·10^-5. In the latter epoch, with better seeing conditions and a narrow Hα, there is an significant reduction of residual stellar flux and scattered light overall. Finer substructure of the disk can be identified in the continuum, including western parts of the inner disk. Due to the lack of photons through the narrow 1 nm Hα filter, the path of the central photo-electrons being transferred across detector during readout becomes prominent (PA∼ 142^∘), leaving a similar linear “read-out stripe" structure in the ASDI images. The Hα excess source appears to be detached from the disk and has a closer resemblance to a point source in both Hα and continuum KLIP reduced images. As the value of the numbasis parameter increases , or the number of KL basis used for stellar PSF reconstruction and subtraction, the bright point source in continuum images changes into a dipole like structure, with two bright circular blobs with a dark lane separating the two in the middle. When treating this source as a potential companion, the point source is located at a separation of 62.9±0.87 pixels and PA of 341.3±0.57^∘, with a contrast of 5.5·10^-5±2.7·10^-5 and an average SNR of 4. MaxProtoPlanetS 1: We report a detection of a companion candidate in the second epoch (Mar 25) observation of this object in both Hα and continuum images at 5 σ and 2 σ respectively. The Hα, continuum, and ASDI images are shown in Figure <ref>. Despite the positive detection in both filters, there appears to be a ∼1 pixel offset in the location of the source and a difference in morphology. The Hα source resembles a point source in different reductions more consistently, which is in agreement with a true accreting object <cit.>. It is also worth noting that we fail to obtain a 5σ source when choosing to combine frames by time rather than PA, like attributable to the lack of significant rotation for many of the frames taken well after transit, as mentioned in <ref>. The 5σ detection in ASDI image indicates the presence of additional Hα flux from the source. However, we fail to detect any signal in the initial observation on Mar 20 in both wavelengths with either method of performing ADI as shown in the top row of Figure <ref>. Considering the variations in the PSF before and after transit, we discarded the data post-transit (∼15%) for a more stable PSF reference to ensure the efficiency of PSF removal with PyKLIP. The reduced pre-transit dataset is shown in the bottom row of Figure <ref>. § DISCUSSION HD 142527: The location of the stellar companion is roughly consistent with the latest orbit with semimajor axis ∼10 AU <cit.>. Our additional data point will further constrain the orbit of HD 142527 B, but the separation remains small between the central binary and thus ruling out the highly eccentric orbit of the stellar companion as the cause to many of its disk features, notably the massive cavity spanning ∼100 AU. 
With the non-detection of additional companions in this system, the origin to its large central cavity remains unknown, as the stellar companion with such an orbit can only be responsible for gaps ∼ 30 AU <cit.>. HD 34700: The positive Hα source identified in both epoch of MagAO-X observations is approximately 0.378" away from the central binary. A SNR=5 source with Hα excess emission was detected in the first epoch. However, due to sub-optimal seeing conditions and the brightness of the disk in Hα, it was impossible to separate the contributions from the disk and the planet signal as the signal does not appear to be a point source, but an extension of the disk. In subsequent observations in the 2023A term, we employed narrow Hα filter (Δλ= 1 nm) instead of the regular Hα filter (Δλ= 9 nm). This allowed for the isolation and recovery of the excess Hα signal from the protoplanet candidate, albeit with an SNR= 3.5. The improved conditions during this epoch also revealed a positive signal in the continuum emission at the same location. Although the continuum signal did not resemble a point source as observed in Hα, the presence of continuum flux suggests that the source may be associated with disk features rather than an accreting protoplanet. When combining both SDI images and overlapping apertures of a five pixel radius centered on the 100 brightest pixels from each epoch, only a handful apertures remain as shown in Figure <ref>. Majority of remaining apertures are located on the brighter parts of the disk, and only two does not have direct association with the disk, which is illustrated roughly by the gray elliptical annulus in Figure <ref>. One of such two location appears to be consistent with the location of the potential companion. Observation in JHK bands with SCExAO/CHARIS detected a positive signal with close proximity to our observation <cit.>. However, due to the complex disk structure, they concluded that the signal is likely a distorted part of the disk's spiral arm or an artifact introduced during post-processing. Mass limits on the potential substellar mass objects were placed at ∼12 M_Jup inside the disk and ∼5 M_Jup outside of the disk. The true nature of this source remains a mystery, but its excess of Hα flux is challenging to explain as anything other than a protoplanet. A recent multi-wavelength study of this disk revealed an inner ring extending from 65 to 120 AU inside the multi-spiral outer disk in the polarized Hα image<cit.>. We detected parts of this inner disk feature in both epochs, but as it is an extended structure, it is broken up into sections by KLIP, and is better seen in the latter epoch of data. Although there were no detection of point sources, they used the observed geometric offsets between the inner and outer ring to constrain the mass and location of the potential companion to be a ∼4_Jup mass planet inside or outside the HD 34700 A inner disk <cit.>. Due to the difference in filter width in the new epoch, we failed to remove the speckle noise and identify any protoplanetary candidates within the inner disk. MaxProtoPlanetS 1: Our null detection in the first epoch of observation of this object is likely attributable to the fast variation in atmospheric turbulence conditions that was unable to be corrected by the AO loop. We see strong wind driven halos and its artifacts in the post-processes images. 
Through decreasing the variation in the PSF by eliminating data obtained after transit, the wind driven halo becomes less prominent as shown in Figure <ref>. However, the smaller variation still prevents the ADI from accurately modeling the starlight residuals, resulting in strong asymmetric wind driven halo residuals the ADI images. Additionally, since the direction of the wind driven halo follows PA, which is the same PA as the companions, its signature aligns and accumulates when the temporal data cube is simply rotated and median combined <cit.>. This behavior we observe is likely caused by the smearing of the servo-lag speckles across a planet in the direction of a wind. §.§ “Read Out Stripe" One of the major sources of noise in the post-processed images is a bright stripe created from the transfer of charge from the bright central star to the readout region on a frame transfer CCD. An example in our data can be seen in as the linear feature the Mar 8 data of HD 34700A in the lower right panel of Figure <ref>, where this effect is most prominent when the camera is set with high EM gains and high readout speeds on targets with low photon counts. As HD 34700A and its disk are brighter in continuum and the continuum EM gain is relatively lower than its Hα counterpart, the stripe is not pronounced in the Hα continuum ADI image. The stripe remains in the same position throughout observations, just like stellar speckles, thus it can be partially removed by ADI PSF subtraction and its effect is negligible. However, when the observation lacks rotation, it remains and gets broken up by PyKLIP, resulting in bright blobs or point sources in that PA vicinity. The location of the stripe is flipped between the cameras due to the extra reflection towards the Hα science camera, allowing for partial differentiation between artifact and true signal. For protoplanet identification, which requires differentiation in both cameras to distinguish its nature, we neglect the regions overlapping with the read out stripe. §.§ Disk Removal in ASDI In attempts to isolate the point source signal in HD 34700, we used a slightly different approach creating the ASDI images. As mentioned in Section <ref>, we did not compress the continuum image by the ratio of the wavelengths for this object, as this standard practice of SDI only enhances our major noise source, disk structures; Unlike diffraction speckles, real objects like disks, do not scale with wavelength. As the source is located within the disk, the disk must be removed to isolate the point source. Due to the large size of disk and its distance from its young central stars, the flux from the central star no longer accurately represents the flux of the disk. Thus, we attempted to remove the outer disk via two different approaches. First, we first performed reference differential imaging (RDI) in combination with KLIP to extract the outer disk in observations in both wavelengths with a bright target observed earlier in the night as the reference star. The extracted disk can then be subsequently subtracted from the ADI image. The outer disk can be extracted using this method as shown in Figure <ref>. While the rough elliptical shape of the outer disk is extracted, its multi-spiral and discontinuity feature cannot be recovered. Due to the lack of the disk structures in the model, the disk removal only added more noise to the region of interest near the discontinuity. 
We tried removed the disk by performing photometry on the disk as an alternative; We measured the total flux from the outer disk via a single elliptical annulus and a group of circular apertures in both wavelengths for creating the final ASDI image for both epochs. The elliptical annulus is fitted to the outer disk with the spirals ignored via the Least Squares fitting of ellipses tool in Python <cit.>. The slight inclination of the outer disk introduced a variation of brightness in disk, causing the region of interest to have higher flux than the rest of the disk. The scaling factor determined through a single elliptical annulus represents a more global flux different of the disk, but is not effective in removing the disk in region of interest. To find a scaling factor better representative of the residual disk flux in the northwest, we placed a group of circular aperture with diameter of size equivalent to the FWHM of the PSF in such region. We experimented with different numbers of apertures placed and their locations, and found that placing eight on the bright discontinuity region of the disk can best removes the disk around the companion candidate. This is likely due to the flux from the structure around the discontinuity better captures the true flux within the region of interest. However, this elongates the candidate and creating a more extended morphology rather than a point source. Furthermore, this approach to disk removal introduces new variables when performing photometry of the disk, such as the location, shape, size, and number of the apertures used. Fine-tuning these parameters can lead to different interpretation on the properties and the nature of the detected source in the ASDI image. The task of retrieving the planet from disk structures shares similar challenges as imaging planets within habitable zones. The exozodiacal dust present in the region can obscure observations of Earth-like exoplanets and it will need to be subtracted to achieve the desired contrast. The various techniques used in the proposed solutions are also utilized in our reduction pipeline: PSF subtraction, ADI, and disk subtraction via high pass filtering <cit.>. In simulations, a simple high-pass filter removes structured exozodi to the Poisson noise limit for systems with inclinations <60^∘ and up to 100 zodis <cit.>. However, to reach such a noise floor in real observational data can be challenging, especially when there are bright dust or speckle structures at the same spatial frequencies as the planet. Our observations of the HD 34700A system serve as a realistic example in non-ideal scenarios, including the presence of the bright and complex disk structures and observational conditions being sub-optimal. § CONCLUSION In this paper, we presented MagAO-X observations of three different transitional disk systems on the 6.5m Magellan Clay telescope using Hα differential imaging for extraction of accretion emissions from potential protoplanets. We applied standard PSF subtraction and post-processing techniques to separate planetary signals from the speckle noise and the surrounding disk features. We successfully identified accreting gap companion signal in all three datasets: a repeated detection of the accreting stellar companion in the HD 142527 system along with two potential substellar companion candidates, HD 34700Ab and MaxProtoPlanetS1. 
Despite the fact that both protoplanet candidates had 5σ detections in a single epoch, the true nature of these two objects remains in question due to observational variability across epochs and the disk background noise in which they are embedded. With our current observational and post-processing techniques, we find that the wind-driven halo can limit our sensitivity in regions closer to the star, while complex disk features become a problem as we move further away. We do not yet have a reliable approach for disentangling the light of an embedded protoplanet from its disk when it does not appear as a clear Hα-excess point source. However, our dataset of HD 34700A serves as an example of an embedded source that can be recovered, although the nature of the source is enigmatic and our achieved contrast is reduced due to disk light. A similar problem is faced when trying to image habitable-zone planets within exozodiacal dust. J. Li, M. Y. Kautz, and E. A. McEwen are supported by NSF Graduate Research Fellowships. L. M. Close and J. Li were partially supported by NASA eXoplanet Research Program (XRP) grant 80NSSC18K0441 and are now supported by grant 80NSSC21K0397, which funds the MaxProtoPlanetS survey (PI: L. M. Close). S. Y. Haffert received support from NASA through the NASA Hubble Fellowship grant #HST-HF2-51436.001-A, awarded by the Space Telescope Science Institute (operated by AURA under NASA contract NAS5-26555). We are very grateful for the support from NSF MRI Award #1625441 for the development of MagAO-X. The MagAO-X Phase II upgrade program (PI: J. R. Males) is made possible by the generous support of the Heising-Simons Foundation.
http://arxiv.org/abs/2407.12669v1
20240717155245
Enhancing the Utility of Privacy-Preserving Cancer Classification using Synthetic Data
[ "Richard Osuala", "Daniel M. Lang", "Anneliese Riess", "Georgios Kaissis", "Zuzanna Szafranowska", "Grzegorz Skorupko", "Oliver Diaz", "Julia A. Schnabel", "Karim Lekadir" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Enhancing Privacy-Preserving Cancer Classification using Synthetic Data R. Osuala et al. Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Spain richard.osuala@ub.edu Helmholtz Center Munich, Munich, Germany Technical University of Munich, Munich, Germany Imperial College London, London, United Kingdom Computer Vision Center, Bellaterra, Spain Kings College London, London, UK Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain Enhancing the Utility of Privacy-Preserving Cancer Classification using Synthetic Data Richard Osuala1,2,3 Daniel M. Lang2,3 Anneliese Riess2,3 Georgios Kaissis2,3,4 Zuzanna Szafranowska1 Grzegorz Skorupko1 Oliver Diaz1,5 Julia A. Schnabel2,3,6 Karim Lekadir1,7 Received 04 December 2023 / Accepted 15 July 2024 ========================================================================================================================================================================================= § ABSTRACT Deep learning holds immense promise for aiding radiologists in breast cancer detection. However, achieving optimal model performance is hampered by limitations in availability and sharing of data commonly associated to patient privacy concerns. Such concerns are further exacerbated, as traditional deep learning models can inadvertently leak sensitive training information. This work addresses these challenges exploring and quantifying the utility of privacy-preserving deep learning techniques, concretely, (i) differentially private stochastic gradient descent (DP-SGD) and (ii) fully synthetic training data generated by our proposed malignancy-conditioned generative adversarial network. We assess these methods via downstream malignancy classification of mammography masses using a transformer model. Our experimental results depict that synthetic data augmentation can improve privacy-utility tradeoffs in differentially private model training. Further, model pretraining on synthetic data achieves remarkable performance, which can be further increased with DP-SGD fine-tuning across all privacy guarantees. With this first in-depth exploration of privacy-preserving deep learning in breast imaging, we address current and emerging clinical privacy requirements and pave the way towards the adoption of private high-utility deep diagnostic models. Our reproducible codebase is publicly available at <https://github.com/RichardObi/mammo_dp>. § INTRODUCTION Breast cancer accounts for staggering estimates of 684.000 deaths and 2,26 million new cases worldwide per year <cit.>. Part of this burden could be reduced through earlier detection and timely treatment. Screening mammography is a cornerstone for early detection and further associated with a reduction in breast cancer mortality <cit.>. Recent literature emphasizes the potential of deep learning-based computer-aided diagnosis (CAD) <cit.>, e.g., demonstrating that a symbiosis of deep learning models with radiologist assessment yields the highest breast cancer detection performances <cit.>. However, training deep learning models on patient data poses a risk of leakage of sensitive person-specific information during and after training <cit.>, as models have the capacity to memorise sufficient information to allow for high-fidelity image reconstruction <cit.>. To avoid such leakage of private patient information, data needs to be protected during model training, in particular when the objective is to develop models to be used in clinical practice or shared among entities. 
Furthermore, international data protection regulations grant patients the right to request the removal of their information from data holders. For instance, point (b) of article 17(1) of the EU General Data Protection Regulation (GDPR) <cit.> stipulates that data subjects have a right to be forgotten. Given, for instance, the proven possibility of reconstructing training data given a model's weights <cit.>, these rights can extend to the removal of patient-specific information from already trained deep learning models <cit.>. However, it is known to be difficult to reliably and provably remove patient information — present in only one or few specific training data points — from already trained model weights <cit.>. A generic and verifiable alternative is given by the removal of a patient's data point from the training data and retraining of the respective model with the reminder of the dataset. This procedure is not only likely to have negative impacts on the performance of algorithms, but also emerges as a deterrence and risk for hospitals to adopt deep learning models, due to extensive economic, organisational, and environmental costs caused by retraining. Anticipating patient consent withdrawals, costly retraining can be avoided by demonstrating that deep learning model weights do not include personally identifiable information (PII) about any specific patient. To this end, a powerful technique to ensure privacy during model training is given by Differentially Private Stochastic Gradient Descent (DP-SGD)<cit.>, which quantifiably reduces the effect each single training sample can have on the resulting model weights. Furthermore, privacy-preservation can also be achieved by diagnostic models exclusively trained on synthetic data, which is not (unambiguously) attributable to any specific patient but rather contains anonymous samples representing the essence of the dataset <cit.>. The caveat of both DP-SGD and synthetic data strategies is, however, that they generally lead to a reduction in model performance, known as the privacy-utility trade-off. Investigating this trade-off in the realm of breast imaging, our core contributions are summarised as follows: * We design and validate a transformer model, achieving promising performance as a backbone for privacy-preserving breast mass malignancy classification. * We propose and validate a conditional generative adversarial network capable of differentiating between benign and malignant breast mass generation. * We empirically quantify privacy-utility-tradeoffs in mass malignancy classification, assessing various differential privacy guarantees, and further combine and compare them with training on synthetic data. § METHODS AND MATERIALS §.§ Datasets and Preprocessing We use the open-access Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) dataset <cit.>, which consists of 891 scanned film mammography cases with segmented masses with biopsy-proven malignancy status. After extracting mass images from craniocaudal view (CC) and mediolateral oblique (MLO) views, we follow the predefined per-patient train-test split <cit.>, allocating 1296 mass images for training and 402 (245 benign, 157 malignant) mass images to testing. We further divided this training set randomly per-patient into a training (1104 mass images, 525 malignant) and a validation set (192 mass images, 102 malignant). 
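Because the split just described is performed per patient (so that no patient contributes mass images to both the training and validation sides), a grouped splitter is the natural tool. The snippet below is an illustrative way to reproduce that kind of split with scikit-learn's GroupShuffleSplit; the 15% validation fraction and the variable names are our own assumptions, not the authors' exact procedure.

```python
from sklearn.model_selection import GroupShuffleSplit

def per_patient_split(image_paths, labels, patient_ids, val_frac=0.15, seed=0):
    """Split images so that all images of a given patient stay on one side."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=val_frac, random_state=seed)
    train_idx, val_idx = next(splitter.split(image_paths, labels, groups=patient_ids))
    return train_idx, val_idx
```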
As external test set, we further adopt the publicly available BCDR cohort <cit.>, which comprises 1010 patients, totalling 1493 lesions (639 masses) with biopsy information from both digital mammograms (BCDR-DM) and film mammograms (BCDR-FM). Our final BCDR test set contains 1106 mass images extracted from CC and MLO views, 486 of which are malignant and 620 benign. To obtain mass patches, the lesion contour information is used to extract bounding boxes from the mammograms. We then create a square patch with a minimum size of 128x128 around this bounding box, ensuring a margin of 60 pixel in each direction. For classification, the mass patches are resized to pixel dimensions of 224x224 using inter-area interpolation, maintaining image ratios, and stacked to 3 channels. Models were trained on either a single 8GB NVIDIA RTX 2080 Super or 48GB RTX A6000 GPU using PyTorch and opacus <cit.> for DP-SGD. §.§ Cancer Classification Transformer Model Given its reported high performance on classifying the presence of a lesion in mammography patches <cit.> and its shifted window mechanism, allowing to effectively attend to shapes of varying sizes, we adopt a swin transformer (Swin-T) <cit.> as cancer classification model, to distinguish between benign and malignant masses. We inititalize ImageNet-pretrained <cit.> network weights and, after following the Swin-T hyperparameter setup <cit.> (stride, window size), we adjust the last fully-connected layer of the swin transformer reinitializing it with two output nodes each one outputting the logits for one of our respective classes (i.e., malignant or benign). We only set the parameters of the adjusted fully-connected layer as trainable and apply a learning rate of 1e-5. A weight decay of 1e-8 is used following the fine-tuning experiment described in <cit.>. Furthermore, an adamw optimizer, label smoothing of 0.1, and a batch size of 128 are used. During training, random horizontal and vertical flips are applied as data augmentation and a cross entropy loss is backpropagated. Training for 300 epochs, the model from the epoch with the lowest area under the precision-recall curve (AUPRC) on the validation set is selected for testing. §.§ Malignancy-Conditioned Generative Adversarial Network Going beyond unconditional mass synthesis in the literature <cit.>, we propose a malignancy conditioned generative adversarial network (MCGAN) to control the generation of either benign or malignant synthetic breast masses. In general, GANs consist of a generator (G) and a discriminator (D) network, which engage in a two-player zero-sum game, where G generates synthetic samples that D strives to distinguish from real ones <cit.>. We design G and D as deep convolutional neural networks <cit.> and, as shown in Fig. <ref>, integrate class-conditional information <cit.>. To this end, we extract the histopathology report's biopsy information for each mass from the metadata, and convert it into a discrete malignancy label. Then, we transform this label into a multi-dimensional embedding vector before passing it through a fully-connected layer yielding a representation with the corresponding dimensionality to concatenate it to the generator input (100 dim noise vector) and to the discriminator input (128x128 input image). As D learns to associate class labels with patterns in the input images, it has to learn whether or not a given class corresponds to a given synthetic sample. 
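The class-conditional training updates described here (and formalized just below) can be sketched compactly in PyTorch. In this sketch, `D` and `G` are placeholders for any discriminator and generator modules that accept a label argument; the latent dimension of 100 matches the noise vector above, while the module interfaces and the non-saturating generator objective are illustrative choices rather than the exact MCGAN implementation.

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, G, real_imgs, labels, z_dim=100):
    """Discriminator update: maximize log D(x|y) + log(1 - D(G(z|y)|y)),
    i.e. minimize binary cross-entropy against real=1 / fake=0 targets."""
    z = torch.randn(real_imgs.size(0), z_dim, device=real_imgs.device)
    fake_imgs = G(z, labels).detach()            # stop gradients flowing into G
    real_logits = D(real_imgs, labels)
    fake_logits = D(fake_imgs, labels)
    # One-sided label smoothing (e.g. targets drawn from [0.7, 1.2]) could be
    # applied by replacing torch.ones_like with smoothed real targets.
    loss_real = F.binary_cross_entropy_with_logits(real_logits,
                                                   torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits,
                                                   torch.zeros_like(fake_logits))
    return loss_real + loss_fake

def generator_step(D, G, batch_size, labels, z_dim=100):
    """Non-saturating generator update: push D(G(z|y)|y) towards the 'real' label,
    which backpropagates the discriminator's class-conditional signal into G."""
    z = torch.randn(batch_size, z_dim, device=labels.device)
    fake_logits = D(G(z, labels), labels)
    return F.binary_cross_entropy_with_logits(fake_logits,
                                              torch.ones_like(fake_logits))
```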
Furthermore, as the discriminator loss is backpropagated into the generator, G is forced to synthesize samples corresponding to the provided class condition. This results in G learning a conditional distribution based on the value function min_G max_D V(D,G) = min_G max_D [𝔼_x∼ p_data [log D(x|y)] + 𝔼_z∼ p_z [log(1 - D(G(z|y)))]]. Optimizing the discriminator via binary cross-entropy <cit.>, we define its loss in a class-conditional setup as L_D_MCGAN = - 𝔼_x∼ p_data [log D(x|y)] - 𝔼_z∼ p_z [log(1 - D(G(z|y)))]. We train our MCGAN on the CBIS-DDSM training data, applying random horizontal (p=0.5) and vertical (p=0.5) flipping as well as random cropping with resizing, where the resize scale ranges from 0.9 to 1.1 and the aspect ratio from 0.95 to 1.1. We further include one-sided label smoothing <cit.> in a range of [0.7, 1.2]. Following <cit.>, we employ a discriminator convolutional kernel size of 6 and a generator kernel size of 4. We observe that this reduces checkerboard artefacts, as D's field-of-view now requires G to create realistic transitions between the kernel-sized patches in the image. MCGAN is trained for 10k epochs with a batch size of 16. Based on the best quality-diversity tradeoff, we select the model from epoch 1.4k after qualitative visual assessment of generated samples. §.§ Patient Privacy Preservation Framework Privacy protection is an ethical norm and a legal obligation, e.g. granting patients the right to their (retrospective) removal from databases <cit.>. Since (biomedical) deep learning models are vulnerable to information leakage, e.g. of sensitive patient attributes <cit.>, they can be affected by such (and future) regulations. However, privacy-preserving techniques can be integrated into deep learning frameworks and, to some extent, avoid compromising confidential data. Examples include (i) model training with DP-SGD <cit.> and (ii) training exclusively on synthetic data. From a legal perspective, models trained on only synthetic data remain unaffected by patient consent withdrawal if relatedness between the data and the data subject cannot be established, or if personal data has been rendered synthetic in such a manner that the data subject is no longer identifiable <cit.>, e.g., according to article 4(1) and recital 26 of the GDPR <cit.>. It is to be noted that in the acceptable-risk legal interpretation, a data subject's re-identification risk is reduced to an acceptable level rather than fully eradicated <cit.>. Hence, this interpretation enables approaches such as synthetic data and/or Differential Privacy (DP) model training to be used as legally compliant privacy preservation methods despite not guaranteeing a zero risk of patient re-identification. DP is a mathematical framework that allows practitioners to provide (worst-case scenario) theoretical privacy guarantees for an individual sharing their data to train a deep learning model. Consider two databases (e.g., containing image-label pairs); we call them adjacent if they differ in a single data point, i.e., one image is present in one database but not in the other. Then, a randomised mechanism ℳ: 𝒟→ℛ with domain 𝒟 (the set of databases) and range ℛ is said to satisfy (ε,δ)-differential privacy if, for any two adjacent databases D, D'∈𝒟 and for any subset of outputs S⊆ℛ, Pr[ℳ(D)∈ S] ≤ e^ε Pr[ℳ(D')∈ S] + δ holds. ε and δ bound a single data point's influence on a model's output (e.g. the model's weights or predictions). 
Thus, the smaller the value of these parameters, the higher the model's privacy and the harder it is for an attacker to retrieve information about any training data point. DP-SGD <cit.> is the DP variant of the well-known SGD algorithm, and facilitates the training of a model under DP conditions. In particular, a model trained under (ε,δ)-DP is robust to post-processing, meaning that only using its output for further computations also satisfies (ε,δ)-DP. Moreover, the choice of these parameters is application-dependent and normative <cit.> and varies strongly across real-world deployments <cit.>. In the case of mammography, multiple lesions of the same patient are available in the datasets, i.e. one from the CC view and one from the MLO view. Therefore, to preserve the privacy of one patient it is necessary to protect all their data points (i.e. all images). In such a case, DP group privacy is used to estimate a patient's DP privacy guarantee. However, for simplicity, in our subsequent experiments, we provide image-level privacy guarantees rather than per-patient guarantees. § EXPERIMENTS AND RESULTS §.§.§ Synthetic Data Evaluation Qualitatively assessing the synthetic images in Fig. <ref>, it is not readily possible to distinguish synthetic from real masses in terms of image fidelity or diversity. We note the absence of clear visual indicators to distinguish between malignant and benign images for both real and synthetic images. This is in line with the difficulty of determining the malignancy of a mammographic lesion, as shown by high clinical error rates and inter-observer variability <cit.>. However, results for training our malignancy classification model on only synthetic data (see Syn and SynPre in Table <ref>) show that the synthetic data captures the conditional distribution, effectively generating either malignant or benign masses. Both the vanilla ImageNet-based Fréchet Inception Distance (FID) <cit.> and the radiology domain-specific RadImageNet-based FID <cit.> concur that the synthetic data (FID_Img=58±.72) is substantially closer to the real CBIS-DDSM <cit.> distribution than to BCDR <cit.> (FID_Img=156.43±1.43). This is even more pronounced when comparing the variation of extracted radiomics features for CBIS-DDSM to synthetic (FRD=18.12) and BCDR (FRD=277.63) images using the Fréchet Radiomics Distance (FRD) <cit.>. While this indicates desirable synthetic data fidelity, we also observe good diversity. The latter is shown by comparing subsets of the same datasets with each other, where the variation within the synthetic data (e.g., FID_Rad=0.32±.12) closely resembles the variation within the real CBIS-DDSM dataset (e.g., FID_Rad=0.31±.19). Notwithstanding less variation in radiomics imaging biomarkers within the synthetic data (FRD_Syn=0.57 vs. FRD_Real=3.48), this overall points to a valid coverage of the distribution and an absence of mode collapse. §.§.§ Mass Malignancy Classification As shown in Table <ref>, we conduct experiments with and without formal privacy guarantees. For scenarios where a formal privacy guarantee is not strictly required and, thus, synthetic data suffices as a privacy mechanism, we compare the results of training SwinT on synthetic data (Syn) and on real data (Real) with DP-SGD. Kaissis et al. <cit.> defined ε=6 as a suitable privacy budget for their medical imaging dataset. 
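For concreteness, the sketch below shows how a classifier can be trained to a target (ε, δ) budget with DP-SGD using the Opacus library, which is the tooling cited above for the DP experiments. The specific δ, learning rate, clipping norm, and epoch count are illustrative assumptions rather than the paper's settings, and the sketch assumes the model contains only layers supported by Opacus.

```python
import torch
from opacus import PrivacyEngine

def make_dp_training(model, train_loader, target_epsilon=6.0,
                     target_delta=1e-5, epochs=50, max_grad_norm=1.0):
    """Wrap model/optimizer/loader so training satisfies
    (target_epsilon, target_delta)-DP at the image level."""
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # assumed lr
    privacy_engine = PrivacyEngine()
    model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
        module=model,
        optimizer=optimizer,
        data_loader=train_loader,
        target_epsilon=target_epsilon,
        target_delta=target_delta,
        epochs=epochs,
        max_grad_norm=max_grad_norm,   # per-sample gradient clipping bound
    )
    return model, optimizer, train_loader, privacy_engine
```

After training, the spent budget can be queried from the engine (e.g., via its get_epsilon method) to verify the guarantee actually achieved.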
Compared to DP-SGD with ε=6, synthetic data achieves better AUPRCs for within-domain tests on CBIS-DDSM (SwinT_Syn=0.696 vs SwinT_Real(ε=6)=0.679) and is on par for out-of-domain (ood) tests on BCDR (SwinT_Syn=0.602 vs SwinT_Real(ε=6)=0.600). However, training all SwinT layers using synthetic data (SynPre) achieves substantially better performance, only approximated by DP results with ε=60, for within-domain (SwinT_SynPre=0.733 vs SwinT_Real(ε=60)=0.721) and ood (SwinT_SynPre=0.66 vs SwinT_Real(ε=60)=0.64) tests. Further fine-tuning SwinT_SynPre on real data using DP-SGD results in additional improvement across all privacy parameters for within-domain and ood testing. For instance, training SwinT_SynPre+RealFT with ε=1 results in an AUPRC of 0.74 and 0.67 for CBIS-DDSM and BCDR, respectively. To assess scenarios where a formal guarantee is required, we further compare DP-SGD training of SwinT on real data (Real) with DP-SGD training on a mix of real and synthetic data (Real+Syn). To this end, our experiments show that such synthetic data augmentation can improve the privacy-utility tradeoff. This is exemplified by SwinT_Real+Syn(ε=6) accomplishing an AUPRC of 0.708 within-domain and 0.647 ood, while SwinT_Real(ε=6) achieved 0.679 and 0.579, respectively. We further observe the trend that stricter privacy budgets (i.e., smaller ε) can be associated with a larger added benefit of synthetic data as additional classification model training data. § DISCUSSION AND CONCLUSION We introduce a privacy preservation framework based on differential privacy (DP) and synthetic data and apply it to the diagnostic task of classifying the malignancy of breast masses extracted from screening mammograms. We further propose, train, and evaluate a malignancy-conditioned generative adversarial network to generate a dataset of benign and malignant synthetic breast masses. Next, we train a swin transformer model on mass malignancy classification and assess, compare, and combine training under DP and training on synthetic data. This analysis revealed that when training with DP, synthetic data augmentation can notably improve classification performance for within-domain and out-of-domain test cases. Apart from that, we show, across privacy mechanisms and across domains, that the performance of models pretrained on synthetic data can be further improved by DP fine-tuning on real data. This finding is particularly important considering that synthetic data, if not directly attributable to any specific patient, can become a valid, legally compliant alternative to strict DP guarantees in clinical practice. Consequently, it is to be further investigated where and when deterministic mechanisms without formal DP guarantees can suffice to shield against different privacy attacks <cit.>. In particular, we motivate future work to analyse the extent to which the inherent properties of synthetic data generation algorithms can provide empirical protection against attacks. A methodological alternative to our approach is to assess privacy-utility tradeoffs when training the generative model itself using DP-SGD <cit.>, resulting in formal privacy guarantees for the generated synthetic datasets. Thus, a further avenue to explore lies in the question of whether the randomness inherent in randomised data synthesis algorithms (e.g., based on the noise in diffusion models <cit.> or GANs <cit.>) can be used to amplify the privacy of the DP versions of such synthesis algorithms, thereby potentially further enhancing privacy-utility tradeoffs. 
To this end, our study constitutes a crucial first step leading towards the clinical adoption of diagnostic deep learning models, enabling practical privacy-utility tradeoffs all while anticipating respective legal obligations and clinical requirements. §.§.§ This study has received funding from the European Union’s Horizon research and innovation programme under grant agreement No 952103 (EuCanImage) and No 101057699 (RadioVal). It was further partially supported by the project FUTURE-ES (PID2021-126724OB-I00) from the Ministry of Science and Innovation of Spain. RO acknowledges a research stay grant from the Helmholtz Information and Data Science Academy (HIDA). §.§.§ The authors have no competing interests to declare that are relevant to the content of this article.
http://arxiv.org/abs/2407.12513v1
20240717115422
Weak supersymmetry and superconformal indices
[ "Vyacheslav P. Spiridonov" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2407.12684v1
20240717160255
4Dynamic: Text-to-4D Generation with Hybrid Priors
[ "Yu-Jie Yuan", "Leif Kobbelt", "Jiwen Liu", "Yuan Zhang", "Pengfei Wan", "Yu-Kun Lai", "Lin Gao" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT Due to the fascinating generative performance of text-to-image diffusion models, a growing number of text-to-3D generation works explore distilling the 2D generative priors into 3D, using the score distillation sampling (SDS) loss, to bypass the data scarcity problem. The existing text-to-3D methods have achieved promising results in realism and 3D consistency, but text-to-4D generation still faces challenges, including a lack of realism and insufficient dynamic motion. In this paper, we propose a novel method for text-to-4D generation, which ensures the dynamic amplitude and authenticity through direct supervision provided by a video prior. Specifically, we adopt a text-to-video diffusion model to generate a reference video and divide 4D generation into two stages: static generation and dynamic generation. The static 3D generation is achieved under the guidance of the input text and the first frame of the reference video, while in the dynamic generation stage, we introduce a customized SDS loss to ensure multi-view consistency, a video-based SDS loss to improve temporal consistency, and, most importantly, direct priors from the reference video to ensure the quality of geometry and texture. Moreover, we design a prior-switching training strategy to avoid conflicts between different priors and fully leverage the benefits of each prior. In addition, to enrich the generated motion, we further introduce a dynamic modeling representation composed of a deformation network and a topology network, which ensures dynamic continuity while modeling topological changes. Our method not only supports text-to-4D generation but also enables 4D generation from monocular videos. Comparison experiments demonstrate the superiority of our method over existing methods. 4Dynamic: Text-to-4D Generation with Hybrid Priors Yu-Jie Yuan, Leif Kobbelt, Jiwen Liu, Yuan Zhang, Pengfei Wan, Yu-Kun Lai, and Lin Gao1 1 Corresponding Author is Lin Gao (gaolin@ict.ac.cn). ================================================================================================================================================== § INTRODUCTION The production and generation of digital content (both 2D and 3D) traditionally rely on capturing from the real world or manual creation by professionals, which is cumbersome and expensive. With the prosperity of high-quality large-scale generative models, especially cross-modal models such as those based on diffusion models, 2D images can be generated from various text prompts. This has brought about the popularity of generative artificial intelligence, which makes professional content generation (PCG) converge towards user content generation (UCG), allowing casual users to create diverse imaginary 2D content. For 3D models, existing generative models are often custom designed and trained individually for different representations, such as voxels <cit.>, point clouds <cit.>, or meshes <cit.>, and also rely on large-scale, high-quality 3D datasets <cit.>, so they can only generate novel shapes in limited categories. Implicit representations, such as Neural Radiance Fields (NeRF) <cit.>, are gradually replacing explicit representations as the core representation in 3D generative models. 
With the help of tri-planes <cit.> or voxels <cit.>, it is convenient to introduce 2D generative networks, such as StyleGAN <cit.> or diffusion models <cit.> to generate 3D NeRF <cit.>. However, the generation quality still depends on the quantity and quality of the training data. Due to the lack of high-quality 3D data and inefficiency in manual collection and creation, exploiting existing large-scale generative models to assist in 3D generation has become a viable approach. The text-to-image diffusion model is a powerful tool, which is trained on a large-scale paired text-image dataset <cit.> and has fascinating generation performance. Therefore, DreamFusion <cit.>, as a pioneering work, combines it with NeRF representation and proposes the score distillation sampling (SDS) loss. Guided by the input text, a pre-trained diffusion model is used to supervise the rendered images from NeRF, achieving text-based 3D NeRF generation. This work has sparked a series of works around this formulation and the field is developing rapidly. Some remarkable text-to-3D or image-to-3D results have been achieved by introducing 3D priors to enhance 3D consistency <cit.>, improving SDS loss to some variants <cit.> or introducing physical-based materials based on mesh rendering <cit.> to improve texture synthesis quality. However, the generated objects are all static, even though motion is an inherent property of our world. It is a laborious task to endow generated 3D objects with motion, and users often prefer to use simple inputs, such as text prompt. Therefore, generating a dynamic scene from a text similar to existing 3D generation methods remains to be a problem that needs to be addressed. The methods for reconstructing a dynamic NeRF are emerging one after another, but generating a dynamic NeRF from a text prompt still poses great challenges. The main challenge is the augmentation in data dimensions between input and output. The differences in dimensions need to be compensated for by introducing other information, such as the priors from pre-trained generative models. However, existing data does not support training a large model for 4D generation, so current methods use SDS to distill from a 2D generative model. The pioneering work, MAV3D <cit.>, adopts HexPlane <cit.> as the dynamic representation and divides the generation into three stages: static, dynamic, and super-resolution with a combination of image SDS loss and video SDS loss. The latter one distills 4D dynamic information from the text-to-video diffusion model. However, the generation results are not satisfactory. In this paper, we propose a novel text-to-4D generation pipeline, which not only introduces a video SDS loss based on the existing text-to-video diffusion model but also obtains additional supervision from the pre-generated reference video to improve the quality of 4D generation. The input text will first generate a reference video with the help of a text-to-video generative model and the generated video will be used as a direct prior for our 4D generation. Then the remaining generation process is divided into two stages: static 3D generation and dynamic generation. The static stage adopts joint supervision from 2D SDS and 3D SDS losses to ensure diversity and 3D consistency. The first frame of the generated video will be used as the reference image to ensure the alignment between the generated result and the generated video prior. 
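Because every stage of this pipeline ultimately relies on SDS supervision, a minimal sketch of the SDS gradient is given below, following the DreamFusion-style formulation: noise is added to a rendered image (or its latent), a frozen diffusion model predicts that noise given the text condition, and the weighted residual is pushed back into the renderer parameters. The `diffusion` object here is an abstract placeholder exposing `num_timesteps`, `alphas_cumprod`, and `predict_noise`; it stands in for whichever pre-trained image or video diffusion model is being distilled, and the weighting choice is one common convention rather than the paper's exact one.

```python
import torch

def sds_loss(rendered_latents, diffusion, text_embedding, t_range=(0.02, 0.98)):
    """Score distillation sampling on differentiably rendered latents (B, C, H, W)."""
    b = rendered_latents.shape[0]
    t = torch.randint(int(t_range[0] * diffusion.num_timesteps),
                      int(t_range[1] * diffusion.num_timesteps), (b,),
                      device=rendered_latents.device)
    noise = torch.randn_like(rendered_latents)
    alpha_bar = diffusion.alphas_cumprod[t].view(b, 1, 1, 1)
    noisy = alpha_bar.sqrt() * rendered_latents + (1 - alpha_bar).sqrt() * noise

    with torch.no_grad():                      # the diffusion prior stays frozen
        eps_pred = diffusion.predict_noise(noisy, t, text_embedding)

    w = 1 - alpha_bar                          # a common choice of weighting w(t)
    grad = w * (eps_pred - noise)              # SDS gradient w.r.t. the latents
    # Re-express the gradient as a loss so autograd carries it into the renderer.
    return (grad.detach() * rendered_latents).sum() / b
```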
In the following dynamic stage, we introduce a dynamic representation composed of a deformation network and a topology network which model the continuous motion as well as topological changes, respectively, ensuring the continuity and diversity of the generated motion. In terms of supervision, we also introduce a video SDS loss distilling the generative prior from the 2D text-to-video diffusion model. More importantly, we exploit additional supervision from the reference video. We first build a customized SDS loss based on the result of the static stage. The video SDS loss mainly ensures the temporal consistency, while the customized SDS loss ensures the 3D consistency of geometry and texture. However, the supervision of SDS loss is not direct, so we further incorporate direct losses from the reference video prior. We adopt multiple losses from different priors in the dynamic generation stage, namely the direct prior from the reference video and the generative prior from the diffusion models, so we design a prior-switching training strategy. Specifically, in the early iterations of training, we rely on the direct prior to guide and stabilize motion generation, while in the later iterations, we gradually transition to the distillation of the diffusion model to enhance motion amplitude and diversity. The contributions of our method are summarized as follows: * We introduce a dynamic representation composed of a deformation field and a topology field in text-to-4D generation. The former ensures continuous and sufficient dynamics, while the latter is responsible for topologically discontinuous changes, helping achieve diverse dynamic generation. * We propose a novel text-to-4D generation method with a prior-switching training strategy that not only exploits the text-to-video diffusion model for SDS supervision but also builds additional supervision from the pre-generated reference video to ensure generation quality and dynamic effects. * Our method can achieve not only text-to-4D generation but also 4D generation from monocular videos. It achieves state-of-the-art 4D generation performance compared to existing methods. § RELATED WORK §.§ Dynamic Neural Radiance Fields Neural Radiance Field (NeRF) <cit.> has become a popular 3D implicit representation in recent times. Due to its fascinating results in novel view synthesis and 3D reconstruction <cit.>, it has been further extended for digital human modeling <cit.>, better rendering effects <cit.>, generalization on different scenes <cit.>, faster training or inference speed <cit.>, geometry or appearance editing <cit.>, etc. For more comprehensive and detailed discussions and comparisons, we refer the readers to these surveys <cit.>. Our work focuses on the text-based generation of 4D scenes represented by dynamic NeRF <cit.>, which adds additional temporal inputs to NeRF. Some works directly take the positionally encoded time <cit.> or learnable vectors <cit.> as one of the NeRF inputs and encode spatial and temporal information in a single network simultaneously. They typically introduce additional supervision, such as the predicted depth <cit.> and cycle consistency of scene flow <cit.>. This kind of method can be further improved by utilizing the discrete cosine transform representation of scene flow <cit.> or separately modeling different scene parts <cit.>. Another kind of method predicts the offset <cit.> or SE(3) transformation field <cit.> for each sampled point by an additional network. 
The elastic energy constraint is introduced to constrain the Jacobian matrix of the transformation <cit.>. Further, to handle topological changes, HyperNeRF <cit.> regards different topological states as hyperplanes of a high-dimensional space and introduces a topology network. With the emergence of NeRF acceleration methods <cit.>, dynamic NeRF is also accelerated by introducing an explicit voxel representation <cit.> or a combination of tri-planes/voxels and implicit networks <cit.>. Our method adopts a dynamic modeling approach that combines the deformation and topology networks, where the former ensures dynamic continuity and amplitude, and the latter models potential topological changes, which is flexible. §.§ Text/Image-guided 3D Generation Although one can also generate a 3D object from the given text through training a specific generative model to generate a tri-plane representation of NeRF <cit.>, this way is limited by the size and quality of the 3D dataset. With the introduction of the SDS (Score Distillation Sampling) loss <cit.>, utilizing a 2D pre-trained diffusion model <cit.> to distill the generative power to text-based 3D generation becomes a popular solution. However, realism and 3D consistency are two main facing challenges. Magic3D <cit.> extends DreamFusion by performing secondary optimization on the extracted mesh with mesh-based differentiable rendering to render high-resolution images. SJC <cit.> proposes a variant of SDS while multiple improved versions of SDS are also proposed <cit.>. Fantasia3D <cit.> divides the 3D generation into geometry and texture stages and introduces physical-based material as the texture representation, further enhancing the appearance realism of the generation. DreamTime <cit.> improves the generation quality by modifying the timestep sampling strategy. DreamBooth3D <cit.> achieves customized generation through the use of DreamBooth <cit.>. The “multi-face” or “Janus” problem is the dimension curse when using 2D diffusion models for 3D generation. So the 3D prior is introduced to solve this curse. The 3D shape can be directly given <cit.> or estimated from the image <cit.>, providing geometric initial values for optimizing NeRF. MVDream <cit.> proposes to fine-tune the diffusion model to generate multi-view images and as so explicitly embeds 3D information into a 2D diffusion model. Using the fine-tuned model to generate 3D NeRF effectively alleviates the Janus problem. Based on these text-to-3D methods, we can achieve image-to-3D with the image as an additional input <cit.>. Magic123 <cit.> and Make-it-3D <cit.> add additional supervision using the input image during the SDS optimization, while Zero123 <cit.> fine-tunes the diffusion model by changing the condition to the image and the relative view. The follow-up works not only improve the synthesis quality <cit.> but also consider how to increase efficiency <cit.>. By utilizing text-based generation capabilities, text-based editing can also be achieved <cit.>. Due to the rise of 3D Gaussian Splatting (3DGS) <cit.>, some methods <cit.> have replaced NeRF representation with 3DGS to achieve multi-view generation. For a summary of these methods, please refer to this survey <cit.>. §.§ Text/Video-guided 4D Generation Our method focuses on 4D NeRF generation from text, which is more challenging than those text/image-based 3D generation methods mentioned above. The pioneering work, MAV3D <cit.> introduces video-based SDS loss and adopts dynamic NeRF representation, HexPlane <cit.>. 
It divides the generation process into three stages: static, dynamic, and super-resolution, but the generation quality can be further improved. Recently, Dream-in-4D <cit.> adopts a dynamic NeRF representation based on the deformation field and divides the text-to-4D generation into static and dynamic stages. 4D-fy <cit.> adopts a hybrid feature representation of dynamic and static voxels and proposes a hybrid optimization strategy with the combination of SDS loss <cit.>, 3D SDS loss <cit.> and video SDS loss <cit.>. By assigning a dynamic network on 3DGS and optimizing it under the video SDS and the constraint on 3D Gaussians, dynamic generation can also be achieved <cit.>. Based on 4D-fy, TC4D <cit.> decomposes the motion into the global motion parameterized by a spline curve and the local motion of each object itself. The former is a motion trajectory specified by the user, while the latter is generated segment-by-segment with the video-based SDS loss. Comp4D <cit.> splits the input prompt into different entities using a Large Language Model (LLM), generates 4D objects separately, and then combines them using the trajectory information given by the LLM. Another solution to text-based 4D generation is 4D reconstruction from a monocular video, rather than using SDS loss. For example, Vidu4D <cit.> generates a video from the text prompt and then employs dynamic Gaussian surfels for 4D reconstruction, while Diffusion4D <cit.> fine-tunes the video diffusion model to generate the orbital video for 4D reconstruction. EG4D <cit.> first adopts attention injection to generate consistent multi-view videos, and then, after 4D Gaussian Splatting (4D-GS) <cit.> reconstruction, fine-tuning is conducted using a diffusion model prior. PLA4D <cit.> aligns 3DGS to the mesh generated by an image-to-mesh generative model and generates dynamics through the pixel loss and SDS supervision provided by Zero123 <cit.>, with static images serving as references. Some works employ the Material Point Method in physical simulation to achieve motion transfer <cit.> and interactive dynamic generation <cit.> based on a monocular video. In addition to generating 4D NeRF from text, there are also Consistent4D <cit.> and 4DGen <cit.> that generate 4D content from a monocular video and Animate124 <cit.> that generates 4D NeRF from an image with a text. AnimatableDreamer <cit.> and MagicPose4D <cit.> explore the 4D reconstruction and motion transfer of articulated objects from monocular videos. SC4D <cit.> adopts the dynamic 3DGS of SC-GS (Sparse Controlled Gaussian Splatting) <cit.>, and achieves video-to-4D generation through coarse-to-fine optimization. DreamScene4D <cit.> can reconstruct 4D scenes from in-the-wild videos containing multiple objects. It segments, tracks, and reconstructs different objects separately, and combines them with background using monocular depth prediction guidance. STAG4D <cit.> generates 6 additional reference videos from input or generated videos through a spatial and temporal attention fusion mechanism and selects adjacent-view images as references in multi-view SDS loss. 4Diffusion <cit.> fine-tunes ImageDream <cit.> to incorporate temporal consistency, and then utilizes the fine-tuned model for SDS loss while adopting image-level perceptual loss for supervision. The above methods either adopt the SDS-based optimization or utilize the video generation combined with 4D reconstruction. 
However, the former distills a generative prior and thus lacks direct supervision, while the latter suffers from inconsistencies in generating multi-view videos or conflicts when blending video and SDS supervision. So our method incorporates a reference video for supervision and introduces a prior-switching training strategy. This not only avoids the limitation of using SDS alone but also mitigates conflicts inherent in mixed supervision. Moreover, our method employs a dynamic representation with a hybrid of deformation and topology networks to ensure both dynamic continuity and diversity. § METHOD Our method generates dynamic 3D scenes from the input text. The pipeline of our method is shown in Fig. <ref>. We will first introduce our 4D NeRF representation, where the dynamic modeling part is crucial (Sec. <ref>). Then we will introduce our text-to-4D process, where we not only perform two-stage generation but also introduce additional supervision from the pre-generated reference video in each stage (Sec. <ref>). Finally, we will introduce the training strategy which ensures the final generation quality (Sec. <ref>). §.§ 4D Representation Our text-to-4D generation method is based on existing text/image-to-3D generation methods, so it is crucial to assign dynamic information based on the 3D generation result, which requires a good dynamic representation. Our 4D representation is a dynamic NeRF method that includes three parts: the static NeRF, the deformation network, and the topology network. First, for the static NeRF, following Magic3D <cit.>, we adopt the multi-resolution hash grid representation Instant-NGP <cit.>. Specifically, under the sampled camera view, a camera ray 𝐫 is cast into space from the camera center and points 𝐩_i are sampled along the ray. The coordinates of each sampled point query the features 𝐟_i on the 3D multi-resolution feature grid through hash indexing and tri-linear interpolation. The queried features are fed into a small MLP to obtain the volume density σ_i and color values 𝐜_i. Note that since we do not consider complex view-dependent effect generation, the view direction is not considered in the prediction of color, which is consistent with existing 3D generation methods <cit.>. Finally, the volume densities σ_i and colors 𝐜_i of all sampled points 𝐩_i on each ray 𝐫 are aggregated through volume rendering <cit.>: 𝐜_𝐫 = ∑_iα_i 𝐜_i ∏_j<i(1-α_j), where α_i = 1-e^(-σ_i‖𝐩_i+1-𝐩_i‖), and 𝐜_𝐫 is the resulting pixel color corresponding to the ray 𝐫. During the SDS optimization, we traverse each pixel for rendering to obtain the final image. Then we add a dynamic representation on top of the static representation to support the dynamic generation. The dynamic modeling method will affect the final dynamic result. The existing dynamic generation methods either adopt a deformation field network <cit.>, which can model motion continuity well but is limited by topological changes, or they introduce additional dynamic feature inputs through temporal feature grids <cit.>, which can model diverse motions but lack a continuity guarantee. Therefore, we propose to exploit the combination of a deformation network and a topology network to ensure motion continuity while breaking the limitation of topology and enriching motion types.
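Before turning to the dynamic components in detail, the per-ray volume rendering step above can be summarized with a minimal sketch; the hash-grid encoder and MLP are abstracted away, and repeating the last sample interval is an assumed convention rather than a detail taken from this paper:

```python
import torch

def composite_ray(sigmas, colors, points):
    """Per-ray volume rendering: c_r = sum_i alpha_i * c_i * prod_{j<i} (1 - alpha_j).

    sigmas: (N,)   volume densities of the samples along the ray
    colors: (N, 3) RGB values of the samples
    points: (N, 3) sample positions ordered from near to far
    """
    # Interval lengths ||p_{i+1} - p_i||; the last interval is simply repeated.
    deltas = (points[1:] - points[:-1]).norm(dim=-1)
    deltas = torch.cat([deltas, deltas[-1:]], dim=0)

    # Opacity alpha_i = 1 - exp(-sigma_i * ||p_{i+1} - p_i||).
    alphas = 1.0 - torch.exp(-sigmas * deltas)

    # Transmittance T_i = prod_{j<i} (1 - alpha_j), with T_0 = 1.
    ones = torch.ones(1, device=alphas.device)
    trans = torch.cumprod(torch.cat([ones, 1.0 - alphas + 1e-10]), dim=0)[:-1]

    weights = alphas * trans                       # per-sample contribution
    return (weights[:, None] * colors).sum(dim=0)  # composited pixel color c_r
```

During training, this compositing is applied to every pixel of the rendered image, which is then passed to the SDS losses described next.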
First, in order to provide sufficient encoding of temporal information without affecting the encoding of static position information, we introduce a 4D multi-resolution hash-encoded feature grid to embed the 4D input composed of coordinates and time and output the temporal feature at time t. The output temporal feature will be input into the deformation network to predict the displacement of the corresponding sampled point, and the sampled point will be transformed back from the current observation space to the canonical space where the static model is located. In addition to the deformation network, we introduce a topology network to represent the potential topological changes. It also inputs the temporal feature output by the 4D feature grid and outputs a topology vector to describe the current topology state at time t. After the sampled point is transformed by the deformation network, its coordinates will be encoded through the static 3D feature grid. The topology vector is then concatenated with this static positional feature vector and input into the subsequent small MLPs to predict volume density and color. The topology network is introduced to address the limitation that the continuous deformation network cannot represent topological changes. Therefore, compared to using only pure deformation, the extended model is more flexible and thus generates more diverse dynamics. As we will illustrate later, Fig. <ref> shows an example to illustrate the function of the topology network in motion reconstruction and generation. The name ‘topology’ is because the output vector of this network accounts for the discontinuities when the topology state changes. §.§ 4D Generation with Video Prior Based on the proposed dynamic representation, we further explore how to achieve text-to-4D generation. We divide the generation into two stages, static 3D generation and dynamic generation. Similar to 3D generation, we consider using a pre-trained text-to-video diffusion model to design the video SDS loss to supervise the dynamic generation. However, this kind of supervision is indirect and may generate implausible dynamic results. Therefore, we propose to fully utilize the text-to-video diffusion model in that we first generate a corresponding reference video from the input text, and apply it as a direct prior to dynamic generation. Then, under the guidance of video prior, the two generation stages are factored into image-guided text-to-3D generation and video-guided dynamic generation. The first stage achieves 3D generation under the text and image inputs, where the image input comes from the first frame of the reference video. We propose to use a joint SDS loss scheme including 2D and 3D SDS losses to supervise the multi-view images rendered from the static NeRF. The 2D SDS loss L_2D is consistent with the definition in DreamFusion <cit.>, but we use the open-source model Stable Diffusion <cit.>. The 3D SDS loss L_3D has a similar formulation but adopts the Stable Zero-1-to-3 model, which has the same structure as Zero-1-to-3 <cit.>. It takes the input image and relative view as conditional inputs and outputs the novel view image. Note that the diffusion model used in SDS loss can be replaced with other updated models. For example, 3D SDS loss can potentially use ImageDream <cit.>. 
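Returning briefly to the 4D representation above, a minimal sketch of its per-point forward pass (deformation plus topology branch) is given below before we continue with the remaining losses; the module widths, the 4D/3D hash-grid encoders, and the size of the topology vector are illustrative assumptions rather than the exact configuration used by the authors:

```python
import torch
import torch.nn as nn

class Dynamic4DField(nn.Module):
    """Canonical static NeRF driven by a deformation network and a topology network."""

    def __init__(self, grid4d, grid3d, feat4d_dim=32, feat3d_dim=32, topo_dim=8):
        super().__init__()
        self.grid4d = grid4d          # 4D (x, y, z, t) multi-resolution hash grid
        self.grid3d = grid3d          # 3D hash grid of the canonical static NeRF
        self.deform = nn.Sequential(  # temporal feature -> displacement
            nn.Linear(feat4d_dim, 64), nn.ReLU(), nn.Linear(64, 3))
        self.topo = nn.Sequential(    # temporal feature -> topology vector
            nn.Linear(feat4d_dim, 64), nn.ReLU(), nn.Linear(64, topo_dim))
        self.head = nn.Sequential(    # static feature + topology vector -> (sigma, rgb)
            nn.Linear(feat3d_dim + topo_dim, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, pts, t):
        """pts: (N, 3) points sampled in the observation space; t: (N, 1) time."""
        feat_t = self.grid4d(torch.cat([pts, t], dim=-1))   # temporal feature
        delta = self.deform(feat_t)                         # predicted displacement
        canonical = pts + delta                             # warp back to canonical space
        topo_vec = self.topo(feat_t)                        # current topology state
        feat_x = self.grid3d(canonical)                     # static positional feature
        out = self.head(torch.cat([feat_x, topo_vec], dim=-1))
        sigma, rgb = out.split([1, 3], dim=-1)
        return sigma.relu(), rgb.sigmoid()
```

The deformation branch keeps motion temporally continuous, while the topology vector allows density and color to change discontinuously when the topology state changes.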
In addition to the text-based SDS supervision described above, we also add an RGB loss L_RGB and a mask loss L_mask at the initial view, based on the image input, to ensure that the identity of the generated 3D model is consistent with the image, which is beneficial for applying the video prior in the subsequent dynamic generation. Ultimately, the optimization loss L_static for this stage is: L_static = λ_2D L_2D + λ_3D L_3D + λ_RGB L_RGB + λ_mask L_mask, where λ_2D, λ_3D, λ_RGB, and λ_mask are the weights of the corresponding losses, set to 0.025, 1, 1000, and 100, respectively. During the optimization, the 2D SDS loss ensures generation capability while the 3D SDS loss ensures multi-view consistency and minimizes the impact of the Janus problem as much as possible. The dynamic stage incorporates the dynamic representation, including the 4D feature grid, the deformation network, and the topology network, into the optimization. The static part is assigned a smaller learning rate for fine-tuning. The dynamic generation is mainly supervised by an SDS loss based on the text-to-video diffusion model. Specifically, we sample a continuous camera trajectory and render a 24-frame video. Similar to the image SDS loss, the process of adding and removing noise occurs in the latent space of the video, so the video SDS loss can be defined similarly: L_T(θ, X=f(θ, c)) = 𝔼_t,c,ϵ[ ‖X - X̂_0‖_2^2 ], where X is the latent representation of the video rendered from the dynamic NeRF f(θ, c) with parameters θ and camera c, and X̂_0 is the estimate of X_0 based on the diffusion output ϵ_ϕ(X_t; y,c,t), where y is the text condition and X_t is the result of the diffusion forward process at timestep t with Gaussian noise ϵ. We use the Modelscope <cit.> video diffusion model as the 2D motion prior; this model can likewise be replaced with other diffusion models, such as Zeroscope <cit.>. In addition to the SDS loss, since a reference video that matches the text prompt is pre-generated, the dynamic stage can, similar to the static stage, formulate direct supervision from the reference video. We assume that the reference video has a fixed camera view (which can be ensured by adding corresponding text prompts during generation) so that the corresponding video can be rendered from our 4D representation at the initial view for supervision. We introduce an RGB loss L_RGB, a mask loss L_mask, and an optical flow loss L_flow. Note that, since the camera is assumed to be fixed, the estimated optical flow only includes scene motion, playing a direct supervisory role. However, these losses only add supervision from a single view, making it difficult to ensure the plausibility of other views. We observe that every frame of the 4D NeRF can be considered as a 3D NeRF, so to ensure spatial consistency, we utilize the rendered images from the static stage, following DreamBooth <cit.>, to fine-tune the diffusion model and introduce the bootstrapped score distillation (BSD) loss <cit.>: ∇_θL_BSD(f(θ)) = 𝔼_t,c,ϵ[ω(t)(ϵ_DreamBooth(X_t;y,c,t,r_t^'(X)) - ϵ_lora(X_t;y,c,t,X))∂ X/∂θ], where ω(t) is a weighting function that depends on the timestep t, r_t^'(X) denotes the augmented renderings used to train the DreamBooth model, and ϵ_lora estimates the score of the rendered images using a LoRA (Low-rank adaptation) <cit.> model. We find that Stable Zero-1-to-3 struggles to maintain multi-view consistency and texture quality in dynamic generation. Therefore, we do not adopt it here.
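As an illustration of how such an SDS-style objective is typically implemented, the sketch below computes a video SDS loss in the ‖X - X̂_0‖² form given above; the latent encoder, the noise scheduler, and the diffusion-model interface are placeholders whose exact signatures are assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def video_sds_loss(video_diffusion, scheduler, latents, text_emb, w=1.0):
    """One step of L_T = E[ || X - stopgrad(X_hat_0) ||_2^2 ].

    latents:  (B, C, T, H, W) latent of the video rendered from the 4D NeRF (requires grad)
    text_emb: text-condition embedding y
    """
    B = latents.shape[0]
    t = torch.randint(20, 980, (B,), device=latents.device)   # random diffusion timestep
    noise = torch.randn_like(latents)                          # epsilon
    noisy = scheduler.add_noise(latents, noise, t)             # forward process X_t

    with torch.no_grad():                                      # frozen diffusion prior
        eps_pred = video_diffusion(noisy, t, text_emb)         # epsilon_phi(X_t; y, t)
        # Estimate X_0 from the predicted noise:
        # X_hat_0 = (X_t - sqrt(1 - a_bar) * eps_pred) / sqrt(a_bar)
        alpha_bar = scheduler.alphas_cumprod[t].view(B, 1, 1, 1, 1)
        x0_hat = (noisy - (1 - alpha_bar).sqrt() * eps_pred) / alpha_bar.sqrt()

    # Gradient flows only through the rendered latents; this is equivalent, up to a
    # timestep-dependent weighting, to the classic SDS gradient w(t)(eps_pred - eps) dX/dtheta.
    return w * F.mse_loss(latents, x0_hat.detach(), reduction="mean")
```

The 2D, 3D, and BSD variants follow a similar pattern, differing mainly in the diffusion model used and its conditioning.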
Finally, in order to eliminate the possible jitters of motion in space and time, we introduce a total variation loss L_TV in the image domain, which constrains the rendered 2D displacement map to be similar between adjacent pixels and adjacent times. The overall optimization loss L_dynamic in the dynamic stage is: L_dynamic = λ_T L_T + λ_BSD L_BSD + λ_TV L_TV + λ_RGB L_RGB + λ_mask L_mask + λ_flow L_flow , where λ_* are the weights for corresponding losses. For the specific definition of all loss terms and the default values of the weights, please refer to the supplementary document. §.§ Prior Switching Training Strategy In the static stage, we adopt an optimization strategy similar to Magic123 <cit.>. For the dynamic generation, we have introduced several supervisory losses which can be roughly divided into 3 categories: direct supervision from the reference video, i.e. L_RGB, L_mask and L_flow at a fixed initial view, distillation supervision from diffusion models, i.e. L_T and L_BSD, and the regularizer L_TV. In our experiments, we have found that although direct supervision from the generated video can help stabilize motion generation, the limited dynamic information contained in the generated video gradually limits motion generation, while priors from diffusion models can help enrich motion generation. Therefore, we propose a prior switching training strategy that transitions between the reference video prior and the diffusion model prior. Specifically, at the beginning of the optimization, those losses from the reference video will be given greater loss weights, allowing them to play a main guiding role. Then, as the training step increases, we will gradually reduce the weights of these losses, so that the SDS losses from the diffusion model will play a major supervisory role, promoting the amplitude of the generated motion. It should be noted that since we aim for text-to-4D generation, we do not require the final result to be completely consistent with the reference video, and there may be deviations between the final result and the reference video. In addition, we have empirically found that there are certain difficulties in co-optimizing both video SDS L_T and customized SDS loss L_BSD. So we choose to set an SDS probability and choose which SDS loss to use in each iteration based on the SDS probability. This is also a prior switching strategy between two diffusion priors. Through this training strategy, the final dynamic generation results will not fall into a completely disordered state, resulting in reasonable and sufficient motion. § EXPERIMENTS AND EVALUATIONS §.§ Evaluation Metrics It is not easy to quantitatively evaluate the text-based generated results due to the lack of ground truth. Following <cit.>, we use two metrics, CLIP score <cit.> and user survey, for quantitative comparison. CLIP score measures the alignment between the generated results and the input text. Specifically, it extracts the embedding vectors of the text and the image, respectively, and then calculates the cosine similarity between the two vectors. We sample multiple camera trajectories for each method to generate results over time and under changing viewpoints. Then we take the average CLIP score of all frames as the final score. Another metric is user study. We have invited 25 users to evaluate our method and compare to other methods, and the evaluation procedure is the same as <cit.>. 
The evaluation criteria are appearance quality (AQ), 3D structure quality (SQ), motion quality (MQ), text alignment (TA), and overall preference (Overall). The reported numbers are the percentages of users who voted for the corresponding method over ours in head-to-head comparisons. Please refer to the supplementary document for more details. §.§ Results and Comparisons Our method is proposed for the text-to-4D generation, but it can also be applied to the monocular-video-to-4D generation task because we introduce the pre-generated reference video to provide direct priors in the generation. The required text prompt can be provided by the user or obtained through some text generation methods <cit.>. Please refer to the supplementary document for more implementation details. Therefore, we compare our method not only with the text-to-4D generation methods, MAV3D <cit.> and recent 4d-fy <cit.> but also with the video-to-4D generation method, Consistent4D <cit.> and 4DGen <cit.>. Text-to-4D. We compare our method with MAV3D <cit.> and 4d-fy <cit.> in text-to-4D generation and the qualitative comparisons are shown in Fig. <ref>. It can be seen from the visual comparison that our method has significant advantages over MAV3D, and the generation results are more realistic in both geometry and texture. For example, the panda face and the rocket have more details, and the emitted smoke looks more realistic. Compared to the cartoon style of 4d-fy, our results are more realistic, thanks to the introduction of the direct prior from the pre-generated reference video. Meanwhile, our method generates clearer dynamic effects without blurring. Furthermore, in the case of the panda (the first two rows), 4d-fy produces an unreasonable result, with a third leg growing behind the panda's back. This might be caused by the dynamic representation used by 4d-fy, which is only an additional spatiotemporal encoding and is prone to producing substantial and potentially unreasonable geometric changes to the static model. Our representation includes a deformation network, which can ensure that the generated results are meaningful continuous dynamic effects. We present more comparison results with 4d-fy in Fig. <ref>. These results illustrate that the dynamic amplitude of our method is higher than 4d-fy. The supplementary video provides a better viewing experience. We also present quantitative results in Table <ref>. The user study proves that users tend to prefer our method. More text-to-4D generation results are shown in Fig. <ref>. Video-to-4D. We visualize the 4D generation results from monocular videos in Fig. <ref> and also compare with the results of Consistent4D <cit.> and 4DGen <cit.>. The first group of comparisons shows that our method maintains the texture well under the novel view where the white diaper of Patrick Star disappears from the results of Consistent4D. And the astronaut example shows that our method produces more vivid results and has richer details, while 4DGen fails to accurately reconstruct the appearance and motions. We show more video-to-4D generation results in Fig. <ref>. §.§ Ablation Study To verify the effectiveness of the dynamic representation, 4D generation pipeline, and training strategy used in our method, we perform multiple ablation experiments. The qualitative and quantitative results are shown in Fig. <ref> and Table <ref>, respectively. w/o topology network. 
Our dynamic representation not only uses the deformation network but also introduces the topology network to ensure the continuity and diversity of motion. The introduction of the deformation network is a natural idea, so we conduct an ablation experiment on the topology network to verify whether it will affect the dynamic generation effect. We show two video-to-4D examples in Fig. <ref> to illustrate the function of the topology network in some specific dynamics. In the examples, an egg is falling into a cup filled with liquid and a frog is opening its eyes and mouth. As can be seen, without the topology network, these dynamics cannot be achieved. Fig. <ref> further shows text-to-4D examples where the generation effect with only the deformation network is unsatisfactory. w/o direct prior. The core idea of our method is to use the direct prior of the pre-generated reference video to guide tex-to-4D generation. Because we focus on the dynamic generation, we remove the direct losses from the reference video during the dynamic generation stage. Although the reference video is not a hard constraint, the lack of direct prior guidance in the early stages of optimization will lead dynamic generation towards an unreasonable outcome, resulting in worse results. Note the area marked by the orange box in the first case of Fig. <ref>. w/o prior-switching. The reference video may not contain sufficient dynamic information. So to exploit the generation ability of SDS distillation, we design a prior-switching training strategy to gradually reduce the weights of the direct losses. If this training strategy is not adopted, there will be conflicts between the direct prior and the generative prior from SDS in the later stage of training. The results shown in Fig. <ref> illustrate this point. w/o BSD. The customized loss function, i.e. BSD loss, helps maintain 3D consistency and texture quality. Removing BSD loss will result in a decrease in generation quality and inconsistent multi-views. For example, in the `w/o BSD' column of Fig. <ref>, the panda has three legs in some views and the texture of the generated ice cream is very poor. w/o video SDS. The video SDS loss function extracts dynamic information from the 2D video diffusion model to ensure dynamic continuity. Without using this loss, reasonable dynamic generation results cannot be obtained. § DISCUSSIONS AND CONCLUSIONS Limitations. Text-to-4D generation is a challenging task, so our method, as an initial attempt, still has some limitations. First, we exploit the image-to-3D generation model to generate the static shape as the basis for dynamic generation. However, texture inaccuracy and Janus problems exist in some of the reconstructed 3D models. Although our 4D generation results ultimately are not bound to the geometry and texture of the static model, its use as an initial value for optimization will still affect the final generation quality. Likewise, our method utilizes the direct prior from the generated reference video to ensure generation quality and motion amplitude and the current text-to-video generative model still faces issues such as geometric mutations over time, which may influence the final effects of our method. An example failure case is shown in Fig. <ref>. Given the rapid development in image-to-3D and text-to-video fields, our method will benefit from the development of these methods to achieve higher-quality results. 
Moreover, some difficult-to-reconstruct objects, such as transparent objects, cannot yet be handled in dynamic generation. Second, our method adopts SDS optimization for generation, which is slower than feed-forward generation, and the NeRF representation makes the entire optimization process last several hours. Since it is difficult to collect a corresponding paired dataset to train a feed-forward generation model, we will explore replacing the NeRF with the efficient 3D Gaussian Splatting representation <cit.> to improve the generation speed. Ethical issues. As a generation method, our method may be abused to generate misleading or false information. We will carefully control open access rights and put warning labels on the generated results. In this paper, we propose a novel text-to-4D generation method that fully utilizes the priors from the text-to-video diffusion model and the pre-generated reference video. First, we introduce a hybrid dynamic representation of a deformation network and a topology network in 4D generation, modeling dynamic continuity and topological changes. Then, we divide the generation process into two stages (static and dynamic) and propose to adopt the pre-generated reference video to provide direct priors, combined with customized and video-based SDS losses, to achieve high-quality 4D generation. For this purpose, we also design a prior-switching training strategy to balance the direct prior of the reference video and the generative prior of the diffusion model. In a user study and quality comparisons, our method has proven to outperform existing methods.
http://arxiv.org/abs/2407.13522v1
20240718135716
INDIC QA BENCHMARK: A Multilingual Benchmark to Evaluate Question Answering capability of LLMs for Indic Languages
[ "Abhishek Kumar Singh", "Rudra Murthy", "Vishwajeet kumar", "Jaydeep Sen", "Ganesh Ramakrishnan" ]
cs.LG
[ "cs.LG" ]
INDIC QA BENCHMARK: A Multilingual Benchmark to Evaluate Question Answering capability of LLMs for Indic Languages Abhishek Kumar Singh, Rudra Murthy, Vishwajeet Kumar, Jaydeep Sen, Ganesh Ramakrishnan ====================================================================================================================================================================================================================================================== § ABSTRACT Large Language Models (LLMs) have demonstrated remarkable zero-shot and few-shot capabilities in unseen tasks, including context-grounded question answering (QA) in English. However, the evaluation of LLMs' capabilities for context-based QA in non-English languages is limited by the scarcity of benchmarks in these languages. To address this gap, we introduce Indic-QA, the largest publicly available context-grounded question-answering dataset for 11 major Indian languages from two language families. The dataset comprises both extractive and abstractive question-answering tasks and includes existing datasets as well as English QA datasets translated into Indian languages. Additionally, we generate a synthetic dataset using the Gemini model to create question-answer pairs given a passage, which is then manually verified for quality assurance. We evaluate various multilingual Large Language Models and their instruction-fine-tuned variants on the benchmark and observe that their performance is subpar, particularly for low-resource languages. We hope that the release of this dataset will stimulate further research on the question-answering abilities of LLMs for low-resource languages. "In a world deluged by irrelevant information, clarity is power." – Yuval Noah Harari § INTRODUCTION India has the largest global population, with almost 1.43 billion people. However, many of the major languages in India are considered low-resource by the natural language processing (NLP) community. These languages are not as well represented in NLP as English because of the lack of high-quality datasets for pre-training, fine-tuning, and task-specific evaluations. In the field of NLP, Large Language Models (LLMs) have been pre-trained on vast amounts of textual data. Despite their extensive training, these models frequently yield inaccurate results in tasks such as question answering, largely due to their limited contextual comprehension and uncertainties surrounding their parametric knowledge. Researchers have addressed this issue by utilizing systems like Retrieval-Augmented Generation (RAG). RAG retrieves relevant text from a large corpus, whether from pre-training data or real-world data, and uses it as the context for the query. While retrieval is important, the generator component, which must find the exact and correct answer without producing inaccurate outputs, is equally crucial, especially when the answer is not present in the retrieved text <cit.>. In a Retrieval-Augmented Generation (RAG) system, there are two distinct components: the retriever and the generator. Each component is evaluated separately. The retriever's performance is assessed through tasks such as passage reranking, utilizing datasets like <cit.>, where the retrieval model is trained to rank paragraphs based on their relevance to a given query. On the other hand, the generator's evaluation focuses on context-grounded question answering.
In RAG setups, context is obtained through retrieval, necessitating that the dataset structure includes triples comprising context, question, and answer. The model's objective is to generate an answer to the query based on the provided context. What distinguishes our benchmark from other existing Multilingual Indic context-grounded question-answering Benchmarks? There are numerous context-grounded question-answering benchmarks available for high-resource languages like English. However, there are very few benchmarks available for Indic languages, and those that do exist often lack domain diversity and are limited in size. To address these gaps, we developed the INDIC QA BENCHMARK, which not only includes a large number of data instances but also spans a wide range of domains, including geography, Indian culture, news, and more. Given the existing domain diversity in English datasets <cit.>, we decided to translate these datasets into Indic languages. While there are many extractive question-answering datasets, there is a scarcity of abstractive question-answering datasets where the answer may not be explicitly present in the text. To address this, we sampled numerous Wikipedia and Common Crawl pages, focusing on paragraphs rich in cultural nuances and domain diversity. Using these paragraphs, we employed large language models (LLMs) to generate QA pairs, thus creating a comprehensive and culturally diverse benchmark for Indic languages. In our observations, we have noticed that directly using a base pre-trained model does not initially predict or output answers effectively. Even when it does find an answer, the output is often incorrect and illogical, alongside the ground truth. However, after few-shot prompting, the model produces better-formatted answers and causes the model to start searching for exact answers in single words or phrases from the paragraph, for generation. We summarize the key contributions of this paper as follows: * INDIC QA BENCHMARK: a multilingual evaluation benchmark for evaluating Indic Question-Answering/Generative capability of large language models, all these languages are low-resource language and lack a proper benchmark for multi-domain generative tasks. * We critically evaluate some of the most esteemed Language Model Models (LLMs) for Indic languages of equivalent size, meticulously comparing their performance on our proposed benchmark to determine their Question-Answering (QA) skills. § RELATED WORK In the realm of context-grounded question answering (QA), significant research has been conducted in both English and Indian languages. This task involves presenting a question along with a contextual paragraph to the model, which then extracts the phrase from the paragraph. Various benchmarks <cit.> have been established for this task, with encoder-only transformer models proving effective in extracting the span containing the answer from the paragraph. The Indic QA community has demonstrated remarkable performance using models like XLM-RoBERTa<cit.> and others, particularly for multilingual Indian languages. They have a rich dataset to showcase their benchmarks, including SQuAD <cit.> for English, along with its translated version in Hindi. Additionally, instead of translation, there are datasets specifically designed for evaluating benchmarks in Hindi, such as the Chaii [<https://www.kaggle.com/competitions/chaii-hindi-and-tamil-question-answering>]. dataset and IndicQA, which are also discussed in this survey paper <cit.>. 
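To make the extractive setting discussed above concrete, a minimal span-extraction sketch with a multilingual encoder might look as follows; the checkpoint name and the Hindi example are illustrative assumptions and are not drawn from this benchmark:

```python
from transformers import pipeline

# Span-extraction QA with a multilingual encoder, in the spirit of the
# XLM-RoBERTa-style systems mentioned above. The checkpoint name is illustrative;
# any multilingual extractive-QA model fine-tuned on SQuAD-style data can be used.
qa = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")

context = (
    "भारत की जनसंख्या लगभग 1.43 अरब है और यहाँ दो प्रमुख भाषा परिवारों की "
    "कई भाषाएँ बोली जाती हैं।"
)  # "India's population is about 1.43 billion, and languages from two major families are spoken here."
question = "भारत की जनसंख्या लगभग कितनी है?"  # "Approximately how large is India's population?"

pred = qa(question=question, context=context)
print(pred["answer"], pred["score"])  # extracted span and its confidence
```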
Although there are a few benchmarks for Indic Question Answering, they lack extensive domain coverage, which is crucial for evaluating the robustness of models. In contrast, English benchmarks encompass a wide range of domain-specific datasets such as Resources like the llama Index <cit.> highlight that selecting the appropriate evaluation dataset is challenging and highly dependent on the specific use case. Academic benchmarks such as BEIR and HotpotQA often fail to generalize across different use cases. For example, parameters that work well on certain data domains (e.g., SEC filings) may not perform as effectively on others (e.g., research papers). This challenge led them to the create a dataset hub specifically designed for evaluating RAG systems, encompassing a wide range of domains including research papers, blockchain articles, and code articles. Additionally, NQ Open <cit.> contains a wealth of Wikipedia content across various domains, and MS MARCO <cit.> features questions sampled from real-world user searches with contexts derived from web documents. The diversity of user queries leads to a broad range of content, making MS MARCO highly versatile. Although initially intended for a different task, we adopted this dataset for our purposes. Hence to address the lack of existing Indic QA benchmark datasets, we translated and adapted several commonly used English QA datasets into 11 Indic languages. This approach provides a more comprehensive and robust evaluation framework for Indic Question Answering models. By leveraging these datasets, we aim to offer a diverse and extensive evaluation resource, enhancing the development and assessment of QA models in Indic languages. § BENCHMARKS The primary focus of this work is on context-based QA, where the answer is entirely or partially found within the given context. The datasets utilized in this study were tailored to facilitate this task, with each instance composed of triples consisting of a context, a question, and an answer. This section provides a detailed description of the methodology used to create or modify the existing dataset for our task. §.§ Datasets In this section, we provide a catalog of the datasets constituting this benchmark, complete with a thorough exposition of their original accessibility and the modifications we have implemented. These datasets are either pre-existing or have been released as part of this work. Following is a detailed description of each dataset. * Hindi : This dataset is a translated version of the original <cit.> into Hindi. It consists of nearly 5,000 instances, translated using the Google Translate API. We translated that from Hindi to other Indic languages. * X: (Cross-lingual Question Answering Dataset) <cit.> serves as a benchmark for assessing the performance of cross-lingual question answering. It comprises 240 paragraphs and 1190 question-answer pairs, extracted from the development set of v1.1 <cit.>. The dataset includes professional translations of these pairs into ten languages, But we have Used <cit.> version of XQuAD because they manually translated to all Indic languages. * ChaII Dataset <cit.>: This question-answering dataset features context-question-answer triples in Hindi and Tamil, gathered directly without translation. Created by expert data annotators who are native speakers, the dataset presents a realistic information-seeking task focused on predicting answers to genuine questions about Wikipedia articles. 
It was used in a Kaggle challenge and includes 1104 questions in Hindi and Tamil, we used the Hindi part of the data and translated it to 10 other Indian languages. * Indic QA <cit.>: This dataset is a manually curated cloze-style reading comprehension dataset designed for evaluating question-answering models in 10 Indic languages Since this dataset doesn't have Gujarati translation we translated it from Hindi to Gujarati and validated the translation as described in the [<ref>] section. * MLQA <cit.>: (MultiLingual Question Answering) is a benchmark dataset for evaluating cross-lingual question answering performance. We have used the test set for benchmarking purposes, the test set contains 4918 triples of the form (context, question, answer) all available in Hindi, hence we translated this triplet from Hindi to 10 other Indian languages. * MS Marco <cit.>: Microsoft Machine Reading Comprehension (MS MARCO) is a collection of large-scale datasets designed for deep learning applications related to search. The questions in MS Marco are sampled from real, anonymized user queries. The context passages, from which the answers are derived, are extracted from real web documents using the latest version of the Bing search engine. We initially considered adapting the multilingual version of the MS MARCO passage ranking dataset (mMarco) for our setting. However, since mMarco lacks a test set, we opted to use the MS MARCO test set, which contains 100k instances, each including a query and a set of passages, among which only one is relevant to the query. We filtered out instances without any relevant passages, resulting in a dataset of 55k instances. We then translated this dataset from English to Hindi. After applying certain filtering conditions, we translated the Hindi dataset into 10 other Indian languages. The exact steps are detailed in [<ref>]. The final dataset now includes the question, the source document, and the corresponding answer, and is available in 11 Indian languages. * NQ-Open trans <cit.>: The task is an open-domain question-answering benchmark derived from Natural Questions. The objective is to predict an English answer string for a given English question, with all questions answerable using the contents of English Wikipedia. Initially, the dataset was entirely in English, with context, question, and answer all in English. The context often included tables scraped from HTML pages of Wikipedia, resulting in numerous HTML tags. To clean the dataset, we removed all triples where the context contained a table and eliminated all other HTML tags from the remaining examples. In this modified dataset, the fields include the source document (the entire Wikipedia page), the long answer (a paragraph from the page containing the answer), and the exact phrase or word from that paragraph as the short answer. We modified the long answer to serve as the context and the short answer as the answer for the corresponding question. and Since after all this modification dataset was in English we translated that to other Indian languages. * XORQA <cit.>: Cross-lingual Open Retrieval Question Answering (XOR QA) consists of three tasks involving cross-lingual document retrieval from both multilingual and English resources. This dataset was subsequently translated into other Indian languages by <cit.>. We utilized the same since it was cross-lingual data, the context was in English while the questions and answers were in other languages. 
To adapt it to our setting, we translated the context into various Indian languages. * LLama Index <cit.>: The dataset includes question-answer pairs along with source context, serving as an evaluation tool for the RAG pipeline. We observed that some contexts were insufficient to answer the questions effectively. To address this, we applied the BGE-M3 <cit.> algorithm to measure the similarity between the context and the query, using a threshold of 0.43 to determine if a question could be answered adequately based on the context. Post filtering we translated the resulting context, question, and answer triplets into Hindi and Hindi other Indian languages. * Synthetic Data: This dataset is introduced as part of this study. We employed the Gemini model <cit.> to generate question-answer pairs based on provided contexts. To achieve this, we sampled a diverse set of Hindi contexts from sources such as Wikipedia, storybooks, Indian news articles, and paragraphs from competitive exams. We then prompted the model with these context paragraphs to generate abstractive question-answer pairs, framing the task as a generative QA task. Subsequently, this dataset was translated into other languages and verified by language experts, the whole workflow process can be found <ref>. §.§ Data Curation Methodology In light of the approaches discussed previously in Section <ref>, context-grounded question-answering datasets can generally be categorized into two types: abstractive and extractive. While there are many extractive datasets available in high-resource languages, a few extractive datasets available for Indian languages lack diversity in domains and question types, limiting their utility for benchmarking. Hence, we extended the benchmark suite available in English to these Indian languages by translating. We utilized Indic Transv2 [<https://github.com/AI4Bharat/IndicTrans2>] <cit.> for translation, an open-source transformer-based multilingual NMT model that supports high-quality translations across all the 22 scheduled Indian languages. We segmented the context paragraph into sentences using the Spacy library, translated each sentence, and then recombined them. This approach yielded better translation results, and importantly, the model did not lose context when translating, thus preserving the coherence of the text. In the list of datasets for benchmarking, some are available only in English (e.g., NQ-open, ORQA, llama index,MS-Marco), while others are available in both English and Hindi (e.g., Hindi SQuAD, CHAII, MLQA, Synthetic data). Additionally, a few datasets (e.g., IndicQA, XSQuAD) are also available in all 10 or 11 languages with verified translations. For all the datasets not found in the respective language, we translated them and applied the filtering methods discussed below. To assess the quality of our translations, we first translated each dataset from the source language to the target language, then back-translated it from the target language to the source language. We calculated the CHRF and CHRF++ scores between the original and back-translated sentences, applying a threshold on these metrics to filter the instances. Additionally, we manually verified a subset of the filtered data to ensure accuracy. For the translation process, we initially translated the English data directly into Hindi. After filtering the data, we then translated it from Hindi to other Indian languages, rather than directly from English. 
This approach was based on our observation that the translation quality from Hindi to other Indian languages was superior. The improved quality can be attributed to the linguistic similarities within the same language family, including morphology, syntax, and grammar. § EXPERIMENT SETUP We conducted a series of experiments to evaluate existing LLM's performance, utilizing the NVIDIA RTX A100 both 40Gb and 80Gb variants for our computational needs. Our computational needs signify GPUs both for Translation and evaluation over the models. For inference or evaluation we utilized VLLM <cit.> which is an open-source library that supports LLM inference efficiently. We evaluate the following LLMs on our benchmark: Open Hathi and its Instruction Finetuned variant (IFV) known as Airavata <cit.>, Bloom <cit.> and its IFV named Bloomz, Gemma<cit.>, and its instruction fine-tuned variant Gemma-IT. Open Hathi[<https://www.sarvam.ai/blog/announcing-openhathi-series>] (7B parameter model), which was created through continual pre-training on the LLaMA-2 model <cit.>. Airavata <cit.> (7B parameter model) is an instruction fine-tuned version of OpenHathi. Both OpenHathi and Airavata are specifically trained for Hindi. Gemma and Gemma-IT (7B parameter models)[<https://ai.google.dev/gemma/docs>], which were released by Google. Though these models are not specifically trained for Indian languages, they exhibit multilingual capability. Aya-8B <cit.> is another instruction-tuned model designed explicitly for multilingual languages. LLaMA 3 and LLaMA 3 Instruct models [<https://ai.meta.com/blog/meta-llama-3/>](8B parameter model), part of the LLaMA family. Llama-3 has seen data from around 30 languages excluding English. This diverse set of models allows for a comprehensive evaluation of the strength and performance of our benchmarks across different architectures and training methods. §.§ Evaluation Metrics We borrow the key metrics of both Extractive and generative Question Answering as mentioned below: 1. F1 (macro-averaged) score The F1 score is a metric that represents the harmonic mean of precision and recall. The F1 score calculates the average similarity between predicted and actual answers by comparing the sets of words or tokens in the predicted and ground truth sentences. 2. Exact Match The Exact Match metric computes the percentage of instances that exactly match the ground truth. This metric is more stringent but evaluates the thoroughness of the model, as it results in either a true or false outcome for individual instances. 3. ROUGE(L) A Recall-Oriented Understudy for Gisting Evaluation metric is mostly used for evaluating summarization, we have used it to evaluate generative QA tasks. § RESULTS AND ANALYSIS Comparing base model performance and effect of Few shot: Table <ref> shows the base LLMs' performance in the zero-shot setting. The Gemma model excels in extractive question-answering tasks, surpassing the Bloom and Llama-3 base models. However, Llama-3 outperforms Gemma in Hindi, Marathi, and Odia languages. Bloom surpasses all models in abstractive question-answering tasks and for all languages. Notably, base models generally perform poorly on abstractive question-answering datasets compared to extractive ones. We also evaluate the effect of in-context examples as reported in Table <ref>. As expected, using few-shot (1-shot and 3-shot) almost always improves over the zero-shot base model. 
However, we can spot some language-specific patterns where Bloom and Openhathi behave differently than Gemma and Llama-3. For example, for some languages such as Bn, Ml, Mr Gemma and Llama-3 show a significant drop with the increase in few shot examples, however, Bloom and Openhathi retain or even improve. We believe this is correlated with the availability of language-specific corpus and their utilization in training these models. Effect of Instruction Finetuning: Table <ref> shows the performance of instruction-finetuned models in our study. Instruction finetuning generally improves abstractive QA tasks across all models, but its impact on extractive QA varies. For instance, Gemma and Llama-3 perform better than Bloom and OpenHathi in their base models, but their instruction-finetuned variants do not show significant improvement. This is because these models were primarily instruction-finetuned on non-Indic languages, which compromises their generic multi-lingual ability during task-specific finetuning, leading to lower results. On the other hand, OpenHathi was specifically trained on the Hindi language and so is its instruction finetuning variant Airavata. As a result, the performance of OpenHathi is significantly poor in all languages. Airavata benefits from further instruction fine-tuning on Hindi data and improves over OpenHathi for Hindi language but suffers poorly for other Indian languages. Bloomz produces the highest jump compared to Bloom and we hypothesize this is because a good portion of evaluation benchmark coming from generic-domain such wikipedia data has been seen by Bloomz during its training and instruction finetuning, making it a good choice for applications which aims to use common world knowledge. Extractive vs Abstractive tasks: While it is clear that instruction finetuning helps more in abstractive QA tasks, both Table <ref> and Table <ref> shows positive correlation between the scores for extractive task and abstractive task across languages i.e. whenever numbers have improved for extractive QA tasks, it also improved for abstractive QA too. This is almost true for all the base and instruct variants of the models except Gemma, where Gemma instruct improves abstractive QA score but detoriates in extractive QA task. Careful analysis shows, generative task metrics change moderately between base models and their instruction finetuned variants. This is expected because generative metrics such as rogue(l) are more heuristic driven in nature and designed to ignore small variations, natural in non-deterministic text generation, unlike extractive metrics such as EM, F1. Thus generative metrics deviate in smaller scale than extractive metrics. However, the positive correlation between both the task metrics across models clearly establishes that the factors affecting the overall performance of the models shows similar signs for both extractive and generative tasks and hence improving one will likely improve the other as well. On how to choose a model: Going by the results so far, one would pick BloomZ if the application needs only common world knowledge and needs a model which does well OOB. If there is use-case for which we have adequate Indic language finetuning data , it might be good to build over the world knowledge acquired by Gemma and Llama-3 and do instruction finetuning on Indic languages to make it better suitable for abstractive QA task. 
If we target a certain niche domain in Hindi only, where common world knowledge is not a prerequisite, Airavata can be a good candidate given its focus on Hindi-based training and its improvements on both extractive and abstractive tasks with instruction finetuning. In summary: (1) Instruction-finetuned models are better choices than base models for datasets derived from Wikipedia, where the instruction tuning has likely already acquired the relevant common world knowledge. But for other datasets, such as Hindi RC and the synthetic set, base models can perform better than their instruction-finetuned variants. (2) Gemma-IT proves to be a stable instruction-finetuned model, always improving over its base model on Wikipedia-domain datasets and also working better than Airavata and Bloomz on non-Wikipedia domains. (3) Bloomz shows the greatest ability to reuse parametric knowledge and obtains the best numbers on most datasets derived from Wikipedia, including very high numbers on XQuAD and MLQA. (4) Airavata is mostly better than Open Hathi, but because it is trained exclusively on a Hindi corpus, it does not have as much world knowledge as Bloomz or Gemma-IT. Thus Airavata, which is best suited for Hindi applications, likely needs domain-specific fine-tuning. § CONCLUSION In this paper, we release a benchmark for evaluating the grounded question-answering capabilities of existing Large Language Models (LLMs). The benchmark comprises datasets testing both extractive and abstractive answering abilities. Our observations reveal that the question-answering capability of LLMs can be enhanced through instruction-tuning with target-language data. We hope that this benchmark will aid researchers in improving LLMs' grounded question-answering abilities. § LIMITATIONS While our research aims to provide a challenging and comprehensive benchmark for evaluating LLMs on Hindi QA tasks, it has some limitations. 1. The availability of high-quality datasets for Hindi is limited. Despite our best efforts to curate the benchmark from various sources, it might still carry an inherent bias introduced during the data collection/translation process. 2. Although we conducted quality checks, there might be subjective interpretability issues with the translated datasets. 3. While we attempted to diversify across various domains, the benchmark may not reflect true performance in a completely unseen domain. § APPENDIX §.§ Miscellaneous
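For reference, the following is a minimal sketch of the extractive QA metrics (Exact Match and token-level F1) described in the Evaluation Metrics section. It is an illustration rather than our exact evaluation script: it assumes simple whitespace tokenization and only lowercasing/stripping as answer normalization.

from collections import Counter

def exact_match(prediction: str, ground_truth: str) -> float:
    # Binary outcome per instance: 1.0 if the normalized strings match exactly.
    return float(prediction.strip().lower() == ground_truth.strip().lower())

def token_f1(prediction: str, ground_truth: str) -> float:
    # Token-level F1: harmonic mean of precision and recall over shared tokens.
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def evaluate(predictions, references):
    # Dataset-level scores are averages over all (prediction, reference) pairs.
    em = sum(exact_match(p, r) for p, r in zip(predictions, references)) / len(references)
    f1 = sum(token_f1(p, r) for p, r in zip(predictions, references)) / len(references)
    return {"exact_match": 100.0 * em, "f1": 100.0 * f1}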
http://arxiv.org/abs/2407.13096v1
20240718020848
DSO: A GPU Energy Efficiency Optimizer by Fusing Dynamic and Static Information
[ "Qiang Wang", "Laiyi Li", "Weile Luo", "Yijia Zhang", "Bingqiang Wang" ]
cs.PF
[ "cs.PF" ]
DSO: A GPU Energy Efficiency Optimizer by Fusing Dynamic and Static Information Qiang Wang1, Laiyi Li1, Weile Luo2, Yijia Zhang3, Bingqiang Wang3Corresponding authors: Qiang Wang, Yijia Zhang 1Harbin Institute of Technology (Shenzhen) 2The Hong Kong University of Science and Technology (Guangzhou) 3Peng Cheng Laboratory 1qiang.wang@hit.edu.cn, 123s151122@stu.hit.edu.cn, 2wluo976@connect.hkust-gz.edu.cn, 3{zhangyj01,wangbq}@pcl.ac.cn July 22, 2024 § ABSTRACT Increased reliance on graphics processing units (GPUs) for high-intensity computing tasks raises challenges regarding energy consumption. To address this issue, dynamic voltage and frequency scaling (DVFS) has emerged as a promising technique for conserving energy while maintaining the quality of service (QoS) of GPU applications. However, existing solutions using DVFS are hindered by inefficiency or inaccuracy, as they depend on either dynamic or static information alone, which prevents them from being adopted in practical power management schemes. To this end, we propose a novel energy efficiency optimizer, called DSO, which explores a lightweight solution that leverages both dynamic and static information to model and optimize GPU energy efficiency. DSO first proposes a novel theoretical energy efficiency model which reflects the DVFS roofline phenomenon and considers the tradeoff between performance and energy. It then applies machine learning techniques to predict the parameters of the above model from both GPU kernel runtime metrics and static code features. Experiments on modern DVFS-enabled GPUs indicate that DSO can enhance energy efficiency by 19% whilst maintaining performance within a 5% loss margin. GPU Modeling, Energy Efficiency, Dynamic Voltage and Frequency Scaling § INTRODUCTION In recent years, there has been a significant increase in the utilization of graphics processing units (GPUs) as accelerators in high-performance computing (HPC). This trend has been driven by the growing demand for energy-efficient solutions, particularly due to the rise of artificial intelligence (AI) applications <cit.>. One prominent instance is the advanced language model GPT-3 <cit.>, which was designed with over 150 billion parameters to generate human-like texts.
The training cost for GPT-3 exceeds 4.6 million dollars, equivalent to nearly 120 years of electricity consumption by an average household. These staggering figures underscore the significance of implementing effective mechanisms to enhance the energy efficiency of these systems. Even a modest 5% reduction in energy consumption can have a substantial impact. Dynamic voltage and frequency scaling (DVFS) is a promising technique for GPUs which enables the adjustment of devices to lower performance/power states. DVFS optimizes GPU performance and power by adjusting voltage and frequency levels, offering substantial energy savings with minimal performance impact <cit.>. Recent investigations <cit.> demonstrate that the utilization of DVFS techniques in graphics processing units (GPUs) engaged in deep neural network (DNN) applications resulted in energy savings of up to 26% especially in the common DNN inference scenario. Several existing studies <cit.> on energy conservation through DVFS rely on runtime information provided by GPU profiling tools, such as nvprof for Nvidia GPUs. These tools have proven to be effective in modeling the performance and power changes under different DVFS settings, given the high correlation between the performance counters of each GPU sub-component and the execution time/power. However, two drawbacks hinder their practical online usage. Firstly, the profiling overhead associated with these tools is typically significant because these profiling tools often require multiple replays of the target application, resulting in heavy computational costs. Secondly, some of these tools necessitate modifications to the application source code, which is not user-friendly and may not be available for online submitted jobs. Another branch of analyzing the performance and power behavior of GPU applications is static information modeling, which involves examining GPU low-level assembly codes such as PTX[<https://docs.nvidia.com/cuda/parallel-thread-execution/index.html>] and SASS[<https://docs.nvidia.com/cuda/cuda-binary-utilities/>]. This approach relies on using the GPU assembly of the kernels, which can be obtained at compile-time or by disassembler tools. One advantage of this approach is that it does not require modifying users' applications or pre-executing them to collect runtime information. Moreover, this type of static modeling introduces new usage scenarios, such as facilitating the evaluation of how changes in the source code can affect the DVFS behavior of applications. However, due to the lack of GPU runtime information such as cache hit rate and compute resource occupancy, the prediction errors for execution time are typically high. We argue that an ideal energy efficiency optimizer for DVFS-based GPUs should be efficient and accurate, which cannot tolerate the extremely high overhead of those existing profiling tools. Recently, data center GPU manager (DCGM) <cit.> published by Nvidia is a lightweight tool to manage and monitor the GPUs in data center environments. It provides a set of powerful tools and APIs that allow system administrators and operators to monitor the health, performance, and utilization of the GPUs in a non-intrusive manner with negligible cost. However, the metrics provided by DCGM is just a subset of those by nvprof, which may decrease the model accuracy. To this end, we finally come to the optimization framework, called DSO, that leverages both the DCGM metrics (dynamic information) and the PTX codes (static information). 
We summarize the contributions of DSO as follows. * We propose a novel parameterized theoretical model of GPU energy efficiency considering both the effects of DVFS and the tradeoff between performance and energy consumption. The optimization solution for a GPU kernel is also explicitly derived to tune the best DVFS setting. * We design a machine learning based scheme to predict the parameters of the proposed theoretical model leveraging both the runtime metrics from the lightweight DCGM profiling tool and the static features from the PTX codes. * Validated on 20 real applications (not used during training) among two contemporary GPUs, the model trained on only micro-benchmarks shows considerably low errors (mostly within 5%) for both performance and power prediction. The average energy conservation observed in our optimization results achieves 19% on Tesla V100 on average compared to the default setting with no more than 5% performance loss, all without heavy offline profiling. § BACKGROUND AND MOTIVATION §.§ GPU DVFS Performance and power modeling is essential for energy conservation in different DVFS settings, dictating the total energy consumption of GPU applications. Recent studies <cit.> have shown that GPU DVFS behaviors are more complex than CPUs when altering voltages and frequencies, sometimes even proving contrary to conventional CPU DVFS. As modern GPUs have two primary frequency domains—core frequency for stream multiprocessors (SMs) speed and memory frequency for GPU memory bandwidth, our efforts centralize on optimizing frequency control based on application behaviors. §.§ The Input Sources for GPU Modeling Two input types are generally used in previous works for GPU performance, power modeling, and DVFS management: dynamic and static information. Dynamic information, also referred to runtime information, is hardware-dependent, collected during one execution of the target application, usually through profiling tools like nvprof <cit.>. Despite high prediction accuracy, the profiling overhead may inhibit practical DVFS energy optimization due to the lack of real-time measurements. Static information is hardware-independent, referring to GPU code features obtained before kernel execution. The NVIDIA Parallel Thread Execution (PTX), an intermediate assembly language for NVIDIA GPUs, is often utilized. It can be extracted from CUDA binary files using the Nvidia disassembler tool cuobjdump, allowing analysts to link each instruction with the GPU components involved in its execution. Moreover, lightweight monitoring tools like nvidia-smi <cit.> and DCGM <cit.> are available for tracing GPU status. As they relate to GPU hardware runtime, they fall under the category of runtime information. §.§ Why Dynamic and Static Information Fusion As stated above, these two input source types have their own advantages and drawbacks. A practical energy management scheme with GPU DVFS should be efficient and accurate in terms of modeling and optimization. Notice that DCGM can monitor the runtime utilization of different GPU components with negligible overhead. We consider it as the substitute of the heavy profiling tool such as nvprof. To fulfill the information gap between DCGM and nvprof, we further utilize the PTX code features as complement when designing our energy efficiency optimizer, which finally comes to the scheme of dynamic and static information fusion. 
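To make the notion of "static information" concrete, a PTX file can be reduced to a normalized histogram of instruction kinds. The sketch below is our own illustration, not the DSO implementation: the opcode strings in the comment are only examples, and for simplicity counts are normalized by the overall total, whereas DSO normalizes within each feature category.

from collections import Counter

def ptx_instruction_histogram(ptx_text: str) -> dict:
    """Count PTX opcodes (e.g. 'ld.global.f32', 'add.s32', 'fma.rn.f32') and normalize
    the counts so kernels of different sizes yield comparable feature vectors."""
    counts = Counter()
    for line in ptx_text.splitlines():
        line = line.strip()
        # Skip directives, comments, labels and braces; keep actual instructions.
        if not line or line.startswith((".", "//", "{", "}")) or line.endswith(":"):
            continue
        opcode = line.split()[0].rstrip(";")
        counts[opcode] += 1
    total = sum(counts.values()) or 1
    return {op: c / total for op, c in counts.items()}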
§ RELATED WORK There have been many research studies about understanding the impacts on GPU performance and energy consumption brought by DVFS as well as optimizing them solely or jointly. They generally take advantage of either physical runtime metrics (also referred as dynamic information) and GPU kernel codes (also referred as static information). Dynamic information-based methods initially used micro-benchmarks to profile GPU hardware components <cit.>. DVFS-aware prediction models were developed by Wang et al. <cit.> to quantify each GPU component's performance contribution. Guerreiro et al. employed similar tactics for power contributions <cit.>. ML-based methods, such as one developed by Wang et al. <cit.>, encapsulated interplay influences of diverse instructions and achieved below 10% average errors. The high accuracy of the above methods indicates the importance of runtime information, especially for contemporary GPU architectures with rich sub-components. Static information-based methods began with Hong et al. <cit.>, who modelled GPU kernel performance and power consumption using static CUDA and PTX code analysis. Guerreiro et al. <cit.> expanded this by including GPU assembly instruction sequences and deploying recurrent neural networks to capture dependency features. Braun et al. <cit.> proposed a simple model for swift predictions across GPUs relying solely on PTX instruction statistics. Fan et al. <cit.> developed DVFS-aware static models based on a vector of 10 instruction types. However, these methods achieved high power prediction accuracy, but their execution time predictions were lacking due to the absence of hardware runtime information, such as cache hit rate or register spilling. § METHODOLOGY §.§ Overview Figure <ref> outlines our DSO optimizer framework. Initially, we present a parameterized optimization model, referred to as the “GPU DVFS Model" rectangle, which incorporates the effects of GPU DVFS and balances the tradeoff between performance and energy efficiency. Our model aims to enhance energy efficiency by considering these factors. To accurately and efficiently predict the model parameters, we employ machine learning techniques that leverage hardware status information from DCGM and GPU kernel details from the PTX parser. By utilizing these inputs, we can make informed predictions about the model parameters. Once these parameters are determined, we can theoretically derive the optimal DVFS configuration. For implementing the DVFS configuration, the GPU DVFS controller utilizes the APIs provided by NVML <cit.>. These APIs enable us to set the desired voltage and frequency targets, allowing us to implement the optimal DVFS configuration based on the calculated parameters. §.§ Problem Formulation Previous studies have showcased the effectiveness of statistical models trained using dynamic and static GPU features to accurately represent performance and power data samples. These models have achieved a remarkable level of accuracy and confidence. However, compared to learning-based approaches, the utilization of parameterized models provides the advantage of interpreting the unique characteristics of GPU hardware and comprehending the impact of DVFS on performance and power. Expanding upon the research presented in <cit.>, we adopt a similar approach to model the runtime power of the GPU, as shown in Equation (<ref>). 
P(V^c, f^c, f^m) = (P^0+κ V^c)_P_static + (γ f^m + c(V^c)^2 f^c)_P_dynamic V^c, f^c, and f^m represent the GPU supply core voltage, GPU core frequency, and GPU memory frequency, respectively. The power is consist of two parts, the static part P_static and the dynamic part P_dynamic. P_static includes P^0, which denotes the constant power consumption of the GPU system that is not related to GPU voltage/frequency scaling, and κ V, which denotes the power that maintains the supply voltage for executing the GPU application. κ denotes the coefficient related to the hardware characteristics, such as the number of transistors in the chip design and the leakage current for a single transistor. As for the dynamic part, the coefficients γ and c are constant values that rely on both the hardware characteristics and the specific application being considered. These coefficients indicate the sensitivity of power consumption to memory frequency scaling and core voltage/frequency scaling, as explained in <cit.>. Compared to power modeling, performance modeling of GPU DVFS is rather complex <cit.>. Inspired by the DVFS-aware roofline observations in <cit.>, we innovatively design a piecewise mathematical model with a concise form to simplify the subsequent analysis of GPU energy conservation. We formulate the performance function T(f^c,f^m) of a GPU-accelerated application as shown in Eq. (<ref>). t^0 represents the constant component in GPU application execution time. α is a constant factor that indicates the sensitivity of this application to GPU memory frequency scaling, and β is a constant factor that indicates the sensitivity to GPU core frequency scaling. With t^0,α and β set to different values, the model is capable of simulating the various DVFS effects of a variety of applications. Our experiments on real GPU applications indicate that this time model effectively captures the performance effects of DVFS on all the tested applications. It provides a coherent explanation for the observed performance variations and accurately represents the impact of DVFS across different application scenarios. T(f^c,f^m)=t^0+max(α/f^m, β/f^c) Notice that f^c and V^c are correlated. For a fixed V^c, the maximum core frequency (f^c_max) is determined by V^c. We apply the function in <cit.> to denote this relationship: f^c≤ g_1(V^c)=√((V^c-κ)/2)+κ, and κ is the function parameter. With the above models, the GPU energy (E_J) consumed to process one task is the product of the runtime power and the execution time, as shown in Eq. (<ref>). E = (P^0+κ V^c+γ f^m + c(V^c)^2 f^c)_P(V^c, f^c, f^m)×(t^0+max(α/f^m, β/f^c))_T(f^c) We propose a simple objective cost function to tradeoff the performance and energy consumption as Eq. (<ref>). C(V^c, f^c, f^m) = η E + (1 - η) P_max T = η PT + (1-η)P_maxT = (η P + (1-η)P_max)T Here η is the parameter specified by the user to express the relative importance of energy efficiency and training performance (throughput). When η=0, we are only optimizing for time consumption, whereas when η=1, we are only optimizing for energy consumption. P_max is the maximum power limit supported by the GPU, a constant introduced to unify the units of measure in the cost metric. The parameters to be determined are (P^0, κ, γ, c) related to power and (t^0, α, β) related to performance. 
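To make the model concrete, the following is a minimal sketch (our own, not the authors' code; function and variable names are ours) of the parameterized power, time, and cost functions defined above:

def gpu_power(V_c, f_c, f_m, P0, kappa, gamma, c):
    """Runtime power: static part (P0 + kappa*V_c) plus dynamic part (gamma*f_m + c*V_c**2*f_c)."""
    return P0 + kappa * V_c + gamma * f_m + c * V_c**2 * f_c

def gpu_time(f_c, f_m, t0, alpha, beta):
    """Piecewise DVFS-aware execution time: constant part plus the slower of the
    memory-bound (alpha/f_m) and core-bound (beta/f_c) terms."""
    return t0 + max(alpha / f_m, beta / f_c)

def cost(V_c, f_c, f_m, params, eta, P_max):
    """Cost C = (eta*P + (1 - eta)*P_max) * T, trading energy against performance."""
    P = gpu_power(V_c, f_c, f_m, params["P0"], params["kappa"], params["gamma"], params["c"])
    T = gpu_time(f_c, f_m, params["t0"], params["alpha"], params["beta"])
    return (eta * P + (1.0 - eta) * P_max) * T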
While they can be fitted with the data points sampled from different DVFS settings, in the paper, we attempt to directly predict them by the DCGM metrics and static code information, which do not need to pre-execute the target GPU application with different DVFS settings. §.§ Best Configuration for Target Cost As a first step, we consider the following how to derive the solution to achieve the best energy efficiency modeled by Eq. (<ref>). Eq. (<ref>) shows the mathematical formulation of the problem. Notice that V^c and f^c are correlated variables, and f^c is upper bounded by a function of V^c, denoted by g_1(V_c). argmin C = argmin{(η(P^0+κ V^c+γ f^m+c(V^c)^2f^c) +(1-η)P_max) × (t^0+max(α/f^m, β/f^c))} s.t. V^c_min ≤ V^c≤ V^c_max, f^m_min≤ f^m≤ f^m_max, f^c_min ≤ f^c≤ g(V^c) Theorem 1. With a fixed memory frequency, the cost function of a GPU kernel is minimum when the GPU core frequency is maximum corresponding to the GPU core voltage, and f^c ≤β/αf^m, i.e., C_min(f^m)=V^Gcarg minC(V^c,g(V^c),f^m). We firstly discuss the case of f^c < β/αf^m (equivalent to α/f^m < β/f^c), which indicates that the kernel is compute-bound. The cost function then becomes C=(η(P^0+κ V^c+γ f^m+c(V^c)^2f^c)+(1-η)P_max) × (t^0+β/f^c). We obtain the first-order partial derivatives as: ∂ C/∂ V^c=η(κ+2c V^c f^c)(t^0+β/f^c) and ∂ C/∂ f^c=cη (V^c)^2(t^0+β/f^c)-βη(P^0+κ V^c+γ f^m+c(V^c)^2f^c)+(1-η)P_max/(f^c)^2. Because ∂ C/∂ V^c>0, C cannot attain its minimum on the interior of the domain, and E_J is a monotonically increasing function of V^c. The minimum is on the boundary of g(V^c). We then discuss the case of f^c ≥β/αf^m, which indicates that the kernel is memory-bound. T(f^c) is reduced to be t^0+β/f^m. As P is a monotonically increasing function of f^c, the minimum of C is achieved when f^c = β/αf^m. To be concluded, f^c can be eliminated such that finding the minimum of C is only related to V^c, and the condition of getting the minimum C is f^c ≤β/αf^m. Theorem 1 transforms a three-variable optimization problem into a two-variable optimization problem. It implies that when we scale the GPU core alone to conserve energy, we only need to find an appropriate core voltage and set the core frequency to the largest allowed value. We then consider GPU memory frequency scaling alone. If the core voltage and frequency settings are fixed as V^c_o and f^c_o, we can easily compute the optimal memory frequency by setting ∂ C/∂ f^c=0. We denote it as f^m_o. Since the minimum C is obtained when f^c ≤β/αf^m, the time model can be simplified to (t^0+β/f^c), which eases the calculation of f^m_o. Based on the above analysis, the original three-variable problem is transformed into an one-variable optimization problem. Reducing the problem dimension is vital to speeding up the computation. Since the GPU voltage usually has a narrow range, we can conduct a grid search on it and derive the optimal core and memory frequencies in practice. §.§ Modeling the performance and power of GPU DVFS Theorem 1 allows the solution of Eq. (<ref>) via a single-variable optimization problem. The next step is estimating parameter values in the cost model for a GPU application. Unlike prior work <cit.> directing predictions towards performance and power or the scaling ratio compared to the default DVFS setting, we suggest estimating model parameters using both DCGM metrics (dynamic information) and PTX instructions (static information) with a machine learning algorithm. 
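Once the parameters are available, Theorem 1 reduces the search to a grid over the core voltage: for each voltage the core frequency is taken as large as allowed by the f-V curve g_1 and the memory-bound limit (beta/alpha)*f_m. The rough sketch below reuses the cost() helper above; "kappa_g" denotes the fit parameter of g_1 (assumed distinct from the power-model kappa), and the memory frequency is treated as fixed, as on the V100 used later.

def g_max_core_freq(V_c, kappa_g):
    # Maximum core frequency at voltage V_c: f_c <= sqrt((V_c - kappa_g)/2) + kappa_g.
    return ((V_c - kappa_g) / 2.0) ** 0.5 + kappa_g

def best_config(params, eta, P_max, V_grid, f_m, f_c_min, f_c_max):
    """Grid-search the core voltage; for each V_c pick the largest admissible core
    frequency (capped by g_1(V_c), the hardware range, and (beta/alpha)*f_m)."""
    best = None
    for V_c in V_grid:
        f_c = min(g_max_core_freq(V_c, params["kappa_g"]), f_c_max,
                  params["beta"] / params["alpha"] * f_m)
        f_c = max(f_c, f_c_min)
        C = cost(V_c, f_c, f_m, params, eta, P_max)
        if best is None or C < best[0]:
            best = (C, V_c, f_c)
    return best  # (cost, core voltage, core frequency)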
§.§.§ Feature Processing We selected eight key metrics from the DCGM profiling tool closely tied to GPU kernel activities (Table <ref>), the values of which range between 0 and 1. As discussed in <cit.>, these metrics encompass the crucial factors essential for GPU performance modeling. Regarding PTX instructions, we consider three categories: instruction type, data type, and memory space (Table <ref>). These include instruction types defined in the PTX ISA, basic instruction operand types, and all the types in the GPU memory hierarchy. We parse the PTX source code to count each instruction type and normalize each value by their category's total instructions. §.§.§ Training Scheme Contrasting previous methods <cit.> that predict absolute values or scaling factors, we propose to estimate the parameters in Eq. (<ref>), which can be used by Theorem 1 to derive the best configuration. This approach leverages efficient machine learning algorithms like Randomized Trees Regression <cit.> and shallow neural networks <cit.>, with an explicit formulation simplifying the learning process. We utilize a multi-layer perceptron for estimating the energy model parameters, consisting of five layers including one input, three hidden layers (using a sigmoid activation function) with empirically set neurons (100, 50, 25), and one output layer. Hyper-parameters like batch size and learning rate are optimized with grid search and overfitting is prevented through three-fold cross-validation. Practically, parameters in Eq. (<ref>) are obtained by collecting data samples under all frequency settings for each GPU. Linear regression is then used to fit the power model and piecewise linear regression for the performance model, with the average regression absolute percentage error within 2%. Our neural network is trained to estimate these model parameters, providing accurate predictions for realistic benchmarks. § EXPERIMENTS §.§ Experimental Setup We validate the proposed DSO model on the Tesla V100 GPU. Experiments are performed on a Linux Ubuntu 18.04, with CUDA 11.5 and Nvidia Driver v515. Notice that the memory frequency of V100 is fixed since they cannot be tuned under our experimental environments. We change the GPU operating DVFS settings by nvidia-smi <cit.> with the flag "-lgc" (for core frequency) and "-lmc" (for memory frequency). In our experiments, we tune no less than 10 frequency options, from 705 to 1380 MHz, to collect sufficient data samples to train the model. To obtain all the data samples to train the machine learning algorithms for predicting the GPU DVFS model parameters, we execute each benchmark in <cit.> on the GPUs at all the available frequency configurations. By tuning the ratio of different instructions in each application, we obtain totally 138 GPU benchmarks of different operational intensity values. After that, the accuracy of the estimated models is tested on 20 realistic benchmarks from CUDA SDK 11.5 and Rodinia. These benchmark applications cover a wide range of execution patterns, such as DRAM intensive, L2 cache intensive, shared memory intensive and computation intensive. Notice that the testing set is not used to train the models. Our model can perform well generalization on unseen GPU applications. To build the DSO model, we obtain the necessary PTX instructions, DCGM metrics, and power samples for each benchmark. 
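As a rough illustration of the training scheme described above, the parameter predictor can be written as a small multi-output regressor. This is our own sketch using scikit-learn rather than the authors' implementation; the feature ordering, preprocessing, and optimizer settings such as max_iter are assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

# X: one row per benchmark sample - the 8 DCGM utilization metrics concatenated with the
#    normalized PTX instruction-category counts described above.
# y: the 7 model parameters (P0, kappa, gamma, c, t0, alpha, beta) fitted offline by
#    (piecewise) linear regression over data collected at all frequency settings.
def train_parameter_predictor(X: np.ndarray, y: np.ndarray) -> MLPRegressor:
    model = MLPRegressor(hidden_layer_sizes=(100, 50, 25),  # three hidden layers as in the text
                         activation="logistic",             # sigmoid activation
                         max_iter=5000, random_state=0)
    model.fit(X, y)
    return model

# At test time a single profiled run of an unseen kernel yields its feature vector x, and the
# predicted parameters feed directly into the cost model and voltage grid search sketched above:
# params = dict(zip(["P0", "kappa", "gamma", "c", "t0", "alpha", "beta"],
#                   train_parameter_predictor(X, y).predict(x.reshape(1, -1))[0]))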
The "-ptx" flag of the nvcc compiler is used for PTX instructions, adjusting the PTX ISA version for different GPU architectures with "-gencode=arch=compute_XX,code=compute_XX" (in the case for the Tesla V100, which has a compute capability of 7.0, "XX" is 70). To gather DCGM metrics, a daemon thread runs "dcgmi dmon -e ${METRIC_LIST}" in loop during benchmark execution. A second daemon thread using NVML <cit.> gathers power consumption data. Each benchmark's average power consumption is calculated from these samples. These efficient, non-intrusive methods demonstrate DSO's practical flexibility and efficiency. §.§ Model Accuracy Table <ref> showcases our DSO in comparison to other studies, highlighting the modeling overhead and accuracy. Relying solely on static PTX code analysis proves challenging for execution time prediction, highlighting the importance of dynamic tools like nvprof and DCGM for accurate estimates. The DSO integrates DCGM for practical, high-quality, and low overhead estimation. The performance and power estimation accuracy of DSO on the V100 GPU are manifested in Figure <ref>. To emphasize the benefit of fusing dynamic and static information, DSO is compared with methods using either PTX features or DCGM metrics alone. DSO consistently yields low prediction errors for execution time (0.5%-9.5%) and power consumption (1.3%-7%). It outperforms the others with an average MAPE of 4.6% (vs. PTX's 7.8% and DCGM's 5.7%) for performance and 4.9% for power (vs. PTX's 8.3% and DCGM's 6.9%). It's observed that combining PTX and DCGM improves the accuracy drastically for several GPU applications, confirming the utility of runtime utilization information for capturing DVFS effects. This underlines the rationale behind DSO's design, using PTX features as a supplement to DCGM metrics for enhanced prediction. §.§ Energy Efficiency Once predictive model parameters are garnered, the optimal DVFS configuration for minimal energy consumption is derived (Section <ref>). Figure <ref> displays these results as average values over 20 real applications. We examined the impact of varying η values, which signal the preference level for energy efficiency. An η of 1 indicates maximum preference for low energy consumption. As η increases, execution time expands, but energy consumption decreases. This results from additional opportunities for energy efficiency allowed by a higher η. Within performance loss limits (e.g., 5%), our DSO offers a suitable η selection according to these requirements. For the V100 GPU architecture, we advise using η = 0.8, conserving energy consumption by approximately 19%. § CONCLUSION We introduce DSO, an innovative framework for modeling and optimizing GPU performance, power, and energy with DVFS. DSO uses a parameterized theoretical model considering DVFS effects to simultaneously optimize performance and energy efficiency. Combining static PTX and dynamic DCGM information, DSO accurately estimates model parameters, showing improved prediction accuracy over using either source independently. Tested on the Volta GPU, DSO models yield accurate results for unencountered real GPU applications. Furthermore, DSO facilitates balancing performance and energy efficiency. Leveraging DSO’s optimal configurations can enhance GPU energy efficiency by approximately 20% with no more than a 5% performance loss. § ACKNOWLEDGMENTS This research was supported by the National Natural Science Foundation of China (No. 62302126), the Shenzhen Science and Technology Program (No. 
RCBS20221008093125065, No. JSGGKQTD20221101115655027) and the Peng Cheng Laboratory Project (Grant PCL2021A13).
http://arxiv.org/abs/2407.13243v1
20240718075336
Dark matter droplets
[ "Ian G Moss" ]
astro-ph.CO
[ "astro-ph.CO", "cond-mat.quant-gas", "hep-th" ]
ian.moss@newcastle.ac.uk School of Mathematics, Statistics and Physics, Newcastle University, Newcastle Upon Tyne, NE1 7RU, UK § ABSTRACT A new model for dark matter is put forward which consists of uniform droplets of Bose Einstein condensate. In this model, structure forms rapidly, shortly after the hot big bang plasma de-ionises. The model also produces modifications to the expansion rate before droplet formation that affect the measurement of cosmological parameters from Cosmic Microwave Background data. The model could contribute to explaining why observations at high redshift see anomalously high structure formation and predict low values for the Hubble constant. Dark matter droplets Ian G. Moss July 22, 2024 ==================== Introduction. Amongst the many ideas about the nature of dark matter there is a possibility that it consists of oscillations in a very light scalar field <cit.> (see <cit.> for reviews). Generally, scalar field dark matter models go under the name of “fuzzy dark matter", because the matter clusters in a similar way to particulate dark matter, but with some wavelike interference superimposed. In scalar field models that include self-interaction, structures involve a balance of scalar field forces, which can be attractive or repulsive, spatial gradient pressure and gravitational forces <cit.>. In this paper, a new type of fuzzy dark matter model is introduced, in which the self-interaction switches from being repulsive at early times, to attractive at later times. We place this in the context of a Coleman-Weinberg potential inspired by elementary particle physics <cit.>. The switch in self-interaction closely parallels some laboratory experiments on cold atom mixtures, where droplets form as a result of an instability that can be triggered by quantum vacuum polarisation effects <cit.>. Droplets form rather like liquid water droplets form in clouds. Merging droplets can form larger drops, which can start to feel the effects of gravity, the Earth's gravity in the case of clouds or self-gravity in the case of cosmology. The droplets have uniform density, surrounded by vacuum. This sets them apart from other types of droplet which balance quantum pressure, scalar field forces and gravity <cit.>. They can range in mass from microscopic, to many solar masses. A major feature is that droplet formation is driven by scalar field forces, and they can grow exponentially in time, unlike the linear growth typical of gravitational instability. Baryons found inside the droplets would respond to the enhanced density. This means they can form structures very early on in the history of the universe and droplets are ideally situated to play a role in explaining how star and black hole formation seems to have started at such large redshifts <cit.>. Modifications to the dark matter sector at high redshift can also affect predictions of the cosmological parameters based on Cosmic Microwave Background (CMB) data. This can be relevant because the value of Hubble's constant h predicted in Lambda Cold Dark Matter (Λ CDM) models <cit.> is currently lower than the Hubble constant measured at low redsfhift <cit.>. There is a long list of models that have been put forward to explain this Hubble tension <cit.> including some with fuzzy dark matter that have used a “frozen field" <cit.> or dark matter-dark energy interactions <cit.>. 
In the new model, Droplet formation at redshift around 10–100 decreases the size of the sound horizon at the surface of last scattering compared to Λ CDM models. This increases the CMB prediction of the Hubble constant. After the droplets form, they behave as ordinary Λ CDM. On the downside, models which decrease the size of the sound horizon often conflict with other observable parameters <cit.>. For this preliminary investigation, the effect on the CMB is primarily a constraint on the model, with moves in the right direction for solving the Hubble tension problem. The cosmology of Coleman-Weinberg based potentials have been considered previously <cit.>, but this was in the context of relic abundances in extensions of the standard model of particle physics. The model considered here have exceptionally small masses and couplings. The aspect of the model is not justified, but is common to most fuzzy dark matter models. Coleman-Weinberg models– The dark matter models we will investigate have a transition from repulsive to attractive self interactions. This can be realised in scalar fields with a Coleman-Weinberg effective potential. Consider a charged scalar field χ that interacts with an (invisible) photon. We require that the scalar self-interaction constant λ and scalar mass m are both small, and then the Lagrangian density L is L=1/2c^2|χ̇|^2-1/2|∇χ|^2-1/2m^2c^2/ħ^2|χ|^2 -3/4α^2/ħ c|χ|^4ln|χ|^2/μ^2_R, where μ_R is a renormalisation scale. The logarithmic term represents the effect of vacuum polarization. The parameter requirements for the vacuum polarization to be important are λ≪α^2, where α is fine structure constant for the scalar-photon interaction, and λ m≪ħα^2|χ|^2/c^3. The field equations in an expanding flat universe with scale factor a(t) can be derived from the Lagrangian given above. In the non-relativistic limit, used widely for scalar dark matter, we introduce a complex field ψ and set χ=ħ/√(m a^3) e^-im c^2 t/ħe^-iθ(t)ψ The reason for the additional phase θ should become clearer below. Dropping the ψ̈ terms, we find iħψ̇=-ħθ̇ψ-ħ^2/2m∇^2ψ+gnψ(lnn/n_d-1/2) where n=|ψ|^2/a^3 is the particle number density, n_d is related to μ_R and the coupling g is g=3/2ħ^3α^2/m^2 c Introducing the θ term allows this equation to have a stationary solution. If we choose ħθ̇=gn(lnn/n_d-1/2) then ψ is constant and the number density decreases as ordinary matter. In the theory of cold atoms, we would identify ħθ̇ with a chemical potential μ. In the expanding universe, μ decreases with time. There are additional corrections to the density redshift relation as a consequence of the neglected derivative terms, which will be considered later. Droplet formation.–Note that the effective coupling in Eq. (<ref>) becomes negative when n<n_d√(e), and we should expect some type of instability. This can be analysed using cosmological perturbation theory, as in Ref <cit.>. The density inhomogeneity δ=δ n/n for a comoving mode k satisfies δ̈+2Hδ̇+(ħ^2 k^4/4 m^2 a^4+gn k^2/m a^2lnn/n_d-4π G nm)δ =0. The negative sign in the last term indicates that large wavelengths have the usual Jeans instability, which grows linearly with time. Additionally, modes can also become unstable due to the scalar self force when n<n_d. The parameter range g n_d/2ħ≫ H is particularly interesting because this is where perturbations can grow faster than the Hubble flow. 
There are approximate solutions δ∝ e^-iω t, with dispersion relation depending on the physical wavenumber k_ phys=k/a, ω^2=k_ phys^2/2m[ħ^2k_ phys^2/2m+gnlnn/n_d] When n<n_d, imaginary values of ω are associated with modes that grow exponentially. The fastest growing mode for given n is the one with the largest value of |ω|, |ω|_ max, at physical wavenumber k_ max, |ω|_ max=ħ k_ max^2/2 m=gn/2ħ|lnn/n_d|. The maximum growth rate occurs when n=n_d/e, for a length scale λ_d=2π/k_ max, λ_d=(m c^2/g n_d)^1/2λ_CW, where λ_CW is the Compton wavelength of the particle 2πħ/m c. In the chosen parameter range, |ω|≫ H. If, instead, we have g n_d/2ħ<H, then λ_d is larger than the Jeans length λ_J and the droplet formation would be driven by gravitational forces. (This can be seen from the relation 2ħ H/gn_d∼λ_d^2/λ_J^2.) The final state of the instability can be analysed using the energy density. Substituting the non-relativistic approximation (<ref>) into the stress-energy tensor gives the energy density ϵ for a homogeneous system, ϵ= n m c^2+μ n +1/2 g n^2(lnn/n_d-1) On the other hand, suppose the dark matter forms into droplets with number density n_d. Since the particle number is conserved, a fraction n/n_d of the volume is occupied by droplets. The energy density ϵ_d of a region containing a mixture of droplets and empty space is ϵ_d=n/n_d(n_dm c^2+μ n_d-1/2 gn_d^2) We subtract the μ n term to obtain the grand thermodynamic potential, and this is lower in the droplet phase than the homogeneous phase when n<n_d. So far, we have neglected gravity and surface tension. Let us include these for spherical droplet of radius R, mass M and uniform density n_R. The total energy of the droplet E_d, obtained by multiplying Eq. (<ref>) by the droplet volume is E_d=1/2gM/mn_R(lnn_R/n_d-1)-GM^2/2R+σ R^2+Mc^2 For fixed mass, R≡ R(n_R) and the generalised force dE_d/dn_R is dE_d/dn_R=Mc^2/2 n_R{gn_R/m c^2lnn_R/n_d-GM/3 c^2R-4σ R^2/3 Mc^2} Consideration of the gradient terms gives σ≈κ g n_R^2λ_d, where κ is a small numerical constant. The first and the last terms balance for any R≥λ_d, with n_R≈ n_d. When R<λ_J, the gravity factor is negligible. Uniform droplets exist for radii λ_d≤ R<λ_J. Once formed, droplets at rest would behave like very large CDM particles. In practice, droplets can form with a range of sizes and initial velocities, and may combine like water droplets into larger drops. Larger drops become non-uniform as they are are effected increasingly by gravitational and pressure forces. This process could play a role in early structure formation, but it is quite complex, and we leave for it future work. Effect on the CMB.–Next, we turn to the deviations from the ϵ∝ a^-3 rule that happen before the droplet phase. The simplest approach is to use the conservation rule ϵ̇+3H(ϵ+p)=0 The energy density was given above in Eq. (<ref>). A similar calculation gives the pressure p=μ n-1/2 gn^2(lnn/n_d-1) Now, we make use of the non-relativistic approximation m c^2≫ gn. We replace n→ n+δ n, in Eqs. (<ref>) and (<ref>), where n has the CDM behaviour n∝ a^-3. Solving for the number density perturbation δ n from CDM in Eq. (<ref>) gives δ n/n=-gn/m c^2lnn/n_d. Substituting back into the energy, δϵ/ϵ=1/2gn/m c^2(lnn/n_d-1) This only affects the Hubble parameter-redshift relation for redshifts larger than the redshift of droplet formation z_d. If we denote the redshift of matter radiation equality by z_ eq, then the dark matter dominates the expansion for redshifts 1≪ z_d<z<z_ eq, and in this range H∝ϵ^1/2. 
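The linear-instability quantities derived above can be encoded directly; the sketch below is not from the paper, it simply evaluates the displayed formulas and assumes all inputs are supplied in a consistent unit system.

import numpy as np

def omega_squared(k_phys, n, n_d, g, m, hbar):
    """Dispersion relation: omega^2 = (k^2/2m) * (hbar^2 k^2 / 2m + g n ln(n/n_d)).
    Negative values signal exponentially growing (unstable) modes, which occur for n < n_d."""
    return (k_phys**2 / (2 * m)) * (hbar**2 * k_phys**2 / (2 * m) + g * n * np.log(n / n_d))

def max_growth_rate(n, n_d, g, hbar):
    """Fastest-growing mode: |omega|_max = (g n / 2 hbar) |ln(n/n_d)|, valid for n < n_d."""
    return (g * n / (2 * hbar)) * abs(np.log(n / n_d))

def droplet_scale(n_d, g, m, c, hbar):
    """Droplet length scale lambda_d = sqrt(m c^2 / (g n_d)) * lambda_CW,
    with the Compton wavelength lambda_CW = 2 pi hbar / (m c)."""
    lambda_cw = 2 * np.pi * hbar / (m * c)
    return np.sqrt(m * c**2 / (g * n_d)) * lambda_cw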
If we hold the Hubble constant h fixed, then the Hubble parameter H(z_*) at the surface of last scattering z_* for the dark matter model and H_CDM(z_*) for ordinary CDM are related by δln H_*≡H(z_*)-H_CDM(z_*)/H_CDM(z_*)≈1/2δϵ_*/ϵ_* Using Eq. (<ref>), we have an increase in energy density between redshift z_d and z_*, δlnϵ_*=1/2g n_d/m c^2(1+z_*/1+z_d)^3{ln(1+z_*/1+z_d)^3-1 } In other words, the Hubble parameter at decoupling would appear larger in the dark matter model than it would when assuming CDM. The effect of the dark matter model on CMB observations can be estimated as follows. The primary observable quantity is the angle subtended by the sound horizon at last scattering r_s(z_*), as measured in the position of the first peak in the angular spectrum. The angle is given by θ_s=r_s(z_*)/r_H, where r_H is the distance to the cosmological horizon. For z_d≫ 1, the main dependence on δϵ is through r_s(z_*)∼ 2c_s/H(z_*), because the luminosity distance r_H is determined mostly at low z where the droplet model is identical to ΛCDM. We therefore expect from Eq. (<ref>) that δlnθ_H∼ -0.5 δlnϵ_*. The predicted value of the Hubble parameter h is inferred from a complicated numerical pipeline, but Percival et al. <cit.> give some useful approximations relating deviations in θ_H to the matter density parameter ω_m=Ω_m h^2 and Hubble constant h. Applying a similar method, including the Hubble law deviation, gives δlnθ_H≈ 0.2 δln h+0.14 δlnω_m-0.55 δlnϵ_*. A secondary effect on the CMB comes through changes to Silk damping of the CMB peaks, but the change turns out to be very minor, and the peak heights effectively fix the value of ω_m. If we fix θ_H by the observations, then predicted value of the Hubble constant h is larger in a model with droplet formation after the surface of last scattering, by δln h≈ 2.5 δlnϵ_*. The dark matter model has three parameters, the particle mass m, the coupling g (or α) and the droplet density n_d. In practice, we can replace n_d by the redshift of droplet formation z_d. One constraint on the parameters which we have not discussed so far is that the density variation δlnϵ must be small at matter-radiation equality. This because the energy density behaves like radiation when the χ^4 term starts to dominate, and nucleosynthesis constraints kick in. Some illustrative parameters are given in table <ref>. Conclusions Although the results have been built around a fuzzy dark matter model with Coleman-Weinberg potential, some of the features are more general. Structure can grow at an exponential rate in dark matter with an attractive self-interaction, compared to the usual linear growth from gravitational forces. On the other hand, fuzzy dark matter with repulsive self-interactions increase the value of the Hubble constant h predicted by the CMB. The Coleman-Weinberg models combine both, with a self-interaction that is repulsive at large redshift and attractive at low redshift. The Coleman-Weinberg dark matter potentials lead to droplet formation, with a wide range of possible droplet masses and sizes. The simplest fate of these droplets is to remain as dark matter particles. A more interesting possibility is that the droplets move and merge to form larger drops. They would also be affected by the gravitational forces from baryonic matter. Understanding the details would require a more detailed analysis. 
Through an interesting coincidence, cold atom models in two spatial dimensions have the same logarithmic form for the vacuum polarization as Coleman-Weinberg models in three spatial dimensions <cit.>. This opens up the possibility of making a laboratory analogue of cosmological structure formation, to study the initial size and velocity distribution of droplets when they form, and the subsequent process of merger into larger drops. The author is grateful for discussions with Tom Billam and Chanda Prescod-Weinstein. He is supported by the UK Science and Technology Facilities Council [grants ST/T00584X/1 and ST/W006162/1].
http://arxiv.org/abs/2407.12128v1
20240716193323
Distribution Alignment for Fully Test-Time Adaptation with Dynamic Online Data Streams
[ "Ziqiang Wang", "Zhixiang Chi", "Yanan Wu", "Li Gu", "Zhi Liu", "Konstantinos Plataniotis", "Yang Wang" ]
cs.LG
[ "cs.LG", "cs.CV" ]
DA-TTA Z. Wang et al. Concordia University, Canada {ziqiang.wang,li.gu}@mail.concordia.ca, yang.wang@concordia.ca University of Toronto, Canada zhixiang.chi@mail.utoronto.ca, kostas@ece.utoronto.ca Beijing Jiaotong University, China ynwu0510@bjtu.edu.cn Shanghai University, China liuzhisjtu@163.com Distribution Alignment for Fully Test-Time Adaptation with Dynamic Online Data Streams Ziqiang Wang10000-0002-4083-5411 Zhixiang Chi20000-0003-4560-4986 Yanan Wu30000-0002-3301-6303 Li Gu10000-0002-4447-4967 Zhi Liu40000-0002-8428-1131 Konstantinos Plataniotis20000-0003-3647-5473 Yang Wang10000-0001-9447-1791 July 16, 2024 =================================================================================================================================================================================================================================== Corresponding authors. Our code is available at https://github.com/WZq975/DA-TTAgithub.com/WZq975/DA-TTA. § ABSTRACT Given a model trained on source data, Test-Time Adaptation (TTA) enables adaptation and inference in test data streams with domain shifts from the source. Current methods predominantly optimize the model for each incoming test data batch using self-training loss. While these methods yield commendable results in ideal test data streams, where batches are independently and identically sampled from the target distribution, they falter under more practical test data streams that are not independent and identically distributed (non-i.i.d.). The data batches in a non-i.i.d. stream display prominent label shifts relative to each other. It leads to conflicting optimization objectives among batches during the TTA process. Given the inherent risks of adapting the source model to unpredictable test-time distributions, we reverse the adaptation process and propose a novel Distribution Alignment loss for TTA. This loss guides the distributions of test-time features back towards the source distributions, which ensures compatibility with the well-trained source model and eliminates the pitfalls associated with conflicting optimization objectives. Moreover, we devise a domain shift detection mechanism to extend the success of our proposed TTA method in the continual domain shift scenarios. Our extensive experiments validate the logic and efficacy of our method. On six benchmark datasets, we surpass existing methods in non-i.i.d. scenarios and maintain competitive performance under the ideal i.i.d. assumption. § INTRODUCTION The unprecedented success of deep models <cit.> is conditioned on the assumption that the training and test data are drawn from the same distribution <cit.>. However, such an assumption is delicate in ever-changing deployment environments <cit.>, leading to domain shift and performance deterioration. Test-Time Adaptation (TTA) is a line of research that mitigates domain shift by continually adapting to the unlabeled data stream in a target domain before inference <cit.>. There are two main categories of TTA: 1) Test-Time Training (TTT) <cit.>, where customized model training (e.g., adding auxiliary tasks <cit.>) is performed offline using the source data and performs adaptation on test data. 2) Fully TTA <cit.>, adapts an off-the-shelf model without altering offline training. Our work focuses on fully TTA which poses a greater challenge as minimal training is allowed. There have been a series of studies on fully TTA to tackle the challenges of learning on unlabeled data. 
One notable method is Test-time Batch Normalization (TTBN) <cit.>. TTBN adjusts the BN layers, allowing them to normalize the feature leveraging the batch statistics from the current test data batch, rather than the population statistics from the training phase. Given its effectiveness, TTBN has become a cornerstone for recent TTA research. Following this, there has been a surge in self-training-based TTA methods, primarily hinging on two adaptation objectives. The first employs entropy minimization (EM) <cit.>, pushing the model to make predictions with low entropy for the incoming test data. This ensures that the inference is well-distanced from the classification boundary <cit.>, thereby enhancing model performance on the test data. The second utilizes a teacher-student self-training framework <cit.>. Here, a teacher model assigns pseudo labels to the student, allowing the latter to be trained in a manner similar to supervised learning. Both TTBN and self-training-based TTA methods aim to tailor the model towards the incoming unknown test data batch. Their notable performances have been observed in ideal circumstances where every test data batch is independent and identically distributed (i.i.d., balanced classes) from the target domain. In the online fashion, as the model keeps updating, the i.i.d scenario ensures stable optimization from batch to batch. However, real-world scenarios are rarely so accommodating. In applications like self-driving vehicles and robot-vision systems, the batches of image feed are temporally correlated, making it non-i.i.d (imbalanced classes) <cit.>. In the non-i.i.d. data streams, the long-tailed problem may occur, where a minority of classes dominate the current batch <cit.> (see <ref>). As each incoming batch of data contains different sets of dominated classes, the distributions among batches are diverse, leading to conflicting optimization. As depicted in <ref>, for TTBN, the adaptation fails because test data batches in long-tailed distributions do not provide true target domain statistics for the BN layers. For self-training-based methods, the varying long-tailed target distributions in TTA sessions lead to conflicting optimization objectives <cit.>, which severely impair model performance, and may even cause it to collapse <cit.>. To address the aforementioned challenges, we propose a surprisingly simple yet effective method for robust TTA, as illustrated in <ref>. Rather than adapting the model to unpredictable test-time distributions, we reverse such a process and propose to set the source feature distribution as a reference and pull test data towards it. As a result, conflicting optimization objectives among data batches can be alleviated. Specifically, we propose to optimize the source model's affine layers using a Distribution Alignment (DA) loss. This loss minimizes the divergence between test feature distributions and the source distributions, thereby ensuring the test data's features align with the source model for compatibility. Furthermore, to accommodate scenarios featuring continuous domain shifts in test data streams, namely continual TTA <cit.>, we devise a domain shift detection mechanism that tracks changes in feature distributions. It improves our TTA method's efficacy in continuous domain environments. As demonstrated in <ref>, our method outshines others in handling both i.i.d. and non-i.i.d. data streams, effectively navigating the challenges associated with non-i.i.d. streams. 
Main Contributions: (1) Our distribution alignment loss addresses the TTA challenges in non-i.i.d. scenarios by aligning test features with source distributions, ensuring they mesh with the source model and preventing degradation from conflicting optimization objectives. (2) We propose a domain shift detection mechanism that tracks feature distributions, enhancing our TTA method's performance for continual TTA in non-i.i.d. data streams. (3) Our method surpasses recent state-of-the-art methods (, ∼6% on ImageNet-C/CIFAR100-C) across six datasets with different types of domain shifts in non-i.i.d. scenarios, while maintaining comparable performance under i.i.d. assumption. § RELATED WORK Unsupervised Domain Adaptation (UDA). UDA addresses the distribution shift by jointly training on the labeled source and unlabeled target data <cit.>. One popular approach is to learn domain-invariant features by minimizing a certain measure of divergence between the source and target distributions (e.g. <cit.>). Another line of studies involves embedding a “domain discriminator" within the network, which is applied to develop indistinguishable feature space (e.g. <cit.>). However, the necessity of having access to both source and target domains during training limits the usability of these methods. Source-free Domain Adaptation (SFDA). SFDA aims to adapt source models to unlabeled target domains without accessing the source domain data <cit.>. Among these, SHOT <cit.> suggests learning target-specific features through information maximization and pseudo-label prediction. SFDA-DE <cit.> works on domain alignment by estimating source class-conditioned feature distribution and minimizing a contrastive adaptation loss. DSiT <cit.> utilizes the queries of a vision transformer to induce domain-specificity and train the unified model to enable a disentanglement of task- and domain-specificity. Most existing source-free methods <cit.> operate offline and require an analysis of the entire test dataset, along with several adaptation epochs for model updates. Specially, BUFR <cit.> pre-computes and stores marginal distributions for each feature on source data using a soft binning function. It then realizes adaptation by restoring the test features with the stored marginal distributions. The philosophy of BUFR can be related to our work, thus, we also include a comparison with this SFDA method in our experiments. Test-time Adaptation (TTA). TTA can be categorized into Test-Time Training <cit.> (TTT) and Fully TTA <cit.>, differentiated by the presence of prior joint training. TTT leverages both supervised and self-supervised losses to train a source model, which is then fine-tuned during TTA using self-supervised learning <cit.>. Fully TTA, in contrast, performs inference and adaptation directly in test data streams without prior training. A notable method in this setting is Test-Time Batch Normalization (TTBN) <cit.>, which utilizes test-time batch statistics within BN layers for adaptation. Subsequently, optimization-based methods, including entropy minimization <cit.> and teacher-student self-training<cit.>, have been developed. EATA <cit.> alleviates redundant optimization in test streams by employing a mechanism that identifies redundant samples. It tracks model outputs and skips the optimization for samples that are similar to previous ones. 
Our domain shift detection mechanism also adopts a tracking philosophy, albeit with a different focus and objective: to monitor feature distribution and detect domain shifts. Besides, LAME <cit.> focuses on adjusting output assignments rather than tuning parameters for TTA, while ODS <cit.> optimizes the estimation of label distribution to enhance self-training-based TTA methods in scenarios involving label shift. Moreover, DDA <cit.> employs diffusion models to align target images with the source domain, then realizes classification without adapting the source model. Modified TTBN for TTA in Non-i.i.d. Streams. TTBN <cit.> establishes a strong baseline for TTA, yet it encounters difficulties in non-i.i.d. data streams or when dealing with small batch sizes. This is because incoming batches are class-imbalanced and provide biased statistics for BN layers. Subsequent works have modified TTBN to better handle non-i.i.d. streams or limited batch sizes. MEMO <cit.> and TTN <cit.> combine source population statistics with dynamic test batch statistics, while DELTA <cit.> applies a moving average of test batch statistics for batch normalization. Furthermore, NOTE <cit.> adjusts the normalization layers of TTBN by selectively incorporating instance normalization. In addition, both NOTE <cit.> and RoTTA <cit.> employ a resampling memory bank that collects and stores test samples from different estimated classes and updates test batch statistics from the stored samples using a moving average. Overall, prior works modify the computing of batch normalization, trying to stabilize the normalization process for the incoming test batch. Differently, this work investigates the correlation between model accuracy and the change in intermediate feature distribution due to imbalanced or balanced classes (<ref>). And we introduce our Distribution Alignment method, which directly optimizes the distribution of features for all test batches towards the same source reference. § METHODOLOGY §.§ Problem Definition Fully TTA <cit.> encompasses a scenario where a model, pre-trained on a labeled source dataset {(x, y) ∼ P_S(x, y)}, is subjected to a stream of unlabeled test data from a target dataset {(x, y) ∼ P_T(x, y)}. This target dataset presents a domain shift from the source, indicated by P_S(x) ≠ P_T(x) and P_S(y|x) = P_T(y|x) <cit.>. After deployment, the model updates itself based on the current data it receives, without using the source data. In pioneer work, fully TTA assumes that the distributions of target data over time, P_T(x, y| t), are i.i.d. that is consistent with P_S(x, y). However, our focus is on practical scenarios where P_T(x, y| t) is non-i.i.d. and changes over time. Therefore, fully TTA in non-i.i.d. data streams demands the management of both the domain shift from source to target and the distribution shifts (label shifts) that occur at each time step. Besides, we also consider continual TTA <cit.>. This setting extends the fully TTA from a single target domain to a sequence of continuously shifting target domains: P_T_1(x), P_T_2(x), …, P_T_n(x), as depicted in <ref>. §.§ Motivation on TTA in Dynamic Online Settings TTBN <cit.> sets a strong baseline underpinning a series of TTA works <cit.>. We first analyze why TTBN experiences performance drop in dynamic online data streams follow by our motivation. Analysis of TTBN in Non-i.i.d. Data Streams. <ref> shows the commendable efficacy of TTBN <cit.> in i.i.d. test data streams, positioning it as a strong baseline method. 
However, its performance wanes when exposed to non-i.i.d. data streams. We attribute this degradation to misleading distribution statistics provided by non-i.i.d data batches. Specifically, TTBN’s adjustment to the source model lies on the test-time statistics—the means μ and standard deviations (stds) σ of each Batch Normalization layer: μ = 1/b∑_i=1^b𝐗^(i) , σ^2 = 1/b∑_i=1^b (𝐗^(i) - μ)^2 , where 𝐗^(i) is the input feature corresponding to i^th sample in a batch with batch size b. The μ and σ enable an affine transformation that normalizes the feature 𝐗^(i), i ∈ [1, b] to match preferred distributions of the well-trained source model: 𝐗̂^(i) = 𝐗^(i) - μ/√(σ^2 + ϵ) = 𝐗^(i)/√(σ^2 + ϵ) + -μ/√(σ^2 + ϵ) , ⇒m(𝐗̂^(i)) = 1/√(σ^2 + ϵ)·m(𝐗^(i)) + -μ/√(σ^2 + ϵ) , d(𝐗̂^(i)) = 1/√(σ^2 + ϵ)·d(𝐗^(i)) , where 𝐗̂^(i) is transformed from 𝐗^(i), and m(𝐗̂^(i)), d^2(𝐗̂^(i)) are the mean and variance of each transformed feature map. For clarity, we use μ, σ to denote the batch statistics, while using m, d to denote the mean and standard deviation of the feature distribution for a single sample, as illustrated in <ref>. Implications for TTA. During test-time, when fed a data batch, the TTBN layers transform their features, trying to approach 𝐗̂^(i) towards the distributions of the source data features. The domain shift can be effectively mitigated and performance can be preserved if the transformed features approximate the source distribution. <ref> provides a visual exploration of the impact of non-i.i.d. data streams on the transformed feature distribution. With a frozen source model, three different data streams are assessed through the TTBN method: an i.i.d. source data stream, and both i.i.d. and non-i.i.d. target data streams. These visualizations randomly spotlight one of BN layers, with the x-axis denoting feature channels, and the y-axis portraying the average variances d^2(𝐗̂^(i)). The means are approximating zeros that are omitted here. The key takeaways include: (a) I.i.d. target data has its transformed feature distributions closer to the source distributions, and mild performance drop is observed (i.e., error rate = 20%). (b) Conversely, non-i.i.d. streams manifest feature distributions that deviate from the source, correlating with the observed significant performance dip (i.e., error rate = 79%). Therefore, the performance of TTBN method is greatly hampered under the non-i.i.d. (dynamic online) setting due to the drifting of target feature distributions. On the other hand, as the distribution (label) for each non-i.i.d. batch differs, it causes the gradient conflict <cit.> among batches when the model is updating towards test data. The performance is further impeded due to such conflicting optimizations. To this end, we aim to narrow the disparity in distributions between non-i.i.d. test features and source features by steering the feature distributions back to the source, ensuring they are aptly managed by the source model. Moreover, as the source distribution is set as the “reference” for all test data streams, it sidesteps conflicting optimizations on distinctively distributed batches, thus preventing degradation of the well-trained source model. §.§ Distribution Alignment for TTA We propose the Distribution Alignment (DA) loss, a simple yet effective method to provide consistent optimization objectives. It avoids the instability caused by conflicting objectives, and effectively counteracts domain shifts by steering the test-time feature distributions towards the source domain. 
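Before detailing the DA loss, the following toy sketch illustrates the effect analysed above. It is not part of the paper's implementation: the synthetic Gaussian class-conditional features, the feature dimensions and the skew probabilities are illustrative assumptions chosen only to show how pooling batch statistics over a class-imbalanced batch shifts the per-sample variance d^2 of the normalized features relative to an i.i.d. batch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, C, HW, B = 10, 8, 49, 200               # classes, channels, spatial size, batch size
class_means = rng.normal(0.0, 2.0, size=(n_classes, C, 1))   # toy class-conditional feature means

def sample_batch(class_probs):
    """Draw a toy batch of feature maps X with shape (B, C, H*W)."""
    labels = rng.choice(n_classes, size=B, p=class_probs)
    return class_means[labels] + rng.normal(0.0, 1.0, size=(B, C, HW))

def ttbn_transform(x, eps=1e-5):
    """Normalize with test-time batch statistics (mu, sigma^2) pooled over batch and space."""
    mu = x.mean(axis=(0, 2), keepdims=True)
    var = x.var(axis=(0, 2), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def mean_sample_variance(x_hat):
    """Average per-sample, per-channel variance d^2 of the transformed features."""
    return x_hat.var(axis=2).mean()

uniform = np.full(n_classes, 1.0 / n_classes)              # i.i.d. batch
skewed = np.array([0.91] + [0.01] * (n_classes - 1))       # temporally correlated, class-imbalanced batch

for name, probs in [("i.i.d. batch", uniform), ("non-i.i.d. batch", skewed)]:
    d2 = mean_sample_variance(ttbn_transform(sample_batch(probs)))
    print(f"{name}: average d^2 of normalized features = {d2:.3f}")
```

In this toy setting the skewed batch should yield a noticeably larger average d^2 than the balanced one, mirroring the drift of the transformed feature distributions away from the source reference described above.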
The DA loss is applied to the features from intermediate layers (DA is applied to multiple layers, we omit the layer notation here for simplicity) of the source model. Upon processing a batch of input data, we calculate distribution statistics of the features in the model. For one of the intermediate features, 𝐗, we have: m_j = 1/H · W∑_p=1^H · W𝐗_j,p , d^2_j = 1/H · W∑_p=1^H · W (𝐗_j,p - m_j)^2 , where j ∈{1, …, C}, C is the channel, and H · W represents the spatial size of each feature map. It is important to note that prior to TTA, we pre-compute the average feature distribution statistics m^S_j, d^2^S_j of source data offline. This computation is performed only once, after which the source statistics are retained for ongoing use. This kind of operation before deployment time is also adopted in other TTA methods, such as EATA <cit.> and RMT <cit.>. During TTA, when a batch of test data is fed into the model, the test statistics m^T_j, d^2^T_j are also computed using <ref>. Subsequently, the DA loss is computed based as: ℒ_DA = 1/C∑_j=1^C( | m^T_j - m^S_j| + | d^2^T_j - d^2^S_j| ) , where | ·| denotes the absolute value operation. Therefore, the DA loss quantifies the disparity between the feature distributions of the source and test data, with the objective of pulling test-time feature distributions back to the source domain through optimization as shown in the right side of <ref>. Feature distribution (means and variances) can be linearly manipulated via affine transformations. Hence, we utilize the affine layers in BN layers to be optimized by the DA loss, as depicted on the left side of <ref>. Alternative strategies for the selection of affine layers, such as integrating external affine layers instead of utilizing those within BN layers, are discussed in Appendix G. At inference, BN layers utilize the population mean μ_popu and variances σ_popu^2 computed at pre-training for normalization. Before the TTA process starts, we update the such statistics in <cit.> towards the first batch of test data as: μ_norm = α·μ_popu + (1 - α) ·μ_B_1 , σ_norm^2 = α·σ_popu^2 + (1 - α) ·σ_B_1^2 , where μ_B_1, σ_B_1^2 are the statistics of the first batch from the test data stream, and α is a hyper-parameter. This modification offers a more favorable starting point for optimization, ensuring that the initial distribution discrepancy between the test and source features is not excessively large. Additionally, we explore the synergistic effect of combining the DA loss with the entropy minimization (EM) loss: ℒ_EM = ∑_m [ 1(max_nŷ_n > θ) ∑_n=1^N -p(ŷ_n) log p(ŷ_n) ], ℒ_final = ℒ_DA + ℒ_EM, where ŷ_n denotes the predicted probability for class n, with n ranging from 1 to N, 1(·) denotes an indicator function, θ is the confidence threshold, and m is the batch size. <ref> will reveal that, additional EM loss further improves, although DA loss alone can achieve SoTA performance. §.§ Domain Shift Detection in Continual TTA Setting In certain application scenarios, it is essential for deployed models to automatically process data streams with continual domains without manual intervention <cit.>, as shown in Fig. <ref>b. The DA loss is designed to pull the feature distributions in current test domain to the source distributions. When a new target domain is encountered, the model, whose affine layers are tailored to the last tested domain, may apply unsuitable affine transformations on the features of the new domain. 
This is particularly problematic in the event of significant domain shifts, as the discrepancy between the test-time feature distributions and the source distributions can increase substantially, thereby raising the risk of convergence to a suboptimal local minimum. To improve the performance of our method in the continual TTA setting <cit.>, we introduce a domain shift detection mechanism. This mechanism tracks the DA loss ℒ_DA, which reflects the discrepancy between test-time feature distributions and source distributions. A domain shift is detected if the average discrepancy within a short-term window is larger than the average discrepancy within a long-term window by a predefined margin, ∑_i=0^pℒ_DA^B_t-i/p > τ·∑_i=0^qℒ_DA^B_t-i/q, where ℒ_DA^B_t denotes the DA loss of the current batch, p, q denote the lengths of short-term and long-term windows, and the τ is the threshold factor. Upon detecting a new domain, the model's trainable affine layers are reset to their initial states and the normalization layers in BN layers are reset according to <ref> and <ref>. For more details on the domain shift detection mechanism, see Algorithm 1, Appendix A. It is noteworthy that in the continual TTA setting, we employ both distribution alignment and domain shift detection, whereas for the fully TTA setting, we exclusively utilize distribution alignment. § EXPERIMENTS We conduct comparative experiments on TTA benchmarks with state-of-the-art TTA methods: TTBN <cit.>, TENT <cit.>, MEMO <cit.>, LAME <cit.>, CoTTA <cit.>, EATA <cit.>, NOTE <cit.>, RoTTA <cit.>, RMT <cit.>, DELTA <cit.>, and SAR <cit.>. We also compare our method with a related SFDA work, BUFR <cit.>, in Appendix F.1. For a fair comparison, we adopt the codebase from RMT <cit.> which integrates many SoTA TTA methods. In Appendix F.1, we also integrate our method into the official NOTE <cit.> codebase for a direct comparison with NOTE <cit.>. §.§ Datasets CIFAR10/100-C, ImageNet-C. We conduct experiments on the CIFAR10-C, CIFAR100-C, and ImageNet-C <cit.> that are common TTA benchmarks. They have 15 types of corruptions (target domains) applied on test and validation. Each type of corruption has 5 severity levels, of which we use the highest. On each target domain, CIFAR10-C/CIFAR100-C/ImageNet-C has 10000/10000/50000 images and 10/100/1000 classes. ImageNet-R, ImageNet-D, ImageNet-A. We additionally conduct experiments on other types of domain shifts. ImageNet-R <cit.> contains 200 ImageNet classes with different textures and styles. ImageNet-D <cit.>, re-proposed from DomainNet <cit.>, maps classes to those in ImageNet, removing unmappable classes. Furthermore, ImageNet-A <cit.> comprises adversarially filtered images from 200 ImageNet classes. We use these datasets as target domains, with ImageNet serving as the source domain. Appendix B shows more details. §.§ Implementation and Setup. We evaluate our method in both fully TTA <cit.> and continual TTA <cit.> settings within non-i.i.d. scenarios. In the fully TTA setting, an off-the-shelf source model is online adapted to a data stream from a single target domain. In the continual TTA setting, the model is online adapted to a data stream that comprises a succession of domains, each fed sequentially one after the other. Following most existing TTA work, we use a pre-trained WideResNet-28 <cit.>, ResNeXt-29 <cit.>, and ResNet-50 <cit.> from the RobustBench benchmark <cit.> as source models for the CIFAR10-C, CIFAR100-C, and ImageNet-C/D/R, respectively, in experiments for all methods. 
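To make the methodology above concrete, the sketch below gives a minimal PyTorch-style implementation of the distribution alignment loss and the window-based domain shift test. It is not the authors' code: the window lengths and threshold are placeholders rather than the paper's values, `test_feats` is assumed to be collected via forward hooks on the selected intermediate layers, `source_stats` holds the per-layer source statistics pre-computed offline, and in practice only the affine parameters in the BN layers would receive gradients from this loss (optionally together with the thresholded entropy term).

```python
import torch
from collections import deque

def channel_stats(feat):
    """Per-channel statistics (m_j, d_j^2) of a feature tensor of shape (B, C, H, W)."""
    m = feat.mean(dim=(2, 3)).mean(dim=0)                     # spatial mean, averaged over the batch
    d2 = feat.var(dim=(2, 3), unbiased=False).mean(dim=0)     # spatial variance, averaged over the batch
    return m, d2

def da_loss(test_feats, source_stats):
    """L1 discrepancy between test-time and pre-computed source feature statistics."""
    loss = 0.0
    for feat, (m_src, d2_src) in zip(test_feats, source_stats):
        m, d2 = channel_stats(feat)
        loss = loss + (m - m_src).abs().mean() + (d2 - d2_src).abs().mean()
    return loss / len(test_feats)

class DomainShiftDetector:
    """Flag a shift when the short-term mean of the DA loss exceeds tau times its long-term mean."""
    def __init__(self, p=4, q=64, tau=1.5):
        self.short, self.long, self.tau = deque(maxlen=p), deque(maxlen=q), tau

    def update(self, da_loss_value):
        self.short.append(da_loss_value)
        self.long.append(da_loss_value)
        if len(self.long) < self.long.maxlen:                 # wait until the long window is filled
            return False
        short_mean = sum(self.short) / len(self.short)
        long_mean = sum(self.long) / len(self.long)
        return short_mean > self.tau * long_mean
```

When `update` returns True, the trainable affine layers and the normalization statistics would be reset as described above before adaptation continues on the new domain.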
In comparative experiments, non-i.i.d. data streams of CIFAR10/100-C and ImageNet-R/D/A are generated based on Dirichlet distribution with Dirichlet parameter δ set to 0.1, which controls the degree of temporal correlation of class labels in data streams. We provide illustration of non-i.i.d. data streams with different δ in Appendix C.1. For ImageNet-C, due to the low ratio of samples per class to the number of classes, we construct non-i.i.d. data streams by sorting the images according to their labels. More implementation details on ImageNet-D/R/A and hyper-parameter details can be found in Appendix C.2 and Appendix D. §.§ Main Results We conduct the comparative experiments in both the fully TTA (<ref>, <ref>, <ref>) and the continual TTA (<ref>) settings as described in <ref>. Fully TTA in Non-I.I.D. Data Streams. <ref> presents the results of our method in comparison to other TTA methods on the commonly used corruption benchmark. Observing the “Mean” column reveals that over half of the prior methods yield results inferior to the source model without adaptation, suggesting an adaptation failure. Our method, denoted DA-TTA, outperforms competing methods across all the datasets, showcasing accuracy improvements of 3.3%, 7.7%, 5.9% over the next best-performing methods, respectively. Furthermore, DA-TTA demonstrates robust performance across all target domains. In contrast, LAME excels in certain domains but significantly underperforms or even regresses relative to the “Source” in others. Besides, it is observed that many previous methods exhibit notably poorer results in non-i.i.d. streams compared to i.i.d. streams (shown in Appendix F.2). Our method, however, obtains close results in the i.i.d. and non-i.i.d. data streams. Apart from the corruption domain shifts, <ref> presents the results of evaluations on realistic domain shifts using the ImageNet-D and ImageNet-R datasets. Previous methods, except for RoTTA, fail to adapt in these non-i.i.d. streams (performing worse than the “Source”), while our method achieves an improvement of 3.9% on ImageNet-D and 5.3% on ImageNet-R compared to the “Source”. Additionally, <ref> shows the results on the ImageNet-A dataset. Our method demonstrates effectiveness in adapting to the adversarial attack domain shift. Continual TTA in Non-I.I.D. Data Streams. In <ref>, we present the comparison of our method with other TTA methods when applied to a test data stream comprised of continual domains. This demanding stream delivers a sequence of 750000 images from 15 target domains of ImageNet-C under the non-i.i.d. sampling condition. It is observed that the majority of competing methods present incur error rates in excess of 90%, significantly underperforming when compared to the “Source”. While LAME performs optimally on a few target domains, it encounters failures in several domains, registering error rates above 90%. In comparison, DA-TTA showcases robust adaptation capabilities on all target domains and achieves the best overall average performance. Robustness on Different Conditions of Data Streams. We examine the effect of the non-i.i.d. degree and the batch size. As illustrated in <ref>, a smaller Dirichlet parameter δ indicates a higher degree of temporal correlation within the data stream. Most existing TTA methods experience a marked performance decline as δ decreases. LAME excels under intense temporal correlation, yet it underperforms compared to the baseline in less severe cases. 
In contrast, our method maintains robust performance across various degrees of non-i.i.d. severity. <ref> illustrates the impact of varying batch sizes. It's observed that most existing TTA methods experience a decline in performance as the batch size is reduced. This phenomenon could be attributed to larger batches more accurately representing the target domain's distribution, thereby reducing the conflict with the optimization objective. In contrast, our method shows consistent performance, proving to be robust across different batch sizes. §.§ Ablation Studies Effects of Model Components. As detailed in <ref>, we conduct an ablation study in non-i.i.d. data streams across three datasets. Firstly, we explored the effects of applying DA optimization within different ranges of the source model. The terms `w/o low-level DA' and `w/o high-level DA' refer to the application of DA optimization to the latter and former halves of the affine layers in the model, respectively. The results indicate a performance decline when the DA optimization range is reduced. However, the decrease in performance is relatively modest. This can be attributed to the correlated nature of feature distributions among layers within the frozen model, ensuring distributions in layers not directly supervised remain controlled. Moreover, the application of EM on top of the TTBN baseline, which is the TENT method, yields diminished results in non-i.i.d. data streams, as shown in <ref>. Nevertheless, introducing an EM loss atop the DA loss resulted in enhanced performance, highlighting the synergistic effect of the EM loss under protection from DA optimization. DA Alleviates Domain Shifts. In this analysis, we input source data and target domain data into the source model, labeling the features prior to the classifier layer as 𝐑_S for the source and 𝐑_T for the target, respectively. We then feed target data under our TTA method, labeling these features as 𝐑'_T. Visualization of 𝐑_S and 𝐑_T is provided in the left plot in <ref> via t-SNE, while the right plot depicts the 𝐑_S and 𝐑'_T. Upon examining the transition from the left to the right, we observe a clear trend: the features of each class in the target domain not only become discriminative but also show an alignment with their corresponding classes in the source domain. This convergence of class-specific clusters confirms that our method is successfully reducing domain shift by steering the target feature distributions back to those from the source. Alignment Between DA Optimization and Task Objective. DA optimization minimizes the discrepancy in distribution between test-time and source features. The task objective is to classify the incoming data stream. <ref> provides a visualization of both the distribution discrepancy and the cumulative classification errors across data batches. Notably, there is a trend of decreasing accumulated error, which corresponds with the shrunken distribution discrepancy, in contrast to the larger discrepancy observed in the TTBN baseline. § CONCLUSION In this paper, we introduce a simple yet effective Distribution Alignment (DA) method for fully realizing test-time adaptation within dynamic online streams. Our proposed distribution alignment loss aligns test-time data features with the source distributions, ensuring compatibility with the source model and addressing the challenges posed by label shifts across online data batches. 
The addition of a domain shift detection mechanism further strengthens our method's performance in environments with continual domain shifts. Extensive experiments confirm the superiority of our method in non-i.i.d. streams, while it also maintains competitive performance under the i.i.d. assumption. § ACKNOWLEDGEMENTS This work is supported by an NSERC Discovery grant, the Gina Cody Research and Innovation Fellowship, and in part by the National Natural Science Foundation of China under Grant 62171269.
http://arxiv.org/abs/2407.12375v1
20240717075403
FETCH: A Memory-Efficient Replay Approach for Continual Learning in Image Classification
[ "Markus Weißflog", "Peter Protzel", "Peer Neubert" ]
cs.CV
[ "cs.CV", "cs.LG" ]
FETCH: A Memory-Efficient Replay Approach for Continual Learning in Image Classification. Markus Weißflog^1 (ORCID 0009-0003-1163-8755), Peter Protzel^1 (ORCID 0000-0002-3870-7429), Peer Neubert^2 (ORCID 0000-0002-7312-9935). ^1 Faculty of Electrical Engineering and Information Technology, Chemnitz University of Technology, Germany (markus.weissflog@etit.tu-chemnitz.de); ^2 Institute for Computational Visualistics, University of Koblenz, Germany. July 22, 2024. This preprint has not undergone peer review or any post-submission improvements or corrections. The Version of Record of this contribution is published in Intelligent Data Engineering and Automated Learning – IDEAL 2023, and is available online at <https://doi.org/10.1007/978-3-031-48232-8_38>. § ABSTRACT Class-incremental continual learning is an important area of research, as static deep learning methods fail to adapt to changing tasks and data distributions. In previous works, promising results were achieved using replay and compressed replay techniques. In the field of regular replay, GDumb <cit.> achieved outstanding results but requires a large amount of memory. This problem can be addressed by compressed replay techniques. The goal of this work is to evaluate compressed replay in the pipeline of GDumb. We propose FETCH, a two-stage compression approach. First, the samples from the continual datastream are encoded by the early layers of a pre-trained neural network. Second, the samples are compressed before being stored in the episodic memory. Following GDumb, the remaining classification head is trained from scratch using only the decompressed samples from the replay memory. We evaluate FETCH in different scenarios and show that this approach can increase accuracy on CIFAR10 and CIFAR100. In our experiments, simple compression methods (e.g., quantization of tensors) outperform deep autoencoders. In the future, FETCH could serve as a baseline for benchmarking compressed replay learning in constrained memory scenarios. § INTRODUCTION Humans are capable of learning to solve new tasks throughout their lives. Learning to incorporate new information is crucial in areas such as robotics and machine learning, yet this endeavor remains challenging despite the use of advanced techniques <cit.>. If no special measures are taken, a learning system quickly forgets old knowledge as soon as it is presented with new information <cit.>. This challenge, known as Catastrophic Forgetting (CF), is so difficult to overcome that Continual Learning (CL) has emerged as a discipline of machine learning. In recent years, many publications have approached the subject <cit.>. Promising results were achieved by replay techniques, which keep an episodic memory of previously encountered samples to mitigate catastrophic forgetting <cit.>. Prabhu et al. <cit.> in particular presented GDumb, an approach that has attracted much attention in the community due to its simplicity and yet good performance.[At the time of writing, GDumb has received over 300 citations on Google Scholar.] Replay techniques can have high memory consumption, which has led to the development of compressed replay methods <cit.>. This work investigates whether the memory consumption of GDumb can be reduced using ideas from compressed replay and how the performance changes under constrained memory.
Based upon this, we present FETCH (Fixed Encoder and Trainable Classification Head). A simplified schematic illustration can be found in <ref>. FETCH improves GDumb in several aspects: * A pre-trained fixed encoder extracts general features from the images and enables knowledge transfer from a pre-training dataset. Additionally, the number of parameters in the trainable classification head is reduced. * A compressor reduces the size of the samples in the episodic memory, thus reducing the overall memory footprint. This can either improve performance on a limited memory or reduce the memory footprint of the overall pipeline. * In our experiments we assess various variations and components of FETCH and show improved performance over both GDumb and selected compressed replay techniques. The paper is structured as follows: <Ref> introduces the problem of class-incremental continual learning. <Ref> presents general related work while <ref> focusses on GDumb in particular. <Ref> details the design of FETCH. <Ref> summarizes implementation details. <Ref> presents our experiments. <Ref> concludes the paper. Code will be made available. § PROBLEM FORMULATION Our proposed approach operates in the challenging online, class-incremental setting. A learning agent is presented with a stream of tasks 𝒯 = {𝒯^1, 𝒯^2, …, 𝒯^t,…, 𝒯^T} one task at a time. T is the total number of tasks and t is the current task's identifier. Each task consists of multiple samples, i.e., images, 𝐱^t ∈𝒳^t and their corresponding labels y^t∈𝒴^t. The agent's job is to find a model f that can predict the label for each sample f_t: 𝐱^t_test↦ y^t_test, where the samples 𝐱^t_test belong to a never before seen test dataset 𝒳^t_test. The labels of these samples consist of the classes of the current task and all previous tasks y^t_test∈𝒴^t_test = ⋃ _i=0^t 𝒴^i. To achieve this, the agent is presented with a set of training examples belonging to the current task 𝒯^t = (𝒳^t, 𝒴^t). The agent has the option of saving a pair of image and label, or choosing to never see it again. § RELATED WORK §.§ Literature Review Several publications offer an overview of CL. <cit.>. De Lange et al. <cit.> in particular propose a widely adopted taxonomy, categorizing approaches and settings into parameter isolation, regularization, and replay methods. Parameter isolation methods work by identifying important parts of the model for the different tasks. These parts can be, for example, gradually extended <cit.> or even exchanged <cit.>, as new tasks arise. Regularization methods introduce new terms in the loss function to ensure that performance for previous tasks is not degraded <cit.>. Replay methods work by storing exemplars in an episodic memory and interweaving them into the stream of new data <cit.>. Sangermano et al. <cit.> use dataset distillation in the replay memory. Chen et al. <cit.> use a database of unlabeled data together with an episodic memory to improve learning performance. Gu et al. <cit.> propose to better utilize samples from the continual datastream to mix with the samples from the replay memory. A special case of replay is compressed replay. As replay requires to store a subset of exemplars in a memory, it is a sensible idea to compress these exemplars in order to reduce the overall memory footprint. Hayes et al. <cit.> and Wang et al. <cit.> use different compression strategies to reduce the size of the images in memory. Hayes et al. 
<cit.> extend these approaches by freezing the early layers of a neural network after initial pre-training and using the resulting feature maps as exemplars for compressed replay. The decompressed samples are used to train the remaining parts of the network. Wang et al. <cit.> extend this approach even further with an additional autoencoder. All methods operate in the classical replay scenario where the continual datastream is mixed with the stored exemplars. §.§ Greedy Sampler and Dumb Learner GDumb (Greedy Sampler and Dumb Learner) <cit.> was proposed as a simple baseline but still outperformed many previous methods. It uses an episodic memory with N free slots. During training, the memory slots are filled with samples from the datastream with a balancer ensuring equal class representation. When memory is full, new classes are added by removing exemplars from the largest class. After each task, GDumb retrains a backbone network from scratch using only the exemplars from the memory. Following <cit.>, GDumb can be classified as online, class-incremental CL. GDumb's simple design comes with some drawbacks. Saving raw data isn't always feasible due to licensing and privacy concerns. Moreover, the images require significant storage space that might not be used efficiently. Therefore, we propose to combine the methodology of GDumb with the principles of compressed replay in order to exploit the advantages of both approaches. § APPROACH Following GDumb <cit.>, FETCH uses an episodic memory and retrains the classification head after each task. The blue arrows in <ref> show an overview of the proposed pipeline. Whenever the input distribution changes, the data 𝐱 is sampled from the continual datastream using GDumb's balanced greedy strategy. The data passes through the fixed encoder (𝐳) and the compressor (𝐡) before being stored in the episodic memory. During the inference phase, the classification head is trained from scratch using only the decompressed data (𝐳̂) and corresponding labels y from the memory. Compression and decompression are not required for inference, so the data flows directly from the encoder to the classification head. The fixed encoder and some compressor-decompressor pairs must be pre-trained, so an additional pre-training dataset is used. By comparing the red and blue arrows in <ref>, it becomes apparent that FETCH, unlike GDumb, leverages an additional fixed encoder, compressor, decompressor, and pre-training dataset. Both algorithms share the greedy sampler, memory, and retraining strategy for the classification head. Like GDumb, FETCH can be classified as online class-incremental CL. The following subsections describe the components in more detail. §.§ Fixed Encoder & Trainable Classification Head First, an encoding model, called fixed encoder in this work, converts the image to a latent representation 𝐳. The classification model uses this representation 𝐳 to predict the class ŷ of the input data. If the encoding was successful, all the relevant information about the class is still present. As encoders, we utilize the early layers of a CNN, whose weights remain frozen, following prior works <cit.>. To adhere to the paradigm of CL, we use different datasets for the pre-training of the encoder (called pre-training dataset) and the training and evaluation of FETCH. This approach allows for transfer effects from the pre-training dataset and is computationally more efficient than GDumb, as data only passes through the encoder once instead of each epoch. 
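For concreteness, the snippet below sketches a class-balanced episodic memory in the spirit of GDumb's greedy sampler described above. It is a simplified re-implementation rather than the original code, and the eviction rule (drop a random exemplar of the currently largest class when a sample of an underrepresented or new class arrives) is one reasonable reading of the balancing strategy; in FETCH the stored object x would be the compressed, encoded representation 𝐡 rather than the raw image.

```python
import random
from collections import defaultdict

class GreedyBalancedMemory:
    """Class-balanced episodic memory with N slots, in the spirit of GDumb's greedy sampler."""
    def __init__(self, n_slots):
        self.n_slots = n_slots
        self.slots = defaultdict(list)              # class label -> stored exemplars

    def __len__(self):
        return sum(len(v) for v in self.slots.values())

    def add(self, x, y):
        if len(self) < self.n_slots:                # free slot available: store greedily
            self.slots[y].append(x)
            return
        largest = max(self.slots, key=lambda c: len(self.slots[c]))
        # Keep classes balanced: only replace if class y is underrepresented or new.
        if y != largest and len(self.slots[y]) < len(self.slots[largest]):
            victim = random.randrange(len(self.slots[largest]))
            self.slots[largest].pop(victim)
            self.slots[y].append(x)
        # Otherwise the incoming sample is discarded and never seen again.

    def dataset(self):
        """Exemplars used to retrain the classification head from scratch after each task."""
        return [(x, y) for y, xs in self.slots.items() for x in xs]
```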
In addition, fewer parameters need to be updated, as the encoder stays fixed. After the initial pre-training, the encoder's weights remain frozen. The fixed encoder consists of the early layers of a CNN, so the trainable classification head can be regarded as the remaining layers. The classification head is trained using only the encoded data 𝐳 from memory. For this work, different variations of the ResNet architecture <cit.> are used as fixed encoders and classification heads for multiple reasons: First, ResNets achieve good performance in many classification tasks <cit.>. Second, implementations and pre-trained weights are readily available. Lastly, ResNets are used in many other publications, making comparisons easy and fair <cit.>. If not stated otherwise, we used the layers up to <cit.> as the fixed encoder for our experiments. The encoder was pre-trained on ImageNet1k <cit.>. The weights were provided by PyTorch.[Online: <https://pytorch.org/vision/stable/models.html>] The remaining layers form the classification head. §.§ Compressor & Decompressor The encoded data 𝐳 in memory may contain redundancies, so a second compression stage, called the compressor, is used to reduce their size on the actual hardware. It operates on the matrix/tensor representation of the images or featuremaps. A decompressor restores the original encoded representation as closely as possible. We have selected the approaches listed below for our experimental setup; each method has its own hyperparameter k that controls the amount of compression. * Quantization. Quantization describes the reduction of the tensor entries to a small number of k_quant discrete states, which can be represented with fewer bits than the actual values. For decompression, a lookup table is used. To obtain the discrete states, the pre-training dataset is analyzed: the range between the highest and lowest values in the whole pre-training dataset is split into k_quant equally sized intervals. For all experiments, TinyImagenet <cit.> was used as the pre-training dataset. * Thinning. The basic idea of this method is to keep only the most important, i.e., the largest, entries. The resulting tensor is sparse and can thus be stored more efficiently. Instead of storing all entries, only the non-zero entries are saved together with their corresponding index in the tensor. Decompression is done by setting the stored indices of the output tensor to their corresponding values; all other entries are assumed to be equal to zero. The parameter k_thin∈ [0, 1] describes the proportion of entries that are set to zero. * Autoencoding. Convolutional autoencoders are a deep learning-based approach for dimensionality reduction of images <cit.>. The compressor and the decompressor are typically already part of their architecture. In this work, the compressor consists of two blocks of Conv2d layers with kernel size 3 and padding 1, followed by ReLU activation and max-pooling with kernel size 2 and a stride of 2. The decompressor consists of two blocks of transposed two-dimensional convolution with kernel size 2, stride 2, and ReLU activation. For this architecture, the parameter k_ae describes the number of channels in the bottleneck. The autoencoder was pre-trained on the TinyImagenet <cit.> dataset. §.§ Calculation of the Storage Consumption The total amount of storage s_Σ that is consumed by the whole pipeline depends on several variables. First, s_Σ depends on the storage consumption of the model s_model, which is split into the fixed encoder and the classification head.
We used ResNets for our experiments. As fetch operates independently from the underlying model, s_model was set to zero in all evaluations. The resulting findings do not change since s_model is a constant value that gets added to all results. This is also in line with the literature <cit.>. Second, s_Σ depends on the used datatypes. In this work, we used single-precision floating point numbers, meaning s_float = 4 bytes. We used integers of size s_addr = 2 bytes as indices for matrices and arrays, as no matrix surpassed 2^16 elements in our experiments. For raw images, we assumed RGB values with 8 bits per channel, therefore we used s_uint = 1 byte. Third, s_Σ depends on the other components, one of which is the episodic memory with N slots. Additionally, the compressor's input influences s_Σ. If the full model is used as a classification head (e. g. the fixed encoder gets omitted), the input is equal to the raw images 𝐱 with datatype . When a fixed encoder is used, the compressor receives the encoded images 𝐳 of type as input. Let n refer to the number of elements in these tensors in the corresponding case and let s_uint/float be the shorthand notation for the corresponding memory requirement. Lastly, the total memory consumption depends on the compressor. The level of compression and, thus, the memory consumption of one exemplar can be adjusted using the compression parameter k. The resulting equation for the total memory requirement can be found in <ref>. For the autoencoder, n_𝐡 describes the number of elements in the spatial dimensions in the bottleneck and s_ae is the memory consumption of the autoencoder network, which varies between 4.6 KiB and 21.62 KiB, depending on the parameter k_ae. § IMPLEMENTATION DETAILS The implementation was based on GDumb's publicly available PyTorch-code[Online: <https://github.com/drimpossible/GDumb>] <cit.>. No hyperparameters were changed. The training was done using SGDR-optimization <cit.> with a batch size of 16 and learning rates in [0.005… 0.05]. Data regularization was done using normalization and cutmix <cit.> with p=0.5 and α=1. Because only the data from the episodic memory is used for training, the backbone can be trained for multiple epochs without breaking the class-incremental paradigm. The performance is measured as the average accuracy on the test set after convergence and is always measured after the last task. We used two datasets and models for evaluation. For the CIFAR10 dataset <cit.>, ResNet-18 <cit.> was used as the fixed encoder and the classification head. For CIFAR100, ResNet-34 was used as the fixed encoder and the classification head. Storage consumption was measured in mebibyte[1 MiB =2^20 bytes ≈10^6 bytes]. § EXPERIMENTS §.§ Tradeoff between Storage and Performance This experiment aimed at finding a tradeoff between low memory consumption and high accuracy. The results can be seen in <ref>. The different methods were compared using a variable number of memory slots N∈{10; 100; 1000; 10000}. Using a fixed encoder improves performance, as shown by the fact that most of the green curve is above the purple curves. This suggests that the transfer effects of the pre-trained fixed encoder are beneficial to the whole pipeline. GDumb demonstrates superior performance for high storage capacities on CIFAR10, which indicates that the impact of training data quantity on performance diminishes as the learner has more memory available. Nevertheless, model size remains a significant factor in determining performance. 
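As an illustration of the quantization and thinning compressors introduced in the Compressor & Decompressor subsection, and of the datatype sizes used in the storage accounting above (1-byte unsigned integers, 2-byte indices, 4-byte floats), the following sketch shows one possible NumPy implementation. It is not the authors' code: the exact binning of the k_quant intervals, the use of magnitudes to select entries for thinning, and the uint8/uint16 index types (valid only for k_quant ≤ 256 and for tensors with fewer than 2^16 elements, as assumed in the paper) are simplifying choices.

```python
import numpy as np

def fit_quantizer(pretrain_tensors, k_quant):
    """Derive k_quant representative levels from the value range of the pre-training data."""
    lo = min(float(t.min()) for t in pretrain_tensors)
    hi = max(float(t.max()) for t in pretrain_tensors)
    return np.linspace(lo, hi, k_quant)             # lookup table used for decompression

def quantize(z, levels):
    """Store each entry as the index of its nearest level (1 byte per entry for k_quant <= 256)."""
    idx = np.abs(z[..., None] - levels).argmin(axis=-1)
    return idx.astype(np.uint8)

def dequantize(idx, levels):
    return levels[idx].astype(np.float32)

def thin(z, k_thin):
    """Keep the (1 - k_thin) fraction of largest-magnitude entries, stored as (index, value) pairs."""
    flat = z.ravel()
    n_keep = max(1, int(round((1.0 - k_thin) * flat.size)))
    keep_idx = np.argsort(np.abs(flat))[-n_keep:]
    # 2-byte indices suffice because no tensor exceeds 2^16 elements in these experiments.
    return keep_idx.astype(np.uint16), flat[keep_idx].astype(np.float32), z.shape

def unthin(keep_idx, values, shape):
    """Decompress by scattering the stored values; all other entries are assumed to be zero."""
    out = np.zeros(int(np.prod(shape)), dtype=np.float32)
    out[keep_idx] = values
    return out.reshape(shape)
```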
On the performance side, restricting the number of adjustable parameters through pre-training can lead to poorer performance compared to full models. Additionally, the quality of the data can also affect performance, as encoded samples have fewer entries and therefore provide less information to the model. For the more complex CIFAR100 and ResNet-34, this is not the case: here, a fixed encoder is always beneficial compared to GDumb. The impact of compression depends on the setup. Positive effects can be seen in the blue curves, which are ordered by their compression parameter k_quant. Quantization increases performance by trading data quality for reduced storage size and thus an increased number of storable exemplars N. The red curves show a different pattern. High compression parameters, such as k_thin=0.95, show decreased storage consumption, but the accuracy appears to approach an upper limit, as shown by the shape of the solid red curve in the left plot. §.§ Ablation: Effect of the Fixed Encoder This experiment aims to investigate the influence of the layer at which the ResNet is split into encoder and classification head. For this reason, the compressor was omitted. Four different configurations were investigated: using the whole model as a classifier (like GDumb), and dividing the network after three different intermediate layers <cit.>. For each configuration, the episodic memory was filled with as many samples as possible without exceeding a maximum storage of 10 MiB. <Ref> shows the results. Splitting ResNets at later layers appears beneficial, as shown by the fact that the performance is consistently higher. At later layers, the samples are more compressed and their representation also benefits from the encoder's pre-training, which is known to be beneficial <cit.>. Not splitting the ResNet results in the highest number of memory slots, despite raw images 𝐱 having more elements than encoded samples 𝐳. Raw images are smaller due to their representation using unsigned integers, which take up less space than the floating point numbers used for encoded samples. The encoded samples nonetheless benefit from the encoder's pre-training and therefore improve the performance. §.§ Ablation: Effect of Compression The previous section examined the effect of an encoder in isolation, while this section examines the effect of a compressor in the same way. For the experiment, the number of memory slots was fixed at N=10000, while the compression parameter k was varied. The result can be seen in <ref>. The autoencoder could only be used for the experiments without the fixed encoder because the spatial dimensions of the encoded featuremaps (2×2) are too small to perform convolutions and pooling. It also becomes clear that the performance of the baseline cannot be reached using this architecture. Increased compression negatively impacts performance across all compressors, which is expected. Remarkably, the upper bound of the curves reflects the optimal accuracy achievable with this configuration. The quantization strategy approaches baseline performance, given a sufficiently high compression parameter. The best accuracy is reached between k=8 and k=16, which corresponds to a compression of over 85%. The thinning compression performs significantly better when a fixed encoder is used. §.§ Performance on a Fixed Memory Budget This experiment aims to show how well different configurations perform under a memory constraint. To replicate these conditions, the total memory consumption was fixed at 4 MiB for CIFAR10 and at 6 MiB for CIFAR100.
The memory is filled with N samples up to the maximum available storage size. The results can be seen in <ref>. Notably, almost all curves show a maximum, where the balance between the number and quantity of the samples in memory is optimal. Exceptions include the quantization strategy in the first plot (where compression is always beneficial) and the thinning compressor in the second and fourth plots, where compression is always harmful. As previously discussed, the results show that encoding improves performance, as evidenced by the higher accuracy of the setups using FETCH. §.§ Comparison with other approaches We compare FETCH with the following state-of-the-art approaches: REMIND <cit.>, freezes the early layers of a ResNet and performs product quantization on the resulting featuremaps, before storing them in memory. During continual learning, the data from the continual stream is mixed with samples from the memory. ACAE-REMIND <cit.> extends REMIND with an additional autoencoder to compress the samples even further. `Smaller Is Better', the best-performing variant of an approach proposed in <cit.>, involves resizing raw images to 8×8 pixels before storing them in the episodic memory. A network is retrained from scratch, whenever the input distribution changes. To the best of our knowledge, no method besides FETCH combines the benefits of freezing early layers of a convolutional neural network with compressed replay in the pipeline of GDumb. We evaluate FETCH in three different settings: Setting A: the encoder is pre-trained on TinyImageNet. The classification head is initialized randomly. This approach was used in the other sections of this paper. Setting B: both the encoder and the classification head are pre-trained using the data from the first tasks (in the case of CIFAR10 the classes and ). This setting is used by REMIND and ACAE-REMIND. Setting C: the encoder and classification head were pre-trained on TinyImageNet. We vary the compression parameter and the layer up to which we keep the weights frozen. The memory is filled with N samples up to a maximum size of 1.536 MB. All compared methods use ResNet-18 and the CIFAR10 dataset. The results are shown in figure <ref>. Setting B shows that the simple quantization strategy performs similarly to REMIND and ACAE-REMIND, even outperforming both approaches for some configurations. Comparing settings A and B shows the positive influence of the diverse pre-training dataset. Setting A shows that pre-training enables FETCH to also outperform `Simpler Is Better'. Our experiments in setting C show a positive effect of using in-distribution datasets for pre-training the classification head. § CONCLUSION This work aimed at investigating the advantages of compressed replay in the context of GDumb. We evaluated the effect of different compression strategies as well as the effect of pre-training and freezing parts of the backbone. A combination of both techniques showed improved performance over both GDumb and selected compressed replay techniques in our experiments. These findings suggest that episodic memories with a large number of compressed exemplars and the transfer effects of pre-trained components benefit replay learning. However, FETCH has limitations, including the need to retrain the classification head whenever the data distribution changes. In future work, we wish to investigate the proposed two-step compression scheme in combination with other memory-based CL approaches and domains outside computer vision. 
Although FETCH can directly be used in applications with limited memory, such as mobile robotics, this work intends to serve as a baseline for future research in the area of memory-constrained CL.
http://arxiv.org/abs/2407.12209v1
20240716230622
A question of personalities: evolution of viscous and wind-driven protoplanetary discs in the presence of dead zones
[ "Simin Tong", "Richard Alexander", "Giovanni Rosotti" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.SR" ]
§ ABSTRACT Whether the angular momentum of protoplanetary discs is redistributed by viscosity or extracted by magnetised winds is a long-standing question. Demographic indicators, such as gas disc sizes and stellar accretion rates, have been proposed as ways of distinguishing between these two mechanisms. In this paper, we implement one-dimensional gas simulations to study the evolution of “hybrid” protoplanetary discs simultaneously driven by viscosity and magnetised winds, with dead zones present. We explore how the variations of disc properties, including initial disc sizes, dead zone sizes and angular momentum transport efficiency, affect stellar accretion rates, disc surface density profiles, disc sizes, disc lifetimes, and cumulative mass loss by different processes. Our models show that the expansion of the gas disc size can be sustained when the majority of angular momentum is removed by the magnetised wind for individual protoplanetary discs. However, when we can only observe discs via demographic screenshots, the variation of disc sizes with time is possibly diminished by the disc personalities, by which we mean the variations of initial disc properties among different discs. Our hybrid models re-assess the association of the two demographic indicators with the mechanisms responsible for angular momentum transport and suggest that additional diagnostics are required to assist the differentiation. accretion, accretion discs – planets and satellites: formation – protoplanetary discs – stars: pre-main-sequence § INTRODUCTION Protoplanetary discs are by-products of star formation due to the conservation of angular momentum. These discs of dust and gas feed material to the central star and provide the necessary components for planet formation. Therefore, understanding protoplanetary discs and how they evolve is fundamental to the study of planetary systems. Material in the disc must lose angular momentum in order to be accreted from the protoplanetary disc onto the central star. Two scenarios have been suggested to address where the angular momentum has gone: redistribution of the angular momentum by turbulence, and extraction of the angular momentum by magneto-hydrodynamic (MHD) winds. Turbulence can arise from gravitational instabilities <cit.>, hydrodynamical instabilities, such as vertical shear instabilities <cit.>, and magneto-hydrodynamical instabilities, such as the magneto-rotational instability (MRI) <cit.>, which has long been thought to be the main driver of disc turbulence. When MRI-induced “viscous” turbulence redistributes the angular momentum in the disc, the small fraction of the outer disc carrying a large quantity of the angular momentum moves outwards and gives rise to an increasing gas disc size over time <cit.>. The MRI is sensitive to the degree of ionization: it can be sustained when the disc is sufficiently ionized by thermal and non-thermal processes <cit.> to couple with magnetic fields. These conditions are usually fulfilled in the very inner and possibly outer discs, indicating that a large fraction of the disc remains MRI-quenched. These regions are called dead zones <cit.>. MHD winds also rely on charged particles and the magnetic field.
When charged particles in protoplanetary discs are coupled to the magnetic field and are lifted from the disc to launch the magnetised wind, the tail of the ionized gas exerts torques on the disc and takes away angular momentum from discs <cit.>. Several studies utilising different methods have shown that this wind is capable of driving the observed stellar accretion rate <cit.> and inducing gas disc size shrinking while the characteristic radius remains unchanged <cit.>, making it a viable alternative mechanism to viscous accretion. Advances in observational techniques in the last decade have now enabled us to characterise protoplanetary discs properties, such as disc sizes and stellar accretion rates, systematically and statistically. The Atacama Large Millimeter/submillimeter Array (ALMA) reveals substructures in the dust and gas discs <cit.>, and provides radial intensity profiles of dust <cit.> and molecular emission <cit.> from discs in the nearby star-forming regions. These observations, along with surveys dedicated to specific star-forming regions <cit.>, help us to quantify the disc sizes subject to some observational limitations, such as sensitivity, resolution and sample selection biases. The X-shooter instrument mounted on the Very Large Telescope (VLT) has measured accretion rates for hundreds of discs in nearby star-forming regions, including Lupus <cit.>, Chamaeleon I <cit.>, η-Chamaeleon <cit.>, TW Hydrae association <cit.> and Upper Scorpius <cit.>. Recent studies built on the established theories and ample observations have attempted to discern whether viscosity or the MHD wind drives disc evolution. <cit.> collated gas sizes of Class I and Class II discs characterised by different tracers. They found that the Class I gas discs are typically smaller than those of Class II, implying that gas discs spread in the T Tauri phase and supporting the viscous picture. <cit.> adopted a similar approach but with larger samples (44 discs) that were consistently traced by spatially resolved ^12CO (J=2-1). They found no correlations between ^12CO disc sizes and stellar ages (see their Figure 5f), and attributed this to the large uncertainties in the stellar ages. <cit.> presented an analytical solution of the magnetised wind, whose efficiency in removing angular momentum is parametrized by an α_DW, equivalent to the α_SS for viscosity in <cit.>. This solution facilitates the study of disc evolution by incorporating the wind component into 1-D evolution models. <cit.> reproduced the correlation between disc masses and accretion rates observed in Lupus by generating a population of inviscid wind-driven discs. This indicates that a wind-only model can possibly explain some observations. <cit.> generated populations of purely wind-driven and viscous discs, respectively, and discussed the possibility of distinguishing two scenarios by the distribution of stellar accretion rates. They suggested that a slightly larger sample size than we currently have is required to answer the question. <cit.> integrated the α_DW-prescribed pure wind model into the thermochemical model <cit.> and showed that the gas disc sizes in Lupus and Upper Sco are reproducible without the inclusion of viscosity. <cit.> expanded applications of thermochemical models to viscosity and/or wind discs. The inferred disc characteristic radii decreasing from younger to older clusters are inconsistent with either of the two scenarios, hinting other physical mechanisms (such as external photoevaporation) could also play a role. 
1-D gas+dust evolution incorporating viscosity and winds was investigated in <cit.>. Their model reproduced the observed dust sizes (in 0.89-mm observations) in Lupus, Chamaeleon I and Upper Sco, irrespective of the relative strength of the two mechanisms, implying that current dust sizes, limited by observational sensitivity <cit.>, are not reliable differentiators of the wind and viscosity scenarios, and alternative tracers should be considered <cit.>. However, all of these prior studies presume α_DW and α_SS are constant at all disc radii, and overlook the presence of dead zones, which can alter the spatial distribution of α_SS. Likewise, α_DW should also be treated as a radial variable, in addition to its time variations due to changes in the configuration of magnetic fields with time. Comparison of disc sizes and stellar accretion rates in previous research is also based on the implicit assumption that all the discs share the same α_SS and/or α_DW. However, discs formed in different environments might possess very different properties. Those around massive stars tend to have larger dead zone inner edges <cit.>. The ambient thermal and non-thermal radiation can also impact the outer edge of dead zones. External radiation <cit.>, along with the local stellar density <cit.>, potentially truncates the outer disc, leading to smaller disc sizes than their counterparts growing in a more friendly environment <cit.>. Therefore, in this work, we propose to explore how these various parameters influence the evolution of more realistic discs driven by viscosity and winds prescribed by radially varying α, and attempt to determine whether the two main observable diagnostics, gas disc sizes and stellar accretion rates, are still valid discriminators between the two mechanisms when the personalities of discs – fundamental initial disc properties, such as disc masses and disc sizes, which vary among individuals – are considered. We limit our models to isolated discs, so effects directly caused by environments, such as external photoevaporation and dynamical encounters, are beyond the scope of this study. This paper is structured as follows. In Section <ref>, we introduce the disc evolution and dead zone models. Section <ref> shows how the disc evolution is altered with the dead zone present. We explore the effect of parameters, including α_SS(R), α_DW(R), initial characteristic radii R_c,0 and dead zone outer edges R_dz,out, on the disc evolution from the perspectives of stellar accretion rates, surface densities, disc sizes, lifetimes and cumulative mass loss in Section <ref>. The discussion of our models and the selection of some parameters are presented in Section <ref>. Based on the aforementioned studies, we perform two small-scale population syntheses and show the results in Section <ref>. We then discuss observational implications and limitations of this work in Section <ref>, and summarise our results in Section <ref>. § METHOD §.§ Disc evolution model In our model, we consider geometrically thin protoplanetary discs regulated by viscosity, MHD winds and internal photoevaporation, the latter assisting the rapid clearing at the end of evolution. The evolution of the gas surface density (Σ_g) of a viscous disc can be expressed as <cit.> ∂Σ_g/∂ t= 3/R∂/∂ R[R^1/2∂/∂ R (νΣ_g R^1/2)], where ν is the viscosity, which can be quantified by ν = α_SS c_s H <cit.>. Here, α_SS is a dimensionless parameter measuring the efficiency of angular momentum redistribution by turbulence, c_s is the sound speed, and H is the disc scale height.
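As a small illustration of these quantities, the sketch below evaluates Ω, H/R and ν = α_SS c_s H in code units of au, yr and M_⊙. It is illustrative only, not the simulation code used in the paper, and it assumes the temperature profile T ∝ R^-1/2, the aspect ratio H/R = 0.05 at 1 au and the 1 M_⊙ central star adopted later in the simulation setup, for which ν scales linearly with R.

```python
import numpy as np

G_MSUN_AU = 4.0 * np.pi**2        # G * M_sun in au^3 / yr^2, so Omega comes out in 1/yr
M_STAR = 1.0                      # stellar mass in solar masses

def omega(R_au):
    """Keplerian angular frequency Omega = sqrt(G M_* / R^3), in 1/yr."""
    return np.sqrt(G_MSUN_AU * M_STAR / R_au**3)

def aspect_ratio(R_au, h0=0.05):
    """H/R for a flaring disc with T ∝ R^(-1/2): H/R ∝ R^(1/4), normalised to 0.05 at 1 au."""
    return h0 * R_au**0.25

def viscosity(R_au, alpha_ss):
    """nu = alpha_SS c_s H = alpha_SS (H/R)^2 R^2 Omega, in au^2/yr; scales linearly with R here."""
    return alpha_ss * aspect_ratio(R_au)**2 * R_au**2 * omega(R_au)

print(viscosity(np.array([1.0, 10.0, 100.0]), alpha_ss=1e-2))   # nu at 1, 10 and 100 au
```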
We adopt the prescription for MHD winds developed in <cit.>, where an α_SS-equivalent parameter α_DW, along with the magnetic lever arm parameter λ <cit.>, characterises the efficiency of angular momentum removal by winds. We incorporate the analytical model of photoevaporation from <cit.> to account for the rapid disc clearing at late evolutionary stages. Combining the above mechanisms gives the master equation of our model ∂Σ_g/∂ t = 3/R ∂/∂ R[R^1/2 ∂/∂ R (νΣ_g R^1/2)] + 3/(2R) ∂/∂ R(α_DWΣ_g c_s^2/Ω) - 3α_DWΣ_g c_s^2/[4(λ-1)R^2Ω] - Σ̇_w(R,t), where Ω=√(GM_*/R^3) is the Keplerian orbital frequency at radius R around a central star of mass M_*. The first term on the right-hand side is the viscous diffusion term. The second and third terms are the advection and mass-extraction terms due to the magnetised wind, respectively. The last term is the sink term due to internal photoevaporation, which is prescribed as Σ̇_w(R) = [Ṁ_thick/(4π R_crit^2)] (R/R_crit)^-5/2 for R ⩾ R_crit, when the disc within R_crit≃ 0.2GM_*/c_s^2 is optically thick. If the inner disc becomes optically thin, the mass-loss rate is instead modelled as Σ̇_w(R) = [Ṁ_thin/(4π R_in^2)] (R/R_in)^-5/2 [R/(2R_crit)]^1/2 for R ⩾ R_in. Here Ṁ_thick and Ṁ_thin are measures of the mass-loss rate in the diffuse and direct radiation fields defined in <cit.>, and R_in is the innermost radius where the surface density is optically thin. We change variables in Eq. <ref> and then solve the equation using an explicit first-order integrator following <cit.>. We evaluate the diffusion and advection terms in two steps and impose different boundary conditions on each. For the diffusion term, we impose zero-torque boundary conditions at both the inner and outer boundaries. For the advection term, a zero-torque condition is applied only at the outer boundary, and we replace the inner boundary condition with a constant power law. §.§ Dead/wind zone model Following <cit.>, <cit.> and <cit.>, we adopt a “three-zone” model for both the viscosity α_SS and the disc wind α_DW, with two-step transitions between zones. The disc is therefore modelled as a dead zone sandwiched between MRI-active regions. The radial variation of α_SS is specified by α_SS(R) = α_SS,in + (α_SS,dz - α_SS,in) × (1/2) exp[(R - R_dz,in)/w_in] for R < R_dz,in; α_SS,in + (α_SS,dz - α_SS,in) × {1 - (1/2) exp[(R_dz,in - R)/w_in]} for R_dz,in ≤ R < R_m; α_SS,dz + (α_SS,out - α_SS,dz) × (1/2) exp[(R - R_dz,out)/w_out] for R_m < R ≤ R_dz,out; and α_SS,dz + (α_SS,out - α_SS,dz) × {1 - (1/2) exp[(R_dz,out - R)/w_out]} for R > R_dz,out, where R_dz,in and R_dz,out delineate the boundary between the inner MRI-active region and the dead zone, and the boundary between the dead zone and the outer MRI-active region, respectively. R_m is the midpoint between R_dz,in and R_dz,out. The transition widths w_in=R_dz,in/20 and w_out=R_dz,out/20 are adopted to achieve a sharp but continuously differentiable transition between regions. The sharp transition follows the dead zone model in <cit.>, where an initially slow transition evolves into a sharp one at late times. For simplicity, we assume MHD winds take over the removal of angular momentum in the regions covered by the dead zone and describe α_DW in a way analogous to Eq. <ref>, i.e., replacing α_SS with α_DW throughout yields the corresponding description of α_DW(R). We keep the boundaries between regions and the transition widths the same for α_SS(R) and α_DW(R). We fix α_DW,in=10^-5, α_SS,in=10^-2 and α_SS,dz=10^-4, and explore how variations of α_DW,dz, α_DW,out and α_SS,out affect the disc evolution and observable disc properties.
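The three-zone profile above translates directly into code. The sketch below is illustrative only, not the paper's implementation: it evaluates the smoothed α(R) for both the viscosity and the wind using the fixed inner boundary R_dz,in = 0.1 au, an outer boundary of 30 au and the fiducial α values quoted in the fiducial model below; the arithmetic midpoint is assumed for R_m, and R = R_m is assigned to the third branch to close the half-open interval.

```python
import numpy as np

def three_zone_alpha(R, a_in, a_dz, a_out, R_dz_in=0.1, R_dz_out=30.0):
    """Smoothed three-zone alpha(R): inner MRI-active zone, dead/wind zone, outer zone (R in au)."""
    w_in, w_out = R_dz_in / 20.0, R_dz_out / 20.0          # transition widths
    R_m = 0.5 * (R_dz_in + R_dz_out)                        # midpoint between the two boundaries
    alpha = np.empty_like(R)

    inner = R < R_dz_in
    alpha[inner] = a_in + (a_dz - a_in) * 0.5 * np.exp((R[inner] - R_dz_in) / w_in)
    lo = (R >= R_dz_in) & (R < R_m)
    alpha[lo] = a_in + (a_dz - a_in) * (1.0 - 0.5 * np.exp((R_dz_in - R[lo]) / w_in))
    hi = (R >= R_m) & (R <= R_dz_out)
    alpha[hi] = a_dz + (a_out - a_dz) * 0.5 * np.exp((R[hi] - R_dz_out) / w_out)
    outer = R > R_dz_out
    alpha[outer] = a_dz + (a_out - a_dz) * (1.0 - 0.5 * np.exp((R_dz_out - R[outer]) / w_out))
    return alpha

R = np.logspace(-2, 3, 500)                                              # au
alpha_ss = three_zone_alpha(R, a_in=1e-2, a_dz=1e-4, a_out=1e-2)         # viscosity profile
alpha_dw = three_zone_alpha(R, a_in=1e-5, a_dz=1e-2, a_out=1e-4)         # wind profile
```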
In our dead zone model, though the dead zone is inactive to the MRI, it is active to the MHD wind and can be referred to as a dead/wind zone. An illustration of the dead/wind zone model is shown in Figure <ref>. Solid and dashed lines label fixed and free parameters, respectively. Two vertical grey lines indicate the locations of the dead/wind zone inner and outer boundaries, respectively. We fix the inner boundary R_dz,in=0.1au and vary the outer boundary R_dz,out, which is not well constrained by observations, to investigate the impact of dead/wind zone sizes on the disc evolution. §.§ Simulation Setup We adopt a time-independent temperature T(R)∝ R^-1/2 <cit.>, which is the standard temperature profile for a flaring disc model and results in a viscosity proportional to the radius. We set the aspect ratio H/R to be 0.05 at 1 au, corresponding to a local temperature of ∼600K. We assume an initial disc mass M_d=0.01 M_⊙ with a characteristic radius R_c,0=60au surrounding an M_*=1 M_⊙ star. The initial gas surface density profile is described by a cutoff power-law function Σ_g(R) = M_d/(2π R_c,0^2) (R/R_c,0)^-1 exp(-R/R_c,0), distributed among 8000 cells equispaced in R^1/2 between 0.0056au and 40,000au, which is sufficiently large to allow discs with a large α_SS,out to continuously expand during the entire evolution. Eq. <ref> is not a self-similar solution when the wind component is also taken into account (see Eq. <ref>), though the impact is probably small. We assume the magnetic field evolves more slowly than the gas surface density, such that α_DW(R,t) ∝ Σ_c(t)^-ω, where Σ_c = M_d(t)/[2π R_c(t)^2], with ω between 0 and 1 <cit.>. Aside from the strength of the magnetic field, α_SS is also sensitive to the degree of ionization <cit.>, which is not captured in our simple 1-D model. Therefore, we keep it constant in time at a given radius in this work, as in most work based on 1-D models and 2-D hydrodynamical simulations. Accurately tracing R_c(t) in simulations is challenging, as substructures and disc winds make the disc surface density profile deviate from the original one (see Section <ref> and Appendix <ref>). We instead use α_DW(R,t) ∝ M_d(t)^-ω to avoid computing R_c(t) on the fly. The latter is equivalent to the former when the disc evolves purely under MHD winds, but underestimates α_DW when R_c continuously increases, which is the case for the models studied here. We adopt ω = 0.5 throughout the paper. As α_DW is a time-varying parameter, if it is not otherwise specified, the value of α_DW assigned in this paper refers to its initial value at t=0. We set λ = 3, following previous theoretical studies <cit.> and observations of disc winds from Class II objects (λ=1.6-2.3) <cit.>. Ṁ_thick in Eq. <ref> and Ṁ_thin in Eq. <ref> are fixed to representative values of 10^-10 M_⊙ yr^-1 and 10^-9 M_⊙ yr^-1 <cit.>, respectively, noting that the stronger photoevaporative wind is only triggered when the surface density in the inner disc becomes optically thin at late times. We impose a maximum evolution time of 12Myr, at which point simulations are automatically terminated regardless of the remaining mass in the disc. It is worth noting that t=0 in our simulations represents the time when the envelope infall rate drops below the stellar accretion rate, rather than the time when the disc is formed. The stage studied in this work is close to the Class II phase defined from the infrared excess. 
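As a sketch of the setup described above (with assumed cgs constants and hypothetical names; the actual code may differ), the grid, the tapered power-law initial profile and the time rescaling of α_DW can be written as:

import numpy as np

AU, MSUN = 1.496e13, 1.989e33     # cm, g

# 8000 cells equispaced in R^(1/2) between 0.0056 au and 40,000 au
R = np.linspace(np.sqrt(0.0056 * AU), np.sqrt(4.0e4 * AU), 8000) ** 2

def sigma_init(R, M_d=0.01 * MSUN, R_c=60.0 * AU):
    # Tapered power law: Sigma ∝ R^-1 exp(-R/R_c), normalised to the disc mass M_d
    return M_d / (2.0 * np.pi * R_c**2) * (R_c / R) * np.exp(-R / R_c)

def alpha_dw_now(alpha_dw_0, M_d_now, M_d_0, omega=0.5):
    # Magnetic field evolution proxy: alpha_DW ∝ M_d(t)^-omega
    return alpha_dw_0 * (M_d_now / M_d_0) ** (-omega)

sigma = sigma_init(R)
M_d = np.trapz(2.0 * np.pi * R * sigma, R)   # disc mass recovered from the profile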
§.§ Code Testing To test our code against analytical solutions provided in <cit.>, we implement four simulations with constant α along the radius by activating 1) only the viscosity component; 2) only the wind component with no magnetic field evolution; 3) viscosity+MHD wind with no magnetic field evolution; and 4) only the wind component with magnetic field evolution. When testing cases involving MHD winds, we modify the slope of the initial surface density profile by adding ξ, Σ_g(R) = M_d/(2π R_c,0^2) (R/R_c,0)^(-1+ξ) exp(-R/R_c,0), to ensure they have the same initial surface density profile as the analytical solutions. ξ is the mass ejection index and can be expressed in terms of ψ=α_DW/α_SS and the lever arm λ as <cit.> ξ = (ψ+1)/4 [√(1 + 4ψ/((λ-1)(ψ+1)^2)) - 1]. Our numerical method recovers the analytic solutions well: the comparison between the numerical and analytical solutions is shown in Appendix <ref>. § FIDUCIAL MODEL We initiate our study by building a fiducial model adopting a total α_tot(R)=α_SS(R)+α_DW(R)≃ 10^-2, to simulate a disc with an almost constant total α throughout the whole disc and investigate the roles that MHD winds and dead/wind zones play in comparison to a fundamental viscous disc with a constant α_SS=10^-2 facilitated by photoevaporative winds. We assume the dead/wind zone (see Section <ref>) spans from 0.1au to 30au, within which α_DW,dz=10^-2. Viscosity dominates over the magnetised wind in the outer disc, where α_SS,out=10^-2 and α_DW,out=10^-4. Other parameters are fixed as specified in Section <ref>. This transition profile is equivalent to the one depicted in Figure <ref>. Figure <ref> illustrates how the gas surface density evolves in the fiducial model compared with the pure viscosity+photoevaporation model (left panel) and the hybrid (viscosity+wind)+photoevaporation model (right panel). It is evident that winds flatten the slope of Σ_g and accelerate disc evolution by extracting mass from the disc and subtly aiding the stellar accretion (Figure <ref>). At ∼2.5Myr, the viscous disc (in the left panel) still has relatively high surface densities, while the "hybrid" discs have already lost a large proportion of their mass before ∼ 2Myr. The inclusion of the dead/wind zone can alter the smooth gas surface density profiles to ones with substructures formed around the inner and outer edges of the dead/wind zone. More detailed discussion of these substructures can be found in Section <ref>. At these edges, Ṁ(R) for viscosity and for MHD winds changes substantially due to the sharp transitions of α_DW(R) and α_SS(R), leading to additional mass being accumulated or removed locally. The underlying physics is well illustrated by Figure <ref>, and the analytical expressions for the mass accretion rates driven by viscosity and by MHD winds are given in Eq. <ref> and Eq. <ref>: Ṁ_SS(R) = 6π/(RΩ) ∂/∂ R(Σ_g c_s^2 α_SS R^2), and Ṁ_DW(R) = 3πΣ_g c_s^2 α_DW/Ω. The wind responds to the change of α in a distinct way from viscosity. Accretion rates driven by winds change proportionally to α_DW as Ṁ_DW(R) ∝ α_DW c_s H Σ_g, while Ṁ_SS(R) varies with the gradient of the product in parentheses in Eq. <ref>. When α_DW decreases abruptly, Ṁ_DW(R) drops significantly, leaving gas piled up in the dead/wind zone. In contrast, the decrease in α_SS results in a positive radial velocity. Therefore, gas flows outwards to smooth out the gas accumulation. This is clearly shown by the dashed blue lines at ∼0.1au in Figure <ref>. This mass outflow persists for the majority of the disc lifetime. 
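A minimal sketch of how the two local accretion-rate profiles quoted above could be evaluated on the radial grid is given below (cgs units; the array names are ours, not those of the actual code):

import numpy as np

G, MSUN = 6.674e-8, 1.989e33

def mdot_profiles(R, sigma, cs, alpha_ss, alpha_dw, M_star=1.0 * MSUN):
    # Local accretion rates driven by viscosity and by MHD winds
    omega = np.sqrt(G * M_star / R**3)                     # Keplerian frequency
    mdot_ss = 6.0 * np.pi / (R * omega) * np.gradient(sigma * cs**2 * alpha_ss * R**2, R)
    mdot_dw = 3.0 * np.pi * sigma * cs**2 * alpha_dw / omega
    return mdot_ss, mdot_dw

# A negative mdot_ss marks gas flowing outwards, e.g. just outside the dead/wind
# zone inner edge where alpha_SS drops sharply.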
Similar behaviour of Ṁ_SS has also been found by other studies that incorporate the dead zone model, such as <cit.> (their Figure 3) and <cit.> (their Figure 4). Although significant changes exist in both Ṁ_SS and Ṁ_DW when the dead/wind zone is taken into account, the total mass accretion rate Ṁ_tot remains smooth in the fiducial case (solid grey lines in Figure <ref>), where α_tot is nearly constant along the radius, as expected from a purely viscous disc with constant α_SS, until rapid clearing is switched on at late stages (the bottom panel of Figure <ref>). In addition to substructures created by the incorporation of dead/wind zones, the strong photoevaporation triggered at a later stage when Σ_g(R≃ R_crit) becomes optically thin takes away gas and then opens a gap around the critical radius R_crit (see Section <ref>). The gap further becomes an inner cavity when the disc interior to R_crit is fully accreted on to the star and the gas from the outer disc cannot fuel the inner disc, due to the photoevaporative mass-loss rate exceeding the local accretion rate <cit.>. § PARAMETER EXPLORATION Following the fiducial model, we expand the three free parameters (α_DW,dz, α_DW,out and α_SS,out, see Figure <ref>) to a broader parameter space (see Table <ref> for specific values) to study how variations of α_SS and α_DW affect stellar accretion rates, surface density profiles, gas disc sizes, lifetimes, and cumulative mass loss by different physical processes. We then further extend our investigation to the impacts of the dead/wind zone size and the initial disc characteristic radius R_c,0 on disc evolution. Accompanying these hybrid models are two naive models designed to compare and illustrate how the inclusion of dead/wind zones makes disc behaviours differ from what we expect for a commonly assumed constant-α disc. One of the naive models is a viscous disc (α_SS=10^-3) with internal photoevaporation (as introduced in Section <ref>); the other is a wind-only disc (α_DW=10^-3) incorporating a magnetic field evolved in the same way as that in hybrid models. Parameters that we examine in the following sections are listed in Table <ref> above the dividing line, below which we also provide parameters that are fixed in simulations. We adopt a small α_DW,out as non-ideal MHD simulations show that the accretion rate in the outer disc is dominated by MRI-driven accretion caused by FUV-induced ionization in the upper layers <cit.>. We conduct 92 simulations in two separate groups. First, we run 27 simulations with all combinations of varying α in Table <ref> for discs with fixed R_c,0=60au and R_dz,out=30au. Among them, we select 13 representative combinations of α to study the disc size problem. We stretch the initial characteristic radius R_c,0 from 60 to 120au to examine how the disc size affects disc evolution. As the dead/wind zone outer edge R_dz,out, fixed in the first group of simulations, is also not well determined by observations and simulations, we vary it from 30au to 75au, and to 135au. These 13×(6-1)=65 simulations constitute the second group of simulations. Results for the two groups of simulations can be found in Table <ref>, and are also visualised in Figure <ref> and Figure <ref> to assist reading. §.§ Stellar accretion rate The stellar accretion rate is one of the observables for which we have a statistically large sample and which can be used to constrain the disc evolution model. We plot stellar accretion rates vs. 
disc gas masses of all the 92 models in Figure <ref>, with comparison to observed stellar accretion rates and disc masses around stars with masses of 0.3-1.2 M_⊙ <cit.>. Discs with upper limits (non-detections) on either stellar accretion rates or disc masses are excluded. Models are classified in three panels by their α_DW, dz, which determines initial stellar accretion rates together with the initial disc characteristic radius when α_SS,in, α_DW,in and α_SS,dz are fixed. Our models can explain intermediate-mass discs (3×10^-4-10^-2 M_⊙) with intermediate stellar accretion rates (<2× 10^-8 M_⊙ yr^-1) in the Ṁ_*-M_d plane. For a given initial disc mass, the upper limit of the stellar accretion rate can be elevated if a smaller R_c,0 or a larger lever arm λ is assumed. The stellar accretion rates of "hybrid" models behave similarly to those of a purely viscous disc, except that the latter has a much longer evolutionary timescale (>12 Myr). In contrast, the wind-only model follows a distinct evolutionary pathway. Its accretion rate can sustain a relatively high value when the disc mass is low, extending the evolutionary pathway to a region where no observational data have been obtained (the lower left corner in the Ṁ_*-M_d plane). However, if a larger lever arm is adopted for the pure wind model, it is able to explain low-mass discs (∼ 10^-4 M_⊙) observed with relatively high accretion rates (∼ 10^-9 M_⊙ yr^-1). As M_d(t) should be a monotonically decreasing variable with time, small bumps exhibited in evolution tracks in the Ṁ_*-M_d plane indicate that Ṁ_* is not consistently declining with time for some models. This means that some discs, even after entering Class II, still undergo small accretion outbursts due to mass accumulation in the inner disc when dead/wind zones are taken into account. §.§ Categorization of the surface density As shown in Section <ref>, the relative change in α_DW and α_SS along the radius always leads to the creation of gas substructures. By visually inspecting substructures in the surface density profiles from group 1 simulations (R_c,0=60au and R_dz,out=30au), we can roughly classify them into three categories (Figure <ref>). When α_DW,dz<10^-2 (Category A), accretion driven by winds in the dead/wind zone is inefficient in transferring mass fed by the outer disc to the inner disc, and gas continually accumulates around the inner transition radius R_dz,in, maintaining an overall surface density relatively higher than those of the other two categories. The fixed large α_SS,in (10^-2) in the inner disc, set by default, efficiently fuels the central star, enabling quick consumption of the local gas. The contrast between the accretion rates on the two sides of the dead/wind zone inner edge forms a bump in the surface density. When α_DW,dz= 10^-2 (Categories B and C), the accretion rates in the inner disc (R≤ R_dz,in) and within the dead/wind zone (R_dz,in<R≤ R_dz,out) are comparable over the majority of the evolution and no substantial mass accumulates at the inner transition radius (R_dz,in). The less significant change in the total α around R_dz,in in Categories B and C produces a narrower spike in the surface density, instead of a wider bump. The morphology of the gas accumulation depends on the α assumed on the two sides of the "dead/wind zone" inner boundary, which, though not well constrained, are assigned reasonable values in our models. 
The gas accumulation and the α-transition itself are both several times wider than the local scale-height, making the excitation of the Rossby wave instability less likely <cit.>. But whether such a feature is stable or not should be studied in 2-D or 3-D simulations, which are beyond the scope of this study. The morphology of the outer disc (whether or not the gas is concentrated into a bump) further classifies discs into Categories B and C. When the outer disc is dominated by efficient expansion (large α_SS,out), mass primarily moves further out and no significant mass piles up (Category C). When the expansion is less efficient (small α_SS,out), the wind-driven accretion can compensate for the spreading driven by viscosity to some extent, leading to more mass participating in the accretion and piling up at the dead/wind zone outer edge (Category B). This process is also reflected in the smaller gas disc sizes in the middle panel of Figure <ref> compared to those in the right panel. Regardless of the dominant mechanisms in the outer disc, a "dip" feature can be observed around the outer boundary of the "dead/wind zone" in all categories (see the three panels of Figure <ref>). This arises from the transition of α_DW from larger to smaller values (see also Eq. <ref>). For the 27 simulations in the first group, the 18 cases belonging to Category A share the common feature that α_tot=α_DW+α_SS in the dead/wind zone is not significantly larger than, or is even smaller than, α_tot in the outer disc. Category B contains 6 simulations, which have α_tot in the dead/wind zone considerably greater than that in the outer disc (α_tot,dz/α_tot,out>10). Three simulations are classified as Category C, where we require the dead/wind zone to be strongly influenced by the efficient wind (α_DW,dz=10^-2) and the outer disc to be dominated by viscosity (α_SS,out=10^-2) initially. A similar classification is also applied to discs when R_c,0 and R_dz,out are extended to larger values in the second group of simulations. §.§ Disc spreading Three different radii are typically used to characterise disc sizes: the characteristic radius R_c, beyond which the disc surface density drops exponentially; the outer radius R_o, a disc radius set by a certain surface density threshold; and the transition radius R_t <cit.>, delimiting the accreting disc (Ṁ(R≤ R_t)≥ 0) and the spreading disc (Ṁ(R>R_t)<0). In this section, we explore the evolution of these radii for various combinations of α and discuss how they can be applied to understand observations. The characteristic radius R_c is commonly used to define the initial disc size. It keeps growing in the conventional viscous disc and remains unchanged in the magnetised wind disc <cit.>. The outer radius R_o increases in viscous discs and shrinks in wind-only discs (see the overlaid circles and triangles in the upper middle panel of Figure <ref>). The transition radius R_t is only meaningful when viscosity is included, since a purely wind-driven disc contracts at all radii at all times. 
Although the measurement of R_o and R_t remains straightforward when the dead/wind zone is integrated into a hybrid disc, which entangles the effects of winds and viscosity, it can be challenging to trace the motion of R_c. The wind modifies the slope of the surface density profile and the presence of dead/wind zones creates substructures (see Section <ref>), jointly hindering the estimation of R_c from simply fitting the surface density profile with a tapered power-law function. Therefore, we characterise R_c for hybrid discs statistically. A detailed explanation of the method can be found in Appendix <ref>. For all models with R_c,0=60au and R_dz,out=30au, we measure these three radii every 0.5Myr. We deliberately choose a very small surface density threshold of 10^-10 g cm^-2 for R_o to accurately trace the outer disc motion. We defer the discussion of the selection of the surface density threshold to Section <ref>. We fit the variation of the radii with time using a linear function, as suggested by visual inspection and by analytical solutions <cit.>. The slopes of the fitted functions, denoted dR_c/dt, dR_o/dt and dR_t/dt, are used to characterise the expansion rates of R_c, R_o and R_t, respectively. Figure <ref> shows clearly that the expansion rate increases with α_SS,out, and that α_DW, dz has an almost negligible effect on the disc expansion rate regardless of which measurement we use. α_DW, out also plays a minor role, except for dR_t/dt. Discs with a large α_DW, out typically have a large dR_t/dt. This is because the efficient accretion driven by winds can partly offset the spreading caused by α_SS, out in the outer disc and enlarge the region covered by an overall inflow (Ṁ_tot(R)>0), leaving the outermost part of the disc with less mass to spread more rapidly. The linear fitting function cannot always capture the evolution of disc sizes. When the gas disc size exhibits a trajectory with time analogous to a parabola, characterised by an initial increase followed by a subsequent decrease, the fitting still yields a positive expansion rate provided that the overall trend indicates growth. This is the case for discs with a wind-dominated outer part, displayed by the three dots on the top left of each panel, where α_DW,out=10^-3 and α_SS,out=3×10^-4 (Simulations 7, 16 and 25 in Table <ref>). R_o of these discs does not start contracting until α_DW,out/α_SS,out≳10, due to the magnetic field enhanced by the disc's own evolution. In contrast to R_o, both R_c and R_t keep increasing at all times (i.e., dR_c/dt>0, dR_t/dt>0). Unlike R_c and R_t, which are more meaningful from the theoretical perspective, R_o is an observable quantity, which can be traced via molecular line emission. ^12CO, the most abundant gas species after H_2 in the ISM, is accessible at millimetre wavelengths from the ground and is a suitable tracer for characterising the gas disc radius. The self-shielding of ^12CO against photodissociation yields a nearly constant limit on the observable surface density of ∼10^-4 g cm^-2 <cit.>, assuming an abundance of 10^-4 relative to H_2 <cit.>. We apply this threshold to mimic very high-sensitivity observations, which reach the fundamental sensitivity limit imposed by physical processes. In comparison, a higher threshold of 10^-2 g cm^-2 is adopted to represent lower-sensitivity observations. 
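For concreteness, the radius measurements and the linear fits described above can be sketched as follows (hypothetical names; snapshots of Σ_g and Ṁ_tot are assumed to be stored during the run):

import numpy as np

def radius_outer(R, sigma, sigma_thres=1e-10):
    # Outermost radius where Sigma_g still exceeds the threshold (g cm^-2)
    above = np.nonzero(sigma > sigma_thres)[0]
    return R[above[-1]] if above.size else 0.0

def radius_transition(R, mdot_tot):
    # Innermost radius where the total mass flux turns negative (spreading disc)
    neg = np.nonzero(mdot_tot < 0.0)[0]
    return R[neg[0]] if neg.size else R[-1]

def expansion_rate(times, radii):
    # Slope dR/dt from a linear fit of radius against time
    return np.polyfit(times, radii, 1)[0]

# e.g. dRo_dt = expansion_rate(t_snap, [radius_outer(R, s) for s in sigma_snapshots])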
We measure the ^12CO disc sizes for all models listed in Table <ref> at 5 specific evolutionary stages (0.5, 1, 2, 5 and 10 Myr) by adopting the two surface density thresholds discussed above, and show the results in Figure <ref>. We classify disc sizes by their values of R_c,0 and α_SS,out. The dominant role of the latter in the disc expansion is illustrated in Figure <ref>. In Figure <ref>, discs characterised by a lower surface density threshold are more radially extended than those measured by a higher threshold when compared at the same age. Their sizes increase with time for given α_SS,out and R_c,0. Exceptions exist for discs with α_SS,out=10^-2, whose sizes drop from 2 to 5Myr, tracing the switch-on of efficient photoevaporation at the end of evolution. Disc sizes traced with a higher threshold (10^-2 g cm^-2) decrease with time instead. This trend is particularly prominent for discs with large α_SS,out (10^-2) and can be easily understood as they tend to be more radially extended and have a larger R_t (a larger and positive dR_t/dt in Figure <ref>). If the threshold surface density is higher than the surface density corresponding to R_t, it will trace a shrinking disc within R_t. This is mitigated for discs with smaller α_SS,out, whose R_t at a given time corresponds to a higher surface density. They are therefore more tolerant of the threshold we adopt for R_o. Interestingly, this tolerance may explain the smaller variations in disc sizes when α_SS,out is smaller, and can also make discs with smaller α_SS,out look larger than their counterparts with larger α_SS,out, causing confusion in disc size comparisons when observations are not integrated for a sufficiently long time. The disc size measurements taken here assume an ISM abundance of ^12CO. However, mounting evidence from observations shows that CO is depleted in protoplanetary discs <cit.>. Lockup of CO into ices or large solid bodies is required to explain this depletion in addition to freeze-out and photodissociation <cit.>, inducing carbon depletion compared to the ISM value. This carbon depletion in the outer disc of Class II stars can also vary substantially among individuals <cit.>. These effects undoubtedly complicate the disc gas size problem further. §.§ Disc lifetime Various definitions of disc lifetimes exist in the literature[<cit.> defines the lifetime as the ratio of the disc mass to the stellar accretion rate; in observations, the lifetime for a cluster is estimated by extrapolation of the disc fraction against the disc/stellar age <cit.>.]. The lifetime in this work is measured from the start of the simulation until either the disc is fully dispersed or the simulation is terminated upon reaching the time limit (12Myr), whichever is shorter. We take t=0 in our models as the beginning of the Class II phase, so the times used here are not directly comparable to observed ages for objects ≲ 0.5Myr. The lifetimes of the 27 discs in the first group of simulations are shown in Figure <ref>, where we employ a similar illustration to Figure <ref>. We encode the lifetime as linearly proportional to the dot area and compress the dimension of α_DW, dz into colours in the two-dimensional dot map. In Figure <ref>, discs with larger α_SS,out tend to have a shorter lifetime for a given combination of α_DW,dz and α_DW,out. This is highlighted by the much smaller dots in the third column than those with smaller α_SS, out in the first two columns. This trend is reinforced when α_DW, dz is also large (darkest dots). 
This can be explained by the increasing radially averaged α when we increase the α in the "dead/wind zone" and in the outer disc. The minor influence of α_DW, out on the disc lifetime is partially due to its smaller assumed value relative to α_SS,out in this study. However, regardless of the adopted combinations of α_DW and α_SS, the disc lifetime is noticeably shortened after incorporating the magnetised wind (see Figure <ref>), implying that the lifetimes of our hybrid models are generally akin to those of a purely wind-driven disc. This also means that an equivalent amount of angular momentum can be transported away from discs more efficiently by magnetised winds. The lifetime increases for discs with larger R_c,0 as both the stellar accretion rate and the wind extraction rate decrease due to the more radially-extended mass distribution. In contrast, when the radially-averaged α_DW increases with the enlarged "dead/wind zone", the lifetime does not decrease monotonically. For several simulations, discs with all other parameters the same except R_dz,out have their shortest lifetimes when the dead/wind zone size is intermediate (75 au, Simulations 28, 31, 33, 36, 58 and 61 in Table <ref>). This is caused by the weak wind (α_DW,dz<10^-2) in the dead/wind zone. In this case, the locally accumulated gas can drive a minor accretion outburst, i.e. a minor positive deviation from the original power-law accretion rate. If the surface density in the inner disc, after the outburst, abruptly becomes optically thin to the stellar radiation, an earlier turn-on of the rapid late-stage photoevaporation can reduce the disc lifetime. Discs with only intermediate-sized dead/wind zones fulfilling this condition therefore have the shortest lifetimes. §.§ Cumulative mass loss Three sinks of gas mass are considered in this work: stellar accretion (driven by viscosity and MHD winds), mass extraction by MHD winds, and mass loss by internal photoevaporation. Although the mass lost to each process is not traceable from observations, identifying them would help us understand the dominant mass-loss mechanisms during evolution. The mass loss fraction by each component for each simulation is listed in Table <ref> and visualised in Figure <ref> (for group 1 simulations) and Figure <ref> (for group 2 simulations). Most hybrid discs studied in this work lose a large proportion of gas to magnetised winds (≳ 55 per cent) and to stellar accretion (∼ 20 per cent). They have a time-scale and mass-loss budget analogous to those of the pure wind model. When the accretion and expansion in the outer disc are inefficient (α_DW, out=10^-5 or 10^-4 and α_SS, out=3×10^-4, Simulations 1, 4, 10, 13, 19 and 22 in Figure <ref> and Table <ref>), the low viscosity and small wind torques do not transport the gas inwards efficiently, leaving more mass lost to photoevaporation at later stages. We further separate the mass-loss process into two stages: the stage losing the first 60 per cent of the total mass (lost within 12Myr), and the stage losing the remaining 40 per cent. Except for the naive viscous model, the majority of gas in the first stage is extracted by winds and very little by photoevaporation, which has a low rate (≃10^-10 M_⊙ yr^-1) in the early stage of evolution. The remaining 40 per cent of gas is primarily removed either by wind extraction for shorter-lived discs, due to large α_SS and α_DW (see Section <ref>), or by photoevaporation for longer-lived discs. 
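The bookkeeping of the three sinks can be sketched as below; the per-area sink rates and the inner-boundary accretion rate are assumed to be provided by the solver at each step (the names are placeholders, not those of the actual code):

import numpy as np

class MassBudget:
    # Accumulate the mass lost to wind extraction, stellar accretion and photoevaporation
    def __init__(self):
        self.wind = self.accretion = self.photoevap = 0.0

    def update(self, R, sigdot_wind, sigdot_pe, mdot_star, dt):
        # sigdot_wind, sigdot_pe: local mass-loss rates per unit area (g cm^-2 s^-1)
        self.wind += dt * np.trapz(2.0 * np.pi * R * sigdot_wind, R)
        self.photoevap += dt * np.trapz(2.0 * np.pi * R * sigdot_pe, R)
        self.accretion += dt * mdot_star          # mass flux through the inner boundary

    def fractions(self):
        total = self.wind + self.accretion + self.photoevap
        return {k: getattr(self, k) / total for k in ("wind", "accretion", "photoevap")}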
When we increase the dead/wind zone size, more mass is taken away by wind extraction due to its larger radial coverage. This is partly the result of our chosen value of the lever arm λ. When the lever arm is adjusted to a higher value, more mass will be lost to stellar accretion instead of wind extraction (see Section <ref>). We also notice from Figure <ref> that discs with small α_DW,dz (Simulations 1-18) lose mass in a steadier manner than their counterparts with larger α_DW,dz (Simulations 19-27). The former typically take 20-30 per cent of their lifetimes to lose 60 per cent of the total mass, while the latter require only ≲ 10 per cent of their lifetimes to become comparably depleted. A similar pattern also applies when larger R_c,0 and R_dz,out are adopted. This is determined by the higher extraction rate and accretion rate driven by strong winds in the intermediate disc (α_DW,dz=10^-2) when the initial surface density is higher. § DISCUSSION §.§ Lever arm Recent observations and non-ideal MHD simulations consistently predict a small lever arm λ and a small mass ejection-to-accretion ratio f=Ṁ_wind/Ṁ_acc∼0.1-1 <cit.>. In the previous sections, our adopted lever arm (λ=3) gives rise to f>1[see Figure <ref> and Figure <ref>, where the dark blue bar is generally longer than the light blue bar, indicating that Ṁ_wind is larger than Ṁ_*,SS+Ṁ_*,DW averaged over time]. The analytical solution[f=Ṁ_wind/Ṁ_*,DW=(R_c/R_in)^ξ-1 from <cit.>. ξ=1/[2(λ-1)] for the pure wind case, see also Eq. <ref>.] based on a steady-state pure wind disc extending from R_in=0.01au to R_c=60au predicts a lever arm of ∼ 7 to achieve f∼1. Therefore, we replace the lever arm in our fiducial model with 7 and 12. R_in here does not necessarily mean the disc inner edge but can be the inner radius of the wind-launching region instead. The fiducial model has a wind region originating from 0.1au (see Section <ref> and Figure <ref>). The comparison between the original and the two modified fiducial models is shown in Figure <ref>. The first panel of Figure <ref> illustrates that, when adopting a larger lever arm, less mass is taken away by winds from the intermediate region to drive a similar accretion rate (due to the fixed small α_DW,in, the middle panel), leaving the slope of the surface density closer to that of the initial profile (the left panel). Less mass loss in the dead/wind zone also means more mass will be accumulated in the inner disc, enhancing the viscous stellar accretion rate (the middle panel) and delaying the rapid clearing by internal photoevaporation. In contrast, the outer disc is governed by viscosity here, and the change in the lever arm does not affect the local mass distribution much. In the middle panel of Figure <ref>, the mass lost by wind-driven stellar accretion constitutes a negligible fraction of the total mass loss, and this fraction is stable when varying the lever arm. This can be attributed to the imposed small α_DW,in (10^-5), which suppresses the wind-driven accretion on to the host star. But this also indicates that winds originating from radii larger than the disc inner edge drive local accretion instead of stellar accretion, rendering the distribution of stellar accretion rates akin to that of viscous discs. 
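The footnoted steady-state relation can be checked with a few lines of Python; the numbers below simply reproduce the estimate quoted above for R_in=0.01 au and R_c=60 au:

def ejection_to_accretion(lam, R_in=0.01, R_c=60.0):
    # f = (R_c/R_in)**xi - 1, with xi = 1/[2*(lam-1)] for a pure wind disc
    xi = 1.0 / (2.0 * (lam - 1.0))
    return (R_c / R_in) ** xi - 1.0

for lam in (3, 7, 12):
    print(lam, round(ejection_to_accretion(lam), 2))
# lam = 3 gives f ≈ 7.8, lam = 7 gives f ≈ 1.1, lam = 12 gives f ≈ 0.5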
The radially integrated mass loss rate due to each component shown in the middle panel of Figure <ref> is similar to Figure 3 of <cit.>, from which we can infer whether a small or large lever arm is assumed by comparing the mass-loss rate by wind extraction with the wind-driven stellar accretion rate. Differences between the middle panel of Figure <ref> and their Figure 3 arise from the more massive initial disc with a more compact mass distribution, and the stronger photoevaporation over the majority of the disc lifetime, adopted in <cit.>. We visualise the cumulative mass loss due to the three components (wind extraction, stellar accretion and photoevaporation) in the right panel of Figure <ref>. Contrary to the fiducial model, where gas is mainly lost to wind extraction (see Section <ref>), discs with larger λ lose the majority of their mass to stellar accretion due to the elevated viscous accretion rates and reduced wind extraction rates (the middle panel). In summary, a change of the lever arm λ can alter the slope of the gas surface density profile in the intermediate disc, modify the disc lifetime slightly, and change the ratio of mass lost by stellar accretion to that by wind extraction substantially. We caution readers here that the mass ejection-to-accretion ratio f is sensitive to the extent of the wind-launching region, i.e. variations of either the inner or the outer wind-launching radius can alter f by a factor of a few. Present observations constrain the inner launching radius of magnetised winds to 0.5-3au for Class II discs <cit.>. For a specific disc, the outer radius of the wind region is typically determined by R_c, beyond which the surface density drops sharply. Stricter constraints on the wind inner launching radius, which might vary from disc to disc, are necessary to understand the relative importance of mass loss due to wind extraction and stellar accretion. §.§ Surface density-adaptive "dead/wind zone" The dead/wind zone size is fixed for all hybrid models during the entire evolution. However, a more realistic treatment should be one evolving with the surface density. A decreasing surface density due to evolution alleviates the difficulty of ionizing the gas in the disc midplane, yielding a progressively smaller MRI-quenched region. To test this, we follow <cit.> and define the "dead/wind zone" outer edge by the radius corresponding to Σ_g=0.5 g cm^-2. We implement this by tracing the radius on the fly in simulations. The varying R_dz,out changes the width of the outer boundary transition (w_out, see Section <ref>) slightly but does not alter the overall profile. A lower limit of 10au is imposed on the dead/wind zone outer edge to sustain low turbulence around tens of au, as estimated from observations <cit.>. The upper panel of Figure <ref> compares the surface density profiles of the fiducial model and the Σ_g-dependent model. More complex, time-varying substructures in the gas are formed, caused by the inwardly moving dead/wind zone outer edge. The disc lifetime is also significantly shortened for the Σ_g-dependent model due to the initially larger wind-dominated dead/wind zone. The outer edge rapidly drifts from the initial ∼ 94 au to the manually imposed lower limit of 10au within 0.2 Myr. This takes 20 per cent of the total disc lifetime (∼ 1 Myr), indicating that the evolution is slowed down by the shrinking dead/wind zone. 
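A sketch of how the Σ_g-adaptive outer edge might be traced at each time step is given below (the 0.5 g cm^-2 level and the 10 au floor follow the text; everything else is a placeholder assumption):

import numpy as np

def dead_zone_outer_edge(R_au, sigma, sigma_dead=0.5, R_floor=10.0):
    # Outermost radius (au) where Sigma_g still exceeds sigma_dead, floored at R_floor
    above = np.nonzero(sigma > sigma_dead)[0]
    R_edge = R_au[above[-1]] if above.size else R_floor
    return max(R_edge, R_floor)

# Re-evaluating this every step moves R_dz,out (and w_out = R_dz,out/20) inwards
# as the disc drains, shrinking the wind-dominated region with time.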
Further comparison with a disc that has a fixed dead/wind zone outer edge at 94au, but a much shorter lifetime, also validates this statement. Though the lifetime is more than halved after adoption of the Σ_g-dependent dead/wind zone, the cumulative mass loss fraction by winds for it (∼ 64 per cent) is marginally lower than for the fiducial model (∼ 71 per cent), as the former has a smaller dead/wind zone averaged over time. Nevertheless, this does not alter our conclusion in Section <ref> that discs in our hybrid models primarily lose mass in a way akin to a pure wind model. As the inclusion of a Σ_g-dependent dead/wind zone changes the disc lifetime substantially, a better constraint on dead/wind zone sizes can improve our understanding of the window left for planet formation in protoplanetary discs. §.§ Sensitivity of the disc size R_o to the threshold surface density The outer radius R_o is determined by the imposed surface density threshold. Incorrect selection of the threshold can lead to misinterpreting how the disc size changes over time (see Section <ref>). Hence, it is necessary to examine the sensitivity of R_o to the surface density threshold. We select 6 thresholds Σ_thres, ranging from 10^-12 to 10^-2 g cm^-2 in steps of 2 dex, to trace R_o for all simulations in this work every 0.1 Myr, except those with lifetimes shorter than ∼ 1 Myr. Figure <ref> shows R_o traced by different thresholds for "hybrid" discs (represented by the fiducial model) and the two "naive" models. A threshold of 10^-2 g cm^-2 can effectively trace the disc expansion or contraction for the naive models, but will misleadingly trace a shrinking disc for the fiducial model when the outer disc is in fact spreading. A slightly smaller threshold of 10^-3 g cm^-2 still fails to trace the motion of more than half of the discs that are wrongly traced by Σ_thres=10^-2 g cm^-2 in Table <ref>. The outer disc behaviour is only captured accurately when a threshold of ≲ 10^-4 g cm^-2 is adopted. This value is quite close to the maximum sensitivity limited by photodissociation of ^12CO. A lower threshold could be achieved by observing neutral atomic carbon, found in a thin layer sandwiched between the carbon ionization front and the ^12CO region <cit.>. Recent observations suggest it originates from a more elevated layer than ^12CO and its isotopologues <cit.>. However, the low signal-to-noise ratio in the outer disc in real observations <cit.> may limit its capability to accurately trace the disc at even larger radii than ^12CO can. The almost constant R_o with decreasing thresholds when Σ_thres<10^-8 g cm^-2, shown in the left and right panels of Figure <ref>, arises from the simplified photoevaporation prescription adopted in our model, which efficiently removes gas with Σ_g<10^-8 g cm^-2, resulting in a very sharp outer edge at these low surface densities. We further investigate the robustness of the 10^-4 g cm^-2 threshold for discs with a few of the combinations of α discussed before, but with smaller R_c,0 and R_dz,out (R_c,0=10/30au with R_dz,out being 0.5/1.5 R_c,0), and for an initially more compact disc (dlogΣ_g/dlog R=-3/2). All of these results validate the threshold of 10^-4 g cm^-2 for accurately tracing the pattern of R_o. We therefore caution that observations with lower sensitivity may not accurately capture the evolution of the outer edges of real discs <cit.>. 
§ POPULATION SYNTHESIS In the previous sections, we discussed discs of a single initial mass (0.01 M_⊙), with two different initial characteristic radii R_c,0 (60au and 120au) and three different dead/wind zone outer edges R_dz,out (30au, 75au and 135au). However, star-disc systems which form and evolve in distinct environments tend to have different initial conditions. The local radiation and magnetic fields also possibly influence the values of α_SS and α_DW, and the dead/wind zone sizes. We refer to the variations in initial properties among individual discs as the personalities of discs. Although we do not have much knowledge of α_DW, measurements of α_SS inferred from observations suggest a relatively large range of values <cit.>. Furthermore, we lack observational constraints on the dead/wind zone outer edges. All the undetermined factors above affect the disc properties discussed in Section <ref>, and hence disc demographics. In the following section, we implement two small-scale population syntheses based on our hybrid disc models to address whether groups of discs possessing different personalities still exhibit the observable disc expansion or contraction predicted by the naive models <cit.>. §.§ Methods We assume discs in the first population have various disc masses, characteristic radii and dead/wind zone fractions, but the same transition profile, i.e. identical combinations of α_DW and α_SS. We assume a combination of moderate viscous and wind torques from the above discussion and adopt α_DW,dz=10^-3, α_DW,out=10^-4 and α_SS,out=10^-3. We draw 1000 radii from exponentially distributed characteristic radii ranging from 20 to 200 au with steps of 5 au, which cover the majority of gas disc sizes measured by ^12CO (see Appendix <ref> for a collection of 98 ^12CO disc sizes). The exponential distribution is described by p(r_i)=exp(-3log(r_i))/∑_iexp(-3log(r_i)), where p(r_i) is the probability of the characteristic radius r_i. The exponential distribution is also an assumption based on Figure <ref>. Although high-resolution studies of discs from ALMA Large Programs suggest both Class 0/I and Class II discs traced by ^12CO can spread to hundreds of au <cit.>, these samples were selected in various ways, and are generally biased towards more extended discs. The fact that roughly 50 per cent of the ^12CO discs in the incomplete collection of Figure <ref> have sizes larger than ∼150au partially reflects this bias. Considering that small and faint discs are less likely to be detected in gas, discs with smaller sizes are likely to make up an even larger fraction. We assume a uniform distribution of the ratio of the dead/wind zone size to the disc characteristic radius, from 10 to 120 per cent with steps of 10 per cent, for the poorly constrained "dead/wind zone" sizes. For example, discs with R_c,0=60au have a "dead/wind zone" from 0.1 to 12 au if it takes 20 per cent of the characteristic radius. We simply assume a binary uniform distribution for the disc mass (0.01 M_⊙ and 0.05 M_⊙), as discs that share all other parameters and differ only in disc mass exhibit a scaling relation in disc sizes. In the second population, we extend the dimensions of disc personalities by additionally varying α. We uniformly draw 1000 samples for α_DW,dz, α_DW,out and α_SS,out from the values adopted in Section <ref> (see also Table <ref>), respectively, and combine them as 1000 sets of α for discs. We then integrate the samples of α-combinations into the initial properties of the first population to constitute the second one. 
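The sampling of the two toy populations can be sketched as follows. The base of the logarithm in p(r_i) is not specified here, so the sketch assumes a natural logarithm, for which the weights reduce to r^-3; the α values are read from the list of simulations tabulated in the appendix, and all names are placeholders.

import numpy as np

rng = np.random.default_rng(42)
N = 1000

# Characteristic radii: 20-200 au in 5 au steps, weighted by p(r) ∝ exp(-3 ln r) = r^-3
radii = np.arange(20.0, 205.0, 5.0)
w = radii ** -3.0
R_c0 = rng.choice(radii, size=N, p=w / w.sum())

# Dead/wind zone outer edge: a uniform fraction (10-120 per cent) of R_c,0
frac = rng.choice(np.linspace(0.1, 1.2, 12), size=N)
R_dz_out = frac * R_c0

# Disc mass: binary uniform choice (in Msun)
M_d = rng.choice([0.01, 0.05], size=N)

# Second population only: additionally draw an alpha combination per disc
a_dw_dz = rng.choice([5e-4, 1e-3, 1e-2], size=N)
a_dw_out = rng.choice([1e-5, 1e-4, 1e-3], size=N)
a_ss_out = rng.choice([3e-4, 1e-3, 1e-2], size=N)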
The distributions of parameters sampled for the first (only the upper panels) and the second population (all the panels) can be found in Figure <ref>. We characterise the disc sizes of the two populations by R_o and adopt thresholds of Σ_g=10^-2 g cm^-2 and 10^-4 g cm^-2 to mimic observations taken with low and high sensitivity, as in Section <ref>. We randomly sample 100 disc sizes from each population at ages between 0.1 and 10 Myr, and plot them against the disc age. We include samples having a disc size of 0au, and samples that are coincidentally selected multiple times from the same model. The former represent discs that have dispersed by the time of observation (the disc lifetime is shorter than the specified time) and the latter represent discs with the same personalities. §.§ Results Figure <ref> shows gas sizes vs. disc ages for a single draw from the two populations, accompanied by the distributions of disc properties for each draw. We see for both populations that discs measured by higher-sensitivity observations generally have larger sizes. This is consistent with our conclusion drawn in Section <ref>. In Figure <ref>, where discs have the same combinations of α_SS and α_DW, gas sizes measured by high-sensitivity observations (blue dots) increase slightly over time, aligning with the expectation for a viscosity-dominated outer disc, which is the case assumed in our models. The increasing gas sizes can also be partly attributed to the tendency for larger discs to survive for a longer time (Section <ref>). This increasing trend nearly vanishes when discs are observed with lower sensitivity (pink dots) due to the inability of a higher threshold to accurately trace the outer disc motion (Section <ref> and also the right panel of Figure <ref>). When we also consider varying combinations of α (Figure <ref>), discs with similar ages have more diverse sizes, represented by more scattered dots in the upper left panel of Figure <ref> than in Figure <ref>. The large scatter in Class II disc sizes has also been observed in <cit.> and <cit.>. This scattering due to disc "personalities" makes the increase of radii over time shown in higher-sensitivity observations in Figure <ref> even weaker. Therefore, capturing how disc sizes change with time can be challenging even when we ignore the uncertainties that exist in age estimation and radius measurement, as it requires high-sensitivity observations, which approach the limitation imposed by photodissociation of ^12CO (Section <ref>), for both populations studied here. We repeatedly draw 100 gas disc sizes from the synthesised population 100 times. We see weak variations in the overall pattern in the disc size–age diagram depending on the randomly-selected samples. This may make it difficult to conclude which mechanisms drive the motion of outer discs given the selected samples. It is worth noting that the two populations discussed here are based on different assumptions regarding α. The first assumes universal combinations of α, including α_SS,out, which dominates the disc expansion (see Section <ref>), while the second assumes varying α among individuals. It is likely that a more realistic case is in between, but measurements of α_SS in the outer disc by ALMA observations vary by orders of magnitude <cit.>, and are limited in constraining the distribution of α_SS <cit.>. 
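The "snapshot" demographic described above can be sketched as below; each model is assumed to expose its time array, the corresponding R_o values and its lifetime (a hypothetical structure rather than the actual data format):

import numpy as np

rng = np.random.default_rng(1)

def snapshot_sizes(models, n_draw=100, t_min=0.1, t_max=10.0):
    # (age, R_o) pairs for n_draw randomly chosen discs; dispersed discs get R_o = 0
    ages = rng.uniform(t_min, t_max, n_draw)                 # Myr
    picks = rng.integers(0, len(models), n_draw)             # sampling with replacement
    sizes = []
    for age, i in zip(ages, picks):
        m = models[i]                                        # dict with 't', 'R_o', 'lifetime'
        if age > m["lifetime"]:
            sizes.append(0.0)
        else:
            sizes.append(np.interp(age, m["t"], m["R_o"]))   # R_o interpolated at the drawn age
    return ages, np.array(sizes)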
A more detailed and larger-scale population synthesis (such as <cit.>), which is beyond the scope of our toy population study, has the potential to constrain the preferred disc properties, such as α_SS, α_DW, the lever arm λ and even the dead zone size, based on present theories by comparing with observations statistically. However, our limited knowledge of fundamental disc properties, such as the distributions of disc masses and sizes, which are inputs to population modelling, and the biased observations used as references, may limit the usefulness of such comparisons. § IMPLICATIONS AND LIMITATIONS §.§ Observational implications Previous research seeking the mechanisms responsible for angular momentum transport associates the mechanisms with either gas disc sizes <cit.> or stellar accretion rates <cit.>. When gas discs spread over time, transport of angular momentum is attributed to viscosity; otherwise, magnetised winds are considered instead. The distribution of stellar accretion rates also serves as a proxy for the two mechanisms. However, these two observational diagnostics may only trace local disc physics (in the outer and inner disc, respectively) when a more realistic disc model accounting for the dead/wind zone is employed. The incorporation of MHD winds in the disc alters the disc lifetime and the dominant process of mass removal (Section <ref> and <ref>) from those of the traditional viscous disc in this study, indicating that winds can remove angular momentum with higher efficiency[This is likely a consequence of the well-known fact that the lifetime of a wind-driven disc is significantly shorter than that of a viscous disc for the same α.]. But in some hybrid models, the inner and outer discs still behave like a viscous disc, i.e., accreting for the former (Section <ref>), and expanding (Section <ref>) for the latter. That is to say, for an individual disc evolving similarly to some hybrid models, even if we can observe its gas size growing, or its accretion rate behaving like that of a viscous disc over an unrealistically long time (a few million years), we can only conclude that viscosity dominates the expansion in the outer disc or the stellar accretion in the inner disc. The problem becomes more complicated when we are limited to observing demographic "snapshots" of evolving populations. In such a case, we cannot ignore the pitfall presented by disc personalities, which make statistically identifying how gas disc sizes vary with time challenging (Section <ref>). Previous studies investigating the dominant mechanisms of disc evolution simply assume a homogeneous α_DW or α_SS for the entire disc and remain ambiguous in the use of "disc evolution". When a hybrid disc with dead/wind zones is considered, angular momentum can be transported by different mechanisms in different regions of one disc. Disc evolution can refer to the evolution of stellar accretion rates, disc sizes and also the mass loss fractions by different processes, which can be distinct from angular momentum transport. Nevertheless, characterising disc sizes and stellar accretion rates, and studying them in demographics, still remains crucial. Although they have limited capability in signifying the major contributor to the global angular momentum transport, i.e., how the angular momentum is transported at all radii of a specific disc, they do inform us about the dominant mechanisms of local angular momentum transport, i.e., how the angular momentum is transported in the very inner disc, in the intermediate disc, and in the outer disc. 
§.§ Limitations The models presented in this work are relatively simple and are not able to precisely reproduce the complete personalities of protoplanetary discs. One of the major uncertainties arises from our lack of constraints on the strengths and configurations of magnetic fields, and their evolution. The lever arm λ is assumed to be a time-independent parameter, and the evolution of magnetic fields (α_DW∝Σ_g^-ω) is treated in an oversimplified way in our study. External radiation and disc-disc interactions in dense environments are efficient in modifying disc sizes <cit.>, but are also not considered here. The outer disc expansion due to magnetic fields beyond the radius truncated by external photoevaporation <cit.> is not included in our wind analytical solution. Additionally, we only consider hybrid discs as discs simultaneously driven by viscosity and winds, in which the dominant mechanism only varies with location. A more realistic case might be that the dominant mechanism also varies with time <cit.>, i.e. the majority of angular momentum is probably transported by different mechanisms at different times. We have also not explored the interaction between gas and dust in the disc evolution. While small dust is well-coupled to the gas, larger dust, which suffers radial drift <cit.>, behaves differently from the gas. The dust/gas dynamics can be further complicated by the coagulation and fragmentation of particles, which can change the size distribution of dust <cit.>, and by the dust back-reaction on the gas when the dust-to-gas ratio is non-negligible <cit.>. All of these result in significant differences between the radial distributions of dust and gas. While ALMA now allows gas observations with higher resolution and sensitivity, the vast majority of observations still only trace the dust. Inclusion of dust components in future studies, with the aid of radiative transfer techniques, would potentially allow us to study how dust evolves in "hybrid" discs with dead/wind zone models. Meanwhile, a full-scale population synthesis factoring in disc personalities can provide insight into correlations inferred from observations <cit.>. § CONCLUSION In this paper we have run a suite of 1-D gas simulations (a total of 92 individual models) to study the evolution of "hybrid" protoplanetary discs regulated by radially varying α-parametrized viscosity (α_SS(r)) and magneto-hydrodynamic winds (α_DW(r)), as well as internal photoevaporation. Our models are broadly consistent with the current understanding of protoplanetary discs in terms of several properties, such as stellar accretion rates, gas disc sizes and lifetimes. We vary α_SS, α_DW, the initial disc characteristic radius R_c,0 and the dead/wind zone outer edge R_dz,out in the "hybrid" models, and compare the evolution of their properties with those of "naive" models (purely viscous and wind-only discs). This understanding of "hybrid" discs is further applied at the population level to examine the effectiveness of gas disc sizes in differentiating the dominant mechanisms transporting angular momentum. We summarise the main results as follows: * The radially varying α invariably creates gas substructures around the inner (R_dz,in) and outer (R_dz,out) edges of the dead/wind zone. The disc surface density profiles from models in this study can be classified into three categories by their morphologies. However, we caution that the stability of these substructures requires investigation with 2-D and 3-D hydrodynamic simulations. 
* Comparison with naive models shows that hybrid discs behave mainly like viscous discs in terms of stellar accretion rates and disc expansion, but behave like wind-driven discs in terms of cumulative mass loss and lifetimes. * We measure disc sizes in three ways: the characteristic radius R_c, beyond which the surface density drops sharply; the transition radius R_t, delimiting the accreting (Ṁ(R)>0) inner disc from the spreading (Ṁ(R)<0) outer disc; and the outer radius R_o, defined by a threshold surface density. The first two consistently increase for all the "hybrid" models explored here, while the third contracts when magnetised winds dominate the outer disc (when the parameterization of the magnetic field evolution leads to α_DW,out/α_SS,out>10 at late times). * Winds originating from a radius larger than the disc inner edge may only be able to drive local accretion where winds dominate. The fact that viscosity still drives the observed stellar accretion rate for "hybrid" discs places obstacles in differentiating the two mechanisms by the distribution of stellar accretion rates. * We conducted two small-scale population syntheses, with the first fixing α but varying initial disc masses, initial characteristic radii and dead/wind zone outer edges, and the second additionally varying α. The gas disc expansion over time vanishes unless discs are observed at very high sensitivity (Σ_thres=10^-4 g cm^-2), which approaches the limitation set by photodissociation of ^12CO. This reveals that identifying the dominant mechanism of angular momentum transport in the outer disc from measuring disc sizes in "snapshot" demographics can be more challenging than previously thought. * Our hybrid models show that the inclusion of magnetised winds substantially changes the disc evolution time-scale, and the cumulative mass loss fractions by different physical processes. This implies that winds may transport angular momentum more efficiently than viscosity does. However, the physical processes dominating angular momentum transport can differ from those governing stellar accretion and disc expansion. As a result, stellar accretion rates and gas disc sizes may be less valid proxies of global angular momentum transport, but good indicators of local angular momentum transport. Other observable diagnostics should be considered jointly in order to determine the dominant mechanism transporting the majority of angular momentum in the disc. § ACKNOWLEDGEMENTS ST acknowledges the University of Leicester for a Future 100 Studentship. RA acknowledges funding from the Science & Technology Facilities Council (STFC) through Consolidated Grant ST/W000857/1. This project has received funding from the Fondazione Cariplo, grant no. 2022-1217, and the European Research Council (ERC) under the European Union's Horizon Europe Research & Innovation Programme under grant agreement no. 101039651 (DiscEvol). Views and opinions expressed are however those of the author(s) only, and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. This work benefitted strongly from the Core2disk-III residential program of Institut Pascal at Université Paris-Saclay, with the support of the program "Investissements d'avenir" ANR-11-IDEX-0003-01. This research used the ALICE High Performance Computing Facility at the University of Leicester. 
§ DATA AVAILABILITY The observational data used in this paper are from the compilation of <cit.>, and are publicly available at <http://ppvii.org/chapter/15/>. Data generated in simulations and codes reproducing figures in this work are available upon reasonable request to the corresponding author. This work made use of Jupyter <cit.>, Matplotlib <cit.>, Numpy <cit.>, Scipy <cit.>, Astropy <cit.> and Pandas <cit.>. § CODE TESTING Our numerical results (coloured lines) are plotted over the analytical solutions (indicated by grey shades at the corresponding time), which are normalized to the initial accretion time-scale t_acc,0=R_c,0/(3ϵ_c c_s,c α(t=0)). α(t=0) is the sum of α_DW and α_SS at t=0. ϵ_c and c_s,c are the aspect ratio (H/R) and sound speed at the initial characteristic radius R_c,0, respectively. Here, we fix the initial α_DW or α_SS to be 10^-3. In cases where both effects are considered, the same value (10^-3) is assigned to each, giving rise to α(t=0)=2×10^-3. Figure <ref> shows that the numerical results match the analytical solutions well for the pure wind and the hybrid cases, but are a little off for the pure viscosity case and the Σ_c-dependent case in the later evolutionary stage. We attribute the deviation in the former to the zero-torque boundary condition imposed at the inner boundary. The latter arises from the numerical discretization and is further complicated by the dependence of α_DW on the disc mass computed from the surface density (α_DW∝ M_d(t)^-ω). The variation of Σ_g can result in changes in α_DW, and these quantities jointly determine the accretion rates driven by viscosity and MHD winds, which in turn alter the disc mass and hence the surface density profile. However, even though the relative difference in the surface density between the numerical and the analytical solution looks large, the absolute difference is negligible, as only 10^-5 of the initial gas disc mass remains at t=4 t_acc,0. § MEASUREMENTS OF THE CHARACTERISTIC RADIUS R_C The two-fold physical meaning of R_c (the cutoff radius, beyond which the surface density drops exponentially, and the radius enclosing 63 per cent of the total disc mass) inspires us to characterise it in two ways. First, we measure dlogΣ_g/dlog R for every two adjacent cells and then compute the distribution of these slopes, which is further used to calculate cumulative frequency-weighted slopes by varying the fraction of slopes included in the calculation. We use the initial surface density, where R_c,0 is determined, to calibrate the threshold fraction. We also make sure R_c is always beyond any gas substructures present in the profile. The characteristic radius R_c evaluated in this way is denoted as R_c,exp. Second, we measure the radius that encloses 63 per cent of the total disc mass and denote it as R_63. For each model, we measure R_c every 0.5Myr by both approaches and compare them. The relative differences between the two R_c estimates are within 30 per cent for more than 70 per cent of the measured disc sizes. The remaining ∼ 30 per cent of disc sizes are mainly (∼ 94 per cent) from discs falling in Category A (see Section <ref>), especially those with a large α_SS,out=10^-2. The small α_DW,dz enhances mass accumulation in the intermediate disc while the large α_SS,out facilitates the disc expansion in the outer disc, enhancing the disparity in the surface density around R_dz,out and pushing R_63 to a smaller radius than its initial location. 
When the jump in the surface density is smoothed by the viscosity at later times, R_63 returns to be comparable to R_c,exp (relative differences <30 per cent). Hence, in this study, we use R_c,exp as the characteristic radius R_c. § DISTRIBUTION OF MEASURED 12CO DISC SIZE We collated gas disc sizes traced by ^12CO from previous studies <cit.>. We ignore the differences in disc sizes characterised by different rotational transitions, ^12CO (2-1) and ^12CO (3-2), as they tend to be less than 10 per cent <cit.>. These sizes are measured in two approaches. When the measurement is directly performed in the image plane, a disc size enclosing a certain fraction (commonly 68 or 90 per cent) of the total flux density is either obtained from an increasing elliptical aperture <cit.>, or an azimuthally averaged radial intensity profile of the disc <cit.>. The other method first requires an input for the visibility modelling, and then measures the disc size from the modelled image plane following methods mentioned above. Commonly used models for visibilities from previous studies are Gaussian, Nuker and power-law models <cit.>. We define the size as a radius enclosing 90 per cent of the total flux density (R_CO,90). For literature that uses 68 per cent (R_CO,68) instead, we simply assume discs are Gaussian and convert R_CO,68 to R_CO,90 by multiplying a factor of 1.42 <cit.>. If discs are explicitly denoted as non-Gaussian in previous studies, we adopt the R_CO,68 directly. For example, some discs in <cit.> are modelled with Nuker profiles and only R_CO,68 is provided. Discs measured in the visibility plane and modelled using a non-Gaussian function, specifically a power-law function in this instance <cit.>, have their sizes as reported in the literature here. The largest and smallest measurements are taken separately for discs that have been measured multiple times and plotted in two distributions in Figure <ref>. § DISTRIBUTIONS OF VARYING PARAMETERS IN THE POPULATION SYNTHESIS Figure <ref> shows the distributions of parameters, including initial disc masses M_d, initial characteristic radii R_c,0, dead/wind zone outer edges R_dz,out, α_DW,dz, α_DW,out and α_SS,out for 1000 samples in the first and second populations described in Section <ref>. § LIST OF SIMULATIONS A full list of 92 models carried out in this study and their results on disc lifetimes, cumulative mass-loss fractions by wind extraction, stellar accretion and photoevaporation. cccccccccc Summary of simulations that we have studied in Section <ref>. Column 2-6 list parameters for each simulation. Column 7 provides lifetimes of discs (see Section <ref>). Column 8-10 give the cumulative mass-loss fractions of wind extraction, stellar accretion and internal photoevaporation. No. 3cα 2cRadius (au) 1cLifetime 3cMass loss fraction (r)2-4 (r)5-6 (r)8-10 α_DW, dz α_DW, out α_SS, out R_c,0 R_dz,out (Myr) Wind Acc. PE (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) [0.5ex] [1ex] 3cα 2cRadius (au) 1cLifetime 3cMass loss fraction (r)2-4 (r)5-6 (r)8-10 α_DW, dz α_DW, out α_SS, out R_c,0 R_dzo (Myr) Wind Acc PE (1) (2) (3) (4) (5) (6) (7) (8) (10) (11) [0.5ex] [1ex] visc. 
1 × 10^-3 1 × 10^-3 1 × 10^-3 60 – >12 – 0.84 0.16 wind 1 × 10^-3 1 × 10^-3 1 × 10^-3 60 – ∼ 5.1 0.87 0.13 – 1 5 × 10^-4 1 × 10^-5 3 × 10^-4 60 30 10.37 0.49 0.20 0.31 2 5 × 10^-4 1 × 10^-5 1 × 10^-3 60 30 7.79 0.55 0.22 0.23 3 5 × 10^-4 1 × 10^-5 1 × 10^-2 60 30 4.13 0.63 0.26 0.11 4 5 × 10^-4 1 × 10^-4 3 × 10^-4 60 30 10.36 0.53 0.20 0.27 5 5 × 10^-4 1 × 10^-4 1 × 10^-3 60 30 7.68 0.57 0.23 0.21 6 5 × 10^-4 1 × 10^-4 1 × 10^-2 60 30 4.11 0.63 0.26 0.11 7 5 × 10^-4 1 × 10^-3 3 × 10^-4 60 30 5.87 0.71 0.23 0.06 8 5 × 10^-4 1 × 10^-3 1 × 10^-3 60 30 6.06 0.67 0.23 0.10 9 5 × 10^-4 1 × 10^-3 1 × 10^-2 60 30 3.88 0.65 0.26 0.09 10 1 × 10^-3 1 × 10^-5 3 × 10^-4 60 30 8.93 0.52 0.18 0.30 11 1 × 10^-3 1 × 10^-5 1 × 10^-3 60 30 6.72 0.59 0.20 0.21 12 1 × 10^-3 1 × 10^-5 1 × 10^-2 60 30 3.52 0.66 0.23 0.11 13 1 × 10^-3 1 × 10^-4 3 × 10^-4 60 30 9.00 0.56 0.18 0.26 14 1 × 10^-3 1 × 10^-4 1 × 10^-3 60 30 6.64 0.60 0.20 0.20 15 1 × 10^-3 1 × 10^-4 1 × 10^-2 60 30 3.49 0.66 0.23 0.11 16 1 × 10^-3 1 × 10^-3 3 × 10^-4 60 30 5.03 0.74 0.21 0.05 17 1 × 10^-3 1 × 10^-3 1 × 10^-3 60 30 5.22 0.71 0.21 0.08 18 1 × 10^-3 1 × 10^-3 1 × 10^-2 60 30 3.28 0.68 0.23 0.09 19 1 × 10^-2 1 × 10^-5 3 × 10^-4 60 30 7.27 0.57 0.16 0.27 20 1 × 10^-2 1 × 10^-5 1 × 10^-3 60 30 5.51 0.63 0.18 0.19 21 1 × 10^-2 1 × 10^-5 1 × 10^-2 60 30 2.82 0.70 0.20 0.10 22 1 × 10^-2 1 × 10^-4 3 × 10^-4 60 30 7.41 0.60 0.16 0.24 23 1 × 10^-2 1 × 10^-4 1 × 10^-3 60 30 5.44 0.64 0.18 0.18 24 1 × 10^-2 1 × 10^-4 1 × 10^-2 60 30 2.79 0.71 0.20 0.09 25 1 × 10^-2 1 × 10^-3 3 × 10^-4 60 30 4.11 0.77 0.19 0.04 26 1 × 10^-2 1 × 10^-3 1 × 10^-3 60 30 4.24 0.74 0.19 0.07 27 1 × 10^-2 1 × 10^-3 1 × 10^-2 60 30 2.57 0.72 0.20 0.08 28 5 × 10^-4 1 × 10^-5 3 × 10^-4 60 75 7.89 0.59 0.21 0.20 29 5 × 10^-4 1 × 10^-5 3 × 10^-4 60 135 8.02 0.65 0.22 0.13 30 5 × 10^-4 1 × 10^-5 3 × 10^-4 120 30 >12.00 0.59 0.22 0.19 31 5 × 10^-4 1 × 10^-5 3 × 10^-4 120 75 11.60 0.48 0.15 0.37 32 5 × 10^-4 1 × 10^-5 3 × 10^-4 120 135 11.88 0.56 0.16 0.28 33 5 × 10^-4 1 × 10^-5 1 × 10^-3 60 75 6.71 0.63 0.22 0.15 34 5 × 10^-4 1 × 10^-5 1 × 10^-3 60 135 7.51 0.67 0.22 0.11 35 5 × 10^-4 1 × 10^-5 1 × 10^-3 120 30 11.16 0.47 0.18 0.35 36 5 × 10^-4 1 × 10^-5 1 × 10^-3 120 75 9.79 0.55 0.17 0.28 37 5 × 10^-4 1 × 10^-5 1 × 10^-3 120 135 10.89 0.61 0.17 0.22 38 5 × 10^-4 1 × 10^-5 1 × 10^-2 60 75 4.96 0.68 0.24 0.08 39 5 × 10^-4 1 × 10^-5 1 × 10^-2 60 135 6.48 0.70 0.23 0.07 40 5 × 10^-4 1 × 10^-5 1 × 10^-2 120 30 5.96 0.59 0.23 0.18 41 5 × 10^-4 1 × 10^-5 1 × 10^-2 120 75 6.47 0.66 0.21 0.13 42 5 × 10^-4 1 × 10^-5 1 × 10^-2 120 135 8.62 0.69 0.20 0.11 43 5 × 10^-4 1 × 10^-4 1 × 10^-2 60 75 4.95 0.68 0.24 0.08 44 5 × 10^-4 1 × 10^-4 1 × 10^-2 60 135 6.47 0.70 0.23 0.07 45 5 × 10^-4 1 × 10^-4 1 × 10^-2 120 30 5.91 0.60 0.23 0.17 46 5 × 10^-4 1 × 10^-4 1 × 10^-2 120 75 6.46 0.66 0.21 0.13 47 5 × 10^-4 1 × 10^-4 1 × 10^-2 120 135 8.61 0.69 0.19 0.11 48 1 × 10^-3 1 × 10^-5 3 × 10^-4 60 75 5.19 0.64 0.19 0.17 49 1 × 10^-3 1 × 10^-5 3 × 10^-4 60 135 4.37 0.71 0.20 0.09 50 1 × 10^-3 1 × 10^-5 3 × 10^-4 120 30 >12.00 0.53 0.17 0.30 51 1 × 10^-3 1 × 10^-5 3 × 10^-4 120 75 8.65 0.52 0.14 0.34 52 1 × 10^-3 1 × 10^-5 3 × 10^-4 120 135 7.34 0.61 0.15 0.24 53 1 × 10^-3 1 × 10^-5 1 × 10^-3 60 75 4.22 0.67 0.20 0.13 54 1 × 10^-3 1 × 10^-5 1 × 10^-3 60 135 4.01 0.72 0.21 0.07 55 1 × 10^-3 1 × 10^-5 1 × 10^-3 120 30 9.94 0.50 0.16 0.34 56 1 × 10^-3 1 × 10^-5 1 × 10^-3 120 75 7.21 0.59 0.16 0.25 57 1 × 10^-3 1 × 10^-5 1 × 10^-3 120 135 6.50 0.66 0.16 0.18 58 1 × 10^-3 1 × 10^-4 1 × 10^-2 60 75 2.77 0.72 
0.22 0.06 59 1 × 10^-3 1 × 10^-4 1 × 10^-2 60 135 3.36 0.74 0.21 0.05 60 1 × 10^-3 1 × 10^-4 1 × 10^-2 120 30 5.24 0.63 0.21 0.16 61 1 × 10^-3 1 × 10^-4 1 × 10^-2 120 75 4.19 0.69 0.19 0.12 62 1 × 10^-3 1 × 10^-4 1 × 10^-2 120 135 4.79 0.73 0.18 0.09 63 1 × 10^-2 1 × 10^-5 3 × 10^-4 60 75 3.00 0.69 0.18 0.13 64 1 × 10^-2 1 × 10^-5 3 × 10^-4 60 135 1.18 0.75 0.19 0.06 65 1 × 10^-2 1 × 10^-5 3 × 10^-4 120 30 10.47 0.44 0.11 0.45 66 1 × 10^-2 1 × 10^-5 3 × 10^-4 120 75 5.81 0.57 0.13 0.30 67 1 × 10^-2 1 × 10^-5 3 × 10^-4 120 135 3.24 0.67 0.14 0.19 68 1 × 10^-2 1 × 10^-5 1 × 10^-2 60 75 1.38 0.76 0.19 0.05 69 1 × 10^-2 1 × 10^-5 1 × 10^-2 60 135 0.68 0.78 0.19 0.03 70 1 × 10^-2 1 × 10^-5 1 × 10^-2 120 30 4.47 0.67 0.18 0.15 71 1 × 10^-2 1 × 10^-5 1 × 10^-2 120 75 2.78 0.73 0.16 0.11 72 1 × 10^-2 1 × 10^-5 1 × 10^-2 120 135 1.78 0.76 0.16 0.08 73 1 × 10^-2 1 × 10^-4 1 × 10^-2 60 75 1.38 0.76 0.19 0.05 74 1 × 10^-2 1 × 10^-4 1 × 10^-2 60 135 0.68 0.78 0.19 0.03 75 1 × 10^-2 1 × 10^-4 1 × 10^-2 120 30 4.44 0.67 0.18 0.15 76 1 × 10^-2 1 × 10^-4 1 × 10^-2 120 75 2.77 0.73 0.16 0.11 77 1 × 10^-2 1 × 10^-4 1 × 10^-2 120 135 1.77 0.77 0.16 0.07 78 1 × 10^-2 1 × 10^-3 3 × 10^-4 60 75 2.77 0.78 0.19 0.03 79 1 × 10^-2 1 × 10^-3 3 × 10^-4 60 135 1.58 0.79 0.19 0.02 80 1 × 10^-2 1 × 10^-3 3 × 10^-4 120 30 7.72 0.75 0.15 0.10 81 1 × 10^-2 1 × 10^-3 3 × 10^-4 120 75 6.16 0.76 0.15 0.09 82 1 × 10^-2 1 × 10^-3 3 × 10^-4 120 135 4.36 0.78 0.15 0.07 83 1 × 10^-2 1 × 10^-3 1 × 10^-3 60 75 2.48 0.77 0.19 0.04 84 1 × 10^-2 1 × 10^-3 1 × 10^-3 60 135 1.25 0.78 0.19 0.03 85 1 × 10^-2 1 × 10^-3 1 × 10^-3 120 30 7.00 0.71 0.15 0.14 86 1 × 10^-2 1 × 10^-3 1 × 10^-3 120 75 4.96 0.74 0.15 0.11 87 1 × 10^-2 1 × 10^-3 1 × 10^-3 120 135 3.26 0.76 0.15 0.09 88 1 × 10^-2 1 × 10^-3 1 × 10^-2 60 75 1.35 0.76 0.19 0.05 89 1 × 10^-2 1 × 10^-3 1 × 10^-2 60 135 0.70 0.78 0.19 0.03 90 1 × 10^-2 1 × 10^-3 1 × 10^-2 120 30 4.09 0.70 0.18 0.12 91 1 × 10^-2 1 × 10^-3 1 × 10^-2 120 75 2.68 0.74 0.16 0.10 92 1 × 10^-2 1 × 10^-3 1 × 10^-2 120 135 1.77 0.77 0.16 0.07 § VISUALISATION OF RESULTS FROM SIMULATIONS Figure <ref> is visualisation of data below the dividing line in Table <ref>.
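As a side note, the factor of 1.42 used in Appendix C to convert R_CO,68 into R_CO,90 for Gaussian discs can be reproduced directly from the enclosed-flux profile of a circular Gaussian. The short sketch below is only illustrative and assumes azimuthal symmetry.

```python
import numpy as np

def gaussian_flux_radius(fraction, sigma=1.0):
    """Radius enclosing `fraction` of the total flux of a circular Gaussian,
    I(R) ~ exp(-R^2 / (2 sigma^2)), so F(<R)/F_tot = 1 - exp(-R^2 / (2 sigma^2))."""
    return sigma * np.sqrt(-2.0 * np.log(1.0 - fraction))

ratio = gaussian_flux_radius(0.90) / gaussian_flux_radius(0.68)
print(round(ratio, 2))   # 1.42, the conversion factor adopted in Appendix C
```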
http://arxiv.org/abs/2407.13088v1
20240718013049
Scheduling Deep Learning Jobs in Multi-Tenant GPU Clusters via Wise Resource Sharing
[ "Yizhou Luo", "Qiang Wang", "Shaohuai Shi", "Jiaxin Lai", "Shuhan Qi", "Jiajia Zhang", "Xuan Wang" ]
cs.DC
[ "cs.DC" ]
Scheduling Deep Learning Jobs in Multi-Tenant GPU Clusters via Wise Resource Sharing Yizhou Luo1, Qiang Wang2, Shaohuai Shi2, Jiaxin Lai1, Shuhan Qi2, Jiajia Zhang2, Xuan Wang2Corresponding authors: Qiang Wang, Shaohuai Shi Harbin Institute of Technology (Shenzhen) Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies 1{23S151149,200110515}@stu.hit.edu.cn, 2{qiang.wang,shaohuais,shuhanqi,zhangjiajia,wangxuan}@hit.edu.cn ==================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Deep learning (DL) has demonstrated significant success across diverse fields, leading to the construction of dedicated GPU accelerators within GPU clusters for high-quality training services. Efficient scheduler designs for such clusters are vital to reduce operational costs and enhance resource utilization. While recent schedulers have shown impressive performance in optimizing DL job performance and cluster utilization through periodic reallocation or selection of GPU resources, they also encounter challenges such as preemption and migration overhead, along with potential DL accuracy degradation. Nonetheless, few explore the potential benefits of GPU sharing to improve resource utilization and reduce job queuing times. Motivated by these insights, we present a job scheduling model allowing multiple jobs to share the same set of GPUs without altering job training settings. We introduce SJF-BSBF (shortest job first with best sharing benefit first), a straightforward yet effective heuristic scheduling algorithm. SJF-BSBF intelligently selects job pairs for GPU resource sharing and runtime settings (sub-batch size and scheduling time point) to optimize overall performance while ensuring DL convergence accuracy through gradient accumulation. In experiments with both physical DL workloads and trace-driven simulations, even as a preemption-free policy, SJF-BSBF reduces the average job completion time by 27-33% relative to the state-of-the-art preemptive DL schedulers. Moreover, SJF-BSBF can wisely determine the optimal resource sharing settings, such as the sharing time point and sub-batch size for gradient accumulation, outperforming the aggressive GPU sharing approach (baseline SJF-FFS policy) by up to 17% in large-scale traces. Distributed Deep Learning, Job Scheduling, Communication Contention § INTRODUCTION The popularity of Deep Neural Network (DNN) <cit.> grows rapidly in both industry and academia with its significant role in various applications, such as computer vision and natural language processing. With more and more training data and larger model size, training deep models becomes very time-consuming. Distributed Deep Learning (DDL) <cit.> is widely adopted to speed up the training procedure, which distributes the training workload to a cluster of workers and exploit the parallel computing power to accelerate the training process. In the data center scenario where the hardware resources are shared by multiple users, multiple online DDL training jobs are running simultaneously, and the resource contention could lead to severe performance degradation if the training jobs are not scheduled properly <cit.>. 
For such an online scheduling system that concurrently handles a rising number of jobs, flexible resource allocation and efficient job scheduling are indispensable to maximize the resource utilization. There exist some traditional schedulers <cit.> to schedule different computing tasks, but they are not specifically designed for DDL training jobs and cannot leverage the characteristics of DDL (such as iterativeness and convergence properties) for maximal training efficiency. Existing DL job management and scheduling systems <cit.> commonly employ preemptive and exclusive strategies to enhance system utilization and minimize job completion time. The advanced heuristic scheduler Tiresias <cit.> demonstrated that the shortest-remaining-service-first (SRSF) algorithm generally yields optimal results when job durations are known. However, small jobs still experience delays waiting for GPU resource release when the cluster is predominantly occupied by large jobs. The state-of-the-art representative is Pollux <cit.>, which dynamically (re-)assigns resources to improve cluster-wide goodput, while respecting fairness and continually optimizing each DL job to better utilize those resources. However, Pollux helps users choose the GPU resources as well as tune the training hyper-parameters, which may result in model accuracy degradation <cit.>. Overall speaking, in preemptive and exclusive policies, long-term job packing can exacerbate HOL (Head-of-line) blocking issues and prolong JCT (Job Completion Time). Consequently, jobs with small training iterations and low GPU demand may face severe queuing and starvation issues, while those large ones can suffer from high migration overhead. In several recent schedulers, including Gandiva <cit.>, Zico <cit.>, Salus <cit.> and Lucid <cit.>, there has been a notable shift towards emphasizing resource sharing, particularly regarding GPU and network resources. This shift aims to enhance overall resource utilization while addressing queuing and starvation issues effectively. Gandiva <cit.> introduced GPU time-slicing and job scheduling based on predicted DDL training job characteristics, albeit with a conservative approach limiting GPU sharing to single-GPU jobs. Yu et al. <cit.> tackled network resource sharing in multiple all-reduce based DDL job training, reducing communication contention overhead. Lucid <cit.> utilized an indolent packing strategy to mitigate interference effectively. However, their search was confined to a limited solution space due to the inability to alter the training batch size. On the contrary, gradient accumulation has become standard feature in many deep learning training frameworks <cit.>. The basic idea of gradient accumulation is to accumulate gradients from multiple micro-batches and only then update the model parameters. This is particularly helpful in training very large neural networks <cit.>, where workers can only fit one small micro-batch at a given time, saving GPU memory footprint requirement. From an optimization perspective, gradient accumulation is completely equivalent to training with a larger mini-batch size, since in both cases the gradient is averaged with respect to all computed examples. Motivated by the observations outlined above, we introduce a job scheduling model that enables multiple jobs to run concurrently on one or more GPUs. 
In contrast to the approach of scaling the training batch size and tuning training hyper-parameters as in <cit.>, we investigate the potential of GPU sharing to improve the overall performance. This model is coupled with gradient accumulation to address GPU memory limitations and ensure model convergence. The contributions of this paper can be summarized as follows: * We introduce a novel DDL job scheduling model enabling multiple jobs to fully or partially share the same set of GPUs while ensuring model convergence through gradient accumulation. Unlike existing methods that increase batch size and GPU numbers to enhance performance, risking accuracy degradation, our model focuses on GPU resource sharing across jobs and mitigates GPU memory constraints through gradient accumulation, thereby potentially reducing queuing time for waiting DDL jobs. * We propose SJF-BSBF (shortest job first with best sharing benefit first), a straightforward yet effective scheduling algorithm for the aforementioned problem. Initially, we derive the optimal solution for scheduling a job pair (one ongoing job and one new arrival) to decide GPU sharing feasibility and launch timing. Subsequently, we employ a greedy strategy to determine batch size and GPU allocation, minimizing interference with existing jobs and reducing queuing time. * Through both physical and simulated experiments, we evaluate SJF-BSBF on different scales of job traces. Compared to recent DL schedulers such as Tirasias <cit.> and Pollux <cit.>, SJF-BSBF reduces average job completion time by 27-33%. Additionally, compared to the first-fit GPU sharing approach for new arrival jobs, SJF-BSBF avoids those sharing decisions that may degrade the overall performance, surpassing it by up to 17%. § RELATED WORK Scheduling DL training jobs has garnered significant interest recently. Research in this field primarily focuses on fully utilizing computing resources and allocating them effectively to achieve optimal efficiency in multi-tenant GPU computing environments. Here we discuss two categories: preemptive and exclusive schedulers, as well as non-preemptive schedulers. Preemptive and Exclusive Schedulers. These schedulers possess the capability to interrupt or preempt a running job in order to allocate exclusive resources to another job with higher priority. This mechanism ensures that allocated resources remain inaccessible to other jobs while the current job is utilizing them, thereby fostering predictable resource usage patterns and mitigating interference between jobs. Early works such as Optimus <cit.> and Cynthia <cit.> relied on job time prediction, making simplistic assumptions about training convergence curves. Tiresias <cit.> addressed the severe resource starvation issue by proposing adaptive scheduling algorithms with effective job migration strategies. Other studies, such as Harmony <cit.> and Spear <cit.>, leveraged deep reinforcement learning to provide efficient solutions aimed at minimizing average job completion time or makespans. Another line of research in DDL job scheduling algorithms relies on theoretical formulation and optimization, treating DDL job scheduling as constrained optimization problems. The recent state-of-the-art Pollux <cit.> dynamically reallocates resources to enhance cluster-wide throughput while ensuring fairness and continually optimizing each DL job to maximize resource utilization. However, these methods cannot guarantee no accuracy degradation for all models. 
Moreover, they may encounter performance degradation due to migration <cit.> and GPU under-utilization <cit.>. Non-preemptive Schedulers. Early non-preemptive schedulers predominantly relied on heuristic algorithms based on job characterization and hardware performance modeling. In recent studies, attention has shifted towards resource sharing, encompassing GPU and network resources, which holds significant potential for improving computing resource utilization and alleviating starvation. Gandiva <cit.> introduced GPU time-slicing and job scheduling by predicting DDL training job characteristics. However, it adopted a conservative approach, limiting GPU sharing to single-GPU jobs. Zico <cit.> focused on system-wide memory consumption for concurrent training and devised a feasible memory management solution to ensure that concurrent jobs do not exceed the allocated memory budget. Wang et al. <cit.> and Yu et al. <cit.> addressed network resource sharing in multiple ring-all-reduce based DDL job training, alleviating communication contention overhead. Lucid <cit.> employed an indolent packing strategy to mitigate interference. However, few of these approaches offer a general and flexible solution for sharing GPUs among DL jobs. § PRELIMINARIES §.§ S-SGD Based Distributed Deep Learning The DNN model is trained in an iterative manner with the target of minimizing a loss function ℒ(W, D), where W and D are respectively the model weights and the input data. For large-scale DNNs, the data-parallel synchronized SGD (S-SGD) is widely applied to train models with multiple workers (say N workers, and indexed by g) because it has the same convergence performance as the sequential SGD. Generally the i^th iteration of the training contains four steps: a) Each worker g loads a mini-batch of local data D_i^g into the device memory. b) Each worker g performs a feed forward on D_i^g through the neural network and computes the value of the loss function ℒ(W_i, D_i^g). c) The first order gradients w.r.t. W_i are calculated by backpropagation. d) Gradients from all the workers ∇ℒ(W_i, D_i^g) are aggregated, averaged and then distributed, which is often tackled by the All-Reduce collective function. Then all the workers update the model as Eq. (<ref>). W_i+1 = W_i-ξ1/N∑_g=1^N∇ℒ(W_i, D_i^g). §.§ All-Reduce Communication The most common scenario of DDL training is using a large number of computing devices distributed among nodes in a cluster. As a result, the step d) involves extra communication overheads In Eq. (<ref>), we use Δ W_i = 1/N∑_g=1^N∇ℒ(W_i, D_i^g) to represent the aggregation of gradients from N workers, which can be done through an all-reduce operation or through a set of parameter servers. For brevity, we assume that the number of nodes is power-of-two. Given the number of nodes N and the message size M, the time cost of one All-Reduce operation without contention can be generalized as where a and b are two constant numbers that are not related to M <cit.>. The inter-node communication cost can be modelled as Eq. (<ref>). The values of a and b depend on the algorithms for the All-Reduce operation with different number of processes and message sizes <cit.>. Without loss of generality, we do not limit the communication model to one specific algorithm. T_allreduce=a + bM. § SYSTEM MODELING AND PROBLEM FORMULATION For ease of reference, we summarize some frequently used notations throughout this paper in Table <ref>. 
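Before introducing the cluster and job models, the S-SGD procedure summarised above (steps a–d and the update of Eq. (<ref>)) can be made concrete with a short PyTorch-style sketch. This is only an illustration: the process group is assumed to be initialised elsewhere, the variable names are ours, and production frameworks overlap these all-reduce calls with backpropagation rather than issuing them afterwards.

```python
import torch.distributed as dist

def s_sgd_step(model, optimizer, loss_fn, local_batch):
    """One S-SGD iteration on worker g: local forward/backward on D_i^g,
    all-reduce gradient averaging over the N workers, then the weight update."""
    inputs, targets = local_batch                 # step a: load the local mini-batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)        # step b: feed forward, compute the loss
    loss.backward()                               # step c: backpropagate local gradients
    world_size = dist.get_world_size()
    for p in model.parameters():                  # step d: aggregate, average, distribute
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size
    optimizer.step()                              # update W_{i+1} as in Eq. (<ref>)
    return loss.item()
```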
We consider a multi-tenant GPU cluster comprising |𝒮| servers equipped with |𝒩| GPUs evenly distributed. These servers are interconnected with a network switch possessing sufficient bandwidth. All GPUs within the cluster share the same specifications and theoretical peak performance. At the onset of a scheduling horizon |𝒯| spanning time-slots, a set of DDL jobs 𝒥 awaits scheduling for training over the duration of |𝒯|. Each job J_k ∈𝒥 is characterized by the number of GPUs it requires, denoted as 𝒢(J_k), and the total number of training iterations I_k requested by its users. §.§ DL Job Training Time Modeling We first model the training time of one job, which includes the GPU computation time and network communication of the all-reduce operation. §.§.§ Modeling GPU Computation The DL model is trained using back-propagation. The computation time on GPU scales linearly with the per-GPU batch size B, which can be calculated as follows. t_comp(B)=α_comp + β_comp× B. §.§.§ Modeling Network Communication The gradient aggregation overhead depends on the topology as well as the network communication algorithm. We simply define the communication part as follows. t_comm=α_comm + β_comm× M, where M is the message size, and α_comm,β_comm are the all-reduce time model parameters as described in Section <ref>. §.§.§ Sharing Performance Modeling Existing schedulers that facilitate GPU sharing, such as Gandiva <cit.>, Gavel <cit.>, and Lucid <cit.>, often adopt conservative and limited approaches or require additional application information to generate schedules. In contrast, we apply a simple interference model to describe the overhead of GPU sharing. We illustrate three possible job schedules for two jobs sharing the same set of GPUs in Figure <ref>. Schedule (a) sequentially executes two DL jobs. Schedules (b) and (c) involve invoking two DL jobs simultaneously or with partial overlap, resulting in varying degrees of interference penalty. To optimize the average job completion time, one must balance the tradeoff between job queuing/waiting time (Job 2 waits for Job 1 to finish in (a)) and interference penalty (complete overlap of two jobs leads to severe penalty in (b)). In practice, the job iteration time under GPU sharing can be measured and modeled by equations (<ref>) and (<ref>), as they occupy partial GPU and network resources with similar trends. To simplify the model, if a new job shares GPUs occupied by an existing job (Job A and Job B), we adjust their job iteration time as follows. t̂_A = t_A ξ_A, t̂_B = t_B ξ_B, where ξ_A and ξ_B denote the interference ratios, reflecting the performance degradation resulting from GPU sharing. The solution to determining the optimal scheduling point under this scenario will be discussed in Section <ref>. §.§.§ Modeling Gradient Accumulation Given that GPU memory constraints may limit the per-GPU batch size, some schedulers tackle this limitation through memory offloading <cit.> (which may introduce additional system overhead) or by adjusting batch sizes and other training hyper-parameters <cit.> (which may compromise model accuracy). As our model incorporates GPU sharing, the memory footprint frequently imposes constraints on feasibility. Thus, we focus on gradient accumulation, which can dynamically reduce the sub-batch size while preserving the original model accuracy as per the user's requested batch size. It is also easily implemented using popular DL frameworks. 
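For concreteness, a minimal PyTorch-style sketch of one logical training iteration with gradient accumulation is given below; the accumulation step s equals the number of micro-batches, and the names are illustrative rather than taken from our implementation.

```python
def accumulated_iteration(model, optimizer, loss_fn, micro_batches):
    """Split the requested batch B into s = len(micro_batches) sub-batches of size
    B/s, accumulate their gradients, and update the weights once, which yields the
    same averaged gradient as a single pass over the full batch B."""
    s = len(micro_batches)
    optimizer.zero_grad()
    for inputs, targets in micro_batches:
        loss = loss_fn(model(inputs), targets) / s   # scale so the gradients average over B
        loss.backward()                              # gradients accumulate in p.grad
    optimizer.step()                                 # one weight update per logical iteration
```

Only one sub-batch resides on the GPU at a time, which is what makes it feasible to share the device with another job without exceeding the memory budget.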
It is important to note that one can utilize gradient accumulation algorithms to manage the computational aspect, thereby reducing batch size to mitigate memory consumption. We subsequently define the overall iteration time as follows. t_iter^j=(s-1) × t_comp^j(B/s) + ((t_comp(B/s))^δ+t_comm^δ)^1/δ, where s represents the accumulation step required to attain the original batch size, and δ denotes the degree of overlap between GPU computation and all-reduce communication, as initially proposed in <cit.>. It is important to acknowledge that δ may vary when different batch sizes are applied. §.§ Scheduling Modeling In this paper, we adopt the "gang-scheduling" discipline widely prevalent in practical large-scale GPU clusters <cit.>. Under gang scheduling, all workers (i.e., GPUs) of a DDL job must be allocated simultaneously. Furthermore, once a job commences its scheduled run, all allocated GPUs must remain dedicated to the job until its completion, with no allowances for preemption or migration. (It is worth noting that frequent job preemption and migration can significantly degrade performance <cit.>). Upon job completion, the occupied resources are simultaneously released. Differing from conventional GPU-exclusive scheduling policies, we permit GPUs to be occupied by multiple workers from various jobs concurrently. These workers can be allocated within a single server or across multiple servers, provided there exists a network path connecting them. Assume y_jg[τ] denotes that the job j uses GPU g in the time slot t. Job j requires G_j number of GPUs. ∑_g ∈𝒢y_jg[τ] = G_j. To ensure that one GPU at most holds C jobs, we have ∑_j ∈ J[τ]y_jg[τ] ≤ C, ∀ j ∈𝒥[t], τ∈𝒯, g ∈𝒢. In practice, we observe that interference degradation can be severe, rarely improving performance when more than two jobs share the same set of GPUs. Therefore, we set C=2 in our context. Also, since we consider gang scheduling, we have y_jg[τ] =y_gs[τ-1], ∀ s ∈ S, j ∈𝒥[τ], a_j < τ≤ T_j, y_jg[τ] =0, ∀ g ∈𝒢, j ∉𝒥[τ], τ∈𝒯, y_jg[τ] ∈ℤ^+, ∀ g ∈𝒢, j ∈𝒥[τ], τ∈𝒯. The completion time of job j can be calculated as T_j = a_j + arg τmin ∑_τ∈𝒯1/t_iter^j≥ I_k, ∀ j ∈𝒥[τ], τ≥ a_j, ϕ_j[t] = B_k/t_iter^j, where ϕ_j[τ] denotes the system throughput of the job. In practice, it is more common to monitor and collect DL training throughput using popular DL frameworks. The throughput can be readily converted to iteration time given the training batch size. By measuring DL job throughput under both sole execution and concurrent execution with other jobs, we can fit the time model (Equation (<ref>)) for both cases and naturally infer the interference ratio ξ. Figure <ref> illustrates the throughputs of all DL models in our experiments across a range of resource allocations and batch sizes. Overall, our model closely represents the observed data. We also notice that different jobs exhibit varying sensitivities to network communication and GPU workloads. For instance, BERT shows a linear increase with batch size within the experimental range for all GPU configurations, indicating that the bottleneck lies in GPU computation and is constrained by GPU memory. Additionally, YoloV3 mostly achieves peak throughput with a batch size of 16 and encounters network bottlenecks when the GPU number exceeds 12. We also measure the system throughput of different job pairs and training configurations, as depicted in Figure <ref>. We find that throughput can be fitted by Equation (<ref>), albeit with different parameters from the solely running mode. 
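As an illustration of how these models are used, the sketch below evaluates the iteration time and throughput of a job once the parameters (α_comp, β_comp, α_comm, β_comm, the overlap degree δ and, under GPU sharing, the interference ratio ξ) have been fitted to measured throughputs. The parameter values passed in would come from such a fit and are not reproduced here.

```python
def iteration_time(B, s, M, p, delta, xi=1.0):
    """Per-iteration time with s gradient-accumulation steps (sub-batch size B/s),
    partially overlapped all-reduce communication, and an optional interference
    ratio xi applied when the job shares its GPUs with another one."""
    t_comp = p["alpha_comp"] + p["beta_comp"] * (B / s)   # GPU computation per sub-batch
    t_comm = p["alpha_comm"] + p["beta_comm"] * M         # all-reduce time for message size M
    solo = (s - 1) * t_comp + (t_comp**delta + t_comm**delta) ** (1.0 / delta)
    return xi * solo

def throughput(B, s, M, p, delta, xi=1.0):
    """Samples processed per second, phi = B / t_iter, the quantity measured and fitted above."""
    return B / iteration_time(B, s, M, p, delta, xi)
```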
Moreover, the interference ratios of different cases exhibit a wide range of up to 6 in our experiments, emphasizing that avoiding unfavorable cases is crucial for improving overall performance. §.§ Problem Formulation In this paper, our goal is to determine the scheduling decisions y_jg[t] to minimize the average JCT, which is commonly used to evaluate the efficiency of DL job schedulers <cit.>. This optimization problem can be formulated as follows: y_jg[t],∀ j,g,tmin∑_j ∈ J[t] T_j. Our DDL job scheduling problem arises from the following system setting and assumptions: §.§.§ The GPU cluster A GPU cluster consists of N_s servers, S_i, i ∈ [1, N_s], connected with a network switch of sufficient bandwidth. Each server S_i has N_g GPUs. g_i,j denote the j-th GPU on the i-th server. All the GPUs have the same specification and theoretical peak performance, denoted by P_i,j. The network performance of each server is modelled by Eq. (<ref>) and shared among different jobs. §.§.§ DDL job characteristics A job set 𝕁 of N_𝕁 training jobs arriving over time. The job J_k arrives at A_k. Each job can be represented by a DAG. We assume that the tasks in each job are non-preemptive, and the job is preemptive if the ongoing task is finished. §.§.§ The allocated GPUs for each job 𝔾(J_k) denotes the set of GPUs used by J_k and it can be within the same server or across different servers. 𝔾(J_k) will not change for each job. Each GPU can only be occupied by one job at any time slot. § SOLUTION We note that Problem (<ref>) presents an integer non-convex program with packing and covering constraints, which is NP-hard. Given these challenges, we opt to explore a heuristic approach that provides a provable local optimum guarantee for a job pair that shares GPUs either completely or partially. In this section, we describe our solution to address the problem formulated in Section <ref>. The solution comprises two parts. Firstly, we address the simple case of two jobs: one running on the GPUs while the other awaits scheduling. It is important to note that concurrent execution on a GPU may degrade overall performance if interference is significant. Thus, we must decide whether the jobs should share GPUs and when to launch the waiting one. This gives rise to Theorem 1, which forms the core of our solution by providing a feasible solution when cluster resources are insufficient. Secondly, we introduce our scheduling algorithm SJF-BSBF (shortest job first with best sharing benefit first), built upon Theorem 1 and the shortest job first strategy. By judiciously selecting job pairs that benefit from GPU sharing, even acting in a non-preemptive manner, SJF-BSBF reduces job queuing time while avoiding scenarios where sharing may detrimentally impact overall performance. §.§ Scheduling One Job Pair We assume that all the tasks of a DL job are assigned to a fixed set of GPUs during its execution. Before we design the scheduling algorithm, each new-arriving DL job should be placed to a certain set of intra-node or inter-node processors, which is called job placement. Assume that there is a new job A sharing the GPUs occupied by the existing job B, and their execution time under concurrent execution is respectively t̂_A = t_A ξ_A, t̂_B = t_B ξ_B, and κ is the inserting time. We have the following theorem. Theorem 1 The shortest JCT of the above job pair is achieved by either sequentially executing them (κ=t_A i_A) or simultaneously invoking them concurrently κ=0. 
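Before giving the formal proof, the practical content of Theorem 1 can be stated as code: compare the average JCT of the fully overlapped schedule (κ=0) against the fully non-overlapped one and keep the better option. The sketch below is an illustrative re-implementation rather than the code used in our scheduler; here "sequential" means the new job A waits for the running job B to release the GPUs, i_A and i_B denote remaining iterations, and the proof of the two cases follows.

```python
def concurrent_jct(t_A, i_A, xi_A, t_B, i_B, xi_B):
    """Completion times when A starts immediately (kappa = 0): both jobs run with
    their shared-mode iteration times until one finishes, after which the
    survivor reverts to its solo speed."""
    tA_sh, tB_sh = t_A * xi_A, t_B * xi_B
    if tA_sh * i_A <= tB_sh * i_B:               # A finishes first
        T_A = tA_sh * i_A
        T_B = T_A + t_B * (i_B - T_A / tB_sh)    # B completes its remaining iterations solo
    else:                                        # B finishes first (Case 1 below)
        T_B = tB_sh * i_B
        T_A = T_B + t_A * (i_A - T_B / tA_sh)
    return T_A, T_B

def should_share(t_A, i_A, xi_A, t_B, i_B, xi_B):
    """Share the GPUs only if full overlap yields a lower total (hence average) JCT."""
    T_A_c, T_B_c = concurrent_jct(t_A, i_A, xi_A, t_B, i_B, xi_B)
    T_B_s = t_B * i_B                            # non-overlapped: A waits for B
    T_A_s = T_B_s + t_A * i_A
    return (T_A_c + T_B_c) < (T_A_s + T_B_s)
```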
Case 1: If t̂_Ai_A ≥t̂_Bi_B, then T_A = t̂_Bi_B + t_A × (i_A - t̂_Bi_B/t̂_A), T_B = t̂_Bi_B. The average time is T = (T_A + T_B) / 2, = t̂_Bi_B + t_A i_A/2 - t̂_̂B̂i_B/2ξ_A. Case 2: If t̂_Ai_A < t̂_Bi_B, then T_A = κ + t̂_A× (i_A - κ/t_A), T_B = κ + t̂_A× (i_A - κ/t_A) + t_B × (i_B - t̂_̂Â× (i_A - κ/t_A)/t̂_̂B̂). The average time is T = (T_A + T_B) / 2 = (2ξ_B+ξ_A-2ξ_Aξ_B/2ξ_B)κ + (1-1/2ξ_B)t̂_̂Âi_A + 1/2t_B i_B. If 2ξ_B+ξ_A-2ξ_Aξ_B > 0, it is an monotonically increasing function with respect to κ. The minimum value is achieved at κ=0, which indicates that one should start the new job immediately. Otherwise, it is an monotonically increasing function with respect to κ. The minimum value is achieved at κ=t_A i_A, which indicates that overlapping two jobs degrade the overall performance. In practice, evaluating the conditions for the best solution is the same as directly comparing the fully overlapped time and the fully non-overlapped time in terms of time cost. §.§ Scheduling Multiple DL Jobs One critical challenge in addressing Problem (<ref>) is the allocation of GPUs when all GPUs in the cluster are occupied by existing jobs, and a new job arrives. One needs to determines 1) which job to share resources with the new arrival, 2) deciding when to initiate the new job. Our goal is to develop a heuristic efficient online scheduling algorithm for it. §.§.§ Basic Idea We propose an online scheduling algorithm called SJF-BSBF (smallest job first with best sharing benefit first). Algorithm <ref> describes the steps of SJF-BSBF. The intuition behind Algorithm <ref> has three points. 1) As for job priority, the overall framework is based on the shortest job first (SJF) strategy, as tackled by Lines 1-2. This size-based heuristic strategies determine the job priority according to their used GPU numbers. We apply SJF since it performs well most of time for online scheduling problems <cit.>. 2) As for GPU allocation, since the case of scheduling two jobs concurrently running on the same GPUs (wholely or partially) is considered in our paper, once the free GPU number is not enough to execute the job, we manage to look for those already occupied by the running jobs to schedule the new one. This is the core logic of SJF-BSBF and handled by Lines 3-19. 3) In the point of 2), for each job pair, we should also decide the batch size of the new job for gradient accumulation that not only exceeds the GPU memory size but also achieves the shortest JCT of scheduling the job pair. This corresponds to Line 11. §.§.§ GPU Allocation Given a job J_k that needs G_J_k GPUs, we should decide a set of GPUs to schedule it. The classical heuristic algorithms include First-Fit (FF) <cit.> and List-Scheduling (LS) <cit.>. In Algorithm <ref>, Lines 3-19 present the step of choosing the GPUs. First, if there are enough GPUs to execute J_k, we select the top-k GPU in 𝒢_free to make them as colidated on the nodes as possible (Lines 6-7). Second, notice that we allow at most two jobs to concurrently run on the same GPUs. Once the free GPU number is smaller that the request of J_k, we attempt to seek those GPUs that are already occupied by one job (Lines 10-17). We scan G_OJ and determine the best concurrent running setting of the running job and J_k, including the batch size of J_k and whether to let them share the GPUs, using Algorithm <ref> introduced later. We add those pairs that can benefit from the sharing strategy (Lines 10-13). Then we sort 𝒥_share by the JCT of the job pair in ascending order (Line 14). 
Finally, we pick up the GPUs from those candidate jobs until the total number can fulfill the request of J_k (Lines 15-17). Notice that we do not pick the free GPUs at first for this case to save resources because the completion time of J_k is determined by those shared GPUs. §.§.§ Batch Size Scaling In Algorithm 2, given a running job J_run and a new job J_k ready to be scheduled, we present how to adaptively adjust the batch size for J_k to achieve the shortest average JCT of these two jobs. Notice that we do not adjust the batch size of the running job to reduce the complexity of the scheduling system. We search the batch size in the range [1, B_J_k] with a step of power two (Lines 5 and 12). For each candidate batch size, we use Theorem 1 to obtain the best configuration of scheduling the job pair, including the flag of whether to let them share GPUs (SF) and the JCT (Line 6). Then we record that configuration if better (Lines 7-11). Notice that it is possible that the new job J_k may not be scheduled immediately and put back to the pending job pool if the final SF is False, indicating that running the job pair concurrently is not optimal. §.§.§ Time complexity of SJF-BSBF The time consumption of SJF-BSBF is primarily attributed to searching the GPU set for a pending job to be shared when there is insufficient resource (Lines 9 to Line 18 in Algorithm <ref>). Initially, a for loop (Line 10) scans all GPUs with one job, iterating |𝔾(OJ)| times. Subsequently, each iteration executes Algorithm <ref> to determine the time point to initiate GPU sharing as well as the appropriate batch size, with a time complexity of θ(log2 (BJ_k)). Lastly, after collecting candidate jobs for sharing GPUs, sorting the list in ascending order to select those with the shortest Job Completion Time (JCT) requires θ(|𝒥share|log2(|𝒥share|)). Consequently, the time complexity for scheduling a job is θ(|𝔾(OJ)|log2 (BJ_k)+|𝒥share|log2(|𝒥share|)). In our system implementation on a 16-GPU cluster, the overhead of periodically scheduling those waiting jobs is negligible, averaging below 0.02 seconds for each operation. § PERFORMANCE EVALUATION §.§ Experimental Setup Cluster configurations: We first conduct physical experiments on a cluster of four servers. Each server is equipped with an Intel Xeon CPU E5-2670 and four Nvidia GeForce 2080 Ti GPUs. The network is configured with Fat-Tree topology with 10 Gbps connected to a 100-Gbps switch. All experiments are performed in the environment of Ubuntu 20.04, PyTorch 1.18, CUDA 11.2 and OpenMPI 4.0. Based on the data measured on the physical environment, we then conduct simulation experiments to resemble the physical cluster configuration and test large scale of clusters and job traces. To evaluate the performance of SJF-BSBF in a large-scale cluster (16 servers each with 4 GPUs) with long-term traces, we also implement a simulator to record job events and resource usage. All experiment results without explicit comments are derived from the simulation. Baselines: We consider the following baselines. §.§.§ First-In-First-Out (FIFO) a traditional but popular policy adopted by several well-known cluster management systems, such as Yarn and Kubernetes. However, it usually performs poor due to its runtime-agnostic scheduling paradigm. Picks the top-G_j GPU with least execution time first. §.§.§ Shortest Job First (SJF) an ideal policy to minimize the average JCT without preemption by prioritizing short-term jobs to overcome HOL blocking. 
It is impractical as it requires perfect job information which is impossible to attain. §.§.§ Tiresias <cit.> a preemptive policy that prioritizes least attained service jobs (i.e., consumed GPU numbers and training iterations). Under this policy, it helps short-term jobs escape from resource starvation and finish earlier without any prior information. §.§.§ Shortest Job First with First Fit Sharing (SJF-FFS) a sharing policy built upon SJF. It is similar to our proposed SJF-BSBF except that it does not search the best sharing configuration as SJF-BSBF but allocates the job to those GPUs that only have one job in a first fit manner if the free GPUs are not sufficient for the new job. This policy is a comparison baseline to validate the effectiveness of wisely sharing the GPUs in SJF-BSBF. §.§.§ Pollux <cit.> the state-of-the-art elastic scheduler that adaptively adjust the GPU resources for each job to optimize the overall job performance as well as resource utilization. As explained in <cit.>, Pollux cannot guarantee no accuracy degradation for all models as it allows the scheduler to tune the training batch size, while our SJF-BSBF applies gradient accumulation to attain the same convergence as the original user specific batch size setting. For physical experiments, we compare our SJF-BSBF with FIFO, SJF and Tiresias to demonstrate the advantages of resource sharing over those exclusive-mode policies. For simulation experiments, we also add Pollux, one of the state-of-the-art elasticity-based scheduler, to compare the sharing-based and the elasticity-based policies. Workload Settings: We generate the workload similar to the Microsoft job trace <cit.>. More details about the Microsoft trace can be found in <cit.> and Appendix of <cit.>. For the physical experiments, considering that our testbed only has 4 nodes with 16 GPUs, we generate totally 30 DDL jobs by scaling down the original job trace. As job characteristics, such as the number of GPUs and the training iterations, we mostly follow the distributions of the real trace: 20 jobs using no more than 8 GPUs and 10 jobs using 12 or 16 GPUs. The training iteration of jobs varies from 100 to 5000. For the simulation experiments, we mainly follow the settings of Pollux <cit.>. We randomly sample 240 jobs from the busiest period in the deep learning cluster traces published by Microsoft <cit.> and also annotate six DL tasks (BERT, CIFAR10, DeepSpeech2, ImageNet, NCF and YoloV3) used in Pollux to them. The settings of GPU numbers and training iterations also follow those of Pollux. §.§ Experimental Results on a Physical Cluster JCT Improvements: Figure <ref> demonstrates the JCT distributions of the baseline workload using different scheduling policies. Nearly 80% of jobs have no more than 0.75 hour of JCTs using our SJF-BSBF, while other algorithms only have less than 70%. SJF-BSBF generally achieves the best performance. In Table <ref>, it is reported that SJF-FFS and SJF-BSBF, which allows the GPUs to be shared among jobs, have considerable performance improvements over other policies. In particular, SJF-BSBF achieves a 27% lower average JCT than Tiresias. Besides, instead of allowing GPU sharing in a greedy manner, SJF-BSBF can adaptively select the best job combination as well as the scheduling point to avoid those job pairs that may bring down the overall performance, which outperforms SJF-FFS by 9% in terms of JCT. 
Job Queuing Delay: Figure <ref> shows the average queuing time of different scheduling policies on different DDL job models. First, the queuing time of SJF-BSBF is generally lower than those heuristic policies with the exclusive GPU mode. For the model BERT, SJF-BSBF reduces the queuing time by nearly 44% compared to Tiresias. Second, since SJF-FFS allows the jobs to share the GPUs in an aggressive manner, it generally has the lowest queuing time. However, as reported in Figure <ref> and Table <ref>, it usually leads to a longer JCT since some job pairs have a high interference ratio and subsequently hurt the overall performance. §.§ Experimental Results on Large-Scale Simulations To verify the fidelity of our simulator, we also compare the results of physical experiments with simulations. We observe that the simulator can achieve the realistic experimental performance within 5% relative percentage errors on both makespan and average JCT. This confirms the high fidelity of our simulator. JCT Improvements: We first compare the JCTs of different scheduling policies on the standard simulation workload. In Figure <ref>, it is evident that SJF-BSBF outperforms other policies. Nearly 40% of jobs of SJF-BSBF achieves lower than 500 seconds of JCTs, reducing the average JCT of the shortest 40% jobs by 37% than Pollux. This demonstrates the preemption-free policy can even obtain better performance than the preemptive policy, such as Tiresias and Pollux. Tables <ref> and <ref> present the performance of different scheduling policies for 240 jobs and 480 jobs, respectively. Jobs are characterized based on their requested number of GPUs, with those requiring more than 4 GPUs considered large, and others small. For the workload of 240 jobs, SJF-BSBF demonstrates slightly better performance than the advanced policy Pollux. While large jobs under SJF-BSBF may experience longer JCTs than Pollux due to GPU sharing overhead, small jobs benefit significantly by potentially sharing GPUs with large jobs, resulting in markedly shorter queuing times compared to other preemption-free policies. This advantage is further accentuated as the number of jobs increases. In Table <ref> with 480 jobs, SJF-BSBF enhances the average JCT by nearly 3 times compared to Pollux, primarily attributable to the reduction in queuing time for small jobs. Moreover, SJF-BSBF outperforms SJF-FFS by reducing the average JCT and queuing time by 17% and 5.5%, respectively. Job Queuing Delay: Figure <ref> presents a comparison of the average queuing time among different scheduling policies for various DDL job tasks in simulation. Notably, the GPU sharing policies, namely SJF-BSBF and SJF-FFS, consistently yield lower queuing times compared to heuristic policies operating in exclusive GPU mode. Additionally, preemptive policies such as Tiresias and Pollux often exhibit longer queuing times attributable to job migration. Sensitivity to job load: We compare the performance of our SJF-BSBF to other existing policies for increasing workload intensity in terms of job submission frequencies. We scale the baseline workload of 240 jobs by 0.5×∼2×, ranging from 120 jobs to 480 jobs. Figure <ref> shows the results. An interesting phenomenon is that Pollux can have better performance than other policies when the job workload intensity is low. Pollux is more suitable for lighter workload intensity because its adaptive job batch size and resource scaling techniques are limited when clusters are overloaded, which meets the findings in <cit.>. 
However, when the workload increases, the GPU resources are rather insufficient so that Pollux cannot benefit from this strategy. Across all job workloads, our SJF-BSBF maintains relatively low improvements over other baseline policies since it allows the jobs to share the GPUs to shrink the job queuing time. Impact of Different Interference Ratios: To evaluate the impact of different interference ratios on our GPU sharing policies, SJF-FFS and SJF-BSBF, we artificially inject various values for all the jobs sharing the same GPUs in the baseline simulation workload. Figure <ref> shows the results. When the ratio is small (ξ≤1.25), which is the ideal scenario that sharing GPUs brings negligible overhead, SJF-BSBF tends to allow all the available sharing decisions as SJF-FFS, which results in the same performance. However, when the GPU sharing leads to severe slowdowns for the running jobs, our SJF-BSBF can get rid of those job pairs that may hurt the overall performance in SJF-FFS, which reduces the average JCT by 8%∼13% when ξ ranges from 1.5 to 2.0. § CONCLUSION In this paper, we delve into resource scheduling for DL jobs in a multi-tenant GPU cluster, where we harness GPU sharing capabilities to diminish job queuing time and enhance overall performance. We begin by formulating a DL scheduling model that accounts for GPU sharing among various jobs and employs gradient accumulation to surmount memory limitations while maintaining the job training accuracy. We then derive the optimal solution to schedule a job pair on the same set of GPUs and further design an efficient heuristic scheduling algorithm upon it to unleash the potential of GPU sharing in reducing the job queuing time and avoid serious interference with the running jobs. Extensive experiments, including physical implementations and simulations, were conducted to demonstrate the effectiveness of SJF-BSBF. Our findings reveal that the non-preemptive SJF-BSBF surpasses advanced preemptive policies like Tiresias and Pollux by leveraging GPU sharing techniques. Furthermore, identifying appropriate sharing settings is pivotal in mitigating severe degradation cases induced by high interference. § ACKNOWLEDGMENTS This research was supported by the National Natural Science Foundation of China (No. 62302126, No. 62302123), the Shenzhen Science and Technology Program (No. RCBS20221008093125065, No. JSGGKQTD20221101115655027, No. JCYJ20220818102414030, No. KJZD20230923115113026, No. KJZD20230923114213027), and Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies (2022B1212010005). IEEEtran
http://arxiv.org/abs/2407.12101v1
20240716180921
Better RAG using Relevant Information Gain
[ "Marc Pickett", "Jeremy Hartman", "Ayan Kumar Bhowmick", "Raquib-ul Alam", "Aditya Vempaty" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Shape-morphing membranes augment the performance of oscillating foil energy harvesting turbines Kenneth Breuer July 22, 2024 =============================================================================================== § ABSTRACT A common way to extend the memory of large language models (LLMs) is by retrieval augmented generation (RAG), which inserts text retrieved from a larger memory into an LLM's context window. However, the context window is typically limited to several thousand tokens, which limits the number of retrieved passages that can inform a model's response. For this reason, it's important to avoid occupying context window space with redundant information by ensuring a degree of diversity among retrieved passages. At the same time, the information should also be relevant to the current task. Most prior methods that encourage diversity among retrieved results, such as Maximal Marginal Relevance (MMR), do so by incorporating an objective that explicitly trades off diversity and relevance. We propose a novel simple optimization metric based on relevant information gain, a probabilistic measure of the total information relevant to a query for a set of retrieved results. By optimizing this metric, diversity organically emerges from our system. When used as a drop-in replacement for the retrieval component of a RAG system, this method yields state-of-the-art performance on question answering tasks from the Retrieval Augmented Generation Benchmark (RGB), outperforming existing metrics that directly optimize for relevance and diversity. [Code is available at <https://github.com/EmergenceAI/dartboard>.] § INTRODUCTION A limitation of transformer-based Large Language Models (LLMs) is that the number of tokens is bounded by the transformer's context window, which is typically in the thousands. This is often insufficient for representing large texts, such as novels and corporate documentation. A common way to mitigate this constraint is via retrieval augmented generation (RAG), in which a relatively small subset of relevant passages are retrieved from a larger database and inserted into an LLM's context window <cit.>. Typically, this process involves applying a similarity metric, such as cosine similarity, to (precomputed) embeddings of passages and the embedding of a query. Using this metric, many systems then use K-nearest-neighbors or a fast approximation with a vector database such as FAISS <cit.>. Importantly, K-nearest-neighbors <cit.> and related methods (such as a cross-encoder reranker <cit.>) simply return the highest individually relevant passages, without regard to whether the information in the passages is redundant. Given the premium value on LLM context-window real estate, it's important to make best use of this limited resource by minimizing redundancy, while maintaining relevance. To appreciate the importance of minimizing redundancy in a RAG context, consider a toy database of facts and the two possible sets of retrieval results in Table <ref>, for the same query, “Tell me some facts about sharks.” Both sets of retrieved results are highly relevant to the query, but only the second set is diverse enough to support a satisfactory answer. A family of methods from the Information Retrieval literature attempts to address the general issue of diversity in retrieved results by introducing a measure that explicitly balances diversity and relevance <cit.>. 
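(For concreteness, the following NumPy sketch implements this greedy selection over the pre-ranked candidates; it works with the Gaussian kernels directly rather than in log space, and the variable names are illustrative rather than taken from our released code.)

```python
import numpy as np

def gaussian_kernel(x, Y, sigma):
    """N(x, y, sigma) up to a constant factor, for one embedding x against the rows of Y."""
    return np.exp(-np.sum((Y - x) ** 2, axis=1) / (2.0 * sigma ** 2))

def dartboard_select(query_emb, cand_embs, k, sigma):
    """Greedily build G by repeatedly adding the candidate that most increases
    sum_t P(T=t | q) * max_{g in G} N(t, g), i.e. the expected relevance of the
    best guess under the target distribution implied by the query."""
    p_target = gaussian_kernel(query_emb, cand_embs, sigma)       # P(T=t | q), unnormalised
    n = len(cand_embs)
    pairwise = np.stack([gaussian_kernel(e, cand_embs, sigma) for e in cand_embs])
    selected, maxes = [], np.zeros(n)             # maxes[t] = max_{g in G} N(t, g) so far
    for _ in range(k):
        best_gain, best_idx = -np.inf, None
        for g in range(n):
            if g in selected:
                continue
            gain = np.sum(p_target * np.maximum(maxes, pairwise[g]))
            if gain > best_gain:
                best_gain, best_idx = gain, g
        selected.append(best_idx)
        maxes = np.maximum(maxes, pairwise[best_idx])   # reuse the maxes for the next round
    return selected
```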
In this paper, we propose a more principled method, Dartboard, that instead seeks to directly accomplish what previous methods are indirectly aiming for - maximize the total amount of information relevant for a given query in a set of k results. The intuition behind Dartboard is simple - we assume that one passage is the “correct” one for a given query. Our system is allowed k “guesses” and it aims to maximize the relevance score of its most relevant guess. Since the best guess is not known ahead of time, this score is weighted by the probability of that guess being the most relevant. This objective is sufficient to encourage diversity in the guesses. This is because a redundant guess does little to increase the relevance of the most relevant guess. The main contributions of this paper are 3-fold: * We introduce the Dartboard algorithm, a principled retrieval method based on optimizing a simple metric of total information gain relevant to a given query (<ref>). * We demonstrate the effectiveness of Dartboard on Retrieval-Augmented Generation Benchmark (RGB) <cit.>, a closed-domain question answering task. This benchmark consists of a retrieval component, and an end-to-end question-answering component. We show that the Dartboard algorithm, when used as the retrieval component, outperforms all existing baselines at both the component level and at end-to-end level (<ref>). * We show that instead of directly encouraging diversity, diversity naturally emerges by optimizing this metric (<ref>). § DARTBOARD The Dartboard algorithm is based on the following analogy illustrated in Figure <ref>: Suppose that we have a cooperative two-player game where a dartboard is covered with a random collection of points. Player 1 is given one of these points arbitrarily as the target. Player 1 then throws her dart aiming for the target, and it lands somewhere on the board. Where it lands is the query. Player 2 sees where Player 1's dart landed (the query), but doesn't know where the actual target is. Player 2 then picks k of the points on the board. The true target is revealed, and the score (which the players are trying to minimize) is the distance from the target to the closest guess. Note that to minimize the score, Player 2 would not want to put all his guesses right next to each other. Also, Player 2 should take into account how accurate Player 1's throws are in general. In our implementation, Player 1's accuracy is modeled by a Gaussian distribution with standard deviation . More formally, Player 1 selects a target T from a set of all points A and gives a query q. Then Player 2 makes a set of guesses G ⊆ A, resulting in a score sł(G, q, A, )̊ which is given as: sł(G, q, A, )̊ = ∑_t ∈ A Pł(T=t| q, )̊min_g ∈ G Dł(t|g)̊ where D is a distance function. For d dimensional vectors, A ⊆ℝ^d; under some assumptions, we can use a Gaussian kernel for the distance functions. For example, we can set Pł(T=t| q, )̊ = 𝒩ł(q, t, )̊. Thus, our equation becomes: sł(G, q, A, )̊∝ -∑_t ∈ A𝒩ł(q, t, )̊max_g ∈ G𝒩ł(t,g,)̊ §.§ The Dartboard Algorithm The Dartboard Algorithm aims to maximize Equation <ref> given a distance metric. In practice, we can greedily build our set G, which works well as it saves us combinatorial search, and allows reuse of previous answers (since the top-k results are a subset of the top-k+1 results). We begin by ranking top-k passages A' from our initial dataset of passages A using K-nearest-neighbors based on cosine similarity. 
We use a linear search, but sub-linear methods such as FAISS <cit.> could also be used for this initial ranking. Our search is a simple greedy optimization method with two changes - (a) we stay in log space to avoid numerical underflow, and (b) we reuse the results (maxes) from previous loops to avoid recomputing the maximums. The detailed algorithm is given in Algorithm <ref> in Appendix <ref>. In Appendix <ref>, we also show how to adapt Dartboard to use a cross-encoder based reranker (resulting in two methods called Dartboard crosscoder and Dartboard hybrid), and Appendix <ref> shows that Dartboard generalizes KNN and MMR retrieval algorithms <cit.>. § EXPERIMENTS We tested Dartboard on benchmark datasets from <cit.>, from which we used two types of closed-domain question answering. In the simple question answering case, a query is answerable from a single passage retrieved from the corpus. For example, consider the query When is the premiere of `Carole King & James Taylor: Just Call Out My Name'?. On the other hand, in the information integration case, a query would require multiple passages to be retrieved to answer the query. For example, consider the query Who is the director of `Carole King & James Taylor: Just Call Out My Name' and when is its premiere?. We modified this benchmark for our setup in the following way. The original benchmark contains “positive” and “negative” labeled passages for each query. The positive passages are useful for answering, while the negative ones are related but ineffective in answering the query. Since we are interested in the retrieval component of this task, we merged the positive and negative passages for all queries into a single collection of 11,641 passages for the 300 simple question answering test cases and 5,701 passages for the 100 information integration test cases. The evaluation is otherwise identical apart from the retrieval component. Note that the innovation of Dartboard is solely on the retrieval component. Therefore, we keep the rest of the RAG pipeline fixed. In particular, we do not modify the prompting of LLMs or try to optimize passage embeddings. Given a query and the full set of thousands of passage embeddings, we measured both a direct retrieval score and the overall end-to-end performance of the system with the only change being the retrieval algorithm. For the direct retrieval score, we computed the Normalized Discounted Cumulative Gain (NDCG) score <cit.> on retrieving any one of the “positive” passages relevant to a specific query. In the information integration case, the positive passages were split into positive ones for each component of the question. Therefore, in this case, we calculated the NDCG score for retrieving at least one positive passage for each component of the query. For the end-to-end score, given an LLM's response to the query (generated from retrieved passages), we use the same evaluation as <cit.>, which does a string match of the response on a set of correct answers, marking each response as either correct or incorrect. Some of the methods (described in Appendix <ref>), including Dartboard, have tunable parameters. For instance, Maximal Marginal Relevance (MMR) has a diversity parameter that varies from 0 to 1. We performed a grid search over these parameters, reporting the best results for each method. §.§ Results From the results shown in Table <ref>, we observe that Dartboard outperforms all state-of-the-art methods in terms of all metrics across all the tasks. 
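For reference, the retrieval-quality numbers in these comparisons are the NDCG scores described above; a minimal sketch of NDCG@k with binary relevance labels follows (simplified — in the information integration case the positives are additionally grouped per query component, as explained earlier).

```python
import numpy as np

def ndcg_at_k(retrieved_labels, n_relevant, k):
    """NDCG@k with binary labels (sketch).
    retrieved_labels: 0/1 relevance of the retrieved passages, in rank order.
    n_relevant: total number of positive passages available for the query."""
    rel = np.asarray(retrieved_labels, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float(np.sum(rel * discounts))
    ideal = np.ones(min(n_relevant, k))
    idcg = float(np.sum(ideal / np.log2(np.arange(2, ideal.size + 2))))
    return dcg / idcg if idcg > 0 else 0.0

# e.g. one positive retrieved at rank 3 out of k=5, with 2 positives in the pool:
print(ndcg_at_k([0, 0, 1, 0, 0], n_relevant=2, k=5))   # ~0.31
```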
Figure <ref> shows the performance of different retrieval methods on the end-to-end QA task (simple) as the parameters vary. Although Dartboard Crosscoder (D-CC) and Dartboard hybrid (D-H) are fairly robust to a range of σ values, the best performance is achieved for Dartboard hybrid with σ=0.096 (See Appendix <ref> for baselines). § RELATED WORK MMR retrieves documents <cit.> that are both relevant to the query and dissimilar to previously retrieved documents. It combines a relevance score (e.g., from BM25) with a novelty score that penalizes documents similar to those already retrieved. It has been used extensively for building recommendation systems <cit.> as well as for summarization tasks <cit.>. However, MMR suffers from a few limitations. The first is that MMR requires a diversity parameter to control the balance between relevance and novelty. This parameter is often dataset-specific and requires careful tuning, making it impractical for real-world applications. The second is that MMR can favor exact duplicates of previously retrieved documents, as they retain a high relevance score while minimally impacting the average novelty score (See Appendix <ref>). KNN retrieves documents based on their similarity to a query embedding <cit.>. While efficient, KNN often suffers from redundancy, as nearby documents in the embedding space tend to be semantically similar <cit.>. This can lead to a retrieved set dominated by passages conveying the same information with slight variations. Several recent works have explored incorporating diversity objectives into retrieval models <cit.>. These approaches often involve complex optimization functions or require additional training data for diversity estimation. For example, Learning-to-Rank with Diversity methods leverage learning-to-rank frameworks that incorporate diversity objectives directly into the ranking function. This allows for the optimization of both relevance and diversity during the ranking process. However, these approaches often require large amounts of labeled training data for diversity, which can be expensive and time-consuming to obtain <cit.>. Bandit-based approaches model document selection as a multi-armed bandit problem <cit.>. The model explores different retrieval strategies and receives feedback based on the relevance and diversity of the retrieved passages. These approaches can be effective but computationally expensive for large-scale retrieval tasks. RAG models have also been extended to incorporate diversity objectives. For example, RAG with Dense Passage Retrieval retrieves a large number of candidate passages <cit.>. It then employs a two-stage selection process: first selecting a diverse subset based on novelty scores, then selecting the most relevant passages from this subset. While effective, this approach requires careful tuning of the selection thresholds. § DISCUSSION In this paper, we introduce Dartboard, a principled retrieval algorithm that implicitly encourages diversity of retrieved passages by optimizing for relevant information gain. We demonstrate that Dartboard outperforms existing state-of-the-art retrieval algorithms on both retrieval and end-to-end QA tasks. We view this work as an initial step in a more general line of work that optimizes information gain during retrieval, especially in the context of RAG systems. In future work, we plan to investigate Dartboard for other retrieval tasks, such as suggestion generation (see Appendix <ref>).
§ LIMITATIONS We have not done a systematic investigation of the run time of Dartboard. In the worst-case scenario, Dartboard is quadratic in the number of ranked passages. However, in practice, Dartboard hybrid typically runs in a fraction of a second when ranking a set of 100 passages preselected by cosine similarity with the query (note that a full cross-encoder-based MMR/Dartboard needs to run the cross-encoder 10,000 times and can take several seconds). This retrieval time is minimal compared to the time required for an LLM to process the retrieved passages and generate an answer. Our experimental results are limited to a single benchmark and a single LLM, i.e., ChatGLM <cit.>. It remains to be seen whether our results would generalize to other benchmarks and LLMs. We plan to investigate this in future work. One shortcoming of our method (also shared by MMR) is that it requires a hyperparameter (σ in our case) that affects how much diversity is encouraged. While we show that Dartboard is robust to the choice of this hyperparameter, it would be ideal to have a method that does not require manual tuning. As part of future work, we plan to investigate methods that automatically adapt to the context of the query. For example, the hyperparameter could be set based on a held-out validation set. Another topic for future work is to investigate whether it is also possible for σ to vary depending on the type of query. For example, a query like “Tell me facts about The Beatles” would warrant a broader range of passages than a query like “Tell me facts about George Harrison”. Another shortcoming of our approach is that our benchmarking criterion is limited in terms of the evaluation protocol we are using. Our evaluation is based on an exact string match of the output answer generated from the LLM with a set of possible answers. For example, for one question, the generated output answer is considered correct if it contains the exact string `January 2 2022', `Jan 2, 2022', etc., but would be considered incorrect if it only contains `January 2nd, 2022'. However, we left the benchmark as is (modulo our modifications mentioned above) so that our method is easily comparable to that of others. Finally, though the initially proposed cosine-similarity-based Dartboard method is principled, the hybrid variation of Dartboard is not as principled. This is because it compares logits from a cross-encoder with the cosine similarity of a different embedding model, similar to comparing apples with oranges, though it seems to work well in our empirical results. § APPENDIX §.§ Dartboard Algorithm Details The full algorithm for Dartboard is described in Algorithm <ref>. §.§ Baselines In this section, we briefly describe the different variations of Dartboard as well as the competing retrieval methods that we use to compare the performance of Dartboard in Table <ref> in the main paper. All methods that rely on using the cross-encoder first use KNN to retrieve the top 100 passages. * Dartboard cossim (D-CS): This is the variation of the proposed Dartboard method that relies on using cosine similarity for ranking passages. * Dartboard crosscoder (D-CC): This is the variation of the proposed Dartboard method that relies on using cross-encoder based similarity. * Dartboard hybrid (D-H): This is the variation of the proposed Dartboard method that relies on using the cross-encoder for the Gaussian kernel 𝒩(q, t, σ) and cosine similarity for the Gaussian kernel 𝒩(t, g, σ).
* KNN cossim: This is the variation of the K-nearest-neighbors algorithm that relies on using cosine similarity. * KNN crosscoder: This is the variation of the K-nearest-neighbors algorithm that relies on using cross-encoder similarity. * MMR cossim: This is the variation of the Maximal Marginal Relevance method that relies on using cosine similarity. * MMR crosscoder: This is the variation of the Maximal Marginal Relevance method that relies on using cross-encoder similarity. * Empty: This is a method that involves no retrieval step but uses just the LLM to generate the answer for a given query. * Oracle: This method retrieves only the “positive” labeled passages. For the information integration case, we retrieve positive passages for each component of the query up to k. If the number of positive passages is less than k, we use the negative passages to fill in the rest. * Random: This method randomly retrieves k passages from the full passage set. §.§ Modification for cross-encoder based reranker Cross-encoder-based reranking has been shown to outperform embedding-based approaches such as cosine similarity <cit.>, as it uses the full computational power of a transformer model, rather than being limited to simple vector operations. We have proposed two variations of Dartboard, namely Dartboard Crosscoder and Dartboard Hybrid, based on how we compute the cross-encoder scores for the Gaussian kernels in Equation <ref> given in the main paper. For the Dartboard Crosscoder variation, we use the cross-encoder score C(q, t) before computing the Gaussian kernel for both 𝒩(q, t, σ) and 𝒩(t, g, σ) in Equation <ref>. Note that the cross-encoder score is asymmetric, so we simply average the two possible ways to compute the cross-encoder score for 𝒩(t, g, σ), i.e., 1/2(C(t, g) + C(g, t)). For 𝒩(q, t, σ), we are only interested in the likelihood of t given q, so we only use the cross-encoder score C(q, t). However, the cross-encoder is computationally expensive to run for k^2 pairs. Hence, we rely on the Dartboard Hybrid variation, wherein we use the cross-encoder score only for the Gaussian kernel 𝒩(q, t, σ), whereas we use cosine similarity for the Gaussian kernel 𝒩(t, g, σ). §.§ Dartboard generalizes KNN and MMR The Dartboard algorithm can be viewed as a generalization of the traditional retrieval algorithms, KNN and MMR. In order to verify this claim, let us look at the score presented in Equation <ref> in the main paper. When Player 1 has perfect aim, or in other words σ→ 0, P(T=t|q,σ) tends to a point mass distribution such that t=q, and hence the score becomes s(G, q, A, σ) → min_g ∈ G D(q|g) where D is the distance function as before. If the chosen distance function is inversely related to the similarity measure, this is nothing but the KNN algorithm. On the other hand, when the chosen distance function is the weighted sum of the similarity between query and guess, and the dissimilarity between the current guess and past guesses, it reduces to the MMR algorithm. §.§ Dartboard inherently promotes diversity In Figure <ref>, we show the diversity of the retrieved passages from RGB for both Dartboard and MMR, measured as one minus the average cosine similarity between pairs of retrieved passages. While MMR explicitly encourages diversity, Dartboard does not. However, we observe from the figure that as the parameter σ increases, the diversity of the retrieved passages also increases.
This implies that by optimizing the relevant information gain metric, Dartboard inherently ensures diversity in the set of retrieved passages. §.§ Example of a generative use of Dartboard Below is an example of the set of retrieved passages for a query that shows that the passages retrieved by Dartboard are highly diverse compared to those retrieved by KNN which has high redundancy, if we consider the cross-encoder based variations: §.§ Dartboard does not allow for the possibility of exact duplicates The “max” in Equation <ref> given in the main paper ensures that the same vector (passage) is not selected twice (unless all non-duplicate/unique passages have been exhausted) in case of Dartboard. This is in contrast to MMR, which can select the same vector (passage). Here is an example where MMR produces exact duplicates. Consider the scenario when our passage database consists of the vectors {(2, 1), (2, 1), (1, 2), (0, 1)} (with a duplicate (2, 1)). Now if we use cosine similarity based scoring, and set diversity to .5 for k=3 in case of MMR, the bag that maximizes the score for probe (2, 1) for MMR is {(0, 1), (2, 1), (2, 1)}, which has an exact duplicate passage vector (2, 1). This verifies that MMR can allow for exact duplicates, which can increase the MMR score because it decreases the average distance to the query, while (possibly) only marginally decreasing the diversity. On the contrary, in case of Dartboard, an exact duplicate passage vector will add zero information i.e. it would not increase the chances of hitting the target. So it will not be selected for retrieval until all other non-duplicate options are exhausted. §.§ More results In Figure <ref>, we show the relation between NDCG score and final end-to-end performance on the question answering (QA) task.
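As a quick numerical check of the duplicate argument above, the sketch below evaluates the Dartboard objective (the sum weighted by max_g 𝒩(t, g, σ), larger is better) on the same toy vectors, once with the duplicate of (2, 1) and once with (1, 2) in its place. The Euclidean Gaussian kernel and σ=1 are arbitrary illustrative choices.

```python
import numpy as np

def dartboard_objective(guesses, query, cands, sigma=1.0):
    # sum_t P(T=t | q, sigma) * max_g N(t, g, sigma), up to a normalization constant
    kern = lambda x, ys: np.exp(-0.5 * (np.linalg.norm(ys - x, axis=-1) / sigma) ** 2)
    p = kern(query, cands)
    best = np.max(np.stack([kern(g, cands) for g in guesses]), axis=0)
    return float(np.sum(p * best))

cands = np.array([[2., 1.], [2., 1.], [1., 2.], [0., 1.]])
q = np.array([2., 1.])
with_dup = [cands[0], cands[1], cands[3]]      # {(2,1), (2,1), (0,1)} -- contains an exact duplicate
no_dup   = [cands[0], cands[2], cands[3]]      # {(2,1), (1,2), (0,1)}
print(dartboard_objective(with_dup, q, cands), dartboard_objective(no_dup, q, cands))
# the duplicate set never scores higher: the second copy leaves every per-target max unchanged
```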
http://arxiv.org/abs/2407.13590v1
20240718152836
Prospects of constraining on the polarizations of gravitational waves from binary black holes using space- and ground-based detectors
[ "Jie Wu", "Jin Li" ]
gr-qc
[ "gr-qc", "astro-ph.IM" ]
UTF8gbsn cqujinli1983@cqu.edu.cn College of Physics, Chongqing University, Chongqing 401331, China Department of Physics and Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, China § ABSTRACT In general relativity, gravitational waves (GWs) exhibit two tensor modes, while alternative theories predict up to six polarization modes. We investigate GW polarization constraints using a model-independent parametrized post-Einsteinian framework with space- and ground-based detectors. Evaluating LISA, Taiji, TianQin, LIGO, Virgo, KAGRA, and Einstein Telescope (ET), we analyze their capabilities and network performance. Taiji provides the best constraints among space-based detectors, with LISA outperforming TianQin. Among ground-based detectors, LIGO excels in vector modes, while ET offers the most comprehensive constraints. In network scenarios, LISA+TJm performs best, and ET surpasses second-generation detector combinations. Multiband observations alleviate scalar mode degeneracies, significantly enhancing ground-based detector performance. Combined space- and ground-based observations yield robust constraints on GW polarization, advancing tests of deviations from general relativity. Our findings highlight the potential of future GW missions to refine our understanding of gravitational physics through precise polarization measurements. Prospects of constraining on the polarizations of gravitational waves from binary black holes using space- and ground-based detectors Jin Li (李瑾) ===================================================================================================================================== § INTRODUCTION Over the past century, general relativity (GR) has been rigorously tested through numerous solar system experiments and astrophysical observations, with no definitive evidence suggesting deviations from GR <cit.>. Despite passing all experimental tests, there are indications that GR may need to be extended, particularly due to the challenges posed by dark energy and dark matter in cosmology <cit.>. Gravitational waves (GWs) offer a new avenue for testing GR, especially in the strong-field regime where GWs are generated by compact objects <cit.>. One method to test GR is by detecting additional polarizations of GWs. In GR, GWs only have two tensor modes, whereas some alternative gravity theories predict up to six polarization modes <cit.>. For instance, Brans-Dicke theory predicts an additional scalar mode <cit.>, while f(R) theory and Horndeski theory propose two additional scalar modes <cit.>. Einstein-Aether theory predicts five polarization modes <cit.>, and some tensor-vector-scalar theories encompass all six polarization modes <cit.>. Therefore, detecting additional polarizations of GWs can help identify deviations from GR, surpass current experimental limits, and reveal the deeper nature of gravity. Since LIGO first detected GW <cit.>, nearly a hundred GW events generated by compact binary coalescences (CBCs) have been observed <cit.>. Testing GR with these GW data has shown that all observed data are consistent with GR so far <cit.>. With the commencement of the fourth observation run (https://observing.docs.ligo.org/plan/O4), more GW events are expected to be detected, allowing for further testing of GR. In general, to detect the polarization of transient GWs, more detectors are needed to obtain responses from different directions, thereby enhancing detection capabilities <cit.>. 
Ground-based detectors in the hertz frequency band, such as LIGO <cit.>, Virgo <cit.>, and KAGRA <cit.>, collaborate to form a detector network to detect the polarization of GWs. For continuous GWs emitted by distant CBCs, a detector with directional change capability is sufficient <cit.>. Space-based GW detectors in the millihertz frequency band, such as LISA <cit.>, Taiji <cit.>, and TianQin <cit.>, consist of three spacecraft orbiting the Sun/Earth in a triangular formation, making it possible to detect additional GW polarizations due to the detector's motion in space <cit.>. Besides determining whether the detector can theoretically detect additional polarizations of GWs, it is also crucial to ascertain whether the detected GW signal is consistent with the predictions of GR. One approach is to use a waveform model with purely phenomenological parameters to represent possible deviations from GR and to apply constraints to these parameters using observed GW data <cit.>. Yunes and Pretorius proposed the parameterized post-Einstein (ppE) waveform <cit.>, based on the post-Newtonian (PN) approximation, which is suitable for parameterizing the influence of alternative gravity theories on non-GR polarizations. Most modified gravity theories, such as Brans-Dicke theory, massive gravity, and bimetric theory, can be described using the ppE framework <cit.>. Although the ppE framework cannot parameterize all possible deviations from GR, it can test for additional polarizations, providing a method to evaluate the performance of detectors in testing GR <cit.>. Currently, some work utilizes the ppE framework to constrain additional polarizations for testing GR. Narikawa and Tagoshi discussed the potential of advanced ground-based GW detectors, such as LIGO, Virgo, and KAGRA, in detecting general deviations between GWs and the predictions of GR using the ppE framework <cit.>. Huwyler et al. employed LISA to detect massive black hole binaries (MBHBs) and test GR by representing ppE waveforms with phase corrections only <cit.>. References <cit.> discussed the potential of Taiji and TianQin in using MBHBs to test GR and provide constraints on non-GR parameters. For detector networks, Nair et al. focused on the synergistic effect of the Einstein Telescope (ET) and preDECIGO, demonstrating enhanced sensitivity within a specific band <cit.>. In Ref. <cit.>, Wang and Han used the LISA-Taiji network to test the ppE parameters of deviations from the GR waveform, indicating a significant improvement in the detection of polarization amplitude through joint observations. Based on our previous work <cit.>, we investigate the constraints from binary black holes (BBHs) for alternative configurations of space- and ground-based detectors on the detection of additional GW polarizations. We employ the ppE framework for model-independent testing in GR by numerically calculating time-domain GW signals that contain all polarizations. Space-based detectors LISA, Taiji, and TianQin are used to observe massive black hole binaries (MBHBs), while ground-based detectors LIGO, Virgo, KAGRA, and the Einstein Telescope (ET) observe stellar-mass binary black holes (SBBHs), providing constraint results under different networks. We also consider the combination of multiband observations. The long-term observation of SBBHs by space-based detectors helps break the response degeneracy of ground-based detectors, thereby improving constraints. 
Using the Fisher information matrix (FIM), we present the constraint results of several typical mass BBHs as a function of redshift. Additionally, we discuss the potential effects of multimessenger observations on enhancing parameter constraints. Through systematic research, we comprehensively analyze the results of additional GW polarizations constrained by space- and ground-based detectors from multiple perspectives. This paper is organized as follows. In Sec. <ref>, we introduce time-domain GW signals and ppE parameters for measuring additional polarizations within the ppE framework. In Sec. <ref>, we review the performance parameters and alternative configurations of space- and ground-based detectors, and analyze the response functions corresponding to different modes. In Sec. <ref>, we explain the typical BBH sources selected in our paper and the method for calculating the signal-to-noise ratio (SNR) and FIM. In Sec. <ref>, we present the constraint results on ppE parameters, including observations from different networks, multiband, and multimessenger observations. Finally, we summarize the results of our research in Sec. <ref>. § GRAVITATIONAL WAVE SIGNAL In generalized modified gravity theories, metric perturbations can have up to six independent degrees of freedom, resulting in six different polarization modes: two tensor modes (+ and ×), two vector modes (X and Y) and two scalar modes (B and L) <cit.>. For detectors, the observed GW strain can be described as a linear combination of different GW polarizations, expressed as h(t)=∑_AF^A h_A(t) , where A={+,×,X,Y,B,L} represents for the six polarizations, h_A(t) is the input signal of GWs, and F^A is the angular response function. The extended ppE framework is used to construct a model-independent test for GR, including all GW polarization modes <cit.>. The amplitude and phase of GWs can be obtained separately from the measurement of perturbations and energy evolution, and there exist simultaneous quadrupole and dipole radiation. The process of CBC includes three phases: inspiral, merger, and ringdown. Currently, the observed GW events are transient and do not include the early inspiral phase <cit.>. Moreover, GR has passed all current experimental tests, indicating that only tensor modes generated by quadrupole radiation are present in the non-inspiral phase. The contribution of dipole radiation in the early inspiral phase can be greater than during the merger phase <cit.>. Therefore, it is reasonable to assume that tensor modes dominate in quadrupole radiation, while vector and scalar modes dominate in dipole radiation. Following Refs. <cit.>, the GW waveform under the ppE framework can be written as h_+ = 𝒜_T (1+cos^2ι)/2cos(2Φ+2Φ_0), h_× = 𝒜_T cosιsin(2Φ+2Φ_0), h_X = 𝒜_V cosιcos(Φ+Φ_0), h_Y = 𝒜_V sin(Φ+Φ_0), h_B = 𝒜_B sinιcos(Φ+Φ_0), h_L = 𝒜_L sinιcos(Φ+Φ_0), with 𝒜_T =4 /D_L(Gℳ/c^2)^5/3(ω/c)^2/3 , 𝒜_V = α_V/D_L(Gℳ/c^2)^4/3(ω/c)^1/3, 𝒜_B = α_B/D_L(Gℳ/c^2)^4/3(ω/c)^1/3, 𝒜_L = α_L/D_L(Gℳ/c^2)^4/3(ω/c)^1/3, where α_V,B,L are the dimensionless ppE parameters, ℳ=(m_1m_2)^3/5/(m_1+m_2)^1/5 is the chirp mass, m_1 and m_2 are the masses of BBH, D_L is the luminosity distance, ι is the inclination angle, Φ=∫ωd t is the orbital phase, ω is the orbital angular frequency, Φ_0 is the initial orbital phase, G and c are the gravitational constant and the speed of light. We only consider the dominant modes in different radiations for the amplitude, and the contribution of different radiations to the orbital angular frequency varies. 
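Before specifying how the orbital angular frequency evolves, a minimal numerical sketch of how these six polarizations are assembled from a given orbital phase Φ(t) and angular frequency ω(t) is shown below (SI units). The fiducial α values of 10^-4 follow the choice made later in the paper; the function and argument names are illustrative.

```python
import numpy as np

def ppe_polarizations(chirp_mass_kg, D_L_m, iota, omega, Phi, Phi_0=0.0,
                      alpha_V=1e-4, alpha_B=1e-4, alpha_L=1e-4):
    """Assemble the six ppE polarizations from a given orbital phase Phi(t)
    and angular frequency omega(t), in SI units (sketch)."""
    G, c = 6.674e-11, 2.998e8
    x = G * chirp_mass_kg / c**2                  # chirp-mass length scale G*Mc/c^2
    A_T = (4.0 / D_L_m) * x**(5.0 / 3.0) * (omega / c)**(2.0 / 3.0)
    A_dip = (1.0 / D_L_m) * x**(4.0 / 3.0) * (omega / c)**(1.0 / 3.0)   # multiplied by alpha_{V,B,L} below
    return {
        "plus":  A_T * (1.0 + np.cos(iota)**2) / 2.0 * np.cos(2.0 * Phi + 2.0 * Phi_0),
        "cross": A_T * np.cos(iota) * np.sin(2.0 * Phi + 2.0 * Phi_0),
        "X":     alpha_V * A_dip * np.cos(iota) * np.cos(Phi + Phi_0),
        "Y":     alpha_V * A_dip * np.sin(Phi + Phi_0),
        "B":     alpha_B * A_dip * np.sin(iota) * np.cos(Phi + Phi_0),
        "L":     alpha_L * A_dip * np.sin(iota) * np.cos(Phi + Phi_0),
    }
```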
Considering the influence of possible non-dominant modes, the overall evolution of the orbital angular frequency can be described by the contributions of dipole radiation and quadrupole radiation <cit.>: dω/dt=α_Dη^2/5Gℳ/c^3ω^3+α_Q(Gℳ/c^3)^5/3ω^11/3 , where α_D and α_Q are the ppE parameters that describe the orbital angular frequency contributions of dipole and quadrupole radiation, η=m_1m_2/M^2 is the symmetric mass ratio, and M=m_1+m_2 is the total mass. To describe the model-independent GW waveform, we select five dimensionless ppE parameters to constrain parameters in alternative gravity theories. In the case of GR, α_D,V,B,L=0 and α_Q=96/5. To ensure our analysis remains independent of specific models, we treat these ppE parameters as independent. Some theories exhibit correlations between these parameters, potentially enhancing parameter constraints <cit.>. Thus, our results are conservative, and the actual constraints may exceed those presented here. From Eq. (<ref>), solving the evolution function ω(t) of the orbital angular frequency over time analytically is challenging. We determine t(ω) through integration:: t=t_0+∫_ω(t_0)^ω(t)(dω/dt)^-1dω, which is shown in Refs. <cit.>. Computational methods, such as the bisection method, iteratively solve for the orbital angular frequency corresponding to a given time point. By solving point by point, we obtain ω(t). Using the above method, we input the calculated ω(t) into Eqs. (<ref>)-(<ref>) to derive the final GW signal. In order to better demonstrate the impact of ppE parameters on GW waveforms significantly, we set large parameter values in Fig. <ref>. As shown in Fig. <ref>(b), there is a notable amplitude difference between the ppE waveform and the GR waveform. This discrepancy arises because the GW frequency from dipole radiation equals the orbital frequency of the BBH, whereas quadrupole radiation operates at twice that frequency. This effectively superimposes a waveform with half the frequency onto the original GR waveform, resulting in amplitude modulation at twice the wavelength. Comparing Figs. <ref>(b) and (c), the disparity between the ppE waveform and the GR waveform is significant when the merger is distant but diminishes as the merger approaches, consistent with our assumption in Eq. (<ref>) to align with current GR testing outcomes. In Fig. <ref>(d), the ratio of additional modes to tensor modes is visually depicted, showing a gradual decrease over time, causing the ppE waveform to progressively approach the GR waveform. From Eq. (<ref>), it can be seen that the amplitude of the tensor modes varies with the power of 2/3 of the angular frequency, while the additional modes varies with the power of 1/3. The rate of amplitude increase for tensor modes is significantly greater than for additional modes, resulting in the ratio 𝒜_V/𝒜_T ∝ω^-1/3 declining as angular frequency increases. § DETECTORS AND RESPONSE §.§ Space-based detectors In the 2030s, LISA, Taiji, and TianQin are scheduled to launch triangular constellations consisting of three spacecraft. LISA and Taiji utilize heliocentric orbits, whereas TianQin employs geocentric orbits. Various alternative orbital configurations are available, as detailed in Figs. <ref> and <ref>. LISA and Taiji, including three configurations, have a leading or trailing angle of 20^∘ relative to Earth, with a 60^∘ inclination between the constellation plane and the ecliptic plane <cit.>. 
Furthermore, the normal direction of the TianQin constellation plane remains fixed, and TianQin follows a “three months on+three months off" observation scheme <cit.>. Variations in the constellation plane determine how GW sources appear in the detector coordinate system, thereby influencing the response to different polarizations. Additionally, detector sensitivity is directly influenced by the arm length, with LISA, Taiji, and TianQin having arm lengths of 2.5×10^6 km, 3×10^6 km, and √(3)×10^5 km, respectively. Concerning the noise in space-based detectors, we account for acceleration noise, displacement noise, and foreground noise. Acceleration noise and displacement noise contribute to the primary power spectral density (PSD), as detailed in Refs. <cit.>. Foreground noise originates from galactic binaries within the Milky Way galaxy, creating a peak in the frequency band of approximately 0.3-3 mHz, as demonstrated in our previous work <cit.>. §.§ Ground-based detectors In contrast to the planned space-based detectors, the ground-based detectors are currently operational, primarily comprising the LVK collaboration, which includes LIGO, Virgo and KAGRA. These are all L-shaped second-generation detectors with arm lengths of 4 km (LIGO), 3 km (Virgo), and 3 km (KAGRA), respectively <cit.>. In this paper, we consider LIGO to include three detectors: LIGO Hanford (H1) and LIGO Livingston (L1), which are operational, and the planned LIGO India (I1). Furthermore, we consider the third-generation detector ET, which features a triangular shape composed of three 10 km arms <cit.>. As the Earth rotates, ground-based detectors scan different sky regions, as illustrated in Fig. <ref>. Due to the obliquity of the ecliptic, the sky regions scanned by space- and ground-based detectors is not parallel. Ground-based detectors, unlike their space-based counterparts, are affected by various types of noise, including quantum noise, seismic noise, gravity-gradient noise, thermal noise, and others. For our study, we utilize the design performance specifications of these detectors, and the corresponding PSD can be referenced in LIGO Document https://dcc.ligo.org/LIGO-T1500293/publicT1500293 and Refs. <cit.>. §.§ Response function A dual-arm Michelson interferometer detects GWs by measuring the relative change in the length of its two arms. We describe the detector's response to GWs using the method outlined in Refs. <cit.>. As shown in Fig. <ref>, the detector coordinates are constructed using orthogonal unit vectors {x̂, ŷ, ẑ}, and the GW coordinates are constructed using {p̂, q̂, ŵ}. Here, ŵ represents the propagation direction of the GW, and the unit vector Ω̂=-ŵ represents the position of the GW source. In a triangular detector, the angle γ between the two arms is 60^∘, while in an L-shaped detector, it is 90^∘. For GWs, an additional rotational degree of freedom can be fixed by specifying the polarization angle ψ, ultimately using {m̂, n̂, ŵ} to describe the GW. The angular response function F^A in Eq. (<ref>) can be expressed as F^A=D^ije_ij^A , where the polarization tensor e_ij^A is described using the orthogonal unit vectors mentioned above <cit.>: e_ij^+ =m̂_im̂_j-n̂_in̂_j, e_ij^× =m̂_in̂_j+n̂_im̂_j, e_ij^X =m̂_iŵ_j+ŵ_im̂_j, e_ij^Y =n̂_iŵ_j+ŵ_in̂_j, e_ij^B =m̂_im̂_j+n̂_in̂_j, e_ij^L =ŵ_iŵ_j. 
The detector tensor D^ij can be written as <cit.> D^ij=1/2[û^iû^j𝒯(f,û·ŵ)-v̂^iv̂^j𝒯(f,v̂·ŵ)], with 𝒯(f,â·b̂)= 1/2{sinc[f/2f_*(1-â·b̂)] ×exp[-if/2f_*(3+â·b̂)] +sinc[f/2f_*(1+â·b̂)] ×exp[-if/2f_*(1+â·b̂)]}, where sinc(x)=sin x/x, f_*=c/(2π L) is the transfer frequency, and L is the arm length of the detector. The transfer frequencies for space-based detectors are 19 mHz for LISA, 16 mHz for Taiji, and 275 mHz for TianQin. Due to the arm length, the transfer frequency of ground-based detectors is significantly higher than their sensitivity frequency band. The angular response functions F^A(λ, β, ψ) or F^A(α, δ, ψ) are obtained by substituting the detector and ecliptic/equatorial coordinates into Eqs. (<ref>)-(<ref>) using Euler rotation conversion. To present the angular response function concisely, we introduce the combined tensor and vector modes: F^T=√(| F^+ |^2+| F^×|^2 ) , F^V=√(| F^X |^2+| F^Y |^2 ). According to Ref. <cit.>, the angular response functions of the combined tensor mode F^T, combined vector mode F^V, breathing mode F^B, and longitudinal mode F^L are independent of the polarization angle ψ. In this section, we study the response of different modes using these four ψ-independent modes and the six basic modes for subsequent research and calculations. On this basis, we calculate the angular response functions for two different frequencies in the LISA detector coordinates to plot Fig. <ref>. According to Eq. (<ref>), at the low-frequency limit f≪ f_*, 𝒯→ 1 leads to F^B=-F^L <cit.>. Ground-based detectors operate within sensitivity frequency bands below the low-frequency limit, making it impossible to distinguish between these two modes. Moreover, space-based detectors may break through the low-frequency limit and resolve this degeneracy. Figure <ref> shows that at the low-frequency limit, the angular response functions for the breathing mode and longitudinal mode are degenerate at every position. Beyond the transfer frequency, the optimal response position of the breathing mode shifts from ϕ_d=45^∘,135^∘,225^∘,315^∘ to ϕ_d=90^∘,270^∘. This shift, which differs from the longitudinal mode's optimal response position, makes it possible to break the degeneracy of these two modes. As shown in Fig. <ref>, the optimal response positions of the combined tensor mode and the combined vector mode change with frequency. The optimal position for the combined tensor mode shifts from a direction perpendicular to the constellation plane to a direction closer to it. For the combined vector mode, the optimal positions at ϕ_d=90^∘ and ϕ_d=270^∘ disappear as frequency increases. In summary, at frequencies beyond the low-frequency limit, there is no degeneracy among the modes, and the optimal response positions generally do not overlap. Figure <ref> illustrates the response at two specific frequencies. To further examine the relationship between response and frequency, we introduce the angular response function averaged over the source locations <cit.>: R_A(f)=1/4π∫_0^2π∫_0^π|F^A|^2 sinθ_d dθ_d dϕ_d, where A={T,V,B,L} denotes the four ψ-independent modes. We also introduce the effective strain noise to measure how the detector's sensitivity to different modes varies with frequency: h^A_eff(f)=√(S_n(f)/R_A(f)), where S_n(f) is the noise PSD of the detector. After our calculations, we derive the averaged response functions and effective strain noise for LISA's four modes, as shown in Fig. <ref>.
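As a numerical cross-check of the low-frequency values quoted next, the short Monte Carlo sketch below sky-averages |F^A|^2 for an L-shaped (γ=90^∘) detector in the 𝒯→1 limit, using the polarization tensors defined above. The arm orientation along x̂ and ŷ and the particular (m̂, n̂, ŵ) triad are illustrative choices; this is not the pipeline used for the figures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
cos_th = rng.uniform(-1.0, 1.0, n)                 # isotropic propagation directions w
sin_th = np.sqrt(1.0 - cos_th**2)
phi = rng.uniform(0.0, 2.0 * np.pi, n)
w = np.stack([sin_th * np.cos(phi), sin_th * np.sin(phi), cos_th], axis=1)
m = np.stack([cos_th * np.cos(phi), cos_th * np.sin(phi), -sin_th], axis=1)   # one valid (m, n, w) triad
nv = np.stack([-np.sin(phi), np.cos(phi), np.zeros(n)], axis=1)

u, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])   # arms along x and y (gamma = 90 deg)
D = 0.5 * (np.outer(u, u) - np.outer(v, v))                   # detector tensor in the T -> 1 limit

outer = lambda a, b: np.einsum('ni,nj->nij', a, b)
F = lambda e: np.einsum('ij,nij->n', D, e)                    # F^A = D^ij e^A_ij

R_T = np.mean(F(outer(m, m) - outer(nv, nv))**2 + F(outer(m, nv) + outer(nv, m))**2)
R_V = np.mean(F(outer(m, w) + outer(w, m))**2 + F(outer(nv, w) + outer(w, nv))**2)
R_B = np.mean(F(outer(m, m) + outer(nv, nv))**2)
R_L = np.mean(F(outer(w, w))**2)
print(R_T, R_V, R_B, R_L)   # approaches 2/5, 2/5, 1/15, 1/15
```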
At the low-frequency limit, the averaged response function remains constant, with R_T=R_V=2(sin^2γ)/5 and R_B=R_L=(sin^2γ)/15. As the transfer frequency approaches, R_A starts to decrease, showing three distinct damping trends. R_T and R_V diverge with increasing frequency, similar to R_B and R_L. Additionally, R_T and R_B show differences at the low-frequency limit. As the frequency increases, the values of those two modes tend to converge. Note that while the average value of these two modes is the same, their responses at individual positions differ, preventing degeneracy between R_T and R_B. Based on Ref. <cit.>, we analyze the detector's capability to detect various polarizations using the response function. For ground-based detectors, the sensitive frequency band is below the transfer frequency, preventing the distinction between the breathing mode and the longitudinal mode. Space-based detectors, however, can break this degeneracy at non-low-frequency limits. In Sec. <ref>, we simulate GW signals to evaluate the performance of ground- and space-based detectors in constraining polarization. § METHODOLOGY §.§ Data analysis In general, the SNR ρ of GW can be defined as ρ^2=(h|h), where the inner product (·|·) generalizes the time-domain correlation product and is conventionally defined as (a|b)=4Re[∫_0^∞ã^*(f)b̃(f)/S_n(f)df], where ã(f) and b̃(f) are the Fourier transforms of a(t) and b(t) respectively. In our study, we consider fourteen GW parameters in total, including ppE parameters, which are ξ={t_c,m_1,m_2,z,ι,Φ_0,ϕ_e,θ_e, ψ,α_Q,α_D,α_V,α_B,α_L}, where t_c is the coalescence time, z is the redshift of the source, (ϕ_e,θ_e) is the sky position, representing ecliptic (λ ,β) or equatorial (α ,δ) coordinates. For assessing the limitations of the detector on different polarizations and the uncertainty in estimating all parameters, we use the FIM method, defined as Γ_ij=(∂ h/∂ξ_i|∂ h/∂ξ_j), where ξ_i represents the parameter in Eq. (<ref>). For high SNR, the inverse of the FIM, Σ=Γ^-1, is the variance-covariance matrix, with the diagonal elements representing variance <cit.>. Thus, the uncertainty Δξ_i of the parameters is given by Δξ_i=√(Σ_ii). When calculating the FIM in Eq. (<ref>), we use the numerical differentiation approximation from Refs. <cit.>. Additionally, for the observation network, the total SNR and FIM are obtained by summing the inner products calculated by each detector. §.§ BBH source selection Space- and ground-based detectors have different sensitive frequency bands, resulting in the detection of various GW sources. For BBH sources, ground-based detectors primarily observe SBBHs capturing the complete CBC process of three phases, whereas space-based detectors mainly observe that of MBHBs. Additionally, space-based detectors can detect SBBHs, as many SBBHs inspiral in the low-frequency band before merger, entering the sensitivity band of these detectors. The different BBH sources we selected are presented in Fig. <ref>. For all CBC processes, since additional polarizations contribute more significantly in the inspiral phase and Eq. (<ref>) does not apply to the merger and ringdown phases, our waveform model concentrates on the inspiral phase before the binary reaches the innermost stable circular orbit (ISCO). The frequency of ISCO is given by <cit.> f_ISCO=c^3/6√(6)π GM . Ground-based detectors typically observe SBBH GWs entering the frequency band a few seconds to minutes before merger. 
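The two numerical ingredients just introduced can be sketched compactly: the FIM built from noise-weighted inner products of waveform derivatives, and the ISCO cutoff that sets the upper frequency of each source. This is an illustration only — the derivatives ∂h/∂ξ_i and the PSD are left abstract — and the printed f_ISCO values motivate the mass choices below.

```python
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def inner(a_f, b_f, psd, df):
    # discretized noise-weighted inner product (a|b) = 4 Re sum a*(f) b(f) / S_n(f) df
    return 4.0 * np.real(np.sum(np.conj(a_f) * b_f / psd)) * df

def fisher_uncertainties(dh_dxi, psd, df):
    # dh_dxi: list of frequency-domain derivatives dh/dxi_i (e.g. central finite differences)
    gamma = np.array([[inner(a, b, psd, df) for b in dh_dxi] for a in dh_dxi])
    sigma = np.linalg.inv(gamma)             # variance-covariance matrix in the high-SNR limit
    return np.sqrt(np.diag(sigma))           # 1-sigma uncertainties Delta xi_i

def f_isco(total_mass_kg):
    return c**3 / (6.0 * np.sqrt(6.0) * np.pi * G * total_mass_kg)

for m in (3.0, 20.0, 100.0, 1e5, 1e6, 1e7):
    print(f"M = {m:9.0f} M_sun : f_ISCO = {f_isco(m * M_sun):.3g} Hz")   # roughly 4.4 kHz / (M / M_sun)
```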
Thus, we select three typical SBBHs with equal mass ratios and total masses of 3 M_⊙, 20 M_⊙, and 100 M_⊙, respectively. We calculate the waveforms for the 10 minutes leading up to ISCO. For MBHB sources observed by space-based detectors, we select MBHBs with three typical masses of 10^5 M_⊙, 10^6 M_⊙, and 10^7 M_⊙, calculating waveforms for the 3 months before reaching ISCO. For multiband observations, we extend two types of SBBHs with masses of 20 M_⊙ and 100 M_⊙ to the low-frequency band. Those waveforms last for a year before reaching 0.1 Hz. After 0.1 Hz, those SBBHs would take 3 months (20 M_⊙) and 6 days (100 M_⊙) to enter the ground-based detector observation frequency band, providing an opportunity to study multiband observations. For the selection of sky positions for BBH sources, we use identical parameters for each source, generating 100 distributions consistent with Table <ref>. To study the influence of inclination angle, we consider 45 different inclination angles, resulting in 4500 calculations of SNR and FIM for each BBH source, with the results presented in Secs. <ref> and <ref>. Finally, for selecting fiducial values for the ppE parameter, we set it to 10^-4 (cf. <cit.>). § CONSTRAINTS ON PARAMETERS §.§ Results with inclination We specifically examine the relationship between the uncertainty of ppE parameters and the inclination angle. According to Eq. (<ref>), we can define the distribution of inclination of the tensor mode p_T(ι) as p_T(ι)∝√(((1+cos^2ι)/2)^2+(cosι)^2) . Similarly, we can determine that the distributions of vector mode p_V(ι) and scalar mode p_S(ι) are p_V(ι) ∝√((cosι)^2+1^2), p_S(ι) ∝√((sinι)^2+(sinι)^2). We present the distributions of those three modes in Fig. <ref>. From Fig. <ref>, it is evident that the distribution of scalar mode with inclination is opposite to that of tensor and vector modes, reaching its maximum value at ι=π/2 and minimum at ι=0,π. From the perspective of inclination angle influence alone, the scalar mode is superior to the other two modes at ι=π/2, while tending towards 0 at ι=0,π. Tensor and vector modes exhibit the same trend of change, with tensor modes showing a greater amplitude of variation. Furthermore, we calculate the SNR and FIM to obtain the results for the uncertainty of the ppE parameters, as shown in Fig. <ref>. As the distribution in Fig. <ref>, the variation of the scalar mode with inclination in Fig. <ref> is opposite to other modes. Furthermore, due to the distribution of scalar mode being zero at ι=0,π, there is almost no signal near those two values, resulting in the parameter uncertainty of the scalar mode in Fig. <ref> approaching infinity. In addition, considering the low-frequency limit, the Δα_B and Δα_L of a 10^5 M_⊙ MBHB with low-SNR are smaller than that of a 10^7 M_⊙ MBHB with high-SNR. Moreover, due to the superior sensitivity of LISA compared to TianQin and the lower transfer frequency of LISA, the Δα_B and Δα_L of LISA are significantly better than that of TianQin. For the degeneracy caused by arm length, ground-based detectors cannot distinguish between the breathing mode and the longitudinal mode, so the Δα_B and Δα_L are much greater than that of space-based detectors and becomes invalid. Besides the scalar mode, a higher SNR signifies a more robust signal, typically leading to reduced parameter uncertainties. In our calculations, SNR is calculated based on the GR case, where the contribution of tensor modes outweighs that of other modes. 
Thus, the SNR value is primarily influenced by the tensor modes. In contrast to tensor and scalar modes, the uncertainty of the vector mode is notably higher at ι=π/2 in TianQin. This is because TianQin's fixed orientation restricts the response function's variability at that specific angle, leading to a pronounced increase in Δα_V near ι=π/2. The transfer frequency not only directly impacts the degeneracy between breathing and longitudinal modes but also affects other aspects. The transfer function 𝒯 is directly related to frequency, providing constraints beyond the low-frequency limit. α_Q and α_D determine the variation of the orbital angular frequency, and the influence of 𝒯 can reduce parameter uncertainty. Therefore, the Δα_Q of 10^5 M_⊙ MBHB is smaller than that for a 10^6 M_⊙ MBHB, and Δα_D for both MBHBs also varies with the relationship near ι=π/2. Because of the low-frequency limit, ground-based detectors are not affected by 𝒯, causing the uncertainties Δα_Q and Δα_D for SBBHs to depend entirely on SNR. This section focuses on presenting and analyzing the relationship between the uncertainty of ppE parameters and the inclination angle for two typical space-based and ground-based detectors. §.§ Results with ppE parameters We evaluate the performance of several typical BBHs using different detectors and various parameters. The results for different detectors and their networks are illustrated in Fig. <ref>. Regarding space-based detectors, Taiji, which shares a similar configuration with LISA, surpasses LISA across all metrics, demonstrating superior SNR and reduced parameter uncertainty. Moreover, TianQin's distinct orbital configuration results in inferior performance compared to LISA and Taiji. Additionally, the significant rise in TianQin's parameter uncertainty at ι=π/2, particularly in Δα_V, leads to the upper limit of the box plot for Δα_V being substantially higher than its median value in the comprehensive results presented in Fig. <ref>. When considering the space-based detector network, significant improvements are observed across all metrics compared to individual detectors. Since both LISA and Taiji utilize heliocentric orbits, their network configuration remains stable, maintaining a consistent angle between the detectors. LISA and the three alternative orbital configurations of Taiji exhibit different angles, which introduce variations in the outcomes. A larger angular separation between the detectors correlates with greater coverage of the sky area with high sensitivity, resulting in differences in parameter uncertainty under similar SNR conditions. Overall, the LISA+TJm configuration achieves the most favorable results, with LISA+TJp surpassing LISA+TJc, especially noticeable in parameters like Δα_V and ΔΩ. Moreover, it is evident that the LISA+TJ combination outperforms LISA+TQ, and a network of three detectors surpasses two. For more detailed analysis of space detector networks, refer to Ref. <cit.>. Single ground-based detectors such as Virgo and KAGRA do not perform well in detecting additional polarization parameters and accurately determining the sky position of the source. This limitation arises because a single detector has only one response angle, and a 10-minute GW signal is relatively short compared to the ground-based detector's observational period (Earth's rotation period). 
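The angular sweeps quoted next follow from simple proportions of the signal duration to the detector's rotation or orbital period (Earth's day for ground-based detectors, a one-year heliocentric orbit for LISA/Taiji); a quick arithmetic check:

```python
sbbh_sweep = 10.0 / (24.0 * 60.0) * 360.0   # 10 min of Earth rotation        -> 2.5 deg
mbhb_sweep = 90.0 / 365.25 * 360.0          # 90 days of a heliocentric orbit -> ~89 deg
print(f"{sbbh_sweep:.1f} deg, {mbhb_sweep:.1f} deg")
```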
In the ground-based detector's coordinate system, a 10-minute SBBH source travels only 2.5^∘, whereas a 90-day MBHB source moves nearly 90^∘ in the space-based detector coordinate system. Consequently, the parameter uncertainty range for ground-based detectors is larger. Furthermore, both LIGO and ET have three detectors, but ET detectors are all coplanar, whereas LIGO's are not. This situation is similar to the superiority of LISA+TJm over LISA+TJc, where the relative positions of LIGO's three detectors offer greater advantages compared to ET. Therefore, despite the SNR of ET is one order of magnitude higher than LIGO, their ΔΩ values are still very similar. Regarding the ground-based detector network, currently operational LVK cannot fully compensate for the sensitivity differences between second-generation and third-generation detectors. Compared to ET, LVK exhibits a difference of one to two orders of magnitude in the limitations of ppE parameters. Moreover, due to the varied response angles of LVK, ΔΩ values are slightly better than those of ET. The LVK+ET combination is the most comprehensive, significantly enhancing network sensitivity. For instance, Δα_Q and Δα_D can approach the sensitivity level of space-based detector networks, with Δα_V potentially surpassing them. Due to the arm length, the Δα_B and Δα_L of the ground-based detector network is four to six orders of magnitude higher than that of the space-based detector network, making it impossible to distinguish between breathing and longitudinal modes. §.§ Results with redshift In our previous calculations and analysis, we assume fixed redshift: z=0.01 for SBBH and z=1 for MBHB. Different redshifts also lead to variations in the GW signal, potentially impacting the final results. From Eq. (<ref>), the strain is inversely proportional to the luminosity distance, represented as h∝ 1/D_L∼ 1/z. Roughly speaking, as redshift increases, the strain decreases. Regarding the ppE parameter ξ, the partial derivative of the strain with respect to it, ∂ h/∂ξ∝ 1/z. Consequently, the FIM is (∂ h/∂ξ|∂ h/∂ξ )∝ 1/z^2, and thus the uncertainty of the ppE parameter scales as Δξ∝ z. In other words, the uncertainty of the ppE parameter is directly proportional to the redshift: Δξ(z) =𝒦z+Δξ(0), where 𝒦 is a proportionality constant. We calculate the results for SBBH and MBHB at different redshifts, as shown in Table <ref>. From Eq. (<ref>), it is clear that the uncertainty of the ppE parameters is proportional to the redshift. Additionally, the rate of change in uncertainty with redshift varies for different ppE parameters. Table <ref> illustrates that the influence of redshift on Δα_Q and Δα_D is negligible, as most 𝒦 values are less than 10^-4. That is because these two parameters are directly related to frequency, and the change in GW amplitude due to redshift has a negligible effect on frequency resolution. Moreover, ground-based detectors exhibit relatively larger 𝒦 values compared to space-based detectors. The 𝒦 for Δα_V is significantly larger compared to the first two parameters. Space-based detectors typically have 𝒦 values around 10^-2. In contrast, ground-based detectors like Virgo and KAGRA have 𝒦 values around 10^2. LIGO and ET perform better, with 𝒦 < 1 in most cases. Those observations are consistent with the analysis in last section, explaining these differences based on detector performance. Due to the degeneracy in ground-based detectors, Δα_B (Δα_L) exhibits significantly large 𝒦 values, exceeding 10^6. 
In contrast, space-based detectors, which break this degeneracy, yield noticeably better results. The relationship between the frequency range of MBHBs and the transfer frequency determines the results in Table <ref>. Smaller mass MBHBs exhibit lower frequencies closer to the transfer frequency, enhancing the resolution of these two modes and leading to smaller 𝒦 values. The results are similar for different detectors: Taiji has the smallest transfer frequency, while TianQin has the largest, corresponding to their respective 𝒦 values. §.§ Multiband observation Space-based detectors can not only observe MBHBs, but also SBBHs. Among tens of thousands of sources, a small number of SBBHs increase in frequency rapidly and merge into the ground-based detector frequency band within a short period, enabling multiband observations by both space- and ground-based detectors <cit.>. We calculate FIM for two typical SBBHs observed across multiple frequency bands and compared their results with those observed only by ground-based detectors, as shown in Fig. <ref>. Multiband observations provide a significant enhancement to ground-based detectors, particularly in resolving the degeneracy between the breathing and longitudinal modes. The addition of space-based detectors effectively breaks the degeneracy, allowing multiband SBBH observations to limit Δα_B and Δα_L to levels comparable with space-based detector observations of MBHB. Additionally, multiband observations greatly improve the performance of Virgo and KAGRA in Δα_V, reducing them by nearly three orders of magnitude, approaching the levels achieved by LIGO and ET. Furthermore, the Δα_V level of the ground-based detector network remains higher than that of the space-based detector network, so the improvement from multiband observations is not as significant. Regarding Δα_Q and Δα_D, multiband observations have varying improvements for SBBH depending on the mass. The results for SBBH with M=20 M_⊙ do not show significant improvement, while those for M=100 M_⊙ show some improvement. Due to the high sensitivity of ET, multiband observation does not significantly improve ET and LVK+ET, it significantly enhances the performance of only second-generation detectors. Within the space-based detector frequency band, although both types of SBBH have a duration of one year, their frequency variation ranges differ. Specifically, the frequency variation for M=20 M_⊙ SBBHs is 46 mHz, whereas for M=100 M_⊙ it increases to 78 mHz. Greater frequency variations lead to stronger constraints, thereby affecting the observed improvements. Moreover, longer observation times result in larger frequency variations. While four-year observations generally outperform one-year observations, this improvement is not significant. This is because the further away from the merger, the smaller the frequency change over the same time period. For example, in calculations for SBBH with M=100 M_⊙, the frequency change in one year leading up to 0.1 Hz is 78 mHz, but the change over four years is only 87 mHz, indicating a minimal additional change of 9 mHz over three years. Hence, our choice of a one-year observation duration remains reasonable. The performance of different space-based detectors significantly affects the results of multi band observations, for example, Taiji is superior to LISA. Moreover, the selected cutoff frequency of SBBHs within the space-based detector band is 0.1 Hz, lower than TianQin's 0.28 Hz transfer frequency. 
That disparity renders TianQin less effective than LISA and Taiji in observing breathing and longitudinal modes, resulting in a multiband observation improvement for Δα_B and Δα_L that is two orders of magnitude weaker compared to the other detectors. Conversely, TianQin exhibits significantly enhanced multiband observations for Δα_Q, Δα_D, and Δα_V, because TianQin's sensitivity within the selected SBBH frequency band surpasses that of the other detectors, leading to superior results. Overall, the addition of space-based detectors compensates for the degeneracy of ground-based detectors in scalar modes and also improves other aspects, enabling the observation results of SBBH to reach the level of MBHB results. §.§ Multimessenger observation Apart from multiband observations using space- and ground-based detectors, multimessenger observations, which include the assistance of electromagnetic (EM) observations, can similarly enhance the performance of GW observations. EM observations offer a unique perspective about BBH sources distinct from GW observations. References <cit.> have demonstrated that mergers of MBHBs can emit EM radiation from accretion disks, while mergers of SBBHs in active galactic nuclei can produce jets, making both scenarios promising targets for EM detectors. Accurate determination of the source's sky position enables subsequent EM follow-up observations to search for counterparts. The results of EM observations can then serve as valuable priors to reduce uncertainties in GW observations. The EM effects produced by BBH mergers are expected to be detected by infrared, optical, and X-ray observatories, and the results from these observatories have different impacts on the enhancement of GW. For simplicity, we consider an ideal scenario where EM observations accurately determine specific parameters, ignoring differences between EM detectors. According to Refs. <cit.>, when considering the improvement from EM observations, the corresponding row and column in the FIM are removed to reduce the uncertainties of other parameters in the GW data. That method assesses the performance enhancement of GW observations under ideal multimessenger conditions. We quantify these enhancements when EM observations perfectly determine redshift, sky position, or inclination angle, comparing them against results from GW observations alone. Notably, although this study only focuse on ppE parameters, multimessenger observations also offer varying degrees of enhancement for time, mass, polarization, and other parameters. For details on enhancements in non-ppE parameters, refer to Ref. <cit.>. For LISA and Taiji, multimessenger observations demonstrate notable enhancements primarily for MBHBs with M=10^7 M_⊙. Determining the redshift or inclination angle through multimessenger observations can reduce Δα_V, Δα_B, and Δα_L by 15% to 24%. Determining the sky position can reduce these parameters by 36% to 49%. Multimessenger observations yield even more significant improvements for TianQin, enhancing detection capabilities across all three typical MBHB masses. When the redshift, inclination angle, or sky position is accurately determined, Δα_V can be reduced by 40% to 57%. Furthermore, due to arm length, reductions in Δα_B and Δα_L are more modest, ranging from 9% to 17%. For ground-based detectors, the impact of multimessenger observations on Δα_B and Δα_L is considerably smaller compared to multiband observations, as it does not resolve the degeneracy and thus is not considered further. 
LIGO shows less significant improvement compared to ET, due to its less diverse response angles. Determining the redshift or inclination angle results in a reduction of Δα_V by 5% to 7%, with negligible changes observed when determining the sky position. In contrast, ET exhibits more substantial enhancements. When any one of the three parameters is determined, Δα_V can be reduced by 15% to 43%, with larger reductions for higher-mass SBBHs. Moreover, multimessenger observations yield remarkable improvements for Virgo and KAGRA, achieving reductions in Δα_V by 97% to 99.4%. This improvement is only one order of magnitude lower than that of LIGO and significantly surpasses the outcomes from GW observations alone. Multimessenger observations, whether for space- or ground-based detectors, do not improve the other two ppE parameters, Δα_Q and Δα_D. That is because these two ppE parameters are directly tied to GW frequency, where GW observation precision is highest. Theoretically, multimessenger observations could enhance constraints on all ppE parameters, but in practice, some improvements are too marginal. Therefore, we only present results where the parameter uncertainty is reduced by more than 5%, with most unreported results being less than 1%. In summary, multimessenger observations can moderately enhance the performance of GW observations, providing stronger constraints on ppE parameters. § CONCLUSION In this paper, we investigate the expected targets for constraining GW polarization using space- and ground-based detectors within the ppE framework. Specifically, we adopt a model-independent ppE framework that incorporates a GW waveform with all six polarization modes, adhering to current GR test results. For space-based detectors, we consider LISA, Taiji, and TianQin, along with alternative orbital configurations, both individually and in network scenarios. For ground-based detectors, we include the currently operational LVK and the third-generation detector ET, evaluated both individually and in network. Our analysis focuses on detector performance across different polarization modes, emphasizing the response function's perspective. Furthermore, by simulating three typical masses of MBHB and SBBH, we use the FIM to quantify ppE parameter constraints for both space- and ground-based detectors, presenting the results under different combinations. Furthermore, we explore how multiband and multimessenger observations enhance ppE constraints, offering a comprehensive analysis from diverse angles. For space-based detectors, Taiji provides the best constraints on GW polarization modes, with LISA performing better than TianQin. Specifically, TianQin's constraints on scalar modes are significantly weaker than those of LISA and Taiji, while the uncertainties in other ppE parameters among the three detectors differ by less than an order of magnitude. In network scenarios, the combination of LISA+TJm performs slightly better than other combinations, and the network of all three detectors outperforms two-detector networks. For ground-based detectors, the differences in constraints on frequency-related ppE parameters are minor, but LIGO's results for vector modes are substantially better than those of Virgo and KAGRA, approaching the level of ET. In network scenarios, the LVK combination does not surpass the results of ET alone, with the LVK+ET network providing the best constraints. 
Overall, ground-based detectors are unable to distinguish between the breathing and longitudinal modes and are significantly less capable of constraining the scalar modes than space-based detectors. Other than this, the constraints on ppE parameters from space- and ground-based detectors are comparable. Multiband observation can effectively break the degeneracy of ground-based detectors between the breathing and longitudinal modes, addressing their inability to distinguish the scalar modes. It also improves the ability of Virgo and KAGRA to constrain the vector modes, bringing them to the level of LIGO, and offers certain improvements for the other ppE parameters. Moreover, TianQin brings the greatest improvement, followed by Taiji, with LISA showing the least improvement. Multimessenger observation provides the most significant enhancement for TianQin, bringing its results for the vector modes close to those of LISA. For the scalar modes, LISA and Taiji see greater improvements. ET shows a larger improvement than LIGO, and Virgo and KAGRA experience substantial enhancements, with their results for the vector modes approaching the level of LIGO. Overall, both multiband and multimessenger observations can enhance GW detection results to a certain extent, leading to better constraints on GW polarization modes. In future research, we plan to approach the study from several aspects. First, we will use longer-duration GW signals, which contain more information, and consider the non-light-speed propagation of the different modes, where vector and scalar modes may arrive before or simultaneously with tensor modes <cit.>. Second, we aim to investigate the effects of Time-Delay Interferometry (TDI), as different TDI combinations may lead to subtle differences in the outcomes <cit.>. Additionally, we will consider various sources, examining the impact of ppE parameters on unequal-mass BBH systems. Lastly, in the context of multimessenger observations, we will study the performance of EM detectors under realistic conditions, providing results that are more aligned with actual scenarios to enhance GW observations. In summary, through these in-depth studies, we aim to advance the research on GW polarization modes, evaluate detector performance more comprehensively and in greater detail, and provide more valuable results. This work was supported by the National Key Research and Development Program of China (Grant No. 2021YFC2203004), the National Natural Science Foundation of China (Grant No. 12347101), and the Natural Science Foundation of Chongqing (Grant No. CSTB2023NSCQ-MSX0103).
http://arxiv.org/abs/2407.13151v1
20240718043610
Wavelet-based Bi-dimensional Aggregation Network for SAR Image Change Detection
[ "Jiangwei Xie", "Feng Gao", "Xiaowei Zhou", "Junyu Dong" ]
eess.IV
[ "eess.IV", "cs.CV" ]
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS Shell Wavelet-based Bi-dimensional Aggregation Network for SAR Image Change Detection Jiangwei Xie, Feng Gao, Xiaowei Zhou, Junyu Dong This work was supported in part by the National Science and Technology Major Project under Grant 2022ZD0117202, in part by the Natural Science Foundation of Qingdao under Grant 23-2-1-222-ZYYD-JCH, and in part by the Postdoctoral Fellowship Program of CPSF under Grant GZC20241614. (Corresponding author: Xiaowei Zhou.) Jiangwei Xie, Feng Gao, Xiaowei Zhou, and Junyu Dong are with the School of Computer Science and Technology, Ocean University of China, Qingdao 266100, China. ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Synthetic aperture radar (SAR) image change detection is critical in remote sensing image analysis. Recently, the attention mechanism has been widely used in change detection tasks. However, existing attention mechanisms often employ down-sampling operations such as average pooling on the Key and Value components to enhance computational efficiency. These irreversible operations result in the loss of high-frequency components and other important information. To address this limitation, we develop Wavelet-based Bi-dimensional Aggregation Network (WBANet) for SAR image change detection. We design a wavelet-based self-attention block that includes discrete wavelet transform and inverse discrete wavelet transform operations on Key and Value components. Hence, the feature undergoes downsampling without any loss of information, while simultaneously enhancing local contextual awareness through an expanded receptive field. Additionally, we have incorporated a bi-dimensional aggregation module that boosts the non-linear representation capability by merging spatial and channel information via broadcast mechanism. Experimental results on three SAR datasets demonstrate that our WBANet significantly outperforms contemporary state-of-the-art methods. Specifically, our WBANet achieves 98.33%, 96.65%, and 96.62% of percentage of correct classification (PCC) on the respective datasets, highlighting its superior performance. Source codes are available at <https://github.com/summitgao/WBANet>. Change detection; Synthetic aperture radar; Wavelet transform; Bi-dimensional aggregation module. § INTRODUCTION Synthetic aperture radar (SAR) is adept at producing high-resolution images of the Earth's surface, even under conditions of low visibility caused by adverse weather <cit.>. SAR sensors can penetrate cloud cover, making them especially valuable for Earth observation in cloudy or rainy areas. Consequently, SAR data has garnered significant interest from the research community, supporting a range of applications such as object detection <cit.>, disaster assessment <cit.>, change detection <cit.>, and image classification <cit.>. Among these, change detection serves as a crucial tool for identifying changes in land cover, urban growth, and deforestation. 
Recently, various convolutional neural network (CNN)-based models for change detection have been developed, demonstrating significant advancements in performance. Hou et al. <cit.> introduced an end-to-end dual-branch architecture that merges a CNN with a generative adversarial network (GAN), enhancing the detection of fine-grained changes. Wang et al. <cit.> introduced distinctive patch convolution combined with random label propagation, achieving high accuracy in change detection at a reduced computational cost. Zhao et al. <cit.> utilized a multidomain fusion module that integrates spatial and frequency domain features into complementary feature representations. Zhu et al. <cit.> designed a feature comparison module that limits the number of feature channels in the fusion process, enabling better utilization of fine-grained information in the multiscale feature map for more accurate prediction. The previously mentioned CNN-based methods have demonstrated impressive achievements. Furthermore, the Vision Transformer (ViT) <cit.> has showcased high performance in various computer vision tasks, leading to the adoption of attention mechanisms in change detection models. Zhang et al. <cit.> combined convolution and attention mechanisms to improve the performance of SAR image change detection. Although these pioneering efforts have achieved promising performance, designing an attention-based network for SAR change detection is still a non-trivial task, for the following reasons: 1) High-frequency information loss in self-attention computation. Traditional down-sampling methods applied to the Key and Value components in efficient attention mechanisms often result in the loss of high-frequency components such as texture details. 2) Limitation in non-linear feature transformation. Existing methods require an MLP-like structure for non-linear feature transformation. However, spatial and channel-wise attentions are rarely exploited simultaneously. To address the above two limitations, we propose a Wavelet-based Bi-dimensional Aggregation Network, WBANet for short, which achieves down-sampling without information loss and fuses both spatial and channel information. Specifically, we design a Wavelet-based Self-attention Module (WSM) which uses the Discrete Wavelet Transform (DWT) and Inverse Discrete Wavelet Transform (IDWT) to enable lossless and invertible down-sampling in the self-attention computation. In addition, we develop a Bi-dimensional Aggregation Module (BAM) to enhance the non-linear feature representation capabilities. This module efficiently captures both spatial and channel-wise feature dependencies. The contributions of this letter can be summarized as follows: * We propose the WSM, which integrates the DWT and IDWT for down-sampling without information loss, thus preserving textures and other high-frequency details. * We develop the BAM, which captures both spatial and channel-wise feature dependencies effectively. This module merges information from two branches and enhances the non-linear feature representation capabilities. * Extensive experiments are conducted on three public SAR datasets, demonstrating the efficacy of the proposed WBANet. We have made our code publicly available to benefit other researchers. § METHODOLOGY The framework of the proposed WBANet is illustrated in Fig. <ref>. First, two multitemporal SAR images (I_1 and I_2), captured at different times over the same geographic region, are fed into the network.
The objective of the change detection task is to generate a change map, marking changed pixels as "1" and unchanged pixels as "0". Initially, the pre-classification module uses a logarithmic ratio operator to compute a difference image for pseudo-label generation. Subsequently, the hierarchical fuzzy c-means algorithm <cit.>, <cit.> is employed to classify pixels into changed, unchanged, and intermediate categories. Then, a series of wavelet-based bi-dimensional aggregation blocks processes the data from the pre-classification module. Finally, the output features from these blocks are passed through a fully connected layer to generate the change map. The wavelet-based bi-dimensional aggregation block comprises two components: the Wavelet-based Self-attention Module (WSM) and the Bi-dimensional Aggregation Module (BAM). We present the details of both modules in the following subsections. §.§ Wavelet-based Self-attention Module (WSM) The proposed WSM employs the Discrete Wavelet Transform (DWT) and Inverse Discrete Wavelet Transform (IDWT) to facilitate down-sampling in the attention mechanism. The wavelet transform enables feature extraction at both coarse and fine-grained scales, while also ensuring that the down-sampling is invertible. Owing to its simple structure and high computational efficiency, the Haar wavelet allows the down-sampling operation to be performed quickly. Furthermore, SAR change images often contain considerable sharp high-frequency information, which the Haar wavelet is adept at capturing <cit.>. Thus, we select the Haar wavelet for the down-sampling operation. The structure of this module is shown in Fig. <ref>. To efficiently process the input feature X ∈ℝ^H × W × C, we first reduce its channel dimension to obtain X∈ℝ^H × W ×C/4 using a learnable transformation matrix W_d ∈ℝ^C ×C/4. Following this channel reduction, we apply the DWT with the Haar wavelet to down-sample X and decompose it into four distinct subbands. The Haar wavelet is composed of the low-pass filter f_L = (1/√(2), 1/√(2)) and the high-pass filter f_H = (1/√(2), -1/√(2)). We first encode X into two subbands X_L and X_H along the rows. Subsequently, these subbands are processed using the same filters along the columns, resulting in four wavelet subbands: X_LL, X_LH, X_HL, and X_HH. Here, X_LL∈ℝ^H/2×W/2×C/4 encodes the low-frequency components and contains coarse-grained structural information. X_LH, X_HL, X_HH∈ℝ^H/2×W/2×C/4 represent the high-frequency components and describe fine-grained textures. We then concatenate these four subbands along the channel dimension to get X̂: X̂ = Concat(X_LL, X_LH, X_HL, X_HH) The concatenated output X̂ is transformed into Key (K^w) and Value (V^w) matrices through a convolutional layer, while the Query (Q) is taken from the original input feature X. In this case, the wavelet-based multi-head self-attention computes the interaction across these elements for each head as follows: head_i = Attention(Q_i, K_i^w, V_i^w) =Softmax(Q_i(K_i^w)^⊤/√(D_h)) V_i^w where K_i^w denotes the down-sampled key, V_i^w denotes the down-sampled value, and D_h represents the dimension of each head. To enhance the output of the wavelet-based self-attention module, we apply the IDWT to X̂ to produce X^r. The reconstructed X^r mirrors the details of the original input image, providing excellent local contextualization and an expanded receptive field. The final output integrates the contributions of each attention head with this reconstructed map.
This integration is essential for effectively capturing information across multiple scales. The overall operation can be formulated as follows: WaveAttn(X) = Concat(head_0, ⋯, head_N_h, X^r) W^O, where N_h represents the number of attention heads, and W^O is the transformation matrix that combines all the heads and the reconstructed map into a single output tensor. The use of the wavelet transform in the self-attention mechanism significantly enhances the ability to contextualize information over longer ranges with a reduced computational load compared to conventional self-attention modules. This approach ensures that both global coherence and local detail are preserved and emphasized in the model's outputs. §.§ Bi-dimensional Aggregation Module (BAM) To enhance the non-linear representation capabilities and effectively capture both spatial and channel dependencies, we develop the Bi-dimensional Aggregation Module (BAM), as depicted in Fig. <ref>. This module includes two branches: the channel aggregation branch and the spatial aggregation branch. Channel Aggregation: In this branch, average pooling is applied over the spatial dimensions of the input feature X∈ℝ^H×W×C to aggregate global representations. Subsequently, a fully connected (FC) layer coupled with a GELU activation function reduces the channel dimension from C to C/r, producing an intermediate output X̂. Here, r, the reduction ratio, is set to 2. This is followed by an FC layer and a Sigmoid activation function to generate the output of the channel aggregation branch, X^C∈ℝ^1 × 1 × C. Spatial Aggregation: First, a linear transformation, together with a GELU activation, transforms the channel dimension to C/r, while the spatial dimensions remain unchanged. The resulting intermediate output is then concatenated with X̂ (broadcast over the spatial dimensions) to form X^'∈ℝ^H × W ×2C/r. The process culminates similarly to the channel aggregation branch, resulting in the final output X^S∈ℝ^H × W × 1. The outputs of both branches, X^C and X^S, are merged through an element-wise summation, ensuring the final output retains the same dimensions as the original input X. This integration optimally combines the refined channel and spatial information, enhancing the overall feature representation while maintaining focus on both global context and local details. § EXPERIMENTAL RESULTS AND ANALYSIS §.§ Datasets and Evaluation Metrics To validate the effectiveness of the proposed WBANet, we conducted comprehensive experiments on three distinct SAR datasets: the Chao Lake, Yellow River, and Sulzberger datasets. Chao Lake Dataset: This dataset includes images of Chao Lake in China, captured in May 2020 using the Sentinel-1 sensor. This period coincides with the highest recorded water levels in the lake's history, providing a dynamic range of changes to detect. Sulzberger Dataset: Captured by the European Space Agency's Envisat satellite over five days in March 2011, this dataset documents the breakup of an ice shelf, offering a unique perspective on drastic natural events. Yellow River Dataset: This dataset focuses on the Yellow River Estuary in China, with data collected from June 2008 to June 2009 using the Radarsat-2 SAR sensor. This dataset is particularly challenging due to the pronounced speckle noise. The hierarchical fuzzy c-means algorithm is used to classify pixels into changed, unchanged, and intermediate categories. Pixels from the changed and unchanged groups are randomly selected as training data, while the intermediate group pixels are used as the test data.
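Before turning to the quantitative comparison, we illustrate the core of the WSM described above with a minimal, self-contained sketch. The Haar analysis/synthesis is written as plain tensor arithmetic, a single attention head is used, and all layer sizes are arbitrary; this is our own simplification for exposition, not the released implementation.

import torch
import torch.nn as nn

def haar_dwt(x):
    """Orthonormal 2D Haar DWT of a (B, C, H, W) tensor into four subbands."""
    a = x[..., 0::2, 0::2]; b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]; d = x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a - b + c - d) / 2
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

def haar_idwt(ll, lh, hl, hh):
    """Exact inverse of haar_dwt: the down-sampling is lossless."""
    B, C, H, W = ll.shape
    x = ll.new_zeros(B, C, 2 * H, 2 * W)
    x[..., 0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[..., 0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[..., 1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[..., 1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

class WaveletSelfAttention(nn.Module):
    """Single-head sketch: Q at full resolution, K/V from Haar subbands."""
    def __init__(self, channels, reduced):
        super().__init__()
        self.reduce = nn.Conv2d(channels, reduced, 1)      # C -> C/4 in the text
        self.to_q = nn.Conv2d(channels, reduced, 1)
        self.to_kv = nn.Conv2d(4 * reduced, 2 * reduced, 1)
        self.proj = nn.Conv2d(2 * reduced, channels, 1)

    def forward(self, x):
        B, C, H, W = x.shape
        subbands = haar_dwt(self.reduce(x))                # four (B, C', H/2, W/2) maps
        cat = torch.cat(subbands, dim=1)                   # lossless down-sampling
        k, v = self.to_kv(cat).chunk(2, dim=1)
        q = self.to_q(x)
        d = q.shape[1]
        q = q.flatten(2).transpose(1, 2)                   # (B, HW, d)
        k = k.flatten(2)                                   # (B, d, HW/4)
        v = v.flatten(2).transpose(1, 2)                   # (B, HW/4, d)
        attn = torch.softmax(q @ k / d ** 0.5, dim=-1)     # (B, HW, HW/4)
        out = (attn @ v).transpose(1, 2).reshape(B, d, H, W)
        recon = haar_idwt(*subbands)                       # X^r in the text
        return self.proj(torch.cat([out, recon], dim=1))

x = torch.randn(2, 64, 32, 32)
print(WaveletSelfAttention(64, 16)(x).shape)   # torch.Size([2, 64, 32, 32])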
For a thorough assessment of our model, we employed five commonly used evaluation metrics: False Positives (FP), False Negatives (FN), Overall Error (OE), Percentage of Correct Classification (PCC), and the Kappa Coefficient (KC). §.§ Experimental Results and Discussion We evaluated our WBANet against five state-of-the-art methods: CWNN <cit.>, SAFNet <cit.>, DDNet <cit.>, LANTNet <cit.>, and CAMixer <cit.>, implemented using default parameters from their studies. All the experiments, except for the CWNN running with Matlab, were conducted on the Google Colab platform with Python 3.10.12, PyTorch 2.1.0, and an NVIDIA Tesla T4 GPU with 15 GB of memory. Quantitative results are detailed in Table <ref>. For the Chao Lake dataset, our WBANet excels in all metrics except false negatives (FN), with significant improvements in the Kappa Coefficient (KC) by 24.37%, 3.83%, 2.34%, 1.18%, and 0.82% over CWNN, SAFNet, DDNet, LANTNet, and CAMixer, respectively. On the Sulzberger dataset, WBANet outperforms other methods in overall error (OE), PCC, and KC. Although CAMixer and CWNN show lower FP and FN rates respectively, they both register higher OEs compared to our WBANet. Similarly, on the Yellow River dataset, WBANet leads in all metrics apart from FN. Notably, it enhances the KC value by 5.69%, 4.46%, 4.39%, 2.79%, and 2.35% over CWNN, SAFNet, DDNet, LANTNet, and CAMixer, respectively. While CWNN records a lower FN rate, it lags significantly behind in FP and OE. Fig. <ref> illustrates the visual comparison of change maps produced by different methods on three datasets. Compared to the baseline methods, such as CWNN and SAFNet, our WBANet generates change maps that are visually closer to the ground truth and contain less noise. For instance, in the Yellow River dataset, where speckle noise greatly impacts performance, it is challenging to produce accurate change maps. Here, the performance of CWNN and SAFNet is notably degraded, while DDNet, LANTNet, and CAMixer frequently misclassify changed pixels as unchanged. Experimental results on three SAR datasets confirm that our proposed WBANet outperforms other state-of-the-art methods. The effectiveness of our WSM and BAM demonstrates significant contributions to attention feature extraction and non-linear representation modeling. §.§ Ablation Study To evaluate the effectiveness of the proposed Wavelet-based Self-attention Block and Bi-dimensional Aggregation Module, ablation experiments were performed on three datasets. We designed three variants: (1) Basic Network, which is the WBANet without the WSM and BAM; (2) w/o WSM, which omits the Wavelet-based Self-attention Module; and (3) w/o BAM, which lacks the Bi-dimensional Aggregation Module. The results in Table <ref> clearly show that both the WSM and the BAM significantly enhance the non-linear representation capabilities, thereby improving change detection performance. Additionally, we utilized the t-SNE <cit.> tool to visualize feature characteristics before and after applying the Wavelet-based Self-attention Block. As depicted in Fig. <ref>, the representations post-application display more distinct, well-defined clusters compared to the original input. §.§ Analysis of the Block Number The number of Wavelet-based Bi-dimensional Aggregation Blocks, denoted as N, is a crucial parameter. We explored the relationship between N and the Percentage of Correct Classification (PCC) by varying N from 0 to 8. As illustrated in Fig. 
<ref>, the PCC consistently improves as the number of Wavelet-based Bi-dimensional Aggregation Blocks increases up to 5. However, beyond this point, the PCC begins to decline due to the increased model complexity. Consequently, we optimized N for the different datasets: N=5 for the Chao Lake dataset, N=2 for the Sulzberger dataset, and N=4 for the Yellow River dataset. § CONCLUSION In this letter, we introduce a novel WBANet for the SAR image change detection task. The WBANet utilizes the DWT and IDWT to achieve down-sampling without the loss of high-frequency details and other important information. Additionally, we developed the BAM to enhance non-linear representation capabilities by capturing spatial and channel dependencies and refining features. Extensive experiments on three SAR datasets have verified the effectiveness and soundness of our solution.
http://arxiv.org/abs/2407.13177v1
20240718053208
Superconformal Indices of 3d $\mathcal{N}=2$ SCFTs and Holography
[ "Nikolay Bobev", "Sunjin Choi", "Junho Hong", "Valentin Reys" ]
hep-th
[ "hep-th" ]
KIAS-P24009 a]Nikolay Bobev, b]Sunjin Choi, c]Junho Hong, d]and Valentin Reys [a]Institute for Theoretical Physics, KU Leuven , Celestijnenlaan 200D, B-3001 Leuven, Belgium [b]School of Physics, Korea Institute for Advanced Study , 85 Hoegi-ro, Dongdaemun-gu, Seoul 02455, Republic of Korea [c]Department of Physics & Center for Quantum Spacetime, Sogang University , 35 Baekbeom-ro, Mapo-gu, Seoul 04107, Republic of Korea [d]Université Paris-Saclay, CNRS, CEA, Institut de physique théorique, 91191, Gif-sur-Yvette, France nikolay.bobev@kuleuven.be sunjinchoi@kias.re.kr junhohong@sogang.ac.kr valentin.reys@ipht.fr We study the superconformal index of 3d =2 superconformal field theories on S^1×_ω S^2 in the Cardy-like limit where the radius of the S^1 is much smaller than that of the S^2. We show that the first two leading terms in this Cardy-like expansion are dictated by the Bethe Ansatz formulation of the topologically twisted index of the same theory. We apply this relation to 3d =2 holographic superconformal field theories describing the low-energy dynamics of N M2-branes and derive closed form expressions, valid to all orders in the 1/N expansion, for the two leading terms in the Cardy-like expansion of the superconformal index. We also discuss the implications of our results for the entropy of supersymmetric Kerr-Newman black holes in AdS_4 and the four-derivative corrections to 4d gauged supergravity. Superconformal Indices of 3d 𝒩=2 SCFTs and Holography [ Received X XX, XXXX; accepted X XX, XXXX ===================================================== § INTRODUCTION The interplay between supersymmetric localization and holography has led to valuable precision test of the AdS/CFT correspondence and a new vantage point towards the structure of string and M-theory on non-trivial flux backgrounds, see <cit.> for reviews and further references. A prominent role in these holographic explorations is played by the superconformal indices, or S^1×_ωS^d-1 supersymmetric partition functions. They capture the degeneracy of supersymmetric states in the SCFT and provide a microscopic counting for the entropy of the dual supersymmetric Kerr-Newman black holes in AdS_d+1. The goal of this work is to continue the exploration of the relation between supersymmetric partition functions and holography in the context of the superconformal index (SCI) of 3d 𝒩=2 SCFTs <cit.>. The SCI can be viewed as a supersymmetric partition function ℐ on the Euclidean manifold S^1×_ω S^2 where ω is the fugacity for the angular momentum on S^2. Using supersymmetric localization, one can derive a matrix integral expression for the SCI that facilitates its evaluation <cit.>. The resulting matrix integral is a complicated function of the various parameters of the SCFT, like real masses or flavor symmetry fugacities, which is hard to evaluate for general values of ω. One way to make progress is to focus on the Cardy-like limit of the SCI where the radius of the S^1 is taken to be much smaller than the one of the S^2, see <cit.>. Due to supersymmetry, this geometric limit amounts to taking the angular fugacity ω to vanish. As shown in <cit.>, in the ω→ 0 limit of the SCI, the two leading terms are of order ω^-1 and ω^0.[The Cardy-like expansion in 1/ ω of the SCI of 4d 𝒩=1 SCFTs terminates at order ω^1, see <cit.>. As we show explicitly in Section <ref>, this is not the case for 3d 𝒩=2 theories, where one generically finds an infinite series in powers of ω.] 
Moreover, it was noted in <cit.> that in some theories, these two leading terms have an intimate relation to another supersymmetric partition function known as the topologically twisted index (TTI). The TTI is a partition function of 3d 𝒩=2 SCFTs placed on S^1×Σ_𝔤 with a partial topological twist on the Riemann surface of genus 𝔤 by the superconformal U(1) R-symmetry of the theory <cit.>. Using supersymmetric localization, the TTI can be reduced to a matrix integral, which in turn can be evaluated by a version of the residue theorem and recast as a system of algebraic equations. An important role in this “Bethe Ansatz” approach to the evaluation of the TTI is played by the so-called Bethe potential 𝒱. The observation of <cit.>, further studied in <cit.>, is that the ω^-1 term in the Cardy-like expansion of the SCI is determined by the Bethe potential, while the ω^0 term is fixed by the logarithm of the TTI on S^1× S^2. Recently, in <cit.>, we studied this relation between the SCI and the TTI in the context of two models arising from N M2-branes in M-theory, namely the ABJM theory <cit.> and the 3d 𝒩=4 ADHM theory (also called the N_f matrix model) <cit.>. It was shown in <cit.> that the Bethe potential and TTI for these two models can be computed to all orders in the 1/N expansion using precise numerical techniques that lead to exact analytic expressions. These results were then employed in <cit.> to derive the first two leading terms in the Cardy-like expansion of the SCI for these theories. The explicit expressions for the SCI for these two models can then be expanded at large N and the leading N^3/2, N^1/2 and log N terms can be matched to the on-shell action of the dual supersymmetric Euclidean Kerr-Newman black hole solution embedded in M-theory, see <cit.>. The goal of this work is to extend and generalize these results in several ways. In Section <ref>, we build on the results of <cit.> to provide a general derivation of the relation between the order ω^-1 and ω^0 terms in the Cardy-like expansion of the SCI and the Bethe potential and the TTI. Importantly in this derivation, we do not assume that the 3d 𝒩=2 theory at hand is holographic or has any kind of large N limit. In Section <ref>, we apply this relation between the SCI and the TTI to study the SCI for various 3d 𝒩=2 SCFTs arising on the worldvolume of M2-branes. These theories have a dual holographic description in terms of M-theory on an asymptotically AdS_4× Y_7 background, where the details of the SCFT are encoded in the geometry of the 7d Sasaki-Einstein (SE) manifold Y_7. For this class of models, we can leverage the numerical techniques for the calculation of the TTI discussed in <cit.> to find explicit expressions, valid to all orders in the 1/N expansion, for the first two leading terms in the Cardy-like expansion of the SCI. In Section <ref>, we study the holographic implications of these field theory results for the SCI. We first formulate a prediction for the path integral of M-theory on the background of a supersymmetric Kerr-Newman black hole asymptotic to AdS_4× Y_7 to all order in the 1/N expansion. We also show how the N^1/2 terms in the large N expansion of the SCI and TTI can uniquely determine the four-derivative couplings in 4d =2 minimal gauged supergravity following the approach of <cit.>. 
This interplay between field theory and supergravity allows for a number of consistency checks of our results and leads to new predictions for the path integrals of the 3d SCFTs in question on other compact Euclidean manifolds. We also show that our results for the log N term in the large N expansion of the SCI are in agreement with the recent calculations of logarithmic corrections to the entropy of supersymmetric Kerr-Newman black holes in AdS_4 <cit.>. We conclude our discussion with a summary of some open problems and possible generalizations of our work in Section <ref>. The five appendices contain our conventions for some of the special functions we use as well as various technical aspects of our calculations. § INDICES OF 3D =2 SCFTS In this section, we first briefly review the S^1×_ω S^2 superconformal index (SCI) and the S^1× S^2 topologically twisted index (TTI) of 3d =2 SCFTs. Next, we explain how the Cardy-like limit of the former is related to the latter, generalizing the observation of <cit.> for the ABJM/ADHM theories to a larger class of =2 SCFTs. We first summarize the main results schematically and then provide more details in the subsequent discussion. §.§ Summary of main results Consider 3d =2 Chern-Simons-matter quiver gauge theories with p∈ℕ nodes, each of which represents a gauge group U(N)_k_r with Chern-Simons (CS) level k_r (r=1,…,p). The matter content of the quiver gauge theory consists of 𝒩=2 chiral multiplets, collectively denoted by Ψ, in the _Ψ representation of the gauge group G=⊗_r=1^pU(N)_k_r with R-charge R(Ψ) and flavor charges f_x(Ψ), where x runs over the dimension of the Cartan subgroup of the flavor group. Note that when p=0 the theory has no gauge group and consists only of interacting chiral multiplets. Our focus here is on the =2 SCFTs living at the superconformal IR fixed point of the RG flow originating from the class of =2 asymptotically free CS-matter quiver gauge theories described above in the UV.[Later we will also consider deformations of the SCFTs that break the conformal symmetry, and explore the relation between indices after such deformations. In a slight abuse of notation, we will still refer to these as =2 SCFTs.] For these =2 SCFTs, one can compute various important physical quantities protected by supersymmetry along the RG flow by applying supersymmetric localization to the UV CS-matter theories with explicit Lagrangians. Of particular interest in this work are the SCI and the TTI, and the relation between these two partition functions. For the class of 3d =2 SCFTs described above we show that the SCI in the Cardy-like limit ω→ 0^+ and the TTI are related in the following way, log[SCI]=-1πω[Bethe potential]+[log[TTI]]+φ+(ω) . Here q=e^πω is the fugacity for the angular momentum on S^2, the Bethe potential is introduced in the Bethe-Ansatz (BA) formalism for the TTI discussed in Section <ref>, and the pure imaginary term φ arises from various phase factors that do not depend on the details of a given theory. We refer to the relation above as the Bethe formulation of the SCI. The reason is that all quantities appearing on the r.h.s. of (<ref>) arise naturally in the Bethe formulation of the TTI described in <cit.>. It is worth emphasizing that the relation in (<ref>) does not require any large N limit or the existence of a holographic dual description of the CS-matter theory at hand. 
Put differently, it is a relation between two field theory partition functions, which is non-perturbative in N and holds for all =2 SCFTs described above. For illustration, here we present perhaps the simplest example that exhibits the relation (<ref>). Consider an 3d =2 theory with two chiral multiplets and a superpotential W=Ψ_1(Ψ_2)^2 . Each chiral multiplet Ψ_i is charged under a U(1)_R R-symmetry and a U(1)_F flavor symmetry with charges (r_i,f_i) respectively. Since the superpotential has R-charge two and is neutral under the flavor symmetry the charges of the chiral multiplets should satisfy the constraints r_1+2r_2=2 and f_1+2f_2=0 . The SCI of this theory can be computed by supersymmetric localization <cit.> and reads _W(q,ξ)=_Ψ_1(q,ξ)_Ψ_2(q,ξ) , where _Ψ_i(q,ξ)=(ξ^-f_iq^2-r_i;q^2)_∞(ξ^f_iq^r_i;q^2)_∞ , in terms of ∞-Pochhammer symbols for the fugacities q = e^iπω and ξ associated with the U(1)_R and U(1)_F global symmetries. In the Cardy-like limit ω→0^+, one can expand the single chiral multiplet contribution as log_Ψ_i(q,ξ)=-1πωLi_2(y_i)+log(y_i^1/21-y_i)^1-_i+(ω) , where we have used the asymptotic expansion of the ∞-Pochhammer symbol (<ref>) and the new parameters defined as y_iq^_i=q^r_iξ^f_i , with |y_i|=1 , 𝔫_i ∈ℝ , q∈ℝ . The logarithm of the SCI, log_W(q,ξ), then takes precisely the form of (<ref>) provided the Bethe potential and the TTI for a single chiral multiplet are identified as _Ψ_i=Li_2(y_i) , and Z_Ψ_i=(y_i^1/21-y_i)^1-_i , respectively. Indeed, as we review in Section <ref>, the Bethe potential and the TTI for a single chiral multiplet are given by (<ref>). §.§ Superconformal index The SCI, or S^1×_ω S^2 partition function, was defined in <cit.> and then analyzed for generic Lagrangian 3d =2 SCFTs via supersymmetric localization in <cit.>. It can be written as a trace over the Hilbert space of the theory in radial quantization, (q,ξ)=[(-1)^2j_3e^-β_1{,^†}q^Δ+j_3∏_xξ_x^F_x] , {,^†}=Δ-R-j_3 , where is a supercharge, Δ is the energy in radial quantization, R is the R-charge, j_3 is the third component of the angular momentum on S^2, and F_x are charges associated with flavor symmetries. We introduce the chemical potential ω associated with the fugacity q as q=e^πω and refer the reader to <cit.> for its precise geometric meaning. Note that by the usual pairing argument, the index is independent of the circumference β_1 of the S^1 part of the geometry, see <cit.> for a summary of the geometric interplay between β_1 and ω. We note that the fermion number operator (-1)^F in the trace formula for the SCI of <cit.> is replaced with (-1)^2j_3 in (<ref>), which takes into account non-trivial phase contributions from magnetic monopoles based on the prescription of <cit.>, see also <cit.> for a related discussion. The matrix integral expression of the SCI trace formula (<ref>) for the class of 3d 𝒩=2 SCFTs described in the previous subsection can be computed by supersymmetric localization and reads <cit.> (q,ξ) =1(N!)^p∑__1,…,_p∈ℤ^N∮(∏_r=1^p∏_i=1^Ndz_r,i2π z_r,iz_r,i^k_r_r,iξ_T_r^_r,i) ×∏_r=1^p∏_i≠ j^Nq^-12|_r,i-_r,j|(1-z_r,iz_r,j^-1q^|_r,i-_r,j|) ×∏_Ψ∏_ρ_Ψ(-1)^12(ρ_Ψ()+|ρ_Ψ()|)(q^1-R(Ψ)e^-ρ_Ψ(h)∏_xξ_x^-f_x(Ψ))^12|ρ_Ψ()| 5em×(e^-ρ_Ψ(h)∏_xξ_x^-f_x(Ψ)q^2-R(Ψ)+|ρ_Ψ()|;q^2)_∞(e^ρ_Ψ(h)∏_xξ_x^f_x(Ψ)q^R(Ψ)+|ρ_Ψ()|;q^2)_∞ , where we have turned on mixed CS terms between gauge and topological symmetries with the corresponding fugacities ξ_T_r. 
In the matrix model (<ref>), contour integrals for gauge zero modes z_r,i=e^ h_r,i are over the unit circle, ρ_Ψ runs over the weights of the representation _Ψ of the 𝒩=2 chiral multiplet Ψ with respect to the gauge group G=⊗_r=1^pU(N)_k_r, and _r,i stand for integer-quantized gauge magnetic fluxes. We omit N and {k_r} in the argument of the SCI for notational convenience throughout this paper. It is worth mentioning that the fugacities associated with the topological symmetries in the localization formula (<ref>) are not precisely the same as the ones introduced through the F_x in the trace formula (<ref>). We redefined them by absorbing extra phases in the localization formula arising from the replacement (-1)^F→(-1)^2j_3 in the trace formula <cit.>. In Appendix <ref> we present the details of the derivation of the localization formula (<ref>) starting from the convention of <cit.>. §.§.§ Factorization Evaluating the integrals over gauge zero modes and the sum over gauge magnetic fluxes in the localization formula (<ref>) is highly involved in general. In order to make progress, we will work in the Cardy-like limit ω→i0^+ following <cit.>. Firstly, it is useful to rewrite the localization formula (<ref>) as follows: (ω,Δ,) =1(N!)^p∑__1,⋯,_p∈ℤ^N∮(∏_r=1^p∏_i=1^Ndz_r,i2π z_r,iz_r,i^k_r_r,i(y_T_rq^-_r)^_r,i) ×∏_r=1^p∏_i≠ j^N(qz_r,iz_r,j^-1)^-12|_r,i-_r,j|(z_r,i^-1z_r,jq^|_i-_j|;q^2)_∞(z_r,iz_r,j^-1q^2+|_i-_j|;q^2)_∞ ×∏_Ψ∏_ρ_Ψ(-1)^12(ρ_Ψ()+|ρ_Ψ()|)(q^1-_Ψe^-ρ_Ψ(h)y_Ψ^-1)^12|ρ_Ψ()| 5em×(e^-ρ_Ψ(h)y_Ψ^-1q^2-_Ψ+|ρ_Ψ()|;q^2)_∞(e^ρ_Ψ(h) y_Ψ q^_Ψ+|ρ_Ψ()|;q^2)_∞ , where we have introduced new parameters y_Ψ=e^πΔ_Ψ, y_T_r=e^πΔ_T_r (Δ_Ψ,Δ_T_r∈ℝ) and _Ψ,_r∈ℝ via the relations[In <cit.>, the ABJM SCI was analyzed in a similar fashion but the (Δ,) parameters were introduced after the change of integration variables involving the degrees of freedom originally carried by the fugacity associated with the topological symmetry. As a result, the ABJM SCI is written as a function of (Δ_Ψ,_Ψ) only, which can take any configuration compatible with the superpotential charge constraints. We follow the same approach in Section <ref> for other SCFT examples with 2 nodes, namely N^0,1,0 and Q^1,1,1 theories, so that (Δ_T_r,_r) do not carry any extra degrees of freedom.] y_Ψ q^_Ψ=q^R(Ψ)∏_xξ_x^f_x(Ψ) , y_T_rq^-_r=ξ_T_r . We have also replaced the argument of the SCI accordingly, (q,ξ) → (ω,Δ,) , where Δ and collectively represent Δ=(Δ_Ψ,Δ_T_r) and =(_Ψ,_r). Next, using the property of the ∞-Pochhammer symbol (<ref>) and assuming q=e^πω∈ℝ, we can remove the absolute signs in (<ref>) and write (ω,Δ,) =1(N!)^p∑__1,⋯,_p∈ℤ^N∮_|s_r,i|=e^-πω_r,i(∏_r=1^p∏_i=1^Nds_r,i2π s_r,ie^ k_r4πω(_r,i^2-U_r,i^2)+2ω(Δ_T_r-ω_r)(_r,i-U_r,i)) ×(-1)^pN(N-1)2∏_r=1^p∏_i≠ j^N(1-s_r,i^-1s_r,j)^12(1-_r,i^-1_r,j)^12×∏_r=1^p∏_i=1^Ne^-ωℓ_r,i(_r,i-U_r,i) ×∏_Ψ∏_ρ_Ψe^8πω(ρ_Ψ()^2-ρ_Ψ(U)^2)+4ωΔ_Ψ(ρ_Ψ()-ρ_Ψ(U))-4(1-_Ψ)(ρ_Ψ()-ρ_Ψ(U))(e^-ρ_Ψ()y_Ψ^-1q^2-_Ψ;q^2)_∞(e^ρ_Ψ(U) y_Ψ q^_Ψ;q^2)_∞ , where we have introduced new integration variables s_r,i =e^ U_r,i=z_r,iq^-_r,i=e^(h_r,i-πω_r,i) , _r,i =e^-_r,i=z_r,i^-1q^-_r,i=e^-(h_r,i+πω_r,i) , and also, for later convenience, introduced a trivial phase e^-2π_r,iℓ_r,i=e^-ωℓ_r,i(_r,i-U_r,i)=1 (_r,i,ℓ_r,i∈ℤ) , in terms of an arbitrary set of integers {ℓ_r,i}. Importantly, the resulting expression of the SCI (<ref>) is factorized into holomorphic and anti-holomorphic parts with respect to the new integration variables (<ref>), as discussed in <cit.> (see also <cit.>). 
Before considering the Cardy-like limit of the factorized SCI (<ref>), we highlight a couple of technical steps entering the factorization of the =2 SCI that differ from the previous analysis for the ABJM/ADHM SCI in our previous work <cit.>: * The new integration variables (<ref>) are slightly different from the ones in <cit.>. * The introduction of an arbitrary set of integers {ℓ_r,i} in (<ref>) is a new ingredient that streamlines the analysis. These two differences do not affect the value of the SCI, but they do change its factorized structure and thereby make the comparison between the Cardy-like expansion of the SCI and the TTI more straightforward, as we show below in Section <ref>. §.§.§ Cardy-like limit Now we take the Cardy-like limit q→1^- ⇔ ω→0^+ . Following the details spelled out in Appendix <ref>, one can expand the SCI (<ref>) in the Cardy-like limit (<ref>) as <cit.> (ω,Δ,) =1(N!)^p(-1)^pN(N-1)2∫_ℂ^pN(∏_r=1^p∏_i=1^NdU_r,id_r,i-4π^2ω) ×exp[1πω^(0)[U;Δ,ℓ]+2^(1)[U;Δ,]+(ω)] , where we have introduced the Cardy-like expansion of a holomorphic effective potential, [U;Δ,,ω,ℓ] =^(0)[U;Δ,ℓ]+2πω ^(1)[U;Δ,]+(ω^2) , ^(0)[U;Δ,ℓ] =∑_r=1^p∑_i=1^N[12k_rU_r,i^2-π(2ℓ_r,i-Δ_T_r)U_r,i] +∑_Ψ∑_ρ_Ψ[14ρ_Ψ(U)^2+πΔ_Ψ2ρ_Ψ(U)-Li_2(e^ρ_Ψ(U)y_Ψ)] , ^(1)[U;Δ,] =2∑_r=1^p_r∑_i=1^NU_r,i-12∑_r=1^p∑_i≠ j^NLi_1(e^(U_r,j-U_r,i)) +4∑_Ψ(1-_Ψ)∑_ρ_Ψ(ρ_Ψ(U)+πΔ_Ψ) +12∑_Ψ(1-_Ψ)∑_ρ_ΨLi_1(e^ρ_Ψ(U)y_Ψ) . Observe that the 𝒪(ω^-1) leading order effective potential (<ref>) depends on the set of arbitrary integers {ℓ_r,i} introduced previously. This fact plays a crucial role in matching the leading order effective potential for the SCI (<ref>) and the Bethe potential for the TTI introduced below in Section <ref>. Evaluating the integral (<ref>) using the saddle-point approximation described in Appendix B of <cit.>, we obtain the Cardy-like expansion of the log of the SCI log(ω,Δ,) =1πω^(0)[U^⋆;Δ,ℓ]+2^(1)[U^⋆;Δ,] -12logℍ[U^⋆;Δ]+log(-1)^pN(N-1)2+(ω) , where we have implicitly assumed that the contribution from a particular saddle point {U_r,i^⋆} satisfying the saddle point equation, ∂^(0)[U;Δ,ℓ]∂ U_r,i=0 , yields a dominant contribution to the SCI in the Cardy-like limit. In (<ref>), ℍ denotes the Hessian matrix around the saddle point. Introducing Y_I∈{U_1,i,⋯,U_p,N}, its matrix components are given by (ℍ[U;Δ])_I,J=[ 𝕁[U;Δ] 0; 0 -𝕁[U;Δ] ] , (𝕁[U;Δ])_I,J≡∂^2^(0)[U;Δ,ℓ]∂ Y_I∂ Y_J . Note that the ℓ-dependence disappears in the 2nd derivative of the leading order effective potential 𝒲^(0). One can write down the Cardy-like expansion of the SCI (<ref>) more explicitly as log(ω,Δ,) =1πω^(0)[U^⋆;Δ,ℓ] +log|1𝕁∏_r=1^p[∏_i=1^Ns_r,i^_r∏_i≠ j^N(1-s_r,is_r,j)]∏_Ψ∏_ρ_Ψ(e^ρ_Ψ(U)/2y_Ψ^1/21-e^ρ_Ψ(U)y_Ψ)^1-_Ψ|_U=U^⋆ +log(-1)^pN(N-1)2-12log(-1)^pN_≡φ+(ω) , using the expression (<ref>) and the decomposition of the Hessian matrix (<ref>). We collect the purely imaginary terms coming from the phase factors independent of flavor fugacities and magnetic fluxes and combine them into the quantity φ for notational convenience. This purely imaginary term depends only on the number of nodes p and the rank of the gauge group at each node N, and in particular vanishes for even p. §.§ Topologically twisted index The TTI is defined as the partition function of a 3d =2 theory on S^1×Σ_ with a partial topological twist on the Riemann surface Σ_ that preserves two real supercharges.[In this work we take Σ_ to be a compact Riemann surface of genus 𝔤 without punctures.] The TTI can be written in terms of a matrix model via supersymmetric localization <cit.>. 
The main focus in this paper will be on the 𝔤=0 case, i.e. Σ_ = S^2. The matrix model for the class of =2 SCFTs described in Section <ref> reads <cit.>[We absorb the extra phase factors in the TTI localization formula due to the periodic boundary condition for fermions along S^1 <cit.> by redefining fugacities associated with the U(1) topological symmetries appropriately, see <cit.>.] Z(Δ,) =1(N!)^p∑__1,…,_p∈ℤ^N∫_𝒞∏_r=1^p[∏_i=1^Ndx_r,i2πix_r,i(x_r,i)^k_r_r,i+_r(y_T_r)^_r,i∏_i≠ j^N(1-x_r,ix_r,j)] 10em ×∏_Ψ∏_ρ_Ψ(e^ρ_Ψ(u)/2y_Ψ^1/21-e^ρ_Ψ(u)y_Ψ)^ρ_Ψ()-_Ψ+1 , where the contour integrals for gauge zero modes x_r,i=e^ u_r,i capture the Jeffrey-Kirwan (JK) residues, see <cit.> for a detailed discussion of the contours. As in the matrix model for the SCI (<ref>), _r,i stand for quantized gauge magnetic fluxes and ρ_Ψ runs over the weights of the representation of the 𝒩=2 chiral multiplet Ψ under the gauge group. The fugacities y_Ψ=e^πΔ_Ψ, y_T_r=e^πΔ_m,r and the background magnetic fluxes _Ψ, _r are associated with flavor symmetries of a given theory. We omit N and {k_r} in the argument of the TTI for notational convenience throughout this paper. The matrix model for the TTI (<ref>) can be evaluated directly using the Bethe Ansatz (BA) formalism <cit.>, which we now briefly review. The first step is to implement the sum over gauge magnetic fluxes in (<ref>) as Z(Δ,) =1(N!)^p∫_𝒞∏_r=1^p[∏_i=1^Ndx_r,i2πix_r,ix_r,i^_re^ MB_r,ie^ B_r,i-1∏_i≠ j^N(1-x_r,ix_r,j)] ×∏_Ψ∏_ρ_Ψ(e^ρ_Ψ(u)/2y_Ψ^1/21-e^ρ_Ψ(u)y_Ψ)^1-_Ψ , where we have introduced a large positive integer M[This cutoff M is introduced to capture the JK residues appropriately, see <cit.> for example. For a vanishing CS level one can take k_r→0^± for the choice of sign[k_r], which has been done implicitly for various examples in <cit.>. Note that the BAE (<ref>) and the corresponding BA formula (<ref>) are ultimately insensitive to the direction of the limit. In Section <ref> we take k_1→0^+ for the V^5,2 theory and k_1=-k_2=k→0^+ for the Q^1,1,1 theory.] and the BA operators B_r,i via e^ sign[k_r]B_r,i=y_T_rx_r,i^k_r∏_Ψ∏_ρ_Ψ(e^ρ_Ψ(u)/2y_Ψ^1/21-e^ρ_Ψ(u)y_Ψ)^(ρ_Ψ)_r,i . We refer to Appendix <ref> for the definition of the symbol (ρ_Ψ)_r,i for a given weight ρ_Ψ. Next, replacing the integrals in (<ref>) with the sum over solutions to the Bethe Ansatz Equations (BAE), e^ sign[k_r]B_r,i=1 , we obtain the BA formula Z(Δ,)=∑_{u_r,i}∈BAE1𝔹∏_r=1^p[∏_i=1^Nx_r,i^_r∏_i≠ j^N(1-x_r,ix_r,j)]∏_Ψ∏_ρ_Ψ(e^ρ_Ψ(u)/2y_Ψ^1/21-e^ρ_Ψ(u)y_Ψ)^1-_Ψ , where the Jacobian matrix is defined as 𝔹≡∂(e^ B_1,1,⋯,e^ B_1,N,⋯,e^ B_p,1,⋯ e^ B_p,N)∂(log x_1,1,⋯,log x_1,N,⋯,log x_p,1,⋯,log x_p,N) . Note that the factor of (N!)^-p in (<ref>) is canceled by the degeneracy of a particular BAE solution from permutations in the BA formula (<ref>). Taking the log, the BA formula (<ref>) can be written as log Z(Δ,) =log[1𝔹∏_r=1^p[∏_i=1^Nx_r,i^_r∏_i≠ j^N(1-x_r,ix_r,j)]∏_Ψ∏_ρ_Ψ(e^ρ_Ψ(u)/2y_Ψ^1/21-e^ρ_Ψ(u)y_Ψ)^1-_Ψ]_u=u^⋆ +(contribution from other BAE solutions) . Note that in this expression we have emphasized the contribution to the TTI from a particular BAE solution {u_r,i^⋆} of the equations in (<ref>). An important fact about the BA formulation of the TTI is that the BAE can be derived from extremizing a single function, as follows. We first rewrite the BAE (<ref>) more explicitly by taking the logarithm of (<ref>) as 2π sign[k_r]n_r,i=πΔ_m,r+k_ru_r,i-∑_Ψ∑_ρ_Ψ(ρ_Ψ)_r,i[Li_1(e^ρ_Ψ(u)y_Ψ)+ρ_Ψ(u)2+πΔ_Ψ2] , for an arbitrary set of integers n_r,i∈ℤ that comes from the ambiguity e^2πℤ=1. 
The Bethe potential is then introduced as [u;Δ,] =∑_r=1^p∑_i=1^N[-k_r2u_r,i^2+π(2 sign[k_r]n_r,i-Δ_m,r)u_r,i] +∑_Ψ∑_ρ_Ψ[Li_2(e^ρ_Ψ(u)y_Ψ)-14ρ_Ψ(u)^2-πΔ_Ψ2ρ_Ψ(u)] , which yields the BAE (<ref>) upon extremizing with respect to the gauge holonomy u_r,i. Note that in (<ref>) we have implicitly fixed the integration constant. The Bethe potential (<ref>) can also be understood as the effective twisted superpotential that governs the low-energy dynamics on the Coulomb branch of the 2d A-twist =(2,2) theory on the Riemann surface <cit.> that we have chosen here to be Σ_𝔤=S^2. §.§ Relation between indices In this subsection, we derive the Bethe formulation of the SCI (<ref>) in a more precise form based on the Cardy-like expansion of the SCI in Section <ref> and the BA formulation of the TTI in Section <ref>. The key observation is that the leading order effective potential for the SCI in the Cardy-like limit (<ref>) is precisely the same as (minus) the Bethe potential for the TTI (<ref>), ^(0)[U;Δ,ℓ] ↔ -[u;Δ,] , provided we identify various parameters as SCI parameters2em 4em  TTI parameters (U_r,i , Δ_T_r , Δ_Ψ , ℓ_r,i) ↔ (u_r,i , Δ_m,r , Δ_Ψ , sign[k_r]n_r,i) . As a consequence of this identification, the saddle point equation for the SCI (<ref>) and the BAE for the TTI (<ref>) become equivalent, ∂^(0)[U;Δ,ℓ]∂ U_r,i=0 ↔ ∂[u;Δ,]∂ u_r,i=0 , and therefore the saddle point solution becomes equivalent to the BAE solution: {U_r,i^⋆} ↔ {u_r,i^⋆} . The Hessian ℍ for the SCI saddle point approximation (<ref>) and the Jacobian 𝔹 for the TTI BA formula (<ref>) are also related under the identifications (<ref>) and (<ref>) as s_I(𝕁)_I,J|_U=U^⋆ = s_I∂^2^(0)[U;Δ,ℓ]∂ Y_I∂ Y_J|_U=U^⋆  ↔  -(𝔹)_I,J|_u=u^⋆=-s_I∂^2[u;Δ,]∂ y_I∂ y_J|_u=u^⋆ , in terms of y_I=u_r,i and some phases s_I=-sign[k_r], which can be derived from B_r,i=-sign[k_r]∂[u;Δ,]∂ u_r,i . From (<ref>), we can identify the absolute values of the determinants of those matrices as |𝔹 |_u=u^⋆ ↔ |𝕁 |_U=U^⋆ . Finally, the parameters (_Ψ,_r) used in both indices are identified in a straightforward way. Rewriting the Cardy-like expansion of the SCI (<ref>) in terms of the Bethe potential (<ref>) and the BA formula for the TTI (<ref>) based on the above identifications, we find [box=]equation log(ω,Δ,)=-1πω[u^⋆;Δ,]+logZ(Δ,)+φ+(ω) , which completes the derivation of the relation (<ref>) between the two indices. We thus conclude that the leading and first subleading orders in the Cardy-like expansion of the SCI are governed by the Bethe potential and the TTI of the same =2 SCFT, respectively. Before exhibiting this relation in concrete examples of 3d 𝒩=2 SCFTs, we collect a number of comments below. * The relation (<ref>) is derived under the Cardy-like limit of the SCI but does not involve any other limit such as the large N limit for holographic SCFTs. Hence the identification of the first two leading terms of the SCI in the Cardy-like expansion with the Bethe potential and the TTI is valid non-perturbatively in the large N expansion of holographic SCFTs and at finite N <cit.>. We will discuss this further in Section <ref>. * The Bethe potential evaluated at the BAE solution {u^⋆} in the r.h.s. of (<ref>) does not depend on the set of arbitrary integers {n_r,i} introduced in the BAE (<ref>), which is indeed expected since they are unphysical and simply come from the 2πℤ ambiguity in the exponent of the BA operators (<ref>). * In the presentation above, we have assumed that a particular saddle point (resp. 
BAE solution) yields a dominant contribution to the SCI (resp. TTI) in deriving the index relation (<ref>). However, since the relation between the leading order effective potential and the Bethe potential given in (<ref>) is valid before specifying a saddle point or a BAE solution, one can easily generalize the index relation (<ref>) to the contribution from each saddle point and BAE solution. To be more specific, if we allow for multiple contributions from different saddle points {u^⋆_(σ)} and BAE solutions {U^⋆_(σ)} labeled by σ as (ω,Δ,) =∑_σ_(σ)(ω,Δ,) , Z(Δ,) =∑_σZ_(σ)(Δ,) , based on the expressions (<ref>) and (<ref>) respectively, we obtain log_(σ)(ω,Δ,)=-1πω[u^⋆_(σ);Δ,]+log Z_(σ)(Δ,)+φ+(ω) . This generalizes the index relation (<ref>) to multiple saddles. * The parameters (_Ψ,_r) represent the background magnetic fluxes for flavor symmetries in the TTI. The same parameters in the SCI do not have the same physical meaning: they are simply introduced by redefining flavor fugacities as (<ref>). In particular, we did not turn on background magnetic fluxes for flavor symmetries in the matrix model for the SCI (<ref>). The generalized SCI involving background magnetic fluxes can also be written in terms of a matrix model <cit.> and its Cardy-like limit was also studied in <cit.>. We plan on studying this generalization and its effect on the index relation (<ref>) in future work. * The identification between parameters (<ref>) is not the same as the one proposed for the ABJM/ADHM theories in <cit.>. This is because we have improved a couple of technical steps in the factorization of the SCI (see Section <ref>) and thereby the holomorphic effective potential in the Cardy-like limit (<ref>) is slightly different from the one used in <cit.>. The final Cardy-like expansion of the SCI for the ABJM/ADHM theories obtained in the generic =2 conventions of the present paper are ultimately equivalent to the results of <cit.>, and we show this explicitly in Appendix <ref>. * The Bethe potential and the TTI of a single chiral multiplet Ψ with R-charge r and a U(1) flavor charge f are indeed given by (<ref>). To see this explicitly, one may read off the Bethe potential and the TTI for a single chiral multiplet from (<ref>) and (<ref>) as _Ψ=Li_2(y_Ψ) , and Z_Ψ=(y_Ψ^1/21-y_Ψ)^1-_Ψ , respectively. This is precisely the same as (<ref>) under the identification of SCI/TTI parameters (<ref>) and the reparametrization (<ref>), modulo minor notational differences; in (<ref>), we have simply used a subscript “i” to present fugacities/charges associated with a chiral multiplet Ψ_i. Note that the reparametrization for a single chiral multiplet (<ref>) simply corresponds to a special case of the generic one (<ref>). * The relation between the SCI and the TTI (<ref>) is clearly different from the relation between the A-twist partition function and the TTI recently observed in <cit.>. The latter is motivated by describing the 3d =2 theory on a Seifert manifold in terms of the A-twist of the 2d =(2,2) theory, associated with the parent 3d 𝒩=2 theory after a circle reduction, on the Coulomb branch based on the work of <cit.>. The supersymmetric background for the SCI, however, does not admit any Seifert structure <cit.> and therefore our index relation (<ref>) cannot be derived relying solely on the framework of <cit.>. 
In fact, our observation (<ref>) suggests that there are interesting relations between 3d supersymmetric partition functions that go beyond the cases where the backgrounds allow for a Seifert description. § 3D HOLOGRAPHIC SCFTS FROM M2-BRANES We now employ the relation (<ref>) to study the SCI of various 3d =2 holographic SCFTs arising on the worldvolume of N coincident M2-branes probing orbifold singularities. To be specific, we present the all-order 1/N perturbative expansion for the first two leading terms of the SCI in the Cardy-like limit. This is achieved by applying recent numerical techniques to study the corresponding TTI <cit.> and using the resulting expressions in conjunction with the relation (<ref>). We consider 3d =2 SCFTs holographically dual to M-theory on AdS_4× Y_7 for three Sasaki-Einstein orbifolds Y_7∈{N^0,1,0/ℤ_k,V^5,2/ℤ_N_f,Q^1,1,1/ℤ_N_f}. This analysis mirrors a similar approach previously used for ABJM/ADHM theories <cit.>. §.§ N^0,1,0 theory We start with the 3d =2 SCFT dual to M-theory on AdS_4× N^0,1,0/ℤ_k, which we simply refer to as the N^0,1,0 theory. We refer the reader to <cit.> for more details about various aspects of this SCFT. The CS-matter theory in the UV is described by the quiver diagram shown in Fig. <ref>. In intermediate stages of the calculation we keep the number of fundamental and anti-fundamental pairs of chiral multiplets r=r_1+r_2 independent from the CS level k as in <cit.>, although we stress that one needs to impose r=k to correctly describe the N^0,1,0 theory <cit.>. To study the N^0,1,0 SCI using the index relation (<ref>), we leverage the results obtained for the N^0,1,0 TTI presented in <cit.>. To facilitate this approach, we first align the generic =2 TTI conventions outlined in Section <ref> with those used in <cit.>. The Bethe potential (<ref>) can be written explicitly for the N^0,1,0 theory as _N^0,1,0[u;Δ,] =∑_i=1^N[-k2(u_1,i^2-u_2,i^2)+π(2n_1,i-Δ_m,1)u_1,i+π(-2n_2,i-Δ_m,2)u_2,i] +∑_a=1^2∑_i,j=1^N[Li_2(e^(u_1,i-u_2,j+πΔ_a))-14(u_1,i-u_2,j)^2-πΔ_a2(u_1,i-u_2,j)] +∑_a=3^4∑_i,j=1^N[Li_2(e^(u_2,j-u_1,i+πΔ_a))-14(u_2,j-u_1,i)^2-πΔ_a2(u_2,j-u_1,i)] +r_1∑_i=1^N[Li_2(e^(u_1,i+πΔ_q_1))-14u_1,i^2-πΔ_q_12u_1,i+Li_2(e^(-u_1,i+πΔ__1))-14u_1,i^2+πΔ__12u_1,i] +r_2∑_i=1^N[Li_2(e^(u_2,i+πΔ_q_2))-14u_2,i^2-πΔ_q_22u_2,i+Li_2(e^(-u_2,i+πΔ__2))-14u_2,i^2+πΔ__22u_2,i] . The BA formula for the TTI (<ref>) can be written explicitly for the N^0,1,0 theory and reads Z_N^0,1,0(Δ,) =∑_{u_r,i}∈BAE1𝔹∏_r=1^2[∏_i=1^Nx_r,i^_r∏_i≠ j^N(1-x_r,ix_r,j)] ×∏_i,j=1^N[∏_a=1^2(e^(u_1,i-u_2,j+πΔ_a)/21-e^(u_1,i-u_2,j+πΔ_a))^1-_a×∏_a=3^4(e^(u_2,j-u_1,i+πΔ_a)/21-e^(u_2,j-u_1,i+πΔ_a))^1-_a] ×∏_i=1^N[(e^(u_1,i+πΔ_q_1)/21-e^(u_1,i+πΔ_q_1))^1-_q_1(e^(-u_1,i+πΔ__1)/21-e^(-u_1,i+πΔ__1))^1-__1]^r_1 ×∏_i=1^N[(e^(u_2,i+πΔ_q_2)/21-e^(u_2,i+πΔ_q_2))^1-_q_2(e^(-u_2,i+πΔ__2)/21-e^(-u_2,i+πΔ__2))^1-__2]^r_2 , where the flavor chemical potentials and magnetic fluxes are constrained as 2 1 =Δ_1+Δ_4=Δ_2+Δ_3 , 1 =Δ_q_1+Δ__1=Δ_q_2+Δ__2 , 1 =_1+_4=_2+_3 , 1 =_q_1+__1=_q_2+__2 . As shown in Appendix <ref>, the expressions (<ref>) and (<ref>) given in the generic =2 TTI conventions indeed match the Bethe potential of the N^0,1,0 theory presented in <cit.>. Given this, we do not need to solve the BAE obtained by differentiating the Bethe potential (<ref>) from scratch and can directly employ the numerical BAE solutions constructed in <cit.> instead.[These solve the N^0,1,0 BAE derived from (<ref>) for a choice of integers (n_1,i,n_2,i)=(1-i+N,i).] 
Note that the results in <cit.> were obtained for a symmetric quiver with r_1=r_2=r/2 and focusing on the so-called superconformal configuration Δ_a=Δ_q_1,2=Δ__1,2=12 , _a=_q_1,2=__1,2=12 , Therefore, our study of the index relation will be limited to these configurations which we collectively denote by (Δ_sc, _sc) in what follows. The exact N^0,1,0 TTI deduced by substituting these numerical BAE solutions into the BA formula (<ref>) then reads <cit.> log Z_N^0,1,0(Δ_sc,_sc) =-2π(k+r)3√(2k+r)((N̂_k,r)^32-(r4+3k+2r(k+r)^2)(N̂_k,r)^12) -12logN̂_k,r+f̂_0(k,r)+f̂_np(N,k,r) , where the shifted N parameter is given by N̂_k,r=N+7r-2k48+23(k+r) . The non-perturbative correction in (<ref>) is exponentially suppressed at large N, i.e. f̂_np(N,k,r)∼(e^-√(N)). Numerical values of the N-independent contribution f̂_0(k,r) are presented in <cit.> for various configurations of (k,r) and below we provide some of those values. (k,r) f̂_0(k,r) (k,r) f̂_0(k,r) (1,1) -2.2479735914758641588 (2,3) -2.1883848791741933989 (1,2) -2.2917317046811495268 (3,2) -1.6284176444001315906 (4,2) -1.6979156145862367914 (2,6) -4.6689712958005925271 Employing the index relation (<ref>), the N^0,1,0 TTI result (<ref>) determines the ω^0 order coefficient of the SCI in the Cardy-like limit. To determine the ω^-1 leading order coefficient of the SCI, we numerically evaluate the Bethe potential (<ref>) for various BAE solutions {u_⋆} found in <cit.> and based on the very precise numerical data propose a closed form analytic expression. The result is given by _N^0,1,0[u_⋆;Δ_sc,] = 2π[π(k+r)6√(2k+r)N̂_k,r^32+ĝ_0(k,r)+ĝ_np(N,k,r)] , for the Δ-configuration (<ref>). Similarly to the TTI above, the correction ĝ_np is exponentially suppressed in the large N limit and we have only been able to determine the N-independent term ĝ_0 numerically. Below we present numerical values of ĝ_0 in selected examples. (k,r) ĝ_0(k,r) (k,r) ĝ_0(k,r) (1,1) -0.1601419698035751424 (2,3) -0.3315699873474639015 (1,2) -0.2224298877692041968 (3,2) -0.2159504219974937479 (4,2) -0.2261565608197277915 (2,6) -1.0105562008492962901 We refer to Appendix <ref> for the numerical data that supports the analytic expression (<ref>). Substituting the TTI (<ref>) and the Bethe potential (<ref>) into the Cardy-like expansion (<ref>), we obtain the SCI for the N^0,1,0 theory in the Cardy-like limit log_N^0,1,0(ω,Δ_sc,_sc) =-2ω[π(k+r)6√(2k+r)N̂_k,r^32+ĝ_0(k,r)+ĝ_np(N,k,r)] +[-2π(k+r)3√(2k+r)((N̂_k,r)^32-(r4+3k+2r(k+r)^2)(N̂_k,r)^12) 3em-12logN̂_k,r+f̂_0(k,r)+f̂_np(N,k,r)]+𝒪(ω) . Note that the large N non-perturbative corrections of order (e^-√(N)) in the first two leading terms above are determined unambiguously by the corresponding corrections to the Bethe potential and the TTI, respectively. §.§ V^5,2 theory We now proceed to study the SCI of the 3d 𝒩=2 holographic SCFT dual to M-theory on AdS_4× V^5,2/ℤ_N_f, which we call the V^5,2 theory. We refer the reader to <cit.> for details about the V^5,2 theory. The corresponding CS-matter theory in the UV is described by the quiver diagram in Fig. <ref>. As with the N^0,1,0 theory, we first match the TTI conventions used here to the ones in <cit.>. For the V^5,2 theory the Bethe potential (<ref>) reads _V^5,2[u;Δ,] =∑_i=1^Nπ(2n_1,i-Δ_m,1)u_1,i +∑_I=1^3∑_i,j=1^N[Li_2(e^(u_1,i-u_1,j+πΔ_I))-14(u_1,i-u_1,j)^2-πΔ_I2(u_1,i-u_1,j)] +N_f∑_i=1^N[Li_2(e^(u_1,i+πΔ_q))-14u_1,i^2-πΔ_q2u_1,i+Li_2(e^(-u_1,i+πΔ_))-14u_1,i^2+πΔ_2u_1,i] . 
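The closed forms above can be combined through the index relation, log I ≃ -𝒱[u^⋆]/(πω) + log Z, up to an overall phase and 𝒪(ω) corrections. A short numerical sketch of this assembly for (k,r)=(1,1) is given below. The constants ĝ_0 and f̂_0 are the tabulated values, while the expressions entered for N̂_{k,r} and for the coefficient of the N̂^{1/2} term follow one reading of the flattened formulas quoted above and should be treated as illustrative placeholders.

```python
# Rough numerical assembly (not from the paper's code) of the first two terms of
# the Cardy-like expansion of the N^{0,1,0} SCI at the superconformal point,
# using  log I ~ -V[u*]/(pi*omega) + log Z  (up to an overall phase) for (k,r)=(1,1).
# CAVEAT: nhat and the N^{1/2} coefficient below are one reading of the
# flattened formulas; g0hat and f0hat are the tabulated constants.
import numpy as np

def nhat(N, k, r):
    # assumed reading: Nhat = N + (7r - 2k)/48 + 2/(3(k + r))
    return N + (7*r - 2*k)/48 + 2/(3*(k + r))

def bethe_potential(N, k, r, g0hat):
    # V[u*]/(2*pi) = pi*(k+r)/(6*sqrt(2k+r)) * Nhat^{3/2} + g0hat  (+ exp. small)
    return 2*np.pi*(np.pi*(k + r)/(6*np.sqrt(2*k + r))*nhat(N, k, r)**1.5 + g0hat)

def log_tti(N, k, r, f0hat):
    Nh = nhat(N, k, r)
    c = r/4 + (3*k + 2*r)/(k + r)**2      # assumed reading of the N^{1/2} term
    return (-2*np.pi*(k + r)/(3*np.sqrt(2*k + r))*(Nh**1.5 - c*Nh**0.5)
            - 0.5*np.log(Nh) + f0hat)

def log_sci(omega, N, k, r, g0hat, f0hat):
    # index relation: log I = -V/(pi*omega) + log Z + O(omega), phase omitted
    return -bethe_potential(N, k, r, g0hat)/(np.pi*omega) + log_tti(N, k, r, f0hat)

g0hat, f0hat = -0.16014196980357514, -2.2479735914758642   # tabulated (k,r)=(1,1)
print(log_sci(omega=0.1, N=100, k=1, r=1, g0hat=g0hat, f0hat=f0hat))
```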
The TTI BA formula (<ref>) for the V^5,2 theory reads Z_V^5,2(Δ,) =∑_{u_1,i}∈BAE1𝔹∏_i=1^Nx_1,i^_1∏_i≠ j^N(1-x_1,ix_1,j) ×∏_I=1^3∏_i,j=1^N(e^(u_1,i-u_1,j+πΔ_I)/21-e^(u_1,i-u_1,j+πΔ_I))^1-_I ×∏_i=1^N(e^(u_1,i+πΔ_q)/21-e^(u_1,i+πΔ_q))^N(1-_q)(e^(-u_1,i+πΔ_)/21-e^(-u_1,i+πΔ_))^N(1-_) , where the flavor chemical potentials and magnetic fluxes are constrained as 3∑_I=1^3Δ_I =Δ_1+Δ_2+Δ_q+Δ_=2 , and Δ_3 =23 , ∑_I=1^3_I =_1+_2+_q+_=2 , and _3 =23 . The superconformal configuration is given by <cit.> Δ_I=23 , Δ_m=0 , _I=23 , =0 . With this at hand, it is straightforward to check that the expressions (<ref>) and (<ref>) match the Bethe potential and the TTI of the V^5,2 theory presented in <cit.> (see Appendix <ref> below for more details). We can therefore employ the numerical BAE solutions constructed in <cit.>[These solve the V^5,2 BAE derived from (<ref>) for the choice n_1,i =⌊N+12⌋-i.] for various configurations satisfying the constraints (<ref>). The closed form expression for the V^5,2 TTI obtained by substituting those numerical solutions in (<ref>) reads <cit.> log Z^V^5,2(Δ,) =-π√(N_fΔ̃_1Δ̃_2Δ̃_3Δ̃_4)3∑_a=1^4_aΔ̃_a(N̂_N_f,)^32 -π√(N_fΔ̃_1Δ̃_2Δ̃_3Δ̃_4)3(∑_I=1^2(𝔞_IN_f+𝔟_IN_f)_I+_3-_43_3^2_4^2N_f^2)(N̂_N_f,)^12 -12logN̂_N_f,+f̂_0(N_f,Δ̃,)+f̂_np(N,N_f,,) , where the shifted N parameter is given by N̂_N_f,=N-2-Δ_q-Δ_Δ_3N_f24+N_f12(1_1+1_2)+112N_f(1_3+1_4) , and we have also defined Δ̃_a =(Δ_1,Δ_2,2-Δ_q-Δ_2-Δ_mN_f,2-Δ_q-Δ_2+Δ_mN_f) , _a =(_1,_2,2-_q-_2+𝔱N_f,2-_q-_2-𝔱N_f) , 𝔞_I(Δ̃) =-1Δ̃_I2-(23-Δ̃_I)4Δ̃_1Δ̃_2 , 𝔟_I(Δ̃) =-23Δ̃_1Δ̃_2Δ̃_3Δ̃_4-34_3_4-(_3-_4)^28_3^2_4^2 . The non-perturbative correction in (<ref>) is exponentially suppressed at large N, f̂_np(N,N_f,,)∼(e^-√(N)), while numerical estimates for the N-independent term f̂_0 can be found in <cit.>. Some of those numerical values for the superconformal flavor magnetic flux configuration, _sc, are presented below.[Some of the numerical values of f̂_0 in this table are new and not given explicitly in <cit.> since only selected numerical data were presented there.] (N_f,Δ_1,Δ_q,Δ_m) f̂_0(N_f,,_sc) (N_f,Δ_1,Δ_q,Δ_m) f̂_0(N_f,,_sc) (1,23,13,0) -2.7620858097124988759 (3,59,13,N_f9) -4.8860565481247352522 (3,23,13,0) -4.7624014875151824187 (2,712,13,N_f15) -3.3109278391444872740 (2,12,13,0) -3.4523973380968433835 (1,23-12π,16,N_f(23-2π)) -2.8401820199427569809 Now we proceed to the SCI of the V^5,2 theory. Similar to the N^0,1,0 theory case, the only missing component for determining the first two leading terms in the Cardy-like expansion of the V^5,2 SCI is a closed-form expression for the Bethe potential. Using the numerical BAE solutions {u_⋆} constructed in <cit.>, we find that the following analytic expression is in excellent agreement with the numerical data _V^5,2[u_⋆;Δ,]=2π[π√(N_f_1_2_3_4)3N̂_N_f,Δ^32+ĝ_0(N_f,Δ)+ĝ_np(N,N_f,Δ)] . The non-perturbative correction is exponentially suppressed at large N, i.e. ĝ_np(N,N_f,Δ)∼(e^-√(N)). Below we present numerical values of ĝ_0 in selected examples. (N_f,Δ_1,Δ_q,Δ_m) ĝ_0(N_f,Δ) (N_f,Δ_1,Δ_q,Δ_m) ĝ_0(N_f,Δ) (1,23,13,0) -0.13945567062297404931 (3,59,13,N_f9) -0.30398703304578607507 (3,23,13,0) -0.29657318363534945350 (2,712,13,N_f15) -0.19117259832590889123 (2,12,13,0) -0.20242609578739029964 (1,23-12π,16,N_f(23-2π)) -0.14096914088723067753 See Appendix <ref> for numerical data that supports the analytic expression (<ref>). 
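To make the parametrization concrete, the following sketch evaluates the tilde-variables and the resulting leading N̂^{3/2} coefficient of -log Z at the superconformal point; the unpacking of Δ̃_a and of the flux analogue B̃_a is one reading of the flattened definitions above and is included only as an illustration.

```python
# Sketch (one reading of the flattened definitions above, to be checked against
# the published expressions): tilde-variables of the V^{5,2} theory and the
# leading N^{3/2} coefficient of -log Z.
import numpy as np

def tilde_deltas(Nf, D1, Dq, Dbq, Dm):
    """Assumed reading: D2 = 4/3 - D1 and Dt_{3,4} = (2 - Dq - Dbq)/2 -/+ Dm/Nf."""
    D2 = 4/3 - D1
    return np.array([D1, D2,
                     (2 - Dq - Dbq)/2 - Dm/Nf,
                     (2 - Dq - Dbq)/2 + Dm/Nf])

def leading_tti_coefficient(Nf, Dt, Bt):
    # pi*sqrt(Nf*Dt1*Dt2*Dt3*Dt4)/3 * sum_a Bt_a/Dt_a
    return np.pi*np.sqrt(Nf*np.prod(Dt))/3 * np.sum(Bt/Dt)

Nf = 1
Dt = tilde_deltas(Nf, D1=2/3, Dq=1/3, Dbq=1/3, Dm=0.0)   # superconformal point
Bt = Dt.copy()                                            # fluxes equal chemical potentials here
print(leading_tti_coefficient(Nf, Dt, Bt), 16*np.pi*np.sqrt(Nf)/27)
```

For the superconformal configuration the coefficient evaluates to 16π√(N_f)/27, the familiar N^{3/2} coefficient of the large-N free energy of the V^{5,2} theory, which supports this reading of the flattened expressions.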
Substituting the TTI (<ref>) and the Bethe potential (<ref>) into the Cardy-like expansion (<ref>), we obtain the SCI for the V^5,2 theory in the Cardy-like limit log_V^5,2(ω,Δ,) =-2ω[π√(N_f_1_2_3_4)3N̂_N_f,Δ^32+ĝ_0(N_f,Δ)+ĝ_np(N,N_f,Δ)] +[-π√(N_fΔ̃_1Δ̃_2Δ̃_3Δ̃_4)3∑_a=1^4_aΔ̃_a(N̂_N_f,)^32 3em-π√(N_fΔ̃_1Δ̃_2Δ̃_3Δ̃_4)3(∑_I=1^2(𝔞_IN_f+𝔟_IN_f)_I+_3-_43_3^2_4^2N_f^2)(N̂_N_f,)^12 3em-12logN̂_N_f,+f̂_0(N_f,Δ̃,)+f̂_np(N,N_f,,)]+φ+𝒪(ω) . §.§ Q^1,1,1 theory As a third example, we consider the SCI of the 3d 𝒩=2 holographic SCFT dual to M-theory on AdS_4× Q^1,1,1/ℤ_N_f. We refer the readers to <cit.> for details about this Q^1,1,1 theory. The corresponding CS-matter theory in the UV is described by the quiver diagram in Fig. <ref>. As in the previous examples we first write down the Bethe potential for the Q^1,1,1 theory using the general formula (<ref>) _Q^1,1,1[u;Δ,] =∑_i=1^N[π(2n_1,i-Δ_m,1)u_1,i+π(-2n_2,i-Δ_m,2)u_2,i] +∑_a=1^2∑_i,j=1^N[Li_2(e^(u_1,i-u_2,j+πΔ_a))-14(u_1,i-u_2,j)^2-πΔ_a2(u_1,i-u_2,j)] +∑_a=3^4∑_i,j=1^N[Li_2(e^(u_2,j-u_1,i+πΔ_a))-14(u_2,j-u_1,i)^2-πΔ_a2(u_2,j-u_1,i)] +N_f∑_n=1^2∑_i=1^N[Li_2(e^(-u_1,i+πΔ__n))-14u_1,i^2+πΔ__n2u_1,i] +N_f∑_n=1^2∑_i=1^N[Li_2(e^(u_2,i+πΔ_q_n))-14u_2,i^2-πΔ_q_n2u_2,i] . The general BA formula for the TTI (<ref>) can be written explicitly for the Q^1,1,1 theory as Z_Q^1,1,1(Δ,) =∑_{u_r,i}∈BAE1𝔹∏_r=1^2[∏_i=1^Nx_r,i^_r∏_i≠ j^N(1-x_r,ix_r,j)] ×∏_i,j=1^N[∏_a=1^2(e^(u_1,i-u_2,j+πΔ_a)/21-e^(u_1,i-u_2,j+πΔ_a))^1-_a×∏_a=3^4(e^(u_2,j-u_1,i+πΔ_a)/21-e^(u_2,j-u_1,i+πΔ_a))^1-_a] ×∏_n=1^2∏_i=1^N(e^(-u_1,i+πΔ__n)/21-e^(-u_1,i+πΔ__n))^N_f(1-__1)(e^(u_2,i+πΔ_q_n)/21-e^(u_2,i+πΔ_q_n))^N_f(1-_q_n) . Here the flavor chemical potentials and magnetic fluxes are constrained as 3 2 =∑_a=1^4Δ_a =Δ_1+Δ_q_1+Δ__1 =Δ_2+Δ_q_2+Δ__2 , 2 =∑_a=1^4_a =_1+_q_1+__1 =_2+_q_2+__2 , and the superconformal configuration reads 3Δ_1 =Δ_2 , Δ_3 =Δ_4 , Δ_q_1,2 =Δ__1,2 , _1 =_2 , _3 =_4 , _q_1,2 =__1,2 . In Appendix <ref> we show that the expressions (<ref>) and (<ref>) match the Bethe potential and the TTI of the Q^1,1,1 theory presented in <cit.>. Note that the numerical BAE solutions constructed in this reference solve the BAE for a choice of integers (n_1,i,n_2,i)=(1-i-⌊ N_f/2⌋+N,i-⌊ N_f/2⌋), and they were obtained for the special configuration (<ref>). At this superconformal point, the BA formula (<ref>) yields <cit.> log Z_Q^1,1,1(Δ_sc,_sc) =-4π√(N_f)3√(3)((N̂_N_f)^32-(N_f4+34N_f)(N̂_N_f)^12) -12logN̂_N_f+f̂_0(N_f)+f̂_np(N,N_f) , where the shifted N parameter is given by N̂_N_f=N+N_f6 . As in the previous example we denote the exponentially suppressed correction as f̂_np(N,N_f)∼(e^-√(N)), and numerical values of f̂_0(N_f) can be found in <cit.>. For the SCI of the Q^1,1,1 theory it suffices to determine the closed form expression of the Bethe potential by using various BAE solutions {u_⋆} provided in <cit.>. The analytic expression we find from this numerical data is _Q^1,1,1[u_⋆;Δ_sc,]=2π[π√(N_f)3√(3)(N̂_N_f)^32+ĝ_0(N_f)+ĝ_np(N,N_f)] , at the special configuration (<ref>). The non-perturbative correction is exponentially suppressed at large N, ĝ_np(N,N_f)∼(e^-√(N)). Below we present numerical values of ĝ_0 together with those of f̂_0 given in <cit.>. N_f ĝ_0(N_f) f̂_0(N_f) 1 -0.12179382823357287453 -2.1415723730798296354 2 -0.060896914126385874431 -2.0385864384989237526 3 0.018581373235204659187 -2.2368141361938090934 4 0.12639484451282630333 -2.6005901148883909862 5 0.26400260477995552485 -3.1045097958934355205 See Appendix <ref> for numerical data that supports the proposed (<ref>). 
Substituting the TTI (<ref>) and the Bethe potential (<ref>) into the Cardy-like expansion (<ref>), we obtain the final form of the SCI for the Q^1,1,1 theory in the Cardy-like limit log_Q^1,1,1(ω,Δ_sc,_sc)|_(<ref>) =-2ω[π√(N_f)3√(3)(N̂_N_f)^32+ĝ_0(N_f)+ĝ_np(N,N_f)] +[-4π√(N_f)3√(3)((N̂_N_f)^32-(N_f4+34N_f)(N̂_N_f)^12) 3em-12logN̂_N_f+f̂_0(N_f)+f̂_np(N,N_f)]+𝒪(ω) . § HOLOGRAPHY We now proceed with a discussion on the holographic implications of the explicit expressions for the SCI of the 3d =2 SCFTs in Section <ref> and the index relation (<ref>). In most of the discussion below we will focus on the superconformal, or universal, configuration of the fugacities and real masses for the TTI and SCI. As explained in <cit.>, this choice of parameters corresponds to turning on sources and expectation values only for the background fields that couple to the energy-momentum multiplet in the CFT. The holographic dual manifestation of this choice is that the corresponding 4d gravitational backgrounds can be described as solutions of minimal 4d 𝒩=2 gauged supergravity. For this choice of parameters, the field theory results for the TTI and SCI in Sections <ref> and <ref>, along with our previous works <cit.> summarized in Appendix <ref>, can be succinctly expressed as TTI :   -logZ =πα((N - B)^32 + C(N - B)^12) + 12log(N-B) - f̂_0 + 𝒪(e^-√(N)) , SCI : -log =πα2ω(N-B)^32 + 2ωĝ_0 + πα((N - B)^3/2 + C(N - B)^12) + 12log(N-B) - f̂_0 + 𝒪(e^-√(N),ω) . The quantities (α,B,C) that determine both indices are presented for various holographic SCFTs in Table <ref>. As discussed above, the constants ĝ_0 and f̂_0, which are independent of N, are not known analytically as functions of the CS level k or the number of fundamental multiplets N_f and assume distinct values for different holographic SCFTs. We note that the quantities B and C for the V^5,2 and Q^1,1,1 were only partially determined in our previous work <cit.> where the combination B+C was calculated. As we discuss below, the relation between the SCI and TTI we derived above allows to determine B and C separately. §.§ Leading term at large N The SCI of the 3d =2 holographic SCFTs of interest is holographically dual to the Euclidean path integral of M-theory around the 11d background obtained by uplifting the 4d Euclidean supersymmetric Kerr-Newman (KN) AdS black hole solution of =2 minimal gauged supergravity <cit.>. This uplift is guaranteed to exist and is given explicitly by the consistent truncation of 11d supergravity on 7d Sasaki-Einstein (SE) manifolds <cit.>. This holographic duality is discussed extensively in the literature, see for example, <cit.>), and can be succinctly expressed as[The subscript “f” in the product symbol indicates that Y_7 is fibered over the 4d non-compact manifold.] Z_M-theory|_KN EAdS_4×_f Y_7=_SCFT_3 , where the 3d SCFT is specified by the internal manifold Y_7, see Table <ref> for the examples we consider in this work. In the semi-classical limit where the 4d Newton constant, G_N, is much smaller than the square of the EAdS_4 radius, L^2, the Euclidean path integral dual to the SCI can be approximated by the on-shell action of the two-derivative =2 minimal gauged supergravity Euclidean supersymmetric KN AdS black hole solution <cit.> -log Z_M-theory|_KN EAdS_4×_f Y_7 =S^(2∂)_=2 sugra|_KN EAdS_4+o(L^2/G_N) =π L^22G_N(ω+1)^22ω+o(L^2/G_N) . 
Here ω is the angular velocity of the black hole solution which via supersymmetry also determines the electric charge, see <cit.> for more details on the gravitational background and the evaluation of this on-shell action. We can now use the AdS_4/CFT_3 dictionary to map the 4d gravitational parameters to the number N of M2-branes at the tip of the cone over the internal manifold Y_7, see <cit.>. To leading order in the large N limit one finds L^22G_N=√(2π^427vol[Y_7]) N^32+o(N^32) , where vol[Y_7] is the volume of Y_7. Using this we can express the Euclidean path integral (<ref>) at leading order in the large N limit as -log Z_M-theory|_KN EAdS_4×_f Y_7=√(2π^627vol[Y_7]) (ω+1)^22ω N^32+o(N^32) . For all Sasaki-Einstein orbifolds listed in Table <ref> one can show that α=√(2π^427vol[Y_7]), see Appendix E of <cit.> for more details on this calculation. This in turn shows that in the large N limit the SCI in (<ref>) reads -log_SCFT_3=πα2(1ω+2+(ω))N^32+o(N^32) and indeed precisely agrees with the gravitational on-shell action. A notable difference between the field theory and gravitational results is that the supergravity on-shell action is evaluated for any finite value of ω while in the field theory calculation of the SCI we only keep the leading two terms in the Cardy-like limit. It will of course be very interesting to establish the holographic duality for general values of ω, see <cit.> for some recent work along this direction. An important holographic application of the SCI is that it provides a microscopic description of the entropy of the dual 4d supersymmetric KN AdS black hole. To obtain this entropy to leading order in the large N limit one needs to perform a suitable Legendre transform. This was done in various contexts in the literature, see for instance <cit.>, so we will be brief. One first defines the entropy function[In the discussion below we assume that the expression for the SCI at finite ω agrees with the leading order holographic result in (<ref>).] 𝒮(ω,φ,J,Q,λ)=-2παφ^2ωN^32-2π(ω J+φ LQ+λ(2φ-1-ω)) , which is then extremized with respect to the chemical potentials (ω,φ) and the Lagrange multiplier λ to obtain the Bekenstein-Hawking entropy of the KN black hole. The information about the microscopic details of the dual SCFT, or alternatively the particular embedding of the 4d black hole solution in 11d supergravity, is encoded in the parameter α which takes the values given in Table <ref> for the examples studied in this work. Our results for the large N SCI therefore constitute a non-trivial microscopic counting of the entropy of the 4d supersymmetric KN AdS black hole for all these setups arising from the low-energy dynamics of M2-branes. §.§ Subleading terms at large N We now turn our attention to the holographic implication of the indices in (<ref>) beyond the leading N^3/2 order in the large N limit with a focus on the 4-derivative corrections to 4d =2 minimal gauged supergravity recently analyzed in <cit.>. The key statement of <cit.> is that the four-derivative corrections to the Euclidean 4d =2 minimal gauged supergravity action are characterized by two dimensionless constant coefficients λ_1,2. Moreover, any classical solution of the 2-derivative equations of motion automatically solves the 4-derivative ones and one can explicitly calculate its on-shell action. 
The result for the 4-derivative regularized on-shell action I_4∂ for any, not necessarily supersymmetric, solution 𝕊 is given by I_4∂(𝕊)=[1+64π G_NL^2(λ_2-λ_1)]π L^22G_N(𝕊)+32π^2λ_1 χ(𝕊) , where χ is the regularized Euler characteristic of the 4d Euclidean manifold and determines the 2-derivative regularized on-shell action of the solution 𝕊 in a normalization in which the regularized on-shell action of empty Euclidean AdS_4 has ℱ=1, see <cit.> for more details. To compare the 4-derivative regularized on-shell action (<ref>) with a dual SCFT partition function, we should extend the AdS/CFT dictionary (<ref>) to the subleading order in the large N limit as L^22G_N = A N^32 + a N^12 + o(N^12) , 32πλ_i = v_i N^12 + o(N^12) , where (A,a,v_1,v_2) are real constants that do not scale with N. This can then be used to rewrite (<ref>) as I_4∂(𝕊) = πℱ(𝕊)(A N^32 + (a + v_2) N^12) - π(ℱ(𝕊) - χ(𝕊)) v_1 N^12 + o(N^12) . Note that the parameters (A,a,v_1,v_2) encode the details of the specific SCFT of interest while the parameters (,χ) are specified by the 4d supergravity solution dual to the particular partition function at hand. The 4d backgrounds holographically dual to the TTI and the SCI are given by the supersymmetric Euclidean Reissner-Nordström (RN) AdS black hole and the supersymmetric Euclidean KN AdS black hole, respectively. The quantities (,χ) for these backgrounds are presented in <cit.> and read RN : {(𝕊_RN),χ(𝕊_RN)}={1,2} , KN : {(𝕊_KN),χ(𝕊_KN)}={(ω+1)^22ω,2} . Using these gravitational parameters and the holographic relations I_4∂(𝕊_RN) =-log Z+o(N^12) , I_4∂(𝕊_KN) =-log+o(N^12) , we can compare the 4-derivative regularized on-shell action (<ref>) with the dual SCFT indices[When taking the logarithm of the two indices, the imaginary part is defined modulo 2πℤ and therefore can be absorbed in the subleading corrections of order o(N^12) in the large N limit.] (<ref>). This in turn allows us to determine the supergravity coefficients (A,a,v_1,v_2) in terms of the field theory data in Table <ref> to find A=α , a+v_2=α2(C-3B) , v_1=α2C . We note that to determine these relations between supergravity and field theory quantities we only need to use the large N expansion of the TTI (<ref>) together with the SCI relation in (<ref>). Substituting (<ref>) back into the 4-derivative regularized on-shell action (<ref>), we find I_4∂(𝕊) = πα[ℱ(𝕊)N^3/2 +Cχ(𝕊)-3Bℱ(𝕊)2N^1/2] + o(N^12) =-log_∂𝕊+o(N^12) , where _∂𝕊 denotes the general 3d partition function on the conformal boundary of 𝕊. For instance, in this general notation, the TTI and the SCI read _∂𝕊_RN=Z and _∂𝕊_KN=, respectively. The expression in (<ref>) has important implications. It allows for the calculation of the two leading order N^32 and N^12 terms in the large N expansion of any 3d partition function for the 3d =2 SCFTs arising from N M2-branes listed in Table <ref>. All we need to know are the quantities ℱ and χ for the 4d Euclidean background corresponding to the given 3d manifold on which the SCFT is placed on. As a first consistency check we note that the SCI for the theories listed in Table <ref> agrees with (<ref>) after using the second line in (<ref>). This is of course expected and compatible with the field theory relation between the SCI and the TTI in (<ref>). An additional stronger consistency check of the relation (<ref>), and a precision test of holography, is provided by considering the S^3 partition function of the ABJM, ADHM, and N^0,1,0 theories. 
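The content of this rewriting can be checked numerically. The sketch below, with placeholder values of (α,B,C) since Table <ref> is not reproduced here, evaluates I_{4∂} for the Euclidean Reissner-Nordström and Kerr-Newman backgrounds and compares the former with the expanded universal TTI; the two agree at orders N^{3/2} and N^{1/2}, as they must.

```python
# Small numerical illustration (placeholder values of (alpha, B, C)) of the
# rewriting above:
#   I_4d = pi*alpha*[ F*N^{3/2} + (C*chi - 3*B*F)/2 * N^{1/2} ]
# reproduces the N^{3/2} and N^{1/2} terms of the universal index
#   -log Z = pi*alpha*((N - B)^{3/2} + C*(N - B)^{1/2}) + ...
import numpy as np

def I4d(alpha, B, C, F, chi, N):
    return np.pi*alpha*(F*N**1.5 + 0.5*(C*chi - 3*B*F)*np.sqrt(N))

def minus_logZ_univ(alpha, B, C, N):
    # drop the (1/2)log(N-B) - f0 pieces, which start at order N^0
    return np.pi*alpha*((N - B)**1.5 + C*np.sqrt(N - B))

alpha, B, C = 0.47, 0.31, 0.12     # placeholders; the actual values are in Table <ref>
N = 10**4

# Euclidean Reissner-Nordstrom: (F, chi) = (1, 2)  <->  the TTI
print(I4d(alpha, B, C, F=1.0, chi=2, N=N), minus_logZ_univ(alpha, B, C, N))

# Euclidean Kerr-Newman: (F, chi) = ((omega+1)^2/(2*omega), 2)  <->  the SCI
omega = 0.2
print(I4d(alpha, B, C, F=(omega + 1)**2/(2*omega), chi=2, N=N))
```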
For these models, as summarized in <cit.>, the round S^3 partition function in the absence of sources can be written in terms of an Airy function. Using the large N expansion of the Airy function, and the fact that for the EAdS_4 background with S^3 boundary we have (,χ)=(1,1), one indeed finds that (<ref>) is obeyed for these three SCFTs.[For the leading N^32 order, <cit.> studied not only the SCI and TTI but also the S^3 partition function for general holographic 3d =2 SCFTs and observed that (<ref>) is indeed obeyed to that order. This is also consistent with the large N universality emphasized in <cit.>.] For the theories and path integrals for which large N supersymmetric localization results are not yet available the expression in (<ref>) provides a valuable supergravity prediction for the 3d SCFT. For instance, for the V^5,2 and Q^1,1,1 theories there are no supersymmetric localization results available for the N^12 terms of the squashed S^3 free energy. Using (<ref>) and the fact that (,χ)=(1/4(b+1/b)^2,1) for the supersymmetric U(1)×U(1) squashed S^3, see <cit.>, we find the following holographic prediction -log_S^3_b=πα[1/4(b+1/b)^2N^32+(C2-3B/8(b+1/b)^2)N^12]+o(N^12) , where the coefficients (α,B,C) for the V^5,2 and Q^1,1,1 models can be read off from Table <ref>. Confirming the prediction (<ref>) at the N^12 order for the V^5,2 and Q^1,1,1 theories from supersymmetric localization would be of great interest. Indeed, it was recently shown in <cit.> that (<ref>) is compatible with a saddle point analysis of the S^3 partition function of these models in the IIA limit. In addition, (<ref>) agrees well with the Airy conjecture for this partition function discussed in <cit.>. Another important application of the results above is in the holographic calculation of thermal observables in 3d =2 holographic SCFTs beyond the strict large N limit. As recently demonstrated in <cit.> the result in (<ref>) can be used to calculate the thermal free energy on S^1×ℝ^2 as well as some coefficients in the thermal effective action on S^1× S^2 for theories arising from M2-branes to order N^12. The results of <cit.> can be directly applied, in conjunction with (<ref>), in order to compute these observables also for all theories in Table <ref>. Finally, we make a few comments on the log N terms in the large N expansion of the TTI and SCI in (<ref>). As shown in <cit.>, see also <cit.>, these logarithmic corrections to the free energies of holographic SCFTs are universal and do not depend on continuous parameters. The large N expansion of the SCI and TTI indeed confirms this general result. Moreover, based on the results in <cit.>, we conclude that the logarithm of any path integral on a compact Euclidean manifold ℳ for the 3d SCFTs in Table <ref> (in the presence of arbitrary sources) contains the following universal logarithmic term -log Z_ℳ⊃χ/4log N , where, as above, χ is the regularized Euler number of the 4d Euclidean manifold, with ℳ as its boundary. § DISCUSSION The main result of our work is to establish the relation in (<ref>) between the first two leading terms in the Cardy-like expansion of the SCI and the Bethe potential and TTI for a broad class of 3d 𝒩=2 SCFTs. We then showed how to utilize this result in the context of 3d =2 holographic SCFTs arising from N M2-branes probing certain Calabi-Yau 4-fold singularities. 
Combining the relation in (<ref>) with the exact TTI results of <cit.>, we derived the all order 1/N-perturbative expansions for the first two leading terms in the Cardy-like expansion of the SCI for a number of such holographic SCFTs. We also discussed how our results can be used in the context of precision holography by combining them with recent advances in our understanding of higher-derivative and logarithmic corrections to 4d gauged supergravity. Below we briefly discuss several open problems and possible generalizations of our work. One of the most conceptually straightforward, yet technically challenging, questions that remain open is the computation of the SCI for general finite values of the parameter ω. Evaluating the SCI localization formula (<ref>) with a finite ω is in general very complicated and, even when employing the saddle point approximation for sufficiently small ω, keeping track of subleading corrections beyond the ω^0 order becomes highly non-trivial, see <cit.>. The divergent series expansion of the ∞-Pochhammer symbol discussed in Appendix <ref> poses another complication in analyzing the SCI perturbatively in ω. A promising starting point to address these difficulties could be to focus on the SCI of holographic SCFTs in the large N limit, see <cit.> for recent attempts in this direction. However, even within this context, the simple (ω+1)^22ω ω-dependence of the N^3/2 term in the large N expansion, as predicted by the holographic dual on-shell supergravity action in (<ref>), has not yet been fully reproduced from the field theory perspective and remains an important open problem. Our work can be generalized by applying the general index relation (<ref>) to different classes of holographic SCFTs beyond the examples arising from M2-branes studied here. Several classes of prime examples for future exploration are theories arising on the worldvolume of D2-branes in massive IIA string theory, or those associated to M5-branes wrapping hyperbolic 3-manifolds, or SCFTs obtained by wrapping D4-branes on a Riemann surface <cit.>. For example, for the D2-brane models in massive IIA, the leading N^5/3 behavior of the Cardy-like limit of the SCI and the TTI were studied in <cit.> and our results above provide the necessary ingredients to extend this beyond the leading order in the large N limit. In pursuing this, the first step would be to evaluate the large N limit of the TTI and subsequently leverage the corresponding Bethe potential and TTI results to derive the small ω expansion of the SCI using the index relation (<ref>). The exact results for the first two leading terms of the SCI of =2 holographic SCFTs in the Cardy-like expansion presented above can be used to explore not only the perturbative expansion of the holographically dual string/M-theory path integrals but also the associated non-perturbative corrections. In particular, the leading exponentially suppressed term in the large N expansion of the SCI should capture the instanton contributions to the dual string/M-theory path integral. For the S^3 partition functions these non-perturbative corrections at large N were studied for the ABJM theory in <cit.>, as well as in the holographically dual string/M-theory background more recently in <cit.>. It is reasonable to expect that a similar holographic comparison can be performed for the 3d =2 SCI using the non-perturbative results for the TTI presented in <cit.> together with the index relation (<ref>). 
In Section <ref>, we explored the holographic dual backgrounds to the so-called universal SCI and TTI in (<ref>). The field theory results we derived in Section <ref> are however more general since they allow for the presence of general fugacities for flavor symmetries and real mass parameters. Introducing these additional parameters in supergravity should amount to finding more general supersymmetric asymptotically AdS_4 Kerr-Newman and Reissner-Nordström black hole solutions of 11d supergravity. It will be very interesting to construct and study these solutions explicitly since they will provide a fertile ground for additional precision tests of the holographic duality. The relation between the Cardy-like limit of the SCI and the TTI in (<ref>) can be viewed as a consequence of a relation between the TTI and the “disk index” ℐ_S^1×_ω D^2, i.e. the partition function of 3d 𝒩=2 SCFTs on S^1×_ωD^2. As pointed out in <cit.>, the leading order term in the small ω expansion of the logarithm of the disk index is controlled by the Bethe potential. Our work above implies the more general relation valid for any 3d 𝒩=2 SCFT[The detailed derivation of the relation in (<ref>) follows directly the approach of <cit.> combined with the observations in Section <ref> above.] logℐ_S^1×_ω D^2 (ω, Δ, ) = -1/2πω𝒱[u^*;Δ,]+1/2log Z(Δ,) + 1/2 r_G log(-ω) + 𝒪(ω) , where r_G denotes the rank of the gauge group, which equals to pN for the theories studied in this paper. (<ref>) relates the ω^0 term of logℐ_S^1×_ω D^2 to the TTI, i.e. to the S^1× S^2 partially twisted partition function log Z(Δ,). Since the SCI can be thought of, in some sense, as gluing two partition functions on S^1×_ωD^2, the relation between the SCI and TTI in (<ref>) could be viewed as a consequence of (<ref>). One can also contemplate studying the Euclidean path integrals of 3d 𝒩=2 SCFTs on more general compact manifolds. As noted in <cit.>, the supersymmetric partition function of 3d =2 theories on any Seifert manifold exhibits a close relationship with the TTI and can be expressed in terms of appropriate fibering and handle-gluing operators summed over the Bethe vacua. It will be very interesting to leverage the perturbatively exact TTI results of <cit.>, together with the SCI and disk index relations presented above, to understand better the structure of these supersymmetric path integrals and their holographic implications. More generally, our work could be viewed as a consequence of the factorization properties of supersymmetric partition functions of 3d 𝒩=2 theories on Euclidean manifolds <cit.>. Understanding better the consequences of this type of factorization for large N holographic theories and clarifying its implication for the dual string/M-theory path integrals is an important open problem, see <cit.> for some developments in this direction. We hope that our work provides a useful stepping stone in uncovering this structure and studying its consequences for string and M-theory. § ACKNOWLEDGMENTS We are grateful to Arash Arabi Ardehali, Davide Cassani, Pieter-Jan De Smet, Dongmin Gang, Seppe Geukens, Kiril Hristov, Chiung Hwang, Seok Kim, Zohar Komargodski, Silviu Pufu, and Xuao Zhang for valuable discussions. NB is supported in part by FWO projects G003523N, G094523N, and G0E2723N, as well as by the Odysseus grant G0F9516N from the FWO. SC is supported in part by a KIAS Individual Grant PG081602 at Korea Institute for Advanced Study. SC is grateful to KU Leuven for kind hospitality during part of this project. 
JH is supported by the Sogang University Research Grant of 202410008.01, the Basic Science Research Program of the National Research Foundation of Korea (NRF) funded by the Ministry of Education through the Center for Quantum Spacetime (CQUeST) with grant number NRF-2020R1A6A1A03047877, and the Fonds Wetenschappelijk Onderzoek–Vlaanderen (FWO) Junior Postdoctoral Fellowship with grant number 1203024N. JH is grateful to KIAS and Seoul National University for the warm hospitality during parts of this project. VR is partly supported by a Visibilité Scientifique Junior Fellowship from LabEx LMH and is grateful to the CCPP at New York University for hospitality during part of this project. § ∞-POCHHAMMER SYMBOL Here we briefly summarize the definition and properties of the ∞-Pochhammer symbol. It is defined within the unit disk as (a;q)_∞=∏_n=0^∞(1-aq^n) , (|q|<1) , and can be extended to |q|>1, see Appendix A of <cit.> for more details. The ∞-Pochhammer symbol satisfies the identity (-x)^2(xq^1+;q^2)_∞(x^-1q^1+;q^2)_∞=(-x)^-2(xq^1-;q^2)_∞(x^-1q^1-;q^2)_∞ , (∈ℤ) , and has the following asymptotic expansion (q=e^πω) lim_ω→0^+(aq^x;q^2)_∞ =exp[-2πωLi_2(aq^x-1)](1+𝒪(ω)) =exp[-2πωLi_2(a)+x-12Li_1(a)](1+𝒪(ω)) ,   (a∈ℂ, a∉[1,∞)) in terms of the polylogarithm functions. We comment further on the expansion of the ∞-Pochhammer symbol beyond the ω^0 order below. Since the asymptotic expansion in (<ref>) involves the polylogarithm functions, we also provide the inversion formula Li_n(e^ x)+(-1)^nLi_n(e^- x)=-(2π)^nn!B_n(x2π)    0≤[x]<2π & [x]≥0 0<[x]≤2π & [x]<0 , which is useful to compare the generic Bethe potential (<ref>) with the known expressions for special cases <cit.>. To go beyond the ω^0 order in the asymptotic expansion (<ref>), first we assume |a|<1 and |q|<1. Then one can expand the ∞-Pochhammer symbol as log(aq^x,q^2)_∞ =∑_n=0^∞log(1-aq^2n+x) =-∑_n=0^∞∑_r=1^∞1r(aq^2n+x)^r =∑_r=1^∞1ra^rq^xrq^2r-1 =∑_r=1^⌊1/|ω|⌋1ra^r∑_k=0^∞ B_k(x2)(2rπω)^k-1k!-∑_r=⌊1/|ω|⌋+1^∞1r(aq^x)^r11-q^2r ≠∑_k=0^∞ B_k(x2)Li_2-k(a)(2πω)^k-1k! , where we have used the generating function of Bernoulli polynomials te^xte^t-1=∑_k=0^∞ B_k(x)t^kk! ,  (|t|<2π) , in the 4th line of (<ref>). It is worth nothing a subtle feature. The formula (<ref>) cannot be applied to the 3rd line of (<ref>) for r>⌊1/|ω|⌋ due to the finite radius of convergence and therefore the last line of (<ref>) is in general not a valid asymptotic expansion for the ∞-Pochhammer symbol. In fact the last line of (<ref>) is a divergent series, see Appendix A of <cit.> for example. Hence in the main text we proceed with the the asymptotic expansion (<ref>) based on <cit.> and Appendix A of <cit.>. § PHASES IN THE LOCALIZATION OF THE SCI In this Appendix we derive the SCI localization formula (<ref>) based on <cit.>, see also <cit.> for a nice summary of the conventions. The localization formula for the =2 SCI (<ref>) is given by <cit.> (q,ξ) =1||∑__1,⋯,_r_G∈ℤ∮[∏_ℓ=1^r_Gdz_ℓ2π z_ℓ((-1)^_ℓz_ℓ)^∑_n=1^r_Gk_ℓ n_n]×∏_xξ_x^∑_ℓ=1^r_Gk_xℓ_ℓ ×∏_α∈𝔤q^-12|α()|(1-e^α(h)q^|α()|) ×∏_Ψ∏_ρ_Ψ(q^1-R(Ψ)(-1)^-ρ_Ψ()e^-ρ_Ψ(h)∏_xξ_x^-f_x(Ψ))^12|ρ_Ψ()| 5em×(e^-ρ_Ψ(h)∏_xξ_x^-f_x(Ψ)q^2-R(Ψ)+|ρ_Ψ()|;q^2)_∞(e^ρ_Ψ(h)∏_xξ_x^f_x(Ψ)q^R(Ψ)+|ρ_Ψ()|;q^2)_∞ , where r_G denotes the rank of the gauge group and the mixed CS levels are turned on in general. The underlined extra phase terms due to the replacement (-1)^F→(-1)^2j_3 in the trace formula (<ref>) can be written as ζ=∏_ℓ,n=1^r_G(-1)^k_ℓ n_ℓ_n×∏_Ψ∏_ρ_Ψ(-1)^-12ρ_Ψ()|ρ_Ψ()| . 
Introducing the explicit linear form of the weights, with 𝔥_G the Cartan subalgebra of the gauge group G, ρ_Ψ(X)=∑_ℓ=1^r_G(ρ_Ψ)_ℓX_ℓ , (X∈𝔥_G , (ρ_Ψ)_ℓ∈ℤ) , and using that the gauge magnetic fluxes are integer quantized, _ℓ∈ℤ, one can rewrite the phase factor (<ref>) as ζ =∏_ℓ,n=1^r_G(-1)^k_ℓ n_ℓ_n×∏_Ψ∏_ρ_Ψ(-1)^12ρ_Ψ()^2(-1)^12(ρ_Ψ()+|ρ_Ψ()|) =∏_ℓ,n=1^r_G(-1)^(k_ℓ n+12∑_Ψ∑_ρ_Ψ(ρ_Ψ)_ℓ(ρ_Ψ)_n)_ℓ_n×∏_Ψ∏_ρ_Ψ(-1)^12(ρ_Ψ()+|ρ_Ψ()|) . The shifted CS levels are integer quantized as <cit.>[In <cit.> the authors take the U(1)_-12 quantization for chiral multiplets so that the bare CS levels are integer quantized. In that case the localization formula for the 1-loop contribution of a chiral multiplet to the SCI also slightly changes <cit.> compared to the convention of <cit.>. In the latter convention we followed in this paper, the integer quantized CS levels correspond to the shifted ones (<ref>).] k_ℓ n+12∑_Ψ∑_ρ_Ψ(ρ_Ψ)_ℓ(ρ_Ψ)_n∈ℤ , and therefore the phase factor (<ref>) can be simplified further as ζ=∏_ℓ=1^r_G(-1)^(k_ℓℓ+12∑_Ψ∑_ρ_Ψ(ρ_Ψ)_ℓ(ρ_Ψ)_ℓ)_ℓ×∏_Ψ∏_ρ_Ψ(-1)^12(ρ_Ψ()+|ρ_Ψ()|) . Substituting the rewritten phase factor (<ref>) back into the localization formula (<ref>) and specializing it to the class of =2 SCFTs of our interest described in Section <ref> where the subscript ℓ labeling the Cartan generators of the gauge group is replaced with the pair (r,i), we obtain (q,ξ) =1(N!)^p∑__1,⋯,_p∈ℤ^N∮[∏_r=1^p∏_i=1^Ndz_r,i2π z_r,iz_r,i^k_r_r,i(-1)^(k_r+12∑_Ψ∑_ρ_Ψ(ρ_Ψ)_r,i(ρ_Ψ)_r,i)_r,iξ_T_r^_r,i] ×∏_r=1^p∏_i≠ j^Nq^-12|_r,i-_r,j|(1-z_r,iz_r,j^-1q^|_r,i-_r,j|) ×∏_Ψ∏_ρ_Ψ(-1)^12(ρ_Ψ()+|ρ_Ψ()|)(q^1-R(Ψ)e^-ρ_Ψ(h)∏_xξ_x^-f_x(Ψ))^12|ρ_Ψ()| 5em×(e^-ρ_Ψ(h)∏_xξ_x^-f_x(Ψ)q^2-R(Ψ)+|ρ_Ψ()|;q^2)_∞(e^ρ_Ψ(h)∏_xξ_x^f_x(Ψ)q^R(Ψ)+|ρ_Ψ()|;q^2)_∞ . Note that we have turned off mixed CS levels between gauge/global symmetries except the unit mixed CS levels between gauge and U(1) topological symmetries. Finally, we redefine the fugacities associated with topological symmetries as ξ_T_r → ξ_T_r(-1)^-(k_r+12∑_Ψ∑_ρ_Ψ(ρ_Ψ)_r,i(ρ_Ψ)_r,i) , upon which (<ref>) yields the localization formula (<ref>) in the main text. Note that the redefinition (<ref>) is allowed if the shifted CS level k_r+12∑_Ψ∑_ρ_Ψ(ρ_Ψ)_r,i(ρ_Ψ)_r,i is independent of the subscript i∈{1,⋯,N}, which is indeed the case for =2 SCFTs involving chiral multiplets in the following representations: * (Anti)-fundamental representation Ψ_s with N weights (ρ_Ψ_s^(j))_r,i=±δ_s,rδ_j,i where j∈{1,⋯,N}, * Adjoint representation Ψ_(s,s) with N^2 weights (ρ_Ψ_(s,s)^(j,k))_r,i=δ_s,r(δ_j,i-δ_k,i) where j,k∈{1,⋯,N}, * Bi-fundamental representation Ψ_(s,t) (s≠ t) with N^2 weights (ρ_Ψ_(s,t)^(j,k))_r,i=δ_s,rδ_j,i-δ_t,rδ_k,i where j,k∈{1,⋯,N}. § CARDY-LIKE EXPANSION OF THE SCI In this appendix we provide details in the derivation of the Cardy-like limit of the SCI (<ref>). To begin with, one can expand the SCI (<ref>) in the Cardy-like limit (<ref>) using the asymptotic expansion (<ref>) as <cit.> (ω,Δ,) =1(N!)^p(-1)^pN(N-1)2∑__1,⋯,_p∈ℤ^N∮_|s_r,i|=e^-πω_r,i(∏_r=1^p∏_i=1^Nds_r,i2π s_r,i) ×exp[-2πω[U;Δ,,ω,ℓ]+2πω[-;-Δ,,-ω,-ℓ]+(ω)] , where we have introduced a holomorphic effective potential [U;Δ,,ω,ℓ] =∑_r=1^p∑_i=1^N[12k_rU_r,i^2-π(2ℓ_r,i-Δ_T_r+ω_r)U_r,i]-πω∑_r=1^p∑_i≠ j^NLi_1(e^(U_r,j-U_r,i)) +∑_Ψ∑_ρ_Ψ[14ρ_Ψ(U)^2+πΔ_Ψ2ρ_Ψ(U)-12πω(1-_Ψ)(ρ_Ψ(U)+πΔ_Ψ)-Li_2(e^ρ_Ψ(U)y_Ψ q^-1+_Ψ)] . In the last line of (<ref>) we have included an extra term proportional to ∼ω(1-_Ψ) Δ_Ψ which cancels in the full exponent of (<ref>) but is useful later. 
Note that the holomorphic effective potential (<ref>) is expanded in the Cardy-like limit as in (<ref>) using the asymptotic expansion (<ref>). Now using the complex conjugate relations ^(0)[U;Δ,ℓ] =^(0)[-;-Δ,-ℓ] , ^(1)[U;Δ,] =^(1)[-;-Δ,] , for the expanded effective potential (<ref>), one can rewrite (<ref>) as (ω,Δ,) =1(N!)^p(-1)^pN(N-1)2∑__1,⋯,_p∈ℤ^N∮_|s_r,i|=e^-πω_r,i(∏_r=1^p∏_i=1^Nds_r,i2π s_r,i) ×exp[1πω^(0)[U;Δ,ℓ]+2^(1)[U;Δ,]+(ω)] . To simplify (<ref>) further in the Cardy-like limit, we replace the sums over gauge magnetic fluxes with the integrals as ∑__1,⋯,_p∈ℤ^N∮_|s_r,i|=e^-πω_r,i(∏_r=1^p∏_i=1^Nds_r,i2π s_r,i)(⋯) =∫_ℂ^pN(∏_r=1^p∏_i=1^NdU_r,id_r,i-4π^2ω)(⋯)(1+𝒪(ω)) , following <cit.> and Appendix C.1 of <cit.> based on the Euler-Maclaurin formula. Applying the replacement (<ref>) to (<ref>) we arrive at (<ref>). § REVISITING ABJM/ADHM THEORIES In this Appendix we revisit the Cardy-like expansion of the SCI for ABJM/ADHM theories studied in <cit.> following the generic =2 conventions spelled out in Section <ref>. We explicitly show that some technical differences in the intermediate steps of the calculation do not affect the final results of <cit.>. §.§ ABJM theory The generic Bethe potential (<ref>) and the generic BA formula for the TTI (<ref>) for the ABJM theory can be read off from the ones for the N^0,1,0 theory, (<ref>) and (<ref>), simply by setting r_1=r_2=0. One can check that the resulting expressions match the Bethe potential[To match the Bethe potentials up to gauge holonomy independent terms, we have used the inversion formula (<ref>) under the assumption 0<[u_i-_j+πΔ_1,2]<2π . ] and the TTI of the ABJM theory in <cit.> respectively after the identifications (u_1,i,u_2,i) =(u_i,_i) , (Δ_m,1,Δ_m,2) =(0,0) , (2n_1,i,2n_2,i) =(2n_i-1-(-1)^N2+N,2_i-1-(-1)^N2+N) , (_1,_2) =(0,0) , and the constraints ∑_a=1^4Δ_a=2 , ∑_a=1^4_a=2 . Therefore we can use the numerical BAE solutions {u_⋆} constructed in <cit.> for the choice of integers (n_i,_j)=(1-i,j-N) , for various configurations satisfying the constraints (<ref>). Furthermore we can use the closed form expression for the ABJM TTI derived by substituting those numerical BAE solutions to the BA formula which reads <cit.> log Z_ABJM(Δ,) =-π√(2kΔ_1Δ_2Δ_3Δ_4)3∑_a=1^4_aΔ_a(N̂_Δ^32-𝔠_akN̂_Δ^12) -12logN̂_Δ+f̂_0(k,Δ,)+f̂_np(N,k,Δ,) , where we have defined N̂_Δ =N-k24+112k∑_a=1^41Δ_a , 𝔠_a =∏_b≠ a(Δ_a+Δ_b)8Δ_1Δ_2Δ_3Δ_4∑_b≠ aΔ_b . Now we move on to the SCI of the ABJM theory. As in the various examples studied in Section <ref>, it suffices to determine the closed form expression for the Bethe potential using various BAE solutions {u_⋆} in <cit.>. We obtained _ABJM[u_⋆;Δ,n]=2π[π√(2kΔ_1Δ_2Δ_3Δ_4)3N̂_k,Δ^32+ĝ_0(k,Δ)+ĝ_np(N,k,Δ)] , which is exactly the same as the result of <cit.>. In other words, we have shown that the gauge holonomy independent difference between the ABJM Bethe potential (<ref>) with r_1=r_2=0 and the expression used in <cit.> becomes real at the BAE solution and therefore does not affect the result (<ref>). Substituting the TTI (<ref>) and the Bethe potential (<ref>) into the Cardy-like expansion (<ref>), we arrive at the result of <cit.>, which we repeat here for completeness log_ABJM(ω,Δ,) =-2ω[π√(2kΔ_1Δ_2Δ_3Δ_4)3N̂_k,Δ^32+ĝ_0(k,Δ)+ĝ_np(N,k,Δ)] +[-π√(2kΔ_1Δ_2Δ_3Δ_4)3∑_a=1^4_aΔ_a(N̂_k,Δ^32-𝔠_a(Δ)kN̂_k,Δ^12) 3em-12logN̂_k,Δ+f̂_0(k,Δ,)+f̂_np(N,k,Δ,)]+𝒪(ω) . These calculations confirm that the approach based on the generic =2 conventions is consistent with the previous analysis for the ABJM theory in <cit.>. 
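As a compact illustration of these building blocks, the sketch below evaluates the shifted parameter N̂_Δ and the leading N̂^{3/2} coefficient for ABJM at the democratic point Δ_a=1/2. The expression entered for N̂_Δ is one reading of the flattened formula (at this point it reduces to N - k/24 + 2/(3k), matching the r→0 limit of the N^{0,1,0} shift quoted earlier), and the subleading coefficients 𝔠_a are omitted because their flattened form is ambiguous.

```python
# Sketch of the ABJM building blocks quoted above (one reading of the flattened
# expressions; the subleading coefficients c_a are omitted).
import numpy as np

def nhat_abjm(N, k, Delta):
    # assumed reading: Nhat = N - k/24 + (1/(12k)) * sum_a 1/Delta_a
    return N - k/24 + np.sum(1/np.asarray(Delta))/(12*k)

def leading_coeff(k, Delta):
    # pi*sqrt(2k*D1*D2*D3*D4)/3: multiplies Nhat^{3/2} in V[u*]/(2*pi) and,
    # dressed with sum_a B_a/Delta_a, in -log Z.
    return np.pi*np.sqrt(2*k*np.prod(Delta))/3

k, N = 1, 100
Delta = np.array([0.5, 0.5, 0.5, 0.5])      # democratic point, sum_a Delta_a = 2
print(nhat_abjm(N, k, Delta))               # N - k/24 + 2/(3k) at this point
print(leading_coeff(k, Delta))              # pi*sqrt(2k)/12 at this point
```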
§.§ ADHM theory The generic Bethe potential (<ref>) and the BA formula for the TTI (<ref>) for the ADHM theory are equivalent to those for the V^5,2 theory given in Section <ref> and match the ones in <cit.> under the same identifications (<ref>) but different constraints ∑_I=1^3Δ_I =Δ_3+Δ_q+Δ_=2 , ∑_I=1^3_I =_3+_q+_=2 . Hence we can use the numerical BAE solutions {u_⋆} constructed in <cit.> with the choice of integers n_i=⌊N+12⌋-i for various configurations satisfying the constraints (<ref>). Furthermore we can use the closed form expression for the ADHM TTI derived in <cit.> log Z_ADHM(Δ,) =-π√(2N_fΔ̃_1Δ̃_2Δ̃_3Δ̃_4)3∑_a=1^4_a[1Δ̃_a(N̂_N_f,Δ̃)^32+(𝔠_a(Δ̃)N_f+𝔡_a(Δ̃)N_f)(N̂_N_f,Δ̃)^12] -12logN̂_N_f,Δ̃+f̂_0(N_f,Δ̃,)+f̂_np(N,N_f,,) , where the shifted N parameter is given by (<ref>) and we have also defined 𝔠_a(Δ̃) =(-1Δ̃_1(Δ̃_2+Δ̃_3+Δ̃_4)(Δ̃_1+Δ̃_2)8Δ̃_1Δ̃_2,-1Δ̃_2(Δ̃_1+Δ̃_3+Δ̃_4)(Δ̃_1+Δ̃_2)8Δ̃_1Δ̃_2,  -Δ̃_3+Δ̃_48Δ̃_1Δ̃_2,-Δ̃_3+Δ̃_48Δ̃_1Δ̃_2) , 𝔡_a(Δ̃) =(-(Δ̃_1+Δ̃_2)(Δ̃_2+Δ̃_3+Δ̃_4)(Δ̃_1+Δ̃_3+Δ̃_4)8Δ̃_1Δ̃_2Δ̃_3Δ̃_4,  -(Δ̃_1+Δ̃_2)(Δ̃_2+Δ̃_3+Δ̃_4)(Δ̃_1+Δ̃_3+Δ̃_4)8Δ̃_1Δ̃_2Δ̃_3Δ̃_4,  -1Δ̃_3(Δ̃_3+Δ̃_4)((_1+_2)(_2+_3)(_3+_1)+(_1_2+_2_3+_3_1)_4)8Δ̃_1Δ̃_2Δ̃_3Δ̃_4,  -1Δ̃_4(Δ̃_3+Δ̃_4)((_1+_2)(_2+_4)(_4+_1)+(_1_2+_2_4+_4_1)_3)8Δ̃_1Δ̃_2Δ̃_3Δ̃_4) , in addition to (<ref>). Now we move on to the SCI of the ADHM theory. As in Section <ref>, it suffices to determine the closed form expression for the Bethe potential using various BAE solutions {u_⋆} in <cit.>. We obtained _ADHM[u_⋆;Δ,n]=2π[π√(2N_f_1_2_3_4)3N̂_N_f,Δ^32+ĝ_0(N_f,Δ)+ĝ_np(N,N_f,)] , which is exactly the same as the result of <cit.>. As in the ABJM case discussed above, this implies that the gauge holonomy independent difference between the Bethe potential (<ref>) and the one used in <cit.> becomes real at the BAE solution and therefore does not affect the result (<ref>). Substituting the TTI (<ref>) and the Bethe potential (<ref>) into the Cardy-like expansion (<ref>), we arrive at the result of <cit.>, which we repeat here for completeness log_ADHM(ω,Δ,) =-2ω[π√(2N_f_1_2_3_4)3N̂_N_f,Δ^32+ĝ_0(N_f,Δ)+ĝ_np(N,N_f,)] +[-π√(2N_f_1_2_3_4)3∑_a=1^4_a[1_aN̂_N_f,^32+(𝔠_a()N_f+𝔡_a()N_f)N̂_N_f,^12] 3em-12logN̂_N_f,+f̂_0(N_f,,)+f̂_np(N,N_f,,)]+φ+𝒪(ω) . This again confirms that the approach based on the generic =2 conventions is consistent with the previous analysis for the ADHM theory in <cit.>. § CONVENTIONS AND BETHE POTENTIAL In this Appendix we first provide a relation between the generic =2 TTI conventions introduced in Section <ref> and the conventions of <cit.> for various =2 holographic SCFTs studied in Section <ref>, which allows us to employ the results of <cit.> in analyzing the Cardy-like expansion of the SCI. We then present numerical data that supports the analytic expressions for the Bethe potentials associated with the TTI. §.§ N^0,1,0 theory The Bethe potential (<ref>) and the TTI (<ref>) of the N^0,1,0 theory can be identified with the expressions presented in <cit.> after the identifications (u_1,i,u_2,i) =(u_i,_i) , (Δ_m,1,Δ_m,2) =(N+1,-N-1) , (n_1,i,n_2,i) =(n_i+N,_i+N) , (_1,_2) =(0,0) . To match the Bethe potentials, we have used the inversion formula (<ref>) with the assumptions[Note that the Bethe potential is not written explicitly for the N^0,1,0 TTI in <cit.> but it can be deduced easily from the BAE modulo gauge holonomy independent terms.] 0<[u_i-_j+πΔ_1,2]<2π & 0<[u_i+πΔ_q_1],[_i+πΔ_q_2]<2π . 
In the comparison of the TTI, we observed a slight phase difference between (<ref>) and the TTI expression in <cit.>, which originates from a subtle branch choice for flavor fugacities. It does not affect the real part of the logarithm of the TTI that is relevant in the index relation (<ref>), however, and therefore the Cardy-like expansion of the N^0,1,0 SCI will not be affected by this minor phase difference. Hence we ignore this phase difference throughout the paper and similarly in other holographic SCFTs. We now turn to the numerical data that supports the all-order 1/N expansion of the N^0,1,0 Bethe potential given in (<ref>). The list of (k,r)-configurations for which we confirmed the analytic expression in (<ref>) is given as follows: k∈{1,2,3,4} , rk∈{12,23,1,32,2,3} . For the above list of (k,r)-configurations, we estimate the numerical coefficients ĝ_3/2^(lmf)(k,r) & ĝ_0^(lmf)(k,r) together with the associated standard errors σ_3/2 and σ_0 in the for the N^0,1,0 Bethe potential. Namely, we evaluate 12π_N^0,1,0[u_⋆;Δ,n]|_(<ref>)=ĝ_3/2^(lmf)(k,r)N̂_k,r^32+ĝ_0^(lmf)(k,r) , following the numerical BAE solutions constructed in <cit.> for N=101∼301 (in steps of 10) at 200. The leading order coefficient is then compared with the corresponding analytic expression in (<ref>), namely ĝ_3/2(k,r)=π(k+r)6√(2k+r) , by calculating the error ratio R_3/2(k,r) = ĝ_3/2^(lmf)(k,r)-ĝ_3/2(k,r)ĝ_3/2(k,r) . The following tables summarize the numerical data described above. k=r R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 k=1 5.725×10^-20 1.802×10^-20 -0.16014196980357514241 5.808×10^-17 k=2 -1.578×10^-26 7.839×10^-27 -0.21428948039679424932 2.526×10^-23 k=3 5.840×10^-15 2.500×10^-15 -0.32655189751806194061 8.058×10^-12 k=4 1.736×10^-13 7.911×10^-14 -0.48717531374612447878 2.551×10^-10 2k=r R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 k=1 1.410×10^-14 4.528×10^-15 -0.22242988776920419681 1.460×10^-11 k=2 4.378×10^-12 1.620×10^-12 -0.50364353625504316942 5.226×10^-9 k=3 1.216×10^-10 4.816×10^-11 -0.98356557594764269955 1.556×10^-7 k=4 9.830×10^-10 4.063×10^-10 -1.6571101147599629963 1.315×10^-6 k=2r R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 k=1 2.009×10^-18 4.850×10^-19 -0.15676419358385312020 1.564×10^-15 k=2 -1.037×10^-21 3.873×10^-22 -0.15305814860409592117 1.247×10^-18 k=3 5.569×10^-20 2.355×10^-20 -0.18114828105867926447 7.583×10^-17 k=4 1.525×10^-17 6.984×10^-18 -0.22615656081972779158 2.248×10^-14 3k=2r R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 k=1 5.496×10^-17 1.783×10^-17 -0.18315250021650044933 5.747×10^-14 k=2 3.819×10^-14 1.463×10^-14 -0.33156998734746390156 4.717×10^-11 k=3 2.571×10^-12 1.068×10^-12 -0.59423904669712970624 3.448×10^-9 k=4 3.351×10^-11 1.466×10^-11 -0.96428722814277342203 4.735×10^-8 2k=3r R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 k=1 3.269×10^-19 8.755×10^-20 -0.15532082590294874648 2.822×10^-16 k=2 -6.275×10^-20 2.394×10^-20 -0.16709079161756907785 7.712×10^-17 k=3 7.998×10^-18 3.421×10^-18 -0.21595042199749374794 1.102×10^-14 k=4 7.039×10^-16 3.239×10^-16 -0.28910459214092956841 1.043×10^-12 3k=r R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 k=1 1.273×10^-11 3.840×10^-12 -0.34513332066468836843 1.239×10^-8 k=2 1.081×10^-9 3.631×10^-10 -1.0105562008492962901 1.174×10^-6 k=3 1.215×10^-8 4.297×10^-9 -2.1264119370284434162 1.392×10^-5 k=4 5.688×10^-8 2.079×10^-8 -3.6898237530453026380 6.753×10^-5 To argue that the in (<ref>) captures the perturbative expansion of the Bethe potential completely up to non-perturbative corrections of order (e^-√(N)), one can follow the numerical technique introduced in <cit.> for the non-perturbative analysis. 
Based on that approach, we confirmed that (<ref>) is indeed exact up to exponentially suppressed non-perturbative corrections and did a similar check for the V^5,2 and Q^1,1,1 theories. In the discussion below we omit the details of the analysis of the non-perturbative corrections for these two other models since it is parallel to the analysis in <cit.>. §.§ V^5,2 theory The Bethe potential (<ref>) and the TTI (<ref>) of the V^5,2 theory can be identified with the expressions presented in <cit.> after the substitutions u_1,i =u_i , Δ_m,1 =Δ_m-(N+1-2⌊N+12⌋) , n_1,i =n_i , _1 = . To match the Bethe potentials, we have used the inversion formula (<ref>) under the assumptions 0<[u_i-u_j+πΔ_I]<2π . To be more precise, the Bethe potential (<ref>) matches the expression in <cit.> up to gauge holonomy independent terms. Such a difference does not affect the BAE, however, and therefore the numerical BAE solutions of <cit.> can be utilized in the generic =2 conventions of this paper without modification. Next we provide numerical data that supports the all-order 1/N expansion of the V^5,2 Bethe potential given in (<ref>). The list of N_f and Δ-configurations for which we confirmed (<ref>) is given as follows (Δ=(Δ_I,Δ_q,Δ_,Δ_m)). Case 1. Δ=(Δ_1,43-Δ_1,23,13,13,0) 3 N_f ∈{1,2,3,4,5} & Δ_1 =23 , N_f ∈{1,2,3} & Δ_1 =12 , Case 2. Δ=(Δ_1,43-Δ_1,23,13,13,Δ_m) 3 N_f ∈{1,2,3} & (Δ_1,Δ_m) ∈{(59,N_f9),(712,N_f15)} , Case 3. Δ=(Δ_1,43-Δ_1,23,Δ_q,23-Δ_q,Δ_m) 3 N_f ∈{1,2} & (Δ_1,Δ_q,Δ_m) =(23-12π,16,N_f(23-2π)) . For the above listed N_f and Δ-configurations, we estimate numerical coefficients ĝ_3/2^(lmf)(N_f,Δ) & ĝ_0^(lmf)(N_f,Δ) together with the associated standard errors σ_3/2 & σ_0 in the for the V^5,2 Bethe potential, namely 12π^V^5,2[u_⋆;Δ,n]=ĝ_3/2^(lmf)(N_f,Δ)N̂_N_f,Δ^32+ĝ_0^(lmf)(N_f,Δ) , based on the numerical BAE solutions {u_⋆} constructed in <cit.> for N=101∼301 (in steps of 10) at 200. The leading order coefficient is then compared with the corresponding analytic expression in (<ref>), namely ĝ_3/2(N_f,Δ)=π√(N_f_1_2_3_4)3 , by presenting the error ratio R_3/2(N_f,Δ) = ĝ_3/2^(lmf)(N_f,Δ)-ĝ_3/2(N_f,Δ)ĝ_3/2(N_f,Δ) . The following tables summarize the numerical data described above. Case 1. Δ_1=23 R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 N_f=1 -4.530×10^-31 1.278×10^-31 -0.13945567062297404931 4.118×10^-28 N_f=2 7.423×10^-25 2.745×10^-25 -0.18888453456219889345 8.850×10^-22 N_f=3 1.260×10^-19 5.286×10^-20 -0.29657318363534945350 1.705×10^-16 N_f=4 3.228×10^-17 1.473×10^-17 -0.45154281492584592878 4.756×10^-14 N_f=5 1.335×10^-15 6.463×10^-16 -0.65205454918709992787 2.089×10^-12 Case 1. Δ_1=12 R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 N_f=1 -1.307×10^-25 3.377×10^-26 -0.14109222832291037605 1.088×10^-22 N_f=2 1.239×10^-20 4.070×10^-21 -0.20242609578739029964 1.312×10^-17 N_f=3 3.814×10^-17 1.408×10^-17 -0.32947376221940705368 4.545×10^-14 Case 2. (Δ_1,Δ_m)=(59,N_f9) R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 N_f=1 -6.905×10^-25 1.761×10^-25 -0.14076080517355255410 5.675×10^-22 N_f=2 2.478×10^-19 7.910×10^-20 -0.19211160033287729944 2.550×10^-16 N_f=3 3.292×10^-16 1.175×10^-16 -0.30398703304578607507 3.793×10^-13 Case 2. (Δ_1,Δ_m)=(712,N_f15) R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 N_f=1 -5.551×10^-27 1.480×10^-27 -0.14004350452924789065 4.771×10^-24 N_f=2 4.841×10^-21 1.639×10^-21 -0.19117259832590889123 5.285×10^-18 N_f=3 1.685×10^-17 6.428×10^-18 -0.30199934416516391009 2.074×10^-14 Case 3. 
(Δ_1,Δ_q,Δ_m)=(23-12π,16,N_f(23-2π) R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 N_f=1 -1.619×10^-25 4.173×10^-26 -0.14096914088723067753 1.345×10^-22 N_f=2 2.628×10^-20 8.540×10^-21 -0.20095533539635298960 2.754×10^-17 §.§ Q^1,1,1 theory The Bethe potential (<ref>) and the TTI (<ref>) of the Q^1,1,1 theory can be identified with the expressions presented in <cit.> after the substitutions (u_1,i,u_2,i) =(u_i,_i) , (Δ_m,1,Δ_m,2) =(N+N_f-2⌊N_f2⌋,-N-N_f+2⌊N_f2⌋) , (n_1,i,n_2,i) =(n_i+N,_i+N) , (_1,_2) =(0,0) . To match the Bethe potentials, we have used the inversion formula (<ref>) under the assumptions 0<[u_i-_j+πΔ_1,2]<2π & 0<[_i+πΔ_q_1,2]<2π . As in the V^5,2 theory case, the Bethe potentials are matched up to gauge holonomy independent terms that do not affect the BAE. Next we present numerical data that supports the all-order 1/N expansion of the Q^1,1,1 Bethe potential given in (<ref>). The list of N_f and Δ-configurations satisfying the constraints (<ref>) for which we confirmed (<ref>) is given as follows: 3 N_f ∈{1,2,3,4,5} & Δ_1 =12 , N_f ∈{1,2,3} & Δ_1 ∈{38,512,37} . For the above listed N_f and Δ-configurations, we estimate numerical coefficients ĝ_3/2^(lmf)(N_f,Δ) , & ĝ_0^(lmf)(N_f,Δ) , together with the associated standard errors σ_3/2 & σ_0 in the for the Q^1,1,1 Bethe potential, namely 12π^Q^1,1,1[u_⋆;Δ,n]|_(<ref>)=ĝ_3/2^(lmf)(N_f,Δ)N̂_N_f^32+ĝ_0^(lmf)(N_f,Δ) , based on the numerical BAE solutions {u_⋆} constructed in <cit.> for N=101∼301 (in steps of 10) at 200. The leading order coefficient is then compared with the corresponding analytic expression in (<ref>), namely ĝ_3/2(N_f,Δ)=π√(N_f)3√(3) , by presenting the error ratio R_3/2(N_f,Δ) = ĝ_3/2^(lmf)(N_f,Δ)-ĝ_3/2(N_f,Δ)ĝ_3/2(N_f,Δ) . The following table summarizes the numerical data we used for this analysis. R_3/2 σ_3/2 ĝ_0^(lmf) σ_0 N_f=1 -9.038×10^-20 2.846×10^-20 -0.12179382823357287453 9.159×10^-17 N_f=2 2.925×10^-15 1.114×10^-15 -0.060896914126385874431 3.588×10^-12 N_f=3 8.415×10^-13 3.513×10^-13 0.018581373235204659187 1.133×10^-9 N_f=4 2.296×10^-11 1.014×10^-11 0.12639484451282630333 3.274×10^-8 N_f=5 2.206×10^-10 1.013×10^-10 0.26400260477995552485 3.274×10^-7 The numerical estimates for the Q^1,1,1 Bethe potential do not depend on the Δ_1=Δ_2 value so the above table is valid for all cases listed in (<ref>). JHEP
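For completeness, the following sketch mirrors the fitting procedure used throughout this appendix: a two-parameter linear least-squares fit of g_{3/2}N̂^{3/2}+g_0 to values of 𝒱[u_⋆]/(2π) for N=101∼301 in steps of 10, together with the associated standard errors. The data below are synthetic and in double precision, with the (k,r)=(1,1) coefficients as input and an assumed shift N̂=N+0.4375, whereas the actual fits quoted above are performed on BAE data at 200-digit working precision.

```python
# Mock-up (not the authors' data or code) of the fitting procedure of this
# appendix: fit  g32 * Nhat^{3/2} + g0  by linear least squares and extract the
# coefficients and their standard errors.
import numpy as np

rng = np.random.default_rng(0)
N_values = np.arange(101, 302, 10)          # N = 101..301 in steps of 10
Nhat = N_values + 0.4375                    # assumed (k,r)=(1,1) shift

# synthetic "data": (k,r)=(1,1) coefficients plus a tiny perturbation standing
# in for the exponentially suppressed non-perturbative tail
g32_true = np.pi/(3*np.sqrt(3))
g0_true = -0.16014196980357514
data = g32_true*Nhat**1.5 + g0_true + 1e-12*rng.standard_normal(len(Nhat))

A = np.column_stack([Nhat**1.5, np.ones_like(Nhat)])
coeff, res, rank, sv = np.linalg.lstsq(A, data, rcond=None)
g32_fit, g0_fit = coeff

# standard errors from the residual variance and (A^T A)^{-1}
dof = len(data) - 2
sigma2 = np.sum((data - A @ coeff)**2)/dof
cov = sigma2*np.linalg.inv(A.T @ A)
print(g32_fit, g0_fit, np.sqrt(np.diag(cov)))
```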
http://arxiv.org/abs/2407.13430v1
20240718115458
Star and Planet Formation with the Single Aperture Large Telescope for Universe Studies (SALTUS) Space Observatory
[ "Kamber Schwarz", "Alexander Tielens", "Joan Najita", "Jennifer Bergner", "Quentin Kral", "Carrie Anderson", "Gordon Chin", "David Leisawitz", "David Wilner", "Peter Roelfsema", "Floris van der Tak", "Erick Young", "Christopher Walker" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.EP", "astro-ph.SR" ]
§ ABSTRACT The Single Aperture Large Telescope for Universe Studies (SALTUS) is a far-infrared space mission concept with unprecedented spatial and spectral resolution. SALTUS consists of a 14-m inflatable primary, providing 16× the sensitivity and 4× the angular resolution of Herschel, and two cryogenic detectors spanning a wavelength range of 34-660 μm and spectral resolving power of 300 - 10^7. Spectroscopic observations in the far-infrared offer many unique windows into the processes of star and planet formation. These include observations of low energy water transitions, the mass tracer HD, many CHONS constraining molecules such as and H_2S, and emission lines from the phonon modes of molecular ices. Observing these species will allow us to build a statistical sample of protoplanetary disk masses, characterize the water snowline, identify Kuiper Belt-like debris rings around other stars, and trace the evolution of CHONS from prestellar cores through to protoplanetary disks and debris disks. This paper details several key star and planet formation science goals achievable with SALTUS. *Kamber R. Schwarz, schwarz@mpia.de § INTRODUCTION The Single Aperture Large Telescope for Universe Studies (SALTUS) is a far-infrared space mission concept proposed to NASA under the Astrophysics Probe Explorer (APEX) Announcement of Opportunity in November 2023. SALTUS covers the far-infrared wavelength range ≈ 30-700 μm, most of which is not covered by any current observatory. The design of SALTUS consists of a 14-m off-axis inflatable primary aperture and two cryogenic instruments: SAFARI-Lite and the High Resolution Receiver (HiRX). The large aperture size allows for unprecedented sensitivity and a spatial resolution of ∼ 1″ at 50 μm. The full technical details of the observatory can be found in Arenberg et al., “Design, Implementation and Performance of the Primary Reflector for SALTUS," Kim et al., “SALTUS Observatory Optical Design and Performance," and Harding, Arenberg, Donovan et al., “SALTUS Probe Class Space Mission: Observatory Architecture & Mission Design,” J. Astron. Telesc. Instrum. Syst. (this issue). SAFARI-Lite is a direct-detection grating spectrometer providing simultaneous 35–230 μm spectroscopy with a resolving power of R=300. The full technical details can be found in Roelfsema et al., “The SAFARI-Lite Imaging Spectrometer for the SALTUS Space Observatory,” J. Astron. Telesc. Instrum. Syst. (this issue). HiRX is a multi-pixel, multi-band heterodyne receiver system spanning wavelength ranges 522–659 μm, 136–273 μm, 111.9–112.4 μm, 63.1–63.4 μm, and 56.1–56.4 μm with a resolving power of R= 1× 10^5 - 1× 10^7. The full technical details can be found in Walker et al., “The High Resolution Receiver (HiRX) for the Single Aperture Large Telescope for Universe Studies (SALTUS),” J. Astron. Telesc. Instrum. Syst. (this issue). This paper provides an overview of the promise of SALTUS for understanding star and planet formation, including molecular clouds, protostellar cores, protoplanetary disks, and debris disks. Accompanying papers in this issue describe the plans for guaranteed-time (GTO) and guest observing (Chin et al., “Single Aperture Large Telescope for Universe Studies (SALTUS): Probe Mission and Science Overview,” J. Astron. Telesc. Instrum.
Syst.), SALTUS’ contributions to High-Redshift Science (Spilker et al., “Distant Galaxy Observations”J. Astron. Telesc. Instrum. Syst.), Milky Way and nearby galaxies science (Levy et al., “Nearby Galaxy Observations” J. Astron. Telesc. Instrum. Syst.), and solar system observations (Anderson et al., “Solar System Science” J. Astron. Telesc. Instrum. Syst.). Additionally, some of SALTUS's key science cases build on the OASIS MIDEX-class mission concept, which used a similar large inflatable aperture for terahertz frequency observations <cit.>. §.§ Programmatic Motivation The star and planet formation science programs presented here address multiple high-priority science questions as identified by Astro2020 <cit.>, detailed below. * Question E-Q1c: How Common Is Planetary Migration, How Does It Affect the Rest of the Planetary System, and What Are the Observable Signatures? will provide the disk water measurements, including measurements of the water snowline location, needed to connect atmosphere compositions to the water distribution of planet-forming disks and thereby to connect JWST observations of exoplanet atmospheres to a formation time and location. (<ref>) * Question E-Q1d: How Does the Distribution of Dust and Small Bodies in Mature Systems Connect to the Current and Past Dynamical States Within Planetary Systems? SAFARI-Lite will determine the occurance of exo-Kuiper belts around the nearest 30 G and K stars known to host debris disks, characterizing the commonality of dust in mature planet systems (<ref>). * Question E-Q3a: How Are Potentially Habitable Environments Formed? will answer this question by observing [CII] at 157 and [OI] at 63 and 145 in debris disks, tracing the C/O ratio of material available for accretion onto terrestrial planets (<ref>). * Question E-Q3b: What Processes Influence the Habitability of Environments? and Question F-Q4b: What Is the Range of Physical Environments Available for Planet Formation? will determine the mass and temperature structure, as well as the abundance of CHONS bearing species, in roughly 1000 protoplanetary disks across evolutionary stages. (<ref>,<ref>,<ref>) § STAR AND PLANET FORMATION SCIENCE WITH SALTUS §.§ Protoplanetary Disk Mass One of the most fundamental properties of planet formation is the mass of a planet-forming disk, which determines the total amount of material available to forming planets and the mechanisms through which planets can form, e.g, through gravitational instability vs. via core accretion <cit.>. The main contributor to the disk mass is , which does not emit for the majority of disk regions since the molecule has no permanent dipole moment, with large energy spacings not well matched to the local temperatures. The ground-state transition is the quadrupole J=2-0 with an energy spacing of 510 K. Thus, exciting an molecule to the J=2 state requires high gas temperatures and emission originates only from the illuminated surface layers of the disk within a fraction of an au of the central star. Since most of the gas is at larger radii and is much colder, alternate tracers must be used to determine the total gas mass. The most commonly used gas mass tracers in protoplanetary disks are continuum emission from dust and emission from rotational transitions of CO. Each method relies on different problematic assumptions. Uncertainties in the dust grain optical properties and the grain size distribution lead to significant uncertainty in the derived dust mass from observed emission. 
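For orientation, the dust-mass step referred to above is usually the standard optically thin estimate M_dust = F_ν d^2 / (κ_ν B_ν(T_dust)), and the sketch below is only a minimal illustration of that relation and of why the assumed opacity and dust temperature dominate the error budget. The specific inputs (0.1 Jy at 230 GHz, 140 pc, κ_230 = 2.3 cm^2 g^-1, T_dust = 20 K) are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

# Physical constants (cgs)
h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16
JY = 1.0e-23          # erg s^-1 cm^-2 Hz^-1
PC = 3.086e18         # cm
M_SUN = 1.989e33      # g

def planck_nu(nu_hz, T_k):
    """Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu_hz**3 / c**2 / np.expm1(h * nu_hz / (k_B * T_k))

def dust_mass(flux_jy, dist_pc, nu_hz, kappa_cm2_g, T_dust_k):
    """Optically thin dust mass: M = F_nu d^2 / (kappa_nu B_nu(T))."""
    F = flux_jy * JY
    d = dist_pc * PC
    return F * d**2 / (kappa_cm2_g * planck_nu(nu_hz, T_dust_k))

# Illustrative (assumed) numbers: a 0.1 Jy disk at 140 pc observed at 230 GHz,
# with kappa_230 ~ 2.3 cm^2 per gram of dust and T_dust = 20 K.
m_d = dust_mass(0.1, 140.0, 230e9, 2.3, 20.0)
print(f"dust mass ~ {m_d / M_SUN:.1e} Msun")   # ~1.7e-4 Msun for these inputs
# Changing T_dust to 15 K or kappa by a factor of 2 shifts the answer by
# tens of percent to a factor of ~2, which is the uncertainty discussed above.
```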
Then, to convert from dust mass to gas mass, a gas-to-dust mass ratio must be assumed. This value is typically assumed to be 100, as has been measured in the interstellar medium (ISM). However, several factors can change this ratio in disks, including loss of gas due to disk winds and accretion onto the central star, which will decrease the gas-to-dust ratio, and growth of dust grains beyond cm sizes, at which point the dust emission is no longer observable. Additionally, assuming a constant gas-to-dust ratio across the disk is not appropriate since high spatial resolution observations at millimeter wavelengths demonstrate that the outer radius of the dust disk is often much smaller than the outer radius of the gas disk <cit.>. The CO abundance relative to H_2 in the ISM is well constrained to be 5×10^-5 to 2×10^-4 <cit.>. However, when converting from CO abundance to H_2 abundance in a protoplanetary disk, additional corrections must be made to account for the reduced abundance of CO relative to H_2 in the surface layer, where CO is photo-dissociated, and near the cold midplane, where CO is frozen out onto dust grains <cit.>. Additional chemical reactions in the gas and on dust grains can also destroy CO <cit.>. The resulting reduction in CO gas abundance, whatever the cause, varies not only across sources but also as a function of radius within a single disk <cit.>. Thus, there are large uncertainties when converting CO flux to total gas mass <cit.>. Given the myriad assumptions that go into each technique, it is not surprising that the two methods of determining disk mass rarely agree. Alternative mass probes, preferably requiring fewer assumptions, are needed to determine the true disk gas mass. One possibility is to use the disk rotation curve to constrain the enclosed mass <cit.>. However, because disks must always be less massive than the central star in order to remain gravitationally stable, the contribution of the disk to the rotation curve is small. This technique is only feasible for a small number of the most massive disks <cit.>. §.§.§ Tracing Mass with HD SALTUS will use HD to measure the gas mass in hundreds of disks, establishing the variation in this fundamental parameter across systems. Observations of the isotopologue HD are unique to the far-IR, with the ground-state 1-0 rotational transition at 112.07 μm (2.675 THz; SAFARI-Lite LW Band, HiRX-Band 3) and the 2-1 transition at 56.24 μm (5.331 THz; SAFARI-Lite MW Band, HiRX-Band 4b). HD is the main reservoir of deuterium and its abundance relative to H_2 will be close to the elemental D/H abundance; thus, HD can be used to trace disk mass while avoiding many of the limitations of other mass tracers. For example, HD emission is optically thin and not subject to chemical processing that can change abundances of other tracers relative to H_2 <cit.>. Observations of the HD 1-0 line alone can constrain the disk mass to within a factor of 2-10, depending on disk mass, while the additional observation of the HD 2-1 line decreases this uncertainty to no more than a factor of 3 <cit.>. HiRX is designed to observe both lines simultaneously. There is currently no observatory capable of detecting HD. Near the end of its lifetime, Herschel targeted HD in seven massive disk systems, resulting in three detections <cit.>, with HD-derived disk gas masses of 30-210 <cit.>. Crucially, these mass measurements revealed that both CO and gas are depleted in these disks relative to the ISM <cit.>.
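To make the HD argument concrete, the sketch below converts an HD 1-0 line flux into a gas mass under deliberately simple assumptions: optically thin LTE emission at a single characteristic temperature, HD/H_2 = 2×(D/H) with D/H = 1.5×10^-5, and an approximate Einstein A coefficient and rigid-rotor partition function quoted from standard literature values rather than from this paper. The example flux and distance are likewise assumptions; the point is only to show how strongly the inferred mass depends on the adopted temperature, which is why the full temperature structure is needed in practice.

```python
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16   # cgs
PC, M_SUN, M_H = 3.086e18, 1.989e33, 1.673e-24

# HD J=1-0 (112.07 um) constants -- approximate literature values (assumed here)
A_10   = 5.4e-8          # Einstein A [s^-1]
E_UP_K = 128.49          # upper-state energy / k [K], as quoted in the text
NU     = c / 112.07e-4   # line frequency [Hz]
G_UP   = 3               # 2J+1 for J=1
B_ROT  = 64.25           # rotational constant in K (E_J = B J(J+1))

def partition(T):
    """Rigid-rotor partition function, summed over the first 20 levels."""
    J = np.arange(0, 20)
    return np.sum((2 * J + 1) * np.exp(-B_ROT * J * (J + 1) / T))

def gas_mass(line_flux_cgs, dist_pc, T_gas, dh_ratio=1.5e-5, mass_per_h2=2.8):
    """Optically thin, single-temperature LTE estimate of the total gas mass."""
    d = dist_pc * PC
    n_upper = 4 * np.pi * d**2 * line_flux_cgs / (A_10 * h * NU)   # HD in J=1
    n_hd = n_upper * partition(T_gas) / (G_UP * np.exp(-E_UP_K / T_gas))
    n_h2 = n_hd / (2 * dh_ratio)            # HD/H2 ~ 2 x (D/H)
    return n_h2 * mass_per_h2 * M_H         # ~2.8 m_H per H2 once He is included

# Illustrative: an integrated HD 1-0 flux of 6e-18 W m^-2 (= 6e-15 erg s^-1 cm^-2)
# from a disk at 140 pc with a characteristic emitting temperature of 30 K.
print(f"M_gas ~ {gas_mass(6e-15, 140.0, 30.0) / M_SUN:.2f} Msun")   # ~0.03 Msun
# Raising the assumed temperature lowers the inferred mass sharply, because the
# J=1 population scales as exp(-128.49 K / T): hence the need for the
# temperature constraints described in the text.
```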
SAFARI-Lite will measure the total gas mass in hundreds of protoplanetary systems over its 5-year baseline mission, down to masses as low as 0.1 (Figure <ref>). When combined with observations of cold water vapor, this determines the amount of water removed from the outer disk and transformed into water ice in the planet-forming midplane <cit.>. Converting the HD detections into an accurate total gas mass requires knowledge of the disk temperature structure, as HD does not emit appreciably below 20 K. The design allows for full spectral coverage with SAFARI-Lite or simultaneous observations in the four HiRX bands. While integrating on the HD 1-0 and 2-1 lines in HiRX-3,4b, is able to observe multiple optically thick and CO lines in HiRX-1,2 spanning 55-1729 K in excitation energy, compared to 128.49 K for the HD J=1 excited state. These optically thick lines provide direct measurements of the gas temperature throughout the disk. The high spectral resolution of HiRX can then be used to map emission to different physical locations in the disk using a technique known as Doppler tomography or tomographic mapping. Because disk rotation follows a Keplerian velocity profile, the radius at which gas emission originates can be determined from the line profile. Thus, high spectral resolution observations of molecular lines in disks can be used to determine the radial location of the emission without having to spatially resolve the disk. As shown in Figure <ref>, the velocity offset for emission originating in the inner disk is of order several assuming a disk inclination of 45 degrees, while in the outer disk the velocity offset is much smaller. The velocity resolution (Δ v) of SALTUS HiRX is < 1 , sufficient to distinguish emission originating in the inner versus outer disk. Taking the expected HD fluxes and line-to-continuum values into account from Figure <ref> <cit.>, SAFARI-Lite measures the J=1-0 and 2-1 lines at the 5σ level in 1 hour for the limits provided in Figure <ref>, enabling reliable disk gas mass estimates. For a survey of disk mass across systems, which requires only the total HD flux, spectrally unresolved observations with SAFARI-Lite are able to quickly build a catalog of HD detections. The expected continuum flux from a 3-5  disk at 140 pc, where many young stars are found, is 0.02 Jy <cit.>. SNR of 300 requires a sensitivity of 66 μJy at 112, the wavelength of the HD 1-0 transition. Based on the modeled grating sensitivity of SAFARI-Lite (Roelfsema et al., this issue), this can be achieved in less than an hour on source. SAFARI-Lite’s greater sensitivity at 54 , the wavelength of the HD 2-1 transition, achieves SNR 300 in even less time. For a subset of the brightest disks, HiRX spectrally resolves the strong HD lines at a  1 velocity resolution to measure the line profile in detail and use Doppler tomography to constrain the disk structures. As an example, TW Hya has a peak flux of 0.49 Jy. HiRX Band 3 yields a 5σ detection at Δ v = 1 in 20 hours. We can expect to observe five targets per year in the tomographic mode if we allocate 100 hours per year to these observations. These deep HiRX observations of sources spanning several arcseconds on the sky will also provide constraints on the spatial extent of the line emission for these disks, important for validating the models used for interpretation of surveys. In total, will obtain the disk gas masses in hundreds of protoplanetary systems during its nominal five year mission without the need for ancillary data. 
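As a concrete illustration of the tomographic-mapping argument above, the projected Keplerian speed v_los = sqrt(G M_*/r) sin i already separates inner- from outer-disk emission at HiRX's <1 km/s resolution. The short sketch below assumes a solar-mass star and a 45-degree inclination purely for illustration; it is only meant to reproduce the order-of-magnitude velocity offsets quoted in the text.

```python
import numpy as np

G     = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
AU    = 1.496e11         # m

def v_los_kms(r_au, m_star_msun=1.0, incl_deg=45.0):
    """Line-of-sight Keplerian speed (km/s) at radius r for a given inclination."""
    v_kep = np.sqrt(G * m_star_msun * M_SUN / (r_au * AU))
    return v_kep * np.sin(np.radians(incl_deg)) / 1e3

for r in (1, 5, 30, 100):   # radii in au
    print(f"r = {r:>3d} au  ->  v_los ~ {v_los_kms(r):4.1f} km/s")
# r =   1 au  ->  v_los ~ 21.1 km/s
# r =   5 au  ->  v_los ~  9.4 km/s
# r =  30 au  ->  v_los ~  3.8 km/s
# r = 100 au  ->  v_los ~  2.1 km/s
# With <1 km/s spectral resolution, velocity channels in the line profile map
# onto radial zones without spatially resolving the disk.
```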
§.§ The Spatial Distribution of Water in Protoplanetary Disks SALTUS will be the first mission with the sensitivity to measure the distribution and physical properties of water in a large sample of protoplanetary disks. These measurements are key for understanding planet formation and how terrestrial planets acquire water. The instruments are designed to probe both the gas and the solid reservoirs and relate them to the characteristics of the central protostar (luminosity, spectral type) and of the planet-forming disk (evolutionary state, mass, structure, temperature). The large frequency ranges of HiRX and SAFARI-Lite provide access to many lines with a wide range of excitation energies, tracing the cold-to-warm water vapor in disks, addressing Decadal Questions E-Q1c, E-Q3b, and F-Q4b. The first part of this program focuses on water vapor. Herschel revealed tantalizing but tentative and limited evidence of water removal from the surface layers of outer disks <cit.>. SALTUS's large improvement in sensitivity relative to Herschel makes observations of water in disks routine and enables a complete survey of water in all protoplanetary disks within 200 pc. This large number of observations will allow users to conclusively identify trends between the distribution of water in disks and other properties, e.g., dust disk size and the presence of substructure <cit.>. Question E-Q1c from the 2020 Decadal asks: “How common is planetary migration, how does it affect the rest of the planetary system, and what are the observable signatures?” The composition of a planet's atmosphere is related to the composition of the disk where and when it accreted its material and can be used to determine if a planet could have formed at its current location or must have migrated. The chemical composition of the disk at a given location evolves over time due to both chemical and dynamical processes <cit.>. Current observational studies aim to use atmospheric C/O to differentiate between early and late migration of Hot Jupiters <cit.>. Models of how migration changes C/O in a planet's atmosphere make simplifying assumptions for C/O in the disk <cit.>. The main volatile oxygen reservoir in disks, water, is virtually unconstrained by observations. JWST is already significantly improving our understanding of water in the mid-IR <cit.>. However, as noted by Decadal Question E-Q1c, additional longer wavelength observations of cooler regions of the disk are needed to understand disk composition. HiRX will map the radial distribution of cold water vapor in hundreds of protoplanetary disks. These disks will span a wide range of stellar mass and mass accretion rate, disk dust mass, and disk radial extent, and span multiple star-forming regions, covering a variety of evolutionary stages <cit.>. HiRX will observe the cold water vapor (not probed by JWST) by targeting the ground-state ortho and para transitions (Figure <ref>), allowing users to collect statistics on the cold water abundance in disks across evolutionary stages. SALTUS will provide the disk water measurements needed to connect JWST observations of exoplanet atmospheres to a formation time and location. SALTUS will tomographically map the water vapor distribution toward a wide variety of disks, answering the question “Where is the water?” Mapping the location of water in protoplanetary disks is crucial for understanding the transport of water during planet formation <cit.>.
Current planet formation models predict that most planets form beyond the snowline (gas/ice giants and possibly smaller planets) and then experience radial migration and dynamical scattering <cit.>. Confirming this prediction requires observational constraints on the water snowline location as functions of e.g., disk mass, stellar mass, and evolutionary state, to compare to the orbital radii of exoplanets in mature systems. The ice snowline, the desorption front located at ∼150-170 K in the disk, controls the radial distribution of the C/O ratio in the gas and solid phase <cit.>, implying that the spectral characteristics of planets are linked to their formation location. Water on terrestrial planets which formed within the snow line (including Earth) is thought to have been at least partially delivered by comet and asteroid impacts originating from cold disk reservoirs beyond the snow line <cit.>. As the mass of solids is expected to be the largest near the snow line, the formation of giant planets, such as Jupiter, is generally linked to the location of the snow line in the solar nebula. Giant planet formation may be aided by the increased “stickiness” of ice grains relative to minerals, which greatly enhances the coagulation of small dust grains <cit.> – the first step in planet formation – in the colder regions of these disks. will probe the midplane water snowline location by observing multiple high upper-state energy (E_u ∼ 1000 K) water lines, which emit mostly from inside the midplane water snowline <cit.>. Using tomographic mapping, will determine if water is returning to the gas with the inward drift of icy dust grains, enriching the water content of the terrestrial planet-forming region. In contrast, JWST primarily probes higher energy emission lines, with upper-state energies of several thousand Kelvin <cit.>. Additionally, the dust continuum in the inner disk at mid-IR wavelengths is optically thin, such that the emission observable by JWST originates in the photosphere. The continuum is less optically thick at the sub-millimeter wavelengths observed by , allowing us to probe deeper in the disk. will make the first measurements of the disk midplane water snowline location in non-outbursting disks, an important landmark in the core accretion picture. enables us to assess the role of the water snowline in determining the architecture of planetary systems <cit.> and the extent to which processes (e.g., migration, dynamical scattering) alter exoplanetary orbital radii; this addresses Decadal Question E-Q1c. The second part of the program will target water ice directly. Water ice is the most abundant non-refractory solid-state component of planet-forming disks, locking up a major fraction of the elemental oxygen. The water ice distribution in protoplanetary disks is of fundamental importance for our understanding of planet formation and their characteristics. As a result of its simultaneous spectral coverage of the full 34-230 range and its high sensitivity, SAFARI-Lite is uniquely suited to study emission in the the diagnostic lattice modes of ices in protoplanetary disks (Figure <ref>), providing temperature, mass, and structure of the emitting ices. Previous far-IR space missions (Spitzer, Herschel, ISO) lacked the wavelength coverage or sensitivity for a systematic study of far-IR ices, especially in planet-forming disks. 
While the NIRSpec and MIRI instruments on JWST cover the near- and mid-IR region, home to ice fundamental modes, these shorter wavelengths require a very favorable viewing angle – almost edge-on – and cannot perform a systematic study of the role of ices in planet-forming disks. Further, because these features are seen in absorption, they provide only a lower limit on the absorbing column, as a photon's path as it is scattered through the disk is uncertain <cit.>. The far-IR features have the advantage of being seen in emission, and are therefore not subject to same constraints due to viewing angle and scattering. The large wavelength coverage and moderate spectral resolution of SAFARI-Lite are well matched to the expected profile variations in the lattice modes of ice, measuring the temperature history of the ice grains. This is linked to a physical location through models of disk temperature structure, constrained by the gas observations. The gas and solid reservoirs interact through sublimation and condensation as icy grains drift inwards from the cold outer disk to the warm inner disk and through turbulent cycling between the colder mid-plane and the warmer disk photosphere. will quantify the mass of the gaseous and ice reservoirs in a large sample of protostellar and protoplanetary sources, assess the interrelationship of these reservoirs, and connect them to physical characteristics of the stars and their disks and thereby address the importance of the physical processes that link them. §.§ Water in Prestellar Cores Prestellar cores are the gravitationally bound phase of star formation immediately prior to the protostar formation<cit.>, with cold (T<10 K), dense (n>10^5 cm^-3) central regions that are well shielded from the surrounding interstellar radiation field. During this phase, the initial chemical conditions are set for the disk and subsequent planet formation. The direct chemical inheritance from the prestellar phase to the protostellar disk has been established, e.g., reflected in the D/H ratio from ALMA observations of deuterated water <cit.>. Although most of the water in prestellar cores resides in the solid state on the dust grain icy surfaces <cit.>, photodesorption by UV photons can liberate water molecules into the gas phase at abundances that are typically <-9 with respect to <cit.>. Two main sources of UV photons exist: the surrounding interstellar radiation field is the dominant heating component of dust grains <cit.> and a low intensity UV radiation field from excitation due to collisions with electrons that come from cosmic ray ionizations of and He <cit.>. The 1_10 - 1_01 ground state rotational transition of ortho- at 538.2 (557 GHz; HiRX 1) can be observed in absorption against the continuum of the prestellar core <cit.>. The line can also be seen in emission if the central density of the prestellar cores is >10^7 cm^-3, although only a few prestellar cores are known that have this extreme central density <cit.>. The gas phase water in the outer part of the core at low A_V has a photodesorption rate that depends on the strength of the interstellar radiation field (G_0), and a constraint on G_0 is needed to determine the dust temperature and the gas temperature profiles in the outer part of prestellar cores <cit.>. 
Accurate temperature profiles are crucial for radiative transfer modeling of molecular emission and absorption observed toward prestellar cores, and water vapor observations of prestellar cores will play an important role in constraining the temperature profile in the outer part of the cores. §.§ Astrochemistry: CHONS from cores to disks Hot cores are regions of hot molecular line emission within massive star-forming regions, typically characterized by high temperatures (100s of K) and high densities (∼10^7 cm^-3) <cit.>. Originally identified from the detection of hot NH_3 towards Orion-KL <cit.>, hot cores were subsequently found to host an incredibly rich gas-phase organic chemistry <cit.>. Ice mantles are the main sites of astrochemical complex organic molecule formation, and ice sublimation is the source of the chemical complexity detected in hot cores <cit.>. Observing molecular line emission from hot cores provides powerful constraints on their physical and chemical conditions <cit.>. SALTUS's high sensitivity at far-IR wavelengths will open a new window into studying complex organic molecules in hot cores. Figure <ref> illustrates how a massive star-forming region can appear line-poor with Herschel but harbor hundreds of spectral lines when observed with a higher sensitivity and resolution observatory (ALMA Band 10). With SALTUS, we similarly expect higher line densities of organics compared to Herschel. While the sensitivity increase with SALTUS will be more modest than with ALMA, we note that ALMA Band 9 and 10 observations require exceptional weather conditions and do not extend to wavelengths shortward of 315 μm, whereas SALTUS will provide access to wavelengths as short as 34 μm. While many complex organics can be detected at longer wavelengths, there are several advantages to obtaining far-IR observations. First, the lines covered by SALTUS typically probe higher upper-state energies than millimeter-wavelength lines, which can better constrain excitation conditions. This is especially important for high-mass hot cores, in which organics often have excitation temperatures of a few hundred K <cit.>. Constraints on organic molecule excitation temperatures are required to interpret the physical conditions of the emitting regions, as well as the chemical relationships between different classes of molecules; see also <cit.>. The early Class 0 and I stages of low-mass protostellar evolution, characterized by an infalling envelope of gas and dust, are often accompanied by an outflow, which promotes accretion onto the protostar by carrying away angular momentum. Encounters between the outflow and the ambient envelope material produce shocks, which can alter the local chemistry through heating and grain sputtering. In some “chemically rich” outflows, the gas-phase abundances of molecules associated with the ice phase (H_2CO, CH_3OH, CH_3OCHO) are enhanced due to shock-induced ice sputtering <cit.>. Thus, these chemically rich outflows offer a valuable window to probe the organic composition of interstellar ices. Moreover, studies of outflow shock physics and chemistry inform our understanding of the same processes that take place on smaller, disk-forming scales within the protostellar core. The archetypical chemically rich outflow shock, L1157-B1, was observed as part of the Herschel CHESS survey <cit.>. The 471-540 μm spectrum contained emission lines from high-excitation transitions of grain chemistry tracers like , H_2CO, and CH_3OH <cit.>.
An excitation analysis revealed that these lines emit with temperatures ≥ 200 K, intermediate between the cold emission observed by longer-wavelength transitions and the very hot gas traced by emission. Thus, observations of higher-excitation organics towards outflow shocks can help link these different emission regimes and disentangle how the shock chemistry and physics progresses (Figure <ref>). These insights can in turn be used to refine models of shock astrochemistry, which are needed to connect observed gas-phase abundances to the underlying grain compositions <cit.>. Lastly, chemically rich outflow shocks are the only low-mass star forming regions where phosphorus carriers have been detected <cit.>. In shock chemistry models, PH_3 and smaller P-bearing hydrides are predicted to be at least as abundant as the P carriers PN and PO <cit.>. PH_3 has only one strong transition observable longward of 600 , and remains undetected in star-forming regions. SALTUS's broad spectral coverage measurements allow for a more complete inventory of the volatile phosphorus carriers in star-forming regions. §.§.§ Astrochemistry: CHONS in disks While molecules observable at millimeter wavelengths have been extensively studied in disks, there are almost no constraints on the inventories of light hydrides in disks, many of which are observable only at submillimeter/far-IR wavelengths. Perhaps the most exciting observations of light hydrides enabled by are observations of . Indeed, the N budget in disks is poorly constrained given that the dominant N carrier, , cannot be directly observed in the gas. Ice spectroscopy towards low-mass protostars, the evolutionary progenitors of disks, has revealed that is an important N carrier in the ice, with relative abundances of ∼ 5% with respect to compared to <1% in nitriles, or XCN <cit.>. While nitriles are commonly detected towards disks <cit.>, has only been detected towards two disks. The 524.1 transition of o-NH_3 was first detected by Herschel towards the nearby TW Hya disk <cit.>, and was also detected towards the embedded (Class I) disk GV Tau N at mid-IR wavelengths tracing hot emission from the inner few au <cit.>. HiRX Bands 1 and 2 will cover multiple strong transitions tracing cool (upper state energies 27-170 K). observations of multiple lines will allow for the first excitation analysis in the outer disk. Additionally, SALTUS's high spectral resolution will enable a kinematic analysis of the line profiles in sources with high SNR, providing constraints on the spatial origin of the emission and the location of the snowline. Auxiliary constraints on the disk structures, provided by observations of CO isotopologues, HD, and , will permit robust abundance retrievals. The / abundance ratio is of particular interest, as it can be directly compared with the ratio measured in comets to provide insights into how N is inherited by solar system bodies. Another promising avenue for disk science with is S-bearing hydrides. Sulfur is commonly very depleted from the gas in dense star-forming regions, though several S carriers (CS, SO, H_2S, H_2CS) have been detected in disks <cit.>. H_2S was only recently detected in Class II disks: first towards GG Tau A <cit.>, followed by UY Aur and AB Aur<cit.>. Towards other well-known disks, deep searches for H_2S have only produced upper limits <cit.>. 
To date, only the 1_10-1_01 line at 168.73 GHz has been targeted, which is readily observable by ground-based telescopes but also intrinsically weak compared to the higher-frequency lines covered by SALTUS. The H_2S lines at 160.7 and 233.9 μm appear particularly promising for detection in disks with SALTUS, especially if the emission originates in a somewhat warm environment. In addition to the ice phonon modes discussed above, SAFARI-Lite's broadband coverage spanning 30 to 230 μm will cover unique spectral signatures from a large number of volatile ice species, most notably , O_2, , CO, , , H_2S, , and HCN. The uniqueness of the lattice modes enables us to clearly distinguish between the amorphous and crystalline ice phases, opening up a window to phase transition temperatures, which ultimately informs on the thermal evolution of the ice. Ice lattice modes are also the best viable way to determine the presence of homo-nuclear molecules such as O_2 and N_2, whose fundamental modes are IR inactive. The possibility to quantify the abundance of N_2 ice in protoplanetary disks is particularly interesting, as N_2 is likely a major carrier of nitrogen <cit.>. §.§ D/H Ratios as a Probe of Interstellar Heritage Water is a key ingredient in the emergence of life and is, therefore, a key aspect in the assessment of the habitability of (exo)planets. Yet, the origin and delivery of water to habitable planets and notably Earth remains unclear. Terrestrial water could have been delivered by water-rich asteroids driven by the migration of Jupiter in the solar nebula and/or by the late heavy bombardment during a solar system-wide rearrangement <cit.>. Outgassing from the deep mantle likely also contributed to Earth's surface water <cit.>. The enhanced D/H ratio in standard mean ocean water (SMOW) of 1.5×10^-4 <cit.> relative to the interstellar elemental D/H ratio [1.5×10^-5; <cit.>] provides support for this view, as deuterium fractionation is a chemical signature indicating that a fraction of water formed under cold conditions, likely at the surface of interstellar grains (Figure <ref>) <cit.>. This anomaly would reflect the effects of chemistry at low temperatures in cold prestellar cores where the small zero-point energy difference between D- and H-bearing species can create large deuterium fractionations <cit.>. However, the observed D/H ratio in deeply embedded protostars (hot corinos) – tracing the inherited water content – is higher than the D/H ratio in Earth's water (VSMOW) by factors of ∼2 to 6 (purple symbols in Figure <ref>). Hence, chemical processing must have occurred in warm gas, reducing the deuterium fractionation. Likely, this reprocessing of the water occurred in the warm surface layers of protoplanetary disks – on a disk-wide scale – where radiation from the young star photo-desorbs water from preexisting ices and reforms water through gas-phase reactions. The variation in measured D/H ratios for various astronomical objects provides important clues to the formation conditions at different locations in nascent planetary systems. SALTUS will help to unravel the following questions: What is the HDO/H_2O ratio in protoplanetary disks and how does that depend on the characteristics of the protostar, the conditions in the protoplanetary disk, and the molecular core environment? What processes play a role in the water cycle of protoplanetary disks? SALTUS will detect deuterated isotopologues of complex organics in hot corino and non-hot corino sources.
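For orientation, the numbers quoted above connect through simple arithmetic: water has two equivalent H sites, so a measured HDO/H_2O ratio corresponds to a D/H of half that value, and the VSMOW value sits roughly an order of magnitude above the interstellar elemental ratio. The sketch below only restates this with commonly quoted reference values; the example HDO/H_2O ratio is an illustrative assumption.

```python
# Commonly quoted reference values (approximate, for orientation only)
D_H_VSMOW = 1.56e-4        # D/H in Earth's ocean water (VSMOW)
D_H_ISM   = 1.5e-5         # local interstellar elemental D/H

def d_to_h_from_hdo(hdo_over_h2o):
    """Water D/H from a measured HDO/H2O ratio (two equivalent H sites)."""
    return 0.5 * hdo_over_h2o

print(f"VSMOW / ISM enhancement ~ {D_H_VSMOW / D_H_ISM:.0f}x")        # ~10x
# An assumed hot-corino HDO/H2O of ~1e-3 corresponds to D/H ~ 5e-4, i.e. a few
# times the VSMOW value -- the factor-of-2-to-6 excess discussed in the text.
print(f"D/H for HDO/H2O = 1e-3: {d_to_h_from_hdo(1e-3):.1e}")
```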
While a 1-hour HiRX integration will provide >5σ detections of singly deuterated CH_3OCH_3, doubly deuterated CH_3OCH_3, and singly deuterated C_2H_5OH in hot corinos, a 10-minute integration will yield a robust detection of the strongest lines of deuterated molecules in protostellar envelopes. The D/H ratio in protoplanetary disks has been probed primarily through trace species such as DCO^+ and DCN <cit.>. ALMA observations have constrained the D/H ratio in water for one disk, V883 Ori, based on detections of HDO and H_2^18O at 200 GHz <cit.>. In this system, the D/H ratio was found to be similar to that for water in the ISM. However, V883 Ori is an exceptionally warm disk currently undergoing an accretion burst, thus increasing the observable water column. The main isotopologues of water are difficult to observe from ground-based facilities, even at high altitude <cit.>. Such observatories are often limited to the much weaker H_2^18O lines, which are still impacted by the low atmospheric transmission at the relevant frequencies <cit.>. We identify the strongest transitions of HDO using the physical/chemical disk model of <cit.> (Figure <ref>). This model of the nearby disk TW Hya reproduces the resolved ALMA observations of multiple CO transitions as well as the total HD 1-0 flux from Herschel and the upper limits on the HDO 225 GHz line from the Submillimeter Array (SMA) <cit.>. The strongest HDO transitions in protoplanetary disks are at 71.4 μm and are inaccessible from the ground. These observations will provide the link between water in the ISM and water in planetary systems, providing a definitive answer to whether water on terrestrial planets is commonly inherited from the ISM. §.§ Debris Disks The debris disk phase follows the protoplanetary disk phase. Debris disks are gas-poor, with broad disks or rings of second-generation dust thought to be influenced by the presence of planets <cit.>. Debris disk observations allow us to study populations of small bodies around other stars and infer the presence of planets that otherwise evade detection <cit.>. They also provide insight into the composition of solid bodies in other planetary systems <cit.>. Debris disks may play a role in planet formation because of the gas (with total masses up to ∼1 M_⊕) that is now observed in these disks <cit.>. Indeed, this gas could spread and accrete onto planets, thus changing their initial atmospheric compositions between 10-100 Myr <cit.>. This secondary gas component may also be important to understand our own Solar System and find out whether the Kuiper belt can still release gas today or whether it may have contained gas in its youth <cit.>. If this gas were accreted onto the giant planets, it may explain, e.g., the high metallicity of Uranus and Neptune <cit.>. SALTUS has the potential to observe Kuiper Belt analogues, which is necessary to explore these questions. §.§.§ Kuiper Belt Analogues Debris disks with the same intrinsic luminosity as the Solar System's Kuiper Belt have yet to be observed <cit.>. These exo-Kuiper Belts have typical temperatures of ∼50 K, corresponding to a black-body emission peak in the far-IR. Updating sensitivity estimates from the original SPICA SAFARI <cit.> to SALTUS's SAFARI-Lite, SALTUS will reach the 5σ sensitivity threshold to detect exo-Kuiper belts around the nearest 30 G and K stars with known debris disks in 1 hour of integration.
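The statement above that ∼50 K belts peak in the far-IR follows directly from Wien's displacement law; the one-line check below (using the standard 2898 μm K constant) places the peak well inside SAFARI-Lite's wavelength coverage.

```python
WIEN_UM_K = 2898.0   # Wien displacement constant in micron-Kelvin

def peak_wavelength_um(T_k):
    """Blackbody peak wavelength (micron) for temperature T."""
    return WIEN_UM_K / T_k

for T in (30, 50, 80):
    print(f"T = {T} K  ->  lambda_peak ~ {peak_wavelength_um(T):.0f} um")
# T = 30 K  ->  lambda_peak ~ 97 um
# T = 50 K  ->  lambda_peak ~ 58 um
# T = 80 K  ->  lambda_peak ~ 36 um
# All fall within SAFARI-Lite's 34-230 um band, which is why cold exo-Kuiper
# belts are a natural far-IR target.
```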
can determine the frequency of exo-Kuiper Belts, characterizing how common dust is in mature planetary systems, thus addressing Decadal Question E-Q1d. Additionally, the angular resolution of should enable mapping of the dust in some of these systems, thereby serving as a Rosetta Stone between the far-IR and longer wavelength observations of ALMA. It is possible that the massive exo-Kuiper belts detected to date prevent the development of life in the habitable zone due to an excessively high bombardment rate. In this case, targeting systems with belts similar to ours (i.e. with low masses) could help optimize the search for life on another planet. §.§.§ Gas in Debris Disks can observe the gas content in debris disks, which are expected to contain detectable levels of carbon and oxygen; based on the gas production model developed by <cit.>. These observations, focusing on ionized carbon and neutral oxygen, will complement those made by ALMA, which targets CO and neutral carbon <cit.>. By doing so, can gather valuable information about the carbon ionization fraction, a crucial factor in understanding the dynamics of gas, including determining the dominant mechanism of angular momentum transport. Possibilities include the magneto-rotational instability (MRI) <cit.>, or MHD winds or even some hydrodynamic instabilities such as vertical shear instability (VSI) or Rossby Wave Instability (RWI) <cit.>. These different mechanisms operate at different ionization fractions, densities, and depend on the magnetic field configuration as well. Only new data for a large variety of systems will allow us to pinpoint the dominant mechanism. One can also use the spatial information (extracted from high spectral resolution) to rule out some mechanisms, as for instance, MHD winds are expected to only produce viscous expansion inwards. In contrast, turbulence will also allow the gas to extend further outwards than its production source. The low surface density in debris disks allows penetration of a high photon flux from the central star, converting molecules like CO, , and into ionized atomic carbon and oxygen via photodissociation and photoionization. By targeting [CII] and [OI] in the far-IR, users gain insights into the initial species released from planetesimals by examining the C/O ratio, e.g., to investigate whether CO, , or is released, which could have strong connections with TNOs in the Solar System for which we can now probe the composition with the JWST <cit.>. These observations provide a comprehensive understanding of the gas disk composition at different radii from the central star. The accretion of carbon and oxygen by young planets may play a pivotal role in the formation of the building blocks of life <cit.> or affect the temperature through greenhouse effects, thus influencing their habitability. Currently, debris disk studies have been mainly with A stars <cit.>. has the sensitivity to detect [CII] and [OI] in the more common FGK stars. These observations determine the C/O ratio across spectral type during the late stages of planet formation, when volatile gasses are delivered to terrestrial planets, and address Decadal question E-Q3a “How are potentially habitable environments formed?” can particularly look at this question around solar mass stars. §.§ References 100 Walker2021 C. K. Walker, G. Chin, S. 
Aalto, et al., “Orbiting Astronomical Satellite for Investigating Stellar Systems (OASIS): following the water trail from the interstellar medium to oceans,” in Astronomical Optics: Design, Manufacture, and Test of Space and Ground Systems III, T. B. Hull, D. Kim, P. Hallibert, et al., Eds., Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 11820, 118200O (2021). roadmap C. Kouveliotou, E. Agol, N. Batalha, et al., “Enduring quests-daring visions (nasa astrophysics in the next three decades),” (2014). Armitage2020 P. J. Armitage, Astrophysics of planet formation, Second Edition (2020). Bergin2013 E. A. Bergin, L. I. Cleeves, U. Gorti, et al., “An old disk still capable of forming a planetary system,” Nature 493, 644–646 (2013). McClure2016 M. K. McClure, E. A. Bergin, L. I. Cleeves, et al., “Mass Measurements in Protoplanetary Disks from Hydrogen Deuteride,” Astrophys. J. 831, 167 (2016). Du2015 F. Du, E. A. Bergin, and M. R. Hogerheijde, “Volatile depletion in the TW Hydrae disk atmosphere,” Astrophys. J. letters 807, L32 (2015). Miotello14 A. Miotello, S. Bruderer, and E. F. van Dishoeck, “Protoplanetary disk masses from CO isotopologue line emission,” Astron. Astrophys. 572, A96 (2014). Schwarz2018 K. R. Schwarz, E. A. Bergin, L. I. Cleeves, et al., “Unlocking CO Depletion in Protoplanetary Disks. I. The Warm Molecular Layer,” Astrophys. J. 856, 85 (2018). Zhang2019 K. Zhang, E. A. Bergin, K. Schwarz, et al., “Systematic Variations of CO Gas Abundance with Radius in Gas-rich Protoplanetary Disks,” Astrophys. J. 883, 98 (2019). Miotello2023 A. Miotello, I. Kamp, T. Birnstiel, et al., “Setting the Stage for Planet Formation: Measurements and Implications of the Fundamental Disk Properties,” in Protostars and Planets VII, S. Inutsuka, Y. Aikawa, T. Muto, et al., Eds., Astronomical Society of the Pacific Conference Series 534, 501 (2023). Veronesi2021 B. Veronesi, T. Paneque-Carreño, G. Lodato, et al., “A Dynamical Measurement of the Disk Mass in Elias 227,” Astrophys. J. letters 914, L27 (2021). Andrews2024 S. M. Andrews, R. Teague, C. P. Wirth, et al., “On Kinematic Measurements of Self-Gravity in Protoplanetary Disks,” arXiv e-prints , arXiv:2405.19574 (2024). Bergin17 E. A. Bergin and J. P. Williams, The Determination of Protoplanetary Disk Masses, vol. 445, Springer (2017). Trap2017 L. Trapman, A. Miotello, M. Kama, et al., “Far-infrared HD emission as a measure of protoplanetary disk mass,” Astron. Astrophys. 605, A69 (2017). Kamp2021 I. Kamp, M. Honda, H. Nomura, et al., “The formation of planetary systems with SPICA,” 38, e055 (2021). Calahan2021 J. K. Calahan, E. Bergin, K. Zhang, et al., “The TW Hya Rosetta Stone Project. III. Resolving the Gaseous Thermal Profile of the Disk,” Astrophys. J. 908, 8 (2021). Schwarz2016 K. R. Schwarz, E. A. Bergin, L. I. Cleeves, et al., “The Radial Distribution of H_2 and CO in TW Hya as Revealed by Resolved ALMA Observations of CO Isotopologues,” Astrophys. J. 823, 91 (2016). Schwarz2023 K. R. Schwarz, J. Najita, J. Bergner, et al., “Protoplanetary Disk Science with the Orbiting Astronomical Satellite Investigating Stellar Systems (OASIS) Observatory,” Space Sci. Rev. 219, 12 (2023). Hogerheijde11 M. R. Hogerheijde, E. A. Bergin, C. Brinch, et al., “Detection of the Water Reservoir in a Forming Planetary System,” Science 334, 338 (2011). Du2017 F. Du, E. A. Bergin, M. Hogerheijde, et al., “Survey of Cold Water Lines in Protoplanetary Disks: Indications of Systematic Volatile Depletion,” Astrophys. J. 842, 98 (2017). 
Banzatti2020 A. Banzatti, I. Pascucci, A. D. Bosman, et al., “Hints for Icy Pebble Migration Feeding an Oxygen-rich Chemistry in the Inner Planet-forming Region of Disks,” Astrophys. J. 903, 124 (2020). Molliere22 P. Molliere, T. Molyarova, B. Bitsch, et al., “Interpreting the Atmospheric Composition of Exoplanets: Sensitivity to Planet Formation Assumptions,” Astrophys. J. 934, 74 (2022). Oberg2011 K. I. Öberg, A. C. A. Boogert, K. M. Pontoppidan, et al., “The Spitzer Ice Legacy: Ice Evolution from Cores to Protostars,” Astrophys. J. 740, 109 (2011). Perotti2023 G. Perotti, V. Christiaens, T. Henning, et al., “Water in the terrestrial planet-forming zone of the PDS 70 disk,” Nature 620, 516–520 (2023). Gasman2023 D. Gasman, E. F. van Dishoeck, S. L. Grant, et al., “MINDS. Abundant water and varying C/O across the disk of Sz 98 as seen by JWST/MIRI,” Astron. Astrophys. 679, A117 (2023). Sturm2023d J. A. Sturm, M. K. McClure, T. L. Beck, et al., “A JWST inventory of protoplanetary disk ices. The edge-on protoplanetary disk HH 48 NE, seen with the Ice Age ERS program,” Astron. Astrophys. 679, A138 (2023). Banzatti2023b A. Banzatti, K. M. Pontoppidan, J. S. Carr, et al., “JWST Reveals Excess Cool Water near the Snow Line in Compact Disks, Consistent with Pebble Drift,” Astrophys. J. letters 957, L22 (2023). Manara23 C. F. Manara, M. Ansdell, G. P. Rosotti, et al., “Demographics of Young Stars and their Protoplanetary Disks: Lessons Learned on Disk Evolution and its Connection to Planet Formation,” in Protostars and Planets VII, S. Inutsuka, Y. Aikawa, T. Muto, et al., Eds., Astronomical Society of the Pacific Conference Series 534, 539 (2023). Lin1996 D. N. C. Lin, P. Bodenheimer, and D. C. Richardson, “Orbital migration of the planetary companion of 51 Pegasi to its present location,” Nature 380, 606–607 (1996). Raymond2017 S. N. Raymond and A. Izidoro, “Origin of water in the inner Solar System: Planetesimals scattered inward during Jupiter and Saturn's rapid gas accretion,” Icarus 297, 134–148 (2017). Orme2009 C. W. Ormel, D. Paszun, C. Dominik, et al., “Dust coagulation and fragmentation in molecular clouds. I. How collisions between dust aggregates alter the dust size distribution,” Astron. Astrophys. 502, 845–869 (2009). Notsu2016 S. Notsu, H. Nomura, D. Ishimoto, et al., “Candidate Water Vapor Lines to Locate the H2O Snowline through High-dispersion Spectroscopic Observations. I. The Case of a T Tauri Star,” Astrophys. J. 827, 113 (2016). Kennedy2008 G. M. Kennedy and S. J. Kenyon, “Planet Formation around Stars of Various Masses: The Snow Line and the Frequency of Giant Planets,” Astrophys. J. 673, 502–512 (2008). Fernandes2019 R. B. Fernandes, G. D. Mulders, I. Pascucci, et al., “Hints for a Turnover at the Snow Line in the Giant Planet Occurrence Rate,” Astrophys. J. 874, 81 (2019). Sturm2023c J. A. Sturm, M. K. McClure, J. B. Bergner, et al., “The edge-on protoplanetary disk HH 48 NE. II. Modeling ices and silicates,” Astron. Astrophys. 677, A18 (2023). Min16 M. Min, J. Bouwman, C. Dominik, et al., “The abundance and thermal history of water ice in the disk surrounding HD 142527 from the DIGIT Herschel Key Program,” Astron. Astrophys. 593, A11 (2016). Bergin2007 E. A. Bergin and M. Tafalla, “Cold Dark Clouds: The Initial Conditions for Star Formation,” Annual Review of Astron and Astrophys 45, 339–396 (2007). Andre2014 P. André, J. Di Francesco, D. 
Ward-Thompson, et al., “From Filamentary Networks to Dense Cores in Molecular Clouds: Toward a New Paradigm for Star Formation,” in Protostars and Planets VI, H. Beuther, R. S. Klessen, C. P. Dullemond, et al., Eds., 27–51 (2014). Pineda2023 J. E. Pineda, D. Arzoumanian, P. Andre, et al., “From Bubbles and Filaments to Cores and Disks: Gas Gathering and Growth of Structure Leading to the Formation of Stellar Systems,” in Protostars and Planets VII, S. Inutsuka, Y. Aikawa, T. Muto, et al., Eds., Astronomical Society of the Pacific Conference Series 534, 233 (2023). Jensen2021 S. S. Jensen, J. K. Jørgensen, K. Furuya, et al., “Modeling chemistry during star formation: water deuteration in dynamic star-forming regions,” Astron. Astrophys. 649, A66 (2021). Bergin2002 E. A. Bergin and R. L. Snell, “Sensitive Limits on the Water Abundance in Cold Low-Mass Molecular Cores,” Astrophys. J. letters 581, L105–L108 (2002). vanDishoeck2021 E. F. van Dishoeck, L. E. Kristensen, J. C. Mottram, et al., “Water in star-forming regions: physics and chemistry from clouds to disks as probed by Herschel spectroscopy,” Astron. Astrophys. 648, A24 (2021). Evans2001 I. Evans, Neal J., J. M. C. Rawlings, Y. L. Shirley, et al., “Tracing the Mass during Low-Mass Star Formation. II. Modeling the Submillimeter Emission from Preprotostellar Cores,” Astrophys. J. 557, 193–208 (2001). Prasad1983 S. S. Prasad and S. P. Tarafdar, “UV radiation field inside dense clouds - Its possible existence and chemical implications,” Astrophys. J. 267, 603–609 (1983). Caselli2012 P. Caselli, E. Keto, E. A. Bergin, et al., “First Detection of Water Vapor in a Pre-stellar Core,” Astrophys. J. letters 759, L37 (2012). Young2004 K. E. Young, J.-E. Lee, I. Evans, Neal J., et al., “Probing Pre-Protostellar Cores with Formaldehyde,” Astrophys. J. 614, 252–266 (2004). vanderTak2000 F. F. S. van der Tak, E. F. van Dishoeck, and P. Caselli, “Abundance profiles of CH_3OH and H_2CO toward massive young stars as tests of gas-grain chemical models,” Astron. Astrophys. 361, 327–339 (2000). Morris1980 M. Morris, P. Palmer, and B. Zuckerman, “Hot ammonia in Orion,” Astrophys. J. 237, 1–8 (1980). Blake1987 G. A. Blake, E. C. Sutton, C. R. Masson, et al., “Molecular Abundances in OMC-1: The Chemical Composition of Interstellar Molecular Clouds and the Influence of Massive Star Formation,” Astrophys. J. 315, 621 (1987). Garrod2006 R. T. Garrod and E. Herbst, “Formation of methyl formate and other organic species in the warm-up phase of hot molecular cores,” Astron. Astrophys. 457, 927–936 (2006). Herbst2009 E. Herbst and E. F. van Dishoeck, “Complex Organic Interstellar Molecules,” Annual Review of Astron and Astrophys 47, 427–480 (2009). Garrod2013 R. T. Garrod and S. L. Widicus Weaver, “Simulations of Hot-Core Chemistry,” Chemical Reviews 113, 8939–8960 (2013). Crockett2014b N. R. Crockett, E. A. Bergin, J. L. Neill, et al., “Herschel Observations of Extraordinary Sources: Analysis of the HIFI 1.2 THz Wide Spectral Survey toward Orion KL. I. Methods,” Astrophys. J. 787, 112 (2014). Neill2014 J. L. Neill, E. A. Bergin, D. C. Lis, et al., “Herschel Observations of Extraordinary Sources: Analysis of the Full Herschel/HIFI Molecular Line Survey of Sagittarius B2(N),” Astrophys. J. 789, 8 (2014). Bergner2022 J. B. Bergner, Y. L. Shirley, J. K. Jørgensen, et al., “Astrochemistry with the Orbiting Astronomical Satellite for Investigating Stellar Systems (OASIS),” Frontiers in Astronomy and Space Sciences 8, 246 (2022). Garay1998 G. Garay, I. Köhnenkamp, T. L. 
Bourke, et al., “Molecular Abundance Enhancements in the Highly Collimated Bipolar Outflow BHR 71,” Astrophys. J. 509, 768–784 (1998). Codella1999 C. Codella and R. Bachiller, “Molecular outflows in intermediate-mass star forming regions: the case of CB3,” Astron. Astrophys. 350, 659–671 (1999). Requena2007 M. A. Requena-Torres, N. Marcelino, I. Jiménez-Serra, et al., “Organic Chemistry in the Dark Clouds L1448 and L183: A Unique Grain Mantle Composition,” Astrophys. J. letters 655, L37–L40 (2007). Arce2008 H. G. Arce, J. Santiago-García, J. K. Jørgensen, et al., “Complex Molecules in the L1157 Molecular Outflow,” Astrophys. J. letters 681, L21 (2008). McGuire2018 B. A. McGuire, “2018 Census of Interstellar, Circumstellar, Extragalactic, Protoplanetary Disk, and Exoplanetary Molecules,” Astrophys. J. Supp. 239, 17 (2018). Ceccarelli2010 C. Ceccarelli, A. Bacmann, A. Boogert, et al., “Herschel spectral surveys of star-forming regions. Overview of the 555-636 GHz range,” Astron. Astrophys. 521, L22 (2010). Codella2010 C. Codella, B. Lefloch, C. Ceccarelli, et al., “The CHESS spectral survey of star forming regions: Peering into the protostellar shock L1157-B1. I. Shock chemical complexity,” Astron. Astrophys. 518, L112 (2010). Burkhardt2019 A. M. Burkhardt, C. N. Shingledecker, R. Le Gal, et al., “Modeling C-shock Chemistry in Isolated Molecular Outflows,” Astrophys. J. 881, 32 (2019). Tielens21 A. Tielens, Molecular Astrophysics (2021). Yamaguchi2011 T. Yamaguchi, S. Takano, N. Sakai, et al., “Detection of Phosphorus Nitride in the Lynds 1157 B1 Shocked Region,” pasj 63, L37–L41 (2011). Bergner2019c J. B. Bergner, K. I. Öberg, S. Walker, et al., “Detection of Phosphorus-bearing Molecules toward a Solar-type Protostar,” Astrophys. J. letters 884, L36 (2019). Jimenez2018 I. Jiménez-Serra, S. Viti, D. Quénard, et al., “The Chemistry of Phosphorus-bearing Molecules under Energetic Phenomena,” Astrophys. J. 862, 128 (2018). Oberg2011a K. I. Öberg, A. C. A. Boogert, K. M. Pontoppidan, et al., “The Spitzer Ice Legacy: Ice Evolution from Cores to Protostars,” Astrophys. J 740, 109 (2011). Dutrey1997 A. Dutrey, S. Guilloteau, and M. Guelin, “Chemistry of protosolar-like nebulae: The molecular content of the DM Tau and GG Tau disks.,” Astron. Astrophys. 317, L55–L58 (1997). Oberg2015 K. I. Öberg, V. V. Guzmán, K. Furuya, et al., “The comet-like composition of a protoplanetary disk as revealed by complex cyanides,” Nature 520, 198–201 (2015). Guzman2017 V. V. Guzmán, K. I. Öberg, J. Huang, et al., “Nitrogen Fractionation in Protoplanetary Disks from the H^13CN/HC^15N Ratio,” Astrophys. J. 836, 30 (2017). Bergner2019b J. B. Bergner, K. I. Öberg, E. A. Bergin, et al., “A Survey of C_2H, HCN, and C^18O in Protoplanetary Disks,” Astrophys. J. 876, 25 (2019). vanTerwisga2019b S. E. van Terwisga, E. F. van Dishoeck, P. Cazzoletti, et al., “The ALMA Lupus protoplanetary disk survey: evidence for compact gas disks and molecular rings from CN,” Astron. Astrophys. 623, A150 (2019). Salinas2016 V. N. Salinas, M. R. Hogerheijde, E. A. Bergin, et al., “First detection of gas-phase ammonia in a planet-forming disk. NH_3, N_2H^+, and H_2O in the disk around TW Hydrae,” Astron. Astrophys. 591, A122 (2016). Najita2021 J. R. Najita, J. S. Carr, S. D. Brittain, et al., “High-resolution Mid-infrared Spectroscopy of GV Tau N: Surface Accretion and Detection of NH_3 in a Young Protoplanetary Disk,” Astrophys. J. 908, 171 (2021). Dutrey2011 A. Dutrey, V. Wakelam, Y. Boehler, et al., “Chemistry in disks. V. 
§.§ Disclosures The authors have no relevant financial interests in the manuscript and no other potential conflicts of interest to disclose. §.§ Code, Data, and Materials Availability This paper reviews the science cases and potential observations for a future space mission, so data sharing is not applicable at this time. Kamber R. Schwarz holds a postdoctoral position at the Max Planck Institute for Astronomy in Heidelberg. She was a NASA Sagan Postdoctoral Fellow at the Lunar and Planetary Laboratory at the University of Arizona. She received a PhD in Astronomy & Astrophysics at the University of Michigan in 2018. She studies the evolution of volatile gas during planet formation, with the goal of determining the amount of volatile carbon, nitrogen, and oxygen available to form planets. Her research combines observations from the infrared to the millimeter, using facilities such as ALMA, NOEMA, and JWST, with physical/chemical modeling to constrain the timescales and mechanisms of volatile reprocessing. She has authored about 100 publications. Alexander Tielens is a professor of astronomy in the Astronomy Department of the University of Maryland, College Park. He received his MS and PhD in astronomy from Leiden University in 1982. He has authored over 500 papers in refereed journals and has written two textbooks on the interstellar medium. His scientific interests center on the physics and chemistry of the interstellar medium, in particular in regions of star and planet formation. Joan Najita is an Astronomer at NSF’s NOIRLab and its Head of Scientific Staff for User Support. She was formerly the Chief Scientist at the National Optical Astronomy Observatory (NOAO) and served on its scientific staff since 1998. In 1993 she received her PhD from University of California, Berkeley. Najita has been responsible for strategic planning, science career development, science communications, and the health of the scientific environment at the Observatory. Her interests include traditional research topics (such as star and planet formation, exoplanets, and the Milky Way), advocacy for the development of new research capabilities (such as infrared spectroscopy and massively multiplexed wide-field spectroscopy), as well as the sociological context of astronomy (such as the nature of discovery in astronomy, and its science sociology and resource allocation practices). She has a lifelong interest in communicating science to the public and in the role of science in society. Joan Najita has been named a 2021–2022 fellow at Harvard Radcliffe Institute, joining artists, scientists, scholars, and practitioners in a year of discovery and interdisciplinary exchange in Cambridge. She has authored about 190 publications. Jennifer Bergner is an Assistant Professor of Chemistry at UC Berkeley. She received her BS degree from University of Virginia and MA and PhD from Harvard in 2019. Her astrochemistry group uses a variety of tools to explore the chemistry at play in protostars and protoplanetary disks, the progenitors of planetary systems. With cryogenic vacuum experiments she mimics the extremely low temperatures and pressures of star-forming regions in the lab to explore the chemical and microphysical behavior of volatile ices.
She also uses state-of-the-art telescope facilities like ALMA and JWST to observe the spectral fingerprints of volatile molecules in protostars and protoplanetary disks, providing insight into the chemical landscape of planet formation and the underlying physical processes which drive astrochemical evolution. She has about 128 publications. Quentin Kral is an astronomer at the Paris Observatory (LESIA). His main research interests are debris disks, the solar system, and planetary formation. He is an expert on the new gas component that is now observed in mature extrasolar systems once the young planet-forming disk has dissipated. He mainly uses ALMA to test his models and investigate the gas and dust in exoplanetary systems. He is the PI of the exoplanet.eu catalog of exoplanets. He has published over 130 articles. He received his master's degree from Ecole Normale Supérieure (ENS Paris) and his PhD from Paris Observatory in 2014. Carrie M. Anderson is a research scientist at NASA Goddard Space Flight Center (GSFC). She received a BS in physics from Arizona State University in 2000, and MS and PhD degrees in Astronomy from New Mexico State University in 2003 and 2006, respectively. She is the author of more than 45 papers in refereed journals and has written one book chapter. Her research focuses on the remote sensing of planetary atmospheres, primarily in the areas of thermal structure and composition, using space- and ground-based data, in the visible, near-IR, mid-IR, far-IR, and submillimeter spectral regions. Her research also includes laboratory transmission spectroscopy measurements of ice films in a high-vacuum cryo chamber located in her Spectroscopy for Planetary ICes Environments (SPICE) laboratory at NASA GSFC. Gordon Chin is a research scientist at NASA Goddard Space Flight Center (GSFC). He received his B.A. in physics from Columbia College in 1970, and his M.A., M. Phil., and PhD in physics from Columbia University in 1972, 1974, and 1977, respectively. He is the author of more than 50 refereed journal papers. His current research interests includes the development of sub-millimeter planetary flight spectrometers targeting planetary atmospheres, the lunar exosphere, and ocean world plume environments in the solar system. David T. Leisawitz is an astrophysicist and Chief of the Science Proposal Support Office at NASA’s Goddard Space Flight Center. He received a Ph.D. in Astronomy from the University of Texas at Austin in 1985. His primary research interests are star and planetary system formation, infrared astrophysics, wide-field spatio-spectral interferometry, and far-infrared space interferometry. He is NASA Center Study Scientist for the Far-IR Surveyor, Mission Scientist for the Wide-field Infrared Survey Explorer (WISE), and earlier served as Deputy Project Scientist for the Cosmic Background Explorer (COBE) under Project Scientist and mentor Dr. John Mather. He is Principal Investigator for “Wide-field Imaging Interferometry,” a Co- Investigator on the “Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII),” and member of a three-person External Advisory Panel for the “Far Infrared Space Interferometer Critical Assessment (FISICA),” a European Commission FP7 research program. In 2004-05, he served as Principal Investigator and science team lead for the Space Infrared Interferometric Telescope (SPIRIT) mission concept study. He has authored about 300 publications. David J. 
Wilner is a Senior Astrophysicist at the Smithsonian Astrophysical Observatory in the Radio and Geoastronomy Division at the Center for Astrophysics, Harvard & Smithsonian. His main research interests are circumstellar disks and the formation of planets, and the development of aperture synthesis techniques. Much of his science program makes use of radio, millimeter, and submillimeter interferometers, including the Submillimeter Array, ALMA, and the VLA. He received an A.B. in Physics from Princeton University and a Ph.D. in Astronomy from the University of California. He frequently lectures on imaging and deconvolution in radio astronomy. He has authored about 450 publications. Peter Roelfsema is a senior scientist/project manager at SRON Netherlands Institute for Space Research. He has been involved in several satellite projects, currently as PM for the Dutch Athena/X-IFU contribution, and before that as PI for SPICA’s SAFARI Far-IR spectrometer and as lead of the international SPICA collaboration. He was PI and ad-interim PI for Herschel/HIFI, and in the early Herschel development phase, he was one of the lead system engineers developing the Herschel ground segment concept and operational systems. Before Herschel he led the ISO/SWS operations team in Villafranca/Spain and the SWS analysis software development team. He started his scientific career as a radio astronomer, utilizing the WSRT, VLA and ATNF to study radio recombination lines of galactic HII regions and nearby active galaxies. With ISO and Herschel he did (Far)IR spectroscopic work on galactic HII regions, studying e.g. PAH properties and metal abundance variations in our galaxy. He has published over 150 papers in astronomical journals conference proceedings and supervised a number of PhD students. Floris van der Tak is a Senior Scientist in the Astrophysics program of the Netherlands Institute for Space Research (SRON), where his research interests include astrochemistry, the habitability of exoplanets, the physics of the interstellar medium, star formation, molecular spectroscopy and radiative transfer. He received a PhD from Leiden University in 2000. He was the Project Scientist for the SPICA/SAFARI instrument. He has authored about 216 publications. Erick Young is a Senior Science Advisor at Universities Space Research Association. He is a widely recognized authority on infrared astronomy and the former Science Mission Operations Director for SOFIA. He specializes in designing science instruments and has participated in many NASA's space infrared astronomy missions. He was responsible for developing the far- infrared detector arrays on the Spitzer Space Telescope’s Multiband Imaging Photometer for Spitzer. As SOFIA Science Mission Operations Director, he manages the airborne observatory's equipment, instruments, support facilities, and infrastructure. He was also responsible for the overall scientific productivity of the facility, including the Guest Investigator program. He has about 385 publications. Christopher K. Walker is a Professor of Astronomy, Optical Sciences, Electrical & Computer Engineering, Aerospace & Mechanical Engineering, and Applied Mathematics at the University of Arizona (UofA). He received his M.S.E.E. from Clemson University (1980), M.S.E.E. from Ohio State University (1981), and Ph.D. in Astronomy from the University of Arizona (1988). 
He has worked at TRW Aerospace and the Jet Propulsion Laboratory, was a Millikan Fellow in Physics at Caltech, and has been a faculty member at the UofA since 1991, where he has worked to advance the field of terahertz astronomy. He has supervised sixteen Ph.D. students, led numerous NASA and NSF projects, authored/coauthored 130+ papers, and published two textbooks: "Terahertz Astronomy" and "Investigating Life in the Universe".
http://arxiv.org/abs/2407.13575v1
20240718151519
With or Without Replacement? Improving Confidence in Fourier Imaging
[ "Frederik Hoppe", "Claudio Mayrink Verdun", "Felix Krahmer", "Marion I. Menzel", "Holger Rauhut" ]
eess.SP
[ "eess.SP", "cs.IT", "cs.LG", "eess.IV", "math.IT", "stat.AP" ]
With or Without Replacement? Improving Confidence in Fourier Imaging Frederik Hoppe1, Claudio Mayrink Verdun2, Felix Krahmer3, Marion I. Menzel4 and Holger Rauhut5 1 Chair of Mathematics of Information Processing, RWTH Aachen University, Germany 2Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, USA 3TUM School of Computation, Information and Technology, Technical University Munich, Germany, and Munich Center for Machine Learning, Germany 4Faculty of Electrical Engineering and Information Technology, TH Ingolstadt, Germany, GE HealthCare, Munich, Germany, and TUM School of Natural Sciences, Munich, Germany 5Department of Mathematics, LMU Munich, Germany, and Munich Center for Machine Learning, Germany ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Over the last few years, debiased estimators have been proposed in order to establish rigorous confidence intervals for high-dimensional problems in machine learning and data science. The core argument is that the error of these estimators with respect to the ground truth can be expressed as a Gaussian variable plus a remainder term that vanishes as long as the dimension of the problem is sufficiently high. Thus, uncertainty quantification (UQ) can be performed exploiting the Gaussian model. Empirically, however, the remainder term cannot be neglected in many realistic situations of moderately-sized dimensions, in particular in certain structured measurement scenarios such as Magnetic Resonance Imaging (MRI). This, in turn, can downgrade the advantage of the UQ methods as compared to non-UQ approaches such as the standard LASSO. In this paper, we present a method to improve the debiased estimator by sampling without replacement. Our approach leverages recent results of ours on the structure of the random nature of certain sampling schemes showing how a transition between sampling with and without replacement can lead to a weighted reconstruction scheme with improved performance for the standard LASSO. In this paper, we illustrate how this reweighted sampling idea can also improve the debiased estimator and, consequently, provide a better method for UQ in Fourier imaging. § INTRODUCTION 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.footnote High-dimensional models have become ubiquitous across various scientific disciplines, with notable prominence in fields where machine learning or signal processing techniques are used. 
Given their extensive application, it has become crucial to accurately assess the uncertainty surrounding the solutions to these models. This necessity arises from the inherent presence of noise in the data, which directly influences the solutions obtained by solving such models with a certain optimization strategy. Quantifying uncertainty in high-dimensional regression models like the LASSO poses a significant challenge. These estimators often introduce a bias in order to not compromise the variance. This results in biased estimates that are unsuitable for making inferences on model coefficients, as shown by asymptotic results for fixed dimension derived in <cit.> for the LASSO estimator. Moreover, the LASSO introduces bias by shrinking all the coefficients towards zero, which helps in variable selection and prevents overfitting in high-dimensional settings, but this shrinkage can lead to underestimation of true effect sizes and compromises the ability to draw accurate statistical inferences about individual coefficients. The problem of uncertainty quantification (UQ) for high-dimensional regression models received a lot of attention recently since, in the case of sparse regression, a few papers initiated a post-selection debiased approach to rigorously obtain confidence intervals for the LASSO coefficients. These methods have great potential to guide decision-making in critical applications like medical imaging <cit.>. The main idea is that a modification of the LASSO based on its KKT conditions, the so-called debiased LASSO, yields a solution that approximately follows a Gaussian distribution. Thus, confidence intervals for the coefficients can be deduced. A key feature of this approach is that it exploits sparsity constraints of the underlying model. Under such assumptions, previous works rigorously quantify the performance for measurement systems that are subgaussian or given by a bounded orthonormal system <cit.>. However, in many applications, including telecommunications and medical imaging, the underlying signal is typically not sparse in the canonical basis. Therefore, in order to use sparse regression techniques for such applications, one needs to work with a sparsifying transform, either a general-purpose representation system such as a wavelet basis or a learned dictionary. In this case, the debiased results established for UQ, e.g., <cit.>, are applicable in a somewhat restricted setting. Even for the simple case of sparsity in the Haar wavelet domain <cit.>, most theory is based on non-uniform sampling with replacement <cit.>, which can lead to many points being sampled multiple times and, consequently, a lower number of distinct samples. As observed in <cit.>, this argument can also be turned around: When a certain number of distinct samples is observed, this corresponds to a sampling-with-replacement model with a larger (virtual) number of measurements provided this transformation is reflected by a reweighting in the LASSO reconstruction. In <cit.>, we explored the effect of this transformation on the reconstruction accuracy for the standard LASSO. In this work, we demonstrate that it can also improve the UQ performance. This is important for Fourier imaging with Haar wavelet sparsity, as too few samples can make the UQ procedure for some coefficients meaningless. This situation is even more challenging if such UQ methods are employed for learning-based methods <cit.>. 
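As a minimal illustration of this debiasing idea (a toy sketch, not the authors' code), the following Python fragment runs a real-valued analogue: a plain ISTA solver stands in for the LASSO, the additive KKT-based correction (spelled out in the next section) is applied, and Gaussian confidence intervals are built from the diagonal of the sample covariance. The problem sizes, noise level, solver, and the 1.96 real-Gaussian quantile are illustrative assumptions and differ from the complex-valued MRI setting considered below.

import numpy as np

rng = np.random.default_rng(0)
m, N, s, sigma = 200, 400, 10, 0.5
A = rng.standard_normal((m, N))                      # diag(A^T A / m) is of order one
x0 = np.zeros(N)
x0[rng.choice(N, s, replace=False)] = 3.0 * rng.standard_normal(s)
y = A @ x0 + sigma * rng.standard_normal(m)

def lasso_ista(A, y, lam, iters=3000):
    # proximal gradient descent for (1/(2m)) ||Ax - y||_2^2 + lam * ||x||_1
    m = A.shape[0]
    L = np.linalg.norm(A, 2) ** 2 / m                # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / (m * L)
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

lam = 2.0 * sigma * np.sqrt(np.log(N) / m)           # a standard (illustrative) tuning choice
x_hat = lasso_ista(A, y, lam)
x_u = x_hat + A.T @ (y - A @ x_hat) / m              # debiasing correction
Sigma_hat = A.T @ A / m
radius = 1.96 * sigma * np.sqrt(np.diag(Sigma_hat) / m)   # 95% half-width from the Gaussian term
print("empirical coverage:", np.mean(np.abs(x_u - x0) <= radius))
# the remainder term (Sigma_hat - I)(x0 - x_hat) is ignored in the radius; when it is
# not negligible the nominal coverage degrades, which is the issue addressed in this paper.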
Our contribution: This work aims to show that the reweighted scheme of <cit.> can overcome the aforementioned problem and improve uncertainty quantification techniques for sparse estimators when the underlying ground truth is sparse in a non-trivial domain instead of the canonical basis. In particular, we show that by using a reweighted sampling without replacement scheme, we can obtain sharper debiased estimators with better convergence properties. This allows for constructing more precise confidence intervals in cases where the ground truth is sparse only after a change of basis. § THE DEBIASED LASSO We consider measurements given by a linear model y = A x^0 + ε with s-sparse ground truth x^0∈ℂ^N, measurements matrix A∈ℂ^m× N and complex Gaussian noise ε∼𝒞𝒩(0, σ^2 I_N× N). The LASSO estimator x̂ <cit.> retrieves the signal by solving the problem _x∈ℂ^N1/2m‖ Ax -y‖_2^2 + λ‖ x‖_1 with regularization parameter λ>0. We assume that the matrix A is normalized such that the sample covariance Σ̂_A:=1/mA^*A has diagonal entries of order one. Thanks to the ℓ_1-regularization, which introduces a shrinkage of the coefficient magnitudes, the LASSO is biased <cit.>. A few works <cit.> established a correction to remove this bias from the LASSO, i.e. x̂^u = x̂ + 1/mA^*(y-Ax̂). The corrected estimator is called debiased LASSO. The main achievement of the debiased estimation theory is the decomposition x̂^u - x^0 = A^*ε/m_=:W + (Σ̂_A - I_N× N)(x^0-x̂)_=:R, with a Gaussian term W ∼𝒞𝒩(0,σ^2Σ̂_A) and a remainder term R, that vanishes asymptotically, i.e. when m→∞ and N=N(m)→∞ such that N/m is constant and s_0log^2 N/m→ 0; see <cit.>. Thus, the debiased LASSO is asymptotically Gaussian with mean x^0. This allows for constructing confidence intervals based on the distribution of W. The confidence region with significance level α∈(0,1) for the complex pixel value x_i^0 is given by J_i(α) = { z ∈ℂ: |x̂^u_i - z |≤δ_i(α)} with radius δ_i(α) = σ̂(Σ̂_A)_ii^1/2/m√(log(1/α)). A detailed derivation of the confidence regions can be found, e.g., in <cit.>. § SAMPLING SCHEMES The theory of compressive sampling for image retrieval requires that the measurement operator is well-behaved on certain sets, e.g., on a union of subspaces. Such a notion is mathematically described by concepts such as incoherence, the restricted isometry property, or the nullspace property <cit.>. In the case when the measurement matrix is given by a subsampled Fourier matrix F_Ω, which is the measurement scheme employed in MRI, it is known that it has the restricted isometry property (RIP) with high probability provided that its rows are sampled uniformly at random <cit.>. However, in cases when a sparsifying transform such as the Haar wavelet is incorporated and hence, not the signal x^0∈ℂ^N, but z^0 = Hx^0 is s-sparse, the new measurement operator A=F_ΩH^* is coherent, see <cit.>. In this case, a non-uniform sampling strategy must be employed to guarantee that the measurement operator is well-behaved. The following result from <cit.> shows that non-uniform sampling ensures that the Fourier-Wavelet measurement scheme fulfills the RIP. We refer to <cit.> for the definition and a discussion of the RIP. <cit.> Let Φ={φ_j}_j=1^N and Ψ ={ψ_k}_k=1^N be orthonormal bases of ℂ^N. Assume the local coherence of Φ with respect to Ψ is pointwise bounded by the function κ, that is sup_1≤ k≤ N |⟨φ_j, ψ_k⟩| ≤κ_j. Let s≳log(N), suppose m ≳δ^-2κ_2^2 s log^3(s) log(N), and choose m (possibly not distinct) indices j ∈Ω⊂ [N] i.i.d. 
from the probability measure ν on [N] given by ν(j) = κ^2_j/κ_2^2 . Consider the matrix A ∈ℂ^m × N with entries A_j,k = ⟨φ_j, ψ_k⟩, j ∈Ω, k ∈ [N], and consider the diagonal matrix D = diag(d) ∈ℂ^m with d_j = κ_2 / κ_j. Then, with a probability of at least 1-N^-c log^3(s), the restricted isometry constant δ_s of the preconditioned matrix 1/√(m) D A satisfies δ_s ≤δ. The rows of F are now sampled with replacement according to the non-uniform probability measure (<ref>) and the measurement matrix F_ΩH^* is normalized through the preconditioning diagonal matrix D, that depends on the measure ν. The debiased LASSO applied to this problem with measurement matrix B:=DF_ΩH^* yields a decomposition in the sense of (<ref>) ẑ - z^0 = (D^2F_ΩH^*)^*ε/m _=:W^z + (Σ̂_B - I_N× N)(z^0 - ẑ) _=:R^z, where ẑ denotes the LASSO for the equivalent model Dy = DF_ΩH^*z^0 + Dε. In practice, however, this gives rise to a tradeoff: If we sample according to measure (<ref>) with replacement, then many rows will be sampled more than once with high probability. If we sample without replacement, in contrast, which seems much more natural from the perspective of maximizing acquired information, Theorem <ref> does not apply. When considering the debiased LASSO, sampling from ν without replacement has another disadvantage: the matrix Σ̂_B of R^z has diagonal entries of different sizes, which makes uniform normalization impossible and hence slows down the asymptotic convergence of R^z. We overcome this problem by considering reweighted sampling without replacement <cit.>, which can be interpreted as transforming the distinct samples into a virtual model of sampling with replacement. Computationally, one independently draws samples ω_1, ω_n with replacement from a probability measure until obtaining m distinct samples, which one physically acquires. The counts γ_1,γ_m, how often the samples occur in the virtual model, are recorded for the reconstruction procedure, which can be taken into account to mimic a model with replacement with n=∑_i=1^mγ_i samples. § IMPROVING THE DEBIASED LASSO'S CONFIDENCE We can now leverage the sampling strategy to construct an unbiased LASSO with better recovery and inference properties than the standard construction. This standard approach is a direct application of the debiasing step for the LASSO as described in Section <ref>. Our new approach tailors the debiasing step to a Haar-transformed signal using reweighted sampling without replacement. This bridges sampling without replacement (used, e.g., in practical MRI scenarios) with theoretical recovery guarantees for sampling with replacement that are given, e.g., in Theorem <ref>. §.§ Standard Debiasing We select the rows indexed by the set Ω∈ℕ^m with or without replacement and obtain a subsampled Fourier matrix F_Ω. After solving the LASSO _z∈ℂ^p1/2m‖ y - F_ΩH^* z^0‖_2^2 + λ‖ z‖, we construct the debiased LASSO by adding ẑ^u = ẑ + 1/m (F_ΩH^*)^*(y - F_ΩH^*ẑ). This gives us, in the Haar domain, the decomposition ẑ^u - z^0 = (F_ΩH^*)^*ε/m_=:W^z + (HΣ̂_FH^* - I_N× N)(z^0-ẑ)_=:R^z with Σ̂_F = 1/mF_Ω^*F_Ω. In the image domain, we obtain x̂^u-x^0 = F_Ω^*ε/m_=:W^x + (Σ̂_F - I_N× N)(x^0-H^*ẑ)_=:R^x. §.§ Reweighting Sampling Without Replacement Debiasing Our more sophisticated approach takes into account that sampling without replacement but with reweighting yields better numerical performance, as described, e.g., by numerical experiments in <cit.>. 
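The bookkeeping behind this reweighted scheme takes only a few lines: indices are drawn with replacement from ν until m distinct ones have appeared, the distinct set Ω is kept together with the multiplicities γ, and the preconditioning weights d_j = κ_2/κ_j are recorded. The coherence profile κ below is a toy assumption used only for illustration; in the Fourier–Haar setting it would come from the local coherence bounds of the theorem above.

import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
N, m = 1024, 300

# illustrative local-coherence profile: low frequencies are favoured by the measure
kappa = 1.0 / np.sqrt(np.abs(np.arange(N) - N // 2) + 1.0)
nu = kappa**2 / np.sum(kappa**2)                 # sampling measure nu(j) = kappa_j^2 / ||kappa||_2^2

counts = Counter()
while len(counts) < m:                           # draw with replacement until m distinct indices
    counts[int(rng.choice(N, p=nu))] += 1
Omega = np.array(sorted(counts))                 # distinct indices that are physically acquired
gamma = np.array([counts[j] for j in Omega])     # multiplicities gamma_i in the virtual model
n = int(gamma.sum())                             # virtual with-replacement sample size
d = np.linalg.norm(kappa) / kappa[Omega]         # preconditioning weights d_j = ||kappa||_2 / kappa_j
c = np.sqrt(gamma)                               # reweighting factors c_j = sqrt(gamma_j)
print(f"m = {m} distinct samples correspond to a virtual sample size n = {n}")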
Following <cit.>, we define a reweighted version of the LASSO and explain why debiasing this LASSO estimator overcomes the tradeoff mentioned above. Although we restrict ourselves to the Haar domain, the result is transferrable into the image domain by exploiting the fact that the Haar transform is an isometry with respect to the ℓ_2-norm. Assume the setting of Theorem <ref>. Let γ_1,, γ_m be the count records of the reweighted sampling without replacement, C=(√(γ_1),, √(γ_m)) and D∈ C^m× m as defined in Theorem <ref>. Let Ω∈ℕ^m be drawn from ν without replacement and let n = ∑_i=1^mγ_i. Denote by ẑ the LASSO solution of min_z∈ℂ^N1/2n‖ CD (F_ΩH^*x^0-y)‖_2^2+λ‖ z‖_1, and by ẑ̃ the one of min_z∈ℂ^N1/2n‖D̃ (F_Ω̃H^*x^0-y)‖_2^2+λ‖ z‖_1, where Ω̃∈ℕ^n is sampled with replacement, and D̃∈ℂ^n× n the corresponding diagonal matrix. Then, it holds that (CDF_ΩH^*)^*CDF_ΩH^* = (D̃F_Ω̃H^*)^*D̃F_Ω̃H^*. This means that the remainder term R^z of the debiased LASSO ẑ^u derived from (<ref>), i.e. R^z=((CDF_ΩH^*)^*CDF_ΩH^*/n - I_N× N)(z^0-ẑ) can be interpreted as the remainder term of the debiased LASSO ẑ̃^u = ẑ̃ + 1/n(D̃F_Ω̃H^*)^*(D̃y - D̃F_Ω̃H^*ẑ̃), derived from (<ref>). In particular, 𝔼[(CDF_ΩH^*)^*CDF_ΩH^*/n] = I_N× N. This theorem suggests our reweighted debiasing with m distinct samples to behave like debiasing based on ∑_i=1^mγ_i samples drawn with replacement. Since the sampling with replacement is only virtually performed, it overcomes the drawback of resource-intensive sampling. With this result, we have shown that our approach takes advantage of both sampling with replacement and sampling without replacement. On the one hand, from the equivalence to sampling with replacement, we have no normalization obstacle as we had in the sampling without replacement case, and the RIP holds for the measurement matrix. On the other hand, we save resources by only subsampling m distinct rows of F. This is of high interest, especially in MRI. The model y = F_ΩH^*z^0 + ε is equivalent to 1/√(n) CDy = 1/√(n) CDF_ΩH^*z^0 + 1/√(n) CDε, in the sense that the multiplication with CD/√(n) is bijective. From this, we derive the debiased LASSO for z^0 as ẑ^u = ẑ + 1/n (CDF_ΩH^*)^*(CDy - CDF_ΩH^*ẑ). and the decomposition as ẑ^u - z^0 = 1/n(CDF_ΩH^*)^*CDε_=:W^z + (1/n(CDF_ΩH^*)^*CDF_ΩH^* - I_N× N)(z^0-ẑ) _=:R^z. Now, it holds that (CDF_ΩH^*)^*CDF_ΩH^* = H(∑_i=1^m d_i^2· c_i^2 f_ω_i f_ω_i^*)H^*, where f_ω_i denotes the ω_i-th row. Since c_i^2=γ_i is the number of counts for the ω_i-th row it can be written as ∑_i=1^m (d_i^2f_ω_i f_ω_i^*++d_i^2f_ω_i f_ω_i^*)_γ_i-times =∑_j=1^n d̃_j^2· f_ω̃_j f_ω̃_j^* = (D̃ F_Ω̃)^*(D̃F_Ω̃) with f_ω̃_1= = f_ω̃_γ_1=f_ω_1, d̃_1= =d̃_γ_1=d_1 ,, f_ω̃_n-γ_m+1= =f_ω̃_n= f_ω_m, d̃_n-γ_m+1==d̃_n=d_m. This is the same as having n measurements sampled with replacement when deriving the debiased LASSO (<ref>) from the model D̃y = D̃F_Ω̃H^*z^0 + D̃ε where Ω̃ contains the indices ω_i with multiplicity γ_i, i.e. ω̃_̃1̃,,ω̃_̃ñ. In particular, we obtain 𝔼[(CDF_ΩH^*)^*CDF_ΩH^*/n]= H 𝔼[1/n∑_i=1^m d_i^2 c_i^2 f_ω_i f_ω_i^*]H^* = H𝔼[1/n∑_j=1^n d̃_j^2 f_ω̃_j f_ω̃_j^*]H^* =𝔼[(D̃F_Ω̃H^*)^*(D̃F_Ω̃H^*)/n] = I_N× N, where the last equality holds since 1/√(n)D̃F_Ω̃H^* is a random sampling matrix associated to a BOS as shown in <cit.>. § NUMERICAL EXPERIMENTS In this section, we compare the standard debiasing without replacement against our method, the reweighting sampling without replacement debiasing. First, we do not use standard debiasing with replacement due to the large number of required samples. 
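The Gram-matrix identity at the heart of this theorem is easy to verify numerically. The sketch below builds a small Fourier–Haar matrix, picks arbitrary (purely illustrative) distinct indices Ω, multiplicities γ and preconditioning weights d, and checks that the weighted Gram matrix of the m distinct rows equals that of the expanded, virtually with-replacement matrix.

import numpy as np

def haar_matrix(N):
    # orthonormal 1-D Haar analysis matrix H (N a power of two)
    H = np.array([[1.0]])
    while H.shape[0] < N:
        k = H.shape[0]
        H = np.vstack([np.kron(H, [1.0, 1.0]),
                       np.kron(np.eye(k), [1.0, -1.0])]) / np.sqrt(2.0)
    return H

rng = np.random.default_rng(2)
N = 32
F = np.fft.fft(np.eye(N), norm="ortho")            # unitary DFT matrix
A = F @ haar_matrix(N).T                           # rows of F H^*  (H is real, so H^* = H^T)

Omega = rng.choice(N, size=10, replace=False)      # distinct sampled rows (illustrative)
gamma = rng.integers(1, 4, size=10)                # multiplicities gamma_i (illustrative)
d = rng.uniform(0.5, 2.0, size=10)                 # preconditioner entries d_i (illustrative)

lhs_rows = (np.sqrt(gamma) * d)[:, None] * A[Omega]        # rows of C D F_Omega H^*
lhs = lhs_rows.conj().T @ lhs_rows

Omega_t, d_t = np.repeat(Omega, gamma), np.repeat(d, gamma)
rhs_rows = d_t[:, None] * A[Omega_t]                       # rows of the expanded matrix
rhs = rhs_rows.conj().T @ rhs_rows

print(np.allclose(lhs, rhs))                       # True: both Gram matrices coincide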
Second, the standard debiasing without replacement strategy suffers from a missing uniform normalization of Σ̂_B. Theorem <ref> shows that our method overcomes this issue while having the same sample complexity, in terms of the required m, as the sampling without replacement method. In the experiments, we simulate the MRI process. For the reconstruction, we use the solver TFOCS <cit.>, which is a first-order solver for a convex conic problem (the chosen algorithm was Auslender and Teboulle's single-projection method <cit.>). As a ground truth, we use a modified version of the Shepp-Logan Phantom (see Figure <ref>) denoted in a vectorized version by x^0∈ℂ^N with N=32768. The underlying MRI model reads as y = F_Ωx^0 + ε, where ε∈ℂ^m is complex Gaussian noise with covariance structure σ^2 I_m× m. The index set Ω∈ℕ^m is sampled from the probability measure in (<ref>) without replacement for the standard debiased LASSO and with reweighting without replacement for our method. Then, x^0 is Haar transformed to z^0 = Hx^0. Both debiasing approaches are performed for λ = k·λ_0 with k ∈{5,10,15,20,25} and λ_0:= σ/m(2+√(12log(N))). The noise level in the standard approach is chosen, such that the signal-to-noise ratio is ‖ε‖_2/‖ F_ΩH^*z^0‖_2≈ 0.045. For comparison reasons, in the reweighting scenario, it is also chosen as ‖ CD ε‖_2/‖ CD F_ΩH^*z^0‖_2≈ 0.045. In practice, the noise level can be precisely measured <cit.>. Therefore, the assumption of known σ does not limit our experiments and allows us to focus on the comparison between the two methods. We compute the average of the estimator errors as well as the remainder and Gaussian term and show the results in Table <ref> and <ref> for the standard and reweighting debiased LASSO, respectively. Due to the isometry property, the ℓ_2-norm of the quantities are the same in the Haar and image domain. The ℓ_∞-norm is considered since we aim for pixelwise confidence interval. The error of the LASSO, the debiased LASSO, and the remainder term is significantly smaller in the reweighting setting than in the standard setting. Their dependency on λ is displayed in Figure <ref>. In addition, the Gaussian term, which is independent of λ, is much smaller in the reweighting scenario, leading to sharper confidence intervals. Here, to achieve a small ratio ‖ R‖_2/‖ W‖_2, and hence a dominating Gaussian term W, a suitable choice is, e.g., λ = 15 λ_0. The resulting confidence intervals for one realization of the sampling pattern and the noise are presented in Figure <ref> for the red line in the Shepp-Logan phantom. Overall pixels, the confidence intervals contain 97.85%, and on the support, they contain 97.77%. § CONCLUSION This work bridges ideas from the sampling with replacement and the sampling without replacement techniques in high-dimensional. In particular, we adapted the debiased LASSO for the case when the underlying signal is sparse on a different basis. Our approach significantly decreases the estimator's error rates as compared to previous methods. In addition, our method provides sharper confidence regions, allowing for sharper uncertainty quantification. § ACKNOWLEDGMENT The authors would like to thank the German Federal Ministry of Education and Research for support through the grant "SparseMRI3D+ (FZK 05M20WOA)". IEEEbib
http://arxiv.org/abs/2407.13547v1
20240718142122
Unified Asymptotics For Investment Under Illiquidity: Transaction Costs And Search Frictions
[ "Tae Ung Gang", "Jin Hyuk Choi" ]
q-fin.MF
[ "q-fin.MF", "91G15" ]
^∗Stochastic Analysis and Application Research Center, Korea Advanced Institute of Science and Technology (gangtaeung@kaist.ac.kr). ^∗∗Department of Mathematical Sciences, Ulsan National Institute of Science and Technology (jchoi@unist.ac.kr). Funding: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 002086221G0001278, No. 2019R1A5A1028324 and No. RS-2023-00237770). Unified Asymptotics for Investment under Illiquidity: Transaction Costs and Search Frictions Jin Hyuk Choi^∗∗ July 22, 2024 ============================================================================================ § ABSTRACT This paper investigates the optimal investment problem in a market with two types of illiquidity: transaction costs and search frictions. Extending the framework established by <cit.>, we analyze a power-utility maximization problem where an investor encounters proportional transaction costs and trades only when a Poisson process triggers trading opportunities. We show that the optimal trading strategy is described by a no-trade region. We introduce a novel asymptotic framework applicable when both transaction costs and search frictions are small. Using this framework, we derive explicit asymptotics for the no-trade region and the value function along a specific parametric curve. This approach unifies existing asymptotic results for models dealing exclusively with either transaction costs or search frictions. Keywords: stochastic control, asymptotics, portfolio optimization, illiquidity, transaction costs, search frictions. § INTRODUCTION Understanding the impact of illiquidity on optimal investment is one of the key topics in mathematical finance. Illiquidity arises from various factors, such as exogenous transaction costs, search frictions (difficulty in finding a trading counterparty), and price impacts.[<cit.> have explored how asymmetric information affects price impact and optimal trading strategy in equilibrium, while <cit.> have examined optimal order execution problems with given price impacts.] Building on the idea of <cit.>, this paper investigates an optimal investment problem in a market with two types of illiquidity: transaction costs and search frictions. Assuming perfect liquidity, where assets can be traded at any time without transaction costs, Merton's seminal works <cit.> formulate the optimal investment problem using geometric Brownian motion for a risky asset price and a CRRA (constant relative risk aversion) investor, showing that the optimal strategy is to maintain a constant fraction of wealth in the risky asset. Subsequent research has extended this framework to more general stock price processes and utility functions, deriving broader optimal investment strategies. The perfect liquidity assumption can be relaxed by incorporating transaction costs, such as order processing fees or transaction taxes, which contribute to market illiquidity and have been extensively studied. <cit.> examine the Merton model with proportional transaction costs, demonstrating that the optimal strategy is to keep the investment within a “no-trade region." The boundaries of this region are determined by the free-boundaries of the HJB (Hamilton-Jacobi-Bellman) equation. Models with transaction costs and multiple risky assets have been investigated (e.g., <cit.> for costs on all assets and <cit.> for costs on only one asset). 
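For concreteness, the frictionless benchmark can be reproduced in a few lines of Python (all parameter values are illustrative): the Merton fraction y_M = μ/(γσ²) is a constant, and a Monte Carlo run of a portfolio rebalanced to y_M at every time step gives a discrete-time approximation of the frictionless expected power utility.

import numpy as np

mu, sigma, gamma_ra = 0.08, 0.25, 2.0          # illustrative drift, volatility, risk aversion
y_M = mu / (gamma_ra * sigma**2)               # Merton fraction
print(f"Merton fraction y_M = {y_M:.3f}")

rng = np.random.default_rng(0)
T, steps, paths = 1.0, 252, 20_000
dt = T / steps
W = np.ones(paths)                             # initial wealth normalized to 1
for _ in range(steps):
    dB = np.sqrt(dt) * rng.standard_normal(paths)
    W *= 1.0 + y_M * (mu * dt + sigma * dB)    # rebalance to the fraction y_M at each step
print("Monte Carlo estimate of E[W_T^(1-gamma)/(1-gamma)]:",
      round(float(np.mean(W**(1 - gamma_ra) / (1 - gamma_ra))), 4))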
More general stock price processes have been considered within the framework of optimal investment with transaction costs (e.g., <cit.>). Additionally, <cit.> investigate models with quadratic transaction costs. Search frictions, or difficulty in finding a trading counterparty, are another source of market illiquidity. Table 1 in <cit.> presents frequency of trading in various markets, showing that many asset classes are illiquid, with their total sizes rivaling that of the public equity market. An intuitive way to model search frictions is by restricting trade times. For example, <cit.> considers an investor who can change portfolios only at fixed intervals, while <cit.> assume that an illiquid asset can only be traded when randomly occurring opportunities arise, modeled by a Poisson process. <cit.> add the assumption that the asset price is observed only at these trade times. <cit.> further complicates the model by incorporating random intensity of trade times, regime-switching, and liquidity shocks. <cit.> considers trading at deterministic intervals with proportional transaction costs. Due to the lack of explicit solutions to the HJB equations, asymptotic analysis has been employed for small transaction costs or small search frictions. For a small transaction cost parameter ϵ≪ 1, in various models with proportional transaction costs only (e.g., <cit.>), the first correction terms of the no-trade boundaries are of the order of ϵ^1/3, and the first correction term of the value function is of the order of ϵ^2/3. In <cit.>, the parameter λ is the intensity of the Poisson process, and search frictions can be represented by 1λ. For small search frictions 1λ≪1, the first correction terms of the optimal trading strategy and the value function are of the order of 1λ. Merging the aforementioned frameworks, <cit.> studies log-utility maximization of the terminal wealth in a model with both transaction costs and search frictions. As in the models with transaction costs only, the optimal trading strategy in <cit.> is characterized by a no-trade region. In <cit.>, for a small transaction cost parameter ϵ (with fixed λ), the first correction terms of the no-trade boundaries and the value function are of the order of ϵ, instead of ϵ^1/3 or ϵ^2/3 in the models with transaction costs only. The asymptotic results imply that the effects of the transaction costs are more pronounced in the market with fewer search frictions. In this paper, we analyze the power-utility maximization problem with both transaction costs and search frictions. The model setup is the same as that of <cit.> except that we consider power-utility instead of log-utility. In our model, proportional transaction costs (with parameter ϵ) are imposed on an investor, and the investor's trading opportunities arise only when a Poisson process (with intensity λ) jumps. The investor's objective is to maximize the expected utility of wealth at the terminal time T>0. As in other models with proportional transaction costs, the optimal trading strategy in our model is characterized by a no-trade region: there are functions ,:[0,T)→ [0,1] such that the investor tries to keep the fraction of wealth invested in the risky asset within the interval [(t),(t)] whenever trading opportunities arise. The main contribution of this paper is the establishment of a novel framework for asymptotics applicable in scenarios where both transaction costs and search frictions are small, i.e., ϵ≪ 1 and 1λ≪ 1. 
We focus on the asymptotics of the no-trade region and the value function. The results in <cit.> imply that the limits as ϵ↓ 0 and λ→∞ simultaneously do not exist; the resulting values depend on the order of taking these limits (see discussion around (<ref>) in Section 5). To address this issue, we compare the asymptotics in <cit.> with those in the benchmark cases of transaction costs only (λ=∞) and search frictions only (ϵ=0), leading us to conjecture that a specific scaling relation λ=c ϵ^-2/3 for c>0 is relevant to consider (for details, see discussion for (<ref>) in Section 5). Our findings confirm that along the parametric curve λ=c ϵ^-2/3, the first correction terms of the no-trade boundaries and the value function are of the order of ϵ^1/3 and ϵ^2/3, respectively. Our framework for finding asymptotics along the parametric curve λ=c ϵ^-2/3 offers two notable benefits when dealing with small ϵ and large λ. First, the coefficients of the correction terms in our asymptotics are explicit in terms of the model parameters. In contrast, the coefficients in the asymptotics in <cit.> are expressed in terms of solutions to partial differential equations, making them not explicit.[This lack of explicit expression is the main weakness of the asymptotic results in <cit.>. Additionally, the asymptotics in <cit.> are only for small ϵ, with fixed λ.] Therefore, given model parameters, including ϵ and λ, one can compute the auxiliary parameter c=λϵ^2/3 and use the explicit expressions in Joint_limit_of_Wnt_and_Vd to estimate the optimal trading strategy and value. Second, our framework using λ=c ϵ^-2/3 unifies the existing asymptotic results for seemingly different benchmark models with only transaction costs and only search frictions. Indeed, Joint_limit_of_Wnt_and_Vd bridges the benchmark asymptotics, where the case c→∞ corresponds to the asymptotics with only transaction costs and the case c→ 0 corresponds to the asymptotics with only search frictions (see discussion around (<ref>) in Section 5). Our proof of the asymptotic analysis involves various estimations. One of the main difficulties in the analysis is the rigorous treatment of subtle limiting behaviors that do not appear in the benchmark models with only transaction costs or only search frictions (see discussion after Joint_limit_of_Wnt_and_Vd). The remainder of the paper is organized as follows. Section 2 describes the model. In Section 3, we provide the verification argument and some properties of the value function. In Section 4, we characterize the optimal trading strategy in terms of the no-trade region and present properties of its boundaries. In Section 5, we motivate the relation λ=c ϵ^-2/3 and provide asymptotic results. Section 6 is devoted to the proof of these results. Section 7 summarizes the paper. Proofs of technical lemmas can be found in Appendix. § THE MODEL The model setup is identical to that described in <cit.>, except for the utility function. Consider a filtered probability space (Ω, , (_t)_t ≥ 0, ) satisfying the usual conditions. Under the filtration, let (B_t)_t ≥ 0 be a standard Brownian motion and (P_t)_t ≥ 0 be a Poisson process with constant intensity λ > 0. Then (B_t)_t ≥ 0 and (P_t)_t ≥ 0 are independent as the quadratic covariation of the two Levy processes is zero. 
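As a rough numerical companion to this discussion (all values below are illustrative), one can compute the auxiliary parameter c = λ ε^{2/3} for a given pair of cost and friction parameters and simulate the two sources of randomness just introduced: a geometric Brownian stock path driven by B and the Poisson stream of trading opportunities generated by P.

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, T = 0.08, 0.25, 1.0
lam, eps = 50.0, 0.005                          # illustrative search-friction and cost parameters
c = lam * eps**(2.0 / 3.0)                      # position on the parametric curve lam = c * eps^(-2/3)
print(f"c = lam * eps^(2/3) = {c:.3f}")

steps = 10_000
dt = T / steps
S = np.empty(steps + 1)
S[0] = 1.0
for k in range(steps):                          # exact GBM increments on the time grid
    S[k + 1] = S[k] * np.exp((mu - 0.5 * sigma**2) * dt
                             + sigma * np.sqrt(dt) * rng.standard_normal())
n_trades = rng.poisson(lam * T)                 # number of trading opportunities on [0, T]
trade_times = np.sort(rng.uniform(0.0, T, n_trades))
print(f"{n_trades} trading opportunities, the first few at", np.round(trade_times[:5], 3))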
We consider a financial market consisting of a constant saving account (zero interest rate) and a stock with its price process ( S_t )_t ≥ 0 defined by the following stochastic differential equation (SDE): d S_t = S_t ( μ d t + σ d B_t ), where μ, σ and S_0 are constants and σ and S_0 are strictly positive. We assume that the market has two types of illiquidity. * Proportional Transaction Costs: These costs are imposed on an investor when purchasing and selling stocks. There are two constants ∈ (0, ∞) and ∈ (0, 1) such that the investor purchases one share of stock at the price of (1 + ) S_t and sells one share at the price of (1 - ) S_t at time t, respectively. * Limited Trading Opportunities: An investor's trading opportunity is available only when the Poisson process (P_t)_t ≥ 0 jumps. Hence, a larger λ implies more frequent trading opportunities on average, resulting in fewer search frictions. Let W_t^(0) and W_t^(1) be the amount of wealth in the saving account and stock at time t ≥ 0, respectively. If the investor tries to obtain the stock worth M_s at time s ∈ [0, t], then W_t^(0) and W_t^(1) satisfy W_t^(0) = w_0^(0) + ∫_0^t( (1 - ) M_s^- - (1 + ) M_s^+) d P_s, W_t^(1) = w_0^(1) + ∫_0^t W_s-^(1)( μ ds + σ dB_s ) + ∫_0^t M_s d P_s, where the pair of nonnegative constants ( w_0^(0), w_0^(1) ) represents the initial position of the investor and we use notation x^± = max{± x, 0 } for x∈. We assume that the initial total wealth is strictly positive, w_0:=w_0^(0)+ w_0^(1)>0. The trading strategy (M_t)_t ≥ 0 is called admissible if it is a predictable process and the corresponding total wealth process W:=W^(0)+W^(1) is nonnegative all the time. Since the rebalancing times are discrete, W_t≥ 0 for all t≥ 0 is equivalent to W_t^(0)≥ 0 and W_t^(1)≥ 0 for all t≥ 0. Therefore, an admissible strategy M satisfies - W_t-^(1)≤ M_t≤W_t-^(0)1 + , t≥ 0. The above inequalities and w_0>0 ensure that the corresponding total wealth process W is strictly positive all the time. For an admissible strategy M and the corresponding solutions W^(0) and W^(1) of the SDEs in (<ref>), let X_t^(1) := W_t^(1)/W_t be the fraction of the total wealth invested in the stock market at time t. Then, the inequalities in (<ref>) imply 0≤ X_t ≤ 1. The SDEs for W and X are d W_t = μ X_t- W_t- d t + σ X_t- W_t- d B_t - ( M_t^+ + M_t^- ) d P_t, d X_t = X_t- (1 - X_t-) ( μ - σ^2 X_t-) d t + σ X_t- ( 1 - X_t-) d B_t + M_t + ( M_t^+ + M_t^-) X_t-W_t- - M_t^+ - M_t^- d P_t, where the initial conditions are W_0=w_0 and X_0=x_0:=w^(1)_0/w_0. Let T>0 be a constant representing the terminal time. The investor's utility maximization problem is defined as follows: for a given γ∈ (0, ∞) ∖{ 1 }, sup_(M_t)_t ∈ [0,T] [ W_T^1 - γ1 - γ], where the supremum is taken over all admissible trading strategies. § THE VALUE FUNCTION Let V be the value function of the utility maximization problem (<ref>): V(t, x, w) = sup_(M_s)_s ∈ [t,T][ W_T^1 - γ1 - γ | ℱ_t] |_(X_t, W_t) = (x, w). The scaling property of the wealth process and the property of the power function enable us to conjecture the form of the value function as V(t, x, w) = w^1 - γ1 - γ· v(t, x) for a function v:[0,T]× [0,1]→. 
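A minimal simulation of these controlled dynamics (a sketch only, with an arbitrary fixed band rather than the optimal one, and with the two proportional cost constants written generically as eps_b for purchases and eps_s for sales) shows how the portfolio is rebalanced only at Poisson opportunity times and only when the stock fraction has left the band.

import numpy as np

rng = np.random.default_rng(2)
mu, sigma, lam, T = 0.08, 0.25, 50.0, 1.0
eps_b, eps_s = 0.01, 0.01                        # stand-ins for the buy/sell cost constants
x_lo, x_hi = 0.45, 0.75                          # an illustrative, non-optimized no-trade band
steps = 5_000
dt = T / steps

W0, W1 = 0.5, 0.5                                # wealth in the savings account and in the stock
for _ in range(steps):
    W1 *= 1.0 + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if rng.random() < lam * dt:                  # a trading opportunity arrives
        x = W1 / (W0 + W1)
        if x < x_lo:                             # buy up to the lower boundary, paying (1 + eps_b) per unit
            M = (x_lo * (W0 + W1) - W1) / (1.0 + eps_b * x_lo)
            W1 += M
            W0 -= (1.0 + eps_b) * M
        elif x > x_hi:                           # sell down to the upper boundary, receiving (1 - eps_s) per unit
            M = (W1 - x_hi * (W0 + W1)) / (1.0 - eps_s * x_hi)
            W1 -= M
            W0 += (1.0 - eps_s) * M

gamma_ra = 2.0
print(f"terminal wealth {W0 + W1:.4f}, power utility {(W0 + W1)**(1 - gamma_ra) / (1 - gamma_ra):.4f}")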
The Hamilton-Jacobi-Bellman (HJB) equation for (<ref>) produces the following partial differential equation (PDE) for v: 1 = v (T, x), 0 = v_t(t, x) + x (1 - x) ( μ - γσ^2 x ) v_x(t, x) + σ^2x^2 (1 - x)^2/2v_x x(t, x) + ( Q(x) - λ) v(t,x) + λ (1 - γ) ·sup_y ∈ [0, 1]( v(t, y)1 - γ( ( 1 + x1 + y)^1 - γ 1_{ x≤ y } + ( 1 - x1 - y)^1 - γ 1_{ x>y }) ), where v_t, v_x, v_x x are partial derivatives and Q(x) := - γ(1-γ)σ^22(x-y_M)^2 + γ(1-γ)σ^2 y_M^22 with y_M := μγσ^2. Note that y_M is the Merton fraction, the optimal fraction in the frictionless market. There exists a unique v ∈ C^1, 2 ( [0, T] × (0, 1) ) ∩ C( [0, T] × [0, 1] ) that satisfies the following conditions: (i) v satisfies (<ref>) for (t, x) ∈ [0, T] × (0, 1). (ii) For x∈{0,1}, the map t↦ v(t,x) is continuously differentiable on [0,T] and satisfies 1 = v (T, x), 0 = v_t(t, x) + ( Q(x) - λ) v(t,x) + λ (1 - γ) ·sup_y ∈ [0, 1]( v(t, y)1 - γ( ( 1 + x1 + y)^1 - γ 1_{ x≤ y } + ( 1 - x1 - y)^1 - γ 1_{ x>y }) ). (iii) v_t(t, x), x (1 - x) v_x(t, x), x^2 (1 - x)^2 v_x x(t, x) are uniformly bounded on (t, x) ∈ [0, T] × (0, 1). See Appendix <ref>. The next theorem provides the verification. Its proof is similar to the proof of Theorem 3.5 in <cit.>. Let V be as in (<ref>), and v be as in v_classical_solution_and_properties. Then, for (t,x)∈ [0,T]× [0,1], V(t, x, w) = w^1 - γ1 - γ· v(t, x). Without loss of generality, we prove V(0, x_0, w_0) = w_0^1 - γ/1 - γ· v(0, x_0). Let M be an admissible trading strategy and (W,X) be the corresponding solution of (<ref>). Then (<ref>) and (<ref>) imply 0 ≥ W_t - W_t-= - ( M_t^+ + M_t^- ) d P_t≥ - 1+ W_t-^(0) - W_t-^(1). The above inequalities imply that there is a constant c_0∈ (0,1) such that c_0 W_t-≤ W_t ≤ W_t- for t∈ [0,T]. Then, (<ref>) and (<ref>) produce w_0 c_0^P_t e^∫_0^t μ X_s d s ℰ(σ X · B)_t ≤ W_t ≤ w_0 e^∫_0^t μ X_s d s ℰ(σ X · B)_t, where ℰ(σ X · B) is the Doléans-Dade exponential of the process (∫_0^t σ X_s dB_s )_t≥ 0. Since 0≤ X ≤ 1, Novikov's condition implies that ℰ(4(1-γ)σ X · B) is a martingale. For dd|_ℱ_t=ℰ(4(1-γ)σ X · B)_t, t∈ [0,T] and a constant b>0, [ b^P_tℰ(σ X · B)_t^2(1-γ)] ≤√([ b^2P_t] ·[ ℰ(σ X · B)_t^4(1-γ)] ) =e^b^2-1/2λ t√(^[e^2(3-4γ)(1-γ)σ^2 ∫_0^t X_s^2 ds]) ≤ e^|b^2-1/2| λ T + |(3-4γ)(1-γ)| σ^2 T. We combine (<ref>) and (<ref>) to conclude that sup_t∈ [0,T][ W_t^2 (1 - γ)]<∞. Let τ_n :=T ∧inf{ t ≥ 0 : P_t = n } for n∈ and τ_0:=0. We observe that for t∈ [τ_n,τ_n+1), if X_τ_n=0, then X_t=0. if X_τ_n=1, then X_t=1. if X_τ_n∈ (0,1), then X_t∈ (0,1). We apply Ito's formula to W_t^1 - γ/1 - γ· v(t, X_t) with (<ref>) and (<ref>), and use the fact that X_t and W_t can only jump at t=τ_n for n∈ to obtain W_τ_n+1^1 - γ1 - γ· v(τ_n + 1, X_τ_n + 1) - W_τ_n^1 - γ1 - γ· v(τ_n, X_τ_n) =∫_τ_n^τ_n+1W_s-^1 - γ1 - γ(( v_t(s,x)+ x (1 - x) ( μ - γσ^2 x ) v_x(s, x) + σ^2x^2 (1 -x)^22 v_x x(s, x) + Q(x) v(s,x) )|_x=X_s-ds + σ( (1-γ)x v(s,x)+x(1-x)v_x(s,x))|_x=X_s- dB_s ) if X_τ_n∈ (0,1), ∫_τ_n^τ_n+1W_s-^1 - γ1 - γ ( v_t(s,0) + Q(0)v(s,0)) ds if X_τ_n=0, ∫_τ_n^τ_n+1W_s-^1 - γ1 - γ( ( v_t(s,1) + Q(1)v(s,1)) ds + (1-γ)σ v(s,1) dB_s ) if X_τ_n=1, + W_τ_n+1^1 - γ1 - γ· v(τ_n + 1, X_τ_n + 1)-W_τ_n+1-^1 - γ1 - γ· v(τ_n + 1, X_τ_n + 1-). 
Since lim_n→∞τ_n = T almost surely, the above expression produces W_T^1 - γ1 - γ· v(T, X_T)- w_0^1 - γ1 - γ· v(0, x_0) =∑_n=0^∞(W_τ_n+1^1 - γ1 - γ· v(τ_n + 1, X_τ_n + 1) - W_τ_n^1 - γ1 - γ· v(τ_n, X_τ_n) ) =∫_0^T W_s-^1 - γ1 - γ(( v_t(s,x)+ x (1 - x) ( μ - γσ^2 x ) v_x(s, x) + σ^2x^2 (1 -x)^22 v_x x(s, x) + Q(x) v(s,x) ) 1_{ x∈(0,1)} + (v_t(s,x)+Q(x)v(s,x)) 1_{ x∈{0,1}})|_x=X_s- ds +∫_0^T W_s-^1 - γ1 - γ( σ( (1-γ)xv(s,x)+x(1-x)v_x(s,x)) 1_{ x∈(0,1)} + (1-γ)σ v(s,1) 1_{x=1}) |_x=X_s- dB_s + ∑_0<s≤ T(W_s^1 - γ1 - γ· v(s, X_s) - W_s-^1 - γ1 - γ· v(s, X_s-) ). The stochastic integral term above is a martingale due to v_classical_solution_and_properties (iii) and (<ref>). The sum of jumps term above can be written as ∫_0^T W_s-^1 - γ1 - γ( v(s, y)( ( 1 + x1 + y)^1 - γ 1_{ x≤ y } + ( 1 - x1 - y)^1 - γ 1_{ x>y }) - v(s,x) )|_(x,y)=(X_s-, Y_s) dP_s, where Y_s:=X_s-W_s-+M_s/W_s-- M_s^+ - M_s^-. Since (P_t-λ t)_t∈ [0,T] is a martingale, v_classical_solution_and_properties (iii) and (<ref>) imply that the expected value of the above expression is [ ∫_0^T W_s-^1 - γ1 - γλ( v(s, y) ( ( 1 + x1 + y)^1 - γ 1_{ x≤ y } + ( 1 - x1 - y)^1 - γ 1_{ x>y }) - v(s,x) )|_(x,y)=(X_s-, Y_s) ds ]. We combine these observations to obtain [ W_T^1 - γ1 - γ· v(T, X_T) ] - w_0^1 - γ1 - γ· v(0, x_0) =∫_0^T W_s-^1 - γ1 - γ(( v_t(s,x)+ x (1 - x) ( μ - γσ^2 x ) v_x(s, x) + σ^2x^2 (1 -x)^22 v_x x(s, x)+ (Q(x)-λ) v(s,x) + λ v(s, y) ( ( 1 + x1 + y)^1 - γ 1_{ x≤ y } + ( 1 - x1 - y)^1 - γ 1_{ x>y }) ) 1_{ x∈(0,1)} + (v_t(s, x) + ( Q(x) - λ) v(s,x) + λ v(s, y) ( ( 1 + x1 + y)^1 - γ 1_{ x≤ y } + ( 1 - x1 - y)^1 - γ 1_{ x>y }) ) 1_{ x∈{0,1}}) |_(x,y)=(X_s-, Y_s)ds. The above equality and v_classical_solution_and_properties imply that for any admissible trading strategy M, [ W_T^1 - γ1 - γ] ≤w_0^1 - γ1 - γ· v(0, x_0). To complete the proof, we construct an optimal strategy M̂ that satisfies the equality in (<ref>). We observe that the following map is continuous on (t, x, y) ∈ [0, T] × [0, 1]^2: (t, x, y) ↦ v(t, y)1 - γ( ( 1 + x1 + y)^1 - γ1_{ x≤ y } + ( 1 - x1 - y)^1 - γ1_{ x>y }). Then, due to meas_lem, there exists a measurable function ŷ: [0,T] × [0,1] → [0, 1] such that ŷ(t, x) ∈_y ∈ [0, 1]( v(t, y)1 - γ( ( 1 + x1 + y)^1 - γ1_{ x≤ y } + ( 1 - x1 - y)^1 - γ 1_{ x>y }) ). We define a measurable function m:[0,T]× [0,∞)× [0,1] → as m(t, w,x) := w (ŷ(t, x) - x)1 + ŷ(t, x)· 1_{ x≤ŷ(t, x) } + w (ŷ(t, x) - x)1 - ŷ(t, x)· 1_{ x>ŷ(t, x) }. Let (Ŵ, X̂) be the solution of the SDEs in (<ref>) with M_t = m(t,W_t-,X_t-), and M̂_t := m(t,Ŵ_t-,X̂_t-). By construction, we have ŷ(t, X̂_t-)= X̂_t-Ŵ_t- + M̂_tŴ_t- - M̂_t^+ - M̂_t^-. Then (<ref>), (<ref>) and v_classical_solution_and_properties produce (<ref>) with the equality. Therefore, we conclude (<ref>) and the optimality of M̂. The next lemma shows that x↦v(t,x)1-γ is strictly concave and v has uniform bounds independent of , and λ. To treat the concavity part, we define : [0,T]× ([0,∞)^2∖{(0,0)}) → as Ṽ(t,a,b):=V(t, ba+b,a+b). We notice that Ṽ(t,a,b) is the value function of our control problem with W_t^(0)=a and W_t^(1)=b. (i) For t∈ [0,T), the maps (a,b)↦Ṽ(t,a,b) and x↦v(t,x)1-γ are strictly concave. (ii) There are constants v≥v>0 independent of , and λ such that v≤ v(t, x) ≤v for (t, x) ∈ [0, T] × [0, 1]. (i) This part of the proof is essentially the same as the proof of Proposition 3.6 in <cit.>. (ii) Let M be an admissible trading strategy and (W,X) be the corresponding solution of (<ref>). 
Then the right-hand side inequality in (<ref>) implies W_T^1-γ1-γ≤w_0^1-γ1-γ e^∫_0^T Q(X_s)dsℰ((1-γ)σ X · B)_T, where Q is defined in (<ref>). Since 0≤ X ≤ 1, Novikov's condition implies that ℰ((1-γ)σ X · B) is a martingale. Then (<ref>) implies that for dd|_ℱ_T=ℰ((1-γ)σ X · B)_T, [W_T^1-γ1-γ] ≤w_0^1-γ1-γ·^[e^∫_0^T Q(X_s)ds]. The definition of V in (<ref>), Verification_v and (<ref>) produce the following inequalities: v(0,x_0)≤ e^ Q _∞ T for 0<γ<1, v(0,x_0)≥ e^ - Q _∞ T for γ>1. Since M_s = 0 for all s ∈ [0, T] is an admissible strategy, we have w_0^1-γ1-γ· v(0,x_0)=V(0,x_0,w_0) ≥[ 11-γ( (1-x_0)w_0+x_0 w_0 e^(μ-σ^2/2)T + σ B_T)^1-γ] ⟹ v(0,x_0)1-γ≥11-γ[ ( 1-x_0+x_0 e^(μ-σ^2/2)T + σ B_T)^1-γ]. The following inequalities can be checked easily: If 0<γ<1, then ( 1-x+x a)^1-γ≥ 1-x + x a^1-γ for x∈ [0,1] and a>0. If γ>1, then ( 1-x+x a)^1-γ≤ 1+ a^1-γ for x∈ [0,1] and a>0. We combine (<ref>), (<ref>) and [ e^(1-γ)(μ-σ^2/2)T + (1-γ)σ B_T]= e^(1-γ)(μ-γσ^2/2)T to obtain v(0,x_0)≥ 1-x_0 + x_0 e^(1-γ)(μ-γσ^2/2)T≥ e^-|(1-γ)(μ-γσ^2/2)T| for 0<γ<1, v(0,x_0)≤ 1+ e^(1-γ)(μ-γσ^2/2)T≤ 1+ e^|(1-γ)(μ-γσ^2/2)T| for γ>1. We check that the inequalities in (<ref>) and (<ref>) still hold after replacing v(0,x_0) by v(t,x). § THE OPTIMAL TRADING STRATEGY In this section, we characterize the optimal trading strategy in terms of the no-trade region. We start with the construction of the candidate boundary points and of the no-trade region. For each t ∈ [0, T), there exist 0 ≤(t) ≤(t) ≤ 1 such that {(t) } = _y ∈ [0, 1]( v(t, y)(1 - γ) ( 1 + y )^1 - γ), {(t) } = _y ∈ [0, 1]( v(t, y)(1 - γ) ( 1 - y )^1 - γ). To be more specific, the following statements hold: (i) The map y ↦v(t, y)/(1 - γ) ( 1 + y )^1 - γ strictly increases on y∈ [0,(t)] and decreases on y∈ [(t),1]. If 0<(t)<1, then (t) satisfies v_x(t, (t))1 - γ = v(t, (t))1 + (t). (ii) The map y ↦v(t, y)/(1 - γ) ( 1 - y )^1 - γ strictly increases on y∈ [0,(t)] and decreases on y∈ [(t),1]. If 0<(t)<1, then (t) satisfies v_x(t, (t))1 - γ = - v(t, (t))1 - (t). (iii) For (t, x) ∈ [0, T) × [(t), (t)], - 1 - v≤v_x(t, x)1 - γ≤ v with v appears in v_concave. Recall Ṽ in (<ref>). Due to Verification_v, the following equation holds: Ṽ(t,1-(1+)η,η) = v(t,y)(1-γ)(1+ y)^1-γ|_y=η/1-η for η∈ [0,11+]. v_concave (i) implies that the map η↦Ṽ(t,1-(1+)η,η) is strictly concave on η∈ [0,11+]. Let η(t):=_0≤η≤1/1+Ṽ(t,1-(1+)η,η) be the unique maximizer. Since the map η↦η1-η strictly increases on η∈ [0,11+], the definition of η(t) and (<ref>) imply that the left-hand side equation of (<ref>) holds with (t)=η(t)1-η(t) and the statements in (i) hold. Similarly, the following equation holds: Ṽ(t,1-(1-)η,η) = v(t,y)(1-γ)(1- y)^1-γ|_y=η/1+η for η∈ [0,11-]. The strict concavity of η↦Ṽ(t,1-(1-)η,η) ensures the existence of the unique maximizer η(t):=_0 ≤η≤1/1-Ṽ(t,1-(1-)η,η). Then, the right-hand side equation of (<ref>) holds with (t)=η(t)1+η(t) and the statements in (ii) hold. We observe that (i) and (ii) imply - v(t, x)1 - x≤v_x(t, x)1 - γ≤ v(t, x)1 + x for x∈ [(t),(t)]. Then, we conclude (iii) by this observation and v_concave (ii). It only remains to check (t)≤(t). The inequality holds when (t)=1. Suppose that (t)<1. Then, (ii) implies v_x(t, (t))1 - γ≤ - v(t, (t))1 - (t)≤ v(t, (t))1 + (t), where the second inequality is due to the positivity of v. This observation and (i) produce (t)≤(t). In the proof of Verification_v, we construct the optimal trading strategy via ŷ in (<ref>). The next theorem explicitly characterizes ŷ(t,x) in terms of (t) and (t) as defined in Boundaries_of_NT_region. 
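In implementation terms, the characterization that follows says that at each trading opportunity the investor simply clamps the current fraction to the band; a minimal sketch (with placeholder boundary values, since in practice the time-dependent boundaries must be computed numerically from v) is:

def rebalance_target(x, x_lo, x_hi):
    # optimal post-trade fraction: project x onto the no-trade band [x_lo, x_hi]
    return min(max(x, x_lo), x_hi)

# placeholder boundaries; the true boundaries solve the two one-dimensional
# maximizations defining them in the proposition above
x_lo, x_hi = 0.45, 0.70
for x in (0.10, 0.55, 0.90):
    print(x, "->", rebalance_target(x, x_lo, x_hi))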
For fixed t ∈ [0, T), the argmax in (<ref>) is a singleton and ŷ is ŷ(t, x) = (t) if x∈ [0,(t)) x if x∈ [(t),(t)] (t) if x∈ ((t),1] where (t) and (t) are determined in Boundaries_of_NT_region. We rephrase the maximization in (<ref>) as max_y ∈ [0, 1]( v(t, y)1 - γ( ( 1 + x1 + y)^1 - γ1_{ x≤ y } + ( 1 - x1 - y)^1 - γ 1_{ x>y }) ) = max{max_y ∈ [x, 1]( v(t, y)1 - γ( 1 + x1 + y)^1 - γ), max_y ∈ [0, x]( v(t, y)1 - γ( 1 - x1 - y)^1 - γ) }. Using Boundaries_of_NT_region, we evaluate max_y ∈ [x, 1]( v(t, y)1 - γ( 1 + x1 + y)^1 - γ) = v(t, x)1 - γ if x ≥(t) v(t, (t))1 - γ( 1 + x1 + (t))^1 - γ if x < (t) , max_y ∈ [0, x]( v(t, y)1 - γ( 1 - x1 - y)^1 - γ) = v(t, x)1 - γ if x ≤(t) v(t, (t))1 - γ( 1 - x/1 - (t))^1 - γ if x >(t) . Since (t) ≤(t), (<ref>) and (<ref>) imply max_y ∈ [0, 1]( v(t, y)1 - γ( ( 1 + x1 + y)^1 - γ1_{ x≤ y } + ( 1 - x1 - y)^1 - γ 1_{ x>y }) ) = v(t, (t))1 - γ( 1 + x1 + (t))^1 - γ if x ∈ [0, (t)) v(t, x)1 - γ if x ∈ [(t), (t)] v(t, (t))1 - γ( 1 - x1 - (t))^1 - γ if x ∈ ((t), 1] and we conclude that the corresponding unique maximizer is as in (<ref>). Our next task is to provide stochastic representations of v and v_x. One may apply the Feynman-Kac formula to PDE (<ref>) directly and obtain the stochastic representations. In that case, the representations are written in terms of a stochastic process which is not explicit.[Indeed, the SDE in (<ref>) does not have an explicit solution.] More detailed analysis in the later section requires expressions for v_xx, v_xxx, v_xt, v_xxt, and it would be useful to obtain representations in terms of explicit stochastic processes. For this purpose, we define A_s, t, Y_s^(t, x) and Z_s^(t, x) for x∈ [0,1] and 0≤ t ≤ s ≤ T as A_s, t := e^(μ-σ^2/2)(s - t) + σ (B_s - B_t) > 0, Y_s^(t, x) := x A_s, tx A_s, t + 1 - x∈ [0, 1], Z_s^(t, x) := ( x A_s, t + 1 - x )^1 - γ > 0. Then we observe that Z_s^(t, x) = ( 1 - x1 - Y_s^(t, x))^1 - γ for x∈ [0,1), and A_s, t and Y_s^(t, x) solve the following SDEs: dA_s,t = μ A_s,t ds + σ A_s,t dB_s, A_t,t=1, dY_s^(t,x) = Y_s^(t,x)(1-Y_s^(t,x))(μ-σ^2 Y_s^(t,x))ds + σ Y_s^(t,x)(1-Y_s^(t,x)) dB_s, Y_t^(t,x)=x. The following lemma is used to justify our later applications of the Leibniz integral rule. For nonnegative integers n and m and nonnegative constants k and l, max_0≤ t ≤ s ≤ T𝔼[ max_0≤ x≤ 1( | ∂^m Z_s^(t, x)∂ x^m |^k·| ∂^n Y_s^(t, x)∂ x^n |^l) ] < ∞. For n∈, direct computations produce ∂^n Y_s^(t, x)∂ x^n = n! A_s,t(1-A_s,t)^n-1(x A_s,t+1-x)^n+1, ∂^n Z_s^(t, x)∂ x^n =(1-γ)(-γ)⋯ (2-γ-n)(A_s,t-1)^n (x A_s,t+1-x)^1-γ-n. Observe that (x A+1-x)^c ≤ 1+ A^c for c∈, x∈ [0,1], A>0, |A-1|^c ≤ 1+ A^c for c≥ 0, A>0, 𝔼 [A_s, t^c] ≤exp( ( |c (μ-σ^22)| + c^2σ^22) T ) for c∈, 0≤ t≤ s≤ T. The expression in (<ref>) and the inequalities in (<ref>) produce (<ref>). Let A_s, t, Y_s^(t, x) and Z_s^(t, x) be defined as in (<ref>), and let L(t,x) be defined by L(t, x) := v(t, (t)) ( 1 + x1 + (t))^1 - γ if x ∈ [0, (t)], v(t, x) if x ∈ ((t), (t)), v(t, (t)) ( 1 - x1 - (t))^1 - γ if x ∈ [(t), 1]. (i) For (t,x)∈ [0,T]× [0,1], v has the following representation: v(t, x) = e^- λ (T - t)[ Z_T^(t, x)] + λ∫_t^T e^- λ (s - t)[ Z_s^(t, x) L( s, Y_s^(t, x)) ] d s. (ii) For (t,x)∈ [0,T) × (0,1), the function L is continuously differentiable with respect to x and L_x(t,x) = (1 - γ) v(t, (t))1 + x( 1 + x1 + (t))^1 - γ if x∈ (0,(t)], v_x( t, x ) if x ∈ ( (t), (t) ), - (1 - γ) v(t, (t))1 - x( 1 - x1 - (t))^1 - γ if x ∈ [(t), 1). 
For (t,x)∈ [0,T)× (0,1), v_x(t,x) has the following representation: v_x(t, x) = e^- λ (T - t)[ ∂ Z_T^(t, x)∂ x] + λ∫_t^T e^- λ (s - t)[ Z_s^(t, x) L_x ( s, Y_s^(t, x) ) ∂ Y_s^(t, x)∂ x + ∂ Z_s^(t, x)∂ x L( s, Y_s^(t, x)) ] d s. Furthermore, v_x(t, 0) := lim_x ↓ 0 v_x(t, x) and v_x(t, 1) := lim_x ↑ 1 v_x(t, x) are well-defined and finite. (i) We observe that (<ref>) and (<ref>) imply L(t,x)=(1-γ) sup_y ∈ [0, 1]( v(t, y)1 - γ( ( 1 + x1 + y)^1 - γ 1_{ y ∈ [x, 1] } + ( 1 - x1 - y)^1 - γ 1_{ y ∈ [0, x) }) ). Let ṽ(t,x):=e^-λ t v(t,x)(1-x)^1-γ for (t,x)∈ [0,T]× [0,1). Then, (<ref>) and (<ref>) imply 0 = ṽ_t(t, x) + x (1 - x) ( μ - σ^2 x ) ṽ_x(t, x) + σ^2/2 x^2 (1 - x)^2ṽ_x x(t, x) + e^-λ tλ L(t,x)(1-x)^1-γ, e^-λ T(1-x)^1-γ = ṽ (T, x). Let x∈ [0,1) fixed. We apply Ito's formula to ṽ(s,Y_s^(t,x)) and use (<ref>) and (<ref>) to produce e^-λ T(1-Y_T^(t,x))^1-γ + ∫_t^T λ e^-λ s L(s,Y_s^(t,x))(1-Y_s^(t,x))^1-γ ds =ṽ(t,x)+ ∫_t^Tσ (1-Y_s^(t,x))Y_s^(t,x)ṽ_x (s, Y_s^(t,x)) dB_s. Observe that the expectation of the stochastic integral term above is zero, because [ ∫_t^T ( (1-Y_s^(t,x))Y_s^(t,x)ṽ_x (s, Y_s^(t,x)))^2 ds] = ∫_t^T [ ( e^-λ s·(1-Y_s^(t,x))Y_s^(t,x) Z_s^(t,x) v_x (s, Y_s^(t,x)) +(1-γ)Y_s^(t,x) Z_s^(t,x) v(s, Y_s^(t,x))(1-x)^1-γ)^2 ] ds<∞, where the equality is due to the definition of ṽ and (<ref>), and the inequality is due to v_classical_solution_and_properties (iii) and YZ_bound1. Therefore, (<ref>) and (<ref>) produce (<ref>) for (t,x)∈ [0,T]× [0,1). Since v∈ C ( [0, T] × [0, 1] ), to complete the proof, it is enough to check that lim_x↑ 1( e^- λ (T - t)[ Z_T^(t, x)] + λ∫_t^T e^- λ (s - t)[ Z_s^(t, x) L( s, Y_s^(t, x)) ] d s ) = e^- λ (T - t)[ Z_T^(t, 1)] + λ∫_t^T e^- λ (s - t)[ Z_s^(t, 1) L( s, Y_s^(t, 1)) ] d s. Indeed, YZ_bound1 and ‖ L ‖_∞<∞ allow us to apply the dominated convergence theorem above. (ii) We differentiate (<ref>) with respect to x and obtain (<ref>) for x∈ (0,1) ∖{(t),(t) }, and the continuity at x∈{(t),(t) } is due to Boundaries_of_NT_region. By Boundaries_of_NT_region (iii) and (<ref>), ‖ L_x ‖_∞≤ C ( + ) for a constant C>0. We take derivative with respect to x in (<ref>), and put the derivative inside of the expectation (YZ_bound1, ‖ L ‖_∞<∞, and (<ref>) allow us to do this) to obtain (<ref>). Finally, lim_x ↓ 0 v_x(t, x) and lim_x ↑ 1 v_x(t, x) are well-defined because x ↦v(t, x)1 - γ is strictly concave by v_concave (i), and these limits are finite due to YZ_bound1, ‖ L ‖_∞<∞ and (<ref>). The next proposition presents properties about the boundaries of the no-trade region. Recall that we denote the Merton fraction as y_M := μ/γσ^2. (i) Let t ∈ [0,T). If 0<y_M, then (t)>0. If y_M<1, then (t)<1. (ii) If 0<y_M<1 and at least one of and is strictly positive, then (t) < (t) for t∈ [0,T). (iii) If 0<y_M and t∈ [0, T] is the solution of the equation (if a solution doesn't exist, we set t=0) e^μ T = μ (1 + ) ∫_t^T e^(μ + λ) s( e^- λ T + λ∫_s^T e^- λ u v(u, (u))( 1 + (u) )^1 - γ d u ) d s, then (t)>0 for t∈ [0,t) and (t)=0 for t∈ [t,T). If y_M<1 and t∈ [0, T] is the solution of the equation (if a solution doesn't exist, we set t=0) = (γσ^2 - μ) ∫_t^T e^(γμ - γ(1+γ)σ^2/2)(T-s)( e^- β (T-s) + λ∫_s^T e^- β (u-s) v(u, (u)) ( 1 - 1 - (u))^1 - γ d u ) d s, where β:= λ-(1-γ)μ + γ(1-γ) σ^22, then (t)<1 for t∈ [0,t) and (t)=1 for t∈ [t,T). Figure <ref> illustrates the no-trade region and t and t in Property_of_NT. (i) Assume that 0<y_M (y_M<1 case can be treated similarly). Boundaries_of_NT_region implies that v_x(t, 0)1 - γ + v(t, 0) > 0 (t) > 0. 
The expressions of Y_s^(t,x) and Z_s^(t, x) in (<ref>), L in (<ref>) and L_x in (<ref>) imply lim_x↓ 0 Y_s^(t,x)=0, lim_x↓ 0∂ Y_s^(t, x)∂ x=A_s,t, lim_x↓ 0 Z_s^(t, x) =1, lim_x↓ 0∂ Z_s^(t, x)∂ x =(1-γ) (A_s,t - 1 ), lim_x↓ 0 L(t,x)= v(t, (t))( 1 + (t) )^1 - γ 1_{(t) > 0 } + v(t,0) 1_{(t) = 0 }, L_x(t, 0):=lim_x↓ 0 L_x(t, x) = (1-γ) v(t, (t))( 1 + (t) )^1 - γ 1_{(t) > 0 } + v_x (t,0) 1_{(t) = 0 < (t) } - (1-γ) v(t, 0) 1_{(t) = 0 }, where we use v_x(t, 0) := lim_x ↓ 0 v_x(t, x) in vvx_representation (ii). v>0 and (<ref>) imply L(t,0) ≥ v(t,0) 1_{(t) = 0 }, L_x(t, 0)1-γ≥ - v(t,0) 1_{(t) = 0 }. We take limit x↓ 0 in (<ref>) and (<ref>) and apply the dominated convergence theorem (justified by YZ_bound1) to obtain v_x(t, 0)1 - γ + v(t, 0) = e^- λ (T - t) [ A_T,t - 1+ ] + λ∫_t^T e^- λ (s - t)( [ A_s,t] L_x(s,0)1-γ + [ A_s,t-1+ ] L(s,0) ) ds ≥ e^- λ (T - t)( e^μ (T - t) - 1+) + λ∫_t^T e^- λ (s - t)( e^μ (s - t) - 1 ) (1 - ) v(s,0) 1_{(s) = 0 } d s > 0, where the inequalities are due to (<ref>), v>0 and μ>0 (implied by y_M>0). Therefore, by (<ref>) and the above inequality, we conclude that (t)>0. (ii) Suppose that 0<y_M<1 and at least one of and is strictly positive. By part (i) result, we have (t)<1 and (t)>0. In case (t)=0 or (t)=1, then (t)<(t). Therefore, it remains to consider the case that 0<(t) ≤(t)<1. By Boundaries_of_NT_region and v>0, we have v_x(t, (t))1 - γ = v(t, (t))1 + (t) > - v(t, (t))1 - (t) = v_x(t, (t))1 - γ. The above inequality and the strict concavity of x↦v(t,x)1-γ in v_concave imply (t)<(t). (iii) Assume that 0<y_M (y_M<1 case can be treated similarly). Boundaries_of_NT_region implies that v_x(t, 0)1 - γ - v(t, 0) > 0 (t) > 0. By the same way as in the proof of part (i), we take limit x↓ 0 in (<ref>) and (<ref>) and obtain v_x(t, 0)1 - γ - v(t, 0) = e^- λ (T - t) [ A_T,t - 1- ] + λ∫_t^T e^- λ (s - t)( [ A_s,t] L_x(s,0)1-γ + [ A_s,t-1-] L(s,0) ) ds = e^- λ (T - t)( e^μ (T - t) - 1-) + λ∫_t^T e^- λ (s - t)( e^μ (s - t) - 1 ) (1 + ) v(s, (s))( 1 + (s) )^1 - γ d s + λ∫_t^T e^- λ (s - t) e^μ (s - t)( v_x(s, 0)1 - γ - v(s, 0) ) 1_{(s) = 0 } d s, where the second equality is due to (t)>0 by part (i). Let f, g:[0,T]→ be defined as f(t):= e^- λ t( v_x(t, 0)1 - γ - v(t, 0) ), g(t):=λ e^-λ t v(t, (t))( 1 + (t) )^1 - γ. Then, we can rewrite (<ref>) as f(t) = e^- λ T( e^μ (T - t) - 1-) + (1 + )∫_t^T( e^μ (s - t) - 1 ) g(s) d s + λ∫_t^T e^μ (s - t) f(s) 1_{ f(s) ≤ 0 } d s, where we use the equivalence of f(t)≤ 0 and (t)=0 by (<ref>). We differentiate above to obtain f'(t) = - μ(f(t) + (1 + ) ( e^- λ T + ∫_t^T g(s) d s ) ) - λ f(t) 1_{ f(t) ≤ 0 }. We define t as t := inf{ t ∈ [0, T] : f(s) ≤ 0 for all s∈ [t,T] }, then the set in (<ref>) is non-empty because f(T) = - e^- λ T≤ 0. Since μ>0 (due to y_∞>0) and g>0, the form of ODE (<ref>) and definition of t above imply f(t)>0 for t∈ [0,t). This observation and (<ref>) imply (t)>0 for t∈ [0,t) and (t)=0 for t∈ [t,T). To determine t, it is enough to observe that the solution of ODE (<ref>) for t∈ [t, T) is e^(μ + λ) t f(t) = μ (1 + ) ∫_t^Te^(μ + λ) s( e^- λ T + ∫_s^T g(u) d u ) d s - e^μ T. If there is no solution to (<ref>), then f(t)> 0 for t∈ [0,T) and t=0. If there is a solution to (<ref>), then such a solution should be unique and f(t)=0. § ASYMPTOTIC RESULTS In this section, we provide asymptotic results to analyze the utility maximization problem when both transaction costs and search frictions are small. For convenience, we assume throughout this section that = =:ϵ∈ (0,1) and 0<y_M<1. 
We focus on the asymptotics of the no-trade region and the value function as ϵ↓ 0 and λ→∞ simultaneously, and then compare these results with the already-known asymptotic results in the benchmark cases of transaction costs only (λ=∞) and search frictions only (ϵ=0). Our heuristic inspection using the results in <cit.> implies that the limits as ϵ↓ 0 and λ→∞ simultaneously do not exist in general. It turns out that a specific scaling relation λ=c ϵ^-2/3 for c>0 is relevant to consider, as explained below. For the log utility and fixed λ<∞, Section 5 in <cit.> provides asymptotic results as ϵ↓ 0. According to Proposition 5.5 and Proposition 5.7 in <cit.>, one can check the following limits: lim_λ→∞( lim_ϵ↓ 0“no-trade region width"λϵ)= 2σ^2≠ 0 = lim_ϵ↓ 0( lim_λ→∞“no-trade region width"λϵ), lim_λ→∞( lim_ϵ↓ 0“decrease of value"√(λ)ϵ)= σ y_M (1-y_M)(T-t)√(2)≠ 0 = lim_ϵ↓ 0( lim_λ→∞“decrease of value"√(λ)ϵ). Therefore, lim_ϵ↓ 0, λ→∞“no-trade region width"λϵ and lim_ϵ↓ 0, λ→∞“decrease of value"√(λ)ϵ do not exist in general. On the other hand, it is well known in the literature (see <cit.>) that in the case of transaction costs only (λ=∞), the asymptotics are as follows: “no-trade region width" = O(ϵ^1/3), “decrease of value" =O( ϵ^2/3). In the case of search frictions only (ϵ=0, see <cit.>), the decrease of value is O(1λ) and the width of the no-trade region is zero. We combine this observation with (<ref>) and (<ref>) and attempt to match the orders. We naturally conjecture that a suitable relation between ϵ and λ for the asymptotics would satisfy λϵ∼ϵ^1/3 and √(λ)ϵ∼ϵ^2/3∼1λ ⟹ λ∼ϵ^-2/3. Motivated by the above discussion, we make the following assumption, which holds throughout this section. (i) For c>0 and ϵ∈ (0,1), = =ϵ and λ=c ϵ^-2/3. (ii) y_M ∈ (0,1). (i) Under ass, to emphasize their dependence on ϵ∈ (0,1) (with λ dependence through the relation λ=c ϵ^-2/3), we denote v,v_x,v_xx,v_t, , , L, L_x, etc. by v^ϵ,v_x^ϵ,v_xx^ϵ,v_t^ϵ, ^ϵ, ^ϵ, L^ϵ, L_x^ϵ, etc. (ii) The case of the perfectly liquid market (ϵ=0 and λ=∞) corresponds to the classical Merton problem, and we denote the value function and optimal fraction as v^0 and y_M. (iii) In the case of search frictions only (no transaction costs, ϵ=0), we denote the value function and optimal fraction as v^SO,λ and ŷ^SO, λ to emphasize their dependence on λ. (iv) In the case transaction costs only (no search frictions, λ=∞), we denote the value function and the no-trade boundaries as v^TO,ϵ, ^TO, ϵ and ^TO, ϵ to emphasize their dependence on ϵ. As benchmarks for our asymptotic results, we present the asymptotic results for the cases of transaction costs only (see <cit.>) and search frictions only (see <cit.>). * In the case where there are no transaction costs or search frictions (ϵ = 0 and λ = ∞), the utility maximization problem becomes the classical Merton problem <cit.>. The explicit formula and HJB equation for the corresponding value function v^0 are: v^0(t) = e^Q(y_M) (T - t), v_t^0(t) + Q(y_M) v^0(t) = 0, v^0(T)=1. * In the case where there are transaction costs only (ϵ∈ (0,1) and λ=∞), the utility maximization problem becomes the problem investigated in <cit.>. The asymptotic results are as follows: ^TO, ϵ(t) =y_M + 12( 12y_M^2(1-y_M)^2γ)^1/3·ϵ^1/3 + o(ϵ^1/3), ^TO, ϵ(t) =y_M - 12( 12y_M^2(1-y_M)^2γ)^1/3·ϵ^1/3 + o(ϵ^1/3), v^TO,ϵ(t,y_M) =v^0(t) - (1-γ)γσ^28( 12y_M^2(1-y_M)^2γ)^2/3 v^0(t)(T-t) ·ϵ^2/3 + o(ϵ^2/3). 
* In the case where there are search frictions only (ϵ=0 and λ<∞), the utility maximization problem becomes the problem investigated in <cit.>. The asymptotic results are as follows: ŷ^SO, λ(t) = y_M + σ^2 y_M(1-y_M)(2y_M-1) ·1λ + o(1λ), v^SO,λ(t, y_M) = v^0(t) - (1-γ)γσ^4 y_M^2 (1-y_M)^22 v^0(t) (T-t) ·1λ + o(1λ). Notice that the no-trade region vanishes in this case, y^SO, λ(t) =^SO,λ(t)=^SO,λ(t). The following theorem is the main result of this paper. Along the parametric curve λ=c ϵ^-2/3 (see the discussion for (<ref>)), the boundaries of the no-trade region and the value function have asymptotic expansions in terms of ϵ^1/3 and ϵ^2/3, respectively. Let ass hold and a_1, a_2: (0, ∞) → (0, ∞) be defined as a_1(c) := σ y_M (1 - y_M)√(2 c)( ( 3 √(2) c^3/2γσ^3 y_M (1 - y_M) + 1 )^1/3 - 1 ), a_2(c) := γ (1-γ) σ^4 y_M^2 (1 - y_M)^24 c( ( 3 √(2) c^3/2γσ^3 y_M (1 - y_M) + 1 )^2/3 + 1 ). Then, for t ∈ [0, T), ^ϵ(t) =y_M+ a_1(c)·ϵ^1/3 + o(ϵ^1/3), ^ϵ(t) =y_M - a_1(c)·ϵ^1/3 + o(ϵ^1/3), v^ϵ(t,y_M) = v^0(t) - a_2(c)v^0(t) (T - t) ·ϵ^2/3 + o(ϵ^2/3). Alternatively, due to the relation λ=c ϵ^-2/3, the above asymptotics can be written in terms of λ: ^ϵ(t) =y_M+ √(c) a_1(c)·1√(λ) + o(1√(λ)), ^ϵ(t) =y_M- √(c) a_1(c)·1√(λ) + o(1√(λ)), v^ϵ(t,y_M) = v^0(t) - c a_2(c)v^0(t) (T - t) ·1λ + o(1λ). The proof of the theorem is postponed to Section <ref>. One of the main difficulties in the analysis is the rigorous treatment of subtle limiting behaviors that do not appear in the benchmark cases. For example, lim_ϵ↓ 0 v_xx^ϵ(t,x)/ϵ^2/3 depends on the choice of x∈ [^ϵ(t),^ϵ(t)] in our model, whereas lim_ϵ↓ 0 v_xx^TO,ϵ(t,x)/ϵ^2/3=0 for x∈ [^ϵ(t),^ϵ(t)]. We present really_used and v_xx_conv_lem to address these subtle limiting behaviors. Figure <ref> illustrates the asymptotics in Joint_limit_of_Wnt_and_Vd, and Figure <ref> shows how the coefficients in the asymptotics depend on c. We notice that these functions are monotonic in c. From (<ref>), direct computations using L'Hopital's rule produce the following limits: lim_c →∞ a_1(c) =12( 12y_M^2(1-y_M)^2γ)^1/3, lim_c →∞ a_2(c)=(1-γ)γσ^28( 12y_M^2(1-y_M)^2γ)^2/3, lim_c → 0√(c) a_1(c) =0, lim_c → 0 c a_2(c) =(1-γ)γσ^4 y_M^2 (1-y_M)^22 . Using the above limits, we can rephrase (<ref>) and (<ref>) to clarify the connection of our asypmtotics in Joint_limit_of_Wnt_and_Vd with the benchmark asymptotic results: ^TO, ϵ(t) =y_M+ ( lim_c →∞ a_1(c))·ϵ^1/3 + o(ϵ^1/3), ^TO, ϵ(t) =y_M - ( lim_c →∞ a_1(c))·ϵ^1/3 + o(ϵ^1/3), v^TO, ϵ(t,y_M) = v^0(t) - ( lim_c →∞ a_2(c)) v^0(t) (T - t) ·ϵ^2/3 + o(ϵ^2/3), ŷ^SO,λ(t) = y_M + (lim_c→ 0√(c) a_1(c) ) ·1√(λ) + o(1√(λ)), (∵ lim_c → 0√(c) a_1(c) =0) v^SO, λ(t,y_M) = v^0(t) - ( lim_c → 0 c a_2(c)) v^0(t) (T - t) ·1λ + o(1λ). Indeed, Joint_limit_of_Wnt_and_Vd bridges the benchmark asymptotics in (<ref>) and (<ref>) through the parametric relation λ=c ϵ^-2/3, where the case c=∞ corresponds to (<ref>) and the case c=0 corresponds to (<ref>). In this sense, our approach of using λ=c ϵ^-2/3 unifies the asymptotics for transaction costs and search frictions. Section 5 in <cit.> contains asymptotics for ϵ, which differ from Joint_limit_of_Wnt_and_Vd: we let ϵ↓ 0 and λ→∞ at the same time through the relation λ=c ϵ^-2/3 in Joint_limit_of_Wnt_and_Vd, whereas λ is fixed in Theorem 5.4 and Theorem 5.6 in <cit.>. It is also worth noting that the correction terms in Theorem 5.4 and Theorem 5.6 in <cit.> are not explicit (in terms of solutions of some PDEs), whereas a_1(c) and a_2(c) in Joint_limit_of_Wnt_and_Vd are explicit in terms of the model parameters. 
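As a concrete illustration of this explicitness, the following minimal Python sketch (a numerical aid only, not part of the argument) evaluates a_1(c) and a_2(c) from Joint_limit_of_Wnt_and_Vd, forms the leading-order approximations y_M ± a_1(c) ϵ^1/3 of the no-trade boundaries and the relative value reduction a_2(c)(T-t) ϵ^2/3 at x=y_M, and numerically confirms the c→∞ and c→ 0 limits against the benchmark coefficients. The parameter values and the helper names (a1, a2, rel_drop, a1_TO, a2_SO) are illustrative choices, not quantities fixed by the model.

import numpy as np

# Illustrative parameters (any mu, sigma, gamma with 0 < y_M < 1 will do).
mu, sigma, gamma, T = 0.02, 0.25, 0.5, 1.0
y_M = mu / (gamma * sigma**2)                     # Merton fraction

def a1(c):
    # leading coefficient of the half-width of the no-trade region
    k = 3.0 * np.sqrt(2.0) * c**1.5 / (gamma * sigma**3 * y_M * (1.0 - y_M))
    return sigma * y_M * (1.0 - y_M) / np.sqrt(2.0 * c) * ((k + 1.0)**(1.0 / 3.0) - 1.0)

def a2(c):
    # leading coefficient of the value reduction
    k = 3.0 * np.sqrt(2.0) * c**1.5 / (gamma * sigma**3 * y_M * (1.0 - y_M))
    return (gamma * (1.0 - gamma) * sigma**4 * y_M**2 * (1.0 - y_M)**2 / (4.0 * c)
            * ((k + 1.0)**(2.0 / 3.0) + 1.0))

# Given eps and lam, the relevant auxiliary parameter is c = lam * eps**(2/3).
eps, lam, t = 1e-3, 50.0, 0.0
c = lam * eps**(2.0 / 3.0)
y_up  = y_M + a1(c) * eps**(1.0 / 3.0)            # upper boundary, up to o(eps^(1/3))
y_low = y_M - a1(c) * eps**(1.0 / 3.0)            # lower boundary, up to o(eps^(1/3))
rel_drop = a2(c) * (T - t) * eps**(2.0 / 3.0)     # (v^0 - v^eps)/v^0 at x = y_M, up to o(eps^(2/3))
print(y_low, y_up, rel_drop)

# Consistency with the benchmark coefficients as c -> infinity and c -> 0.
a1_TO = 0.5 * (12.0 * y_M**2 * (1.0 - y_M)**2 / gamma)**(1.0 / 3.0)
a2_SO = gamma * (1.0 - gamma) * sigma**4 * y_M**2 * (1.0 - y_M)**2 / 2.0
print(a1(1e8), a1_TO)            # a1(c) -> transaction-costs-only coefficient
print(1e-8 * a2(1e-8), a2_SO)    # c * a2(c) -> search-frictions-only coefficient

For the illustrative values above one obtains c = 0.5 and a no-trade region of roughly y_M ± 0.05; the sketch deliberately takes ϵ and λ as the primitives and derives c = λϵ^2/3.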
Therefore, given model parameters μ,σ,γ,λ,ϵ, one can compute the auxiliary parameter c=λϵ^2/3 and use the formulas in Joint_limit_of_Wnt_and_Vd to estimate the optimal trading strategy and value. § PROOF OF JOINT_LIMIT_OF_WNT_AND_VD We prove Joint_limit_of_Wnt_and_Vd in this section, starting with some technical lemmas. We use Notation <ref>. Under ass, by Property_of_NT, there are t^ϵ, t^ϵ∈ [0,T) such that 0<y^ϵ(t)<1 if t∈ [0,t^ϵ) y^ϵ(t)=0 if t∈ [t^ϵ,T) , 0<y^ϵ(t)<1 if t∈ [0,t^ϵ) y^ϵ(t)=1 if t∈ [t^ϵ,T) . The following lemma estimates the location of t^ϵ and t^ϵ. Let ass hold. Then, there are ϵ_0>0 and C> C > 0 such that t^ϵ, t^ϵ∈ [ T - Cϵ, T - Cϵ] for ϵ∈ (0,ϵ_0]. We prove the inequalities for t^ϵ (t^ϵ can be treated by the same way). Considering the lower bound of v^ϵ in (<ref>), for small enough ϵ, there exists t^ϵ∈ [0,T) that satisfies (<ref>). Hence, ϵ e^μ T = μ (1 + ϵ) ∫_t^ϵ^T e^(μ + λ) s( e^- λ T + λ∫_s^T e^- λ uv^ϵ(u, ^ϵ(u))( 1 + ϵ^ϵ(u) )^1 - γ d u ) d s ≤ C μ (1 + ϵ) ∫_t^ϵ^T e^(μ + λ) s( e^- λ T + λ∫_s^T e^- λ u d u ) d s = C(1+ϵ) ( e^μ T - e^μt^ϵ), where C>0 is a constant independent of ϵ. This implies that there exists ϵ_0>0 such that t^ϵ ≤ T + 1μln( 1 - ϵC (1 + ϵ)) ≤ T- Cϵ for ϵ∈ (0,ϵ_0], where C>0 is a constant independent of ϵ and the second inequality is due to lim_x ↓ 0ln (1 - x)x = - 1. Similarly, for small enough ϵ, (<ref>) produces ϵ e^μ T ≥μ (1 + ϵ) ∫_t^ϵ^T e^(μ + λ) s e^- λ T d s = μ(1+ϵ)μ+λ e^-λ T(e^(μ+λ)T - e^(μ+λ) t^ϵ). This implies that there exists ϵ_0>0 and C>0 such that t^ϵ,λ ≥ T + 1μ+λln( 1 - (μ + λ) ϵμ (1 + ϵ)) ≥ T-Cϵ for ϵ∈ (0,ϵ_0], where the second inequality is due to lim_x ↓ 0ln (1 - x)x = - 1 and the relation λ=c ϵ^-2/3. Before we start to prove Joint_limit_of_Wnt_and_Vd, we list three technical lemmas whose proofs are provided in the appendices. Let ass hold. (i) There exist a constant C>0 independent of (t,ϵ)∈ [0,T)× (0,1) such that |^ϵ(t) -y_M |, |^ϵ(t) -y_M |, |^ϵ(t) -^ϵ(t) |≤C ϵ^1/3min{ 1, λ (T - t) }. (ii) For fixed t∈ [0,T), _ϵ↓ 0^ϵ(t) - ^ϵ(t)ϵ^1/3>0. (iii) There is ϵ_0>0 such that for (t,ϵ)∈ [0,T]× (0,ϵ_0], we have ^ϵ(t)<y_M<^ϵ(t). See Appendix <ref>. Let ass hold. (i) For (t,ϵ, x)∈ [0,T)× (0,1)× [ ^ϵ(t), ^ϵ(t)], there is C>0 independent of (t,ϵ, x) such that | v_x^ϵ(t, x) | ≤ C ϵ, | v^ϵ(t, x) - v^0(t) | ≤ C ϵ^2/3, | v_t^ϵ(t, x) - v_t^0(t) |≤ C ( 1+ 1min{ 1, λ^2 (T - t)^2 })ϵ^2/3. (ii) For fixed t∈ [0,T), lim_ϵ↓ 0(sup_x_1,x_2 ∈ [ ^ϵ(t), ^ϵ(t)]| v^ϵ(t, x_1)-v^ϵ(t, x_2)ϵ^2/3| ) =0, lim_ϵ↓ 0(sup_x_1,x_2 ∈ [ ^ϵ(t), ^ϵ(t)]| v_t^ϵ(t, x_1)- v_t^ϵ(t, x_2)ϵ^2/3| )=0. See Appendix <ref>. Let ass hold and G^ϵ (t, x):= x^2 (1 - x)^1 + γλ v_x x^ϵ(t, x)1 - γ. Let t∈ [0,T) be fixed, h(z):=e^z1+e^z, and x_ϵ∈ [ ^ϵ(t), ^ϵ(t)]. Then, G^ϵ (t, x_ϵ) + y_M^2(1-y_M)^1+γγσ^2 v^0(t) - ∫_^ϵ(t)^^ϵ(t) G^ϵ (t,h(z)) √(2 λ)2 σ e^- √(2 λ)/σ| z - z_ϵ| dz ϵ↓ 0⟶0, where ^ϵ(t):=h^-1(^ϵ(t)), ^ϵ(t):=h^-1(^ϵ(t)) and z_ϵ:= h^-1(x_ϵ). See Appendix <ref>. We now proceed to prove Joint_limit_of_Wnt_and_Vd. The proof consists of three steps. Throughout this proof, C>0 is a generic constant independent of (t,s, x,ϵ)∈ [0,T)× [t,T)×(0,1) × (0,1) (also independent of λ due to relation λ=c ϵ^-2/3) that may differ line by line. Let t∈ [0,T) be fixed. Since 0<y_M<1, Merton_fraction_inside_NT implies that for small enough ϵ>0, 0 < ^ϵ(t) < y_M < ^ϵ(t) < 1. In the end, we are interested in the limiting behaviors as ϵ↓ 0. Hence, we assume that ϵ>0 is small enough and the above inequalities hold. 
For x∈ [ ^ϵ(t), ^ϵ(t)], using L^ϵ(t,x)=v^ϵ(t,x) and (<ref>), we rewrite (<ref>) as 0 = v_t^ϵ(t, x) - v_t^0(t) + Q(x) ( v^ϵ(t, x) - v^0(t) ) + v^0(t)(Q(x) - Q(y_M)) + x (1 - x) ( μ - γσ^2 x ) v_x^ϵ(t, x) + σ^2x^2 (1 - x)^22 v_x x^ϵ(t, x) for x∈ [ ^ϵ(t), ^ϵ(t)]. Step 1. We define I^ϵ(t, x) as I^ϵ(t, x) := G^ϵ (t, x) - γ (1 - y_M)^γ - 1 v^0(t) λ (x - y_M)^2, where G^ϵ is as in v_xx_conv_lem. We multiply λ1-γ to (<ref>) and obtain 0 = λ (v_t^ϵ(t, x) - v_t^0(t) )1 - γ + Q(x) λ ( v^ϵ(t, x) - v^0(t))1 - γ + γσ^22( ( 1 - x1 - y_M)^1 - γ - 1 ) v^0(t) λ ( x - y_M)^2 + x (1 - x) ( μ - γσ^2 x ) λ v_x^ϵ(t, x)1 - γ + σ^22 (1 - x)^1 - γ I^ϵ(t, x) for x∈ [ ^ϵ(t), ^ϵ(t)]. The above equality with x=y_M, together with Merton_fraction_inside_NT (i) and really_used, produces λ (v_t^ϵ(t, y_M) - v_t^0(t) )1 - γ + Q(y_M) λ ( v^ϵ(t,y_M) - v^0(t))1 - γ + σ^22 (1 - y_M)^1 - γ I^ϵ(t, y_M)ϵ↓ 0 ⟶ 0, sup_x_1,x_2 ∈ [ ^ϵ(t), ^ϵ(t)]| I^ϵ( t, x_1 ) - I^ϵ( t, x_2 ) | ϵ↓ 0 ⟶ 0. Let h, x_ϵ, z_ϵ, ^ϵ(t),^ϵ(t) be as in v_xx_conv_lem. Note that ^ϵ(t) ≤ x_ϵ≤^ϵ(t) and ^ϵ(t) ≤ z_ϵ≤^ϵ(t). Since z∈ [ ^ϵ(t),^ϵ(t)] is equivalent to h(z)∈ [ ^ϵ(t),^ϵ(t)], the convergence in (<ref>) produces ∫_^ϵ(t)^^ϵ(t)| I^ϵ ( t, h(z)) - I^ϵ (t,x_ϵ) |√(2 λ)2 σ e^- √(2 λ)/σ| z - z_ϵ| d z ≤sup_x_1,x_2 ∈ [ ^ϵ(t), ^ϵ(t)]| I^ϵ( t, x_1 ) - I^ϵ( t, x_2 ) | ·∫_^ϵ(t)^^ϵ(t)√(2 λ)2 σ e^- √(2 λ)/σ| z - z_ϵ| d z ϵ↓ 0 ⟶ 0, where we also use the following observation for the convergence part above: ∫_^ϵ(t)^^ϵ(t)√(2 λ)2 σ e^- √(2 λ)/σ| z -z_ϵ| d z = 1- 12( e^- √(2 λ)/σ(z_ϵ-^ϵ(t)) + e^- √(2 λ)/σ(^ϵ(t)-z_ϵ)) <1. We combine v_xx_conv_lem and (<ref>) to obtain I^ϵ (t, x_ϵ) + γ(1-y_M)^γ-1v^0(t) λ (x_ϵ-y_M)^2 + y_M^2(1-y_M)^1+γγσ^2 v^0(t) - ∫_^ϵ(t)^^ϵ(t)( I^ϵ(t,x_ϵ) + γ (1 - y_M)^γ - 1 v^0(t) λ (h(z) - y_M)^2) √(2 λ)2 σ e^- √(2 λ)/σ| z - z_ϵ| dz ϵ↓ 0⟶0. Using the explicit form of the integral in (<ref>), the above convergence can be written as 12( e^- √(2 λ)/σ(z_ϵ -^ϵ(t)) + e^- √(2 λ)/σ(^ϵ(t)-z_ϵ))I^ϵ (t, x_ϵ) + γ(1-y_M)^γ-1v^0(t) J^ϵ(t,x_ϵ) ϵ↓ 0⟶0, where J^ϵ(t,x) is defined as J^ϵ(t,x):= λ (x -y_M)^2 - ∫_^ϵ(t)^^ϵ(t)λ (h(z) - y_M)^2√(2 λ)2 σ e^- √(2 λ)/σ| z - h^-1(x) | dz + σ^2 y_M^2(1-y_M)^2. In (<ref>), we substitute x_ϵ =^ϵ(t) and x_ϵ=^ϵ(t), then subtract the resulting expressions to obtain the following equation: 12( e^- √(2 λ)/σ(^ϵ(t)-^ϵ(t)) +1) ( I^ϵ (t, ^ϵ(t))- I^ϵ (t, ^ϵ(t))) +γ(1-y_M)^γ-1v^0(t) ( J^ϵ (t, ^ϵ(t))- J^ϵ (t, ^ϵ(t))) ϵ↓ 0⟶0. We combine (<ref>) and (<ref>) and conclude that J^ϵ (t, ^ϵ(t))- J^ϵ (t, ^ϵ(t)) ϵ↓ 0⟶0. Let z_M:=h^-1(y_M). Observe that (<ref>), ^ϵ(t)=h(^ϵ(t)) and ^ϵ(t)=h(^ϵ(t)) imply |^ϵ(t) - ^ϵ(t) | ≤C ϵ^1/3min{ 1, λ (T - t) }. Then, h(z_M)=y_M, h'(z)=h(z)(1-h(z)), λ=c ϵ^-2/3 and (<ref>) imply sup_z_1, z_2∈ [ ^ϵ(t),^ϵ(t)]√(λ) | h(z_1) - h(z_2) - y_M(1-y_M) ( z_1 - z_2 ) | ϵ↓ 0⟶ 0 , sup_z ∈ [ ^ϵ(t),^ϵ(t)]λ | ( h(z)- y_M )^2 -y_M^2(1-y_M)^2 (z - z_M )^2 | ϵ↓ 0⟶ 0. The limit in (<ref>) and the bound in (<ref>) produce ∫_^ϵ(t)^^ϵ(t)λ((h(z) - y_M)^2- y_M^2(1-y_M)^2 (z- z_M )^2 ) √(2 λ)2 σ e^- √(2 λ)/σ| z - z_ϵ| dz ϵ↓ 0⟶0. 
From (<ref>), we obtain the following: 0 =lim_ϵ↓ 0( J^ϵ (t, ^ϵ(t))- J^ϵ (t, ^ϵ(t)) ) =lim_ϵ↓ 0( λ(^ϵ(t)-y_M)^2 -λ(^ϵ(t)-y_M)^2 - y_M^2(1-y_M)^2 ∫_^ϵ(t)^^ϵ(t)λ (z- z_M )^2 √(2 λ)2 σ( e^- √(2 λ)/σ| z - ^ϵ(t) | - e^- √(2 λ)/σ| z - ^ϵ(t) |) dz ) =lim_ϵ↓ 0( λ(^ϵ(t)-^ϵ(t)) (^ϵ(t)+^ϵ(t)- 2y_M ) - λ y_M^2 (1-y_M)^2 (^ϵ(t)-^ϵ(t)) (^ϵ(t)+^ϵ(t)- 2z_M ) + y_M^2(1-y_M)^2 √(λ)( ^ϵ(t)+^ϵ(t)- 2z_M ) ( 1 - e^- √(2 λ)/σ( ^ϵ(t) - ^ϵ(t) ))(σ√(2) + √(λ)(^ϵ(t) - ^ϵ(t) )2) ) =lim_ϵ↓ 0(1 - e^- √(2 λ)/σ( ^ϵ(t) - ^ϵ(t) )) √(λ)(^ϵ(t)+^ϵ(t)- 2y_M ) ( √(λ)(^ϵ(t)-^ϵ(t))2 + σ y_M (1-y_M)√(2)), where the second equality is due to (<ref>), the third equality is due to integration parts, and the last equality is due to (<ref>). Merton_fraction_inside_NT (ii), (<ref>) and λ=c ϵ^-2/3 imply _ϵ↓ 0√(λ)(^ϵ(t)-^ϵ(t))>0. Therefore, (<ref>) implies lim_ϵ↓ 0( √(λ)(^ϵ(t)-y_M ) - √(λ)(y_M-^ϵ(t) ) )=0. We substitute x=y_M in (<ref>) and x_ϵ =y_M in (<ref>) and use integration by parts to obtain 0 =lim_ϵ↓ 0(J^ϵ(t,y_M)+ y_M^2(1-y_M)^2∫_^ϵ(t)^^ϵ(t)λ (z-z_M)^2√(2 λ)2 σ e^- √(2 λ)/σ| z - z_M | dz - σ^2 y_M^2(1-y_M)^2 ) =lim_ϵ↓ 0( J^ϵ (t, y_M) - y_M^2 (1-y_M)^2 ( e^- √(2 λ)/σ( z_M - ^ϵ(t) )( λ(z_M - ^ϵ(t))^22 +σ√(λ)(z_M - ^ϵ(t))√(2) + σ^22) +e^- √(2 λ)/σ( ^ϵ(t) -z_M )( λ(^ϵ(t) -z_M)^22 +σ√(λ)(^ϵ(t) -z_M)√(2) + σ^22))) =lim_ϵ↓ 0( J^ϵ (t, y_M) - (e^- √(2 λ)/σ( z_M - ^ϵ(t) ) + e^- √(2 λ)/σ( ^ϵ(t) -z_M )) ·( λ( ^ϵ(t) - ^ϵ(t))^28 +σ y_M(1-y_M)√(λ)( ^ϵ(t) - ^ϵ(t))2√(2) + σ^2y_M^2 (1-y_M)^22) ), where we apply (<ref>), (<ref>) and (<ref>) to obtain the last equality. We substitute x_ϵ =y_M in (<ref>) and combine with (<ref>) to obtain I^ϵ (t, y_M) +γ(1-y_M)^γ-1v^0(t) ( ( √(λ)( ^ϵ(t) - ^ϵ(t))2 + σ y_M(1-y_M)√(2))^2 +σ^2 y_M^2 (1-y_M)^22) ϵ↓ 0⟶0. By (<ref>), the above limit and λ=c ϵ^-2/3, we conclude 0 =lim_ϵ↓ 0( c (v_t^ϵ(t, y_M) - v_t^0(t) )(1 - γ)ϵ^2/3 + c Q(y_M)( v^ϵ(t,y_M) - v^0(t))(1 - γ)ϵ^2/3 - γσ^2 v^0(t)2( ( √(c)( ^ϵ(t) - ^ϵ(t))2ϵ^1/3 + σ y_M(1-y_M)√(2))^2 +σ^2 y_M^2 (1-y_M)^22) ). Step 2. We inspect the integrals of the terms in (<ref>) with respect to x, from ^ϵ(t) to ^ϵ(t). By the mean value theorem, there exist x_ϵ^*, x_ϵ^**∈ [^ϵ(t),^ϵ(t)] such that ∫_^ϵ(t)^^ϵ(t) v_t^ϵ(t, x) - v_t^0(t)(1-γ)ϵ dx - v_t^ϵ(t, y_M) - v_t^0(t)(1-γ)ϵ^2/3·^ϵ(t) - ^ϵ(t)ϵ^1/3 =v_t^ϵ(t, x_ϵ^*) - v_t^ϵ(t, y_M)(1-γ)ϵ^2/3·^ϵ(t) - ^ϵ(t)ϵ^1/3ϵ↓ 0⟶0, ∫_^ϵ(t)^^ϵ(t)Q(x) ( v^ϵ(t, x) - v^0(t))(1-γ)ϵ dx - Q(y_M)(v^ϵ(t, y_M) - v^0(t) )(1-γ)ϵ^2/3·^ϵ(t) - ^ϵ(t)ϵ^1/3 =(Q(x_ϵ^**)(v^ϵ(t, x_ϵ^**) - v^0(t) )(1-γ)ϵ^2/3 - Q(y_M)(v^ϵ(t, y_M) - v^0(t) )(1-γ)ϵ^2/3) ^ϵ(t) - ^ϵ(t)ϵ^1/3ϵ↓ 0⟶0, where the convergences are due to Merton_fraction_inside_NT (i) and really_used. By (<ref>), we have ∫_^ϵ(t)^^ϵ(t)Q(x) - Q(y_M)(1-γ)ϵ dx + γσ^224(^ϵ(t)-^ϵ(t)ϵ^1/3)^3 = - γσ^26( (^ϵ(t)-y_Mϵ^1/3)^3 - (^ϵ(t)-y_Mϵ^1/3)^3 - 14(^ϵ(t)-^ϵ(t)ϵ^1/3)^3 ) ϵ↓ 0⟶ 0. By (<ref>) and Merton_fraction_inside_NT (i), we obtain |∫_^ϵ(t)^^ϵ(t)x (1 - x) ( μ - γσ^2 x ) v_x^ϵ(t, x)(1-γ)ϵ dx | ≤ C | ^ϵ(t)-^ϵ(t) | ϵ↓ 0⟶ 0. By integration by parts and Boundaries_of_NT_region, ∫_^ϵ(t)^^ϵ(t)σ^2 x^2 (1 - x)^2 v_x x^ϵ, λ(t, x)2(1-γ)ϵ dx = - σ^2^ϵ(t)^2(1-^ϵ(t))^2 v^ϵ(t, ^ϵ(t))2(1-ϵ^ϵ(t)) - σ^2^ϵ(t)^2(1-^ϵ(t))^2 v^ϵ(t, ^ϵ(t))2(1+ϵ^ϵ(t)) -∫_^ϵ(t)^^ϵ(t)σ^2x (1 - x)(1-2x) v_x^ϵ(t, x)(1-γ)ϵ dx ϵ↓ 0⟶ - σ^2 y_M^2(1-y_M)^2 v^0(t), where the convergence is due to Merton_fraction_inside_NT (i) and really_used (i). 
Now we integrate the right-hand side of (<ref>) with respect to x from ^ϵ(t) to ^ϵ(t) and multiply it by c(1-γ)ϵ, then apply (<ref>)-(<ref>) to obtain the following: 0=lim_ϵ↓ 0( ( c(v_t^ϵ(t, y_M) - v_t^0(t) )(1 - γ)ϵ^2/3 + c Q(y_M)( v^ϵ(t,y_M) - v^0(t))(1 - γ)ϵ^2/3) ^ϵ(t) - ^ϵ(t)ϵ^1/3 - cγσ^2 v^0(t)24( ^ϵ(t) - ^ϵ(t)ϵ^1/3)^3 -c σ^2 y_M^2(1-y_M)^2 v^0(t) ). Step 3. We multiply ^ϵ(t) - ^ϵ(t)ϵ^1/3 to (<ref>) and subtract (<ref>) to obtain 0=lim_ϵ↓ 0( -c γ12(^ϵ(t) - ^ϵ(t)ϵ^1/3+ √(2)σ y_M(1-y_M)√(c))^3 + c y_M^2(1-y_M)^2 + γσ^3 y_M^3(1-y_M)^33√(2 c )). The above equation implies that lim_ϵ↓ 0^ϵ(t) - ^ϵ(t)ϵ^1/3 = √(2)σ y_M(1-y_M)√(c)( ( 3√(2) c^3/2γσ^3 y_M(1-y_M) +1)^1/3-1 )=2a_1(c). We conclude the desired asymptotic result for ^ϵ(t) and ^ϵ(t) by the above equation and (<ref>). It remains to prove the asymptotic result for v^ϵ(t,y_M). Using (<ref>), we rewrite (<ref>) as ∂∂ t( e^Q(y_M)t ( v^ϵ(t, y_M) - v^0(t) ) (1 - γ)ϵ^2/3) - γσ^2 e^Q(y_M)T2( ( ^ϵ(t) - ^ϵ(t)2ϵ^1/3 + σ y_M(1-y_M)√(2 c ))^2 +σ^2y_M^2 (1-y_M)^22 c)ϵ↓ 0⟶ 0. Then, (<ref>) and (<ref>) imply lim_ϵ↓ 0∂∂ t( e^Q(y_M)t(v^ϵ(t, y_M) - v^0(t) ) (1 - γ)ϵ^2/3) =e^Q(y_M)T a_2(c)1-γ . The bounds in really_used (i) enable us to use the dominated convergence theorem as below: e^Q(y_M)T a_2(c)1-γ (T-t) =∫_t^T lim_ϵ↓ 0∂∂ s( e^Q(y_M)s (v^ϵ(s, y_M) - v^0(s)) (1 - γ)ϵ^2/3) ds =lim_ϵ↓ 0∫_t^T ∂∂ s( e^Q(y_M)s (v^ϵ(s, y_M) - v^0(s)) (1 - γ)ϵ^2/3) ds =- e^Q(y_M)t·lim_ϵ↓ 0 v^ϵ(t, y_M) - v^0(t) (1 - γ)ϵ^2/3, where the last equality is due to v^ϵ(T, y_M) = v^0(T) = 1. We conclude the desired asymptotic result for v^ϵ(t,y_M). § CONCLUSION This paper investigates the optimal investment problem in a market with two types of illiquidity: transaction costs and search frictions. Building on the framework established by <cit.>, we extend the analysis to a power-utility maximization problem. Our main contribution is the development of a novel asymptotic framework applicable when both transaction costs and search frictions are small (ϵ≪ 1 and 1λ≪ 1). We derive explicit asymptotics for the no-trade region and the value function along the parametric curve λ = c ϵ^-2/3 for c>0. This approach unifies the existing asymptotic results for models with only transaction costs or only search frictions, providing a coherent methodology for handling both types of illiquidity simultaneously. Additionally, our framework offers explicit expressions for the correction terms, facilitating practical computation of the optimal trading strategy and value. Our asymptotic analysis provides insights into the limiting behaviors not present in models with only one source of illiquidity. As a future research, we plan to extend our results to a multi-asset model. siam § PROOF OF V_CLASSICAL_SOLUTION_AND_PROPERTIES In the proof, we assume 0<γ<1 (the case of γ>1 can be treated similarly). Let a:=max_x∈ [0,1] Q(x) where Q is in (<ref>), h(z) := e^z/1 + e^z, and C_b([0,T]×) be the set of all bounded (with the uniform norm) continuous functions. For f∈ C_b([0,T]×), we define ϕ(f) as ϕ(f)(t, z) := [ e^a T e^∫_t^T ( Q(h(Υ_u^(t,z)))-λ-a ) du + λ∫_t^T e^∫_t^s ( Q(h(Υ_u^(t,z)))-λ-a ) du K_f ( s, Υ_s^(t, z) ) d s ], where K_f is K_f(t, z) := sup_ζ∈( f(t, ζ) g(z,ζ) ) with g(z,ζ):=( 1 + h(z)1 + h(ζ))^1 - γ 1_{ζ≥ z } + ( 1 - h(z)1 - h(ζ))^1 - γ 1_{ζ<z } and Υ_s^(t, z) for (s,z)∈ [t,T]× is the solution of the following SDE: dΥ_s^(t,z)= ( μ-σ^22 + (1-γ)σ^2 h(Υ_s^(t,z))) ds + σ dB_s, Υ_t^(t,z)=z. Since K_f and Q ∘ h are bounded and continuous, one can check ϕ(f)∈ C_b([0,T]×) by the dominated convergence theorem. 
From the definition of a, we observe that for z∈, -∞< min_x∈ [0,1] Q(x) - λ - a ≤ Q(h(z))-λ- a ≤ - λ. We check that ϕ is a contraction map: for f_1, f_2 ∈ C_b([0,T]×), ‖ϕ(f_1)- ϕ(f_2) ‖_∞ ≤λ∫_t^T e^-λ(s-t)·sup_ζ∈| f_1(s,ζ)-f_2(s,ζ) | ds ≤ (1-e^-λ T ) ‖ f_1-f_2‖_∞, where the first inequality is due to (<ref>) and ‖ g‖_∞≤ 1. Therefore, by the Banach fixed point theorem, there exists a unique function f̂∈ C_b([0,T]×) such that ϕ(f̂) = f̂. K_f̂∈ C^1/2, 1 ([0,T]×). Throughout the proof of this claim, C>0 is a generic constant independent of (t,s,z, δ)∈ [0,T]× [t,T] ×× [0,1] and paths. For δ∈ [0,1] and t∈ [0,T], | K_f̂ (t,z+δ) - K_f̂ (t,z) | ≤‖f̂‖_∞·sup_ζ∈| g(z+δ,ζ) - g(z,ζ)| ≤ C δ, where the second inequality is due to ‖∂ g∂ z‖_∞ <∞. By SDE (<ref>), for s∈ [t,T], we have | Υ_s^(t+δ,z)- Υ_s^(t,z) | ≤| σ(B_t - B_t+δ) - ∫_t^t+δ( μ-σ^22 + (1-γ)σ^2 h(Υ_u^(t,z))) du | + | ∫_t+δ^s (1-γ)σ^2 ( h(Υ_u^(t+δ,z))-h(Υ_u^(t,z)) ) du | ≤ C ( | B_t+δ - B_t | + δ + ∫_t+δ^s | Υ_u^(t+δ,z)- Υ_u^(t,z) | du ). We apply Gronwall's inequality (see Gronwall_lemma) to above and obtain | Υ_s^(t+δ,z)- Υ_s^(t,z) | ≤ C ( | B_t+δ - B_t | + δ). Using (<ref>) and (<ref>), we produce the following estimate: for s∈ [t,T] and δ∈ [0,1], [ | e^∫_t^s ( Q(h(Υ_u^(t,z)))-λ-a ) du - e^∫_t+δ^s ( Q(h(Υ_u^(t+δ,z)))-λ-a ) du| ] ≤ C [| ∫_t^s ( Q(h(Υ_u^(t,z)))-λ-a ) du - ∫_t+δ^s ( Q(h(Υ_u^(t+δ,z)))-λ-a ) du | ] ≤ C√(δ) , where we also used ‖dd zQ(h(z)) ‖_∞ <∞ for the last inequality. Similarly, (<ref>) and (<ref>) produce [| K_f̂ ( s, Υ_s^(t+δ, z) )- K_f̂ ( s, Υ_s^(t, z) ) | ] ≤ C [ | Υ_s^(t+δ,z)- Υ_s^(t,z) | ] ≤ C√(δ). For δ∈ [0,1] and (t,z)∈ [0,T]×, (<ref>), (<ref>) and (<ref>) produce | ϕ(f̂)(t+δ,z)-ϕ(f̂) (t,z) | ≤ C √(δ). The above inequality implies that | K_f̂(t+δ,z)- K_f̂(t,z)| ≤sup_ζ∈| f̂(t+δ,ζ)-f̂ (t,ζ) | = sup_ζ∈| ϕ(f̂)(t+δ,ζ)-ϕ(f̂) (t,ζ) | ≤ C √(δ). We conclude K_f̂∈ C^1/2, 1 ([0,T]×) by the above inequality and (<ref>). Let α∈ (0,1) be fixed. K_lemma and Theorem 9.2.3 in <cit.> guarantee that there exists a unique solution f̃∈ C^1 + α/2, 2 + α([0, T] ×ℝ) of the following PDE: 0 = f_t(t, z) +( μ-σ^22 + (1-γ)σ^2 h(z)) f_z(t, z) + σ^22 f_z z(t, z) + (Q(h(z)) - λ-a ) f(t, z) + λ K_f̂(t, z) f(T, z)=e^aT By the Feynman-Kac formula (i.e., see Theorem 5.7.6 in <cit.>), we have f̃=ϕ(f̂)=f̂. Observe that h can be continuously extended to z=±∞ as h(∞):=1 and h(-∞):=0. Then, lim_z→∞ e^-atf̃(t,z) = lim_z→∞ e^-atϕ(f̃)(t,z) = e^ (Q(1)-λ)(T-t) + λ∫_t^T e^ (Q(1)-λ) (s-t)sup_ζ∈(e^-asf̃(s,ζ) ( 1-1- h(ζ))^1-γ) ds, where the last equality is due to the dominated convergence theorem. Therefore, we continuously extend f̃ to z=+ ∞ and the above equality produces 0=∂∂ t( e^-atf̃(t, ∞)) + (Q(1)-λ) ( e^-atf̃(t, ∞)) + λsup_ζ∈( e^-atf̃(t, ζ) ( 1-1- h(ζ))^1-γ). We can treat lim_z→ -∞ e^-atf̃(t,z) by the same way and obtain 0=∂∂ t( e^-atf̃(t, -∞)) + (Q(0)-λ) ( e^-atf̃(t, -∞)) + λsup_ζ∈( e^-atf̃(t, ζ) ( 11+ h(ζ))^1-γ). We set v(t,x):=e^-atf̃(t,h^-1(x)) for (t,x)∈ [0,T]× [0,1]. The PDE for f̃ in (<ref>) implies that v satisfies (<ref>) for (t,x)∈ [0,T)× (0,1). We check (ii) by (<ref>) and (<ref>). To check (iii), since f̃∈ C^1 + α/2, 2 + α([0, T] ×ℝ), it is enough to observe that for z=h^-1(x), v_t(t,x)=e^-at(f̃_t(t,z) - a f̃(t,z)), x(1-x)v_x(t,x)= e^-atf̃_z(t,z), x^2(1-x)^2 v_xx(t,x)=e^-at(f̃_zz(t,z)-(1-2x)f̃_z(t,z) ). § PRELIMINARY ANALYSIS FOR SECTION 5 This appendix is devoted to presenting and proving preliminary asymptotic results used in the proof of the lemmas in Section 5. As in Section 5, we set ϵ = = ∈ (0,1) and assume that y_M ∈ (0, 1). 
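The estimates in this appendix repeatedly use the explicit triplet (A_{s,t}, Y_s^{(t,x)}, Z_s^{(t,x)}) introduced in (<ref>). As a quick numerical companion (an illustrative sketch only, with arbitrary parameter values and sample size), the following Python snippet generates the triplet from a single Gaussian increment and checks facts invoked below: 𝔼[A_{s,t}] = e^{μ(s-t)}, Y_s^{(t,x)} ∈ [0,1], and that 𝔼[Z_s^{(t,x)}] stays close to 1 for small s-t, in line with the bound 𝔼[|Z_s^{(t,x)}|] ≤ 1+C(s-t) established in the next lemma.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, gamma = 0.02, 0.25, 0.5     # illustrative model parameters
t, s, x = 0.0, 0.5, 0.4                # times 0 <= t <= s <= T and a fraction x in (0,1)
n = 200_000                            # Monte Carlo sample size

dB = rng.normal(0.0, np.sqrt(s - t), n)                    # B_s - B_t ~ N(0, s-t)
A = np.exp((mu - 0.5 * sigma**2) * (s - t) + sigma * dB)   # A_{s,t}
Y = x * A / (x * A + 1.0 - x)                              # Y_s^{(t,x)}
Z = (x * A + 1.0 - x)**(1.0 - gamma)                       # Z_s^{(t,x)}

print(A.mean(), np.exp(mu * (s - t)))    # E[A_{s,t}] = e^{mu(s-t)}
print(Y.min() >= 0.0, Y.max() <= 1.0)    # Y_s^{(t,x)} stays in [0,1]
print(Z.mean())                          # E[Z_s^{(t,x)}]; compare with E[|Z|] <= 1 + C(s-t)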
Recall that A_s, t, Y_s^(t, x) and Z_s^(t, x) are defined in (<ref>). Let C>0 be a generic constant independent of (t, s, x,λ)∈ [0,T]× [t,T]× (0,1)× [1,∞) that may differ line by line. (i) For nonnegative integers n, m, k, l, |𝔼[ ( ∂^m Z_s^(t, x)∂ x^m)^k( ∂^n Y_s^(t, x)∂ x^n)^l] |≤ C (s - t) + 0, m≥ 1 or n ≥ 2 x^l, m=0 and n=0 1, m=0 and n=1 In particular, 𝔼[ | Z_s^(t, x)| ] ≤ 1+C(s-t), 𝔼[ | ∂^2 Z_s^(t, x)∂ x^2| ] ≤ C(s-t), 𝔼[ | ∂ Z_s^(t, x)∂ x| ] ≤ C√(s-t). (ii) Let n∈ and F: [0,T]× (0,1) →. Suppose that y↦ F(t,y) is piecewise continuous for each t∈ [0,T] and sup_(t,y)∈ [0,T]× (0,1)|(y(1-y))^n-1F(t,y)|=C_F<∞. Then, | ∂∂ x(∫_t^T λ e^-λ(s-t)𝔼[ Z_s^(t, x)(∂ Y_s^(t, x)∂ x)^n F( s, Y_s^(t, x)) ] ds ) | ≤C · C_F√(λ)(x (1 - x))^n. (iii) Let F: (0,1) → be a continuous function. Suppose that F' is continuous on (0,1), F” is continuous on (0,1) except finitely many points, the left and right limits of F” exist at the discontinuous points and _y∈ (0,1)( | F(y) | + | y(1-y) F'(y)| + |y^2(1-y)^2 F”(y)| ) <∞. Then, the derivative below exists and |∂∂ t[ ∂ Z_s^(t, x)∂ x F (Y_s^(t, x)) ] |≤ C ( | (y_M -x) F(x) | + | x(1-x) F'(x) | ) + C (s-t), |[ ∂ Z_s^(t, x)∂ x F (Y_s^(t, x)) ] |≤ C ( | (y_M -x) F(x) | + | x(1-x) F'(x) | ) (s-t) + C (s-t)^2. Throughout this proof, C>0 is a generic constant independent of (t,s,x,λ)∈ [0,T]× [t,T]× (0,1) × [1,∞) that may differ line by line. (i) To obtain (<ref>), we apply Ito's lemma to ( ∂^m Z_s^(t, x)∂ x^m)^k( ∂^n Y_s^(t, x)∂ x^n)^l using the expression in (<ref>) and the SDE for A_s^(t,x) in (<ref>), then we apply the inequalities in (<ref>). Since Z_s^(t, x)> 0, we obtain the first inequality in (<ref>) by (<ref>) (with m=n=0, k=1 and l=0). The expression in (<ref>) and (<ref>) (with m=2, k=1 and l=0) imply 𝔼[ | ∂^2 Z_s^(t, x)∂ x^2| ] = sgn(γ-1) ·𝔼[ ∂^2 Z_s^(t, x)∂ x^2] ≤ C(s-t). Hölder's inequality and (<ref>) (with m=1, k=2 and l=0) produce the last inequality in (<ref>). (ii) The probability density function φ(y;s - t, x) of Y_s^(t, x) is calculated as φ(y;s - t, x) := ∂∂ yℙ( Y_s^(t, x)≤ y ) = exp( - 1/2 σ^2 (s - t)( (σ^2/2-μ) (s - t) + ln( y (1 - x)/(1 - y) x) )^2)σ y (1 - y) √(2 π (s - t)) ⟹ φ_x(y;s - t, x) = φ(y;s - t, x) ·1σ^2 x (1 - x)(σ^22-μ + 1s-tln( y (1 - x)(1 - y) x) ). We use the expression above to obtain ∫_0^1| ∂∂ x( ( 1 - x1 - y)^1 - γ( y (1 - y)x (1 - x))^n F(s, y) φ(y;s - t, x) ) | d y = ∫_0^1( 1 - x1 - y)^1 - γ( y (1 - y)x (1 - x))^n| F(s, y)| ·| 1σ^2 x (1 - x)(σ^22-μ+ 1s-tln( y (1 - x)(1 - y) x) ) - n (1 - 2 x) + x (1 - γ)x (1 - x)| ·φ(y;s - t, x) d y =1(x(1-x))^n 𝔼[ Z_s^(t, x)∂ Y_s^(t, x)∂ x (Y_s^(t, x) (1-Y_s^(t, x) ))^n-1| F ( s, Y_s^(t, x) ) | ·| B_s - B_tσ (s - t) - n - x (1 - γ - 2 n) | ] ≤C· C_F(x (1 - x))^n√(s-t), where the inequality is due to (<ref>) and Hölder's inequality. This implies that ∫_0^1∂∂ x( ( 1 - x1 - y)^1 - γ( y (1 - y)x (1 - x))^n F(s, y) φ(y;s - t, x) ) d y =1(x(1-x))^n𝔼[ Z_s^(t, x)∂ Y_s^(t, x)∂ x(Y_s^(t, x) (1-Y_s^(t, x) ))^n-1 F( s, Y_s^(t, x) ) ( B_s - B_tσ (s - t) - n - x (1 - γ - 2 n) ) ] is well-defined and continuous in x. Therefore, together with the estimate (<ref>), we validate the interchange of integration and differentiation below: | ∂∂ x(∫_t^T λ e^-λ(s-t)𝔼[ Z_s^(t, x)(∂ Y_s^(t, x)∂ x)^n F( s, Y_s^(t, x)) ] ds ) | =| ∫_t^T λ e^-λ(s-t)∫_0^1∂∂ x( ( 1 - x1 - y)^1 - γ( y (1 - y)x (1 - x))^n F(s, y) φ(y;s - t, x) ) d y ds | ≤C· C_F √(λ)(x (1 - x))^n, where the inequality is due to (<ref>) and Order_of_exponential_times. 
Especially, when n=1, we have ∂∂ x(∫_t^T λ e^-λ(s-t)𝔼[ Z_s^(t, x)∂ Y_s^(t, x)∂ x F ( s, Y_s^(t, x) ) ] ds ) =1x(1-x)∫_t^T λ e^-λ(s-t)𝔼[ Z_s^(t, x)∂ Y_s^(t, x)∂ x F( s, Y_s^(t, x)) ( B_s - B_tσ (s - t) - 1 + x (1 + γ ) ) ] ds . (iii) By (<ref>), we have ∂ Z_s^(t, x)∂ x F(Y_s^(t, x))=f(A_s,t) with f(a):=(1-γ)(a-1)(xa+1-x)^-γ F(xaxa+1-x). The conditions for F allow us to apply Ito's lemma (see 6.24 Problem in p.215 of <cit.>) and obtain f(A_s-t,0)= ∫_0^s-t g(A_u,0) du + ∫_0^s-t f'(A_u,0) σ A_u,0 dB_u, where g(a):=f'(a) μ a + 12 f”(a) σ^2 a^2. The inequalities in (<ref>) and (<ref>) imply that the stochastic integral term above is a square integrable martingale. Therefore, [ ∂ Z_s^(t, x)∂ x F(Y_s^(t, x))] = [f(A_s,t)] =[f(A_s-t,0)] =∫_0^s-t [g(A_u,0)] du. We differentiate above with respect to t, apply Ito's lemma and use the inequalities in (<ref>), then | ∂∂ t[ ∂ Z_s^(t, x)∂ x F (Y_s^(t, x)) ] | = | [g(A_s-t,0)] | ≤ | g(1) | + C(s-t). Since g(1)= γ(1-γ)σ^2(y_M -x) F(x) + (1-γ)σ^2 x(1-x) F'(x), (<ref>) implies (<ref>) and (<ref>). Recall Notation <ref> (ii) and (iii). When ϵ=0 and λ=∞, the value function is v^0(t) in (<ref>) and optimal fraction is y_M=μγσ^2. When ϵ=0 and λ<∞, we denote the value function as v^SO,λ(t,x) and optimal fraction as ŷ^SO, λ(t). Similarly, we denote L^SO,λ(t,x):=L(t,x)|_ϵ=0, where L is defined in (<ref>). Observe that ŷ^SO, λ(t)= (t)|_ϵ=0= (t) |_ϵ=0 and L^SO, λ(t, x) = v^SO, λ(t, ŷ^SO, λ(t)) for all x∈ [0,1]. The following results can be found in <cit.>: there is a constant C>0 independent of (t,λ)∈ [0,T)× [1,∞) such that |ŷ^SO, λ(t) - y_M |≤Cλ, | v^SO,λ(t,ŷ^SO, λ(t)) - v^0(t) |≤Cλ, lim_λ→∞λ( v^0(t) - v^SO, λ(t, y_M) )1 - γ = γσ^4 y_M^2 (1 - y_M)^22· e^Q(y_M) (T - t) (T - t). For the later analysis, we provide more estimates in this direction. The functions v^SO, λ(t, x), v_x^SO, λ(t, x), v_x x^SO, λ(t, x) obtained by substitution ϵ = 0 can be continuously extended to x = 0 and x = 1. There exist positive constants C, C, C independent of (t, x, λ)∈ [0,T]× (0,1)× [1,∞) such that | ∂^n∂ x^n v^SO, λ(t, x) | ≤Cλ for n∈, C·min{ 1, λ (T - t) }≤ -λ v_x x^SO, λ(t, x)1 - γ≤C·min{ 1, λ (T - t) }. Furthermore, for (t,x) ∈ [0, T)× (0,1), lim_λ→∞λ v_x x^SO, λ(t, x)1 - γ = - γσ^2 v^0(t). Throughout this proof, C>0 is a generic constant independent of (t,x,λ)∈ [0,T)× (0,1) × [1,∞) that may differ line by line. Using (<ref>), the representation in (<ref>) for ϵ=0 becomes v^SO,λ (t, x) = e^- λ (T - t)[ Z_T^(t, x)] + λ∫_t^T e^- λ (s - t) v^SO, λ(s, ŷ^SO, λ(s)) [ Z_s^(t, x)] d s. We take derivative with respect to x above. Using YZ_bound1 and the dominated convergence theorem, we put the derivative inside of the integrals: for n∈, ∂^n∂ x^n v^SO, λ(t, x) = e^- λ (T - t)[ ∂^n Z_T^(t, x)∂ x^n] + λ∫_t^T e^- λ (s - t) v^SO, λ(s, ŷ^SO, λ(s)) [ ∂^n Z_s^(t, x)∂ x^n] d s. The above equality, YZ_bound1, (<ref>) and the dominated convergence theorem enable us to conclude that v^SO, λ(t, x), v_x^SO, λ(t, x), v_x x^SO, λ(t, x) can be continuously extended to x=0 and x=1. Observe that x e^-x≤min{ 1, x} and 0≤ 1- e^-x - x e^-x≤min{ 1, x} for x≥ 0, x e^-x≥xe for 0≤ x ≤ 1, 1- e^-x - x e^-x≥ 1-2e>0 for x≥ 1. By the above inequalities, for positive constants c_1 and c_2, we can find positive constants c and c such that c·min{ 1, x}≤ c_1 x e^-x + c_2 (1- e^-x - x e^-x) ≤c·min{ 1, x} for x≥ 0. We apply (<ref>) to (<ref>) and obtain | ∂^n∂ x^n v^SO, λ(t, x) | ≤ C(T-t) e^- λ (T - t) + Cλ∫_t^T e^-λ(s-t)(s-t)ds ≤Cλ, where the second inequality is due to (<ref>). Thus, we conclude (<ref>). 
The expressions in (<ref>) and (<ref>) imply min{ A_s, t^- 1 - γ, 1 }≤( x A_s, t + 1 - x )^- 1 - γ≤max{ A_s, t^- 1 - γ, 1 } for 0≤ x≤ 1, lim_s↓ t[ min{ A_s, t^- 1 - γ, 1 }( A_s, t - 1√(s - t))^2] = lim_s↓ t[ max{ A_s, t^- 1 - γ, 1 }( A_s, t - 1√(s - t))^2] =σ^2>0, -1(1 - γ) (s - t)[ ∂^2 Z_s^(t, x)∂ x^2] = γ [ ( x A_s, t + 1 - x )^- 1 - γ( A_s, t - 1√(s - t))^2]. We combine the above inequalities, limits and expression to conclude that lim_s↓ t(-1(1 - γ) (s - t)[ ∂^2 Z_s^(t, x)∂ x^2] ) = γσ^2>0, and there exist positive constants c and c independent of (t,s,x,λ) such that c≤ -1(1 - γ) (s - t)[ ∂^2 Z_s^(t, x)∂ x^2] ≤c. From (<ref>) for n=2, we obtain the following expression: - λ v_x x^SO, λ(t, x)1 - γ = λ (T - t) e^- λ (T - t)·( -1(1 - γ) (T - t)[ ∂^2 Z_T^(t, x)∂ x^2] ) + λ^2 ∫_t^T e^- λ (s - t) (s - t) · v^SO, λ(s, ŷ^SO, λ(s)) ·( -1(1 - γ) (s - t)[ ∂^2 Z_s^(t, x)∂ x^2] ) d s We apply the inequalities in v_concave (ii), (<ref>) and (<ref>) to the above expression to conclude (<ref>). Finally, in the above expression, we substitute u=λ(s-t) and let λ→∞: lim_λ→∞ - λ v_x x^SO, λ(t, x)1 - γ = lim_λ→∞∫_0^λ (T-t) e^-u u · v^SO, λ(uλ+t, ŷ^SO, λ(uλ+t)) ·( -1(1 - γ) u/λ[ ∂^2 Z_u/λ+t^(t, x)∂ x^2] ) du =∫_0^∞ e^-u u du · v^0(t) ·γσ^2 = γσ^2 v^0(t), where the second equality is due to (<ref>), (<ref>), (<ref>) and the dominated convergence theorem. Let ass hold. Let C>0 be a generic constant independent of (t,s, x,ϵ)∈ [0,T)× [t,T)×(0,1) × (0,1) (also independent of λ due to relation λ=c ϵ^-2/3) that may differ line by line. Then, the followings hold: x (1 - x) | v_x x^ϵ(t, x) - v_x x^SO, λ(t, x) |≤ C ϵ^2/3, | v_x^ϵ(t, x) - v_x^SO, λ(t, x) |≤ C ϵ, | v^ϵ(t, x) - v^SO, λ(t, x) |≤ C ϵ^2/3, - v_x x^ϵ(t, x) 1 - γ≥ C(ϵ^2/3-ϵ) min{λ (T - t), 1 }, x^2 (1 - x)^2 | v_xxx^ϵ(t, x) - v_xxx^SO, λ(t, x) |≤ C ϵ^1/3, where v_xxx^ϵ(t, x) exists and is continuous in (t,x)∈ [0,T) × (0,1). For convenience, let g^ϵ:[0,1]^2 → be g^ϵ(x,y):=( ( 1 + ϵ x1 + ϵ y)^1 - γ - 1 ) 1_{ y>x } + ( ( 1 - ϵ x1 - ϵ y)^1 - γ - 1 ) 1_{ y<x }. Then, the mean value theorem (consider ϵ as a variable) produces | g^ϵ(x,y) | ≤ C |x-y| ϵ for x,y∈ [0,1]. From the expression of L in (<ref>), we obtain | L^ϵ( t, x ) - L^SO, λ( t, x ) | ≤sup_y ∈ [0, 1]| v^ϵ(t, y) g^ϵ (x,y)| + sup_y ∈ [0, 1]| v^ϵ(t, y) - v^SO, λ(t, y) | ≤ C ϵ + sup_y ∈ [0, 1]| v^ϵ(t, y) - v^SO, λ(t, y) |, where the second inequality is due to (<ref>) and (<ref>). The above inequality and (<ref>) produce | v^ϵ(t, x) - v^SO, λ(t, x)| =| λ∫_t^T e^- λ (s - t)[ Z_s^(t, x)( L^ϵ ( s, Y_s^(t, x) ) - L^SO, λ ( s, Y_s^(t, x) )) ]ds | ≤λ∫_t^T e^- λ (s - t)[ Z_s^(t, x)] ( C ϵ + sup_y ∈ [0, 1]| v^ϵ(s, y) - v^SO, λ(s, y) |) ds ≤λ∫_t^T e^(C - λ) (s - t)( Cϵ +sup_y ∈ [0, 1]| v^ϵ(s, y) - v^SO, λ(s, y) |) d s, where the last inequality is due to [ Z_s^(t, x) ] ≤ 1+ C(s-t) ≤ e^C (s - t) by (<ref>). For convenience, we define f(t):=e^(C-λ)tsup_y ∈ [0, 1]| v^ϵ(t, y) - v^SO, λ(t, y) |. Then, inequality (<ref>) can be written as f(t) ≤ C ϵ∫_t^T λ e^(C-λ)sds + ∫_t^T λ f(s) ds. Since f is measurable due to meas_lem, we apply Gronwall's inequality (see Gronwall_lemma) to obtain f(t) ≤ C ϵ∫_t^T λ e^(C-λ)sds + ∫_t^T (C ϵ∫_s^T λ e^(C-λ)udu ) λ e^λ(s-t)ds =C ϵλλ - C·( e^(C - λ) t - e^(C - λ) T) + C ϵλ e^- λ tλ - C·( λ( e^C T - e^C t)C - e^C T( 1 - e^- λ (T - t)) ). We apply λ=c ϵ^-2/3 to the above inequality and conclude that sup_y ∈ [0, 1]| v^ϵ(t, y) - v^SO, λ(t, y) | = e^(λ-C) t f(t) ≤ C ϵ^1/3. Notice that (<ref>) is not obtained yet. 
Since the value function V in (<ref>) should decrease in ϵ, we have v^ϵ(t, x)1 - γ≤v^SO, λ(t, x)1 - γ. Therefore, the expression of L in (<ref>), together with (<ref>) and (<ref>), implies that -C ϵ^1/3≤L^ϵ(t, x)-L^SO, λ(t, x)1 - γ≤ 0. (Proof of (<ref>)) We take derivative with respect to x in (<ref>), and put the derivative inside of the expectation (YZ_bound1, ‖ L ‖_∞<∞ and (<ref>) allow us to do this) to obtain the following expression: v_x x^ϵ(t, x) = e^- λ (T - t)[ ∂^2 Z_T^(t, x)∂ x^2]+ λ∫_t^T e^- λ (s - t)[ ∂^2 Z_s^(t, x)∂ x^2 L^ϵ ( s, Y_s^(t, x) ) + ∂ Z_s^(t, x)∂ x L^ϵ_x ( s, Y_s^(t, x) ) ∂ Y_s^(t, x)∂ x] d s + ∂∂ x(λ∫_t^T e^-λ(s-t)𝔼[ Z_s^(t, x) L^ϵ_x ( s, Y_s^(t, x) ) ∂ Y_s^(t, x)∂ x] ds ). This implies that | v_x x^ϵ(t, x) - v_x x^SO, λ(t, x) | ≤| λ∫_t^T e^- λ (s - t)[ ∂^2 Z_s^(t, x)∂ x^2( L^ϵ ( s, Y_s^(t, x) ) - L^SO,λ ( s, Y_s^(t, x) )) ] d s | + | λ∫_t^T e^- λ (s - t)[ ∂ Z_s^(t, x)∂ x L^ϵ_x ( s, Y_s^(t, x) ) ∂ Y_s^(t, x)∂ x] d s | + | ∂∂ x(λ∫_t^T e^-λ(s-t)𝔼[ Z_s^(t, x) L^ϵ_x ( s, Y_s^(t, x) ) ∂ Y_s^(t, x)∂ x] ds ) |. In the right side of the above inequality, the first term is bounded by C λ∫_t^T e^- λ (s - t)ϵ^1/3(s-t) d s due to (<ref>) and (<ref>), the second term is bounded by C λ∫_t^T e^- λ (s - t)ϵ d s due to YZ_bound1 and (<ref>) and the third term is bounded by Cx(1-x)ϵ^2/3 due to Boundedness_of_various_expectation (ii) (with n=1 and F=L_x^ϵ) and (<ref>). These bounds and λ=c ϵ^-2/3 produce (<ref>). (Proof of (<ref>)) The expression of L_x in (<ref>) implies that L_x^ϵ(t,x) is continuously differentiable with respect to x, except x=^ϵ(t) and x=^ϵ(t). To be specific, L_xx^ϵ(t, x) = - γ (1 - γ) ϵ^2 v^ϵ(t, ^ϵ(t)) (1 + ϵ x)^2( 1 + ϵ x1 + ϵ^ϵ(t))^1 - γ if x ∈ (0, ^ϵ(t)), v_x x^ϵ(t,x) if x ∈ ( ^ϵ(t), ^ϵ(t) ), - γ (1 - γ) ϵ^2 v^ϵ(t, ^ϵ(t)) (1 - ϵ x)^2( 1 - ϵ x1 - ϵ^ϵ(t))^1 - γ if x ∈ (^ϵ(t), 1). v_concave (i), (<ref>) and (<ref>) imply - C ϵ^2/3x(1-x)≤v_x x^ϵ( t, x )1-γ≤ 0. We combine v_concave (ii), (<ref>) and (<ref>) to obtain -C ϵ^2/3x(1-x)≤ L_xx^ϵ(t, x)1-γ≤ 0. To apply Boundedness_of_various_expectation (iii), we set F(y)=L^ϵ ( s, y) - L^SO, λ ( s,y) and observe that F'(y)=L^ϵ_x ( s, y) and F”(y)=L^ϵ_xx ( s, y) except y∈{^ϵ(s), ^ϵ(s)}. We check that (<ref>) is satisfied due to (<ref>), (<ref>) and (<ref>). Therefore, Boundedness_of_various_expectation (iii) is applicable and (<ref>), together with (<ref>), (<ref>) and (<ref>), produces | [ ∂ Z_s^(t, x)∂ x( L^ϵ ( s, Y_s^(t, x)) - L^SO, λ ( s, Y_s^(t, x)) ) ] | ≤ C(ϵ^1/3 +ϵ )(s-t)+ C(s-t)^2. We use the stochastic representation (<ref>) to obtain | v_x^ϵ(t, x) - v_x^SO, λ(t, x) | ≤| λ∫_t^T e^- λ (s - t)[ ∂ Z_s^(t, x)∂ x( L^ϵ ( s, Y_s^(t, x) ) - L^SO, λ ( s, Y_s^(t, x) ) ) ] d s | + | λ∫_t^T e^- λ (s - t)[ Z_s^(t, x) L_x^ϵ ( s, Y_s^(t, x) ) ∂ Y_s^(t, x)∂ x] ds |. The first term in the right-hand side is bounded by Cλ∫_t^T e^- λ (s - t)((ϵ^1/3 +ϵ ) (s-t)+(s-t)^2) ds due to (<ref>) and the second term is bounded by C λ∫_t^T e^- λ (s - t)ϵ ds due to YZ_bound1 and (<ref>). These bounds and λ=c ϵ^-2/3 produce (<ref>). 
(Proof of (<ref>)) Using (<ref>) and (<ref>), we obtain 0 ≤11-γ( L^SO, λ ( s, Y_s^(t, ŷ^SO, λ(t))) - L^ϵ ( s, Y_s^(t, ŷ^SO, λ(t)) ) ) = 11-γ( v^SO, λ(s, ŷ^SO, λ(s)) - L^ϵ ( s, Y_s^(t, ŷ^SO, λ(t))) ) ≤11-γ( v^SO, λ(s, ŷ^SO, λ(s))-v^ϵ(s, ŷ^SO, λ(s)) - v^ϵ(s, ŷ^SO, λ(s)) g^ϵ(Y_s^(t, ŷ^SO, λ(t)),ŷ^SO, λ(s) ) ) ≤ C ϵ | Y_s^(t, ŷ^SO, λ(t))-ŷ^SO, λ(s) | + v^SO, λ(s, ŷ^SO, λ(s)) - v^ϵ(s, ŷ^SO, λ(s))1 - γ ≤ C ϵ (| Y_s^(t, ŷ^SO, λ(t))-ŷ^SO, λ(t) | + 1λ) + v^SO, λ(s, ŷ^SO, λ(s)) - v^ϵ(s, ŷ^SO, λ(s))1 - γ , where the second inequality is due to (<ref>) and (<ref>), the third inequality is due to (<ref>) and the last inequality is due to (<ref>). By the same way as we prove Boundedness_of_various_expectation (i), we can check that [(Y_s^(t,x)-x)^2] ≤ C(s-t). Hence, by Boundedness_of_various_expectation (i) and Hölder's inequality, we produce [ Z_s^(t, x)·| Y_s^(t, x)-x | ] ≤√((1+C(s-t)) C(s-t))≤ C e^C(s-t)√(s-t). The stochastic representation in (<ref>) produces 0 ≤11-γ(v^SO, λ(t, ŷ^SO, λ(t)) - v^ϵ(t, ŷ^SO, λ(t))) = λ∫_t^T e^- λ (s - t)[ Z_s^(t, ŷ^SO, λ(t))11-γ( L^SO, λ ( s, Y_s^(t, ŷ^SO, λ(t)) ) - L^ϵ ( s, Y_s^(t, ŷ^SO, λ(t)) )) ]ds ≤λ∫_t^T e^- λ (s - t)[ Z_s^(t, ŷ^SO, λ(t)) C ϵ (| Y_s^(t, ŷ^SO, λ(t))-ŷ^SO, λ(t) | + 1λ) ] ds +λ∫_t^T e^- λ (s - t)[ Z_s^(t, ŷ^SO, λ(t))] v^SO, λ(s, ŷ^SO, λ(s)) - v^ϵ(s, ŷ^SO, λ(s))1 - γ ds ≤ C λ∫_t^T e^(C - λ) (s - t)( ϵ( √(s - t) + 1λ) +v^SO, λ(s, ŷ^SO, λ(s)) - v^ϵ(s, ŷ^SO, λ(s))1 - γ) d s, where the first and second inequalities are due to (<ref>) and the last inequality is due to (<ref>) and (<ref>). Since the map t↦ v^SO, λ(t, ŷ^SO, λ(t)) - v^ϵ(t, ŷ^SO, λ(t)) is measurable due to meas_lem, by the same way as we treated (<ref>), we apply Gronwall's inequality (see Gronwall_lemma), Order_of_exponential_times and λ=c ϵ^-2/3 to the above inequality to obtain | v^SO, λ(t, ŷ^SO, λ(t)) - v^ϵ(t, ŷ^SO, λ(t)) |≤ C ϵ^2/3. The above inequality and (<ref>) produce (<ref>): for (t,x)∈ [0,T)× (0,1), | v^ϵ(t, x) - v^SO, λ(t, x) | = |v^ϵ(t, ζ) - v^SO, λ(t, ζ) + ∫_ζ^x( v_x^ϵ(t, z) - v_x^SO, λ(t, z) ) d z | |_ζ=ŷ^SO, λ(t)≤ C ϵ^2/3. (Proof of (<ref>)) By (<ref>) and (<ref>), we have | L^ϵ ( t, x) - L^SO,λ ( t, x )| ≤ C ϵ^2/3. Since (Y_s^(t, x)∉{^ϵ(s), ^ϵ(s)}) =1 for t<s, YZ_bound1 and (<ref>) and the continuity of x↦ L^ϵ_x(t,x) allow us to put the derivative inside of the expectation in (<ref>): v_x x^ϵ(t, x) = λ∫_t^T e^- λ (s - t)[ ∂^2 Z_s^(t, x)∂ x^2 L^ϵ ( s, Y_s^(t, x) ) + (2∂ Z_s^(t, x)∂ x∂ Y_s^(t, x)∂ x + Z_s^(t, x)∂^2 Y_s^(t, x)∂ x^2)L^ϵ_x ( s, Y_s^(t, x) ) ] d s + e^- λ (T - t)[ ∂^2 Z_T^(t, x)∂ x^2] + λ∫_t^T e^-λ(s-t)𝔼[ Z_s^(t, x) L^ϵ_xx ( s, Y_s^(t, x) ) (∂ Y_s^(t, x)∂ x)^2 ] ds. The above expression produces the following equality: v_x x^ϵ(t, x) - v_x x^SO, λ(t, x)1 - γ =λ∫_t^T e^-λ(s-t)𝔼[ Z_s^(t, x)L^ϵ_xx ( s, Y_s^(t, x) )1-γ(∂ Y_s^(t, x)∂ x)^2 ] ds + λ∫_t^T e^- λ (s - t)[ ∂^2 Z_s^(t, x)∂ x^2L^ϵ ( s, Y_s^(t, x) )- L^SO,λ ( s, Y_s^(t, x) )1-γ] ds +λ∫_t^T e^- λ (s - t)[ (2∂ Z_s^(t, x)∂ x∂ Y_s^(t, x)∂ x + Z_s^(t, x)∂^2 Y_s^(t, x)∂ x^2)L^ϵ_x ( s, Y_s^(t, x) )1-γ] d s. In the right-hand side of the above equality, the first term is bounded above by 0 due to (<ref>), the second term is bounded above by Cλ∫_t^T e^-λ(s-t)ϵ^2/3(s-t)ds due to (<ref>) and (<ref>) and the third term is bounded above by C λ∫_t^T e^-λ(s-t)ϵ ds due to and (<ref>) and YZ_bound1. These bounds and λ=c ϵ^-2/3, together with 1-e^-λ(T-t)≤min{λ (T - t), 1 }, produce - v_x x^ϵ(t, x)1 - γ≥ - v_x x^SO, λ(t, x)1 - γ- C ϵmin{λ (T - t), 1 }. We combine the above inequality and (<ref>) to conclude (<ref>). 
(Proof of (<ref>)) By the same way as we obtain (<ref>), we take derivative with respect to x in (<ref>): v_xxx^ϵ(t, x) = e^- λ (T - t)[ ∂^3 Z_T^(t, x)∂ x^3] + ∂∂ x(λ∫_t^T e^-λ(s-t)𝔼[ Z_s^(t, x) L^ϵ_xx ( s, Y_s^(t, x) ) (∂ Y_s^(t, x)∂ x)^2 ] ds) + λ∫_t^T e^- λ (s - t)[ ∂^3 Z_s^(t, x)∂ x^3 L^ϵ ( s, Y_s^(t, x) ) + (2∂ Z_s^(t, x)∂ x∂ Y_s^(t, x)∂ x + Z_s^(t, x)∂^2 Y_s^(t, x)∂ x^2)L^ϵ_xx ( s, Y_s^(t, x) ) ∂ Y_s^(t, x)∂ x + (3∂^2 Z_s^(t, x)∂ x^2∂ Y_s^(t, x)∂ x + 3∂ Z_s^(t, x)∂ x∂^2 Y_s^(t, x)∂ x^2 + Z_s^(t, x)∂^3 Y_s^(t, x)∂ x^3) L^ϵ_x ( s, Y_s^(t, x) ) ] d s. This implies that | v_xxx^ϵ(t, x)-v_xxx^SO, λ(t, x) | ≤| ∂∂ x(λ∫_t^T e^-λ(s-t)𝔼[ Z_s^(t, x) L^ϵ_xx ( s, Y_s^(t, x) ) (∂ Y_s^(t, x)∂ x)^2 ] ds) | +| λ∫_t^T e^- λ (s - t)[ ∂^3 Z_s^(t, x)∂ x^3( L^ϵ ( s, Y_s^(t, x) )- L^SO,λ ( s, Y_s^(t, x) )) + (2∂ Z_s^(t, x)∂ x∂ Y_s^(t, x)∂ x + Z_s^(t, x)∂^2 Y_s^(t, x)∂ x^2)L^ϵ_xx ( s, Y_s^(t, x) ) ∂ Y_s^(t, x)∂ x + (3∂^2 Z_s^(t, x)∂ x^2∂ Y_s^(t, x)∂ x + 3∂ Z_s^(t, x)∂ x∂^2 Y_s^(t, x)∂ x^2 + Z_s^(t, x)∂^3 Y_s^(t, x)∂ x^3) L^ϵ_x ( s, Y_s^(t, x) ) ] d s |. In the right-hand side of the above inequality, the first integral is bounded by C ϵ^2/3√(λ)x^2(1-x)^2 due to Boundedness_of_various_expectation (ii) (with n=2 and F=L_xx^ϵ) and (<ref>) and the second integral is bounded by C λ∫_t^T e^- λ (s - t) (ϵ^2/3+ 1x(1-x)ϵ^2/3 + ϵ ) ds due to YZ_bound1, (<ref>), (<ref>) and (<ref>). These bounds and λ=c ϵ^-2/3 produce (<ref>). Let ass hold. Let C>0 be a generic constant independent of (t,s, x,ϵ)∈ [0,T)× [t,T)×(0,1) × (0,1) (also independent of λ due to relation λ=c ϵ^-2/3) that may differ line by line. Recall that t^ϵ and t^ϵ appear in (<ref>). (i) v_xt^ϵ(t, x)=v_tx^ϵ(t, x) exist and are continuous in (t,x)∈ [0,T) × (0,1). Furthermore, | v_t^ϵ(t,x) |≤ C, | v_xt^ϵ(t,x) |≤ C. (ii) There exists ϵ_0>0 such that for ϵ∈ (0,ϵ_0] and t∈ [0,min{t^ϵ,t^ϵ}), ^ϵ_t(t):=∂^ϵ(t)∂ t= v_x t^ϵ(t, ^ϵ(t))1 - γ- ϵ v_t^ϵ(t, ^ϵ(t))/1 + ϵ^ϵ(t)/- v_x x^ϵ(t, ^ϵ(t))/1 - γ - γϵ^2 v^ϵ(t, ^ϵ(t))/(1 + ϵ^ϵ(t))^2 ^ϵ_t(t):=∂^ϵ(t)∂ t= v_x t^ϵ(t, ^ϵ(t))1 - γ+ ϵ v_t^ϵ(t, ^ϵ(t))/1 - ϵ^ϵ(t)/- v_x x^ϵ(t, ^ϵ(t))/1 - γ - γϵ^2 v^ϵ(t, ^ϵ(t))/(1 - ϵ^ϵ(t))^2. Obviously, ^ϵ_t(t)=0 for t∈ (t^ϵ,T) and ^ϵ_t(t)=0 for t∈ (t^ϵ,T). (iii) Recall that L^ϵ appears in (<ref>). For ϵ∈(0,ϵ_0] with ϵ_0 in (ii) and t∈ [0,T) ∖{t^ϵ, t^ϵ}, L_t^ϵ(t,x)= v_t^ϵ(t, ^ϵ(t)) ( 1 + ϵ x1 + ϵ^ϵ(t))^1 - γ if x∈ (0,^ϵ(t)) v_t^ϵ( t, x ) if x ∈ ( ^ϵ(t), ^ϵ(t) ) v_t^ϵ(t, ^ϵ(t)) ( 1 - ϵ x1 - ϵ^ϵ(t))^1 - γ if x ∈ (^ϵ(t), 1) L_xt^ϵ(t,x)= ϵ (1 - γ) v_t^ϵ(t, ^ϵ(t))1 + ϵ x( 1 + ϵ x1 + ϵ^ϵ(t))^1 - γ if x∈ (0,^ϵ(t)) v_xt^ϵ( t, x ) if x ∈ ( ^ϵ(t), ^ϵ(t) ) - ϵ (1 - γ) v_t^ϵ(t, ^ϵ(t))1 - ϵ x( 1 - ϵ x1 - ϵ^ϵ(t))^1 - γ if x ∈ (^ϵ(t), 1) | L_t^ϵ(t,x)|≤ C, | L_xt^ϵ(t,x)| ≤ C for x∈ (0,1). (iv) For (t,x,ϵ)∈ [0,T)× (0,1) × (0,ϵ_0] with ϵ_0 in (ii), | v_xt^ϵ(t,x) - λ∫_t^T e^-λ(s-t)[ Z_s^(t, x)∂ Y_s^(t, x)∂ x v_xt^ϵ(s, Y_s^(t,x)) · 1_{^ϵ(s)<Y_s^(t, x)< ^ϵ(s) }] ds | ≤ C( e^-λ(T-t) + ϵ^2/3). (i) Various_orders, (<ref>), (<ref>) and (<ref>) imply that for (t,x)∈ [0,T)× (0,1), |v^ϵ(t,x)| ≤ C, |v_x^ϵ(t,x)| ≤ C ϵ^2/3, |x(1-x) v_xx^ϵ(t,x)| ≤ C ϵ^2/3, |x^2(1-x)^2v_xxx^ϵ(t,x)| ≤ C ϵ^1/3. Since L^ϵ(t,x)=v^ϵ(t,x) for x∈ [^ϵ(t),^ϵ(t)], the mean value theorem and the bounds of L_x^ϵ and v_x^ϵ in (<ref>) and (<ref>) imply | L^ϵ(t,x)-v^ϵ(t,x)| ≤ C ϵ^2/3 for (t,x)∈ [0,T)× (0,1). By (<ref>), (<ref>) and (<ref>), we observe that for (t,x)∈ [0,T)× (0,1), | L^ϵ(t,x)-v^0(t)| =| L^ϵ(t,x)-L^SO, λ(t,x) + v^SO, λ(t,ŷ^SO,λ (t))-v^0(t)| ≤ C ϵ^2/3. 
Let f^ϵ: [0,T)× (0,1)→ be defined as f^ϵ(t, x) := x (1 - x) ( μ - γσ^2 x ) v_x^ϵ(t, x) + σ^2x^2 (1 - x)^22 v_x x^ϵ(t, x) + λ ( L^ϵ(t,x)-v^ϵ(t,x)). Then, (<ref>), (<ref>) and (<ref>) imply that for (t,x)∈ [0,T)× (0,1), | f^ϵ(t, x) | ≤ C, | f_x^ϵ(t, x) | ≤ C. From (<ref>) and (<ref>), we obtain the bound of v_t^ϵ in (<ref>): for (t,x)∈ [0,T)× (0,1), | v_t^ϵ(t, x) | = | -Q(x)v^ϵ(t, x) - f^ϵ(t, x)| ≤ C, where the inequality is due to (<ref>) and (<ref>). Using (<ref>), we rewrite (<ref>) as 0 = ∂∂ t( e^Q(x) t v^ϵ(t, x) ) + e^Q(x) t f^ϵ(t, x) with v^ϵ(T, x)=1 ⟹ v^ϵ(t, x) = e^Q(x) (T - t) + ∫_t^T e^Q(x) (s - t) f^ϵ(s, x) d s. We differentiate (<ref>) with respect to t and x (x and t, respectively) to obtain v_x t^ϵ(t, x) =v_t x^ϵ(t, x)=- Q(x) v_x^ϵ(t, x) - Q'(x) v^ϵ(t, x) - f_x^ϵ(t, x), where the differentiations are justified by the bounds in (<ref>) and (<ref>). The above expression shows the continuity of v_x t^ϵ and the boundedness of v_xt^ϵ in (<ref>) due to (<ref>) and (<ref>). (ii) We prove the result for ^ϵ_t(t) (^ϵ_t(t) case can be treated by the same way). Observe that _ϵ↓ 0inf_t ∈ [0, t^ϵ)( - λ v_x x^ϵ(t, ^ϵ(t))1 - γ - γϵ^2λ v^ϵ(t, ^ϵ(t))(1 + ϵ^ϵ(t))^2)/ϵ^1/3≥_ϵ↓ 0inf_t ∈ [0, T-Cϵ) - λ v_x x^ϵ(t, ^ϵ(t))(1 - γ)ϵ^1/3≥ C>0, where the first inequality is due to NT_boundaries_hitting_times_bound and (<ref>) and the second inequality is due to (<ref>). The above observation implies that there exists ϵ_0>0 such that - v_x x^ϵ(t, ^ϵ(t))1 - γ - γϵ^2 v^ϵ(t, ^ϵ(t))(1 + ϵ^ϵ(t))^2≥ C ϵ>0 for (t,ϵ)∈ [0,t^ϵ)× (0,ϵ_0]. By Boundaries_of_NT_region, we have v_x^ϵ(t, ^ϵ(t))1 - γ = ϵ v^ϵ(t, ^ϵ(t))1 + ϵ^ϵ(t) for t∈ [0,t^ϵ). For (t,ϵ)∈ [0,t^ϵ)× (0,ϵ_0], we apply the implicit function theorem to this equality (justified by (<ref>)) and conclude that ^ϵ_t(t) exists and is as in (<ref>). (iii) We differentiate (<ref>) and (<ref>) with respect to t and apply (<ref>), then we obtain (<ref>) and (<ref>).[Alternatively, considering the maximization problems in (<ref>), one may apply a suitable version of the envelope theorem to obtain the result.] These expressions and (<ref>) imply (<ref>). (iv) As before, in this part of the proof, one can justify the interchanges of differentiations and integrations using suitable bounds such as Boundedness_of_various_expectation (i) and (<ref>). Since lim_t↑ T^ϵ(t)=0 and lim_t↑ T^ϵ(t)=1, (<ref>), (<ref>), v^ϵ(T,x)=1 and v_x^ϵ(T,x)=0 imply lim_t↑ T L^ϵ(t,x)=1, lim_t↑ T L_x^ϵ(t,x)=0. Since (Z_s^(t,x), Y_s^(t,x)) and (Z_s-t^(0,x), Y_s-t^(0,x)) have the same probability distribution, we have λ∫_t^T e^- λ (s-t)[ Z_s^(t, x)∂ Y_s^(t, x)∂ x L_x^ϵ (s, Y_s^(t, x)) ] d s = λ∫_0^T-t e^- λ u[ Z_u^(0, x)∂ Y_u^(0, x)∂ x L_x^ϵ (t+u, Y_u^(0, x)) ] d u. We differentiate above with respect to t, then (<ref>) and the continuity of t↦ L_x^ϵ(t,x) imply ∂∂ t( λ∫_t^T e^- λ (s-t)[ Z_s^(t, x)∂ Y_s^(t, x)∂ x L_x^ϵ (s, Y_s^(t, x)) ] d s ) = λ∫_0^T-t e^- λ u[ Z_u^(0, x)∂ Y_u^(0, x)∂ x L_xt^ϵ (t+u, Y_u^(0, x)) ] d u =λ∫_t^T e^- λ (s-t)[ Z_s^(t, x)∂ Y_s^(t, x)∂ x L_xt^ϵ (s, Y_s^(t, x)) ] d s. 
We differentiate (<ref>) with respect to t and obtain v_x t^ϵ(t, x) = λ e^- λ (T - t)[ ∂ Z_T^(t, x)∂ x] + e^- λ (T - t)∂∂ t[ ∂ Z_T^(t, x)∂ x] + ∂∂ t( λ∫_t^T e^- λ (s-t)[ Z_s^(t, x)∂ Y_s^(t, x)∂ x L_x^ϵ (s, Y_s^(t, x)) + ∂ Z_s^(t, x)∂ x L^ϵ ( s, Y_s^(t, x) ) ] d s ) =e^- λ (T - t)∂∂ t[ ∂ Z_T^(t, x)∂ x] + λ∫_t^T e^- λ (s-t)[ Z_s^(t, x)∂ Y_s^(t, x)∂ x L_xt^ϵ (s, Y_s^(t, x)) ] ds + ∂∂ t( λ∫_t^T e^- λ (s-t)[ ∂ Z_s^(t, x)∂ x L^ϵ ( s, Y_s^(t, x) ) ] d s )+ λ e^- λ (T - t)[ ∂ Z_T^(t, x)∂ x], where the second equality is due to (<ref>). Using ∂ Z_t^(t, x)∂ x=0, we obtain | ∂∂ t( λ∫_t^T e^- λ (s-t)[ ∂ Z_s^(t, x)∂ x( L^ϵ ( s, Y_s^(t, x) ) - v^0(s)) ] d s ) | = | λ^2 ∫_t^T e^- λ (s-t)[ ∂ Z_s^(t, x)∂ x( L^ϵ ( s, Y_s^(t, x) ) - v^0(s)) ] d s + λ∫_t^T e^- λ (s-t)∂∂ t[ ∂ Z_s^(t, x)∂ x( L^ϵ ( s, Y_s^(t, x) ) - v^0(s)) ] d s | ≤ C λ^2 ∫_t^T e^- λ (s-t)( ϵ^2/3(s-t) + (s-t)^2 ) d s + C λ∫_t^T e^- λ (s-t)( ϵ^2/3+s-t ) d s ≤ C ϵ^2/3, where the first inequality is due to Boundedness_of_various_expectation (iii) (with F(y)=L^ϵ ( s, y) - v^0(s)), (<ref>) and (<ref>) and the second inequality is due to Order_of_exponential_times and λ=c ϵ^-2/3. Similarly, | ∂∂ t( λ∫_t^T e^- λ (s-t)[ ∂ Z_s^(t, x)∂ x v^0(s) ] d s ) + λ e^- λ (T - t)[ ∂ Z_T^(t, x)∂ x] | =| ∂∂ t( λ∫_0^T-t e^- λ u[ ∂ Z_u^(0, x)∂ x v^0(u+t) ] d u ) + λ e^- λ (T - t)[ ∂ Z_T-t^(0, x)∂ x] | = | - λ∫_0^T-t e^- λ uQ(y_M) v^0(u+t) [ ∂ Z_u^(0, x)∂ x] d u | ≤ Cλ∫_0^T-t e^-λ u u du ≤ C ϵ^2/3, where the second equality is due to (<ref>), the first inequality is due to Boundedness_of_various_expectation (i) and the last inequality is due to λ=c ϵ^-2/3. Combining (<ref>), (<ref>), and (<ref>), we obtain | v_x t^ϵ(t, x) - λ∫_t^T e^- λ (s-t)[ Z_s^(t, x)∂ Y_s^(t, x)∂ x L_xt^ϵ (s, Y_s^(t, x)) ] ds | =| e^- λ (T - t)∂∂ t[ ∂ Z_T^(t, x)∂ x] + ∂∂ t( λ∫_t^T e^- λ (s-t)[ ∂ Z_s^(t, x)∂ x L^ϵ ( s, Y_s^(t, x) ) ] d s )+ λ e^- λ (T - t)[ ∂ Z_T^(t, x)∂ x] | ≤ C( e^-λ(T-t) + ϵ^2/3), where the boundedness of ∂∂ t [ ∂ Z_T^(t, x)∂ x ] is due to Boundedness_of_various_expectation (iii) (with F(y)=1). The expression of L_xt^ϵ in (<ref>) and the bounds in Boundedness_of_various_expectation (i) and (<ref>) imply | λ∫_t^T e^- λ (s-t)[ Z_s^(t, x)∂ Y_s^(t, x)∂ x( L_xt^ϵ (s, Y_s^(t, x)) - v_xt^ϵ (s, Y_s^(t, x)) · 1_{^ϵ(s)<Y_s^(t, x)<^ϵ(s) }) ] d s | ≤ C ϵ. Finally, we conclude the desired result by the above inequality and (<ref>). § PROOF OF MERTON_FRACTION_INSIDE_NT Throughout this proof, C, C_1, C_2>0 are generic constants independent of (t,s, x,ϵ)∈ [0,T)× [t,T)×(0,1) × (0,1) (also independent of λ due to relation λ=c ϵ^-2/3) that may differ line by line. (i) By Boundaries_of_NT_region and (<ref>), we have v_x^SO, λ(t, ŷ^SO, λ(t))=0. By the mean value theorem, | v_x^SO, λ(t, ^ϵ(t))| = | v_x^SO, λ(t, ^ϵ(t)) - v_x^SO, λ(t, ŷ^SO, λ(t)) |≥inf_y∈ (0,1)| v_x x^SO, λ(t, y) | ·|^ϵ(t) - ŷ^SO, λ(t) | ≥ C min{ 1, λ (T - t) }λ·|^ϵ(t) - ŷ^SO, λ(t) | , where the last inequality is due to (<ref>). By Boundaries_of_NT_region (iii) and (<ref>), we obtain | v_x^SO, λ(t, ^ϵ(t))| ≤| v_x^ϵ(t, ^ϵ(t)) - v_x^SO, λ(t, ^ϵ(t)) | + | v_x^ϵ(t, ^ϵ(t)) | ≤ C ϵ. We combine (<ref>), (<ref>) and (<ref>) to obtain |^ϵ(t) -y_M |≤C ϵ^1/3min{ 1, λ (T - t) }. By the same way, we obtain the other inequalities in (<ref>). (ii) Let t∈ [0,T) be fixed. Since min{1,λ(T-t)}=1 for small enough ϵ, the inequalities in (<ref>) and 0<y_M<1 imply that 0<_ϵ↓ 0^ϵ(t)≤_ϵ↓ 0^ϵ(t)<1. 
By the mean value theorem, there exists x^ϵ∈ [^ϵ(t), ^ϵ(t)] such that for small enough ϵ, v_x x^ϵ(t, x^ϵ) (^ϵ(t) - ^ϵ(t)) = v_x^ϵ(t, ^ϵ(t)) - v_x^ϵ(t, ^ϵ(t)) = - ϵ (1 - γ) v^ϵ(t, ^ϵ(t))1 - ϵ^ϵ(t) - ϵ (1 - γ) v^ϵ(t, ^ϵ(t))1 + ϵ^ϵ(t), where the second equality is due to (<ref>) and Boundaries_of_NT_region. We observe that (<ref>) and (<ref>) imply _ϵ↓ 0-(1-γ) v_x x^ϵ(t, x^ϵ)ϵ^2/3>0. Therefore, (<ref>) and (<ref>) produce _ϵ↓ 0^ϵ(t) - ^ϵ(t)ϵ^1/3= _ϵ↓ 0-(1-γ) v_x x^ϵ(t, x^ϵ)ϵ^2/3( v^ϵ(t, ^ϵ(t))1 - ϵ^ϵ(t) + v^ϵ(t, ^ϵ(t))1 + ϵ^ϵ(t))>0. (iii) Considering Boundaries_of_NT_region (i), it is enough to show that _ϵ↓ 0sup_t∈ [0,T)( v_x^ϵ(t, y_M)ϵ (1 - γ) - v^ϵ(t, y_M)1 + ϵ y_M) <0 and _ϵ↓ 0inf_t∈ [0,T)( v_x^ϵ(t, y_M)ϵ (1 - γ) + v^ϵ(t, y_M)1 - ϵ y_M) >0. We prove the first inequality above. The other inequality can be proved by the same way. By (<ref>), we have |^ϵ(s) -y_M |≤ C ϵ^1/3 for 0≤ s ≤ T-1λ. The expression of Y_s^(t, x) in (<ref>) implies that ( Y_s^(t, y_M)≥^ϵ(s) ) =( B_s - B_t ≥ (σ2 - μσ)(s-t) + 1σln(^ϵ(s)(1-y_M)y_M(1-^ϵ(s))) ) ≥( B_1 ≥ C(1+ 1√(s-t)ϵ^1/3) ) for t< s ≤ T-1λ, where the inequality is due to (<ref>) and the mean value theorem. By the same way as we prove (<ref>), we check that for t≤ s, [ | Z_s^(t, x)∂ Y_s^(t, x)∂ x - 1 | ] |_x=y_M≤ C√(s-t), [ | Z_s^(t, x)(∂ Y_s^(t, x)∂ x - 1 ) | ] |_x=y_M≤ C√(s-t). By (<ref>) and Boundaries_of_NT_region (i), for x>^ϵ(t), v_x^ϵ(t,x)ϵ(1-γ) - v^ϵ(t, x)1 + ϵ y_M≤v^ϵ( t, x )1 + ϵ x - v^ϵ(t, x)1 + ϵ y_M≤ C ϵ. From the expression of L in (<ref>) and L_x in (<ref>), we obtain L_x^ϵ ( t, x )ϵ (1 - γ) - L^ϵ ( t, x )1 + ϵ y_M = ϵ (y_M-x) v^ϵ(t, ^ϵ(t))(1 + ϵ x)(1+ϵ y_M)( 1 + ϵ x1 + ϵ^ϵ(t))^1 - γ 1_{ x ≤^ϵ(t) } + (v_x^ϵ(t,x)ϵ(1-γ) - v^ϵ(t, x)1 + ϵ y_M) 1_{ x ∈ ( ^ϵ(t), ^ϵ(t) ) } - (2+ϵ(y_M -x))v^ϵ(t, ^ϵ(t))(1 - ϵ x)(1+ϵ y_M)( 1 - ϵ x1 - ϵ^ϵ(t))^1 - γ 1_{ x ≥^ϵ(t) } ≤ C_1 ϵ - C_2 1_{ x ≥^ϵ(t) }, where the inequality is due to (<ref>) and (<ref>). Since Z_s^(t, x)∂ Y_s^(t, x)∂ x>0, (<ref>) and (<ref>) imply [ Z_s^(t, x)∂ Y_s^(t, x)∂ x( L_x^ϵ ( s, Y_s^(t, x) )ϵ (1 - γ) - L^ϵ ( s, Y_s^(t, x) )1 + ϵ x) ] |_x=y_M ≤[ (Z_s^(t, x)∂ Y_s^(t, x)∂ x-1) ( C_1 ϵ - C_2 1_{ Y_s^(t, x)≥^ϵ(s) }) ] |_x=y_M +[ C_1 ϵ - C_2 1_{ Y_s^(t, y_M)≥^ϵ(s) }] ≤ C_1 ( √(s-t) + ϵ) - C_2 ( Y_s^(t, y_M)≥^ϵ(s) ) for t≤ s ≤ T. By (<ref>) and Order_of_exponential_times, we have λ∫_t^T e^- λ (s - t)[ Z_s^(t, x)∂ Y_s^(t, x)∂ x( L_x^ϵ ( s, Y_s^(t, x) )ϵ (1 - γ) - L^ϵ ( s, Y_s^(t, x) )1 + ϵ x) ] d s |_x=y_M ≤ C_1 ϵ^1/3 - C_2 λ∫_t^T e^- λ (s - t)( Y_s^(t, y_M)≥^ϵ(s) ) ds ≤ C_1 ϵ^1/3 - C_2 ∫_0^(λ(T-t)-1)^+ e^-u∫_C(1+1/√(u))^∞e^- z^2/2√(2 π) d z du, where the last inequality is due to (<ref>) and the substitution u=λ(s-t). Boundedness_of_various_expectation (iii) (with F(y)=1 and F(y)=L^ϵ(s,y), respectively) and (<ref>) produce |[ ∂ Z_s^(t, x)∂ x] ||_x=y_M≤ C (s-t)^2, |[ ∂ Z_s^(t, x)∂ x L^ϵ ( s, Y_s^(t, x) ) ] | |_x=y_M≤ C ( ϵ (s-t) + (s-t)^2 ). By the same way as we prove (<ref>), we check that [Z_T^(t,x)]≥ 1-C(T-t). Then (<ref>) produces e^- λ (T - t)[ 1ϵ (1 - γ)∂ Z_T^(t, x)∂ x - Z_T^(t, x)1 + ϵ x] |_x=y_M ≤ C_1 e^- λ (T - t)·(T - t)^2ϵ - e^- λ (T - t)( 1- C_2(T-t)) ≤ Cϵ^1/3 - e^- λ (T - t), where the last inequality is due to sup_t≥ 0 t^n e^-t<∞ for n=1,2. By (<ref>), (<ref>) and Order_of_exponential_times, λ∫_t^T e^- λ (s - t)[ ∂ Z_s^(t, x)∂ xL^ϵ ( s, Y_s^(t, x) )ϵ (1 - γ)] d s |_x=y_M≤ C λ∫_t^T e^- λ (s - t)( (s-t) + 1ϵ(s-t)^2 ) ≤ Cϵ^1/3, λ∫_t^T e^- λ (s - t)[ Z_s^(t, x)(∂ Y_s^(t, x)∂ x - 1 ) L^ϵ ( s, Y_s^(t, x) )1 + ϵ x] d s |_x=y_M≤ C λ∫_t^T e^- λ (s - t)√(s-t)≤ Cϵ^1/3. 
The representation of v in (<ref>) and v_x in (<ref>) produce v_x^ϵ(t, x)ϵ (1 - γ) - v^ϵ(t, x)1 + ϵ x = e^- λ (T - t)[ 1ϵ (1 - γ)∂ Z_T^(t, x)∂ x - Z_T^(t, x)1 + ϵ x] + λ∫_t^T e^- λ (s - t)[ ∂ Z_s^(t, x)∂ xL^ϵ ( s, Y_s^(t, x) )ϵ (1 - γ)] d s + λ∫_t^T e^- λ (s - t)[ Z_s^(t, x)∂ Y_s^(t, x)∂ x( L_x^ϵ ( s, Y_s^(t, x) )ϵ (1 - γ) - L^ϵ ( s, Y_s^(t, x) )1 + ϵ x) ] d s + λ∫_t^T e^- λ (s - t)[ Z_s^(t, x)(∂ Y_s^(t, x)∂ x - 1 ) L^ϵ ( s, Y_s^(t, x) )1 + ϵ x] d s. We substitute x=y_M above and apply (<ref>), (<ref>) and (<ref>) to obtain _ϵ↓ 0sup_t∈ [0,T)( v_x^ϵ(t, y_M)ϵ (1 - γ) - v^ϵ(t, y_M)1 + ϵ y_M) ≤ C _ϵ↓ 0sup_t∈ [0,T)(- e^- λ (T - t) - ∫_0^(λ(T-t)-1)^+ e^-u∫_C_2(1+1/√(u))^∞e^- z^2/2√(2 π) d z du ) ≤ C sup_a∈ [0,∞)(- e^- a - ∫_0^(a-1)^+ e^-u∫_C_2(1+1/√(u))^∞e^- z^2/2√(2 π) d z du ) <0. § PROOF OF REALLY_USED Throughout this proof, C>0 is a generic constant independent of (t,s, x,ϵ)∈ [0,T)× [t,T)×(0,1) × (0,1) (also independent of λ due to relation λ=c ϵ^-2/3) that may differ line by line. (i) We obtain (<ref>) by Boundaries_of_NT_region (iii) and (<ref>). We combine (<ref>), (<ref>) and (<ref>) to obtain (<ref>). To check (<ref>), we rewrite (<ref>) using (<ref>) as v_t^ϵ(t, x) - v_t^0(t) = Q(y_M) ( v^0(t) - v^ϵ(t, x) )+ γ (1 - γ) σ^22 (x - y_M)^2 v^ϵ(t, x) - f^ϵ(t, x), where f^ϵ is defined in (<ref>). Since L^ϵ(t,x)=v^ϵ(t,x) for x∈ [ ^ϵ(t), ^ϵ(t)], (<ref>) and (<ref>) imply that | f^ϵ(t, x) | ≤ C ϵ^2/3 for x∈ [ ^ϵ(t), ^ϵ(t)]. We apply this inequality, (<ref>), (<ref>) and (<ref>) to (<ref>) and conclude (<ref>). (ii) For x_1,x_2 ∈ [ ^ϵ(t), ^ϵ(t)], the mean value theorem and (<ref>) imply | v^ϵ(t, x_1)-v^ϵ(t, x_2)| ≤ C ϵ. Then we conclude (<ref>). Since v_tx^ϵ is continuous by Various_orders2 (i), the mean value theorem and (<ref>) imply |v_t^ϵ(t, x_1) - v_t^ϵ(t, x_2)| ≤sup_x∈ [ ^ϵ(t), ^ϵ(t)]| v_tx^ϵ(t, x)| ·C ϵ^1/3min{ 1, λ (T - t) } for x_1,x_2 ∈ [ ^ϵ(t), ^ϵ(t)]. The above inequality and the following lemma produce (<ref>). Let ass hold. Let ϵ_0>0 be as in Various_orders2 (ii). For α∈ (0,1), there exists a positive constant C independent of (t,x,ϵ)∈ [0,T)× (0,1) × (0,ϵ_0] such that |v_x t^ϵ(t, x) | ≤ C( ϵ^2/3 + e^- αλ (T - t)). By (<ref>), (<ref>) and (<ref>), we have | f_x^ϵ(t, x) | ≤ C ϵ^1/3. We apply this inequality, (<ref>) and (<ref>) to the expression in (<ref>) to obtain |v_xt^ϵ(t,x) · 1_{^ϵ(t)<x< ^ϵ(t) }| ≤ C ϵ^1/3 for (t,x,ϵ)∈ [0,T-1λ]× (0,1)× (0,1). By (<ref>), (<ref>) and Order_of_exponential_times, we have | λ∫_t^(T-1/λ)∨ t e^-λ(s-t)[ (Z_s^(t, x)∂ Y_s^(t, x)∂ x-1) v_xt^ϵ(s, Y_s^(t,x)) · 1_{^ϵ(s)<Y_s^(t, x)< ^ϵ(s) }] ds | ≤ Cλ∫_t^(T-1/λ)∨ t e^-λ(s-t)√(s-t) ϵ^1/3 ds ≤ C ϵ^2/3. Since λ∫_T-1/λ^T e^-λ(s-t) ds ≤ C e^- λ (T - t), Boundedness_of_various_expectation (i) and (<ref>) produce λ∫_T-1/λ^T e^-λ(s-t)[ | Z_s^(t, x)∂ Y_s^(t, x)∂ x v_xt^ϵ(s, Y_s^(t,x)) · 1_{^ϵ(s)<Y_s^(t, x)< ^ϵ(s) }| ] ds ≤ C e^- λ (T - t). We combine Various_orders2 (iv), (<ref>) and (<ref>) to conclude that for (t,x,ϵ)∈ [0,T)× (0,1) × (0,ϵ_0], | v_x t^ϵ(t, x) | ≤ C ( e^- λ (T - t) + ϵ^2/3) + λ∫_t^(T-1/λ)∨ t e^-λ(s-t)[ | v_xt^ϵ(s, Y_s^(t,x))| · 1_{^ϵ(s)<Y_s^(t, x)< ^ϵ(s) }] ds. For α∈ (0,1), let k^ϵ(α):=sup_(t,x)∈ [0,T)×(0,1) | v_xt^ϵ(t,x)|e^- αλ (T - t) + ϵ^2/3. Then, the above inequality implies | v_xt^ϵ(t,x)|e^- αλ (T - t) + ϵ^2/3 ≤ C + λ∫_t^(T-1/λ)∨ t e^-λ(s-t) e^- αλ (T - s) + ϵ^2/3e^- αλ (T - t) + ϵ^2/3 k^ϵ(α) [ 1_{^ϵ(s)<Y_s^(t, x)< ^ϵ(s) }] ds ≤ C + k^ϵ(α) λ∫_t^(T-1/λ)∨ t e^-(1-α)λ(s-t)[ 1_{^ϵ(s)<Y_s^(t, x)< ^ϵ(s) }] ds. 
Observe that for t<s< (T-1/λ)∨ t, the definition of Y_s^(t,x) in (<ref>) produces ( ^ϵ(s)<Y_s^(t, x)< ^ϵ(s) ) =( ln((1-x)^ϵ(s)x(1-^ϵ(s))) - (μ-σ^22)(s-t)<σ (B_s - B_t)< ln((1-x)^ϵ(s)x(1-^ϵ(s))) - (μ-σ^22)(s-t) ) ≤( -C ϵ^1/3 < B_s - B_t <C ϵ^1/3)=( -Cϵ^1/3√(s-t) < B_1 < Cϵ^1/3√(s-t)) , where the inequality is due to (a<B_s - B_t <b ) ≤(-b-a2<B_s - B_t <b-a2) for a<b, (<ref>) and the mean value theorem. Then, (<ref>) with the substitution u=λ(s-t) produces λ∫_t^(T-1/λ)∨ t e^-(1-α)λ(s-t) [ 1_{^ϵ(s)<Y_s^(t, x)< ^ϵ(s) }] ds ≤ C_α:= ∫_0^∞ e^-(1-α)u ( -C√(u) < B_1 < C√(u)) du. Since the constant C_α above does not depend on t,x,ϵ and C_α<1, (<ref>) implies k^ϵ(α)≤C1-C_α. § PROOF OF V_XX_CONV_LEM Throughout this appendix, C>0 is a generic constant independent of (t,s, x,ϵ)∈ [0,T)× [t,T)×(0,1) × (0,1) (also independent of λ due to relation λ=c ϵ^-2/3) that may differ line by line. First, we prove the following lemma. Let ass hold. Let ϵ_0>0 be as in Various_orders2 (ii) and α∈ (0,1). Then, there exists ϵ_00∈ (0,ϵ_0] such that x(1-x) | v_x x t^ϵ(t, x) |≤ C( ϵ^1/3 + √(λ) e^- αλ (T - t)) for (t,x,ϵ)∈ [0,T)× (0,1) × (0,ϵ_0], | ^ϵ_t(t) | , | ^ϵ_t(t) | ≤ C for (t,ϵ)∈ [0,T- ϵ^1/3]× (0,ϵ_00]. Using (<ref>) with F=L_x^ϵ, we rewrite (<ref>) as v_x x^ϵ(t, x) = e^- λ (T - t)[ ∂^2 Z_T^(t, x)∂ x^2] +λ∫_t^T e^- λ (s-t)[ ∂^2 Z_s^(t, x)∂ x^2 L^ϵ ( s, Y_s^(t, x) ) ] d s + λ∫_t^T e^-λ(s-t)𝔼[ ( 1x(1-x)Z_s^(t, x)∂ Y_s^(t, x)∂ x( B_s - B_tσ (s - t) - 1 + x (1 + γ ) ) + ∂ Z_s^(t, x)∂ x∂ Y_s^(t, x)∂ x) L_x^ϵ(s, Y_s^(t, x)) ] ds = e^- λ (T - t)[ ∂^2 Z_T-t^(0, x)∂ x^2] +λ∫_0^T-t e^- λ u [ ∂^2 Z_u^(0, x)∂ x^2 L^ϵ ( u+t, Y_u^(0, x) ) ] d u + λ∫_0^T-t e^-λ u 𝔼[ ( B_u/σ u - 1 + x (1 + γ )x(1-x)Z_u^(0, x)∂ Y_u^(0, x)∂ x + ∂ Z_u^(0, x)∂ x∂ Y_u^(0, x)∂ x) L_x^ϵ(u+t, Y_u^(0, x)) ] du, because (Z_s^(t,x), Y_s^(t,x), B_s-B_t) and (Z_s-t^(0,x), Y_s-t^(0,x),B_s-t) have the same distribution. By the same way as in the proof of Boundedness_of_various_expectation (iii), we can check that | ∂∂ t[ ∂^2 Z_T^(t, x)∂ x^2]| ≤ C. We observe that (<ref>) and the inequalities in (<ref>), (<ref>) and (<ref>) produce | ∂∂ t( λ∫_0^T-t e^- λ u [ ∂^2 Z_u^(0, x)∂ x^2 L^ϵ ( u+t, Y_u^(0, x) ) ] d u+ e^- λ (T - t)[ ∂^2 Z_T-t^(0, x)∂ x^2] )| = | λ∫_0^T-t e^- λ u [ ∂^2 Z_u^(0, x)∂ x^2 L_t^ϵ ( u+t, Y_u^(0, x) ) ] d u + e^- λ (T - t)∂∂ t[ ∂^2 Z_T^(t, x)∂ x^2] | ≤ C ( λ∫_0^T-t e^- λ u u du +e^- λ (T - t)) ≤ C ( ϵ^2/3 + e^- λ (T - t)). The expression of L_xt^ϵ in (<ref>) and the bound in (<ref>) imply | L_xt^ϵ(t,x)| ≤ C( ϵ^2/3 + e^- αλ (T - t)). Using (<ref>) again, we obtain | ∂∂ t( λ∫_0^T-t e^-λ u 𝔼[ ( B_u/σ u - 1 + x (1 + γ )x(1-x)Z_u^(0, x)∂ Y_u^(0, x)∂ x + ∂ Z_u^(0, x)∂ x∂ Y_u^(0, x)∂ x) L_x^ϵ(u+t, Y_u^(0, x)) ] du ) | =| λ∫_0^T-t e^-λ u 𝔼[ ( B_u/σ u - 1 + x (1 + γ )x(1-x)Z_u^(0, x)∂ Y_u^(0, x)∂ x + ∂ Z_u^(0, x)∂ x∂ Y_u^(0, x)∂ x) L_xt^ϵ(u+t, Y_u^(0, x)) ] du | ≤ C 1x(1-x)λ∫_0^T-t e^-λ u( 1+ 1√(u)) ( ϵ^2/3 + e^- αλ (T - t-u)) du ≤ C 1x(1-x)( ϵ^1/3 + √(λ) e^- αλ (T - t)), where the first inequality is due to Boundedness_of_various_expectation (i) and (<ref>) and the second inequality is due to Order_of_exponential_times and λ=c ϵ^-2/3. We combine (<ref>), (<ref>) and (<ref>) to conclude (<ref>). The bounds in (<ref>), (<ref>), (<ref>) and (<ref>) imply that there exists ϵ_00∈ (0,ϵ_0] such that for (t,ϵ)∈ [0,T- ϵ^1/3] × (0,ϵ_00], | v_x t^ϵ(t, ^ϵ(t))1 - γ- ϵ v_t^ϵ(t, ^ϵ(t))1 + ϵ^ϵ(t)| ≤ C ϵ^2/3, - v_x x^ϵ(t, ^ϵ(t))1 - γ - γϵ^2 v^ϵ(t, ^ϵ(t))(1 + ϵ^ϵ(t))^2≥ Cϵ^2/3. The above inequalities and (<ref>) produce the inequality for ^ϵ_t(t) in (<ref>). 
The inequality for ^ϵ_t(t) can be checked by the same way. Now we prove v_xx_conv_lem. By the mean value theorem and (<ref>), we have | λ (v_x x^SO, λ(t, x_ϵ)-v_x x^SO, λ(t, y_M))1 - γ| ≤ C | x_ϵ - y_M | ϵ↓ 0⟶ 0, where the convergence is due to (<ref>). The above inequality and (<ref>) imply lim_ϵ↓ 0λ v_x x^SO, λ(t, x_ϵ)1 - γ = -γσ^2 v^0(t). Direct computations using (<ref>) and the definition of G^ϵ produce λ^2 ∫_t^T e^-λ(s-t)𝔼[ Z_s^(t, x)(∂ Y_s^(t, x)∂ x)^2 v^ϵ_xx ( s, Y_s^(t, x) ) 1-γ· 1_{^ϵ(s)<Y_s^(t, x)<^ϵ(s) }] ds =λx^2 (1 - x)^1 + γ∫_t^T e^-λ(s-t)𝔼[ (Y_s^(t, x))^2 (1-Y_s^(t, x))^1+γλ v^ϵ_xx ( s, Y_s^(t, x) )1-γ· 1_{^ϵ(s)<Y_s^(t, x)<^ϵ(s) }] ds =λx^2 (1 - x)^1 + γ∫_t^T e^-λ(s-t)∫_^ϵ( s )^^ϵ( s ) G^ϵ( s, y ) φ(y;s - t, x) d y d s. In (<ref>), we apply (<ref>) and follow the same procedure after (<ref>) to obtain | λ v_x x^ϵ(t, x) -λ v_x x^SO, λ(t, x)1 - γ -λ^2 ∫_t^T e^-λ(s-t)𝔼[ Z_s^(t, x)(∂ Y_s^(t, x)∂ x)^2 v^ϵ_xx ( s, Y_s^(t, x) ) 1-γ· 1_{^ϵ(s)<Y_s^(t, x)<^ϵ(s) }] ds | ≤ C ϵ^1/3. We combine (<ref>), (<ref>) and the above inequality to conclude that G^ϵ (t, x_ϵ) + y_M^2(1-y_M)^1+γγσ^2 v^0(t) - λ∫_t^T e^- λ (s - t)∫_^ϵ( s )^^ϵ( s ) G^ϵ( s, y ) φ(y;s - t, x_ϵ) d y d s ϵ↓ 0⟶ 0. Therefore, to complete the proof, it is enough to prove the following: λ∫_t^T e^- λ (s - t)∫_^ϵ( s )^^ϵ( s ) G^ϵ( s, y ) φ(y;s - t, x_ϵ) d y d s - ∫_^ϵ(t)^^ϵ(t) G^ϵ (t,h(z)) √(2 λ)2 σ e^- √(2 λ)/σ| z - z_ϵ| dz ϵ↓ 0⟶0. By (<ref>), there exists ϵ_00'∈ (0,ϵ_00] such that 1x_ϵ(1-x_ϵ)≤ C for (t,ϵ) ∈[0, T-1λ]× (0,ϵ_00']. Then the form of φ in (<ref>) implies 0≤φ(y;s-t, x_ϵ) ≤C√(s-t) for (s,y,ϵ)∈ (t, T-1λ]× (0,1)× (0,ϵ_00']. The mean value theorem, (<ref>), (<ref>) and (<ref>) imply that for (s,ϵ)∈ (t, T-ϵ^1/3) × (0,ϵ_00'], | ∫_^ϵ(s)^^ϵ(s) G^ϵ(s, y) φ(y;s-t, x_ϵ) d y - ∫_^ϵ(t)^^ϵ(t) G^ϵ(s, y) φ(y;s-t, x_ϵ) d y | ≤ C √(s-t). The mean value theorem and (<ref>) imply that for (s,ϵ)∈ (t, T-ϵ^1/3) × (0,ϵ_0], | ∫_^ϵ(t)^^ϵ(t) G^ϵ(s, y) φ(y;s-t, x_ϵ) d y - ∫_^ϵ(t)^^ϵ(t) G^ϵ(t, y) φ(y;s-t, x_ϵ) d y | =| ∫_^ϵ(t)^^ϵ(t)λ y (1-y)^γ·y(1-y) ( v_x x^ϵ(s, y)- v_x x^ϵ(t, y)) 1 - γφ(y;s-t, x_ϵ) d y | ≤ C λ( ϵ^1/3 + √(λ) e^- αλ (T - s)) (s-t) ·( ^ϵ(t)< Y_s^(t,x_ϵ)< ^ϵ(t)) ≤ C( √(λ) + λ^3/2 e^- αλ (T - s)) (s-t). We combine (<ref>) and (<ref>) to obtain | λ∫_t^T e^- λ (s - t)( ∫_^ϵ(s)^^ϵ(s) G^ϵ( s, y ) φ(y;s-t, x_ϵ) d y - ∫_^ϵ( t )^^ϵ( t ) G^ϵ( t, y ) φ(y;s-t, x_ϵ) d y ) d s | ≤ C λ∫_t^T e^- λ (s - t)(√(s-t) + (√(λ) + λ^3/2 e^- αλ (T - s)) (s-t) )ds ϵ↓ 0⟶ 0, where the convergence can be checked using Order_of_exponential_times and λ=c ϵ^-2/3. Observe that | λ∫_t^∞ e^- λ (s - t)∫_^ϵ(t)^^ϵ(t) G^ϵ(t, y) (φ(y;s-t, x_ϵ) - 1σ y (1 - y) √(2 π (s - t))exp( - ( ln( y (1 - x_ϵ)(1 - y) x_ϵ) )^22 σ^2 (s - t))) d y ds | ≤ C ∫_0^∞ e^-u∫_-∞^∞| exp( - (z+ (σ^2/2-μ) √(u/λ))^2/2 σ^2) - exp( - z^2/2 σ^2)σ√(2 π)| d z ds ϵ↓ 0⟶ 0, where the inequality is due to the boundedness of G^ϵ and the substitution z=h^-1(y)- h^-1(x_ϵ)√(s-t) and u=λ(s-t) and the convergence is due to the dominated convergence theorem. The boundedness of G^ϵ also implies | λ∫_T^∞ e^- λ (s - t)∫_^ϵ(t)^^ϵ(t) G^ϵ(t, y) φ(y;s-t, x_ϵ) dy ds |≤ C e^-λ(T-t)ϵ↓ 0⟶ 0. 
We substitute z=h^-1(y) and u=√(λ(s-t)), then use Fubini's theorem below: λ∫_t^∞ e^- λ (s - t)∫_^ϵ(t)^^ϵ(t) G^ϵ(t, y) 1σ y (1 - y) √(2 π (s - t))exp( - ( ln( y (1 - x_ϵ)(1 - y) x_ϵ) )^22 σ^2 (s - t)) d y ds = ∫_^ϵ(t)^^ϵ(t) G^ϵ(t, h(z)) ∫_0^∞√(2λ)σ√(π)exp(-u^2 - λ( z- z_ϵ)^22 σ^2u^2) d u dz =∫_^ϵ(t)^^ϵ(t) G^ϵ (t,h(z)) √(2 λ)2 σ e^- √(2 λ)/σ| z - z_ϵ| dz, where the last equality is due to the observation that for k>0, e^-u^2-k/u^2 =ddu( e^-2√(k)2∫_√(k)/u-u^∞ e^-ζ^2 dζ - e^2√(k)2∫_√(k)/u+u^∞ e^-ζ^2 dζ) ⟹ ∫_0^∞ e^-u^2-k/u^2 du = √(π)2e^-2√(k). Finally, we combine (<ref>), (<ref>), (<ref>) and (<ref>) to conclude (<ref>). § ADDITIONAL LEMMAS Let α, β, f : [0, T] →ℝ be measurable and β≥ 0. Assume that ∫_0^T |f(t)| β(t) dt<∞ and f(t) ≤α(t) + ∫_t^Tβ(s) f(s) d s for t ∈ [0, T]. Then, f satisfies the following inequality: for t∈ [0,T], f(t) ≤α(t) + ∫_t^Tα(s) β(s) e^∫_t^sβ(r) dr d s. Let F:[0,T]× [0,1]^2→ be continuous. We define f:[0,T]×[0,1]→ [0,1] as f(t,x):=max{ z: z∈_y∈ [0,1] F(t,x,y) }, then f is upper semicontinuous (which is obviously Borel-measurable). This is Lemma D.1 in <cit.>. There is a constant C_α independent of t∈ [0,T), λ∈ [1,∞) such that λ∫_t^T e^- λ (s - t) (s - t)^α d s ≤ C_αλ^- αmin{ 1, λ (T - t) } if α≥ 0 C_αλ^- α if α∈ (-1, 0) Simple change of variable implies λ∫_t^T e^- λ (s - t) (s - t)^α d s = λ^- α∫_0^λ (T - t) e^- u u^α d u ≤ C_αλ^- αmin{ 1 , λ (T - t) } if α≥ 0, C_αλ^- α if α∈ (- 1, 0), where we use the fact that ∫_0^∞ e^- u u^α d u<∞ for α>-1 and e^-uu^α≤ 1 for α≥ 0 and u≤ 1.
http://arxiv.org/abs/2407.12764v1
20240717174225
Jigsaw Game: Federated Clustering
[ "Jinxuan Xu", "Hong-You Chen", "Wei-Lun Chao", "Yuqian Zhang" ]
cs.LG
[ "cs.LG" ]
Jigsaw Game: Federated Clustering Jinxuan Xu Hong-You Chen Wei-Lun Chao Yuqian Zhang ====================================================================== § ABSTRACT Federated learning has recently garnered significant attention, especially within the domain of supervised learning. However, despite the abundance of unlabeled data on end-users, unsupervised learning problems such as clustering in the federated setting remain underexplored. In this paper, we investigate the federated clustering problem, with a focus on federated k-means. We outline the challenge posed by its non-convex objective and data heterogeneity in the federated framework. To tackle these challenges, we adopt a new perspective by studying the structures of local solutions in k-means and propose a one-shot algorithm called FeCA (Federated Centroid Aggregation). FeCA adaptively refines local solutions on clients, then aggregates these refined solutions to recover the global solution of the entire dataset in a single round. We empirically demonstrate the robustness of FeCA under various federated scenarios on both synthetic and real-world data. Additionally, we extend FeCA to representation learning and present DeepFeCA, which combines DeepCluster and FeCA for unsupervised feature learning in the federated setting. § INTRODUCTION Federated learning (FL) has emerged as a promising framework, enabling model training across decentralized data. This approach addresses data privacy concerns by allowing data to remain on individual clients. The goal of FL is to collaboratively train a model across multiple clients without directly sharing data. Within this context, FedAvg <cit.> has been considered the standard approach in FL, designed to obtain a centralized model by averaging the models trained independently on each client's data. Although FL has seen widespread applications in the domain of supervised learning, particularly in tasks like classification <cit.>, its utilization in the unsupervised learning sphere is still largely unexplored, even though it holds significant potential and applicability in numerous practical situations. A notable example is the large collections of unlabeled photographs owned by most smartphone users. In such instances, federated unsupervised learning can be a powerful paradigm, enabling the use of unsupervised learning approaches to leverage the “collective wisdom” of these unlabeled data while safeguarding user privacy. In this paper, we investigate federated unsupervised learning, particularly focusing on the popular clustering problem of k-means. In prior studies, clustering methods have been applied in FL mainly focusing on problems such as client selection <cit.> and privacy enhancement <cit.>, without a deep investigation into the unsupervised learning aspect. Moreover, existing distributed clustering methods overlook the unique challenges in FL, such as data heterogeneity and communication efficiency, making them difficult to apply in the federated setting. Our study extends to federated clustering, incorporating unsupervised clustering on individual clients within a federated framework. One key challenge of federated clustering is the inherent non-convexity of clustering problems, presenting multiple equivalent global solutions and potentially even more local solutions. Standard algorithms like Lloyd's algorithm <cit.> can only find a local solution of the k-means problem, without guaranteeing a global optimum.
We note that the term “local solution” in this context refers to a local optimal in optimization, not the solution learned from a client[For clarity, throughout this paper, we use the client's solution for the result obtained from a client. If the solution happens to be a local solution, we name it the client's local solution.]. This challenge is amplified in the federated setting, where each client's data is a distinct subset of the entire dataset. Even under the IID data sample scenario, each client's clustering results might be suboptimal local solutions containing spurious centroids far from the true global centroids. And this issue could become even more pronounced under non-IID scenarios. To this end, we propose a one-shot federated k-means algorithm: Federated Centroid Aggregation (), offering a new approach by exploiting structured local solutions. In the k-means problem, local solutions carry valuable information from the global solution. The proposed algorithm resolves these local solutions and leverages their benign properties within the federated clustering framework. is built upon theoretical studies <cit.> derived in a centralized setting, which suggests that every local solution is structured and contains nontrivial information about the global solution. Specifically, a local solution consists of estimates of the k ground truth centers, with a subset of these estimates being accurate. One common concern of FL lies in the potential decrease in performance compared to centralized models due to data heterogeneity across clients. However, from the perspective of local solutions, federated clustering could benefit from the decentralized framework. Each client's solution, whether a local optimum or not, carries partial information about the global solution of the entire dataset. By incorporating multiple clients' solutions, the central server could potentially recover the global optimal solution in one shot, akin to assembling a jigsaw puzzle of clients' solutions. For instance, if a true centroid is missing from one client's solution, it might be identified in the solutions of other clients. Therefore, is designed to recover the global solution for k-means clustering in a federated setting by refining and aggregating solutions from clients. First, Lloyd's algorithm for k-means is performed on each client's data. Then, adaptively refines spurious centroids using their structural properties to obtain a set of refined centroids for each client. Then refined centroids are sent to the central server, where aggregates them to recover the global solution of the entire dataset. By exploiting the structure in local solutions, is able to accurately identify the true k centroids of the entire dataset in one shot. We further extend beyond a pre-defined feature space to the modern deep feature framework <cit.>. Specifically, we present , a federated representation learning algorithm from decentralized unlabeled data. Concretely, we pair with clustering-based deep representation learning models such as  <cit.>, which assign pseudo-labels according to k-means clustering and then train the neural network in a supervised manner. The resulting algorithm, , alternates between applying to the current features and using for further training. This iterative process enhances the model's ability to learn meaningful representations from the decentralized data. We evaluate both and on benchmark datasets, including S-sets <cit.>, CIFAR <cit.>, and Tiny-ImageNet <cit.>. 
consistently outperforms baselines in various federated settings, demonstrating its effectiveness in recovering the global solution. Furthermore, shows promising performance in federated representation learning. § RELATED WORK Federated learning. Mainstream FL algorithms <cit.> adopt coordinate-wise averaging of the weights from clients. However, given the limited performance of direct averaging, other approaches have been proposed: <cit.> identify the permutation symmetry in neural networks and then aggregate after the adaptive permutation; <cit.> replace weight average by model ensemble and distillation. These studies enhance the performance of the synchronization scheme but overlook the impact of local solutions on clients. Federated Clustering. Many distributed clustering methods <cit.> have been proposed, but they overlook the heterogeneous challenge in FL. For synchronizing results returned from different clustering solutions, consensus clustering has been studied widely <cit.>. But it works on the same dataset, unlike FL. In the context of FL, <cit.> focus on communication efficiency or privacy-preserving. A recent federated clustering study <cit.> proposes weighted averaging for Fuzzy c-means but requires multiple rounds. The study most relevant to ours introduces k-FED <cit.>, a one-shot federated clustering algorithm, under a rather strong assumption that each client only has data from a few true clusters. It is still underexplored for federated clustering and usage of local solutions. Federated representation learning.  <cit.> studies supervised representation learning by alternating updates between classifiers and feature extractors. <cit.> study federated semi-supervised learning with the server holding some labeled data and clients having unlabeled data. For federated unsupervised learning, <cit.> proposes self-supervised learning in non-IID settings with a divergence-aware update strategy for mitigating non-IID challenges, distinct from our clustering focus. <cit.> adopts the contrastive approach for model training on clients. A recent framework <cit.> introduces federated unsupervised learning with constrained clustering for representation learning, while our focus lies on exploring federated clustering via local solutions. § BACKGROUND Clustering. Given a d-dimensional dataset ={x_1∈^d, …, x_N∈^d}, the goal of k-means problem is to identify k centroids ={c_1∈^d,…,c_k∈^d} that minimize the following objective G() ≐∑_n=1^N min_j∈[k]x_n - c_j_2^2. Federated clustering. In the federated setting, the dataset is decentralized across M clients. Each client m∈[M] possesses a distinct subset _m of the entire dataset . Despite different data configurations, the goal of federated clustering remains the same – to identify k centroids ={c_1,…,c_k} for = ∪_m _m. Under this federated framework, the optimization problem in <ref> can be reformulated as min_ G() = ∑_m=1^M G_m(), where G_m is the k-means objective computed on _m. Due to privacy concerns that restrict direct data sharing among clients, the optimization problem described in <ref> cannot be solved directly. Thus, the proposed algorithm utilizes a collaborative approach between clients and a central server. Initially, each client m independently minimizes G_m() to obtain a set of k centroids ^(m)={c_1^(m),…,c_k^(m)} from their dataset _m. Then the server aggregates centroids ∪_m ^(m) to find a set of k centroids for . 
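For concreteness, the objective in the two displays above can be evaluated in a few lines of code. The sketch below (Python with numpy; the function and array names are ours, not the paper's) computes the k-means cost of a candidate centroid set on one client's data and shows that the global objective is simply the sum of the per-client costs over the disjoint subsets _m.

```python
import numpy as np

def kmeans_cost(X, centroids):
    """k-means objective: sum over points of the squared distance to the nearest centroid."""
    # X: (N, d) data matrix, centroids: (k, d)
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (N, k) squared distances
    return d2.min(axis=1).sum()

def federated_cost(client_datasets, centroids):
    """G(C) = sum_m G_m(C): the global objective decomposes over clients' disjoint subsets."""
    return sum(kmeans_cost(X_m, centroids) for X_m in client_datasets)
```

This additivity is what allows the server to reason about the global objective without ever pooling the raw data from the clients.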
We note that when clients perform standard Lloyd's algorithm for k-means clustering, they usually end up with local solutions ^(m), resulting in suboptimal performance even with IID distributed data _m. These local solutions can significantly complicate the aggregation process on the central server. Thus, the key challenge in federated clustering lies in effectively resolving the client's local solutions and appropriately aggregating them on the central server. §.§ Structure of Local Solutions To better resolve the federated clustering problem, we propose to take a deeper look at local solutions in k-means, which often significantly differ from the global minimizer. Recent theoretical works by <cit.> have established a positive result that under certain separation conditions, all the local solutions share a common geometric structure. More formally, suppose a local solution identifies centroids {c_1,…,c_k}. Then, there exists a one-to-one association between these centroids and the true centers {c_1^*,…,c_k^*} from the global solution. This association ensures that each centroid c_i belongs to exactly one of the following cases with overwhelming probability[Such structure of local solutions holds even when k≠ k^*, where k^* is the number of true clusters in the dataset.]: * Case 1 (one-fit-many association): centroid c_i is associated with s (s>1) true centers {c_j_1^*,…,c_j_s^*}. * Case 2 (one/many-fit-one association): t (t≥ 1) centroids {c_i_1,…,c_i_t} are all associated with one true center c_j^*. Namely, a centroid c_i in a local solution is either a one-fit-many centroid that is located in the middle of multiple true centers (case 1, when s>1), or a one/many-fit-one centroid that is close to a true center (case 2). Notably, when c_i is the only centroid near a true center (case 2, when t=1), it is considered a correctly identified centroid that closely approximates a true center. An illustration is provided in <ref>. Next, we will introduce how our algorithm utilizes such local solution structures to obtain unified clustering results in the federated setting. § JIGSAW GAME – FECA The proposed federated clustering algorithm is built upon the collaboration between clients and a central server. Each client m shares its refined centroid solution ^(m) with the server, where each ^(m) carries partial information of the global solution, similar to pieces of a jigsaw puzzle. The server then aggregates received centroids ∪_m^(m) to obtain a unified complete solution ^*, akin to assembling puzzle pieces in a jigsaw game. only requires one communication between clients and the server, thanks to its adaptive refinement of local solutions on the client side. The detailed procedure is presented in Algorithm <ref> and illustrated in <ref>. Privacy concern. While privacy is crucial in FL, it is not our main focus. However, an advantage of our one-shot algorithm is its minimal information exchange compared to standard iterative approaches like distributed clustering. In , sending refined centroids to the server is viewed as no more privacy risk than mainstream FL approaches of sending models with classifiers, which more or less convey class or cluster information. §.§ Client Update Algorithm This step aims to refine the spurious local solutions of k-means clustering on clients. Each client m first performs standard Lloyd's algorithm to obtain a set of k centroids ^(m)={c_1^(m),…,c_k^(m)}, and this solution is only guaranteed to be a local solution. 
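To make the client-side computation concrete, a minimal version of the Lloyd iteration that each client could run on its own data _m is sketched below; the random initialization, convergence test, and handling of empty clusters are our own choices and need not match the clients' actual implementation. Its output ^(m) is exactly the kind of stationary point that may be a structured local solution rather than the global optimum.

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm on one client's data X of shape (N, d); returns k centroids and labels."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]          # random initialization
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)   # (N, k) squared distances
        labels = d2.argmin(axis=1)                                    # assignment step
        new = np.stack([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])         # update step
        if np.allclose(new, centroids):                               # converged to a stationary point
            break
        centroids = new
    return centroids, labels
```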
As discussed in <ref>, despite the variations in solutions across different clients, each ^(m) always possesses some centroids (one/many-fit-one) that are proximate to a subset of ground truth centers. To facilitate the aggregation process on the server, we propose retaining only centroids from ^(m) that are positioned close to true centers. The client update step in focuses on refining the solution ^(m) by eliminating one-fit-many centroids that are distant from any true center. Specifically, a one-fit-many centroid is always located in the middle of multiple nearby true clusters, making it distant from most data points in those clusters and leading to a high standard deviation for its cluster. Conversely, many-fit-one centroids, which fit the same true center, are close to each other and thus have a small pairwise distance. As presented in Algorithm <ref>, we first use these properties to detect the candidate one-fit-many c_i^(m) and many-fit-one c_p^(m), c_q^(m) centroids, which are likely from a spurious local solution. Next, for further refinement, we need to confirm if these candidate centroids indeed originate from a local solution. To this end, for detected one-fit-many centroid c_i^(m), we first calculate the objective value G_i^(m) of its cluster 𝒟_i^(m) as G_i^(m)=∑_x∈_i^(m)x-c_i^(m)_2^2. For detected many-fit-one centroids, we merge their clusters _p^(m), _q^(m) to form a new cluster _j^(m) with the corresponding centroid c_j^(m). And then we calculate the objective value G_j^(m) as G_j^(m)=∑_x∈{_p^(m)∪_q^(m)}x-c_j^(m)_2^2. If the current solution ^(m) is only locally optimal, _i^(m) should contain data from multiple true clusters with a large G_i^(m), while _j^(m) only contains data from one true cluster with a small G_j^(m). Therefore, if G_i^(m) is greater than G_j^(m), it confirms that these candidate centroids stem from a local solution. In such cases, Algorithm <ref> removes c_i^(m) from ^(m) for not being close to any true center. Otherwise, if G_i^(m) is less than G_j^(m), these centroids are regarded as the correct portion with no need for further refinement. Notably, there may be multiple groups of centroids that possess the local structure. The algorithm is designed to iteratively identify and refine the local solution. Our theoretical analysis (Lemma <ref> in the appendix) demonstrates that Algorithm <ref> effectively removes all one-fit-many centroids from local solutions under the Stochastic Ball Model. In this model, we assume that each client's data is sampled independently and uniformly from one of k disjoint balls centered at the ground truth centers. A formal definition is provided in <ref>. §.§ Radius Assign Algorithm After removing one-fit-many centroids in the ClientUpdate phase, only centroids near true centers (one/many-fit-one) would be sent to the server. This RadiusAssign step prepares these centroids for server-side aggregation by assigning a specific radius to each. This setup allows the server to utilize these radii for effective aggregation. The primary goal of this step is to determine the radius that best approximates the true cluster radius of the entire dataset. In this section, we present two algorithmic variants for the RadiusAssign step. The first variant Algorithm <ref> is designed for theoretical validation purposes, while the second variant Algorithm <ref> is tailored for empirical experimentation. 
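Before turning to the two radius-assignment variants in detail, the refinement test just described can be summarized in code. The sketch below performs one pass of the check: it picks the candidate one-fit-many centroid (the cluster with the largest spread), the candidate many-fit-one pair (the smallest pairwise centroid distance), compares G_i^(m) with G_j^(m), and drops the spurious centroid when the local-solution structure is confirmed. This is a simplified reading of Algorithm <ref>; the exact bookkeeping (iterating over multiple candidate groups, re-assigning points after a removal) is omitted.

```python
import numpy as np

def refine_local_solution(X, centroids, labels):
    """One pass of the client-side refinement test (simplified sketch)."""
    k = len(centroids)
    if k < 3:
        return centroids
    # Candidate one-fit-many: cluster whose points have the largest std of distances to its centroid.
    stds = np.array([np.linalg.norm(X[labels == j] - centroids[j], axis=1).std()
                     if np.any(labels == j) else 0.0 for j in range(k)])
    i = int(stds.argmax())
    # Candidate many-fit-one pair: the two centroids with the smallest pairwise distance.
    dists = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    dists[np.diag_indices(k)] = np.inf
    p, q = np.unravel_index(dists.argmin(), dists.shape)
    # Objective value G_i of the candidate one-fit-many cluster.
    G_i = (np.linalg.norm(X[labels == i] - centroids[i], axis=1) ** 2).sum()
    # Objective value G_j of the tentatively merged many-fit-one clusters.
    merged = X[(labels == p) | (labels == q)]
    c_j = merged.mean(axis=0)
    G_j = (np.linalg.norm(merged - c_j, axis=1) ** 2).sum()
    if G_i >= G_j:                        # local-solution structure confirmed
        keep = [j for j in range(k) if j != i]
        return centroids[keep]            # drop the spurious one-fit-many centroid
    return centroids                      # correct portion, no refinement needed
```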
Through the theoretical variant, we establish Theorem <ref> that characterizes the performance of our algorithm under the Stochastic Ball Model. This theoretical variant generates a tentative solution ^(m) by discarding any potential many-fit-one centroids within ^(m). Following this, a radius r^(m) is then calculated according to the minimum pairwise distance among centroids in ^(m), and this radius is assigned to every centroid in the original solution ^(m) from Algorithm <ref>. We note that the method for identifying many-fit-one centroids utilized in this theoretical variant is only applicable under the Stochastic Ball Model. However, in real-world applications, especially under non-IID data sample scenarios, it is both challenging and unnecessary to eliminate all many-fit-one centroids from clients' solutions, as they often align closely with true centers. Accordingly, we develop an empirical variant, Algorithm <ref>, which assumes only one-fit-many centroids are excluded and assigns a unique radius r_i^(m) to each centroid c_i^(m)∈^(m). As for remaining many-fit-one centroids, their radii are estimated as half of their pairwise distances, which are typically much smaller compared to those of correct centroids. The server then groups all received centroids based on these radii, prioritizing the largest ones first. This ensures that the smaller radii associated with many-fit-one centroids minimally impact the aggregation process. An in-depth analysis of Algorithm <ref> is provided in <ref>, showcasing its effectiveness across a variety of experimental settings, including those with high data heterogeneity. It is worth noting that the theoretical variant is designed for theoretical analysis under the Stochastic Ball Model assumption. This assumption enables easy identification of many-fit-one centroids for clearer cluster separation approximation and accurate radius assignment. In contrast, the empirical variant does not need to remove many-fit-one centroids, as they are close to true centers and aid in reconstructing the global solution on the server side. This approach allows the empirical variant to assign distinct radii to each remaining centroid, enhancing the algorithm's effectiveness and practicality without relying on limited assumptions. A detailed comparison of the theoretical and empirical variants is provided in <ref>. §.§ Server Aggregation Algorithm At the server stage, the goal is to aggregate all received centroids ={^(1),…,^(M)} from M clients into a unified set of k centroids ^*. This task presents apparent challenges: due to the preceding refinement stage, clients may contribute varying numbers of centroids, and the indices of these centroids often lack consistency across clients. However, assuming the refinement phase in Algorithm <ref> effectively removes spurious one-fit-many centroids far from true centers, the returned centroids on the server would be closely grouped around true centers. This phenomenon enables a straightforward classification of all returned centroids into k distinct groups, each aligned with one of the k true centers, as presented in Algorithm <ref>. Equivalently, this is another clustering problem based on returned centroids under a high Signal-to-Noise Ratio (SNR) separation condition. Finally, the server calculates the means of centroids within each group to obtain ^*. In some extreme cases where the number of groups n might be less than k, such as when all clients converge to the same local solution. 
In such cases, Algorithm <ref> removes one-fit-many centroids associated with the same true clusters from all clients. This renders it impossible for Algorithm <ref> to reconstruct corresponding true centers without receiving any associated centroids from clients. It is important to note that this scenario is trivial within the federated framework, where all clients share the same local solutions. Essentially, it is akin to having only one client encountering a local solution. Further discussion on cases when n<k is provided in the <ref>. §.§ Theoretical Analysis We now state our main theorem, which characterizes the performance of under the Stochastic Ball Model. Assume the data x^(m) of client m is sampled independently and uniformly from one of k disjoint balls 𝔹_s with radius r, each centered at a true center θ_s^*, s∈ [k]. Each ball component under the Stochastic Ball Model has a density f_s(x) = 1/Vol(𝔹_s)1_𝔹_s(x). Additionally, we define the maximum and minimum pairwise separations between the true centers {θ_s^*}_s∈ [k] as Δ_max:=max_s≠ s'θ_s^* - θ_s'^* _2, Δ_min:=min_s≠ s'θ_s^* - θ_s'^* _2. (Main Theorem) Under the Stochastic Ball Model, for some constants λ≥ 3 and η≥ 5, if Δ_max≥ 4λ^2k^4r and Δ_min≥ 10ηλ k^2√(rΔ_max), then by utilizing the radius determined by Algorithm <ref>, any output centroid c_s^* from Algorithm <ref> is close to some ground truth center: c_s^* - θ_s'^* _2 ≤4/5ηΔ_min. Theorem <ref> characterizes the performance of our main algorithm , utilizing the radius from Algorithm <ref>. The proof, provided in <ref>, builds on the infinite-sample and high SNR assumptions established in <cit.> which characterizes local solutions of centralized k-means. Next, we will provide a discussion of both conditions. * Separation Condition: the separation between true centers Δ_min and Δ_max cannot be too small is generally necessary for a local solution to bear the structural properties described in <ref> <cit.>. Additionally, the ratio Δ_max/Δ_min indicates how evenly spaced the true centers are, with the ratio approaching 1 when the true centers are nearly evenly spaced. * Technical Assumptions: our main theorem heavily depends on the Stochastic Ball Model and infinity sample assumptions. We would love to note that the local solution structure also holds when the data follows the Gaussian mixture model or has finite data samples  <cit.>. We view these technical assumptions as less important than the above separation condition and will corroborate using both synthetic and real clustering data to demonstrate the effectiveness of our algorithm. Note that the above assumptions are often not met in practice. Thus, we develop another variant, Algorithm <ref>, which does not require the elimination of many-fit-one centroids and assigns a unique radius to each returned centroid from the client. A detailed empirical evaluation of these radii determined by Algorithm <ref> is presented in <ref>, showcasing their effectiveness in supporting our algorithm . §.§ Discussions on Heterogeneity We assume that the client's local solution for its dataset _m is also a local solution of the entire dataset =∪_m _m, which allows us to leverage the structures discussed in <ref>. This assumption holds when _m is an IID-sampled subset from . Our experiments showcase our algorithm's robustness even under non-IID conditions. 
Here we provide an explanation in <ref>, where right plots illustrate two clients' non-IID sampled data and the corresponding global solutions (achieve a global optimum when k=k^*[k^* indicates the number of true centers in the entire dataset.]). We found that despite the data heterogeneity, the clients' global solutions share similar structures as described in <ref>. We attribute these observations to the fact that under non-IID conditions, clients' data tend to concentrate on some of the true clusters. This increases the chance that clients' global solutions contain many-fit-one centroids for those true clusters, which can be aggregated together on the server by our algorithm . This scenario also suggests that even if a client can recover the global solution on its non-IID data, such a global solution coincides with a local solution on IID data and we still need to deploy to produce the final solution. § With , we can learn centroids ^* in the pre-defined feature space collaboratively with multiple clients. In this section, we extend to unsupervised representation learning <cit.>, aiming to learn a feature extractor f_ parameterized by from the unlabeled data set , such that the extracted feature f_(x) of data x can better characterize its similarity or dissimilarity to other data instances. Some studies <cit.> integrate clustering approaches with unsupervised representation learning. For instance, <cit.> proposed , which learns f_(x) from an unlabeled dataset ={x_n}_n=1^N by repeating two steps: * Perform Lloyd's algorithm on {f_(x_n)}_n=1^N to obtain a set of k centroids ={c_j}_j=1^k; * Create a pseudo-labeled set ={(x_n, ŷ_n)}_n=1^N where ŷ_n=_jx_n - c_j_2, and learn f_ with a linear classifier in a supervised fashion for multiple epochs. is known for its simplicity and has been shown to perform on par with other more advanced self-supervised learning methods for representation learning <cit.>. In this paper, we extend to the federated setting and propose , which integrates within the  <cit.> framework. Namely, iterates between local training of f_(x) on each client and global model aggregation for multiple rounds. At the end of each round, applies to update the centroids that are used to assign pseudo labels. Algorithm <ref> outlines our approach, where the red text corresponds to , the blue text corresponds to , and the green text corresponds to the element-wise weight average of . § EXPERIMENTS We first evaluate on benchmark synthetic datasets, which have well-established true centers. Then we extend our evaluation to frozen features of real-world image data extracted from pre-trained neural networks. Additionally, we assess the representation learning capabilities of by training a deep feature extractor network from scratch in the federated framework. To simulate the non-IID data partitions, we follow <cit.> to split the data drawn from Dirichlet(α) for multiple clients. Smaller α indicates that the split is more heterogeneous. We also include the IID setting, in which clients are provided with uniformly split subsets of the entire dataset. Furthermore, we have standardized the number of clients to M=10 for all experiments in this section. A detailed discussion on the impact of varying the number of clients is provided in <ref>. Baselines. We mainly compare three baselines: * Match Averaging (M-Avg): matches different sets of centroids from clients by minimizing their ℓ_2-distances and returns the means of matched centroids. 
* k-FED: the one-shot federated clustering method <cit.> designed for heterogeneous data, assuming that each client's data originates from k'≤√(k^*) true clusters. The method utilizes a small k' for k-means clustering on clients. In the following experiments, we select a single k' for each dataset, with detailed tuning experiments provided in <ref>. * FFCM:  <cit.> focuses on fuzzy c-means clustering and presents two versions of aggregation algorithms, which are weighted averaging centroids (v1) and applying k-means on centroids (v2). Since this method is not designed for a one-shot setting, we report its results for both round 1 and round 10 in the following experiments. Additionally, we include a centralized benchmark, representing the performance of k-means clustering on the entire dataset without federated splits. In the following experiments, we set k=k^* for all methods except for k-FED. Evaluation metric. For synthetic datasets with known true centers, we assess recovered centroids by calculating the ℓ_2-distance between output centroids and true centers. In contrast, for real datasets where true centers are unknown, we adopt the standard clustering measures including Purity and Normalized Mutual Information (NMI). The average Purity for all clusters is reported. NMI measures the mutual information shared between clustering assignments X and true labels Y defined as NMI(X,Y)=2I(X;Y)/(H(X)+H(Y)), where I denotes the mutual information and H is the entropy. §.§ Evaluation On synthetic datasets. We evaluate on benchmark datasets in <cit.> with known true centers. Specifically, we focus on S-sets, comprising synthetic data characterized by four different degrees of separation. S-sets includes four sets: S1, S2, S3, and S4, each consisting of 15 Gaussian clusters in ℝ^2. Visualizations of S-sets are provided in <ref>. We assess recovered centroids by calculating the ℓ_2-distance to ground truth centers, with mean results and standard deviation from 10 runs reported in <ref>. Additionally, the Purity and NMI of clustering assignment quality are presented in <ref>. We investigate three data sample scenarios in the federated setting: IID, Dirichlet(0.3), and Dirichlet(0.1). And we select k'=5 for k-FED after careful tuning (detailed in <ref>). Results in <ref> indicate that our algorithm consistently outperforms all baselines in recovering the global solution across all experimental settings. <ref> provides a visualization of these results, demonstrating that even under the challenging non-IID scenario – Dirichlet(0.3), 's recovered centroids closely approximate the true centers. Note that <ref> suggests that (and some other federated clustering methods) can even outperform the centralized k-means. From the perspective of federated learning, this may look odd. But it makes perfect sense from the local solution point of view of k-means. Solving centralized k-means likely leads to a local solution with suboptimal performance. However, with multiple clients independently solving k-means, their solutions together have a higher chance to collaboratively recover the global solution. To explore this advantage of , we conduct experiments on the impact of varying numbers of clients on the synthetic dataset, as detailed in <ref>. On frozen features. We evaluate on features extracted from pre-trained neural networks using real datasets – CIFAR10/100 <cit.>. And the frozen features are generated from the ImageNet pre-trained ResNet-50 <cit.> model. 
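As a reference for how such frozen features could be produced, the sketch below embeds a dataset with an ImageNet pre-trained ResNet-50 from torchvision and keeps the penultimate-layer (2048-dimensional) output. The choice of layer, the weight enum, and the assumption that the dataset transform already resizes and normalizes images to the ImageNet input format are ours, since the text does not pin these details down.

```python
import torch
import torchvision as tv

def extract_frozen_features(dataset, batch_size=256, device="cpu"):
    """Embed a dataset with an ImageNet pre-trained ResNet-50 and return the frozen features."""
    model = tv.models.resnet50(weights=tv.models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = torch.nn.Identity()        # keep the 2048-d penultimate representation
    model.eval().to(device)
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=False)
    feats = []
    with torch.no_grad():
        for x, _ in loader:
            feats.append(model(x.to(device)).cpu())
    return torch.cat(feats)               # (N, 2048) feature matrix
```

The resulting feature matrix is then what the federated clustering methods compared below operate on.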
For k-FED algorithm, we select k'=3 for CIFAR10 and k'=10 for CIFAR100, following the suggestion k'≤√(k^*) from their paper. In <ref>, we present the average Purity and NMI calculated from three random runs. Our algorithm demonstrates robust performance across different data sample scenarios. And it outperforms all baseline models in most cases. Under the extreme non-IID scenario – Dirichlet(0.1), the k-FED algorithm tends to outperform others. This is because k-FED is designed for high heterogeneity, assuming each client possesses data from k'≤√(k^*) true clusters. Without considering local solution structures in k-means, k-FED relies on an accurate selection of k' to reduce the chance of occurrence of spurious centroids in scenarios of high heterogeneity, as illustrated in <ref>(right). This strategy makes its performance highly sensitive to the choice of k', and it tends to decrease rapidly in less heterogeneous cases. This situation highlights the importance of addressing local solutions in federated clustering. §.§ Evaluation We validate on the CIFAR10/100 and the Tiny-ImageNet dataset with 64× 64 resolution. We randomly initialize a ResNet-18 model and train it for 150 rounds, with all 10 clients fully participating in the process. Each round we train 5 local epochs for clients' models with batch size 128. We modified the official implementation of -v2 <cit.> and followed their training details. The test accuracy is reported using linear evaluation, with the mean results of three runs presented in <ref>. Since each round of FFCM requires one communication between the server and clients, we only perform FFCM(Rd=1) given the computation resource constraint. And we set k'=20 for k-FED on Tiny-ImageNet. We first confirm that the centralized training of -v2 reaches reasonable accuracy on three datasets. Also, we see that the performance in federated framework drops sharply compared to centralized learning, showing the challenges of learning deep representation on decentralized data. One possible reason could be the limited performance of k-means clustering results on noisy features, especially in the early rounds, leading to unreliable pseudo labels in the following supervised training step. However, our method shows encouraging improvements. The outperforms the baselines significantly and consistently across all settings on three datasets. From experiments, we demonstrate: (1) it is challenging for applications of -v2 in federated settings to match its centralized performance; (2) it is possible for current baselines to learn meaningful features in federated representation learning; (3) the proposed serves as a strong baseline, which outperforms current algorithms by a notable gain. § CONCLUSIONS AND FUTURE WORKS We investigate federated clustering, an important yet under-explored area in federated unsupervised learning, and propose a one-shot algorithm, , by leveraging structures of local solutions in k-means. We also adopt FeCA for representation learning and propose . Through comprehensive experiments on benchmark datasets, both FeCA and DeepFeCA demonstrate superior performance and robustness, outperforming established baselines across various settings. Towards other challenging settings. Throughout the whole paper, we consider either the infinite sample scenario (for theory) or the large sample scenario (for experiments), where a considerable large data sample size is still required for the local solution to have the desired structure. 
This corresponds to the cross-silo federated learning as introduced in the review paper <cit.>, where the number of clients is limited but sufficient data is available on each client. The other cross-device federated learning setting, where limited data is available on each client, can be adapted to tentatively mimic the cross-silo setting. One could group all the clients into a few groups to guarantee sufficient data samples in each group, and then apply FedAvg <cit.> or other federated algorithm over the clients within each group, and at last deploy our federated k-means algorithm on solutions returned by different groups. §.§.§ Acknowledgments J. Xu and Y. Zhang acknowledge support from the Department of Electrical and Computer Engineering at Rutgers University. H.-Y. Chen and W.-L. Chao are supported in part by grants from the National Science Foundation (IIS-2107077 and OAC-2112606) and Cisco Research. tmlr § PROOF OF THEOREM <REF> In this section, we prove Theorem <ref>. Under the Stochastic Ball Model and high Signal-to-Noise Ratios (SNR) condition, we will demonstrate: (1) all the clients returned one/many-fit-one centroids corresponding to the same ground truth center are bounded within a ball of some radius (determined by Algorithm <ref>); (2) for the final output centroids ^*={c_1^*,…,c_k^*} from Algorithm <ref>, the distance between any centroid c_s^*∈^*, s∈[k] and its corresponding true center is upper bounded. Specifically, the proof is composed of the following three steps: * Step 1 (Proof of the effectiveness of removing one-fit-many in Algorithm <ref>): On the client end, we prove the effectiveness of removing all one-fit-many centroids by Algorithm <ref>; * Step 2 (Proof of the effectiveness of assigning radius in Algorithm <ref>): On the server end, we prove that the radius assigned by Algorithm <ref> effectively encloses all the centroids (returned from the clients) associated with the same true center; * Step 3 (Proof of Theorem <ref>): On the server end, under the assumption that there does not exist any one-fit-many centroid (proved in Step 1), we first prove Algorithm <ref> correctly classifies all the returned centroids ={^(1),…,^(M)} from M clients using radii assigned by Algorithm <ref>, and then derive the error bound between recovered centroids and their associated ground truth centers. For completeness, we outline some notations used in the following proof and a formal description of the Stochastic Ball Model below. Stochastic Ball Model and Notations. Let θ_1^*,…,θ_k^*∈ℝ^d represent k distinct true cluster centers, and f_s be the density of a distribution with mean θ_s^* for each s∈[k]. We assume each data point x^(m)∈ℝ^d of client m∈[M] is sampled independently and uniformly from a mixture f of distributions {f_s}_s∈[k], with the density f(x) = 1/k∑_s=1^k f_s(x). The Stochastic Ball Model is the mixture f where each ball component has density f_s(x) = 1/Vol(𝔹_s)1_𝔹_s(x), s∈ [k], where 𝔹_s denotes a ball component centered at θ_s^* with radius r. In the context of the k-means problem, to identify a set of k centroids ^(m)={c_1^(m),…,c_k^(m)} on client m, we consider the goal as minimizing the following objective: G(^(m)) = N∫min_i∈[k] x^(m)-c_i^(m)_2^2 f(x^(m))dx^(m) = 1/k∑_s=1^k ∫min_i∈[k] x^(m)-c_i^(m)_2^2f_s(x^(m)) dx^(m). The above objective function represents the infinite-sample limit of the objective (<ref>) on client m. In the following proof, we denote the objective G(^(m)) in (<ref>) on client m as G^(m). 
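Although not needed for the proof, a small sampler for this data model may help fix intuition. The sketch below draws points uniformly from the mixture of k disjoint balls of radius r, using the standard direction-times-scaled-radius trick for uniform sampling in a d-dimensional ball; all function and parameter names are ours.

```python
import numpy as np

def sample_stochastic_ball_model(true_centers, r, n, seed=0):
    """Draw n points uniformly from the mixture of k disjoint balls B_s of radius r."""
    rng = np.random.default_rng(seed)
    true_centers = np.asarray(true_centers)                   # (k, d) ground truth centers
    k, d = true_centers.shape
    comps = rng.integers(k, size=n)                           # uniform mixture weights 1/k
    dirs = rng.normal(size=(n, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)       # uniform directions on the sphere
    radii = r * rng.random(n) ** (1.0 / d)                    # uniform radial profile inside a d-ball
    return true_centers[comps] + radii[:, None] * dirs, comps
```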
Given a set of k centroids ^(m), we denote the associated Voronoi set as {𝒱_1^(m),…,𝒱_k^(m)}, where 𝒱_j^(m) is the region consisting of all the points closer to c_j^(m) than any other centroid in ^(m). Formally, for each j∈[k], we define 𝒱^(m)_j = {x : x - c_j^(m)_2 ≤ x - c_l^(m)_2, ∀ l ≠ j, l ∈ [k]}. In addition, we define the maximum and minimum pairwise separations between true centers {θ_s^*}_s∈ [k] as Δ_max:=max_s≠ s'θ_s^* - θ_s'^* _2 and Δ_min:=min_s≠ s'θ_s^* - θ_s'^* _2. §.§ Step 1 (Proof of the effectiveness of removing one-fit-many in Algorithm <ref>) In this section, we prove the effectiveness of Algorithm <ref> in eliminating all one-fit-many centroids under the Stochastic Ball Model. Specifically, on each client m∈ [M], after applying Lloyd's algorithm, we obtain a set of k centroids ^(m). If ^(m) is a non-degenerate local minimum that is not the global optimum, then it must contain both one-fit-many and many-fit-one centroids, as discussed in <ref>. Next, the algorithm identifies a candidate one-fit-many centroid c_i^(m)∈^(m) whose corresponding Voronoi set 𝒱_i^(m) contains data points with the largest standard deviation of distances to c_i^(m). In this proof, with the infinite-sample limit, we derive the objective G_i^(m) in <ref> as G_i^(m) = ∫_𝒱_i^(m) x-c_i^(m)_2^2 f(x) dx. In addition, the algorithm pinpoints two candidate many-fit-one centroids c_p^(m) and c_q^(m) from ^(m), characterized by the minimal pairwise distance. Then we tentatively merge the respective Voronoi sets 𝒱_p^(m) and 𝒱_q^(m) to form a new region _j^(m) and subsequently obtain a corresponding centroid c_j^(m). The objective G_j^(m) in <ref> is then calculated as G_j^(m) = ∫__j^(m) x-c_j^(m)_2^2 f(x)dx , where _j^(m) = 𝒱_p^(m)∪𝒱_q^(m). To test if these candidate centroids c_i^(m) and {c_p^(m), c_q^(m)} are one-fit-many and many-fit-one centroids, respectively, Algorithm <ref> compares G_i^(m) and G_j^(m). If G_i^(m)≥ G_j^(m), then c_i^(m) is confirmed as a one-fit-many centroid to be removed, and {c_p^(m), c_q^(m)} are many-fit-one centroids to be kept. This is proved in the following Lemma <ref>. Under the Stochastic Ball Model, for some constants λ≥ 3 and η≥ 5, if Δ_max≥ 4λ^2 k^4 r and Δ_min≥ 10ηλ k^2 √(rΔ_max), then Algorithm <ref> eliminates all the one-fit-many centroids in a local minimizer ^(m) on client m. If ^(m) is a non-degenerate local minimum that is not globally optimal on client m, then it must contain both one-fit-many and many-fit-one centroids. Without loss of generality, assume that c_i^(m)∈^(m) is associated with multiple true centers θ_1^*,…, θ_t^*, where t≥ 2. Additionally, let {c_p^(m), c_q^(m)}∈^(m) (potentially along with other centroids) are associated with the same true center θ_t+1^*. Then the objective G^(m) of ^(m) is G^(m) = G_i^(m) + T_1 + B, where T_1 = ∫_𝒱_p^(m)∪𝒱_q^(m)min{ x-c_p^(m)_2^2, x-c_q^(m)_2^2 } f(x) dx, and G_i^(m) is defined in (<ref>). B denotes the objective value contributed by the Voronoi set other than {𝒱_i^(m), 𝒱_p^(m),𝒱_q^(m)}. We construct a hypothetical solution _h^(m) by: (1) merging Voronoi sets 𝒱_p^(m) and 𝒱_q^(m) into a new region _j^(m) with the centroid c_j^(m); (2) dividing the Voronoi set 𝒱_i^(m) into two regions _i_1^(m) and _i_2^(m), with new centroids c_i_1^(m) and c_i_2^(m) respectively: _i_1^(m) = {x∈𝒱_i^(m): x-c_i_1^(m)_2≤x-c_i_2^(m)_2 }, _i_2^(m) = {x∈𝒱_i^(m): x-c_i_2^(m)_2≤x-c_i_1^(m)_2 }; (3) keeping the remaining centroids in ^(m). Thus, we have _h^(m) = ^(m)∖{c_i^(m), c_p^(m), c_q^(m)}∪{c_i_1^(m), c_i_2^(m), c_j^(m)}. 
The objective G_h^(m) for this hypothetical solution _h^(m) is G_h^(m) = G_j^(m) + T_2 + B, where T_2 = ∫_𝒱_i^(m)min{x-c_i_1^(m)_2^2, x-c_i_2^(m)_2^2 } f(x)dx, and G_j^(m) is defined in (<ref>). By selecting centroids c_i_1^(m) = c_i^(m) and c_i_2^(m)=max_x∈𝒱_i^(m) x-c_i^(m) and applying Lemma <ref>, we have G^(m)- G_h^(m)≥Δ_min^2/36k. This inequality implies that ^(m) is a local solution with a suboptimal objective value G^(m). In Algorithm <ref>, instead of comparing G^(m) and G_h^(m), we evaluate G_i^(m) and G_j^(m) to determine whether ^(m) is a local solution. The difference between objective values G^(m) and G_h^(m) is G^(m) - G_h^(m) = G_i^(m) - G_j^(m) - (T_2 - T_1). Following the selection of centroids c_i_1^(m) and c_i_2^(m) as above, we have the proved claim that c_i_1^(m) - c_i_2^(m)≥Δ_min/2-r from the proof of Lemma A.2 in <cit.>. For the term T_2, it follows that T_2 = ∫__i_1^(m) x - c_i_1^(m)_2^2 f(x)dx + ∫__i_2^(m) x-c_i_2^(m)_2^2f(x)dx ≥∫__i_2^(m)( c_i_1^(m) - c_i_2^(m)_2 - x - c_i_1^(m)_2 )^2f(x)dx ≥∫__i_2^(m)( Δ_min/2-r -2r )^2 f(x)dx ≥1/k( Δ_min/2-3r )^2 ≥1/k( 2Δ_min/5)^2 Equation (<ref>) follows from 𝒱_i^(m)=_i_1^(m)∪_i_2^(m). Inequality (<ref>) uses the triangle inequality, and (<ref>) follows from the choice of c_i_1^(m) and c_i_2^(m). Inequality (<ref>) stems from the ball component's volume being 1/k. Inequality (<ref>) follows the claim that r ≤Δ_min/20ηλ^2 k^4≤Δ_min/30, which is derived from the separation assumption in (<ref>). For the term T_1, we have T_1 = ∫_𝒱_p^(m) x-c_p^(m)_2^2 f(x)dx + ∫_𝒱_q^(m) x-c_q^(m)_2^2f(x)dx ≤∫_𝒱_p^(m)( x-θ_t+1^* _2 + θ_t+1^*-c_p^(m)_2 )^2f(x)dx + ∫_𝒱_q^(m)( x-θ_t+1^* _2+ θ_t+1^*-c_q^(m)_2 )^2f(x)dx ≤∫_𝒱_p^(m)( r + 8λ k^2√(rΔ_max))^2 f(x)dx + ∫_𝒱_q^(m)( r + 8λ k^2√(rΔ_max))^2f(x)dx = ∫__j^(m)( r + 8λ k^2√(rΔ_max))^2 f(x)dx ≤1/k( r + 4Δ_min/5η)^2 ≤1/k( Δ_min/3)^2 Equation (<ref>) follows from the definition of the term T_1. Inequality (<ref>) follows from the triangle inequality. Inequality (<ref>) utilizes the error bound from Theorem 1 (many/one-fit-one association) in <cit.>, with each ball component's radius as r. Inequality (<ref>) is based on the volume of each ball component being 1/k, the proved claim r≤Δ_min/30, and the assumption that η≥5. Combining the above inequalities, we have T_2 -T_1≥ 0. Thus, the equation (<ref>) can be derived as G_i^(m) - G_j^(m) = G^(m)- G_h^(m) + (T_2-T_1) ≥ G^(m)- G_h^(m)≥Δ_min^2/36k≥ 0. Therefore, if G_i^(m)≥ G_j^(m), it implies that ^(m) is a local solution with a suboptimal objective value G^(m). Subsequently, by comparing G_i^(m) and G_j^(m) for each candidate centroid c_i^(m), Algorithm <ref> can effectively eliminates all one-fit-many centroids from ^(m). Under the Stochastic Ball Model, for some constants λ≥ 3 and η≥ 5, if Δ_max≥ 4λ^2 k^4 r and Δ_min≥ 10ηλ k^2 √(rΔ_max), then when c_i_1^(m)=c_i^(m) and c_i_2^(m)=max_x∈𝒱_i^(m) x-c_i^(m), the following holds: G^(m) - G_h^(m)≥Δ_min^2/36k. The difference between objective values of the solution C^(m) and the hypothetical solution C_h^(m) is G^(m) - G_h^(m) = (G_i^(m) - T_2) - (G_j^(m) - T_1). Lemma A.2 in <cit.> establishes that by choosing centroids c_i_1^(m)=c_i^(m) and c_i_2^(m)=max_x∈𝒱_i^(m) x-c_i^(m), we have G_i^(m) - T_2 ≥Δ_min^2/18k, and G_j^(m)-T_1 ≤4r^2/k, which follows from the volumes of the ball components under the Stochastic Ball Model, each equating to 1/k. Then, we derive G^(m)-G_h^(m)≥Δ_min^2/18k - 4r^2/k≥Δ_min^2/36k. 
The second inequality in (<ref>) follows the claim that r≤Δ_min/20ηλ^2k^4≤Δ_min/30, which is derived from the separation assumption in (<ref>). §.§ Step 2 (Proof of the radius assignment in Algorithm <ref>) In this section, we present a theoretical analysis demonstrating the effectiveness of the radius assignment in Algorithm <ref>, ensuring coverage of all centroids associated with one true center. On one hand, following the removal of all one-fit-many centroids by Algorithm <ref> (proved in step 1), all returned centroids 𝒞={𝒞^(1),…, 𝒞^(M)} on the server end are concentrated around true centers {θ_s^*}_s∈ [k]. Thus, this is equivalently another clustering problem on centroids 𝒞 with (extremely) high SNR separation condition. Let {𝒮^*_1,…,𝒮^*_k} be the ground truth clustering sets of all returned centroids 𝒞, where for each s∈[k], centroids within 𝒮^*_s are all associated with the one true center θ_s^*. Algorithm <ref> classifies these returned centroids 𝒞 into k sets {𝒮_1,…,𝒮_k}, using the radius in ℛ={ℛ^(1),…,ℛ^(M)} determined by Algorithm <ref>. On each client m∈[M], Algorithm <ref> first generates a new set of centroids ^(m) by discarding any potential many-fit-one centroid from ^(m). It then identifies the minimal pairwise distance Δ_min^(m) in ^(m) aiming to approximate Δ_min, formulated as: Δ_min^(m) = min_c_i,c_j∈^(m), i≠ jc_i - c_j _2. Subsequently, Algorithm <ref> calculates a uniform radius r^(m) = 1/2Δ_min^(m), and assigns it to every centroid in ^(m). These centroid-radius pairs are then sent to the server. Under the Stochastic Ball Model, for some constant λ≥ 3 and η≥ 5, if Δ_max≥ 4λ^2k^4r and Δ_min≥ 10ηλ k^2√(rΔ_max), then for centroids in 𝒮^*_s which are associated with the true center θ_s^*, s∈[k], we have the following inequality holds on all clients m∈[M]: max_c_i,c_j∈𝒮^*_s, i≠ j c_j - c_i _2 ≤1/2Δ_min^(m), ∀ s∈[k]. For centroids in 𝒮_s^* which are associated with one true center θ_s^*, s∈[k], we upper bound their maximum pairwise distance using the triangle inequality: max_c_i,c_j∈𝒮^*_s, i≠ j c_j - c_i _2 ≤ max_c_i,c_j∈𝒮^*_s, i≠ j( c_j - θ_s^* _2 + θ_s^*-c_i _2 ) ≤ 2max_c_i ∈𝒮^*_s c_i - θ_s^* _2. Under the Stochastic Ball Model with radius r, we apply the error bound from Theorem 1 (many/one-fit-one association) in <cit.> to the above inequality, and for some constant λ≥ 3 we have max_c_i,c_j∈𝒮^*_s, i≠ j c_j - c_i _2 ≤ 2max_c_i ∈𝒮^*_s c_i - θ_s^* _2 ≤ 16λ k^2√(rΔ_max). By combining the above inequality and the assumption Δ_min≥ 10ηλ k^2√(rΔ_max) in (<ref>), we obtain max_c_i,c_j∈𝒮^*_s, i≠ j c_j - c_i _2 ≤8/5ηΔ_min, where the constant η≥ 5. Next, we derive the approximation error between Δ_min^(m) and Δ_min as: | Δ_min^(m) - Δ_min | ≤ max_c_s∈𝒮_s^* c_s - θ_s^* _2 + max_c_s'∈𝒮_s'^* c_s' - θ_s'^* _2, s∈ [k], s'∈ [k], s≠ s' ≤ 2 max_c_s∈𝒮_s^* c_s - θ_s^* _2, s∈ [k]. Given that all clients follow the same mixture distributions under the Stochastic Ball Model, the above inequality holds for all clients m∈[M]. Similar to inequality (<ref>), we again utilize the error bound from Theorem 1 (many/one-fit-one association) in <cit.>, and obtain: | Δ_min^(m) - Δ_min | ≤8/5ηΔ_min, ∀ m∈[M]. Reorganizing the terms in inequality (<ref>) gives 5η/5η + 8Δ_min^(m)≤Δ_min≤5η/5η - 8Δ_min^(m), ∀ m∈[M]. Then by combining inequalities (<ref>) and (<ref>), for each s∈[k], we obtain max_c_i,c_j∈𝒮^*_s, i≠ j c_j - c_i _2 ≤8/5η - 8Δ_min^(m)≤1/2Δ_min^(m), ∀ m∈[M], where the last inequality follows from the assumption that η≥ 5. 
§.§ Proof of Theorem <ref> In this section, we complete the proof of our main theorem by demonstrating: (1) Algorithm <ref> correctly classifies all returned centroids in alignment with their corresponding true centers, utilizing the radius assigned by Algorithm <ref>; (2) we derive the error bound between the final output centroids ^* from Algorithm <ref> and the corresponding true centers. Let cluster labels be s = 1,…,k. During the grouping process of all returned centroids on the server end, Algorithm <ref> first selects a centroid in 𝒞 with the largest radius. Without loss of generality, we assume that this selected centroid c_s∈ is returned by the client m and associated with the true center θ_s^*. Thus, this centroid belongs to the ground truth clustering set as c_s∈𝒮_s^* and its assigned radius is r_s= 1/2Δ_min^(m). Algorithm <ref> then groups the centroids located within the ball centered at c_s with radius r_s, resulting in the formation of the grouped cluster 𝒮_s = { c : c∈𝒞, c-c_s ≤ r_s }. On one hand, Lemma <ref> implies that for each s∈[k], the maximum pairwise distance between centroids in 𝒮_s^* is bounded by 1/2Δ_min^(m) for any client m. Consequently, 𝒮^*_s ∈𝒮_s can be readily inferred based on the definition of 𝒮_s. On the other hand, for other centroids c_s'∈𝒮^*_s', s'∈[k], s≠ s', we have c_s' - c_s _2 ≥θ_s'^* - θ_s^* _2 - c_s' - θ_s'^* _2 - c_s - θ_s^*_2 ≥Δ_min - c_s' - θ_s'^* _2 - c_s - θ_s^*_2. Utilizing the error bound from Theorem 1 (many/one-fit-one association) in <cit.> gives c_s' - c_s _2 ≥Δ_min - 16λ k^2√(rΔ_max)≥Δ_min - 8/5ηΔ_min, where the last step follows from the assumption Δ_min≥ 10ηλ k^2√(rΔ_max). Applying the lower bound in (<ref>) to the above inequality, for any client m∈[M], it follows that c_s' - c_s _2 ≥5η-8/5η+8Δ_min^(m) > 1/2Δ_min^(m), where the constant η≥ 5. Thus, following the definition of 𝒮_s, the above inequality implies that centroids c_s'∈𝒮_s'^*, s≠ s' do not belong to 𝒮_s. Combining the proved claim that 𝒮_s^*∈𝒮_s, this suggests that 𝒮_s = 𝒮_s^*, s∈[k], up to a permutation of cluster labels. This further implies that Algorithm <ref> correctly classifies all returned centroids in according to their associated true centers. For each s∈[k], Algorithm <ref> computes the mean of _s, denoted as c_s^* = mean(_s). The collection of these mean centroids, ^*={c_1^*, …, c_k^*}, constitutes the final set of centroids output by Algorithm <ref>. Then the proximity of c_s^* to its associated true center θ_s^* can be bounded as: c_s^* - θ_s^* _2 ≤max_c∈_s c - θ_s^* _2 ≤max_c∈_s^* c-θ_s^* _2 , ∀ s∈[k] ≤ 8λ k^2√(r Δ_max)≤4/5ηΔ_min, for some constants η≥ 5. The inequality (<ref>) follows from the proved statement _s = _s^*, s∈[k]. The inequality (<ref>) first utilizes the error bound from Theorem 1 (many/one-fit-one association) in <cit.>, followed by the application of the separation assumption Δ_min≥ 10ηλ k^2√(rΔ_max). Thus, any output centroid c_s^*∈^* from Algorithm <ref> is close to some true center θ_s'^* as: c_s^* - θ_s'^* _2 ≤4/5ηΔ_min, thereby proving Theorem <ref>. § EVALUATION ON THE RADIUS ASSIGNED BY ALGORITHM <REF> (EMPIRICAL). This section evaluates the radius produced by the empirical algorithm variant, Algorithm <ref>. While the proof of our main theorem characterizes the performance of the Algorithm <ref>, it is important to note that most real-world scenarios do not satisfy a homogeneous data sample assumption. Consequently, we have implemented an empirical procedure that assigns a unique radius to each returned centroid. 
In this context, it is also not necessary to remove many-fit-one centroids on clients, as these centroids typically concentrate around true centers and will be grouped together via the aggregation algorithm on the server. This empirical algorithm variant is specifically designed to adapt our main algorithm for more general scenarios. In this section, we empirically assess the radius assigned by Algorithm <ref> under both IID and non-IID scenarios, demonstrating its effectiveness for the proceeding aggregation step in Algorithm <ref>. Specifically, our objective is to empirically show that utilizing the selected centroid-radius pair (c_s,r_s), s∈[k] (assigned by Algorithm <ref>), Algorithm <ref> can effectively group all returned centroids corresponding to the same true center θ_s^* on the server end. Recall that we denote a set of returned centroids associated with one true center θ_s^* as _s^*. Essentially, we aim to validate that the distance between any centroid c∈_s^* to c_s is bounded by the assigned radius r_s. Our goal is thus formulated as follows: max_c_i∈_s^* c_i - c_s _2 ≤ r_s. The left side of the above inequality indicates the maximum distance between any two centroids in _s^*, and it can be further elaborated as max_c_i∈_s^* c_i - c_s _2 ≤max_c_s∈_s^*( c_i-θ_s^* _2 + θ_s^* - c_s _2 ) ≤ 2 max_c_i∈_s^* c_i-θ_s^* _2. Then our goal in (<ref>) can be reformulated as max_c_i∈_s^* c_i-θ_s^* _2/r_s≤1/2. Next, we present empirical results on the synthetic dataset, S-sets (S1), with known ground truth centers {θ^*_s}_s∈[k]. These results demonstrate that the inequality (<ref>) holds across both IID and non-IID cases. For this purpose, we define a new parameter σ as σ_i:= c_i-θ_s^* _2/r_s, c_i∈_s^* s∈[k]. This parameter σ_i represents the distance between the returned centroid c_i and its fitted true center θ_s^* scaled by the radius r_s. Our empirical results demonstrate that values of σ_i remain below 0.5 for all returned centroids in ={^(1),…,^(M)}, in accordance with our goal inequality (<ref>). Results. <ref> illustrates the evaluation of σ_i, as determined using the radius assigned by Algorithm <ref>, in varied inhomogeneous settings on S-sets (S1). This figure presents σ_i values for all returned centroids across three random runs, categorized according to their respective true centers in different colors. Results consistently indicate that σ_i values stay below 0.5, thereby empirically substantiating the validity of the inequality (<ref>) in our analysis. Consequently, it demonstrates the efficacy of aggregating centroids using the radius assigned by Algorithm <ref> on the server end. We note that the number of returned centroids associated with each true center may vary. It is because we selectively remove one-fit-many centroids on the client side, while it is possible for many-fit-one centroids to be present. In some extreme non-IID cases, assuming a client only contains a few secluded data points from one true cluster but they all far deviate from the true center, it may occur that a returned centroid is not covered by the radius. Then it will be considered noisy and discarded by Algorithm <ref>. Concretely, the recovered centroids of this cluster will be contributed by returned centroids from other clients. § SUPPLEMENTARY EXPERIMENTS ON THE SYNTHETIC DATASET This section presents additional experimental results on the synthetic dataset S-sets <cit.>. 
The S-sets comprise four sets: S1, S2, S3, and S4, each consisting of 15 Gaussian clusters in 2-dimensional data with varying degrees of overlap, specifically 9%, 22%, 41%, and 44%. For the visualization of S-sets, refer to <ref> from their paper <cit.>. §.§ Evaluations on the clustering assignments This section shifts focus to the evaluation of clustering assignments on the synthetic dataset S-sets, diverging from the analysis of recovered centroids. While <ref> in the paper assesses the ℓ_2-distance between recovered centroids and known ground truth centers, we herein present the average results of Purity and NMI across 10 random runs in <ref> under three different data sample scenarios. The findings consistently demonstrate that our algorithm surpasses all baseline algorithms in performance across every tested scenario, underlining its effectiveness in federated clustering tasks. §.§ More visualizations of recovered centroids by different methods on S-sets To further demonstrate the superior performance of our algorithm, we present more visualizations corresponding to the results detailed in <ref> for S-sets(S2) and S-sets(S3). <ref> displays the centroids recovered by various federated clustering algorithms under the non-IID condition – Dirichlet(0.3). Our algorithm's ability to resolve and leverage the structures of local solutions enables it to outperform other baseline methods that fail to address these critical aspects, especially in challenging non-IID settings. This emphasizes the critical role of resolving local solutions for enhanced algorithmic performance. §.§ Ablation study on eliminating one-fit-many centroids in Algorithm <ref> Removing one-fit-many centroids in Algorithm <ref> plays a crucial role in enhancing the algorithm's performance. These centroids are typically far from any true centers. By eliminating one-fit-many centroids at the client end, we effectively prevent the transmission of these problematic centroids to the server. It significantly simplifies the task of Algorithm <ref> on the server side, which involves grouping received centroids close to the same true center. In this section, we conduct an ablation study on eliminating one-fit-many centroids in Algorithm <ref>. In the following experiments, one-fit-many centroids are not removed on clients and then sent to the server. We present mean square errors between recovered centroids and ground truth centers in <ref>. The comparative results clearly demonstrate a performance degradation when these centroids are not removed, underscoring the significance of eliminating the one-fit-many step in Algorithm <ref>. Not enough output centroids. Not removing one-fit-many centroids can lead to a scenario where the number of reconstructed centroids is less than k. This occurs because the cluster of one-fit-many centroid typically contains data points from multiple true clusters, resulting in a significantly larger radius than that assigned to the true cluster. Consequently, the server may prioritize these centroids with large radii during the grouping process, forming a large group erroneously containing centroids associated with different true centers. In <ref>, we provide visualizations of reconstructed centroids without removing one-fit-many, demonstrating a notable decrease in performance. This emphasizes the necessity of their removal in our algorithm. 
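To make the ablated step concrete, here is a simplified sketch of the client-side test behind the one-fit-many removal (candidate selection by distance spread, a tentative merge of the two closest centroids, and the G_i versus G_j comparison); tie-breaking, iteration, and the exact merged-centroid choice are simplified relative to Algorithm <ref>.

```python
import numpy as np

def detect_one_fit_many(X, centroids):
    """Return the index of a detected one-fit-many centroid, or None.

    X: (N, d) local data; centroids: (k, d) local k-means solution."""
    k = len(centroids)
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)  # (N, k)
    assign = dists.argmin(axis=1)

    # Candidate one-fit-many: the cell with the largest spread of distances to its centroid.
    spreads = [dists[assign == j, j].std() if (assign == j).any() else 0.0 for j in range(k)]
    i = int(np.argmax(spreads))

    # Candidate many-fit-one pair: the two closest centroids.
    pair_d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    np.fill_diagonal(pair_d, np.inf)
    p, q = np.unravel_index(pair_d.argmin(), pair_d.shape)

    # G_i: cost of the candidate cell; G_j: cost of the tentatively merged (p, q) cell
    # around its mean (a simplification of the merged centroid used in the paper).
    G_i = ((X[assign == i] - centroids[i]) ** 2).sum()
    merged = X[(assign == p) | (assign == q)]
    G_j = ((merged - merged.mean(axis=0)) ** 2).sum()

    return i if G_i >= G_j else None
```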
§.§ Comparison between Algorithm <ref> (theoretical) and Algorithm <ref> (empirical) Algorithm <ref> (theoretical) is designed for theoretical analysis only under a strict setup, specifically the Stochastic Ball Model assumption. This assumption allows for the straightforward identification and removal of many-fit-one centroids. However, in practical scenarios, eliminating many-fit-ones is challenging and unnecessary, as they often carry crucial information about the global solution. In this section, we explore the applicability of Algorithm <ref> (theoretical) beyond its constraints by conducting experiments on S-sets under various heterogeneous conditions. Table <ref> presents results, revealing that the performance of Algorithm <ref> is suboptimal when assumptions of Stochastic Ball Model are not met, particularly in non-IID cases. This suboptimal performance is due to its reliance on specific assumptions for identifying many-fit-one centroids. Consequently, in practical scenarios, this approach may erroneously eliminate true centroids, leading to less effective outcomes. §.§ Tuning k' for k-FED on the synthetic dataset In this section, we provide the results of experiments conducted to select k' for k-FED <cit.> on the synthetic dataset. Given that all four subsets of S-sets have the same number of true clusters k^*=15, here we utilize S-sets(S1) for tuning k', which has the largest degree of separation. This choice is based on the separation condition mentioned in their paper. We present the ℓ_2-distance between recovered centroids and true centers for k' values ranging from 2 to 15. And results (mean±std) from 10 random runs are reported in <ref>. Additionally, we evaluate clustering assignments generated from the recovered centroids using Purity and NMI metrics. The mean results from 10 random runs are included in <ref>. Considering k-FED is designed for heterogeneous cases, we adopt a Dirichlet(0.3) data sample scenario in our experiments. It is also important to note that when k'=k^*, the aggregation step in the k-FED algorithm essentially becomes redundant, and the recovered centroids are equivalent to the set of centroids returned by one randomly chosen client. §.§ Presence of local solutions in k-means In this section, we perform centralized k-means on 50%, 75%, and 100% of data from S-sets. In <ref>, we illustrate objective values calculated as <ref> using 10 random seeds. As depicted, Lloyd's algorithm frequently converges to local solutions with objective values significantly larger than that of global solutions. This empirical result demonstrates the presence of local solutions with poor performance is independent of the data sample size. Also, it highlights the necessity of our algorithm, which is specifically designed to address and resolve these suboptimal local solutions. To better understand the structures of local solutions, we visualize some local solutions of centralized k-means on S-sets, shown in <ref>. This visualization reveals that despite variations in data separations, spurious local solutions exhibit structures as discussed in <ref>, containing both one-fit-many and many-fit-one centroids. This emphasizes the necessity of the steps in our algorithm that discard one-fit-many and aggregate many-fit-one centroids. Almost-empty cases. As outlined in <cit.>, local solutions of k-means can be composed of one/many-fit-one, one-fit-many and almost-empty centroids. The first two have been discussed in detail in <ref>. 
Addressing almost-empty cases involves identifying centroids that are far from any true centers and its cluster is almost empty with a small measure. This typically occurs when the dataset contains isolated points that are significantly far from the true centers. It is worth noting that almost-empty cases are more theoretical than practical, with rare occurrences in empirical experiments. However, if such a case does occur, our algorithm can handle it in the aggregation step by Algorithm <ref>. Centroids from almost-empty clusters can be treated as noisy data and discarded during the grouping process. Since they are distant from true centers and other received centroids, they do not contribute meaningfully to the final grouping. § DISCUSSION ON VARYING NUMBERS OF CLIENTS In this section, we explore the impact of varying client numbers on our federated approach. We conduct experiments by allocating a fixed dataset portion (5% randomly sampled from the entire dataset) to each client and then perform our algorithm across varying numbers of clients. We first evaluate the recovered centroids by calculating ℓ_2-distance between these centroids and the ground truth centers. Subsequently, we apply a one-step Lloyd's algorithm using the recovered centroids for initialization and then evaluate the clustering assignments by calculating Purity and NMI. Results for the S-sets (S1) are presented in <ref>. It is noteworthy that only centralized k-means clustering is performed when the number of clients is one. In such cases, centralized k-means often results in large ℓ_2-distance due to convergence to local optima with suboptimal performance. The findings presented in <ref> are visually depicted in <ref> (left), where a trend of decreasing ℓ_2-distance is observed as the number of clients M increases. This trend indicates that collaboration among multiple clients can significantly mitigate the negative impact of local solutions. Specifically, when a client encounters a local minimum, integrating benign results from other clients can help alleviate this issue. This collaborative mechanism underscores the effectiveness of federated approaches in improving performance by leveraging the distributed nature of client contributions. Additionally, we assess the impact of varying client numbers in a standard federated setting, particularly under IID data sample scenario for S-sets(S1). In the following experiments, with M denoting the number of clients, each client is allocated 1/M of the data points from S-sets(S1). We note that centralized k-means is performed when M=1 on the entire dataset. We evaluate the performance of our algorithm by reporting the ℓ_2-distance between recovered centroids and true centers, alongside the Purity and NMI of clustering assignments, detailed in <ref>. Moreover, <ref> (right) features visual representations of the ℓ_2 distance results, emphasizing the robustness of our federated algorithm across varying numbers of clients. § DISCUSSION ON VARYING K The selection of k is an important aspect of the k-means problem. In this section, we present supplementary experiments investigating the performance of our algorithm with varying k. For clarity, in this study, we use k to denote the number of recovered centroids desired by our algorithm, which is also the number of output centroids by Algorithm <ref>. k' denotes the parameter used to perform k-means clustering on clients in Algorithm <ref>, while k^* indicates the number of true centers. 
In the following, we present experiments with varying values of k' and k, respectively. Varying values of k'. We perform FeCA on S-sets(S1) under three data sample scenarios, varying the value of k'. We choose k'=10 (undershoot case) and k'=20 (overshoot case) with k^*=15. Mean square errors between recovered centroids and ground truth centers are shown in <ref>. Note that for undershoot cases, the number of recovered centroids k might be less than k^*. In these instances, we identify, among the k^* true centers, the k that best match the recovered centroids and calculate the mean square error between them. Visualizations of clustering results on different clients and of FeCA's recovered centroids are illustrated in the following figures. When k'≠ k^*, even if clients converge to the global solution, this global solution exhibits structures similar to those observed in local solutions when k'=k^* (one-fit-many and many-fit-one). As shown in <ref>, when k'=10, clients' clustering results exhibit one-fit-many centroids, while k'=20 showcases many-fit-one structures, illustrated in <ref>. This underscores the necessity of our algorithm, as it effectively addresses such structured solutions. In the undershoot case (k'=10<k^*), clients' clustering results might contain multiple one-fit-many but no many-fit-one centroids. This complicates the elimination of one-fit-many centroids in Algorithm <ref>, because identifying one-fit-many centroids becomes challenging without many-fit-one centroids for reference. Under non-IID conditions, this might occur in extreme undershoot cases with rather small k', as clients' data may concentrate on only a subset of the true clusters. As discussed in <ref>, if one-fit-many centroids are not eliminated on clients, the number of output centroids k may be less than k^*, as shown in <ref>. In contrast, in the overshoot case (k'=20>k^*), the clients' clustering results include multiple many-fit-one centroids but no one-fit-many centroids. Since our algorithm effectively addresses many-fit-one centroids in the aggregation step on the server, FeCA can still accurately approximate the true centers, as demonstrated in <ref>. This is because the elimination of many-fit-one centroids on the clients' side is unnecessary; these centroids can contribute meaningfully to the grouping process on the server side, as discussed in <ref>. Varying values of k. We perform FeCA on S-sets(S1) under three data sample scenarios, selecting k'=k^*=15 and k=10. Mean square errors between recovered centroids and ground truth centers are reported in <ref>. When k<k^*, Algorithm <ref> outputs the mean centroids of the top k groups containing the largest number of elements. As illustrated in <ref> and <ref>, with k<k^*, the recovered centroids can still accurately approximate a subset of the true centers.
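For completeness, the sketch below illustrates the Dirichlet-based non-IID partitioning underlying the Dirichlet(0.1)/Dirichlet(0.3) data sample scenarios used throughout these experiments; the per-class allocation scheme and the example label array follow common practice and are assumptions here, not a verbatim description of the data pipeline.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, rng):
    """Split example indices across clients class by class with Dirichlet(alpha)
    proportions; smaller alpha gives more heterogeneous clients."""
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(n_clients))
        splits = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, splits)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]

rng = np.random.default_rng(0)
labels = rng.integers(0, 15, size=5000)        # e.g. 15 ground-truth clusters, as in S-sets
parts = dirichlet_partition(labels, n_clients=10, alpha=0.3, rng=rng)  # Dirichlet(0.3) scenario
```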
http://arxiv.org/abs/2407.12220v1
20240717000630
Questionable practices in machine learning
[ "Gavin Leech", "Juan J. Vazquez", "Misha Yagudin", "Niclas Kupper", "Laurence Aitchison" ]
cs.LG
[ "cs.LG", "cs.CL", "cs.CY" ]
§
ABSTRACT Evaluating modern ML models is hard. The strong incentive for researchers and companies to report a state-of-the-art result on some metric often leads to questionable research practices (QRPs): bad practices which fall short of outright research fraud. We describe 43 such practices which can undermine reported results, giving examples where possible. Our list emphasises the evaluation of large language models (LLMs) on public benchmarks. We also discuss “irreproducible research practices”, i.e. decisions that make it difficult or impossible for other researchers to reproduce, build on or audit previous research. § INTRODUCTION > If, like truth, falsehood had but one face, we'd be better off: we could take as true the opposite of what a liar says. But the opposite of the truth has a hundred thousand faces and a limitless field. – <cit.> To understand the actual capabilities of models like large language models (LLMs) and to develop reliable systems based on them, it is critical to have trustworthy evaluations comparing different models and approaches on meaningful benchmarks. However, researchers and companies have strong incentives to engage in `questionable research practices' (QRPs) to inflate their reported results. For instance, inflated results help researchers publish in high-impact venues and help companies attract investment and users. QRPs seriously complicate the use of benchmark scores to rank systems or estimate their capabilities. Not only is there motive to engage in QRPs: the complexity of the pre-training, post-training and evaluation procedure also gives ample opportunity to use them. These opportunities fall into three families. First, contamination, in which test-set information is used in the pre-training, post-training, runtime, or prompt – and high-capacity models like LLMs can memorise arbitrary pretraining examples <cit.> and are known to be exposed to test data during training <cit.>. Second, cherrypicking, in which researchers `hack' experimental settings (selecting those under which their model works better than others, after testing multiple times), or try to “nerf” (i.e. degrade the performance of) baselines. Third are various forms of misreporting, such as making broad claims (e.g. about “reasoning”) based on narrow benchmarks (see <ref>). We additionally consider “irreproducible research practices” (IRPs): practices that make it more difficult for other researchers to reproduce, build on, or audit previous research. The most obvious and prevalent example is dataset hiding (<ref>): not sharing data or metadata about the training dataset used to create a model. Given the recent industrialization of ML research, this hiding aims to retain the lab's competitive advantage over other labs and to reduce the risk of copyright lawsuits. § RELATED WORK The two lenses we use, QRPs and researcher degrees of freedom, originate from psychological science <cit.> – but similar issues have been studied in machine learning under different terminology. §.§ Questionable research practices in ML We import the name `QRP' from psychology, but ML researchers are of course well aware of methodological issues under other names. <cit.> explore the effect of inflated language, bad communication, and superfluous design, while <cit.> review examples where lack of rigour has slowed overall progress in the field, along with the incentives that cause this.
A report from the NeurIPS 2019 Reproducibility programme <cit.> explores access to data, model specification, code availability, metric specification, improper use of statistical methods and over-claiming of results. <cit.> look at the mismatch between benchmarks and actual problems of real-world interest. <cit.> explore methodological problems in LLM evaluation. <cit.> is a useful systematic guide of `what not to do in ML' which mentions many QRPs. Conventionally, we use benchmark scores to estimate the model's generalisation to the broader task or capability that the benchmark is a proxy for. <cit.> focus on this external validity of benchmarks: what a benchmark says about data beyond (external to) the benchmark, and so whether good performance on the benchmark allows us to infer good performance on the task in general. There is a dearth of concrete examples of QRPs in the ML literature, which we put down to professional courtesy and sparse academic rewards for correcting past work. As of writing, Twitter is one of the only sources of leads and up-to-date bad practices (and to a lesser extent Reddit). Our work is similar in spirit to <cit.>'s famous analysis of misleading description and visualisation in statistical work, `How to Lie with Statistics'. §.§ Researcher degrees of freedom A fruitful way to understand the replication crisis in the empirical sciences <cit.> is to note that scientific analyses have many `researcher degrees of freedom' (RDOFs) <cit.> – free choices in the experiment design and data analysis that can be manipulated by a researcher to give themselves more chances to get a (real or spurious) `significant' result <cit.>. This is unavoidable: no science has a one-to-one mapping between theories and experiments (see for instance the labour involved in physical `phenomenology' – i.e. deriving quantitative predictions from fundamental theories <cit.>), nor between experiments and analyses. It does, however, mean that most scientific analyses have a backdoor: an unwary or dishonest researcher can create spurious results despite using valid methods on valid data and honestly reporting their final analysis. Each degree of freedom in the research process is an opportunity to intentionally or unintentionally introduce a QRP. ML researchers run experiments to compare the effectiveness of methods. An evaluation usually has a main method (usually the researchers' own new contribution) and a set of baselines. To publish usually requires that their method be statistically significantly better than the baselines. This gives an incentive for researchers to exploit RDOFs to make their method look better. In particular, they could exploit RDOFs in the evaluation procedure, such as the choice of datasets, data subsets, or in an LLM, shared aspects of the experimental configuration such as the prompt. Notably, it is essentially never acceptable to “optimize” any aspect of the evaluation procedure / experimental setting. In contrast, researchers also have considerable freedom in how they implement their method and the baselines (e.g. in the choice of a hyperparameter such as the learning rate), and it is okay to optimize these hyperparameters. In fact, it is necessary that researchers optimize the hyperparameters of the baselines and of their method a “similar amount”, although how to actually operationalise this notion is fraught in practice. All research has many researcher degrees of freedom (including good research). 
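To illustrate how a single innocuous-looking degree of freedom shifts results, here is a toy simulation (ours, with arbitrary numbers): two methods are identical by construction, but one reports the best of ten random seeds on a finite test set while the other reports a single run.

```python
import numpy as np

rng = np.random.default_rng(0)
n_test, true_acc, n_seeds, n_trials = 1000, 0.80, 10, 2000

advantage = []
for _ in range(n_trials):
    ours = rng.binomial(n_test, true_acc, size=n_seeds).max() / n_test  # report the best of 10 seeds
    baseline = rng.binomial(n_test, true_acc) / n_test                  # report a single run
    advantage.append(ours - baseline)

print(f"mean spurious 'improvement': {np.mean(advantage):.3f}")
# Both 'methods' are identical by construction; the roughly two-point gap comes
# entirely from the extra freedom of choosing which run to report.
```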
LLM development is chaotic and adaptive, involving 1) loops over abandoned training runs (for instance to fix critical bugs in the training code); 2) within-run training restarts (to handle loss spikes or skip problematic data, change the learning rate, or replace faulty devices) <cit.>; and 3) design restarts upon being disappointed by poor final evaluation scores. Hundreds of decisions must be made along the way; in this paper we focus on those decisions liable to produce misleading results (Table <ref>) or harm reproducibility (Table <ref>). The concept of RDOFs has not yet made it into the ML literature under that name (with the exception of <cit.> and <cit.>). Naively, there would seem to be less room for RDOFs in ML, since the point of the field is to forego human modelling. But, as noted above, many decisions remain up to human researchers (notably in the mostly-fully-manual ML evaluation process). We can thus frame a scientific study as a combination of design choices, some of which are underdetermined or arbitrary, and some of which lead to different final conclusions. Accounting for RDOFs results in a large tree of possible analyses, with each analytic decision as a node and each leaf as an alternate result. So-called multiverse analysis runs the entire tree of analyses (often thousands of leaf-analyses) to help researchers understand the extent to which their results are sensitive to arbitrary design or analysis choices <cit.>. Our work can be viewed as a cross-field remake of <cit.>. <cit.> apply multiverse analysis to ML; to overcome the far larger search space and computational complexity of ML models (which make running thousands of training runs prohibitively expensive), they model the `multiverse' using a Gaussian process surrogate for each set-up, that is, for each combination of `used' degrees of freedom. §.§ Irreproducible research practices In his study of ML papers that proposed at least one new algorithm or implemented one for the first time, <cit.> failed to reproduce 36.5% of 255 papers included. Previous work reported an even higher rate that could not be reproduced: 74% <cit.>. <cit.> provide a comprehensive overview of availability, documentation and access for more than forty LLMs. More recent discussions include <cit.> and <cit.>. § THE FUNDAMENTAL TRICKS Most ways to mislead others, or delude yourself, in ML evaluation fall into one of the following categories: * Contamination: Any way in which the test set influences the training process – e.g. by training on data semantically identical to test examples, implicitly tailoring the model design to the test set, by reusing hyperparameters from models tested on this test set (<ref>), or just straight-up training on the test set (<ref>). * Cherrypicking: Choosing among runs and configurations to make your system look good or relatively good. This includes picking weak competitors (<ref>), undertuning a strong competitor (<ref>, and optimising your inference parameters or prompt more <ref>). * Misreporting: any error or misleading presentation of the model's specification or evaluation results <cit.>. These include reporting only point estimates of performance (<ref>), under-reporting the model's size (<ref>) or attributing success to modules or layers which are actually inert (and failing to do the ablation studies which would undermine this claim) (<ref>), or claiming that general task performance has been attained based on tests without clear external validity (<ref>). 
Additionally, we consider a less fundamental category: Amplifiers, which have indirect effects on results by enabling other QRPs. These QRPs are summarised in Table <ref> which includes a column for the `stage' of model development: i.e. the path leading from system design, data collection, training, evaluation, to reporting (i.e. writing up the results). This basic process is shown in Figure <ref>. These QRPs are described in depth in the remainder of this section. Define a baseline as any method besides your own which you use as a comparator. Define a model configuration as an assignment of values to all hyperparameters in your method. Define a shared experimental setting as an assignment of values to all variables that the evaluation should hold constant across methods: e.g. the task, the benchmark, the few-shot examples, as well as nebulous quantities like `amount of optimisation effort used to obtain the final user prompt'. §.§ Contamination Contamination (AKA leakage) is any influence of test-set information on model development, from subtle influence (e.g. reusing hyperparameters, see <ref>) to blatant influence (training on the test set). Contamination can totally invalidate a reported benchmark score, if the score is construed (as usual) as evidence about the model's general capability on the task. The use of poorly-filtered web-scale training corpuses has led to many cases of plausibly accidental contamination <cit.>. We can see this with new versions of existing benchmarks, which generally exhibit substantial decreases in performance <cit.>. We thus need to account for the very high prior probability of contamination in web corpuses and take suitable countermeasures (for instance looking for canary strings <cit.> or exact label matches), such that readers can trust the resulting evaluation results <cit.>. §.§.§ Training contamination: using test set information at train-time With training data corpuses scaling up to terabytes and beyond <cit.>, it has become difficult to know what exactly models are trained on, since full manual inspection is ruled out due to the vast size of modern datasets <cit.>. When training GPT-3, researchers discovered that the training set was indeed contaminated, despite efforts to scrub the training data of test data occurrences <cit.>. No modern LLMs credibly claim to have removed all contamination, and indeed contamination has been explicitly reported in GPT-4 <cit.>, GLaM <cit.>, Llama 2 <cit.> and Gemini <cit.>. Note that such contamination can occur in both the pre- and post-training phases; we consider these two phases together, both because the issues are similar as post-training datasets become larger, and because the issues are are difficult to distinguish over API. Given that it is very difficult to avoid contamination in large LLM pretraining datasets, we need to understand the severity of the impact on downstream benchmark performance. Early work argued that GPT-3 is relatively insensitive to contamination, as the large amount of data involved implies that little over-fitting or memorisation should occur <cit.>. On the other hand, as models get larger, they memorise more <cit.>, and as datasets get larger, we might expect that the chances of accidentally ingesting test data increase. Below, we discuss three key strategies for understanding the impact of contamination; all indicate that data contamination causes large changes in benchmark performance. 
The first approach to measuring the effect of contamination is to compare the effect of a careful data filtering strategy that eliminates contamination vs a less careful strategy that allows some contamination. As an example, <cit.> showed that Gemini 1.0 Ultra increased its performance on HumanEval from 74.4% to 89.0% if exposed to the test set even once in pre-training. <cit.> recently demonstrated that popular open source training corpuses `The Pile' and `The Stack' are contaminated with HumanEval and MBPP. Removing the HumanEval contaminated subset from training led to a 57.1% drop in accuracy for Pythia-12B trained on `The Pile', and a 25.9% drop for StarCoderBase-15.5B trained on `The Stack'. The second approach to measuring the effect of contamination is to design a new test set that mimics the original test set. This new test set cannot have leaked, because it was designed after the model pre/post-training occurred. As an example, <cit.> created the GSM1K benchmark as a new test set that was of similar difficulty to the GSM8K benchmark. We observe that models launched with reports that do not mention contamination, including Mistral, Mixtral and Phi-3 <cit.>, all drop around 10% in score. GPT-4, Gemini 1.5 and Llama 3 deviate by less than 3%. As with all evaluations, these results were, however, sensitive to the choice of prompt. Another potential example of training data contamination is presented by GPT-4's notable drop in performance on Codeforces (competitive programming) problems. On `easy' Codeforces problems from before the September 2021 knowledge cutoff date, GPT-4 reported 100% performance; on problems added to the site after the cutoff it recorded 0% performance <cit.>, suggesting that the answers were being memorised. <cit.> find the same behaviour for Project Euler mathematical programming. A final strategy to detect contamination is to make changes to a test set that leave the problem semantically unchanged but which cause drops in performance. For instance, <cit.> noticed a decrease in MMLU <cit.> accuracy when the order of answer choices is shuffled, which may be explained by contamination and memorisation. Subtler ways to leak test information include: using the whole dataset (before splitting into train and test sets) to calculate summary statistics, normalisation constants, or inputs to feature selection. Arguably, also, the presence of similar instances in both the training set and test set is effectively contamination; see <ref>. §.§.§ Prompt contamination: using test-set information at runtime While the most contamination is likely to happen at pre/post training, it is now common to use multiple relevant examples in the prompt. The number and choice of examples can have a large influence on performance <cit.>. We define Prompt contamination as either drawing few-shot examples from the benchmark data, or just directly including the answer in the prompt. This can occur accidentally if the few-shot examples are drawn in an automated manner from contaminated training data <cit.>. More often, however, few-shot examples are engineered by hand, and this hand-engineering can implicitly involve contamination (especially through tuning the prompt on the test set Sec. <ref>. §.§.§ RAG contamination: using test-set information through retrieval <cit.> introduced a method of connecting and training a LLM alongside a dense vector index of a chosen database. They report that their models generate more specific, diverse and factual answers <cit.>. 
With an additional database comes an additional potential source of contamination. We call such a contaminated reference database RAG contamination (where RAG refers to Retrieval-augmented Generation). This is of particular importance when comparing LLMs and LLM-based agents, since an agent could, for instance, use a lookup table to achieve perfect accuracy on a benchmark <cit.>. §.§.§ Dirty paraphrases: creating data equivalent to the test-set Direct contamination with actual test examples is far from the only way that contamination can occur. We define dirty paraphrasing as the practice of altering test data into a semantically equivalent form before training on it, with the effect of evading string-checking or exact extraction attacks <cit.>. <cit.> showed in principle that rephrased samples taken from benchmarks can evade simple n-gram decontamination methods while boosting model scores, and that this rephrasing can be achieved at scale by existing LLMs. <cit.> show possible examples of dirty paraphrasing in practice by finding a high similarity between most HumanEval questions and the `Starcoder V2 OSS Instruct' dataset <cit.> and (in particular) the `evol Instruct' <cit.> synthetic code training dataset. One approach to mitigating rephrasing is to use LLMs to detect similar content which might not be detected by n-gram methods. <cit.> provide some evidence of this working better than other methods. However, the extremely high cost of this approach to filtering is likely to render it impractical for large pretraining datasets. Moreover, <cit.> raises problems with this approach in principle: “In an ideal world, we would have a way to automatically detect when two sentences express the same content but in different words. Unfortunately, our best tools for determining whether two sentences are semantically equivalent are the very models we are seeking to evaluate. This problem drives many of the approaches to LM benchmarking, and many problems in LM evaluation stem from there not being any silver bullets for solving the Key Problem.” §.§.§ Contamination laundering: using test-set information via synthetic data In contamination laundering, a student model is trained with a signal from a teacher model which was itself trained on test data (so-called knowledge distillation, e.g. <cit.>). The student model is thus contaminated. This is closely related to dirty paraphrasing, but with one key difference that makes contamination laundering much easier to do accidentally. In dirty paraphrasing, the downstream researcher explicitly asks an LLM (or a human) to rephrase the test data; it is you (the downstream researcher) who puts in the contamination. In contrast, in contamination laundering, you do not put in any contamination yourself; you just ask e.g. GPT-4 to generate synthetic data by following a carefully crafted prompt. The problem is that GPT-4 is itself contaminated. Indeed, GPT-4 has a reported HumanEval <cit.> contamination of 25% <cit.>. <cit.> claim that 12.8% of the content of synthetic datasets (such as CodeAlpaca <cit.>) generated using GPT-3.5 is rephrased from HumanEval <cit.>, pointing to contamination. Consider also Phi-1's <cit.> high performance on HumanEval, matching models more than ten times its size. The developers curated a synthetic dataset using GPT-3.5.
It is possible that their high performance on HumanEval arises through contamination in GPT-3.5, despite using decontamination techniques such as n-gram overlap and similarity analyses.

§.§.§ Thieved test: obtaining a private test set

Data contamination can only happen if the test set is available to train on. This motivates a recent trend towards private test sets, with results e.g. served as an aggregate score over an API <cit.>. This defence can be subverted in several ways.

* Collusion with benchmark makers: The easiest option is just to collude with a benchmark creator.
* Scrape the labels: If the dataset is scraped from the internet, then it may be possible to re-create part of the dataset scraping pipeline to recover the private test set, as described in <cit.>.
* Reverse-engineered labels: In the case where the test inputs and labels are private and a metric is received upon submission (and multiple are allowed), one could reverse engineer them by making many different submissions <cit.>. This becomes easier to do when the test labels are private but test inputs are public, in which case one could simply hand-annotate the test inputs. The easiest way to get around this issue is to limit the number of submissions, although that has proven not to be enough <cit.>.
* Relabelling the test inputs: If the test inputs are available, then you could hand-label them. Some test sets involve (low) thousands of labels. This is within the ability of one researcher to create; alternatively, paid data labelling services could be employed. Benchmarks are often created with the help of large volunteer collaborations (e.g. <cit.>); participating in such efforts could easily grant an attacker access to test labels, or test examples without labels.

§.§.§ User contamination: post-training on users entering test data

If user data is used for fine-tuning, and if (naturally enough) users sometimes prompt the model with test data, then data contamination can result even if the test data was totally absent from the pretraining corpus. Commercial model developers are known to train on user data, for instance ChatGPT <cit.> (but plausibly not the OpenAI API). Users leaking test data causes two problems for evaluators and users:

* The model behaviour changes over time, possibly without a public version number changing (see <ref>).
* Normal contamination: the model is being trained on test data, spuriously boosting future evaluation scores.

There are efforts ongoing to detect user leaks, for instance <cit.>.

§.§.§ Over-hyping: tuning hyperparameters on the test set

Another common way to leak information is to tune on the test set: training a model, evaluating it on the test set, and then doing further hyperparameter search or testing again with a different evaluation metric (e.g. accuracy vs F1 score; e.g. positive-to-negative sentiment steering instead of the reverse steering <cit.>). This tuning can iterate many times. The resulting models are in some sense being implicitly fitted to the test set (since we use the test score as a signal to build the next model); also, multiple comparisons are implicitly being made and very likely not corrected for. To distinguish this from classic contamination (training on test data), <cit.> call this `over-hyping' and note that it biases results even if every iteration of the cycle uses cross-validation properly.
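To see why over-hyping inflates reported scores even when the model never trains on the test data, consider the minimal simulation below. It is purely illustrative and not taken from any cited work: fifty hyperparameter configurations with identical true accuracy are scored on the same finite test set, and the best observed score is reported.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: every hyperparameter configuration has the same true accuracy,
# so any differences observed on a finite test set are pure sampling noise.
true_accuracy = 0.70
test_set_size = 500
n_configs = 50        # tuning iterations that each peek at the test score
n_repeats = 1_000     # repeat the whole tuning loop to estimate the bias

best_scores = []
for _ in range(n_repeats):
    # Observed test accuracy of each configuration (binomial noise only).
    observed = rng.binomial(test_set_size, true_accuracy, size=n_configs) / test_set_size
    # The over-hyped protocol reports the best configuration's test score.
    best_scores.append(observed.max())

print(f"true accuracy:                  {true_accuracy:.3f}")
print(f"mean reported best-of-{n_configs} score: {np.mean(best_scores):.3f}")
# The reported number is biased upward by a few points even though no
# configuration is genuinely better than any other.
```

The inflation grows with the number of tuning iterations and shrinks with test-set size, which is why the practice is most damaging on small benchmarks.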
§.§.§ Meta-contamination: training on test at the field-level

Tuning hyperparameters on the test set can also – in effect – happen across multiple papers or teams. Specifically, ML is accretive: successful papers have their architectures and hyperparameters reused and their code forked. This is not problematic in itself, but when a single test set is, simultaneously, reused as the main form of evaluation for the descendant lineage of papers (as ImageNet <cit.> was for computer vision), this multi-team process is effectively training on the test set <cit.>: implicitly, we are designing successors based on a summary statistic of the test set. For lodestar benchmarks like ImageNet, this can involve tens of thousands of papers, each representing hundreds or thousands of hyperparameter settings. Initial empirical tests (testing modern models on new but near-identically distributed test sets) found that the novel-test-set error (i.e. the effect of meta-contamination) was 4-15% higher than the original-test-set error <cit.>. This turns out to be pessimistic: theoretical work by <cit.> implies that the meta-overfitting error on ImageNet must be less than 7-10%. Simple checks to verify dataset contamination could be used to decide on the right parent LLM-benchmark pair. Different methods for lightweight zero-shot information and dataset extraction have been proposed <cit.>. Achieving high benchmark scores in this way is almost never malicious. However, with extensively used benchmarks such as ImageNet, meta-contamination is an ever-present issue, and may raise questions about the generalisation of methods and architectures tuned primarily on such datasets.

§.§.§ Split twins: too-similar points in both train and test

To be a good test of generalisation, the test set should be differently distributed from the training set (and if the overall difficulty of tasks differs between sets, the test set should be harder). We call a pair of nearly-identical points in both train and test sets split twins. One example is <cit.> finding that for 3.3% of CIFAR-10 and 10% of CIFAR-100 <cit.> test images there is a near-duplicate image in the training set (and that removing duplicates causes a significant drop in accuracy). Molecular ML benchmarks also suffer from overly similar train-test data in the context of molecular structure. <cit.> discusses modern benchmarks such as MoleculeNet's ESOL and HIV <cit.>, and MolData <cit.>, in which more than 50% of the test molecules have a neighbour molecule in the train set, where a neighbour molecule is one sharing similar structure under some criterion (Tanimoto similarity <cit.>). For example, consider predicting harvests from data on vineyards. Vineyards are often in close proximity to one another and grow the same type of vine on the same soil with similar treatment and weather. So if we just randomly split vineyards into training and test sets, while we will get technically different vineyards in each set, practically we will have very similar vineyards in both sets. One can also hack the split into train and test by iterating over random splits until one finds a particularly easy version of the test set (see <ref> for variations of this). Modern practice is to provide training and test sets already split up, which fixes this degree of freedom. (This does, however, require the benchmark creator to consider the resulting train/test distributions before release, since it may be difficult to update this decision once the initial split is in use, as baselines must be backwards-compatible.)
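Before freezing a split, a benchmark creator can run a cheap scan for split twins. The sketch below is our own illustration rather than a method from the cited papers: it flags test examples whose word 5-gram Jaccard similarity to any training example exceeds an (arbitrary) threshold; for molecules one would substitute fingerprints and Tanimoto similarity, and for images a perceptual hash.

```python
def ngrams(text: str, n: int = 5) -> set:
    """Set of word n-grams from a lowercased text."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a: set, b: set) -> float:
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def find_split_twins(train_texts, test_texts, n=5, threshold=0.5):
    """Return (test_index, train_index, similarity) for suspiciously similar pairs.

    Brute-force O(|train| * |test|); fine as a sanity check on small benchmarks,
    while large corpora would need approximate hashing (e.g. MinHash) instead.
    """
    train_grams = [ngrams(t, n) for t in train_texts]
    twins = []
    for i, test_text in enumerate(test_texts):
        test_grams = ngrams(test_text, n)
        for j, grams in enumerate(train_grams):
            similarity = jaccard(test_grams, grams)
            if similarity >= threshold:
                twins.append((i, j, similarity))
    return twins

# Usage: every flagged pair should be inspected and one side dropped or replaced.
# twins = find_split_twins(train_texts, test_texts)
```
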
When a split is not provided by the benchmark creator (for instance for papers which collect their own novel dataset), another QRP opens up: train/test split hacking could be used to silently nerf a baseline, by using different splits for competitor methods which result in a harder test set. Dirty paraphrasing is a related issue in which examples semantically identical to test examples are actively added to the training set <cit.> (see <ref>). §.§ Cherrypicking Cherrypicking results from running multiple tests and reporting the best. A subtler form, with deniability, is to report the best in the main table and only discuss the variations in the Appendix. This is largely enabled by, and becomes hard to detect due to the poor or non-existent documentation of `contextual factors' (seeds, parameters, prompts), as noted by <cit.>. Most cherrypicking opportunities present themselves during evaluation. To better understand the relevant problems it will be useful to visualise an idealised evaluation practice using a causal graph <cit.>. We use two idiosyncratic terms for two groups of QRPs: those involving hacking or nerfing. * Nerfing is intervening to weaken baselines (or unfairly strengthen your own model), for instance by tuning their hyperparameters less (compared to the original study or to the amount of optimisation on your method). Usually baselines are nerfed by looking at their performance on the test-set. * Hacking refers to selecting shared experimental settings post-hoc – after obtaining your results. A hacker can then report only those most favourable to them (for instance those baselines with lower scores than their method, or those tests among many with p-values below 5%); they may also fail to correct for these multiple comparisons. §.§.§ Baseline nerfing & Baseline hacking: creating or picking weak comparators Baseline hacking involves seeking out and reporting the results of weaker/older comparator methods. Baseline nerfing instead involves picking competitive methods and then under-tuning them relative to the tuning performed for your system <cit.>. This is particularly easy to do unintentionally (for example if you are reusing reported results and their paper fails to report the total compute they used). A classic example of baseline hacking can be seen in the release of Claude 3 <cit.>, where GPT-4 is used as a baseline rather than the more powerful (and available) GPT-4T model (<cit.> mention higher scores for GPT-4T <cit.> in a footnote). A rare example of a researcher admitting their failure to tune the baselines is <cit.>. When comparing an ML method to a baseline one can ensure fairness by fixing cost indicators or efficiency metrics: for instance, parameters, or computation time <cit.>. <cit.> demonstrate that when evaluating training algorithms (<cit.> and <cit.> among others) at fixed computation budgets several methods do not improve over their baselines, indicating that the baselines in the original publications of the evaluated methods may have been under-tuned. <cit.> also discuss current challenges in model comparisons and find that with sufficient hyperparameter optimisation, most Generative Adversarial Networks (GAN) <cit.> algorithms score similarly during evaluation. A particularly subtle form of baseline hacking is comparing your `post-trained' model to someone else's base model. Pretraining produces a `base' model which is then `instruction-tuned' on labelled examples of (request, response) pairs; this greatly improves its ability to complete e.g. 
open-ended tasks as found in some benchmarks <cit.>. (A further step, relevant for many benchmarks like detoxification, is the RLHF or DPO subsequently performed on the instruction-tuned model.) One way to undermine a competitor model is thus to compare your instruction-tuned model to its base model (or, similarly, to compare your RLHFed model to their instruction-tuned model). Human baselines (e.g. those crowdsourced using Amazon Mechanical Turk) can also be nerfed. Human performance of course varies enormously by participant, training, and incentive level, and the social sciences have for some years been tracking a `data quality crisis' in crowdsourced studies <cit.>. Moreover, crowdsourced data is subject to a substantial risk of being generated by bots instead of humans <cit.> – more than half of responses may be bot-generated <cit.>. Researchers should be lauded for reimplementing algorithms and retesting baselines in subsequent papers when possible. If used on the same evaluation code, reproducing the results of others removes a possible confounder. However, reimplementing baselines opens the possibility of nerfing. And in any case, full retesting is becoming ever more unrealistic for smaller labs and universities due to their limited compute relative to large corporate labs. Beyond just being unable to retrain their own model as specified, small labs might not even have the funds to fully run the benchmarks, and since some of the models are only accessible through APIs there is no guarantee that the model they are testing is even the same model as in the original report. §.§.§ Runtime nerfing: weakening comparators at runtime By convention, developers seek an extra 1% performance on any widely-recognised benchmark to claim SOTA. One way to do this is to nerf your baselines by not optimising their output hyperparameters (temperature, max tokens, top-p, etc.) or not using comparable prompting or soft-prompting techniques (majority voting, chain of thought, etc.). We call this runtime nerfing. This is a known problem across the community <cit.>. For example, the Gemini launch results <cit.> openly compared Gemini's majority-voted k=32 score to GPT-4-Turbo's 5-shot (CoT, Chain of Thought) score for the GSM8K benchmark <cit.>. An example of benchmark reporting practices that avoid hacking can be seen in Table 9 of <cit.>, which reports GSM8K and MATH <cit.> scores which were obtained using the same prompting strategy for all models. §.§.§ Runtime hacking: post-hoc picking the best inference settings To recap, the `runtime' of an LLM is made up of all parameters used in sampling (`running inference') from it. The most prominent runtime parameter is clearly the user prompt. The sheer power of prompting is demonstrated in <cit.>, which showed that GPT-4 can achieve SOTA performance on the MultiMedQA benchmark suite <cit.> (a 4% absolute improvement) by just tuning the prompt without any additional finetuning of weights. To obtain fair comparisons (that e.g. avoid nerfing) we thus might decide to use the same prompt and runtime hyperparameters (e.g. temperature, top-k, top-p, response length, repetition penalties and hand-crafted stopping sequences) for all models compared. But reusing an identical prompt for each model is, counterintuitively, not necessarily fair: it is well known that the effectiveness of prompts is highly model-specific <cit.> and we might expect the same to be true of other inference parameters in the runtime. 
However, acknowledging this and allowing the prompts to vary gives researchers a degree of freedom that they could use to hack the shared experimental setting while retaining the appearance of fair comparison. As an example, the Gemini team <cit.> reported SOTA on MMLU (90.04%) using `uncertainty-routed chain-of-thought' (while still using majority-voted answers with k=32 samples). However, in Appendix 10.2, they show that GPT-4 outperforms Gemini for key settings: the `top-1' (k=1) setting and the `CoT@32' (majority vote answers with k=32) settings, where GPT-4 obtained 87.29% to Gemini's 84.99%. Only when `uncertainty-routed chain-of-thought' (URCOT) is applied to both does Gemini Ultra overtake GPT-4. The issue here is not a spurious number – it is likely that some configuration of Gemini can indeed reach 90%, and URCOT (as configured) did not improve GPT-4 – but instead the use of a cherrypicked, unrepresentative result to argue for the superiority of Gemini over GPT-4. §.§.§ Benchmark hacking: picking easy tasks to test on To be sure of hitting the target, shoot first, and call whatever you hit the target. – attributed to Ashleigh Brilliant It is possible to make a model look SOTA by abusing the choice of benchmarks, which we dub benchmark hacking.[In the original humorous paper <cit.> it was called `Data Set Selection'.] This can either be classic “hacking”, reporting only benchmarks on which your method performs better than baselines, perhaps by looking at special cases or subtasks. But it also includes using an outdated or easy benchmark. This remains a problem in general but, happily, LLM evaluation has settled on a roughly-fixed set of hard benchmarks. The current basket includes: MMLU <cit.>, GSM8K <cit.>, MATH <cit.>, GPQA <cit.> and HumanEval <cit.>. It is thus possible to notice when a new system omits scores on one or more of these linchpin benchmarks. However, it is noticeable that LLM developers as a whole avoid reporting results on certain very hard benchmarks like the Abstraction and Reasoning Corpus (ARC) <cit.>[Not to be mistaken for the much easier AI2 Reasoning Challenge <cit.>.]. This can be seen as a form of benchmark hacking which does not affect the relative ranking of models, but instead directs attention away from the weaknesses of the general method. Benchmark hacking is particularly damaging when an easy task becomes a consensus member of the standard evaluation set used in frontier model releases. It is often combined with reification (<ref>): for instance, the HellaSwag benchmark (which tests a specific form of pragmatics of sequences of events represented in brief text fragments, with extreme label noise <cit.>) is often rounded off to “a test of (general) commonsense reasoning” <cit.>. More subjective but less artificial and gameable tests are being explored. The LMSYS Arena <cit.> (which crowdsources binary human preferences between models on arbitrary prompts) solves many problems – though we note that it too is susceptible to hacking (for instance by improving the style of a model without improving accuracy or reasoning, or even by paying raters to score your model preferentially using tell-tale tokens). §.§.§ Subset hacking: picking the easy part of a hard task A more subtle degree of freedom than simply the choice of training dataset arises because model evaluation has become very costly due to the number of benchmarks available, dataset sizes, and the associated inference costs (see <cit.> for a detailed discussion). 
A common solution for smaller labs is to use a subset of the benchmark to estimate the full score. But the choice of subset is a degree of freedom that can give rise to hacking. In addition to just choosing a subset on which your method performs better, subsetting a test set (for example, the MATH dataset of <cit.>) allows for several more subtle forms of hacking, especially if comparing against performances measured by other papers on the full dataset:

* Re-generating subsets until an easier (by level, or subject) set of questions is found;
* Stratifying problems by difficulty, then sampling more from the easier levels (easy for benchmarks like MATH where the difficulties are given <cit.>);
* Testing on the full benchmark with high k (that is, e.g. 50 samples per test data entry), then constructing a subset with a representative difficulty distribution using only those hard questions that your model can solve.

<cit.> mentions several papers which subsetted a benchmark during evaluation, which may not necessarily constitute subset hacking. For example, <cit.> did not report removing three problems from HumanEval, and <cit.> evaluated on a subset excluding 8 problems from WebArena. Reporting all seeds used for dataset generation could be one way to increase transparency, yet does not fully solve the problem. Preregistering does not necessarily solve the problem as we do not know if the resulting subset distributions will be representative. The most transparent solution is probably for the benchmark developer to provide `official' subsets, when possible.

§.§.§ Harness hacking: choosing evaluation details after test

We usually default to assuming that results on a given benchmark are comparable, but this is not always true. It is possible to create or choose evaluation code (`harness') which favours your model <cit.>. This is a consequence of overlooking the implementation details of benchmarks, and of evaluation loops in particular. A clear example of this occurred following the release of Falcon-40b, when the OpenLLM leaderboard evaluated Llama-65b's MMLU score as significantly lower than that published in the LLaMA paper <cit.>, and lower than models such as Falcon-40b. The OpenLLM leaderboard used the EleutherAI LM Evaluation Harness. This contrasts with two other implementations, namely the original harness from the MMLU paper <cit.> and the HELM <cit.> implementation, which scored Llama-65b nearly 30% higher than its Eleuther result <cit.>. The differences between the three prompts in the three harnesses were extremely subtle:

* First sentence, instruction, and topic: HELM adds an extra space, and the Eleuther LM Harness does not include the topic line.
* Question: HELM and the LM Harness add a “Question:” prefix.
* Choices: the Eleuther LM Harness prepends the answer choices with the keyword “Choices”.

In this instance, even using well-designed benchmarks and one open evaluation harness, and not naively using reported values (which could have masked uncontrolled variations in the evaluation) but instead re-running everything, did not prevent serious error. Where multiple harnesses exist, one could instead iterate over all of them and report the best performance among them for each model tested. (Harness hacking differs from prompt hacking in a subtle way: harness hacking in the above example involves differences in the formatting of the benchmark questions as part of the user prompt.) Another type of harness hacking is metric hacking, the post-hoc selection of a metric to score the models.
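As an entirely synthetic illustration of how much room the choice of metric leaves (the two `models' and their predictions below are made up), consider a pair of classifiers whose ranking flips between top-1 and top-2 accuracy:

```python
def top_k_accuracy(ranked_predictions, labels, k):
    """Fraction of examples whose label appears in the k highest-ranked classes."""
    hits = sum(label in ranked[:k] for ranked, label in zip(ranked_predictions, labels))
    return hits / len(labels)

labels = [0, 1, 2, 1, 0, 2]

# Each entry ranks the three classes from most to least preferred for one example.
model_a = [[0, 2, 1], [1, 0, 2], [2, 1, 0], [0, 2, 1], [1, 2, 0], [2, 0, 1]]
model_b = [[0, 1, 2], [2, 1, 0], [2, 0, 1], [2, 1, 0], [0, 2, 1], [1, 2, 0]]

for name, preds in [("model A", model_a), ("model B", model_b)]:
    print(name,
          "top-1:", round(top_k_accuracy(preds, labels, 1), 2),
          "top-2:", round(top_k_accuracy(preds, labels, 2), 2))

# Model A wins under top-1 accuracy (0.67 vs 0.5), while model B wins under
# top-2 accuracy (0.67 vs 1.0). Choosing between the two metrics only after
# seeing these numbers is metric hacking.
```
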
This can generally be prevented and mitigated by either reporting performance under multiple metrics <cit.> or using a standard metric for all models. For example, <cit.> explains that the evaluation method proposed in <cit.> had access to the test label during a decision process involving top-2 kNN accuracy (i.e. whether either of the 2 largest-logit output tokens was correct), giving it an unfair advantage over the baseline. When running the kNN classifier with k=1, they find that the proposed model does not compare as favourably to the baselines, with a 5% drop in performance in most of the reported benchmarks.

§.§.§ Golden seed: manufacturing luck

An additional degree of freedom is the random seed, used in pseudo-random number generators, which often has a considerable impact on performance <cit.>. This allows researchers to choose a “golden” seed <cit.> to make their method look better. Using a golden seed might be nerfing (if you “optimize” the seed for your method but not the baselines) or hacking (if you use a single fixed seed for all methods, but choose that seed to make your method look better). While golden seed issues are not likely to be relevant for pretraining of SOTA LLMs (if only because the training runs are too expensive to run several times), post-training or finetuning procedures remain highly dependent on the choice of random seed. For instance, <cit.> demonstrates that even iterating over output layer training alone can provide a significant improvement (in his case, on ImageNet, he achieved a spurious gain of 1.82% accuracy from the best of 10,000 training seeds). <cit.> demonstrate the high variance in results for different fine-tuning seeds for four datasets from the General Language Understanding Evaluation benchmark (GLUE) <cit.>. They find increasing expected validation performance <cit.> for an increasing number of random seed assignments. This is also observed in <cit.>, where varying scores are obtained for 20 random restarts of a Bidirectional Encoder Representations from Transformers (BERT) model <cit.>, for the same dataset.

§.§ Misreporting

As already stated, little of the above would be unsalvageable if researchers honestly reported all their work in sufficient and correct detail (with honest summaries in the main text). Failures of reproducibility are partially failures of reporting (if we include missing code as failed reporting). We include Table <ref> to specifically look at these types of failures. One possible next stage for publications that employed QRPs is a retraction, which can be used as one indicator of scientific reporting integrity (see <cit.> for a discussion).

§.§.§ Superfluous cog: adding redundant model modules to claim novelty

To be able to publish results, a researcher has to be able to claim novelty, for instance the novelty of being the first to record some SOTA benchmark score <cit.>. One simple way to achieve this is to modify a SOTA model without significantly improving it <cit.>. A superfluous cog is an extra ML module added to a system which has no effect on model performance, but which is nonetheless publishable since it may match SOTA. This need not be intentional if, as is common, ablation studies are omitted under time pressure and the lack of improvement is thus not noticed. This may be combined with unintentional baseline nerfing which reduces the original system's apparent performance (<ref>).
Examples include: Core Vector Machines <cit.> not beating simpler Support Vector Machines <cit.>; modern recurrent networks like Recurrent Highway Networks (2016) and Neural Architecture Search (2016) not beating Long Short-term Memory networks on the Penn Treebank dataset <cit.>; and two large replication studies of recommender systems <cit.> which both found that simple baselines perform on par or even better than more complicated alternatives when correctly implemented and properly optimised. In all these cases the spurious improvement was caused by insufficient optimisation of the baseline methods.[We could view the Transformer architecture as a grossly successful ablation of previous sequence-to-sequence models which used sequence-aligned recurrences and convolutions as well as attention <cit.>. ] When designing new architectures the researcher must ask themselves the following questions <cit.>. * Are all parts of the system essential? Have I performed ablation? * Are the comparison baselines used suitable for the experiment and expectations? * Have I sufficiently optimised the hyperparameters of all models, including the baseline model? * Can I provide a list of improvements/modifications that showcases the novelty of the proposed architecture over related and past works? * Have any necessary ablations been omitted? This includes ablating every change from the baseline in isolation and in combination. * Post-experiment questions: Can an exhaustive list of any restricted tools (e.g. those locked behind an API) used be provided? Is the implementation modular where possible to facilitate modification? Have I reported all of the above? In the causal graph presented in Figure <ref>, the superfluous cog is due to problems in design and evaluation. In the design step the researcher has not sufficiently justified, either empirically or theoretically, the choice of model. More importantly, in the evaluation step the researcher has not sufficiently compared the model to the baseline or ablated versions of itself. We go into depth on baselines in Sections <ref> and <ref>. These extra modules and over-complicated architectures can also lead to increased costs. <cit.> present simple baseline agents which outperform many SOTA complex agent architectures on HumanEval, and argue the need to include cost as an important evaluation metric. In the context of this QRP, using a benchmark metric that takes into account both accuracy and cost could provide a fairer comparison against architectures with unnecessary layers of complexity. The extreme case of superfluous cog is the outright theft of a method. A case of this is the (now taken down) Llama3-V project stealing model structures and configuration files from the MiniCPM-Llama3-V 2.5 repository <cit.>. §.§.§ Whack-a-mole: ad-hoc post-training to fake generalisation A Twitter user shares a screenshot of a frontier LLM making an embarrassing error – failing to count the sides of a decagon <cit.>, say, or failing to count the parity of a short binary string <cit.>, or making up information about US presidents <cit.> – along with the prompt used; other users reproduce it; but, some days later, the prompt stops working. We can speculate that the model was fine-tuned to specifically fix the errant example and the weights used by the API silently updated (or perhaps the example was added to a list in a software wrapper around the model), without changing the public model version number. 
Call this process of patching a model in an ad-hoc manner when it outputs an error, whack-a-mole. It is understandable in the context of unlawful activity such as copyright infringement or hate speech <cit.> or security issues like personal info leaks or jailbreaks[Though patching specific jailbreaks as they arise does make it harder to evaluate the model's general alignment and robustness to adversaries, and so should count as whack-a-mole for that specific evaluation task.] <cit.>. However, nothing stops the developer from patching the model for other purposes, like faking generalisation. It also seems likely that whack-a-mole examples are included in the training data of future models – actively, or passively through web scraping – leading to a permanent distortion of evaluations <cit.>. <cit.> provide more examples and start a crowd-sourced collection of whacked examples. The practice is anecdotally attested in the community under names like `patched' and `unannounced updates' <cit.>, but has not been demonstrated definitively. An existence proof of a related practice comes in the tendency of `jailbreaks' (prompts which circumvent post-training safeguards) to stop working on commercial systems like ChatGPT much faster than the major model release cycle <cit.>. Since ChatGPT does not expose a model version number or report any of these patches <cit.>, we can regard all such examples as silent updates. We discuss this speculative QRP further in <ref>.

§.§.§ Benchmark decoration: pretraining on post-training data

As coined by <cit.>, benchmark decoration is the inclusion of labelled data (for instance instruction-answer pairs or benchmarks) in the pretraining dataset. Besides the usual contamination questions this raises, the primary effect is to produce misleading zero-shot performance (see <ref>) and to make comparisons to other `base' models unfair (see <ref>). <cit.> show the effect of benchmark decoration by comparing their base model to the `annealed' (decorated) version. The annealed version scored consistently higher across all tested benchmarks. <cit.> uses the `unconditioned' distribution (i.e. completions resulting from an empty prompt) of a language model to uncover anecdotal cases of decoration. He finds that the Llama3-8B <cit.> and Phi-3 <cit.> base models output math question-answer pairs and what appears to be part of a coding question; this hints at fine-tuning data being included in pre-training. An example of partially mitigating benchmark decoration is the Gemini team <cit.> identifying HellaSwag <cit.> training data in their finetuning steps and thus opting to test on different reasoning benchmarks as well.

§.§.§ p-hacking: manufacturing significance by testing many times

p-hacking is the opportunistic use of degrees of freedom in the research process to obtain statistically significant results <cit.>. In the context of selective reporting, this QRP refers to claiming SOTA after obtaining many p-values and not correcting the significance test for this multiplicity. To take a p-value at face value requires that just one hypothesis test was conducted (for instance, one test of the significance of the difference between our system's performance and that of one prespecified baseline) <cit.>. This does not always (or often) hold in ML, and, in our experience, explicit correction for multiple comparisons is rare <cit.>.
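To make the multiplicity problem concrete, the sketch below (our own illustration; the p-values are made up) applies the standard Holm–Bonferroni step-down correction to p-values from comparing a new model against five baselines:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Significance decisions controlling the family-wise error rate
    across all comparisons (Holm's step-down procedure)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    decisions = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            decisions[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return decisions

# Made-up p-values from comparing a new model against five baselines.
p_values = [0.012, 0.049, 0.003, 0.21, 0.04]
print(holm_bonferroni(p_values))   # [True, False, True, False, False]
# Naively, four of the five comparisons look "significant" at the 0.05 level;
# after correcting for multiplicity only the two strongest survive.
```
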
Other significance-test degrees of freedom which can be abused in ML evaluation include: the choice of test, whether it is used one- or two-tailed, and using a one-sample test against a point estimate of the baseline's performance (as opposed to a two-sample test against a probably-freshly-run set of baseline results).

§.§.§ Point scores: evaluation without uncertainty and robustness

When reporting evaluation scores, it does not suffice to report the performance of a single run <cit.>, owing to the nondeterminism of nonzero-temperature LLM sampling. Instead we should (at least) average over a set of runs, or, better, provide error bars on the performance estimate from a set of runs. We call the practice of reporting one run point scores. <cit.> and <cit.> show the problem, which is particularly severe for small datasets such as GPQA, where the difference between the highest and lowest scores over 10 runs can reach 10%.

§.§.§ Outright lies: outright lies

For completeness we should mention the possibility of complete fabrication of results, even though there is not much to analyse in this case (and it is beyond our definition of a QRP). Among the other advantages of reproducibility and open code sharing (e.g. allowing results to be easily built on, catching errors, and permitting deeper understanding of an experiment), it is the only countermeasure we have against outright lies. A case of fabrication which intersects with several other QRPs is the now withdrawn claim that GPT-4 achieves a perfect score on MIT exam questions <cit.>. The authors built their own benchmark, containing severe cases of Label Noise, Split Twins and Thieved Test, among others. The resulting claim was criticised <cit.> and consequently retracted by the authors <cit.>, although not all issues raised were addressed.

§.§.§ Overclaiming & underclaiming: conclusions which don't reflect results

Let overclaiming be drawing exaggerated qualitative conclusions from benchmark results <cit.>. For instance, consider the hypothetical: you obtain a 2% absolute improvement over a previous model, but only improve SOTA from (say) 26% to 28%, and you then claim that the task is `solved', without reference to e.g. human-level performance. The Galactica model <cit.> is a clear case: it was initially presented as a model that could help scientists with research tasks (especially literature review), but, as usual with LLMs, it hallucinated information quite freely <cit.>. The size of this mismatch led to a rare admission of exaggeration: the demo was retracted within three days and a statement was added to the project website highlighting its unsuitability for its originally stated purpose <cit.>. A laudable example of owning up to overclaiming is <cit.>, whose paper claimed erroneously that Ordinary Differential Equation (ODE) solvers automatically meet a given error tolerance. The converse, underclaiming <cit.>, is rarer but can be seen in some papers attempting to debunk LLM capabilities. An example is <cit.>, who find severe problems with ChatGPT's performance on coding tasks: “we leverage the state-of-the-art ChatGPT, a recent product, as the representative of LLMs for evaluation”. However, only in a footnote do they mention that all experiments used the less-powerful GPT-3.5 (despite GPT-4 being available). <cit.> highlights other examples: failing to provide context for claims on the performance of weaker models (e.g.
papers using BERT over superior models like RoBERTa <cit.>), or the use of adversarially collected test sets to make general claims without acknowledging the intentionally higher difficulty. §.§.§ Reification: mistaking the proxy for the thing itself Benchmarks give evidence for general performance claims – e.g. `my model scored high on MATH so it must be good at math'. But this evidential link often collapses into a direct claim - that a model's performance on MATH is its performance on general mathematical tasks, rather than an easily-defeasible estimate of the general capability. However, there are cases in which datasets are either too narrow or too fundamentally flawed to be used as a justification for such general statements, and this can lead to reification: overclaiming on external validity. <cit.> offers a related take on benchmarking, focusing on external validity (i.e. what a benchmark says about data beyond (external to) itself, whether performance on the benchmark allows us to infer performance on the task in general). As <cit.> puts it: [T]he best way to identify sketchy scientific claims is by their level of abstraction. If a new paper claims that giant sloths were more common in present-day Arizona than in present-day Nevada before the last ice age, I would not identify that as a sketchy claim; it is limited in time and place, and not particularly abstract. However, if a new paper claims that attractive people are more (or less) generous than ugly people, I would be highly suspicious. “Generosity” is a very abstract and nebulous quality, and I would want to know how it was measured—and I know I would not be satisfied with the answer (it will be a survey or an economic simulation game). The profound abstractions of everyday life—happiness, relationships, sleep, nutrition, motivation, pain, belonging—are the concepts most vulnerable to scientific abuse. <cit.> explains how papers using the Weather.gov corpus are likely only reverse-engineering a rule-based system <cit.>, and not learning to imitate the behaviour of human forecasters. We consider the previously mentioned paper `Exploring the MIT Mathematics and EECS Curriculum Using Large Language Models' <cit.> as an example of reification. §.§.§ Nonzero zero-shot: claiming ‘zero-shot’ while training on examples One important performance metric for a model is its performance in the zero-shot setting (i.e. prompting the model without giving examples of worked solutions of the task, as opposed to the few-shot setting). However, if training examples from the benchmark are present in the model's training corpus, this puts all evaluations of this model on this benchmark effectively (and nearly undetectably) into the few-shot setting. The prompt may be zero-shot, but the performance is altered due to the presence of benchmark data in the training corpus. <cit.> call this `task contamination'; we call it nonzero-shot. (Compared to memorising test data, nonzero-shot is a minor matter of interpretation: a slight downgrade to the model's apparent generalisation ability on the task.) A notable example: the PaLM model <cit.> is apparently capable of zero-shot translation between languages. <cit.> scanned the PaLM training corpus for 44 common languages, and found that 0.34% of examples contained paired translations of sentences. They note that they "were able to mine such pairs across all languages studied; therefore, none of these languages is truly zero-shot in the context of translation". 
In vision language models, popular class labels tend to appear in multiple training corpuses. As a result, most of the performance in an apparent zero-shot classification setting actually comes from the model recognising class labels <cit.>. Consequently, <cit.> suggests a defensive attitude towards claims of `zero-shot performance' given the difficulty of vetting a modern training corpus: “Just because you don’t know what’s in your training data, you cannot just call it zero-shot”.

§.§.§ Misarithmetic mean: aggregating wrong

Some model evaluations use normalised scores (for instance when comparing current machine performance to human performance = 100 <cit.>). But, as <cit.> note, using the arithmetic mean on normalised scores is strictly invalid: the result is sensitive to the reference method chosen and any resulting comparisons are not meaningful. Instead, the geometric mean may be appropriate. Normalised scores could also exacerbate baseline hacking, by obscuring the absolute performance numbers that might allow readers to notice a weak field <cit.>.

§.§.§ Parameter smuggling: games with reported model size

Models are often ranked within size classes – that is, we mostly compare models (like Gemma-7B <cit.>) to their counterpart seven-billion-parameter models (like Mistral-7B <cit.>). However, a degree of freedom lies in how to count the model parameters. Strictly speaking, there are four types of adaptive parameters in an LLM: the position embeddings, word embeddings, attention parameters, and feed-forward parameters. One typical way of counting is to look at just the Transformer parameters (i.e. the combined size of the attention and feed-forward mechanisms). There is then an incentive to under-report or round down model sizes, since models branded as smaller in parameter count are judged more leniently. (In effect, the smuggler invites the reader to do baseline hacking for them.) Call the practice of under-reporting your model's size parameter smuggling. A clear example of this is Gemma-7B, whose model size is actually 8.54B parameters <cit.>. A distinct form of parameter smuggling occurs at model design time instead of reporting time: substituting larger embedding matrices in place of Transformer parameters, in order to hit a threshold Transformer size like 7B or 8B. For instance, Gemma-7B has 0.79B embedding parameters <cit.> and Qwen2-7B has 1.1B <cit.> – more than 9% of total parameters. These are unusually large embedding matrices: for reference, GPT-3-175B has approximately 0.62B embedding parameters (0.4% of its total parameters) <cit.>. See <cit.> for a full discussion of the performance tradeoff between embedding and Transformer parameters, and examples of models with legitimately huge embedding matrices (up to 71% of total parameters).

§.§.§ File drawer: underreporting negative results

The bias against accepting negative results for publication is well-known <cit.>. As well as incentivising researchers to (consciously or unconsciously) abuse RDOFs or overlook QRPs to obtain spurious or lucky positive results, this bias presumably has a chilling effect on writing up failed experiments even for personal blogs or arXiv, and so the entire field suffers from filtered evidence and a failure to signpost dead-ends for fellow researchers. Another version of this QRP is failing to report negative results in an otherwise positive paper; see the various hacking QRPs in <ref>.

§.§ Amplifiers

The QRPs discussed so far can be further exacerbated by different amplifiers.
§.§.§ Fishing & Half-fishing: `confirmatory' research without a hypothesis One form of cheating that applies to all empirical fields is `fishing' for results (e.g. a statistically significant result): conducting multiple tests of a hypothesis that has not been stipulated beforehand (and not correcting the test statistics for these multiple comparisons). Fishing is also known as HARKing – hypothesising after results are known – or data dredging <cit.>. We do not include this practice in Table <ref> because it does not apply cleanly to the LLM training process, and because it is covered by more specific practices like SOTA p-hacking. Nevertheless, any explanatory work in ML (involving testing hypotheses about the system or `adaptive' data analyses of model results) can suffer from a direct analogue of the classic scientific fishing expedition. A valid approach to circumvent this is to `fish' and evaluate on different datasets within the same domain or task, effectively expanding your working dataset (CIFAR-10 and ImageNet for instance). A more specific example of fishing with LLMs is prompt hacking; when casually evaluating a model (e.g. trying to discover some impressive single result), users iterate over prompts until an impressive completion is found and only report the best resulting completion. This is fishing (optional stopping) using the human user to validate. Applying such tuning methodologies arbitrarily when comparing models can lead to baseline nerfing (See <ref>). Iterating over algorithms is an important part of how the field progresses, and is innocuous if it avoids silent multiple comparisons and other QRPs. When performing this type of iterative exploratory research the researcher is at risk of deluding themselves via multiple testing. To combat this, researchers can declare, and commit to, an initial experimental process through preregistration. §.§.§ Inductive smuggling: handcrafting bias and silent labelling Many ML methods are so computationally intensive that the hypothesis space that is searched for models must be severely restricted by the researcher. One way to do this is by carefully selecting the architecture of candidate models (`language bias') <cit.>. Under the names `program template', `mode declaration' and `metarule', this remains standard practice in symbolic AI, where inductive biases usually cannot be learned <cit.>. As <cit.> put it, “ML experts have to spend a long time inspecting the data and performing many rounds of trial and error to develop an effective language bias”. This is not in itself a questionable practice. We include it to draw attention to cases where what is reported as system performance is in fact dependent on human judgment. This may seem isolated to approaches which still involve handcrafted features or inductive biases – but there is an equivalent QRP in deep learning, satirised by : The problem with [chain of thought prompting of LLMs] is that it is highly susceptible to the Clever Hans effect, where the LLM is merely generating guesses, and it is the human in the loop, with the knowledge of right vs. wrong solutions, who is steering the LLM – even if they didn’t set out to do so deliberately. The credit and blame for the ensuing accuracy, if any, belongs squarely to the human in the loop. 
The relevance of such a framework becomes questionable when the human-in-the-loop doesn’t know (or is unable to verify) the answer to the reasoning/planning problem themselves (and hence my tongue-in-cheek proposal for the forest of jumbled thoughts prompting).

What we call inductive smuggling is more general than handcrafted language bias: it consists in any part of a model training process that requires human input for the model to perform well, as in the optional stopping of chain-of-thought examples. In relation to Figure <ref> we can interpret the human in the loop as test data. This `contaminates' the evaluation process itself, in the sense that a source of task mastery (the human evaluator) is providing signal (for instance, optional stopping).

§.§.§ Label noise: benchmarking on bad labels

Benchmark datasets are large enough (routinely over 100,000 examples) to make full validation by hand expensive. As a result, many benchmarks contain many errors, the most significant of which are mislabelled instances. Call the use of benchmarks known to be error-ridden abetting label noise. <cit.> check the popular MMLU benchmark for errors and find that more than 9% of examples are incorrect. Some subsets of the benchmark are much worse: for example, 57% of analysed virology questions contain some error. More than 60% of the conversation data in the Wizard of Wikipedia, CMU-DOG, and TOPICALCHAT datasets contains errors <cit.>; ten standard machine learning benchmarks contain label errors <cit.>; and finally, 36% of the popular HellaSwag benchmark contains syntax errors <cit.>. Particularly unfortunate is that, once a benchmark is released and tested on, the need for backwards compatibility of baselines often means that the benchmark's errors cannot be fixed and so the noisy signal remains in place until it is `saturated' (solved by frontier models) or replaced by a hopefully-better benchmark. Unlike many other QRPs, test label errors do not predictably increase benchmark scores.[One could imagine an adversarial mislabelling process, tailored to increase the scores of some particular systematically-biased model. But we have not seen evidence of this.] The questionable practice here is not `making errors' or even `making errors one third of the time'; instead, it is the community continuing to use a benchmark as evidence of capabilities, long after its flaws become well-known.

§ IRREPRODUCIBLE RESEARCH PRACTICES

Here, we list practices which prevent third parties from reproducing ML training or evaluation (summarised in Table <ref>). While these irreproducible research practices are not in and of themselves classical QRPs, they may enable QRPs, as they prevent auditing by other researchers.

§.§ Dataset hiding: gatekeeping your training data

ML is an unusually reproducible field: benchmark data are often public, and open code is often required for publication <cit.>. One 2016 estimate lower-bounds the rate of open code in computer science at 32% of papers <cit.>, as compared to e.g. <0.5% in the medical sciences <cit.>. However, this trend has reversed for commercial models, especially `frontier' systems. There are legitimate reasons to withhold training data, for instance to protect user privacy, but these have reproducibility costs. <cit.> is an example of painstakingly replicating an important graph (the Chinchilla data-compute-loss scaling law) when the original data was not made available.
§.§ Stochastic runs: intentional and unintentional randomness

Some methods intentionally introduce nondeterminism into the training process. Most obviously, stochastic gradient descent uses the gradients of random mini-batches of training data, and randomness in the choice of datapoints in the mini-batches implies a random training trajectory <cit.>. In addition, some architectural features such as Dropout add further randomness by design <cit.>. Setting a random seed at the beginning of training is intended to make these sources of randomness reproducible. However, when training on a GPU, further optimisations occur behind the scenes (that is, unmarked by the user code calling the low-level libraries). For instance, the GPU's thread scheduling can affect the order in which floats are summed up <cit.>. These optimisations reintroduce nondeterminism despite the fixed seed <cit.>.

§.§ No access: gatekeeping your model

The simplest way to ensure irreproducibility is to not offer any access to your model. Gemini Ultra <cit.> claimed SOTA on many popular benchmarks and the marketing material for the Gemini model family uses its scores instead of those of the smaller Gemini models (which underperformed GPT-4 on all benchmarks <cit.>). In fact, as of writing (July 2024), the Ultra model is completely inaccessible except to `Select Developers and Enterprises', preventing any reproduction of the reported results <cit.>.

§.§ Closed evaluation: gatekeeping your evaluation data

As with hidden training data, it has become common not to sufficiently disclose the evaluation process used <cit.>. Reasons to withhold evaluation data include: protecting proprietary data, avoiding contamination <cit.> or avoiding harm <cit.>.

§.§ Runtime hiding: gatekeeping your inference parameters

When evaluating LLMs the researcher can optimise various parameters (see runtime nerfing, <ref>). Due to the large search space and expensive inference it is infeasible for an independent researcher to verify evaluations, potentially leading researchers not to disclose the parameters used. LLMs are also of interest as a product (see <ref>). For independent evaluations of these models to occur it is important to know the inference parameters that they use. Without this knowledge, it is more difficult to draw a fair comparison between models.

§.§ Dummy code: gatekeeping your code

An understudied commonplace of many fields is that peer reviewers rarely inspect the code of submitted papers <cit.>. As a result, one can just submit placeholder files with plausible titles to get around new requirements for code. A softer version of this phenomenon is submitting an empty GitHub repository and populating it much later, or only partially populating it. Examples include <cit.> and the previously mentioned <cit.>, whose evaluation code (https://web.archive.org/web/20240520121753/https://raw.githubusercontent.com/jonnypei/acl23-preadd/main/scripts/experiments/evaluate_sentiment.py) was missing as of 2024-05-20.[We find ourselves unable to guarantee that we have always uploaded all our evaluation code.]

§.§ API drift: underreporting model changes

A recurring suspicion about models released over API is that silent updates are affecting their performance. It is difficult to infer performance degradation (as opposed to ordinary variance in performance given arbitrary tasks and arbitrary prompts that the complainants are using) from any single occasion.
But clear within-model (and within-model-version-string) behaviour changes have been observed over time on maths, sensitive question answering, code generation, and visual reasoning <cit.>. <cit.> point out that the experiments conducted in the previous paper should only be considered in the context of behaviour shift and not capability, and that these changes are specific to the particular prompts used. The HAPI dataset <cit.> is a longitudinal dataset of commercial ML API predictions. More than 60% of the 63 APIs evaluated had substantial performance changes over time.

§ DISCUSSION

We now move on to discussing defences against these QRPs and to speculating on their root-causes.

§.§ Misreporting is all you need?

We could say that the really fundamental QRP is inadequate reporting – since problems of interpretation and even outright cheating would in theory evaporate if researchers exhaustively reported what they actually did, for instance through releasing an end-to-end Weights and Biases log <cit.>, and if (for instance) abstracts did not misrepresent the generality or significance of results.[Consider that even the archetypal bad practice, training on a test set, could be valid in edge cases like measuring a model's compression of data (rather than its predictive loss), as in <cit.>.] However, this assumes that the vast amounts of data generated by a modern model training process can be sensibly analysed by readers; even condensed supplementary information can comprise hundreds of pages <cit.>. We see the normalisation of selective reporting in the practice of dataset hiding (<ref>), which nearly completely foils outsiders' ability to distinguish memorisation from actual generalisation <cit.>. We now usually do not know the contents of the training corpus, even at the level of metadata <cit.>. Thus, except in rare cases (when testing on data created after the training cut-off date, or on some other newly-digitised or -shared data), we cannot know the relative share of task performance from memorisation and generalisation.

§.§ Defences

Despite the above issues, ML evaluation is less misleading than it could be. Some existing practices that help, and some that seem underused, include:

* Standardised evaluation harnesses. We can completely solve baseline nerfing by having all developers use a fixed evaluation codebase (`harness'). For instance, the EleutherAI Harness aims to make best practices (e.g. Minerva prompt strategies <cit.>) the default and is continuously updated <cit.>. It also solves the orchestration problem: re-implementing, installing and debugging dozens of small libraries, and so also removes an extremely common source of human error (coding bugs). Finally, such harnesses make it easier to test a larger set of diverse benchmarks, which reduces the scope for cherrypicking and RDOFs. The UK AI Safety Institute's Inspect tool <cit.> is another advanced harness including self-critique and `agentised workflows' (prompt loops).
* Semantic decontamination. Fully solving data contamination would require semantic deduplication: removing from the training data any example too close to a test example. LLMs can be used to decontaminate datasets, following <cit.>. One drastic solution (for major releases) is to use a fresh test set to estimate generalisation, alongside the use of past benchmarks for relative baselines. In Table 1 of <cit.>, very informative decontaminated accuracy measures are used by providing benchmark scores at different similarity thresholds.
* Contamination databases. Various efforts at detecting contaminated training corpuses <cit.> are in progress, including a new ICML workshop <cit.>. A good attempt to collate models known to be contaminated by some dataset (and thus prevent false claims of generalisation) has been made by <cit.>. * Private benchmarks. To avoid contamination, some benchmark creators have resorted to keeping their test set private. While this raises practical issues, such as ensuring benchmark quality and commercial unwillingness to submit weights (and the possibility of logging test data from any API calls), these can be overcome by, for example, closed auditing by third-parties <cit.>. Private test sets also enable the provider to count the number of runs per model and so prevent some forms of cherrypicking. * Benchmark refresh. As in <cit.> and <cit.>, we can collect new data distributed identically to the original benchmark. This is expensive and so best done annually as part of a consortium of research teams. * Gestalt evals with human preference. A change that has already been made, and mitigates contamination and reification, is the use of human preferences as a final metric. The LMSYS chatbot arena for crowdsourced evaluations allows volunteer users (numbering in the thousands) to submit arbitrary prompts and select the overall best-looking completion from two anonymised models <cit.>. One problem with this is that it seems possible to greatly improve arena scores merely by improving a model's style or sycophancy rather than its capability on the prompt's intended task. * Canaries to prevent or detect accidental contamination. The BIG-Bench dataset includes a `canary string', a unique identifier that allows model developers who wish to avoid data contamination to find and exclude its test set from the training corpus (while also allowing extraction attacks an unambiguous target to check if it was included) <cit.>. However, without the cooperation of model developers, such efforts from benchmark makers have no impact in the prevention of test training. In Appendix <ref> we show that Claude 3.5 Sonnet (via web interface) knows the BIG-BENCH canary string. Reusing their specific string actually allows for a zero-overhead solution: new benchmarks can include the exact string and benefit from whatever filters honest developers have put in place for it. * Full logging and log summarisation. Speculatively, for developers extremely devoted to transparency: software like the Weights and Biases library <cit.> could in principle capture and summarise an entire research project – the number of training runs, and the number of evaluations. This could solve all selective reporting and cherrypicking problems, while making it easier for (likely post-publication) reviewers to spot other issues. <cit.> proposes increasing transparency through experiment tracking using tools such as MLflow <cit.>. * Preregistration of analyses. An overarching solution that bridges the above is to preregister experiments, and require researchers to report any voluntary decisions and changes implemented throughout their experiment. This allows for faithful re-testing of the original hypothesis by anyone attempting to replicate the paper <cit.>. This approach has been previously explored for ML by <cit.> in the form of a results-blind review process where the hypothesis and experiments are pre-declared and peer reviewed, experiments are conducted, and the papers are published regardless of their results. 
Most model design processes are too chaotic and iterative to be preregistered, but evaluation is somewhat different; the possible choices of benchmark and baseline are discrete and knowable, and consensus choices already exist to constrain benchmark and baseline hacking. Many of the QRPs in this paper are insidious because they are totally contextual: they are normal and innocuous practices when performed pre-hoc (before testing on the hold-out set); it is only after test that they become questionable. Preregistration is the best way to certify that a practice actually was pre-hoc. (Applying the same security mindset as above, however, we can imagine a bad actor running experiments and then falsely preregistering them, what <cit.> calls PARKing – Preregistering After Results are Known.)

§.§ Root causes

Two major root causes of QRPs and IRPs are the problematic incentives facing researchers and large companies.

§.§.§ Researchers need to self-certify SOTA

The current publication process has a deep conflict of interest: we make algorithm designers self-certify their own performance, fully coupling design to evaluation <cit.>, and in almost all cases prevent the publication of negative results <cit.>. Moreover, to get your paper published at a major conference, it is often necessary to show almost strict dominance (bolded results for your method in all columns of the results table). Since the designer is the evaluator, all evaluators are heavily incentivised to fool themselves and upwardly bias results, for instance through the many routes we identify in Table <ref>, and this bias is often not intentional. This ultimately arises from an expectation that model designers perform their own evaluations. In reality, all that should be necessary for model designers is to show that “this method is potentially interesting to the research community”: a quite different, though difficult-to-pin-down, standard.

If the conference system were not already groaning under the strain of ordinary peer review <cit.>, we would suggest making evaluation an act of service for researchers: let other people run your method for you. Instead (assuming that cursory peer review will continue to monopolise the field's available service time), the practical move is to continue work on open evaluation harnesses and on partially automating review <cit.>. Finally, we can say that a last enabler of QRPs is the weakness of pre-publication peer review <cit.> and the near-total absence of formal post-publication review.

§.§.§ Industrialization of AI research

The present industrial era of AI <cit.> retains the scientific trappings of the previous academic era <cit.>. We still minimise empirical risk <cit.>, aim to avoid contamination of the training set with a `held-out' test set, and act as if fairly testing the generalisation of the method (e.g. the model architecture) is the primary purpose of reporting benchmark values.

The above presupposes the academic view that the purpose of evaluation is to accurately rank methods, understand systems, and/or make fair comparisons in a maximally reproducible way. Of course, for AI products these are not the only goals: others include “build the best product you can” and “get as much attention as possible” (i.e. marketing).
Consider a broader set of purposes for testing an ML system:

* AI science: Fairly test a new method, controlling all sources of variation and degrees of freedom, in effect testing the hypothesis `this method is better than all previous methods';

* AI science: Test a theory by testing a system derived from its principles;

* AI science: Create the best base model you can to see how far deep learning, or some other architecture, goes;

* AI science: Solve generalisation, create the best model you can;

* AI product: Create the most useful product you can, for instance by having it memorise the answers to everything on the internet (in which case failing at generalisation is irrelevant);

* AI business: Get the highest leaderboard rank you can, e.g. to attract investment.

These goals correlate, but come apart on questions like “does training on the test set promote my ultimate goals?” A scientist would obviously respond “no”, as training on the test set makes it impossible to fairly compare methods. But a business might say “yes”, because the test set is more data, and more data almost always leads to better performance (though for real-world capabilities the effect is likely small enough that we doubt it outweighs the benefits of fair comparisons).

Perhaps the biggest divergences between scientific and business incentives and outlooks happen not on QRPs, but on IRPs such as obscuring the training dataset. There are strong incentives to avoid disclosing the training data, both for competitive reasons (i.e. to prevent competitors from catching up) and for legal reasons (i.e. to make it harder to sue for copyright infringement). There is nothing inherently wrong with the “business” perspective. The issues arise when unscientific means, such as data contamination, are used for product development and then unjustifiably used to make scientific claims about, e.g., performance on a benchmark. In such cases, since the norms of academic evaluation are the default, departures from science should be declared explicitly.

§.§ Limitations

We do not expect Table <ref> to be exhaustive, and welcome contributions of further QRPs and clear examples of existing ones. We also recommend watching the crowdsourced work led by <cit.>.

This paper is dual use: a bad actor could use it to improve their efforts to mislead. But in the current regime, little stops bad actors from resorting to simpler methods like outright fabrication. We instead hope to help those who aim at scientific estimates avoid foiling their own goals.

It is difficult to systematically detect QRPs in papers or repositories, so we are limited to existence proofs: this work does not quantify the prevalence or severity of the QRPs, so we cannot tell you how much to worry. We plan follow-up studies to estimate their prevalence and effect size, and to investigate the effect of suggested defences against QRPs. We also aim to study how internal and external validity is affected by particular QRPs.

§ CONCLUSIONS

We reviewed 43 QRPs, most of which involve some form of contamination, cherrypicking or misreporting, which affect the internal and external validity of machine learning evaluations and LLM evaluation in particular. Next, we reviewed ... IRPs which prevent other researchers from reproducing, building on and auditing prior work. We listed possible mitigations and suggested two explanations for QRPs: researcher incentives and the industrialization of research.
We thank Zhengdong Wang, Thomas McGrath, Ada Böhm, Maxime Robeyns, Robert Kirk, Peter Barnett, Simon Steshin, Fabien Roger and Eli Lifland for helpful comments. Many of these phenomena first came to our notice on Twitter, so, as well as the individual accounts we have cited, we'd like to thank the platform in general. It is the best actually-existing forum for scientific criticism. GL is funded by the UKRI Centre for Doctoral Training in Interactive Artificial Intelligence (EP/S022937/1).

§ APPENDIX: QRPS IN ORDER OF TIME

Another way to think about QRPs is chronologically, working through the stages of model development in order.

§.§ Design

We can caricature the design of an ML system as

* taking an existing approach

* trying a number of more or less theoretically motivated tweaks to (1)

* finding a tweak that performs slightly better than (1)

* writing a post-hoc story about why (3) performs better.

That is, we are exploring rather than designing; indeed, <cit.> call the standard ML research workflow `explorimentation'. Underscoring this is the scarcity of pre-registered experiments in the field: for instance, while approximately 43% of top psychology studies are now preregistered <cit.>, we could only find 36 ML papers with preregistrations, all from two self-consciously unusual conference workshops on the topic <cit.>. <cit.> make suggestions for adapting preregistration to predictive modelling.

Questionable research practices can occur during:

* Architecture design and adjacent technical decisions (<ref>).

* Ideation and (intermittent) generation and testing of hypotheses (<ref>).

§.§ Collection

A researcher has the following degrees of freedom when choosing a dataset (be it a real-world dataset or a choice of data generator):

* Curation

* Cleaning

* Annotation

* Preprocessing

§.§ Training

Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes. – <cit.>

The training regime of large language models is spread across different stages: pre-training, supervised fine-tuning (SFT), and Reinforcement Learning from Human Feedback. This increases the number of degrees of freedom in the process. We split the training QRPs into three groups:

* Benchmark data contamination at any stage of training (training corpus contains test labels, test labels in pre-train, second-hand use of contaminated data).

* Indiscriminate parameter iteration until a golden training run is crafted (<ref>).

* Ad-hoc post-training to: patch specific failures (<ref>), obtain general model improvements (<ref>), or tune hyperparameters further after test (<ref>).

§.§ Evaluation

Evaluation in the context of QRPs is largely explained in the Cherrypicking discussion in <ref>.

§.§.§ Prompt hacking & Prompt nerfing: picking the best prompting strategy and unfair baseline prompts

Once a benchmark (or suite of benchmarks) is chosen, researchers optimise the model's task prompt. This can provide extremely large performance improvements, on the order of 10% better absolute accuracy on hard tasks <cit.>. <cit.> chose `few-shot' examples to add to GPT-3 prompts by drawing random examples from the training set. But full optimisation of this and other parts of the prompt is now common <cit.>. Depending on the domain, other prompting methods are employed. The `Minerva prompt' <cit.> samples the model (at non-zero temperature) multiple times and uses majority voting to determine the accepted final answer.
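The voting step itself is easy to reproduce. The sketch below is a minimal maj@k implementation in which `sample_model` and `extract_answer` are hypothetical stand-ins for a stochastic model call and an answer-parsing routine, not functions from any particular library.

```python
from collections import Counter

# Minimal sketch of Minerva-style majority voting ("maj@k").
# `sample_model` and `extract_answer` are hypothetical callables supplied by
# the caller: one stochastic completion per call, and a parser for the final answer.

def majority_vote_answer(prompt, sample_model, extract_answer, k=32):
    """Sample k completions at non-zero temperature, extract a final answer
    from each, and return the most frequent one (ties broken arbitrarily)."""
    answers = [extract_answer(sample_model(prompt)) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]
```

Note that k is itself a tuneable evaluation parameter, so comparing one model's maj@32 score against another model's single-sample score (as in the Gemini example discussed below) is exactly the kind of asymmetry that prompt nerfing describes.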
These more complicated prompting methods give a researcher access to further tuneable hyperparameters, exposing them to a higher risk of cherrypicking. Contamination can accidentally result if few-shot examples (or near-duplicates) are in both the training and the test set. We refer to contamination during prompt optimisation as Prompt contamination – see Section <ref>.

Researchers also have to worry about prompt nerfing, an unequal level of prompt optimisation between models. Preventing this is non-trivial because different models have different optimal prompts, so the naive fair move of using the same prompt may not result in a fair comparison if it was developed on one model. Again, for example, the Gemini launch results <cit.> on the GSM8K benchmark compared Gemini's majority-voted k=32 score to a weaker sampling method on GPT-4-Turbo (the 5-shot score, apparently without majority voting from multiple samples). Using the same prompt for both models is the most immediate idea to achieve a level of fairness but has several problems (at both zero and non-zero temperature). There are cases in which unlucky or maliciously optimised choices of prompt strongly favour one model over another <cit.>. This can additionally be affected by runtime hacking, that is, differences in tokenizers, seed choice (if temperature is non-zero), choice of temperature, and choice of context (few-shot vs one-shot).

§.§ Reporting

Reporting in the context of QRPs is largely explained in the Misreporting discussion in <ref>.

§ APPENDIX: CLAUDE 3.5 SONNET KNOWS THE BIG-BENCH CANARY STRING

In Fig. <ref>, we replicate the conversation with Claude in <cit.> to obtain the canary string included in the BIG-Bench benchmark.

§ ALTERNATIVE TABLE

Table <ref> shows the QRPs discussed, ordered by model development stage.